The increasing role of AI in regulation
How the Biden administration will change the AI playing field, and what you should be doing now.
With President Biden having made several important appointments recently, there is a lot of speculation about what we can expect from his administration over the next four years with respect to AI/ML, and in particular the regulation of Artificial Intelligence applications to make the technology safer, fairer, and more equitable.
As an analyst covering this space at Info-Tech Research Group, I’m naturally going to throw my hat into the ring. Here are my top four predictions.
Regulation of AI will be fast-tracked through the House and Senate
We may not have all the details yet, but the direction and pace are both fairly clear: we can expect regulation to be fast-tracked at the federal level to complement state-level bills. The roadmap includes recently introduced bills such as the Algorithmic Accountability Act of 2019, as well as the modernization of existing statutes, such as the Civil Rights Act (1964) and the Fair Housing Act (1968), to cover AI and algorithmic decision-making systems.
In fact, the driving forces behind the Algorithmic Accountability Act, Senators Ron Wyden and Cory Booker and Representative Yvette Clarke, are planning to reintroduce their bills in the Senate and House this year.
Altogether, we can expect to see the administration pursue an agenda that better incorporates AI/ML into existing and new legislative frameworks, and also leaves enough room for flexibility as AI standards and practices continue evolving.
Ethical AI standards will be developed quickly
For regulation to be effective, it needs to be driven by values, informed by evidence, grounded in a sound risk model, and supported by standards and certifications. We therefore expect government agencies to sharpen their focus on AI as the administration's guidance takes shape: NIST and others will double down on developing benchmarks, standards, and measurement frameworks for AI technologies, algorithmic bias, explainability, and AI governance and risk management.
Some of this work is already in progress, for example NIST's Face Recognition Vendor Test (FRVT) and its work on explainable AI, but we can expect these efforts to accelerate fairly quickly.
Regulators will be collaborating across borders
In this interconnected world, regulation of technology cannot be pursued in isolation, especially for technologies such as AI/ML. There are several signs that lawmakers are willing to join forces and learn from each other, especially from nations that made AI regulation a priority early on. (After all, when done right, regulation is not an impediment to innovation; more on this below.)
Over the next four years, we will see increased collaboration on AI regulation, standards, certification, and auditing with European and international organizations and with neighboring countries, many of which are already ahead of their U.S. counterparts. Higher levels of global partnership will positively impact efforts to build a more comprehensive legislative framework, both in the U.S. and abroad.
Federal agencies will get broader mandates that include AI/ML
A law can tell you what you can or cannot do, but its power comes from being enforced by the courts and by oversight agencies with the authority to impose penalties and other regulatory sanctions. At this time, it is unclear what that authority looks like for AI/ML and how it is divided among the various federal agencies.
We expect this gap to be addressed fairly quickly by broadening the mandates of existing oversight bodies to include Machine Learning and AI-powered applications and systems, along with directives to create training, certification, accreditation, and oversight for AI auditors (especially AI bias auditors), similar to food inspectors and consumer safety inspectors.
What does all this mean for your organization?
So, what are the implications for your organization, whether you are just thinking about leveraging AI/ML or have been doing it for years?
My opinion is that regulation, and its flip side, governance, are not evil. When executed properly, regulation creates certainty, establishes a level playing field, and promotes competition. It also informs internal policy, governance, and accountability. Governance, in turn, helps frame the discussion about acceptable risks and rewards from monetizing AI, improving the organization's odds of success.
Governance (and hence regulation) also helps establish and strengthen trust: internally within the organization and, most importantly, with customers. Indeed, trust is the foundation of all business.
You can get ahead of any impending regulatory shifts
Don’t wait until regulation becomes a reality! There are three easy steps you can take to avoid surprises down the road and to prepare your organization:
- Engage in shaping AI regulation rather than waiting for it to come to you: work through industry associations, think tanks, public policy and civic interest groups, and your House representatives.
- Actively govern your organization’s AI-powered applications to establish your process maturity before everyone else, including government, catches up. Business simply can’t afford to wait, or it faces the risk of deploying a biased system that could harm customers and, as a result, the company’s reputation and balance sheet.
- Document and proactively disclose how and where you use AI/Machine Learning, data, and analytics, and how these systems are built. AI registers, as leveraged by the cities of Amsterdam and Helsinki, for example, are a straightforward way to share this information with your customers and to increase their trust and loyalty; a minimal sketch of what a register entry might contain follows this list. Registers will also serve auditors and regulators, and they lay the foundation of a minimum viable framework for internal AI governance.
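To make the idea concrete, here is a minimal sketch of what a single AI register entry might look like as structured data. This assumes a simple in-house Python format; the field names (system_name, purpose, bias_checks, and so on) are illustrative assumptions, not fields taken from the Amsterdam or Helsinki registers.

```python
# A minimal sketch of an internal AI register entry. The schema below is
# hypothetical and for illustration only; it is not the format used by
# the Amsterdam or Helsinki AI registers.
import json
from dataclasses import asdict, dataclass, field
from typing import List


@dataclass
class AIRegisterEntry:
    system_name: str            # what the system is called internally
    purpose: str                # what decision or task it supports
    owner: str                  # accountable business owner
    data_sources: List[str]     # datasets the model is trained or scored on
    model_type: str             # e.g. "gradient-boosted trees"
    human_oversight: str        # how and when a person can intervene
    bias_checks: List[str] = field(default_factory=list)  # fairness tests run


def to_public_record(entry: AIRegisterEntry) -> str:
    """Serialize an entry as JSON for publication or for an audit trail."""
    return json.dumps(asdict(entry), indent=2)


# Example: register a hypothetical loan-scoring model.
entry = AIRegisterEntry(
    system_name="loan-risk-scorer",
    purpose="Rank consumer loan applications by estimated default risk",
    owner="Head of Consumer Credit",
    data_sources=["application_form", "repayment_history"],
    model_type="gradient-boosted trees",
    human_oversight="All declined applications are reviewed by a credit officer",
    bias_checks=["demographic parity by age band", "equalized odds by sex"],
)
print(to_public_record(entry))
```

Even a handful of fields like these answers the questions customers, auditors, and regulators ask most often: what the system does, who is accountable for it, what data it uses, and how bias is checked.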
Governance and regulation are truly not a burden. Even though they cost money, they represent an important, value-added investment in business success. Governance is a mechanism to create value, monetize new technologies like AI, and grow and strengthen the business while monitoring and mitigating risks. The greater risk lies in ignoring the potential of AI, or in allowing competitors to get there first.
Author: Natalia Modjeska
Source: Towards Data Science