The Growing Influence of Ethical AI in Data Science
Industries such as insurance that handle personal information are paying more attention to customers’ desire for responsible, transparent AI.
AI (artificial intelligence) is a tremendous asset to companies that rely on predictive modeling and task automation. However, AI still struggles with data bias. After all, AI gets its marching orders from human-generated data -- which is by nature prone to bias, no matter how evolved we humans like to think we are.
With the wide adoption of AI, many industries are starting to pay attention to a new form of governance called responsible or ethical AI: a set of governance practices for handling regulated data. For most organizations, this involves removing any unintentional bias or discrimination from their customer data and cross-checking any unexpected algorithmic behavior once models move into production.
This is an especially important transformation for the insurance industry because consumers today are becoming far more attuned to their personal end-to-end experience in any industry that relies on personal data. By advancing responsible, ethical AI, insurers can confidently map to the way consumers want to search for and find insurance policies, and they can align with the values and ethics that govern this kind of personal search.
What Does Inherent Bias Look Like in AI Algorithms Today?
One of the more noticeable examples of human-learned, albeit unintentional, data bias today is around gender. This happens when the AI system does not behave the same way for a man versus a woman, even when the data provided to the system is identical except for the gender information. One example outcome is that individuals who should be in the same insurance risk category are offered unequal policy advice.
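The gender check described above can be sketched as a simple counterfactual test: flip the gender field and confirm the output does not change. The `score_policy` function below is a hypothetical stand-in for a real underwriting model, not an actual Zelros API.

```python
# Counterfactual fairness sketch: flip the gender field and compare
# the model's risk-category output. `score_policy` is a toy stand-in
# for a real ML underwriting pipeline.
def score_policy(applicant: dict) -> str:
    """Toy risk model; deliberately ignores gender, as an unbiased model should."""
    return "standard" if applicant["age"] < 60 else "elevated"

def gender_flip_test(applicant: dict) -> bool:
    """Return True if the prediction is invariant to the gender field."""
    flipped = dict(applicant, gender="F" if applicant["gender"] == "M" else "M")
    return score_policy(applicant) == score_policy(flipped)

applicant = {"age": 42, "gender": "M", "smoker": False}
print(gender_flip_test(applicant))  # True: identical advice for both genders
```

A biased model would fail this test for at least some applicants, flagging it for review before deployment.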
Another example is something called the survivor bias, which is optimizing an AI model using only available, visible data -- i.e., “surviving” data. This approach inadvertently overlooks information due to the lack of visibility, and the results are skewed to one vantage point. To move past this weakness, for example in the insurance industry, AI must be trained not to favor the known customer data over prospective customer data that is not yet known.
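The skew that survivor bias introduces can be shown with a few made-up numbers: estimating average risk only from the "surviving" data (existing customers) differs from the estimate over the full pool that includes unseen prospects. All figures below are invented for illustration.

```python
# Toy survivorship-bias sketch (made-up numbers): an insurer's current
# book skews low-risk, so averaging over it alone understates the
# risk of the full market the model will actually face.
known_customers = [0.10, 0.12, 0.11, 0.09]   # risk scores of existing book
unseen_prospects = [0.30, 0.25, 0.40, 0.35]  # higher-risk pool, invisible to training

biased_estimate = sum(known_customers) / len(known_customers)
full_pool = known_customers + unseen_prospects
full_estimate = sum(full_pool) / len(full_pool)

print(round(biased_estimate, 3), round(full_estimate, 3))  # 0.105 0.215
```

A model trained or evaluated only on the first list would systematically misjudge the second, which is exactly the blind spot the paragraph above warns about.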
More enterprises are becoming aware of how these data determinants can expose them to unnecessary risk. A case in point: in its State of AI in 2021 report, McKinsey examined regulatory compliance through the lens of companies' commitment to equitable and fair data practices -- and reported that two of companies' top three global concerns are the ability to establish ethical AI and to explain their practices clearly to customers.
How Can Companies Proactively Eliminate Data Bias Company-wide?
Most companies should already have a diversity, equity, and inclusion (DEI) program to set a strong foundation before exploring practices in technology, processes, and people. At a minimum, companies can set a goal to remove ingrained data biases. Fortunately, there are a host of best-practice options to do this.
- Adopt an open source strategy. First, enterprises need to know that biases are not necessarily where they imagine them to be. There can be bias in the training data, in the data seen later at inference (prediction) time, or both. At Zelros, for example, we recommend that companies use an open source strategy to be more open and transparent in their AI initiatives. This is becoming an essential baseline anti-bias step practiced at companies of all sizes.
- Utilize vendor partnerships. Companies that want to put a bigger stake in the ground when it comes to regulatory compliance and ethical AI standards can collaborate with organizations such as isahit, which is dedicated to helping organizations across industries become competent in their use and implementation of ethical AI. As a best practice, we recommend that companies work toward adopting responsible AI at every level, not just within their technical R&D or research teams, and then communicate this governance proliferation to their customers and partners.
- Initiate bias bounties. Another method for eliminating data bias was identified by Forrester as a significant trend in their North American “Predictions 2022” guide. It is an initiative called bias bounties. Forrester stated that, “At least five large companies will introduce bias bounties in 2022.”
Bias bounties are like bug bounties, but instead of rewarding users for the issues they detect in software, users are rewarded for identifying bias in AI systems. Such bias arises from incomplete or skewed data and can lead to discriminatory outcomes from AI systems. According to Forrester, in 2022, major tech companies such as Google and Microsoft will implement bias bounties, and so will non-technology organizations such as banks and healthcare companies. With trust high on stakeholders' agendas, basing decisions on accountability and integrity is more critical than ever.
- Get certified. Finally, another method for establishing an ethical AI approach -- one that is gaining momentum -- is getting AI system certification. Being able to provide proof of built-in governance through an external audit goes a long way. In Europe, the AI Act is a resource for institutions to assess their AI systems from a process or operational standpoint. In the U.S., the NAIC is a reference organization providing guiding principles for insurers to follow. Another option is for companies to align with a third-party organization's best practices.
Can an AI System Be Self-Criticizing and Self-Sustaining?
Creating an AI system that is both self-criticizing and self-sustaining is the goal. Through the design itself, the AI must adapt and learn, with the support of human common sense, which the machine cannot emulate.
Companies that want fair prediction outcomes may analyze metrics at various subgroup levels within a specific model feature (for example, gender), because this can help identify and prevent biases before consumer-facing capabilities go to market. With any AI, it is key to make sure the system does not fall into a trap called Simpson's Paradox. Simpson's Paradox, which also goes by several other names, is a phenomenon in probability and statistics where a trend appears in several groups of data but disappears or reverses when the groups are combined. Successfully preventing this ensures that personal data does not penalize the very client or consumer it is supposed to benefit.
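The reversal at the heart of Simpson's Paradox is easy to demonstrate with a subgroup analysis like the one described above. The approval counts below are invented purely to show the effect; no real cohorts are implied.

```python
# Illustrative Simpson's Paradox with made-up approval counts.
# Within each risk subgroup, cohort B has the higher approval rate,
# yet when the subgroups are pooled, cohort A looks better.
groups = {
    # group: (A_approved, A_total, B_approved, B_total)
    "low_risk":  (80, 100, 9, 10),    # A: 80%, B: 90%
    "high_risk": (2, 10, 30, 100),    # A: 20%, B: 30%
}

def rate(approved, total):
    return approved / total

# Cohort B wins inside every subgroup...
for a_ok, a_n, b_ok, b_n in groups.values():
    assert rate(b_ok, b_n) > rate(a_ok, a_n)

# ...but pooling the subgroups reverses the trend.
a_rate = rate(sum(g[0] for g in groups.values()),   # 82 approved of 110
              sum(g[1] for g in groups.values()))
b_rate = rate(sum(g[2] for g in groups.values()),   # 39 approved of 110
              sum(g[3] for g in groups.values()))
print(round(a_rate, 3), round(b_rate, 3))  # 0.745 0.355
```

Checking metrics per subgroup, not just in aggregate, is what catches this kind of reversal before a model ships.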
Responsible Use of AI Can Be a Powerful Advantage
Companies are starting to pay attention to how responsible AI has the power to nurture a virtuous, profitable circle of customer retention through more reliable and robust data collection. There will be challenges in the ongoing refinement of ethical AI for many applications, but the strategic advantages and opportunities are clear. In insurance, the ability to monitor, control, and balance human bias can keep policy recommendations fair and focused on the needs of their intended audiences across demographics such as race and gender. Responsible AI leads to stronger customer attraction and retention, and ultimately increased profitability.
Companies globally are revving up their focus on data equity and fairness as a relevant risk to mitigate. Fortunately, they have options to choose from to protect themselves. AI offers an opportunity to accelerate more diverse, equitable interactions between humans and machines, and solutions can help large enterprises globally provide hyper-personalized, unbiased recommendations across channels. Respected trend analysts have called out data bias as a top business concern of 2022. Simultaneously, they identify responsible, ethical AI as a forward-thinking solution companies can deploy to increase customer and partner trust and boost profitability.
How are you moving toward an ethical use of AI today?
Author: Damien Philippon