In the podcast Can AI Be Biased?, our co-founder Brian Sathianathan points out that “a lot of folks, leaders, and executives are concerned about AI, but view it as a wonderful workhorse.” The basic objective of AI is to enable computers to perform intellectual tasks such as decision-making, problem-solving, perception, and understanding human communication. But what if that decision-making ability were flawed by bias in the data set?
We are aware that AI is becoming more pervasive as large organizations incorporate it throughout operations, customer service, and strategic planning. According to a Deloitte report, around 94% of enterprises face potential problems while implementing AI. Whether you have noticed the workings of AI or not, it’s real, it’s present, and it’s not going anywhere anytime soon.
These are early manifestations of the problems that AI encounters. But AI bias is also a human-centered problem. AI engines are only as good as the data sets used to train them. If the humans are biased (for example, sampling only a certain demographic or economic sector), then the data will be skewed. Humans have emotions and assumptions, and those can creep into AI as well. Human reactions to the output of biased machine learning systems make the situation worse: people make decisions based on biased information, and those decisions will probably be consumed by the algorithms later.
The overarching challenge that we want to unpack centers on two questions: what is AI bias, and what can be done to lessen it?
Specifically, in retail, the challenge is finding the right procedures and methods to develop the frameworks, tools, processes, and policies needed to remove bias. AI is still a young technology, and much of it remains experimental. Large corporations were the first to deploy AI and are thus the first to face AI bias. But corporations aren’t pure research organizations, so there may be a gap between profit-seeking systems and the fully equitable automated responses that an unbiased AI would produce.
Once AI begins to take on a bigger role in shaping industries’ business practices and operations, people will be less willing to forgive its flaws. With that in mind, what is bias, and how does it relate to AI?
Our Director of Innovation, Solomon Ray, sums up the definition of AI bias as “the irregularity that occurs when an algorithm produces results that are systematically prejudiced due to erroneous assumptions in the machine learning process, including the algorithm development and prejudicial training data.” AI is a tool, but the adage that “tools don’t have biases, only their users do” may not hold here: AI systems are decision-making tools, and flaws can be built in through the data, the algorithm, or the deployment. What we do with a system and how it is applied can then amplify those effects.
The most apparent biases stem from those same sources: the data, the algorithms, and the people who build and deploy them.
According to Gartner, through 2030, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. On one hand, AI bias will not disrupt most of a person’s day-to-day life; AI will probably help it. An example is when you or I scan a finger to download an app from the App Store. The AI recognizes the minutiae of our fingerprint to approve the download. The action is mundane and repetitive, but worth the security and protection it provides. On the other hand, as seen in law enforcement, AI has shown that it is flawed and capable of ruining a person’s life. One such situation is when AI is used to identify suspects at a crime scene and misidentifies someone as the suspect. In such cases, the AI fails because of a lack of data in the form of images, the relevancy of the data (when it was collected), and where it was collected (for example, an online source). These two drastically different scenarios show how bias has found its way into AI, and if it is not addressed, the consequences for businesses now and in the future will be dire. Knowing what we know, what steps can large corporations, consumers, tech leaders, and governments take to lessen AI bias?
To address AI bias, we need to look at frameworks, tools, processes, and policies, and find a holistic solution capable of not only detecting bias but also removing it where possible. That means taking measurable steps such as defining fairness metrics, finding “blind spots” in data, and testing against them to de-bias AI and increase people’s trust in the system. Currently, many large tech corporations use open source tools to detect AI bias, seeking out those that solve specific AI-bias problems.
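To make “defining metrics” concrete, here is a minimal sketch in plain Python of one common check, the disparate impact ratio, which compares favorable-outcome rates between two groups. The data, column names, and loan-approval scenario are hypothetical, and the 0.8 threshold is the informal “four-fifths rule” heuristic rather than a universal standard; a real audit would rely on dedicated open source fairness toolkits and several metrics at once.

```python
# Minimal sketch: disparate impact ratio between two groups.
# Column names and records are hypothetical, for illustration only.

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact(records, group_key, outcome_key, group_a, group_b):
    """Ratio of favorable-outcome rates: group_a relative to group_b."""
    rate_a = favorable_rate([r[outcome_key] for r in records if r[group_key] == group_a])
    rate_b = favorable_rate([r[outcome_key] for r in records if r[group_key] == group_b])
    return rate_a / rate_b if rate_b else float("inf")

# Hypothetical loan-approval predictions from a model under audit.
predictions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

ratio = disparate_impact(predictions, "group", "approved", "B", "A")
print(f"Disparate impact ratio (B vs. A): {ratio:.2f}")

# A common heuristic (the "four-fifths rule") flags ratios below 0.8.
if ratio < 0.8:
    print("Potential bias flagged: group B is approved far less often.")
```

Passing a check like this does not prove a model is unbiased; it only surfaces one measurable gap, which is why defining metrics belongs inside a broader process of finding blind spots and testing against them.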
According to PwC, 76% of CEOs are most concerned with the potential for bias and lack of transparency when it comes to AI adoption. AI research keeps producing advanced algorithms for use cases that currently deployed AI engines cannot solve. New approaches being put into practice include synthetic data generation, transfer learning, generative networks, neural networks, and reinforcement learning. All of these methods remain susceptible to bias, as the sketch below illustrates for synthetic data.
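As a minimal illustration (with made-up group labels and a toy “generator” that simply resamples from empirical frequencies, rather than a real generative model), synthetic data generation inherits whatever skew exists in its source data:

```python
import random

random.seed(42)  # reproducible sketch

# Hypothetical, skewed source data: group A is heavily overrepresented.
source = ["A"] * 90 + ["B"] * 10

# "Generate" 1,000 synthetic records by sampling from the source
# distribution. Real generative models are far more sophisticated,
# but they likewise learn and reproduce their training distribution.
synthetic = random.choices(source, k=1000)

share_b = synthetic.count("B") / len(synthetic)
print(f"Group B share in synthetic data: {share_b:.1%}")  # stays near 10%
```

The point is that more data is not automatically better data: generating more records from a biased source simply scales the bias up.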
Companies that understand AI bias as a problem can turn it into an opportunity. Rather than treating bias merely as a bug to be fixed, companies that avoid it will engender higher confidence among their customers and will find that trustworthiness is a feature that gives users confidence in AI. To reach the full potential of AI, minimize biases, and develop trust, all of these solutions could be used together in an organization. Take Apple’s privacy feature introduced in the iOS 14.5 update, for example. The feature essentially gave users the reins to do what they felt was appropriate with their data. Apple made an effort to protect the public’s privacy and security, making itself part of the solution rather than part of the problem.
AI bias is a product of (1) incomplete data and (2) human biases playing a big role in how AI is built and used. The tools to mitigate bias are readily available, but using them starts with humans making a conscious effort to recognize our own biases and to apply that awareness to the policies, tools, processes, and frameworks used in AI development. If we want to remove as much bias as we realistically can, we need to understand AI bias as deeply as possible and accept that it is not the AI’s fault; the root of the problem falls on its creators. One way to approach this is to take our current use cases, think about where AI bias could occur in each application, and ask whether removing that bias would benefit or hurt the business. It comes down to asking how, what, and why, and addressing the problems from the team (product owners, developers, leaders) down to the data itself.