At best, only 70% of food in the United States gets used; the rest goes to waste. That statistic is devastating, but the good news is that this massive waste of food and resources doesn't have to continue. AI-powered systems can enable a fairer distribution of food, ultimately benefiting businesses, consumers, and the planet as a whole.
First, though, we need to tackle and overcome one of the top ethical challenges of AI: preexisting bias.
It’s important to remember that AI systems are not inherently biased. However, they learn to interpret inputs based on historical data from past events. With no further context, AI will continue to replicate the same human judgments that led to specific outcomes, meaning that the AI program may be just as biased as the human user.
For example, consider an AI-driven stock management system for a retail grocery chain. Historically, the chain has opted to shift the majority of its stock to stores in affluent neighborhoods, leaving other stores battling shortages. If the AI system is trained on this data, it will replicate those biased choices rather than shift to a more balanced, equitable distribution of merchandise.
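To make this concrete, here is a minimal sketch of how a naive allocation model trained purely on historical shipment records would reproduce the skew in those records. The store names and figures are invented for illustration:

```python
# Hypothetical illustration: a stock-allocation "model" that learns only
# from historical shipment records reproduces whatever skew those
# records contain. Store names and numbers are invented.

historical_shipments = {
    "store_affluent_a": 500,
    "store_affluent_b": 450,
    "store_underserved_a": 120,
    "store_underserved_b": 130,
}

def learned_allocation(history, total_stock):
    """Allocate new stock in proportion to past shipments --
    i.e., replicate the historical pattern, bias included."""
    total_past = sum(history.values())
    return {store: total_stock * past / total_past
            for store, past in history.items()}

allocation = learned_allocation(historical_shipments, total_stock=1200)
# The affluent stores still receive roughly 79% of the stock, because
# the model optimizes for similarity to past decisions, not for equity.
```

The model is doing exactly what it was asked to do; the inequity lives in the data, not the arithmetic.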
This is not the fault of the AI system. AI is trained to make decisions as close as possible to those present in the original data. With no additional guidance or governance, it will default to what the data suggests and repeat the patterns humans established in the past.
So does this mean that we're destined to replicate biased decisions as we scale up AI deployments to increasingly automate and optimize industrial and retail systems? Not at all. It is our responsibility as business owners and technology leaders to develop an overarching set of guiding principles aimed at helping organizations design and deploy ethical and, ultimately, equitable AI systems and solutions.
Where do we start when it comes to removing inherent biases from AI systems that have the potential to more equitably distribute food? First, we must thoughtfully design the desired behavior of the AI system. Instead of unguided learning from historical data, we must encode fairer principles into the system at the creation phase.
Second, we have to empower users to act when they realize an AI system is showing bias. Currently, it can be hard for someone affected by an AI-powered algorithmic decision to intervene. Potentially worrisome AI decisions aren't set up to be changed on the front lines; they must be brought to upper management to determine whether an engineer should act upon the complaint. In the meantime, the AI system will continue to do what it's doing, which may end up hampering food waste solutions.
For many years, professionals have struggled with how to implement AI in business so that it benefits all stakeholders. In the food industry, adding several practices to the use of AI can speed up equitable food distribution and promote better corporate branding, publicity, and customer loyalty.
Many stakeholders are affected by a single AI system; thus, it's imperative that their needs be considered from the planning and development stages. Programmers must make sure those stakeholders are involved in the process from the outset, at least in some way. For instance, many companies have uniform cultures and blind spots that end up hurting certain end-user populations. Knowing those potential pitfalls early helps AI system creators engineer systems that take a holistic approach to applying data.
No AI model can operate fairly if no one knows what "fair" or "equitable" behavior looks like. Merely allowing AI to learn from historical data isn't the best approach to maintaining a sense of fairness or equity. To adjust for past imbalances, programmers must modify AI algorithms to correct for them. Yet this can happen only if concrete standards for what fairness means are established from the start, based on a company's values, the input of independent experts, and the expectations of consumers.
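One simple way to encode such a standard is to make the fairness rule an explicit parameter rather than an emergent property of the data. The sketch below guarantees every store a minimum share of stock before distributing the remainder by demand history; the floor value is an invented policy parameter a company would set from its own fairness standards:

```python
# Hypothetical sketch: encode an explicit fairness standard -- here, a
# minimum stock share per store -- instead of letting allocation be
# learned purely from skewed history. `min_share` is an invented policy
# parameter, not a value from any real system.

def fair_allocation(history, total_stock, min_share=0.15):
    """Guarantee every store at least `min_share` of total stock,
    then allocate the remainder in proportion to demand history."""
    n = len(history)
    if min_share * n > 1:
        raise ValueError("minimum shares exceed 100% of stock")
    floor = min_share * total_stock
    remaining = total_stock - floor * n
    total_past = sum(history.values())
    return {store: floor + remaining * past / total_past
            for store, past in history.items()}

history = {"store_a": 500, "store_b": 450, "store_c": 120, "store_d": 130}
allocation = fair_allocation(history, total_stock=1200)
# Every store now receives at least 180 units (15% of 1200), while
# higher-demand stores still receive proportionally more of the rest.
```

The point is that the fairness rule is visible, auditable, and adjustable, so it can be debated against company values and expert input rather than buried in training data.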
AI systems should never be deploy-and-forget solutions. Their outputs must be monitored consistently to identify and rectify “data drift.” Data drift happens when the behaviors and patterns the system was trained on change over time. While data drift isn’t necessarily bad, it can lead to unanticipated behaviors. Consequently, staying on top of AI systems is essential so engineers can react to unforeseen issues and make changes when appropriate.
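A basic drift monitor can be as simple as comparing recent inputs against the range the system saw at training time. The sketch below flags an alert when the recent mean strays too far from the training mean; the threshold is an invented operational choice, and real deployments often use statistical tests or metrics such as the Population Stability Index instead:

```python
# Hypothetical monitoring sketch: flag data drift when the mean of
# recent values deviates from the training mean by more than a chosen
# number of training standard deviations. All numbers are invented.

import statistics

def drift_alert(training_values, recent_values, z_threshold=3.0):
    """Return True if recent data's mean deviates from the training
    mean by more than z_threshold training standard deviations."""
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    recent_mu = statistics.mean(recent_values)
    return abs(recent_mu - mu) > z_threshold * sigma

training_demand = [100, 105, 98, 102, 101, 99, 103]
stable_week = [101, 100, 104]
shifted_week = [150, 160, 155]  # demand pattern has clearly changed

drift_alert(training_demand, stable_week)   # False: within normal range
drift_alert(training_demand, shifted_week)  # True: investigate, retrain
```

An alert like this doesn't decide anything on its own; it simply tells engineers when the world has moved away from the data the system was trained on, so they can react before biased or stale behavior compounds.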
Can AI make the world more equitable? Every day, AI is getting closer to being able to improve people’s lives and encourage equity. The key to leveraging AI to address food waste and distribute food properly will lie in seeing AI as a tool rather than a one-stop solution. In other words, AI systems require a human touch in order to maximize their potential to promote food fairness.
By Tiago Ramalho