
3 ways AI can create a more equitable future for food distribution


At best, only 70% of food in the United States gets used; the rest goes to waste. As devastating as that figure is, the good news is that this massive waste of food and resources doesn’t have to continue. AI-powered systems can enable a fairer distribution of food, ultimately benefiting businesses, consumers, and the planet as a whole.

First, though, we need to tackle one of the top ethical challenges of AI: preexisting bias.

AI and ethical concerns

It’s important to remember that AI systems are not inherently biased. However, they learn to interpret inputs from historical data, and with no further context, an AI system will replicate the same human judgments that produced that data. The result is a program that can be just as biased as the people whose decisions it learned from.


For example, consider an AI-driven stock management system for a retail grocery chain. Historically, the chain has opted to shift the majority of its stock to stores in affluent neighborhoods, leaving other stores battling shortages. If the AI system is trained on this data, it will replicate those biased choices rather than shift to a more balanced, equitable distribution of merchandise.

This is not the fault of the AI system. AI is trained to make decisions as close as possible to the ones present in the original data. With no additional guidance or governance, it will simply repeat the patterns it observed in the past, however inequitable they were.
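To make that concrete, here is a minimal, hypothetical sketch (toy numbers, a simple scikit-learn regression, invented store features) showing how an allocation model trained on skewed historical data reproduces the skew it was shown:

```python
# Hypothetical illustration only: features, stores, and figures are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Historical records: [median neighborhood income (USD), weekly demand (units)]
X_hist = np.array([
    [95_000, 480],   # affluent-area stores, generously stocked in the past
    [90_000, 450],
    [40_000, 440],   # lower-income stores with comparable demand...
    [38_000, 430],
])
# ...that historically received far less stock.
y_hist_allocation = np.array([900, 870, 320, 300])

model = LinearRegression().fit(X_hist, y_hist_allocation)

# Two new stores with identical forecast demand but different neighborhood income.
new_stores = np.array([[92_000, 460], [39_000, 460]])
print(model.predict(new_stores))
# The affluent-area store gets a far larger predicted allocation even though
# demand is identical: the model has faithfully learned the old bias.
```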

So does this mean that we’re destined to replicate biased decisions as we scale up AI deployments to increasingly automate and optimize industrial and retail systems? Not at all. It is our responsibility as business owners and technologists to develop an overarching set of guiding principles aimed at helping organizations design and deploy ethical and, ultimately, equitable AI systems and solutions.

AI for social good and fairer food distribution

Where do we start when it comes to removing inherent biases from AI systems that have the potential to more equitably distribute food? First, we must thoughtfully design the desired behavior of the AI system. Instead of unguided learning from historical data, we must encode fairer principles into the system at the creation phase.
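As a rough illustration of what encoding a principle at the creation phase could look like, the sketch below (a hypothetical rule with invented numbers) guarantees every store a minimum share of its forecast demand before any surplus is distributed, rather than learning the split from historical allocations:

```python
# Hypothetical allocation rule: equity is hard-coded, not learned from history.

def allocate(total_stock: float, demand: dict[str, float], min_fill: float = 0.8) -> dict[str, float]:
    """Give each store at least `min_fill` of its forecast demand before splitting any surplus."""
    # Guaranteed baseline for every store, scaled down uniformly if stock is scarce.
    baseline = {store: d * min_fill for store, d in demand.items()}
    scale = min(1.0, total_stock / sum(baseline.values()))
    allocation = {store: b * scale for store, b in baseline.items()}

    # Distribute whatever remains in proportion to each store's unmet demand.
    remaining = total_stock - sum(allocation.values())
    unmet = {store: demand[store] - allocation[store] for store in demand}
    if remaining > 0 and sum(unmet.values()) > 0:
        for store in demand:
            allocation[store] += remaining * unmet[store] / sum(unmet.values())
    return allocation

# Two stores with equal demand receive equal stock, regardless of past favoritism.
print(allocate(1000, {"downtown": 600, "suburb": 600}))
```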

Second, we have to empower users to act when they realize an AI system is showing bias. Currently, it can be hard for someone affected by an AI-powered algorithmic decision to intervene. Potentially worrisome AI decisions aren’t set up to be changed on the front lines; they must be escalated to upper management, which then determines whether an engineer should act on the complaint. In the meantime, the AI system continues doing what it’s doing, which may end up hampering food waste solutions.
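One way to make that kind of intervention possible, sketched below with hypothetical names and a deliberately simple design, is to attach a flag-and-override hook to every automated decision so a frontline user can correct it immediately while the complaint is routed for review:

```python
# Hypothetical human-in-the-loop wrapper around an automated allocation decision.
from dataclasses import dataclass, field

@dataclass
class AllocationDecision:
    store_id: str
    model_allocation: float
    override_allocation: float | None = None
    flags: list[str] = field(default_factory=list)

    def flag(self, reason: str, override: float | None = None) -> None:
        """Record a frontline complaint and, optionally, apply a manual correction."""
        self.flags.append(reason)
        if override is not None:
            self.override_allocation = override

    @property
    def effective_allocation(self) -> float:
        # A human override always takes precedence over the model's output.
        return self.override_allocation if self.override_allocation is not None else self.model_allocation

decision = AllocationDecision("store-117", model_allocation=320)
decision.flag("Allocation far below demand observed in store", override=500)
print(decision.effective_allocation, decision.flags)
```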

Prioritizing the ethics of AI

For many years, professionals have struggled with how to implement AI in business so that it benefits all stakeholders. In the food industry, building a few practices into the use of AI can speed up equitable food distribution while also promoting better corporate branding, publicity, and customer loyalty.

  1. Take fairness seriously from the outset.

Many stakeholders are affected by a single AI system, so it’s imperative that their needs are considered from the planning and development stages onward. Programmers must make sure those stakeholders are involved in the process from the outset, at least in some way. For instance, many companies have uniform cultures and blind spots that end up hurting certain end-user populations. Knowing those potential pitfalls early helps AI system creators engineer systems that take a holistic approach to applying data.

  2. Define what fairness means.

No AI model can operate fairly if no one knows what “fair” or “equitable” behavior looks like. Merely allowing AI to learn from historical data isn’t the best approach to maintaining fairness or equity. To adjust for past imbalances, programmers must modify AI algorithms to correct for them. Yet this can happen only if concrete standards for what fairness means are established from the start, based on a company’s values, the input of independent experts, and the expectations of consumers.

  3. Reevaluate working AI systems.

AI systems should never be deploy-and-forget solutions. Their outputs must be monitored consistently to identify and rectify “data drift.” Data drift happens when the behaviors and patterns the system was trained on change over time. While data drift isn’t necessarily bad, it can lead to unanticipated behaviors. Consequently, staying on top of AI systems is essential so engineers can react to unforeseen issues and make changes when appropriate.
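As a rough sketch of what that monitoring might look like (synthetic data and an illustrative threshold, assuming SciPy is available), a pipeline can routinely compare the distribution of a live input against the distribution the model was trained on and raise an alert when the two diverge:

```python
# Hypothetical drift check: compare live demand data against the training distribution.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_demand = rng.normal(loc=450, scale=40, size=5_000)  # demand seen during training
live_demand = rng.normal(loc=520, scale=55, size=1_000)      # demand observed this week

statistic, p_value = ks_2samp(training_demand, live_demand)
if p_value < 0.01:
    # In a real pipeline this would open an alert so engineers can review,
    # retrain, or adjust the model before stale decisions accumulate.
    print(f"Data drift detected (KS statistic = {statistic:.2f}); flag the model for review.")
```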

Can AI make the world more equitable? Every day, AI is getting closer to being able to improve people’s lives and encourage equity. The key to leveraging AI to address food waste and distribute food properly will lie in seeing AI as a tool rather than a one-stop solution. In other words, AI systems require a human touch in order to maximize their potential to promote food fairness.

By Tiago Ramalho

Tiago Ramalho

Tiago Ramalho is the co-founder and CEO of Recursive, a technology consulting company based in Japan that specializes in developing AI systems and helping businesses reach their sustainability goals. Recursive collaborates with large enterprises to create innovative solutions, merging expertise in AI research and design thinking with clients’ domain knowledge.