
4 questions to ask when evaluating AI prototypes for bias


Veronica Torres is the worldwide privacy and regulatory counsel for Jumio, where she provides strategic legal counsel regarding business processes, applications and technologies to ensure compliance with privacy laws.

It’s true there has been progress around data protection in the US thanks to the passing of several laws, such as the California Consumer Privacy Act (CCPA), and non-binding documents, such as the Blueprint for an AI Bill of Rights. Yet, there currently aren’t any standard regulations that dictate how technology companies should mitigate AI bias and discrimination.

As a result, many companies are falling behind in building ethical, privacy-first tools. Nearly 80% of data scientists in the US are male and 66% are white, which shows an inherent lack of diversity and demographic representation in the development of automated decision-making tools, often leading to skewed data results.

Significant improvements in design review processes are needed to ensure technology companies take all people into account when creating and modifying their products. Otherwise, organizations risk losing customers to competitors, tarnishing their reputation and facing serious lawsuits. According to IBM, about 85% of IT professionals believe consumers choose companies that are transparent about how their AI algorithms are created, managed and used. We can expect this number to increase as more users take a stand against harmful and biased technology.

So, what do companies need to keep in mind when analyzing their prototypes? Here are four questions development teams should ask themselves:

Have we ruled out all types of bias in our prototype?

Technology has the ability to revolutionize society as we know it, but it will ultimately fail if it doesn’t benefit everyone in the same way.

To build effective, bias-free technology, AI teams should develop a list of questions to ask during the review process that can help them identify potential issues in their models.

There are many methodologies AI teams can use to assess their models, but before they do that, it’s critical to evaluate the end goal and whether there are any groups who may be disproportionately affected by the outcomes of the use of AI.

For example, AI teams should take into consideration that the use of facial recognition technologies may inadvertently discriminate against people of color — something that occurs far too often in AI algorithms. Research conducted by the American Civil Liberties Union in 2018 showed that Amazon’s face recognition software incorrectly matched 28 members of the US Congress with mugshots. A staggering 40% of the incorrect matches were people of color, despite people of color making up only about 20% of Congress.

By asking challenging questions, AI teams can find new ways to improve their models and strive to prevent these scenarios from occurring. For instance, a close examination can help them determine whether they need to look at more data or if they will need a third party, such as a privacy expert, to review their product.
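One concrete way to surface this kind of disparity is to break error rates out by demographic group rather than looking only at overall accuracy. The sketch below is a minimal, illustrative example of that idea; the group labels, sample data and 1.25x threshold are assumptions for demonstration, not part of any particular product or framework.

```python
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, predicted_match, is_true_match) tuples."""
    counts = defaultdict(lambda: {"false_matches": 0, "non_matches": 0})
    for group, predicted_match, is_true_match in records:
        # Only genuinely non-matching pairs can produce a false match.
        if not is_true_match:
            counts[group]["non_matches"] += 1
            if predicted_match:
                counts[group]["false_matches"] += 1
    return {
        group: c["false_matches"] / c["non_matches"]
        for group, c in counts.items()
        if c["non_matches"]
    }

# Illustrative data: (group, model predicted "match", ground-truth "match").
sample = [
    ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", False, False), ("group_b", False, False),
]

rates = false_match_rate_by_group(sample)
print(rates)  # e.g. {'group_a': 0.25, 'group_b': 0.5}

# Flag groups whose false-match rate is well above the best-performing group.
best = min(rates.values())
flagged = {g: r for g, r in rates.items() if best and r / best > 1.25}
print("Disproportionately affected:", flagged)
```

A review like this won't catch every form of bias, but it turns a vague concern ("does this work equally well for everyone?") into a measurable question the team can track release over release.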

Plot4AI is a great resource for those looking to start.

