As artificial intelligence becomes an essential part of decision-making in areas like hiring and public safety, a serious problem has emerged: algorithmic bias. It is now possible to be denied a job because an algorithm decided against you. That might sound abstract, like something from an episode of Black Mirror, but the problem is very real, and real laws are needed to fix it. That need has generated unprecedented attention and interest in artificial intelligence law.
Understanding Algorithmic Bias
Algorithmic bias results from systematic flaws in a model, whether problematic design choices or troublesome data. Facial recognition systems used for security, for example, are often trained mostly on images of people with light skin, leading to poor accuracy whenever they are tasked with identifying people with darker complexions. AI-based hiring tools, meanwhile, may undervalue female or PoC candidates if they are trained on skewed data. In each case, the results, intended or not, have serious consequences and raise more than a few ethical concerns. Thus, it should come as no surprise that both the US and the UK are taking action.
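One common way to spot the kind of hiring bias described above is the "four-fifths rule" heuristic used in US employment law: if one group's selection rate falls below 80% of another's, that is a red flag for adverse impact. A minimal sketch of such an audit, using hypothetical decision data and made-up group labels:

```python
# Illustrative sketch: auditing a hiring model's outcomes against the
# "four-fifths rule" heuristic. All data and group labels are hypothetical.

def selection_rate(decisions):
    """Fraction of candidates selected (1 = hired, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A value below 0.8 is a common red flag for adverse impact."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.7
group_b = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # selection rate 0.3

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.3 / 0.7 ≈ 0.43
if ratio < 0.8:
    print("Potential adverse impact: review training data and features.")
```

A check like this does not prove bias on its own, but it is the sort of simple, auditable measurement that transparency and accountability proposals expect companies to perform.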
The U.S. Response: Algorithmic Accountability and Fairness
The US has begun to tackle AI bias in several key ways. To date, the Algorithmic Accountability Act is perhaps the most impactful proposal. It would require companies to assess the impact of their AI models, particularly those used in hiring and public safety. Under the proposal, businesses would need to address the negative consequences of an algorithmic decision and pinpoint what was responsible, most likely their modeling tools or methods. In other words, companies cannot simply shrug off a biased outcome; they have to be able to explain and defend it.
The U.K. Approach: Safeguarding Fairness and Equality
Across the Atlantic, UK regulators are also concerned about bias in AI. They want companies to be transparent about the outputs of AI models, including those used in hiring. Regulators also advocate for "explainability", meaning stakeholders must be able to understand how an AI model reached the decision it did. The general idea is to ensure that people are not subjected to unreasonable, or unjustifiably biased, AI-based decision-making, especially when those decisions occur in public services or policing.
Looking Forward: The Future of AI Fairness Laws
To sum up, laws addressing algorithmic bias are crucial to promoting a level playing field in the digital world. US lawmakers and UK regulators have already laid the groundwork needed to effectively tackle AI bias in hiring and law enforcement, and to keep the science fiction of biased machines from becoming fact, so the technology can be deployed more inclusively.