AI regulation: will the US AI Bill of Rights save us from the AI wild west?

Maybe not right now, but it certainly is a good start... Skip ahead for the cheat sheet/talking point version of events...

It is a truth universally acknowledged that when we think of regulation, we do not often think of it as the starting point for opportunity. At best, we think of regulation as a set of safety mechanisms put in place by a benevolent organisation whose job is to protect us from harm. At worst, and this applies to many of us who have lived through the heightened, reactionary regulatory measures necessitated by umpteen global financial crises, it evokes the fear of the slow death of opportunity, as we become worn down by laborious approval processes and drowned in a sea of paperwork.

The new US Blueprint for an AI Bill of Rights (released on October 4, 2022) places a great deal of emphasis on providing guidance to ensure that AI solutions are built in a way that supports democratic values and civil rights, with equal opportunities and access for all. Part of the United States' "Advancing Trustworthy AI" strategy, the blueprint contains guidance that is expected to evolve and eventually be enacted into law; much of it focuses on regulation, governance and other mechanisms to protect the rights of the American public.

Although much of the discussion about the AI Bill of Rights has focused on its regulatory aspects, focusing simply on those does it a massive disservice. Rather, it may be better to view the regulation as providing the foundational environment needed to ensure we do not descend into dystopian AI. It could be argued that some large-scale regulation is needed to stem the ever-growing list of egregiously racist, sexist and otherwise discriminatory AI systems that have been released into the wild and only retracted in the face of public scorn. In other words, put in some regulation now to avoid the overregulation that frequently follows a period of underregulation (think: The Handmaid's Tale).

Many of these mechanisms could quite reasonably be classified as common sense, such as clear notice and disclosure when AI is being used, informed consent when data is being collected, and disparity testing to ensure that AI solutions produce equitable results across demographic groups (a minimal sketch of such a check follows below). The fact that these meagre requirements are not already the default should give you pause...
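To make the disparity-testing point concrete, here is a minimal sketch (not taken from the bill itself) of what one such check might look like in Python: it compares the rate of favourable outcomes a model produces across demographic groups and flags groups that fall well behind the best-off group. The column names, example data and the 80% threshold are illustrative assumptions, not anything the blueprint prescribes.

```python
# Minimal, illustrative disparity check (not from the AI Bill of Rights itself).
# Assumes a DataFrame with a demographic-group column and a binary outcome column;
# the column names and the 0.8 threshold are hypothetical choices for this sketch.
import pandas as pd

def disparity_report(df: pd.DataFrame, group_col: str, outcome_col: str,
                     min_ratio: float = 0.8) -> pd.DataFrame:
    """Compare favourable-outcome rates across demographic groups.

    Flags any group whose rate falls below `min_ratio` of the best-off
    group's rate (a simple 'four-fifths rule'-style screen).
    """
    rates = df.groupby(group_col)[outcome_col].mean().rename("favourable_rate")
    report = rates.to_frame()
    report["ratio_vs_best"] = report["favourable_rate"] / report["favourable_rate"].max()
    report["flagged"] = report["ratio_vs_best"] < min_ratio
    return report

if __name__ == "__main__":
    # Made-up data purely for illustration.
    data = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0,   0],
    })
    print(disparity_report(data, group_col="group", outcome_col="approved"))
```

In practice, disparity testing would go further than this (error rates, calibration, and ongoing post-deployment monitoring), but even a basic screen like the one above is more than many released systems appear to have had.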

Beyond the protections provided by the regulatory guidance, the AI Bill of Rights builds on the need to develop AI solutions that support democratic and civil rights by putting the human at the centre of those solutions. Many of the technical guidelines contained within the Bill of Rights emphasise the need for human-centredness to permeate the design, build, release AND post-deployment lifecycle. This imperative brings a host of benefits and pushes inclusive collaboration to the forefront. It is intended to ensure that the bill's core values are not simply enforced through regulation, but built into the way we design and deliver AI solutions.

For example, Agile and Design Thinking practices such as early engagement with users, and continued consultation throughout the solution lifecycle, are stipulated in the AI Bill of Rights guidance, with special note given to engaging “diverse impacted communities”. Solutions can no longer be developed in a silo - this effectively mandates that solutions solve genuine user problems, rather than being driven purely by economic and efficiency goals. Similarly, the AI Bill of Rights defines a number of points where User eXperience (UX) research should be conducted - the recurring theme is that not only should the solution be tailored to the user’s needs, but the interaction and engagement with the system (inclusive of notice, consent, and support) should be tailored to the user’s level of understanding as well.

From an opportunity perspective, the bill presents a huge number of collaboration and engagement opportunities, as it relies heavily on input from a diverse set of experts outside the field of AI. These run the gamut from ensuring the socio-economic impacts of AI disruption are explored before AI-powered solutions are released, to ensuring that explainable AI can be understood by the layperson (less focus on the algorithms, and more focus on the reasoning and impacts, please).

This is not to say that the AI Bill of Rights is without its flaws. The elephant in the room is that it is a set of guidelines and not legally enforceable. It has sometimes been viewed unfavourably in comparison with the European Union’s AI Act (the US AI Bill of Rights’ EU counterpart). Though the EU AI Act is admittedly still a draft, it contains actual legislation that imposes much stricter oversight of AI, with outright bans on certain systems and heavy penalties for infractions of its rules. Interestingly enough, there are those who criticise the EU AI Act for being too prescriptive with its regulation (think: the glass is half empty x 2). As neither has yet resulted in enacted legislation, it remains to be seen how the different approaches will truly play out. Furthermore, the role of public censure in these discussions should not be underestimated; as concerns, anxieties and distrust surface across multiple stakeholder groups, public censure may end up playing a more important role than financial penalties (as people vote with their feet).

Overall, I believe the AI Bill of Rights signposts a number of items that will most definitely shape future AI solutions and the AI ecosystem.

These items can be summarised as:

  • Better regulation and transparency when it comes to the use of AI and data (especially for high-stakes decision-making/solutions in sensitive domains).

  • Increased focus on user engagement and providing users with increased agency over AI solutions.

  • Increased focus on equitability, fairness and ethical AI, and the impact of these solutions on the broader community.



Things you can say to sound smart...

  • Now that both the US and the EU have published their AI protection acts, I'd like to see what we're doing in <insert_country>!

  • EITHER

    • I think that the US has taken a better approach than the EU because the AI Bill of Rights allows the legislation to evolve.

  • OR

    • I think the EU AI Act sends a stronger message than the US AI Bill of Rights since it contains actual laws and penalties for non-compliance (mentioning the draft status of the laws is optional).

  • I'd like to see if they will actually implement:

    • Restrictions on data collection, and prohibition of brokering and on-selling our data.

    • Protections (such as legislation) to ensure that workers are not displaced by AI.

    • Bans on surveillance.

  • It's shocking what people used-to/are-still-trying-to get away with in terms of:

    • Releasing systems that don't work for minorities.

    • Not telling us when AI was being used and/or not providing us with any way to argue back against wrong AI decisions.

    • Using AI to make high-stakes decisions, such as those in the criminal justice system.

Extra-credit lesson...

Whether that was their intention or not, the US AI Bill of Rights has been crafted using an Agile approach (which goes to show you that Agile can be used for policy development, as well as for software development).

This Agile approach involved:

  1. Defining their mission “to ensure protection of the democratic values and civil rights of the American public”.

  2. Focusing on user needs by engaging in a year-long listening and feedback-gathering exercise.

  3. Distilling this feedback into a set of principles and supporting guidelines.

  4. Launching and publishing it to the general public for implementation and review feedback.

  5. [FUTURE] Creating laws that support the principles and guidelines required to support the mission.

Elaine Ng

Founder of AI Shophouse.
