22 Feb 2024

The Unwitting AI Ethicist

If you're curious about the types of ethical decisions AI engineers are faced with, this article is for you. TL;DR: AI engineers should take on some ethical responsibilities; others should be left to society. Read on to find out more...

Words by
Alex Carruthers

This article was originally published on LinkedIn: The Unwitting AI Ethicist | LinkedIn


Introduction

The threat of modern AI spiralling out of control is a real one. AI is changing the world. To echo talks by Sam Altman, OpenAI's CEO (well, at least at the time of writing): no one really knows what "unlimited intelligence" will do. Nobody knows the true extent of its impending impact.

Nevertheless, that it will impact our lives significantly is not up for debate – the next AI winter is not coming, Jon Snow. History will show that 2023 was the year AI 'proved concept', and 2024 is held to be the year that AI will scale. It's February, and already we've moved beyond language models to auto-generated video from both Google and OpenAI. The world sits in eager anticipation of Apple's AI (or perhaps, AiOS? You heard it here first 😉).

No wonder then that AI ethics is receiving a wave of intense attention. Amid its opportunities, there are challenges of fairness and privacy that must be met.

Importantly, AI ethics should not only be a prop for business development PowerPoints, corporate social responsibility press releases and AI conference talks, but a practical conversation for those who create the systems.

There are many steps AI engineers can take to help us march towards ethical AI, but they cannot do everything. We need to be clearer about where the engineer's responsibility starts and ends. Many choices must be made by society, in the form of what it will tolerate or stand against, and which governments it elects to govern accordingly. An interesting tangent to consider here is that societies around the world will have very different tolerances and governments, so AI ethics will look different everywhere. This raises a few questions about AI import/export, too. But we'll leave that can of worms be for now and focus here on what AI engineers can, should, can't and shouldn't do in the world of AI ethics.

What AI Engineers Can and Should Do

There is confusion and debate about what ethical responsibilities AI engineers should take on. Even assuming, for the sake of argument, that all engineers operate within the boundaries of the law, regulations are an imperfect reflection of ethics and societal expectations. We can, however, point out a few things that aren't in question, and therefore things that engineers can and should do.

That AI systems should be equitable is widely held as a given.

For example, a facial recognition system that is worse at recognising faces from ethnic minority groups is something an engineer should tackle.

Underrepresentation is a technical challenge, indicating that the data used to train these models needs rebalancing. Some scholars have unhelpfully framed this as a deep problem that is the responsibility of wider society to solve; how's that for a solution, eh: the world needs to be less prejudiced? Ideally, yes, it should be, but for the practitioners out there with a biased AI model, this type of dataset rebalancing is a standard technical operation, and one engineers should tackle.
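To make that concrete, here is a minimal sketch of naive group rebalancing by oversampling. The DataFrame, column names and function are illustrative assumptions, not a prescription; in practice, targeted data collection or augmentation is usually preferable to simple duplication.

```python
import pandas as pd

def rebalance_by_group(df: pd.DataFrame, group_col: str = "group",
                       seed: int = 0) -> pd.DataFrame:
    """Naively oversample underrepresented groups so that every group
    appears as often as the largest one. A starting point only."""
    counts = df[group_col].value_counts()
    target = counts.max()
    parts = []
    for group, n in counts.items():
        subset = df[df[group_col] == group]
        # Sample with replacement to bring smaller groups up to the target size.
        parts.append(subset.sample(n=target, replace=(n < target), random_state=seed))
    # Shuffle the combined, rebalanced dataset.
    return pd.concat(parts).sample(frac=1, random_state=seed).reset_index(drop=True)

# Hypothetical usage:
# df = pd.read_csv("faces_metadata.csv")
# balanced = rebalance_by_group(df, group_col="ethnicity")
```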

Despite a present lack of regulatory requirements, AI engineers can do much to move us all towards a world of ethical AI.

Here are a few other responsibilities we think they should include in their day-to-day:

  1. AI alignment procedures. Conducting tests that score AI for robustness and resilience is just as important as tracking accuracy metrics. These tests can be an important indicator for proactively understanding whether an AI could breach ethical principles (or any objective, for that matter). If you don't have clear direction on principles from your leadership, involve the right people from across your organisation and stimulate the conversation!

  2. Due diligence ensuring diverse, representative datasets. Adversarial attacks can reveal which features are underrepresented and inform the data augmentation needed to represent them, preventing systemic biases. This is particularly true in classification systems. For example, if you are training a facial recognition system, check that minority groups, or people wearing glasses, or people with three noses, are reasonably represented in the training data. If not, take steps to rebalance it. If you can't rebalance it, then you need to set boundaries so users don't apply the AI in situations where this underrepresentation may affect an action or decision.

  3. Screening data for backdoors. Adversaries can insert subtle backdoors into training data that can be triggered after deployment, causing the system to misfire in specific ways that might harm users or be unethical. We find that by red-teaming and running adversarial AI against your own systems, you will not only uncover vulnerabilities to attack, but also potential causes of natural failure. Sometimes attack really is the best defence!

  4. Transparency in algorithms. Clear documentation and model or system cards make the system's decision-making more auditable and enhance a user's understanding of the system. Understanding a system's weaknesses provides clarity on the scenarios or contexts in which an AI model is likely to make a good or bad decision or action.

  5. Ensuring confidential training data remains private. Looking after stakeholder data is an undeniable component of AI ethics, whether or not it's mandated in your sector or country. This involves taking steps to plug privacy holes that exist in modern AI, such as preventing data extraction or membership inference attacks (a minimal check for the latter is sketched after this list).
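To illustrate the kind of check point 5 calls for, below is a minimal sketch of a loss-threshold membership inference test. The function, variable names and threshold strategy are our own illustrative assumptions; published attacks are considerably more sophisticated.

```python
import numpy as np

def membership_inference_rate(loss_members: np.ndarray,
                              loss_non_members: np.ndarray) -> float:
    """Crude loss-threshold membership inference test.

    If per-example losses on training ("member") data are clearly separable
    from losses on held-out ("non-member") data, an attacker can often guess
    who was in the training set. Returns the attack's balanced accuracy;
    values well above 0.5 suggest a privacy leak worth investigating
    (e.g. with regularisation or differential privacy).
    """
    # Try every observed loss value as a candidate decision threshold.
    thresholds = np.concatenate([loss_members, loss_non_members])
    best = 0.5
    for t in thresholds:
        # Predict "member" when the loss is below the threshold.
        tpr = np.mean(loss_members < t)
        tnr = np.mean(loss_non_members >= t)
        best = max(best, (tpr + tnr) / 2)
    return best

# Hypothetical usage, given per-example losses from your own model:
# rate = membership_inference_rate(train_losses, test_losses)
# print(f"Membership inference accuracy: {rate:.2f}")
```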


We hope, as you read these actions, that a thought occurs to you: that doing the ethical thing often aligns with doing the best thing for the organisation.

What’s not to like about AI that 1) does what it’s supposed to, 2) makes better decisions for more people, 3) prevents adversarial activity, 4) makes decisions and takes actions that keep people accountable and are understood by users, and 5) keeps internal data secure?

What AI Engineers Can’t and Shouldn’t Do

Sometimes we also make the reverse mistake, asking AI engineers to adjudicate on complex ethical questions that should remain the responsibility of wider society. The question, “what constitutes harmful data?”, is too complex and important to be left to AI practitioners, who have their own biases and incentives. This is the sort of question that should be adjudicated by wider society.

A smart way to nail down where the responsibility of AI engineers ends is by pointing out where the responsibility of society begins. Decisions about how and where AI is deployed, and its appropriate limitations, should be made by governments and wider society.

  1. How much explainability do we require from AI? The sharp end of this debate involves areas like banking and law enforcement, where we’ve already agreed that algorithms should not make choices based on race, gender, and other protected characteristics. This debate is at a relatively advanced stage, and the dust has settled on this idea of ‘impact’. If an AI model is likely to impact well-being, then AI engineers are required to build explainable AI. However, as new capabilities emerge and impact sectors in new ways, this debate will continue.

  2. What should be excluded from AI training data? The massive datasets required to train a modern NLP system can only come from the vastness of the internet. However, we all know the internet is awash with hate speech and misinformation. As the courts are seeing, the boundary of intellectual and artistic property is a complex one. For example, the New York Times may block OpenAI’s crawler from its site, but what about all the quoted material reposted by regular people (often presented without citation)? In some cases it may not be possible to prevent these models from absorbing IP. So, do we ban them? Should topics that relate to IP be forbidden?

  3. What is an acceptable amount of error for a particular task? How many car accidents is too many when it comes to driverless cars? Tesla talks about error rates orders of magnitude lower than human error rates, yet still this is not good enough. Our legal system is built on agency, but AI agents are not considered agents under the law. Air Canada learned this the hard way recently, when it was forced to honour some of its chatbot’s offers. It’s a complicated question, and every solution will be case-by-case. For instance, we will probably decide to tolerate more error in a hotel reservation robot than in a driverless car, a robot, or an autonomous military vehicle.

  4. Mistakes are inescapable, so who will bear the responsibility? If a driverless car crashes, how much of the cost is borne by the manufacturer, the car owner, and those who maintain the infrastructure? What if a stock price crashes? A prison sentence, a credit score, a home loan application: the examples could go on.


These are only a few examples among many. Others include:

  5. Should we allow human jobs to be replaced?

  6. Can AI companies be expected to keep superintelligences under control?

  7. How much should we regulate the users of AI versus its creators?

  8. What about environmental considerations?

...

Thankfully, the pressure on engineers is being alleviated. A core aspect of incoming AI regulation and principles revolves around preventing harm and promoting equitable outcomes.

Europe’s AI Act is the most advanced example of this: a legislative framework seeking to balance AI innovation with ethical considerations and the protection of fundamental rights. The AI Act takes a ‘fundamental rights approach’, focusing on the impact of the use of AI rather than regulating any specific technology. The legal reasoning for doing this is to create a set of standards that will stand the test of time, driving ethical AI development long into the future. As European Parliament President Roberta Metsola said, it is “legislation that will no doubt be setting the global standard for years to come.”

Whilst the time has come for international organisations to install governance procedures that comply with emerging AI regulations, there will still be many nuances not perfectly covered by the law. There will still be many moments in an engineer’s career where they will have to make up their own minds about ethical AI.

They shouldn’t have to, but they will.

Further reading

Read our two opinion pieces on the AI Act: