Is South Africa ready to handle AI?

As AI becomes a bigger part of our daily lives, the legal and ethical questions it raises demand careful thought.

2024 was tipped to be a groundbreaking year in the world of artificial intelligence (AI), with even more applications rolled out across various fields.

However, as AI technology rapidly grows, questions arise about who should be held responsible when things go wrong.

For example, IBM Watson for Oncology was promoted as a tool to help doctors diagnose and treat cancer in the US.

Soon thereafter, reports surfaced that Watson had recommended “unsafe and incorrect” cancer treatments, raising concerns about its reliability and the potential for harm to patients.

In addition, AI tools used in radiology to detect diseases like cancer have occasionally missed diagnoses or made incorrect ones.

In some cases, this has led to delayed treatments or unnecessary procedures, potentially harming patients.

In 2018, an Uber self-driving car struck and killed a pedestrian in Tempe, Arizona, in the US.

The AI system failed to identify the pedestrian correctly and did not take appropriate action to avoid the collision.

Who is to blame?

So, who should be held liable when AI causes harm? In South African law, liability for wrongful conduct that results in harm is governed by the law of delict.

Delict traditionally involves a straightforward test: the conduct of the alleged wrongdoer is compared to what a “reasonable person” would have done in the same circumstances.

This standard represents society’s expectations of proper behaviour.

It aims to ensure that everyone acts responsibly. As AI systems become increasingly prevalent, however, the position grows more complex.

But first, to see matters in context, it is important to define what AI is: the cutting edge of technology, comprising systems that can perform tasks that previously required human intelligence.

It is a tool designed and regulated by humans, yet it can operate independently, making decisions and taking actions without direct human involvement.

There are different types of AI, but for the purposes of this article, the focus will be on AI that handles a specific task, such as facial recognition.

Facial recognition is a technology that uses AI to identify or verify a person by analysing their facial features: a camera captures an image or video of a person’s face, and the system compares it against stored facial data.
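To make the idea concrete, the sketch below shows what a basic one-to-one face verification might look like, using the open-source Python face_recognition library purely as an illustration; the file names are hypothetical placeholders, and real-world systems are considerably more involved.

```python
# Illustrative sketch only: simple one-to-one face verification
# using the open-source `face_recognition` Python library.
# The file names below are hypothetical placeholders.
import face_recognition

# Load a stored reference photo and a newly captured camera frame
reference_image = face_recognition.load_image_file("stored_id_photo.jpg")
camera_image = face_recognition.load_image_file("camera_capture.jpg")

# Reduce each detected face to a numeric encoding of its features
reference_encoding = face_recognition.face_encodings(reference_image)[0]
camera_encoding = face_recognition.face_encodings(camera_image)[0]

# The AI decides "match" or "no match" by comparing the encodings
match = face_recognition.compare_faces([reference_encoding], camera_encoding)[0]
print("Same person" if match else "Different person")
```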

So, who should be blamed if such an AI system causes harm? Should it be the creators, the operators, or someone else?

South Africa does not yet have an established legal framework for dealing with AI-related harm; the law in this area is still developing.

Moreover, specific legislation for AI does not yet exist, even as technologies like self-driving cars begin to make an appearance.

This means our existing laws need to adapt quickly to keep up with the pace of AI development.

In the South African context, a claim for damages is brought under the law of delict, which allows one to claim for harm sustained as a result of another person’s wrongful conduct.

The following elements must be proven: conduct that causes harm, wrongfulness, fault and causation.

Conduct refers to an action or omission (a failure to act where there was a legal duty to do so) that causes harm.

When it comes to AI, the focus is on how AI-driven actions, guided by complex processes, result in harm.

Wrongfulness: This relates to harm caused in a legally unacceptable way.

For AI, this involves determining if the harm caused by AI falls within what is considered wrongful by law, including ethical considerations.

Fault: This involves intention or negligence. Typically, the “reasonable person” test is used to determine whether a person in the same situation would have foreseen the harm and taken steps to prevent it.

For AI, this might extend to those who design or operate these systems.

Causation: This requires showing that the conduct was the factual and legal cause of the harm.

With AI, this means understanding how AI actions lead to harm, considering factors like foreseeability and directness.

When a product causes harm, manufacturers or suppliers can be held accountable under the Consumer Protection Act of 2008.

The Act requires that products be safe and of good quality. However, AI introduces new problems because it can change and learn from new data.

This self-modifying trait makes it difficult to apply traditional legal principles.
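To illustrate why (this is a hypothetical sketch, not a system from the article), consider a model that keeps learning after deployment. Using scikit-learn’s incremental partial_fit API on synthetic data, the same input can receive a different decision before and after a self-update, which is precisely what complicates tracing fault back to the original design:

```python
# Illustrative sketch only: a model that keeps learning after deployment.
# The same input can be classified differently before and after an update.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial training: the behaviour the manufacturer shipped
X_initial = rng.normal(0, 1, size=(200, 2))
y_initial = (X_initial[:, 0] > 0).astype(int)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

test_case = np.array([[0.1, 0.0]])
print("Decision at release:", model.predict(test_case))

# Later, the deployed system keeps learning from new, shifted data
X_new = rng.normal(1, 1, size=(200, 2))
y_new = (X_new[:, 0] > 1.5).astype(int)  # the decision boundary has moved
model.partial_fit(X_new, y_new)

print("Decision after self-updating:", model.predict(test_case))
```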

As AI technology progresses, South Africa needs to keep pace by ensuring that our laws adapt to these changes. The introduction of specialised courts, staffed by experts in both law and technology, to handle AI-related cases may assist.

Lawyers need to be educated about AI so they can deal with these intricate issues efficiently. Collaboration between lawmakers, industry and AI developers will help ensure safe and responsible AI development.

Joint ventures with other countries can assist in creating global rules for AI, keeping consumers safe while promoting innovation.

As AI becomes a bigger part of our daily lives, it’s important to think about the legal and ethical issues it brings.

• Khan is senior lecturer, department of private law, University of Johannesburg
