
Deepfakes are forgeries of people’s faces, voices and likenesses generated through artificial intelligence (AI). They are a serious form of digital deception. Deepfakes undermine constitutional rights, erode trust in media and distort the fairness of elections. While many countries have laws that address the risks posed by deepfakes, enforcement remains a challenge.
Deepfakes began to spread widely in 2017, after first appearing on Reddit, a discussion website made up of forums where people exchange information. A Reddit user called “deepfakes” shared an AI software tool that could superimpose celebrities’ faces onto pornographic videos. AI-generated media then became widely accessible through software apps that allow anyone to create deepfakes freely.
There are several types of deepfakes:
- text deepfakes, in the form of fake receipts and identification documents
- photo deepfakes, which often swap faces and bodies using apps to create memes
- audio deepfakes, where text-to-speech apps are used for voice cloning, often targeting politicians
- video deepfakes, where a person’s face and movements are transferred onto someone else’s video, commonly used to create “revenge pornography”.
Deepfakes pose three main dangers:
- They deceive audiences into believing fabricated media.
- They enable cybercrimes, reputational harm and misrepresentation.
- They can be published by anyone, including anonymous social media users.
The key issue is how law can protect people from the illegal use of their images, voices, and likenesses in deepfakes.
Since 2020, I have studied the laws that regulate deepfakes in South Africa and how they are implemented. My findings show that the biggest problem is law enforcement, not a lack of laws prohibiting the unlawful creation and distribution of deepfakes.
Deepfake threats
South Africa has seen notable cases that highlight the growing impact of deepfakes. In 2024, Leanne Manas, an award-winning South African broadcast anchor, became a victim when her image was used in fake endorsements of weight-loss products and online trading schemes on Facebook and TikTok.
South African-born businessman Elon Musk also appeared in a deepfake video that induced many South Africans to invest in a financial scam that promised high returns.
In 2025, Professor Salim Abdool Karim, the director of the Centre for the AIDS Programme of Research in South Africa, appeared in a deepfake video showing him making anti-vaccination statements while endorsing counterfeit heart medicine.
Legal protection in South Africa
South Africa has a mixed legal system that combines constitutional rights, legislation and common law rules to provide deepfake victims with remedies.
There are laws that provide remedies in both civil and criminal cases. For example:
Common law remedies
Anyone can claim a violation of privacy if their private images are used without permission. They can also enforce their right to identity if a deepfake misrepresents them or gives a perpetrator a commercial advantage.
I investigated these principles in an article about the impact of deepfakes on the right to identity in South Africa. Using South African cases, I found that the unauthorised use of a person’s identity attributes in a deepfake deserves protection.
The Supreme Court of Appeal confirmed, in Grütter v Lombard, that South African law protects a person’s identity from being exploited without permission, and this protection is supported by the constitutional guarantee of human dignity. Grütter and Lombard once practised on the same premises under the name “Grütter and Lombard”, but Grütter later left. Lombard kept using Grütter’s name without consent. The court ordered him to stop, because the continued use falsely implied an ongoing professional association and infringed Grütter’s right to identity.
In another case, a surfing magazine called ZigZag published a photo of a 12-year-old girl as a pin-up cover image. The court stressed that the key issue was whether an image had been exploited for another’s benefit without consent. The defendants were ordered to pay compensation and costs.
Another case is that of South African television personality, beauty pageant titleholder, businesswoman and philanthropist Basetsana Kumalo. She sued a business that took photos of her while she was shopping in their store and used those images in an advertisement for their products without her permission. The court ruled that using someone’s likeness for false endorsements infringes identity and privacy, because it creates the misleading impression of support for the product, service or business.
These cases map squarely onto the ways deepfakes are misused, showing that false endorsement, election disinformation and non-consensual pornography on social media can trigger liability.
Enforcement challenges
While South African law provides remedies against deepfakes, four hurdles frustrate enforcement:
- South African courts have capacity constraints and struggle to resolve backlogs.
- Litigation remains a “rich man’s” option. The poor struggle to access justice or wait too long for pro bono help.
- While South African courts can assert jurisdiction over global platforms like Meta and TikTok, serving court orders abroad and compelling compliance is still costly, and takedown notices are often enforced too late.
- Perpetrators hide behind fake profiles and are hard to trace through the South African Police Service. Social media companies delay revealing perpetrators’ true identities when asked.
These enforcement challenges can be addressed through capacity building and legal reform. AI research centres should work with law enforcement to train personnel and provide practical skills and tools for tracing and authenticating deepfakes. Parliament must update social media laws so that platforms are directly accountable for fast and fair action when people’s identities are misused in deepfakes.
Legal rules should set minimum standards that deepfake apps and platforms must follow. Rather than relying on age restrictions or consent alone, the law should require these tools to embed watermarking to signal that content is a deepfake, enable tracing of where it comes from, and make sure takedown systems actually work.
Justice on paper
South African law clearly prohibits the misuse of identity through deepfakes, but enforcement gaps leave victims exposed. Without affordable legal access, faster platform accountability, and effective international cooperation, illegal deepfakes will continue to increase.
By Nomalanga Mashinini, Senior Lecturer, University of the Witwatersrand