Top AI privacy risks in 2024

“Humans should be worried about the threat posed by artificial intelligence… but these risks are manageable.” ~ Bill Gates, 2023. Gates wrote in a recent blog post about his five concerns for AI and privacy and why they will persist for years to come. His concerns revolve around deepfakes and misinformation, AI-accelerated attacks on vulnerabilities, information bias, regulatory gaps and the threat of superintelligence (making decisions for us). I’ll take a look at these risks, along with a few other data privacy ones, in this article.

What are deepfake risks?

AI-generated deepfakes and misinformation are being leveraged at an alarming scale as more AI tools become available and grow in sophistication. Bill Gates uses a what-if example in his blog post when he asks us to imagine a fake ransom demand delivered in an AI-generated voice of your child saying they’ve been kidnapped. Cases have been recorded of fake audio and video being used to mimic all kinds of scenarios, from malicious AI-generated CEOs on conference calls to fake biometric personas used to defeat banking enrollment applications. (Read related article.) Perhaps one of the better-known examples in 2023 appeared on YouTube, with a deepfake of Elon Musk promoting crypto investments and Quantum AI. It has become increasingly difficult for viewers to tell truth from fiction with deepfake media, a problem that is destined to get worse in 2024.

AI-Powered Attacks

PwC’s latest Digital Trust Insights survey might be regarded as the canary in the coal mine when it comes to an early warning of what to expect next year with AI privacy risks. In the survey of nearly 4,000 business and tech leaders across 71 countries, 52% of respondents expected a catastrophic cyber-attack to occur as a result of AI-generated business email compromise.

How is this possible?

Attackers can use tools such as ChatGPT to draft emails in different languages with few, if any, grammatical errors, creating multiple variations of phishing emails at scale with the click of a button. They can also train AI to imitate the writing style of an impersonated individual, making the attack seem more authentic and plausible. The threat from business email compromise can also be enhanced with embedded media, such as links to fake recordings of authority figures (e.g. the CEO) instructing employees to perform an action like a donation to a fake charity.
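On the defensive side, many mail filters catch this class of attack with a simple heuristic: a trusted display name paired with an unexpected sending domain. The sketch below is a minimal illustration of that check (my own, not from the PwC survey); the executive directory and domains are hypothetical placeholders.

```python
import email
from email.utils import parseaddr

# Hypothetical executive directory: display names attackers commonly spoof,
# mapped to the legitimate corporate sending domain. In practice this would
# come from a directory service.
KNOWN_EXECS = {"Jane Smith": "example.com"}

def flag_possible_bec(raw_message: str) -> bool:
    """Flag messages pairing a known executive's display name with a
    sending domain that doesn't match the corporate one -- a classic
    business email compromise (BEC) pattern."""
    msg = email.message_from_string(raw_message)
    display_name, address = parseaddr(msg.get("From", ""))
    if display_name in KNOWN_EXECS and "@" in address:
        domain = address.rsplit("@", 1)[-1].lower()
        return domain != KNOWN_EXECS[display_name]
    return False

# Example: a spoofed "From" header is flagged.
spoofed = "From: Jane Smith <jane.smith@lookalike-pay.example>\n\nWire funds today."
assert flag_possible_bec(spoofed)
```

Heuristics like this are cheap but brittle; they catch domain spoofing, not a compromised genuine account, which is why the survey respondents remain so concerned.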

Information Bias Risks

In May, Geoffrey Hinton, a computer scientist widely hailed as “the Godfather of AI”, resigned from his role at Google in order to have the freedom to warn about the potential perils of the tools his work helped create. In particular, he was very concerned about some of the side effects of data bias. Generative AI models learn patterns and information from the data they are trained on. If the training data contains biases or prejudices, these may be reflected in the generated output. For example, if the training data is skewed towards certain demographics or perspectives, the generated content may reinforce those biases. If appropriate efforts are not made to mitigate and address biases, fair and unbiased outcomes become far less likely. (Read related article on Gen AI and what you need to know.)

One well-known example of this bias in action involved a tech company that devised an AI tool for screening CVs. The engineers behind it trained it on data about previous applicants to teach it what counted as a “good” hire. The problem was that the company’s staff were mostly male, so the AI deduced that “being female” was an undesirable trait.
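As a toy illustration of that mechanism (entirely synthetic data, not the company’s actual system), the sketch below trains a simple classifier on historical hiring decisions that systematically penalised women; the model faithfully reproduces the bias as a negative weight on the gender feature.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)            # a legitimate qualification signal
female = rng.integers(0, 2, size=n)   # protected attribute (0/1)

# Historical hiring labels: driven by skill, but systematically
# penalising female applicants -- the bias baked into the training data.
hired = (skill - 1.5 * female + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, female])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the weight on the "female" feature comes out strongly negative
```

The model never sees an instruction to discriminate; it simply learns the pattern present in its training set, which is exactly the failure mode Hinton warned about.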

With a global surge toward AI adoption, these risks grow larger heading into 2024 and beyond, which means that development efforts must carefully factor in the law of unintended consequences when designing applications.

AI Regulatory Gaps

The latest World Economic Forum report on Responsible AI Innovation in Financial Services points the way to some of the unique challenges of regulating AI. Chiefly, many of the processes in AI algorithms are opaque and self-evolving (operating without human intervention), making them difficult to constrain through regulation. The truth is that regulatory authorities are on the back foot, chasing a tsunami of AI innovation. Regulators are still trying to conquer ethics questions at the moment in the hope of preventing Skynet-style takeover scenarios.

From a privacy perspective there are many challenges with AI. Issues like the ownership of personal data become much fuzzier in AI systems such as chatbots, where information is drawn from a multitude of sources. Implementing GDPR data subject requests such as personal data erasure is relatively straightforward with organizational databases. It is far more difficult to delete data from a machine learning model, and doing so may undermine the utility of the model itself. There may also be many data controllers for personal data in Gen-AI environments, so ownership and responsibilities may be blurred when executing data subject requests.
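To make the contrast concrete, here is a minimal sketch (my own illustration, with a hypothetical customers table and a simple scikit-learn model): erasing a subject from a database is a single statement, while removing their influence from a trained model generally means retraining on the remaining data.

```python
import sqlite3
import numpy as np
from sklearn.linear_model import LogisticRegression

def erase_subject(conn: sqlite3.Connection, subject_id: int) -> None:
    # The database case: one statement and the personal data is gone.
    conn.execute("DELETE FROM customers WHERE id = ?", (subject_id,))
    conn.commit()

def retrain_without_subject(X: np.ndarray, y: np.ndarray, subject_rows: set):
    # The model case: the subject's rows are dropped, but their influence
    # on the learned weights only disappears by retraining from scratch
    # on the remaining data (the naive form of "machine unlearning").
    mask = np.ones(len(y), dtype=bool)
    mask[list(subject_rows)] = False
    return LogisticRegression().fit(X[mask], y[mask])
```

Retraining is expensive at scale, and for large generative models it is often impractical, which is why erasure requests sit so awkwardly with Gen-AI systems.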

The truth is that GDPR, as it currently maps to AI technology, is not yet fit for purpose. AI will likely be addressed as a bolt-on in a GDPR v2.0 within the next 2-3 years.



Paul Rogers, CISSP, CISA
Founder DPO Solutions
email: [email protected]