Part 1: Deep fake Imagine seeing a video of a well-known CEO making outrageous statements, only to find out later it wasn’t real at all. That’s the power — and danger — of deep fake AI. This technology uses artificial intelligence to create highly convincing fake images, videos, and audio clips. It’s like Photoshop on steroids. While the technology itself is neutral, how it’s used is what makes it a potential threat to businesses.
Deep fakes can be incredibly realistic, often blurring the line between what’s real and what’s not. In February 2023, the New Zealand Herald reported that hackers used realistic videos of Aja Rock and other women in the Auckland area to scam people out of money. “Former socialite Aja Rock is one of the victims, with a fake video of her promoting a bitcoin investment scheme posted online.
The deepfake video of Rock shows her speaking to the camera about her financial success with supposed bitcoin adviser Mirabel Rodgers.” “Um, I invested $1000 US dollars with Mirabel, and she advised me, and I now have $8500 in my account, so I’m really happy about that and I hope this video helps you.” It didn’t.
(https://www.nzherald. co.nz/nz/hacked-deepfake-videos-of-kiwi-women-used-to-scam-friends-and-followers-out-of-thousands) In June this year, NewsHub ran a piece on “Deepfake bullying hits New Zealand schools.” It reported a growing concern of students deep fake bullying other students.
“There have been cases on both sides of the Tasman that target 50 to 60 young girls in the senior secondary space or indeed some staff so it’s certainly increasing in quantum,” Couillault said. (https://www.newshub. co.nz/home/new-zealand/2024/06/deepfake-bullying-hits-new-zealand-schools.html)
In the right hands, this technology can be used for creative projects, like making old movies come to life or creating entirely new digital content. However, in the wrong hands, deep fakes can be weaponized. The potential for reputational damage and financial loss is immense. But it’s not just about damage control; deep fakes can also be used as a tool for corporate espionage. Even though deep fake technology is becoming more sophisticated, there are ways to combat it. Businesses can invest in AI-powered detection tools that identify deep fakes or implement rigorous verification processes to ensure that what they see or hear is legitimate. While deep fake AI holds incredible potential, it also brings new risks that businesses must be aware of and prepared to address.
Part 2: Direct Prompt Injection—Hacking AI from the Inside Switching gears to something equally sneaky: direct prompt injection. If deep fakes are about fooling humans, direct prompt injection is about tricking AI itself. It’s a method where a person cleverly manipulates an AI system by feeding it misleading or malicious input. Think of it like whispering bad advice into the ear of a chatbot and having it act on that advice. In simpler terms, direct prompt injection is when someone gives an AI system a specific set of instructions designed to make it behave in unintended ways.
For example, an attacker might feed a customer service bot a cleverly crafted sentence that makes it reveal sensitive information or perform unauthorized actions. The AI doesn’t realize it’s being tricked; it just follows the instructions it’s been given. This can be particularly concerning for businesses relying on AI for customer interactions, automated decision-making, or even cybersecurity.
Imagine if a competitor could inject prompts into your AI to lower prices automatically, leak confidential data, or sabotage customer relations. The consequences could be disastrous, especially if the AI controls critical aspects of your business operations. Fortunately, like with deep fakes, there are defenses against prompt injection attacks. Regularly updating and testing AI systems, employing input validation techniques, and setting strict controls over who and what can interact with your AI are just a few ways to protect your business.
It’s all about staying one step ahead of the bad actors. AI technologies like deep fakes and direct prompt injection offer new avenues for innovation, and they also introduce new risks that businesses need to navigate carefully. The key is to stay informed, invest in the right tools, and be proactive in safeguarding your operations from these emerging threats. In the fast-paced business world, a little knowledge goes a long way—especially when it comes to outsmarting the machines.
Tom is the owner of Govern Cybersecurity. He has over 18 years in the cybersecurity and IT industry at management level, and for the past 6 years has been a lecturer in cybersecurity at the Eastern Institute of Technology. He has earned certifications in ISO 27001 Lead Auditing, Lead Implementation, SOC2, and Ethical Hacking. These certifications are considered the international gold standard for business security. tom@govern.co.nz