Artificial intelligence has become thoroughly ingrained in the digital world and the collective consciousness in record time.
The need to stay a step ahead of defenders has always driven hackers to become some of the earliest tech adopters, and AI is no exception.
In fact, as in so many other fields, it’s proving to be one of the most transformative advancements in cybercrime.
This article explores how hackers leverage artificial intelligence to further their nefarious ways. It also offers hope in the form of developments and practices one can adopt to combat the threat.
AI – A Hacker’s Helpful New Tool
Even before the proliferation of generative AI like ChatGPT, hackers were using other AI models to create adaptive malware and other complex cyber threats. However, GenAI is largely responsible for the explosion of cyberattacks over the last two years. Here are the main menaces users and cybersecurity professionals have to contend with.
Sophisticated phishing emails
Far from the Nigerian prince drivel that would clog up your spam folder, a targeted phishing email must be impeccably written and credible enough for a recipient with some authority to give up sensitive credentials, transfer money, etc.
Creating a successful phishing email used to be time-consuming and research-heavy. Generative AI and large language models changed that practically overnight.
Any hacker, regardless of their language skills, can now use ChatGPT to compose a message indistinguishable from something a native speaker would write.
Moreover, a crook can mine a company’s public correspondence or hack into its email accounts to obtain genuine messages, then convincingly imitate their layout and tone, to the point that even long-term employees in positions of power may succumb and follow the instructions.
It’s impossible to overstate the impact ChatGPT and its derivatives have had on the creation and proliferation of sophisticated phishing scams. The volume of such emails has grown more than fortyfold since ChatGPT’s public release.
Advanced password cracking
Brute-force attacks have long been the bane of short or commonly used passwords. AI’s capability to learn and adapt from existing data has dramatically increased the efficacy and success rate of such attacks.
Long, random strings generated by password managers are as safe as ever. Conversely, even passwords considered moderately strong a few years ago may not be.
That’s because AI-driven cracking tools draw on data from past breaches for a smarter approach, trying common words and patterns first, often successfully.
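To illustrate the gap, here is a minimal, hypothetical Python sketch contrasting pattern-first guessing (the approach breach-trained crackers favor) with the search space of a random password. The word and suffix lists are invented stand-ins; real tools learn from millions of leaked credentials.

```python
import math
from itertools import product

# Tiny stand-ins for the breach-derived wordlists and mangling rules
# that AI-driven crackers learn from (real lists hold millions of entries).
base_words = ["password", "dragon", "sunshine", "letmein"]
suffixes = ["", "1", "123", "!", "2024"]

def pattern_candidates():
    # Try human-style patterns first: common word + common suffix,
    # with and without capitalization.
    for word, suffix in product(base_words, suffixes):
        yield word + suffix
        yield word.capitalize() + suffix

def guesses_to_crack(password: str):
    # Count how many pattern-based guesses it takes to hit the password.
    for i, guess in enumerate(pattern_candidates(), start=1):
        if guess == password:
            return i
    return None  # not covered by these patterns

# "Moderately strong" by old standards, found in a handful of guesses:
print(guesses_to_crack("Sunshine123"))  # → 26

# A random 16-char password from a manager has ~95 printable choices
# per character, so pattern guessing gains nothing over brute force:
print(f"{math.log2(95 ** 16):.0f} bits of entropy")  # ~105 bits
```

The point is that human-chosen passwords live in a tiny, predictable corner of the search space, which is exactly what AI-driven crackers exploit; truly random strings do not.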
Eerily accurate deep fake technology
Deepfakes are another area that has greatly benefited from GenAI’s ability to produce convincing images and audio quickly. The latter is particularly concerning, as cloned voices allow hackers to impersonate a higher-up to disastrous effect.
One of the most prominent incidents was a failed attempt to impersonate Ferrari’s CEO, which a quick-thinking executive thwarted by asking the impersonator something only the real CEO would know. Employees targeted in future incidents may not be as resourceful.
Adaptive malware
AI’s potentially scariest and most complex impact is on the next generation of malware. Some malware can analyze a system’s defensive patterns and security protocols to slip in undetected, then start doing damage months later without anyone realizing what caused the initial breach.
Other variants can identify and leverage zero-day exploits or rewrite entire sections of their own code while maintaining their original purpose, bypassing even state-of-the-art security measures.
What Can You Do About These Threats?
Luckily, hackers don’t operate in a vacuum. Cybersecurity professionals are equally committed to developing AI tools of their own in response. For example, anti-malware no longer relies solely on lists of known threats; it uses AI to detect and stop malicious behavior as it occurs and mutates.
While not yet prevalent, passkeys are a promising and much more resilient alternative to passwords, and their adoption is bound to increase significantly.
Users don’t need to remember them, and each passkey is unique to the service it protects, which already reduces the chances of compromise. Passkeys also offer better phishing protection since the private key in the user’s public-private pair is never shared with a website.
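The challenge-response idea behind passkeys can be sketched with a toy signature scheme. This is a deliberately insecure, textbook-sized RSA example (real passkeys use the WebAuthn standard with strong keys such as P-256 or Ed25519); it only illustrates how a site can verify the user without ever seeing the private key.

```python
import hashlib
import secrets

# Toy RSA key pair built from textbook primes p=61, q=53.
# Illustration only -- numbers this small offer no real security.
N = 61 * 53   # public modulus, shared with the website
E = 17        # public exponent, shared with the website
D = 2753      # private exponent -- stays on the user's device

def hash_to_int(data: bytes) -> int:
    # Map the challenge into the toy key's range via a hash.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def device_sign(challenge: bytes) -> int:
    # The device signs the site's challenge with the private key.
    return pow(hash_to_int(challenge), D, N)

def site_verify(challenge: bytes, signature: int) -> bool:
    # The site checks the signature using only the public key (N, E).
    return pow(signature, E, N) == hash_to_int(challenge)

# Login flow: the site sends a fresh random challenge...
challenge = secrets.token_bytes(16)
# ...the device signs it locally and returns only the signature...
signature = device_sign(challenge)
# ...which the site verifies; the private key D never left the device.
print(site_verify(challenge, signature))            # True
# A tampered signature fails verification.
print(site_verify(challenge, (signature + 1) % N))  # False
```

Because a fresh challenge is signed on every login and the private key never travels, there is no reusable secret for a phishing page to capture, unlike a password.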
Companies and regular users are responsible for keeping up with AI-augmented cyber threats. That means undergoing cybersecurity training that accounts for the latest developments and introduces strategies anyone, regardless of their technological background, can use to protect themselves.
AI-assisted hacking attempts rely on data, so another way to make yourself less vulnerable is to ensure that as little potentially compromising data about you exists as possible.
Make a conscious effort to think before sharing on social media and reduce your online footprint. If a cursory online search reveals far more data about you than you’d like, you may need help from data removal services such as Incogni to address the issue.
Conclusion
AI has undoubtedly caused cybercrime to grow rapidly and unpredictably. The threats are real and growing in complexity, but so are cybersecurity professionals’ protective efforts.
Keeping your devices and knowledge current is the best way to ensure digital safety. While there’s certainly a need for vigilance, there’s no need for fear if you’re smart about your behavior.