How generative AI is changing cyberattacks

January 25, 2024

Technological development always has pros and cons, and generative artificial intelligence (AI) is no different. While it brings benefits to people and companies, this emerging technology is also being used to conduct cyberattacks such as phishing, social engineering, and ransomware.

Generative AI collects data and uses algorithms to automatically produce content for a wide variety of media, as well as to create tools and automation. And just as these capabilities benefit businesses and ordinary citizens, they can also benefit cybercrime actors.

 

Table of contents

Cybercrime and generative AI

  1. How are LLMs and generative AI used in cyberattacks?
  2. The most common generative AI cyberattacks

Conclusion

 

Cybercrime and generative AI

This warning comes from Google, whose report “Google Cloud Cybersecurity Forecast 2024” predicts a possible increase in cyberattacks using generative artificial intelligence this year.

In this report, Google cybersecurity experts warn of more credible phishing schemes and harder-to-detect malware, the rise of deepfake photos and videos, zero-day attacks, and cyber operations carried out for geopolitical gain.

To carry out attacks like these, cybercrime actors use both generative AI and Large Language Models (LLMs). With these two technologies, hackers can create content for SMS, email, or phone calls, in voice or video format, that appears perfectly legitimate.

 

How are LLMs and generative AI used in cyberattacks?

Phishing is one of the most common cyberattacks, whether or not hackers use generative AI. When phishing does rely on these emerging technologies, cybercrime actors turn LLMs and AI to their advantage.

Large Language Models can collect unencrypted data and information in real time from corporate websites, news sites, and other sources. With this information always up to date, hackers can create more credible content that pushes users to act with a sense of urgency.

Generative AI, in turn, uses chatbots to its advantage. With this technology, hackers can create text for SMS and emails, produce deepfake voice and video content for manipulation and extortion, and spread malware faster.

In addition, AI chatbots such as ChatGPT can write code and teach programming. AI can thus help attackers create ransomware or even write parts of the code for that purpose.

 

The most common generative AI cyberattacks

As previously mentioned, phishing is one of the most common cybercrimes to use generative AI, because generative AI has the power to use data to create content that, at first glance, appears legitimate. There are several subtypes of phishing, including voice calls, deepfake videos and photographs, and text messages.

 

Phishing and generative AI

With generative artificial intelligence, phishing attacks no longer contain tell-tale grammatical errors and can reproduce a sociocultural context more convincingly, as well as mimic current campaigns from entities that most of the population recognizes.

And, because AI can imitate human language, it becomes harder to tell whether a campaign actually came from a legitimate source.

 

Vishing Cyberattacks

Voice phishing, also known as vishing, is a strategy used to trick the target of a cyberattack into sharing sensitive information. In these cyberattacks, the voice of a familiar contact is replicated and an emergency scenario is fabricated so that victims act without questioning the details of the call.

One of the most recent vishing attacks targeted MGM Resorts: an attacker posed as an employee and requested a credential reset, ultimately gaining access to the casino giant’s internal network. In MGM’s case, the attackers went on to deploy ransomware, but this type of scheme can also be used for phishing purposes.

 

Social Engineering Attacks

Using social networks to obtain and analyse personal data, cybercrime actors can build phishing schemes around services and products their targets already know and use.

With these platforms, it is possible to know which products someone is looking for most or which services they use frequently to create even more convincing and effective phishing attacks.

Generative AI plays a key role in analysing this data and creating emails and messages that instil urgency in targets, pushing them to pay debts they don’t have or share personal data with a fake entity.

This type of attack is also known as spear phishing, as it is highly specific and tends to target a small group of people at a time.

 

Deepfake voice, videos, and photographs

With generative artificial intelligence, it is increasingly easy to create content that looks very real, even when it is fraudulent. In addition to being used for phishing and ransomware, deepfake content is also used for disinformation, manipulation, blackmail, and extortion.

Thus, with fake photos, voices and videos, attackers are able to replicate a person and place them in an unwanted context.

 

Automated attacks

Machine learning algorithms can identify vulnerabilities in cybersecurity systems because they analyse large amounts of data. Combined with artificial intelligence algorithms, they let hackers automate the various phases of an attack so that the process unfolds more quickly.

 

Evasion of defence systems

To carry out cyberattacks, hackers exploit vulnerabilities in security and defence systems using artificial intelligence algorithms. These algorithms can identify weak points in code, finding gateways to evade endpoint security and disable intrusion detection systems.

 

Intelligent malware

Combining generative AI and malware has only one result: intelligent malware, which adapts to the target’s context without constant human intervention.

This type of malware can evade detection and deceive security systems, as well as spread quickly. Because it adapts to its targets, it is harder to identify and harder for targets to protect themselves against.

 

Conclusion

The generative AI that has brought many benefits to ordinary citizens and companies is the same one that provides benefits to cybercrime actors.

This emerging technology is not, in itself, responsible for phishing schemes, ransomware or social engineering attacks, but it inevitably contributes to hackers conducting more effective cyberattacks.

With artificial intelligence, large language models, and machine learning, these attackers can create phishing schemes that are more credible and more precisely targeted, build intelligent malware that is harder for security systems to detect, and still manage to carry out ransomware attacks.