AI Deep Dive Part 3: Understanding Biases & How Threat Actors Use AI


In the final installment of Arete’s Artificial Intelligence (AI) deep dive, we explore the types of bias that can be present in AI and how threat actors leverage AI in cybercrime. Just as inherent biases shape human thought processes, they also shape AI models, often influencing both the data used for training and the algorithms themselves. Additionally, while many AI tools are designed and intended for legitimate purposes, users should be aware that cybercriminals routinely repurpose them to support their operations.

Biases in AI

Humans all carry inherent biases in our thought processes, and the same is true of AI models. Both the data used to train a model and the model itself are subject to biases that influence the final AI product and its responses. Chief among these is algorithmic bias: the systematic, repeatable errors in a computer system that create unfair outcomes. The risk framework established by the European Union’s Artificial Intelligence Act (EU AIA) does a great job of illustrating the real-world implications of biases infiltrating AI.

Source: 5 Things You Must Know Now About the Coming EU AI Regulation

What begins as minimal risk at the bottom of this graphic, where bias may skew video gameplay or spam filters, quickly progresses to high-impact areas like transportation systems, justice systems, and even facial recognition. A real-world example is an individual wrongly accused of a crime because an AI system was biased by their age, gender, or skin color; outcomes like these are evidence that algorithmic bias must be understood and mitigated before the false information it generates causes harm.
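To make that definition concrete, here is a minimal Python sketch of how algorithmic bias is commonly surfaced in an audit: as a systematic gap in a model’s error rate between demographic groups. The records, group labels, and predictions below are entirely hypothetical and exist only to illustrate the measurement.

```python
# Minimal sketch of a bias audit: compare a model's error rate across
# demographic groups. All data here is hypothetical and for illustration only.
from collections import defaultdict

# Each record: (group, true_label, model_prediction)
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
for group, truth, pred in predictions:
    tallies[group][0] += truth != pred  # a bool counts as 0 or 1
    tallies[group][1] += 1

for group, (wrong, total) in sorted(tallies.items()):
    print(f"{group}: error rate {wrong / total:.0%}")
# Output: group_a errs 0% of the time, group_b 75% of the time. It is this
# systematic, repeatable disparity, not any single mistake, that the term
# "algorithmic bias" describes.
```

In a real audit the predictions would come from a production model and the comparison would use metrics such as per-group false positive rates, but the principle is the same: the bias lives in the disparity between groups.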
 

AI in the Wrong Hands

Just as AI enables the work of professionals across many industries, cybercriminals have also begun to exploit the technology. While most public AI models have filters in place to prevent malicious use, these filters can often be bypassed to create convincing social engineering content. Additionally, some threat actors, like the ransomware group Funksec, have developed their own AI models without these limitations.

During an interview, an excerpt of which is shown below, the leader of Funksec explained that the group acts as the developer, supplying high-level ideas, while AI acts as the programmer, implementing those ideas through the group’s proprietary WormGPT module. This use of AI means less-technical cybercriminals are increasingly able to write malware without the scripting knowledge typically required.

An excerpt from an interview with a Funksec operator. Source: Threat Actor Interview: Spotlighting on Funksec Ransomware Group

Here’s an example of a WormGPT response when asked for a webshell and C++ code.

Source: Threat Actor Interview: Spotlighting on Funksec Ransomware Group

Conclusion

In the right hands, AI is a powerful tool for productivity and advancement, but the risks that accompany its growth, including its use in cybercrime, should be carefully monitored and addressed. Both the unintended negative effects of bias and the adoption of AI by cybercriminals underscore the need for evolving safeguards and controls that protect accurate and ethical AI use.
 
