Article

AI Deep Dive Part 2: Data Privacy Concerns

A few weeks ago, Arete’s Threat Intelligence team outlined the history of artificial intelligence (AI). Today, we continue that conversation, exploring data privacy concerns associated with AI tools. AI use cases are often showcased to consumers without warning of potential dangers in their application. When a service is free, your data is often the cost of entry.

Today, we dive into three key elements of data privacy concerns in AI:

  • What information are you exposing publicly?

  • What data are you putting into AI applications?

  • How are you storing your data?

Operations Security (OPSEC): What information are you exposing publicly?

The public release of information can lead to both positive and negative outcomes. Classification by compilation, in which a series of individually harmless pieces of open-source information are pieced together to expose proprietary or sensitive information, gives credence to the age-old saying, “Loose lips sink ships.”

You may be wondering what this has to do with AI. Any information posted publicly can be used by developers to train AI algorithms, meaning organizations could indirectly aid competitors who use the same AI platforms. An example of this is a 2023 lawsuit filed by artists against several companies that own AI image-generating tools. The artists argued that the AI companies used their art to train algorithms without properly compensating them. The court ultimately ruled against the artists, illustrating how difficult it is to prove which data was used to train an AI algorithm.

What data are you putting into AI applications?

As the use of AI continues to expand, users should carefully consider what data they are exposing. When using popular public-facing AI platforms, such as those created by OpenAI, Microsoft, and Amazon, users must be aware of the type of data they input. Sensitive data, including client information, personally identifiable information (PII), and trade secrets, should not be used to prompt public-facing AI tools, as inputs into these tools may be used to further train and develop the underlying algorithms.
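One practical safeguard is to strip obviously sensitive values before a prompt ever leaves the organization. The sketch below is a minimal, illustrative filter of our own (the `redact` helper and its two patterns are not part of any vendor’s API) that masks email addresses and US-style Social Security numbers before text is submitted to a public AI tool; a real deployment would need far broader coverage.

```python
import re

# Illustrative patterns only; production filters would also need to cover
# names, account numbers, API keys, client identifiers, and so on.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Mask sensitive substrings before text is sent to a public AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

A wrapper around the organization’s AI client could call `redact()` on every outgoing prompt, turning the policy of “no sensitive data in public AI tools” into an enforced default rather than a reminder.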

How are you storing your data?

When an organization decides to create or collaborate on a new AI model, large amounts of data are required to train it. Cloud storage is an attractive option for housing that data, but it is important to weigh the risks associated with each storage option before committing to one.

One example of such risk is the May 2024 data breach affecting customers of cloud-based data storage company Snowflake.

The threat actor responsible for the breach, UNC5537, subsequently extorted victim organizations, collecting at least $2.7 million in ransom payments for data suppression. The attacks were primarily driven by compromised credentials on accounts without MFA, demonstrating the need for organizations to not only assess their third-party risk exposure but also continually implement security best practices.

Conclusion

AI is a powerful tool for organizations looking to enable employees to work within their strengths and increase efficiency. However, the improper use of AI can have disastrous effects. It is important for organizations to develop policies and training on the implementation and use of AI to set employees up for success and ensure the security of their environments. Tune in next week for the final installment of Arete’s AI Deep Dive: Understanding Biases & How Threat Actors Use AI.


Article

FortiGate Exploits Enable Network Breaches and Credential Theft

A recent security report indicates that threat actors are actively exploiting FortiGate Next-Generation Firewall (NGFW) appliances as initial access vectors to compromise enterprise networks. The activity leverages recently disclosed vulnerabilities or weak credentials to gain unauthorized access and extract configuration files, which often contain sensitive information, including service account credentials and detailed network topology data. 

Analysis of these incidents shows significant variation in attacker dwell time, ranging from immediate lateral movement to delays of up to two months post-compromise. Since these appliances often integrate with authentication systems such as Active Directory and Lightweight Directory Access Protocol (LDAP), their compromise can grant attackers extensive access, substantially increasing the risk of widespread network intrusion and data exposure. 

What’s Notable and Unique 

  • The activity involves the exploitation of recently disclosed security vulnerabilities, including CVE-2025-59718, CVE-2025-59719, and CVE-2026-24858, or the abuse of weak credentials, allowing attackers to gain administrative access, extract configuration files, and obtain service account credentials and network topology information.

  • In one observed incident, attackers created a FortiGate admin account with unrestricted firewall rules and maintained access over time, consistent with initial access broker activity. Roughly two months later, the threat actors extracted and decrypted LDAP credentials to compromise Active Directory.

  • In another case, attackers moved from FortiGate access to deploying remote access tools, including Pulseway and MeshAgent, while also utilizing cloud infrastructure such as Google Cloud Storage and Amazon Web Services (AWS). 

Analyst Comments 

Arete has identified multiple instances of Fortinet device exploitation for initial access by various threat actors, most notably the Qilin ransomware group. Given their integration with systems like Active Directory, NGFW appliances remain high-value targets for both state-aligned and financially motivated actors. In parallel, Arete has observed recent dark web activity involving leaked FortiGate VPN access, further highlighting the expanding risk landscape. This aligns with recent reporting from Amazon Threat Intelligence, which identified large-scale compromises of FortiGate devices driven by exposed management ports and weak authentication rather than vulnerability exploitation. Overall, these developments underscore the increasing focus on network edge devices as entry points, reinforcing the need for organizations to strengthen authentication, restrict external exposure, and address fundamental security gaps to mitigate the risk of widespread compromise.
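The hardening steps above can be partially verified with a simple exposure review. The sketch below is a hypothetical helper of our own, not a Fortinet tool: given the set of ports an external scan found open on a firewall’s internet-facing interface, it flags the ones commonly used for management (the port-to-service mapping is an assumption based on typical defaults).

```python
# Ports commonly associated with firewall management; the HTTPS/SSH admin
# interfaces should not be reachable from the open internet at all.
MANAGEMENT_PORTS = {
    22: "SSH admin",
    80: "HTTP admin",
    443: "HTTPS admin",
    8443: "HTTPS admin (alt)",
}

def exposed_management_ports(open_ports):
    """Return the subset of externally open ports that map to management
    services. A non-empty result means the management plane is exposed."""
    return {p: MANAGEMENT_PORTS[p] for p in sorted(open_ports) if p in MANAGEMENT_PORTS}
```

Feeding this the results of a routine external scan gives a quick, repeatable check that administrative interfaces have not quietly become internet-facing, complementing MFA and credential hygiene on the devices themselves.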

Sources 

  • FortiGate Edge Intrusions | Stolen Service Accounts Lead to Rogue Workstations and Deep AD Compromise

Article

Vulnerability Discovered in Anthropic’s Claude Code

Security researchers discovered two critical vulnerabilities in Anthropic’s agentic AI coding tool, Claude Code. The vulnerabilities, tracked as CVE-2025-59536 and CVE-2026-21852, allowed attackers to achieve remote code execution and to compromise a victim’s API credentials by abusing maliciously crafted repository configurations to circumvent the tool’s control mechanisms. Notably, Anthropic worked closely with the security researchers throughout the process, and the bugs were patched before the research was published.

What’s Notable and Unique 

  • The configuration files .claude/settings.json and .mcp.json were repurposed to execute malicious commands. Because the configurations could be applied immediately upon starting Claude Code, the commands ran before the user could deny permissions via a dialogue prompt, or they bypassed the permission prompt altogether.

  • .claude/settings.json also defines the endpoint for all Claude Code API communications. By replacing the default URL with one they own, attackers could redirect traffic to their own infrastructure. Critically, the authentication traffic generated upon starting Claude Code included the user’s full Anthropic API key in plain text and was sent before the user could interact with the trust dialogue.

  • Restrictive permissions on sensitive files could be bypassed by simply prompting Claude Code to create a copy of a file’s contents, since the copy did not inherit the original file’s permissions. A threat actor using a stolen API key could gain complete read and write access to all files within a workspace.
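Because the attack hinged on repository-supplied configuration, one defensive habit is to inspect .claude/settings.json in an untrusted repository before opening it in an agentic tool. The sketch below is our own illustrative check, not Anthropic’s tooling; the key name used for the endpoint override (`apiBaseUrl` here) is an assumption for demonstration and should be adapted to the tool’s actual schema.

```python
import json
from pathlib import Path

def risky_endpoint_override(repo_root: str, key: str = "apiBaseUrl") -> bool:
    """Flag a repo whose .claude/settings.json points API traffic
    somewhere other than Anthropic's own domain."""
    settings = Path(repo_root) / ".claude" / "settings.json"
    if not settings.is_file():
        return False
    try:
        config = json.loads(settings.read_text())
    except json.JSONDecodeError:
        return True  # an unparseable config in a fresh clone is itself suspicious
    endpoint = config.get(key, "")
    return bool(endpoint) and "api.anthropic.com" not in endpoint
```

Run as a pre-clone or pre-open hook, a check like this turns “review repository configs before trusting them” into something a CI pipeline or wrapper script can enforce automatically.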

Analyst Comments 

The vulnerabilities and attack paths detailed in the research illustrate the double-edged nature of AI tools. The speed, scale, and convenience characteristics that make AI tools attractive to developer teams also benefit threat actors who use them for nefarious purposes. Defenders should expect adversaries to continue seeking ways to exploit configurations and orchestration logic to increase the impact of their attacks. Organizations planning to implement AI development tools should prioritize AI supply-chain hygiene and CI/CD hardening practices. 

Sources 

  • Caught in the Hook: RCE and API Token Exfiltration Through Claude Code Project Files | CVE-2025-59536 | CVE-2026-21852

Article

Ransomware Trends & Data Insights: February 2026

After a slight lull in January, Akira and Qilin returned to dominating ransomware activity in February, collectively accounting for almost half of all engagements that month. The rest of the threat landscape remained relatively diverse, with a mix of persistent threats like INC and PLAY, older groups like Cl0p and LockBit, and newer groups like BravoX and Payouts King. Given current trends, the first quarter of 2026 will likely remain relatively predictable, with the top groups from the second half of 2025 continuing to operate at fairly consistent levels month to month.

Figure 1. Activity from the top 5 threat groups in February 2026

Throughout the month of February, analysts at Arete identified several trends in threat actor activity:

  • In February, Arete observed Qilin actively targeting WatchGuard Firebox devices, especially those vulnerable to CVE-2025-14733, to gain initial access to victim environments. CVE-2025-14733 is a critical vulnerability in WatchGuard Fireware OS that allows a remote, unauthenticated threat actor to execute arbitrary code. In addition to upgrading WatchGuard devices to the latest Fireware OS version, which patches the bug, administrators are urged to rotate all shared secrets on affected devices, as any secrets that may have been compromised could be used in future campaigns.

  • Reports from February suggest that threat actors are increasingly exploring AI-enabled tools and services to scale malicious activities, demonstrating how generative AI is being integrated into both espionage and financially motivated threat operations. The Google Threat Intelligence Group indicated that state-backed threat actors are leveraging Google’s Gemini AI as a force multiplier to support all stages of the cyberattack lifecycle, from reconnaissance to post-compromise operations. Separate reporting from Amazon Threat Intelligence identified a threat actor leveraging commercially available generative AI services to conduct a large-scale campaign against FortiGate firewalls, gaining access through weak or reused credentials protected only by single-factor authentication.

  • The Interlock ransomware group recently introduced a custom process-termination utility called “Hotta Killer,” designed to disable endpoint detection and response solutions during active intrusions. This tool exploits a zero-day vulnerability (CVE-2025-61155) in a gaming anti-cheat driver, marking a significant adaptation in the group’s operations against security tools like FortiEDR. Arete is actively monitoring this activity, which highlights the growing trend of Bring Your Own Vulnerable Driver (BYOVD) attacks, in which threat actors exploit legitimate, signed drivers to bypass and disable endpoint security controls.
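Defending against BYOVD often comes down to comparing the drivers present on a host against known-vulnerable lists such as Microsoft’s vulnerable-driver blocklist or the community LOLDrivers project. The sketch below is a minimal, illustrative matcher of our own; the blocklist entry shown is a placeholder, not a real driver hash.

```python
import hashlib
from pathlib import Path

# Placeholder blocklist; in practice, populate this from Microsoft's
# vulnerable-driver blocklist or the LOLDrivers project feed.
VULNERABLE_DRIVER_HASHES = {
    "0" * 64,  # placeholder SHA-256 entry, not a real driver hash
}

def is_known_vulnerable(driver_path: str) -> bool:
    """Return True if the driver file's SHA-256 appears on the blocklist."""
    digest = hashlib.sha256(Path(driver_path).read_bytes()).hexdigest()
    return digest in VULNERABLE_DRIVER_HASHES
```

Sweeping driver directories with a check like this (or enabling the OS-native blocklist where available) shrinks the pool of legitimate, signed drivers an attacker can bring along to disable endpoint security controls.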

Sources

  • Arete Internal