Article
AI Deep Dive Part 1: The History of AI
Arete Analysis
Cyber Threats

Artificial intelligence (AI) is a subset of computer science that focuses on creating systems that can replicate human intelligence and problem-solving capabilities. This is accomplished by feeding large amounts of data into machine learning models (MLMs) and processing the data. The result is technology that can simulate human learning, comprehension, problem-solving, decision-making, creativity, and autonomy.
While often seen as new, cutting-edge technology, AI has been around far longer than most would think. While the concept of AI goes back to ancient philosophers theorizing on life and death, AI as we know it began in the early 1900s. The conception of what AI is began to be portrayed in science fiction by various authors and artists throughout the early 1900s prior to what is commonly known as “the birth of AI.”
AI Through the Ages
The Birth of AI: 1950 – 1956
Computer scientists such as Alan Turing, Arthur Samuel, and John McCarthy set the stage for the beginning of AI. Turing published “Computer Machinery and Intelligence,” which annotated a test of machine intelligence called the Imitation Game. Turing theorized that any machine able to fool a human judge would be classified as artificial intelligence.
AI Maturation: 1957 – 1979
The next twenty years showed little growth for AI at a technical level. While the concept of AI became popular in pop culture, funding-backed research was minimal during this period. However, that is not to say that strides towards what AI is today were not made. The first programming languages were created, paving the way for future development. The first AI chatbot was created, which adopted a new approach to AI that we now call deep learning, and the first examples of an autonomous vehicle were created.
AI Boom: 1980 – 1987
During the seven-year period known as the AI boom, government funding and associated research significantly increased. The first Association for the Advancement of Artificial Intelligence (AAAI) conference was held at Sanford, and the first driverless car demonstrated its ability to drive up to 55 mph on empty roads.
AI Winter: 1987 – 1993
Overall, funding and interest in AI decreased during this period, leading to fewer advancements in the technology than in years prior.
AI agents: 1993 – 2011
Despite the initial lack of investment in AI, the technology as a whole significantly increased its capabilities during this time period. Most notably, this is when AI began being integrated into people’s daily lives with items such as the Roomba and the release of Apple’s virtual assistant, Siri.
Early Generative Artificial Intelligence: 2012 – Present
This brings us up to the current state of AI. The last decade has shown impressive leaps in AI’s ability to aid humans in day-to-day functions. This is also accompanied by enormous data collection from well-known companies that are able to train their AI models, which has led to the release of consumer-facing AI models such as ChatGPT, Copilot, and more.
Conclusion
AI as a whole is a fast-changing, fluid concept. Organizations regularly unveil new capabilities and breakthroughs. This was especially evident in the recent unveiling of Deepseek and the subsequent data privacy concerns. In a single day, this overturned the sector in one fell swoop. AI will likely remain a constantly changing field in the near term.
What’s Next?
Part 2 of Arete’s AI Deep Dive will examine the risks and benefits of organizations adopting AI into their business models
Sources
Back to Blog Posts
Report
Arete's 2025 Annual Crimeware Report
Harness Arete’s unique data and expertise on extortion and ransomware to inform your response to the evolving threat landscape.
Article
FortiGate Exploits Enable Network Breaches and Credential Theft
A recent security report indicates that threat actors are actively exploiting FortiGate Next-Generation Firewall (NGFW) appliances as initial access vectors to compromise enterprise networks. The activity leverages recently disclosed vulnerabilities or weak credentials to gain unauthorized access and extract configuration files, which often contain sensitive information, including service account credentials and detailed network topology data.
Analysis of these incidents shows significant variation in attacker dwell time, ranging from immediate lateral movement to delays of up to two months post-compromise. Since these appliances often integrate with authentication systems such as Active Directory and Lightweight Directory Access Protocol (LDAP), their compromise can grant attackers extensive access, substantially increasing the risk of widespread network intrusion and data exposure.
What’s Notable and Unique
The activity involves the exploitation of recently disclosed security vulnerabilities, including CVE-2025-59718, CVE-2025-59719, and CVE-2026-24858, or weak credentials, allowing attackers to gain administrative access, extract configuration files, and obtain service account credentials and network topology information.
In one observed incident, attackers created a FortiGate admin account with unrestricted firewall rules and maintained access over time, consistent with initial access broker activity. After a couple of months, threat actors extracted and decrypted LDAP credentials to compromise Active Directory.
In another case, attackers moved from FortiGate access to deploying remote access tools, including Pulseway and MeshAgent, while also utilizing cloud infrastructure such as Google Cloud Storage and Amazon Web Services (AWS).
Analyst Comments
Arete has identified multiple instances of Fortinet device exploitation for initial access, involving various threat actors, with the Qilin ransomware group notably leveraging Fortinet device exploits. Given their integration with systems like Active Directory, NGFW appliances remain high-value targets for both state-aligned and financially motivated actors. In parallel, Arete has observed recent dark web activity involving leaked FortiGate VPN access, further highlighting the expanding risk landscape. This aligns with the recent reporting from Amazon Threat Intelligence, which identified large-scale compromises of FortiGate devices driven by exposed management ports and weak authentication, rather than vulnerability exploitation. Overall, these developments underscore the increasing focus on network edge devices as entry points, reinforcing the need for organizations to strengthen authentication, restrict external exposure, and address fundamental security gaps to mitigate the risk of widespread compromise.
Sources
FortiGate Edge Intrusions | Stolen Service Accounts Lead to Rogue Workstations and Deep AD Compromise
Article
Vulnerability Discovered in Anthropic’s Claude Code
Security researchers discovered two critical vulnerabilities in Anthropic's agentic AI coding tool, Claude Code. The vulnerabilities, tracked as CVE-2025-59536 and CVE-2026-21852, allowed attackers to achieve remote code execution and to compromise a victim's API credentials. The vulnerabilities exploit maliciously crafted repository configurations to circumvent control mechanisms. It should be noted that Anthropic worked closely with the security researchers throughout the process, and the bugs were patched before the research was published.
What’s Notable and Unique
The configuration files .claude/settings.json and .mcp.json were repurposed to execute malicious commands. Because the configurations could be applied immediately upon starting Claude Code, the commands ran before the user could deny permissions via a dialogue prompt, or they bypassed the authentication prompt altogether.
.claude/settings.json also defines the endpoint for all Claude Code API communications. By replacing the default localhost URL with a URL they own, an attacker could redirect traffic to infrastructure they control. Critically, the authentication traffic generated upon starting Claude Code included the user's full Anthropic API key in plain text and was sent before the user could interact with the trust dialogue.
Restrictive permissions on sensitive files could be bypassed by simply prompting Claude Code to create a copy of the file's contents, which did not inherit the original file's permissions. A threat actor using a stolen API key could gain complete read and write access to all files within a workspace.
Analyst Comments
The vulnerabilities and attack paths detailed in the research illustrate the double-edged nature of AI tools. The speed, scale, and convenience characteristics that make AI tools attractive to developer teams also benefit threat actors who use them for nefarious purposes. Defenders should expect adversaries to continue seeking ways to exploit configurations and orchestration logic to increase the impact of their attacks. Organizations planning to implement AI development tools should prioritize AI supply-chain hygiene and CI/CD hardening practices.
Sources
Caught in the Hook: RCE and API Token Exfiltration Through Claude Code Project Files | CVE-2025-59536 | CVE-2026-21852
Article
Ransomware Trends & Data Insights: February 2026
After a slight lull in January, Akira and Qilin returned to dominating ransomware activity in February, collectively accounting for almost half of all engagements that month. The rest of the threat landscape remained relatively diverse, with a mix of persistent threats like INC and PLAY, older groups like Cl0p and LockBit, and newer groups like BravoX and Payouts King. Given current trends, the first quarter of 2026 will likely remain relatively predictable, with the top groups from the second half of 2025 continuing to operate at fairly consistent levels month to month.

Figure 1. Activity from the top 5 threat groups in February 2026
Throughout the month of February, analysts at Arete identified several trends behind the threat actors perpetrating cybercrime activities:
In February, Arete observed Qilin actively targeting WatchGuard Firebox devices, especially those vulnerable to CVE-2025-14733, to gain initial access to victim environments. CVE-2025-14733 is a critical vulnerability in WatchGuard Fireware OS that allows a remote, unauthenticated threat actor to execute arbitrary code. In addition to upgrading WatchGuard devices to the latest Firebox OS version, which patches the bug, administrators are urged to rotate all shared secrets on affected devices that may have been compromised and may be used in future campaigns.
Reports from February suggest that threat actors are increasingly exploring AI-enabled tools and services to scale malicious activities, demonstrating how generative AI is being integrated into both espionage and financially motivated threat operations. The Google Threat Intelligence Group indicated that state-backed threat actors are leveraging Google’s Gemini AI as a force multiplier to support all stages of the cyberattack lifecycle, from reconnaissance to post-compromise operations. Separate reporting from Amazon Threat Intelligence identified a threat actor leveraging commercially available generative AI services to conduct a large-scale campaign against FortiGate firewalls, gaining access through weak or reused credentials protected only by single-factor authentication.
The Interlock ransomware group recently introduced a custom process-termination utility called “Hotta Killer,” designed to disable endpoint detection and response solutions during active intrusions. This tool exploits a zero-day vulnerability (CVE-2025-61155) in a gaming anti-cheat driver, marking a significant adaptation in the group’s operations against security tools like FortiEDR. Arete is actively monitoring this activity, which highlights the growing trend of Bring Your Own Vulnerable Driver (BYOVD) attacks, in which threat actors exploit legitimate, signed drivers to bypass and disable endpoint security controls.
Sources
Arete Internal



