AI is the Next-Gen Weapon of Choice for Cyberwarfare

February 19, 2024

Technology is the arms dealer, and AI has become the premium weapon category. Artificial intelligence delivers many benefits, but alongside them we are seeing creative new threats appear weekly.

It’s up to those on our side to leverage AI against the attackers, fighting digital fire with digital fire. For now, though, the hackers’ creativity is pulling the rope in their direction in this AI-fueled tug-of-war. What we do know is that this conflict will be dynamic, exciting for the wrong reasons, and will introduce concepts we haven’t even thought of yet.

Just last week, we saw how AI changed the game when a finance team member in Hong Kong transferred over $25 million during a Teams meeting in which he was the only “real” participant. Aptly named “Everyone Looked Real,” this theft made us all pause on the reality that AI has already arrived in the world of cybercrime.

Now this week we heard from Microsoft, in partnership with OpenAI, on how nation-state threat actors are using AI to weaponize data. The tools of choice are large language models (LLMs), which use machine learning to parse large amounts of data and discover patterns to exploit. You can review the findings in OpenAI’s published research.

Here are some of the nation-state threat actors called out by Microsoft, and what each is currently using AI for:

  • Crimson Sandstorm: Assisting with application scripting and web development, and generating content for spear-phishing campaigns.
  • Salmon Typhoon: Processing publicly available information on intelligence agencies and smaller threat actors, and writing code to conceal processes within systems.
  • Emerald Sleet: Locating defense experts and organizations, generating information about vulnerabilities, and assisting with scripting and phishing drafts.
  • Forest Blizzard: Processing research on satellite communication protocols and radar imaging technology.

These bad actors are folding LLM technology into their offensive strategies, treating AI as just another productivity tool in the cyber warfare landscape. The Microsoft / OpenAI partnership aims to ensure the safe and responsible use of AI technologies like ChatGPT, with measures taken to disrupt threat actors’ assets and accounts, while also enhancing the protection of LLM technology and providing guardrails and safety mechanisms around the models.

These actions also demonstrate Microsoft’s commitment to responsible AI innovations which aligns with the White House’s Executive Order on AI, emphasizing rigorous safety testing and government supervision.

Both companies are collaborating to protect AI platforms to act against known and emerging threat actors. While the research highlights observed activities and potential risks, it underscores the importance of maintaining cybersecurity best practices, such as multifactor authentication and Zero Trust defenses. The threat actors profiled provide a sample of observed activities, representing tactics, techniques, and procedures that the industry needs to track and counter effectively.

In this exploration, we delve into the dual nature of AI in cybersecurity: the ways hackers leverage this technology today, and how they are likely to use it in the near future, to orchestrate attacks that test the resilience of companies and institutions. Understanding this duality is paramount for devising robust cybersecurity strategies that not only harness the power of AI for defense but also anticipate and counteract the ever-evolving tactics of cyber adversaries.

How Hackers are using AI:

  • AI-Driven Malware: Malware that can learn and evolve, making it more challenging to detect and mitigate.
  • Automated Phishing Attacks: Enhanced phishing attacks that can analyze and mimic communication patterns, making phishing emails and messages more convincing and difficult to identify.
  • Adversarial Machine Learning – fighting AI with AI: Hackers can manipulate AI models of cybersecurity systems to craft attacks specifically designed to bypass AI-powered defenses.
  • AI-Generated Deepfakes for Social Engineering: Hackers may create realistic audio, and/or video content, impersonating trusted individuals to manipulate employees into disclosing sensitive information – just like they did in China.
  • AI in Targeted Attacks: By gathering and analyzing large datasets, LLMs, as mentioned above, can pinpoint vulnerabilities or weaknesses in a specific company’s cybersecurity defenses.
  • Automated Password Cracking: Machine learning algorithms can be trained to predict and crack passwords more efficiently, especially when combined with data from previous breaches.
  • AI-Enhanced Reconnaissance: AI can be used in the reconnaissance phase of cyber-attacks to automate and analyze publicly available information to profile individuals within a company to identify potential attack targets.
  • AI-Powered DDoS Attacks: Distributed Denial of Service (DDoS) attacks can be enhanced in targeting and coordinating massive botnets, making DDoS attacks more potent and challenging to mitigate.
  • Machine Learning Evasion Techniques: AI can be used to evade detection by fooling learning-based intrusion detection systems.
  • Quantum Computing Threat: In this very new area of hyper-powerful processing, AI combined with quantum computing could eventually be used by hackers to break current encryption methods, posing a significant threat to the security of sensitive data.
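To make the “adversarial machine learning” item above concrete, here is a minimal toy sketch (the classifier, weights, and numbers are all invented for illustration, not any real product): a linear “malware score” over two features, and an attacker-style perturbation that nudges a flagged sample’s features against the model’s weights until it slips under the detection threshold.

```python
# Toy adversarial-evasion demo: a linear detector scores a sample by two
# hypothetical features (e.g. file entropy, count of suspicious API calls).
# Score >= THRESHOLD means the sample is flagged as malicious.

WEIGHTS = [0.8, 0.6]   # hypothetical learned weights
THRESHOLD = 1.0

def score(features):
    return sum(w * x for w, x in zip(WEIGHTS, features))

def is_flagged(features):
    return score(features) >= THRESHOLD

def evade(features, step=0.05, max_iters=100):
    """Nudge each feature in the direction that lowers the score the fastest
    (the gradient of a linear model is just its weight vector)."""
    x = list(features)
    for _ in range(max_iters):
        if not is_flagged(x):
            break
        x = [xi - step * w for xi, w in zip(x, WEIGHTS)]
    return x

sample = [1.2, 0.9]                 # score = 1.5, so this sample is flagged
evaded = evade(sample)
print(is_flagged(sample), is_flagged(evaded))  # True False
```

This is exactly why defenders adversarially test their own models: small, targeted input changes can flip a classifier’s decision without meaningfully changing the underlying behavior being classified.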

How cybersecurity professionals use AI:

  • Automated Threat Detection: Machine learning algorithms are used by automated threat hunters to analyze vast amounts of data to identify patterns and anomalies that may indicate potential security threats.
  • Real-Time Monitoring: AI-assisted real-time monitoring responds to cyber threats more swiftly, reducing the time between a security breach and its identification.
  • Behavioral Analytics: Allowing the development of advanced behavioral analytics tools which can understand and predict normal user behavior vs abnormal activities that may indicate a security threat.
  • Adaptive Security Measures: Systems can learn from past attacks and adapt their defenses, to provide a more proactive approach to cybersecurity.
  • AI in Endpoint Security: Playing a crucial role in endpoint security, allowing intelligent antivirus solutions to prevent malware and other malicious activity at the endpoint, protecting individual devices and networks.
  • Human-Machine Collaboration: Assisting human analysts by automating routine tasks, allowing cyber protectors to focus on more complex and strategic aspects of cybersecurity.
  • AI-Powered Threat Intelligence: Transforming the field of threat intelligence by analyzing and interpreting vast amounts of data creating actionable insights into emerging threats and vulnerabilities.
  • AI for Incident Response: Significantly improved incident response times by automating the identification, containment, and remediation of security incidents. This faster response is critical in minimizing the impact of a cyber-attack.
  • Continuous Learning and Adaptation: Attack techniques evolve constantly, so AI defense systems must continuously learn and adapt to new patterns rather than rely on static rules.
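As a minimal sketch of the automated threat detection and behavioral analytics ideas above (the baseline data and the three-sigma cutoff are invented for illustration): flag a user’s daily login count as anomalous when it deviates far from their historical baseline.

```python
from statistics import mean, stdev

# Hypothetical baseline: one user's daily login counts over two weeks.
baseline = [4, 5, 3, 6, 4, 5, 4, 3, 5, 6, 4, 5, 3, 4]

def is_anomalous(observed, history, z_cutoff=3.0):
    """Simple z-score detector: flag values far from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > z_cutoff * sigma

print(is_anomalous(5, baseline))   # a normal day -> False
print(is_anomalous(40, baseline))  # sudden burst of logins -> True
```

Production behavioral analytics uses far richer models (per-user, per-device, time-of-day), but the core idea is the same: learn “normal,” then alert on statistically significant deviations.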

As we navigate the intricate interplay between AI and cybersecurity, it becomes evident that the stakes have never been higher. The relentless innovation of hackers demands a collective and proactive response from the cybersecurity community, industry leaders, and policymakers alike. To safeguard our digital future, it is imperative that organizations invest not only in cutting-edge AI-driven defense systems, but also in nurturing a cybersecurity culture that fosters ongoing awareness and resilience.

Should you ever have questions about cybersecurity, want an expert to help train your team to be human firewalls, or need full-stack cybersecurity solutions, give us a call today!

More from Steve...