Automated cyberattacks using AI: When machines become cybercriminals

Image credit: iStock

The power of artificial intelligence (AI) and machine learning (ML) is being exploited by hackers to make cyberattacks more effective and lethal.
    Author: Quantumrun Foresight
    September 30, 2022

    Insight summary

    Artificial intelligence (AI) and machine learning (ML) are increasingly being used in cybersecurity, both to protect systems and to execute cyberattacks. Their capability to learn from data and behavior enables them to identify system vulnerabilities, but also makes it hard to trace the source behind these algorithms. This evolving landscape of AI in cybercrime raises concerns among IT experts, requires advanced defense strategies, and may lead to significant changes in how governments and companies approach cybersecurity.

    Automated cyberattacks using AI context

    Artificial intelligence and ML can automate nearly any task, including learning from repetitive behaviors and patterns, making them a powerful tool for identifying vulnerabilities in a system. More importantly, AI and ML make it challenging to pinpoint the person or entity behind an algorithm.

    In 2022, during a hearing of the US Senate Armed Services Subcommittee on Cybersecurity, Eric Horvitz, Microsoft’s chief scientific officer, referred to the use of AI to automate cyberattacks as “offensive AI.” He highlighted that it is hard to determine whether a cyberattack is AI-driven. He also noted that ML is already being used to aid cyberattacks; for example, ML models can learn the words and strategies people commonly use when creating passwords, making those passwords easier to crack.
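
    To make the password example concrete, here is a minimal sketch (illustrative only; the training list, function names, and bigram approach are assumptions, not anything Horvitz described) of a character-level Markov model that learns transition patterns from a toy list of leaked-style passwords and then samples plausible candidates, the basic idea behind ML-assisted password guessing:

```python
import random
from collections import defaultdict

# Toy stand-in for a leaked-password list (hypothetical data).
LEAKED = ["password1", "password123", "letmein", "qwerty12", "dragon99",
          "sunshine1", "iloveyou2", "monkey123", "football7", "princess1"]

def train(corpus):
    """Count character bigram transitions; '^' and '$' mark start and end."""
    counts = defaultdict(lambda: defaultdict(int))
    for pw in corpus:
        chars = ["^"] + list(pw) + ["$"]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts

def sample(counts, max_len=16):
    """Walk the chain from '^', emitting characters until '$' or max_len."""
    out, cur = [], "^"
    while len(out) < max_len:
        nxt = random.choices(list(counts[cur]),
                             weights=list(counts[cur].values()))[0]
        if nxt == "$":
            break
        out.append(nxt)
        cur = nxt
    return "".join(out)

model = train(LEAKED)
for _ in range(5):
    print(sample(model))  # outputs vary; blends of common patterns
```

    Real tools train far larger neural models on millions of breached passwords, but the principle is the same: learned patterns narrow the guessing space dramatically compared with brute force.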

    A survey by the cybersecurity firm Darktrace found that IT management teams are increasingly concerned about the potential use of AI in cybercrime, with 96 percent of respondents indicating that they are already researching possible solutions. IT security experts are also seeing a shift in cyberattack methods from ransomware and phishing to more complex malware that is difficult to detect and deflect. One possible risk of AI-enabled cybercrime is the introduction of corrupted or manipulated data into ML models.

    An ML attack can impact software and other technologies currently being developed to support cloud computing and edge AI. Insufficient training data can also reinforce algorithmic biases, such as incorrectly tagging minority groups or influencing predictive policing to target marginalized communities. AI can introduce subtle but disastrous data into systems, which may have long-lasting consequences.
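
    The data-poisoning risk described above can be illustrated with a short experiment. The sketch below (a hypothetical example on synthetic data; scikit-learn, the dataset parameters, and the flip fractions are all assumptions) flips the labels of a growing share of training points and retrains a classifier, showing how quietly corrupted training data degrades a model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic two-class dataset standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
rng = np.random.default_rng(0)

def accuracy_after_poisoning(flip_fraction):
    """Flip labels on a random subset of training points, retrain, evaluate."""
    y_poisoned = y_tr.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # label flipping
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"{frac:4.0%} of labels flipped -> test accuracy "
          f"{accuracy_after_poisoning(frac):.3f}")
```

    In practice, an attacker who can poison even a small slice of a training pipeline can bias a model in targeted ways that are far harder to spot than this blunt label flipping.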

    Disruptive impact

    A study by Georgetown University researchers on the cyber kill chain (a checklist of tasks performed to launch a successful cyberattack) showed that specific offensive strategies could benefit from ML. These methods include spearphishing (e-mail scams directed towards specific people and organizations), pinpointing weaknesses in IT infrastructures, delivering malicious code into networks, and avoiding detection by cybersecurity systems. Machine learning can also increase the chances of social engineering attacks succeeding, where people are deceived into revealing sensitive information or performing specific actions like financial transactions. 
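
    The “avoiding detection” tactic can be sketched as a classic evasion attack. In the toy example below (synthetic data; the linear detector and the gradient-direction perturbation are illustrative assumptions, not methods from the Georgetown study), a sample flagged as malicious is nudged against the detector’s weight vector until the detector labels it benign:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic "benign (0) vs. malicious (1)" detection problem.
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
detector = LogisticRegression(max_iter=1000).fit(X, y)

# Start from a sample the detector currently flags as malicious.
x = X[detector.predict(X) == 1][0].copy()

# Evasion: step the feature vector against the detector's weight vector,
# lowering its decision score until the predicted label flips to benign.
w = detector.coef_[0]
step = 0.1 * w / np.linalg.norm(w)
n_steps = 0
while detector.predict([x])[0] == 1:
    x -= step
    n_steps += 1

print(f"evaded after {n_steps} steps; "
      f"perturbation norm {n_steps * np.linalg.norm(step):.2f}")
```

    Against real detectors, attackers use similar query- or gradient-guided perturbations to disguise malicious traffic or files while preserving their function.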

    In addition, ML can automate several stages of the cyber kill chain, including: 

    • Extensive surveillance - autonomous scanners gathering information from target networks, including their connected systems, defenses, and software settings (a toy scanner sketch follows this list). 
    • Vast weaponization - AI tools identifying weaknesses in infrastructure and creating code to exploit those loopholes. This automated detection can also target specific digital ecosystems or organizations. 
    • Delivery or hacking - AI tools using automation to execute spearphishing and social engineering attacks against thousands of people. 
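
    As a deliberately minimal illustration of the surveillance stage, the sketch below uses only Python’s standard socket module to probe a few common ports on a host and record any service banners; the target and port list are placeholders, and such probing should only ever be run against systems you are authorized to test:

```python
import socket

# Only scan hosts you own or are explicitly authorized to test.
TARGET = "127.0.0.1"  # placeholder target (localhost)
COMMON_PORTS = [22, 80, 139, 443, 445, 3306, 8080]

def probe(host, port, timeout=1.0):
    """Attempt a TCP connection; if it opens, read a short service banner."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            try:
                banner = s.recv(128).decode(errors="replace").strip()
            except socket.timeout:
                banner = ""  # port open, but the service sent no banner
            return True, banner
    except OSError:
        return False, ""

for port in COMMON_PORTS:
    is_open, banner = probe(TARGET, port)
    print(f"{TARGET}:{port:<5} {'open  ' + repr(banner) if is_open else 'closed'}")
```

    Autonomous scanners chain thousands of such probes across whole networks and feed the results to models that decide where to attack next.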

    As of 2023, writing complex code is still within the realm of human programmers, but experts believe that it won’t be long before machines acquire this skill, too. DeepMind’s AlphaCode is a prominent example of such an advanced AI system: it assists programmers by analyzing large amounts of code, learning patterns, and generating optimized solutions.

    Implications of automated cyberattacks using AI

    Wider implications of automated cyberattacks using AI may include: 

    • Companies expanding their cyber defense budgets to develop advanced solutions for detecting and stopping automated cyberattacks.
    • Cybercriminals studying ML methods to create algorithms that can secretly invade corporate and public sector systems.
    • Increased incidents of cyberattacks that are well-orchestrated and target multiple organizations all at once.
    • Offensive AI software utilized to seize control of military weapons, machines, and infrastructure command centers.
    • Offensive AI software utilized to infiltrate, modify, or exploit a company’s systems to take down public and private infrastructure. 
    • Some governments potentially reorganizing the digital defenses of their domestic private sector under the control and protection of their respective national cybersecurity agencies.

    Questions to consider

    • What are the other potential consequences of AI-enabled cyberattacks?
    • How else can companies prepare for such attacks?

    Insight references

    The following popular and institutional links were referenced for this insight:

    Center for Security and Emerging Technology | Automating Cyber Attacks