AI Will Be a Double-Edged Sword in Future Cyber Conflicts
Artificial intelligence opens up a set of new risks and opportunities for the military and intelligence community.
“Artificial Intelligence and machine learning … [are] foundational to the future of cybersecurity. We have got to work our way through how we’re going to deal with this. It is not the if, it’s only the when to me,” Adm. Mike Rogers, former chief of the National Security Agency and U.S. Cyber Command, remarked in an interview. During his presidency, Barack Obama shared his concerns about an attacker using artificial intelligence (AI) to access launch codes for nuclear weapons. “If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems,” Obama said.
AI presents the military and intelligence community with a new set of risks and opportunities. It is, however, important to be precise about how AI applications affect different types of military and intelligence activities. Discussing the use of AI in cyber operations is not a question of whether technology or humans will matter more in the future; it is a question of how AI can help the developers, operators, administrators, and other personnel of cyber organizations or hacking groups do their jobs better. It is therefore essential to understand some of the key applications of AI in future cyber conflicts, from both the offensive and the defensive perspective.
How AI Can Help the Attacker
There are several ways in which hackers can benefit from AI techniques to conduct cyber operations more effectively. First, AI technology might help in finding exploitable vulnerabilities. Hunting for unknown vulnerabilities is often done through a dynamic process called "fuzzing," in which a tool automatically feeds a target program massive amounts of mutated input data, called fuzz, and watches for "response exceptions": crashes or other anomalous behavior that can signal a vulnerability. AI stands to improve these fuzzing techniques. Researchers at the Pacific Northwest National Laboratory have already demonstrated that AI-based fuzzing, complemented with conventional fuzzing techniques, is faster and more effective than conventional fuzzing alone.
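To make the mechanics concrete, the sketch below shows a toy coverage-guided fuzzer in Python. Everything in it is invented for illustration: the target program, its branch instrumentation, and the crash condition are placeholders, and the "learning" component is reduced to a simple seed-scoring heuristic standing in for a trained model that would rate how promising each input is for further mutation.

```python
import random

def target(data: bytes) -> set:
    """Toy instrumented target: returns the branch IDs the input hit.
    Raises ValueError to simulate a crash, the 'vulnerability' we seek."""
    cov = set()
    if len(data) >= 2 and data[0] == 0x7F:
        cov.add("magic")
        if data[1] == 0x42:
            raise ValueError("simulated crash")
    return cov

def mutate(seed: bytes) -> bytes:
    """Classic fuzzing step: flip a few random bytes of an existing seed."""
    buf = bytearray(seed)
    for _ in range(random.randint(1, 3)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(iterations: int = 100_000) -> None:
    # corpus maps seed -> score; the score stands in for a learned model
    # that would prioritize promising seeds for mutation.
    corpus = {b"\x00" * 8: 1.0}
    seen_cov = set()
    for i in range(iterations):
        seeds = list(corpus)
        seed = random.choices(seeds, weights=[corpus[s] for s in seeds])[0]
        candidate = mutate(seed)
        try:
            cov = target(candidate)
        except ValueError:
            print(f"crash found after {i} iterations: {candidate!r}")
            return
        if cov - seen_cov:  # new coverage: keep the input and up-weight it
            seen_cov |= cov
            corpus[candidate] = 5.0
    print("no crash found")

if __name__ == "__main__":
    fuzz()
```

Even in this toy form, the coverage feedback loop is what matters: inputs that reach new code paths are retained and mutated further, which is exactly the search process that learned models can accelerate.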
Second, AI might allow for more effective forms of social engineering. Spam emails can be automatically tailored to a target's profile. Similarly, chatbots fed with large amounts of users' personal data could engage in long conversations and gain a target's trust. AI applications will further enable so-called deepfakes: synthetic media created by superimposing existing images or video onto other footage. Deepfake apps and improvements in facial recognition software have put the inexpensive and easy creation of such content within anyone's reach; someone can simply download several facial images of a person, process them into a 3D model, and use it in a deepfake video. It is not hard to conceive of a future in which deepfake bots masquerade as real people in live video chats to steal credentials or other information. One poorly executed example has already surfaced in the Russo-Ukrainian War: a deepfake purporting to show Ukrainian president Volodymyr Zelenskyy capitulating to Russia.
Third, AI techniques allow for the generation of malware samples that avoid detection by simulating the behavior of legitimate applications. For example, Maria Rigaki and Sebastian Garcia demonstrated that malware can evade detection by using a Generative Adversarial Network (GAN) to shape its network traffic so that it mimics the profile of a legitimate service, such as a Facebook chat.
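The sketch below illustrates the general GAN technique at the heart of such work; it is not Rigaki and Garcia's implementation, and the "legitimate chat" flow features (duration, bytes, inter-arrival time) are synthetic placeholders. A generator learns to emit flow parameters that a discriminator cannot tell apart from the legitimate profile; traffic shaped to those parameters would blend into what a profile-based detector expects.

```python
import torch
import torch.nn as nn

# Placeholder "legitimate chat" flows: [duration (s), KB sent,
# inter-arrival (s)], drawn from an assumed distribution standing
# in for real captured traffic.
def real_flows(n: int) -> torch.Tensor:
    return torch.stack([
        torch.normal(2.0, 0.5, (n,)),
        torch.normal(1.2, 0.3, (n,)),
        torch.normal(0.8, 0.2, (n,)),
    ], dim=1)

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))
D = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator: learn to separate real chat flows from generated ones.
    real, fake = real_flows(64), G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: learn to produce flows the discriminator accepts as real.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The trained generator now emits flow parameters that statistically
# resemble the chat profile.
print(G(torch.randn(5, 8)))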
Fourth, AI techniques may allow malware to spread itself more effectively. The Trojan downloader Emotet provides a glimpse of how AI-enabled propagation can operate faster than human-directed operations. First identified in 2014, Emotet was originally designed as banking malware that spread through spam emails. Over the years it has evolved, and its most recent versions are suspected of using machine learning to target victims more effectively. "Despite attacking and compromising thousands of devices daily, it is surprisingly effective in avoiding researcher machines, honeypots and botnet trackers," researchers from ESET note. "To achieve this, Emotet collects the telemetry of its potential victims and sends it to the attacker's C&C server for analysis. Based on these inputs, the malware not only picks the modules that are to be included in the final payload but also appears to distinguish real human operators from virtual machines and automated environments used by researchers," they continue. Emotet thus seems able to automatically distinguish legitimate environments from artificial ones and to select the relevant payloads accordingly, a process that would take a significant amount of time and resources if done manually.
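Emotet's actual server-side logic is not public, so the sketch below is only a hypothetical reconstruction of the idea the ESET researchers describe. The telemetry features and the training data are invented; the point is simply to show how a basic classifier could triage real victims from analysis environments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical host telemetry of the kind ESET describes Emotet collecting:
# [uptime_hours, process_count, user_documents_found, known_VM_MAC_prefix].
# All values below are synthetic.
rng = np.random.default_rng(0)
n = 500
real_hosts = np.column_stack([
    rng.normal(120, 40, n),   # long uptimes
    rng.normal(180, 30, n),   # many running processes
    np.ones(n),               # user documents present
    np.zeros(n),              # no virtualization MAC prefix
])
sandboxes = np.column_stack([
    rng.normal(0.5, 0.3, n),  # freshly booted
    rng.normal(40, 10, n),    # sparse process list
    np.zeros(n),              # no user documents
    np.ones(n),               # virtualization MAC prefix
])
X = np.vstack([real_hosts, sandboxes])
y = np.array([1] * n + [0] * n)  # 1 = real victim, 0 = analysis environment

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Server-side triage: only hosts classified as real would get the payload.
new_host = np.array([[96.0, 150, 1, 0]])
print("deliver payload" if clf.predict(new_host)[0] else "withhold payload")
```

Automating this decision at the command-and-control server is what lets a botnet make thousands of such judgments per day without a human in the loop.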
Finally, perhaps the most significant developments will be on the back-end, infrastructure side of cyber capability development. For example, AI-powered data analytics are expected to improve the collection, translation, and manipulation of data, reducing the need for linguists and making analysts' jobs much easier.
Toward a Better Cyber Defense
While AI technology can have significant upsides for attackers, we should equally recognize its potential to aid cyber defense, in both the detection of and the response to cyberattacks. On the detection side, we can expect a (further) move away from so-called signature-based detection, which relies on a set of static rules that must be constantly updated, toward flexible, anomaly-based detection that learns what baseline network behavior looks like and flags changes that appear abnormal. Even though these systems will not be perfect and will produce false positives, they can make a first pass through the data and reduce the need for human analysis.
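As a minimal sketch of this anomaly-based approach, the snippet below fits an isolation forest to a synthetic "baseline" of per-host flow summaries (the features and numbers are invented for illustration) and then flags new observations that deviate from that baseline.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline: summaries of 'normal' host activity, synthetic stand-ins for
# real telemetry: [bytes_out_KB, distinct_destinations, after_hours_fraction]
rng = np.random.default_rng(1)
baseline = np.column_stack([
    rng.normal(500, 100, 2000),
    rng.normal(20, 5, 2000),
    rng.normal(0.05, 0.02, 2000),
])

detector = IsolationForest(contamination=0.01, random_state=1).fit(baseline)

# New observations: one typical host, one resembling bulk after-hours
# exfiltration to a handful of destinations.
today = np.array([
    [480.0, 22, 0.04],
    [9000.0, 3, 0.90],
])
for row, verdict in zip(today, detector.predict(today)):
    print(row, "anomalous" if verdict == -1 else "normal")
```

Nothing here requires a signature for a known attack; the detector only knows what normal looked like, which is precisely why it can surface novel activity and also why it will sometimes flag benign changes.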
AI might also facilitate intelligent responses to adversarial cyber activity. In 2016, DARPA organized the Cyber Grand Challenge, the world's first all-machine competition to create automatic defensive systems capable of discovering, proving, and patching software flaws on a network in real time. The automated defense system crowned winner of the challenge, ForAllSecure's Mayhem, was unable to beat a team of human operators at a later event. Nevertheless, DARPA's event was a proof of concept for autonomous cyber defense, demonstrating how automated systems can find security flaws and develop and deploy fixes in real time. Following the event, the U.S. Defense Innovation Unit Experimental (DIUx) launched a project to determine whether commercial "cyber reasoning" could be deployed to detect and remediate previously unknown vulnerabilities in weapon systems.
Finally, particularly for more responsible states that worry about collateral damage, some AI techniques, such as those used to further develop self-propagating malware, also known as worms, will have to be used with great caution. If a worm has no clear boundaries on where it is allowed to go, the risk of indiscriminate targeting and uncontrolled propagation increases.
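What such a boundary might look like in code is simple to illustrate. The guard below is entirely hypothetical (the scope ranges, generation limit, and function name are invented): it refuses to propagate outside an authorized network range or beyond a fixed propagation depth. The absence of exactly these kinds of constraints is what makes uncontrolled spread possible.

```python
import ipaddress

# Assumed engagement scope: propagation allowed only inside these ranges.
ALLOWED_SCOPE = [ipaddress.ip_network("10.20.0.0/16")]
MAX_GENERATION = 3  # hard cap on propagation depth, acting as a kill switch

def may_propagate(target_ip: str, generation: int) -> bool:
    """Return True only if the next hop is inside the authorized scope
    and the worm has not exceeded its generation limit."""
    if generation >= MAX_GENERATION:
        return False
    addr = ipaddress.ip_address(target_ip)
    return any(addr in net for net in ALLOWED_SCOPE)

print(may_propagate("10.20.5.7", 1))  # True: in scope, within depth
print(may_propagate("8.8.8.8", 1))    # False: outside authorized range
print(may_propagate("10.20.5.7", 3))  # False: generation limit reached
```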
Jon Lindsay and Erik Gartzke note that “cyber operations alone lack the insurance policy of hard military power, so their success depends on the success of deception.” AI provides novel opportunities for the attacker to mislead the enemy more effectively and efficiently. It can improve the attackers’ ability to find vulnerabilities, exploit the human factor, and deliver malware. But it equally provides new opportunities to quickly uncover acts of deception. Ultimately, AI cuts both ways.