Cyber vulnerabilities, and their exploits, pass through identifiable stages of life.

Technology isn’t human, but it has stages of life. The period after the conception of a new piece of technology is often marked by significant investments of time and resources, often with little tangible return. If this work is successful, the technology begins to enter use, benefiting from iteration and design improvements. It may then begin to spread, gaining in popularity and begetting virtuous economies of scale. If all continues to progress, the technology will mature in the marketplace. Even if it attains market dominance, however, that position will not be permanent. In time, an upstart technology will appear on the scene, and the process will begin again. Sometimes the old technology will stick around in one form or another, carving out a niche role for itself. More frequently, it will be cast aside and supplanted.

The notion of life cycles in technology and innovation is hardly new.1 Variations of the life-cycle idea can be found in a wide range of theories and case studies, from the economics of creative destruction and Moore’s Law2 to the case of the landline telephone or the digital camera. For some, this endless cycle of innovation and rebirth is central to progress, and those who drive it forward with new inventions are heralded as visionaries. It is a role the inventors themselves sometimes embrace: Steve Jobs famously said, ‘It isn’t the consumers’ job to know what they want’,3 a variant on Henry Ford’s likely apocryphal remark that, ‘If I asked my customers what they wanted, they would have said faster horses’.4 The central place of these two men in American history confirms that, when it comes to mastery of the technological life cycle, to the innovator go the spoils.

Military affairs are in many ways governed by a similar logic. Technologies that were once dominant can be quickly rendered obsolete, changing the course of conflict. History provides a litany of examples, such as the crossbow supplanting skilled archers; the Gatling gun replacing the single-shot rifle (thereby redefining infantry tactics); and the surface-to-air missile largely replacing the anti-aircraft gun. In time, these technologies matured and diffused, eventually being taken up by many states and, in some cases, by dangerous non-state groups.

But what about cyber capabilities? These are looked upon as the newest class of military technology, and there is no shortage of papers arguing for their centrality in twenty-first-century conflict. Many commentators have expressed concern about the low barrier to entry in cyber operations, the ease with which code can be copied and spread, and the dangers of such tools should they fall into the wrong hands.5 For example, a former director of the US National Security Agency (Michael Hayden) recently observed that ‘even … less capable actors can now develop and/or acquire tools and weapons that we thought in the past were so high-end that only a few nation-states could acquire and use them’.6 It is clear that the life cycle of cyber capabilities, and particularly the prospects of diffusion, merits analysis.

Vulnerabilities and exploits

Network intrusions frequently begin with the discovery of a vulnerability in a piece of software.7 Vulnerabilities are weaknesses in the software’s code that can be exploited by an unauthorised user to perform malicious actions. Such actions, carried out by code known as ‘exploits’, could include stealing data, running malicious code, gaining additional privileges, using the penetrated system as a jumping-off point for future penetrations, and more. Each vulnerability, and the associated exploit, follows a life cycle of its own.

The first stage of this cycle is discovery and development. This stage can be performed by researchers working for a state, by individuals working for themselves (who either sell exploits to states or other actors, or use them for their own gain), or by researchers seeking to make software more secure. The discovery of a vulnerability could happen via a number of methods. It can be aided by a process known as fuzzing: feeding data into a piece of software in an effort to cause it to crash in ways that reveal design weaknesses. Alternatively, if the discoverer can gain access to the source code that makes up the software, he or she can look through the code for loopholes and flaws. Another approach is to look for instances of common mistakes that can lead to vulnerabilities – a process analogous to trying various doors in the hopes of finding one unlocked.8
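The fuzzing process described above can be sketched in miniature. The toy parser and mutation strategy below are invented for illustration (real fuzzers such as AFL are vastly more sophisticated): random inputs are fed to a deliberately buggy parser, and any input that crashes it is recorded as a hint of an underlying vulnerability.

```python
import random

def toy_parser(data: bytes) -> int:
    """A deliberately buggy parser: the first byte declares the
    payload length, which the code trusts without validation."""
    if not data:
        return 0
    declared_len = data[0]
    payload = data[1:]
    # Bug: indexes past the end whenever declared_len > len(payload).
    return payload[declared_len - 1] if declared_len else 0

def fuzz(trials: int = 1000, seed: int = 1) -> list[bytes]:
    """Feed random inputs to the parser and collect the crashing cases."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
        try:
            toy_parser(data)
        except IndexError:
            crashes.append(data)  # a crash suggests a design weakness
    return crashes

crashes = fuzz()
print(f"{len(crashes)} crashing inputs found in 1000 trials")
```

In practice the crashing inputs would then be examined by hand to determine whether the flaw is exploitable, which is the step that separates a mere bug from a usable vulnerability.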

The discovery of a vulnerability is most significant when an exploit can be developed by a malicious actor to take advantage of that vulnerability. Security researchers, whose motives are by definition less nefarious, will sometimes write exploit code themselves, in order to show the severity of the vulnerability and the ways in which it can be exploited. In so doing, the security researcher can provide a working proof of concept and build a stronger case that the software manufacturer should promptly address the vulnerability. Both the malicious actor and the security researcher will likely test the developed exploit in controlled environments to ensure that it produces the intended effect.

An example of this process can be seen in the discovery of, and response to, the Heartbleed vulnerability (formally, CVE-2014-0160). This vulnerability, which occurs in software widely used to provide security for sensitive internet traffic, exists because the code does not properly check the input given to it by a user. This is a common, though potentially quite damaging, mistake that could be exploited to gather sensitive information from the affected systems, such as user passwords and administrative information. In this case, the oversight was independently discovered in April 2014 by a security researcher working for Google and by a researcher at a Finnish computing firm, Codenomicon. Neither engineer publicly revealed the method they had used to find the vulnerability.
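The Heartbleed flaw can likewise be sketched in miniature. The toy handler below is a simplification of the bug's logic, not the actual OpenSSL code: it trusts the length claimed by the client instead of checking it against the payload actually sent, so an over-long claim echoes back adjacent data (here simulated as a buffer holding an invented secret).

```python
SECRET = b"admin_password=hunter2"  # stands in for adjacent process memory

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    """Echo back `claimed_len` bytes of a heartbeat request.

    The bug: claimed_len is never validated against len(payload),
    so the response can run past the payload into neighbouring data.
    """
    memory = payload + SECRET       # payload sits next to sensitive data
    return memory[:claimed_len]     # the over-read leaks the secret

# An honest client: the claimed length matches the payload sent.
print(heartbeat(b"hello", 5))       # b'hello'

# A malicious client: claims far more than it sent, and receives
# the payload plus whatever happened to sit beside it.
print(heartbeat(b"hello", 64))
```

The fix, as in the real patch, is a single bounds check rejecting any request whose claimed length exceeds the payload received.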

It is not known to what degree researchers attempted to exploit the Heartbleed vulnerability in private, but it seems fair to assume that they tested it in a controlled fashion prior to alerting major companies. In any case, the proof of concept was quickly confirmed and publicly demonstrated by other researchers soon after the vulnerability was disclosed – to much publicity and alarm.

Following discovery and development comes the introduction stage, during which an exploit can begin to be used against operational systems. In cases where a malicious actor has discovered an exploit, that actor will often be the only one using it, since no one else will have become aware of it. This can be a tremendous advantage, as defensive systems will not be in place to detect or block the exploit’s use, and the software vendor will not yet have fixed the vulnerability. Exploits of this sort are known as ‘zero-day exploits’, because defenders have no notice before the exploit is used. Due to their increased effectiveness, zero-day exploits can reportedly sell for large sums on the grey market. There is evidence that governments, including the United States, have spent tens of millions of dollars purchasing them in order to enable cyber operations.9

Since the researchers who discovered Heartbleed were neither malicious intruders nor zero-day brokers, they did not seek to use or sell the exploit in secret. Rather, once they were sure the vulnerability could be exploited to damaging effect, they began to alert others. In this case, therefore, it was not the exploit itself that was being introduced, but rather fixes to the vulnerability. Engineers at Google benefited from the fact that one of their own had discovered the vulnerability. The firm was able to act on this knowledge and secure its software against the use of the exploit before others knew about it, removing the risk to its customers and systems.10

After the introduction stage comes growth. In the case of a malicious actor, by this point in the life cycle that actor will know that the exploit can be effectively used against a set of targets. All else being equal, the actor then has an incentive to use the exploit against those targets, since waiting runs the risk of the vulnerability being discovered by another actor or fixed by the software vendor. During the growth phase, the exploit will be integrated into a malicious actor’s operations, perhaps to be used routinely or frequently. Even so, knowledge of the exploit’s existence will normally still be confined to that actor, and any confidants.

In the case of Heartbleed, for which it was the fix (and not an exploit) that was to gain traction, word began to spread informally after the software vendor was notified in private. Though few companies are willing to identify their sources for learning about the vulnerability, several firms besides Google were alerted to the danger in advance of its public disclosure. These firms began implementing fixes for their own systems.11


As time passes and their use increases, exploits become increasingly likely to be detected and publicised. At this maturation stage of an exploit’s life cycle, fixes are developed to address the problem, which means it is no longer considered a zero-day vulnerability. In the case of a particularly damaging exploit, software vendors may rush out emergency fixes to address the relevant vulnerability as quickly as possible, and immediately alert users to update their systems. In somewhat less severe cases, these fixes will be bundled together and sent out to users in regular periodic updates. The most famous example of this is Microsoft’s ‘Patch Tuesday’ system, which delivers batches of fixes to users on the second Tuesday of every month. As more and more fixes are applied to systems and defences are adjusted, a given exploit’s utility to malicious actors begins to drop.

With maturation comes diffusion of the capability to other actors, as news of the vulnerability becomes more widely known. This enables other actors to develop their own exploits to target the same vulnerability. Frequently, malicious actors will attempt to exploit recently announced vulnerabilities before their targets apply the relevant software fixes, a process which can take some time. Because zero-day vulnerabilities are comparatively rare – based on publicly available information, fewer than two dozen were discovered and used by malicious actors in 201312 – recently disclosed vulnerabilities may provide the best chance for a malicious actor to penetrate a target’s networks. One study showed that the top five zero-day vulnerabilities of 2013 were exploited in attempted intrusions almost 200,000 times in the 30 days immediately following their public disclosure – a clear example of diffusion.13 On the whole, it is cheaper to wait for vulnerabilities to reach a mature stage before exploiting them than it is to purchase or discover zero-day vulnerabilities at an earlier stage. Freely available software, such as Metasploit, can make diffusion easier still, providing a mechanism through which exploit code can quickly spread once developed.14

For Heartbleed, maturation came when security researchers publicly disclosed it. Given the severity of the vulnerability – which one leading security researcher described as ‘easily the worst vulnerability since mass-adoption of the Internet’15 – the maturation process unfolded with great rapidity. This process was aided by prominent news coverage in major outlets,16 widespread discussion on technology websites, and a high-profile web page put up by the security researchers.17 Before too long, even the White House saw fit to weigh in, clarifying that Heartbleed was not a tool of US intelligence agencies.18 Many leading technology firms that had not been given advance notice of the vulnerability, such as Amazon, immediately sought to implement fixes.19

As news of the Heartbleed vulnerability left the relatively controlled backchannels of the security community and emerged into public view, diffusion to malicious actors also occurred. In the days after the news broke and the fix was released, several websites – almost certainly ones that had not yet applied the fix – experienced breaches exploiting the vulnerability. These websites included the second-largest for-profit hospital chain in the United States, a discussion forum with more than one million members, and the Canada Revenue Agency.20 It is a sign of how quickly the vulnerability spread that the intruders were not necessarily sophisticated intelligence agencies but rather, in the case of the Canada Revenue Agency breach at least, allegedly a teenage engineering student.21

In time, vulnerabilities decline in importance as fixes are applied more broadly. That said, individuals or organisations that do not upgrade to new software versions or apply security updates will remain viable targets. According to a survey conducted in February 2015, for example, more than 20% of Internet Explorer users continue to use Internet Explorer 8, released in 2009, if not browsers that are older still.22 Similarly, almost 20% of all desktop-computer users still use Windows XP,23 an operating system that was first released in 2001 and for which Microsoft has stopped issuing technical and security fixes.24 Users without up-to-date software are vastly more likely to be targeted for intrusion.

The Heartbleed case once again serves to illustrate this point. Because of the widespread attention the vulnerability received, most prominent services had applied fixes within days. This included many potential targets for intrusion, such as major web companies. Yet, an industry survey conducted in spring 2015 – almost a year after the disclosure – indicated that many sites had not yet applied the fix.25 Another survey, which drew on large amounts of internet data, found that, as of February 2015, Heartbleed still merited a place among the top ten ‘most critical and prevalent security vulnerabilities’.26

Even though other exploits have not received the same amount of attention, there is reason to believe that Heartbleed is not alone in conforming to a life-cycle model. In some cases, exploits are discovered and used by a state or state-sponsored actor before being diffused more widely. For example, in 2013 a sophisticated actor used an exploit for the CVE-2013-1347 vulnerability, a flaw in Microsoft's Internet Explorer browser, in an attack staged through the website of the United States Department of Labor. The hackers were believed to be state-sponsored because the infected web page was one frequented by Department of Energy employees working on nuclear-related illnesses, and because the same exploit was used simultaneously to hack non-profit organisations and a large European defence, aerospace and security company.27 After the discovery of this high-impact exploit, it underwent a similar process of maturation, including diffusion. The software vendor quickly prepared a fix, while modules designed to exploit the vulnerability were added to publicly available tools.28

Other techniques and operational concepts

The applicability of life-cycle analysis to cyber security is not limited to software vulnerabilities and exploits. It can be applied to specific techniques, as well as to broader operational concepts. In such cases, life-cycle stages may not be as clearly delineated, but a similar arc of discovery, introduction, growth, maturation and decline can usually be discerned.

Take, for example, the technique of distributed denial-of-service attacks. These attacks seek to disable a computer system. To do so, they send a flood of often meaningless data to the target computer or network from a wide variety of sources, causing it to become overwhelmed and thus unable to process ordinary requests. Though it is not possible to identify the first such attack, the technique began to emerge in the late 1990s and early 2000s. Some of these attacks were successful, drawing attention to their possibilities and dangers.29 The technique gained geopolitical prominence, scale and sophistication with the 2007 denial-of-service attack against Estonia, for which the Estonians blamed Russia, a state with a long history of cyber operations.30 A year later, distributed denial-of-service attacks against targets in Georgia coincided with Russian conventional military offensives against that country, garnering still more public attention.31 In 2009, it is believed that North Korea launched a distributed denial-of-service attack on American and South Korean targets.32 During this period, non-state groups such as Anonymous also began launching distributed denial-of-service attacks as a form of political protest (including a notable attack against the Church of Scientology), employing a tool to make the attacks easier for individual members to join.33 These attacks marked the maturation and diffusion stages of the technique’s life cycle.
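Why the *distributed* quality of these attacks matters can be illustrated with a small simulation. All numbers below are invented for illustration: a simple per-source rate limit stops a lone flooder, but the same traffic volume spread across a botnet slips under the limit and still exceeds the server's total capacity.

```python
from collections import Counter

PER_SOURCE_LIMIT = 100   # requests per second allowed from any one source
SERVER_CAPACITY = 1000   # total requests per second the server can handle

def admitted(requests: list[str]) -> int:
    """Count the requests that pass a naive per-source rate limit."""
    seen = Counter()
    passed = 0
    for src in requests:
        seen[src] += 1
        if seen[src] <= PER_SOURCE_LIMIT:
            passed += 1
    return passed

# One attacker flooding from a single address is mostly filtered out.
single = ["attacker"] * 5000
print(admitted(single))          # capped at the per-source limit

# The same volume spread across 50 sources, each staying under the
# limit, is admitted in full - and overwhelms the server's capacity.
botnet = [f"bot-{i}" for i in range(50) for _ in range(100)]
print(admitted(botnet), ">", SERVER_CAPACITY)
```

This is why defending against distributed attacks typically requires upstream filtering or large-scale traffic absorption rather than simple per-source blocking.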

In time, defences were developed and the technique began to decline. To be sure, distributed denial-of-service attacks persisted – for example, it is believed that Iranian operators leveraged large data centres to aid their 2012 denial-of-service attack on American banks,34 and that Chinese attackers used the state’s powerful position on the internet in their 2015 denial-of-service attack on GitHub35 – but other attack methods gained much more attention.36 A North Korean computer attack on South Korean banks in 2013 used malicious code that wiped hard drives,37 as did a North Korean computer attack on Sony Pictures Entertainment and an Iranian attack on Sands Casino in 2014.38 These attacks had longer-lasting effects than their more transient distributed denial-of-service predecessors.

The life-cycle model applies to other operational concepts as well. This includes techniques that have not yet begun meaningful decline, but that have reached a certain stage of maturity and diffusion. An example of this is social engineering, a technique initially associated with non-state actors in the 1980s and 1990s.39 In the 2000s, the technique spread to state-sponsored actors, before gaining greater popularity among criminals seeking financial gain. A second example is known as ‘persistence on the network’, which involves burrowing so deeply into systems that defenders have a difficult time removing the malicious code. This technique was once within the exclusive domain of those interested in espionage, who desired a way to collect intelligence from their target over the long term; some states, such as the United States, developed highly sophisticated means of doing this in the mid- to late 2000s.40 Although financially motivated actors initially saw little value in persistence, preferring to take whatever data they could upon making entry before quickly moving on to the next target, this has begun to change.

Of course, techniques can diffuse from financial criminals to state-sponsored actors as well. This happened in the case of techniques to steal personally identifiable information, which were once primarily used by criminals seeking to commit fraud or identity theft. In recent years, however, state-sponsored actors have begun seeking out large stores of personally identifiable information, perhaps as a means of informing future social-engineering attempts or other operations.41

Exceptions to the life-cycle model

Despite the overall usefulness of the life-cycle model as applied to cyber operations, exceptions do exist, particularly with respect to the notion of diffusion. There are at least four possible reasons why a particular technique or concept in cyber operations might not diffuse. Firstly, the technology or concept might require a significant presence on the infrastructure of the internet, or significant access to telecommunications providers. Secondly, it might require specialised test beds or expensive physical resources. Thirdly, it might target an esoteric piece of hardware or software. Finally, it might require coordination with other parts of a sophisticated intelligence service. Thus, techniques satisfying one or more of these conditions will usually fail to diffuse not because they are secret or because potential users do not wish to employ them, but rather because they do not have the capacity to put their knowledge of the technique to practical use.

An example of a technique requiring its user to have a significant presence on the cables, routers and switches that enable the internet to function is the collection of unencrypted data on a large scale from the main thoroughfares of the internet.42 This technique can really only be employed by actors possessing the kind of access to the internet enjoyed, for example, by the Five Eyes – the signals-intelligence alliance comprising the United States, the United Kingdom, Canada, Australia and New Zealand.43 Non-state groups are unlikely to possess the necessary capabilities to make use of this technique, and are therefore unlikely to be able to carry out the kind of analysis this technique supports.

Access to the infrastructure of the internet can foster additional operational techniques, such as man-on-the-side attacks, in which a target’s request to a web server is covertly intercepted. In such instances, attackers are able to create what is called a ‘race condition’, sending malicious traffic to the target before the legitimate web server responds. The technique, and variants thereof, has appeared in US National Security Agency (NSA) documents,44 and has reportedly been used by GCHQ, the British signals-intelligence agency.45 In theory, high-level internet and telecommunications access could also enable advanced data-exfiltration operations, in which pilfered information is covertly smuggled out of targeted networks. Once again, leaked NSA documents make reference to such techniques.46
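The race condition at the heart of a man-on-the-side attack can be sketched as a timing simulation. The latencies below are invented: the client accepts whichever well-formed response arrives first, so an attacker positioned closer to the target than the legitimate server usually wins the race.

```python
import random

def race(legit_latency: float, attacker_latency: float, jitter: float,
         rng: random.Random) -> str:
    """One request: the client accepts whichever response arrives
    first, much as TCP accepts the first matching packet."""
    legit = legit_latency + rng.uniform(0, jitter)
    injected = attacker_latency + rng.uniform(0, jitter)
    return "attacker" if injected < legit else "server"

rng = random.Random(0)
# A well-positioned attacker injects a response in ~10ms; the
# legitimate server, farther away, needs ~30ms. Network jitter
# occasionally lets the real response through first.
wins = sum(race(30.0, 10.0, 30.0, rng) == "attacker" for _ in range(1000))
print(f"attacker wins the race in {wins} of 1000 requests")
```

The simulation makes plain why the technique demands privileged network position: without a latency advantage over the legitimate server, the injected packet rarely arrives first.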

The best example of an operation requiring a specialised test bed or other expensive physical resources is Stuxnet, a remarkably sophisticated operation carried out by the United States and Israel that targeted Iranian centrifuges.47 According to news reports, the operation, which first came to light in 2010, was in part enabled by a test bed of similar centrifuges, arranged in a similar configuration, located in Israel.48 This resource allowed the attackers to plan their operation carefully and to refine their malicious code in secret. Given the kind of subtle effects Stuxnet aimed to cause,49 such testing was probably essential for its success. Any actor not possessing a similar physical test bed, or at least significant knowledge of the hardware system to be targeted, would have a hard time emulating Stuxnet’s physical effects.50

In addition, operations like Stuxnet are unlikely to diffuse because they are narrowly crafted to target very specific software and hardware configurations. While Stuxnet demonstrated that a kinetic attack can be carried out via cyber means, the specific code and techniques employed by Stuxnet are unlikely to be of much use to other actors seeking to launch similar attacks without being substantially modified.51 This is because they were designed for the specific configuration of centrifuges present in Iran, a configuration that is highly unlikely to be shared by other facilities.52 Nor was the Stuxnet code easy to understand. Indeed, it was so specialised that even leading security organisations were obliged to ask the broader public for help in identifying what it did.53

Finally, examples of an operation that might require the work of other parts of an intelligence organisation include so-called ‘supply chain interdiction’ or ‘off-net operations’, which involve physically intercepting a new computer or computer part while it is in transit to the target, so that malicious code can be added to it. These operations would be difficult for lone actors to emulate, as they would almost certainly require coordination with larger teams, as well as knowledge of the supply chain, including packaging and shipping systems, and an ability to perform the physical interdiction. NSA documents indicate that the agency has pursued this sort of operation in partnership with the CIA.54 Companies whose products are likely to be targeted, such as Cisco, have taken steps to try to evade such interdiction, including shipping parts under false names, or to false addresses.55

Countering diffusion

Techniques and exploits that do conform to the life-cycle model generally fall into at least one of the following three categories: those that target widely used software platforms; those that are broadly accessible with few prerequisites (these can sometimes be automated in the form of a tool); and those that rely on scale, rather than sophistication, to be effective. Attacks on widely used operating systems and other common software packages are much more frequent than those targeting obscure industrial-control systems. The barriers to entry for many operations are made lower by tools, such as Metasploit, which aggregate exploits. And there are many vulnerable users, including those who have not yet applied even critical security fixes to their systems, meaning that operations that rely on scale for effectiveness, such as financial crime, can still be effective.


Given the way that exploits typically diffuse, classification and export controls are not likely to be effective in preventing the diffusion of non-exceptional cyber capabilities. Though classification certainly has a role to play in keeping important operational concepts out of the public domain, it is far from a panacea. The Snowden disclosures have underscored the difficulty in keeping information secret. Even if such disclosures can be described as anomalous,56 the cyber-security community has proven quite capable of observing, analysing and disclosing secret operations carried out by states. American cyber-security companies have exposed Chinese operations,57 for example. Private interests have also exposed operations carried out by the Five Eyes.58 (Other capabilities that are subject to classification, such as biological or chemical capabilities, are disclosed far less often by private actors dedicated to exposing government operations in splashy reports.) Moreover, states can learn from their adversaries, as Iran apparently did.59 All told, while classification may still be necessary to slow diffusion, it is hardly sufficient to stop it altogether.

Export control is also of limited use in thwarting diffusion. Indeed, as of 2015, very few cyber capabilities were subject to US export controls. This is for at least two good reasons. Firstly, many technologies are ‘dual-use’, meaning they can be used for legitimate purposes just as easily as for illicit ones. Export restrictions would be unwelcome because they would interfere with these legitimate uses. Secondly, computer code is easily copied and moved. This is most notably exemplified by the so-called ‘Crypto Wars’ of the 1990s. That debate, which centred on the US government’s efforts to prevent the export of advanced cryptographic tools to other countries, was ultimately resolved in favour of looser controls.60 The government saw the difficulty in trying to limit the spread of encryption algorithms. It had also come to appreciate the ways in which secure cryptography was essential for many important aspects of modern commerce.61 It could be that at least a limited export-control regime will be implemented in the future, perhaps focused on zero days and building on accords such as the Wassenaar Arrangement (a multilateral export-control regime targeting conventional arms and dual-use goods), but it remains unclear how effective such a regime could be at stopping diffusion.62

Certainly, the origin of a technique or exploit does not seem to affect its eventual prospects for maturation and diffusion. The Heartbleed vulnerability was discovered by a security researcher, while CVE-2013-1347 appears to have been found by a sophisticated, and likely state-sponsored, hacking group. The technique of collecting personally identifiable information began as a means to financial gain, but is now a tool of intelligence agencies, while ‘seeking persistence in operations’ has diffused in the reverse direction. Distributed denial-of-service attacks have been employed by a wide variety of actors, with varying degrees of effectiveness, throughout their comparatively long history. A broad range of cyber operations has come under private-sector scrutiny, and states are getting better at analysing the behaviour of their counterparts. It is clear that actors of all stripes regularly learn from one another.

Given the life-cycle analysis presented here, a final conclusion can be drawn: sophisticated states with an interest in broader stability and cyber security should consider carefully the techniques and exploits they employ in operations. One component of the authorisation calculus should be whether those exploits and techniques, once discovered, could diffuse to rival states, less sophisticated states or non-state actors. There will no doubt be other important factors to consider, such as cost, scale, effectiveness or concerns about civil liberties. Nevertheless, all other things being equal, states seeking stability should prefer operations using techniques that are less likely to spread to their potential adversaries. These states may also attempt to weaken capabilities that are likely to diffuse. For example, they might try to encourage the discovery and fixing of vulnerabilities by researchers, or might take concrete steps of their own to fix, rather than exploit, the vulnerabilities they uncover.63

Perhaps there will come a time when cyber operations are subject to the same kind of consideration as are operations with implications for public health. Even states that seek stability nonetheless perform military and intelligence operations, but many of these have either unilaterally or mutually agreed to avoid unduly harming efforts at public health. This is one of the reasons the CIA’s choice to run a fake vaccination programme in Pakistan during the hunt for Osama bin Laden is so controversial: apart from being unsuccessful, the operation made it vastly more difficult for future, legitimate vaccination programmes to take place.64 This ultimately caused harm to efforts to eradicate polio – an objective of global interest. In a similar way, the spread of potentially dangerous technologies and techniques might have some short-term benefits, but could in the long run negatively affect the overall health of the computing ecosystem.65 States may nonetheless choose to deploy technologies or techniques that will diffuse, but the choice should be an informed one.


The author is grateful to Robert M. Lee and Daniel Moore for reading an earlier draft of this article and providing helpful feedback.


1 For a seminal work on the life-cycle concept, see Everett Rogers, Diffusion of Innovations (New York: Simon and Schuster, 2010).

2 Creative destruction is the process through which economic inefficiencies are overcome by the introduction of better technologies or processes. In general terms, Moore’s Law states that computer processing power grows exponentially over time.

3 Steve Lohr, ‘Can Apple Find More Hits without Its Tastemaker?’, New York Times, 18 January 2011.

4 Patrick Vlaskovits, ‘Henry Ford, Innovation, and That “Faster Horse” Quote’, Harvard Business Review, 29 August 2011.

5 For a small sampling, see Richard Clarke and Robert Knake, Cyberwar (New York: HarperCollins, 2010); Joel Brenner, Glass Houses (New York: Penguin, 2014); Martin C. Libicki, Crisis and Escalation in Cyberspace (Santa Monica, CA: Rand Corporation, 2012).

6 Nicole Perlroth, ‘Hacking for Security, and Getting Paid for It’, New York Times, Bits blog, 14 October 2015.

7 For more on how vulnerabilities occur, and on how software developers can reduce the number of vulnerabilities in the code they produce, see Michael Howard and Steve Lipner, The Security Development Lifecycle (Redmond, WA: Microsoft Press, 2009).

8 For more on the vulnerability-discovery process, see ibid., pp. 153–69.

9 Barton Gellman and Ellen Nakashima, ‘U.S. Spy Agencies Mounted 231 Offensive Cyber-Operations in 2011, Documents Show’, Washington Post, 30 August 2013. For more on the zero-day market, see Charlie Miller, ‘The Legitimate Vulnerability Market’, Independent Security Evaluators, 6 May 2007.

10 Danny Yadron, ‘After Heartbleed Bug, a Race to Plug Internet Hole’, Wall Street Journal, 9 April 2014.

11 Ben Grubb, ‘Heartbleed Disclosure Timeline: Who Knew What and When’, Sydney Morning Herald, 15 April 2014.

12 Additional zero days were discovered by actors without malicious intent, such as security researchers and firms, and sent to software vendors for patching. Others were surely discovered and used in secret. This figure is the number of zero days actually used in known attacks, according to a review by a leading security firm. See Symantec Corporation, ‘Internet Security Threat Report 2014’, April 2014, p. 34. For more on zero days, see Leyla Bilge and Tudor Dumitras, ‘Before We Knew It: An Empirical Study of Zero Day Attacks in the Real World’, in Proceedings of the 2012 ACM Conference on Computer and Communications Security (New York: Association for Computing Machinery, 2012), pp. 833–44.

13 Symantec Corporation, ‘Internet Security Threat Report 2014’, p. 35.

14 Metasploit need not be malicious. Indeed, its purpose is to aid penetration testing, the targeting of one’s own network by simulated intruders to facilitate improved defences. For more, see the Metasploit website.

15 Yadron, ‘After Heartbleed Bug, a Race to Plug Internet Hole’.

16 See, for example, ibid.; Nicole Perlroth, ‘Heartbleed Internet Security Flaw Used in Attack’, New York Times, 18 April 2014; and Elise Hu and Steve Henn, ‘What to Do Now That the Heartbleed Bug Exposed the Internet’, All Tech Considered, 9 April 2014.

17 Codenomicon, ‘The Heartbleed Bug’.

18 Michael Daniel, ‘Heartbleed: Understanding When We Disclose Cyber Vulnerabilities’, White House blog, 28 April 2014.

19 Yadron, ‘After Heartbleed Bug, a Race to Plug Internet Hole’.

20 Grubb, ‘Heartbleed Disclosure Timeline: Who Knew What and When’.

21 Jose Pagliery, ‘Canadians Arrest a Heartbleed Hacker’, CNN, 16 April 2014.

22 Craig Buckler, ‘Browser Trends March 2015: Renewed Interest in Opera?’, Sitepoint, 3 March 2015.

23 NetMarketshare, ‘Desktop Top Operating System Share Trend’.

24 Microsoft, ‘Support for Windows XP Has Ended’.

25 Cisco, ‘Cisco Annual Security Report Reveals Widening Gulf Between Perception and Reality of Cybersecurity Readiness’, 20 January 2015.

26 Qualys, ‘Top 10 Vulnerabilities February 2015’.

27 Jaime Blasco, ‘New Internet Explorer Zeroday Was Used in the DoL Watering Hole Campaign’, AlienVault, 5 May 2013. Some analysts have concluded, based on command-and-control techniques used in the malicious code, that the group is likely a sophisticated operation based in China known to the security community as ‘Deep Panda’. Eddie Mitchell, ‘Part 2 – K.I.A. – US Dept. Labor Watering Hole Pushing Poison Ivy via IE8 Zero-Day’, Invincea, 3 May 2013. The same group is reported by other firms to have conducted other significant operations, including a notable 2014 watering-hole attack that targeted American defence contractors and financial-services firms, as well as Chinese dissidents. See Stephen Ward, ‘Cyber Espionage Campaign Compromises Web Properties to Target US Financial Services and Defense Companies, Chinese Dissidents – CVE-2015-0071 and CVE-2014-9163’, iSight Partners, 10 February 2015.

28 Kelly Jackson Higgins, ‘Metasploit Module Released for IE Zero-Day Flaw Used in Labor Attack’, DarkReading, 6 May 2013.

29 See, for example, Corey Grice, ‘How a Basic Attack Crippled Yahoo’, CNET, 2 January 2002.

30 ‘Estonia Hit by “Moscow Cyber War”’, BBC News, 17 May 2007.

31 For a more detailed analysis of this case, see GreyLogic, ‘Project Grey Goose Phase II Report: The Evolving State of Cyber Warfare’, 20 March 2009.

32 Choe Sang-Hun and John Markoff, ‘Cyberattacks Jam Government and Commercial Web Sites in U.S. and South Korea’, New York Times, 8 July 2009.

33 Quinn Norton, ‘Anonymous 101 Part Deux: Morals Triumph over Lulz’, Wired, 30 December 2011. For more on Anonymous, including the group’s use of denial-of-service attacks, see Gabriella Coleman, Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous (London and New York: Verso, 2014).

34 Nicole Perlroth and Quentin Hardy, ‘Bank Hacking Was the Work of Iranians, Officials Say’, New York Times, 8 January 2013.

35 Paul Mozur, ‘China Appears to Attack GitHub by Diverting Web Traffic’, New York Times, 30 March 2015.

36 Denial-of-service attacks also have some utility as a form of political protest, or as part of efforts at extortion for financial gain, but overall it is fair to say they are no longer the principal focus of cyber-security discussions.

37 Sean Gallagher, ‘Your Hard Drive Will Self-Destruct at 2pm: Inside the South Korean Cyberattack’, Ars Technica, 21 March 2013.

38 Kurt Baumgartner, ‘Sony/Destover: Mystery North Korean Actor’s Destructive and Past Network Activity’, Securelist, 4 December 2014; Ben Elgin and Michael Riley, ‘Now at the Sands Casino: An Iranian Hacker in Every Server’, Bloomberg, 11 December 2014.

39 Social engineering most often takes the form of spear-phishing, in which a seemingly genuine email is sent to a target who, it is hoped, will click a malicious link or open a malicious attachment.

40 Michael Mimoso, ‘Inside Nls_933w.DLL, the Equation APT Persistence Module’, ThreatPost, 17 February 2015.

41 Mandiant, ‘M-Trends 2015: A View from the Front Lines’, p. 20.

42 Ewen MacAskill et al., ‘GCHQ Taps Fibre-Optic Cables for Secret Access to World’s Communications’, Guardian, 21 June 2013; Ryan Gallagher, ‘Profiled’, The Intercept, 25 September 2015.

43 This is not to draw any conclusions about how the Five Eyes do or don’t use their current levels of access (an entirely separate debate), only to suggest that their partnerships with telecommunications companies enable additional possible capabilities. For a report on the telecommunications access of one Five Eyes signals-intelligence agency, see Geoff White, ‘Spy Cable Revealed: How Telecoms Firm Worked with GCHQ’, Channel 4, 20 November 2014.

44 ‘Quantum Insert Diagrams’, The Intercept, 12 March 2014; Bruce Schneier, ‘Attacking Tor: How the NSA Targets Users’ Online Anonymity’, Guardian, 4 October 2013.

45 ‘Quantum Spying: GCHQ Used Fake LinkedIn Pages to Target Engineers’, Der Spiegel, 11 November 2013.

46 NSA, ‘Analytic Challenges from Active–Passive Integration’, 2007.

47 For more on Stuxnet, including a discussion of the testing procedures and detailed knowledge required to configure the attack correctly, see Kim Zetter, Countdown to Zero Day (New York: Crown, 2014).

48 William Broad, John Markoff and David Sanger, ‘Israeli Test on Worm Called Crucial in Iran Nuclear Delay’, New York Times, 15 January 2011.

49 For more on these effects, see Ralph Langner, ‘Stuxnet’s Secret Twin’, Foreign Policy, 19 November 2013.

50 Some components of Stuxnet, however, are not subject to this restriction, since they targeted widely used software. For example, the exploits used to target Windows computers could conceivably be used by other actors, in accordance with the model outlined in this article.

51 The Stuxnet worm itself reportedly accidentally spread into other networks and caused some instability, but nothing like the damage that was done to the targeted Iranian facility. See Rachel King, ‘Stuxnet Infected Chevron’s IT Network’, Wall Street Journal, 8 November 2012.

52 Zetter, Countdown to Zero Day, p. 175.

53 Kim Zetter, ‘How Digital Detectives Deciphered Stuxnet, the Most Menacing Malware in History’, Wired, 11 July 2011.

54 Peter Maass and Laura Poitras, ‘Core Secrets: NSA Saboteurs in China and Germany’, The Intercept, 11 October 2014.

55 Darren Pauli, ‘Cisco Posts Kit to Empty Houses to Dodge NSA Chop Shops’, The Register, 18 March 2015.

56 It is not at all clear that this is the case. In some important ways, unauthorised disclosures may be getting easier. For more, see Andy Greenberg, This Machine Kills Secrets (New York: Dutton, 2012).

57 For a seminal example, see Mandiant, ‘APT1: Exposing One of China’s Cyber Espionage Units’.

58 A noteworthy example is Kaspersky Lab, ‘Equation Group: Questions and Answers’, 2015.

59 Glenn Greenwald, ‘NSA Claims Iran Learned from Western Cyberattacks’, The Intercept, 10 February 2015.

60 The debate is significant enough to warrant book-length treatment. See Steven Levy, Crypto (New York: Penguin Books, 2001).

61 In light of the Snowden revelations of the Five Eyes’ activity against secure cryptography, this debate is being revisited. See, for example, Julian Hattem, ‘“Crypto Wars” Return to Congress’, The Hill, 20 October 2014.

62 For more on zero days and their market, see Andreas Kuehn and Milton Mueller, ‘Shifts in the Cybersecurity Paradigm: Zero-Day Exploits, Discourse, and Emerging Institutions’, in Proceedings of the 2014 New Security Paradigms Workshop (New York: ACM, 2014). See also Mailyn Fidler, ‘Regulating the Zero-Day Vulnerability Trade: A Preliminary Analysis’, I/S: A Journal of Law and Policy for the Information Society (forthcoming).

63 For more on this argument, see Bruce Schneier, ‘Should U.S. Hackers Fix Cybersecurity Holes or Exploit Them?’, Atlantic, 19 May 2014. For the current American position, which permits both fixing and exploiting vulnerabilities depending on the circumstance, see Daniel, ‘Heartbleed: Understanding When We Disclose Cyber Vulnerabilities’.

64 ‘How the CIA’s Fake Vaccination Campaign Endangers Us All’, Scientific American, 1 May 2013.

65 Myriam Dunn Cavelty has advanced a variant on this argument, focused specifically on the use of exploits by states. See Myriam Dunn Cavelty, ‘Breaking the Cyber-Security Dilemma: Aligning Security Needs and Removing Vulnerabilities’, Science and Engineering Ethics, vol. 20, no. 3, September 2014, pp. 701–15.

Ben Buchanan is a PhD candidate in the Department of War Studies at King’s College London, where he is a Marshall Scholar. He is also a Public Policy Fellow at the Woodrow Wilson International Center for Scholars.


Survival: Global Politics and Strategy

February–March 2016
