By Will Casey
Senior Member of the Technical Staff
Exclusively technical approaches to cybersecurity have created pressure for malware attackers to evolve greater technical sophistication and to harden attacks with increased precision, including socially engineered malware and distributed denial-of-service (DDoS) attacks. A general and simple design for achieving cybersecurity remains elusive, and addressing the problem of malware has become such a monumental task that technological, economic, and social forces must join together to address it. At the Carnegie Mellon University Software Engineering Institute’s CERT Division, we are working to address this problem through a joint collaboration with researchers at the Courant Institute of Mathematical Sciences at New York University, led by Dr. Bud Mishra. This blog post describes this research, which aims to understand and seek complex patterns in malicious use cases within the context of security systems and to develop an incentives-based measurement system that would evaluate software and ensure a level of resilience to attack.
In March of this year, an attacker issued a DDoS attack so massive that it slowed internet speeds around the globe. Known as Spamhaus/Cyberbunker, this attack clogged servers with dummy internet traffic at a rate of about 300 gigabits per second. By comparison, DDoS attacks against banks typically register only 50 gigabits per second, according to a recent article in Business Week. The Spamhaus attack came 13 years after the publication of best practices on preventing DDoS attacks, and it was not an isolated event.
The latest figures indicate that cyberattacks continue to rise. Research from the security firm Symantec indicates that in 2012 targeted cyberattacks increased by 42 percent. How is this possible? In part, existing technologies favor the role of attacker over the role of defender: in this hide-and-seek game, the tricks to hide an attack are many, whereas the techniques to seek them are meager and resource-intensive.
In the SEI CERT Division, our work aims to go beyond simply detecting the strategic and deceptive actions of an attacker by reversing the very incentives behind them, making more transparent the choices made in hide-and-seek dynamics. Attackers have incentives to find weaknesses in software that facilitate system compromise. We envision the possibility that these dynamics can be reversed through an altered incentive structure, credible deterrence/threats, and powerful measurement systems. For example, we may incentivize an emerging group to acquire and deploy particular expertise to evaluate software and guarantee its validity, albeit empirically, using techniques from machine learning. This combination of techniques, including expert systems, model checking, and machine learning, can ensure an increased level of resilience without loss of transparency. Moreover, game theory provides a means to evaluate the dynamics of incentives and to understand the impacts of new technologies and use cases.
Deterring Malicious Use in Systems
Existing proposals for deterring malware attacks rely on the isolation of an elite network with enhanced security protocols, which undermines the utility of networking and does little to deter incentives for maliciousness. Instead, this strategy concentrates digital assets in one place, putting all eggs in one highly vulnerable basket. Such proposals, while costly and risky, underscore the importance of introducing alternative ideas into the discussion of common information assurance goals.
For example, since computer networks gather users with a variety of different interests and intents, we may wish to incentivize computer users to take steps that will compel them to reassure other users that they have not been maliciously compromised. To obtain this assurance we may leverage the work of technical and security experts, which involves sophisticated software vulnerability-probing techniques (such as fuzz testing) and trust mechanisms (such as trusted hardware modules). With these assurances we demonstrate the possibility of economic incentives for software adopters to have deeper and clearer expectations about a network’s resilience and security.
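Fuzz testing, mentioned above, probes software by feeding it randomly mutated inputs and watching for unexpected failures. The sketch below is a minimal illustration of the idea; the target parser, its bug, and the seed input are all hypothetical stand-ins, not part of any real assessment tool.

```python
import random

SIZE_TABLE = [0] * 16  # hypothetical lookup table indexed by a record's kind byte

def parse_record(data: bytes) -> int:
    """Hypothetical target under test: reads a one-byte record kind."""
    if not data:
        raise ValueError("empty input")
    kind = data[0]
    # Planted bug: no bounds check on the kind byte before the table lookup,
    # so a kind >= 16 raises an unexpected IndexError.
    return SIZE_TABLE[kind]

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a bit, insert a byte, or delete a byte of the seed input."""
    data = bytearray(seed)
    choice = rng.randrange(3)
    pos = rng.randrange(len(data)) if data else 0
    if choice == 0 and data:
        data[pos] ^= 1 << rng.randrange(8)
    elif choice == 1:
        data.insert(pos, rng.randrange(256))
    elif data:
        del data[pos]
    return bytes(data)

def fuzz(target, seed: bytes, trials: int = 1000) -> list:
    """Collect inputs that raise anything other than the expected ValueError."""
    rng = random.Random(0)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(trials):
        candidate = mutate(seed, rng)
        try:
            target(candidate)
        except ValueError:
            pass                        # graceful rejection: expected behavior
        except Exception:
            crashes.append(candidate)   # unexpected failure: a finding
    return crashes

findings = fuzz(parse_record, b"\x03abc")
```

Each crashing input collected here is a concrete witness of a weakness, which is exactly the kind of evidence an evaluator could use to back an assurance claim.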
Foundations in Game Theory
Many of the ideas in our approach can be traced back to John von Neumann, a Princeton University mathematician who, with his colleague Oskar Morgenstern, created the basic foundations of modern game theory, which studies how rational agents make strategic choices as they interact. An example of one such strategic choice is the concept of mutual assured destruction (MAD), a doctrine holding that a war in which two sides would annihilate each other leaves no incentive for either side to start one. Once the two sides have come to such a mutually self-enforcing strategy, neither party will deviate as long as the opponent does not. Such a state of affairs is described in game theory by the concept of Nash equilibrium. We aim to cast the cybersecurity problem in a game-theoretic setting so that every “player” will choose to be honest, will check that they are honest and not an unwitting host for malware, can prove to others that they are honest, and can confidently accept proofs that others are honest and not acting deceptively.
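A Nash equilibrium can be found mechanically in small games by checking every strategy profile for unilateral improvements. The sketch below does this for a toy attacker/defender game; the payoff numbers are illustrative assumptions chosen to produce a deterrence-style equilibrium, not measured values.

```python
import itertools

# Hypothetical bimatrix game. Rows are the attacker's strategies
# {0: attack, 1: refrain}; columns are the defender's {0: harden, 1: ignore}.
# Payoffs are illustrative: attacking a hardened system is costly (-2),
# attacking an unguarded one pays (3); hardening protects the defender.
ATTACKER = [[-2, 3],
            [ 0, 0]]
DEFENDER = [[ 1, -3],
            [ 0,  0]]

def pure_nash(a, b):
    """Return all pure-strategy Nash equilibria of a bimatrix game.

    A profile (i, j) is an equilibrium if neither player can improve
    by deviating alone: i is a best row against column j, and j is a
    best column against row i.
    """
    rows, cols = len(a), len(a[0])
    equilibria = []
    for i, j in itertools.product(range(rows), range(cols)):
        row_best = all(a[i][j] >= a[k][j] for k in range(rows))
        col_best = all(b[i][j] >= b[i][k] for k in range(cols))
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria
```

With these payoffs the unique equilibrium is (refrain, harden): once the defender credibly hardens, the attacker’s best response is not to attack, which is the MAD-style logic described above in miniature.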
A Collaborative Approach
Through our collaboration with the research team at NYU—Dr. Mishra and Thomson Nguyen—we can access their extensive knowledge in game theory, pattern finding, model checking, and machine learning. We are also collaborating with Anthony Smith of the National Intelligence University. This collaboration will allow us to access Smith’s advanced knowledge in network security science and technology. Researchers from the SEI’s CERT Division involved in the project include Michael Appel, Jeff Gennari, Leigh Metcalf, Jose Morales, Jonathan Spring, Rhiannon Weaver, and Evan Wright. Building on the deep domain knowledge from CERT about the nature and origin of malicious attacks and how often those attacks occur, our collaboration with the Courant Institute will provide a better understanding of the implications of such attacks in a larger system. The theoretical framework for this approach is based on model checking and follows strategies similar to what Dr. Mishra developed for hardware verification as a graduate student at CMU.
One of our preliminary tasks was to develop the mathematical frameworks to describe vulnerabilities including attack surface, trace data, software errors and faults, and malicious traces. The ability to rigorously define these patterns allowed us to formalize and detect a large class of malicious patterns as they transfer from user to user.
As an interesting use case, we focused on several critical patterns to identify malicious behaviors in traces. By studying the Zeus Trojan horse, which was used to steal users’ banking information, we were able to identify atomic actions that allow malware to persist on a system and compromise users’ web browsers.
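One simple way to formalize such a pattern is as an ordered sequence of atomic actions that must all occur, in order, somewhere in a trace. The sketch below uses hypothetical event names as stand-ins for the persistence and browser-compromise actions discussed above; it is not Zeus’s actual trace signature.

```python
# Hypothetical atomic actions standing in for a persistence signature:
# drop a file, register it to run at startup, then hook the browser.
PERSISTENCE_PATTERN = ["write_file", "set_autorun_key", "hook_browser"]

def matches_pattern(trace, pattern):
    """True if the pattern occurs as an ordered subsequence of the trace.

    Consuming a single iterator over the trace ensures each pattern
    event must be found *after* the previous one, preserving order.
    """
    events = iter(trace)
    return all(step in events for step in pattern)

benign_trace = ["open_socket", "write_file", "read_config"]
suspect_trace = ["write_file", "open_socket",
                 "set_autorun_key", "hook_browser"]
```

Because the match is a subsequence rather than a contiguous substring, the pattern is still detected when the malware interleaves unrelated actions, which is exactly the hide-and-seek behavior described earlier.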
The enhanced system that we are proposing will additionally provide some degree of guaranteed resilience. When fully implemented, our approach will provide three key benefits to our stakeholders in government and industry:
- a well-understood theoretical model of interactions among benign and malicious users, providing a more accurate picture of forces (technological, economic, political and social) that shape the security landscape
- a scalable method for malware mitigation, including an adaptive system that can identify and address new threats
- a transparent mechanism to vet commercialized software, which touches upon our notions of trusted computing at multiple levels, from firmware to Android applications.
Measures for Resilience to Malicious Attacks
Our new system has many attractive features: it does not simply stop after identifying a network attack. Instead, it motivates and enables the deployment of measures of weaknesses using practical techniques such as vulnerability assessments for servers, fuzz testing binaries for weaknesses, and verifying adherence to best practices. These measures provide decision makers and users alike with means to adopt best practices and keep the network operational. Consequently, the system’s designers will also better understand what security features are needed in response to current threats and attack methods. Many software vulnerabilities result from implicit assumptions made at the time of design. While it may not be possible to anticipate all the attacks against a design, we can begin to measure and minimize the time it takes for designers to respond to current attacks within our framework of measures.
In summary, we believe the proposed system will not just deter malicious attackers, but will also motivate users to ensure that their computers and mobile devices have not been purposefully or unintentionally compromised. In addition, designers will benefit from building in security as user demand for security increases.
There is no widely accepted definition of what constitutes malicious behaviors stated in a way that can be understood and guarded against by average users. We need to be able to help average users and those in government and industry gain a better understanding of whether behavior is malicious or benign.
A simpler and much more popular concept used in the physical context is trust. If trust is perceived to be a valuable economic incentive in the cyber context, and users can assess whether they can trust a server or a software application, then a trust-based technique can be used and can benefit a diverse group of users, ranging from individual users to personnel in industry and government.
Our approach will be especially powerful in trusted computing situations, where trust may be based on cryptographic signatures that validate the source code that operates a device. Granted, complete certainty may still be elusive, but users can entertain some assurance about the health of their network because a third party can verify and certify that all components are trustworthy and behaving well.
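In miniature, the trusted-computing check above amounts to comparing a cryptographic measurement of the code against a value the verifier has pinned in advance. The sketch below simplifies a full signature chain down to a SHA-256 measurement with a constant-time comparison; the firmware image and pinned digest are made-up examples.

```python
import hashlib
import hmac

# Hypothetical firmware image; a real verifier would measure the actual
# bytes of the code that operates the device.
GOOD_IMAGE = b"firmware v1.2 (sample bytes standing in for a real image)"

# Known-good measurement, as a third-party certifier might publish it.
GOOD_DIGEST = hashlib.sha256(GOOD_IMAGE).hexdigest()

def attest(image: bytes, expected_hex: str) -> bool:
    """Compare the image's SHA-256 measurement against the pinned value.

    hmac.compare_digest performs the comparison in constant time,
    avoiding timing side channels in the check itself.
    """
    measured = hashlib.sha256(image).hexdigest()
    return hmac.compare_digest(measured, expected_hex)
```

Any single-byte tampering with the image changes the digest and fails the check, which is what lets a third party certify components without inspecting them on every use.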
Our Next Steps
Our next step is to check our design, as well as our implicit assumptions about how individuals behave in this framework. To this end, we have been developing a system that we can simulate in silico—initially aimed at understanding only the incentives to attack and to counter attacks with mitigations—so that we can better understand how individuals strategize to select an equilibrium (a strategy profile from which no single player is incentivized to deviate). In the future, we plan to invest in deep, multi-trace modeling, extending the game theory to include temporal patterns of attacks on software and systems, which will involve simulation modeling and model checking. Through simulation modeling we can estimate the resource needs, overheads, and other requirements of the system for practical deployments.
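A first in-silico check of this kind can be as simple as letting the two players alternate best responses until neither wants to change strategy, i.e., until they settle into a pure Nash equilibrium. The payoff numbers below are illustrative assumptions for an attacker/defender interaction, not measured data.

```python
# Hypothetical payoffs. Attacker rows: {0: attack, 1: refrain};
# defender columns: {0: mitigate, 1: ignore}. Attacking a mitigated
# system is costly; mitigation is cheap relative to absorbing an attack.
ATTACKER = [[-2, 3],
            [ 0, 0]]
DEFENDER = [[ 1, -3],
            [ 0,  0]]

def best_row(col):
    """Attacker's best response to the defender's current column."""
    return max(range(2), key=lambda i: ATTACKER[i][col])

def best_col(row):
    """Defender's best response to the attacker's current row."""
    return max(range(2), key=lambda j: DEFENDER[row][j])

def simulate(row=0, col=1, rounds=20):
    """Alternate best responses from a starting profile until no one
    deviates; the fixed point reached is a pure Nash equilibrium."""
    for _ in range(rounds):
        next_row = best_row(col)
        next_col = best_col(next_row)
        if (next_row, next_col) == (row, col):
            break
        row, col = next_row, next_col
    return row, col
```

Starting from the worst case (attack, ignore), the dynamics settle on (refrain, mitigate): deterrence emerges from the incentives rather than being imposed. Richer simulations of this type can then be used to estimate the resource needs and overheads mentioned above.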
For more information about the work of researchers in the SEI’s CERT Division, please visit
For more information about New York University’s Courant Institute of Mathematical Sciences, please visit