2013: The Research Year in Review


By Douglas C. Schmidt
Principal Researcher

As part of our mission to advance the practice of software engineering and cybersecurity through research and technology transition, our work focuses on ensuring that software-reliant systems are developed and operated with predictable and improved quality, schedule, and cost. To achieve this mission, the SEI conducts research and development activities involving the Department of Defense (DoD), federal agencies, industry, and academia. As we look back on 2013, this blog posting highlights our many R&D accomplishments during the past year.

Before turning to our accomplishments, it’s important to note that 2013 brought the arrival of Kevin Fall as deputy director and chief technology officer. In the blog post, A New CTO and Technical Strategy for the SEI, Fall provided some background on his experience, as well as his technical goals for the SEI:

  • Develop an even higher quality and more cohesive research program
  • Increase collaboration with Carnegie Mellon University and other academic researchers
  • Enhance accessibility to the SEI’s work

Kevin leads R&D at the SEI, which benefits the DoD and other sponsors by identifying and solving key technical challenges facing developers and managers of current and future software-reliant systems. The R&D work at the SEI presented in this blog focused on a range of software engineering and cybersecurity areas, including

  • Securing the cyber infrastructure. This area focuses on enabling informed trust and confidence in the use of information and communication technology, helping to ensure a securely connected world and to protect and sustain vital U.S. cyber assets and services in the face of full-spectrum attacks by sophisticated adversaries.
  • Advancing disciplined methods for engineering software. This area focuses on improving the availability, affordability, and sustainability of software-reliant systems through data-driven models, measurement, and management methods to reduce the cost, acquisition time, and risk of our major defense acquisition programs.
  • Accelerating assured software delivery and sustainment for the mission. This area focuses on ensuring predictable mission performance in the acquisition, operation, and sustainment of software-reliant systems to expedite delivery of technical capabilities to win the current fight.
  • Innovating software for competitive and tactical advantage. This area focuses on safety-critical avionics, aerospace, medical, and automotive systems, all of which are becoming increasingly reliant on software. Posts in this area also highlight innovations that revolutionize the development of assured software-reliant systems to maintain the U.S. competitive and tactical edge in software technologies vital to national security.

What follows is a sampling of the SEI’s R&D accomplishments in each of these areas during 2013, with links to additional information about these projects.

Securing the Cyber Infrastructure

Some cybersecurity attacks against the DoD and other government organizations are caused by disgruntled, greedy, or subversive insiders: employees or contractors with access to the organization’s networks, systems, or data. Over the past 13 years, researchers at the CERT Insider Threat Center have collected incidents of malicious insider activity from a number of sources, including media reports, the courts, the United States Secret Service, victim organizations, and interviews with convicted felons.

In a series of blog posts, members of the research team have presented some of the 26 patterns identified by analyzing the insider threat database. Through our analysis, insider threat researchers have identified more than 100 categories of weaknesses in systems, processes, people, or technologies that allowed insider threats to occur. One aspect of their research focuses on identifying enterprise architecture patterns that organizations can use to protect their systems from malicious insiders.

Now that we’ve developed 26 patterns, our next priority is to assemble them into a pattern language that organizations can use to bolster their resources and make them more resilient against insider threats. The blog post A Multi-Dimensional Approach to Insider Threat is the third installment in a series describing research to create and validate an insider threat mitigation pattern language that helps organizations balance the cost of security controls with the risk of insider compromise.

Exposed vulnerable assets make a network a target of opportunity, or low-hanging fruit, for attackers. According to the 2012 Data Breach Investigations Report, the 855 incidents of corporate data theft reported that year compromised 174 million records, and 79 percent of victims were targets of opportunity because they had an easily exploitable weakness. The blog post Network Profiling Using Flow highlighted recent research into how a network administrator can use network flow data to create a profile of externally facing assets on mid- to large-sized networks.

New malicious code analysis techniques and tools being developed at the SEI will better counter and exploit adversarial use of information and communication technologies. Through our work in cybersecurity, we have amassed millions of pieces of malicious software in a large malware database. Analyzing this code manually for potential similarities and identifying malware provenance is a painstaking process. The blog post Prioritizing Malware Analysis outlined a research collaboration with CMU’s Robotics Institute aimed at developing an approach for prioritizing malware samples in an analyst’s queue (allowing analysts to home in on the most destructive malware first) based on each file’s execution behavior.

Another blog post, Semantic Comparison of Malware Functions, described research aimed at helping analysts derive precise and timely actionable intelligence to understand and respond to malware. The approach described in the post uses the semantics of programming languages to determine the origin of malware.

The blog post Analyzing Routing Tables highlighted another aspect of our work in cybersecurity. The post detailed maps that a CERT researcher developed using Border Gateway Protocol (BGP) routing tables to show the evolution of public-facing autonomous system numbers (ASNs). These maps help analysts inspect BGP routing tables to reveal disruptions to an organization’s infrastructure. They also help analysts glean geopolitical information about an organization, country, or city-state, which helps them identify how and when network traffic is subverted onto nefarious alternative paths that deliberately place communications at risk.

Exclusively technical approaches to cybersecurity have pressured malware attackers to evolve greater technical sophistication and to harden their attacks with increased precision, including socially engineered malware and distributed denial-of-service (DDoS) attacks. A general and simple design for achieving cybersecurity remains elusive, and addressing malware has become such a monumental task that technological, economic, and social forces must be joined to address it. The blog post Deterrence for Malware: Towards a Deception-Free Internet detailed a collaboration between the SEI’s CERT Division and researchers at the Courant Institute of Mathematical Sciences at New York University. Through this collaboration, researchers aim to understand and seek out complex patterns in malicious use cases within the context of security systems and to develop an incentives-based measurement system that would evaluate software and ensure a level of resilience to attack.

Our security experts in the CERT Division are often called upon to audit software and provide expertise on secure coding practices. The blog post Using the Pointer Ownership Model to Secure Memory Management in C and C++ described a research initiative aimed at eliminating vulnerabilities that result from memory management problems in C and C++. These problems can lead to serious consequences, including hard-to-fix bugs, performance impediments, program crashes (such as null pointer dereferences and out-of-memory errors), and remote code execution.

Advancing Disciplined Methods for Engineering Software

New data sources, ranging from diverse business transactions to social media, high-resolution sensors, and the Internet of Things, are creating a digital tidal wave of big data that must be captured, processed, integrated, analyzed, and archived. Big data systems that store and analyze petabytes of data are becoming increasingly common in many application areas. These systems represent major, long-term investments requiring considerable financial commitments and massive-scale software and system deployments.

With analysts estimating data storage growth at 30 to 60 percent per year, organizations must develop a long-term strategy to address the challenge of managing projects that analyze exponentially growing data sets with predictable, linear costs. The blog post, Addressing the Software Engineering Challenges of Big Data, described a lightweight risk reduction approach called Lightweight Evaluation and Architecture Prototyping (for Big Data). The approach is based on principles drawn from proven architecture and technology analysis and evaluation techniques to help the DoD and other enterprises develop and evolve systems to manage big data.

The post Architecting Systems of the Future is the first in a series highlighting work from the SEI’s newest program, the Emerging Technology Center. This post highlighted research aimed at creating a software library that can exploit the heterogeneous parallel computers of the future and allow developers to create systems that are more efficient in terms of computation and power consumption. 

Accelerating Assured Software Delivery and Sustainment for the Mission

SEI researchers work with acquisition professionals and system integrators to develop methods and processes that enable large-scale software-reliant government systems to innovate rapidly and adapt products and systems to emerging needs within compressed time frames and within constrained budgets. To deliver enhanced integrated warfighting capability at lower cost across the enterprise and over the lifecycle, the DoD must move away from stove-piped solutions and towards a limited number of technical reference frameworks based on reusable hardware and software components and services. There have been previous efforts in this direction, but in an era of sequestration and austerity, the DoD has reinvigorated its efforts to identify effective methods of creating more affordable acquisition choices and reducing the cycle time for initial acquisition and new technology insertion.

In 2013, we published two posts in an ongoing series on Open Systems Architecture (OSA):

  • Affordable Combat Systems in the Age of Sequestration expanded upon earlier coverage of how acquisition professionals and system integrators can apply OSA practices to decompose large monolithic business and technical designs into manageable, capability-oriented frameworks that can integrate innovation more rapidly and lower total ownership costs.
  • The Architectural Evolution of DoD Combat Systems described the evolution of DoD combat systems from ad hoc stovepipes to more modular and layered architectures. Despite substantial advances in technical reference frameworks during the past decade, widespread adoption of affordable and dependable OSA-based solutions has remained elusive. It is therefore important to look at past open-systems efforts across the DoD to understand what worked, what hasn’t, and what can be done to make the development of systems more successful in the future.

Government agencies, including the departments of Defense, Veterans Affairs, and the Treasury, are being asked by their government program offices to adopt Agile methods. These organizations have traditionally used a waterfall life cycle model (as epitomized by engineering “V” charts). Programming teams in these organizations are accustomed to being managed via a series of document-centric technical reviews that focus on the evolution of the artifacts describing the requirements and design of the system rather than its evolving implementation, as is more common with Agile methods.

As a result of the factors outlined above, many organizations struggle to adopt Agile practices. For example, acquisition professionals often wonder how to fit Agile measurement practices into their progress tracking systems. They also find it hard to prepare for technical reviews that don’t account for both implementation artifacts and the availability of requirements/design artifacts. A team of SEI researchers is dedicated to helping government programs prepare for and, if appropriate, implement Agile. In 2013, the SEI continued its series of blog posts on the Readiness & Fit Analysis (RFA) approach, which helps organizations understand the risks involved when contemplating or embarking on the adoption of new practices, in this case Agile methods. Blog installments published in the series thus far have outlined the factors to consider when contemplating Agile adoption.

The verification and validation of requirements are a critical part of systems and software engineering. The importance of verification and validation (especially testing) is a major reason that the traditional waterfall development cycle underwent a minor modification to create the V model that links early development activities to their corresponding later testing activities. A blog post published in November introduced three variants on the V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.

A widely cited study for the National Institute of Standards and Technology (NIST) reports that inadequate testing methods and tools annually cost the U.S. economy between $22.2 billion and $59.5 billion, with roughly half of these costs borne by software developers in the form of extra testing and half by software users in the form of failure avoidance and mitigation efforts. The same study notes that between 25 and 90 percent of software development budgets are often spent on testing.

In April, we kicked off a series on common testing problems, highlighting the results of an analysis that documents problems commonly occurring during testing. Specifically, the series identifies and describes 77 testing problems organized into 14 categories; lists the potential symptoms by which each can be recognized, its potential negative consequences, and its potential causes; and makes recommendations for preventing the problems or mitigating their effects. The first post in the series explored why software testing is less effective, less efficient, and more expensive than it should be.

Innovating Software for Competitive and Tactical Advantage

Mission- and safety-critical avionics, aerospace, defense, medical, and automotive systems are increasingly reliant on software. Malfunctions in these systems can have significant consequences, including mission failure and loss of life, so these systems must be carefully designed, verified, and validated to ensure that they comply with system specifications and requirements and are error-free. Ensuring these properties in a timely and cost-effective manner is also vital to the competitive advantage of the companies that produce these technologies.

In March, we kicked off a series of blog posts exploring recent developments with the Architecture Analysis & Design Language (AADL) standard, which provides formal modeling concepts for describing and analyzing application system architectures in terms of distinct components and their interactions. The series highlights how the use of AADL helps alleviate mismatched assumptions among the hardware, the software, and their interactions that can lead to system failures.

Another post highlighting our work on safety-critical systems introduced the Reliability Validation and Improvement Framework, which supports early defect discovery and incremental end-to-end validation.

The Advanced Mobile Systems Initiative at the SEI focuses on helping soldiers and first responders, whether they are operating in a tactical environment (such as a war zone) or responding to a natural disaster. In both scenarios, users lack effective, context-aware use and adaptation of tactical resources and the ability to get relevant information when they critically need it. Software and system capabilities do not keep pace with these users’ changing needs and must be adapted at the operational edge, or periphery, of the network. Several blog posts this year described research in this area.

Concluding Remarks

As you can see from this summary of accomplishments, 2013 was a highly productive and exciting year for the SEI technical staff. Moreover, this blog posting just scratches the surface of SEI R&D activities. Please come back regularly to the SEI Blog for coverage of these and many other topics in the coming year. As always, we’re interested in new insights and new opportunities to partner on emerging technologies. We welcome your feedback and look forward to engaging with you on the blog, so please feel free to add your comments below.

Additional Resources


For the latest SEI technical reports and papers, please visit
www.sei.cmu.edu/library/reportspapers.cfm

For more information about R&D at the SEI as well as opportunities for collaboration, please visit
www.sei.cmu.edu/research/
