By Douglas C. Schmidt
In the first half of this year, the SEI blog has experienced unprecedented growth, with record numbers of visitors learning more about our work in secure coding for Android, malware analysis, Heartbleed, and V models for testing. In the first six months of 2014 (through June 20), the SEI blog logged 60,240 visits, nearly matching the entire 2013 total of 66,757 visits. As we reach the mid-year point, this post looks back at our most popular areas of work (at least according to you, our readers), highlights our most popular blog posts for the first half of 2014, and links to additional related resources that readers might find of interest.
Secure Coding for the Android Platform
One of the most popular areas of research among SEI blog readers so far this year has been the series of posts highlighting our work on secure coding for the Android platform. Android is an important area to focus on, given its mobile device market dominance (82 percent of worldwide market share in the third quarter of 2013), the adoption of Android by the Department of Defense, and the emergence of popular massive open online courses on Android programming and security.
Since its publication in late April, the post Two Secure Coding Tools for Analyzing Android Apps, by Will Klieber and Lori Flynn, has been among the most popular on our site. The post highlights a tool they developed, DidFail, that addresses a problem often seen in information flow analysis: the leakage of sensitive information from a sensitive source to a restricted sink (taint flow). Previous static analyzers for Android taint flow did not combine precise analysis within components with analysis of communication between Android components (intent flows). CERT’s new tool analyzes taint flow for sets of Android apps, not only single apps.
DidFail is available to the public as a free download. Also available is a small test suite of apps that demonstrates the functionality that DidFail provides.
The second tool, which was developed for a limited audience and is not yet publicly available, addresses activity hijacking attacks, which occur when a malicious app receives a message (an intent) that was intended for another app, but not explicitly designated for it.
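The class of flaw these tools target can be sketched abstractly: data flows from a sensitive source in one component, across an intent, to a restricted sink in another component. The toy Python sketch below illustrates that idea as a reachability check over an app-set graph. It is a simplified illustration only, not DidFail itself, and all app and component names are invented:

```python
# Toy illustration of app-set taint-flow analysis (not DidFail itself):
# a component may read a sensitive source, pass data to another component
# via an intent, and ultimately write to a restricted sink.

from collections import deque

def find_taint_flows(components, intent_edges):
    """Return (source_component, sink_component) pairs reachable via intents.

    components: dict name -> {"source": bool, "sink": bool}
    intent_edges: list of (sender, receiver) pairs
    """
    graph = {}
    for sender, receiver in intent_edges:
        graph.setdefault(sender, []).append(receiver)

    flows = set()
    for name, props in components.items():
        if not props["source"]:
            continue
        # Breadth-first search over intent edges from each tainted source.
        seen, queue = {name}, deque([name])
        while queue:
            current = queue.popleft()
            if components[current]["sink"]:
                flows.add((name, current))
            for nxt in graph.get(current, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return flows

# Two hypothetical apps: App1 reads the device ID (source) and sends it
# via an intent to App2, which writes it to the network (sink).
apps = {
    "App1.Main":   {"source": True,  "sink": False},
    "App2.Upload": {"source": False, "sink": True},
}
edges = [("App1.Main", "App2.Upload")]
print(find_taint_flows(apps, edges))  # {('App1.Main', 'App2.Upload')}
```

Note that analyzing either app alone would find no complete flow; only combining intra-component facts with inter-app intent edges, as DidFail does for real Android apps, exposes the leak.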
The post by Klieber and Flynn is the latest in a series detailing the CERT Secure Coding team’s work on techniques and tools for analyzing code for mobile computing platforms.
In April, Flynn also authored a post, Secure Coding for the Android Platform, that highlights secure coding rules and guidelines specific to the use of Java in the Android platform. Although the CERT Secure Coding team has developed secure coding rules and guidelines for Java, prior to 2013 the team had not developed a set of secure coding rules specific to Java's application in the Android platform. Flynn's post discusses our initial set of Android rules and guidelines, which involved mapping our existing Java secure coding rules and guidelines to Android and creating new Android-specific rules for Java secure coding.
Readers interested in finding out more about the CERT Secure Coding Team’s work in secure coding for the Android platform can view the following additional resources:
- Paper: Android Taint Flow Analysis for App Sets (SOAP 2014 workshop)
- Presentation: Android Taint Flow Analysis for App Sets
- Thesis: Precise Static Analysis of Taint Flow for Android Application Sets
- CERT Secure Coding Rules and Guidelines: CERT Secure Coding Rules and Guidelines for Android wiki
The CERT C Coding Standard
For more than 10 years, the CERT Secure Coding Initiative at the SEI has been developing guidance for developers and programmers through a wiki-based community process involving security researchers, language experts, and software developers. The most recent result of that work is The CERT C Coding Standard, Second Edition. In a post published in early May, CERT Secure Coding technical manager Robert Seacord explored the importance of a well-documented and enforceable coding standard in helping programmers circumvent pitfalls and avoid vulnerabilities like Heartbleed.
Readers interested in finding out more about the CERT Secure Coding Team’s work on the C Coding Standard can view the following additional resources:
- Book: The CERT C Coding Standard, Second Edition: 98 Rules for Developing Safe, Reliable, and Secure Systems
- Newsletter: Subscribe to our Secure Coding eNewsletter.
- CERT Secure Coding Rules and Guidelines: CERT C Coding Standard wiki (To sign up for a free account on the CERT Secure Coding wiki, please visit http://www.securecoding.cert.org.)
Heartbleed
The Heartbleed bug, a serious vulnerability in the OpenSSL cryptographic software library, enables attackers to steal information that, under normal conditions, is protected by the Secure Sockets Layer/Transport Layer Security (SSL/TLS) encryption used to secure the Internet. Heartbleed left many questions in its wake:
- Would the vulnerability have been detected by static analysis tools?
- If the vulnerability has been in the wild for two years, why did it take so long to come to public attention?
- Who is ultimately responsible for open-source code reviews and testing?
- Is there anything we can do to work around Heartbleed to provide security for web-based banking and email applications?
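At its core, the bug is a missing bounds check: a heartbeat request declares a payload length, and the vulnerable code echoes that many bytes back without verifying them against the payload actually received, leaking adjacent memory. The toy Python simulation below illustrates that pattern; it is not OpenSSL code, and the buffer contents and function names are invented for illustration:

```python
# Simplified simulation of the Heartbleed over-read pattern (not OpenSSL
# code): a naive handler trusts an attacker-supplied length field.

SECRET = b"PRIVATE-KEY-MATERIAL"  # adjacent data in the process's memory

def heartbeat_naive(memory, payload_len):
    # Bug: echoes payload_len bytes without checking the real payload size.
    return memory[:payload_len]

def heartbeat_fixed(memory, payload_len, actual_len):
    # Fix: validate the declared length against the received payload size.
    if payload_len > actual_len:
        return b""  # drop the malformed request
    return memory[:payload_len]

payload = b"bird"          # the real 4-byte payload
memory = payload + SECRET  # payload sits next to secret data in memory

leaked = heartbeat_naive(memory, 24)  # attacker requests 24 bytes back
print(leaked)                         # b'birdPRIVATE-KEY-MATERIAL'
print(heartbeat_fixed(memory, 24, len(payload)))  # b''
```

In C, the over-read returns whatever happens to sit past the buffer, which is why repeated malicious heartbeats could harvest private keys and session data.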
In April 2014, researchers from the SEI and Codenomicon, one of the cybersecurity organizations that discovered the Heartbleed vulnerability, participated in a panel to discuss Heartbleed and strategies for preventing future vulnerabilities. During the panel discussion, researchers ran out of time to address all of the questions asked by the audience, so they transcribed the questions and panel members wrote responses. We published the questions and responses as a blog post that was among our most popular posts in the last six months.
Readers interested in finding out more about Heartbleed can view the following additional resources:
- Webinar: A Discussion on Heartbleed: Analysis, Thoughts, and Actions
- Vulnerability Note: CERT researchers created a vulnerability note about the Heartbleed bug that records information about affected vendors as well as other useful information.
Automated Testing in Open Systems Architecture Initiatives
In March, we published our first SEI video blog with my post, The Importance of Automated Testing in Open Systems Architecture Initiatives, which was also well received by our readers. In the post, I described how the Navy is operationalizing Better Buying Power in the context of their Open Systems Architecture and Business Innovation initiatives. Given the expense of our major defense acquisition programs—coupled with budget limitations stemming from the fiscally constrained environment—the United States Department of Defense (DoD) has made cost containment a top priority. The Better Buying Power 2.0 initiative is a concerted effort by the DoD to achieve greater efficiencies in the development, sustainment, and recompetition of major defense acquisition programs through cost control, elimination of unproductive processes and bureaucracy, and promotion of open competition.
In the post, I also presented results from a recent online war game that underscore the importance of automated testing in these initiatives to help avoid common traps and pitfalls of earlier cost-containment measures. The Massive Multiplayer Online Wargame Leveraging the Internet (MMOWGLI) platform used for this online war game was developed by the Naval Postgraduate School in Monterey, California. This web-based platform supports thousands of distributed players who work together in a crowdsourcing manner to encourage innovative thinking, generate problem-solving ideas, and plan actions that realize those ideas.
Given the current fiscal climate in the DoD, it's not surprising that many action plans in the second Business Innovation Initiative MMOWGLI war game dealt with cost-containment strategies. In the post, I listed several of those action plans (each followed by its goal):
- providing a bonus to Navy team members who save money on acquisition programs. The goal is to incentivize program office teams to take both a short- and long-term view toward efficient acquisitions by optimizing prompt/early delivery of artifacts with accrued savings over the lifecycle.
- rewarding a company for saving money on an acquisition contract: top savers would be publicly recognized and rewarded. The goal is to allow effective public image improvement for both government and industry partners of all sizes and types to receive tangible recognition of cost-saving innovations.
- increasing the incentive paid to a contractor if the actual cost of its delivered solution was less than the targeted cost. The goal is to give industry a clear mechanism for reporting cost savings, a clear way to calculate the reward for cost savings, and a transparent method for inspecting actuals over time.
Readers interested in finding out more about other work in this field can view the following resources:
- Video Blog: The Importance of Automated Testing in Open Systems Architecture Initiatives
- Paper: Experiences Using Online War Games to Improve the Business of Naval Systems Acquisition
Three Variations on the V Model of Testing
Don Firesmith's post, Using V Models for Testing, which was published in November, remained one of the most popular posts on our site throughout the first half of this year. It introduces three variants on the traditional V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.
The V model builds on the traditional waterfall model of system or software development by emphasizing verification and validation. The V model takes the bottom half of the waterfall model and bends it upward into the form of a V, so that the activities on the right verify or validate the work products of the activity on the left.
More specifically, the left side of the V represents the analysis activities that decompose the users’ needs into small, manageable pieces, while the right side of the V shows the corresponding synthesis activities that aggregate (and test) these pieces into a system that meets the users’ needs.
- The single V model modifies the nodes of the traditional V model to represent the executable work products to be tested rather than the activities used to produce them.
- The double V model adds a second V to show the type of tests corresponding to each of these executable work products.
- The triple V model adds a third V to illustrate the importance of verifying the tests to determine whether they contain defects that could stop or delay testing or lead to false positive or false negative test results.
In the triple V model, it is not required or even advisable to wait until the right side of the V to perform testing. Unlike the traditional model, where tests may be developed but not executed until the code exists (i.e., the right side of the V), executable requirements and architecture models allow tests to be executed on the left side of the V.
Readers interested in finding out more about Firesmith's work in this field can view the following resources:
- Book: Common System and Software Testing Pitfalls
- Podcast: Three Variations on the V Model for System and Software Testing
DevOps
With the post An Introduction to DevOps, C. Aaron Cois kicked off a series exploring various facets of DevOps, drawing both on his own experiences as a software engineering team lead and on the impact of DevOps on the software community at large.
Here’s an excerpt from his initial post:
At Flickr, the video- and photo-sharing website, the live software platform is updated at least 10 times a day. Flickr accomplishes this through an automated testing cycle that includes comprehensive unit testing and integration testing at all levels of the software stack in a realistic staging environment. If the code passes, it is then tagged, released, built, and pushed into production. This type of lean organization, where software is delivered on a continuous basis, is exactly what the agile founders envisioned when crafting their manifesto: a nimble, streamlined process for developing and deploying software into the hands of users while continuously integrating feedback and new requirements. A key to Flickr's prolific deployment is DevOps, a software development concept that literally and figuratively blends development and operations staff and tools in response to the increasing need for interoperability.
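The deployment gate described in that excerpt can be sketched as a simple pipeline: every test stage must pass before any release step runs. The Python sketch below is a minimal, hypothetical model of that flow; the stage and function names are illustrative and are not Flickr's actual tooling:

```python
# Minimal sketch of a continuous-deployment gate: release steps run only
# after every automated test stage passes. All names are illustrative.

def run_stage(name, check):
    """Run one pipeline stage; return True on success."""
    ok = check()
    print(f"{name}: {'passed' if ok else 'FAILED'}")
    return ok

def deploy_if_green(unit_tests, integration_tests, release_steps):
    # Gate: any failing test stage blocks the release entirely.
    if not run_stage("unit tests", unit_tests):
        return "blocked"
    if not run_stage("integration tests (staging)", integration_tests):
        return "blocked"
    for step in release_steps:  # e.g., tag, build, push to production
        step()
    return "deployed"

result = deploy_if_green(
    unit_tests=lambda: True,
    integration_tests=lambda: True,
    release_steps=[lambda: print("tagged"),
                   lambda: print("built"),
                   lambda: print("pushed to production")],
)
print(result)  # deployed
```

The design point is that the gate is automated and unconditional: frequent deployment is safe only because no human judgment is needed to decide whether the build is releasable.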
Earlier this month, Cois continued the series with the post A Generalized Model for Automated DevOps. In that post, Cois presents a generalized model for automated DevOps and describes the significant potential advantages for a modern software development team.
Readers interested in learning more about DevOps can explore the other posts in Cois's DevOps series.
Prioritizing Malware Analysis
Every day, analysts at major anti-virus companies and research organizations are inundated with new malware samples. From Flame to lesser-known strains, figures indicate that the number of malware samples released each day continues to rise. In 2011, malware authors unleashed approximately 70,000 new strains per day, according to figures reported by Eugene Kaspersky. The following year, McAfee reported that 100,000 new strains of malware were unleashed each day. An article published in the October 2013 issue of IEEE Spectrum updated that figure to approximately 150,000 new malware strains per day. Not enough manpower exists to manually address the sheer volume of new malware samples that arrive daily in analysts' queues.
CERT researcher Jose Morales sought to develop an approach that would allow analysts to identify and focus first on the most destructive binary files. In his blog post A New Approach to Prioritizing Malware Analysis, Morales describes the results of research he conducted with fellow researchers at the SEI and CMU's Robotics Institute, demonstrating with 98 percent accuracy the validity of an approach that helps analysts distinguish between malicious and benign binary files. This post is a follow-up to his 2013 post, Prioritizing Malware Analysis, which describes the approach, based on a file's execution behavior.
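The triage idea can be sketched simply: score each sample by the suspicious behaviors observed when it executes, then order the analysts' queue by score. The Python sketch below is a toy illustration in the spirit of that approach; the behavior names and weights are invented for illustration and are not the actual features from the research:

```python
# Toy sketch of behavior-based malware triage: order incoming samples so
# analysts see the most suspicious first. Weights and behavior names are
# invented for illustration, not taken from the actual research.

SUSPICION_WEIGHTS = {
    "writes_to_system_dir": 3,
    "creates_autorun_key":  3,
    "opens_remote_socket":  2,
    "spawns_processes":     1,
}

def suspicion_score(behaviors):
    """Sum weights for the execution behaviors observed in a sandbox run."""
    return sum(SUSPICION_WEIGHTS.get(b, 0) for b in behaviors)

def prioritize(queue):
    """Sort the analysis queue, most suspicious samples first."""
    return sorted(queue,
                  key=lambda s: suspicion_score(s["behaviors"]),
                  reverse=True)

samples = [
    {"name": "sample_a.exe", "behaviors": ["spawns_processes"]},
    {"name": "sample_b.exe", "behaviors": ["creates_autorun_key",
                                           "opens_remote_socket"]},
]
print([s["name"] for s in prioritize(samples)])
# ['sample_b.exe', 'sample_a.exe']
```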
Readers interested in learning more about prioritizing malware analysis can read Morales's earlier post, Prioritizing Malware Analysis.
In the coming months, we will continue our series on DevOps and publish posts exploring code quality metrics, contextual computing, and many other topics.
Thank you for your support. We publish a new post on the SEI blog every Monday morning. Let us know if there is any topic you would like to see covered.
We welcome your feedback in the comments section below.