Entries for month: June 2012

The CERT Perl Secure Coding Standard

Secure Coding 21 Comments »

David Svoboda
Software Security Engineer
CERT Secure Coding Initiative

As security specialists, we are often asked to audit software and provide expertise on secure coding practices. Our research and efforts have produced several coding standards dealing specifically with security in popular programming languages, such as C, Java, and C++. This posting describes our work on the CERT Perl Secure Coding Standard, which provides a core of well-documented and enforceable coding rules and recommendations for Perl, a popular scripting language.

Read more...

Improving Security in the Latest C Programming Language Standard

Secure Coding 1 Comment »

By David Keaton,
Researcher
The CERT Secure Coding Program

Buffer overflows—an all too common problem that occurs when a program tries to store more data in a buffer, or temporary storage area, than it was intended to hold—can cause security vulnerabilities. In fact, buffer overflows led to the creation of the CERT program, starting with the infamous 1988 “Morris Worm” incident, in which a buffer overflow allowed a worm to enter a large number of UNIX systems. For the past several years, the CERT Secure Coding team has contributed to a major revision of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standard for the C programming language. Our efforts have focused on introducing much-needed enhancements to C and its standard library to address security issues such as buffer overflows. These security enhancements include (conditional) support for bounds-checking interfaces, (conditional) support for analyzability, static assertions, “no-return” functions, support for opening files for exclusive access, and the removal of the insecure gets() function. This blog posting explores two of the changes—bounds-checking interfaces and analyzability—from the December 2011 revision of the C programming language standard, known informally as C11 (each revision of the standard cancels and replaces the previous one, so there is only one C standard at a time).
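To make the first of those changes concrete, here is a minimal sketch of how a C11 static assertion and the conditional bounds-checking interfaces (Annex K, illustrated with strcpy_s()) might be used. Because Annex K support is optional, the __STDC_LIB_EXT1__ test and the snprintf() fallback shown here are assumptions about what a given library provides; this is an illustration, not an excerpt from the standard or the post.

    /* Sketch of two C11 additions: a static assertion and the
     * optional Annex K bounds-checking interfaces. */
    #define __STDC_WANT_LIB_EXT1__ 1  /* request Annex K declarations */
    #include <stdio.h>
    #include <string.h>

    /* C11 static assertion: checked at compile time, no run-time cost. */
    _Static_assert(sizeof(long) >= 4, "long must be at least 32 bits");

    int main(void)
    {
        char dest[16];
        const char *src = "example input";

    #if defined(__STDC_LIB_EXT1__)
        /* Annex K bounds-checking copy: the destination size is passed
         * explicitly, so an oversized source is rejected instead of
         * overflowing dest. */
        if (strcpy_s(dest, sizeof dest, src) != 0) {
            fputs("copy rejected\n", stderr);
            return 1;
        }
    #else
        /* Fallback when the library does not provide Annex K:
         * truncate explicitly rather than overflow. */
        snprintf(dest, sizeof dest, "%s", src);
    #endif

        puts(dest);

        /* C11 also removed gets(), which could not bound its input;
         * fgets(buf, sizeof buf, stdin) is the bounded replacement. */
        return 0;
    }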

Read more...

Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE): An Update

Measurement & Analysis, Software Cost Estimates 6 Comments »

By Dave Zubrow,
Chief Scientist
Software Engineering Process Management Program

By law, major defense acquisition programs are now required to prepare cost estimates earlier in the acquisition lifecycle, including pre-Milestone A, well before concrete technical information is available on the program being developed. Estimates are therefore often based on a desired capability—or even on an abstract concept—rather than on a concrete technical plan for achieving the desired capability. Hence the role and modeling of assumptions become more challenging. This blog posting outlines a multi-year project on Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE) conducted by the SEI Software Engineering Measurement and Analysis (SEMA) team. QUELCE is a method for improving pre-Milestone A software cost estimates through research designed to improve judgment regarding uncertainty in key assumptions (which we term program change drivers), the relationships among those drivers, and their impact on cost.

Read more...

Software Producibility for Defense

Common Operating Platform Environments (COPEs), Software Sustainment, Architecture, Acquisition, Ultra Large Scale Systems, System of Systems No Comments »

By Bill Scherlis,
Chief Technology Officer (Acting)
SEI

The extent of software in Department of Defense (DoD) systems has increased by more than an order of magnitude every decade. This is not just because there are more systems with more software; a similar growth pattern has been exhibited within individual, long-lived military systems. In recognition of this growing role for software, the Director of Defense Research and Engineering (DDR&E, now ASD(R&E)) asked the National Research Council (NRC) to undertake a study of defense software producibility, with the purpose of identifying the principal challenges and developing recommendations regarding both improvements to practice and priorities for research. The NRC appointed a committee, which I chaired, that included many individuals well known to the SEI community, including Larry Druffel, Doug Schmidt, Robert Behler, Barry Boehm, and others. After more than three years of effort—which included an intensive review and revision process—we issued our final report, Critical Code: Software Producibility for Defense. In the year and a half since the report was published, I have been asked to brief it extensively to the DoD and the Networking and Information Technology Research and Development (NITRD) communities.

This blog posting, the first in a series, highlights several of the committee’s key findings, specifically focusing on three areas of identified improvements to practice—areas where the committee judged that improvements are feasible and could substantially help the DoD to acquire, sustain, and assure software-reliant systems of all kinds.

Read more...