By Douglas C. Schmidt
The SEI Blog continues to attract an ever-increasing number of readers interested in learning more about our work in agile metrics, high-performance computing, malware analysis, testing, and other topics. As we reach the mid-year point, this blog posting highlights our 10 most popular posts and links to additional related resources you might find of interest. (Many of our posts cover related research areas, so we grouped them together for ease of reference.)
By Kevin Fall
Deputy Director, Research, and CTO
The Department of Defense (DoD) and other government agencies increasingly
rely on software and networked software systems. As one of over 40 federally funded research and development centers sponsored by the United States government, Carnegie Mellon University’s Software Engineering Institute (SEI)
is working to help the government acquire, design, produce, and evolve
software-reliant systems in an affordable and secure manner. The
quality, safety, reliability, and security of software and the
cyberspace it creates are major concerns for both embedded systems and
enterprise systems employed for information processing tasks in health
care, homeland security, intelligence, logistics, etc. Cybersecurity risks, a primary focus area of the SEI's CERT Division, regularly appear in news media and have resulted in policy action at the highest levels of the US government (see Report to the President: Immediate Opportunities for Strengthening the Nation's Cybersecurity). This blog posting is the first in a series describing the SEI's
five-year technical strategic plan, which aims to equip the government
with the best combination of thinking, technology, and methods to
address its software and cybersecurity challenges.
By Scott McMillan
Senior Member of the Technical Staff
SEI Emerging Technology Center
This blog post was co-authored by Eric Werner.
Graph algorithms are in wide use in Department of Defense (DoD) software
applications, including intelligence analysis, autonomous systems,
cyber intelligence and security, and logistics optimization. In late
2013, several luminaries from the graph analytics community released a position paper
calling for an open effort, now referred to as GraphBLAS, to define a
standard for graph algorithms in terms of linear algebraic operations.
BLAS stands for Basic Linear Algebra Subprograms
and is a common library specification used in scientific computation. The
authors of the position paper propose extending the National Institute of
Standards and Technology’s Sparse Basic Linear Algebra Subprograms
(spBLAS) library to perform graph computations. The position paper
served as the latest catalyst for the ongoing research by the SEI’s Emerging Technology Center in the field of graph algorithms and heterogeneous high-performance computing (HHPC). This blog post, the second in our series,
describes our efforts to create a software library of graph algorithms
for heterogeneous architectures that will be released via open source.
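The central idea behind GraphBLAS, expressing graph traversal as operations on a (sparse) adjacency matrix, can be sketched in a few lines. The example below is an illustration of the concept only, using dense NumPy arrays rather than the spBLAS or GraphBLAS APIs themselves: one step of breadth-first search becomes a vector-matrix product whose nonzero entries are the vertices reachable in one hop.

```python
import numpy as np

# A small directed graph: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3.
# A[i, j] = 1 means there is an edge from vertex i to vertex j.
A = np.array([
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
])

def bfs_levels(A, source):
    """Breadth-first search written as repeated matrix-vector products.

    Returns an array of BFS levels (hop counts from `source`),
    with -1 marking unreachable vertices.
    """
    n = A.shape[0]
    levels = np.full(n, -1)
    levels[source] = 0
    frontier = np.zeros(n)
    frontier[source] = 1.0
    level = 0
    while frontier.any():
        level += 1
        # One "hop": the vector-matrix product collects all
        # out-neighbors of every vertex in the current frontier.
        reached = frontier @ A
        # Keep only vertices not yet assigned a level.
        new = (reached > 0) & (levels < 0)
        levels[new] = level
        frontier = new.astype(float)
    return levels

print(bfs_levels(A, 0))  # [0 1 1 2]
```

A real GraphBLAS implementation would use sparse storage and generalized semirings in place of the dense `@` product here, which is what makes the approach practical for the large, sparse graphs typical of the applications above.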
By Eric Werner
SEI Emerging Technology Center
The power and speed of computers have increased exponentially in recent years. Modern computer architectures, however, are moving away from single-core and multi-core (homogeneous) central processing units (CPUs) toward many-core (heterogeneous) systems. This blog post describes research I've undertaken with my colleagues at the Carnegie Mellon Software Engineering Institute (SEI)—including Jonathan Chu and Scott McMillan of the Emerging Technology Center (ETC) and Alex Nicoll, a researcher with the SEI's CERT Division—to create a software library that can exploit the heterogeneous parallel computers of the future and allow developers to create systems that are more efficient in terms of computation and power consumption.