Entries Tagged as 'High-Performance Computing'

A Five-Year Technical Strategic Plan for the SEI

Big Data, Cyber-physical Systems, High-Performance Computing, Model-Based Engineering

By Kevin Fall
Chief Technology Officer

The Department of Defense (DoD) and other government agencies increasingly rely on software and networked software systems. As one of over 40 federally funded research and development centers sponsored by the United States government, Carnegie Mellon University’s Software Engineering Institute (SEI) is working to help the government acquire, design, produce, and evolve software-reliant systems in an affordable and secure manner. The quality, safety, reliability, and security of software and the cyberspace it creates are major concerns for both embedded systems and enterprise systems employed for information processing tasks in health care, homeland security, intelligence, logistics, etc. Cybersecurity risks, a primary focus area of the SEI’s CERT Division, regularly appear in news media and have resulted in policy action at the highest levels of the US government (see Report to the President: Immediate Opportunities for Strengthening the Nation’s Cybersecurity). This blog posting is the first in a series describing the SEI’s five-year technical strategic plan, which aims to equip the government with the best combination of thinking, technology, and methods to address its software and cybersecurity challenges.


Developing a Software Library for Graph Analytics

High-Performance Computing

by Scott McMillan
Senior Member of the Technical Staff
SEI Emerging Technology Center

This blog post was co-authored by Eric Werner.

Graph algorithms are in wide use in Department of Defense (DoD) software applications, including intelligence analysis, autonomous systems, cyber intelligence and security, and logistics optimization. In late 2013, several luminaries from the graph analytics community released a position paper calling for an open effort, now referred to as GraphBLAS, to define a standard for graph algorithms in terms of linear algebraic operations. BLAS stands for Basic Linear Algebra Subprograms, a common library specification used in scientific computation. The authors of the position paper propose extending the National Institute of Standards and Technology’s Sparse Basic Linear Algebra Subprograms (spBLAS) library to perform graph computations. The position paper served as the latest catalyst for the ongoing research by the SEI’s Emerging Technology Center in graph algorithms and heterogeneous high-performance computing (HHPC). This blog post, the second in our series, describes our efforts to create a software library of graph algorithms for heterogeneous architectures that will be released as open source.
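To give a flavor of the linear-algebraic view that the GraphBLAS effort advocates, here is a minimal sketch in plain Python. It is not the SEI library or the GraphBLAS API; the function name and adjacency-set representation are illustrative only. It shows the core idea: a breadth-first search is equivalent to repeatedly multiplying the graph's sparse adjacency matrix by a "frontier" vector over a Boolean semiring (OR in place of addition, AND in place of multiplication).

```python
def bfs_levels(adj, source):
    """Breadth-first search expressed in the matrix-vector style.

    adj: dict mapping each node to the set of its out-neighbors,
         i.e. a sparse adjacency matrix stored by rows.
    Returns a dict mapping each reachable node to its BFS level.
    """
    levels = {source: 0}
    frontier = {source}  # the sparse "vector" x in y = A^T x
    level = 0
    while frontier:
        level += 1
        next_frontier = set()
        # One Boolean-semiring mat-vec: for every nonzero in the
        # frontier vector, OR in the corresponding adjacency row,
        # masking out nodes already assigned a level.
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in levels:
                    levels[v] = level
                    next_frontier.add(v)
        frontier = next_frontier
    return levels

# Example: a small directed graph, 0 -> {1, 2}, 1 -> 3, 2 -> 3, 3 -> 4
adj = {0: {1, 2}, 1: {3}, 2: {3}, 3: {4}}
print(bfs_levels(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The payoff of this formulation is that the inner loops become a single sparse matrix-vector product, which libraries like spBLAS already know how to execute efficiently on parallel and heterogeneous hardware.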


Architecting Systems of the Future

High-Performance Computing

By Eric Werner
Chief Architect
SEI Emerging Technology Center

The power and speed of computers have increased exponentially in recent years. Modern computer architectures, however, are moving away from single-core and multi-core (homogeneous) central processing units (CPUs) toward many-core (heterogeneous) architectures. This blog post describes research I’ve undertaken with my colleagues at the Carnegie Mellon Software Engineering Institute (SEI), including Jonathan Chu and Scott McMillan of the Emerging Technology Center (ETC) and Alex Nicoll, a researcher with the SEI’s CERT Division, to create a software library that can exploit the heterogeneous parallel computers of the future and allow developers to create systems that are more efficient in terms of computation and power consumption.