Improving Testing Outcomes Through Software Architecture

Architecture Analysis & Design Language (AADL), Service-Oriented Architecture, Testing

By Paul Clements, Senior Member of the Technical Staff
Research, Technology, & System Solutions

Testing plays a critical role in the development of software-reliant systems. Even with the most diligent efforts of requirements engineers, designers, and programmers, faults inevitably occur. These faults are most commonly discovered and removed by testing the system and comparing what it does to what it is supposed to do. This blog posting summarizes a method that improves testing outcomes (including efficacy and cost) in a software-reliant system by using an architectural design approach, that is, a coherent set of architectural decisions taken by architects to help meet the behavioral and quality attribute requirements of the systems being developed.

Developers of software-reliant systems must address several testing-related challenges. For example, testing is expensive and can account for more than 50 percent of a project’s schedule and budget, depending on the criticality of the system. Unfortunately, some organizations assign a budget for testing and simply stop when that budget is consumed. Safety-critical software organizations also have budgets, but they often must reach a confidence level, independent of the expenditure, to meet certification standards. In either situation, improving the efficacy and reducing the cost of testing is essential to meeting requirements and business goals.

Another challenge is that few organizations inform the testing process by considering the software architecture, which comprises the structure of the software elements in a system, the externally visible properties of those elements, and the relationships among them. Ignoring or overlooking the software architecture during the testing process is problematic because the structures that comprise the software architecture ensure the quality attributes and enable the system to meet its requirements and business goals.

To address the challenges outlined above, we have developed an architectural design approach to testing software-reliant systems. The foundation of this approach involves creating testability profiles that give testers an actionable description of a design approach’s effect on the testing practice. Each testability profile consists of four parts:

  1. In the first part, testers conduct an initial analysis to determine whether the architecture design approach (often expressed in the form of architecture styles and patterns) is actually used in the product or artifact being tested. This part of the testability profile defines the essential characteristics of the architecture design approach and describes how to recognize those characteristics in an artifact. Ideally, this step is accomplished by referring to specific views in the architecture documentation. Realistically, verifying the presence of the architecture design approach may require correlating information from various parts of the architecture documentation. Techniques such as design structure matrices (DSMs) and architecture-level call graphs can be used to identify structural patterns in either an architecture description or an implementation. Some DSM tools will parse source code, producing a matrix of the dependencies that actually exist in the code.
  2. The second part of a testability profile includes a fault model that consists of the following two subparts:
    • The first subpart describes the system or component and characterizes possible failures associated with the chosen architectural approach. It is possible to associate a fault model with a particular architecture design approach. For example, in a pipe-and-filter architecture, the pipes cannot change the order or value of the data in their data streams or communicate with other pipes. If they do, this is considered a fault, and, more importantly, a fault associated with that particular architectural style or pattern.
    • The second subpart enumerates the set of possible failures that this particular architecture design approach relieves the system from. For example, if the architect has selected a state-machine design approach to encapsulate the control logic of a module, a subsystem, or the entire system, then (assuming an implementation that is demonstrably compliant with the architecture) no control-logic errors are possible outside the state machine’s encapsulating component.
  3. The third part examines the available analyses that correspond to the fault model to determine whether a particular analysis can be performed based on the architecture design approach and whether that analysis can conclusively tell whether a particular fault exists in the system. This part of the profile details any available tools and methods, such as the Architecture Analysis and Design Language (AADL) analytical toolset or model checkers, that can be used to draw conclusions about systems that are compliant with that architecture. The analysis may be architecture-based or code-based.
  4. The final part identifies tests that have been made redundant, or that can be de-prioritized, as a result of the fault model and the analysis. If the first three parts are completed for an architecture that includes the architecture design approach, then certain tests for the corresponding faults become unnecessary or can be de-emphasized in the testing procedures. For example, if analysis can show that deadlock is impossible in an architecture-compliant implementation, then it should be unnecessary to test for deadlock.
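To make the first part of the profile concrete, here is a minimal sketch of checking that a layered pattern is actually present by building a dependency structure matrix (DSM) from a call graph. The module names, layer assignments, and call graph below are hypothetical; a real check would derive the call graph from a DSM tool that parses source code.

```python
# Hypothetical call graph extracted from the implementation: caller -> callees
calls = {
    "ui":      {"logic"},
    "logic":   {"storage"},
    "storage": set(),
}

# Layer assignment claimed by the architecture documentation (0 = lowest)
layer = {"storage": 0, "logic": 1, "ui": 2}

modules = sorted(calls, key=layer.get)

# DSM: dsm[i][j] == 1 iff modules[i] depends on modules[j]
dsm = [[1 if b in calls[a] else 0 for b in modules] for a in modules]

# In a strict layered architecture every dependency points downward, so with
# rows/columns ordered low-to-high layer the DSM is strictly lower-triangular:
# any 1 on or above the diagonal is evidence the pattern is NOT present.
violations = [
    (a, b)
    for i, a in enumerate(modules)
    for j, b in enumerate(modules)
    if dsm[i][j] and j >= i
]
print("layered pattern present:", not violations)  # → True for this graph
```

An upward call such as `storage -> ui` would appear above the diagonal and be flagged, telling testers the profile's assumptions do not hold for this artifact.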

Testability profiles do not currently exist for architectural design approaches. Although pattern catalogs, such as the Pattern-Oriented Software Architecture (POSA) series and the “Gang of Four” book, are now common, there was a time when pattern descriptions were not widely available. If the cost and/or efficacy of testing can be substantially improved through the use of testability profiles, we might one day expect to see them documented alongside (or as part of) the pattern description of an approach.
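The state-machine case from the second part of the profile can also be sketched briefly. In this hypothetical controller, all control logic lives inside one encapsulating component, so (for a compliant implementation) control-logic faults such as illegal transitions can only arise, and be caught, inside it. The states and events are invented for illustration.

```python
class Controller:
    """Hypothetical component that encapsulates all control logic."""

    # Legal transitions: state -> {event: next_state}
    TRANSITIONS = {
        "idle":    {"start": "running"},
        "running": {"pause": "idle", "stop": "done"},
        "done":    {},
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        nxt = self.TRANSITIONS[self.state].get(event)
        if nxt is None:
            # A control-logic fault; by construction it is localized here,
            # so tests for such faults elsewhere in the system are redundant.
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = nxt

c = Controller()
c.handle("start")
c.handle("stop")
print(c.state)  # → done
```

Because the transition table is explicit and centralized, it is also amenable to the kind of exhaustive analysis (e.g., model checking) that the third part of the profile invokes.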

To see how the testability profile and the architecture design approach fit together, suppose an engineer decides to build a service-oriented architecture (SOA) for a system. While SOA brings the engineer quality attribute benefits, the system may now be susceptible to a class of faults specific to SOA. For example, the network may not deliver a service request the way that it should, or a particular service may not provide the quality attributes that are needed. After an architecture design approach testability profile is established, testers can decide whether to perform one or more of the following steps:

  1. Check the implementation of an architecture design approach’s observables to verify, or at least gain confidence, that an architecture design approach is present.
  2. Invoke the profile’s architecture-based analysis to determine whether the system contains architecture design approach-related faults.
  3. Remove or de-emphasize from the test portfolio some or all of the test cases associated with faults ruled out by the architecture design approach’s fault model.
  4. Remove or de-emphasize from the test portfolio some or all of the test cases associated with faults ruled out by analysis.
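Steps 3 and 4 above amount to pruning the test portfolio. The following sketch shows one way to do that, assuming test cases are tagged with the fault class they target; the test names and fault classes here are hypothetical.

```python
# Hypothetical test portfolio: test name -> fault class it targets
test_portfolio = {
    "test_deadlock_under_load":      "deadlock",
    "test_invalid_state_transition": "control_logic",
    "test_request_timeout":          "service_timeout",
    "test_message_reordering":       "pipe_reordering",
}

# Faults the design approach's fault model says cannot occur (step 3)
ruled_out_by_fault_model = {"pipe_reordering"}
# Faults that architecture-based analysis, e.g. model checking, excluded (step 4)
ruled_out_by_analysis = {"deadlock"}

excluded = ruled_out_by_fault_model | ruled_out_by_analysis
remaining = {t: f for t, f in test_portfolio.items() if f not in excluded}
deprioritized = sorted(set(test_portfolio) - set(remaining))

print("still worth running:", sorted(remaining))
print("de-emphasized:", deprioritized)
```

In practice a team might de-emphasize rather than delete these tests, keeping them available in case the compliance argument for the architecture later weakens.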

As a result of our research, testers will be able to determine the most important things to test for, because the profiles illuminate failure modes that might not have been known before. Conversely, testers will also be able to determine failure modes that they can safely assume will not occur. It is our hypothesis that this approach is broadly applicable to many types of systems. We are interested in working with organizations to pilot this approach, so if you would like us to consider your organization for our pilot program, please send an email to info@sei.cmu.edu.

Additional Resources:
For more information about our research in architecture support for testing, please visit
www.sei.cmu.edu/architecture/research/archpractices/Architecture-Support-for-Testing.cfm


4 responses to “Improving Testing Outcomes Through Software Architecture”

  1. Stephany Bellomo Says:
    Great entry!
  2. vikneshraj Says:
    Sir, are any tools available for evaluating patterns in software architecture?
  3. Taeho Kim Says:
    Thank you for a good article.
    I think your approach is more useful for unit testing than for system testing. What do you think about it?
    Could I get a case study for improving testing outcomes through the architectural approach?
  4. Peter Feiler Says:
    Your observation is correct. The initial work has its roots in unit testing. However, concepts such as state machines also apply at the architecture level. They represent operational and failure modes, interaction protocols between architectural components, and externally observable component behavior.
    We have now combined the initial approach to testability profiles with Architecture-centric Modeling through AADL (http://www.sei.cmu.edu/library/assets/ResearchandTechnology_AADLandMBE.pdf), specifically by leveraging its ability to capture architecture fault models through the Error Model Annex extension (https://wiki.sei.cmu.edu/aadl/index.php/Standardization).
    AADL has been successfully used to demonstrate early discovery of faults in the System Architecture Virtual Integration (SAVI) initiative (see https://wiki.sei.cmu.edu/aadl/index.php/Projects_and_Initiatives#AVSI_SAVI). We have incorporated AADL into a Virtual Upgrade Validation (VUV) method that goes beyond an ATAM by focusing on the technical architecture and guides the modeler to known fault root cause areas (http://www.sei.cmu.edu/library/abstracts/reports/12tr005.cfm).
    We are currently applying the combined approach (known under the project Architecture-focused Testing) with emphasis on earlier discovery of testable faults, discovery of hard-to-test-for faults (many non-functional problems), and discovery of unhandled faults on several customer systems. Case study reports should become available in the near future.
    See the AADL website www.aadl.info and public Wiki https://wiki.sei.cmu.edu/aadl for more on AADL.
