Using V Models for Testing


By Donald Firesmith
Senior Member of the Technical Staff
Software Solutions Division

The verification and validation of requirements are a critical part of systems and software engineering. The importance of verification and validation (especially testing) is a major reason that the traditional waterfall development cycle underwent a minor modification to create the V model that links early development activities to their corresponding later testing activities. This blog post introduces three variants on the V model of system or software development that make it more useful to testers, quality engineers, and other stakeholders interested in the use of testing as a verification and validation method.

The Traditional V Model

Verification and validation are typically performed using one or more of the following four techniques:

  • analysis—the use of established technical or mathematical models, simulations, algorithms, or scientific principles and procedures to determine whether a work product meets its requirements
  • demonstration—the visual examination of the execution of a work product under specific scenarios to determine whether it meets its requirements
  • inspection—the visual examination (possibly including physical manipulation or the use of simple mechanical or electrical measurement) of a non-executing work product to determine whether it meets its requirements
  • testing—the stimulation of an executable work product with known inputs and preconditions followed by the comparison of its actual response (outputs and postconditions) with its required response to determine whether it meets its requirements (see the sketch after this list)
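
To make the testing technique concrete, here is a minimal sketch in Python (the function, names, and values are hypothetical illustrations, not from the original post): an executable work product is stimulated with a known input, and its actual response is compared with the required response.

    # A hypothetical executable work product: a unit-conversion function under test.
    def fahrenheit_to_celsius(fahrenheit):
        return (fahrenheit - 32) * 5.0 / 9.0

    # The test stimulates the work product with a known input and compares
    # the actual response with the required response.
    def test_freezing_point():
        actual = fahrenheit_to_celsius(32)   # known input
        expected = 0.0                       # required response
        assert abs(actual - expected) < 1e-9, f"expected {expected}, got {actual}"

    test_freezing_point()  # passes silently; a failure would raise AssertionError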

The V model is a simple variant of the traditional waterfall model of system or software development. As illustrated in Figure 1, the V model builds on the waterfall model by emphasizing verification and validation. The V model takes the bottom half of the waterfall model and bends it upward into the form of a V, so that the activities on the right verify or validate the work products of the activity on the left. More specifically, the left side of the V represents the analysis activities that decompose the users’ needs into small, manageable pieces, while the right side of the V shows the corresponding synthesis activities that aggregate (and test) these pieces into a system that meets the users’ needs.

Figure 1: Traditional Single V Model of System Engineering Activities

Like the waterfall model, the V model has both advantages and disadvantages. On the positive side, it clearly represents the primary engineering activities in a logical flow that is easily understandable and balances development activities with their corresponding testing activities. On the other hand, the V model is a gross oversimplification in which these activities are illustrated as sequential phases rather than activities that typically occur incrementally, iteratively, and concurrently, especially on projects using evolutionary (agile) development approaches.

Software developers can lessen the impact of this sequential phasing limitation if they view development as consisting of many short-duration V’s, one for each concurrent iterative increment, rather than a small number of large V’s. When programmers apply a V model to the agile development of a large, complex system, however, they encounter potential complications that require more than a simple collection of small V models, including the following:

  • The architecturally significant requirements and associated architecture must be engineered and stabilized as rapidly as is practical. All subsequent increments depend on the architecture, which becomes hard—and expensive—to modify after the initial increments have been based on it.
  • Multiple, cross-functional agile teams will be working on different components and subsystems simultaneously, so their increments must be coordinated across teams to produce consistent, testable components and subsystems that can be integrated and released.

Another problem with the V model is that the distinction between unit, integration, and system testing is not as clear cut as the model implies. For instance, a certain number of test cases can sometimes be viewed as both unit and integration tests, thereby avoiding redundant development of the associated test inputs, test outputs, test data, and test scripts. Nevertheless, the V model is still a useful way of thinking about development as long as everyone involved (especially management) remembers that it is merely a simplifying abstraction and not intended to be a complete and accurate model of modern system or software development.

Many testers still use the traditional V model because they are not familiar with the following V models that are more appropriate for testing.

V Models from the Tester’s Point of View

While a useful if simplistic model of system or software development, the traditional V model does not adequately capture development from the tester’s point of view. The following three variants of the traditional V model address this shortcoming:

  • The single V model modifies the nodes of the traditional V model to represent the executable work products to be tested rather than the activities used to produce them.
  • The double V model adds a second V to show the type of tests corresponding to each of these executable work products.
  • The triple V model adds a third V to illustrate the importance of verifying the tests to determine whether they contain defects that could stop or delay testing or lead to false positive or false negative test results.

As mentioned above, testing is a major verification technique intended to determine whether an executable work product behaves as expected or required when stimulated with known inputs. Testers test these work products by placing them into known pretest states (preconditions), stimulating them with appropriate inputs (data, messages, and exceptions), and comparing the actual results (postconditions and outputs) with the expected or required results to detect faults and failures that reveal the underlying defects.
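
A small, hypothetical sketch of these three steps (the class and names are illustrative assumptions, not the author's): the work product is placed into a known pretest state, stimulated with an input, and its output and postcondition are compared with the required results.

    # A hypothetical stateful work product (illustrative name, not from the post).
    class Account:
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount
            return amount

    def test_withdraw():
        account = Account(balance=100)    # precondition: known pretest state
        dispensed = account.withdraw(30)  # stimulus: known input
        assert dispensed == 30            # expected output
        assert account.balance == 70      # expected postcondition

    test_withdraw()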

Figure 2 shows the tester’s single V model, which is oriented around these work products rather than the activities that produce them. In this case, the left side of the V illustrates the analysis of ever more detailed executable models, whereas the right side illustrates the corresponding incremental and iterative synthesis of the actual system. Thus, this V model shows the executable work products that are tested rather than the general system engineering activities that generate them.

 

Figure 2: Tester’s Single V Model of Testable Work Products


The Tester’s Double V Model

Traditionally, only the right side of the V model dealt with testing. The requirements, architecture, and design work products on the left side of the model have been documents and informal diagrams that were best verified by such manual verification techniques as analysis, inspections, and reviews. With the advent of model-based development, the requirements, architecture, and design models became better defined by using more formally defined modeling languages, and it became possible to use automated tools that implement static analysis techniques to verify these models. More recently, further advances in modeling languages and associated tools have resulted in executable models that can actually be tested by stimulating the executable models with test inputs and comparing actual with expected behavior.

Figure 3 shows the tester’s double V model, which adds the corresponding tests to the tester’s single V model. The double V model allows us to detect and fix defects in the work products on the left side of the V before they can flow into the system and its components on the right side of the V.

In the double V model, every executable work product should be tested. Testing need not—and in fact should not—be restricted to the implemented system and its parts. It is also important to test any executable requirements, architecture, and design models so that defects in them are found and fixed before they can migrate into the actual system and its parts. This process typically involves testing an executable requirements, architecture, or design model (or possibly a prototype).
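
As a rough illustration of testing on the left side of the V, the following sketch treats a design model as a simple executable state machine (a hypothetical example, not tied to any particular modeling language or tool) that can be stimulated with event sequences and checked against expected states before any production code exists.

    # A hypothetical executable design model: a door modeled as a state machine.
    # Each (state, event) pair maps to the required next state.
    TRANSITIONS = {
        ("closed", "open"):   "opened",
        ("opened", "close"):  "closed",
        ("closed", "lock"):   "locked",
        ("locked", "unlock"): "closed",
    }

    def run_model(initial_state, events):
        """Execute the model by applying each event; undefined events leave the state unchanged."""
        state = initial_state
        for event in events:
            state = TRANSITIONS.get((state, event), state)
        return state

    # Model-level tests can find design defects before implementation begins.
    assert run_model("closed", ["open", "close", "lock"]) == "locked"
    assert run_model("locked", ["open"]) == "locked"  # a locked door must not open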


Tests should be created and performed as the corresponding work products are created. In Figure 3, the short arrows with two arrowheads show that either (1) the executable work products can be developed first and used to drive the creation of the tests or (2) test-driven development (TDD) can be used, in which case the tests are developed before the work products they test.
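
For case (2), a minimal TDD-style sketch (again a hypothetical example, not from the post): the test is written first and the work product is then written to satisfy it.

    # Step 1 (TDD): write the test before the work product exists.
    def test_is_leap_year():
        assert is_leap_year(2000) is True    # divisible by 400
        assert is_leap_year(1900) is False   # divisible by 100 but not by 400
        assert is_leap_year(2024) is True    # divisible by 4 but not by 100

    # Step 2: write just enough of the work product to make the test pass.
    def is_leap_year(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    test_is_leap_year()  # the test now passes; before step 2 it would have failed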

The top row of the model uses testing to validate that the system meets the needs of its stakeholders (that is, that the correct system is built). Conversely, the bottom four rows of the model use testing to verify that the system is built correctly (that is, architecture conforms to requirements, design conforms to architecture, implementation conforms to design, and so on).

In addition to the standard double V model, there are two variants that deserve mention.

  • There is little reason to perform unit testing if model-driven development (MDD) is used, a trusted tool is used to automatically generate the units from the unit design, and unit design testing has been performed and passed.
  • Similarly, there is little reason to perform separate unit design testing if the unit design has been incorporated into the unit using the programming language as a program design language (PDL) so that unit testing verifies both the unit’s design and implementation.

Figure 3: Tester’s Double V Model of Testable Work Products and Corresponding Tests


The Tester’s Triple V Model

The final variant of the traditional V model, the triple V model, consists of three interwoven V models. The left V model shows the main executable work products that must be tested. The middle V model shows the types of tests that are used to verify and validate these work products. The right V model shows the verification of these testing work products in the middle V. The triple V model uses the term verification rather than tests because the tests are most often verified by analysis, inspection, and review.

Figure 4 below documents the tester’s triple V model, in which additional verification activities have been added to determine whether the testing work products are sufficiently complete and correct that they will not produce numerous false-positive and false-negative results.

Figure 4: The Tester’s Triple V Model of Work Products, Tests, and Test Verification

Conclusion

As we have demonstrated above, relatively minor changes to the traditional V model make it far more useful to testers. Modifying the traditional V model to show executable work products instead of the development activities that produce them emphasizes that these are the work products that testers will test.

By associating each of these executable work products with its associated tests, the double V model makes it clear that testing does not have to wait until the right side of the V. Advances in the production of executable requirements, architectures, and designs enable testing to begin much earlier on the left side of the V so that requirements, architecture, and design defects can be found and fixed early before they can propagate into downstream work products.

Finally, the triple V model makes it clear that it is not just the primary work products that must be verified. The tests themselves should be deliverables and must be verified to ensure that defects in the tests do not invalidate the test results by causing false-positive and false-negative test results.

The V models have typically been used to describe the development of the system and its subsystems. However, the test environments (test beds) and the test laboratories and facilities are also systems that must be tested and otherwise verified. Thus, these test-oriented V models apply to them as well.

This blog entry has been adapted from chapter one of my book Common System and Software Testing Pitfalls, which will be published this December by Addison Wesley as part of the SEI Series in Software Engineering.

I would welcome your feedback on these suggested variations of the traditional V model in the comments section below.

Additional Resources

To read the SEI technical report, “Reliability Validation and Improvement Framework” by Peter Feiler, John Goodenough, Arie Gurfinkel, Charles Weinstock, and Lutz Wrage, please visit
http://www.sei.cmu.edu/reports/12sr013.pdf.


8 responses to “Using V Models for Testing”

  1. Skip Pletcher Says:
    The V&V construct assumes that requirements and tests are different. How about eliminating Verification as a layer? If the requirement is written as a test, that test must be validated against business need, but verification (as a layer) goes away.

    The V-model (or, as we once said, the bathtub model) was designed to describe a construct for capturing (not reducing) errors in the waterfall. A most significant limitation of the V is that it defers capture of the most significant errors until the later stages of product development. Double and triple V models make the significant and necessary move to achievement vice activity, but continue to focus on capture rather than prevention. Adapting with double and triple Vs seems analogous to using first a bucket and then a bilge pump rather than repairing the hole in a ship's hull. The quality leak may be in our attempt to specify testability without using the test pro forma.

    Extend "what gets measured gets done" to read "if you can't specify it as a test, it isn't a requirement."
  2. William Payne Says:
    This model seems to be most useful in that it gives rise to an explicit taxonomy of testing activities.

    As you suggest, it seems like it would be a mistake to treat it as a recipe (money for old rope?), and that discretion is required when deciding which analysis and testing activities are required at each particular point in time.

    Most of the software projects that I am involved in have a significant research aspect - so only a proportion of requirements are known up-front. Also, I normally have a very small team and fairly tight deadlines to deliver something workable. Finally, whilst quality is normally an absolute that few stakeholders are willing to publicly compromise on, it is something that rarely has champions on the business side. I.e. you are never given any resources to ensure quality, but if a mistake is made, your head will roll, which makes most jobs a bit more of an exciting gamble than we like to admit.

    I would be interested to know how to sequence these QA activities to maximise quality within the constraints listed above.
  3. Don Firesmith Says:
    Skip,
    In spite of what some in the Agile community believe, there are fundamental distinctions between requirements and tests, and these distinctions have real-world practical ramifications. Requirements are more general than test cases: requirements specify needs, while an individual test case captures only one instance of that need out of a great many. For example, from the mission- and safety-criticality of a requirement, one can determine the degree of rigor and completeness of the required testing. For white-box testing, this means deciding not only what level of code coverage one needs but also whether a single test case chosen from a huge equivalence class of possible test cases is sufficient or whether one needs something like boundary-value testing. With no requirements and only tests, it is difficult if not impossible in practice to know whether the requirements have been sufficiently verified.

    Good testing does not mean that one should not try, to the extent practical, to build quality in via excellent requirements, architecture, design, and implementation. However, regardless of how well you perform the left side of the V, there will ALWAYS be latent defects that should be found. Finding them involves verifying that all of the requirements have been properly implemented, and that involves various verification techniques including testing.

    Also, it is possible that the implementation of a requirement should be verified via a verification technique other than testing so even your last point doesn't quite capture what needs to be done.
  4. Donald Firesmith Says:
    William,
    As I mentioned early in the blog entry, although the V model diagrams make it look like the V suffers from the same problems as the sequential Waterfall model, it can be modified for an evolutionary (i.e., incremental, iterative, and parallel) development cycle, especially when the requirements are poorly and incompletely known at the beginning. In practice on medium to large projects, this typically results in many medium to small Vs, often occurring simultaneously in different, more or less Agile, sprints.

    As far as quality is concerned, there is a wide range of projects from quick and dirty developments of prototypes that are not intended to be maintained to mission-, safety-, and security-critical systems that must be maintained and extended for decades. The appropriate amount and types of testing will vary widely between the two ends of this spectrum.

    By the way, these V models only identify a small number of types of testing based on the scope of testing. There are many more types of testing that are not shown, most of which are orthogonal to the scope of the system/subsystem/software under test (SUT). For example, there are specialty engineering tests such as capacity, performance, reliability, safety, security, and usability testing.
  5. Adrian Jasso Says:
    Firesmith, Good post. From a V-model perspective, sometimes the ability to "test" on the left side of the process is very hard. In large organizations, these verification mechanisms are sometimes not controlled by standard testers. I believe your point about quality is also important. To me, "Testing" on the left side of the model amounts to quality assurance. Testers should always have a strong understanding of the requirements to provide feedback on their feasibility (based on previous testing cycles), testability, and measurability. Moreover, the feasibility - as you stated - should be in the context of the physical design (internal) and operational architecture (external). Maybe this would be a good subject for a podcast.

    Thanks again.
    Adrian Jasso
  6. Donald Firesmith Says:
    I agree that up until recently it has been for all practical purposes impossible to test the left side of the V. The ability to test requirements, architectures, and designs requires them to be executable. You cannot test traditional textual requirements, static UML diagrams, DODAF diagrams, etc. because they are not executable. You cannot stimulate text and Visio diagrams with test inputs and then compare their responses with expected/required responses. This is why other methods (e.g., analysis, demonstration, and especially inspection) have historically been used to verify the left side of the V. Thankfully, technology has advanced and there are now executable models that can be tested (and statically analyzed via tools).
  7. Rawad Says:
    Many thanks for this useful article. My question is: do you think the V model can be used with a particular research methodology (e.g., Design Science Research Methodology (DSRM)), or is it fine to use it with any of them?

    Consequently, do you think the V model can be used with a particular software development approach (waterfall, agile, etc.), or is it OK to use it with any of them?

    Many thanks in advance
  8. Donald Firesmith Says:
    I think that these testing-oriented V models can be used with any development cycle; differences are primarily how many Vs, the scheduling of the Vs (including concurrent), and the scope of the Vs: the size of the component/increment and which nodes on the left side of the V have executable models that can be tested. I think that the usefulness of the testing V models extends from a strictly sequential Waterfall to evolutionary (i.e., incremental, iterative, and concurrent) development cycles.

    On the other hand, Agile user stories (requirements) are merely text and thus not executable, which means that the top node on the left side of the V must rely on analysis, demonstrations, or (most likely) inspection.

    As for research methodologies, I don't know enough about them to assess the V models' applicability. My experience is in system and software development projects.
