Second in a Series
By Charles B. Weinstock
Distinguished Principal Researcher
Software Solutions Division
Software used in safety-critical systems—such as automotive, avionics, and healthcare applications, where failures could result in serious harm or loss of life—must perform as prescribed. How can software developers and programmers offer assurance that the system will perform as needed and with what level of confidence? In the first post in this series, I introduced the concept of the assurance case as a means to justify safety, security, or reliability claims by relating evidence to the claim via an argument. In this post I will discuss Baconian probability and eliminative induction, which are concepts we use to explore properties of confidence that the assurance case adequately justifies its claim about the subject system.
The Case for Confidence
Assurance cases are now becoming a standard practice in the development and deployment of safety-critical systems. In May 2013, at the First International Workshop on Assurance Cases for Software-Intensive Systems, it was stated that
Several certification standards and guidelines, e.g., in the defense, transportation (aviation, automotive, rail), and healthcare domains, now recommend and/or mandate the development of assurance cases for software-intensive systems.
In the first post in this series, I discussed our research on assurance cases and how they can be used when developing software-intensive systems. It is not enough, however, to simply have an assurance case. It is also important to understand why you should have confidence in the assurance case. Achieving confidence is important to all stakeholders in the subject system, including those acquiring the system, those producing the system, those using the system, and (in certain cases) those certifying or otherwise evaluating the system.
It is impossible to examine every possible circumstance covered by an assurance case. One approach to the problem of achieving confidence in an assurance case is to provide a parallel argument that gives reasons why the assurance case should be believed. Hawkins, Kelly, Knight, and Graydon have taken this approach with their concept of the “confidence case.”
We have taken a different approach. Consider the following notional assurance case arguing that the system is safe (taken from the first posting in this series):
Figure 1: A Notional Assurance Case
We want to be able to understand how the evidence (Ev1…Ev3) leads to confidence that the claim (C1) holds and by how much. We would also like to understand what “confidence” in the claim means, and how existing confidence can be increased. The remainder of this post will focus on a more detailed description of Baconian probability and eliminative induction.
Two Forms of Induction
Deciding how to use evidence as a means of evaluating belief in a hypothesis is a classic philosophical problem—one that is classically solved through the use of induction. When we talk about induction, we typically mean enumerative induction. But there is another form of induction, eliminative induction.

In enumerative induction, support for a hypothesis is increased every time confirming evidence is found. Past experience forms a basis for an expectation of future behavior.
For example, consider a person coming into a room and turning on a light.
If the light has gone on before, the person will have some confidence that it will go on again. If it has gone on many times before, the person should have even higher confidence that it will go on the next time.
Eliminative induction, in contrast to enumerative induction, looks for reasons to doubt the hypothesis. So a person deciding whether to believe a light will go on when he throws the switch must consider reasons why it might not, and then eliminate them.
When the person walks into the room, he may have some initial, perhaps unfounded, confidence that the light will go on when he throws the switch. If he checks the wiring and finds that the switch is connected, he will have more confidence. If he verifies that there is power to the switch, he will have still more confidence. And, if he shakes the bulb and does not hear a rattle, he will have even more confidence that the light will go on.
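The contrast between the two forms of induction can be sketched in code. The sketch below is illustrative only: the function names, the particular confidence formula for the enumerative case (a Laplace-style rule of succession, chosen just to show confidence growing with each confirmation), and the list of doubts are my own assumptions, not part of the post.

```python
# Enumerative induction: confidence grows with each confirming observation.
# The formula here (Laplace's rule of succession) is just one illustrative
# way to make "more confirmations -> more confidence" concrete.
def enumerative_confidence(confirmations: int) -> float:
    return (confirmations + 1) / (confirmations + 2)

# Eliminative induction: confidence is characterized by which identified
# reasons for doubt remain uneliminated.
def eliminative_confidence(doubts: list[str], eliminated: set[str]) -> list[str]:
    return [d for d in doubts if d not in eliminated]

# Hypothetical doubts for the "light turns on" example.
doubts = ["switch not wired", "no power to switch", "bulb burned out"]

print(enumerative_confidence(0))    # never seen the light work before
print(enumerative_confidence(100))  # seen it work many times
print(eliminative_confidence(doubts, {"switch not wired", "no power to switch"}))
```

Note the different character of the two results: the enumerative version yields a number that creeps upward with experience, while the eliminative version yields the concrete list of doubts still standing.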
Eliminative Argumentation and the Confidence Map
We can now begin to answer the questions posed in our notional assurance case shown in Figure 1. The evidence in the case leads to increased confidence in the claim to the extent that it removes doubts about the claim. We achieve increased confidence in the claim C1 as we eliminate more doubts. This is the idea behind what we call eliminative argumentation, which is based on assurance cases, eliminative induction, and defeasible reasoning.
Reasoning about doubts in an argument is a use of defeasible reasoning. A defeasible argument is always subject to revision as we learn more about the claims and evidence that form the argument. There are only three kinds of doubts relevant to an assurance case: we can doubt claims, we can doubt evidence, and we can doubt the inference between two claims or between a claim and its supporting evidence. We show these doubts explicitly in a modified assurance case, which we call a confidence map.
Figure 2: The "light turns on" assurance case fragment
Figure 2 is a portion of the assurance case for the claim that the light turns on when the switch is thrown. Notice that each of the three sub-claims of C1.1 is based on a doubt about why the light might not turn on when the switch is thrown. In a confidence map these doubts are expressed explicitly.
Figure 3: Rebutting defeaters
Figure 3 shows the beginning of a confidence map for the “light turns on” example. R2.1 … R2.3 are called rebutting defeaters. They attack the validity of claim C1.1. If any of these defeaters is true, then the claim is invalid. If all of the defeaters can be eliminated, then we have no reason to doubt the claim. If we have no knowledge about the truth or falsity of one or more of these defeaters, then there remains a doubt as to whether claim C1.1 is valid.
There is an implicit inference rule in the confidence map shown above; namely, if all of the rebutting defeaters are shown to be false, then the claim is valid. It may be that there are additional, as yet unidentified, rebutting defeaters to claim C1.1. In the confidence map notation we make this explicit as shown in Figure 4.
Figure 4: Adding an inference rule and attacking it via an undercutting defeater
The confidence map now makes the inference explicit and adds an additional undercutting defeater, namely UC3.3, which raises a doubt about the sufficiency of the inference rule.
Figure 5: Attacking the evidence via an undermining defeater
Figure 5 expands one leg of the confidence map to show the evidence that is being used to eliminate defeater R2.3, namely that the bulb does not rattle when shaken. A rattle would indicate a broken filament. The figure also shows an example of an undermining defeater—one that questions the evidence by suggesting that the person shaking the bulb may be hard of hearing. It also includes a second inference rule (that if a bulb does not rattle then it is good), which is undercut by a defeater suggesting that the bulb may not be incandescent, and therefore the absence of a rattle may be meaningless. The grey circle under the undercutting defeater indicates that we have assumed that the bulb is, in fact, incandescent and will consider that defeater to be eliminated without any further analysis.
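The structure described in Figures 3 through 5 can be modeled as a small tree of claims and defeaters. The sketch below is a minimal, assumed representation: the class names, fields, and traversal are my own (the real notation is graphical), but the defeater kinds and the "light turns on" fragment follow the figures, including the assumed elimination of the undercutting defeater (the grey circle).

```python
from dataclasses import dataclass, field

@dataclass
class Defeater:
    kind: str                         # "rebutting", "undermining", or "undercutting"
    text: str
    eliminated: bool = False          # True once argued away (or assumed away)
    children: list["Defeater"] = field(default_factory=list)

@dataclass
class Claim:
    text: str
    defeaters: list[Defeater] = field(default_factory=list)

    def residual_doubts(self) -> list[str]:
        """Collect every uneliminated defeater anywhere below this claim."""
        doubts, stack = [], list(self.defeaters)
        while stack:
            d = stack.pop()
            if not d.eliminated:
                doubts.append(d.text)
            stack.extend(d.children)
        return doubts

# Fragment of the "light turns on" confidence map (after Figures 3-5).
c11 = Claim("The light turns on when the switch is thrown", [
    Defeater("rebutting", "The switch is not connected"),
    Defeater("rebutting", "There is no power to the switch"),
    Defeater("rebutting", "The bulb is burned out", children=[
        Defeater("undermining", "The examiner may be hard of hearing"),
        Defeater("undercutting", "The bulb may not be incandescent",
                 eliminated=True),  # assumed eliminated, per the grey circle
    ]),
])

print(c11.residual_doubts())  # the four doubts still standing
```

With this structure, a claim is fully supported exactly when `residual_doubts()` is empty; any nonempty result names the doubts that remain.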
Of course this does not represent a complete confidence map, even for the elimination of R2.3. For instance, the bulb may not rattle when shaken because the filament, though defective, has not actually broken.
Figure 6: Counting confidence
At the beginning of this post, I stated that one of our goals was to measure confidence. One simple way to do this is to count the defeaters near the leaves of the map and then count how many of them have been eliminated. In Figure 6, I recap the confidence map as discussed. In this example, we have eliminated one defeater (UC4.2, as indicated by the grey circle) out of a total of five. Expressed as 1|5, this is an example of Baconian probability (named after Sir Francis Bacon, who first proposed measuring degrees of certainty this way). Note that a Baconian probability is not a ratio and cannot be reduced: 1|5 is read “one out of five,” while 2|10 is read “two out of ten,” a very different level of doubt. In the first case there are four unresolved doubts, whereas in the second there are eight.
We can increase our confidence in C1.1 by eliminating additional defeaters. For instance, if the examiner has recently undergone a hearing test, we could eliminate UM4.1.
This blog post discussed achieving confidence in a claim and introduced the ideas of defeaters and confidence maps. To review: doubt in an argument can result from uncertainty about the claims, the evidence, or the inferences, and each such uncertainty is represented by a defeater. A rebutting defeater attacks a claim by giving a reason why it could be false. An undermining defeater attacks evidence by suggesting how it may be compromised. Finally, an undercutting defeater attacks an inference rule by suggesting how the premises could hold and yet the conclusion remain in doubt.
Confidence in a claim increases as the reasons for doubt are eliminated. If we have eliminated zero defeaters, then we have no reason to have confidence in the claim. If we have eliminated all of the defeaters, then we have no reason to doubt the claim. But if there are still uneliminated defeaters, then there remains some doubt about the claim.
It may be hard to completely eliminate a defeater. Suppose we know little about the light bulb examiner, but we know that 1 percent of the population as a whole will have trouble hearing a rattle in a defective light bulb. Shouldn’t we be able to mostly eliminate defeater UM4.1 on that basis? That and related topics will be discussed in the next article in this series.
To read the SEI paper Measuring Assurance Case Confidence using Baconian Probabilities, please visit http://www.sei.cmu.edu/library/assets/whitepapers/icsews13assure-id5-p-16156-preprint.pdf
To read the SEI paper Eliminative Induction: A Basis for Arguing System Confidence, please visit
To read the SEI Technical Report, Toward a Theory of Assurance Case Confidence, please visit
To read the Hawkins, Kelly, Knight, and Graydon paper A New Approach to Creating Clear Safety Arguments, please visit http://www-users.cs.york.ac.uk/~rhawkins/papers/HawkinsSSS11.pdf