The concept for this installment of Verification 101 is something I have been talking about for a number of years now, and yet it does not appear to be widely known or recognized. That does not mean that people are not using this concept, just that they are unaware of it and thus unable to plan properly for it in their verification methodology. I call the two sides of this concept positive and negative verification.
Positive verification is used to ensure that required functionality is present in a design or implementation, whereas negative verification concentrates on the elimination of bugs. This may seem like a strange distinction: you might think that the absence of bugs means that verification is complete. That is not necessarily the case. Similarly, if all system functionality is shown to be present, you might reason that verification is finished. In most cases that is not true either.
When a block is verified, it is verified against the specification for that block. We can ensure, to the best of our abilities, through either simulation or formal verification techniques, that the block exhibits no behaviors that deviate from the specification. If we achieve this, then we can say that no bugs exist in the block's implementation of the specification. However, we do not know that the specification is correct, or that the specification as defined is what the system actually requires. Establishing that is a validation function. So when we integrate a verified block into the context of a larger system we are:
- verifying the functionality of the larger system and
- validating the specification against which the lower level block was implemented.
Consider a simple example. A processor is to be used in a system and is obtained from a third party. Given that most processors come from very stable and reputable companies, which have spent enormous amounts of time and money on the verification task, it can reasonably be expected that the processor will function as specified, especially if it has already been used in several designs. Does this mean that the processor will work in every design? Clearly not. There are expectations about the capabilities the processor will provide, the buses and interfaces it will expose, and the performance it will deliver. Thus, while it is reasonable to expect that the processor matches its specification (and its data sheet), we cannot be certain that all aspects of that specification were understood when it was selected or when the system-level design was being performed. Neither can we be sure that the interfaces to it were fully understood, so problems may surface when it is integrated into a larger system. These problems are magnified for arbitrary hardware blocks, especially when they are being used within a company for the first time. Thus, even though the processor vendor performed copious amounts of negative verification, the vendor cannot perform positive verification for your system. That must be done after the processor has been integrated into the system.
Now let’s look at this from the other direction. For a moment, consider the hypothetical possibility that we have developed a set of tests that ensures 100% of the necessary functionality exists in a block – and nothing more. We may have done this on an abstract model of the block. We further assume that the specification was correct and complete. Once the block has been implemented, either manually or through synthesis, it can be integrated back into the system model and those same tests executed. If everything passes, we have proven that the block performs every necessary function without any bugs, and that it does so in the context of the complete system. At first, it may seem that no further verification of the block is needed. However, if we were, for example, to collect code coverage data for that block, we might expect to find that large portions of the code have not been executed. This is because that code is redundant with respect to meeting the high-level specification. All of it could be removed without affecting the block's operation in any way. (Remember, we are assuming that the set of tests activates all necessary functionality.)
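To make the coverage argument concrete, here is a toy software sketch of the idea. The block, its "spec", and all names are invented for illustration; a real flow would involve RTL and a coverage tool rather than Python. A test suite exercises 100% of the specified functionality and passes, yet simple branch tracking shows implementation code that never runs.

```python
# Toy illustration: spec-complete tests can still leave implementation
# code unexercised. All names and the "spec" are invented.

executed = set()  # records which implementation branches have run

def alu(op, a, b):
    """Hypothetical block: the current spec requires only ADD and SUB."""
    if op == "ADD":
        executed.add("add")
        return (a + b) & 0xFF
    elif op == "SUB":
        executed.add("sub")
        return (a - b) & 0xFF
    else:
        # Left over from an earlier spec revision; no longer required.
        executed.add("legacy_mul")
        return (a * b) & 0xFF

# Tests covering 100% of the *specified* functionality -- all pass.
assert alu("ADD", 200, 100) == 44   # 8-bit wraparound
assert alu("SUB", 5, 9) == 252      # 8-bit two's-complement result

# Coverage view: the legacy branch was never executed by any test.
print(sorted(executed))             # ['add', 'sub']
print("legacy_mul" in executed)     # False
```

The untested `legacy_mul` path is exactly the kind of code the paragraph above describes: removable without affecting spec-level behavior, but dangerous to leave in place unverified.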
There are some exceptions to this general statement, such as redundant logic for error detection and correction, which cannot be adequately modeled at the high level. Putting those cases aside, while these statements are generally true, it is also a very dangerous position to take. Part of the system is probably software, and it is very likely that the software will change over the life of the product. A change in that software could cause completely different aspects of the hardware to be exercised. If this newly exercised functionality has not been verified, or has indeed been eliminated, then we could be stuck with hardware that only works with specific versions of the software. It is important that all aspects of the hardware implementation that could ever possibly be used are adequately verified. This problem of hardware working only with certain versions of software has happened to some companies in the past, and it is a sign that they did not perform enough negative verification. Most of those companies relied on physical prototypes for much of their verification, where it is difficult to inject stimulus that was not generated by a real system. The way in which these situations are handled can, and does, create differences of opinion between verification experts. This is especially prevalent when considering the verification of processor-based systems and the use of actual production software.
If the system contains one or more processors, then one school of thought says that running the drivers and other software is the correct way to exercise the rest of the system. In many industries, the claim is that if it runs the software, then it works. The other school of thought says that this restricts the inputs to the system too much, and that much better control can be obtained by injecting bus cycles directly, even if they do not represent typical, or even possible, patterns.
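The difference between the two schools can be sketched with a deliberately contrived software model (the block, the driver behavior, and the bug are all invented for illustration): a "production driver" that only ever issues aligned bus cycles passes its test, while directly injecting an unaligned cycle, one the software would never generate, exposes a latent bug.

```python
# Toy model contrasting software-driven stimulus with direct bus-cycle
# injection. All names and the planted bug are invented.

class BusSlave:
    """Hypothetical register block with a bug on unaligned reads."""
    def __init__(self):
        self.mem = bytearray(16)

    def write(self, addr, value):
        self.mem[addr] = value & 0xFF

    def read(self, addr):
        if addr % 4 != 0:
            return 0xEE  # planted bug: unaligned reads return garbage
        return self.mem[addr]

def driver_test(dut):
    """School 1: drive the block the way the production software does."""
    dut.write(4, 0x5A)           # driver only issues word-aligned cycles
    return dut.read(4) == 0x5A   # passes -- the bug is never reached

def injected_test(dut):
    """School 2: inject bus cycles the software would never produce."""
    dut.write(5, 0x3C)           # unaligned cycle, directly injected
    return dut.read(5) == 0x3C   # fails -- negative verification wins

dut = BusSlave()
print(driver_test(dut))    # True:  "it runs the software, so it works"
print(injected_test(dut))  # False: the injected cycle exposes the bug
```

Whether that unaligned cycle is a verification obligation or wasted effort is precisely the disagreement between the two schools described above.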
Whether production software is used to verify hardware is thus a question of balancing positive and negative verification. Neither answer is completely right, as both are a necessary part of the overall system verification function.
Previous Installment – Directed and Random Testing
Brought to you by Brian Bailey