How many models does it take?

I have recently started to ask people at conferences and exhibitions a simple question: How many models does it take to ensure a design is correct? It is of course a trick question, but I started asking it because it surprised me how few people have actually thought about this issue and the planning that has to go into which models to create. I should perhaps talk about a couple of definitional things before I actually ask you for your answer.

Adapted from an article previously published in 2007

First: What do I mean by a model? A model is an abstract representation of a system or its environment that enables you to understand or analyze the interactions between multiple entities or behaviors. It does not have to be an executable model such as an RTL representation of a piece of a system, but could also be a natural language document, so long as it helps you to analyze and solve a problem. Within a system, no matter how large or small, abstraction is used to help isolate the important aspects of the problem that you are trying to solve, and to hide all of the other unnecessary details. So an RTL model hides all of the sub-clock timing, the routing issues, the transistors that make up the gates, the polygons of the masks that make up the transistors and so forth. As we want to handle larger pieces of the system, the level of abstraction has to increase so that it is possible to comprehend all of the dynamics of the system.

Now, the question does not include all of the abstractions that may be used to describe the design of a system. The design, for this discussion, is considered to be a single model that is refined, transformed, and adapted as it passes along the design flow.

Implicit and Explicit
Sometimes a model is not explicitly defined, but it still exists. Let me use as an example a set of test vectors that contain the expected response values. The engineer who created that test worked through, in their mind, how the design would react and what results it would yield. Those results were then encoded as the response vectors. In this case, the engineer was the model. Of course, this model is not very reliable, as engineers often have incorrect or inconsistent ideas of what correct means. It may also be influenced by the number of beers they have just consumed. It is usually an iterative process to get this model correct. And if the engineer gets run over by a bus, you have lost the model and must create a new one – in this case, another engineer who understands the complete operation of the system.

Back to the question
Having given you the ground rules, I will now ask the question again. I hope you will think about this for a while before you formulate the number in your mind so that you can see if you agree with me or not. I will give you my reasoning for the number I chose and would love to hear from you if you came up with a different answer.

My answer is – at least 2 ½ models and possibly as many as 4.

The obvious models
Well, I hope nobody said one. That would mean that the only model to exist is the design itself and that the engineer is so perfect as to never make a mistake. This could be either because the system is so trivial that the chance of a mistake is low, or because you have not really counted all of the models that are implicit. For example, if your design is the assembly of a few IP blocks that you test by putting it on an FPGA, then you have not counted all of the models. The IP was verified before it was given to you, and thus more models exist even if you do not use them directly. When you map the design into an FPGA, you have to subject it to some kind of stimulus. Maybe you can put the FPGA directly into a fully functional system, but you still have to decide how you are going to verify that the operations are indeed working. This brings us to that second model – one that always exists somewhere.

The verification model
In order to verify a design, any design, you have to know what good behavior means. The act of verification is the comparison of two models. This is a very inexact process because even if we could prove that the two models were identical under all conditions, it does nothing to tell us if the design is right, only that the two models are consistent. Think of it this way: with equivalence checking we can determine if a gate-level model is functionally equivalent to an RTL model. It does not require any vectors, and in most cases the operation is exhaustive. However, all it tells you is that the two are the same. If the RTL has a bug, then the gate-level version is guaranteed to contain the same bug. History has shown us that comparing two independently derived models provides a reasonable level of confidence that the design does faithfully represent the specification. The specification, though, is a single point of potential failure in this case, because both the design and the verification model are derived from it. Verification will catch most places where the specification is incomplete, ambiguous, or suffers from a host of other potential problems. But if the specification is just plain wrong, then, just as with equivalence checking, we can guarantee (unless a fortuitous error made by one of the teams highlights the problem) that the specification error will be put into the design. Checking that the specification is right is given a different name: rather than verification, specification checking is called validation.
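The equivalence-checking argument can be sketched in a few lines of Python, standing in for real RTL and gate-level models (the 2-bit adder is my invented example, not one from the article): two independently structured models are compared exhaustively, with no vectors to write. The comparison proves only that the two agree; if the shared specification were wrong, both would agree on the wrong answer.

```python
# Two independently derived models of the same spec: a 2-bit adder.
def rtl_add(a, b):
    # Behavioral ("RTL-like") model: just add, keep a 3-bit result.
    return (a + b) & 0b111

def gate_add(a, b):
    # Gate-level model: ripple-carry built from XOR/AND/OR primitives.
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    s0 = a0 ^ b0                          # sum bit 0
    c0 = a0 & b0                          # carry out of bit 0
    s1 = a1 ^ b1 ^ c0                     # sum bit 1
    c1 = (a1 & b1) | (c0 & (a1 ^ b1))    # carry out of bit 1
    return (c1 << 2) | (s1 << 1) | s0

# "Equivalence check": exhaustive over all inputs, no test vectors needed.
# Agreement shows consistency, not correctness of the spec itself.
assert all(rtl_add(a, b) == gate_add(a, b)
           for a in range(4) for b in range(4))
```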

Now let’s say, for a moment, that a company has worked out how to automatically and perfectly derive an implementation from a specification. Does that mean that this second verification model has gone away? Not at all! Verification is still necessary, and if the verification model is derived from the specification as well, then we are back to the case of equivalence checking, except that those fortuitous bugs can no longer happen. There must still be some verification of the specification, and at this point validation becomes absolutely essential. There is no exception to this model. It may be implicit, as in a couple of the examples already discussed; it could be in the engineer’s head, or it could be represented in the physical world. A common example, for a protocol that has been modeled and mapped into an FPGA, is to also model and map the inverse protocol so that the unconverted values can be compared against those that went through both transformations. The inverse function is the verification model, and if the results after these transformations are not identical, then additional logic is placed in the comparison function to verify that the transformations are acceptable. An example of this would be logic that changes the order of operations. In a soft testbench this is handled by the scoreboard – a terrible name, but the one that the industry has chosen to use.
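The scoreboard idea can be sketched in Python (the class shape and the byte-swap "design" are hypothetical, not taken from any testbench library): a reference model predicts expected transactions, a monitor reports what the design actually produced, and the comparison tolerates a change in the order of operations.

```python
from collections import Counter

class Scoreboard:
    """Minimal scoreboard: match predicted vs. observed transactions,
    ignoring completion order."""
    def __init__(self):
        self.expected = Counter()   # transactions predicted by the reference model
        self.mismatches = []        # outputs the reference model never predicted

    def predict(self, txn):
        """Reference model pushes what the design *should* produce."""
        self.expected[txn] += 1

    def observe(self, txn):
        """Monitor pushes what the design *did* produce."""
        if self.expected[txn] > 0:
            self.expected[txn] -= 1      # matched, order ignored
        else:
            self.mismatches.append(txn)  # unexpected output

    def passed(self):
        # Clean run: nothing unexpected, nothing predicted left over.
        return not self.mismatches and sum(self.expected.values()) == 0

# Reference model for a toy "design" that byte-swaps 16-bit words.
def ref_model(word):
    return ((word & 0xFF) << 8) | (word >> 8)

sb = Scoreboard()
for w in [0x1234, 0xBEEF, 0x00FF]:
    sb.predict(ref_model(w))
# Pretend the design completed the transactions out of order.
for out in [0xFF00, 0x3412, 0xEFBE]:
    sb.observe(out)
assert sb.passed()
```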

What was my ½ model?
If you remember, my answer to the question was at least 2 ½ and as many as 4. What is that half model? It is one that was, until fairly recently, always implicit or contained as part of the verification model. Many people are now beginning to see it as a model in its own right. That model is the environment model, and its necessity is not immediately obvious. Think for a moment of a pseudo-random verification environment. The ranges of possible legal values that can be fed into the design have to be defined as a set of constraints. This is a partial model of the environment. In a different design or a different system, those legal constraints may change. If the environment were completely modeled, then the answer would have been 3 models, but in many cases the aspect of the environment that reacts to the outputs of a design and automatically feeds this back to the stimulus generator is not modeled. In other words, in an open-loop verification environment, only part of this model exists. In a closed-loop verification environment, however, the model is complete.
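The stimulus-constraint half of the environment model can be sketched in plain Python in place of a constrained-random testbench language (the opcodes and address range are invented for illustration): an open-loop generator simply samples from the legal set that the constraints define.

```python
import random

# Environment model, expressed as explicit constraints on legal stimulus.
LEGAL_OPCODES = ["READ", "WRITE", "NOP"]   # the only operations the bus allows
ADDR_RANGE = range(0x0000, 0x1000)         # the design only decodes 4 KB

def legal_transaction(rng):
    """Sample one stimulus item from the legal set (open loop: no feedback
    from the design's outputs influences the next item)."""
    op = rng.choice(LEGAL_OPCODES)
    addr = rng.choice(ADDR_RANGE) if op != "NOP" else 0
    return (op, addr)

rng = random.Random(42)   # seeded, so a failure can be reproduced
stimulus = [legal_transaction(rng) for _ in range(5)]

# Every generated item respects the environment constraints by construction.
assert all(op in LEGAL_OPCODES and addr in ADDR_RANGE
           for op, addr in stimulus)
```

A closed-loop environment would extend `legal_transaction` to take the design's responses as an argument, so the next stimulus item depends on what the design just did.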

So now let’s test whether this model always exists. In the case of directed tests, the model becomes implicit: the construction of the input vector sets implicitly contains only legal vectors, unless you are purposely verifying the behavior of the design under illegal conditions. That, however, means you are defining the set of illegal conditions that you want to test for, and this too is part of the environment model. The next possible objection is if you are using formal property checking instead of simulation. That argument does not work either. For a formal verification tool, you still have to provide the constraints on its inputs; otherwise it would attempt to verify the properties under all conditions. In the case of property checking, the verification model is replaced by the set of properties, but this is really no different. So in this regard there is only one simple distinction between simulation and property checking: property checking includes all legal input values or sequences of inputs, while simulation samples from the legal set. That sampling is done either manually or automatically.

The Fourth Model
So now you have heard my arguments for either 2 ½ or 3 models. What is the potential 4th model? This is a newer model, and in some cases one that I do not believe will continue to exist long term. It is necessary today because of a limitation in technology. The model in question is the coverage model. This model provides us with a way of measuring the completeness of verification. In the past, this model was implicit. An example of it may have been code coverage, which looks to see if simulation has activated every line of code in an RTL description. However, with the emergence of pseudo-random vector generation, functional coverage has become an explicit model. A verification engineer now has to determine, and put into a model, all of the aspects of a system’s functionality that must be verified, and how to determine that those functionalities have indeed been exercised. Many companies are grappling today with the creation of these models. Processes and techniques have been designed by a number of consultants and organizations to help ensure that this model is both efficient and effective. If important functionality is omitted from this model, it may provide an artificially high level of confidence in the verification process. If too much is put into this model, then it may result in wasted verification effort, and that in turn means being late to market or having costs that are too high to recoup in the product. This model has to balance a number of factors.
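A toy functional coverage model, loosely patterned on SystemVerilog covergroups but written as hypothetical Python (the class and the bins are mine, for illustration only), shows what the engineer must make explicit: the bins to hit, the crosses between them, and the completeness metric they yield.

```python
from itertools import product

class CoverageModel:
    """Explicit functional coverage: named coverpoints with value bins,
    plus the full cross of all coverpoints."""
    def __init__(self, bins):
        self.bins = bins                              # name -> list of values to hit
        self.hits = {name: set() for name in bins}    # values actually seen
        self.cross = set(product(*bins.values()))     # every combination to hit
        self.cross_hits = set()

    def sample(self, **fields):
        """Record one observed transaction against the bins."""
        for name, value in fields.items():
            if value in self.bins[name]:
                self.hits[name].add(value)
        combo = tuple(fields[n] for n in self.bins)
        if combo in self.cross:
            self.cross_hits.add(combo)

    def coverage(self):
        """Percentage of bins and cross bins that have been exercised."""
        total = sum(len(v) for v in self.bins.values()) + len(self.cross)
        hit = sum(len(v) for v in self.hits.values()) + len(self.cross_hits)
        return 100.0 * hit / total

cov = CoverageModel({"op": ["READ", "WRITE"], "burst": [1, 4]})
cov.sample(op="READ", burst=1)
cov.sample(op="WRITE", burst=4)
# All 4 simple bins hit, but only 2 of the 4 crosses: 6 of 8 = 75%.
assert cov.coverage() == 75.0
```

Leaving a feature out of `bins` silently inflates the percentage, which is exactly the "artificially high confidence" risk described above.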

So why do I think this model will go away? Simply because all of the information necessary in this model is contained in the verification model and the environment model, and it is only the lack of technology and tools that requires it to be generated manually today. The other reason for its existence is the last point mentioned in the previous paragraph: economics. An automated system is likely to extract too many coverage points – in fact, the exhaustive set as represented by the verification model. Not all functionalities are created equal, and when time pressures exist, we must prioritize the features that need the most verification.

The more models the better
If it were not for economics, most people would welcome even more models, as each additional independent model provides an extra level of confidence in the verification process and thus in the ultimate quality of the design. However, I do not believe that we can ever go below the 2 ½ model minimum that I have defined here, and I would argue that for companies using advanced verification methodologies today it is two models and two halves.

Let me know if you have come up with a different answer. I would love to be proven wrong and to be told that someone has worked out how to do it reliably with less than 2 ½ models. I challenge you all to prove me wrong.

Brian Bailey – keeping you covered
