Evaluating Our Models

The obvious way to evaluate a model is to test how well its predictions match outcomes observed in real-life scenarios. But not all models can be tested in that manner, and some scenarios are not obvious enough to be anticipated.

Models, by their nature, are abstractions of reality. If a model were an exact copy of a process, it could be expected to function exactly as the original does. By abstracting some of the key elements of a process, we assume we can reproduce the same functionality with less complexity and less effort.

In abstracting key elements, we leave out others. If the omitted elements are not involved in a given scenario, the model may still predict outcomes well. But when elements come into play that were not captured in the abstraction, the model may fail.

Testing models against anticipated scenarios remains viable, but it is incomplete. We also need to define a range of scenarios that are possible but unanticipated. And we need to carefully document the abstraction of elements that produced the model and the range of elements the abstraction left out.
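The gap between anticipated and unanticipated scenarios can be shown with a small sketch. Everything here is hypothetical and not from the text: a made-up process with a quadratic element, abstracted as a linear model fit only on the scenario range we anticipated. Inside that range the abstraction predicts well; outside it, the omitted element dominates and the model fails.

```python
def true_process(x):
    # The real process: mostly linear at small x, plus a small
    # quadratic element (the part our abstraction will leave out).
    return 2.0 * x + 0.05 * x * x

def fit_linear(xs, ys):
    # Ordinary least-squares fit of y = a*x + b, closed form.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

# Build the model from the anticipated scenarios only: x in [0, 10].
xs = [i * 0.5 for i in range(21)]
model = fit_linear(xs, [true_process(x) for x in xs])

def relative_error(x):
    return abs(model(x) - true_process(x)) / true_process(x)

print(f"in-range error (x=5):      {relative_error(5.0):.1%}")
print(f"out-of-range error (x=100): {relative_error(100.0):.1%}")
```

On the anticipated range the relative error stays within a few percent; at x = 100, where the omitted quadratic element dominates, the prediction is off by more than half. Testing only against anticipated scenarios would never reveal this failure.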