Explanations are “backward-looking” in that they deal with events of the past. We don’t explain the future, and we don’t predict the past.
The answers to these sorts of explanatory questions differ from answers to such questions as “Why did you wear a black shirt?” or “Why did you ask them to leave the tomato off your cheeseburger?” These questions are not investigative requests; they are requests for justification or excuse, which we will treat later.
We can gesture toward the distinction by saying that, in the cheeseburger and black shirt cases, only one person can judge the best explanation; these are personal questions, as opposed to the public questions that will occupy us here. Discerning the difference between these is its own skill, and parallels the skill of discerning the difference between opinion and argument.
We will treat two sorts of explanatory results here: analyses of problems and answers to mysteries. With respect to problems, we investigate why something happened. With respect to mysteries, we investigate who did it or what happened.
When investigating problems, we call the process troubleshooting. We could call this “diagnosis”, much as we call figuring out why a throat is sore a medical diagnosis, or much as we run diagnostic checks on a car at the shop. Similarly, we say we are “troubleshooting” when we try to figure out why the television won’t turn on or why the ice maker stopped dropping cubes. Here we pick “troubleshooting” because it conveys a sense of trying things out. We’re interested in the fitness of explanations against shifting sets of evidence. In the course of an investigation, evidence comes and goes, and “troubleshooting” captures this well.
When investigating mysteries, we call the process detecting. It’s worth distinguishing troubleshooting from detecting because, generally, we focus our efforts differently in each process. We troubleshoot when anomalies arise. We detect unexpected human (and sometimes otherwise) behaviours, from breaches of protocol to transgressions of laws.
With respect to anomalies, we will separate them into systematic and conditional. We call anomalous behaviour systematic when the behaviour follows a pattern. We call anomalous behaviour conditional when, instead of identifying patterns, we identify single causes of single behaviours. Given sufficient time and resources, we might find that a conditional anomaly becomes a systematic anomaly by our having discovered a pattern. Discovering such a pattern depends on how we follow up an investigation and what further evidence we discover. The system we develop here is sensitive to this possibility, and we should sensitise ourselves to the possibility as well.
With respect to detection, we will develop strategies for following up evidence. Here we will get creative with the questions we ask and the hypothetical situations we entertain. The process of detection consists largely of strategic and creative evidence collection. The detective’s disposition toward investigation is distinctive. The evidence itself, and likewise the lead questions, are shifting targets. The wildest of rivals can become best explanations as detectives uncover previously overlooked evidence and resources.
The Nature of Explanation
In our ordinary talk, explanations can be clarifications, reasons, descriptions, interpretations, and so forth. When we want to explain a thing, by which we’ll usually mean an action or result, the thing we explain is often extra-ordinary. That is, we usually ask for explanations when something out-of-the-ordinary happens. When everything is as you expect, you tend not to ask for explanations.
Scientific investigation is an exception to this, which is partly why we will give it a special place here. Often, scientific explanation is of-the-ordinary. What stands out is that we do not have an explanation of some phenomenon, and genuine curiosity motivates our desire to remedy that.
For example, we notice that after easterly storms, lots of muck washes up on the beach. That is, muck on the beach is normal behaviour after an easterly. Our investigation into this ordinary behaviour is a scientific-style investigation, meaning an explanation of a seemingly-normal result. In these sorts of cases, the evidence we explain will not be extra-ordinary in the same way as the troubleshooting cases we’ve seen so far. In designing scientific investigations, one must exercise a skill at picking out what to explain in the absence of anomaly. Again, we will develop these points further later in the text.
To make the comparison of kinds of cases clearer: a hail storm in the middle of summer is something unusual that we might want to explain. The fact that we sleep daily (usually) isn’t. Note that we can almost always imagine circumstances, no matter how exotic, under which an explanation of the most seemingly ordinary things would be useful. But here we’re sticking with normal circumstances, and under normal circumstances, people sleep daily, and it doesn’t hail in the middle of summer.
A simple example of an explanation is determining why a car won’t start. During the investigation, you’ll collect evidence and try to tell a story that connects all the evidence in a reasonable way. Let’s say the car’s headlights have been dimming for the past couple of days and it has been blowing fuses. An electrical problem would explain the evidence nicely.
The Hard-Starting Car
LQ: Why won’t the car start?
EE1: The car won’t start.
EE2: The lights have been dimming recently.
EE3: Fuses have been blowing.
RA1: Electrical failure.
RA2: Out of fuel.
RA3: Fouled spark plugs.
BE: The car won’t start because of an electrical failure.
This case is obvious, and therefore non-controversial, and so it illustrates well the concepts we’re developing. The LQ contains the behaviour that motivates the investigation, namely the fact that the car will not start. The two additional pieces of evidence quite clearly involve electrical problems. Most drivers know enough about cars and lights to recognise that dimming lights and blown fuses signal a problem. There might be contexts in which, even with a case this simple, it would be worth specifying that dimming lights and blown fuses are typically associated with electrical failures. But those contexts are rather rare. Here we choose to present the evidence without further qualification.
By way of illustration, some answers fail to account for the evidence we’ve presented. The amount of fuel in the tank and the dimming of the lights are clearly unassociated. Answers to investigations are meant to explain evidence, and so answers should be at least associated with the evidence, so as to preserve our principle of comprehensiveness. Likewise, the spark plugs have nothing to do with illumination. There might be a story connecting spark plugs to blown fuses, but we would have to tell an exotic tale to make the case. To accept such a tale as better than a general electrical failure would violate the principle that we favour uncomplicated answers.
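The comparison of rivals above can be given a rough, purely illustrative formalisation. The sketch below simply counts how many exhibits each rival plausibly accounts for; the coverage table is our own stipulation for this one case, not a general method for weighing explanations, and it ignores the simplicity principle entirely.

```python
# A toy sketch of comparing rival answers by evidence coverage
# (comprehensiveness only). The coverage table stipulates, for
# illustration, which exhibits each rival plausibly accounts for.

evidence = ["won't start", "dimming lights", "blown fuses"]

coverage = {
    "RA1: electrical failure": {"won't start", "dimming lights", "blown fuses"},
    "RA2: out of fuel":        {"won't start"},
    "RA3: fouled spark plugs": {"won't start"},
}

def best_explanation(evidence, coverage):
    """Favour the rival that accounts for the most evidence."""
    return max(coverage, key=lambda rival: len(coverage[rival] & set(evidence)))

print(best_explanation(evidence, coverage))
# RA1 accounts for all three exhibits; the other rivals account for only one.
```

On this crude tally, RA1 wins because it alone connects all three exhibits; a fuller treatment would also penalise exotic stories, per the principle favouring uncomplicated answers.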
In the end, the clearly superior answer in the hard-starting car example is RA1: there is an electrical failure somewhere in the car. But the story doesn’t necessarily end there. The results of this investigation are a good start on the solution to a larger problem, which is not a request for an explanation. Instead, it is a request for a recommendation. We can frame the larger problem as follows:
How can we get the car started?
Notice that questions of this form (“What should we do?”) look for recommendations. In order to make good recommendations, what we will come to call “reliable recommendations” in our technical vocabulary, we often need explanations of what’s happening in the circumstances. We’ll explore the relationship between explanation and recommendation later. For now, keep in mind that once we’ve identified best explanations, depending on context and purposes, more work will remain.