Scientific Investigations

Scientific investigation blends explanation and prediction. Specifically: a scientific explanation can also function as a reason to think some prediction reliable, in relevantly similar circumstances.

For example, the explanation of the hard-starting car is not something we would use as a supporting resource in a Request for Prediction. It solves a particular case, and aspires no further. Similarly, my explanation of who stole my bicycle (I’m sure it was Roger; he once said that if it weren’t my bicycle, he would steal it) would not support a prediction, and is therefore not “scientific”. My explanation of why I’ve never seen an apple fall upwards, by contrast, is scientific. Notice that the case is general: because I ask about apples rather than this apple, we expect that my explanation will have some use within a prediction as well.

In the sciences, we often aim to explain novel phenomena, sometimes puzzling enough to call mysteries. Say we’ve developed a new composite material. We might be interested in its electrical properties, such as its ability to allow electricity to flow through it. To determine this, we would design an experiment, collect evidence (data), and then tell a story, likely about the physical structure of the material, that makes sense of the data. Our conclusion will not only explain the evidence (data), but will also establish a pattern we can use in a prediction of future results. Ideally this pattern will contribute to our knowledge of the electrical resistance of certain kinds of exotic materials.

When we design experiments, or tests of any sort, we describe our expected results prior to any actual testing. Upon performance of a test, if our expectation (or hypothesis) is not met, then we troubleshoot. If the expectation is satisfied, this gives us reason to think the best answer to our lead question will be useful within a prediction as well. We will consider this latter case below. First, let’s look at unmet expectations.

Example 2.5.1
Resistivity in Material A
First, regarding the concept of resistivity: all the stuff around us lets electricity through to some degree or other, with countless shades of electrical flow between “not at all” and “very easily”. Glass and plastic don’t allow electricity through, which is partly why we coat wires in plastic, for example. Copper lets electricity through easily, which is why we make wires out of it. The extent to which a material resists the flow of electricity is its resistivity.
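For readers who want the standard quantitative definition (this is basic physics, not something the argument here depends on), resistivity relates a sample’s measured resistance to its geometry:

\[ R = \rho \, \frac{L}{A} \]

Here R is the resistance of the particular sample, ρ (the resistivity) is a property of the material itself, L is the sample’s length, and A is its cross-sectional area. Resistance belongs to a particular object; resistivity belongs to the material.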
Say we want to investigate the resistivity of an exotic material; let’s call it Material A. We hypothesise that at very low temperatures, electrons (which is to say, electricity) will move differently through Material A than at higher temperatures. We might make a prediction about that movement, say that at temperatures below -100°C, resistance drops significantly. We expect to see corresponding results.
We design an experiment, including isolating the material in a controlled environment. We do this so that we can eliminate a wide range of contextual considerations from our analysis later. This makes it easier for others to try the same experiment.
A common way to control for context is to use what is called a “vacuum chamber”, inside which we can keep the “air” pressure constant. Then we apply an electrical charge at one end of the sample of material, and measure how much charge passes through it. We record data.
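To make the procedure concrete, here is a minimal sketch in Python of how the data collection might go. Everything named here is hypothetical: set_temperature, apply_voltage, and read_current stand in for whatever interface a real lab setup would provide, and the readings are simulated (they produce the anomalous result described below, the same reading at every temperature) rather than real. Where the text speaks of applying a charge, the sketch applies a fixed voltage and infers resistance from Ohm’s law (R = V / I), one simple way the measurement could work.

```python
# Minimal sketch of the data-collection loop. The instrument
# functions are hypothetical stand-ins for a real lab interface.

def set_temperature(celsius: float) -> None:
    """Hypothetical: bring the chamber to the target temperature."""

def apply_voltage(volts: float) -> None:
    """Hypothetical: apply a fixed voltage across the sample."""

def read_current() -> float:
    """Hypothetical: read the current through the sample, in amperes."""
    return 0.002  # simulated: the same current at every temperature

APPLIED_VOLTS = 1.0
records = []

# Step down from room temperature to well below -100 °C, recording as we go.
for celsius in range(20, -160, -20):
    set_temperature(celsius)
    apply_voltage(APPLIED_VOLTS)
    resistance = APPLIED_VOLTS / read_current()  # Ohm's law: R = V / I
    records.append((celsius, resistance))

for celsius, resistance in records:
    print(f"{celsius:>5} °C: {resistance:.0f} ohms")
```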
Upon analysis of the data, we find that our expectations went unmet. This is anomalous behaviour, and we need to investigate. A lead question will guide our troubleshooting.
LQ: Why didn’t we measure a drop in resistivity at low temperature?

Notice that we do not ask “Why didn’t the resistance drop?” A question of this form assumes that the equipment did not fail. In order to preserve the possibility of equipment failure, we ask instead about the data and the instrumentation combined: in this case, why didn’t we measure a drop.

The various data we collected are the evidence to explain. Let’s list them as follows:

EE1: Sensors recorded the same electrical resistance at all temperatures.
EE2: Sensors indicated constant pressure in the environment.
EE3: The charge generator indicated that we applied the expected charge.

Most experiment designers in the field will know that a material’s resistivity and its temperature can be connected. But for a more general audience, it is wise to provide this as a resource. It is, in fact, our expectation.

ER1: Low temperatures sometimes alter a material’s electrical resistance.

Given this evidence and this resource, we can come up with a few possible answers to the question of why we didn’t measure what we expected:

RA1: The vacuum chamber failed.
RA2: One of the sensors failed.
RA3: There were impurities in the material.
RA4: This is what happens.

With any of the first three answers, we can tell stories about why our expectations went unmet. In the first case (RA1), we might say that at whatever pressure was actually in the vacuum chamber at the time, there is no change in resistivity, but that under different pressures (for whatever reasons) we would see the changes we expected.

We might say that, in fact, the change we expected did happen, but that the failure of a sensor (RA2) prevented us from observing it.

Or we could say that an impure sample (RA3) will not behave as expected, for a variety of structural reasons we could provide (if we required more detail).

We include RA4 because it might simply be that our expectation was off; our hypothesis might have been wrong. And this is what we call falsification (which we will return to in more detail later). If our hypothesis was wrong, then it might be that our observations were simply observations of what happens. In this case, the results would become part of a general case. The results, that is, would be part of a pattern that we could use as a supporting resource within a prediction of the future behaviour of Material A.

Without any further evidence of broken equipment or faults in the material, RA4 is our best explanation.

BE: We didn’t measure a drop in the resistivity of Material A because this is what happens.

We should ask, though: is this enough? And of course, the answer is no. In a case where expectations went unmet, that is, where there has been an anomaly, we’re wise to follow up our investigation. First, it would be best to verify that all the equipment works. Once verified, those facts would become Explanatory Resources in our formalisation.

Then, we could look closely at the material to determine whether there was a problem in its production, leaving it “impure”, which would mean we weren’t dealing with the material we expected to begin with.

Let’s say that we test the equipment and it all passes, and that we test the material and it appears to be pure. Then we can say with some assurance:

Example 2.5.2
Resistivity in Material A, formalised
LQ: Why didn’t we measure a drop in resistivity at low temperature?

EE1: Sensors recorded the same electrical resistance at all temperatures.
EE2: Sensors indicated constant pressure in the environment.
EE3: The charge generator indicated that we applied the expected charge.

RA1: The vacuum chamber failed.
RA2: One of the sensors failed.
RA3: There were impurities in the material.
RA4: This is what happens.

ER1: Low temperatures sometimes alter a material’s electrical resistance.
ER2: All the equipment was functioning correctly.
ER3: Material A was a pure sample.

BE: We didn’t measure a drop in resistivity in Material A because resistivity doesn’t drop in Material A at low temperatures.

In this case, a falsification of our hypothesis led to a general statement about what happens to Material A under some circumstances. This general statement, which is our Best Explanation, both explains the results of our experiment, and can be used as a supporting resource within a prediction of what will happen should anyone else try the same experiment later.
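For those who find it helpful to see the whole structure at a glance, here is one hypothetical way to encode the formalisation above as a simple Python data structure. Nothing about this encoding is official notation; the field names simply follow the labels used in the text (LQ, EE, RA, ER, BE).

```python
from dataclasses import dataclass

@dataclass
class Formalisation:
    lead_question: str                # LQ
    evidence_to_explain: list[str]    # EE1, EE2, ...
    possible_answers: list[str]       # RA1, RA2, ...
    explanatory_resources: list[str]  # ER1, ER2, ...
    best_explanation: str             # BE

material_a = Formalisation(
    lead_question="Why didn't we measure a drop in resistivity at low temperature?",
    evidence_to_explain=[
        "Sensors recorded the same electrical resistance at all temperatures.",
        "Sensors indicated constant pressure in the environment.",
        "The charge generator indicated that we applied the expected charge.",
    ],
    possible_answers=[
        "The vacuum chamber failed.",
        "One of the sensors failed.",
        "There were impurities in the material.",
        "This is what happens.",
    ],
    explanatory_resources=[
        "Low temperatures sometimes alter a material's electrical resistance.",
        "All the equipment was functioning correctly.",
        "Material A was a pure sample.",
    ],
    best_explanation=(
        "We didn't measure a drop in resistivity in Material A because "
        "resistivity doesn't drop in Material A at low temperatures."
    ),
)
```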

(Ideally, in the sciences, others would try this experiment and come up with the same results. This would establish a pattern, which would underwrite predictions we make using the principle established in the experiment. That is, predictions about Material A would be reliable, given an established pattern of its behaviour.)

A Best Explanation repeated a sufficient number of times establishes a pattern. A series of explanations following a pattern acquires predictive force.
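As a toy illustration (the numbers below are invented for the example, not data from anywhere), one way to picture “repeated a sufficient number of times” is a simple consistency check over repeated results:

```python
# Toy illustration: do repeated measurements agree closely enough
# to count as an established pattern?

def establishes_pattern(measurements: list[float], tolerance: float = 0.05) -> bool:
    """True if every measurement lies within `tolerance` (as a
    fraction of the mean) of the mean of all measurements."""
    mean = sum(measurements) / len(measurements)
    return all(abs(m - mean) <= tolerance * abs(mean) for m in measurements)

# Five hypothetical repetitions of the experiment, reporting
# the measured resistivity of Material A in arbitrary units.
reported = [1.02, 0.99, 1.01, 1.00, 0.98]
print(establishes_pattern(reported))  # True: the repetitions form a pattern
```

Nothing this mechanical settles what counts as “sufficient”, of course; the sketch only illustrates the idea of agreement across repetitions.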

When we use explanations to underwrite a claim that this will happen, we are, essentially, making recommendations. Recommendations include answers to questions of the form “What should I do?” and “What will happen?”. Let us turn our attention now to a new structure that uses much of what we’ve learned here: Recommendations.

