Request for Guidance

We ask for guidance when we are unsure how to proceed under some given circumstances. To decide which action is best, we compare the Rival Recommendations against things we hope will happen and things we would definitely avoid. Here we use the technical terms “Aspirations” and “Aversions” to capture this distinction.

Formalisation of Request for Guidance

A Request for Guidance (RFG) is a well-framed question that asks how to act or proceed in some given context.

Aspirations (ASP) are descriptions of outcomes you would like to see. These might include statements of preference or virtues.

Aversions (AVR) are descriptions of outcomes you would like to avoid. These might include moral principles or prohibitions.

Rival Recommendations (RR) are suggested courses of action in the given circumstances.

Supporting Resources (SR) are descriptions of the circumstances under which you are asking for guidance.

Best Recommendation (BR) is the recommendation we offer as best among the possibilities.
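
To make the structure concrete, here is a minimal sketch in Python. The class and field names are our own illustrative choices, not part of the formal vocabulary.

    from dataclasses import dataclass, field

    @dataclass
    class RequestForGuidance:
        question: str                                                     # RFG: the well-framed question
        aspirations: list[str] = field(default_factory=list)              # ASP: outcomes hoped for
        aversions: list[str] = field(default_factory=list)                # AVR: outcomes to avoid
        rival_recommendations: list[str] = field(default_factory=list)    # RR: suggested courses of action
        supporting_resources: list[str] = field(default_factory=list)     # SR: the circumstances
        best_recommendation: str | None = None                            # BR: chosen once rivals are weighed

    # A hypothetical instance, anticipating the plumber example below:
    plumber_rfg = RequestForGuidance(
        question="Which plumber should I use?",
        aspirations=["available on Monday", "licensed"],
        aversions=["specialises in commercial installations"],
    )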


Aspirations

Aspirations (ASP) are positive statements of preference. For example, if I’m asking for guidance on which plumber to use, I might state that I want a plumber who is available on Monday and is licensed. Preferences are violable. It might be that the only licensed plumber is available only on Tuesday. Though this violates my preference for Monday, I’m willing to entertain the recommendation.


Aversions

Aversions (AVR) are negative statements of outcomes you will avoid. Should a recommendation involve one of these outcomes, you would reject it. For example, I do not want a plumber who specialises in commercial installations. If you suggest a commercial plumber, I will immediately reject your recommendation. In our terminology: a recommendation that includes an Aversion is unreliable.

In this way, Aversions are not simply negations of Aspirations. Aspirations are boxes I would like to see ticked; Aversions are deal-breakers.
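
The asymmetry can be put as a simple selection rule: Aversions filter, Aspirations rank. The sketch below encodes the plumber example under that rule; the predicates and data are hypothetical stand-ins for the judgments a real request would involve.

    # Aversions are hard constraints: one hit makes a rival unreliable
    # and it is discarded. Aspirations are soft: they only order the
    # rivals that survive.
    def choose_best(rivals, aspirations, aversions):
        reliable = [r for r in rivals if not any(avr(r) for avr in aversions)]
        if not reliable:
            return None  # every rival triggered a deal-breaker
        return max(reliable, key=lambda r: sum(asp(r) for asp in aspirations))

    plumbers = [
        {"name": "A", "day": "Wednesday", "licensed": False, "commercial": False},
        {"name": "B", "day": "Tuesday",   "licensed": True,  "commercial": False},
        {"name": "C", "day": "Monday",    "licensed": True,  "commercial": True},
    ]
    aspirations = [lambda p: p["day"] == "Monday",   # prefer Monday
                   lambda p: p["licensed"]]          # prefer licensed
    aversions = [lambda p: p["commercial"]]          # commercial specialist: reject

    # C ticks both aspiration boxes but triggers the aversion, so it is
    # rejected outright; B, licensed but available only on Tuesday, wins.
    print(choose_best(plumbers, aspirations, aversions)["name"])  # -> B

Note that rejecting C is not a matter of weighing: no number of ticked boxes can offset a deal-breaker.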


Moral Recommendations

A Moral Recommendation is a particular species of Request for Guidance. In these cases, our Aspirations and Aversions take the form of moral principles or “norms”, as mentioned briefly above.

In Requests for Solution, we explain our Distinguishing Conditions; an explanatory investigation, we said, is embedded in our structure.

In Requests for Guidance, we do not explain our Aspirations and Aversions. However, we can still embed an investigation whose result is a principle, norm, or standard. That is, within a Request for Guidance, an important part of the contextual considerations will be why you aspire to, or are averse to, a particular kind of outcome.

As an example, consider the following:

RFG: How should I treat the animals on my farm?

I might cite that I aspire to treat animals in such a way that they do not suffer. (Notice here I have not made this an aversion, suggesting that a Rival Recommendation that includes some degree of animal suffering might be acceptable to me.) It is fair enough to ask why I feel this way. That is, we might investigate my aspiration.

LQ: Why do you not want your farm animals to suffer?

In this case, the evidence is rather minimal: I simply don’t want them to suffer. For clarity, let’s specify that:

EE1: I do not want the animals to suffer.

A few Rival Answers might be:

RA1: I don’t like the noises that suffering animals make.
RA2: I don’t like being cruel to animals.
RA3: The animals deserve a pain-free life.

To decide which answer is the Best Explanation, I might offer some Explanatory Resources:

ER1: Farm animals are known to feel pain, i.e., are “sentient”.
ER2: Sentient creatures deserve pain-free lives.

And we might conclude:

BE: I don’t want the animals to suffer because they deserve a pain-free life.
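
If it helps, the embedding itself can be pictured as data: the Best Explanation produced by the descriptive investigation becomes a resource inside the surrounding Request for Guidance. Again, a rough sketch with our own illustrative names:

    from dataclasses import dataclass

    @dataclass
    class Investigation:
        lead_question: str                 # LQ: descriptive, no "should"
        rival_answers: list[str]           # RA1, RA2, ...
        explanatory_resources: list[str]   # ER1, ER2, ...
        best_explanation: str              # BE

    farm = Investigation(
        lead_question="Why do you not want your farm animals to suffer?",
        rival_answers=[
            "I don't like the noises that suffering animals make.",      # RA1
            "I don't like being cruel to animals.",                      # RA2
            "The animals deserve a pain-free life.",                     # RA3
        ],
        explanatory_resources=[
            "Farm animals are known to feel pain, i.e., are sentient.",  # ER1
            "Sentient creatures deserve pain-free lives.",               # ER2
        ],
        best_explanation="I don't want the animals to suffer because "
                         "they deserve a pain-free life.",               # BE
    )

    # The descriptive result feeds the normative request as a resource, e.g.:
    # farm_rfg.supporting_resources.append(farm.best_explanation)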

Notice that within this investigative exercise, the LQ does not include “should”. Questions including “should” signal Requests for Guidance; Lead Questions do not include “should”. That is, the results of an investigation, led by an LQ, are descriptive. The results of a Request for Guidance, here led by the RFG, are normative.

Notice also that you might be tempted to ask about ER2. You might ask “Why should we try to make sentient creatures’ lives pain-free?” If we were to structure this request formally, as an RFG, we would inevitably include another resource about which we could ask yet another normative question; we could launch yet another Request for Guidance. Moral questions run a risk of spiralling in this way. To avoid a vicious spiral, we would need to appeal, ultimately, to fundamental moral principles. At some point, explanations must come to an end; we must get on with the task of figuring out which is the best recommendation.

Endless explanations might result in an ideal or perfect recommendation, but that is not our purpose. Fallibility is not a threat to our systematic thinking. Infallibility might be the purpose of deep and abstract ethical theory, but that is a wholly separate topic from what we’re treating here. Trying to treat that topic here would not only be distracting, but would not advance the cause of learning how to structure a Best Recommendation.
