
The Dangers of "Reason" (Challenges of consensus)


KIT THORNTON
SEP 12, 2023



Reason is the pivot around which all civilization, indeed, all human interaction hangs. Without it, we are not able to truly cooperate, truly communicate, or even co-exist in a way that is productive. But it is not without hazards, and can be easily misused.

We can’t successfully build a society on “gut hunches” or “intuition,” because these are not communicable. You can’t feel my “hunch,” and you cannot share my intuitive insight. I might be able to manipulate or intimidate you into adopting my point of view, but I can’t give you legitimate reasons that you can verify. You either decide to have “faith,” and accept my conclusions without proof, or you reject them if you don’t share my emotional reaction. Once you start trying to convince me with evidence, or point out that something in my position doesn’t make sense, we’re in the realm of reason, not intuition.

The problem with intuition is that it stops discussion dead in its tracks. There are millions of people out there with sincere intuition - “from their heart,” or their “guts” that:

Black people are inherently inferior to white people,
Non-white immigrants are criminals,
Queer people are a danger to their children,
Poor people are simply lazy,
Vaccinations are the work of Satan, or other nefarious forces; and/or
More military weapons in as many hands as possible make us safer.

If these “intuitions” are sincerely held, and we can assume they are, then how are you to answer them without evidence or argument? If you have ever tried to convince a true believer of anything that goes against their “heart,” you know the rejection of evidence or argument that comes with such “heartfelt” convictions.

So, let’s get a working definition of “reason.”
Reasonable conclusions are based on:

Evidence that is verifiable from sources that we agree are legitimate, and
Arguments that logically follow from their premises.

When I make an evidence-based argument to you, you can check my evidence’s validity. We can discuss whether my sources are good, and whether my argument is sound. This allows us to come to some sort of reasonable conclusion.

So, problem solved. If everyone is being “reasonable,” we’ll be able to come to consensus.

Not so fast, Aristotle.

First, there’s the obvious problem. We may not be able to come to agreement about what constitutes legitimate evidence.

In legal matters, we have arbiters - judges, who will decide whether evidence is admissible by the rules we’ve established. It isn’t perfect, but it does allow us to get to a conclusion, which can be reviewed by other judges. Of course, this is only as good as the honorable commitment of the arbiters to reasonable arguments and fact-based conclusions, but that’s a failure of duty by the arbiters, not a failure of reason itself.

In our day-to-day lives, we have no arbiter. We must work this out among ourselves. If we are to do so in a reasonable fashion, we have to conduct our discussion by a set of rules:

Evidence can only be rejected if it can be shown to be invalid via better evidence,
Arguments can only be answered by showing them to be invalid on their face (usually as the result of informal fallacies, such as “ad hominem” or “ad populum” attacks), or
The argument is unsound: its conclusion does not logically follow from its premises (circular reasoning, begging the question, etc.)

There are obviously a number of things to consider here. What constitutes “better evidence?” A scientific paper, written by an expert and subject to peer review, is generally considered more reliable than the off-gassing of some non-expert on the Internet. Learning which sources are more credible than others is a matter of testing their statements for accuracy, understanding their methodology, and, least reliably, asking why they are making this particular argument. The danger of “ad hominem” (arguing against the messenger, rather than the argument) is obvious here, and it must never be relied upon alone to refute evidence or argument.

So if reasoning is so difficult to do, why do we do it? Because it is not the conclusions we come to that matter, it is the process itself. We uphold the idea of respecting the reasonable conclusions of others, and the primacy of evidence. Without these things, we are each making our own guesses in the dark. An individual or a society that operates this way is running through the woods wearing a blindfold. You may get some way through sheer luck, but sooner or later a root, or a rock, or a tree trunk is going to inflict the consequences of your unwillingness to see with reasonable clarity.

But there is another hazard of “pure reason” that must be kept in mind. It is perfectly possible to play by the rules of reason and come to pretty hideous conclusions. For example, if you accept the premise that life is more suffering than joy, then you can very reasonably conclude that the way to eliminate suffering is to eliminate humanity. All of us. Then, no more suffering. Voilà!

What’s the problem with this argument? It’s perfectly “reasonable.” But, obviously, it’s not an outcome we would desire.

Reason is a tool, not an end in itself. It allows us to see clearly to our goals, but it does not, in itself, determine the worthiness or desirability of those goals. There are several ways of approaching this problem, but we’ll focus on three.

Some thinkers have posited the idea of an inherent moral sense that guides us prior to reason. Immanuel Kant is the best known of this cadre. His argument can be usefully oversimplified into an intuitive moral maxim that he called the “categorical imperative” - if everyone operated by the rules I’m making this decision by, what sort of world would we have?

For example, what if everyone felt free to lie to their own advantage? Well, it’s hard to see how any society, even any friendship could exist without some level of trust. So you shouldn’t lie.

The problem comes with an example Kant himself gives, known in the philosophy game as “Kant’s Axe.” Suppose you borrow an axe from a friend, promising to give it back whenever he asks for it. In the meantime, your friend has gone mad, and is now knocking on your door, demanding his axe back because his family has been possessed by demons and he has to hack off their heads to save the world.

Kant argues that you should give him his axe back. The moral imperative of keeping your promise is on you; what he does with it is, morally speaking, on him.

I might posit a “categorical imperative” that asks what sort of world it would be if we gave madmen axes whenever they asked for them, but I suppose, considering life in the United States after Sandy Hook and Columbine, we already know the answer to that one.

But Kant’s logic here is perfectly consistent. Within his argument, he can’t come to any other conclusion. If you accept his premise, and his argument, his conclusion is inescapable.

I think that’s nuts. But we’ll come back to it.

To take another answer to the values problem, we will consider Utilitarianism (which is actually an instance of the “first principles” argument, treated below). Utilitarianism is described as seeking “the greatest good for the greatest number.”

We’ll illustrate Utilitarianism via a philosophical “golden oldie,” the Trolley problem.

[Full disclosure - I bloody well despise the trolley problem. It assumes far too much about moral decision making and forces a false dichotomy, but then again, Philippa Foot, who first proposed it, didn’t much like how it was misused either.]

For those who haven’t been exposed to this rotten old chestnut, the Trolley Problem proposes that you are in charge of a switch that will divert a trolley onto one of two tracks. On one track, the one the trolley is currently on, five people are tied to the rails. On the other, one person is tied to the track.

Because this is a particularly reductive and fundamentally stupid thought experiment, you have only one possible decision to make. You can either flip the switch, killing one person, or leave the switch as is, killing five people. No avoiding the decision, these are your only two options.

The utilitarian would argue that you should flip the switch, killing the single person, rather than allowing the trolley to make a much bigger mess of five people. Sounds good, right?

Here’s the problem, and it’s evident in the statement of Jeremy Bentham, the first and heaviest hitter in the utilitarian lineup (emphasis mine): “The greatest good for the greatest *number*.”

Every one of the potential wheel greasers is just a number. It reduces moral decisions to a calculus. What if the one is a doctor working on a cure for cancer, and the five are convicted, unrepentant war criminals? There is no true value content to this moral mathematics.
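The “calculus” objected to here can be made literal in a few lines. This is my own toy sketch, not anything from Bentham: a chooser that sees only head counts, never persons, which is precisely the complaint.

```python
# A toy utilitarian "calculus" for the trolley problem.
# Each person is reduced to a count; the doctor and the war criminal
# are indistinguishable to this function, which is exactly the point.

def utilitarian_choice(track_a: int, track_b: int) -> str:
    """Return which track ("A" or "B") to send the trolley down,
    minimizing the number of people struck."""
    return "A" if track_a < track_b else "B"

# The classic setup: one person on track A, five on track B.
print(utilitarian_choice(1, 5))  # the calculus says: divert onto A
```

Note that no amount of extra information about *who* is on each track can change this function’s answer; to represent such values, you would have to smuggle them in as weights, at which point the “greatest number” is no longer doing the moral work.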

Consider a situation where one person has a compound in their system and organs that is a cure for some dread disease. But in order to extract it, you have to subject that person to vivisection. They will die a horrible, painful death, but many thousands will be saved. The proposed savior is not keen on the idea, and protests vigorously.

The utilitarian has no good way of opposing this murder. Performing this act would disregard many moral values, such as not torturing helpless people to death, bodily autonomy, and the limits of power over individuals. If the person with the compound in their system is willing, that’s another question, but what if they’re not? You are, in the name of morality, strapping them to a table and taking them apart.

See, I can make stupid, morally unsolvable forced dichotomies, too. Take that, Professor Foot!

The utilitarian might protest that “the greatest good for the greatest number” includes such ideas as bodily autonomy and not vivisecting people, but then, they have nowhere to go. Almost any action that benefits a majority disadvantages a minority. If the majority good always wins, then whence the humanity of the minority?

Another way of approaching the “reasoning for what?” problem is the idea of “first principles.” A first principle is a maxim, a guideline that operates pre-logically. No argument that violates these first principles is held valid by the holder of the first principle.

Mistakes are often made here. “Thou shalt not murder,” for example, is not a first principle. There’s something behind it. One might ask, “Why not murder?” and at that point, you’re back to logical arguments, unless you say, “because seeing people murdered distresses me,” the emotional argument. This won’t do because, again, emotional reactions are not arguments. The guy with the axe that doesn’t belong to Kant might well say, “It doesn’t bother me at all.” Then, well…

A first principle is one that is considered a value in itself. It contains the value statement. For example:

“All human action should be consciously directed at human flourishing.” A “why” question isn’t really relevant. The value of “human flourishing” is inherent in the statement itself. It is “pre-logical,” and it is understood that any reasoning the proposer of the principle does is directed toward that end. You can further define “human flourishing,” but human flourishing is the goal in itself.

A problem here is that a first principle is only as good as the principle. “My first principle is that the Aryan race (whatever that is) is superior to all others and is destined to rule the world” is not a useful first principle, because it is immediately subject to a slew of value questions, and it relies on a “destiny” argument, which is automatically invalid - the future hasn’t happened yet, and proposing it as a fact is “prima facie” (on its face) invalid.

There is the possibility of really lousy first principles that are valid on their face. But by proposing first principles, the speaker discloses the limits of reason in their thinking process. A first principle is a guardrail, beyond which the speaker has informed you they will not go. There is a certain useful honesty in this that facilitates further discussion (or points out the futility of it). Unlike “intuitive” thought, it respects reason as a process; as a tool to get at what the speaker considers a worthy goal.

And first principles are subject to logical challenge, though not to contradiction. A person who states the “human flourishing” principle is still open (if they’re doing it right) to questions about what “human action” or “human flourishing” can reasonably be taken to mean. What the speaker is saying is that they will not consider valid any line of thought that runs contrary to human flourishing.

This obviously takes a lot of work to get right. The good news is that the more of it you do, assuming you are approaching it honestly and rigorously, the better you get at it. First principles, once established, can in fact be changed or disregarded - this is reason, not religion - once one has been convinced they don’t make sense, or that they don’t serve their intended purpose. This has the effect of pulling a card out of the base of a house of cards, collapsing one’s entire value structure, but it is possible. Most of us will have to do it at some point.

Here, to sum up, is the danger of reason: failing to answer the question, “reasoning to what end?” There are always those who dishonestly dress up nonsense, or even active evil, in reason’s clothes, but that’s not the worst danger, in my view. The most serious danger comes when we try to reason in a vacuum, or without an end in mind. Solving the population problem with an engineered virus might make sense if you don’t consider human life and suffering significant.

This is the A.I. nightmare scenario - a superintelligence that operates on pure reason without first principles. Why we wouldn’t program first principles in at the outset - Asimov’s Laws of Robotics are a flawed effort to do just that - is anyone’s guess, but it is a useful metaphor. Reason without values, and without the willingness - within guidelines - to think seriously about one’s premises, can unleash moral horrors clad in the armor of sound arguments.

Hic sunt dracones, my friends.
