
Knowledge, Reality, and Value - by Michael Huemer
ISBN: 9798729007028
Date read: 2025-08-01
How strongly I recommend it: 6/10
(See my list of 430+ books for more.)
Go to the Amazon page for details and reviews.
Perfect introduction to academic philosophy. Real philosopher’s definitions. Differs from the more self-help style philosophy I love. Great examples of clear thinking.
my notes
People do not acquire concepts by hearing definitions.
We acquire concepts by seeing examples.
(You acquire the concept “green” by seeing examples of green things, not by someone trying to tell you what green is.)
A proposition is the thing that you believe, not the belief itself.
“We will colonize Mars” and “Nous allons coloniser Mars” are not the same sentence, but they express the same proposition.
Valid or invalid:
An argument is said to be valid (or “deductively valid” or “logically valid”) when the premises entail the conclusion.
That is, it would be impossible (in the sense of contradictory) for all the premises to be true and the conclusion to be false.
Note: This is not the ordinary English usage of “valid”; this is a special, technical usage among philosophers.
Virtually all philosophers use the word this way, so you have to learn it.
Sound or unsound:
An argument is said to be sound when it is valid and all of its premises are true.
(In this case, of course, the conclusion must also be true – you can see this if you understand the definition of “valid”.)
An argument is unsound whenever it is invalid or has a false premise.
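As a minimal sketch of the entailment definition (my own illustration, not from the book): an argument is valid exactly when no assignment of truth values makes every premise true and the conclusion false, which a brute-force check can test for simple propositional arguments.

```python
from itertools import product

def is_valid(premises, conclusion, atoms):
    """Valid = no truth assignment makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(p(world) for p in premises) and not conclusion(world):
            return False  # counterexample: premises true, conclusion false
    return True

# "If it rains, the street is wet. It rains. So the street is wet." (valid)
rain_then_wet = lambda w: (not w["rain"]) or w["wet"]
print(is_valid([rain_then_wet, lambda w: w["rain"]], lambda w: w["wet"], ["rain", "wet"]))  # True

# "If it rains, the street is wet. The street is wet. So it rains." (invalid)
print(is_valid([rain_then_wet, lambda w: w["wet"]], lambda w: w["rain"], ["rain", "wet"]))  # False
```

A check like this can only test validity; soundness additionally requires that the premises actually be true, which logic alone cannot establish.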
Be explicit, and say things correctly.
Freedom isn’t a concept.
Freedom is an absence of constraints.
One way of characterizing rational beliefs is to say that they are the beliefs that are likely to be true, given the experiences and information available to you at the time.
Forming irrational beliefs leaves you morally to blame if you act on those beliefs and there is a bad outcome.
By contrast, forming rational beliefs insulates you from that kind of blame.
If you think rationally, and you do the thing that is right according to your rationally formed beliefs, then you are not morally to blame if things go wrong.
Belief can have practical consequences down the line.
Irrational beliefs can also have an impact on your belief-forming methods, causing you to adopt less rational methods of forming beliefs in the future.
Suppose you accept, purely on blind faith, that there is a God.
This might lead to your adopting the more general belief that blind faith is an acceptable way of forming beliefs.
But once you accept that, you are liable to form all kinds of false beliefs, because there are so many false beliefs that could be adopted by blind faith.
You have to start from some rational background beliefs in order to reason about what beliefs are likely to cause harm.
A rational thinker is not a person who strives to base his beliefs on objective evidence rather than on his emotions.
Having feelings does not make you irrational.
Believing that the world must be a certain way because of your feelings does make you irrational.
Objectivity is a disposition to resist bias, and hence to base one’s beliefs on the objective facts.
The main failures of objectivity are cases where your beliefs are overly influenced by your personal interests, emotions, or desires, or by how things in the world relate to you, rather than by how the external world is independently of you.
Neutrality is a matter of not taking a stand on a controversial issue.
It is generally false that both sides are equally good.
You should not refuse to evaluate issues.
Treat the other side fairly, even while defending your side.
Treat intellectual debate as a mutual truth-seeking enterprise, rather than as a personal contest.
As a rational thinker, you want your beliefs to be true, so you should welcome the opportunity to discover if your own current view is wrong.
Failures of objectivity are very common, and they often lead us very far astray.
Objectivity is the main thing we need to make progress on debates in philosophy (and religion, and politics).
The human mind is not really designed for discovering abstract, philosophical truths.
Our natural tendency is to try to advance our own interests or the interests of the group we identify with.
We tend to treat intellectual issues as a proxy battleground for that endeavor.
The factors that make someone biased about a topic are often the same factors that make them knowledgeable about it.
If we discount “non-objective” perspectives, that could mean throwing out the perspectives of the most knowledgeable people.
Examples:
Suppose your company is hiring a new employee, and one of the candidates is a friend of yours whom you have known for ten years.
You would probably be more knowledgeable about the candidate than anyone else at your company, while at the same time being the most biased.
Suppose you are involved in a discussion about war, and you are a veteran of a past war.
You would probably be the most knowledgeable person present about what wars are like; but you would also likely have the most biases, because the experiences that gave you that knowledge also gave you strong feelings.
Suppose you are in a discussion about racism, and you are a member of a minority race.
Then you are likely to be especially knowledgeable about what it is like to be a member of a minority, including how often such minorities experience discrimination.
But the same experiences that gave you that knowledge are likely to have given you personal, emotional biases on the subject.
Collect information from the most sophisticated sources, not (as most people do) the most entertaining sources.
That usually means looking at academic sources, rather than popular media.
Challenge yourself to try to think of reasons why your own views might be wrong.
When you give an argument, try to think of evidence against your own conclusions.
Withhold judgment on that issue until you understand the rational reasons on the other side.
Dogmatic people have beliefs that are overly persistent and insufficiently receptive to disconfirmation.
We tend to underestimate how much belief revision is appropriate.
Use weak, widely-shared premises.
A “strong” claim is one that says a lot; a “weak” claim says not very much.
For instance, “All politicians are liars” is a strong claim; by contrast, “Some politicians are liars” is a much weaker claim.
In general, the more controversial claims you make, and the stronger the claims are, the more likely that your argument is wrong.
So try to build arguments that use the weakest, least controversial premises possible.
Don’t claim more than you have to.
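A quick arithmetic illustration (mine, not the book’s, and assuming for simplicity that the premises are independent and each 90% likely to be true):

$$P(\text{all three premises true}) = 0.9 \times 0.9 \times 0.9 \approx 0.73$$

An argument resting on three such premises is already shakier than any one of them, and stronger or more controversial premises start from lower probabilities, which compounds the problem.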
Appeal to ignorance:
Concluding that something is the case merely because we don’t know anything to the contrary.
Authors sometimes try to lure you into this mistake by writing things like, “There is no reason why X would be true” (hoping that you’ll infer that X isn’t true) or “There is no reason to doubt X” (hoping you’ll infer that X is true).
Appeal to the people:
Inferring that something is true from the fact that it is popularly believed.
False Analogy:
An argument by analogy that’s no good, because the two things being compared are not really comparable.
“The government should be able to exclude foreigners, just as I can exclude strangers from my house.”
The house might not be analogous to (not a fair comparison to) the whole country (perhaps because the government does not own the whole country in the same way an individual owns a house).
If a belief is to be justified, the belief must at least be very likely to be true.
If a belief is highly probable, then any alternative to it (whether “relevant” or not) must be highly improbable.
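A one-line way to see this (my notation, not the book’s): the belief and any alternative incompatible with it cannot both be true, so their probabilities cannot sum to more than 1:

$$P(\text{alternative}) \le 1 - P(\text{belief}), \quad \text{e.g. } P(\text{belief}) = 0.95 \implies P(\text{alternative}) \le 0.05$$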
Your ideas are determined by what things you have actually perceived and interacted with, that caused you to form your ideas.
This is known as the causal theory of reference.
Probability theory:
The way a theory gets probabilistic support is this: the theory predicts some evidence that we should see in some circumstance, we create that circumstance, and the prediction comes true.
More precisely, evidence supports a theory provided that the evidence would be more likely to occur if the theory were true than otherwise.
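In standard probability notation (the notation is mine, not the book’s; assume the theory T is neither certain nor certainly false), this condition is:

$$E \text{ supports } T \;\iff\; P(E \mid T) > P(E \mid \neg T) \;\iff\; P(T \mid E) > P(T)$$

By Bayes’ theorem, evidence that is likelier under the theory than otherwise raises the theory’s probability.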
The theories that we consider “falsifiable” are those that make relatively sharp predictions: That is, they give high probability to some observation that is much less likely on the alternative theories.
If those observations occur, then the theory is supported; if they don’t, then the theory is disconfirmed (rendered less probable).
“Unfalsifiable” theories are ones that make weak predictions or no predictions – that is, they don’t significantly alter the probabilities we would assign to different possible observations.
They allow pretty much any observation to occur, and they don’t predict any particular course of observations to be much more likely than any other.
On this account, “falsifiability” is a matter of degree.
A theory is more falsifiable to the extent that it makes more predictions and stronger predictions.
A highly falsifiable theory, by definition, is open to strong disconfirmation (lowering of its probability), in the event that its predictions turn out false.
But, by the same token, the theory is open to strong support in the event that its predictions turn out true.
By contrast, an unfalsifiable theory cannot be disconfirmed by evidence, but for the same reason, it cannot be supported by evidence either.
Suppose that you have two theories to explain some phenomenon, with one being much more falsifiable than the other.
Suppose also that the evidence turns out to be consistent with both theories (neither of them makes any false predictions).
Then the falsifiable theory is supported by that evidence, while the unfalsifiable theory remains unsupported.
At the end of the day, then, the highly falsifiable theory is more worthy of belief.
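A minimal numerical sketch of that comparison (the priors and likelihoods are invented for illustration; they are not from the book):

```python
def posterior(prior, p_e_given_t, p_e_given_not_t):
    """Bayes' theorem: probability of theory T after observing evidence E."""
    p_e = p_e_given_t * prior + p_e_given_not_t * (1 - prior)
    return p_e_given_t * prior / p_e

# Falsifiable theory: sharp prediction (E very likely if true, unlikely otherwise).
print(posterior(0.5, p_e_given_t=0.9, p_e_given_not_t=0.2))  # ~0.82: strongly supported

# Unfalsifiable theory: E about equally likely whether or not the theory is true.
print(posterior(0.5, p_e_given_t=0.5, p_e_given_not_t=0.5))  # 0.50: unchanged, no support
```

The same arithmetic shows the flip side: had the sharp prediction failed, the falsifiable theory’s probability would have dropped well below 0.5, while the unfalsifiable theory would again sit exactly where it started.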
Moral realism says that there are objective moral facts, which we can know.
Plausibility comes in degrees:
Among propositions that are initially plausible, some are more plausible (they are more obvious, or more strongly seem correct) than others.
So, if you have an inconsistent set of propositions that each seem plausible, you should reject whichever proposition has the lowest initial plausibility.
Foundationalists think that there are certain items of knowledge, or justified beliefs, that are “foundational”.
(also called “immediately justified”, “directly known”, “self-evident”, “non-inferentially justified”).
Foundational justification is, by definition, justification that does not rest on reasons.
In other words, sometimes you can rationally believe something in a way that doesn’t require it to be supported by any other beliefs.
Foundationalists think that some justification is foundational, and all other justification depends on support from foundational beliefs.
Think about how you actually form beliefs when you’re pursuing the truth.
You do it based on what seems true to you.
Now, there are some cases where beliefs are based on something else.
For instance, there are cases of wishful thinking, where someone’s belief is based on a desire:
You believe P because you want it to be true.
But those are not the cases where you’re seeking the truth, and cases like that are generally agreed to be unjustified beliefs.
So we can ignore things like wishful thinking, taking a leap of faith, or other ways of forming unjustified beliefs.
With that understood, your beliefs are based on what seems right to you.
I don’t think one first needs grounds for thinking one’s appearances are reliable.
I think we may rely on appearances as long as we don’t have grounds for thinking they aren’t reliable.
If you require positive evidence of reliability, then you’re never going to get that evidence, for the reasons given by the skeptic.
Phenomenal Conservatism says that we are entitled to presume that whatever seems to us to be the case is in fact the case, unless and until we have reasons to think otherwise.
In the middle ages, everyone knew that the Sun orbited the Earth.
What this really means is something like:
“People in the middle ages would have described themselves as ‘knowing’ that the Sun went around the Earth.”
They didn’t genuinely know it, though.
---
Mr. Lucky has gone down to the racetrack to bet on horses.
He knows nothing about the horses or their riders, but when he sees the name “Seabiscuit”, he has a good feeling about that name, which causes him to confidently believe that Seabiscuit will win.
He bets lots of money on it.
As chance would have it, Seabiscuit does in fact win the race.
“I knew it!” the gambler declares.
If we seem to observe something, it is very lame to simply say, “Oh, that’s an illusion” and move on.
Rational people assume that what we seem to observe is real, unless there is evidence to the contrary.
They don’t assume that whatever we seem to observe is illusory until proven real.
To say that someone either “should” or “should not” do some action seems to imply that they have a choice about whether they do it.
There’s no sense in saying that I should do x if I cannot do it, or I cannot avoid doing it, or I have no control over whether I do it.
So those who deny free will would seemingly have to disagree with the judgment “I should not stick this fork in my eye.”
An argument is an attempt to give one’s audience a reason for believing a certain conclusion.
To criticize a view, you don’t have to prove that it’s false.
It’s also a good criticism if you can show the view to be unjustified.
Being more self-aware:
If you are aware of the factors influencing your emotions and desires, you are less likely to fall prey to influences that you would not endorse.
This is why it is good to reflect periodically on why you make the choices you do.
Because souls are unobservable, the soul theory can be made to accommodate any intuitions about personal identity.
They seem to be ad hoc posits designed to let us maintain whatever views we want about identity of persons, while making no definite predictions about persons or anything else.
Perhaps the soul is also mysterious in the sense of being poorly understood.
But there is no reason to assume that poorly understood things don’t exist.
I find this “objection” empty.
I think it’s basically just a negative emotional reaction masquerading as an argument.
Identity is intrinsic, not extrinsic.
You might think “murder is wrong” is an objective ethical truth.
This would be to say that murder is wrong regardless of anyone’s attitudes toward it.
It’s wrong independent of whether we approve or disapprove of it, like or dislike it, etc.
If our society has a sudden change of conventions and people start approving of murder, murder won’t become morally okay.
Rather, our society will just be wrong.
That’s what it means to say that murder is “objectively” wrong.
Introspectively, moral judgments seem exactly like beliefs, and not like emotions or desires.
Intuitionists like to compare ethics to mathematics.
This does not mean that ethics is exactly like mathematics in all ways.
If that were true, this wouldn’t be a comparison.
Ethics would just be mathematics.
An intuition is a mental state that you have, in which something just seems true to you, upon reflecting on it intellectually, in a way that does not depend upon your going through an argument for it.
It is rational to assume that things are the way they appear, unless and until one has specific reasons to doubt this.
This is the foundation of all reasonable beliefs.
That includes the beliefs that we get from perception, memory, introspection, and reasoning, as well as intuition.
Intuition is just like reason, observation, and memory in this respect:
You can’t check its reliability without using it.
Discussion of hypothetical examples is not like real life decision-making.
In real situations, you should always look for ways out of a dilemma or ways of avoiding having to confront a hard issue.
But in discussing hypothetical examples, we’re trying to illuminate a specific theoretical issue.
Thus, in discussing hypotheticals, one should never try to avoid the dilemma or avoid addressing the hard issue that the example is trying to present.
One also should not bring up possible consequences that are not related to that issue.
Absolute deontology, or absolutism, holds that there are certain types of action that are always wrong, regardless of how much good they might produce or how much harm they might avert.
Categorical Imperative (1st version):
Always act in such a way that you could will that the maxim of your action should be a universal law.
In Kant’s view, morality gives us categorical imperatives.
It’s not that we must behave morally if we want something else to happen; we just have to behave morally, period.
“The maxim of your action” is a rule that explains what you’re doing and why.
An action is wrong if you couldn’t universalize the maxim.
Categorical Imperative (2nd version):
Act so that you treat humanity, whether in your own person or in that of another, always as an end, never merely as a means.
This offers an explanation for the difference between the Trolley Problem and the Footbridge case.
In the Footbridge case, it is wrong to push the fat man off the bridge, since doing so would treat the fat man as a mere means to saving the other five.
By contrast, in the original Trolley case, diverting the trolley does not treat the one person on the right-hand track as a means.
The person on the right track isn’t a means to saving the five at all.
You can see that because if the one person were not present, you would still divert the trolley, thereby saving the five on the left track in exactly the same way.
The fact that your plan works just as well if the one person is not present shows that he is not a means to achieving the goal.
Doctrine of Double Effect says that it’s worse (harder to justify) to intentionally harm someone than it is to harm someone as a foreseen side effect of one’s action.
When you intentionally harm someone, the harm to the other person is either the end that you’re aiming at or a means to that end.
When you harm someone as a mere side effect, the harm isn’t aimed at, neither as a means nor as an end, even though you might know it’s going to happen.
The DDE is often used in military ethics to distinguish between acceptable collateral damage and war crimes.
If you deliberately target civilians, that’s a war crime.
It’s widely regarded as immoral, even if the war is otherwise just.
Deontological:
You have a right to something when:
(1) other people are obligated to give you that thing (in the case of a positive right), or
(2) to not interfere with your having that thing (in the case of a negative right).
This obligation is understood deontologically:
People have to respect your rights even if slightly better consequences would follow in a particular case from violating your right.
Rights are agent-centered constraints.
This means the requirement is that you yourself not violate any rights, not that you minimize the total number of rights violations.
So if you could violate a right and thereby prevent someone else from violating two rights, you shouldn’t do it.
Moderate deontologists (as I call them) hold a middle ground position between consequentialism and absolute deontology.
They think that some kinds of actions normally should not be performed, even if they produce better consequences.
However, in some extreme cases it is permissible to perform them, to prevent something vastly worse from happening.
A pluralistic view posits more than one distinct moral principle, which lets it accommodate common sense ethical intuitions.
All ethical theories are problematic in one way or another.
Suppose there’s a used car dealer who obtains all his cars by murdering innocent people and stealing their cars.
No one specifically told him to do this, but everyone, including you, knows that this is how he in fact gets his cars.
It would be uncontroversially wrong to buy a car from this dealer.
This illustrates the principle that if it’s wrong to do something, it’s also wrong to pay other people for doing it.