Derek Sivers

How Not to Be Wrong - by Jordan Ellenberg

ISBN: 9780143127536
Date read: 2019-02-01
How strongly I recommend it: 6/10

Mathematics as an extension of common sense. I'd like to go through this again, doing and thoroughly understanding all the examples. On the first read, I let it pass over me.

my notes

Mathematics is the study of things that come out a certain way because there is no other way they could possibly be.

Math is like an atomic-powered prosthesis that you attach to your common sense, vastly multiplying its reach and strength.

Mathematics is the extension of common sense by other means.

Without the rigorous structure that math provides, common sense can lead you astray.

Linearity vs nonlinearity is one of the central distinctions in mathematics.
Nonlinear thinking means which way you should go depends on where you already are.
The relation between eating and health isn’t linear, but curved, with bad outcomes on both ends.

The Laffer curve:
The horizontal axis is the level of taxation, and the vertical axis is the amount of revenue the government takes in from taxpayers.
On the left edge of the graph, the tax rate is 0%.

Linear reasoning is everywhere. You’re doing it every time you say that if something is good to have, having more of it is even better.

Pythagoras's philosophy was a chunky stew of things we’d now call mathematics, things we’d now call religion, and things we’d now call mental illness.

If the universe hands you a hard problem, try to solve an easier one instead, and hope the simple version is close enough to the original problem that the universe doesn’t object.

George Berkeley denounced Newton’s infinitesimals in a tone of high mockery sadly absent from current mathematical literature.

Write 0.33333… = 1/3. Multiply both sides by 3 and you’ll see 0.99999… = 3/3 = 1.
There’s a whole field of mathematics that specializes in contemplating numbers of this kind, called nonstandard analysis.
.9 + .09 + .009 + .0009 + …
That pesky ellipsis is the real problem.
In the real world, you can never have infinitely many.
What’s the numerical value of an infinite sum? It doesn’t have one - until we give it one.
We should simply define the value of the infinite sum to be 1.
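A quick sketch (mine, not the book's) of why defining the infinite sum as 1 is the good choice: each partial sum of .9 + .09 + .009 + … falls short of 1, but the shortfall shrinks by a factor of ten with every term.

```python
# Partial sums of .9 + .09 + .009 + ...
# The gap below 1 shrinks 10x per term, which is why "define it as 1"
# settles the perplexity without creating new ones.
def partial_sum(n_terms):
    return sum(9 / 10 ** (k + 1) for k in range(n_terms))

for n in (1, 2, 5, 10):
    print(n, partial_sum(n), 1 - partial_sum(n))
```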

There are good choices and there are bad ones. In the mathematical context, the good choices are the ones that settle unnecessary perplexities without creating new ones.

In mathematics, you very seldom get the clearest account of an idea from the person who invented it.

Linear regression is a marvelous tool, versatile, scalable, and as easy to execute as clicking a button on your spreadsheet.
Whenever you want to understand which variables drive which other variables, and in which direction, it’s the first thing you reach for. And it works on any data set at all.
But, like a table saw, if you use it without paying careful attention to what you’re doing, the results can be gruesome.

When you’re field-testing a mathematical method, try computing the same thing several different ways. If you get several different answers, something’s wrong with your method.

The smaller the sample size, the greater the variation.
The law of averages is not very well named, because laws should be true, and this one is false.
Coins have no memory. So the next coin you flip has a 50-50 chance of coming up heads, the same as any other.
The way the overall proportion settles down to 50% isn’t that fate favors tails to compensate for the heads that have already landed; it’s that those first ten flips become less and less important the more flips we make.
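A small simulation (my sketch, not the author's) of the memoryless coin: start with ten heads already landed, then flip a fair coin many times. The running proportion settles toward 50% because the head start gets diluted, not because tails catch up.

```python
import random

# "Coins have no memory": begin with ten heads, then flip fairly.
# The proportion of heads approaches 1/2 as the first ten flips
# become less and less important.
random.seed(0)
flips = [1] * 10  # ten heads already landed
for _ in range(100_000):
    flips.append(random.randint(0, 1))

heads = sum(flips)
print(heads / len(flips))       # close to 0.5 despite the head start
print(heads - len(flips) // 2)  # the absolute surplus need not shrink to zero
```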

Don’t talk about percentages of numbers when the numbers might be negative.

The more chances you give yourself to be surprised, the higher your threshold for surprise had better be.

An identical set of winning numbers came up twice in a single week.
It seems improbable that the names of medieval rabbis are hidden in the letters of the Torah. But is it?
Improbability is a relative notion, not an absolute one.
When we say an outcome is improbable, we are always saying, explicitly or not, that it is improbable under some set of hypotheses we’ve made about the underlying mechanisms of the world.
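The lottery coincidence above is a birthday-problem calculation. Here is a sketch under an assumed hypothesis: each draw is uniform over some number of equally likely combinations (the numbers below are illustrative, not from any real lottery). A repeat of one *specific* draw is astronomically unlikely; *some* repeat somewhere, over thousands of draws, is not.

```python
import math

# Under the hypothesis that each draw is uniform over `combos` tickets,
# the chance that some pair among `draws` independent draws matches:
# 1 - P(all distinct), computed in log space to avoid underflow.
def prob_some_match(combos, draws):
    log_p_distinct = sum(math.log((combos - k) / combos) for k in range(draws))
    return 1 - math.exp(log_p_distinct)

print(prob_some_match(combos=5_000_000, draws=2))     # one specific repeat: tiny
print(prob_some_match(combos=5_000_000, draws=3000))  # some repeat over years: sizable
```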

The null hypothesis, in executive bullet-point form:
Run an experiment.
Suppose the null hypothesis is true, and let p be the probability (under that hypothesis) of getting results as extreme as those observed.
The number p is called the p-value.
If it is very small, rejoice; you get to say your results are statistically significant.
If it is large, concede that the null hypothesis has not been ruled out.
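The bullet points above can be sketched in code, using a made-up experiment of my own: the null hypothesis is "the coin is fair," and the observed result is 60 heads in 80 flips. The p-value is estimated by simulating the null.

```python
import random

# Estimate p: the probability, under the fair-coin null, of a result
# at least as extreme as the one observed (60 heads in 80 flips).
random.seed(1)

def simulate_p_value(observed_heads=60, n_flips=80, trials=100_000):
    extreme = 0
    for _ in range(trials):
        heads = sum(random.randint(0, 1) for _ in range(n_flips))
        if heads >= observed_heads:  # one-sided: "as extreme as observed"
            extreme += 1
    return extreme / trials

p = simulate_p_value()
print(p)  # well under Fisher's 0.05 threshold for this made-up data
```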

New things require new vocabulary. There are two ways to go:
You can cut new words from fresh cloth.
But more commonly, we adapt existing words for our own purposes, which stand in only the most tenuous relation to the ordinary things referred to by those words.
“Significance” is one of those.

The lexical double-booking (double meaning) of the word “significance” has consequences.
Twice a tiny number is a tiny number.
Both numbers are more or less zero.
“Statistically noticeable” or “statistically detectable” would be a better term than “statistically significant”
That would be truer to the meaning of the method, which merely counsels us about the existence of an effect but is silent about its size or importance.
But it’s too late for that. We have the language we have.

A statistical study that’s not refined enough to detect a phenomenon of the expected size is called underpowered - the equivalent of looking at the planets with binoculars.
Moons or no moons, you get the same result, so you might as well not have bothered.
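A sketch of underpowering (my example, with made-up numbers): there *is* a real effect, a coin that lands heads 55% of the time, but a 20-flip study almost never reaches significance, while a 2000-flip study almost always does. The p-value here uses a normal approximation to the fair-coin null.

```python
import math
import random

random.seed(2)

def one_study_significant(n_flips, p_heads=0.55, threshold=0.05):
    # Flip a genuinely biased coin, then test against the fair-coin null.
    heads = sum(random.random() < p_heads for _ in range(n_flips))
    z = (heads - n_flips / 2) / math.sqrt(n_flips / 4)
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided tail probability
    return p_value < threshold

def power(n_flips, reps=1000):
    # Fraction of repeated studies that detect the (real) effect.
    return sum(one_study_significant(n_flips) for _ in range(reps)) / reps

for n in (20, 2000):
    print(n, power(n))  # the small study usually misses the real effect
```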

Players who had just made a shot were more likely to take a more difficult shot on their next attempt.
The “hot hand” in basketball might “cancel itself out”: players, believing themselves to be hot, get overconfident and take shots they shouldn’t.

Suppose the hypothesis H is true.
It follows from H that a certain fact F cannot be the case.
But F is the case. Therefore, H is false.
Suppose the null hypothesis H is true.
It follows from H that a certain outcome O is very improbable (say, less than Fisher’s 0.05 threshold).
But O was actually observed. Therefore, H is very improbable.

Impossible things never happen. But improbable things happen a lot.

Every whole number greater than 1 can be expressed in just one way as a product of prime numbers.
This is why we don’t take 1 to be a prime, though some mathematicians have done so in the past; it breaks the uniqueness, because if 1 counts as prime, 60 could be written as 2 × 2 × 3 × 5 and 1 × 2 × 2 × 3 × 5 and 1 × 1 × 2 × 2 × 3 × 5…
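A sketch of that factorization by trial division (my code, not the book's), which returns the one list of primes each number is made of; padding with 1s is exactly what the convention above rules out.

```python
# Factor n into primes by trial division. The result is the unique
# factorization, up to order; 1 never appears, preserving uniqueness.
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(60))  # [2, 2, 3, 5] -- the only way, up to order
```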

The primes are the atoms of number theory, the basic indivisible entities of which all numbers are made.

Primes are not random, but it turns out that in many ways they act as if they were.

Pairs of primes that are separated by only 2, like 3-5 and 11-13, are called twin primes.

Goldbach conjecture: every even number greater than 2 is the sum of two primes.
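The conjecture is easy to check by brute force for small numbers (this sketch is mine; the conjecture itself remains unproven in general).

```python
# Verify Goldbach for small even numbers: find a pair of primes summing to n.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None  # a counterexample -- none is known

assert all(goldbach_pair(n) for n in range(4, 10_000, 2))
print(goldbach_pair(100))  # (3, 97)
```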

Scientists, subject to the intense pressure to publish lest they perish, are not immune to temptations.
It takes a lot of mental strength to stuff years of work in the file drawer.
Scientists may “torture the data until it confesses.”

The purpose of statistics isn’t to tell us what to believe, but to tell us what to do.
Statistics is about making decisions, not answering questions.

What’s the purpose of a criminal trial?
We might naively say it’s to find out whether the defendant actually committed the crime he’s on trial for. But that’s obviously wrong. There are rules of evidence, which forbid the jury from hearing testimony obtained improperly, even if it might help them accurately determine the defendant’s innocence or guilt.
The purpose of a court is not truth, but justice.
We have rules, the rules must be obeyed, and when we say that a defendant is “guilty” we mean, if we are careful about our words, not that he committed the crime he’s accused of, but that he was convicted fair and square according to those rules.

Most practicing scientists have no interest in denying themselves the self-polluting satisfaction of forming an opinion about which hypotheses are actually true.
The significance test is the detective, not the judge.
The provocative and oh-so-statistically-significant finding isn’t the conclusion of the scientific process, but the bare beginning.
The replication process is supposed to be science’s immune system, swarming over newly introduced objects and killing the ones that don’t belong.
But who wants to publish the paper that does the same experiment a year later and gets the same result?