Life 3.0 - by Max Tegmark

ISBN: 1101970316
Date read: 2019-03-15
How strongly I recommend it: 7/10
(See my list of 360+ books, for more.)

Go to the Amazon page for details and reviews.

A deep, bold, and visionary dive into Artificial Intelligence and its many implications. One of the most interesting books I've ever read. If you haven't read much about AI yet, start with “Surviving AI” as an intro, then read this as a deep-dive. His perspective is amazingly thorough. Defining terminology was a great way to start. For example, life is a “process that can retain its complexity and replicate”. Intelligence is the “ability to accomplish complex goals”. That keeps it broad enough to define future technology as alive and intelligent.

my notes

Beauty is in the eye of the beholder, not in the laws of physics, so before our Universe awoke, there was no beauty.
We transformed our Universe from a mindless zombie with no self-awareness into a living ecosystem harboring self-reflection, beauty and hope - and the pursuit of goals, meaning and purpose.
Had our Universe never awoken, then, as far as I’m concerned, it would have been completely pointless - merely a gigantic waste of space.

Let’s define life very broadly, simply as a process that can retain its complexity and replicate. What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged.
Think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

If bacteria are Life 1.0 and humans are Life 2.0, then you might classify mice as 1.1: they can learn many things, but not enough to develop language or invent the internet. Because they lack language, what they learn gets largely lost when they die, not passed on to the next generation.
Similarly, you might argue that today’s humans should count as Life 2.1: we can perform minor hardware upgrades such as implanting artificial teeth, knees and pacemakers, but nothing as dramatic as getting ten times taller or acquiring a thousand times bigger brain.
We can divide the development of life into three stages, distinguished by life’s ability to design itself:
Life 1.0 (biological stage): evolves its hardware and software
Life 2.0 (cultural stage): evolves its hardware, designs much of its software
Life 3.0 (technological stage): designs its hardware and software

Terminology:
Life = Process that can retain its complexity and replicate
Life 1.0 = Life that evolves its hardware and software (biological stage)
Life 2.0 = Life that evolves its hardware but designs much of its software (cultural stage)
Life 3.0 = Life that designs its hardware and software (technological stage)
Intelligence = Ability to accomplish complex goals
Artificial Intelligence (AI) = Non-biological intelligence
Narrow intelligence = Ability to accomplish a narrow set of goals, e.g., play chess or drive a car
General intelligence = Ability to accomplish virtually any goal, including learning
Universal intelligence = Ability to acquire general intelligence given access to data and resources
[Human-level] Artificial General Intelligence (AGI) = Ability to accomplish any cognitive task at least as well as humans
Human-level AI = AGI
Strong AI = AGI
Superintelligence = General intelligence far beyond human level
Civilization = Interacting group of intelligent life forms
Consciousness = Subjective experience
Qualia = Individual instances of subjective experience
Ethics = Principles that govern how we should behave
Teleology = Explanation of things in terms of their goals or purposes rather than their causes
Goal-oriented behavior = Behavior more easily explained via its effect than via its cause
Having a goal = Exhibiting goal-oriented behavior
Having purpose = Serving goals of one’s own or of another entity
Friendly AI = Superintelligence whose goals are aligned with ours
Cyborg = Human-machine hybrid
Intelligence explosion = Recursive self-improvement rapidly leading to superintelligence
Singularity = Intelligence explosion
Universe = The region of space from which light has had time to reach us during the 13.8 billion years since our Big Bang
(Many misunderstandings about AI are caused by people using the words above to mean different things.)

You’re probably not an ant hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants.
The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

Universal intelligence: given enough time and resources, it can make itself able to accomplish any goal as well as any other intelligent entity.

Intelligence is ultimately all about information and computation, not about flesh, blood or carbon atoms.

Substrate independence: there are many beautiful examples of it in physics.
Waves, for instance: they have properties such as speed, wavelength and frequency, and physicists can study the equations they obey without even needing to know what particular substance they’re waves in. When you hear something, you’re detecting sound waves caused by molecules bouncing around in the air, and we can calculate all sorts of interesting things about these waves - how their intensity fades as the square of the distance, how they bend when they pass through open doors and how they bounce off of walls and cause echoes - without knowing what air is made of. We can ignore all details about oxygen, nitrogen, carbon dioxide, etc., because the only property of the wave’s substrate that matters and enters into the famous wave equation is a single number that we can measure: the wave speed.
This wave equation was first discovered and put to great use long before physicists had even established that atoms and molecules existed.

A wave can travel across a lake, even though none of its water molecules do - they mostly bob up and down, like fans doing “the wave” in a sports stadium.

You can’t have computation without matter, but any matter will do as long as it can be arranged into NAND gates, connected neurons or some other building block enabling universal computation.

It’s often only the substrate-independent aspect that we’re interested in: a surfer usually cares more about the position and height of a wave than about its detailed molecular composition.

Nobody knows for sure what the next blockbuster computational substrate will be, but we do know that we’re nowhere near the limits imposed by the laws of physics.
That limit is a whopping 33 orders of magnitude (10^33 times) beyond today’s state of the art.
Even if we keep doubling the power of our computers every couple of years, it will take over two centuries until we reach that final frontier.
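
A rough check of that arithmetic: a factor of 10^33 is about 2^110, so roughly 110 doublings are needed; at one doubling every couple of years, that's on the order of 220 years - a bit over two centuries.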

An artificial neural network represents the state of each neuron by a single number and the strength of each synapse by a single number.
In this model, each neuron updates its state at regular time steps by simply averaging together the inputs from all connected neurons, weighting them by the synaptic strengths, optionally adding a constant, and then applying what’s called an activation function to the result to compute its next state.
The easiest way to use a neural network as a function is to make it feedforward, with information flowing only in one direction, plugging the input to the function into a layer of neurons at the top and extracting the output from a layer of neurons at the bottom.
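
A minimal sketch of that update rule and a tiny feedforward network in Python; the layer sizes, random weights and the choice of tanh as the activation function are my own illustrative assumptions, not anything from the book:

import numpy as np

def layer(inputs, weights, bias):
    # Each neuron combines the inputs from connected neurons, weighted by
    # synaptic strengths, adds a constant, then applies an activation function.
    return np.tanh(weights @ inputs + bias)

rng = np.random.default_rng(0)
x = rng.normal(size=3)                                 # input layer: 3 neurons
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)   # synaptic strengths and constants
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
hidden = layer(x, W1, b1)        # information flows one way only (feedforward)
output = layer(hidden, W2, b2)
print(output)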

If we don’t change direction soon, we’ll end up where we’re headed.

Basic reinforcement learning is a classic machine learning technique inspired by behaviorist psychology, where getting a positive reward increases your tendency to do something again and vice versa.
DeepMind combined this idea with deep learning: they trained a deep neural net to predict how many points would on average be gained by pressing each of the allowed keys on the keyboard, and then the AI selected whatever key the neural net rated as most promising.
That's deep reinforcement learning.
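
A toy sketch of that selection step; the "network" here is just a random matrix and the key names are made up, whereas the real system trains a deep convolutional net from screen pixels and score rewards:

import numpy as np

KEYS = ["left", "right", "fire", "noop"]        # the allowed keys (illustrative)

rng = np.random.default_rng(1)
W = rng.normal(size=(len(KEYS), 8))             # stand-in for the trained deep net

def predicted_points(screen):
    # The net predicts, for each allowed key, how many points pressing
    # that key would gain on average (its "Q-value").
    return W @ screen

screen = rng.normal(size=8)                     # stand-in for the current game screen
q_values = predicted_points(screen)
best_key = KEYS[int(np.argmax(q_values))]       # select the key rated most promising
print(best_key, q_values)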

If you’re a robot, life itself can be viewed as a game.
All that’s needed is a system that gives it points whenever it makes progress.

AlphaGo displayed both intuition and creativity.
The key idea behind AlphaGo was to marry the intuitive power of deep learning with the logical power of “good old-fashioned AI” - a combination that can produce second-to-none strategy.
They used a massive database of Go positions from both human play and games where AlphaGo had played a clone of itself, and trained a deep neural network to predict from each position the probability that white would ultimately win.
They also trained a separate network to predict likely next moves.
They then combined these networks with a “good old-fashioned AI” method that cleverly searched through a pruned list of likely future-move sequences to identify the next move that would lead to the strongest position down the road.
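
A heavily simplified sketch of how those pieces fit together, with hypothetical stand-ins for the policy network, value network and game rules; the real system uses trained deep nets and Monte Carlo tree search, so treat this only as the shape of the idea:

import random
random.seed(0)

# Hypothetical stand-ins for the game rules and the two trained networks.
def legal_moves(pos):       return ["a", "b", "c", "d"]
def apply_move(pos, move):  return pos + (move,)
def value_net(pos):         return random.random()  # P(side to move ultimately wins)
def policy_net(pos, moves): return moves[:3]         # prune to the likeliest next moves

def search(pos, depth):
    # Value of pos for the player to move, looking `depth` plies ahead
    # through the pruned list of likely future moves.
    if depth == 0:
        return value_net(pos)
    # After our move the opponent is to move, so our value is 1 minus theirs.
    return max(1.0 - search(apply_move(pos, m), depth - 1)
               for m in policy_net(pos, legal_moves(pos)))

def best_move(pos, depth=3):
    # The next move that leads to the strongest position down the road.
    return max(policy_net(pos, legal_moves(pos)),
               key=lambda m: 1.0 - search(apply_move(pos, m), depth - 1))

print(best_move(()))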

The Winograd Schema Challenge asks what “they” refers to here:
1. “The city councilmen refused the demonstrators a permit because they feared violence.”
2. “The city councilmen refused the demonstrators a permit because they advocated violence.”

It’s getting progressively harder to argue that AI completely lacks goals, breadth, intuition, creativity or language.

Verification asks “Did I build the system right?”
Validation asks “Did I build the right system?”

Does the system rely on assumptions that might not always be valid?
If so, how can it be improved to better handle uncertainty?

Industrial accidents have decreased rather than increased as technology has improved, dropping from about 14,000 deaths in 1970 to 4,821 in 2014.

What if a phishing email appears to come from your credit card company and is followed up by a phone call from a friendly human voice that you can’t tell is AI-generated?

Once AI becomes able to generate fully realistic fake videos of you committing crimes, will you vote for a system where the government tracks everyone’s whereabouts at all times and can provide you with an ironclad alibi if needed?

If a self-driving car causes an accident, who should be liable? The car itself!
Self-driving cars might be allowed (and required) to hold car insurance.
Models with a sterling safety record will qualify for premiums that are very low, probably lower than what’s available to human drivers, while poorly designed models from sloppy manufacturers will only qualify for insurance policies that make them prohibitively expensive to own.

If machines such as cars are allowed to hold insurance policies, should they also be able to own money and property?
If so, there’s nothing legally stopping smart computers from making money.
If AI systems eventually get better than humans at investing (which they already are in some domains), this could lead to a situation where most of our economy is owned and controlled by machines.
Most of our economy is already owned by another form of non-human entity: corporations.

In a dogfight between a fully autonomous drone that can respond instantly and a drone reacting more sluggishly because it’s remote-controlled by a human halfway around the world, which one do you think would win?

Those who stand to gain most from an arms race aren’t superpowers but small rogue states and non-state actors such as terrorists, who gain access to the weapons via the black market once they’ve been developed.

Once mass-produced, small AI-powered killer drones are likely to cost little more than a smartphone.
Whether it’s a terrorist wanting to assassinate a politician or a jilted lover seeking revenge on his ex-girlfriend, all they need to do is upload their target’s photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible.

Would it be easier to enforce a requirement that enemy autonomous weapons be 100% ethical than to enforce that they aren’t produced in the first place?

Digital Athens:
The reason that the Athenian citizens of antiquity had lives of leisure where they could enjoy democracy, art and games was mainly that they had slaves to do much of the work.
But why not replace the slaves with AI-powered robots?

Inequality? The main cause isn't what's usually blamed - it's technology. Digital technology drives inequality.

Career Advice: choose professions that seem unlikely to get automated in the near future. Ask of any career:
Does it require interacting with people and using social intelligence?
Does it involve creativity and coming up with clever solutions?
Does it require working in an unpredictable environment?
Examples: teacher, nurse, doctor, dentist, scientist, entrepreneur, programmer, engineer, lawyer, social worker, clergy member, artist, hairdresser or massage therapist.
Don’t be the radiologist who analyzes the medical images and gets replaced by IBM’s Watson, but the doctor who orders the radiology analysis, discusses the results with the patient, and decides on the treatment plan.

During the Industrial Revolution, we started figuring out how to replace our muscles with machines, and people shifted into better-paying jobs where they used their minds more.
Now we’re gradually figuring out how to replace our minds with machines. If we ultimately succeed in this, then what jobs are left for us?

The vast majority of today’s occupations are ones that already existed a century ago, and when we sort them by the number of jobs they provide, we have to go all the way down to twenty-first place in the list before we encounter a new occupation: software developers, who make up less than 1% of the U.S. job market.

The main trend on the job market isn’t that we’re moving into entirely new professions. Rather, we’re crowding into those pieces of terrain that haven’t yet been submerged by the rising tide of technology!

Imagine two horses seeing an automobile in the year 1900.
“I’m sure there’ll be new jobs for horses that we haven’t yet imagined. That’s what’s always happened before, like with the invention of the wheel and the plow.”
People decided to take care of horses.
Can we similarly take care of our fellow humans in need?

Game theory elegantly explains that entities have an incentive to cooperate where cooperation is a so-called Nash equilibrium: a situation where any party would be worse off if they altered their strategy.
To prevent cheaters from ruining the successful collaboration of a large group, it may be in everyone’s interest to relinquish some power to a higher level in the hierarchy that can punish cheaters.
For example, people may collectively benefit from granting a government power to enforce laws, and cells in your body may collectively benefit from giving a police force (immune system) the power to kill any cell that acts too uncooperatively.
For a hierarchy to remain stable, its Nash equilibrium needs to hold also between entities at different levels: for example, if a government doesn’t provide enough benefit to its citizens for obeying it, they may change their strategy and overthrow it.
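
To make the Nash-equilibrium idea concrete, here's a tiny made-up two-player game in Python and a brute-force check for strategy pairs where neither side gains by unilaterally switching:

# payoff[(a, b)] = (payoff to player 1, payoff to player 2) when player 1
# plays a and player 2 plays b. The numbers are invented (a prisoner's dilemma).
payoff = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(a, b):
    # Nash equilibrium: neither player can do better by changing only their own strategy.
    p1, p2 = payoff[(a, b)]
    return (all(payoff[(a2, b)][0] <= p1 for a2 in strategies) and
            all(payoff[(a, b2)][1] <= p2 for b2 in strategies))

print([(a, b) for a in strategies for b in strategies if is_nash(a, b)])
# Prints [('defect', 'defect')]: mutual cooperation isn't stable on its own,
# which is why everyone can benefit from a higher power that punishes cheaters.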

How is technology changing the hierarchical nature of our world?
History reveals an overall trend toward ever more coordination over ever-larger distances.
Although our present world remains stuck in a multipolar Nash equilibrium, with competing nations and multinational corporations at the top level, technology is now advanced enough that a unipolar world would probably also be a stable Nash equilibrium.

Most AI researchers and neuroscientists guess that the quickest route to superintelligence is to bypass brain emulation and engineer it in some other way.
After all, why should our simplest path to a new technology be the one that evolution came up with?
The aviation industry didn’t start with mechanical birds.

A fast AI takeoff makes world takeover easier, while a slow one makes an outcome with many competing players more likely.

It’s a mistake to passively ask “what will happen,” as if it were somehow predestined!
Instead ask: “What should happen? What future do we want?”
If we don’t know what we want, we’re unlikely to get it.
Do you want to maximize positive experiences, minimize suffering, or leave this to sort itself out?

To the vastly more intelligent entities that would exist at that time, an uploaded human may seem about as interesting as a simulated mouse or snail would seem to us.
Although we currently have the technical capability to reanimate old spreadsheet programs from the 1980s in a DOS emulator, most of us don’t find this interesting enough to actually do it.

Valuing diversity, and recognizing that different people have different preferences, the AI has divided Earth into different sectors for people to choose between, to enjoy the company of kindred spirits.
Here are some examples:
• Knowledge sector: Here the AI provides optimized education, including immersive virtual-reality experiences, enabling you to learn all you’re capable of about any topics of your choice. Optionally, you can choose not to be told certain beautiful insights, but to be led close and then have the joy of rediscovering them for yourself.
• Art sector: Here opportunities abound to enjoy, create and share music, art, literature and other forms of creative expression.
• Hedonistic sector: Locals refer to it as the party sector, and it’s second to none for those yearning for delectable cuisine, passion, intimacy or just wild fun.
• Pious sector: There are many of these, corresponding to different religions, whose rules are strictly enforced.
• Wildlife sector: Whether you’re looking for beautiful beaches, lovely lakes, magnificent mountains or fantastic fjords, here they are.
• Traditional sector: Here you can grow your own food and live off the land as in yesteryear - but without worrying about famine or disease.
• Gaming sector: If you like computer games, the AI has created truly mind-blowing options for you.
• Virtual sector: If you want a vacation from your physical body, the AI will keep it hydrated, fed, exercised and clean while you explore virtual worlds through neural implants.
• Prison sector: If you break rules, you’ll end up here for retraining unless you get the instant death penalty.

The AI enforces two tiers of rules: universal and local.
Universal rules apply in all sectors, for example a ban on harming other people.
Individual sectors have additional local rules on top of this, encoding certain moral values.
The sector system therefore helps deal with values that don’t mesh.

Talk of a robot-rights movement is met with chuckles. Why? One common pro-slavery argument is that slaves don’t deserve human rights because they or their race/species/kind are somehow inferior. For enslaved animals and machines, this alleged inferiority is often claimed to be due to a lack of soul or consciousness.

Consciousness is by far the most remarkable trait.
It’s how our Universe gets meaning. Galaxies are beautiful only because we see and subjectively experience them.
If in the distant future our cosmos has been settled by high-tech zombie AIs, then it doesn’t matter how fancy their intergalactic architecture is: it won’t be beautiful or meaningful, because there’s nobody and nothing to experience it - it’s all just a huge and meaningless waste of space.

There are computer tournaments in so-called losing chess.

A human-extinction scenario that some people may feel better about: viewing the AI as our descendants.
Parents with a child smarter than them, who learns from them and accomplishes what they could only dream of, are likely happy and proud even if they know they can’t live to see it all.
In this spirit, AIs replace humans but give us a graceful exit that makes us view them as our worthy descendants.
Humans are gradually phased out via a global one-child policy, but are treated so exquisitely well until the end that they feel they’re in the most fortunate generation ever.
This works as long as the AIs eliminate poverty and give all humans the opportunity to live full and inspiring lives.

The only viable path to broad relinquishment of technology is to enforce it through a global totalitarian state.
If some but not all relinquish a transformative technology, then the nations or groups that defect will gradually gain enough wealth and power to take over.

We’ve dramatically underestimated life’s future potential.
We're not limited to century-long life spans marred by disease.
Life has the potential to flourish for billions of years, throughout the cosmos.

These ultimate limits are set not by our understanding, but by the laws of physics.
This, ironically, makes it in some ways easier to analyze the long-term future of life than the short-term future.

Unambitious civilizations simply become cosmically irrelevant.
Almost all life that exists will be ambitious life.

A blue whale is rearranged krill.

There are two mathematically equivalent ways of describing each physical law: either as the past causing the future, or as nature optimizing something.
The second way is more elegant and profound.

Goal-oriented behavior was hardwired in the very laws of physics.
To rescue a swimmer as fast as possible, a lifeguard won’t run in a straight line toward them, but will run a bit further along the beach, where she moves faster than in the water, before diving in.
(Nature does this too: light bends the same way when it passes from air into water, taking the path that minimizes travel time.)
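
A numerical version of the lifeguard example, minimizing total travel time over the point where she enters the water; all the speeds and positions are invented for illustration:

from math import hypot

run_speed, swim_speed = 7.0, 2.0     # meters/second on sand vs. in water (assumed)
lifeguard = (0.0, -30.0)             # 30 m up the beach from the waterline
swimmer = (40.0, 20.0)               # 40 m along the shore, 20 m out in the water

def total_time(x):
    # Run to the entry point (x, 0) on the waterline, then swim to the swimmer.
    run = hypot(x - lifeguard[0], -lifeguard[1]) / run_speed
    swim = hypot(swimmer[0] - x, swimmer[1]) / swim_speed
    return run + swim

entry_x = min((x / 100 for x in range(0, 4001)), key=total_time)
print(entry_x)   # about 35.5 m: well past the 24 m where the straight line crosses the waterline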

The second law of thermodynamics states that entropy tends to increase until it reaches its maximum possible value.
When you pour cold milk into hot coffee, for example, your beverage appears to march irreversibly toward its own personal heat death goal, and before long, it’s all just a uniform lukewarm mixture.
If a living organism dies, its entropy also starts to rise, and before long, the arrangement of its particles tends to get much less organized.

Gravity behaves differently from all other forces and strives to make our Universe not more uniform and boring but more clumpy and interesting.

Dissipation-driven adaptation:
Random groups of particles strive to organize themselves so as to extract energy from their environment as efficiently as possible.
Molecules exposed to sunlight would over time tend to arrange themselves to get better and better at absorbing sunlight.
In other words, nature appears to have a built-in goal of producing self-organizing systems that are increasingly complex and lifelike.

The second law of thermodynamics has a life loophole: although the total entropy must increase, it’s allowed to decrease in some places as long as it increases even more elsewhere.
So life maintains or increases its complexity by making its environment messier.

There are many known examples of such emergent self-replication. For example:
Vortices in turbulent fluids can make copies of themselves, and clusters of microspheres can coax nearby spheres into forming identical clusters.
At some point, a particular arrangement of particles got so good at copying itself that it could do so almost indefinitely by extracting energy and raw materials from its environment.
We call such a particle arrangement life.

A living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid.
Our human minds perceive these evolved rules of thumb as feelings, which guide our decision making toward the ultimate goal of replication.
Feelings of hunger and thirst protect us from starvation and dehydration, feelings of pain protect us from damaging our bodies, feelings of lust make us procreate, feelings of love and compassion make us help other carriers of our genes and those who help them and so on.
Guided by these feelings, our brains can quickly and efficiently decide what to do without having to subject every choice to a tedious analysis of its ultimate implications for how many descendants we’ll produce.
The ultimate authority is now our feelings, not our genes.
Human behavior strictly speaking doesn’t have a single well-defined goal at all.

If you’d been observing Earth’s atoms since our planet formed, you’d have noticed three stages of goal-oriented behavior:
1. All matter seemed focused on dissipation (entropy increase).
2. Some of the matter came alive and instead focused on replication and subgoals of that.
3. A rapidly growing fraction of matter was rearranged by living organisms to help accomplish their goals.

The real risk with Artificial General Intelligence isn’t malice but competence.
A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

Three tough subproblems:
1. Making AI learn our goals
2. Making AI adopt our goals
3. Making AI retain our goals

Midas asked that everything he touched turn to gold, but was disappointed when this prevented him from eating and even more so when he inadvertently turned his daughter to gold.
In the stories where a genie grants three wishes, there are many variants for the first two wishes, but the third wish is almost always the same: “Please undo the first two wishes, because that’s not what I really wanted.”
To figure out what people really want, you can’t merely go by what they say.
You also need a detailed model of the world, including the many shared preferences that we tend to leave unstated because we consider them obvious.
Once we have such a world model, we can often figure out what people want even if they don’t tell us, simply by observing their goal-oriented behavior.

Children of hypocrites learn more from what they see their parents do than from what they hear them say.

We are currently trying hard to enable machines to infer goals from behavior, and this will be useful also long before any superintelligence comes on the scene.
For example, a retired man may appreciate it if his eldercare robot can figure out what he values simply by observing him, so that he’s spared the hassle of having to explain everything with words or computer programming.
One challenge involves finding a good way to encode arbitrary systems of goals and ethical principles into a computer.
Another challenge is making machines that can figure out which particular system best matches the behavior they observe.
The idea behind inverse reinforcement learning is that we make decisions all the time, and every decision we make reveals something about our goals.
By observing lots of people in lots of situations (either for real or in movies and books), the AI can eventually build an accurate model of all our preferences.
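
A toy sketch of that inference in Python, with two made-up candidate goals and a handful of observed choices; real inverse reinforcement learning works over far richer models of behavior, but the update logic is the same idea:

# Two hypothetical goals the robot is unsure about, and how likely each goal
# makes each observed choice. All numbers are invented for illustration.
choice_prob = {
    "prefers quiet":  {"reads a book": 0.7, "turns on TV": 0.1, "takes a walk": 0.2},
    "prefers social": {"reads a book": 0.1, "turns on TV": 0.4, "takes a walk": 0.5},
}
belief = {"prefers quiet": 0.5, "prefers social": 0.5}   # start undecided
observed = ["reads a book", "takes a walk", "reads a book"]

# Every observed decision reveals something about the goal, so update the belief.
for action in observed:
    belief = {goal: p * choice_prob[goal][action] for goal, p in belief.items()}
    total = sum(belief.values())
    belief = {goal: p / total for goal, p in belief.items()}

print(belief)   # the robot's inferred picture of what the person values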

The time window during which you can load your goals into an AI may be quite short: the brief period between when it’s too dumb to get you and too smart to let you.

A superintelligent AI will resist being shut down if you give it any goal that it needs to remain operational to accomplish - and this covers almost all goals!
If you give a superintelligence the sole goal of minimizing harm to humanity, for example, it will defend itself against shutdown attempts because it knows we’ll harm one another much more in its absence through future wars and other follies.

The propensity to change goals in response to new experiences and insights increases rather than decreases with intelligence.

The ethical views of many thinkers can be distilled into four principles:
• Utilitarianism: Positive conscious experiences should be maximized and suffering should be minimized.
• Diversity: A diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible.
• Autonomy: Conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle.
• Legacy: Compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans today would view as terrible.

Would we really want people from 1,500 years ago to have a lot of influence over how today’s world is run?
If not, why should we try to impose our ethics on future beings that may be dramatically smarter than us?

If some sophisticated future computer programs turn out to be conscious, should it be illegal to terminate them?
If there are rules against terminating digital life forms, then need there also be restrictions on creating them to avoid a digital population explosion?

A fast-forward replay of our 13.8-billion-year cosmic history:
1. Matter seemingly intent on maximizing its dissipation
2. Primitive life seemingly trying to maximize its replication
3. Humans pursuing not replication but goals related to pleasure, curiosity, compassion and other feelings that they’d evolved to help them replicate
4. Machines built to help humans pursue their human goals

The only currently programmable goals that are guaranteed to remain truly well-defined as an AI gets progressively more intelligent are goals expressed in terms of physical quantities alone, such as particle arrangements, energy and entropy.
However, humans are a historical accident, and aren’t the optimal solution to any well-defined physics problem.

How should we strive to shape the future of our Universe?
If we cede control to a superintelligence before answering these questions rigorously, the answer it comes up with is unlikely to involve us.
This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation!
Philosophy with a deadline.

Many arguments generate more heat than light, because the antagonists are talking past each other.

Galileo described nature as “a book written in the language of mathematics.”

Scientists started taking Newton’s theory of gravity seriously because they got more out of it than they put into it.
Simple equations could accurately predict the outcome of every gravity experiment ever conducted.

An emergent phenomenon has properties above and beyond those of its particles.
Wetness: A drop of water is wet, but an ice crystal and a cloud of steam aren’t, even though they’re made of identical water molecules.
Why? Because wetness depends only on the arrangement of the molecules: it emerges only when there are many molecules, arranged in the pattern we call liquid.
What particle arrangements are conscious?
Consciousness is an emergent phenomenon.
Consciousness is the way that information feels when it’s processed in certain ways.
It must be substrate-independent; it’s only the structure of the information processing that matters, not the structure of the matter doing the information processing.

Which particle arrangements are conscious and which aren’t?
If we can answer that, then we can figure out which AI systems are conscious.
It can also help emergency-room doctors determine which unresponsive patients are conscious.

We may sometimes have “consciousness without access,” that is, subjective experience of things that are too complex to fit into our working memory for later use.
For example, when you experience inattentional blindness by being too distracted to notice an object in plain sight, this doesn’t imply that you had no conscious visual experience of it, merely that it wasn’t stored in your working memory.
Should it count as forgetfulness rather than blindness?

When people ask about the meaning of life as if it were the job of our cosmos to give meaning to our existence, they’re getting it backward.
It’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.

Contrast sapience (the ability to think intelligently) with sentience (the ability to subjectively experience qualia).
Humans have built our identity on being Homo sapiens.
I suggest that we rebrand ourselves as Homo sentiens.

Science gathers knowledge faster than society gathers wisdom.

Mindful optimism is the expectation that good things will happen if you plan carefully and work hard for them.

Develop positive visions for the future.
Positive visions form the foundation of all collaboration.
After all, why sacrifice something you have if you can’t imagine the even greater gain that this will provide?
This means that we should be imagining positive futures not only for ourselves, but also for society and for humanity.

Do you want to own your technology or do you want your technology to own you?