Monday, December 17, 2018

21 Lessons for the 21st Century by Yuval Noah Harari

The first lines of the introduction give a clue to Yuval's focus: "In a world deluged by irrelevant information, clarity is power."  In earlier books he examined how man came to be and where man can go.  This book reflects on the stresses and contradictions in the world today and explores a few ways we might extricate ourselves.  Fortunately he has a sense of humor and offers many attention-getting examples.

There are more thought-provoking ideas than I can cover in a short post.  A lot to digest.  My attempt to sort through them is only a taste of what you can expect.  To me this is the Book of the year.

In the middle of the last century the world was offered three global political philosophies--Fascism, Communism and Liberalism.  Fascism was killed during World War II, and Communism collapsed by the close of the century, leaving Liberalism to expand its umbrella.  However, our current news reflects a new range of anti-liberal trends.  21 Lessons reviews several alternatives we might consider for the 21st century.

Liberal thinking has been underpinned by economic growth, but today's most critical problems are aggravated by that very growth.  Technological innovation threatens job security.  Climate change and pollution worsen as economies grow.  I see a problem with his concern about the need to reduce meat consumption.

Social media is taking over lives across the entire globe.  Yuval expresses concern about online versus offline life.  Online activity does have the potential to steer people toward offline activities that are healthy for our bodies and our social lives.  Generally, though, social media is likely to cut down physical interaction and be unhealthy.

Algorithms are becoming increasingly invasive.  One simple example given is how a GPS system can tell us to turn right or left.  Artificial intelligence combined with biotech is now gaining an understanding of emotions.  Algorithms will know you better than you know yourself.  Trust in algorithms will increase as they become more reliable.

Ethics can be, and will be, integrated with algorithmic decisions.  Philosophers will be in demand, as many decisions will need to be made in a split second, with examples coming from such endeavors as self-driving cars.

Happiness depends less on circumstances than on expectations.  Humans are easily satiated.

Inequality is likely to increase, as those who control algorithms will have tools to squeeze more.  But it might not just be financial wealth; it might also be longevity, as biotech will be more accessible to some.  The future of the masses will depend upon the goodwill of a small elite.  Nations with a tradition of liberalism, such as France or New Zealand, will more likely support the masses, while those with a more capitalist tradition, like that of the United States, may well dismantle the welfare state.  Newly emerging states like India, China, and Brazil are more likely to see an increase in inequality.

Killing a few people in Belgium draws far more attention than killing hundreds in Nigeria or Iraq.

Most people believe they are the centre of the world and their culture the linchpin of human history.  Rather than denigrating other cultures, Yuval, a Jew living and working in Israel, makes a few points about "God's Chosen People."   The universe is at least 13 billion years old, with Earth being formed about 4.5 billion years ago.  Humans have existed for at least 2 million years.  Jerusalem was founded about 5,000 years ago, which does not mean it is eternal.  He also points out that Orthodox Jews usually hold the balance of power in Israel and have helped pass laws that curtail activities on the Sabbath, including for secular Jews.

Morality predates religion.  He gives the example of pups at play: when one bites too hard, the others will not play with the bully.

Author quote:  "Questions you cannot answer are usually far better for you than answers you cannot question."

We are all complicit to some degree--"How can anyone understand the web of relations among thousands of intersecting groups across the world?"

The meaning of life is looking for a role to play and a story to provide identity.  A wise man, asked about the meaning of life, replied, "I have learned that I am on earth in order to help other people.  What I still haven't figured out is why the other people are here."

He goes on to say that asking about the meaning of life is the wrong question.  The better question is how we stop suffering.  He does seem to have a Buddhist bias, but he is upfront about it.

Going back to Confucius, rituals are good for social stability.  The most meaningful ritual is sacrifice.  The author contends that rituals are an obstacle to seeking truth.

On the question of free will, Yuval asks us to define it first.  If you mean the freedom to do what you desire, yes.  But if you mean the freedom to choose what to desire, then no: humans do not have free will.  He asks us to consider where a thought comes from.  He concludes that although we don't have free will, we can become a bit more free from the tyranny of our will.

Mankind has made much progress in studying the brain, but has barely begun learning about the mind.  He personally has found meditation to be a tool for observing his own mind directly.  Self-observation has always been difficult because there are so many stories surrounding us.  In the future, algorithms will create more stories, making it even more difficult to observe your own mind.

My little sketches do not do justice to his overview of how we might look at life from outside our complacent perspective.  Well worth reading, and I expect different readers will take different value from the effort.

Read my thoughts on "Sapiens":

Read my thoughts on "Homo Deus":


  1. It was Aristotle who, in _The Nicomachean Ethics_, said that we cannot expect the same exactness in ethics as we do in epistemology. Hence most traditional philosophers distinguish sharply between epistemology and ethics.

    In order for ethics to be algorithmic, we must first agree on the principles and the ethical theory that trumps the others. Do we want a deontological theory? A consequentialist one? Something different? Sure, some principles may be self-evident, such as the idea that killing people is a bad thing (a necessary principle for self-driving cars, to be sure). However, in the unlikely event that a car has to make a split-second “decision,” we can debate about which decision is “right”: should the car veer right to kill one person instead of the three it would kill going straight? Is a moral action right because it is right in and of itself, or right by virtue of its consequences?

    Such moral dilemmas (and countless other scenarios) cannot be reduced to an algorithm. Sure, programming a car to recognize a person and not kill them is a wise idea, but things get more complex when decisions are involved. With that said, this isn’t a car I would want driving me and deciding what my values ought to be. Leave ethics to me, please, for I am the only authorized driver of my own values.
