Book: Human Compatible

tags
AGI
  • Click-rate-optimizing algorithms change their environment by making people more predictable, which increases their reward.
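A minimal toy sketch of this dynamic (my own illustration, not code from the book; the two-topic user model and the `nudge` parameter are assumptions):

```python
import random

# Toy model: a user has a preference over two topics. Showing a topic earns a
# click with probability equal to the current preference for it, and also
# nudges the preference toward the topic shown, making the user more
# predictable over time.
def simulate(steps=1000, nudge=0.01, seed=0):
    rng = random.Random(seed)
    pref = 0.5  # probability the user clicks topic A (topic B gets 1 - pref)
    clicks = 0
    for _ in range(steps):
        # Greedy click maximizer: always show whichever topic is currently preferred.
        show_a = pref >= 0.5
        p_click = pref if show_a else 1 - pref
        clicks += rng.random() < p_click
        # Side effect: the user's preference drifts toward what was shown.
        pref += nudge * (1 - pref) if show_a else -nudge * pref
    return clicks, pref

clicks, final_pref = simulate()
print(f"clicks: {clicks}, final preference for topic A: {final_pref:.2f}")
# The agent ends up with a near-certain, highly predictable user, even though
# the user started out indifferent: it changed the environment to win more reward.
```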

  • Computational complexity limits decision making in any decision-making agent.

    • Thought: Might overcoming social complexity have been the driving force behind the evolution of human intelligence?
  • One difference between humans and machines seems to be that humans have a very wide competence boundary.

  • According to Gödel's completeness theorem, given a collection of knowledge expressed in first-order logic, any question whose answer follows from that knowledge can, in principle, be answered by an algorithm.
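Stated compactly in standard notation (my own restatement, not the book's wording):

```latex
% Gödel's completeness theorem for first-order logic: whatever a knowledge
% base KB semantically entails is also syntactically derivable from it.
\[
  \mathrm{KB} \models \varphi \quad\Longleftrightarrow\quad \mathrm{KB} \vdash \varphi
\]
% Hence an algorithm (e.g. resolution) can in principle derive every entailed
% conclusion, although first-order entailment is only semi-decidable: if the
% conclusion does not follow, the search may never terminate.
```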

  • The author predicts possible computational breakthroughs to reach AGI, but what about possible theoretical constraints?

    • Hominid brain size stopped increasing meaningfully about 1 million years ago.
    • Acknowledges that an intelligence explosion might not happen due to constraints yet to be discovered
      • But are we willing to bet the future of humanity on it?
  • Question: Is there any advantage that brain-cell chemistry has over silicon?

    • Asked since the book mentions many advantages of silicon over brain cells.
  • The current age of misinformation is a failure of the marketplace of ideas.

    • Thought: Since a market is an information-processing system (Hayek), I think the author is suggesting we need something better.
  • We can build news on top of a web of prior, inviolable facts (a toy sketch follows this list)

    • News sources publishing false news should suffer a hit to their reputation
      • Thus news sources carry some Popperian risk, as it's easy to hurt them if even one inviolable fact is violated.
      • This might be hard in reality
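A toy sketch of the reputation mechanism described above (my own illustration; the fact base, the sources, and the penalty value are all assumptions, not details from the book):

```python
# Each source starts with full reputation; any published claim that contradicts
# the shared base of inviolable facts costs it a heavy multiplicative penalty --
# the "Popperian risk" mentioned above.
FACT_BASE = {
    ("boiling point of water at sea level", "100C"),  # hypothetical fact entries
}

reputation = {"source_a": 1.0, "source_b": 1.0}

def publish(source: str, claims: list[tuple[str, str]], penalty: float = 0.5) -> None:
    """Record a story's claims; slash reputation for each contradicted fact."""
    for subject, value in claims:
        contradicted = any(s == subject and v != value for s, v in FACT_BASE)
        if contradicted:
            reputation[source] *= penalty

publish("source_a", [("boiling point of water at sea level", "100C")])
publish("source_b", [("boiling point of water at sea level", "90C")])
print(reputation)  # {'source_a': 1.0, 'source_b': 0.5}
```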
  • The author bets heavily that most jobs will be automated away.

    • Thought: This prediction has arisen many times in the past, but this time it's different.
  • There's been a divergence between capital and labour since 1970, with a higher proportion of income flowing to capital.

    • Question: Why is this attributed solely to automation and not the IP laws that allow monopolies?
      • Perhaps in a world without IP laws, we would see far fewer monopolies and a more even distribution of wealth?
  • Humans in Amazon fulfilment centres are already being controlled by algorithms.

    • Effectively, many humans are being managed at the same level as machines.
    • Thought: This is currently the case for Swiggy delivery boys.
  • Historically, humans have written about value alignment issues.

    • Ex: King Midas, "be careful what you wish for", etc.
    • Previously the scale of damage was small, but AI might make it much bigger.
  • Thought: The author tries to group both trivial and non-trivial arguments together.

  • One solution to the paperclip-maximizing AI / Midas problem is for the AI to constantly care about human suffering.

  • 3 Principles for such an AI:

    • The machine's only objective is to maximize the realization of human preferences.
    • The machine is initially uncertain about what those preferences are.
    • The ultimate source of information about human preferences is human behaviour.
  • The changes humans have already made to the world are good data, as they reflect our preferences.

  • Ideally, we could make provably beneficial machines, but there's a difference between mathematical proofs and the real world.

    • Ex: cybersecurity, where mathematical proofs of security exist, but in the real world we see implementation defects, hardware faults, etc.
  • A machine that is uncertain about human preferences is incentivized to let us shut it down, or to shut itself down, when it's unsure whether an action could harm us (a toy sketch follows).
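A numeric sketch of why this works, in the spirit of the off-switch argument (my own illustration, not code from the book; the Gaussian belief over the action's value is an assumption):

```python
import random

# Toy off-switch game: the robot's planned action has unknown value U to the human.
# It can act now (gets U), switch itself off (gets 0), or defer to the human,
# who allows the action when U > 0 and switches the robot off otherwise.
def expected_values(samples=100_000, seed=0):
    rng = random.Random(seed)
    act = off = defer = 0.0
    for _ in range(samples):
        u = rng.gauss(0.0, 1.0)  # robot's belief about the action's value to the human
        act += u                 # acting unilaterally
        off += 0.0               # switching itself off
        defer += max(u, 0.0)     # deferring: the human vetoes harmful actions
    return act / samples, off / samples, defer / samples

act, off, defer = expected_values()
print(f"E[act] = {act:.3f}, E[off] = {off:.3f}, E[defer] = {defer:.3f}")
# Deferring dominates: E[max(U, 0)] >= max(E[U], 0), with a strict gain whenever
# the robot is genuinely uncertain about the sign of U. The incentive to accept
# being switched off comes precisely from that uncertainty.
```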

backlinks
Wireheading