The dangers of AI are far greater than those of nuclear war. – Elon Musk

We know Elon Musk is pretty close to the cutting edge of Artificial Intelligence (AI). In this talk, he discusses his concerns about AI.

He addresses some of the pressing questions and concerns, and explains why some kind of regulation is much needed.

He understands the threats that the so-called AI experts fail to see, and he admits that it scares the hell out of him.

Just five minutes into this talk, you get an idea of the kind of impact AI can have. Let me add here that he mentions the difference between Narrow (or Weak) AI and Digital Super Intelligence, discussed later.

I am sharing some of the highlights from his talk.

Several AI “experts” think they know more than they actually do, and they think they are smarter than they actually are. They don’t understand the repercussions. He mentions that the rate of improvement in this area is exponential.

  1. Consider AlphaGo: in a span of 6–9 months, it was able to defeat the world champions at the game of Go. AlphaGo Zero then crushed AlphaGo, and it learnt purely by playing against itself. Give it the rules of pretty much any game and it can beat the best human players (see the sketch after this list). The question is: did the experts predict that?
    Similarly for self-driving cars: they are predicted to be 100–200% safer within a year or two.
  2. Narrow or Weak AI does not pose a risk to the species; it will result in lost jobs, better weaponry, etc. But Digital Super Intelligence does, and that's why we need to approach it very, very carefully.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. – Wikipedia

  3. He talks about regulation to ensure that everyone is developing AI safely. Even though the dangers of AI are far greater, why are there no regulations?
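Since the first point mentions that AlphaGo Zero learnt purely by playing against itself, here is a minimal, purely illustrative sketch of the self-play idea. It is not what DeepMind actually did (they combined deep neural networks with Monte Carlo tree search); this is just simple tabular learning on a toy game of Nim, and the game choice, parameters, and function names are all assumptions made for illustration.

```python
# A toy sketch of learning a game purely from self-play: only the rules are
# given, and the agent improves by playing against a copy of itself.
# Everything here (the game of Nim, the parameters, the names) is illustrative.
import random

PILE = 10        # starting number of stones (assumed for this sketch)
MAX_TAKE = 3     # a move removes 1..3 stones; whoever takes the last stone wins

# Q[(stones_left, take)] = learned value of that move for the player to move
Q = {}

def legal_moves(stones):
    return list(range(1, min(MAX_TAKE, stones) + 1))

def choose(stones, eps):
    moves = legal_moves(stones)
    if random.random() < eps:                       # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))   # otherwise exploit

def train(episodes=20000, eps=0.2, alpha=0.1):
    for _ in range(episodes):
        stones = PILE
        history = []                                # (state, move), players alternating
        while stones > 0:
            move = choose(stones, eps)
            history.append((stones, move))
            stones -= move
        # The player who made the last move wins. Propagate +1/-1 backwards,
        # flipping the sign at each step because the players alternate turns.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward

if __name__ == "__main__":
    train()
    # After self-play training, the greedy policy should (for this toy game)
    # leave the opponent a multiple of 4 stones whenever possible.
    for s in range(1, PILE + 1):
        best = max(legal_moves(s), key=lambda m: Q.get((s, m), 0.0))
        print(f"stones={s:2d} -> take {best}")
```

The point of the sketch is only the shape of the loop: the agent generates its own training data by playing itself and nudges its value estimates toward the outcomes it observes, with no human games involved.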

To conclude, he hopes that these developments remain symbiotic with humanity, and that we don't create systems that pose a threat to us.

I highly recommend that you watch this talk.
