
Can we build AI without losing control over it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
In his TED talk, Sam Harris warns of the existential risks posed by superintelligent AI and urges careful development to keep such systems under human control. His argument ties directly into the AI Revolution theme: unchecked advances could reshape tomorrow's world in catastrophic ways, making ethical safeguards essential to a positive future.
"We have to admit that we're either going to build something that's smarter than we are, or we're not. And if we do, we have to figure out how to align its goals with ours."
Discuss: What steps should society take to align AI development with human values and prevent loss of control during the AI Revolution?
