
Can we build AI without losing control of it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
In his TED talk, Sam Harris explores the ethical challenge of building superintelligent AI without losing control of it. The talk ties directly into the AI Revolution's transformation of society, emphasizing the need for safeguards that reduce existential risk and keep AI systems aligned with human values.
"The moment we build machines as smart as we are, our prospects on earth become rather dicey."
Discuss: What steps should society take to maintain control over rapidly advancing AI technologies?