
Can we build AI without losing control over it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
In the context of 'The Dawn of Conscious AI,' Sam Harris's TED talk serves as a stark warning: as AI advances toward superintelligence, we must urgently establish the ethical and political frameworks needed to maintain control, lest we face existential risks from machines that outpace human oversight.
"We are probably going to build AI that is superhuman in its intelligence, and we have not yet grappled with the politics and ethics involved in that."
Discuss: As AI systems grow more capable, how should society balance innovation with safeguards against losing control?