
Can we build AI without losing control over it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
In his TED talk 'Can we build AI without losing control over it?', Sam Harris examines the risk that superintelligent AI will outpace human oversight. He ties this to the ethical implications of AI in everyday life, warning that an existential threat could emerge if we fail to align AI goals with human values.
"The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate."
Discuss: What steps should society take to mitigate the ethical risks of AI integration in daily activities, ensuring human control remains paramount?
