
Can we build AI without losing control over it?
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris -- and not just in some theoretical way. We're going to build superhuman machines, he argues, but we haven't yet grappled with the politics and ethics of how best to contain and manage them.
Summary
In his TED talk, Sam Harris explores the ethical challenges of building superintelligent AI. He warns that without proper control mechanisms, the AI revolution could lead to unintended societal transformations or even existential risks, and he urges a proactive approach to aligning AI with human values.
"The moment we admit that information processing is the source of intelligence, that some appropriate computational system is what the human brain is, and that we can build something like that artificially, we open the door to the possibility that we might build something that is much smarter than we are."
Discuss: How can society ensure ethical oversight in the AI revolution to prevent loss of control, as discussed by Sam Harris?
