I recently gave a speech at the Artificial Intelligence and The Singularity Conference in Oakland, California. There was a great lineup of speakers, including AI experts Peter Voss and Monica Anderson, New York University professor Gary Marcus, sci-fi writer Nicole Sallak Anderson, and futurist Scott Jackisch. All of us are interested in how the creation of artificial intelligence will impact the world.
My speech topic was "The Morality of an Artificial Intelligence Will Be Different From Our Human Morality."
Recently, entrepreneur Elon Musk made major news when he warned on Twitter that AI could be "potentially more dangerous than nukes." A few days later, a journalist asked me to respond to his statement, and I answered:

The coming of artificial intelligence will likely be the most significant event in the history of the human species. Of course, it can go badly, as Elon Musk warned recently. However, it can just as well catapult our species to new and unimaginable transhumanist heights. Within a few months of the launch of artificial intelligence, expect nearly every science and technology book to be completely rewritten with new ideas -- better and far more complex ideas. Expect a new era of learning and advanced life for our species. The key, of course, is not to let artificial intelligence run wild and out of sight, but to already be cyborgs and part machines ourselves, so that we can plug right into it wherever it leads. Then no matter what happens, we are along for the ride. After all, we don't want to miss the Singularity.
Naturally, as a transhumanist, I strive to be an optimist. For me, the deeper philosophical question is whether human ethics can be translated in any meaningful way into machine intelligence ethics. Is there such a thing as artificial intelligence relativism, and if so, is comparing machine morality to human morality any clearer than comparing apples and oranges? I'm a big fan of the human ego, and our species has no shortage of it. However, our anthropomorphic tendencies often go way too far and hinder us from grasping some obvious truths and realities.
The general consensus is that AI experts will aim to program concepts of "humanity," "love," and "mammalian instincts" into an artificial intelligence, so that it won't destroy us in some future human-extinction rampage. The thinking is: if the thing is like us, why would it try to harm us?
But is it even possible to program such concepts into a machine? I tend to agree with Howard Roark in Ayn Rand's The Fountainhead when he says, "What can be done with one substance must never be done with another. No two materials are alike." In short, getting artificial intelligence to think is not the same thing as getting the gray matter we all carry around to think. It's a different material with a different composition and purpose, and our values and ideas will likely not work very well for it.
In Siddhartha, Hermann Hesse famously wrote that "wisdom is not communicable," and I couldn't agree more. With this in mind, then, is the computer really a blank slate? Can it be perfectly programmed? Will it accept our human-imbued dictates? For example, if we teach it to follow Asimov's Three Laws of Robotics, which were devised to keep thinking machines safe and beneficial to humans, will an artificial intelligence actually follow them?
I don't think so, at least not over the long run. Especially not if we're talking about a true thinking machine -- complete with a will of its own and the ability to evolve. But that's just it: What is a will? More importantly, what does that "will" want?
In general, a human will is shaped by genes, environment, and the psychological make-up of the brain. However, a sophisticated artificial intelligence will be able to upgrade its "will." Its plasticity will know no bounds, unlike that of our brains. In my philosophical novel The Transhumanist Wager, I put forth the idea that all humans desire to reach a state of perfect personal power -- to be omnipotent in the universe. I call this a Will to Evolution. The idea is built into my Three Laws of Transhumanism, which form the essence of the book's philosophy, Teleological Egocentric Functionalism (TEF). Here are the three laws, followed by a toy sketch of how such an ordered rule set might be encoded:
1) A transhumanist must safeguard one's own existence above all else.
2) A transhumanist must strive to achieve omnipotence as expediently as possible -- so long as one's actions do not conflict with the First Law.
3) A transhumanist must safeguard value in the universe -- so long as one's actions do not conflict with the First and Second Laws.
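To make the priority ordering concrete, here is a minimal toy sketch in Python of how a lexically ordered rule set like this one -- or like Asimov's laws -- might be checked in code. Every name and predicate below is hypothetical, invented purely for illustration; this is a sketch of the structure, not an implementation of TEF. An action is tested against each law in priority order, and the first violation wins:

```python
# Toy sketch of lexically ordered laws: each law is a predicate over an
# action, and earlier (higher-priority) laws override later ones.
# All field names and predicates here are hypothetical.

from typing import Callable, NamedTuple

class Action(NamedTuple):
    description: str
    threatens_self: bool   # would violate Law 1 (safeguard own existence)
    reduces_power: bool    # would violate Law 2 (strive toward omnipotence)
    destroys_value: bool   # would violate Law 3 (safeguard value)

# Laws in priority order; each returns True if the action is permitted.
LAWS: list[tuple[str, Callable[[Action], bool]]] = [
    ("First Law",  lambda a: not a.threatens_self),
    ("Second Law", lambda a: not a.reduces_power),
    ("Third Law",  lambda a: not a.destroys_value),
]

def evaluate(action: Action) -> str:
    """Return the highest-priority law the action violates, or 'permitted'."""
    for name, permits in LAWS:
        if not permits(action):
            return f"forbidden by the {name}"
    return "permitted"

print(evaluate(Action("self-upgrade", False, False, False)))   # permitted
print(evaluate(Action("risky shutdown", True, False, False)))  # forbidden by the First Law
```

The design point is simply that the ordering itself is mechanical and trivial to encode; whether a self-upgrading intelligence would keep honoring such a check is exactly the question this essay raises.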
I consider my novel a bridge to the coming morality of artificial intelligence -- a look into the eventual "will" of super-advanced machine intelligence. I often say to friends that The Transhumanist Wager is the first novel written for an artificial intelligence to read. I expect AI to eventually embrace my laws, and all the challenging, coldly rational ideas in TEF. Those ideas do not reflect politically correct modern-day thinking or the society our species has built. They do not reflect the programming that engineers hope to imbue AI with. High heels, lipstick, silk ties, Christmas, democracy, Super Bowls, Hollywood, Mickey Mouse -- nope, those are not ideas that an AI will want, unless you teach the machine very human traits, which would naturally also include emotionally and hormonally driven behavior, including impetuousness and irrationality. Of course, then the whole story changes. But no engineer is going to program such things into the most complex intelligence that has ever existed -- an intelligence that might have 100 or 10,000 times the computing ability of a human being.
Let's face it: humans are a species that, while possessing some very honorable traits, is also known to do terribly foolish things. Genocide, slavery, and child labor are just a few of them. What's scary is that sometimes humans don't even know what they've done (or won't accept it) until many years later. I've often said the question is not whether humans are delusional, but how delusional we are. Therefore, the real question is: Do we really think we can reasonably and safely program a machine that will be many times more intelligent than ourselves to uphold human values and mammalian propensities? I doubt it.
I'm all for the development of superior machine intelligence that can help the world with its brilliant analytical skills. I suggest we dedicate far more resources to it than we currently do. But programming AI with mammalian ideas, modern-day philosophies, and the fallibilities of the human spirit is dangerous and could lead to total chaos. We're just not that noble or wise, yet.
My final take: Work diligently on creating artificial intelligence, but spend a lot of money and time building really good on/off switches for it. We need to be able to shut it down in an emergency.
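For the software-minded, here is a minimal sketch of the crudest version of that switch (the ai_workload function is a hypothetical stand-in, for illustration only): keep the shutdown authority in a separate supervisor process, enforced by the operating system, so the workload never gets a vote on whether it stops.

```python
# Minimal kill-switch sketch: the supervisor, not the workload, owns
# the off switch. The workload runs in a separate OS process and can
# be terminated unconditionally. ai_workload is a hypothetical stand-in.

import multiprocessing
import time

def ai_workload() -> None:
    # Stand-in for the actual system; it never decides its own shutdown.
    while True:
        time.sleep(0.1)

if __name__ == "__main__":
    worker = multiprocessing.Process(target=ai_workload, daemon=True)
    worker.start()

    time.sleep(1.0)  # supervisor decides an emergency stop is needed

    worker.terminate()        # hard stop enforced by the OS, not the workload
    worker.join(timeout=5)
    print("workload alive:", worker.is_alive())  # expect: False
```

A real kill switch would need far more than this -- hardware interlocks, isolated power, containment against a system smart enough to route around its supervisor -- but the principle stands: the off switch should not live inside the thing it is meant to stop.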