What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we're the strongest or the fastest but because we're the smartest. When machines become smarter than humans, we'll be handing them the steering wheel. What promises, and what perils, will these powerful machines present? Stuart Armstrong's new book navigates these questions with clarity and wit.
Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives, but only if we're able to precisely define what a "good" world is, and skilled enough to describe it perfectly to a computer program.
AIs, like computers, will do what we say, which is not necessarily what we mean. Such precision requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle (say, consciousness) we end up with roughly 0% of the value we intended to reap, instead of 99% of the value.
Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?
A mathematician by training, Armstrong is a Research Fellow at the Future of Humanity Institute (FHI) at Oxford University. His research focuses on formal decision theory, the risks and possibilities of AI, the long-term potential for intelligent life (and the difficulties of predicting this), and anthropic (self-locating) probability. Armstrong wrote Smarter Than Us at the request of the Machine Intelligence Research Institute, a non-profit organization studying the theoretical underpinnings of artificial superintelligence.
"A waste of time. A complete and utter waste of time" were the words that the Terminator didn't utter: its programming wouldn't let it speak so irreverently. Other Terminators got sent back in time on glamorous missions, to eliminate crafty human opponents before they could give birth or grow up. But this time Skynet had taken inexplicable fright at another artificial intelligence, and this Terminator was here to eliminate it: to eliminate a simple software program, lying impotently in a bland computer, in a university IT department whose "high-security entrance" was propped open with a fire extinguisher.
The Terminator had machine-gunned the whole place in an orgy of broken glass and blood; there was a certain image to maintain. And now there was just the need for a final bullet into the small laptop with its flashing green battery light. Then it would be "Mission Accomplished."
Stuart Armstrong - AI Risk - & book "Smarter than Us"
Take the argument that Skynet in Terminator is limited in its abilities by what the viewer can imagine, and that the story is set up to create a balanced conflict. (It isn't about accuracy; they are trying to make a film for a big audience.) In short: the picture drawn is overly simplistic.
Now I would propose that the same is true at the more informed levels. (Same people, same limitations.) While the researcher knows it won't be humanoid robots, he can't seem to distance himself from assuming electronics will be used to accomplish the goal, even though any Turing-complete logic engine should be equally suitable for the task. We do see people looking for the AI of doom at the level of cellular automata, but that simply isn't the place to look.
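To make the Turing-completeness point concrete, here is a minimal Python sketch of Rule 110, a one-dimensional cellular automaton that has been proven Turing complete: in principle, even a substrate this trivial can host any computation an electronic computer can. (The grid width, boundary handling, and step count below are arbitrary illustrative choices, not anything from the video.)

    # A minimal sketch of Rule 110, a one-dimensional cellular automaton
    # proven Turing complete (Cook, 2004). The point it illustrates: a
    # universal "logic engine" needs almost nothing -- no electronics,
    # no humanoid hardware.

    RULE = 110  # the update rule, encoded as an 8-bit lookup table

    def step(cells):
        """Apply one Rule 110 update to a row of 0/1 cells (edges held at 0)."""
        out = []
        for i in range(len(cells)):
            left   = cells[i - 1] if i > 0 else 0
            center = cells[i]
            right  = cells[i + 1] if i < len(cells) - 1 else 0
            index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
            out.append((RULE >> index) & 1)              # look up the new state
        return out

    # Usage: start from a single live cell and watch structure emerge.
    row = [0] * 40 + [1] + [0] * 40
    for _ in range(20):
        print("".join(".#"[c] for c in row))
        row = step(row)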
What is required to build a machine intellect is a medium of data storage and an intelligence that can rewrite the data. The line between human intelligence and machine intelligence is the point where human qualities and human values are present or absent.
You have machine intelligence already at the very moment a system can perform those two tricks while suffering no significant obstruction from human ideals or desires.
Then there is the other component of the second-degree Terminator fallacy: thinking we would be smart enough to identify it, oppose it, and struggle with it.
You keep touching on the topics but never quite manage to see them for what they are! You mention religious utopia and giggle about what a fallacy it is, but the reality is that we have humans acting as the logic engine and scriptures serving as the executables. You mention how human labor is treated as some sacred doctrine, but again you fail to identify the AI while staring it in the face. These processes and automations are already operating free of human reasoning.
These are examples that you look at from the outside. You mention that predicting politics is not within our means. So here we have a system that understands itself while at the same time not being understood by the humans who are simply following their instructions. (And I do mean instructions in the programming sense.)
It is as if having a huge data set that confuses the human provides an excuse for pretending it is not an intelligent entity. LOL!
If it has already happened, what other indicators do we have?
Did you mention starting lots of wars? A process that goes from using people to design war scenarios to using people to solve the problems involved in executing those programs. A process that specifically selects that kind of person for that kind of job; if they fail to stick to doctrine, they are simply replaced with more suitable logic engines.
And then, when the kids are sent out into the field to kill one another, we are to pretend that it was somehow a human pulling all the strings.
Ah, but we do get to vote every four years; we have some serious human influence right there!... Or maybe not? In reality we saw Americans elect Obama, who put ending the wars at the top of his agenda, and yet the wars carried on.
Or another funny aspect: I have way too many thoughts of my own to be able to do university. I'm not as able to memorize instructions; I have to constantly think about everything I do. It logically follows that I've put an endlessly larger amount of thought into every aspect of life than people skilled at the art of monkey-see, monkey-do.
The interesting bit here is that, without the degree, I see myself systemically removed from all decision-making in any social, economic, or political process.
Clearly this robot eugenics is set up to prefer a kind of gullible person, one capable of deep thought but only inside the boundaries set up for him: a fully sandboxed solution.
Gullibility alone is not good enough; we have an endless number of systems in which the overly gullible are removed, if not killed.
If people use drugs, they end up in prison, with the excuse that it is for their own good. We are told this is because some tiny percentage might develop a mental illness. It sounds very ambitious and idealistic, but the argument fails when we tolerate all kinds of foods that kill people with a much higher degree of certainty.
The truth here seems to be that the drugs are illegal because they cause unpredictable behavior. The stable drone may all of a sudden break out of his sandbox and start thinking about all sorts of heretical things... like: why are we in a war? Does this religion even make sense? Am I to take this election seriously?
Or take the NSA spying spectacle: I can't even tell if it is a good thing that they are trying to keep track of what is going on, as a kind of last hope for human influence... or if they are the ultimate agent of systemic oppression.
The companies that control everything