any of the other elements that make up the squeeze-and-release tension in a good yarn will be unrecognizable to us pre-Singletons.
It's a neat conceit to write around. I've committed Singularity a couple of times, usually in collaboration with gonzo Singleton Charlie Stross, the mad antipope of the Singularity. But those stories have the same relation to futurism as romance novels do to love: a shared jumping-off point, but radically different morphologies.
Of course, the Singularity isn't just a conceit for noodling with in the pages of the pulps: it's the subject of serious-minded punditry, futurism, and even science.
Ray Kurzweil is one such pundit-futurist-scientist. He's a serial entrepreneur who founded successful businesses that advanced the fields of optical character recognition (machine-reading) software, text-to-speech synthesis, synthetic musical instrument simulation, computer-based speech recognition, and stock-market analysis. He cured his own Type-II diabetes through a careful review of the literature and the judicious application of first principles and reason. To a casual observer, Kurzweil appears to be the star of some kind of Heinlein novel, stealing fire from the gods and embarking on a quest to bring his maverick ideas to the public despite the dismissals of the establishment, getting rich in the process.
Kurzweil believes in the Singularity. In his 1990 manifesto, "The Age of Intelligent Machines," Kurzweil persuasively argued that we were on the brink of meaningful machine intelligence. A decade later, he continued the argument in a book called The Age of Spiritual Machines, whose most audacious claim is that the world's computational capacity has been slowly doubling since the crust first cooled (and before!), and that the doubling interval has been growing shorter and shorter with each passing year, so that now we see it reflected in the computer industry's Moore's Law, which predicts that microprocessors will get twice as powerful for half the cost about every eighteen months. The breathtaking sweep of this trend has an obvious conclusion: computers more powerful than people; more powerful than we can comprehend.
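To get a feel for how quickly that doubling compounds, here is a back-of-the-envelope sketch -- my own illustrative numbers, not figures from Kurzweil's books -- assuming the eighteen-month interval simply holds steady:

DOUBLING_INTERVAL_YEARS = 1.5   # Moore's Law doubling time, per the essay

def relative_power(years):
    # Processing power relative to today, assuming the doubling rate holds.
    return 2 ** (years / DOUBLING_INTERVAL_YEARS)

for years in (3, 15, 30):
    print(f"{years:>2} years out: about {relative_power(years):,.0f}x today")

Two doublings buy a factor of four; twenty buy a factor of a million or so. That runaway sweep is what Kurzweil is gesturing at.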
Now Kurzweil has published two more books, The Singularity Is Near: When Humans Transcend Biology (Viking, Spring 2005) and Fantastic Voyage: Live Long Enough to Live Forever (with Terry Grossman, Rodale, November 2004). The former is a technological roadmap for creating the conditions necessary for ascent into Singularity; the latter is a book about life-prolonging technologies that will assist baby-boomers in living long enough to see the day when technological immortality is achieved.
See what I meant about his being a Heinlein hero?
I still don't know if the Singularity is a spiritual or a technological belief system. It has all the trappings of spirituality, to be sure. If you are pure and kosher, if you live right and if your society is just, then you will live to see a moment of Rapture when your flesh will slough away leaving nothing behind but your ka, your soul, your consciousness, to ascend to an immortal and pure state.
I wrote a novel called Down and Out in the Magic Kingdom where characters could make backups of themselves and recover from them if something bad happened, like catching a cold or being assassinated. It raises a lot of existential questions, most prominently: are you still you when you've been restored from backup?
The traditional AI answer is the Turing Test, invented by Alan Turing, the gay pioneer of cryptography and artificial intelligence who was forced by the British government to take hormone treatments to "cure" him of his homosexuality, culminating in his suicide in 1954. Turing cut through the existentialism about measuring whether a machine is intelligent by proposing a parlor game: a computer sits behind a locked door with a chat program, and a person sits behind another locked door with his own chat program, and they both try to convince a judge that they are real people. If the computer fools a human judge into thinking that it's a person, then to all intents and purposes, it's a person.
So how do you know if the backed-up you that you've restored into a new body -- or a jar with a speaker attached to it -- is really you? Well, you can ask it some questions, and if it answers the same way that you do, you're talking to a faithful copy of yourself.
Sounds good. But the me who sent his first story into Asimov's seventeen years ago couldn't answer the question, "Write a story for Asimov's" the same way the me of today could. Does that mean I'm not me anymore?
Kurzweil has the answer.
"If you follow that logic, then if you were to take me ten years ago, I could not pass for myself in a Ray Kurzweil Turing Test. But once the requisite uploading technology becomes available a few decades hence, you could make a perfect-enough copy of me, and it would pass the Ray Kurzweil Turing Test. The copy doesn't have to match the quantum state of my every neuron, either: if you meet me the next day, I'd pass the Ray Kurzweil Turing Test. Nevertheless, none of the quantum states in my brain would be the same. There are quite a few changes that each of us undergo from day to day, we don't examine the assumption that we are the same person closely.
"We gradually change our pattern of atoms and neurons but we very rapidly change the particles the pattern is made up of. We used to think that in the brain -- the physical part of us most closely associated with our identity -- cells change very slowly, but it turns out that the components of the neurons, the tubules and so forth, turn over in only days. I'm a completely different set of particles from what I was a week ago.
"Consciousness is a difficult subject, and I'm always surprised by how many people talk about consciousness routinely as if it could be easily and readily tested scientifically. But we can't postulate a consciousness detector that does not have some assumptions about consciousness built into it.
"Science is about objective third party observations and logical deductions from them. Consciousness is about first-person, subjective experience, and there's a fundamental gap there. We live in a world of assumptions about consciousness. We share the assumption that other human beings are conscious, for example. But that breaks down when we go outside of humans, when we consider, for example, animals. Some say only humans are conscious and animals are instinctive and machinelike. Others see humanlike behavior in an animal and consider the animal conscious, but even these observers don't generally attribute consciousness to animals that aren't humanlike.
"When machines are complex enough to have responses recognizable as emotions, those machines will be more humanlike than animals."
The Kurzweil Singularity goes like this: computers get better and smaller. Our ability to measure the world gains precision and grows ever cheaper. Eventually, we can measure the world inside the brain and make a copy of it in a computer that's as fast and complex as a brain, and voila, intelligence.
Here in the twenty-first century we like to view ourselves as ambulatory brains, plugged into meat-puppets that lug our precious grey matter from place to place. We tend to think of that grey matter as transcendently complex, and we think of it as being the bit that makes us us.
But brains aren't that complex, Kurzweil says. Already, we're starting to unravel their mysteries.
"We seem to have found one area of the brain closely associated with higher-level emotions, the spindle cells, deeply embedded in the brain. There are tens of thousands of them, spanning the whole brain (maybe eighty thousand in total), which is an incredibly small number. Babies don't have any, most animals don't have any, and they likely only evolved over the last million years or so. Some of the high-level emotions that are deeply human come from these.
"Turing had the right insight: base the test for intelligence on written language. Turing Tests really work. A novel is based on language: with language you can conjure up any reality, much more so than with images. Turing almost lived to see computers doing a good job of performing in fields like math, medical diagnosis and so on, but those tasks were easier for a machine than demonstrating even a child's mastery of language. Language is the true embodiment of human intelligence."
If we're not so complex, then it's only a matter of time until computers are more complex than us. When that comes, our brains will be model-able in a computer and that's when the fun begins. That's the thesis of Spiritual Machines, which even includes a (Heinlein-style) timeline leading up to this day.
Now, it may be that a human brain contains n logic-gates and runs at x cycles per second and stores z petabytes, and that n and x and z are all within reach. It may be that we can take a brain apart and record the position and relationships of all the neurons and sub-neuronal elements that constitute a brain.
But there are also a nearly infinite number of ways of modeling a brain in a computer, and only a finite (or possibly nonexistent) fraction of that space will yield a conscious copy of the original meat-brain. Science fiction writers usually hand-wave this step: in Heinlein's "The Moon Is a Harsh Mistress," the gimmick is that once the computer becomes complex enough, with enough "random numbers," it just wakes up.
Computer programmers are a little more skeptical. Computers have never been known for their skill at programming themselves -- they tend to be no smarter than the people who write their software.
But there are techniques for getting computers to program themselves, based on evolution and natural selection. A programmer creates a system that spits out lots -- thousands or even millions -- of randomly generated programs. Each one is given the opportunity to perform a computational task (say, sorting a list of numbers from greatest to least), and the ones that solve the problem best are kept aside while the others are erased. Now the survivors are used as the basis for a new generation of randomly mutated descendants, each based on elements of the code that preceded them. By running many instances of a randomly varied program at once, and by culling the least successful and regenerating the population from the winners very quickly, it is possible to evolve effective software that performs as well as or better than code written by human authors.
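Here is what that loop looks like in miniature -- a toy sketch of my own, not code from any real evolutionary-computing package, with made-up population sizes and mutation rates. Each candidate "program" is just a fixed list of compare-and-swap steps, scored on how well it sorts random lists from greatest to least:

import random

LIST_LEN, PROG_LEN, POP_SIZE, GENERATIONS = 6, 20, 200, 60

def random_program():
    # A "program" is a fixed-length list of index pairs to compare and swap.
    return [(random.randrange(LIST_LEN), random.randrange(LIST_LEN))
            for _ in range(PROG_LEN)]

def run(program, data):
    data = list(data)
    for a, b in program:
        i, j = min(a, b), max(a, b)
        if i != j and data[i] < data[j]:
            # Enforce greatest-to-least order at positions i and j.
            data[i], data[j] = data[j], data[i]
    return data

def fitness(program, tests):
    # Count positions that match a true descending sort across all test lists.
    return sum(o == g
               for t in tests
               for o, g in zip(run(program, t), sorted(t, reverse=True)))

def mutate(program):
    # Replace one randomly chosen instruction with a fresh random one.
    child = list(program)
    child[random.randrange(PROG_LEN)] = (random.randrange(LIST_LEN),
                                         random.randrange(LIST_LEN))
    return child

tests = [[random.randrange(100) for _ in range(LIST_LEN)] for _ in range(30)]
population = [random_program() for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=lambda p: fitness(p, tests), reverse=True)
    survivors = population[:POP_SIZE // 5]          # keep the best fifth
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=lambda p: fitness(p, tests))
print("best fitness:", fitness(best, tests), "out of", len(tests) * LIST_LEN)
print(tests[0], "->", run(best, tests[0]))

The selection scheme here -- keep the best fifth, mutate only the survivors -- is the crudest possible version; real systems also recombine survivors with one another, but even this toy tends to stumble toward programs that sort most of the test lists.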
Indeed, evolutionary computing is a promising and exciting field that's realizing real returns through cool offshoots like "ant colony optimization" and similar approaches that are showing good results in fields as diverse as piloting military UAVs and efficiently provisioning car-painting robots at automotive plants.
So if you buy Kurzweil's premise that computation is getting cheaper and more plentiful than ever, then why not just use evolutionary algorithms to evolve the best way to model a scanned-in human brain such that it "wakes up" like Heinlein's Mike computer?
Indeed, this is the crux of Kurzweil's argument in Spiritual Machines: if we have computation to spare and a detailed model of a human brain, we need only combine them and out will pop the mechanism whereby we may upload our consciousness to digital storage media and transcend our weak and bothersome meat forever.
But it's a cheat. Evolutionary algorithms depend on the same mechanisms as real-world evolution: