Mark 3 Consciousness
								P.O. Box 797
								Sonoita, AZ  85637-0797

								January 2, 2003


Mr. Ray Kurzweil
Kurzweil Technologies, Inc.
15 Walnut Street
Wellesley Hills, MA  02481

Dear Mr. Kurzweil:

	This is to congratulate you on your book "The Age of Spiritual Machines,"
which I just finished reading, and on having not only the foresight but also the
courage to write it. You have demonstrated remarkable intelligence, breadth of
knowledge and of interests (computer science, philosophy, physics, biology, art,
music, and poetry), and seriousness of purpose in your entire body of work,
both in business and in your books. This book ranks at least in the top 100 most
important books of the 20th century, if not the top 10. Everyone interested in
who we are, where we came from, and where we are going should read this book
(that should be everyone!). Everyone interested in the philosophy of
consciousness and identity also should read this book.

	For that reason, I am enclosing a copy of this letter to Prof. David
Chalmers of the Philosophy Department at the University of Arizona. I sent a
couple rather naive letters to Prof. Chalmers (copies of which I enclose) more
than two years ago, and he kindly responded by suggesting that I read the papers
listed on his Web site, some of which I have read in my sparse spare time.

	As a physics major, a computer programmer for over 20 years, and now an
engineering manager, I agree with most of your conclusions about the technology
and where it is headed. Smaller, faster, and cheaper has been the trend, is the
trend, and will be the trend for the foreseeable future. You have documented
through your footnotes that your extrapolations rest on known technology, not
science fiction. Your view of evolution, rather than being confined to plants
and all-animals-except-humans, is broad enough to encompass all living things
on the Earth, including us. You have skillfully sidestepped the traditional,
and boring, human versus machine argument by foreseeing the
human-merging-with-machine future. Instead of raising that as an ugly end
to humankind (e.g., the Borg "character" in Star Trek: The Next Generation), you
argue convincingly that it will be the salvation of humans in the face of
ever-faster computers that must eventually surpass humans in intellectual
horsepower. There is no doubt that computers will one day be intellectually
superior to today's humans, and will, eventually, pass the Turing Test.

	One question on my mind, and where I seem to differ from you, is
"will machines ever be conscious?" You seem to think so, and I simply don't
know, but I'm not willing to bet my life (consciousness) on it.

	The other question that you raise and that I believe you do not answer
convincingly is a question of identity, namely, "can consciousness be
transferred from one sustainer (e.g., a human body) to another (e.g., a
different human body or a computer, if computers can be conscious)?"

	In Chapter 3 (all references and page numbers are to the paperback
edition of The Age of Spiritual Machines, ISBN 0-14-028202-5) you do an
excellent job reviewing the various theories of consciousness and identity
for the lay reader (by the way, re. P. 61, I, too, was raised a Unitarian
and try to use the "many paths" synthesis approach in my daily life). I agree
with your basic interpretation of Descartes' Cogito as a simple, elegant
statement of what is, as a bare minimum, knowable. I had a friend send me
an extended discourse this past summer about why Descartes was wrong and
why a quasi-Buddhist approach taught by the Landmark Education Forum was
better, and I think I was not successful in trying to make him understand
that the two are not incompatible. That was because I was unable to convince
him that "I think" did not necessarily imply Western logical reasoning, it
was a statement of the existence of consciousness.

	Yet you appear to come out in the end on the side of using the Turing
Test as the ultimate test of whether a machine is conscious. I believe this
is faulty, and confuses intelligence with consciousness. On pp. 55-56 and
following you explain the theory of the difference between objective and
subjective experience, then on pp. 59-60 you reference Prof. Chalmers's work
and explain the intellectual history of Wittgenstein. While an undergraduate,
I was frustrated when discussing epistemology with my fellow physics majors,
trying to make them see that there is nothing in physical theory that
explains why we experience redness instead of blueness when exposed to light
of a certain wavelength. They didn't get it - they kept insisting it had to
do with the wavelength of the light.

	The fact that the subjective world is distinct in some way from the
objective world (even if the former is caused by the latter) is, I believe,
amply demonstrated by the thought experiment concerning the woman kept in
a black and white world presented by Frank Jackson in his paper
"Epiphenomenal Qualia" that I downloaded some time ago from Prof.
Chalmers's Web site. This distinction is at the core of Prof. Chalmers's
distinction between the "easy" problem of figuring out how the brain works
(the "scanning" that you predict will be complete by the end of this century)
and the "hard" problem of explaining the subjective world of consciousness.

	Intelligence is a behavior that intelligent beings (animals, including
humans, or machines) exhibit outwardly to others. One dimension of
intelligence, the Intelligence Quotient, can be quantified for an individual
using that individual's performance on the Stanford-Binet intelligence test.
Consciousness is the subjective, inner world each of us (supposedly) has.
All the Turing Test can hope to do is prove that a machine is intelligent,
because it uses outwardly exhibited machine behavior. It cannot, under any
circumstances, prove that a machine is conscious.

	I said above that we each (supposedly) have a subjective world. While
I graduated as a physics major, I narrowly missed graduating as a philosophy
major as well. One of the problems I studied in my philosophy courses was
the Problem of Other Minds - how can we prove that everyone else is not a
robot? After considerable reading and discussion, I was disappointed to
learn that the answer (then) was, we can't, and the best argument is a
"common sense" argument that since we all seem to come into this world by
the same method, and I have good testimony from an otherwise reliable
source (my mother) with no apparent reason to lie that I was created by
the same method, then all other people, who are like me in other ways,
must also be conscious. This is not a good argument. Unless something
has happened in this field in the last 35 years, this remains unsatisfying.
It could be we cannot know for certain if anyone, or thing, besides
ourselves is conscious. Which brings me to my next point.

	It could be we cannot know (as a matter derivable from fundamental
principles or the laws of physics) whether other people, or machines, have
minds. That is, someone might be able to prove, as a theorem based on
information theory, quantum mechanics, or both, that it is impossible to
know if an intelligent entity is conscious. Now that is a guess. When
mathematicians guess at something, they call it a Conjecture (capital
"C"), but a Conjecture is really important, and besides, I profess no knowledge
in this field, so this is a small-"g" guess on my part. This does not
necessarily mean we will never solve Prof. Chalmers's "hard problem",
though someone might prove that it does mean this, in which case it
becomes the Impossible Problem.

	So far, I have discussed the general problem of machine consciousness.
I will now turn my attention to the related problem of identity. But first,
a short digression. When I was an undergraduate, I heard a professor named
Roderick Chisholm, who was a student of Wittgenstein, give a talk on
identity. He described two identical wooden ships built at the same time.
One would simply be maintained, so that as each plank wore out, it would be
replaced with an identical wooden plank. On the other ship, when each wooden
plank wore out, it would be replaced with an iron sheet of equal area. After
30 years, the second ship's wooden planks have all been replaced with iron. There
could be some debate whether the two ships are now identical, depending on
how you define "identical". However, if a human could be cloned in two, the
cells of one replaced over 30 years, and you talked to both afterward,
there would be no question that they are the same person. They
would have the same personality, mannerisms, speech style, memories, and
all the other things that characterize a personality. That is, he used a
customized analog of the Turing Test, an interview, to determine that Fred
was indeed Fred and not someone else. His point was that there is something
we recognize in a person's behavior that establishes their identity, and
we can recognize it almost immediately, and there is
no dispute among humans as to whether or not a particular being is Fred,
while there can be considerable dispute among humans as to whether two ships
are identical.

	When I first heard this talk as a freshman, and later when I took
Prof. Chisholm's metaphysics course, I was unconvinced. By the time I took
his course a couple of years later, I had already sat down at a terminal
at the university computing center and "talked" to ("with") the ELIZA
program on the IBM 360/67. It could not have passed the Turing Test. But
I could see the possibilities, and found I was very excited by artificial
intelligence and robotics. The point is, Prof. Chisholm was using as his
test of identity a test similar to the one you plan to use for consciousness,
that is, trying to determine the (possibly unknowable) subjective state of
someone by interviewing them. I propose that a conscious being's identity
is tied to their consciousness, not to their outward personality, though
that is how their identity is usually manifested to others. I give the
reason below.

	You discuss on pp. 124-5 downloading one's mind into a computer,
and previously an analogous issue in the Star Trek matter transporter on
p. 54. In one of my letters to Prof. Chalmers, I go into some detail about
the matter transporter and the issue of moving matter versus moving
information. I pose this question to you: suppose you are actually on
the Enterprise and Kirk orders you into the transporter to beam down
to a known friendly destination with no threats. Would you go? Given what
you wrote in your book, I assume the answer is Yes. I would insist on
taking the shuttlecraft instead. That's because I would have no guarantee
that the person who stepped out on the other end was me! He would look
like me, act like me, talk like me, and convince all my friends he was me.
He would know my mother's maiden name, my social security number, my
credit card numbers, birth date, and other personal information. He would
be absolutely identical to me in every way. He would insist he was me,
and get mad if you accused him of not being me (p. 126).

	But if the transporter worked by destructive decomposition,
information transfer, then reconstruction from that information, how do
you transfer the consciousness? If, as you appear to maintain,
consciousness can be substantiated by a test performed by others, then
go ahead and step into that transporter (p.131, freezer scanning). But
now suppose that it malfunctions, scans you nondestructively, and
creates an exact duplicate hundreds of kilometers away. This duplicate
convinces everyone by his outward behavior that he is you and begins
spending your money using your credit cards (that were exactly duplicated
in your duplicated wallet) while you are stuck on board the Enterprise
waiting for Scotty to figure out what is wrong. Ray-duplicate is not you
in terms of classical identity, that is, he does not have your
consciousness. I won't call him "Ray-clone", for cloning has a particular
cell biology meaning in which a cell is injected with DNA from an
external donor, then a DNA-duplicate is grown, but this DNA-duplicate
is not likely to have the same personality as the DNA donor due to having
grown up in a different time with different experiences; in my thought
experiment, the two Rays are exact duplicates in every way at the instant
of creation of the second Ray, then only from that point on do they have
different experiences that would influence their personality in a way
noticeable by others (per the Chisholm Test).

	Assuming Ray-duplicate is conscious (something we cannot know for
certain) he has a different consciousness from yours. His consciousness
was created when the living tissue was (re)created from the information
transmitted about your body. We know this because you cannot be in two
places at the same time (at least according to the old Firesign Theatre)
in reality (virtual space aside, for the time being). Using your
communicator, you can even communicate with him, discover he exists,
and ask him to stop spending your money. He will become incensed (p. 126)
because he is absolutely convinced he is you, and can't understand who
this imposter is talking to him back onboard the ship. I think this little
experiment implies that (a) consciousness does not necessarily move into
a computer when thought patterns are scanned into it, and (b) identity is
intimately linked with consciousness (in all its forms, including the
subconscious in the case of amnesia victims), not with tests of memory
or outward appearance given by friends of the person under examination.

	All this is hypothetical and far-fetched. Yet this thought
experiment has serious implications for your theory that in 2099, humans
will routinely be software on a Web with the same consciousness we have
today, only better. It may well come to pass that humans will merge with
machines as you predict, and that humans will have their brains scanned and
cease to exist instantiated in a carbon-based body. But I maintain that there
is no guarantee that consciousness as we now know it will survive in such a
world. Human behavior will be replicated, even enhanced; but when humans
"cross over" to the other side, as you put it in the book, they may cease
to exist as humans in a fundamental way. Exactly what is it, anyway, that
"crossed over" - a set of behaviors, or the true consciousness, and thus
the identity, of the person whose brain was scanned? Suppose scanning is
non-destructive; what then? Are there two of everyone? Which one is "real"?
If you get scanned non-destructively, which one is really you?

	You seem to acknowledge this problem from time to time. For example,
on p. 154, you state that machines will report the same range of experiences
as humans, even a broader range, "But what will they really be feeling? As
I said earlier, there's just no way to truly penetrate another entity's
subjective experience, at least not in a scientific way. I mean, we can
observe the patterns of neural firings, and so forth, but that's still
just an objective observation." Again, later, on p. 242 you challenge
the Molly software brain scan as to whether it is the "real" Molly. Yet
later, you go on to predict that humans will merge with machines in a
way that abandons carbon-based bodies, so that human consciousness becomes
so much software. This appears to contradict the statements I quote,
unless you are willing to concede that such entities, though intelligent,
may not be conscious; and even if conscious, they are not the same
"people" as those who were scanned.

	Please understand that I do not think there is anything unique
in the ability of organic (carbon-based) compounds to support
consciousness - not only does silicon form many of the same types
of compounds ("silicones", with an analogous behavior to carbon),
but there is no inherent reason that silicon-based creatures could
not also be conscious. My objections are to (a) the concept of
consciousness residing only in patterns represented by algorithms
and software, and (b) to the transference of consciousness by
reading out brain function and replicating it in software, and
in storing the algorithms (along with backup copies of the software),
given the matter transporter thought experiment above. You even
mention that the computers are likely to be something like nanotubes,
which are carbon-based structures after all.

	Given that you acknowledge the problem, I am perplexed that
you don't address it better. Perhaps you are saying that the lure
of the benefits of technology will cause humans (with their neural
implants, virtual cyberspace, and the [I think, false] promise of
immortality) to be overpowered by the machines, join with them, and
abandon the one last thing that separates them from the machines - their
consciousness. Evolution will march on, with the most intelligent,
in this case the machines, winning out in the end. You try to paste
an appealing facade over it, something that all but the Luddites will,
in the end, accept (though most readers today find the Borg repellent).
You strike me as being too honest for such a sham, so I can only
assume that although you acknowledge the problem, you believe in the
end, as Chisholm seemed to, that the Turing Test resolves the issue
(the objective can discern the subjective, somehow), which I can't accept.

	On p. 242, Molly objects to being questioned about her identity,
and you decide not to push the issue further. Again, you are using
external behavior to decide internal states, and I believe that is
a mistake. Although the "hard problem" may, in the end, be
explainable, it could be that a conscious being's internal states
may not be knowable, and even the fact that a being is conscious
may not be knowable. It's not possible for anyone but the original
Molly to know for certain if Molly survived the readout. If she did
not, she's dead, and the software Molly on p.242, just like the
Ray-duplicate of the copying matter transporter, will insist she's
the real Molly that survived the transfer. But she will not
necessarily have real conscious experiences, as we humans know
them. The machine Molly will insist she remembers being human and
that her current "experiences" are the same as, or better than,
her human ones. She will not be lying, but she will be mistaken.
Given the full richness of human experience, how can a memory
compare with the real experience? Also, how will we ever know
we got Asimov's Three Laws of Robotics programmed right in all their
possible subtleties? On the other hand, if Molly did survive,
she will act just the same as if she did not, and we won't know
the difference.

	On p. 182, you describe Ted Kaczynski's neo-Luddite
philosophy and the fact that there are too many people and too
little nature. Bravo! I have long thought that the US could do
quite well with a population of 75 million people, that of all
our social ills, population control is our most important problem,
and the lack thereof is the root cause of many of our other
problems. I found it interesting that you predict that the world
population will level off at 12 billion people (p. 222), though
you don't state clearly why. Perhaps it will be the technological
drive that spurs people to obtain higher education (there is an
inverse correlation between education level and family size),
the rising standard of living that will decrease dependence on
children for support in one's old age, coupled with more
widespread dependence on technology that will decrease dependence
on religion, especially those religions that discourage birth control.

	One aspect of not returning to nature, though, is that, as
humans merge with machines, they could adopt machine values and
lose sight of their natural roots. You described one human (I believe
it was one of Molly's children) who withdrew from human society and
spent more and more time in virtual cyberspace. What is to prevent
2029 humans, and especially 2099 software "human/machines", from
abandoning their caretaker role of the Earth's plant and animal
resources and developing a new aesthetic that shuns nature,
ignores the needs of other carbon-based species, and creates a
barren world devoid of natural beauty as we now know it? With twice
as many people as today in thirty years, we will have made several
dozen species extinct by destroying their habitat, and could be
hard pressed to feed everyone (though technology has a way of keeping
ahead of food needs; most famine is caused by politics).

	In several places (e.g., p. 221) you mention cochlear implants.
First, I want to congratulate you for your work over the years in
making devices for the disabled. My wife worked for a while (before
state funding cuts forced her job to be eliminated) with a local
non-profit company that introduced disabled people to the vast array
of products and services available to improve their lives. Many of
the devices and much of the software in their demonstration room were
the direct result of your efforts and those of companies you founded.
You deserve every inventor's award and medal you have won. However,
isn't there some controversy in the deaf world about cochlear
implants? That is, there are those who would benefit from them who
want to remain deaf, or who want their children to remain deaf, as
they believe there is some value to being in that deaf society. I
simply do not understand that, being a hearing person, and enjoying
a symphony orchestra (though about 1.5 years ago, I suffered a
mini-stroke in my right inner ear, lost all my high frequency
hearing in that ear, and have continuous tinnitus that prevents my
using a hearing aid, according to my otolaryngologist). Do you
believe that as more such implants become available, this
disabled-Luddite backlash will increase or decrease?

	Thanks for mentioning the asteroid threat on p. 258. My
research interest is astrometry (observing and measuring the
positions) of Near Earth Objects (NEOs: asteroids and comets) to
help determine whether they pose a hazard to Earth. About eight
such objects are lost each month because the Federal Government
underfunds such research, and professional astronomers are not
properly rewarded by their peers for conducting such research. As
a result, this research is conducted almost exclusively by amateur
astronomers who increasingly do not have telescopes with sufficient
light-gathering power to follow up on the discoveries of the major
professional surveys. Consequently, as more NEOs are discovered,
more are lost, because the surveys do not have the resources to
perform their own follow-up astrometry. If you wish to learn more
about the NEO threat and research opportunities, please visit our
Web site at www.winer.org or the Near Earth Object Search Society
Web site at www.neoss.org. 

	Enough criticism; now for some questions. On p. 294 (yes, I
read the Time Line and all of the gray boxes with PDL for the three
algorithms), under Variations for the neural net, I was wondering whether it would
be possible to leave the number of outputs variable. Here is an
interesting problem. In 1888, Daniel Kirkwood discovered that four
pairs of asteroids had nearly identical values of several orbital
elements, and in 1918, Kiyotsugu Hirayama published his seminal work
establishing the families of asteroids, based on similar orbital
elements, still associated with his name. In 1984, David Tholen
published his Ph.D. thesis in which he defined new asteroid taxonomic classes
based on their spectra that, presumably, yielded information about
their chemical composition. In the early 1990s, Ellen Bus did some
work with a neural network classification system that invented new
classes using spectra as inputs, but it was unclear what physics
were associated with these output classes, and, as far as I know,
the research was abandoned.

	My questions are: First, how can you have the neural net invent
some open-ended number of classes of asteroids? Your algorithm, even
the variations, appears to require defining the number of outputs
a priori. Can you have an undefined number of outputs by defining
a fixed but large number of outputs, then inducing the values of
several of them to go to zero? Second, is there a way to modify
this research to make the outputs physically meaningful in some way?
That is, instead of having the output classes or bins defined by the
computer in some arbitrary way after the initial learning process,
the outputs would need to be restricted to something with physical meaning
(e.g., spectral features associated with geochemical composition).
Dr. Bus did not do that - she and her neural net expert collaborator
apparently let the computer define new output bins that had no physical
meaning associated with them.
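
	To make my first question concrete, here is a rough sketch of the
sort of thing I have in mind. It is written in present-day Python with the
NumPy library as a one-layer competitive ("winner-take-all") network, and it
is only an illustration under my own assumptions, not the PDL from your book:
the channel count, the class limit, and the synthetic "spectra" are
placeholders I made up.

    # A hedged sketch, not the algorithm from the book: give the network a
    # deliberately large, fixed bank of output units (MAX_CLASSES) and let
    # the data decide how many of them are ever used.  Unused units simply
    # never win, which approximates an "open-ended" number of classes.
    import numpy as np

    rng = np.random.default_rng(0)

    N_CHANNELS = 49        # assumed number of spectral channels per asteroid
    MAX_CLASSES = 32       # generous upper bound on the number of classes

    # Placeholder "spectra": three hidden groups of noisy curves.
    centers = rng.random((3, N_CHANNELS))
    spectra = np.vstack([c + 0.02 * rng.standard_normal((200, N_CHANNELS))
                         for c in centers])

    # One weight vector (a template spectrum) per output unit.
    weights = rng.random((MAX_CLASSES, N_CHANNELS))
    rate = 0.1

    for epoch in range(50):
        for x in rng.permutation(spectra):
            winner = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Only the winning unit moves toward the input (competitive learning).
            weights[winner] += rate * (x - weights[winner])

    # Count how many of the MAX_CLASSES output units the data actually uses.
    wins = np.argmin(np.linalg.norm(weights[None, :, :] - spectra[:, None, :],
                                    axis=2), axis=1)
    print(f"classes actually used: {len(np.unique(wins))} of {MAX_CLASSES}")

	In a real application the inputs would be calibrated reflectance
spectra, and the unused units would be pruned or merged afterward; the point
of the sketch is simply that allocating a fixed but generous number of
outputs, and letting most of them go unused, may be one practical way around
defining the number of classes a priori.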

	Finally, a comment. You mention (p. 260) that the fate of the
Universe will be a decision "we" will make when the time comes. A few
thoughts come to mind. First, what about other intelligent life in
the Universe, maybe further along than us, maybe far behind us. Do
they get to participate in this momentous decision? Second, all baryonic
matter in the Universe (as we know matter) will disappear when the
proton decays in approximately 10^40 years. Supposedly, baryonic matter
is needed to build the computers needed in 2099 to support future
intelligence ("life"?) according to your predictions. Since baryons
are composed of quarks, and you predict we will be doing
femtoengineering, perhaps if your predictions are correct, our
descendants will find a way to control their destiny. But that will
be a tall order. Third, in the myth of the vampire, faced with the
prospect of immortality, vampires eventually commit suicide.

	Also, mortality may be a necessary component for healthy social
evolution. If the members of a society do not age and die, the society
as a whole can stagnate, as younger members with fresh ideas may not
have a path to positions of power or influence. In your 2099 scenario,
you provided the mechanism for the creation of new intelligences, but
you did not make it clear what would force the "old geezers" to retire.
Would it be boredom? You seemed to hint at that when Molly quit the
census committee, but it's not clear how to address this problem
throughout the society, especially in key positions held by those
who do not want to relinquish their positions.

	And this is a pervasive, and potentially serious problem.
When I attended a college class reunion a few years ago, my classmates
were complaining about rap (hip-hop) music, body piercings, and
freakish hair. I had listened to some mainstream hip-hop just before
the reunion and though I wouldn't make it my usual listening fare, it
was tolerable, not as troublesome as the gang lyrics in the early days
of the genre. I reminded my classmates that they sounded just as our
parents did 30 years earlier, for precisely the same reasons. My point
is that people grow old, not only in their bodies, but in their thinking.
If you scan human brains into a computer (even successfully), you will
transfer something not meant to last more than about 80-100 years. It
seems that each generation needs to learn history from the previous one,
then apply a new perspective to the problems of the present as it shapes
the future. We have to be willing to step aside and let the next
generation take over, even though they did not experience what we did,
just as our parents stepped aside and let us take over even though we
did not experience the Depression and World War II first hand.

	Despite my criticisms, I want you to know how indebted I am to
you for writing this book. In it, you demonstrate that issues debated
by philosophers for centuries are not esoteric topics reserved for the
ivory tower, but are, or at least all too soon will be, issues of life
and death for the human species that we simply must be willing to
acknowledge, confront, and address. It may well be that we will solve
Prof. Chalmers's "easy problem" too soon, before we have time to think
seriously enough about the "hard problem". Or perhaps the pace of growth
in machine intelligence won't grant us the time we need to solve
the "hard problem" before it's too late.

	I would really like to know how this story of the human species turns out.


								Sincerely,




								Mark Trueblood
								www.winer.org


p.s. According to information on your Kurzweil Technologies, Inc. Web site, you
were born 11 days before I was.