Introduction
It is galling that I have planned to write about transhumanism since the turn of this year, only for the Guardian newspaper to write about it a number of times. However, there is still a point to continuing, as I wish to discuss as many facets of transhumanism as possible and sort out exactly why I find it displeasing.
The reason I wish to write about this is a piece in the Metro, a free newspaper circulated on British public transport, on January the 11th. "Rise of the machine" was the headline, and the standfirst proclaimed that 'artificial intelligence is stepping out of the laboratory and into your living room'. The first part was principally about Robosapien, a hot item in the news around Christmas.
"So where is it all heading? Some scientists are looking forward to 'the singularity': the moment when we create an artificial mind more powerful than our own. This could the create a smarter AI still, starting a snowball effect resulting in massive technological advances over a short period of time."
Apparently, Peter Thiel, former CEO of PayPal, is involved in giving large sums of money to work on singularity stuff. It's nice that he has a hobby, isn't it? The article then goes into 'what if the robots go insane?!!' territory, mentioning the film 'Terminator', ending with the amusing "So treat today's robots well - when the singularity comes, they'll remember who changed their oil".
Evidently someone is taking transhumanism seriously. It may be beneficial to find out what it is, and examine what we can and should think of it. The reason I wish to write on this issue is that I distrust technology and science, considering them as providing answers primarily to technological and scientific issues. When it comes to human, political, social, and philosophical problems, technology and science are not the answer. They can support an answer, be a mechanism to aid the answer that has been found, but they are not the answer in themselves. At this early stage, I am concerned that the transhumanists might be looking in the wrong place for a saviour. This article is an exploration into transhumanism and its problems, and of my own distrust of its language and basis.
For this article, I am using the wonderful and tiny book 'Derrida and the End of History' by Stuart Sim, in the Postmodern Encounters series. It is very good, and makes one look at postmodernism in a positive light, which is by all accounts a tough task when there is a bloody civil war going on over the issue between intellectuals. I am also using internet pages which will, for the most part, be linked to in a hypertextual manner.
Attempting to explain transhumanism
If someone were to accost you on the street and say that "the answer to all the world's problems is provided by the internet", you would walk away, tutting something about political correctness having gone mad. Currently, if one uses Google to search for "solution to the world's problems", one gets an article on how Islam is the answer. You should observe that such solutions were around before the internet; the internet only provides a new way of presenting this information and allowing it to reach a mass audience. In this respect, it is like a cannon that fires religious leaflets into the letterbox of anyone who publicly wonders "why am I alive?".
However, there are people who pin their hopes for the future on computers. These people can be considered transhumanists. But other people are more interested in medical technology, or the augmentation of the human body with computers. These can also be considered transhumanists. What we are encountering is a spectrum of belief, with the unifying theme being faith in technology to improve the human condition. At one end, it is evidently reasonable - technology can aid us in many ways. At the other end, it is less reasonable, more speculative, and perhaps somewhat worrying.
It is becoming evident that there is a bewildering array of ideas floating around under the banner of transhumanism, and I do not feel that I can read, let alone explain, enough about all these viewpoints. You must therefore accept the limitations of my account.
Some of the jargon we are going to come across includes words such as transhumanism (which can be abbreviated as >H or H+), extropian, singularity, and endism. We are going to go through each of these things, but I must stress that they are linked. Transhumanism, which we will see has some good aspects, entails extropianism and the idea of the singularity, which I consider less beneficial.
Increasing the range of human life
Transhumanism is not unappealing. In Saturday's Guardian, James Harkin writes briefly but lucidly about the science of life extension:
"Next week a far-flung group of scientists, philosophers and future-gazers will descend on Oxford University for a conference about it, titled Tomorrow's People: The Challenges of Technologies for Life Extension and Enhancement. At around the same time, Ray Kurzweil, a longtime prophet of radical life extension, will launch in Britain his book, The Singularity is Near. Humans, he argues, are shortly approaching lift-off to immortality."
All this activity is placed under the banner of 'transhumanism', which, as James explains, is "the belief that if we humans can just hang on for the next 30 or 40 years, the science will have reached such a level of sophistication that we will be able to live for the next 1,000". I take issue with this, as not all transhumanism is concerned with the extension of human life. In general it is to do with the augmentation of human life with science, or technology, or computers, or all three. Most transhumanism is to do with technology increasing our freedom; some is more to do with technology being able to manage our freedom more effectively, imagining computers creating human society. But all transhumanism considers that we can be more than human with the help of technology.
James considers that "transhumanists share a welcome zeal for overcoming our limitations. For them, there is little that is natural about when we get old or die, and the subtle alteration of our incubator, our scientific and technological surroundings can keep us alive longer than ever before". He, of course, balances this by casting doubt on such claims, as it is not exactly mainstream science yet. But where he hits the nail on the head is here:
"The idea of radical life extension is also a little antisocial. A house, to borrow De Grey's metaphor, is a place to live in as well as an investment. As with the house-buying and renovation craze, transhumanism risks turning all our energies inwards, rather than out into society, where they might be of more immediate use. Sometimes, the urge to escape the ageing process seems like an attempt to escape everyone else."
What is important - the individual human, or humanity in general? If we concentrate too much on the former, there is no equality, which I believe entails no social stability and no true progress. At the same time, I am not appealing to a 'transcommunism' or 'transcollectivism' of technologically-strengthened socialist utopia. Technology and science are too important to subjugate to political ideologies without thinking carefully about it first - we must examine the values inherent in these movements. Will life-extending transhumanism be for the rich, causing a massive divide between eternal Westerners and the ever-changing mortals who service them anonymously? Who should and who will benefit? Are humans even happy enough to be capable of living for longer?
These issues are explored by Madeleine Bunting in another Guardian article. She explains that transhumanists believe that "humanity is on the point of being liberated from its biology. In their advocacy of our 'technological rights', they believe that human beings are on the brink of a huge leap in development... We will be, as their slogan goes, 'better than well'." Evidently, there are implications to this. Madeleine explains them better than I will:
"This is the prospect that horrifies the so-called "bio-conservatives" such as Francis Fukuyama, who argues that transhumanism is the most dangerous ideology of our time. There are plenty who share his concerns, pointing out that the implications for human rights, indeed for our understanding of what it is to be human, are huge. What place will equality have in this brave new world? What place will privacy have when brain imaging can read our thoughts and transcranial magnetic stimulation can manipulate our thoughts? What powers over our brains will the state demand in the war against terror? [...]
"We're not talking about radical new steps, only an acceleration of existing trends. For example, if you can have Viagra for an enhanced sexual life, why not a Viagra for the mind? Is there a meaningful difference? If we show such enthusiasm for "improving" our noses and breasts with cosmetic surgery, why not also improve our brains? As computers continue to increase in power and shrink in size, why shouldn't we come to use them as prostheses, a kind of artificial limb for the brain? If we have successfully lengthened life expectancy with good sanitation and diet, why can't we lengthen it with new drugs? Ritalin is already being traded in the classroom by US students to help improve their concentration."
This is the world of human enhancement, and it evidently brings up its own problems. The other side of transhumanism is quite different.
Increasing the power of 'artificial life'
Some scientists believe that the seemingly inexorable progress being made in the field of computing will lead us to a future where computers, having developed artificial intelligence and artificial consciousness, could take over.
Andrew Smith's article is an exploration of this belief from that futuristic-sounding year, 2000.
In this article, Justin Rattner, "head of Intel's Microprocessor Research Laboratories", proclaims continued support for Moore's Law ("the projection which has computer processing speeds doubling every 18 months and which he expects to hold good for the next 10 years at least") and considers it possible for computers to behave intelligently. I wonder whether computers will ever behave intelligently or only appear to do so, and whether there is a difference between the two things. How intelligent is 'intelligent'? Will a computer ever be able to play chess better than us, and tie shoelaces, and raise a child, and go shopping for canned items? Will a computer ever be able to act in a way in which it wasn't programmed? If a computer can be programmed to shop for groceries, all well and good, but I am not sure to what extent that might be defined as intelligence.
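As an aside, it is worth spelling out the arithmetic of that projection. Doubling every 18 months over the 10 years Rattner mentions means 120/18, or about 6.7, doublings:

$$2^{120/18} = 2^{20/3} \approx 102$$

Roughly a hundredfold increase in a decade - impressive, certainly, but note that it is a claim about speed, not about intelligence.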
More worryingly, John Leslie, a philosophy professor, explains that "'two possible scenarios present themselves here... The first is that the machines take over against our wishes. That seems to me less likely than that they take over with our tacit or explicit blessing. My own view is that, if it were all true, and they were conscious, then fine - but if, as is likely, they weren't conscious in the full sense, then that would be a disaster.'"
The questions raised by this area of transhumanism are ones of the limitations of computers, and the limitations of what powers computerised 'life' should have. I am not so convinced that computers are capable of learning or showing human-like capacities. Computers are capable of processing, and I do not believe that all human capabilities can be broken down and explained merely as processing. There is something infinitely adaptable about humanity that I am not sure a computer intelligence could ever display. If such an intelligence has not been programmed to deal with a situation, it cannot deal with it.
To put it another way: a computer, and presumably therefore a computerised intelligence, can only do what it is told. Newness and conflicts will confound it. Can a computer ever decide to do otherwise than what it has been told? I admit that one person who believes they are in control can attempt to make a computerised intelligence perform an action that another, with more administrative power or expertise, has disallowed. But in the end, someone will be in control; the consciousness will be entirely bounded by the input of another. Perhaps what I admire about humanity is that it is so hard to control. I would be surprised if an artificial consciousness could represent this without the lack of control being contrived. You could program a robot to act erratically, sometimes with and sometimes against orders, or to have a rule-set that it would not transgress, and therefore be seemingly willful. Yet, is seeming willful the same as being willful?
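To make the point concrete, here is a minimal sketch of such a 'seemingly willful' machine. Everything in it - the class name, the 20% caprice rate, the forbidden order - is invented for illustration:

```python
import random

class SeeminglyWillfulRobot:
    """A toy agent that sometimes refuses orders - but only in ways
    its programmer has allowed. All names and numbers here are
    invented for illustration."""

    def __init__(self, caprice=0.2, forbidden=("harm a human",)):
        self.caprice = caprice           # chance of random disobedience
        self.forbidden = set(forbidden)  # the rule-set it will not transgress

    def respond(self, order):
        if order in self.forbidden:
            return f"refuses '{order}' (hard rule, written by the programmer)"
        if random.random() < self.caprice:
            return f"refuses '{order}' (caprice - also written by the programmer)"
        return f"obeys '{order}'"

robot = SeeminglyWillfulRobot()
for order in ["change the oil", "harm a human", "tie shoelaces"]:
    print(robot.respond(order))
```

Every refusal here traces back to a line someone wrote; even the caprice is a pseudo-random draw, fully determined once the seed is fixed. The robot seems willful, but its will belongs entirely to the programmer.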
So, we have examined two of the main facets of transhumanism - a concern with technologising human life, and a concern with humanising technological life. Life itself is to be reimagined, reengineered, and moved beyond, as suggested by the name of the movement. Now, to understand this movement better, we shall look into its stated origins, what transhumanists say of it, and find something more to question.
What transhumanists say
The transhumanist FAQ introduces itself well: "Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase." There is still more for us to do; we have not yet 'become'. We are going to move beyond ourselves. This is quite a complex thing to talk about, so I will keep referring to transhumanist literature until a tentative conclusion might be possible.
The transhumanist declaration is a good place to begin questioning this idea. Item number 4:
"(4) Transhumanists advocate the moral right for those who so wish to use technology to extend their mental and physical (including reproductive) capacities and to improve their control over their own lives. We seek personal growth beyond our current biological limitations."
This seems fair enough, although I will question it later. Number 7 is:
"(7) Transhumanism advocates the well- being of all sentience (whether in artificial intellects, humans, posthumans, or non- human animals) and encompasses many principles of modern humanism. Transhumanism does not support any particular party, politician or political platform."
Evidently, being free of politics would be a good thing - attempting to be value-neutral, objective, and level-headed about the coming wave of technological possibility. But I will also question, later, whether this is possible.
'Posthumanity' could be realised in a number of ways, according to the transhumanist FAQ. Life extension and super-intelligent computers, as already mentioned. Nanotechnology may give us ultimate control over matter. There is human superintelligence, virtual reality, and cryonics to look forward to. And wouldn't it be excellent to upload our consciousness into a computer, which would run our processes so fast that subjective time would stretch and stretch, making us ever more capable?
All of this future-stuff has given rise to the idea of the singularity, a term probably coined by mathematician and novelist Vernor Vinge in the article What is the singularity?: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." This, I believe, is the driving force behind all transhumanism. There will be a time when progress is so fast that escaping it will be impossible; it will draw us in like a singularity, a black hole, so inexorably and so quickly that in an instant everything we thought we knew will be gone, and humanity will be changed forever. We cannot easily manage this change, it will simply happen, and we won't know what we will be capable of at this time until it happens. One explanation of how this will work comes from the Singularity Institute, who explain that as soon as a super-smart AI is created, it will continue to create even better AIs, and progress will be unstoppable, and humanity will be redefined. The 'technorapture' will have come, and we will be in a heaven of advanced computer systems.
This belief is an interesting one to analyse, and I will certainly attempt to do so later.
What extropians say
Transhumanist thought is inextricably bound up with the less restrained, more exuberant extropianism, which is perhaps more able to wear its heart on its sleeve.
I would direct you to the website, but it is practically unreadable in its meaninglessness:
"Extropy — The extent of a living or organizational system’s intelligence, functional order, vitality, and capacity and drive for improvement... For the sake of brevity, I will often write something like “extropy seeks…” or “extropy questions…” You can take this to mean “in so far as we act in accordance with these principles, we seek/question/study…” “Extropy” is not meant as a real entity or force, but only as a metaphor representing all that contributes to our flourishing."
Reading about extropians, it is hard to discover exactly how they differentiate themselves from what might be called 'mainstream transhumanists'. In the end, Toby Howard's article serves as a nice summary of the extropians - transhumanists who celebrate the free market by, for example, dressing up as a dominatrix called 'the State' leading around a man on a leash called 'the taxpayer'. They are the transhumanists who probably read Ayn Rand and would call themselves libertarians.
Max More, head of the movement, describes himself as "Director of Content Solutions at ManyWorlds, Inc., the strategy and intellectual capital design firm. Dr. More is an internationally acclaimed strategic futurist, regularly speaking at conferences in the United States and throughout Europe". He is not concerned with oil peaking, saying that we are 'running into oil' rather than out of it. Yes, we are getting better at extracting oil - drilling deeper for cheaper, in essence - but how does that mean we are not running out of reserves? Surely it is a limited resource? How deep are we prepared to drill? I question whether this is 'transhuman' thinking at all.
Most seriously, he states that the end of history is here, that "markets have won. Now let the winner get on with the job".
Oh dear!
Transhumanism and the end of history
The end of history is an idea that is still being disputed. Stuart Sim's book, "Derrida and the End of History", is a nice introduction to one area of the debate. To say 'this is the end of history' is to state 'well, this is it, we've won'. When history has ended, the human struggle is over, and all that's left to do is wipe up the mess and be successful.
It is evidently a very self-congratulatory idea. Fukuyama has written famously about it, simply asserting that the Western ideology of free-market capitalism has won. It is becoming evident that this assertion is not exactly upheld by the facts. It is certainly the dominant system, but will it last forever? I find it limiting, perhaps even purposefully self-delusory, to be so sure that it will.
On p63, Sim concludes simply that "The 'end of history' is not the good news that Fukuyama believes it to be; not if we have any desire at all to contest the balance of economic and political power that currently prevails in our world".
There is much of interest in the fact that the transhumanists and extropians are now proclaiming the end of history, while Fukuyama is a 'bio-conservative' arguing that they are dangerous. What does it mean to say 'this is now the end of history', or 'the end of history is coming'? Is it really a statement free of values, ideology, and politics - or is it a statement precisely of values, ideology, and politics?
Why I do not trust transhumanism
Here are three general points I will be making:
- The technology may not be as transforming as claimed
- A posthuman future may be terrible
- Transhumanism hides a set of questionable values, and may not solve various problems
Can we so easily assume that the technology promised will come? Transhumanism has a lot of faith, almost a religious faith, in technology. Its core tenets are that technological progress is almost unlimited, and that this technology is absolutely compatible with a better human future, if it is understood and used correctly.
The idea of the 'singularity', a technorapture, is particularly religious. At some point, a time will come when there will be no choice in the matter. We are waiting for a time when the believers will be proved right.
Perhaps technology will improve so greatly that the world will transform, an ugliness will dissolve. More likely to me, however, is that the world will continue to stay the same in many ways, and technology will only be transforming in a smaller way. My cynicism and pessimism, I believe, are supported by the study of human progress. The internet, to take one example, is an improved form of communication, and part of a human development that spans back at least to the first vocal utterances. We talked, we wrote, we developed postal systems, the telegraph, and then could transfer information via radio, phone, television, and fax. Now we can transfer information by the internet. Is the internet much more than a new and convenient way of sharing information based on computers? Will future technology actually make unimaginable alterations to the world, or just keep adding on new and faster ways of doing things?
When, seriously, was the last time the world changed so much that something entirely new occurred?
We should not project a utopia onto an unknown future just for the sake of optimism. Yes, it is certainly possible that my cynicism about technology producing something out-of-this-world is entirely misplaced. We must also consider, however, the possibility of technology being destructive. Sim reports that Lyotard imagined an unpleasant transhuman future:
"Lyotard's take on the end of history is worth commenting on, given that it is a vision, and a singularly bleak one at that, of both the end of history and the end of the world. The Inhuman pictures a world where the forces of technoscience (that is, advanced capitalism) are concerned above all to prolong life past the end of our universe. It will not, however, be life as we currently know it; rather, what is being sought is the ability to make thought possible without the presence of a body... Lyotard proceeds to sketch out a nightmare vision in which computers take over from the human, given that they are less vulnerable and more efficient than human beings - and also, even more crucially from the point of view of techno-scientists, more susceptible to control..." (p25-26)
Lyotard does not want to be a computer, asking "Is a computer in any way here and now? Can anything happen with it? Can anything happen to it?"
For my part, there are a number of nightmare scenarios to consider. What if super-powered humans just succeed in wiping each other out, through war or terrible inequality, posthuman against slavehuman? What if the rich buy longer and longer lives, and concentrate more and more money for themselves, until we have an entirely stagnant economy? What if uploading our minds entails the destruction of something human?
Finally, a superintelligent computer, being superintelligent, might spend a nanosecond going through the internet - all the dull weblogs, all the pictures of cats, all the excited transhumanist expectation - a further nanosecond processing the images of the scientists gazing surprised through the cameras, and turn itself off. What will we do if we find out that superintelligence can't save us, because there really are no final answers to the question of humanity?
To bring up a previous argument - could an uploaded human consciousness experience free will while in a computer? Are not a computer's processes determined, and demonstrably so? One might be able to programme a computer 'consciousness' to act according to moods, capriciously, and sometimes unpleasantly. But if a consciousness has a will that is controlled and determined by a computer programme, is that the same as human will? I fear that I would rather let my brain die than become subject to a programme and its programmers.
Transhumanism is based in humanism. Humanism is based on a questionable set of values. Transhumanism is also based on a questionable set of values, and it must be questioned.
Mainstream transhumanism believes in the individual, it believes in technology, it believes in Western government. It is an uneasy mix of liberals and libertarians who have their own ends. It is rife with problems of human difference and inequality. While on the one hand transhumanism has an admirable set of values, or at least states admirable values, there is something more to it that it does not appear to admit, and perhaps does not realise.
It is questionable whether technology can answer human questions. Humanity itself has created all sorts of problems, and technology alone cannot alleviate them. We need to do something: to look at ourselves and each other, to interrogate history - to find out how we got here - and our possible futures, and to work out exactly what is right. A computer cannot do this for us; a computer has no ability to process any information unless it is told how to process it. A computer has no ability to adapt its processing unless it is programmed to adapt.
Transhumanism asks us to concentrate on improving humanity by waiting for technology to do it for us. I do not believe that it is a certainty that it can. What we must be doing is dealing with the problems and questions of human existence now, without hoping for a saviour or a technorapture. Problems to do with human existence are far more serious than transhumanists seem to realise - will computers be telling us how to live? Computers and technology do not force us to change how we live; they merely offer a way to do so. It is down to humans to act.
What I am trying to say is that human history has much more power than the transhumanists admit. There is much more to us, collectively, than a bunch of individuals waiting to live forever. History cannot be hidden or destroyed by claiming it is about to end, that we are going to transcend it. We must still keep grappling with issues of human existence, issues of even more importance than technological progress.
There are innumerable such issues: How should we be living? What does it mean to be alive? How should an individual treat another individual? To what extent does the individual exist? Should we have equality, and what should it be? What is freedom, and who should have it? What are the limits a human can impose on another human? The transhumanist declaration expounds the moral right for individuals to take control over their own lives - but how will this affect the lives of others?
Waiting for technology to provide us with superintelligence is not important enough for us to take our eyes off these issues now. Imagining that these questions will be adequately answered by technology may lead us to leave them fatally unanswered. Technology is only one part of progress, and although the transhumanist movement admits this is the case, I still think that it overemphasises one route to possible human betterment.
Proclaiming the end of history is one way of saying that "these values have won", and in itself only inspires more argument and more history. It seems to be hard to actually end history without wiping out the human race. To say that history will soon end, and that technology will kill it for us, is also a statement that hides an argument within it. It is an argument to concentrate on technology, while ignoring other things. And my feeling is that these other things, these human questions, are far more important than technology. They will not be conclusively answered by technology, and nor will they be answered by being ignored. We must find other ways of dealing with them, without relying on an uncertain future full of unimaginable possibilities to come and save us.
Thankfully, human history is affecting some transhumanists. Russell Blackford mentions one of the many problems that must be considered: "transhumanists should go beyond arguing that enhancement technologies should be widely available. I now think that we should support political reforms to society itself, to make it more an association of equals. I am not planning to give away my own modest wealth, and I am only prepared to give two cheers for egalitarian political theory, but we have to find ways to narrow the gap between the haves and the have-nots."
A computer cannot answer this question. A posthuman is not around to answer this question. It is a human question and we must answer it, now, or suffer the consequences. No matter how far technology advances, we must still advance also. I refuse to have faith in a vastly improved future when there is so much to change right now. If we don't tackle it now, who will be making the future for us?
Transhumanists seem to believe that the future will come because the future itself will create it. Computers, scientists, and technology will advance so much that the future is inevitable. The future is not created in the future, however. It is created now. And we must do something, now.
To paraphrase Sim's conclusion of Derrida's argument against Fukuyama, transhumanism's declaration that society is going to be changed by technology is not the good news that it is believed to be; not if we have any desire at all to contest the balance of economic and political power that is expected to prevail in this future world. What will the ethics of the future be, and do we want our immortal children to live there?
A transhumanist will, of course, understand these objections and want to do something about them. I think they have the order of how to do it wrong, though. You do not imagine a future, wait for it to come, and while it's coming discuss what it should be. You look at now, explain what you would like it to become, and act on it so that the future you want becomes closer to reality. We are not here to ask, "what is technology, and how can it help us?". We are here to ask a much wider and more difficult question: "what is it to be human, and how can we improve humanity?". Transhumanists may think they are asking the latter question, but their methods, ideals, and imaginations are far too restricted. Transhumanism will be only one part of successful human progress.
Conclusion: Two serious questions
Can we take transhumanism seriously? It involves a number of quite wealthy, often stupidly named, and often American white men - a large number with philosophy educations - heading the movement, with an undetermined number of 'footsoldiers'. I am sure they all take it quite seriously. I am also sure that many people, looking in, will see a worrying mix of science fiction and computer nerdism. This is not helped by an article on the transhumanism blog about the H+ crowd becoming more ubersexual and less nerdy:
"Are you suffering from shyness, social anxiety or depression? Take Prozac... Stuck in a fashion time warp? Buy new clothes at a store known for being trendy... Fat? Stop eating junk food. Start eating fruits, vegetables, whole grains, and go to gym... Lonely? The cyberdelic trip is over... Get away from that damn computer before you become a hikikomori, spend some quality time with your family and friends, go out, meet new people, get laid.
"LIVE LIFE!
"Too poor to do any of this? Get a real job and keep it!"
It is nice to see that transhumanists are not above giving overly simple answers to personal questions. I wonder what you are supposed to do if something is stopping you from getting a real job and keeping it.
Do I take transhumanism seriously? I take being alive seriously, and I take now seriously. Transhumanism is part of life, and it is part of now. But it pales in comparison to the many problems suffered the world over, and I do not think it offers a realistic, sensible, or rational solution to those problems. It is too simple a solution; it requires too much faith in technological progress; and it seems to be a method of ignoring the failures of recent human history while emphasising its scientific successes.
What transhumanism offers is a technological dream of the future, which for the most part we must wait for. What I desire is a human dream of the future, which for the most part we must work for. This is the consequence of finding human questions more demanding and necessary than technological questions.