Saturday, March 25, 2006

"I don't really. I just play it."

"I play a game called city of heroes. Untill recently I haven't been a 'gamer'. I bought coh at the end of December. My playing has gotten out of control. My behavior is unlike me. My friends have stopped calling. I haven't seen them in months. I live alone so my new behavior has progressed without anyone protesting. I don't have a roommate or wife to drag me away. I sleep, work, and play. There isn't time for anything else. I feel horrible. I am at work now and I feel like I just want to crawl into a cave. When I play I feel OK. When I don't play I am a mess. I feel silly talking about this. I have always been a social person untill recently. I want to find a way to modderate my playing. I keep saying that I will but then I put it off. Tomorrow I will modderate, today I am going to play. I just want one more level. etc. I don't know what to do."
-- a post on EQ Daily Grind


  I could not begin to assess all the many films, TV shows, and books that involve people withdrawing from a real to a virtual existence. Evidently there is a cultural feeling, running through these works, that some people would find a reason to escape reality into falseness.
  Yet, now it is happening in a way that is hard to ignore. People are dropping out of life to play games. Only a few days ago I talked to an old IRC friend, and he said:
[Me] I am starting to write an essay about WoW [World of Warcraft, an online game about orcs].
[Me] I am concentrating on players reporting that they actually feel unpleasant playing it, i.e. that they are addicted to it, and that it is stopping them from doing other things.
[Demiurge] oh? how it destroys peoples social lives? certainly did mine
[Demiurge] Yeah I get that
[Me] The psychological implications of withdrawal from 'real' life are interesting to consider.
[Me] What happens when you play WoW?
[Demiurge] I dunno, I just sorta do it. Dont really enjoy it much anymore, play more out of a sense of responsibility since I'm the leader of my guild
[...]
[Me] How much do you think about RPGs, then?
[Demiurge] well there's oblivion I'm getting tomorrow
[Demiurge] I intend to roleplay a thief/assassin in that game
[Me] Does gaming take over the rest of your life, do you think?
[Demiurge] at the moment? Yes, I havent left the house for 2 months
[Me] Wow. How does that make you feel?
[Demiurge] bad obviously, I want to get a job but I cant summon the willpower to find one
[Me] Do you feel you have removed yourself from 'real life' to play WoW?
[Demiurge] yep
[Me] What reasons can you think of for having done this?
[Demiurge] well I dont dwell on that, I suppose I just "have"
[...]
[Me] Is WoW too good, i.e. much better than a real life ever could be, or does life suck for you?
[Me] In general, very wide terms.
[Demiurge] no not as such, but you're getting very philosophic now, I dont think in those patterns about it
[Me] How do you think about it, then?
[Demiurge] I dont really
[Demiurge] I just play it


  At this point he logged off and we didn't talk again; I think my questions were annoying him. Which is understandable.
  Obviously, this encounter shocked me to the core. I don't know this 20-year-old man; I only used to talk to him online. At one point in my life I would have happily given up my existence too, such is the pain of unbearable unhappiness. But, thankfully, I found out how to think differently about life. Rather than despair, now I assert that I deserve a place in the world, that I will work my way to some sort of answer, that I will attempt to make a difference. I no longer worry, "well, I'll die, making the whole thing futile". If my existence makes some change, somewhere, death does not get rid of that change. Life is a struggle, yes, and not at all constantly pleasant, but you do your best. That is the point.
  If you do not believe that making a change is possible, or worthwhile, or too much effort, or that it is beyond your capabilities, then you do feel awful. You feel cheated by existence, condemned to some fleshy prison full of self-recrimination. The internalised voice of authority - the super-ego - haunts you, and you want to rebel. I can certainly understand this horrible mix of loathing and failure, causing more loathing, which causes more failure. It is vicious indeed.

  So, people want to escape being alive. And suicide is an option, but it takes terrible circumstances, terrible problems, or a terrible determination to do it. It is too final; it cuts off any option of improvement, of being wrong. I certainly couldn't do it myself, and, yes, I did think about it an awful lot.
  A less painful way of killing yourself is to pour your life into a computer game and not come back out - if you are ever dragged out, your life is not there anymore! It is a little suicide, a suicide of the personality, of the social existence, an erasing of yourself from the lives and memories of the people you used to know.

  There is an excellent site about EverQuest, which hosts various confessions and statements of problematic online gaming, including the one above. I have emailed the owner of this site, and I wish to post my thoughts here also. In the future I plan to write an article about these problems, commenting on various websites that deal with them, on accounts provided by sufferers and those who know them, and attempting to understand the psychology and philosophy behind this withdrawal from the world. Unless it is too big a project to actually do, in which case, this will be it! The issue of people withdrawing from life to play these games interests me because it touches on so many basic questions about existence. What is the real world, and what do we want from it? What should we do when it doesn't meet our wants and needs? What is being alive, and what are we supposed to do about it? Under which circumstances can our choices be criticised - when we let go of our responsibilities, or hurt others, or ourselves? Or does personal freedom allow us to destroy ourselves for any reason, in any way, without having to worry about others? These are basic questions of existence, of ethics, of who we are and how best to act that we must confront, and this issue is one of the millions of ways of doing so.

My email to the proprietor of EQ Daily Grind

>Overall, I think that there are many different reasons why the online world
>can become more attractive than the real world... and that every obsessive
>gamer has his / her individualized set of reasons - part of the reason why
>this phenomenon is hard to analyse, and why the obsessiveness / addiction is
>hard to address with just one "generic" solution.

  At the moment I reckon that there may well be general reasons why people play problematically.
  I call it 'problematic' because I do not want to use the word obsession; it comes with too much theoretical baggage. And I do not call it addiction, as I do not think that using an analogy or metaphor of drugs helps anyone understand the problem. It may be similar to obsession, compulsion, drug or alcohol addiction, or gambling addiction / compulsion, and in fact I see many similarities, but in ways that would not be seen if it were labelled as an obsession or addiction.

  So, problematic gaming. Having grown up (being 23) in a computerised world, I am entirely used to computers, and am in fact quite sick of them. They're not as good as they are supposed to be, not as world-changing - but then again I did not live in a pre-computer world! I have always played games, and always been aware that they can overshadow other parts of life, albeit for me only ever temporarily. They can cause happiness, anger, frustration etc. in a child, and I remember these feelings. I have always known people - my parents, my friends, etc. - who were thinking about, talking about, and primarily involved in the computer game they wanted to be playing. But, in the past, I have never seen exactly the same problematic quality. Games have taken precedence, both in my life and in others that I have seen, but they have never replaced life. Always people would get bored and do something else, and feel slightly silly. For example, when I was in college Final Fantasy VII was an extremely popular game, along with collectible cards and so on. People would spend an inordinate amount of time with these, but they never replaced life; they were just a main focus. A focus that could be questioned and challenged (I hated both and would frequently point out how dull they were).
Nowadays, online games are, as it is said, more 'immersive'. Due to the choices involved, the actual element of working ('grinding'), the social network, relationships, even economies, they can seriously begin to replace life.
  We have always played games, and we have always had people too interested in games. Chessmasters like chess too much, and it can obviously damage them. People can get too wrapped up in their weekly low-stakes poker. And so on and so on. What we have now, though, is people not just making gaming the focus of their life but their entire life. They will actually not want to move from being in the game.

  I understand that this kind of problematic gaming is actually a giving up of one life and an attempt to take up another, although that is impossible, as the new life is not real. People actually want to let their past life die so they cannot go back. There is something self-destructive about it.
This is not entirely psychological; there is no single switch that makes people want to commit this kind of personality suicide. There are general reasons, different in different cultures I would say, that contribute to a need to entirely surrender a real life and form a fake one.

  Compared to real life, MMORPGs allow:
  Much more personal control - to talk to or ignore people, to change who 'you' are by creating new characters, to act heroically or barbarically, to fight, kill, slaughter and insult;
  A lot less responsibility, tricky choice, or 'unpleasant freedom'.

  I believe that playing these games gives you a certain control over a vastly more limited set of options. The world is simple; rules constrain, to a much greater extent, how you interact with the environment, with others, and with your profession and skills. There is a lot less to think about but, in a way, more you can just do.
  In real life you cannot wander around clicking on stuff and shouting 'LOL'. Gratification is not so instant, boredom sets in, you must think and be active and fit in to the world. In an MMORPG, the world is so simple that you just exist almost thoughtlessly. Having attempted to talk to people as they play, I would not be surprised to find that, neurologically, the demands on their brain are much smaller. But I am not so interested in the science - philosophically, you have freedom to do, and much less to worry about in order to do it. Less thought, more action. It is absorbing simply because its repetitive nature is easy.

  People are withdrawing from a world they do not understand and control and slipping into something easy, constant, warm, and comforting - the mental slippers of an online game. We have always done this, and always will; it is the extent that is the problem. There is so much to distract you in these games - new areas, new levels, new people and relationships - that they seem to be a viable replacement for real life.

  The problem is life itself - people are not equipped to deal with it, and the world is too hostile to them. This withdrawal from the world reminds me of the famous 'rat park' studies. Drug-addicted rats will remain addicts and die in cages. Put them in a lovely environment, a rat park, and they will overwhelmingly choose water over sweetened morphine. Their natural drives and instincts, uncaged, supply all the happiness they need, and they do not want drugs!
Addiction is complex: it is biological, psychological, and social/environmental. It is your body reacting to an agent that changes your psychological state, and the meaning of that change depends on where you are in the real and human world. I would imagine that happy, fulfilled people play MMORPGs in an entirely different way - enjoying them, forming friendships, playing them as games, but existing fully in the real world as well, stopping their virtual fun when necessary. The phenomenon of absolute withdrawal - 'catassing', 'hikikomori' - must be, I believe, to do with some perceived lack.

  So, what I am saying is that these people, whether consciously or without knowing, do not want to live here and now because they do not like it. Many are intelligent, many could evidently 'do something with their lives', but they are tired and sick and do not see the point. I have talked to many problematic gamers online, and they report that they do not want to go to school, see family, or be bothered. They freely admit they play to 'fill in the time'. I have learned to be a happy and fulfilled person for much of the time, and my desire is to use time properly to do something important in the world. These people have no such desire; often, if one puts it to them, the idea that life can be so pleasant seems implausible.

  Psychologically and socially we should deal with this problem - people want to drop out. They exist in an environment which, not surprisingly, makes them want to escape. And there are worlds out there, accessible through mice and keyboards, that allow for interaction on your terms, but in a simple world. You know who you are and where you are going. There is no uncertainty.
  They capitulate in the face of troublesome freedom, which entails responsibilities, work, and some sort of social or personal striving for success. Whereas virtual responsibilities are easier to meet, virtual freedom is a lot less constrained, virtual work takes less time, and virtual stress can be measured for you in experience.
  I see why they do this; but not for a moment do I envy them. In the most important way, they no longer exist.

Saturday, March 18, 2006

The danger of being satisfied

  Rob Brezsny's star is in the ascendant - books being published, interviews on radio shows, CDs of his band, and his own website with a multicolour hand print.
He also writes horoscopes, which may give you pause for thought.

  Rob's answer to human problems is 'pronoia', catchily the opposite of paranoia. Apparently, we should be seeing life as a conspiracy to make us feel good. If I may quote:
 
"Act as if the universe is a prodigious miracle
created for your amusement and illumination. Assume that
secret helpers are working behind the scenes to assist you
in turning into the gorgeous masterpiece you were born to
be. Join the conspiracy to shower all of creation with blessings."


  Isn't that just lovely?! Let me also open the door for... the 15 Minute Miracle:
"The 15-Minute Miracle™ is a fun-to-do, fill-in-blanks journaling process that increases your sense of well-being the moment you engage in it. As your sense of well-being rises, your vibrational frequency increases, which causes you to attract even MORE wonderful things that cause you to feel grateful, encouraged, and in the flow of Life. As you continue to do this process, you are likely to become an Irresistible Magnet for Love, Money, Miracles, and More ...sometimes at the speed of thought!"


  A template for this miracle is here. It includes writing down what you appreciate, your intentions, asking for assistance, and intoning that you are an "irresistible Magnet for Miracles". Lovely! Lovely! Lovely!


  Yes, it is admirable to be optimistic and happy. Optimism reduces stress and increases life expectancy, according to some studies. And, certainly, your effect on others is better when you are happy, and I fully endorse it - but this philosophy of self-miracles and pronoia is not happiness.
  There is much psychology behind these ideas, and I will try to outline a little bit of it. First, there is a certain confirmation bias that occurs when you look for something. When you find confirming information, it has more weight in your memory, blotting out non-confirming information. So, expecting happiness will increase your recognition of happiness. This will, of course, make you happier, which I agree is all well and good.
  Secondly, these philosophies are steeped in the psychology of individualism. You look for your own miracles, and when you get them, you win! Those sad people who are always miserable should shut up and think happy!
  This leads us on to the gravest psychological error of these philosophies - ignorance. Looking for and finding your own happiness in the little things effectively reduces your place in the world to meaningless little details. Look around. Not all people have their own website, CDs, and books to hawk. Not everyone has many little things to be thankful for.

  What would Rob make of the starving, the dispossessed, the tyrannised, the oppressed? Would he look at them and think, "I'll inform them that they should be happy, at least they have feet!"? Would he look past, thinking, "Ha! I can breathe oxygen, how lucky!"? Or would he look at them and think, "I'm glad I'm not poor!"?

  Being a pronoiac helps yourself, but not others. It leaves you ignorant of their pain, and their inability to glory in being white, middle-class, American, and well off. All in all, it is another facet of humankind’s vast ability to glory in their own riches while others bear the brunt of the resultant poverty, without consideration of the accident of birth that made this situation occur.

  I believe, Rob, that you would find more happiness in devoting your life to the big things that affect us all, not the little things that affect you. Otherwise your existence is nothing more than a small exercise in smugness.

Sunday, March 12, 2006

Transhumanism and its place in endist thought

  Introduction

  It is galling that I have planned to write about transhumanism since the turn of this year, only for the Guardian newspaper to write about it a number of times. However, there is still a point to continuing, as I wish to discuss as many facets of transhumanism as possible and sort out exactly why I find it displeasing.
  The reason I wish to write about this is a piece in the Metro, a free newspaper circulated on British public transport, on January the 11th. "Rise of the machine" was the headline, and the byline proclaimed that 'artificial intelligence is stepping out of the laboratory and into your living room'. The first bit was principally about Robosapien, a hot item in the news around Christmas.
 
"So where is it all heading? Some scientists are looking forward to 'the singularity': the moment when we create an artificial mind more powerful than our own. This could then create a smarter AI still, starting a snowball effect resulting in massive technological advances over a short period of time."

  Apparently, Peter Thiel, former CEO of PayPal, is involved in giving large sums of money to work on singularity stuff. It's nice that he has a hobby, isn't it? The article then goes into 'what if the robots go insane?!!' territory, mentioning the film 'Terminator', ending with the amusing "So treat today's robots well - when the singularity comes, they'll remember who changed their oil".
  Evidently someone is taking transhumanism seriously. It may be beneficial to find out what it is, and examine what we can and should think of it. The reason I wish to write on this issue is that I distrust technology and science, considering them as providing answers primarily to technological and scientific issues. When it comes to human, political, social, and philosophical problems, technology and science are not the answer. They can support an answer, be a mechanism to aid the answer that has been found, but they are not the answer in themselves. At this early stage, I am concerned that the transhumanists might be looking in the wrong place for a saviour. This article is an exploration into transhumanism and its problems, and of my own distrust of its language and basis.

  For this article, I am using the wonderful and tiny book 'Derrida and the End of History' by Stuart Sim, in the Postmodern Encounters series. It is very good, and makes one look at postmodernism in a positive light, which is by all accounts a tough task when there is a bloody civil war going on over the issue between intellectuals. I am also using internet pages which will, for the most part, be linked to in a hypertextual manner.


  Attempting to explain transhumanism

  If someone were to accost you on the street and say that "the answer to all the world's problems is provided by the internet", you would walk away, tutting something about political correctness having gone mad. Currently, if one is to use google to search for "solution to the world's problems", you get an article on how Islam is the answer. You should observe that such solutions have been around before the internet, the internet only provides a new way for presenting this information and allowing it to reach a mass audience. In this respect, it is like a cannon that fires religious leaflets into the letterbox of anyone who publicly wonders "why am I alive?".
  However, there are people who pin the hopes of the future on computers. These people can be considered transhumanists. But other people are more interested in medical technology, or augmentation of the human with computers. These can also be considered as transhumanists. What we are encountering is a spectrum of belief, with the unifying theme being of faith in technology as improving the human condition. At one end, it is evidently reasonable - technology can aid us in many ways. At the other end, it is less reasonable, and more speculative, and perhaps somewhat worrying.
  It is becoming evident that there is a bewildering array of ideas floating around under the banner of transhumanism, and I do not feel that I can read, let alone explain, enough about these viewpoints. Therefore I must ask you to appreciate and accept the limitations of my account.
  Some of the jargon we are going to come across includes words such as transhumanism (which can be abbreviated as >H or H+), extropian, singularity, and endism. We are going to go through each of these things, but I must stress that they are linked. Transhumanism, which we will see has some good aspects, entails extropianism and the idea of the singularity, which I consider less beneficial.


  Increasing the range of human life

  Transhumanism is not unappealing. In Saturday's Guardian, James Harkin writes briefly but lucidly about the science of life extension:
 
"Next week a far-flung group of scientists, philosophers and future-gazers will descend on Oxford University for a conference about it, titled Tomorrow's People: The Challenges of Technologies for Life Extension and Enhancement. At around the same time, Ray Kurzweil, a longtime prophet of radical life extension, will launch in Britain his book, The Singularity is Near. Humans, he argues, are shortly approaching lift-off to immortality."

  All this activity is placed under the banner of 'transhumanism', which as James explains, is "the belief that if we humans can just hang on for the next 30 or 40 years, the science will have reached such a level of sophistication that we will be able to live for the next 1,000". I take issue with this, as not all transhumanism is concerned with the extension of human life. In general it is to do with the augmentation of human life with science, or technology, or computers, or all three. Most transhumanism is to do with technology increasing our freedom, some is more to do with technology being able to manage our freedom more effectively, imagining computers creating human society. But all transhumanism considers that we can be more than human with the help of technology.
  James considers that "transhumanists share a welcome zeal for overcoming our limitations. For them, there is little that is natural about when we get old or die, and the subtle alteration of our incubator, our scientific and technological surroundings can keep us alive longer than ever before". He, of course, balances this by casting doubt on such claims, as it is not exactly mainstream science yet. But where he hits the nail on the head is here:
 
"The idea of radical life extension is also a little antisocial. A house, to borrow De Grey's metaphor, is a place to live in as well as an investment. As with the house-buying and renovation craze, transhumanism risks turning all our energies inwards, rather than out into society, where they might be of more immediate use. Sometimes, the urge to escape the ageing process seems like an attempt to escape everyone else."


  What is important - the individual human, or humanity in general? If we concentrate too much on the former, there is no equality, which I believe entails no social stability and no true progress. At the same time, I am not appealing to a 'transcommunism' or 'transcollectivism' of technologically-strengthened socialist utopia. Technology and science are too important to subjugate to political ideologies without thinking strongly about it first - we must be examining the values inherent in these movements. Will life-extending transhumanism be for the rich, causing a massive divide between eternal Westerners and the ever-changing mortals who service them anonymously? Who should and who will benefit? Are humans even happy enough to be capable of living for longer?

  These issues are explored by Madeleine Bunting in another Guardian article. She explains that transhumanists believe that "humanity is on the point of being liberated from its biology. In their advocacy of our 'technological rights', they believe that human beings are on the brink of a huge leap in development... We will be, as their slogan goes, 'better than well'." Evidently, there are implications to this. Madeleine explains them better than I will:
 
"This is the prospect that horrifies the so-called "bio-conservatives" such as Francis Fukuyama, who argues that transhumanism is the most dangerous ideology of our time. There are plenty who share his concerns, pointing out that the implications for human rights, indeed for our understanding of what it is to be human, are huge. What place will equality have in this brave new world? What place will privacy have when brain imaging can read our thoughts and transcranial magnetic stimulation can manipulate our thoughts? What powers over our brains will the state demand in the war against terror? [...]
  "We're not talking about radical new steps, only an acceleration of existing trends. For example, if you can have Viagra for an enhanced sexual life, why not a Viagra for the mind? Is there a meaningful difference? If we show such enthusiasm for "improving" our noses and breasts with cosmetic surgery, why not also improve our brains? As computers continue to increase in power and shrink in size, why shouldn't we come to use them as prostheses, a kind of artificial limb for the brain? If we have successfully lengthened life expectancy with good sanitation and diet, why can't we lengthen it with new drugs? Ritalin is already being traded in the classroom by US students to help improve their concentration."


  This is the world of human enhancement, and it evidently brings up its own problems. The other side of transhumanism is quite different.


  Increasing the power of 'artificial life'

  Some scientists believe that the seemingly inexorable progress being made in the field of computing will lead us to a future where computers, having developed artificial intelligence and artificial consciousness, could take over. Andrew Smith's article explores this belief from that futuristic-sounding year, 2000.
  In this article, Justin Rattner, "head of Intel's Microprocessor Research Laboratories", proclaims continued support for Moore's Law ("the projection which has computer processing speeds doubling every 18 months and which he expects to hold good for the next 10 years at least") and finds it possible for computers to behave intelligently. I am wondering whether computers will ever behave intelligently or only appear to do so, or if there is a difference between the two things. How intelligent is 'intelligent'? Will a computer ever be able to play chess better than us, and tie shoelaces, and raise a child, and go shopping for canned items? Will a computer ever be able to act in a way in which it wasn't programmed? If a computer can be programmed to shop for groceries, all well and good, but I am not sure to what extent that might be defined as intelligence.
  More worryingly, John Leslie, a philosophy professor, explains that "'two possible scenarios present themselves here... The first is that the machines take over against our wishes. That seems to me less likely than that they take over with our tacit or explicit blessing. My own view is that, if it were all true, and they were conscious, then fine - but if, as is likely, they weren't conscious in the full sense, then that would be a disaster.'"
  The questions raised by this area of transhumanism are ones of the limitations of computers, and the limitations of what powers computerised 'life' should have. I am not so convinced that computers are capable of learning or showing human-like capacities. Computers are capable of processing, and I do not believe that all human capabilities can be broken down and explained merely as processing. There is something infinitely adaptable about humanity that I am not sure a computer intelligence could ever display. If such an intelligence has not been programmed to deal with a situation, it cannot deal with it.
  To put it another way: a computer, and presumably therefore a computerised intelligence, can only do what it is told. Newness and conflicts will confound it. Can a computer ever decide to do otherwise than what it has been told? I admit that one person who believes they are in control can attempt to make a computerised intelligence perform an action that another, with more administrative power or expertise, has disallowed. But in the end, someone will be in control; the consciousness will be entirely bounded by the input of another. Perhaps what I admire about humanity is that it is so hard to control. I would be surprised if an artificial consciousness could represent this without the lack of control being contrived. You could program a robot to act erratically, sometimes with and sometimes against orders, or to have a rule-set that it would not transgress, and therefore be seemingly willful. Yet, is seeming willful the same as being willful?

  So, we have examined two of the main facets of transhumanism - a concern with technologising human life, and a concern with humanising technological life. Life itself is to be reimagined, reengineered, and moved beyond, as suggested by the name of the movement. Now, to understand this movement better, we shall look into its stated origins, what transhumanists say of it, and find something more to question.


  What transhumanists say

  The transhumanist FAQ introduces itself well: "Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase." There is still more for us to do, we have not yet 'become'. We are going to move beyond ourselves. This is quite a complex thing to talk about, so I will keep referring to transhumanist literature until a tentative conclusion might be possible.
  The transhumanist declaration is a good place to begin questioning this idea. Item number 4:
 
"(4) Transhumanists advocate the moral right for those who so wish to use technology to extend their mental and physical (including reproductive) capacities and to improve their control over their own lives. We seek personal growth beyond our current biological limitations."

  This seems fair enough, although I will question it later. Number 7 is:
 
"(7) Transhumanism advocates the well-being of all sentience (whether in artificial intellects, humans, posthumans, or non-human animals) and encompasses many principles of modern humanism. Transhumanism does not support any particular party, politician or political platform."

Evidently, being free of politics would be a good thing - attempting to be value-neutral, objective, and level-headed about the coming wave of technological possibility. But I will also question whether this is possible later.

  'Posthumanity' could be realised in a number of ways, according to the transhumanist FAQ. Life extension and super-intelligent computers, as already mentioned. Nanotechnology may give us ultimate control over matter. There is human superintelligence, virtual reality, and cryonics to look forward to. And wouldn't it be excellent to upload our consciousness into a computer, which would run our processes so fast that subjective time would stretch and stretch, making us ever more capable?
  All of this future-stuff has given rise to the idea of the singularity, a term probably coined by mathematician and novelist Vernor Vinge in the article What is the singularity?: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." This, I believe, is the driving force behind all transhumanism. There will be a time when progress is so fast that escaping it will be impossible, it will draw us in like a singularity, a black hole, so inexorably and so quickly that in an instant everything we thought we knew will be gone, and humanity will be changed forever. We cannot easily manage this change, it will simply happen, and we won't know what we will be capable of at this time until it happens. One explanation of how this will work comes from the Singularity Institute, who explain that as soon as a super-smart AI is created, it will continue to create even better AIs, and progress will be unstoppable, and humanity will be redefined. The 'technorapture' will have come, and we will be in a heaven of advanced computer systems.
  This belief is an interesting one to analyse, and I will certainly attempt to do so later.


  What extropians say

  Transhumanist thought is inextricably bound up with the less restrained, more exuberant extropianism, which is perhaps more able to wear its heart on its sleeve.
  I would direct you to the website, but it is practically unreadable in its meaninglessness:
 
"Extropy — The extent of a living or organizational system’s intelligence, functional order, vitality, and capacity and drive for improvement... For the sake of brevity, I will often write something like “extropy seeks…” or “extropy questions…” You can take this to mean “in so far as we act in accordance with these principles, we seek/question/study…” “Extropy” is not meant as a real entity or force, but only as a metaphor representing all that contributes to our flourishing."

  Reading about extropians, it is hard to discover exactly how they differentiate themselves from what might be called 'mainstream transhumanists'. In the end, Toby Howard's article serves as a nice summary of the extropians - transhumanists who celebrate the free market by, for example, dressing up as a dominatrix called 'the State' leading around a man on a leash called 'the taxpayer'. They are the transhumanists who probably read Ayn Rand and would call themselves libertarians.
  Max More, head of the movement, describes himself as "Director of Content Solutions at ManyWorlds, Inc., the strategy and intellectual capital design firm. Dr. More is an internationally acclaimed strategic futurist, regularly speaking at conferences in the United States and throughout Europe". He is not concerned with oil peaking, saying that we are 'running into oil' rather than out of it. Yes, we are getting better at extracting oil - drilling deeper for cheaper, in essence - but how does that show that we are not running out of reserves? Surely it is a resource that is limited? How deep are we prepared to drill? I question whether this is 'transhuman' thinking at all.
  Most seriously, he states that the end of history is here, that "markets have won. Now let the winner get on with the job".
Oh dear!


  Transhumanism and the end of history

  The end of history is an idea that is still being disputed. Stuart Sim's book, "Derrida and the End of History" is a nice introduction to one area of the debate. All it means to say that 'this is the end of history' is to state 'well, this is it, we've won'. When history has ended, the human struggle is over, and all that's left to do is wipe up the mess and be successful.
  It is evidently a very self-congratulatory idea. Fukuyama has written famously about it, simply asserting that the Western ideology of free-market capitalism had won. It is becoming more evident that this statement is not exactly being upheld by the facts. It is certainly the dominant system, but will it last forever? I find it limiting, perhaps even purposefully self-delusory, to be so sure that it will.
  On p63, Sim concludes simply that "The 'end of history' is not the good news that Fukuyama believes it to be; not if we have any desire at all to contest the balance of economic and political power that currently prevails in our world".
  There is much of interest in the fact that the transhumanists and extropians are now proclaiming the end of history, while Fukuyama is a 'bio-conservative' arguing that they are dangerous. What does it mean to say 'this is now the end of history', or, 'the end of history is coming'? Is it really a statement free of values, ideology, and politics: or is it a statement precisely of values, ideology, and politics?


  Why I do not trust transhumanism

  Here are three general points I will be making:
  • The technology may not be as transforming as claimed

  • A posthuman future may be terrible

  • Transhumanism hides a set of questionable values, and may not solve various problems


  Can we so easily assume that the technology promised will come? Transhumanism has a lot of faith, almost a religious faith, in technology. The core beliefs of transhumanism are that technological progress is almost unlimited, and that this technology is entirely compatible with a better human future, if it is understood and used correctly.
  The idea of the 'singularity', a technorapture, is particularly religious. At some point, a time will come when there will be no choice. We are waiting for a time when the believers will be proved right.
  Perhaps technology will improve so greatly that the world will transform, an ugliness will dissolve. More likely to me, however, is that the world will continue to stay the same in many ways, and technology will only be transforming in a smaller way. My cynicism and pessimism, I believe, are supported by study of human progress. The internet, to take one example, is an improved form of communication, and part of a human development that spans back at least to the first vocal utterances. We talked, we wrote, we developed postal systems, the telegraph, and then could transfer information via radio, phone, television, and fax. Now we can transfer information by the internet. Is the internet much more than a new and convenient way of sharing information based on computers? Will future technology actually make unimaginable alterations to the world, or just keep adding on new and faster ways of doing things?
When, seriously, was the last time the world changed so much that something entirely new occurred?

  We should not project a utopia onto an unknown future just for the sake of optimism. Yes, it is certainly possible that my cynicism about technology producing something out-of-this-world is entirely misplaced. We must consider, however, technology being destructive. Sim reports that Lyotard imagined an unpleasant transhuman future:
 
"Lyotard's take on the end of history is worth commenting on, given that it is a vision, and a singularly bleak one at that, of both the end of history and the end of the world. The Inhuman pictures a world where the forces of technoscience (that is, advanced capitalism) are concerned above all to prolong life past the end of our universe. It will not, however, be life as we currently know it; rather, what is being sought is the ability to make thought possible without the presence of a body... Lyotard proceeds to sketch out a nightmare vision in which computers take over from the human, given that they are less vulnerable and more efficient than human beings - and also, even more crucially from the point of view of techno-scientists, more susceptible to control..." (p25-26)

  Lyotard does not want to be a computer, asking "Is a computer in any way here and now? Can anything happen with it? Can anything happen to it?"
  For my part, there are a number of nightmare scenarios to consider. What if super-powered humans just succeed in wiping each other out, through war or terrible inequality, posthuman against slavehuman? What if the rich buy longer and longer lives, and concentrate more and more money for themselves, until we have an entirely stagnant economy? What if uploading our minds entails the destruction of something human?
  Finally, a superintelligent computer, being superintelligent, might spend a nanosecond going through the internet - all the dull weblogs, all the pictures of cats, all the excited transhumanist expectation - a further nanosecond processing the images of the scientists gazing surprised through the cameras, and turn itself off. What will we do if we find out that superintelligence can't save us, because there really are no final answers to the question of humanity?
  To bring up a previous argument - could an uploaded human consciousness experience free will while in a computer? Are not a computer's processes determined, and obviously shown to be so? One might be able to programme a computer 'consciousness' to act in a way according to moods, capriciously, and sometimes unpleasantly. But if a consciousness has a will that is controlled and determined by a computer programme, is that the same as human will? I fear that I would rather let my brain die than become subject to a programme and its programmers.

  Transhumanism is based in humanism. Humanism is based on a questionable set of values. Transhumanism is also based on a questionable set of values, and it must be questioned.
  Mainstream transhumanism believes in the individual, it believes in technology, it believes in Western government. It is an uneasy mix of liberals and libertarians who have their own ends. It is rife with problems of human difference and inequality. While transhumanism has, or at least states, an admirable set of values, there is something more to it that it does not appear to admit, and perhaps does not realise.
  It is questionable whether technology can answer human questions. Humanity itself has created all sorts of problems, and technology alone cannot alleviate them. We need to do something, to look at ourselves and each other, to interrogate history - to find out how we got here - and our possible futures, and to work out exactly what is right. A computer cannot do this for us; a computer has no ability to process any information unless it is told how to process it. A computer has no ability to adapt its processing unless it is programmed to adapt.
  Transhumanism asks us to concentrate on improving humanity by waiting for technology to do it for us. I do not believe that it is a certainty that it can. What we must be doing is dealing with the problems and questions of human existence now, without hoping for a saviour or a technorapture. Problems to do with human existence are far more serious than transhumanists seem to realise - will computers be telling us how to live? Computers and technology do not force us to change how we live; they merely offer a way to do so. It is down to humans to act.
  What I am trying to say is that human history has much more power than the transhumanists admit. There is much more to us, collectively, than a bunch of individuals waiting to live forever. History cannot be hidden or destroyed by claiming it is about to end, that we are going to transgress it. We must still keep grappling with issues of human existence, issues of even more importance than technological progress.
  There are innumerable such issues: How should we be living? What does it mean to be alive? How should an individual treat another individual? To what extent does the individual exist? Should we have equality, and what should it be? What is freedom, and who should have it? What are the limits a human can impose on another human? The transhumanist declaration expounds the moral right for individuals to take control over their own lives - but how will this affect the lives of others?
  Waiting for technology to provide us with superintelligence is not a good enough reason to take our eyes off these issues now. Imagining that these questions will be adequately answered by technology may lead us into leaving them fatally unanswered. Technology is only one part of progress, and although the transhumanist society admits this is the case, I still think that it is overemphasising only one route to possible human betterment.
  Proclaiming the end of history is one way of saying that "these values have won", and in itself only inspires more argument and more history. It seems to be hard to actually end history without wiping out the human race. To say that history will soon end, and that technology will kill it for us, is also a statement that hides an argument within it. It is an argument to concentrate on technology, while ignoring other things. And my feeling is that these other things, these human questions, are far more important than technology. They will not be conclusively answered by technology, and nor will they be answered by being ignored. We must find other ways of dealing with them, without relying on an uncertain future full of unimaginable possibilities to come and save us.
  Thankfully, human history is affecting some transhumanists. Russell Blackford mentions one of the many problems that must be considered: "transhumanists should go beyond arguing that enhancement technologies should be widely available. I now think that we should support political reforms to society itself, to make it more an association of equals. I am not planning to give away my own modest wealth, and I am only prepared to give two cheers for egalitarian political theory, but we have to find ways to narrow the gap between the haves and the have-nots."
  A computer cannot answer this question. A posthuman is not around to answer this question. It is a human question and we must answer it, now, or suffer the consequences. No matter how far technology advances, we must still advance also. I refuse to have faith in a vastly improved future when there is so much to change right now. If we don't tackle it now, who will be making the future for us?
  Transhumanists seem to believe that the future will come because the future itself will create it. Computers, scientists, and technology will advance so much that the future is inevitable. The future is not created in the future, however. It is created now. And we must do something, now.

  To paraphrase Sim's conclusion of Derrida's argument against Fukuyama, transhumanism's declaration that society is going to be changed by technology is not the good news that it is believed to be; not if we have any desire at all to contest the balance of economic and political power that is expected to prevail in this future world. What will the ethics of the future be, and do we want our immortal children to live there?
  A transhumanist will, of course, understand these objections and want to do something about them. I think they have the order of how to do it wrong, though. You do not imagine a future, wait for it to come, and while it's coming discuss what it should be. You look at now, explain what you would like it to become, and act on it so that the future you want becomes closer to reality. We are not here to ask, "what is technology, and how can it help us?". We are here to ask a much wider and more difficult question, "what is it to be human, and how can we improve humanity?". Transhumanists may think they are asking the latter question, but their methods, ideals, and imaginations are far too restricted. Transhumanism will be only one part of successful human progress.


  Conclusion: Two serious questions

  Can we take transhumanism seriously? It involves a number of quite wealthy, often stupidly named and often American white men - a large number with philosophy educations - heading the movement, with an undetermined amount of 'footsoldiers'. I am sure they all take it quite seriously. I am also sure that many people, looking in, will see a worrying mix of science fiction and computer nerdism. This is not helped by an article on the transhumanism blog to do with the H+ crowd becoming more ubersexual and less nerdy:
 
"Are you suffering from shyness, social anxiety or depression? Take Prozac... Stuck in a fashion time warp? Buy new clothes at a store known for being trendy... Fat? Stop eating junk food. Start eating fruits, vegetables, whole grains, and go to gym... Lonely? The cyberdelic trip is over... Get away from that damn computer before you become a hikikomori, spend some quality time with your family and friends, go out, meet new people, get laid.
  "LIVE LIFE!
  "Too poor to do any of this? Get a real job and keep it!"

  It is nice to see that transhumanists are not above giving overly simple answers to personal questions. I wonder what you are supposed to do if something is stopping you from getting a real job and keeping it.

  Do I take transhumanism seriously? I take being alive seriously, and I take now seriously. It is part of life, and it is part of now. But it pales in comparison to the many problems suffered the world over, and I do not think it offers a realistic, sensible, or rational solution to those problems. It is too simple a solution; it requires too much faith in technological progress; it seems to be a method of ignoring the failures of recent human history while emphasising its scientific successes.
  What transhumanism offers is a technological dream of the future, which for the most part we must wait for. What I desire is a human dream of the future, which for the most part we must work for. This is the consequence of finding human questions more demanding and necessary than technological questions.