I saw the Matrix sequel recently and found it to be about what I expected: a combination of mediocre philosophy thrown in at random and kick-ass digitized kung-fu (aka wire-fu) scenes. While the original was more interesting in terms of "deep thoughts," both movies raise a great number of interesting philosophical questions that can serve as a good introduction to philosophy for those fazed by the empty void of postmodernism. To help with the process, the Matrix website has a handy philosophy section featuring over a dozen essays from all sorts of perspectives. Some of them are quite interesting and thought-provoking, while others are hopelessly muddled in their own subjectivism. (Ex: "I think that even if I am in a matrix, my world is perfectly real.") I suggest reading the introductions to skip ahead to the most interesting essays.
Anyway, several interesting points raised in the essays echo things I’ve been arguing for years. One is that morality is as applicable to entities living in the matrix as it is to the flesh-and-blood variety. Because morality is based on the practical necessities of a rational entity’s life, it applies equally to all rational entities, including the vat-enclosed, artificial, and virtual kinds. Check out the essay "Artificial Ethics" on the site for more.
Another interesting issue is brought up by Kevin Warwick in the essay "The Matrix - Our Future?", in which he ponders the plausibility of humanity ending up in a real-life Matrix. (Dr. Warwick is actually the first ever cyborg, implanted not once, but twice with silicon chips. The second was a neural implant that allowed him to remotely interface with a robot arm over the net, record and play back sensory perceptions, and even communicate emotions to a similar chip implanted in his wife. He is actively working on the technology to make telepathy practical, and at this rate, it may well happen in his or my lifetime.) Anyway, I have long shared Dr. Warwick’s hypothesis, only I take it one step further: I believe that in the long run, the biological human race is doomed. The status quo is inherently unstable, and there are only three possible outcomes: (a) humanity is destroyed by internal or external factors, (b) humanity evolves into non-biological entities, or (c) artificially created (but not necessarily intelligent) entities wipe out humanity. This is a philosophical conclusion rather than simply a technological one, because it is based on the basic relationship between humanity and technology rather than on any particular trend or development. It requires a lengthy explanation, so if you’re up to it, go on to read my theory.
Posted by David at May 22, 2003 05:36 PM