Jan 13, 2009 00:09
For my science fiction and philosophy class, I wrote an essay in which I argue that living in the Matrix or a similar simulation would be worse than living in reality, even though one would believe oneself to be living in reality. In writing this I came up with the following argument, which I rather like but also have some reservations about:
"It is true that they will be free of the pain of realizing that their world is unreal, and they will feel simulated pleasures as acutely as they would real ones, but life cannot be measured solely in terms of pleasure and pain. Few people would argue that pleasure and pain are responses randomly associated with certain actions or events; rather they serve to motivate people to do certain things and to deter them from others based on some standard of what is desirable. To say that this standard is simply what brings pleasure would be circular. Therefore, the causes of pleasure and pain must have intrinsic value; the good or bad quality of the feeling is a response to the good or bad quality of the cause, not the other way around. Thus a real feeling is not enough if it does not have a real cause."
Obviously, in the simplest forms of the emotions, the causes of pleasure and pain are determined by evolutionary pressures, and the ultimate value is the ability to pass on genes. In the conscious human mind, however, this alone cannot suffice, since the causes of pleasure and pain vary from person to person and can even change over time. Further, various pleasures and pains -- for example, the intellectual pleasure taken in science, math, music, or art -- either have no apparent connection to evolutionary pressures or are very far removed therefrom. Probably not coincidentally, these are also the areas most subject to individual variation and change over time.
Anyway, the real reason I began this entry was that in considering this issue originally in the metaphysical context of the essay, I was struck by how the argument quoted above might impact ethical philosophy. Although not an absolute subscriber to any one theory, I tend in many ways to be a fan of the utilitarian ethics in which the most ethical choice is the one that causes the greatest good and the least harm to the greatest number of people. (Generally this is tempered by some concept of individual rights whose breach does so much harm that it is, as a rule, not allowed, even with much good on the other end of the balance.) Frequently -- if I recall correctly, including by J.S. Mill -- the good is defined specifically as happiness. That is, the utilitarian goal is to bring the greatest happiness to the greatest number. (He is hardly the only philosopher, nor utilitarianism the only philosophy, to equate happiness with the good; Aristotle, for example, reached a conclusion along those lines as well.)
In light of the argument at the root of this discussion, however, considering good to be happiness is troubling. If, as I argued above, the good or bad quality of a feeling is a response to the good or bad quality of its cause, then happiness can hardly be both the good itself and a result of good. Determining exactly what else is good, though, is a difficult and rather arbitrary task. Certainly, there are the evolutionary pressures of survival and reproduction of the individual, group, and species, and perhaps by extension of ideas and aspects of an individual's or culture's mental existence as well. Most ethical theories, however, condemn certain actions toward this particular "good," especially those that harm others, and indeed fulfillment of evolutionary goals does not always coincide with happiness in the complex human mind. Further, intellectual goods such as truth and beauty are frequently valued by humans and can bring happiness, but it is difficult to describe their value to one who does not himself value them, except perhaps with recourse to the fact that they bring happiness. Another class of good often valued by humans is social in nature and seems to straddle the divide between evolutionary pressures and purely intellectual values, as the psychology behind social values can be explained by elements of both evolution and complex cognition.
In the end, I fear I am not going to reach a conclusion. If any reader has followed this far, I am sorry to disappoint you, but perhaps you may see it as an opportunity to think for yourself, a rare and precious thing these days. I am left more or less with a rather circular dilemma I have puzzled over before, that of assigning ultimate value. Happiness and value seem to coincide, but to say that happiness is value is unsatisfying and illogical. At the same time, to designate various other candidates as value seems arbitrary and impossible to prove. For the practical ethical question, however, the difficulty of identifying the fundamental good doesn't really matter, at least in most situations. Happiness can generally be correlated with good and unhappiness with bad, whether they are the values themselves, the results thereof, or in some other relationship. Thus striving for the greatest overall happiness and striving for the good may or may not have the same goal, but they will tend to head in the same direction.
That's enough confusion for one night, I think. This entry is also longer than the actual essay I wrote for the class, although I think it took somewhat less time to write, and it may also be less polished.
Philosophical Disclaimer: It is difficult to communicate ideas with words. Consider that you have probably misinterpreted some of what I have said.