(no subject)

Oct 09, 2006 02:25

Alright, I'm having a moment where I feel an incredible need to sort out my thoughts by writing about them, so I'm going to post on here again even though I assumed that practice was long since over with.

Okay, so I was reading Slashdot today and just started browsing their "sci-fi" news archives (yep, it's a whole category of news there) and stumbled onto some mindblowing shit. Topics of note: will we live 1000 years? Will only the richest of us live 1000 years, and does that justify not even attempting to gain this new technology? How about Toxoplasma, a parasite (a protozoan, actually, not a bacterium) which can radically alter behavior in latent infection and which has probably latently infected about half of the world's population? WTF?! And if you can alter a single gene to produce completely fearless mice, what does that mean for humans?

I realized that mostly what people don't like about AI is the idea that, once we've achieved it, we'll be obsolete. There will be nothing that we have that our creation does not, and so it will displace us at the head of the table. But why does everyone see robots as a force that dislikes humanity and that wants to replace us? Guys, The Matrix might not have been the chilling glimpse into the future that we all think it was. Maybe robots will want to coexist with us, will cooperate with us for our mutual good. I notice that if I change my perception of robots to beings who have the same values as we do, I am no longer threatened by the prospect of their proliferation. Maybe this is our brave new world?

This also provides an excellent example of an idea that I've had for a while now, that the world is divided into two types of forces, the balancing forces and the self-perpetuating forces. Technology, be it advances in the fight against aging or startling new kinds of mechanical intelligence or life, is a huge example of the latter kind. This is also important to me because I recently came to believe that humans have such an object-centered worldview, a feeling that matter is the important stuff to be considering, whereas in fact the only stuff that's at all interesting or important is motion, the forces that cause it, and the energy behind it. It actually started linguistically with the idea that, instead of naming an object (say, a ball) and enumerating all of the actions associated with it (bouncing, rolling, and so on), it makes more sense to name a force or a motion -- like pushing, for example -- and then apply it to the various nouns which may influence it or undergo it.
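For the programmers in the audience, this is basically the old nouns-versus-verbs design debate. Here's a minimal sketch of the two framings in Python (all the names here are mine, purely illustrative):

```python
# Noun-centered: the object is primary and owns every action it can take.
class Ball:
    def bounce(self):
        print("the ball bounces")

    def roll(self):
        print("the ball rolls")

Ball().bounce()  # every action lives inside the noun

# Motion-centered: name the force once, then apply it to whatever can undergo it.
def push(thing):
    """Apply a push to anything that knows how to respond to one."""
    thing.receive_push()

class Cart:
    def receive_push(self):
        print("the cart rolls forward")

class Wall:
    def receive_push(self):
        print("the wall doesn't budge")

for thing in (Cart(), Wall()):
    push(thing)
```

In the second framing the verb is the organizing idea, and the nouns are just the various things it happens to.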

One of the reasons the whole robot thing even came up (or that I framed it the way I did) is that I was thinking of all the various types of happiness. Probably one of the most important types of happiness is a sense of purpose, an overarching goal, which in our classic AI-driven dystopian future scenarios is the thing that robots take from us. I would think that, on the spectrum of happinesses from immediate to long-term, this one is at the very end, and for most people probably the most important. (Sidenote: would you rather have incredible and unlimited sexual pleasure but no purpose to your life, or a great and wonderful purpose that you are aware of but no sexual pleasure ever again? Also, how weird would it be if every human had to make that decision for themselves on the same day?)

Is there any way we could have science without the driving force of profit? You know, science that could actually benefit the masses, instead of giving three percent of us 500-year lifespans? There's such a conflict between technology and capitalism, or maybe just technology and selfishness. Suppose you dedicated your life to something, went before a committee, and asked (in the way that scientists ask for grants now) for another 100 years of life so that you could see your own space probe reach the Oort cloud or whatever? Who would get to be on that committee anyway?

Even more basic than that is the conflict (and by conflict, I mean my perception of one, since in reality one may or may not exist) between technology and morality. As I said before, the idea of robots depriving humanity of a purpose, of Homo sapiens sapiens being consigned to live in robot-monitored daycare for eternity, is at the heart of the general fear (or at least my own fear) of AI. But even the longer-lifespans thing... what would that do to us? Would our educational system triple in length, so that a medical degree and all of its prerequisites required 80 years of training (but then our doctors would be of just incredible quality compared to those we have now)? Would we be almost justified in not caring about starving people in third-world countries because their lives are to us as the passing of an instant?

By the way, doesn't life have to have a purpose besides self-perpetuation? The other day it came to me that maybe humanity does have a purpose, and that is to discover and explore the universe (and ourselves). I mean, of course you can't be launching rockets while your house is burning down, which is pretty much analogous to the way things are going even in our modest goal of interacting peaceably with ourselves and providing for everyone's basic needs. But after that (if there ever *is* an "after that"), shouldn't there be something more? How about to increase awareness and overall consciousness in the universe? During my ethics class, I sometimes muse that maybe the reason humans seem to me inherently more important than other forms of life is that a more intelligent being has a greater capacity for pleasure and pain, and so the goal of increasing overall pleasure is best served with a human rather than a mouse (although the goal of decreasing suffering... well, maybe not. Hmmm). Turns out John Stuart Mill and the Utilitarians agree with me there. Of course, maybe one needs a balance of happinesses (to bring back an earlier point): some magic formula of pleasures that both sustains us and entertains us, where false pleasures may seem to have a great value but their real value takes the truth of their nature into account.

What are the happinesses?

A sense of purpose.
A sense of belonging, the sum of various personal relationships whether mutual or one-sided.
The more immediate passion of incredible love for one or more persons.
The overwhelming feeling of being loved by one or more persons.
A happiness with one's place in the world.

The ability to express feelings and ideas and be understood.
The outlet to act without completely understanding why, or to give birth to something worthwhile, i.e., art.
Freedom... mostly I guess freedom to act silly, like when you're a kid.

The feeling of being worthwhile. You know, self-esteem.
The feeling of having completed or accomplished. Triumph.
Caring for or helping others. (This one is important.)

Physical contact, comfort-touch, and sexual pleasure.
Aesthetic pleasure -- sort of a weird one, but it's definitely there.
Sensory pleasure (not somatosensory, but rather the other four): good smells, good tastes. Hmm... aural pleasure doesn't really belong here. What's the deal with music? WAIT: maybe sensory information can only be judged to be "pleasurable" on the basis of the emotional or hormonal response; in which case, a good smell is good in the same way that a good song is good. Well, that's a sloppy and specious definition, but it'll do for now.

Alright, I like that list.

Whenever I surf Slashdot I come away with this feeling of, wow, everything is actually happening right now. The stuff you thought you'd never live to see may be only years away (read: teleportation, the space elevator, even the Singularity). Shit is going DOWN in science, and it's going down *now*.

And who's to say which is more important: the great question outside or the great question inside? The fate of the universe or the motions of RNA polymerase inside our heads? And by the way, it would be totally possible for me to triple-major in neuroscience, computer science, and math. I thought of that earlier today, and right after I did I had to sit down because of how awesome that would be. Goddamn.

Alright, I thought I was done, but I'm not. I still have this: why do humans want to reject nature's morality in favor of our own? I mean, aside from the obvious answer that, being human myself, I like *our* morality (the good of the individual) way more than I would if nature had complete rule over our fates (nature's morality being that the good of the species is so much more important than the good of the individual that the individual's needs might as well not exist). Is the system such that, as soon as you're smart enough to find a flaw in it, you're important enough as individuals that another system is needed, and you have to create it? Or is that just arrogance, and nature's way really is best, and we suffer and care about each other's suffering for no useful reason? Why would that be? Is it because that, too, is good for the species? Caring about others benefits us as a whole? That would mean it is our fate to see suffering and be helpless to stop it, which would be pretty awful.

And speaking of pretty awful, what's going on in the world right now? Have the masses really been hypnotized into caring about only what CNN tells them to care about? Are corporations being given free rein to do whatever they want in pursuit of profits, to ruin the lives of a large percentage of people because the small percentage of us with even some smidgen of power can't muster up the motivation to even try to stop them? Are we really as fucked as all that? And do we cling to our flawed examples of people "overcoming" because they're all we have to reassure ourselves that The People have even the theoretical power to change anything at all of import?

Mere decades ago people were speculating about these newfangled computers, what uses people would find for them, whether there might one day be one in every home. A short time later, we were wondering about cell phones (and with our trends of tinier and tinier computers, and more and more multifunctional cell phones, who's to say they won't soon merge into one electronic accessory?). How are we supposed to be able to tell the sci-fi hokum from tomorrow's everyday appliances? If they can teleport a macroscopic object half a meter now, then why not just teleport all my stuff back home for the summer, and into my new room on move-in day? Will that actually happen soon, or are we just seeing a few sputters before it dies out again for another few centuries (or maybe forever)? Genetic engineering of humans? New cures for diseases and ways to combat aging? What is this shit? A manned mission to Mars?

...Wow. Can I go?