Not your commonplace, ordinary struct

Sep 26, 2008 04:37

When I was taking the two weeks of class leading up to student teaching in the spring of 1993, the prof took us all into a very cramped computer lab in the Education Building. Crowding us all around an old Mac (probably an LC series, but I don't remember for sure), he showed us a program that we were supposed to use as a sort of communal journal. We could each record entries in a number of fields, as would be expected. The innovative aspect was that the program let us link our comments to other comments, forming a chain or, really, more like a web, from which we could all take ideas and learn from what our peers were doing.
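The linked-comments idea is easy to sketch. This is not the original program, just a minimal illustration (all names and entries are hypothetical) of how entries linked to other entries grow into a web rather than a chain:

```python
# A sketch of journal entries linked into a web: each entry can point
# to any number of other entries, and links run both ways.

class Entry:
    def __init__(self, author, text):
        self.author = author
        self.text = text
        self.links = set()   # entries this one references

    def link_to(self, other):
        # Linking is symmetric, so the structure grows into a web,
        # not a one-directional chain.
        self.links.add(other)
        other.links.add(self)

a = Entry("alice", "Tried small-group reading circles today.")
b = Entry("bob", "Reading circles worked well in my class too.")
c = Entry("carol", "Combined reading circles with journaling.")

b.link_to(a)
c.link_to(b)
c.link_to(a)

# From any one entry you can reach every peer's related comment.
print(sorted(e.author for e in a.links))  # ['bob', 'carol']
```

Once the links are symmetric, any entry becomes an entry point into the whole conversation, which was exactly the appeal of the idea.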

Too bad the interface was so bad that, as far as I know, NONE of us used it. This program was so bad that it made HyperCard look like Tron, although without the Wendy Carlos and Journey. I saw what the programmers were trying to do, and I was really excited by the idea, but the implementation was bad. Really bad. Claw out your eyes and break off your thumbs bad. I rejected the software and embraced the idea.

(Digression: If anyone out there has $150K or so lying around going to no good use, I have an idea for a multidimensional history program -- that word is really inadequate, but it'll have to do -- that, as far as I know, has never been even remotely attempted. I first came up with this in the mid-'90s, but the tech wasn't up to the task. I think it is, now. Call your neighborhood venture capitalist and put him in touch with me. End of digression.)

Obviously, hypertext has become more sophisticated in the succeeding years, or we wouldn't all be reading this now and I wouldn't have been able to use Wikipedia earlier -- the ultimate in hyperlinked knowledge -- to check to make sure Wendy Carlos had become Wendy when Tron was released. Hold that thought.

Something else I first learned about in college -- actually, no, before college -- was parallel processing, where you broke a problem down into smaller pieces and handed each piece to a different person, or computer, assembling their results into the solution to the larger problem. I learned this because one summer I had a baby computer class ("Today we're going to learn to use Print Shop to make greeting cards!" "Oooooh...") in a lab that also had a mysterious black piece of furniture roughly the size of a big gun safe turned on its side. This, we learned, was one of the terminals to the university's Cray supercomputer. I was very impressed that the terminal was bigger than any actual computer I had ever seen. (Really, after a full day of Print Shopping, I was impressed with anything that was NOT a 9-pin dot-matrix printer.) The teacher gave us a brief and largely accurate explanation of the Cray.
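The split-solve-combine idea the teacher described can be sketched in a few lines. This is my own toy example, not anything the Cray ran; it uses threads rather than separate machines (or separate people) purely for simplicity:

```python
# A minimal sketch of parallel processing: break a big problem into
# chunks, hand each chunk to a separate worker, then assemble the
# partial results into the full answer.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker solves its small piece independently.
    return sum(chunk)

def parallel_sum(numbers, workers=4):
    # Break the problem into roughly equal pieces...
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    # ...solve the pieces concurrently...
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum, chunks)
    # ...and combine the partial answers into the solution.
    return sum(partials)

print(parallel_sum(list(range(1, 101))))  # 5050
```

The same shape, with computers standing in for threads and a network standing in for the thread pool, is what distributed computing projects like SETI@home do at planetary scale.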

Ultimately, of course, this idea of parallel processing evolved into distributed computing (tip o' the hat to nugget), which has been applied to some pretty knotty problems (and some pretty nice applications, such as SETI@home).

Now we come back to Wikipedia, which merges hypertext with distributed computing to build a massive compendium of mostly accurate knowledge, with information added, edited, and linked by whoever is motivated and knowledgeable (or stubborn) enough to make it happen. Wikipedia is a truly mammoth undertaking, and the first time I really understood what it was trying to do, I had to have a lie-down for a bit. If I hadn't encountered a shadow of this idea before, in the Orson Scott Card story "The Originist" (in which -- SPOILER ALERT -- a Trantorian helps the University Library compile the hyperlinked index of all its works END OF SPOILER), I think my mind might actually have leaked out of my ears. I wanted the index from "The Originist" and I'm still astounded by the variety of information, mis- and otherwise, available in Wikipedia.

Which leads me, finally, to something that's going to make Wikipedia look like cuneiform. It's possibly the most brilliant ARG (alternate reality game) I've ever heard of. Except that it's not really an alternate reality, per se...just one that hasn't come true yet. The creators call it a massively multiplayer forecasting game, which truly doesn't do the concept justice: it takes hypermedia and distributed computing and throws them in a blender with about a thousand Red Bulls. It's called Superstruct. An extraordinarily brief summary of the game is that in 2019, simulations show that humanity will be extinct by the mid-2040s, owing to a synergy of five trends all coming to a head.

The brilliance of the game is that it takes place on the Webternets we already have, through things such as YouTube, Twitter, blogs of every shape and size, and probably half a dozen media I'm totally forgetting. For six weeks, starting in 10 days or so, the players will be living the lives of people in 2019. Using any and all media at their disposal, they can discuss, collaborate, and roleplay their near-future analogs, building a picture of the world as it stands and trying to find ways to avert the catastrophe that's mere decades away. Not just distributed computing, this is distributed creativity; not just hypermedia, this is hyper-reality.

I'm talking this up because it is such an ambitious undertaking, such a novel approach to thinking about these issues, that I wanted to draw everyone's attention to Superstruct and invite you all to, at the very least, look around their website and see the bare foundation that they want everyone to build on. It is, quite possibly, one of the finest examples of using our Interweb powers for good that I have ever seen. It is my profound regret (and unquestionably my loss) that I won't be able to keep up with all of this, because I am excited to find out what happens. I have no idea if this will solve the world's problems, but what I know for sure is that SOMETHING new is going to come out of Superstruct, something that never would have happened without this unprecedented collaboration among hundreds of people.

Check it out.