Two sf story ideas

Dec 15, 2010 11:47

A couple of ideas for vaguely Singularitarian short stories that I've had kicking about for a while, but which I can't see how to take from "idea" to "completed story". If anyone wants to write them, be my guest. There are no doubt lots of reasons why these ideas are stupid, and I'd be grateful if you'd point them out to me. On the other hand, if ( Read more... )

Tags: mountains, visions, climbing, grim meathook future, sf

Comments (27)

ciphergoth December 15 2010, 13:06:22 UTC
I believe that the brain is almost certainly taking advantage of quantum processes in its functioning

I'm surprised by that - say more? Is this a Mitchell Porter thing?

pozorvlak December 15 2010, 14:43:15 UTC
Nothing so detailed - I know almost nothing about neuroscience, I'm afraid. This is basically a hunch based on (a) David Clow's arguments here, (b) my general pessimism, (c) er, some other (fairly speculative, IIRC) stuff that I've seen on the Web but can't for the moment remember, and (d) the idea that if there were useful quantum effects available for improving brain function, evolution's had a long time to discover and exploit them.

Do you have any good arguments why this wouldn't be the case?

[Yes, I should read the whole WBE roadmap rather than just commentaries on it. It's on the list...]

ciphergoth December 15 2010, 14:51:32 UTC
a) Doug (not David) Clow's article says "[...] I doubt the quantum level, 11, is needed but Roger Penrose would disagree" - if it's the Penrose thing you're looking for, see Wikipedia

b) Why would you expect pessimism to give you information about reality here?

c) can't really address here

d) This again seems a bit general - I think we should be working out whether the brain is using a particular trick by looking for it, not just assuming it uses every trick available.

This doesn't seem to me to add up to "almost certainly".

pozorvlak December 15 2010, 15:07:16 UTC
a1) I've read The Emperor's New Mind, but wasn't aware of Penrose's more recent work on the idea. Thanks! Note, incidentally, that Hofstadter disposes of Penrose's (unoriginal) Gödel-based argument in about a paragraph in Gödel, Escher, Bach: there's no reason why the conscious mind should be able to encompass its substrate's full deductive system, and hence no reason why it should be able to recognise its own Gödel sentence.

a2) Doug*'s arguments don't add up to "quantum level necessary", but they do to my mind add up to "a very low-level simulation is necessary". Which is enough for my purposes, but obviously the difference is very important for anyone trying practical WBE.

* oops.

b) By itself, I wouldn't - but it's worth asking oneself if WBE proponents are suffering from the Planning Fallacy, and if so which of their assumptions might be excessively optimistic. Again, I should probably read the WBE roadmap before accusing its authors of anything, but over-optimism on their part is a priori likely ( ... )


half_of_monty December 15 2010, 13:56:44 UTC
Gorgeous, both of them! Write them!!

pozorvlak December 15 2010, 15:18:25 UTC
OK. What happens in the second one after the opening scenes? :-)

pozorvlak December 15 2010, 15:37:08 UTC
Come to that, what happens in the first one after the opening scenes? And how do I handle such a massive infodump?

half_of_monty December 15 2010, 15:45:56 UTC
Oooh, er, gosh.

I fear the nonexistence of any non-parody writing by me is largely due to my inability to work out anything resembling a plot.

On the other hand, I'm going through another Woolf phase at the mo. For the first, why not just stay inside one of the characters' heads, and detail the mental back-and-forth and doublethink and anguish, and not bother with any plot beyond what you've sketched above?

You don't even need to specify the decision they reach -- just take it to the point where a decision must finally be made.

I don't know about massive infodumps. How does good SciFi cope? I don't read much.

For the second, I would put all the writing into the mountain, especially into building up the relationship between the brothers and into the conversation about the wife. And end it where you've ended it above. [If you want to go on from there, it seems so easy to fall into farce; beware Blithe Spirit!]


elvum December 15 2010, 17:53:33 UTC
Just wondering, but surely for 99.99% of use cases something (much) simpler and cheaper than a simulation of a human brain will be a better solution? So what's the motivation - a conviction that the 0.01% is important? Curiosity? A conviction that betterness is self-evidently directly proportional to like-us-ness? :-)

ciphergoth December 15 2010, 22:08:26 UTC
One use case I have in mind is ETERNAL LIFE. I don't know of a simpler or cheaper option for that one...

pozorvlak December 15 2010, 23:37:22 UTC
Aha, an excellent point! So (for instance) the datacentre running an uploaded Warren Buffett might have a greater EROI if it were running non-sentient statistical trading software instead?

Guess that leaves us with eternal life, then. But the idea of a repressive caste of uploaded billionaires is still salvageable, I reckon.
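
[A toy version of that comparison, with every number below invented purely for illustration - a sketch of the argument's shape in Python, where "value per joule" is my stand-in for EROI rather than a standard metric:]

# Hypothetical figure of merit: revenue produced per joule of energy consumed.
# All four parameters below are made up purely for illustration.
def value_per_joule(revenue_per_hour, power_watts):
    """Revenue per joule of energy drawn (USD/J)."""
    joules_per_hour = power_watts * 3600  # W * (s/h) = J/h
    return revenue_per_hour / joules_per_hour

upload = value_per_joule(revenue_per_hour=50_000, power_watts=10_000_000)  # uploaded trader
bots = value_per_joule(revenue_per_hour=40_000, power_watts=10_000)        # non-sentient software

print(f"upload: {upload:.2e} USD/J, bots: {bots:.2e} USD/J")
# With these made-up numbers the software wins by a factor of ~800, so the
# upload only earns its rack space if its judgement is worth that premium.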


gareth_rees December 15 2010, 22:14:11 UTC
Like ciphergoth, I am not impressed by the claim that the human brain is doing anything in particular with quantum mechanics that can't be emulated at the level of proteins or even neurons. I've read Penrose's The Emperor's New Mind, and as far as I can tell, Penrose's argument is:

  1. We don't understand consciousness.
  2. We don't understand quantum gravity either.
  3. Therefore, consciousness might be caused by quantum gravity.

It's a god-of-the-gaps argument: unanswerable but unimpressive. To argue against it: surely among the things we know about the brain is that its operation is remarkably robust, functioning moderately well under a variety of environmental conditions that would cause any quantum phenomenon to decohere into mush ( ... )
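
[For scale, the standard back-of-envelope numbers behind this point - Tegmark's 2000 estimate, not anything from this thread - are

\[ \tau_{\mathrm{dec}} \sim 10^{-20}\text{--}10^{-13}\,\mathrm{s}, \qquad \tau_{\mathrm{neuron}} \sim 10^{-3}\text{--}10^{-1}\,\mathrm{s}, \]

i.e. any neural superposition would decohere at least ten orders of magnitude faster than the timescale on which neurons fire, leaving it no time to do computational work.]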

pozorvlak December 17 2010, 09:53:54 UTC
as far as I can tell, Penrose's argument is: [snip]

It's a bit more complicated than that - he uses Gödel's theorem to argue that the brain is doing something non-computable, and that this non-computability cannot arise from known macroscopic physics. But his argument for non-computability is very shaky, as discussed above.
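
[For anyone who wants the skeleton of that argument, here is a compressed paraphrase - my wording, with \(F\) the formal system supposedly equivalent to the mind and \(G_F\) its Gödel sentence:

\[ \mathrm{Con}(F) \;\Rightarrow\; F \nvdash G_F \qquad \text{(Gödel's first incompleteness theorem)} \]

Penrose's step: we can nevertheless "see" that \(G_F\) is true, so the mind outruns \(F\) and must be doing something non-computable. The Hofstadter-style rebuttal: "seeing" that \(G_F\) is true requires knowing both \(\mathrm{Con}(F)\) and \(F\)'s full axiomatisation, and a mind instantiated by \(F\) need know neither, so no non-computability follows.]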

The protagonist is convicted of criminal damage and sentenced to 100 hours community service ... running errands for uploaded minds!

I love it. That would make a great 2000AD Future Shock.


necaris December 15 2010, 23:57:03 UTC
Dude, they sound amazing -- I wish I could volunteer to write one!

