Here's an entry for all the tech geeks out there

Oct 08, 2008 11:00

This whole post is based on one hypothetical question: what would be the result of infinite processing power? Assume you have a black box with infinite memory that will calculate any value computable in a finite period of time and return it to you instantly.


blackeagle2138 October 8 2008, 16:11:56 UTC
Encryption would instantly become useless. The real question is: would this be a good thing or a bad thing? It also kind of reminds me of the big quote in Antitrust - "Human Knowledge Belongs to the World". In effect, as soon as someone comes up with and implements an idea, it could be mimicked almost instantly by anyone who can ask your black box the right questions. Security through obscurity would become your only hope and, well, as we've seen - it's not the best security ever.
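
To make the "encryption becomes useless" point concrete, here is a toy sketch of an exhaustive key search in Python. It is not an attack on any real cipher: the repeating-XOR cipher, the two-byte key, and the known plaintext prefix are all illustrative stand-ins. The point is only that the loop below works over any finite key space if you can run it fast enough.

    # Toy sketch of an exhaustive key search. Not an attack on a real cipher:
    # the repeating-XOR "cipher", the 2-byte key, and the known plaintext
    # prefix are stand-ins for illustration.
    from itertools import product

    def xor_cipher(data, key):
        """Repeating-key XOR: the same operation encrypts and decrypts."""
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    def brute_force(ciphertext, known_prefix):
        """Try every possible 2-byte key until the plaintext looks right."""
        for key in (bytes(k) for k in product(range(256), repeat=2)):
            if xor_cipher(ciphertext, key).startswith(known_prefix):
                return key
        return None

    secret_key = b"\x4f\xa2"
    ciphertext = xor_cipher(b"ATTACK AT DAWN", secret_key)
    print(brute_force(ciphertext, b"ATTACK"))  # recovers b'O\xa2'

Scale the key up to 128 or 256 bits and the same loop still works in principle; only the size of the space changes, and a machine with unbounded speed doesn't care about the size.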

Reply

draque October 8 2008, 19:06:50 UTC
Hrm. I'm not 100% sure that's the case. Assuming the machine has protected memory (there's an infinite amount, so no reason not to), it could offer authentication that enforces a minimum delay between failed access attempts. In that protected memory it could store the key data for a one-time pad, which is 100% uncrackable (although you need 1MB of key data to encrypt 1MB of meaningful data, and the key data must be thrown out after a single use).

If you could request that the system generate X bytes of random pad data and then grant copies to two users, you would be able to transfer data securely. The only issue is that it would be susceptible to a man-in-the-middle attack if the request for the pad were made after the network sniffing began. Taken far enough, though, no system can establish secure data transmission under 100% surveillance between two people who have never previously communicated.
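
For what it's worth, here's a minimal sketch of that pad scheme in Python, with the secrets module standing in for the black box's source of truly random key data (generate_pad and otp are just illustrative names):

    # Minimal one-time pad sketch. secrets stands in for the black box's
    # random generator; generate_pad and otp are illustrative names.
    import secrets

    def generate_pad(n):
        """Ask for n bytes of random key material (one byte per message byte)."""
        return secrets.token_bytes(n)

    def otp(data, pad):
        """XOR against the pad; the same call encrypts and decrypts."""
        assert len(pad) >= len(data), "pad must be at least as long as the data"
        return bytes(b ^ p for b, p in zip(data, pad))

    message = b"meet at the usual place"
    pad = generate_pad(len(message))   # handed to both users over a secure channel
    ciphertext = otp(message, pad)
    print(otp(ciphertext, pad))        # b'meet at the usual place'
    # the pad must now be destroyed and never reused

The catch is exactly the one noted above: both users need the same pad, delivered before anyone starts sniffing, and it can never be reused.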

Reply

blackeagle2138 October 8 2008, 23:27:09 UTC
My thought would be that if bandwidth/throughput is an issue, you'd have to keep a local copy of the encrypted data on your end, which could then be submitted to this machine as a decryption problem. Granted, you'd have to wait in the arbitrarily long queue for the machine (which may well be practically infinite), but for bigger prizes I can see it being worth someone's time to smash into someone's secure server to yoink out that data.

Plus, your black box can't exist in a vacuum forever. Once someone realizes it's possible, they may send the computer problems designed to duplicate it, and once these black boxes start competing, it's going to be an enormous mess, methinks.

Reply

draque October 9 2008, 13:25:57 UTC
Keep in mind that the only thing reproducing the black box would do is increase bandwidth. The processing power of two infinitely quick machines is no greater than the processing power of one. Remember that when you multiply an infinity by a constant, the constant drops out.

Anyhow, it would certainly give people a new definition of what is and isn't secret.

Reply

blackeagle2138 October 9 2008, 13:55:13 UTC
Yeah, the computers will have infinite processing power, but I have a feeling it would be more about the operators reacting to each other than about their hardware - i.e., if one were going to try to hack the other. Even if a computer can compute all the possible ways it can be attacked, even with infinite processing power it can likely only mount a defense against finite possibilities. Unless of course we bring AI into the equation, in which case, how comfortable are you with an AI with infinite processing power just chilling in a black box? Good morning, HAL/SkyNet/Whoever...

Reply

draque October 9 2008, 15:03:07 UTC
Honestly, if you're willing to make some concessions on security vs. usability, making an essentially unhackable system isn't hard. If you make it accept only a limited number of login attempts from any given source in any given period of time, you put a lower bound on the average crack time. Say you have a login system with a quintillion possible entrance codes (which is trivial, even with current systems) but only allow one attempt per second per source. Even if another person's hypercomputer had hacked the top-level internet DNS servers so that it could pose as every IP address except the one it's trying to hack, that's only around four billion guesses per second, so it would still take years on average to stumble onto the right code.
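
As a back-of-the-envelope check of those numbers (the keyspace, source count, and rate are taken straight from the paragraph above, not from anything real):

    # Back-of-the-envelope check of the rate-limited brute-force scenario.
    # All three inputs are the assumptions stated above, not measurements.
    KEYSPACE = 10 ** 18   # possible entrance codes (a quintillion)
    SOURCES = 2 ** 32     # roughly every IPv4 address
    RATE = 1              # allowed guesses per second per source

    guesses_per_second = SOURCES * RATE
    seconds_to_exhaust = KEYSPACE / guesses_per_second
    years_to_exhaust = seconds_to_exhaust / (365 * 24 * 3600)

    print(f"{guesses_per_second:.2e} guesses/sec")            # ~4.29e+09
    print(f"{years_to_exhaust:.1f} years to try every code")  # ~7.4
    print(f"{years_to_exhaust / 2:.1f} years on average")     # ~3.7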

AI would certainly be an interesting question. You could get lazy and simply model a human brain down to its fundamental particles (which is to say, you wouldn't have to know how the brain works, just its structure), and you'd have a thinking AI. There would probably be better ways to do it, though.

Reply

blackeagle2138 October 9 2008, 21:27:32 UTC
Well again, you're always going to be vulnerable to MitM, and even if you have your own BlackBox-O-Tron, any communication protocol between the BlackBox and any sort of non-BlackBox would be easily compromised unless you keep it physically isolated on a standalone LAN - basically forcing people to be physically present at the BlackBox to work on it. A BB can do traffic analysis on anything less than perfect isolation and find a way to sneak past a non-BB firewall.

I do think that some sort of learning program would be necessary to use this thing efficiently. I'm not sure any code a human writes would be worthwhile, and it would probably be best if it could be 'taught' to write its own software. But then we get into the ethical issues: have we created some sort of infinitely smart life form? Are we now forcing it to do our work? How dare we?

Reply

