Massive Spam Parallelism

Nov 28, 2006 19:43


One reason we will not be rid of spam anytime soon is that spam is very well suited to massively parallel mechanisms. The recent uptick in my spam (and everybody else's, I suspect) is due to the fact that more bots are being knit into botnets, and they're better bots. Even Port 25 blocking, ( Read more... )

spam, internet


Comments 3

anonymous November 29 2006, 06:33:50 UTC
Depressingly enough, Microsoft's solution to this one is a pretty decent one. Make everything "trusted". It'll get a lot harder to do free/open source stuff, but it will also become virtually impossible to get a botnet going since everything will need to be running certain versions and so forth. It would need to be a complete "fabric", all the way down to the end users, but it could theoretically work. That's one of the big things they're pushing about all this Trusted Computing garbage.

Of course, I sincerely doubt it ever will work, since it would require a rather fundamental restructuring of how the Internet operates and would cut out the huge number of servers that currently run Apache on some form of UNIX/Linux. ISPs wouldn't put up with that, nor would large corporations. Their users might have to, though.


jeff_duntemann November 29 2006, 20:48:59 UTC
No. Microsoft is not known for creating full-fabric "solutions." It was Microsoft that unleashed DLL Hell on us by encouraging "shared code" when there is precious little reason for one app to share code with any other app.

What "trusted computing" really means is that I will no longer control the use of my own equipment and files. The primary goal of all Microsoft's security efforts is to make sure that nobody steals Microsoft software. My need for a reliable computing platform is not even on the radar.

I do not use XP on a daily basis, and I don't intend to use Vista at all. By the time Windows 2000 becomes unusable, I will long since have migrated to the Mac or some as-yet-unknown flavor of desktop Unix.


regek November 30 2006, 16:55:25 UTC
The point would be that if you don't control the lower levels of your own computer, then neither will viruses, worms, trojans or other nasties. For example, let's say they require a specific networking stack to be in place and that stack only allows sending fully formed packets through a high-level API (so you don't get layer two access or even layer three directly). If the stack does sanity checking on the outgoing data to prevent spoofing and so forth, that gets rid of DoS-by-proxy. It would also eliminate most network scans. The same stack could then enforce certain restrictions on the number of hosts you are allowed to contact within a specified period of time. If you try to send traffic to more than 50 hosts in 20 seconds, for example, it would block further traffic for a while ( ... )
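The rate-limit idea in that comment (block further traffic if a machine contacts more than some number of distinct hosts within a time window) is easy to sketch. Here is a minimal illustration in Python; the class name, thresholds, and `allow()` interface are all hypothetical, not any real stack's API:

```python
import time
from collections import deque

class OutboundRateLimiter:
    """Illustrative sketch: block outbound traffic for a while if more
    than max_hosts distinct hosts are contacted within a sliding window.
    All names and numbers here are made up for demonstration."""

    def __init__(self, max_hosts=50, window=20.0, block_time=60.0):
        self.max_hosts = max_hosts      # distinct-host threshold
        self.window = window            # sliding window, in seconds
        self.block_time = block_time    # how long to block after tripping
        self.contacts = deque()         # (timestamp, host) pairs
        self.blocked_until = 0.0

    def allow(self, host, now=None):
        """Return True if a packet to `host` may be sent right now."""
        now = time.monotonic() if now is None else now
        if now < self.blocked_until:
            return False                # still in the penalty period
        # Drop contacts that have aged out of the window.
        while self.contacts and now - self.contacts[0][0] > self.window:
            self.contacts.popleft()
        self.contacts.append((now, host))
        distinct = {h for _, h in self.contacts}
        if len(distinct) > self.max_hosts:
            # Too many distinct hosts too quickly: looks like a scan
            # or a spam run, so block further traffic for a while.
            self.blocked_until = now + self.block_time
            return False
        return True
```

With the 50-hosts-in-20-seconds example, the 51st distinct host inside the window trips the block, while repeated traffic to the same few hosts sails through. A real enforcement point would of course live below the application, in the stack itself, where malware couldn't simply bypass it.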



