I am a big fan of Herb Sutter's work. He has great books on C++ programming techniques, for example. He recently gave a talk about programming for modern machine architectures. The very short version is that memory latency is the root of (almost) all performance issues. [blog entry] [Google video] [corresponding slides]

For a teaser, did you know that ( Read more... )
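To make the point concrete (this is my own toy example, not anything from Sutter's talk, and it assumes 4-byte ints and a 64-byte cache line): the two calls below do exactly the same number of additions over the same array, but the strided walk touches a different cache line on every access, so once the array is too big for cache its running time is dominated by memory latency rather than by the arithmetic.

    // Toy demonstration of memory latency dominating: same work, different
    // access pattern. Assumes 4-byte ints and 64-byte cache lines, so a
    // stride of 16 ints lands on a new cache line every access.
    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t n = 1 << 24;      // ~16M ints (~64 MB), far bigger than cache
        std::vector<int> data(n, 1);

        auto timed_sum = [&](std::size_t step) {
            auto start = std::chrono::steady_clock::now();
            long long sum = 0;
            // Same total number of additions for every stride: we make `step`
            // passes, each visiting every step-th element.
            for (std::size_t offset = 0; offset < step; ++offset)
                for (std::size_t i = offset; i < n; i += step)
                    sum += data[i];
            auto stop = std::chrono::steady_clock::now();
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
            std::printf("stride %2zu: %lld ms (sum=%lld)\n", step, static_cast<long long>(ms), sum);
        };

        timed_sum(1);    // sequential: prefetcher and full cache lines do the work
        timed_sum(16);   // one access per cache line: memory latency dominates
    }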
Comments 5
I would offer a (I believe) slightly different suggestion: that concurrency should be explicitly supported at the programming-language level. We (the computer science research community) have been working on concurrent languages for a long time, and smart people are still thinking about the problem. For instance, Jayadev Misra (author of "Drinking Philosophers") has some of his minions working on a language called Orc, which embodies some interesting ideas.
Anyway, yes, we live in interesting times.
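Orc itself is beyond a quick sketch, but for a rough picture of what "supported at the language level" buys you, here is a minimal illustration in C++'s own terms using std::async from the standard library (the half-splitting and the sum_range name are made up for the example): you state which pieces of work may run concurrently and where their results join, and the implementation handles the threads.

    // Minimal illustration (not Orc): expressing "these two pieces of work may
    // run concurrently, join on their results" without managing threads by hand.
    #include <cstddef>
    #include <future>
    #include <iostream>
    #include <numeric>
    #include <vector>

    long long sum_range(const std::vector<int>& v, std::size_t lo, std::size_t hi) {
        return std::accumulate(v.begin() + lo, v.begin() + hi, 0LL);
    }

    int main() {
        std::vector<int> v(1'000'000, 1);
        const std::size_t mid = v.size() / 2;

        // The implementation decides whether these run on other threads now or
        // are deferred until get(); the code only states what may be concurrent.
        auto left  = std::async(sum_range, std::cref(v), std::size_t{0}, mid);
        auto right = std::async(sum_range, std::cref(v), mid, v.size());

        std::cout << left.get() + right.get() << '\n';   // joins both results
    }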
That would deal with correctness... it still doesn't deal with some of the performance issues that Sutter mentions, particularly that your parallel performance gets destroyed if two threads need to write to different variables that happen to live in the same cache line (false sharing). Yikes.
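For anyone who hasn't seen it, the effect is easy to provoke. A minimal sketch (mine, not from the talk; it assumes 64-byte cache lines, hence the alignas(64)): two threads each update their own counter, and the only difference between the two runs is whether those counters share a cache line.

    // False-sharing sketch: the threads never touch each other's counter, yet
    // when both counters sit in one 64-byte cache line the line ping-pongs
    // between cores and the loop runs several times slower than the padded case.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <thread>

    struct Shared {                              // both counters in one cache line
        std::atomic<long> a{0};
        std::atomic<long> b{0};
    };

    struct Padded {                              // each counter on its own line
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };

    // Two threads hammer their own counter; returns elapsed milliseconds.
    template <typename Counters>
    long long hammer(Counters& c) {
        constexpr long iters = 20'000'000;
        auto start = std::chrono::steady_clock::now();
        std::thread t1([&] { for (long i = 0; i < iters; ++i) c.a.fetch_add(1, std::memory_order_relaxed); });
        std::thread t2([&] { for (long i = 0; i < iters; ++i) c.b.fetch_add(1, std::memory_order_relaxed); });
        t1.join();
        t2.join();
        auto stop = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count();
    }

    int main() {
        Shared s;
        Padded p;
        std::printf("same cache line: %lld ms\n", hammer(s));
        std::printf("padded:          %lld ms\n", hammer(p));
    }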
Sure, you can count the L1 and L2 caches, and some of the MMU, but I don't believe it's 99%.
Unfortunately, neither Sutter nor his source provides a citation for that claim, but it sounds plausible. Unless the number of transistors required to implement the instruction set has grown a lot, the actual processing logic can only be shrinking as a fraction of the total transistor budget.