If I Had a Billion, Part 5

Aug 08, 2007 12:08


(Continuing a thread I began in my July 13, 2007 entry.) A gratifying number of people who wrote to me indicated that they would fund research, in a lot of different areas. I'm for that; research is much less political than education, and much more can be done with less money. (Even a billion ( ... )

Tags: speculation, games, software


Comments (5)

If you had a billion, you'd become a Venture Capitalist anonymous August 9 2007, 00:44:23 UTC
Jeff Duntemann, VC. Most of your post sounds to me like a proposal to become a VC and fund a H/W-plus-O/S development effort.

It also made me think of past projects like IBM's Future Systems and OS/2, Pink and Taligent, BeOS and NeXT. How many attempts to replace the old Mac OS died quietly inside Apple?

Good luck!

Bill



beamjockey August 9 2007, 01:29:39 UTC
Um, wouldn't it be better to abandon the 8something86 architecture, and invent a new family of chips? You have a billion dollars to play with.


regek August 9 2007, 04:20:57 UTC
There's already a chip that lends itself extremely well to virtualization. In fact, on the currently shipping hardware using this chip, you can't run any native code. Everything is run through a hypervisor built into the firmware.

There's a single management processor that is responsible for starting the rest of the hardware and monitoring it. It runs on a scaled-down version of that same chip and boots a flash-based Linux system.

For that matter, the operating system that runs on this hardware now has a binary compatibility layer that lets it run ELFs compiled for x86 Linux.

It's called the P-series from IBM. ;-) They're extremely good at virtualization at this point. Rather than specifying "I want this code to run on one processor and this code to run on another two and this to run on a fourth", you specify how much time each task should be able to request from the system as a whole. The hypervisor then balances it across all of the hardware in the box. Works very well. Not quite as fault-tolerant as a NonStop, but it's ( ... )
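
In rough terms, the scheduling model described above can be sketched in Python like this. The task names, shares, and balance function are invented for illustration; this is a toy model of the idea, not IBM's actual hypervisor interface:

    # Toy model of entitlement-based scheduling: tasks declare a share
    # of the whole machine instead of being pinned to specific CPUs.
    # Every name and number here is illustrative, not an IBM interface.

    def balance(tasks, num_cores):
        """Turn whole-machine shares into core-seconds per 1-second window."""
        capacity = float(num_cores)  # total compute available per window
        return {name: share * capacity for name, share in tasks.items()}

    # A partition asks for 30% of the box rather than "cores 2 and 3";
    # the hypervisor spreads that across whatever hardware is present.
    tasks = {"web": 0.30, "db": 0.50, "batch": 0.15}
    for name, secs in balance(tasks, num_cores=8).items():
        print(f"{name}: {secs:.2f} core-seconds per dispatch window")

The point of the model is that mapping work onto silicon becomes the hypervisor's problem rather than the programmer's.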


jeff_duntemann August 9 2007, 15:24:18 UTC
As Regek points out above, this has been done. CPUs aren't actually the difficult part. Integrating a CPU with hardware support (i.e., mobos) and OS software is the toughie. I like the P Series, and it would work well in this application. Furthermore, virtualization of the X86 architecture itself is more or less mature, so running legacy apps in a VM window is not magic. Seamless parallel execution does require radical rethinking of everything connected with a computing platform, so by comparison, changing from a pure X86 processor to something that knows how to virtualize X86 seamlessly is really no big jump.

And a billion really doesn't go as far as it used to, heh.


regek August 10 2007, 05:23:23 UTC
Actually, come to think of it, this reminds me of the problems people are having coding for the PS3. The Cell chip can deliver absurd performance. We're talking gigaflops upon gigaflops. The problem is that it's massively parallel and nobody knows how to code for that. Computer games are just now starting to become multithreaded, and that's only to take advantage of dual-core processors. Before Intel really started pushing the Core Duo, the closest we got to parallelism in games and most other apps was shaders that ran on the video card.
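
The single-threaded-to-multithreaded shift described here can be sketched in Python roughly as follows. The entities and the physics step are invented for illustration, and Python's global interpreter lock means this shows the structure of the change rather than a real speedup:

    # Sketch of the shift from a serial game loop to a threaded one.
    # The "entities" and update function are made up for illustration.
    from concurrent.futures import ThreadPoolExecutor

    def update_entity(entity, dt):
        """Advance one entity's position by its velocity (toy physics)."""
        x, v = entity
        return (x + v * dt, v)

    entities = [(0.0, 1.0), (5.0, -0.5), (2.0, 2.0)]
    dt = 1.0 / 60.0  # one 60 Hz frame

    # Classic approach: one thread walks every entity in turn.
    serial = [update_entity(e, dt) for e in entities]

    # Dual-core era: farm independent entities out to a small pool.
    with ThreadPoolExecutor(max_workers=2) as pool:
        parallel = list(pool.map(lambda e: update_entity(e, dt), entities))

    assert serial == parallel  # same result, work just split across cores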

For that matter, lots of games still use integer math internally for everything, which skips right past the purpose of those big, expensive (in terms of silicon area) vector coprocessing units everyone has been adding to their chips. Apple's page on optimizing the Noble Ape simulator focuses on one specific debugging tool from Apple, but it's a good example of the kind of performance increase some applications can see from multithreading and using vector math ( ... )
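
The integer-loop-versus-vector-unit point can be illustrated in Python, with NumPy standing in for AltiVec/SSE-style hardware; the arrays are made-up data:

    # Scalar element-at-a-time math vs. one bulk vector operation, with
    # NumPy as a stand-in for the vector units mentioned above.
    import numpy as np

    positions = np.random.rand(100_000).astype(np.float32)
    velocities = np.random.rand(100_000).astype(np.float32)
    dt = np.float32(1.0 / 60.0)

    # Scalar style: one element at a time, the pattern that leaves
    # vector hardware sitting idle.
    scalar = [p + v * dt for p, v in zip(positions, velocities)]

    # Vector style: one operation over the whole array, dispatched to
    # optimized (often SIMD) routines in bulk.
    vector = positions + velocities * dt

    assert np.allclose(scalar, vector)

Same arithmetic either way, but the second form hands the hardware thousands of elements at once to chew on.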


