A curious phenomenon in the computer industry is the differing dynamics driving the CPU makers and the RAM makers.
With CPUs, the emphasis has been on *speed*. Year on year, clock rates go up. Nowadays they're measured in gigahertz. Not so long ago - megahertz.
With RAM, the emphasis has been on *capacity*, with speed of access coming a distinct second.
The result has been that modern day CPUs absolutely *scream* along ... except for when they need to access such slow devices as RAM chips.
Real-world applications spend a lot of time accessing RAM - so much so that processors can spend a silly proportion of their total execution cycles waiting around for the tardy RAM to do its thing.
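A rough way to feel this on your own machine (a hedged sketch - exact numbers vary wildly with CPU, cache sizes, and language runtime): walk a big array in order, then walk it again in a shuffled order. The arithmetic is identical, but the shuffled walk defeats the caches and the prefetcher, so more of its time goes to waiting on RAM.

```python
import random
import time

N = 2_000_000
data = list(range(N))

sequential = list(range(N))   # cache/prefetch-friendly visit order
shuffled = sequential[:]
random.shuffle(shuffled)      # cache-hostile visit order

def walk(indices):
    """Sum the array in the given visit order, timing the traversal."""
    start = time.perf_counter()
    total = 0
    for i in indices:
        total += data[i]
    return total, time.perf_counter() - start

seq_total, seq_time = walk(sequential)
rnd_total, rnd_time = walk(shuffled)

# Same work, same answer -- only the memory access pattern differs.
assert seq_total == rnd_total
print(f"sequential: {seq_time:.3f}s  random: {rnd_time:.3f}s")
```

In CPython the gap is muted by interpreter overhead; the same experiment in C typically shows the random walk running several times slower, purely from memory stalls.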
So here is the upshot:
- We are building machines with faster and faster processor clock speeds
- As the speed goes up, so too do the problems of heat generation
- As the speed goes up, the amount of time a box spends waiting on its RAM goes up
- Consequently, pumping CPU speed is no longer the obvious solution to application performance problems
We need another way to drive performance than just cranking up CPU speed.
We need to find ways to make today's ultra-fast CPUs spend more of their cycles doing useful work and less of their time navel gazing waiting for RAM.
To do this effectively, we need to give each processor core the ability to switch between applications that are waiting around on RAM and do it quickly.
That is what chips like Sun's UltraSPARC T1 (codenamed Niagara) are all about.
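The idea can be sketched in software - a loose analogy only, not how the T1's hardware threading actually works. Here `time.sleep` stands in for a memory stall and the OS scheduler stands in for the chip's thread switcher: four tasks that each "stall" for 0.2 seconds finish in roughly 0.2 seconds of wall time when their waits overlap, not 0.8.

```python
import time
from concurrent.futures import ThreadPoolExecutor

STALL = 0.2  # stand-in for a long memory stall, in seconds

def task(n):
    time.sleep(STALL)   # the "stall": this thread yields while it waits
    return n * n        # the useful work, once the data arrives

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(task, range(4)))
elapsed = time.perf_counter() - start

print(results, f"elapsed {elapsed:.2f}s")  # ~0.2s of wall time, not ~0.8s
```

The point of the analogy: none of the individual tasks got any faster, but the waiting overlapped, so the whole batch did.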
Ask not how many seconds your application takes to process a record or a document. That is not the important question. The important question is how many records/documents your application can process in a second.
The latter is a measure of *throughput*. The former is latency - and latency is the number afflicted by von Neumann's curse.
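The arithmetic behind that distinction, with illustrative numbers only: if one record takes 2 ms end to end, a single stream tops out at 500 records per second. With eight hardware threads overlapping their stalls, the chip can approach 4,000 records per second - even though each individual record still takes its 2 ms.

```python
latency_ms = 2    # milliseconds per record (unchanged either way)
hw_threads = 8    # concurrent streams per core (illustrative figure)

single_stream = 1000 / latency_ms            # records/s from one stream
overlapped = hw_threads * 1000 / latency_ms  # ideal ceiling with overlap

print(single_stream, overlapped)  # 500.0 vs 4000.0 records per second
```

Latency stays put; throughput is what scales.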
Next up: a look at threads - the obvious paradigm for feeding cores with applications to switch between. As with everything else, of course, it's not quite that simple.