Memory booster
Canadian math expert aims to speed up computers
Last Updated October 2, 2007
By Shane Schick, CBC News
The author is the editor of ComputerWorld Canada, the country's longest-running magazine for Canadian technology professionals. He launched ITBusiness.ca and has written about technology and its impact on the workplace for the Toronto Star, the Globe and Mail and a number of other publications. In 2006 he received the Innovation Award for Excellence in Science and Technology Reporting from the Canadian Advanced Technology Alliance.
It wasn't easy when Norbert Zeh and his wife first started living together. She couldn't understand why she would inevitably find him sitting quietly, staring at the wall.
"[We] would always have these fights because I looked like I wasn't doing anything," he says. "The reality is, I was thinking really hard."
That's what it looks like when someone is tackling a problem few have ever attempted successfully: speeding up the way data moves into and out of a computer system.
The Experiment
Zeh, an assistant professor in the faculty of computer science at Dalhousie University in Halifax, is officially working on creating "algorithms for memory hierarchies." In layman's terms, he's developing mathematical models that computers could use to do a better job of pulling data from their hard drives and memory chips and sending it to the processor (the "brains" of the computer).
A hard drive has a mechanical arm that finds and reads data stored on a spinning platter. As a result, the process of getting information from a hard disk is about a million times slower than the speed at which data flows through the average processor.
"What I am doing is to take whatever system designs that hardware designers come up with, and ensure that we use computational techniques that take maximal advantage of the hardware — in this case of the memory system," he says.
By "computational techniques," he means the way machines pass data back and forth between different parts of the system. "It is no longer sufficient to build better and better computers. If we don't have the software that co-operates with the hardware, if we use software designed for the hardware of the 1970s, the better hardware doesn't really help us much."
According to Zeh, the access time for information stored on a hard disk is about a million times slower than the computing speed of the average processor. This delay is called latency. The read/write function on a drive is a mechanical process (tiny pieces of machinery actually move back and forth over the disk to read or write the data), so transferring information from a drive into the computer's memory chips, or RAM, tends to be much slower than what happens in the electronic circuitry of a processor. That means that even when a company like Intel or AMD comes up with a processor that is supposed to make a PC go faster, it may not help all that much when you're searching for stored information: it still takes a while for even a high-performance computer to find what you're looking for on the hard drive, and that powerful chip will spend much of its time idling while it waits for the data to arrive.
"The gap between disk speeds and processor speeds is going to widen," Zeh says. "The big problem is you have these fast processors, but you have to feed them with data. There's just no way around having to retrieve that data. And we're not increasing the speed at which you do that."
Hardware makers today try to increase how much data high-performance systems can process by attaching multiple disks to dedicated circuits called controllers, which "talk" to the CPU. With several disks working together, a computer can spread different pieces of data across a number of slow hard drives and read or write a big chunk of information more quickly by using the drives simultaneously.
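The idea behind those multi-disk setups can be sketched in a few lines of code. The Python example below is a toy illustration rather than how a real disk controller works: it deals a piece of data out in stripes across several ordinary files standing in for separate drives, then reads the stripes back in parallel threads and stitches them together.

    # Toy sketch of striping: files stand in for drives, threads stand in
    # for drives that can all be read at the same time.
    from concurrent.futures import ThreadPoolExecutor

    def write_stripes(data, paths):
        # Deal the data out across the "drives", one stripe per drive.
        stripe_size = len(data) // len(paths) + 1
        for i, path in enumerate(paths):
            with open(path, "wb") as f:
                f.write(data[i * stripe_size:(i + 1) * stripe_size])

    def read_stripes(paths):
        # Read every stripe at once and reassemble the original data.
        def read_one(path):
            with open(path, "rb") as f:
                return f.read()
        with ThreadPoolExecutor(max_workers=len(paths)) as pool:
            return b"".join(pool.map(read_one, paths))

    drives = ["stripe0.bin", "stripe1.bin", "stripe2.bin", "stripe3.bin"]
    original = b"a large chunk of data " * 100_000
    write_stripes(original, drives)
    assert read_stripes(drives) == original

On a single physical disk the threads buy nothing, but when each file really does sit on its own drive, the reads overlap and the whole chunk arrives in a fraction of the time.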
Zeh is taking a different approach: Instead of adding more and more disks and controllers, the algorithms he's working on would provide a shortcut of sorts so that the computer could read data off the disk in a more efficient way.
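One of the basic ideas behind reading a disk efficiently is simple to state: because every trip to the disk is expensive, fetch a large block of data on each trip instead of one small piece at a time. The Python sketch below is a generic illustration of that principle, not Zeh's algorithms; it reads the same file with tiny requests and then with large ones.

    # Generic illustration of block-wise reading; not Zeh's algorithms.
    import os, time

    with open("bigfile.bin", "wb") as f:       # create a ~64 MB test file
        f.write(os.urandom(64 * 1024 * 1024))

    def read_in_chunks(path, chunk_size):
        with open(path, "rb") as f:
            while f.read(chunk_size):
                pass

    for chunk_size in (64, 64 * 1024, 4 * 1024 * 1024):   # 64 bytes, 64 KB, 4 MB
        start = time.perf_counter()
        read_in_chunks("bigfile.bin", chunk_size)
        print(f"{chunk_size:>10}-byte reads took {time.perf_counter() - start:.2f} seconds")

On a small file that fits in the operating system's cache the difference mainly reflects the overhead of making millions of tiny requests; on a database far too big for memory, where each request can mean a real disk seek, the gap becomes enormous.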
The result might not mean faster computing for average users, whose machines often have more horsepower than they ever use anyway, he says. Instead, it might mean much faster performance for any business that analyzes large amounts of data. That could include the kind of powerful computers run by organizations we rely on every day, such as banks, or the scientific researchers who use computers to study biology and come up with a cure for a deadly disease.
Timeline:
Most of Zeh's experiments are done using good, old-fashioned pencil and paper, but once he has algorithms he thinks might work, he says he'll be working with grad students to test the math on real-life database systems and see if performance is improved.
"The long-term goal is to have one library of algorithms people can use to solve problems in a very disk-efficient way," says Zeh, who hopes to have some algorithms completed within five years.
"For now, the main application fields would be scientific computations, GIS, search engines and database systems in general. Over time, these insights will be incorporated into more mainstream applications," he says.
Reality Check:
Zeh isn't alone in his search for more efficient searches, by any means. Russell Klein, a database analyst with Boston-based Aberdeen Group, says a number of companies are trying to solve the problem of latency on hard drives and in memory, but no one has managed to come up with a cure-all so far.
"Unless through some dramatic technological revolution of some kind, I don't ever see disk access latency catching up. I see it as the permanent bottleneck, especially around large databases," Klein says. "I don't see that the algorithms are ever going to solve the problem [entirely], but they will certainly improve overall performance, because that [data access] is where the drag is."
Zeh says some of his success will depend on the quality of software used to take advantage of the algorithms in computers, which may require some remedial education among developers.
"The average programmer probably has no problem understanding the very simple algorithms that were developed in early '60s," he says. "With the kind of problems we're dealing with today, it's almost too much to ask that the average programmer can understand that right and apply [the solutions] right."