
Six hot technologies for the 21st century

Wednesday, August 21, 1996

Datamation, Vol. 42, No. 14 (August 1996), pp. 68-73
by Alden M. Hayashi and Sarah E. Varney


What would computers be like without magnetic disk storage, graphical user interfaces, relational databases, languages such as C++ and FORTRAN, and operating systems like UNIX? Or, for that matter, whatever would information technology be like today had the transistor itself not been invented? To ask those questions is to ponder the influence of the Big Four of IT research–Bell Laboratories, Xerox PARC, IBM’s T.J. Watson and Almaden Research Centers, and MIT.

To be sure, there are (and have been) myriad other organizations–the Stanford Research Institute, Carnegie-Mellon University, Siemens Nixdorf, and Digital Equipment, to name a few–that have played key roles in advancing IT. And companies like Microsoft and labs like Interval Research and the University of Washington’s Human Interface Technology Lab, which did not even exist when the Big Four were busy conducting IT research decades ago, are now doing their part in the continuing evolution of information technology.

Nevertheless, on this, the eve of the 50th anniversary of the transistor and the dawn of a new era of network computing, we at Datamation have taken a peek at the research currently under way at the Big Four. We have specifically focused on technology that we think will be emerging around IT’s first fin de siècle, a time when the surging popularity of the Internet promises major industry changes.

Crystals for data storage

IBM has long excelled in basic research that has helped push the boundaries of IT. Last fall, Big Blue commenced a $32 million holographic data storage project with an illustrious group of partners including GTE, Kodak, Rockwell, Stanford University, and Carnegie-Mellon University. Simply put, the technology calls for data to be stored holographically as “pages” of bits in an optical medium such as a crystal. Early experimental results suggest that holographic technology could store 12 times as much data as magnetic disk storage at the same cost. Furthermore, because a laser is used for writing and reading the data, holographic storage promises input/output rates 10 times faster than its magnetic counterpart. “The capacity of the technology, yes, is important. But the real feature is the potential for enormous data rates,” asserts Glenn T. Sincerbox, IBM program manager for holographic data storage systems and technology. According to him, data rates of 1Gbps are entirely possible.
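A rough back-of-the-envelope calculation shows how page-at-a-time optical readout reaches the gigabit-per-second range Sincerbox describes. The page size and readout time below are illustrative assumptions, not published parameters of the IBM project.

```python
# Back-of-the-envelope sketch (not IBM's figures): if each hologram stores a
# "page" of roughly a megabit and a full page can be read out in about a
# millisecond, the aggregate readout rate lands in the gigabit-per-second
# range quoted above. Both numbers are illustrative assumptions.

page_bits = 1_000_000       # assumed bits per holographic page
page_read_time_s = 0.001    # assumed time to read one full page, in seconds

data_rate_bps = page_bits / page_read_time_s
print(f"Readout rate: {data_rate_bps / 1e9:.1f} Gbps")  # -> 1.0 Gbps
```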

But it is the technology’s ability to perform searches in totally new ways that makes holographic storage so compelling. Says Sincerbox: “We can interrogate all these pages that we multiplexed into one volume simultaneously and determine which page most closely matches our search criteria. With rotating media such as magnetic disk storage, you’re constrained to a serial type of reading.”

Sincerbox foresees the possibility of using “associative retrieval,” a pattern-recognition type of approach, for finding information. For example, to locate information about a specific topic, a user could interrogate the holographic crystal to find the page with the digital pattern–or fingerprint, if you will–that most closely matches the pattern of the information being sought. Because of such novel datamining techniques, holographic data storage could lead to a completely new way of representing information, assert industry gurus. Just as IBM’s development of the relational database helped transform data into information, the company’s work in holographic data storage could turn that information into knowledge.
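To make the associative-retrieval idea concrete, here is a minimal software sketch of the best-match search Sincerbox describes: compare a query bit pattern against every stored page and return the closest one. In a holographic system that comparison would happen optically, across all pages in parallel; the labels, patterns, and Hamming-distance measure below are purely illustrative.

```python
# Minimal software sketch of "associative retrieval": find the stored page
# whose bit pattern most closely matches a query pattern. A holographic system
# would perform this comparison optically, across all pages in parallel; here
# it is simulated serially with Hamming distance. Labels and patterns are made up.

def hamming(a, b):
    """Count the bit positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def best_match(pages, query):
    """Return the label of the stored page closest to the query pattern."""
    return min(pages, key=lambda label: hamming(pages[label], query))

pages = {
    "customer-records": "1011000011",
    "call-detail":      "0110110001",
    "billing-history":  "1011010110",
}

print(best_match(pages, "1011010111"))  # -> "billing-history"
```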

Yet, even if associative retrieval doesn’t pan out, holographic technology still holds promise for users with demanding data storage requirements. For example, Sprint, which operates an enormous data warehouse of customer information, might be interested in holographic technology–if it proves to have the reliability of magnetic storage. “Holographic data storage seems very promising for now. It’s on the order-of-magnitude improvement that we need in order to justify the risk of using a new technology,” says Hector Martinez, director of business and technical architecture for Sprint’s Business Information and Technology Solutions.

A terabit-per-second network

In the era of network computing, the ability to store huge amounts of data would become all the more important if you also had the corresponding ability to transmit that data at lightning speeds. To that end, Bell Labs demonstrated last spring a technique for transmitting data over fiber-optic lines at a rate of one terabit per second. At that speed, which is more than 400 times faster than the current technology, the text of 300 years of a daily newspaper could be transmitted in a mere second. The new fiber-optic technology uses a combination of multiplexing techniques in which polarized light beams of slightly different wavelengths are used to carry data simultaneously over the same glass fibers.
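Some rough arithmetic, using assumed channel counts and per-channel rates rather than the exact Bell Labs configuration, shows how wavelength and polarization multiplexing multiply a modest per-channel rate into a terabit-per-second aggregate.

```python
# Rough arithmetic: dense wavelength-division multiplexing plus polarization
# multiplexing turns a modest per-channel rate into a terabit-per-second
# aggregate. The channel counts and per-channel rate are assumptions for
# illustration, not the exact Bell Labs configuration.

wavelengths = 25         # assumed number of distinct wavelengths
polarizations = 2        # two orthogonal polarization states per wavelength
per_channel_gbps = 20    # assumed data rate per channel, in Gbps

aggregate_gbps = wavelengths * polarizations * per_channel_gbps
print(f"Aggregate rate: {aggregate_gbps / 1000:.1f} Tbps")  # -> 1.0 Tbps
```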

Although the basic physics of the technology has been proven, several obstacles exist. The first is that, because the light signals are weak–in multiplexing, a beam must be split into, say, 100 different signals–they need to be reamplified at intermediate points, for example, every 50 miles. But that game can only be played for so long. “After several such amplification processes, the signal-to-noise ratio becomes unacceptably small, and the signals can’t be received error-free,” says Andrew Chraplyvy, distinguished member of the technical staff at Bell Laboratories, now part of Lucent Technologies. Thus, unless further advances are made, a terabit-per-second system will not be suitable for long distances, that is, for transoceanic transmission.
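A common textbook rule of thumb for chains of optical amplifiers (not Bell Labs’ own analysis) makes the point: every amplified span adds noise, so the optical signal-to-noise ratio falls by roughly 10 log10(N) dB over N spans. The parameter values in the sketch below are illustrative assumptions.

```python
# Textbook rule of thumb for a chain of identical optical amplifiers: each
# amplified span adds noise, so optical signal-to-noise ratio (OSNR) drops by
# about 10*log10(N) dB over N spans. Launch power, span loss, and amplifier
# noise figure below are illustrative assumptions.
import math

def osnr_db(launch_power_dbm, span_loss_db, noise_figure_db, spans):
    """Approximate OSNR (0.1 nm reference bandwidth) after N amplified spans."""
    return 58 + launch_power_dbm - span_loss_db - noise_figure_db - 10 * math.log10(spans)

for n in (1, 5, 10, 20):
    print(f"{n:2d} spans -> {osnr_db(0, 20, 5, n):.1f} dB OSNR")
# Every tenfold increase in span count costs roughly 10 dB of OSNR, which is
# why repeated reamplification eventually makes error-free reception impossible.
```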

Furthermore, the multiplexing techniques need further refinement. “The fibers themselves aren’t the problem because they inherently have very large bandwidth,” says Chraplyvy. “The problem is in getting all the bits on and off the fiber when you’re trying to transmit terabits of information per second.”

According to Chraplyvy, the basic technology for multiplexing light at different wavelengths is about ready for commercialization; Lucent Technologies is offering a basic system that can handle 20Gbps. The technique for multiplexing polarized light, though, still needs work. “We know how to do this in the lab, but nobody’s tried it in the field,” he says.

If the technological kinks can be worked out, terabit-per-second fiber-optic systems could accelerate the era of network computing, making trivial the transmission and reception of large applications and huge quantities of data. One of the strongest arguments against the Net PC, Java, and other network-reliant entities has been that users will not have the patience to download their word-processing applications, spreadsheets, and other data. Terabit-per-second transmission would nullify that argument.
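Some quick arithmetic makes the point concrete; the application size and modem speed below are illustrative assumptions.

```python
# Quick arithmetic behind the claim: at terabit-per-second rates, pulling an
# entire application across the network is imperceptible. The 50MB application
# size and the 28.8kbps modem rate are illustrative assumptions.

app_bits = 50 * 1_000_000 * 8           # a hypothetical 50MB application

modem_seconds = app_bits / 28_800       # 28.8kbps dial-up modem
terabit_seconds = app_bits / 1e12       # 1Tbps fiber link

print(f"Modem download:   {modem_seconds / 3600:.1f} hours")  # ~3.9 hours
print(f"Terabit download: {terabit_seconds * 1000:.2f} ms")   # ~0.4 ms
```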

The technology could also speed the acceptance of applications that incorporate virtual-reality environments. In theory, such environments could be generated by a powerful computer located at a centralized site and then downloaded in a glorious gush of ones and zeros to a remote location, instead of having to be generated at that location by another computer that is typically less powerful. This scenario appeals to many IS managers. “For those very highly data-intensive applications, you wouldn’t have geographic boundaries anymore,” notes Steven C. Rubinow, vice president of corporate management information systems for Fidelity Investments.

Intelligent data replication

Meanwhile, the Xerox Palo Alto Research Center is working on improving existing database technology to exploit today’s networks. Initiated in 1991, Xerox PARC’s Bayou Project is focused on delivering an architecture for remote data retrieval that doesn’t assume everyone is linked via T-3 lines. “The world isn’t well connected with high-speed links,” notes Mike Spreitzer, research manager for Bayou. In addition, the structure of what’s connected and what isn’t changes over time, he says. Thus, the goal of Bayou is to provide a database infrastructure for handling such real-world conditions.

The Bayou technology supports any data model–relational, object, flat files, whatever. The key is to let users access and change data remotely in a manner that guarantees accuracy. Some current groupware products–most notably IBM’s Lotus Notes–use replication to give users access to data. These products, however, have only rudimentary functionality. In Notes, for example, conflict resolution between two users seeking to update the same field simply yields access on a first-come, first-served basis, according to Spreitzer.

The Bayou technology deploys more sophisticated techniques for resolving such conflicts to enable a remote client to hook up to different servers and still receive accurate data. “The idea is to help one client see some consistency with its own actions as it moves from one server to another,” explains Spreitzer. Specifically, a quartet of guarantees–Read Your Writes, Monotonic Reads, Writes Follow Reads, and Monotonic Writes–is used to ensure data integrity. “The idea is to capture some particular writes that the next server needs to see,” says Spreitzer. In addition, users can specify how they want the system to resolve conflicts, and this specification can be application-based and performed by a built-in script, according to Spreitzer.
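As a rough illustration of one of those guarantees, the sketch below shows a Read Your Writes check in miniature: a client session remembers the writes it has issued and refuses to read from a server that has not yet seen them. The class and method names are hypothetical, and the real Bayou protocol is considerably more elaborate.

```python
# Minimal sketch of one Bayou-style session guarantee, Read Your Writes: a
# client session tracks the writes it has issued and reads only from servers
# that have already applied all of them. Names are hypothetical; the real
# Bayou protocol is considerably more elaborate.

class Server:
    def __init__(self):
        self.seen_writes = set()   # IDs of writes this replica has applied
        self.data = {}

    def apply_write(self, write_id, key, value):
        self.seen_writes.add(write_id)
        self.data[key] = value

class Session:
    def __init__(self):
        self.my_writes = set()     # IDs of writes issued in this session

    def write(self, server, write_id, key, value):
        server.apply_write(write_id, key, value)
        self.my_writes.add(write_id)

    def read(self, server, key):
        if not self.my_writes <= server.seen_writes:
            raise RuntimeError("server has not yet seen this session's writes")
        return server.data.get(key)

s1, s2 = Server(), Server()
session = Session()
session.write(s1, "w1", "status", "shipped")
print(session.read(s1, "status"))   # ok: s1 has applied w1
# session.read(s2, "status")        # would raise: s2 has not seen w1 yet
```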

Perhaps the most notable aspect of the Bayou architecture has to do with the notion of committed and tentative updates. Users can see two views of the data: they can view only the results of updates produced by confirmed writes or they can see a full view, including tentative updates. Both views can be displayed simultaneously, says Spreitzer.
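The committed-versus-tentative distinction can likewise be sketched in a few lines: a replica keeps a log of committed writes plus tentative ones and can present either view on demand. Again, the names and structure are hypothetical.

```python
# Small sketch of the committed-versus-tentative idea: a replica keeps a log of
# committed writes plus tentative ones, and can present either the committed-only
# view or the full view that includes tentative updates. Hypothetical names only.

class Replica:
    def __init__(self):
        self.committed = []    # writes confirmed by the primary
        self.tentative = []    # writes accepted locally but not yet committed

    def _apply(self, writes):
        view = {}
        for key, value in writes:
            view[key] = value
        return view

    def committed_view(self):
        return self._apply(self.committed)

    def full_view(self):
        return self._apply(self.committed + self.tentative)

r = Replica()
r.committed.append(("meeting", "3pm"))
r.tentative.append(("meeting", "4pm"))   # an update still awaiting confirmation
print(r.committed_view())   # {'meeting': '3pm'}
print(r.full_view())        # {'meeting': '4pm'}
```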

The next user interface

As the IT industry enters a new era of network computing, perhaps one of the biggest weaknesses is the user interface. Many in the industry assert that the graphical user interface (GUI) has simply run out of steam. “We need to figure out some sort of network equivalent of the graphical interface,” asserts Steven Levy, technology columnist for Newsweek.

One sure bet is that the next user interface will be more user friendly, probably by allowing people to communicate with their computers in a way that more closely mimics interpersonal communication. To that end, the use of speech seems a natural progression.

Already, commercial products for speech recognition are entering the market. Just last June, IBM introduced version 3.0 of VoiceType, which can handle continuous speech for limited commands such as “open file story.doc.” For unconstrained recognition–for example, when a person wants to dictate a letter–VoiceType requires that the speaker take short pauses between words.

There are other limitations. “Right now, the system is still rather fragile in a noisy environment,” admits David Nahamoo, senior manager for the human language technologies department at the T.J. Watson lab. But despite its shortcomings, IBM’s VoiceType is impressive for what it promises for tomorrow. IBM hopes to achieve the field’s Holy Grail– machine recognition of speaker-independent continuous speech–within the next five years, says Nahamoo. In preparation for that day, IBM is working to incorporate the VoiceType technology into Merlin, the company’s next generation of OS/2.

But speech recognition is merely the first step in creating a new voice-based computer interface. After recognizing speech, that is, being able to identify the words in a spoken sentence, the computer must be able to understand the meaning behind those words. To accomplish that task, IBM, along with various other R&D centers including Bell Labs, MIT, and Microsoft, is currently undertaking research in natural language understanding.

More GUI miles per gallon

A voice interface may be in your future, but, for now, the GUI is here to stay. That said, Xerox PARC has been busy pushing the limits of the technology it helped pioneer.

Furthering research that originated at the University of Washington, Xerox PARC has developed Magic Lens filters–arbitrarily shaped regions of the GUI that a user can position over an on-screen application, much as a magnifying glass might be placed over a newspaper. The lens would perform certain functions on the selected region, and different lenses might be combined to perform complex queries against an existing database. Theoretically, the technology would enable even novice users to construct complicated queries easily through a step-by-step visual process. According to Eric Bier, Xerox PARC research manager, Magic Lenses can be used with any kind of database as long as the data contains correlative 2-D graphical information to enable the visual querying.
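A rough software analogy for how lens composition maps to queries: each lens acts as a predicate over the records lying beneath it, and stacking lenses combines their predicates into a compound query. The lens definitions and data below are made up; the actual Magic Lens work is about direct visual manipulation in a GUI, not code.

```python
# Rough software analogy for composing Magic Lens filters: each lens is a
# predicate over the records lying beneath it, and stacking lenses combines
# their predicates into a compound query. The lenses and data are made up;
# the real technology works by direct visual manipulation in a GUI.

cities = [
    {"name": "Austin",  "population": 500_000, "has_airport": True},
    {"name": "Fresno",  "population": 400_000, "has_airport": False},
    {"name": "Seattle", "population": 520_000, "has_airport": True},
]

def big_city_lens(record):
    return record["population"] > 450_000

def airport_lens(record):
    return record["has_airport"]

def stack_lenses(*lenses):
    """Overlapping lenses compose: a record passes only if every lens accepts it."""
    return lambda record: all(lens(record) for lens in lenses)

combined = stack_lenses(big_city_lens, airport_lens)
print([c["name"] for c in cities if combined(c)])   # ['Austin', 'Seattle']
```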

Xerox PARC is also working on making GUI functionality more portable. The goal is to be able to transfer a feature of an application usually associated with a toolbar or dialog box to a movable semitransparent sheet called a “Toolglass.” Once the feature is moved, it can be used elsewhere. “You’d learn it once, and it’s good for many applications,” says Bier.

Xerox’s research notwithstanding, some industry gurus claim that the user interface is beside the point. “The real issue is how one represents knowledge and how one navigates through that knowledge. The user interface is simply the mechanical device to do it,” asserts Allan Frank, chief technology officer for KPMG. The issue of knowledge representation has certainly been brought to the fore by the burgeoning popularity of the Internet. “There is no underlying structure to the Web; it’s simply a set of chaotic Web pages that’s linked,” says Frank. “The question is: how does one put a structure on top of this?”

Web agents are multiplying

Until a way is found, users will struggle with browsers that are good merely for wading from one Web site to another. Or, people might let their software agents do the wading.

Software agents come in varying degrees of intelligence, the most primitive of which perform repetitive daily tasks, such as going through a user’s email and prioritizing the different messages. Other agents are designed to reach across a network to find specific information that a user might want. At MIT’s Media Lab, researchers have been working on even more sophisticated agents that might, for example, automate the purchasing function. In this scenario a buyer agent would travel around a network to find the seller agent offering the best possible deal for a given commodity.
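A toy sketch of that buyer-agent scenario might look like the following, with the agents, commodity, and prices all hypothetical: the buyer agent collects quotes from each seller agent and commits to the best offer.

```python
# Toy sketch of the buyer-agent scenario: a buyer agent asks each seller agent
# for a quote on a commodity and commits to the cheapest offer. The agents,
# commodity, and prices are all hypothetical.

class SellerAgent:
    def __init__(self, name, prices):
        self.name = name
        self.prices = prices            # commodity -> asking price

    def quote(self, commodity):
        return self.prices.get(commodity)

def buyer_agent(commodity, sellers):
    """Collect quotes from every seller and return the best (lowest) offer."""
    offers = [(seller.quote(commodity), seller.name)
              for seller in sellers if seller.quote(commodity) is not None]
    return min(offers) if offers else None

sellers = [
    SellerAgent("acme",   {"toner": 42.00}),
    SellerAgent("globex", {"toner": 39.50}),
]
print(buyer_agent("toner", sellers))    # -> (39.5, 'globex')
```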

MIT’s agent technology, spearheaded by Professor Pattie Maes, has been spun off into a start-up company called Agents Inc., located in Cambridge, Mass. The company’s first product is Firefly, which deploys agent technology to automate the word-of-mouth process. The initial application is consumer oriented, a way for people to find other people with similar tastes in entertainment–books, movies, music, etc.–through Agents’ Web site (www.firefly.com). For example, if your list of all-time favorite movies includes many titles that are also on another consumer’s list of favorites, then chances are that the two of you would benefit from exchanging information about other movies. “The basic premise of this approach, called collaborative filtering, is that, if I know you, then I can recommend things for you,” says Max Metral, Agents’ chief technology officer and a former graduate student of Professor Maes.
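The collaborative-filtering premise can be sketched in a few lines: measure how much two users’ favorites overlap, then recommend titles liked by the most similar user that the first user has not listed. The users and titles below are invented, and Firefly’s actual algorithms are not described in detail here.

```python
# Minimal sketch of collaborative filtering over favorite-movie lists: measure
# how much two users' favorites overlap (Jaccard similarity), then recommend
# titles liked by the most similar user that the first user hasn't listed.
# The users and titles are invented.

favorites = {
    "you":   {"Blade Runner", "Brazil", "Alien"},
    "alice": {"Blade Runner", "Brazil", "Metropolis"},
    "bob":   {"Caddyshack", "Airplane!"},
}

def similarity(a, b):
    """Jaccard similarity: shared favorites divided by combined favorites."""
    return len(a & b) / len(a | b)

def recommend(user, favorites):
    others = {name: favs for name, favs in favorites.items() if name != user}
    closest = max(others, key=lambda name: similarity(favorites[user], others[name]))
    return others[closest] - favorites[user]

print(recommend("you", favorites))   # -> {'Metropolis'}
```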

Although the initial implementation of Firefly is consumer oriented, Metral sees possibilities for the technology to be deployed by companies, particularly as they begin deploying intranet applications. For example, says Metral, software agents could be used by a large corporation that has several R&D labs to enable researchers at those sites to learn of related work that colleagues might be doing. Through such applications, Metral sees agent technology evolving into a type of intelligent middleware layer that would sit on top of a database and allow sophisticated datamining.

Agents are a huge step in the right direction, agrees Fidelity’s Rubinow. “One of the downfalls of software,” he says, “is that the world changes so quickly that the software can’t keep up with the changes. So I think it’s a great idea to have the software modify itself.”

Weaned on vacuum tubes

Historically, it has often been difficult to foresee the business benefits of many IT innovations that research organizations like Bell Labs, IBM, Xerox PARC, and MIT have pioneered. Indeed, half a century ago the idea of using tiny electrical switching devices as computing elements may have seemed downright silly to those weaned on vacuum tubes. Somehow, though, transistors–as well as magnetic disk storage, the GUI, and other innovations–have managed to become all but indispensable in IT. Perhaps the same auspicious destiny awaits holographic data storage, speech recognition, and software agents.

For now, though, we can only imagine how these technologies may transform our industry as we enter a new era of network computing, which may include talking PCs loaded with intelligent, highly intuitive user interfaces presenting sophisticated queries via advanced software agents. Says KPMG’s Frank: “Some of the research that’s going to come out of these labs will be stuff that we won’t even realize how significant it is for another 10 years.”

Microsoft: A research powerhouse in the making

With an annual R&D budget topping $1 billion, Microsoft is hoping one day to reach the upper echelon of IT research and join the ranks of organizations like Bell Labs, Xerox PARC, IBM’s T.J. Watson and Almaden Research Centers, and MIT. To that end, Microsoft five years ago established the 100-person Microsoft Research Group, charged with making software easier, faster, and cheaper to use.

One intriguing area of research at Microsoft is “intentional programming,” which could make languages like COBOL and C++ obsolete by allowing systems to be developed using “intentions”–user-specified programming abstractions containing information about the software’s syntax, implementation details, and feature variations. The goal of IP is to provide an entirely new development environment, one that goes beyond object-oriented programming, with its notion of reusable software components. IP is neither a method nor a new language–it’s something of a combination of the two.

A huge advantage of IP is that it would extend the life of software because, theoretically, the meaning of IP-encoded software can be sustained independently of the programming notation and implementation techniques used. “You will have software immortality,” claims Charles Simonyi, Microsoft’s chief architect. “Plus, IP will let you freely adjust notation or rendering as tastes and technology evolve. You just need a library of intentions [or powerful subroutines] for any language you want to work with.” Microsoft would most likely package IP with existing C or C++ libraries, according to company officials. With IP, asserts Simonyi, “no new development will make what you have obsolete.”
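As a loose analogy for the separation IP claims between a program’s meaning and its notation, the toy sketch below keeps a tiny “intention” as data and renders it into two different surface syntaxes on demand. Microsoft’s actual IP system is far richer; everything here is invented for illustration.

```python
# Toy analogy for intentional programming's separation of meaning from notation:
# a tiny "intention" tree is kept as data and rendered into two different surface
# syntaxes on demand. Invented for illustration; this is not Microsoft's IP system.

# An "intention": add two quantities, then scale the result.
intention = ("scale", ("add", "price", "tax"), 1.05)

def render_infix(node):
    """Render the intention in a C-like infix notation."""
    if isinstance(node, str):
        return node
    op, *args = node
    if op == "add":
        return f"({render_infix(args[0])} + {render_infix(args[1])})"
    if op == "scale":
        return f"{render_infix(args[0])} * {args[1]}"

def render_prefix(node):
    """Render the same intention in a Lisp-like prefix notation."""
    if isinstance(node, str):
        return node
    op, *args = node
    rendered = [render_prefix(a) if isinstance(a, (tuple, str)) else str(a) for a in args]
    return "(" + " ".join([op] + rendered) + ")"

print(render_infix(intention))    # (price + tax) * 1.05
print(render_prefix(intention))   # (scale (add price tax) 1.05)
```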

Already Microsoft’s researchers are developing code using IP techniques and components. If all goes well, the company will be marketing products based on IP within the next five years.

IP isn’t the most abstruse project at Microsoft. That distinction goes to “telepresence,” or the idea of “being there without really being there,” according to Microsoft literature. The telepresence project is headed by senior researcher Gordon Bell, the renowned architect of VAX computers, who works out of Microsoft’s San Francisco-based Bay Area Research Center. “Telepresence is a whole collection of things,” says Bell. “It answers [the question of] how we can be present at meetings held somewhere else. It involves space shifting, time shifting, and compression.”

The way telepresence will manifest is uncertain, but Microsoft has already begun to explore the concept by voice/audio-enabling PowerPoint, the company’s popular slide presentation program. And Microsoft is working on getting the Mbone multimedia Internet backbone to run on Windows NT in anticipation of broadcast presentations on demand over the Internet.

Telepresence may sound like pie in the sky, but IS managers can already anticipate uses for a technology that would take today’s collaborative computing a step further. “Our engineers could benefit from shared full whiteboarding and videoconferencing,” says Jerry Lee, IS director with PharmChem Labs, a Menlo Park, Calif., drug-testing company. “Getting the drawings and other information to users quickly enough would no longer be the problem. Instead, the new problem would be getting the human brain to process the information quickly enough.”

–Mary Jo Foley

COPYRIGHT 1996 Cahners Publishing Associates LP
