Holographic Storage Technologies

The theory of holography was developed by Dennis Gabor, a Hungarian physicist, in 1947. His theory was originally intended to increase the resolving power of electron microscopes. Gabor proved his theory not with an electron beam, but with a light beam; the result was the first hologram ever made. These early holograms were legible but plagued with imperfections, because Gabor lacked the coherent light source needed to make the crisp, clear holograms we can produce today: he needed laser light. In the 1960s, two engineers at the University of Michigan, Emmett Leith and Juris Upatnieks, building on Gabor's discoveries, developed a technique that produced a three-dimensional image of an object: the diffuse-light hologram. Today we can see holograms, or 3D images, on credit cards, magazine covers and in art galleries. Yet this unique method of capturing information with lasers has many more applications in the industrial world and is on the verge of revolutionising data storage technology as we know it.
A project at Lucent Technologies' Bell Laboratories could result in the first commercially viable holographic storage system. Leveraging advances across a number of technologies, from micromirror arrays to new nonlinear polymer recording media, the team hopes to spin the project off into a startup. The technology not only offers very high storage densities, it could also access that data at very high rates, because holographic methods read an entire page of data in one operation. While conventional optical storage techniques read and write data by altering an optical medium on a per-bit basis, holographic storage records an entire interference pattern in a single operation.

This technique makes unique demands on both the light source and the recording medium. While a conventional optical disk system can get by with a relatively low-power laser diode and a single detector, holographic techniques require high-quality laser sources and detector arrays. These components, however, have been getting cheaper: CMOS pixel sensors offer the potential for low-cost detection of data arrays, while digital micromirrors can be used for data input from electronic systems.

The biggest challenge has been devising a suitable optical medium for storing the interference patterns. The team turned to nonlinear polymers in its search for that key component. What is needed is a medium that can support the overlap of megabyte data pages, each with a high enough diffraction efficiency to enable high transfer rates. These two demands sound simple, but they lead to a long list of stringent criteria that the material must satisfy. The researchers have found what they believe is a suitable candidate: an acrylic polymer compound that polymerises in response to light. In addition to having the required optical properties, the new material, being a polymer, is easy to form into thick films. Film thickness directly relates to storage capacity, and inorganic nonlinear materials, which are crystalline, are difficult to build up into thick films. The researchers have built a prototype system using off-the-shelf components such as camera lenses and digital micromirrors from Texas Instruments.
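To get a feel for why page-parallel readout translates into high transfer rates, the short sketch below compares per-bit and per-page readout. The page size, exposure rate, and serial bit rate are illustrative assumptions, not figures from the Bell Labs prototype.

```python
# Rough throughput comparison: serial (per-bit) vs. page-parallel (holographic) readout.
# All numbers are illustrative assumptions, not measured values.

PAGE_BITS = 1_000_000        # assume ~1 Mbit of data per holographic page
PAGES_PER_SECOND = 1_000     # assume the detector array captures 1,000 pages per second
SERIAL_BIT_RATE = 50e6       # assume a conventional drive reads ~50 Mbit/s, bit by bit

holographic_rate = PAGE_BITS * PAGES_PER_SECOND   # bits per second
print(f"Page-parallel readout: {holographic_rate / 1e9:.1f} Gbit/s")
print(f"Serial readout:        {SERIAL_BIT_RATE / 1e6:.0f} Mbit/s")
print(f"Speed-up factor:       {holographic_rate / SERIAL_BIT_RATE:.0f}x")
```

Even with modest assumptions, reading a whole page per exposure yields a throughput many times that of bit-serial readout, which is the core appeal of the approach.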
Many novel technologies are being pursued in parallel to achieve higher capacities per disk and higher data transfer rates. Several unconventional, longer-term optical data storage techniques promise data densities greater than 100 Gb/in², perhaps even exceeding 1 Tb/in². These include near-field and solid immersion lens approaches, volumetric (multi-layer and holographic) storage, and probe storage techniques.
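For a sense of what 100 Gb/in² means in practice, the sketch below converts that areal density into a per-disk capacity for a standard 120 mm disk; the data-zone radii are approximate assumptions, and the density figure is simply the projection quoted above.

```python
import math

# Convert a projected areal density into an approximate per-disk capacity.
# Assumes a 120 mm disk with a data zone between roughly 24 mm and 58 mm radius.

AREAL_DENSITY_GBIT_PER_IN2 = 100.0        # projected density from the text (gigabits per square inch)
INNER_RADIUS_M, OUTER_RADIUS_M = 0.024, 0.058
M2_PER_IN2 = 0.0254 ** 2                  # square metres per square inch

data_area_m2 = math.pi * (OUTER_RADIUS_M ** 2 - INNER_RADIUS_M ** 2)
data_area_in2 = data_area_m2 / M2_PER_IN2
capacity_gbit = AREAL_DENSITY_GBIT_PER_IN2 * data_area_in2
print(f"Data area: {data_area_in2:.1f} in², capacity ≈ {capacity_gbit:.0f} Gbit ≈ {capacity_gbit / 8:.0f} GB")
```

Under these assumptions, 100 Gb/in² corresponds to well over a hundred gigabytes on a single disk, an order of magnitude beyond the DVD-class media of the time.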
A solid immersion lens approach using MO media, pursued by Terastor in the United States, promises at least 100 Gb/in² areal density. This technique relies on flying a small optical lens about 50 nm above the storage medium to achieve spot sizes smaller than the diffraction limit of light. Since the head is lighter, this type of technology may lead to access times comparable with hard drives. Several Japanese companies are intrigued by the approach and are involved in Terastor's activities. Similar objectives are pursued by Quinta, a Seagate Company, where increasing amounts of optical technology, including optical fibers and fiber switches, are used to reduce the size and weight of the head, which does not fly but is still placed very close to the disk medium.
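The sketch below illustrates, in rough terms, why a solid immersion lens shrinks the focused spot: the diffraction-limited spot scales as λ/(2·NA), and immersing the focus in a high-index lens raises the effective numerical aperture beyond what is achievable in air. The wavelength, aperture, and refractive index values are illustrative assumptions, not Terastor's design parameters.

```python
# Diffraction-limited spot size, with and without a solid immersion lens (SIL).
# spot ≈ wavelength / (2 * NA); a SIL of refractive index n raises the effective NA
# to roughly n * sin(theta), which can exceed 1. Values below are illustrative only.

WAVELENGTH_NM = 650          # assume a red laser diode
NA_AIR = 0.6                 # assume a conventional objective focusing in air
SIL_INDEX = 2.0              # assume a high-index glass hemisphere

def spot_size_nm(wavelength_nm: float, na: float) -> float:
    """Approximate diffraction-limited spot diameter."""
    return wavelength_nm / (2.0 * na)

na_effective = SIL_INDEX * NA_AIR        # effective NA with the solid immersion lens
print(f"Spot without SIL: {spot_size_nm(WAVELENGTH_NM, NA_AIR):.0f} nm")
print(f"Spot with SIL:    {spot_size_nm(WAVELENGTH_NM, na_effective):.0f} nm")
```

Because the smaller spot only exists in the near field just below the lens, the head must fly within tens of nanometres of the medium, which is why the flying-height figure above matters.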
Multi-layer storage is pursued in both Japan and the United States. In Japan the effort concentrates on increasing the number of storage layers in a PC-based DVD disk. Some researchers also envision adapting multi-layer recording to MO media by simultaneously reading and computing the data on several layers. Both approaches, however, have limited scalability in the number of layers. In the United States, Call/Recall, Inc. is using a fluorescent disk medium to record and read hundreds of layers. Also in the United States, significant effort is being put into developing holographic storage, aiming for areal densities exceeding 100 Gb/in². Companies in both the United States and Japan are exploring the use of parallel heads to speed up data transfer rates. Finally, in both Japan and the United States, optically assisted probe techniques are being explored to achieve areal densities beyond 1 Tb/in².

In summary, substantial progress in optical disk storage techniques has produced a fast-growing removable data storage market dominated by optical storage. These advances have come through a combination of laser wavelength reduction, increases in the objective lens numerical aperture, better crosstalk management, and coding improvements, under the constant pull of new applications. Undoubtedly, emerging applications will pull optical storage techniques to reach new performance levels, and there is room for further advances in storage capacity as transitions to blue lasers, near-field optical recording, and multi-layer systems occur.
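The summary's point about wavelength and numerical aperture can be made concrete: areal density scales roughly as (NA/λ)², so moving from CD-class optics to DVD-class optics already buys a substantial density gain before coding and crosstalk improvements are counted. The parameter values in the sketch below are approximate, commonly quoted figures used purely for illustration.

```python
# Areal density scales roughly as (NA / wavelength)^2.
# Approximate, commonly quoted optics parameters for CD- and DVD-class drives.

systems = {
    "CD-class":  {"wavelength_nm": 780, "na": 0.45},
    "DVD-class": {"wavelength_nm": 650, "na": 0.60},
}

def relative_density(wavelength_nm: float, na: float) -> float:
    """Relative areal-density figure of merit, (NA / wavelength)^2."""
    return (na / wavelength_nm) ** 2

base = relative_density(**systems["CD-class"])
for name, params in systems.items():
    gain = relative_density(**params) / base
    print(f"{name}: {gain:.1f}x the CD-class optical density")
```

The optics alone account for roughly a 2.5x gain between these two generations; the remaining capacity improvement comes from the coding and crosstalk-management advances mentioned above, and a further step to blue lasers and higher-NA near-field optics would continue the same scaling.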
