Holography is best known as a method of creating 3D images that can be viewed without special glasses or other devices. Holography itself was invented by the Hungarian-British physicist Dennis Gabor in 1947.
While trying to improve an electron microscope, he discovered a method to record the entire field information – amplitude and phase – and not just the usual intensity.
The breakthrough in the technology followed the invention and development of the laser, which is distinguished from other light sources by its coherence (meaning that the light waves stay in phase in space and time).
In the 1960s, Soviet physicist Yuri Denisyuk created a single-beam technique for producing high-quality images.
This method became widely known as Denisyuk holography.
Interestingly, Denisyuk took inspiration from the Lippmann color photography technique (interferential photography), which uses interference to record the entire visible color spectrum.
When a Denisyuk hologram is recorded with at least three lasers, full-color holograms depicting a strikingly realistic image of an object can be obtained.
A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.
Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick.
Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.
Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.
Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results.
Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.
A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image.
In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth.
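The difference between the two recordings can be sketched numerically. In this minimal illustration (all wavelengths and geometry are hypothetical, not from the article), two object waves with identical brightness but different phase profiles look the same to a plain intensity recording, but produce different fringe patterns once a coherent reference beam is added before the intensity is recorded:

```python
import numpy as np

x = np.linspace(0, 1e-3, 512)           # 1 mm strip of the recording plane
wavelength = 633e-9                     # red HeNe laser line
k = 2 * np.pi / wavelength

# Two object waves: identical amplitude, different phase (wavefront tilt).
obj_a = np.exp(1j * k * 0.001 * x)      # gently tilted wavefront
obj_b = np.exp(1j * k * 0.002 * x)      # more steeply tilted wavefront

# A photograph records only intensity: the two scenes are indistinguishable.
photo_a = np.abs(obj_a) ** 2
photo_b = np.abs(obj_b) ** 2
assert np.allclose(photo_a, photo_b)    # phase information is lost

# A hologram interferes each wave with a reference beam first.
ref = np.ones_like(x)                   # on-axis plane-wave reference
holo_a = np.abs(obj_a + ref) ** 2
holo_b = np.abs(obj_b + ref) ** 2
assert not np.allclose(holo_a, holo_b)  # the fringes now encode the phase
```

The recorded fringe spacing depends on the phase difference between object and reference beams, which is exactly the depth-carrying information a flat photograph discards.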
So, while a photograph of Monet’s “Water Lilies” can highlight the painting’s color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.
First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves’ phase.
This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard copy only, making them difficult to reproduce and share.
Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. Because each point in the scene has a different depth, you can’t apply the same operations for all of them.
That increases the complexity significantly. Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image.
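A rough sketch shows why the cost grows so quickly. In the classic point-source approach to computer-generated holography (the sizes and geometry below are hypothetical, not the researchers' setup), every scene point sits at its own depth, so each one contributes its own spherical wavefront summed over every hologram pixel, for a total cost on the order of points × pixels:

```python
import numpy as np

wavelength = 532e-9                     # green laser line
k = 2 * np.pi / wavelength
n = 256                                 # hologram resolution: n x n pixels
pitch = 8e-6                            # pixel pitch in metres
ys, xs = np.mgrid[0:n, 0:n] * pitch     # pixel coordinates on the plane

# A toy scene: 200 random points, each at its own depth (0.05-0.15 m).
rng = np.random.default_rng(0)
points = rng.uniform([0, 0, 0.05], [n * pitch, n * pitch, 0.15], (200, 3))

# One full pass over all pixels per scene point: O(points * pixels) work.
field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r     # spherical wavefront from this point

hologram = np.angle(field)              # phase-only hologram pattern
```

Scaling this toy loop to millions of scene points and megapixel holograms is what turns a single frame into a supercomputer-sized job.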
Plus, existing algorithms don’t model occlusion with photo-realistic precision. So the MIT team, led by Liang Shi, took a different approach: letting the computer teach itself the physics.
They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation.
The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information.
Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.
The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram.
To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion.
That approach resulted in photo-realistic training data. Next, the algorithm got to work.
By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms.
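The tweak-per-pair idea can be illustrated with a deliberately tiny stand-in for the real network (this is not the MIT model; a single trainable tensor replaces the full convolutional stack, and all shapes and values are hypothetical). A parameter tensor maps a 4-channel RGB-D input toward its paired target output, and gradient descent nudges the parameters after each training pair:

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.normal(size=(4, 2))        # the mapping the "network" must learn

# 4,000 toy training pairs: an RGB-D pixel and its paired target output.
rgbd = rng.normal(size=(4000, 4))
target = rgbd @ W_true

W = np.zeros((4, 2))                    # trainable parameters, start from zero
lr = 0.01                               # learning rate
for rgbd_px, target_px in zip(rgbd, target):
    pred = rgbd_px @ W                  # forward pass for this pair
    grad = np.outer(rgbd_px, pred - target_px)  # gradient of squared error
    W -= lr * grad                      # tweak parameters after each pair

# After one pass over the pairs, the parameters match the true mapping.
assert np.allclose(W, W_true, atol=1e-2)
```

The real network repeats this loop over stacks of convolutional tensors rather than one matrix, but the principle is the same: each image pair supplies an error signal, and the parameters are successively adjusted to shrink it.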
The fully optimized network operated orders of magnitude faster than physics-based calculations. That efficiency surprised even the team.
Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say.
This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern.
Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.
“It’s a considerable leap that could completely change people’s attitudes toward holography,” the researchers say. “We feel like neural networks were born for this task.”
MIT / ABC Flash Point News 2023.