Holography is best known as a method of creating a 3D image that can be viewed without special glasses or other devices. Holography itself was invented by Hungarian-British physicist Dennis Gabor in 1947.

While trying to improve an electron microscope, he discovered a method to record the entire field information – amplitude and phase – and not just the usual intensity.

https://i0.wp.com/images.news18.com/ibnlive/uploads/2020/09/1599915247_untitled-design-75.png?resize=696%2C464&ssl=1

The breakthrough in the technology followed the invention and development of the laser, which is distinguished from other light sources by its coherence (meaning that the light waves are in phase in space and time).

In the 1960s, Soviet physicist Yuri Denisyuk created a single-beam technique for producing high-quality images.

This method became widely known as Denisyuk holography.

https://i0.wp.com/i.pinimg.com/736x/72/33/d0/7233d060519b928dfbe4b5709630b533--daily-photo-deborah-lippmann.jpg?resize=696%2C593&ssl=1

Interestingly, Denisyuk took inspiration from the Lippmann color photography technique (interferential photography), which records the entire visible color spectrum by interference rather than by dyes.

When a Denisyuk hologram is recorded with at least three lasers, a full-color hologram depicting a very realistic image of the object can be obtained.

A new method called tensor holography could enable the creation of holograms for virtual reality, 3D printing, medical imaging, and more — and it can run on a smartphone.

https://i0.wp.com/www.thedigitalbridges.com/wp-content/uploads/2017/01/virtual-reality-gaming-headsets.jpg?resize=696%2C392&ssl=1

Despite years of hype, virtual reality headsets have yet to topple TV or computer screens as the go-to devices for video viewing. One reason: VR can make users feel sick.

Nausea and eye strain can result because VR creates an illusion of 3D viewing although the user is in fact staring at a fixed-distance 2D display. The solution for better 3D visualization could lie in a 60-year-old technology remade for the digital world: holograms.

Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results.

Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.

A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image.

https://i0.wp.com/assets.entrepreneur.com/images/misc/1618350502_holograma2.jpg?resize=696%2C547&ssl=1

In contrast, a hologram encodes both the brightness and phase of each light wave. That combination delivers a truer depiction of a scene’s parallax and depth.
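
To make that distinction concrete, here is a minimal numerical sketch (my illustration, not from the article): the optical field at each pixel can be written as a complex number A·exp(iφ); a camera records only the intensity, discarding the phase, while a hologram keeps both quantities.

```python
# Minimal sketch: what a photograph keeps versus what a hologram encodes.
# The toy 4x4 "scene" below is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)
amplitude = rng.uniform(0.0, 1.0, size=(4, 4))     # per-pixel brightness
phase = rng.uniform(0.0, 2 * np.pi, size=(4, 4))   # per-pixel optical phase

field = amplitude * np.exp(1j * phase)             # full wavefront information

photo = np.abs(field) ** 2                         # photograph: intensity only, phase lost
hologram = np.stack([np.abs(field), np.angle(field)])  # hologram: amplitude and phase
```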

So, while a photograph of Monet’s “Water Lilies” can highlight the paintings’ color palette, a hologram can bring the work to life, rendering the unique 3D texture of each brush stroke. But despite their realism, holograms are a challenge to make and share.

First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves’ phase.

https://i0.wp.com/www.integraf.com/shared/images/custom/custom-security-hologram-closeup-combo.jpg?resize=696%2C305&ssl=1

This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard copy only, making them difficult to reproduce and share.
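
The recording step can be sketched numerically. In the toy simulation below (an assumption for illustration, not the original optical setup), a point-source object wave O interferes with a tilted plane reference wave R, and the plate stores only the intensity of their sum; the object’s phase survives as the fringe pattern, which is what later reconstructs depth.

```python
# Hedged sketch of optical hologram recording: interference of an object wave
# and a reference wave. All numbers (wavelength, geometry) are assumed values.
import numpy as np

n = 256
wavelength = 633e-9                       # assumed He-Ne laser wavelength, metres
k = 2 * np.pi / wavelength

coords = np.linspace(-1e-3, 1e-3, n)      # a 2 mm x 2 mm plate
Y, X = np.meshgrid(coords, coords, indexing="ij")

# Object wave: spherical wave from a point scatterer 0.1 m behind the plate.
z = 0.1
r = np.sqrt(X**2 + Y**2 + z**2)
O = np.exp(1j * k * r) / r

# Reference wave: the other half of the split beam, arriving as a tilted plane wave.
theta = np.deg2rad(1.0)
R = np.exp(1j * k * np.sin(theta) * X) * np.abs(O).mean()

recorded = np.abs(O + R) ** 2             # the fringe pattern stored on the plate
```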

Computer-generated holography sidesteps these challenges by simulating the optical setup. But the process can be a computational slog. Because each point in the scene has a different depth, you can’t apply the same operations for all of them.

That increases the complexity significantly. Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image.
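
A rough sketch of that brute-force, point-by-point computation is shown below; it is my own illustration of the general point-source method, not the researchers’ code. Every scene point contributes a spherical wavelet to every hologram pixel, so the cost grows with the number of points times the number of pixels, which is why large scenes push the work onto supercomputers.

```python
# Assumed illustration of brute-force point-source computer-generated holography.
# Cost scales as O(num_points * num_pixels): each point touches every pixel.
import numpy as np

def point_source_hologram(points, amplitudes, res=512, pitch=8e-6,
                          wavelength=520e-9):
    """points: (N, 3) array of x, y, z positions in metres (z is the depth)."""
    k = 2 * np.pi / wavelength
    coords = (np.arange(res) - res / 2) * pitch
    Y, X = np.meshgrid(coords, coords, indexing="ij")
    field = np.zeros((res, res), dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):   # one full pass per scene point
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r           # spherical wavelet from this point
    return field

# Usage sketch with two hypothetical scene points at different depths.
holo = point_source_hologram(np.array([[0.0, 0.0, 0.20],
                                       [1e-4, -1e-4, 0.25]]),
                             amplitudes=[1.0, 0.5])
```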

https://i0.wp.com/static.scientificamerican.com/sciam/cache/file/A59C99A3-27C6-4436-94F6D8FA4B88606F_source.jpg?resize=696%2C522&ssl=1

Plus, existing algorithms don’t model occlusion with photo-realistic precision. So the MIT team, led by Liang Shi, took a different approach: letting the computer teach physics to itself.

They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation.

The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information.
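
As a rough idea of what such a network looks like, here is a minimal PyTorch sketch. The layer count, channel widths, and the RGB-D-in / amplitude-plus-phase-out interface are my assumptions, not the architecture from the MIT paper.

```python
# Minimal sketch of a fully convolutional "chain of trainable tensors".
# Shapes and layer choices are assumptions for illustration only.
import torch
import torch.nn as nn

class TinyHologramNet(nn.Module):
    def __init__(self, in_ch=4, hidden=24, out_ch=6, layers=6):
        super().__init__()
        blocks = [nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU()]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU()]
        blocks.append(nn.Conv2d(hidden, out_ch, 3, padding=1))  # e.g. amplitude + phase per colour
        self.net = nn.Sequential(*blocks)

    def forward(self, rgbd):              # rgbd: (batch, 4, H, W) colour + depth
        return self.net(rgbd)             # (batch, 6, H, W) hologram estimate

model = TinyHologramNet()
print(model(torch.rand(1, 4, 192, 192)).shape)   # torch.Size([1, 6, 192, 192])
```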

https://i0.wp.com/static.wixstatic.com/media/bafe97_7911de6459e346f9a0f3935e79df266f~mv2.png/v1/fit/w_1000%2Ch_844%2Cal_c/file.png?w=696&ssl=1

Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.

The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram.

To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion.
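
A small data-handling sketch is shown below; the tensor shapes and the idea of pre-rendered tensors are assumptions about how such image/hologram pairs might be packaged, not the team’s actual pipeline.

```python
# Hypothetical container for RGB-D image / hologram training pairs.
import torch
from torch.utils.data import Dataset

class HologramPairs(Dataset):
    def __init__(self, rgbd, holograms):
        # rgbd:      (N, 4, H, W) tensor, colour plus per-pixel depth
        # holograms: (N, C, H, W) tensor, the matching ground-truth holograms
        assert len(rgbd) == len(holograms)
        self.rgbd, self.holograms = rgbd, holograms

    def __len__(self):
        return len(self.rgbd)

    def __getitem__(self, i):
        return self.rgbd[i], self.holograms[i]

# Usage sketch with random stand-in data (the real database holds 4,000 pairs).
pairs = HologramPairs(torch.rand(8, 4, 64, 64), torch.rand(8, 6, 64, 64))
```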

https://i0.wp.com/ak7.picdn.net/shutterstock/videos/1020938857/thumb/12.jpg?w=696&ssl=1

That approach resulted in photo-realistic training data. Next, the algorithm got to work.

By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms.
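
The kind of supervised loop implied by that description might look like the sketch below; the loss function, optimiser, and batch size are my assumptions rather than the paper’s training recipe.

```python
# Hedged sketch of supervised training on image/hologram pairs.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs=10, lr=1e-3, batch_size=8):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for rgbd, target in loader:
            pred = model(rgbd)            # predicted hologram
            loss = loss_fn(pred, target)  # compare with the ground-truth hologram
            opt.zero_grad()
            loss.backward()               # adjust the trainable tensors
            opt.step()
    return model
```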

The fully optimized network operated orders of magnitude faster than physics-based calculations, an efficiency that surprised even the team itself.

Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say.

https://i0.wp.com/www.voxelmatters.com/wp-content/uploads/2018/12/robots_3D-printing.jpg?w=696&ssl=1

This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern.

Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.

“It’s a considerable leap that could completely change people’s attitudes toward holography,” the researchers say. “We feel like neural networks were born for this task.”

MIT / ABC Flash Point News 2023.
