The HoloLens Revolution

As recently as 2014, when scientists at NASA’s Jet Propulsion Laboratory in Pasadena plotted the course of one of their Mars rovers, they consulted maps and satellite images, as well as images uploaded from the rover itself. Integrating all those overlapping data sources involved an element of guesswork, and guessing wrong could send the rover along an impassable path or leave it intractably stuck.

Now, JPL scientists have a new tool for studying possible routes. That tool, called OnSight, enables users to explore 3-D stereo views of the Martian environment and to get a natural sense of depth and understanding of spatial relationships. They are able to examine the rover’s worksite from a first-person perspective, plan new activities, and preview the results of their work firsthand.

A pilot group of mission scientists has been testing the application, using it for such tasks as identifying rock formations that merit further study by the Curiosity rover.

That NASA tool is just one of the applications of a new type of computer hardware and software combination called “mixed reality,” or MR. It’s a variant of virtual reality, a technology that has been around for decades and has edged toward the mainstream through new headsets, smart mobile devices with 3-D capabilities, and improvements in display resolution. Mixed reality uses some of the same technology, but instead of creating an immersive experience cut off from the outside world, it blends holograms into the physical environment.

Mixed reality in combination with technologies that enable interaction with holograms, called holographic computing, is poised to be the next big disruptive technology. Major technology companies such as Microsoft, Google, Samsung, Apple, Sony, and Facebook are investing significant funding in the development of low-cost virtual and mixed reality systems, and Microsoft’s HoloLens is already being used in some applications, such as NASA’s OnSight system mentioned above.

These devices promise to break down the barriers between virtual and physical reality, and enable the physical and virtual worlds to intersect in new ways. And they have some very real engineering applications that could well transform the profession as profoundly as the personal computer did a generation ago.


The Holographic Computer

Holograms may seem futuristic, but most of us carry around objects—such as credit cards or driver’s licenses—embedded with them. The recording of the light field necessary to make a hologram requires a laser, a beam splitter, and a photographic medium that enables the light scattered from an object to be recorded and later reconstructed. The resulting image changes as the position and orientation of the viewing system changes in the same way as if the object were still present, thus making the image appear three-dimensional.
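
In textbook terms (a standard holography formulation, added here for illustration rather than taken from this article), the medium records the intensity of the interference between the object wave O and the reference wave R:

    I(x, y) = |R + O|^2 = |R|^2 + |O|^2 + R^*O + RO^*

where ^* denotes the complex conjugate. Re-illuminating the developed recording with the reference beam alone reproduces a term proportional to |R|^2 O, a copy of the original object wave, which is why the image shifts with viewing position just as the real object would.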

Up to now, the most common recording medium has been photographic film, and holograms, as a result, have been 3-D snapshots rather than moving, interactive images. Technologists have been working for decades to find a way to project dynamic, holographic images into the spaces where we live and work. The goal has been to create something like the depictions of holography in science fiction, from the recorded plea of Princess Leia in Star Wars to the immersive holodeck of Star Trek.

The Microsoft HoloLens doesn’t do that. Instead, it is a self-contained system running a special version of Windows 10 that generates and projects stereographic images onto the lenses of a headset. In addition to the central processing unit and graphics processing unit of a conventional computer, the HoloLens possesses a separate holographic processing unit. That HPU calculates where 3-D graphics exist in the physical space of the user and keeps track of such input as voice commands and gestures.

All that computing power, along with built-in speakers, advanced sensors, buttons, a camera, and a vent, has been miniaturized to fit in an untethered visor. Compared to virtual reality headsets such as the Oculus Rift, the HoloLens projects its images onto a fairly narrow field, about the area of a sheet of copy paper held 18 inches in front of your face. That limited field of view is acceptable because, unlike virtual reality, mixed reality does not try to be fully immersive. Instead, the HoloLens projects its images onto the lenses so that the holograms are perceived to exist together with physical, real-world elements in a shared environment.
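
To put that analogy into rough numbers, assume a standard 8.5 × 11-inch sheet of copy paper (the sheet size is my assumption; the article gives only the comparison). Basic trigonometry, sketched below in Python, puts the field of view at roughly 34 degrees wide by 27 degrees tall:

    import math

    def angular_extent(size_in, distance_in):
        # Full angle (degrees) subtended by an object of the given
        # size, viewed face-on from the given distance (both in inches).
        return 2 * math.degrees(math.atan((size_in / 2) / distance_in))

    # An 8.5 x 11 in. sheet of copy paper held 18 in. from the eyes
    print(angular_extent(11.0, 18.0))  # horizontal: ~34 degrees
    print(angular_extent(8.5, 18.0))   # vertical:   ~27 degrees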

That’s immensely difficult. If a hologram is supposed to sit on a table, the holographic computing system must calculate how that virtual object should appear as the user turns his head, stands up or sits down, or leaves and then reenters the room. Indeed, if a real-world object passes through the space between the user and the hologram, the system can calculate, in real time, which parts of the hologram would be eclipsed by that object and make those portions disappear.
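
Conceptually, that eclipsing step resembles a per-pixel depth test: compare the distance to each hologram pixel against the distance to the nearest real surface in the same direction, and hide the hologram wherever the real world is closer. The Python sketch below illustrates the idea only; the names and data layout are invented, and this is not Microsoft’s actual implementation:

    import numpy as np

    def occlude_hologram(holo_rgba, holo_depth, room_depth):
        # holo_rgba:  (H, W, 4) rendered hologram image with an alpha channel
        # holo_depth: (H, W) distance from the eye to each hologram pixel
        # room_depth: (H, W) distance to the nearest real surface, taken
        #             from the headset's spatial map of the room
        out = holo_rgba.copy()
        eclipsed = room_depth < holo_depth   # a real object is in front
        out[eclipsed, 3] = 0                 # make those pixels transparent
        return out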

What’s more, these holographic objects are interactive. Virtual buttons can be “clicked” and holographic objects can be transformed with tools or by touch, much the way that objects on a conventional computer screen can be altered. By turning the physical world into a tangible representation of programs and controls, the HoloLens dramatically shrinks the distance between the user and the interface.
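
One way to picture that interaction model: the system casts a ray along the user’s gaze, tests it against holographic geometry, and routes a gesture such as an “air tap” to whatever the ray hits, much as a desktop routes a mouse click to a window. A minimal sketch, with every name invented for illustration:

    import numpy as np

    def gaze_hits(origin, direction, center, radius):
        # True if a gaze ray from `origin` along the unit vector
        # `direction` passes within `radius` of a button's center.
        to_center = center - origin
        along = np.dot(to_center, direction)     # distance along the ray
        if along < 0:
            return False                         # target is behind the user
        offset = to_center - along * direction   # perpendicular miss vector
        return np.dot(offset, offset) <= radius ** 2

    def on_air_tap(gaze_origin, gaze_dir, buttons):
        # Dispatch the tap to the first holographic button the gaze hits.
        for button in buttons:
            if gaze_hits(gaze_origin, gaze_dir, button["center"], button["radius"]):
                button["on_click"]()
                break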


Engineering Applications

Microsoft introduced the system in January 2015 and since March 2016 has been shipping it to developers and academic groups. Engineers, designers, architects, and animation teams at these organizations are using the HoloLens technology to discover the best practices and most powerful applications for this new computing platform.

NASA, for instance, has been working with Microsoft on developing software tools to exploit the HoloLens. Another NASA application, called ProtoSpace, uses holograms for spacecraft design. The system superimposes a computer-generated version of space hardware over the field of view of the user’s headset. That enables NASA scientists to walk naturally around a full-scale version of a spacecraft (such as the Mars 2020 rover currently under development at NASA’s Jet Propulsion Laboratory) and examine individual components. That capability provides a feeling for how large each component is and how tight the tolerances might be, two measures that can be difficult to assess from a model on a computer screen. The goal is to avoid design conflicts as the new six-wheeled robot is assembled.

This isn’t theoretical: NASA engineers are using ProtoSpace today. Engineers recently used it to check the size of the rover’s nuclear batteries, whose installation is one of the last tasks before launch, and to make sure they would fit inside the rocket that will carry the spacecraft to Mars. ProtoSpace allows engineers to test-fit all the components and practice the installation procedure at full scale, with the actual tools they will need, to ensure there is enough clearance. And they can do all this early in the design phase, before a single part has been manufactured.
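
The fit question at the heart of that exercise, whether a component clears its surroundings by at least some margin, reduces to simple geometry. Here is a toy version using axis-aligned bounding boxes; the numbers and the box simplification are mine, and the real system of course works with full CAD geometry:

    def clearance_ok(part_min, part_max, env_min, env_max, margin):
        # True if a part's bounding box fits inside an envelope (e.g., a
        # rocket fairing) with at least `margin` of clearance on every
        # axis. All inputs are (x, y, z) tuples in meters.
        return all(
            part_min[i] - env_min[i] >= margin and
            env_max[i] - part_max[i] >= margin
            for i in range(3)
        )

    # Hypothetical dimensions, for illustration only
    fits = clearance_ok((0.2, 0.2, 0.2), (1.8, 1.8, 1.0),
                        (0.0, 0.0, 0.0), (2.0, 2.0, 1.5), margin=0.1)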

Another application being developed is called Sidekick. That tool is intended to provide crews aboard the International Space Station with assistance for complicated tasks. In one mode, Sidekick uses Skype voice and video chat to enable ground operators to see what crew members see, provide real-time guidance, and draw annotations into the crew members’ environment to coach them through a task. Sidekick can also augment stand-alone procedures with animated holographic illustrations displayed on top of the objects a crew member is working with. This capability could provide refresher training and guidance for station crews while they are in space, much closer to the time they will be performing a task. In addition, Sidekick’s stand-alone mode could be a resource during future deep space missions, where communication delays complicate difficult operations.

Private companies are developing applications for the HoloLens as well. The Swedish defense and security company Saab has created a holographic training system for the HoloLens, and automakers Volvo and Volkswagen are working with Microsoft to demonstrate the use of HoloLens in lieu of CAD software for seeing life-size 3-D design schematics.

The engineering and construction firm CDM Smith, of Boston, Mass., is discovering how to apply the HoloLens technology through the entire product life cycle. During the planning phase of plant upgrades or extensions, for instance, engineers upload CAD files into the HoloLens to visualize how additional pumps or pipes will fit into the existing space. During the design phase, the entire project team dons HoloLenses to “walk through” the final design and experience it as holograms within the physical environment of the existing site. A safety issue in the layout that would not be obvious in the blueprints can be spotted in the holographic representation, and team members can suggest changes to the design before construction begins.

During construction, CDM Smith engineers walk through the project wearing HoloLenses to compare the holographic models to the actual work, to make sure the job is being done as designed. After the project is complete, facility managers can continue to use holographic models to collect and manage operations data, as well as manage the site remotely.

Education may also see some important applications for the HoloLens. My lab at Old Dominion is studying how to use the power of holographic computing to provide learners with a multisensory, interactive, immersive learning environment. One possibility involves remote access: linking several classrooms to the same lecture hall, sending students on a virtual tour of places like the International Space Station, or connecting them to experts who can illustrate processes live, in person, and in 3-D.

Clackamas Community College in Oregon is also using HoloLens to develop a hands-on, trade-based curriculum for automotive students. Students use the mixed reality application to virtually disassemble engines, for example, or to identify, through the headset, all the parts of an engine they are working on.


Beyond The Technology

It was less than 10 years ago that society was introduced to a new computing paradigm: handheld devices that sensed and responded to touch and motion. What was a novel and, to some, baffling interface in 2007 has become so second nature that today small children are given their parents’ iPhones to play with—and those children have no problem operating them.

After seeing the HoloLens in action and talking to professionals who are excited about its possibilities, I believe holographic computing will follow a similar path. As the technology improves, holographic computers will be capable of rendering high-resolution 3-D digital content that blends seamlessly with our environment.  Manipulating that content will become as easy and natural as picking up a box or sitting at a table. High-definition holograms will look as real as physical objects and will become practical tools of daily life.

What’s more, holographic computing platforms and headsets will be light and small enough to wear all the time, and will be able to spatially map the user’s environment wherever he is. An engineer wearing a holographic computer will be able to use his actual hands to manipulate holograms representing parts of a new engineering system and virtually teleport (or “holoport,” perhaps) himself and his team members into meetings.

The now-ubiquitous handheld screens will be replaced by headsets and wearable devices that provide digital and projected interfaces everywhere.

Some technological breakthroughs still need to occur before we get to that point. For one, more natural ways of interacting with holograms and the holographic computing platform need to be developed. Private companies are working to crack that problem. For instance, Milpitas, Calif.-based Eyefluence is working on eye-tracking technology intended to let one’s gaze navigate and explore holographic displays. Another company, Austin, Texas-based Gest, is building a device that wraps around the palm and fingers to better capture the gestures a person could use to intuitively create, shape, and size holograms. And advanced voice recognition is being developed by Amazon, Google, Apple, Microsoft, and other tech giants.

Advances in scanning technology may enable engineers to scan, capture, create, and render 3-D models based on the user’s first-person visual perspective, eliminating the need for common scanning tools such as spinning tables for depth sensing devices or large scanning booths that require a hundred cameras. Magic Leap, a Dania Beach, Fla., company that is developing its own holographic computing system and headset, is just one of the groups working on that first-person scanning technology.

The biggest breakthroughs, I believe, will come from coupling holographic computing with two other emerging information technologies: next-generation cognitive cybernetics capable of deep learning and even anticipation, and the Internet of Things, composed of widespread connected, embedded sensors.

Combined, those three technologies could create something like an Internet of Presence and Experiences, in which smart wearable devices and appliances tap into advanced cognition systems to deliver information and services to you via holograms, often before you even know you want them. Those advanced technologies could transform mixed reality into an all-pervasive digital reality built on extraordinary levels of data, context, and insight.

Beyond the technology, however, there are likely some social aspects to the use of mixed reality headsets that need to be worked out. The headsets are more imposing than Google Glass, and they may turn out to be tools best suited for designated workspaces and classrooms rather than for coffee shops or stores. On the other hand, society has adapted to mobile computing much more quickly than might have been expected.

The possibilities for holographic computing are truly exciting, and the applications extend as far as the human mind can imagine. Holographic computing is one of the major engineering tools of the 21st century, and we are only beginning to understand how it will change the profession.

Ahmed K. Noor is eminent scholar emeritus and professor emeritus of modeling, simulation and visualization engineering at Old Dominion University in Virginia. For more information on holographic computing and mixed reality, go to a website created as a companion to this Mechanical Engineering magazine feature: http://www.aee.odu.edu/holocomputing/. The site contains links to online material related to a variety of aspects of both holographic computing and mixed reality, and includes a daily news feed for up-to-date information on related subjects.
