Creating A Reality Beyond The Real

There is something missing.

There is something missing in the way many are looking at what’s next for human communication.

Yes, one may argue that there has always been something missing.

As inherently social beings, humans have always cherished interacting with each other and with their communities to learn, to exchange information, and to share experiences. We have traditionally done this by gathering around campfires, over dinner, at local events, in joint activities, and in cheering for our favorite sports teams together. But our world has expanded and dispersed, and we have started to miss these essential social interactions among friends and families.

Advances in computing and networking came into play. Technology allowed us to socialize over great distances. I still remember moving to the US about twenty years ago, leaving my family behind in Germany, when the telephone was the primary means of staying connected. But a one-minute phone call cost $1.25, meaning that even a simple audio-only conversation cost a fortune.

What about today? Want to interact with a friend or relative on the other side of the world? Just start up FaceTime or Skype, pick up the phone, send an email, Snapchat away, Instagram your latest photo, or – for the ‘older’ set – post an image on Facebook. And now my family puts an iPad on the breakfast table, we start our favorite video conferencing application, and Grandma and Grandpa join us from Germany for breakfast in New Jersey.

Or do they really join us? Yes, we see their faces on the iPad that prominently fills an empty chair at the breakfast table. But the display shows Grandma and Grandpa in their environment. In their home. Not in ours. They do not join us for breakfast in our home. They are not part of our immediate surroundings. We do not share a common experience. Instead, we “intrude” into their home, we stare into their living space to interact with them. They cannot feel the heat wave that makes us sweat on a hot summer day. They do not feel the humidity of the Jersey Shore in summer. They do not feel the truck passing by that makes our house vibrate. They do not smell the beautiful blossoms on the plum tree in our backyard. Instead, they are stuck in their own environment, just peeking into our world through a tiny little window that defines the current “small, flat screen world”, providing them with only a very limited visual and audio snapshot.

We are clearly missing something.

We need to recover these experiences by capturing, transmitting, and reproducing physical, physiological, and psychological human signals over digital distances.

Sure, we are connecting…but we are not experiencing together. Today’s communications and interaction technology is like a narrow tube, almost like blinders on a horse, allowing us only a limited view into each other’s lives. We cannot truly share the multifaceted experiences that drive our feelings and emotions until we can embed ourselves in each other’s full virtual environments.

What if there were technologies that could break through the limitations of the purely visual and audible communication that has defined us over the last few years?

We believe that the future of communications is not about going from HDTV to 4K to 16K to 360 videos. It is also not about replicating the real world in ever greater quality and detail. It is about adding the missing modalities to our remote, disjoint, digital interactions.

Think about haptic feedback as one example. A simple touch can convey deep emotions and feelings. The future has to be about breaking through the barriers between realities, not just replicating them. About going beyond reality to augment our lives and our abilities for the better.

Think about being able to really feel how your aging parents are doing. How about being able to see, hear, and experience the emotions of an autistic child? How about actually feeling the slap of a high five with your friend who lives on the other side of the world, through a hologram?

And not just at play. Humans want to experience the same at their workplace. How about seeing and feeling the vibrations of the engine that you are trying to repair remotely? Or a doctor sensing the twisted arm of an injured player? Augmenting human beings to gain a much better understanding of each other and of our environment.

Teams of researchers and engineers – including my research teams at Bell Labs – are working hard to get us closer to such a reality. To a reality that goes beyond what is considered real, or possible, today. A reality that blurs the lines between physical and digital worlds, making them indistinguishable. Breakthroughs in capture technology will allow us to create life-like digital representations of objects and of people at the snap of our fingers, in easy and convenient ways.

These digital representations will then need a super high-performance, low-latency network to transmit and share such rich datasets instantaneously. The good news is that this is exactly what end-to-end 5G networks are being defined and built to do.
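
To make the scale concrete, here is a rough back-of-envelope sketch in Python. All numbers (point density, bytes per point, frame rate, and the roughly 20 ms interactivity target) are illustrative assumptions, not measurements of any particular capture system or 5G deployment.

```python
# Back-of-envelope estimate of what a live volumetric (point-cloud) stream
# might demand from the network. All figures are illustrative assumptions,
# not measurements of any particular capture system.

points_per_frame = 1_000_000      # assumed density of a life-like capture
bytes_per_point = 15              # assumed: 3 x 4-byte coordinates + RGB color
frames_per_second = 30            # assumed frame rate

bytes_per_second = points_per_frame * bytes_per_point * frames_per_second
gbits_per_second = bytes_per_second * 8 / 1e9

print(f"Uncompressed stream: {gbits_per_second:.1f} Gbit/s")   # ~3.6 Gbit/s

# Latency budget: for interaction to feel natural, the end-to-end delay
# (capture -> network -> render) is typically targeted at a few tens of
# milliseconds; a commonly cited motion-to-photon goal for AR/VR is ~20 ms.
latency_budget_ms = 20            # assumed interactive target
print(f"End-to-end latency target: ~{latency_budget_ms} ms")
```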

New playback methods and devices will then embed these digital representations into our physical world in a way that makes them indistinguishable from real objects. Today’s HoloLens and Magic Leap glasses are just the beginning, paving the way to new projection technologies that will provide ubiquitous mixed realities. These mixed realities will be assembled not only from visual and audible elements. Millions of sensors in our environment, together with information obtained from wearables and even in-body biosensors, will provide a much deeper context and situational awareness to create the most helpful, most sensitive, and most privacy-preserving mixed realities.

Is a child or an elderly parent living far away, or in a remote location, not feeling well today? Their life-like holographic representation on the sofa in your living room will make this obvious through facial expressions and a certain sadness in their tone of voice.

Is the operator of a vehicle on the road, or of a crane moving heavy materials at a construction site, getting tired? The system will recognize this instantaneously and prevent accidents by alerting the tired operator and taking appropriate action.
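
As a rough illustration of that kind of safeguard, the Python sketch below assumes a hypothetical stream of eye-closure readings (in the spirit of drowsiness measures such as PERCLOS) and a hypothetical alert threshold; a real system would fuse many more signals and act far more carefully.

```python
# Minimal sketch of a fatigue watchdog. The sensor feed, the threshold,
# and the alerting hook are all hypothetical placeholders.

from statistics import mean

EYE_CLOSURE_THRESHOLD = 0.30   # assumed: alert if eyes are closed >30% of the time
WINDOW_SIZE = 10               # number of recent readings to average over

def is_fatigued(recent_eye_closure: list[float]) -> bool:
    """Return True if the rolling average eye-closure fraction exceeds the threshold."""
    window = recent_eye_closure[-WINDOW_SIZE:]
    return len(window) == WINDOW_SIZE and mean(window) > EYE_CLOSURE_THRESHOLD

def on_fatigue_detected() -> None:
    # Placeholder action: a real system might sound an alarm, slow the crane,
    # or hand control to an assistance system.
    print("ALERT: operator fatigue detected - taking protective action")

# Simulated readings: fraction of each second the operator's eyes were closed.
readings = [0.05, 0.08, 0.10, 0.12, 0.20, 0.28, 0.35, 0.40, 0.45, 0.50, 0.55]

history: list[float] = []
for value in readings:
    history.append(value)
    if is_fatigued(history):
        on_fatigue_detected()
        break
```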

So, what’s missing today?

We are getting close to creating this multiverse utopia!

New capture and playback devices with novel sensors are being developed. Low-latency, high-bandwidth connectivity to millions of mobile devices with limited battery life is getting there with the global rollout of 5G and IoT connectivity solutions. Edge clouds are beginning to be deployed to make ‘better than being there’ contextualized, personalized services a real possibility.

It is obvious that we need AI/ML to infer never-known-before situational awareness. This is where advanced analytics platforms and stream processing solutions become essential, built on new approaches to software creation, execution, and management. Software will no longer run on a limited set of well-defined hardware platforms, under well-defined scenarios. Instead, software will instantaneously adapt to and execute on never-before-seen devices and sensors that come and go at any point in time. This requires a fundamental redesign of how software is built, with new AI/ML platforms and methods. We, at Nokia Bell Labs, are inventing just that.
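
As one small illustration of that kind of adaptability (not a description of any actual Bell Labs platform), the Python sketch below routes readings from sensors that appear and disappear at runtime and falls back to a generic handler for sensor types it has never seen before; all names and message formats are hypothetical.

```python
# Hypothetical sketch of a stream-processing loop that adapts to sensors
# coming and going at runtime. Sensor types, readings, and handlers are
# illustrative placeholders.

from typing import Callable, Dict

Reading = dict          # e.g. {"sensor": "thermo-42", "type": "temperature", "value": 21.5}
Handler = Callable[[Reading], None]

handlers: Dict[str, Handler] = {}

def register(sensor_type: str):
    """Decorator that registers a handler for a known sensor type."""
    def wrap(fn: Handler) -> Handler:
        handlers[sensor_type] = fn
        return fn
    return wrap

@register("temperature")
def handle_temperature(reading: Reading) -> None:
    print(f"{reading['sensor']}: {reading['value']} degrees C")

def handle_unknown(reading: Reading) -> None:
    # A never-before-seen sensor type: log it and keep the raw value so a
    # learning component could later infer how to interpret it.
    print(f"{reading['sensor']}: unknown type '{reading['type']}', raw={reading['value']}")

def process(stream) -> None:
    for reading in stream:
        handlers.get(reading["type"], handle_unknown)(reading)

# Simulated stream in which a new kind of sensor shows up mid-session.
process([
    {"sensor": "thermo-42", "type": "temperature", "value": 21.5},
    {"sensor": "haptic-glove-7", "type": "grip_pressure", "value": 0.8},
])
```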

We are creating a reality that goes far beyond what is considered real in today’s “small flat screen world”, to create a world as big as our human imaginations and expectations.

This article was published by Markus Hofmann.