When we built Niantic, we structured the company’s mission around three core values: exploration, exercise, and real-world social interaction. But not in our wildest dreams did we imagine the kind of positive impact our Augmented Reality (AR) experiences would have on our players and the communities where they live. We feel lucky that we’ve been able to combine our passion for technology and gaming to create innovative experiences for all ages.
Today, we are offering a preview of the technology we have been developing: the Niantic Real World Platform. This is the first time we’ve given an update of this nature publicly, and I’m confident it will provide a sense of how committed we are to the future of AR, and to furthering the type of experiences we have pioneered.
Throughout the past year, we have made strategic investments in initiatives focused on AR mapping and computer vision. Recently, we announced the acquisition of Escher Reality, who are contributing to our “Planet Scale AR” efforts. And today, we are announcing that we have acquired the computer vision and machine learning company Matrix Mill, and have established our first London office. It’s through the coordination of these teams that we’ve been able to establish what the Niantic Real World Platform looks like today, and what it will be in the future.
We think of the Niantic Real World Platform as an operating system that bridges the digital and the physical worlds. Building on our collective experience to date, we are pushing the boundaries of geospatial technology, and creating a complementary, interactive real-world layer that consistently brings an engaging experience to users.
The Niantic Real World Platform advances the way computers see the world, moving from a model centered around roads and cars to a world centered around people. Modeling this people-focused world of parks, trails, sidewalks, and other publicly accessible spaces requires significant computation. The technology must resolve minute details, digitize these places with precision, and model them in an interactive 3D space that a computer can read quickly and easily.
We are also tackling the challenge of bringing this kind of sophisticated technology to power-limited mobile devices. The highest quality gameplay requires a very accurate "live" model that adapts to the dynamics of the world. It needs to accomplish the difficult task of adjusting the model as the environment around the user changes, and as people move, along with their phones.
Knitting together a cohesive moving perspective is a challenge, but we are focused on solving it through a blend of machine learning and computer vision, all built on top of reliable and scalable infrastructure.
Advanced AR requires an understanding of not just how the world looks, but also what it means: what objects are present in a given space, what those objects are doing, and how they are related to each other, if at all. The Niantic Real World Platform is building towards contextual computer vision, where AR objects understand and interact with real world objects in unique ways: stopping in front of them, running past them, or maybe even jumping into them.
In the above GIF, we illustrate how our computer vision algorithm identifies objects and infers what they are, assigning each a quantifiable confidence score. For example, once our computer vision system has been trained on tables and chairs, it can identify them and place them in context within a broader space. Ultimately, this allows those objects to be added to an AR vocabulary. The larger the vocabulary, the more understanding we have, and the richer AR on our Real World Platform can be.
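To make the idea of a growing AR vocabulary concrete, here is a minimal sketch of how confidence-scored detections could feed such a vocabulary. This is purely illustrative and not Niantic's actual pipeline; the class names, the 0.6 threshold, and the detection format are all assumptions for the example.

```python
# Illustrative sketch (NOT Niantic's implementation): detector output
# gated by a confidence threshold before entering the "AR vocabulary".
from dataclasses import dataclass, field

@dataclass
class Detection:
    label: str         # e.g. "chair", "table"
    confidence: float  # detector's confidence score, 0.0 to 1.0
    box: tuple         # (x, y, width, height) in image coordinates

@dataclass
class ARVocabulary:
    """Labels the platform has learned to recognize reliably."""
    min_confidence: float = 0.6  # assumed threshold, for illustration only
    known: set = field(default_factory=set)

    def ingest(self, detections):
        # Only confident detections grow the vocabulary.
        for d in detections:
            if d.confidence >= self.min_confidence:
                self.known.add(d.label)

vocab = ARVocabulary()
vocab.ingest([
    Detection("chair", 0.92, (10, 40, 80, 120)),
    Detection("table", 0.88, (95, 30, 160, 100)),
    Detection("plant", 0.41, (200, 10, 40, 60)),  # below threshold, ignored
])
print(sorted(vocab.known))  # ['chair', 'table']
```

The key point is the gating step: low-confidence guesses stay out of the vocabulary, so everything layered on top can trust the labels it is given.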
Once we understand the "meaning" of the world around us, the possibilities of what we can layer on top are limitless. We are in the very early days of exploring ideas, testing, and creating demos. Imagine, for example, that our platform identifies and contextualizes the presence of flowers: it will then know to make a bumblebee appear. Or, if it can see and contextualize a lake, it will know to make a duck appear.
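The flowers-to-bumblebee and lake-to-duck examples above amount to a mapping from scene context to spawned AR content. A hypothetical sketch of such a rule table (the rules and function names are inventions for this example, not part of the platform):

```python
# Hypothetical context-to-creature spawn rules, as described above.
SPAWN_RULES = {
    "flowers": "bumblebee",
    "lake": "duck",
}

def creatures_for(scene_labels):
    """Return the AR creatures to spawn for the labels seen in a scene."""
    return [SPAWN_RULES[label] for label in scene_labels if label in SPAWN_RULES]

print(creatures_for(["flowers", "chair", "lake"]))  # ['bumblebee', 'duck']
```

Labels outside the vocabulary ("chair" here) simply trigger nothing, which is why a larger vocabulary directly translates into richer AR.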
Recognizing objects means understanding not only what they are, but also where they are. One of the key limitations of AR today is that AR objects cannot interact meaningfully with the 3D space around them. Ideally, AR objects should blend into our reality, seamlessly moving behind and around real-world objects.
This is where our new team in London has focused its research. Using computer vision and deep learning, they are developing techniques to understand 3D space, enabling much more realistic AR interactions than are currently possible.
In the video above, you can see Pikachu weaving through and around different objects in the real world, dodging feet and hiding behind planters. This proof of concept demonstrates a level of integration with the environment around us that excites us about the future of AR.
We’re particularly focused on innovations in AR that serve Niantic’s core mission and values. Today we are revealing innovative new cross-platform AR technology that facilitates high performance shared AR experiences.
Multiplayer gameplay requires the components of Niantic's Real World Platform to work in concert across multiple users. And as more people join, each with a different perspective, these components must stay tightly synchronized in order to create a visually compelling shared experience. In our research, we've found that one of the biggest obstacles is latency: it's nearly impossible to create a shared reality experience if the timing isn't perfect.
To address this challenge, we have developed proprietary, low-latency AR networking techniques. Thanks to these techniques, we've been able to build a unified, cross-platform system that enables shared AR experiences with a single code base.
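Niantic's networking layer is proprietary and not described here, but one widely used latency-hiding technique in shared real-time experiences is to interpolate a remote object's pose between timestamped network updates, so motion stays smooth even when packets arrive unevenly. A minimal sketch of that general idea, with invented data:

```python
# Generic latency-hiding sketch (NOT Niantic's networking code):
# interpolate a remote object's position between timestamped updates.
def interpolate_pose(updates, render_time):
    """Linearly interpolate position for render_time.

    updates: time-sorted list of (timestamp_seconds, (x, y, z)).
    """
    for (t0, p0), (t1, p1) in zip(updates, updates[1:]):
        if t0 <= render_time <= t1:
            a = (render_time - t0) / (t1 - t0)
            return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))
    return updates[-1][1]  # past the newest update: hold the last known pose

# Two updates 100 ms apart; render halfway between them.
updates = [(0.00, (0.0, 0.0, 0.0)), (0.10, (1.0, 0.0, 0.0))]
print(interpolate_pose(updates, 0.05))  # (0.5, 0.0, 0.0)
```

Rendering slightly in the past like this trades a small, constant delay for visual smoothness; shared AR adds the further requirement that all devices agree on a common coordinate frame, which is a separate problem.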
You can see this in action in the videos below:
This is just a brief glimpse into what we have under the hood of the Niantic Real World Platform. While we are using this technology first for games, it is clear that it will be relevant to many kinds of applications in the future.
Our Real World Platform in the Real World
Because we are so excited about the opportunity in advanced AR, we want other people to be able to make use of the Niantic Real World Platform to build innovative experiences that connect the physical and the digital in ways that we haven’t yet imagined. We will be selecting a handful of third party developers to begin working with these tools later this year, and you can sign up to receive more information here.
Thank you to those who have been on this journey with us, chasing after the future and creating the Niantic Real World Platform. We promise that there is plenty more to come.