Apple released several new augmented reality tools and technologies for software developers at its annual WWDC conference this week. These technologies could prove vital if Apple eventually launches an augmented reality headset or glasses.

Apple has never confirmed plans to release augmented reality hardware, but reports suggest it could announce a headset as early as this year. Facebook, Snap and Microsoft are also working on devices that understand the world around them and can display information in front of the user’s eyes.

For an augmented reality device to succeed, Apple needs compelling reasons for people to use it – and that depends on useful software, just as apps like Maps, Mail, YouTube and the mobile Safari browser helped spur adoption of the original iPhone. Getting developers building augmented reality software now increases the chance that one or more “killer apps” will be available at launch.

Apple didn’t spend much time on augmented reality during its WWDC opening keynote on Monday, but it announced several updates in the more technical parts of the conference – a sign that AR remains an important long-term initiative for the company. CEO Tim Cook has called AR “the next big thing”.

“At a high level, this year’s, and maybe even next year’s, WWDC will amount to the calm before an Apple innovation storm,” Loup Ventures founder and longtime Apple analyst Gene Munster wrote in an email this week. “What’s missing today is any sign of Apple’s intensifying work on new product categories around augmented reality wearables and transportation.”

What Apple announced

During the weeklong conference, Apple briefed developers on its rapidly improving tools: software that can create 3D models, use a device’s camera to understand hand gestures and body language, add quick AR experiences to the web through a 3D-content standard Apple heavily backs, and a fascinating new sound technology that works like surround sound for music and other audio files.

Here are some of Apple’s AR announcements and how they pave the way for its bigger ambitions:

Object Capture. Apple introduced application programming interfaces, or software tools, that apps can use to create 3D models. 3D models are essential for AR because they are what the software places into the real world. If an app doesn’t have a detailed 3D file of a shoe, for example, it can’t use Apple’s image processing software to put that shoe on a table.

Object Capture is not an app. Instead, it’s a technology that lets a camera such as the iPhone’s take multiple photos of an object and stitch them together into a 3D model that can be used inside software within minutes. Previously, detailed object scans required precise and costly camera setups.
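For a rough sense of how that flow looks in practice, here is a minimal sketch using RealityKit’s PhotogrammetrySession, the API behind Object Capture. The file paths and the reduced detail level are placeholder assumptions for illustration; the API requires a recent Mac.

```swift
import Foundation
import RealityKit

// A minimal sketch of Object Capture via RealityKit's PhotogrammetrySession
// (macOS 12+). The input folder and output path here are hypothetical.
let inputFolder = URL(fileURLWithPath: "/tmp/ShoePhotos", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/tmp/Shoe.usdz")

let session = try PhotogrammetrySession(input: inputFolder)

// Request a single model file; .reduced detail suits AR and e-commerce use.
try session.process(requests: [.modelFile(url: outputModel, detail: .reduced)])

// Listen to the session's async output stream for progress and completion.
Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(Int(fraction * 100))%")
        case .requestComplete(_, .modelFile(let url)):
            print("3D model written to \(url.path)")
        case .processingComplete:
            print("Done.")
        default:
            break
        }
    }
}
```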

Eventually, third-party developers like Unity, a leading maker of engines for AR apps, will add it to their software. For now, it’s likely to be used most heavily in e-commerce.

RealityKit 2. Object Capture is just one part of a major update to RealityKit, Apple’s suite of software tools for building AR experiences. Beyond Object Capture, RealityKit 2 includes many small improvements that make app builders’ lives easier, including better rendering options, a way to organize images and other assets, and new tools for adding player-controlled characters to augmented reality scenes.
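For context, this is a tiny sketch of the basic RealityKit flow those improvements build on – anchoring a simple model to a surface in an AR view. The blue box and its size are placeholder choices, not anything Apple showed.

```swift
import ARKit
import RealityKit

// A tiny RealityKit scene: anchor a blue box to the first horizontal
// plane ARKit finds. RealityKit 2's new features layer on this basic flow.
let arView = ARView(frame: .zero)

let anchor = AnchorEntity(plane: .horizontal)
let box = ModelEntity(
    mesh: .generateBox(size: 0.1), // 10 cm cube
    materials: [SimpleMaterial(color: .blue, isMetallic: false)]
)
anchor.addChild(box)
arView.scene.addAnchor(anchor)
```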

Apple’s new city navigation feature in Apple Maps. Source: Apple

ARKit 5. ARKit is another set of software tools for creating AR experiences, with a greater focus on figuring out where digital objects can be placed in the real world. This is the fifth major version of the software since it debuted in 2017.

This year’s version includes so-called “location anchors”, which let developers program AR experiences tied to map locations in London, New York, Los Angeles, San Francisco and several other U.S. cities. In a video session for developers, Apple said it used the tool to create AR walking directions that overlay Apple Maps – a potentially useful scenario for a head-mounted AR device.
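A minimal sketch of how a location anchor works with ARKit’s geo-tracking APIs is below; the coordinate (near San Francisco’s Ferry Building) is purely illustrative.

```swift
import ARKit
import CoreLocation

// Sketch of ARKit "location anchors": pin AR content to real-world
// coordinates. Geo tracking only works in supported cities.
func addGeoAnchor(to session: ARSession) {
    ARGeoTrackingConfiguration.checkAvailability { available, error in
        guard available else {
            print("Geo tracking unavailable here: \(String(describing: error))")
            return
        }
        // Illustrative coordinate: Ferry Building, San Francisco.
        let coordinate = CLLocationCoordinate2D(latitude: 37.7955,
                                                longitude: -122.3937)
        session.run(ARGeoTrackingConfiguration())
        session.add(anchor: ARGeoAnchor(coordinate: coordinate))
    }
}
```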

AI to understand hands, people and faces. Apple’s machine learning and artificial intelligence tools are not directly related to augmented reality, but they represent capabilities that will matter for a computer interface that works in 3D spaces. Apps can call Apple’s Vision framework to recognize people, faces and poses through the iPhone’s camera. Apple’s computer vision software can now also identify objects in pictures, including text on signs, and search for things in photos – like a dog or a friend.
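The sketch below shows the kind of Vision framework calls that paragraph describes – detecting body poses and reading text from a single camera frame; error handling and camera capture are omitted for brevity.

```swift
import CoreGraphics
import Vision

// Run body-pose detection and text recognition on one frame (CGImage).
func analyze(_ image: CGImage) throws {
    let poseRequest = VNDetectHumanBodyPoseRequest()
    let textRequest = VNRecognizeTextRequest()

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([poseRequest, textRequest])

    let people = poseRequest.results ?? []
    print("Detected \(people.count) body pose(s)")

    for observation in textRequest.results ?? [] {
        // Take the most confident transcription for each text region.
        if let candidate = observation.topCandidates(1).first {
            print("Found text: \(candidate.string)")
        }
    }
}
```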

Combined with Apple’s other tools, these AI capabilities can be used to apply effects similar to Snap’s filters. One session at this year’s WWDC even covers how to detect the position and movement of a hand – groundwork for the advanced hand gestures that make up a large part of the user interface on current AR headsets like Microsoft’s HoloLens.
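Here is a hedged sketch of that hand-pose detection, the building block for AR hand gestures; mapping joint positions to gestures like a pinch is left to app logic, and the distance heuristic below is just one common approach.

```swift
import Foundation
import Vision

// Detect hand poses in a frame and estimate a simple "pinch" signal
// from the distance between the thumb tip and index fingertip.
func detectHandPose(in image: CGImage) throws {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 2

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])

    for hand in request.results ?? [] {
        let thumbTip = try hand.recognizedPoint(.thumbTip)
        let indexTip = try hand.recognizedPoint(.indexTip)
        // A small thumb-to-index distance is a common heuristic for a pinch.
        let distance = hypot(thumbTip.location.x - indexTip.location.x,
                             thumbTip.location.y - indexTip.location.y)
        print("Thumb-index distance: \(distance)")
    }
}
```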