In my previous post I introduced Apple’s latest AR features and explained why they are opening the AR floodgates. To quickly recap: for years, one of the biggest difficulties with AR was adding 3D objects into a scene without going through an expensive process of creating 3D models by hand. With Apple's latest technology, you can take pictures of an object and convert them into a 3D model that can be placed in an AR scene.
In this post we’ll go over the API and build a sample application.
How it works
At a high level, the API takes multiple photos of an object as input and converts them into a 3D asset, applying machine learning to predict the 3D object from the 2D photos. This process of reconstructing a 3D object from 2D images is called photogrammetry, and Apple's new Photogrammetry API in RealityKit makes it accessible to developers.
The input images can be HEIC, PNG, or JPG, and the output models can be exported as USDZ, USDA, or OBJ. The API supports 4 levels of model quality: Reduced, Medium, Full, and Raw.
To start, we first create a PhotogrammetrySession by providing the URL of the folder containing the input images and some configuration options.
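As a minimal sketch (the folder path and configuration values are placeholders), creating a session might look like this:

```swift
import RealityKit

// Folder containing the captured HEIC/PNG/JPG images (placeholder path).
let inputFolder = URL(fileURLWithPath: "/tmp/Captures/Sneaker", isDirectory: true)

// Optional tuning of the reconstruction process.
var configuration = PhotogrammetrySession.Configuration()
configuration.featureSensitivity = .normal   // how aggressively to detect landmarks
configuration.sampleOrdering = .unordered    // images are not in capture order

// The session validates the input and prepares for reconstruction.
let session = try PhotogrammetrySession(input: inputFolder,
                                        configuration: configuration)
```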
Next, we iterate over the session’s asynchronous output stream to receive progress, completion, and error messages.
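A sketch of consuming that stream, assuming the `session` created above (what you do in each case is up to your app):

```swift
// Listen for messages on the session's async output sequence.
Task {
    do {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fractionComplete):
                print("Progress: \(Int(fractionComplete * 100))%")
            case .requestComplete(let request, let result):
                print("Finished \(request): \(result)")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            case .processingComplete:
                print("All requests are done.")
            default:
                break  // other informational messages (skipped samples, etc.)
            }
        }
    } catch {
        print("Output stream error: \(error)")
    }
}
```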
With the message stream configured, it’s time to create the 3D models by submitting requests with an output URL and a detail level for each model. When producing multiple files, it's faster to request them together than one after another.
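For example, requesting two model files at different detail levels in a single call (the output paths are placeholders) lets the session share the reconstruction work:

```swift
// Queue multiple model requests at once; the session batches
// the reconstruction instead of redoing it per request.
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/tmp/Models/sneaker-reduced.usdz"),
               detail: .reduced),
    .modelFile(url: URL(fileURLWithPath: "/tmp/Models/sneaker-full.usdz"),
               detail: .full)
])
```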
After the process completes, we receive one of three outputs:
• ModelEntity (RealityKit)
• BoundingBox (RealityKit)
• Model file (URL) → a single USDZ file or a folder of assets
Both ModelEntity and BoundingBox are programmatic representations that you can refine further, for example by scaling the model or adding animations, before the final export. This iterative refinement happens in code, and the result is then exported to a new file.
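As a small illustrative sketch (assuming `entity` is a ModelEntity received from a completed request), refinement before export could be as simple as:

```swift
import RealityKit

// Hypothetical helper: adjust a reconstructed ModelEntity before export.
func refine(_ entity: ModelEntity) {
    // Scale the model down to half size.
    entity.scale = SIMD3<Float>(repeating: 0.5)
    // Rotate it upright around the y-axis.
    entity.orientation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0])
}
```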
AR Quick Look
Now that we've created a model, how should we use it?
Built-in Apple apps, such as Safari, Messages, Mail, News, and Notes, use Quick Look to display USDZ model files of virtual objects in 3D or AR on iPhone and iPad.
Using it is simple: add an anchor tag with the rel="ar" attribute and an href pointing to the USDZ model file.
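The markup looks like this (file names are placeholders):

```html
<!-- rel="ar" tells Safari to open the model in AR Quick Look -->
<a rel="ar" href="sneaker.usdz">
  <img src="sneaker-preview.jpg" alt="Sneaker 3D model">
</a>
```

Note that the anchor's first child should be an image, which Safari uses as the tappable preview.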
The snippet above allows you to embed your 3D model in your webpage for users on Safari. It also supports features such as purchasing with Apple Pay. Unfortunately, this is Safari-only, leaving out 56% of US consumers. (Source)
We’ve learned how the API works: taking images as input, creating models, and outputting them at different quality levels for different purposes. Next, try the sample app to capture new items on your own. You will need Xcode and an iPhone or iPad with dual rear cameras.
You can learn more here.