January 29, 2020 | 10 Min

How to drive engagement with Augmented Reality: Part 2

Rhys Simpson

In part 1 of this blog series, we established what Augmented Reality is, how related technologies have evolved recently, and how we might want to use it on a storefront. We also came up with a list of requirements that need to be fulfilled to provide AR-capable content with Amplience. While these were mostly met, a big question was how we might group related assets, such as different AR model formats (USDZ and GLTF) and their thumbnails, and organize these groups into media sets for display alongside other content.

In this post, we will go into detail on how this was achieved, and even provide a demo repository for you to try it yourself.

Modifying Viewer Kit

Amplience viewer-kit is a set of components that make it easy to get started using sets and content served by Amplience Content Hub and Dynamic Media on a web page. It is a standalone set of components built with Handlebars, and it is simple enough to use on any site or even port to another web framework. It's entirely open source, and you can run your own demo just by following the readme.

To start off, we include the scripts for Google's <model-viewer> element, which allows us to use it anywhere on the page. More information on how to do this is available on the <model-viewer> home page.
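For reference, a minimal include might look like the following. The unpkg CDN path here is an assumption; check the <model-viewer> documentation for the current release, and pin a specific version in production.

<!-- Load <model-viewer> as an ES module; older browsers fall back to the legacy build -->
<script type="module" src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>
<script nomodule src="https://unpkg.com/@google/model-viewer/dist/model-viewer-legacy.js"></script>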

As we discussed in Part 1, to serve models to iOS and Android with <model-viewer>, we need a GLTF/GLB format model, plus a USDZ format model for iOS. We should also include image thumbnails, because the 3D models can take much longer to load, which is not ideal for visual navigation.

Grouping Assets with a Naming Scheme

The main function of viewer-kit is to fetch and display the contents of a media set: a collection of linked assets created in Content Hub.

To retrieve the JSON metadata of the example set you can use the following:

https://i1.adis.ws/s/ampproduct/test_model_set.json
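As a rough sketch, fetching and inspecting the set from JavaScript might look like this (assuming the response exposes its assets under an items array; inspect the real response to see the exact shape):

// Minimal sketch: fetch the media set's JSON metadata from Dynamic Media
const res = await fetch('https://i1.adis.ws/s/ampproduct/test_model_set.json');
const set = await res.json();

// Log each item in the set; each model item carries little more than its
// name, its MIME type and a flag indicating static delivery
for (const item of set.items) {
  console.log(item);
}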

You can see from the metadata that the only information returned for the USDZ and GLTF files is the name, the MIME type and the fact that they are delivered statically. So how do we group our separate models and thumbnail together for use as one set item? The best way is to link them via their name, as asset names must be unique in Content Hub in any case.

  • The core GLTF model should have the base name: "test_model" (from "test_model.gltf")

  • The USDZ variant should have the correct MIME type and the suffix "-ios", e.g. "test_model-ios" (from "test_model-ios.usdz")

  • Related thumbnails should be of type 'image' and have the suffix "-thumb", e.g. "test_model-thumb" (from "test_model-thumb.png")

In Content Hub, USDZ files have the model/vnd.pixar.usd MIME type. In iOS 13, for these files to display correctly, the URL must end in ".usdz". We serve USDZ files statically, so to add the ".usdz" suffix we use an SEO extension as follows:

https://ampproduct.a.bigcontent.io/v1/static/sofa-ios/sofa-ios.usdz

This approach ensures compatibility between iOS 12 and 13.
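As a sketch, a small helper for building these static USDZ URLs might look like this (the usdzUrl helper is hypothetical, for illustration only):

// Build a statically-delivered USDZ URL whose path ends in ".usdz",
// using the SEO extension described above (hypothetical helper)
function usdzUrl(account, assetName) {
  return `https://${account}.a.bigcontent.io/v1/static/${assetName}/${assetName}.usdz`;
}

// usdzUrl('ampproduct', 'sofa-ios')
//   -> 'https://ampproduct.a.bigcontent.io/v1/static/sofa-ios/sofa-ios.usdz'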

In our modified version of viewer-kit, we group related assets together based on their name (using the naming convention shown above) and MIME type. We do this by introducing a post-processing step into the set handler. It starts by finding all GLTF assets; these are treated as the main asset for each model and are augmented with the related thumbnail image and USDZ file.
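A minimal sketch of that grouping step is below; the function and field names are illustrative rather than the exact viewer-kit code:

// Group related assets by the naming convention: "<name>" (GLTF),
// "<name>-ios" (USDZ) and "<name>-thumb" (image). Field names are
// illustrative; the real set items may differ.
function groupModelAssets(items) {
  const byName = new Map(items.map(item => [item.name, item]));

  return items
    .filter(item => (item.mimeType || '').startsWith('model/gltf'))
    .map(model => ({
      ...model,
      usdz: byName.get(`${model.name}-ios`),    // iOS Quick Look variant
      thumb: byName.get(`${model.name}-thumb`)  // loading poster image
    }));
}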

We've also included a new handlebars template for the model preview. This is chosen when the MIME type of the asset is prefixed with 'model/', so it would work even without the processing we do on the set. It simply contains a <model-viewer> element that points to the GLTF model, along with the USDZ model and loading thumbnail when present.
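A sketch of such a template follows. The src, ios-src, poster, ar and camera-controls attributes are real <model-viewer> attributes; the Handlebars variable names are illustrative:

{{!-- Model preview template: orbit preview via GLTF, AR hand-off via USDZ --}}
<style>
  /* Fill the slide space; <model-viewer> handles the rest */
  model-viewer { width: 100%; height: 100%; }
</style>
<model-viewer src="{{gltfUrl}}"
              {{#if usdzUrl}}ios-src="{{usdzUrl}}"{{/if}}
              {{#if thumbUrl}}poster="{{thumbUrl}}"{{/if}}
              ar camera-controls>
</model-viewer>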

That's pretty much it. Styling is simple, as shown in the sketch above: we just make the viewer fill the slide space and <model-viewer> handles the rest.

Try it yourself

You can find my modified version of viewer-kit here.

Our modifications are fairly simple and add the processing step as a self-contained change. Feel free to download it and test it using the 'grunt' command; if your phone is on the same network, you can even browse to it in a mobile browser to try out AR.

To try it yourself, the first step is to upload all your models (with -ios and -thumb resources) and link them all together in a set. Feel free to link in some other related images or even spin sets as well, just to verify they work together with our changes.

The sofa model

For this example, I used a Creative Commons licensed sofa model found on Sketchfab. This model was converted to USDZ using the open-source gltf2usd command-line tool, which is built on top of the official Python libraries maintained by Apple and Pixar. Apple have just released a beta of their Reality Converter app, which will make converting other formats into USDZ a lot easier.

A thumbnail was created simply by taking a screenshot of the model. You can find the model here.

Here's how the sofa model looks in iOS Quick Look.

All that's left to do is host the viewer-kit test application pointed at my set, and then browse to it on a mobile device. As <model-viewer> uses three.js for its WebGL preview, it can even display the model with a simple orbit camera on desktop, or on mobile devices before handing off to the AR viewer. Viewer-kit is built to be responsive, so browsing to the page on iOS shows a mobile layout appropriate to the screen width, compared to the block layout on desktop.

Thanks to our efforts combining set items, each of our models shows in the collection as a single element, using the thumbnail we uploaded. The thumbnail is also displayed while the model is loading, because <model-viewer> lets us show any image during loading. For any model with a linked USDZ asset, a small button appears over the 3D orbit preview to activate iOS Quick Look. Clicking it downloads the model again (the preview itself uses the GLTF; Quick Look needs the USDZ), but this does not take too long.

The Buster Drone model

I also uploaded a GLTF file with animations in the same way, using the "Buster Drone" example by LaVADraGoN that the Khronos Group themselves showcase on their page. The model can be found on Sketchfab.

Converting this one to be AR-friendly was a little more involved, so here are the converted versions used in the test set:

https://ampproduct.a.bigcontent.io/v1/static/BusterDrone
https://ampproduct.a.bigcontent.io/v1/static/BusterDrone-ios
https://ampproduct.a.bigcontent.io/v1/static/BusterDrone-thumb

Unfortunately, the target AR platforms only support a single animation, with no way to switch between them, so this is the limit of what we can do here. Still, seeing the detailed materials and animation fit seamlessly into the real world is a surreal experience.

Using the features of ARKit, each of the 3D models reacts appropriately to the lighting in whatever room I'm in, and even builds a cubemap for reflections as the user scans the room. This is especially impressive on the drone model, where you can sometimes even see the pattern of the floor in the reflections.

Future Possibilities

While ARCore's Scene Viewer and iOS Quick Look are clearly big leaps forward in getting AR content onto the web, they offer very little control over how your content is presented. For a start, you can only place objects on the floor; placing objects on walls, or even attaching them to people, are possibilities that are missed. You can only hand one 3D model to the viewer at a time, and it cannot be changed while the viewer is running. It might be possible to modify and output model files within JavaScript to augment them with dynamic information, though there is no tooling to do that with the USDZ format.

These capabilities require more direct control over the AR APIs, such as interfacing with ARKit and ARCore natively; however, that control is not yet exposed to web browsers. WebXR aims to change this by providing core AR information and letting the developer handle rendering with WebGL, but it has not yet left the early draft stages. It could also remove the reliance on the USDZ format, which would definitely simplify publishing models for iOS.

What we've shown here only approaches the current limit of what can be done with AR on the web. Using Content Hub and Dynamic Media in a similar way in your own standalone app would give you much more control over AR features, interactivity and display. For example, with Dynamic Content you could define interactive points on a model that expand with information when the user taps on them, or activate animations on the model in a context-sensitive manner. You could similarly create a VR showroom where models and metadata are served from Content Hub, and individual products fill positioned slots within the room depending on the season and context.

Where to go from here

In this article, I’ve demonstrated that it’s possible to put engaging AR content on the web and still reach a variety of devices. This can easily be included alongside related images and videos of products thanks to our media sets. Control over its display is somewhat limited right now, but the future definitely looks bright with WebXR and future updates to ARKit and ARCore.

We hope this will inspire you to make your own experiments with AR on the web and give you some ideas of how you can use the technology to build more engaging sites.