Content Managing Alexa Skills: why Headless CMS makes sense for vCommerce

March 22, 2017

The arrival of Echo and Alexa has at last brought some real innovation to the models of voice interaction, opening a whole new channel for engaging with customers. Anyone embarking down the digital architecture route of micro-services and headless content is in a great position to take early advantage of this emerging technology.

Headless CMS and micro-services are now part of the vernacular when talking about digital technologies for eCommerce. But why are they so important? What's wrong with just sticking with proven web technologies? One of the justifications has been future-proofing for multi-device and multi-channel technology – so far this has been limited to managing screen sizes for the web and integration in native apps. Echo and Alexa have blown this out of the water: HTML-based presentation layers used in web technologies are completely useless in this world. The very premise they are built on, pages and site navigation, is totally irrelevant in the voice interaction models required for vCommerce.

To demonstrate how easily a voice interface can be created when you employ a headless content service, we put together a simple example using Amplience to deliver the content to Alexa, with Alexa as the voice interface, for our pseudo brand Anya Finn.

Step 1: Design your Voice User Interface (VUI)

With Alexa VUIs you need to think about three things: the intent (the function the user can activate), the utterance (what the user says to trigger the intent) and the response.

Amazon have defined some helpful best practices for building VUIs

In this simple demo, I implemented two intents, plus content for starting and cancelling the skill:

· Recommendations for a family member

· Latest offers


Step 2: Build your Alexa Skill

To build an Alexa skill, you use the Skills Kit interface that's available in the developer portal. Sign into the portal using your developer account and populate the form with the configuration of the skill. Almost no development is needed at this point, as it's almost entirely configuration based.

Here are full details for setting up an Alexa skill using the Alexa Skills Kit.

The intent schema is where the interaction model is defined; we can take the intents we defined earlier and implement them here. In scenarios such as recommendations, where you need to capture information for the intent, such as the family member (e.g. wife), you can use a slot – a slot is in some ways like a parameter you pass to a function call.

{
  "intents": [
    {
      "intent": "recommendation",
      "slots": [
        {
          "name": "relation",
          "type": "relation"
        }
      ]
    },
    {
      "intent": "GetLatestOffers",
      "slots": []
    },
    {
      "intent": "AMAZON.CancelIntent"
    }
  ]
}

Although Amazon provides many default slot types, I created a custom slot type called 'relation' to represent a family member, e.g. wife, husband, father, mother, etc.
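
A custom slot type is simply a name plus a newline-separated list of sample values entered in the developer portal. For this demo it could look something like this (the exact list of values is illustrative):

```
relation

wife
husband
mother
father
son
daughter
```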

To wire together what a user says and the intent, you need to provide a list of utterances for each of the intents – this does not require an exact word-for-word definition of every possible sentence, as Alexa is smart enough to work out what someone is saying. You are essentially training her for your VUI. She even understands my northern accent.

GetLatestOffers on offer

GetLatestOffers are your latest offers

GetLatestOffers do you have on offer

GetLatestOffers do you have on sale

GetLatestOffers on offer today

GetLatestOffers on sale today

GetSpecificOffers what's on offer in {category}

recommendation what present should I get my {relation}

recommendation what do you suggest I buy for my {relation}

recommendation what should I get my {relation} for her birthday

recommendation what should I get my {relation} for Christmas

recommendation what would my {relation} like for Christmas

recommendation what would my {relation} like for their birthday

recommendation suggest something for my {relation}
 
Step 3: Develop your skill service

The skill service is a web service endpoint that accepts the Alexa JSON request, processes it and returns an Alexa-formatted response. The coding is simple once you have sorted out the wiring, and there are quick starts available in Node.js and Java. Your code basically identifies the type of intent sent in the JSON request and calls a coded implementation for the intent along with the slot values. Integrating with Amplience means mapping the intent and slot combinations to the relevant content references and calling the Amplience micro-service to retrieve the content.
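
As a rough sketch of the shape of such a service (the content lookup below is a stub standing in for the call to the Amplience content service, and the responses are illustrative):

```javascript
// Minimal sketch of a skill service handler in Node.js.
// lookupResponse stands in for the Amplience content retrieval –
// here it is stubbed with an in-memory map keyed on intent and slot.
const content = {
  'recommendation:wife': 'I think your wife would like something stylish.',
  'GetLatestOffers:': 'Here are our latest offers.'
};

function lookupResponse(intentName, slotValue) {
  return content[intentName + ':' + (slotValue || '')] ||
    "Sorry, I don't have anything for that yet.";
}

// Turns an incoming Alexa request body into an Alexa-formatted response.
function handleRequest(request) {
  const intent = request.request.intent;
  const relation = (intent.slots && intent.slots.relation)
    ? intent.slots.relation.value
    : undefined;
  return {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'PlainText',
        text: lookupResponse(intent.name, relation)
      },
      shouldEndSession: true
    }
  };
}
```

A real implementation would fetch the content asynchronously and build the speech from the returned content item, but the intent-to-content mapping is the essence of it.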
 
Example of a request and response using the Alexa Skills Kit test harness:

Step 4: Author your content

The first step in authoring the content for this VUI is to define the content types we are going to use for the voice responses. I created three content types:

Alexa Basic Response – used for simple voice responses, with fields for the voice response and the intent name

Alexa Offers – a voice response used for recommendations and latest offers; allows the construction of a list of reusable voice responses, with fields for:

· Intent name e.g. recommendation

· Slot value – e.g. wife

· Headline voice response entry into the list

· List of offer content assets

Alexa Offer – a voice response for a single offer, with fields for:

· Intent name e.g. recommendation

· Slot value – e.g. wife

· Voice response

· Card content – used for cards displayed in the companion Alexa App

· Card Image – used for cards displayed in the companion Alexa App
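
For reference, the card content and card image fields map onto the card section of the Alexa response; a Standard card carries a title, text and image, roughly like this (the URLs are placeholders):

```json
{
  "card": {
    "type": "Standard",
    "title": "Mulberry handbag",
    "text": "Card content authored in Amplience",
    "image": {
      "smallImageUrl": "https://example.com/handbag-small.jpg",
      "largeImageUrl": "https://example.com/handbag-large.jpg"
    }
  }
}
```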

Example Alexa Offers content for a wife recommendation:

I think your wife would like something stylish. We have just got a new Mulberry handbag come in. Also, wives can never have enough great shoes. Why not look at our Jimmy Choo gallery.

Alexa Offers

· Headline – I think your wife would like something stylish

Alexa Offer

· Voice response – we have just got a new Mulberry handbag come in

Alexa Offer

· Voice response – Also, wives can never have enough great shoes. Why not look at our Jimmy Choo gallery.

This structure means that you can add, remove and reuse voice responses across a VUI.
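
To illustrate the reuse this enables, here is a sketch (with field names assumed from the content types above) of stitching an Alexa Offers item and its child Alexa Offer items into a single spoken response:

```javascript
// Assembles one spoken response from an "Alexa Offers" content item
// and its list of reusable "Alexa Offer" children. The field names
// mirror the content types described above.
function assembleSpeech(offers) {
  // Start with the headline entry, then append each offer's voice response.
  const parts = [offers.headline].concat(
    offers.offers.map(function (offer) { return offer.voiceResponse; })
  );
  return parts.join(' ');
}
```

Swapping an offer in or out of the list changes the spoken response with no change to the skill service code.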


Step 5: Test and deploy

The final step is to test your skill, trying different utterances using an Alexa-enabled device and making adjustments to the list of utterances. Listen carefully to the voice responses, making sure they sound smooth and conversational. As with real spoken language, add punctuation, such as periods, to create small pauses. Using a content-managed approach makes this iterative process far more accessible than having the voice responses hard coded into the service.

After going through this process, I found that Alexa integration is surprisingly easy; Amazon have done a fantastic job in abstracting and articulating the voice interfaces. The integration is just a matter of pointing your skill at your web service endpoint – there is no stipulation on the technologies used for endpoints, but they do nudge you towards their Lambda service.

The immediate applications I see in retail are in account and order management, but also content experiences that are editorial (trends, reviews, how-to guides) or informational (promotions, recommendations or store information) and drive customers to a destination.
