The Missing Ingredient in GenAI That Will Keep Your Business Behind

Ali Hanyaloglu
August 11, 2023
5 mins
Ecommerce, AI

Amplience’s Vice President of Product Marketing, Ali Hanyaloglu, recently sat down to think about how publicly available Generative AI tools can be used in shopping experience scenarios. What he learned is that there is a missing ingredient in these powerful technologies: specialized, shopping-specific context. Without it, these AI tools just aren’t cutting it. Here’s what he discovered, and what brands and retailers need to be looking for if they want to benefit from AI.

When Generative AI Gets it Wrong

If you’ve seen me on video calls, you will know that my record collection says a lot about me personally. However, there are plenty of rare albums and singles I would love to own, but I haven’t won the lottery yet, so I can’t justify the current asking prices. I thought it would be fun to pretend I had them, in photographic form at least. As I don’t own a copy of these desirable rarities, there was only one way I could have a photo of one being proudly held up: by calling on the Generative AI “superpowers” of tools like OpenAI’s DALL-E. Here’s what I gave it in the prompt:

A photorealistic image of someone holding up the 1991 UK vinyl edition of the album Discography by Pet Shop Boys.

I chose this one as it is quite a rare item; near-mint copies are going for over $300 USD, if you can find one. But it’s listed on many public sites like Discogs, and on the band’s own site too. So you’d probably think something I can easily find online would be easy for the likes of DALL-E to get right. You’d think wrong, my friend. Here’s the image it produced:

Umm, what? Now, I don’t know who or what “Pooty Poy” is, but clearly the GenAI model is missing something significant here, whether that’s in its data, what it has learned, or its understanding of my prompt. For now, let’s ignore those fingers and the fact that the disc is not 12x12”!
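If you want to try this yourself outside of a web interface, here’s a minimal sketch of what the same request could look like in code. It assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name, image size and output handling are illustrative choices, not a recommendation.

```python
# Minimal sketch: issuing the same prompt programmatically.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY environment variable; model and size are illustrative.
from openai import OpenAI

client = OpenAI()

prompt = (
    "A photorealistic image of someone holding up the 1991 UK vinyl "
    "edition of the album Discography by Pet Shop Boys."
)

result = client.images.generate(
    model="dall-e-3",   # illustrative; any available image model
    prompt=prompt,
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```

Either way, the prompt is the same, and so is the problem: the model has nothing beyond its general training data to tell it what this specific album actually looks like.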

Instead, let’s try the same experiment in a situation that’s not about my musical taste or desires. How about a more common product available on the site and app of our dear customer Crate & Barrel? In this situation, all of the product images provided by the manufacturer are on a white background. But it would also be good to have a series of images in different settings; maybe showing the product on a lovely patio overlooking the ocean on beautiful Cape Cod here in Massachusetts. A perfect and relevant setting to entice customers shopping for this product in this part of the world, at this time of year, right? Here’s the thing: a photoshoot for this one product would probably blow the budget, even if a photographer were available during wedding season. And this is one seasonal product out of potentially thousands right now. What to do? How about we once again call on those GenAI superpowers to create something quickly?

Here’s the prompt I used:

Generate a photorealistic image of a Polywood Paso White Outdoor Adirondack chair on a patio by the ocean in summer in Cape Cod.

Seems clear enough, right? I specified the type of image, the full product name, the location, the season and the setting. Surely GenAI can handle this one. Here’s the actual product image, and the result from DALL-E:

As you can see, it does a pretty good job: a beautiful summertime scene on a patio overlooking the ocean in a place that could be a small town on Cape Cod. Except there’s one big, glaring problem … that’s not the product I specified! We’ll ignore the big bolts and the uncomfortable-looking seat; if this image were used on a real product page, it would probably result in a huge number of costly returns for our friends at Crate & Barrel. Not good.

Shopping Context: The Missing Ingredient in Generic AI Tech

So, how did GenAI for image generation go so wrong here? What was it missing that caused it to fail almost as badly as in my personal example? The missing ingredient was context, something the LLM (Large Language Model) behind DALL-E simply didn’t have. In these two examples, the context was product information: everything from the product name, SKU and description to the images. Yes, much of that is available publicly on commerce sites, but it is copyrighted or rightfully owned by a brand. So this known context data is, in reality, only usable by the companies that have the rights or permission to access it for commercial purposes.

Other types of known and owned contextual data include availability from a WMS or OMS, the latest pricing, customer segmentation from a CDP or CRM, and all the existing content and assets the brand has and how they are used. There’s also owned context that only a brand or retailer truly knows because it’s part of its DNA: brand tone of voice, for example, or how it incorporates its stance on DEI into content. All of this matters to the consumer and the shopping experience they have.

Generic GenAI tools can’t learn from and leverage all that context, so what they produce falls flat: content that is either wrong in relation to the prompt or too generic for a brand or retailer to stand out. In the consumer shopping scenario we just looked at, we need specialized LLMs that can learn from all that rich, owned contextual data. Only then can GenAI generate as many variants of content as needed to address the situation customers are in, content that is more likely to get them to say out loud, “That’s the product for me! Here’s my credit card!”
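To make that a little more concrete, here’s a rough sketch of what feeding owned context into a generation request could look like today: pulling a product record and brand-voice notes into the prompt itself. The product fields, SKU, price and model name below are hypothetical placeholders, and prompt-time grounding like this is only a simple stand-in for the specialized, context-trained models described above.

```python
# Rough sketch: grounding a copy-generation request with owned product context.
# The product record, SKU, price and brand-voice notes are hypothetical
# placeholders; prompt-time grounding is a simple stand-in for a model that
# has actually learned from a brand's own data.
from openai import OpenAI

client = OpenAI()

# Owned context: the kind of data a brand already holds in its PIM, OMS and DAM.
product = {
    "name": "Paso White Outdoor Adirondack Chair",
    "sku": "PASO-ADK-WHT-01",          # hypothetical SKU
    "description": "Weather-resistant Adirondack chair in white poly lumber.",
    "price": "$329.00",                # hypothetical price
    "availability": "In stock",
}
brand_voice = "Warm, practical and coastal; no superlatives, no jargon."

context = "\n".join(f"{key}: {value}" for key, value in product.items())

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                f"You write ecommerce copy. Brand voice: {brand_voice} "
                "Describe only the product given in the context; do not invent details."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Product context:\n{context}\n\n"
                "Write a two-sentence description for a Cape Cod summer campaign."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

The point isn’t the specific API; it’s that the generation step is anchored to data the brand owns and controls, rather than whatever the model happened to absorb from the public web.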

Not All Context is Known or Owned

But there’s more to influencing buying behavior than having the right product or background images. News, events, weather, locale, location, regulations and laws, competition, and even celebrity sightings and posts all influence shopping decisions, and therefore how brands and retailers respond to them.

Let’s take a recent example: a few months ago, if you had told me the Barbie movie was going to have such a profound influence on social commentary as well as buying behavior, I would have said that you need to stop playing with your dolls and get out more. But look at what this movie has done that no other has in recent memory.

Many relevant brands and retailers managed to jump on this bandwagon despite the short time frames involved. For those who didn’t, it was a case of ‘you snooze, you lose’.

The challenge is that these unowned contexts are mostly beyond the control of businesses, largely unpredictable, and can emerge very quickly. Yet they are also publicly accessible sets of information. Brands and retailers need to be set up to respond to them quickly, not just in terms of strategy or campaign planning, but with all the content and assets needed to support those revenue-generating campaigns. And except for the largest companies, marketing teams just don’t have the tools (or the budget) to do that.

Specialized AI is Needed to Go Beyond Filling the Blanks

So surely GenAI can help here to quickly fill in that “blank canvas” with content based on that context, right? Not always, and not predictably. As we’ve already seen, the LLMs behind GenAI tools need access to all that information to learn from, and they need to learn it in a vertical or category-based context; otherwise they will just return something generic or irrelevant. If we’re to take advantage of the success of the Barbie movie in campaigns, incorporating images of Ken drinking a latte in his gas-guzzling classic supercar might be great for a sports car brand, but it isn’t going to help a small, female-owned business selling sustainably sourced, environmentally friendly coffee. That’s nuanced, but it’s an important nuance.

What we can take away from this is that if brands and retailers are going to benefit from Generative AI technologies, generic, horizontal tools just aren’t going to cut it. They need specialized LLMs, even SLMs (Small Language Models), from technology providers who understand those vertical and brand nuances. Only these AI technologies will be able to deliver more relevant results, by leveraging this type of owned contextual data and combining it with the content and assets brands already have. The businesses that incorporate these technologies into their practices, augmenting what they do today, are the ones that will get ahead of their competition by getting more customer- and contextually relevant variants of all that content to market faster, and in greater volumes, than ever before.

And if every piece of content, from product descriptions to images to articles to videos and beyond, were contextually relevant to us as consumers, imagine how great every shopping experience with those brands would be. Heck, I might even spend $300 on that rare album I’ve always wanted.