
LLMs for Retail: A Guide to Implementing Generative AI Effectively

Introduction 

Large Language Models (LLMs), particularly generative AI models like ChatGPT and Bing Chat, have revolutionized various industries. Progressive e-commerce companies are earnestly devising generative AI strategies to address their Customer Experience (CX) automation needs. This article aims to cut through the buzz around the technology, discussing the opportunities, risks, best practices, and requirements for a successful implementation, so that businesses can harness these models' impressive capabilities without succumbing to the pitfalls that have beset many naive implementations. It presumes that readers have a basic understanding of LLMs and generative AI and are considering these technologies for in-house CX automation or vendor evaluation.

Opportunities

As the retail and e-commerce industries continue their rapid digital transformation, Large Language Models (LLMs) have emerged as potent tools for enhancing customer experience (CX) through automation. Nonetheless, successfully incorporating such advanced technology is not without challenges. While LLMs unlock remarkable language understanding and generation capabilities, enabling "conversation automation," that is not the whole story, as numerous industry pioneers have discovered.

Simply put, an LLM's language proficiency does not equate to strong problem-solving skill, and the impressive conversation automation demonstrated by ChatGPT does not automatically translate into powerful "resolution automation." A substantial amount of work remains before a business can deploy a successful automation system.

Furthermore, factors like globalization trends, ever-evolving merchandising options, and fluctuating customer preferences increase the complexity of the customer experience workflow. At the same time, today's shoppers are digitally savvy, demanding superior, real-time customer service. Traditionally, customer service headcount has scaled linearly with the number of customers. However, businesses are now struggling to hire and retain talented agents. How should a business in the retail industry respond to these opportunities and realities? The following sections will delve into a more in-depth analysis of current challenges and risks that need to be addressed, followed by four specific implementation recommendations.

Key Challenges & Risks

Inaccurate and biased responses, a.k.a. hallucination

At their core, Large Language Models (LLMs) are neural networks trained on extensive volumes of text data. They are designed to learn the language statistics, patterns, and relationships inherent in this data. Businesses generally build on these pre-trained foundation models to address their customer experience automation requirements.

The initial and essential step is "grounding": training or augmenting LLMs with business-specific knowledge articles or carefully curated responses to serve as the basis for generated replies. However, this process presents its own challenges. For every ten factual, accurate responses, the model might also produce one or two that sound plausible but contain erroneous information. This phenomenon is referred to as "hallucination": an LLM may confidently invent specific details or facts to produce a natural-sounding response, and if pushed, may even fabricate citations or sources. In a customer experience (CX) setting, this can lead to customer confusion and frustration, potentially creating additional customer service issues. This raises the question: how can we trust LLMs with critical tasks?

An additional risk behind the hallucination effect is that pre-trained models draw largely from public data, which is a mixed bag: full of opinions and littered with errors, inconsistencies, and even imaginative stretches of the truth. As a result, generated responses can carry biases that contradict a brand's values and image.

Difficulty following business logic and workflow requirements

Business logic and workflow sequences are crucial for efficient operations, competitive success, and effective customer support. Examples of business-specific logic and workflows include offering an alternative subscription option to a customer initiating a cancellation or conducting a satisfaction survey as part of the return process. Often, these workflows could be seasonal or dynamic, tailored to individual shoppers. Unfortunately, these are not concepts that LLMs understand or support natively.

If an LLM-based solution overlooks these elements, it risks behaving like an untrained agent: capable of casual conversation but lacking a nuanced understanding of business-specific processes and requirements. It is essential that these sophisticated models be properly adapted to meet the unique demands of each business.

A disconnect between the human agent team and the LLM-based solution

When a virtual assistant fails to address an end-user inquiry, the standard implementation typically initiates a hand-over-to-human process, allowing the agent team to intervene and rectify the situation. However, many solutions stop there and offer no further collaboration, and in the era of LLM-powered solutions, such siloed practices fall short of the new standard. Agent teams often struggle to understand how end-users interacted with the bot before the handoff; they may lack the context and a summary of the conversation, or be unable to spot emerging trends that only become apparent when analyzing large volumes of data. These are precisely the gaps that the language and analytics capabilities of LLMs can fill.

Conversely, agent teams also struggle to provide feedback that improves bot performance and manages its behavior, even for straightforward tasks such as adjusting greeting verbiage or specific answers. When this occurs, the human agent team and the virtual assistant function more like two estranged partners than a cohesive unit.

Poor performance due to a lack of API and data integration 

Off-the-shelf Large Language Models (LLMs) often struggle to grasp business-specific contexts. Without substantial resources devoted to the development and training of a customized or enhanced LLM—a potentially expensive endeavor—these models may not fully comprehend specialized business terminologies. For example, specific product titles, category names, or feature names might be misinterpreted by a generic LLM. This misunderstanding could impair its performance and lead to an incomplete understanding of the business landscape.

Moreover, these models do not natively support external integration for transactional operations or real-time information retrieval. An LLM, for instance, doesn't inherently facilitate dynamic operations like canceling an order or tracking an order's status in real-time. This constraint further curtails the potential utility of LLMs in dynamic business environments, emphasizing the need for thoughtful adaptation and integration within the specific context of each organization.

Lack of tools to manage resolution and analytics 

Deploying a Large Language Model (LLM) for a customer service solution introduces a distinctive set of challenges and risks, particularly if it is not equipped with adaptable tools designed for successful business integration. Here is a checklist covering the key functional areas:

  1. Can organizations create, manage, and modify the content used for generating responses?
  2. Is there a dashboard to show insightful analytics, monitor performance, identify areas of improvement, and support data-driven decisions?
  3. Can we empower humans to refine LLM-generated answers?
  4. Is there a tool that assists customer service agents in real time, providing recommendations based on the same CX automation solution?

Blueprint for a Successful Implementation

Implement strong anti-hallucination measures

As mentioned earlier, one strategy to combat the 'hallucination' flaw inherent in Large Language Models (LLMs) is 'grounding': training or augmenting LLMs with business-specific knowledge articles or pre-drafted answers as the foundation for an intelligent virtual assistant. However, grounding alone is not adequate; it is critical to implement supplementary quality-control measures to minimize errors. We recommend using additional Machine Learning (ML) techniques, such as a separate model that verifies the accuracy of responses and rejects any that are weak or irrelevant.
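
To make this concrete, here is a minimal sketch of that pattern, assuming a retrieval step over business knowledge articles and a verification step that rejects unsupported answers. The retrieval, generation, and verification functions below are placeholders for whatever search index, LLM service, and checking model a business actually uses.

```python
# Minimal sketch of grounded generation with a separate verification step.
# All three helpers are placeholders standing in for real services.

from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    sources: list[str]

def retrieve_articles(question: str, k: int = 3) -> list[str]:
    """Placeholder: fetch the k most relevant knowledge-base articles."""
    return ["Returns are accepted within 30 days with a receipt."]

def generate_reply(question: str, articles: list[str]) -> Reply:
    """Placeholder: prompt the LLM to answer only from the supplied articles."""
    return Reply(text="You can return the item within 30 days with a receipt.",
                 sources=articles)

def verify_reply(reply: Reply) -> bool:
    """Placeholder verification: require lexical overlap between the reply and
    its sources. A production system would use a second model or an
    entailment check here instead."""
    source_words = set(" ".join(reply.sources).lower().split())
    reply_words = set(reply.text.lower().split())
    return len(reply_words & source_words) >= 3

def answer(question: str) -> str:
    articles = retrieve_articles(question)
    reply = generate_reply(question, articles)
    if not articles or not verify_reply(reply):
        # Reject weak or unsupported answers instead of risking a hallucination.
        return "Let me connect you with an agent who can confirm that for you."
    return reply.text

print(answer("What is your return policy?"))
```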

Incorporating a 'human-in-the-loop' process is also beneficial. Reinforcement Learning from Human Feedback (RLHF), a method used to train many advanced LLMs such as GPT-4, increases the precision of their output. The human component in this setup could be the end-users interacting with the bot and providing feedback, or the agent team using an integrated tool to rapidly identify areas for improvement and correct the bot's errors. This approach ensures continuous learning and refinement of the AI model's responses.
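
As a lightweight, agent-facing illustration of this loop (not full RLHF fine-tuning), the sketch below assumes a simple correction store: agents record approved answers, and those answers take precedence over generated replies the next time a matching question appears. The matching and storage details are illustrative assumptions.

```python
# Human-in-the-loop sketch: agent corrections override generated answers.
# Exact-match lookup keeps the example simple; a real system would use
# semantic matching and persistent storage.

corrections: dict[str, str] = {}

def record_feedback(question: str, corrected_answer: str) -> None:
    """An agent flags a bad bot reply and supplies the approved wording."""
    corrections[question.strip().lower()] = corrected_answer

def answer_with_feedback(question: str, generate) -> str:
    key = question.strip().lower()
    if key in corrections:
        return corrections[key]    # the human-approved answer wins
    return generate(question)      # otherwise fall back to the LLM

record_feedback("Do you ship to Canada?",
                "Yes, we ship to Canada; delivery typically takes 5-7 business days.")
print(answer_with_feedback("Do you ship to Canada?", lambda q: "(LLM-generated reply)"))
```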

Blend in business processes and workflow 

Despite its remarkable proficiency in understanding natural language, a Large Language Model (LLM) has no intrinsic knowledge of business-specific logic or preferred workflows. It is therefore essential to implement or adopt a dialogue policy management module that accepts pre-configured instructions from business users and directs the virtual assistant to follow specific procedures and rules. Below are several examples of specifications that business users should be able to instruct the virtual assistant's dialogue management to follow:

  1. User authentication requirements: This confirms users are who they claim to be before retrieving any personal information, such as account balances, or initiating a return against a specific order.
  2. User authorization levels: This determines the level of access granted, such as inviting a new member to a family subscription plan.
  3. Prerequisites for invoking an external API call: The dialogue management module should be responsible for generating an appropriate prompt to solicit necessary information if these prerequisites are not met.
  4. Navigational flow: After a response, offer follow-up topics or actions for the end-user to continue the conversation.
  5. Conditions and timing for initiating a human handover process: These could depend on factors such as office hours, agent availability, expected wait time, user types, and detected user intent and sentiment.
  6. Dynamic and conditional responses: The system should adapt responses based on factors like user type, business type, locale, and other conditions, even for the same user utterance. 

This way, an LLM-based system can be customized to meet the unique requirements and expectations of different businesses, enhancing its utility and effectiveness.
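
As a rough illustration of how such pre-configured instructions might look, the sketch below assumes a hypothetical policy table that the dialogue manager consults before the LLM generates anything; the intents, slots, and actions are illustrative, not a prescribed schema.

```python
# Illustrative dialogue-policy configuration and a tiny enforcement step.
# Business users declare the rules; the dialogue manager checks them before
# any LLM generation or backend call happens.

POLICY = {
    "initiate_return": {
        "requires_authentication": True,
        "required_slots": ["order_id", "return_reason"],   # prerequisites for the API call
        "follow_up_topics": ["refund_timeline", "shipping_label"],
    },
    "handover_to_human": {
        "conditions": ["negative_sentiment", "agent_available", "within_office_hours"],
    },
}

def next_action(intent: str, session: dict) -> str:
    """Return the next step the assistant must take for this intent."""
    rules = POLICY.get(intent, {})
    if rules.get("requires_authentication") and not session.get("authenticated"):
        return "prompt_login"
    missing = [s for s in rules.get("required_slots", []) if s not in session]
    if missing:
        return f"ask_for:{missing[0]}"   # solicit the missing prerequisite first
    return "call_backend_api"

print(next_action("initiate_return", {"authenticated": True, "order_id": "A123"}))
# -> ask_for:return_reason
```

In practice, a policy layer like this sits between intent detection and response generation, so business rules are enforced no matter how the LLM phrases the reply.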

Focus on integration for increased resolution capability

The ultimate goal of businesses is to resolve customer problems. While Large Language Models (LLMs) can generate responses and automate conversations, proper integration is essential to achieve the best results.

In a typical customer experience (CX) automation implementation, two types of integration are significant: business data integration and service layer integration.

Business Data Integration Examples:

This category enhances LLM capabilities with the use of specific business data:

  1. Product Catalog Data: This data powers product searches, product recommendations, item exchanges, price lookups, and stock availability checks. In a smart LLM-enabled implementation, the bot will also be able to recognize product names and features and answer product-specific questions.
  2. Customer Graph Data: This powers personalized responses and optimizes product searches.
  3. Order/Fulfillment Record: This data aids in responding to order support-related inquiries and requests.
  4. Promotion/Coupon Data: This assists with pre-purchase questions.

Service Layer Integration Examples:

This category leverages existing business backend systems, typically via API connections, to dynamically query information or facilitate transactions:

  1. Order Management System: This can cancel an order, arrange returns/exchanges, manage missing order refunds/reshipping, add items to a cart, and process purchases.
  2. Membership Management: This manages various aspects of customer membership.
  3. Autoship/Subscription Plan Management: This oversees automatic shipping and subscription plans.
  4. Product Support: This provides product recommendations and facilitates product registration.

In all instances, businesses should direct and supervise the integration and implementation process, conducting User Acceptance Testing (UAT) to ensure that these integrations work seamlessly with the LLM's language generation capabilities.
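
To illustrate what service layer integration can look like in practice, the sketch below assumes a tool-calling pattern: backend operations are described to the LLM as tools, and the application executes the requested call against the order management system. The tool names, parameters, and responses are hypothetical.

```python
# Sketch of service-layer integration via tool calling. The endpoints and
# payloads are hypothetical; a real dispatcher would call the order
# management system's API.

import json

TOOLS = [
    {
        "name": "get_order_status",
        "description": "Look up real-time status for a customer's order.",
        "parameters": {"order_id": "string"},
    },
    {
        "name": "cancel_order",
        "description": "Cancel an order that has not yet shipped.",
        "parameters": {"order_id": "string", "reason": "string"},
    },
]

def execute_tool(name: str, args: dict) -> dict:
    """Placeholder dispatcher; in practice this calls the backend API."""
    if name == "get_order_status":
        return {"order_id": args["order_id"], "status": "shipped", "eta": "Friday"}
    if name == "cancel_order":
        return {"order_id": args["order_id"], "cancelled": True}
    return {"error": f"unknown tool: {name}"}

# Given TOOLS in its prompt, the LLM might respond with a tool request like:
llm_tool_request = '{"name": "get_order_status", "arguments": {"order_id": "A123"}}'
request = json.loads(llm_tool_request)
result = execute_tool(request["name"], request["arguments"])
print(result)   # this result is fed back to the LLM to phrase the final answer
```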

Enhance employee productivity with LLM-enabled tools 

Implementing a Large Language Model (LLM) for a customer service solution without flexible tools for business support presents significant challenges and risks. The following tool categories address those gaps:

  1. Content Management: Content keeps an LLM-assisted system current with updates to products, policies, and procedures. Business users can also shape engagement with techniques such as proactive chat, initiating a bot conversation with a specific context and goal in mind.
  2. Advanced Analytics & Dashboard: An LLM's advanced NLU capability can analyze large volumes of unstructured text, such as chat transcripts and user feedback comments, to identify new trends, intents, use cases, and potential areas for improvement (see the sketch after this list).
  3. Human-in-the-Loop Training: Businesses use such tools to correct the LLM's mistakes, tailor its responses, or teach it to understand complex or niche requests, creating a virtuous cycle of continuous learning and improvement.
  4. Agent Copilot: LLM-based customer service solutions can greatly assist human agents by suggesting responses, providing information, and automating routine tasks. This "Agent Copilot" function can increase efficiency, reduce agent training requirements, and even improve talent retention through greater work satisfaction.
  5. Writing Assistance: Good writing is key to clear communication, and LLMs are a powerful tool for producing well-written, grammatically correct, and easily understandable responses.
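
As a small illustration of the analytics idea above, the sketch below clusters a handful of transcripts with TF-IDF and k-means to surface candidate themes; a production system would more likely use LLM embeddings over much larger volumes of data, but the workflow is the same.

```python
# Illustrative analytics sketch: cluster chat transcripts to surface
# recurring themes and candidate new intents.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

transcripts = [
    "Where is my order? It was supposed to arrive yesterday.",
    "My package is late, can you check the delivery status?",
    "How do I change the size on an item I just bought?",
    "I ordered the wrong size, can I exchange it?",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(transcripts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster, text in sorted(zip(labels, transcripts)):
    print(cluster, text)   # review each cluster to name a new intent or FAQ entry
```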

Final Thoughts

With the advent of LLMs, forward-thinking e-commerce leaders are excited about the opportunity to leverage them as powerful tools for enhancing customer experience (CX) through automation. However, the successful integration of such advanced technology is not without challenges. Hallucination, the disconnect between human agents and LLM-based solutions, the lack of API and data integration, and the absence of adaptable tools to manage resolution and analytics are but a few of the hurdles businesses need to overcome.

Despite these challenges, we are convinced that the advantages of using LLMs for CX automation in retail far outweigh the pitfalls. Businesses that take a strategic and comprehensive approach to implementing LLMs – from robust anti-hallucination measures, to integrating business processes, to fostering strong teamwork between human agents and the bot – can realize significant benefits.

Businesses can unlock new levels of productivity and customer satisfaction by treating LLMs not as a replacement for, but as a complement to, their human counterparts. The key lies in realizing that while LLMs can replicate human-like conversational abilities, they still need human guidance to align with the unique business logic, style, and tone of a company.

Moreover, the effective implementation of LLMs for CX automation requires a fine balance of advanced technology and human touch. It is not just about automating customer conversations but also about building meaningful connections and delivering personalized experiences that resonate with today's digitally savvy shoppers.

In closing, the retail and e-commerce industry is ripe for a transformation driven by LLMs and AI technology. The journey towards CX automation using LLMs is not without its twists and turns, but with the right blueprint in hand, businesses are well-positioned to navigate the path successfully. The future of retail may be complex, but with the right tools and a strategic approach, businesses can harness the power of LLMs to drive customer satisfaction, improve operational efficiency, and ultimately achieve a competitive edge in an increasingly digital world.
