
https://www.air.ai/

Introducing AIR Platform*


One Platform to manage all AI use in your Enterprise
Manage AI Across The Enterprise
AI is like air: it's everywhere now, from writing content to running deeply embedded AI solutions such as
cybersecurity and agentic workflows. Know what, where, and how AI is being used in your organization on
one platform.
Live View of AI Tools & Apps Use
- Manage AI enterprise-wide with a complete view of how it's being used across groups, departments,
divisions, and subsidiaries. AIR Platform* analyzes and processes multiple data streams to display a full
360° view of all AI use across the organization - live!
Approved AI Tools & Apps Library
- There are over a million LLMs out there. Everyone is experimenting with new AI via apps, tools,
agents, co-pilots, and even AI features within software your organization already uses. Maintain a company-
approved AI library to regulate AI use and manage risk.
Governance & Compliance Engine
- Our advanced G&C engine examines all AI use in the organization and checks it against a central
G&C rulebook, which draws from relevant AI laws and internal company rules on the use of AI, and flags
whether individual or aggregate use is consistent with that rulebook.
Powerful Analytics & Reporting
A wide range of analytics and reports can be drawn from the AI-use dataset within any organization, giving deep
insight into where AI can best be enhanced and where it needs to be curtailed due to risk or other factors.
Set up reports on periodic, systematic, or ad hoc parameters.
Messaging & Notification System
- A dedicated messaging and notification system spans the entire enterprise. AIR Platform provides a
secure, two-way, hierarchical messaging and notification system to regulate AI use and ensure it is consistent
with organizational mandates and rules.
Employee Onboarding for AI Use
A comprehensive system to onboard employees on the existing and new AI technologies that are increasingly
being deployed in organizations today. AI is a new technology, and a structured onboarding process
helps ensure standardization.
Employee AI Training University
As AI use proliferates, the need to educate the entire organization on the AI systems being used grows
daily. AIR University provides client-focused basic to advanced training content on AI subjects, with full
testing and scoring capabilities.
Built-in Self-Reporting Capabilities
AIR Platform's built-in employee self-reporting functionality allows the organization to get first-hand
feedback data on the use of AI across the organization - an invaluable way to harness an immensely
valuable dataset.
Multilingual for Global Deployment
AIR Platform is built with global use in mind from the start. Multinationals can rely on one AI management
platform delivering functionality in many languages. A built-in translation engine facilitates global insights on
AI use.
True Segregated eSaaS Architecture
AIR Platform keeps all client-specific code and data segregated and secure with client-permissioned
access protocols. This gives clients the flexibility to change cloud and server arrangements in the future
with ease, without loss of functionality or data.
Easy Integration with Enterprise
AIR Platform is built to integrate with any enterprise system through a library of connectors that can be
managed non-programmatically within the interface, along with data-flow statistics and other metrics.
The Conference will bring together Professionals and Academics focused on the Management of AI in The Enterprise and The Government.
Sai, have you ever called customer care
and ended up being completely frustrated?
Yes Sharath, all the time.
First of all, when I call the customer care,
getting to a real person is an impossible task because
I have to answer a whole bunch of questions
and press a lot of keys before I even get to a real person.
Even when I'm talking to a real person,
the agent wouldn't understand why I called,
my history or any of those details.
So, overall, it was a real poor experience.
Yeah, I've had the same poor experience.
Wouldn't it be really cool if we can use generative AI
to help the agent and make it a much better experience
for the end customer?
Generative AI, that will be really cool.
How can we use generative AI in such situations?
So we can use LLMs, or large language models,
to do a number of different things, such as summarization.
So let's say we take a previous transcript,
call transcript, between an agent and a customer.
We run that through a large language model,
and the large language model can then generate a short summary
of the entire long call transcript.
Okay, so the agent will be able to understand
why the customer called in the previous instances
without actually looking at the whole transcript,
but instead looking at just a summary transcript
that is provided by the LLMs.
That's right.
We can do a couple of other things with those previous transcripts.
One is sentiment analysis,
and another is intent classification.
Okay, so the agent already knows in advance
what kind of experience the customer had in the previous instances,
whether it was negative or positive experience.
That is good information to have
before the agent picks up the call and talks with the customer.
But can you explain a little more about
how intent classification can be utilized here?
Sure, so we can look at this previous call transcript
and then we can classify it as what is the main reason
or intent the customer has called.
So this could be things like
maybe the customer's calling to ask about a particular product
or a billing issue.
Or, let's say there's a recent promotion
and the customer wanted more information about that.
So the large language model is able to look at the transcript
and determine what is the main intent for that conversation.
Oh, that'll be really great because
even before the agent talks to the customer,
picks up the call and talks to the customer,
the agent already knows a lot about the customer.
Knows the summary of previous conversations,
why the customer called in the previous instances,
and also the kind of experience the customer had.
So that'll be good information to have when
the agent is talking to the customer, so that he can
tread carefully when talking to that specific customer.
That will be helpful.
That's right.
But, haven't you had a lot of times when
an agent has just switched over
or will have to transfer to another agent?
So that's where we can use another thing known as RAG,
or Retrieval Augmented Generation.
That is interesting because every time I call,
I get transferred to a different agent and I end up
saying the same things over and over again.
But how does this RAG work?
As in, can the agent just type in a question and
get the responses back from the generative AI LLMs?
Sure, so instead of transferring to a number of different agents,
RAG can help any agent become an expert on any particular topic.
So that way you don't have to get transferred to another agent.
So instead of typing out the question,
imagine if AI could automatically be listening in to the conversation
so we could have speech-to-text listening in to the conversation.
That text is then sent to the large language model,
which can then bring up the relevant information
and present it to the agent
so that the agent is knowledgeable about any topic
that a particular customer is asking about.
Okay, that actually makes a lot of sense because
the agent doesn't need a lot of training on all of the things that are available,
there is a lot less switching to different agents, and
the agent will be able to help the customer on the call in real time.
That is good information for the agent to have.
But how does the RAG framework work?
Can we talk about that and how can it be applied in such scenarios?
Sure, so let's say there are a number of different data sources.
This could be things like product documentation.
You could have a FAQ information
as well as previous trouble tickets.
All of this is text information,
which can then be split up or chunked
and sent to an embedding model.
This embedding model can then convert all of this text
into embeddings or vectors.
Really it is just numerical information
which can then be stored into a vector database.
So now when a user is asking a question,
this vector database is able to understand the semantic information
and then bring up the most relevant content,
send that over to a large language model,
which can then generate an answer
and send that back over to the user.
That actually will help the agent in a lot of scenarios because,
as we've been talking about,
the agent doesn't know anything and everything.
So having the generative AI LLMs bring up
the relevant information in real time
as the agent is talking to the customer will be really valuable.
That's right.
So we can also do a number of other things.
So let's say, you know, in some cases
it might still require that a particular agent
needs to send some information over to another system.
So think of trouble ticketing systems
where you can have a large language model
automatically pre-populate all of the different fields
in this trouble ticket form
which makes it easy for the agent to then just review that information.
Yes, that will be really helpful because
usually the agents spend a lot of time in taking notes,
creating those trouble tickets after the call is ended.
So it'll really help boost the agent's productivity and efficiency, and
all the agent has to do is just look at the automated trouble ticket
that is created and just review it and update it if needed,
and submit a trouble ticket.
That saves a lot of time.
Exactly, yeah, and then we can also do a couple of other things.
So we can do things like product recommendation,
where a large language model can automatically recommend
the product based on the particular customer.
So that can be also personalized.
We can also do things like next best action,
where we can tell the agent what is the next best thing that
the agent needs to be doing while on the call.
So this can really guide the entire conversation between
the agent and the customer.
Well, there you have it. So with all of these things -
summarization, sentiment analysis, intent classification, RAG,
and these kind of generation tasks -
the agent will now be able to talk to the customer,
and help the customer in a more productive fashion.
Right, so next time you call a customer care
hopefully you won't be as frustrated.
That'll be really helpful, thank you.

### Detailed Paper on Generative AI in Customer Care: Enhancing Agent and Customer Experience

**Introduction**
Customer service has evolved significantly over the years, yet challenges persist. Many users express
frustration when calling customer care, often due to difficulty reaching an agent, poor communication, or
being transferred multiple times. Generative AI and Large Language Models (LLMs) offer potential solutions
to streamline these processes, enhancing both the customer and agent experience. This paper explores
how generative AI can be applied in contact centers, focusing on key concepts like summarization,
sentiment analysis, intent classification, Retrieval Augmented Generation (RAG), and other productivity
enhancements.

---

### 1. **Summarization: Enhancing Agent Understanding**

One of the most common frustrations for customers is having to repeatedly explain their issue to agents
who lack context from previous interactions. Generative AI, particularly through LLMs, addresses this
problem through **summarization**.

#### Explanation:
LLMs can take long transcripts from past interactions and generate short summaries. These summaries
provide agents with a concise overview of why the customer called in the past, what the main issues were,
and what actions were taken. This reduces the time spent reading lengthy transcripts and equips the agent
with relevant information before the conversation starts.
#### Example:
Instead of the agent reading through a 10-minute call log, a summary like *“Customer called previously
regarding a billing error on their account. Issue unresolved. Frustration expressed.”* would be generated,
enabling the agent to pick up where the last conversation left off.
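
For illustration, a summarization step of this kind might be wired up as in the sketch below. The `complete()` helper is a hypothetical placeholder for whichever LLM endpoint a contact center actually uses, and the prompt wording is illustrative rather than a prescribed implementation.

```python
# Minimal summarization sketch. `complete()` is a hypothetical stand-in for
# whichever LLM client the contact center uses; replace it with a real call.
def complete(prompt: str) -> str:
    """Placeholder for an LLM call; swap in your provider's SDK here."""
    raise NotImplementedError("wire this up to your LLM endpoint")

def summarize_transcript(transcript: str, max_sentences: int = 3) -> str:
    # Ask for a short, structured summary the agent can scan in seconds.
    prompt = (
        f"Summarize this customer-care call in at most {max_sentences} sentences. "
        "State the reason for the call, whether it was resolved, "
        "and the customer's mood.\n\n"
        f"Transcript:\n{transcript}"
    )
    return complete(prompt)

# Example (illustrative): summarize_transcript(previous_call_text) might return
# "Customer called about a billing error. Issue unresolved. Frustration expressed."
```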

---

### 2. **Sentiment Analysis: Understanding Customer Emotions**

Another critical factor in customer service is understanding the emotional state of the customer. **Sentiment
analysis** allows agents to detect whether a customer has had positive or negative experiences in the past.

#### Explanation:
By analyzing previous interactions, LLMs can identify whether the customer was satisfied or dissatisfied
during the conversation. This gives the agent a better understanding of the customer's mindset, helping
them navigate the conversation accordingly and providing a more empathetic response.

#### Example:
If the sentiment analysis shows that the customer had a negative experience, the agent can approach the
conversation with care, ensuring they are attentive and solution-focused.
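
As a rough sketch of how this could be implemented, the snippet below asks an LLM to pick one label from a fixed set and falls back to "neutral" if the reply is unexpected. The `complete()` function is again a hypothetical placeholder for the actual model call.

```python
# Sentiment classification sketch with a constrained label set.
# `complete()` is a hypothetical placeholder for the real LLM call.
LABELS = {"positive", "neutral", "negative"}

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM endpoint")

def classify_sentiment(transcript: str) -> str:
    prompt = (
        "Classify the overall customer sentiment in this call transcript. "
        "Answer with exactly one word: positive, neutral, or negative.\n\n"
        f"Transcript:\n{transcript}"
    )
    answer = complete(prompt).strip().lower()
    # Guard against free-form replies so downstream logic stays simple.
    return answer if answer in LABELS else "neutral"
```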

---

### 3. **Intent Classification: Predicting the Purpose of Customer Calls**

Understanding why a customer is calling is crucial for efficient customer service. **Intent classification**
helps predict the main reason behind a customer’s contact.

#### Explanation:
By analyzing previous interactions, generative AI can classify the customer’s intent into categories such as
billing inquiries, product issues, or interest in promotions. This saves time and allows the agent to respond
quickly, without needing to ask many initial questions.

#### Example:
If a previous transcript indicates the customer’s interest in a promotion, the AI can classify this as a
*“Promotion Inquiry”*, and the agent can prepare information on available offers even before the call starts.
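
A minimal version of this classification step might look like the following, assuming a small fixed set of intents and the same hypothetical `complete()` helper; anything the model cannot map cleanly falls back to "other".

```python
# Intent classification sketch over a fixed set of business intents.
# The intent names and `complete()` are illustrative placeholders.
INTENTS = ["billing issue", "product question", "promotion inquiry", "other"]

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM endpoint")

def classify_intent(transcript: str) -> str:
    prompt = (
        "Which one of these intents best describes why the customer called: "
        f"{', '.join(INTENTS)}? Answer with the intent name only.\n\n"
        f"Transcript:\n{transcript}"
    )
    answer = complete(prompt).strip().lower()
    return answer if answer in INTENTS else "other"
```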

---

### 4. **RAG (Retrieval Augmented Generation): Expert-Level Assistance in Real Time**

Customers often face frustration when transferred between agents who don’t have the right answers.
**Retrieval Augmented Generation (RAG)** solves this problem by turning any agent into an expert on a
wide variety of topics.

#### Explanation:
RAG works by pulling information from multiple data sources (e.g., product documentation, FAQs, or
previous trouble tickets) and converting it into embeddings. These embeddings are stored in a vector
database, which can retrieve relevant content based on customer queries. The agent doesn’t need to be
transferred to another department, as AI automatically retrieves and presents the most relevant information.

#### Example:
If a customer asks a technical question about a product, RAG can pull up the relevant troubleshooting steps
in real time, allowing the agent to provide an accurate and fast response.
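
To make the retrieve-then-generate flow concrete, here is a minimal, self-contained sketch. The bag-of-words "embedding" and in-memory list stand in for a real embedding model and vector database, and the final LLM call is omitted; only the shape of the pipeline (chunk, embed, store, retrieve, build prompt) is the point.

```python
# Toy RAG pipeline: chunk -> embed -> store -> retrieve -> build prompt.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: word counts. A real system would use an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# 1. Chunk the knowledge sources (product docs, FAQs, past trouble tickets).
chunks = [
    "To reset the router, hold the reset button for 10 seconds.",
    "Billing disputes are reviewed within 5 business days.",
    "The spring promotion gives 20% off annual plans.",
]

# 2. Embed and store the chunks (this list plays the role of the vector database).
index = [(chunk, embed(chunk)) for chunk in chunks]

# 3. Retrieve the chunks most relevant to the customer's question.
def retrieve(question: str, k: int = 2) -> list[str]:
    query = embed(question)
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# 4. Build the prompt an LLM would answer from (the model call itself is omitted).
question = "How do I reset my router?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```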

---

### 5. **Automated Trouble Ticket Creation: Reducing Agent Workload**


Agents spend a significant amount of time taking notes and filling out forms after calls. LLMs can automate
this process by pre-populating trouble tickets.

#### Explanation:
The AI listens to the conversation, automatically creates a trouble ticket by filling in necessary fields, and
provides it to the agent for review. This minimizes post-call paperwork and increases productivity.

#### Example:
Once the call ends, a ticket with fields like *“Issue: Billing error, Resolution: Awaiting system update”* will
already be populated. The agent only needs to verify the information and submit the ticket, saving time.
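
One way to sketch the pre-population step is to ask the LLM for a JSON object with the ticket fields and validate it before showing it to the agent. The field names and the `complete()` helper below are hypothetical, not a specific ticketing system's schema.

```python
# Trouble-ticket pre-population sketch: ask the LLM for JSON, then validate.
import json

TICKET_FIELDS = ["issue", "product", "resolution_status", "next_step"]

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM endpoint")

def draft_ticket(transcript: str) -> dict:
    prompt = (
        "From this call transcript, fill a trouble ticket as a JSON object "
        f"with exactly these keys: {', '.join(TICKET_FIELDS)}. "
        "Use 'unknown' for anything not mentioned.\n\n"
        f"Transcript:\n{transcript}"
    )
    raw = complete(prompt)
    try:
        ticket = json.loads(raw)
    except json.JSONDecodeError:
        ticket = {}
    if not isinstance(ticket, dict):
        ticket = {}
    # Guarantee every field exists so the agent's review form never breaks.
    return {field: ticket.get(field, "unknown") for field in TICKET_FIELDS}
```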

---

### 6. **Personalized Product Recommendations and Next Best Action**

Generative AI can also assist agents by suggesting **product recommendations** or advising on the **next
best action** during the call.

#### Explanation:
LLMs analyze customer data to suggest products based on the customer’s needs. Additionally, AI can
recommend actions for agents to take during the call, guiding the conversation toward a solution.

#### Example:
If a customer expresses interest in upgrading their service, the AI might suggest an appropriate product
bundle. Similarly, if the call is about troubleshooting, the AI may prompt the agent to ask specific diagnostic
questions or suggest steps to resolve the issue.
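
A simple next-best-action step could be sketched as below: the model is constrained to choose from a list of actions the business already allows, so the agent is only ever prompted with something actionable. The action list and the `complete()` helper are illustrative assumptions.

```python
# Next-best-action sketch: the LLM chooses from a pre-approved action list.
ALLOWED_ACTIONS = [
    "offer service upgrade bundle",
    "run diagnostic questions",
    "escalate to billing team",
    "schedule a follow-up call",
]

def complete(prompt: str) -> str:
    raise NotImplementedError("wire this up to your LLM endpoint")

def next_best_action(conversation_so_far: str) -> str:
    prompt = (
        "Given the conversation so far, pick the single most helpful next "
        f"action from this list: {', '.join(ALLOWED_ACTIONS)}. "
        "Answer with the action text only.\n\n"
        f"Conversation:\n{conversation_so_far}"
    )
    answer = complete(prompt).strip().lower()
    # Default to the safe, information-gathering action if the reply is unexpected.
    return answer if answer in ALLOWED_ACTIONS else ALLOWED_ACTIONS[1]
```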

---

### Conclusion

Generative AI and LLMs offer transformative potential for customer service, addressing common pain points
like slow response times, lack of context, and poor customer experiences. Key technologies such as
summarization, sentiment analysis, intent classification, RAG, and automated ticket creation enable agents
to provide faster, more informed, and empathetic service.

By integrating these tools into customer care systems, contact centers can create more satisfying customer
journeys, enhance agent productivity, and reduce overall frustration for both customers and agents. As AI
technology continues to evolve, it promises to play an increasingly pivotal role in shaping the future of
customer service.

---

This paper highlights how AI, particularly through the use of LLMs, can revolutionize customer care, making
it more efficient, personalized, and responsive.
How Conversational and Generative AI Will Shape the Future of Contact Centers (youtube.com)

Topics include:
• An overview of conversational AI and generative AI
• How conversational AI can assist and improve real-time customer interactions via phone, live, and virtual
chat
• Using conversational AI to create savings and help improve the IVR experience
• Using generative AI to transform the speed and depth of knowledge sharing via live and virtual chat
• How generative AI can optimize knowledge management systems used by contact center employees
The Future of AI in Call Centers - Customer Experience 2025
SHANTANU MISRA:

MIRELLA:
What a year this has been with generative AI and conversational AI more broadly. If we had held this celebration
just 10 months ago, my presentation would have been very different.
Isn't that so, Čuč?

ČUČ:
Yes, Mirella, I would not be talking to you in this way! The GenAI evolution that we are seeing right now, and that we are
all part of, has transformed and will keep transforming how we interact with and deal with our clients, shaping
their expectations of our services.
But let me assure you all of one thing that will not change: people always come first, whether they are our customers or
our employees.

And Contact Center was ripe for that, and we've seen that play out over five years. And even with all the GenAI buzz,
I think one thing that everyone agrees with is that Contact Center is literally the single biggest
use case, or at least the first use case that will come to market with GenAI solutions. I think we can all give ourselves
a round of applause for that, for being literally at the vanguard of this.
Agenda
So we have a packed session today. We'll try to squeeze in time for Q&A at the end.
But if we don't manage to do that, you can always catch us at the booth downstairs. So I'm Shantanu.
I'm group product manager with our Contact Center AI team. I have with me Meena, my colleague
who leads go-to-market solutions for all of CCAI and beyond. We have with us two customers who
are going to be talking about their journey with Google. We have Alex from Gen Digital, who's
going to be talking about being an early adopter of CCAI platform and how now they're doubling down on Google.
And Swarup from Wells Fargo is going to be talking about the Fargo journey, which
was a conversational AI solution that they developed with Google.
So we're going to be covering a lot of topics. As I said, we have a packed agenda,
but we'll try to squeeze in some time towards the end for Q&A as well. If we think about contact centers,
the pain points are basically the same as we have seen over the years, which is users are frustrated.
They don't want to wait in long queues. We have huge agent turnover. 30%, 40% of the agents quit their job
in the first year itself. So we have huge onboarding time, and you need to empower the agent to handle complex
questions.
And then you have contact center operations, which is trying to juggle these different hats, trying to increase revenue
while also lowering cost.
And designing this customer experience just takes months and quarters. So all those problems-- if you were
to present this slide 3 years ago, it would basically show similar problems. But I think what has changed is the
expectations,
the expectation that an end user has when they reach out to a business.
Because end users are now used to tools like Bard and other tools in the market where
they can get these GenAI powered answers very quickly. So they are not-- they have very little empathy when
businesses
put them on hold for 10 minutes or struggle to find an answer to a question, which is very, very easy.
And also what I've observed over the last few months is that organizations' expectations
have changed as well. Like, we would go in with the customer and try to create a conversational AI
solution. And it was totally OK for us to spend six months doing that.
But that's no longer the case. We want solutions to start adding value day one onwards.
And organizations expect Google and other providers to be able to match that expectation.
So when we think of CCAI, we really think of that in terms of four broad pillars.
Our approach is AI first, and it has been that since day one. And in some ways, these products are
arranged in order of when they actually launched, but also in terms of the user journey.
You typically get started off with a virtual agent, which is powered by Dialogflow.
Dialogflow is a solution that's been GA for 5-plus years. For each of these solutions, we're
going to be talking about how we are creating the next generation version of that powered by LLM.
After Dialogflow, the call gets escalated to a human agent. You have Agent Assist, which is a suite of tools
which helps the agent answer the question better. And you can imagine if the virtual agent is doing its job really,
really well, it's
really the tough questions that end up with the live agents, and they really need to be empowered to handle those
questions.
With CCAI Insights, which is our third pillar, what we're really trying to do is make sense of the dark data that
exists in your contact centers, which is not being used to take business decisions today.
And we believe with uploading audio or transcripts, using our transcription tools to do that,
and then trying to make sense of that data can unlock a lot of value for businesses. And we have already seen that
happening in multiple customers
with CCAI Insights. And finally CCAI Platform, which we launched last year, which is our CCaaS solution.
And the whole idea behind the CCAI platform was we had these great AI services,
but our customers were struggling with integrating it with their existing solutions. There used to be long lead times.
And we really wanted to create an AI first CCaaS solution, where AI is not an afterthought, where
getting access to Google's solution is literally a click of a button. And we have seen huge success with CCAI platform,
and we have-- Gen Digital is going to be talking more about it in a moment.
As you know, and I think as Thomas Kurian and Sundar mentioned in their keynotes, our approach to GenAI
is comprehensive. So if you are a developer that wants to play with the models
or build a custom model, you can use solutions like Vertex AI or Model Garden to be able to do that.
If you want to-- if you want access to an AI platform where you can do prompt tuning, do evals,
you can do that with our AI platform. With CCAI and similar solutions, what we're really trying to do
is make all of that complexity simpler for you and present it to you in a way that makes sense for you.
So when it comes to pricing it in a way that makes sense for you, or integrating it
in a way that makes sense for you, or tuning these AI models to make sure that they address your use cases really,
really well.
I'm going to be talking more about CCAI services, and specifically about what are the new things
we are bringing to you and we're announcing today, which are powered by LLMs, which really elevate the experience.
And I wanted to play this video just to give you a glimpse of that.
[VIDEO PLAYBACK] [MUSIC PLAYING]
[END PLAYBACK] I know there's a lot in there, but we're going to unpack that over the course of this presentation,
and really talk about what are the tools that we are providing you to create an experience like that.
So as I mentioned, virtual agents powered by Dialogflow-- with Dialogflow CX, we really provide you
with a comprehensive suite through which to develop the virtual agent experience. When it comes to handling
detours,
gracefully handling fallbacks, with Dialogflow CX, you can handle all of that and do debugging analytics
and everything that comes with it. What we really want to focus on today is what are the new things that we are
bringing to the market.
And all these three are initiatives that we've been working on for a while powered by LLM and generative AI.
The first one is what we call the generative AI agent, which
we also sometimes refer to as info bots, which is basically creating a Virtual Agent experience just
by linking a source of information. So you literally just link your website FAQs
to the Virtual Agent, and you can have a conversational AI experience with it.
So it's literally at the click of a button. And what we've seen is a lot of use cases where in the past
our customers would have translated their existing business knowledge into Dialogflow CX graphs.
You don't need to do that anymore. You can just link those documents, and it works out of the box.
And we are announcing this in GA today. This has already been in production with multiple preview customers. I'm
sure a lot of you have probably
had access to this as well. And I'm going to talk more later about how this is also
powering the agent facing experience and making that better as well. The second key launch we have in Virtual Agent
is what we call generative responses. So essentially, today, if you think about it,
every single thing that your virtual agent can say has to be written in whatever virtual agent solution
you're using. But with generative AI, we can actually help generate those responses.
And we can do that at the pace that you're comfortable with. So if you're comfortable with the bot speaking
generative AI
responses only in case of a fallback, you can set that as well. Or for a given fulfillment, you can specify.
So you have full control over when the generative part of the responses take over.
The third key launch we have is what we call generative flows, which is launching in Preview today,
which is basically saying that you can write in natural language in English what the bot is supposed to do,
and we'll do that. So for example, if you have a use case of menu ordering--
I'm sure a lot of you have seen the Wendy's booth downstairs, which is exactly the same use case. You can literally
mention the menu
and say, hey, virtual agent, you're supposed to behave as a very courteous agent who
can take orders from this menu. You can define the persona the way you want. And we have templates for that, and
that's it.
You don't have to go ahead and create sort of thousands of intents, and pages, and flows and go through all that
effort.
And really, the theme that we're trying to hit on with this is the ultimate agent--
the virtual agent that goes into production is a much simpler graph than what you're used to.
In the past, we have had some of our power users
who would create thousands of intents and pages because they needed to do that. You literally needed to teach your Virtual
Agent
how to speak English. But now you can really focus on the business rules and on the content of what the agent is
supposed
to talk about, and not worry about these scenarios which end up consuming a lot of your development time
and a lot of the agent size as well.
Agent Assist
The other thing I wanted to emphasize is that this comes as part of Dialogflow. So we know most of the people in this room
are already on a journey with Dialogflow. You don't have to replace all of that with this new experience.
You can start augmenting your existing Dialogflow virtual agent with elements of generative AI.
And that's very important. Because depending on the industry, depending on what kind of an experience
you want for your end users, your appetite for how generative the responses should be
might be very different. And you get to decide which elements of your virtual agent are powered by these
technologies versus which
rely on the existing Dialogflow CX technology. We are also announcing Dialogflow Call Companion
in Preview. And it's what you saw in the video with the multimedia input.
Dialogflow Call Companion powers it, which is basically: while a voice call is going on, can we give the end user
an experience where they actually have on their browser, on their phone, a visual menu as well,
using which they can navigate your IVRs.
I wanted to spend some time on Agent Assist. So Agent Assist, as I mentioned, is a suite of features
that we have to empower your agents to make them more productive. We are announcing today two new features--
Knowledge Assist, which is launching in Preview. And, as I mentioned, you should think
of this as the generative AI agent or the info bot for your live agent.
So think of two scenarios. Your agent is in a conversation with the end user.
And they could-- they have an agent on which they could search for answers.
But we actually want to elevate that experience and proactively provide suggestions to the agent
while the conversation is going on so the agent does not even need to search for it. Based on the conversation that's
going on, a generative AI
Knowledge Assist could tell them, hey, it seems the user is asking about this. This information already exists in the
documents.
Here you go-- and make that experience better for them.
Next, summarization is such a key end of call activity
that happens for some customers literally at the end of each call. We have had customers who spend-- whose agents
spend 90 seconds at the end of the call
to summarize the conversation and fill all of that information in the system of record.
We have been able to bring that time down to near zero. An LLM generated summary is better
at compliance, which means every single call actually gets summarized. With your live agent, sometimes they do not
summarize,
and you have all sorts of summaries in different formats. It's better, of course, at spelling and making sure
that it's actually human-readable and understandable. And in terms of quality, we have seen the LLM baseline
summarization to be as good as the human summarization. And you don't have to create a custom model. You don't
have to spend three months trying
to create something which is only for yourself. You can use our LLM summarization and get the job done.
And it's something which has seen a lot of success over the last few months as organizations have started adopting
generative AI.
Conversational Insights
CCAI Insights, as I said, tries to make sense of conversational data that you already have in your system.
The way we do that is having annotations for this conversational data.
So what are the key topics being discussed in your conversations? What are the key questions that the users are
asking about?
Can we summarize the conversation in Insights itself? How can we understand what are the key words which
matter to you or key phrases and highlight that for you? And what do we do with that?
We create custom metrics or aggregated metrics depending on your use case.
So for example, if you have a quality management use case where you want to create an agent-level dashboard,
you can do that with these metrics. Or you want to really understand what
are the key topics for virtual agent, with our topic modeling. You can figure out which are the topics which are being
discussed and are simple enough
for the virtual agent. You can design your agent training based on the insights that you glean from this.
We are announcing today Generative FAQ as part of insights in Preview.
And what that does is really extract the key questions from your conversation.
So you want to understand, hey, I have these millions of conversations that flow through my system.
What are the questions that our users are really asking? And is my FAQ, is my agent training,
is my virtual agent program aligned with that or not? And that has been very meaningful for customers
because they're like, oh, I didn't even realize that there are x number of questions that our users are asking for,
because I thought my FAQ was
robust and comprehensive, or my agent training program was complete. And that's how Generative FAQ is really
being used, to make it more dynamic instead of a once-a-year or once-every-two-years process of figuring out what those
FAQs would be.
CCAI Platform
I want to spend some time talking about CCAI Platform, because as I said,
we've been very excited about this program. We launched it last year. And since then, we have had thousands of agents
agents
onboarded on CCAI Platform. The message that we were trying to convey to the industry
has resonated really, really well with customers and analysts. So what we're really doing is, as I said,
bringing an AI first approach to CCaaS instead of AI as an afterthought.
We are trying to ensure that all of the GenAI LLM-driven capabilities that I talked about are available on day
one on CCAI Platform. So you might have your contact center provider taking,
whatever, six months, 12 months to integrate with Google. But with Google CCAI, you can just get it on day one itself.
And it's also developed-- CCAI platform co-develops these LLM capabilities along
with the AI services, so that integration itself is very smooth and is not hacky or just appears
bolted on. With CCAI Platform, we are also very keen on providing customers with turnkey solutions, which
are extendable. So you have a lot of CCaaS, cloud-first players in the market, which will provide you tools.
But then you need to create an entire army of developers working on it for multiple months
to spin up a good experience for you. With CCAI Platform, we want to make sure that you already
have these turnkey solutions and turnkey sort of templates that could get you started on day one itself.
IVA Only
Today we are announcing an IVA-only version of CCAI Platform.
We understand that sometimes, your multi-year deals and getting out of-- moving from an on-prem contact center
to a cloud contact center can be a multi-year journey, and we understand that. And that's why we created CCAI
Platform IVA-only option, which
gives you a way to consume all of our GenAI services by having a light touch pipeline going from your existing
contact center to Google CCAI. And we can work with whatever infrastructure you
have and really give you access to GenAI services as soon as possible.
Workforce Management
We're also announcing workforce management capabilities in CCAI Platform today.
And as I said, our idea with CCAI Platform is to be as comprehensive as possible. And our WFM announcement is a key
piece of that.
When it comes to scheduling, forecasting, adherence, all the things that you can imagine in a WFM system,
all of that comes native as part of CCAI Platform. And if you think about it, I talked about insights
and trying to glean metrics and annotations out of each and every single conversation.
Now that can actually feed into your WFO solutions and really make it much more powerful and much more dynamic
than it used to be. Again, we work with your existing WFM and QM providers
as well. We are announcing a partnership with Verint and Alvaria, where if you have these solutions,
CCAI Platform can seamlessly integrate with that. So I'm now going to hand it over to Meena,
who is our GTM lead, who's going to talk more about how we are bringing these solutions to market for you.
Thank you. MEENA VISHNAMPET: Thank you, Shantanu. [APPLAUSE]

Thanks. Good morning, everybody.


GenAI conversation is always very dear to me. Guess why? In 2020, the LLM model we were building at Google
was also called Meena. So I want to start off by taking you all back in time.
A few years ago, we had a dream, a dream to transform the contact center space.
And in this dream, we envisioned personalized, predictive,
and memorable conversations elevating the self-service space.
We thought about very happy super productive human agents
loving what they do, and being powered and coached by AI models on the fly.
We envisioned business intelligence as well as
insights being actionable and being offered to Contact Center
leaders to make the toughest business decisions. And we wanted to orchestrate all of this via a CCaaS platform
that's designed for scale, security, and speed. Now, as you all are aware, as we heard from Shantanu,
we've had this live for a while now. But this vision got even grander.
Because CCAI is going to be the launch vehicle to bring enterprise-ready generative
AI super-powering custom experiences. So how do we make this real for you?
So our focus is on choice and flexibility. One, you get to use the entire platform
with its full end-to-end offerings, or two, you get to consume the latest and greatest
of generative AI via your partnerships with OEMs. Or thirdly, as Shantanu just announced,
you can use the CCAI Platform as a lightweight conduit powering
the latest and greatest of generative AI on your legacy infrastructure. So with this choice and flexibility in mind,
here is what a transformation journey could look like. With our resilient focus on time to market,
we have come up with three value packages. So from day one to four weeks, you
can get info bots powered by generative AI up and running. You can have agent summarizations enabled just
like the demo that you saw around Cymbal Bikes. And you can have insights topic modeling up and running,
giving you insights into your call drivers and intents. So that's week 4.
From week 4 to week 12 could be a great time to get chat and voice enabled for Agent Assist,
as well as start augmenting and potentially replacing some of your speech applications.
Thirdly, as you enter into month 3 and onwards, this is the time for you to bring
in all of those transactional intents as well as leverage custom LLM modeling
for your unique business cases. So a transformation journey like this between day
zero to about three months and onwards can only be enabled when we have a large partner ecosystem.
So I'm super excited to share with you all that we have over 50 partners and several thousand subject-matter
experts who are trained to deliver these key CCAI
journeys for you at scale. With our partner ecosystem, what they
come with is a very deep industry expertise, the operational know-how, and most importantly,
the niche with respect to contact center skill sets. So with all the latest and greatest of generative
AI, as well as having partners with the know-how to implement the journey for you,
what do you think success would look like? So it's really the right time now
to pivot into two very important customer journeys here. These are stories of innovation, excellence,
as well as inspiration. Starting their journey on Dialogflow ES,
automating chat conversations, and then expanding that to Dialogflow CX, focusing
on the multi-language support, the omnichannel capabilities. And finally, landing on the CCAI platform,
solving for the focus areas of innovation, scale, speed, and security.
I'm extremely honored to welcome Alex Tran onto the stage as he's going to share the Gen Digital, TTEC, and Google
"Better Together" story. Welcome, Alex. [APPLAUSE]
ALEX TRAN: Thank you.
Alex
Hi, my name is Alex Tran. I am responsible for product development at--
product development and operations at Gen within the consumer support and inside sales organization.
Gen's portfolio includes Norton, Avast, LifeLock, Avira, AVG,
Reputation Defender, and CCleaner.
Gen
Our family of trusted consumer brands protects nearly 500 million consumers worldwide.
At Gen, our mission is to create technology solutions for people
so they can make the most of the digital world safely, privately, and confidently.
To deliver excellent customer experience,
an agent must be equipped with the right tools and trained to use the tools efficiently.
We needed to consolidate our multiple telephony
systems, carriers, CRMs, chat systems, workforce management,
contact flows, and AI automation across all of our brands.
We recently consolidated our contact center solution
onto Google's Contact Center AI Platform.
And so why did we choose Google?
At Gen, we believe that AI services applied to the Contact
Center space on an end-to-end platform would dominate customer service.
We believed that this would transform the customer and agent experience and leapfrog the industry.
Gen needed a solution that supported integrated AI and a real-time machine-translation capability
across all channels, including voice.
We wanted a bring-your-own-carrier solution that met our business needs: stability, security,
speed of deployment, and a conversation engine
that can be scaled across all channels. And finally, an AI solution that could keep up
with the rapid innovation, and the chance to partner with a leader in this space.
Implementation
So how did the implementation go? We have over 7,000 phone numbers,
domestic and international, 300-plus call flows,
2,000-plus agents, multiple carriers,
10-plus delivery locations. Oh, sorry about that.
I forgot-- introduction.
10-plus delivery locations and 21 languages.
Across all of our brands, we have experience with multiple telephony, chat, and CRM
providers. We have experience with every type of contact center
environment. On-premise, Infrastructure as a Service, Platform as a Service,
Software as a Service, we've tried them all. Based on our experience, the time that it takes to migrate
just one single voice platform is anywhere between seven months and one year, and that's provided
that you have a seasoned crew that owns product development, operations,
infrastructure, and development. With CCAI Platform, we delivered not only one,
but two telephony migrations in 4 and 1/2 months.
So basically, we received our environment freshly formatted in mid-April.
And that's this mid-April of 2023.
And we started the rollout to all of our contact centers beginning of August, and we completed the last one
last night. [APPLAUSE]
So with previous migrations, this would have taken approximately two years.
Our key business KPIs are service level,
average handle time, queue time, wait time, calls handled.
We were able to reach stability within the first one
to two weeks. Pretty amazing. And this is really a phenomenal indication
that our agents are trained, they're familiar with the new UI, they understand the new processes.
We have our agents' profiles correctly configured and migrated, and we have calls
routing to the right queues. So how were we able to accomplish this feat?
So we partnered with TTEC, a premier system integrator
for Google. And TTEC has their own contact center BPO
business that is currently also adopting Google CCAI Platform,
so they have to get this right.
Our partnership with Google gave us an opportunity to adjust the priorities of the critical features
that we needed at Gen. There were some gaps.
But Amit, Carolyn, and the product team at Google
did us a solid. We were able to influence the roadmap,
quickly close the gaps, and meet our timelines.
So additionally, we're also in the process of looping chat, which we estimated to be completed in three to four
months, which
is another record for us.
So what's been the impact to our business? Well, we realized our synergy goals,
a reduction of platform and infrastructure costs. Google's voice network architecture
gives you the ability to ingress the customer call at the point closest to the customer,
send it over Google's high-speed network, and egress the call at the point closest to the agent's client.
And what that resulted in is outstanding, high-fidelity audio
without the need for costly MPLS networks.
We have a single pricing licensing model for all of our channels.
So whether it's voice, chat, SMS, email, messenger, social,
this allows us to engage our customer on the channel that works best for them.
And we have the ability to scale our licensing up or down
based on seasonal traffic.
Future plans
So we started our journey with Dialogflow in 2018 right as it went GA.
And we needed a conversation engine that was portable. We didn't want to be in a position
where we had to rebuild all of our automation, and--
from scratch. And since we transitioned to Dialogflow, we migrated our chat platforms three times.
And in every instance, we did not need to redo our automation.
We get about 30% containment from voice and 40% from chat.
So the Google SaaS model gives us
the ability to try out new features without all of the upfront investment and commitments.
Looker, BigQuery, Vertex AI, Agent Assist, Speech-to-Text,
sentiment analysis.
So what's next? What is the future? What's in store for Gen in the future?
So there's been a lot of conversation at this conference about Vertex AI.
For Gen, we are interested in AI to assist our customers and our agents.
So at the end of every contact, agents spend several minutes to summarize the case
and to classify the type of call. Operationally, we want to get to a granular case type,
but what we found out is that the longer the list we make, the agents just choose the first option from the list.
So we believe that we can leverage Vertex AI and agent assist to significantly reduce the time that it
takes to summarize and classify the case. We want to enable our agents with AI-powered conversations
for all of our voice and digital channels.
To provide our 500 million customers with a personalized experience, we need to, one, identify the customer.
Two, we need to disposition their needs. And three, we need to route the appropriate--
we need to route them to the appropriate queue, whether it's self-help or to an agent.
AI routing will be a big part of the customer journey.
We want to use AI recommendations for cross-sales and upsells.
We want to leverage Google Dialogflow's native integration with generative AI.
So when we're unable to detect a customer's intent, we can fall back on generative AI to provide the knowledge.
There are times where you're going to-- where you need to have both a directive
conversation, like performing a multi-step transaction.
We need the ability to switch between directive and a generative conversation, and Dialogflow makes
that integration seamless. Google is one of two solutions that
have complete vertical integration, and it's at the forefront of AI.
We plan to make full use of Google's innovation now that our contact center platform is
tightly coupled with the Google technology stack. That includes Google Cloud, Looker, Insights, BigQuery,
Vertex AI, and Dialogflow. So I would like to thank the team and the entire Gen
consumer support organization and inside sales organization, my team, TTEC, and Google for the teamwork
to realize our vision. And if anyone else tells you you cannot transform your contact
center platform quickly, give me a call. Thank you for your time.
[APPLAUSE]
[INAUDIBLE]
Digital transformation
SWARUP POGALUR: Thanks, Alex. One of the advantages of going last is I get to say the final words, which also feels
very familiar to the world we live in.
Three minutes, faster delivery very much summarizes what we went through. So I'll quickly go through our digital
transformation that
was primarily enabled through Dialogflow and the contact center technologies that Google offers.
One of the big challenges is that, traditionally, every bank is looked at as slow-paced in change,
and Wells Fargo is a 150-year-old bank. So we were looking at not just accelerating but leapfrogging
to more modern experiences to keep up with our customers' expectations, because we are holding ourselves
to higher standards from a digital experience standpoint and want to have digital parity, providing
a familiar experience similar to the best digital experience the customer had prior to landing
on the Wells Fargo mobile app. With that, we started off with some founding principles around making it simplified,
personalized, and insightful.
And as we went through this journey, we partnered with Google as a part of our strategic partnership.
It was publicly announced that we are partnering with Google to transform our entire infrastructure
and move our strategic AI assets and data assets into the Google Cloud.
And one of the first use cases that we started off with was the most complex one, which was Fargo.
And I'll quickly run through some of the things that Fargo can do.
[VIDEO PLAYBACK] As it plays, there's no music, but I'll try to talk through.
Fargo is our virtual Assistant that was launched about four months ago in partnership with Google.
Behind the scenes, there's the Dialogflow agent. But the whole idea was to make the entire mobile interactions
through our app very familiar-- a little less bank speak, but very natural.
And with a few tenets that are highly engaging from a customer standpoint.
So for example, if you had to look up your financial health
or get to the transactions faster, which by the way, is the most used feature on our app today.
And also any of these steps that you could imagine with any of your banking app that you use,
these tend to be multi-step. And we are trying to reduce the amount of time you spend
in living in a banking world. Nobody wakes up in the morning saying,
hey, I want to do banking. We want to take that friction away. And most importantly, we also want
to get to personal insights over a period of time. This is not the CCAI Insights.
That's for different operators. This is for the users. So bottom line, I think we've launched this.
[END PLAYBACK] I know everybody here, we've had several partners from Google and a whole ecosystem of teams.
It definitely does take a village to get something out there. And this also happens to be our first use
case on our Google Cloud-- virtual private cloud that we had started off. So many learnings, many takeaways.
And in terms of the Fargo evolution itself, one of the things that connects back to everything Shantanu
was talking about in how we look at evolving the experience,
but also tying back to every technology enhancement that is happening.
Where we would take this forward is from an assistant to an advisor kind of experience, where
over a period of time, these are done for the customers as opposed to them wanting to go into an app
and spending some time doing these activities.
Just to quickly go through, this is kind of our high level blueprint and where
Dialogflow sits today. We were very intentional in how we built this one out. Unlike the contact center experiences,
this is a digital and a mobile experience. And we modularized every building block
with some founding principles around security first, resiliency. And as you can imagine, from a mobile standpoint,
the number one adoption factor would be the response times. So we had to make sure that the entire platform was
extremely
resilient and had extremely fast response times as well.
And since this was our first foray into going out of our data centers into a Google data center,
it wasn't fun, but we had a lot of learnings from that.
That's all of my time, but one takeaway-- if you're a Wells Fargo customer and if you have not already
engaged with Fargo, please download the app. Please go to Fargo, provide your feedback.
And connecting back to Shantanu, this is just the journey of where we are. And we are closely partnering with Google
around the Trusted
Tester program to constantly look at where we can plug in all the new capabilities

The Future of Customer Service: 5 AI Trends In 2024 (And Beyond)


By Ciaran Doyle
July 1, 2024


Artificial intelligence (AI) is transforming Customer Experience (CX) by redefining the way that
businesses interact with their customers. From chatbots to predictive analytics, this
technology is truly revolutionizing how organizations provide customer service.

If your goal is to enhance customer satisfaction, loyalty, and retention, you can’t overlook the
potential of AI to improve your CX.

This article unpacks the future of customer service by looking at five emerging trends in the
world of AI. We’ll also explain how AI will support human agents, not replace them.

5 AI Customer Service Trends You Need to Know About


At Loris, we’ve identified five top trends unfolding in the world of customer service and AI:
1. Easy will be automated

This customer service trend is a no-brainer—if tasks are simple enough to automate, then you
should automate them.

AI-powered chatbots can answer 80% of routine questions that your customers have. They
can also answer these questions much faster than your customer service reps.

In this way, you’re providing your customers with a huge amount of value as they don’t have
to wait around for an agent to get back to them when they have a simple inquiry. Hello,
satisfied customers! It also removes the repetitive and often mundane questions that
customer service reps get, so they can focus on solving more interesting problems.

However, although AI self service tools will do some heavy lifting in this situation, there are
still 20% of questions that are too complex for a chatbot to handle. This is where your human
agents and their knowledge are needed.

A Nordic insurance firm adopted AI technology to help them automate a large part of their
claims management process.

Instead of having their agents spend countless hours on repetitive tasks like analyzing claim
documents to extract data, they’ve come up with an automated approach to handle this
work.

As a result, their agents now have more time to cultivate stronger relationships with their
customers. Although they’ve made the move to AI, the company is still placing humans at the
heart of their operations.

Like what you’re reading? Get the latest news and product updates delivered to you in our
newsletter.

2. AI replaces tasks, not teams

AI can play a role in tasks like data entry, organization, and identifying trends based on
customer data. These tasks are relatively simple, and AI could likely handle them faster and
more efficiently than a human could.

However, when it comes to customer service interactions, you’re dealing with people. And
people are complex. Customers require empathy, emotional intelligence, subjective decisions,
and critical thinking in a social context.
Rather than replacing your customer service representatives with AI, consider how AI can free
them from simple but time-consuming admin tasks so they can focus on complex
conversations and decisions.


Let’s look at human resources teams as an example. HR employees naturally need to


understand and manage complex human emotions and interpersonal dynamics, according to
Ian Moore, managing director at HR consultancy Lodge Court.

AI can’t replace the human element here. However, it can provide HR managers with data-
driven insights that can help them make decisions, improve recruitment processes, and
personalize employee training programs. This is a case where AI is replacing tasks rather than
teams.

3. AI will be your teammate, not your competition

As mentioned above, integrating AI into your customer service processes doesn’t mean
replacing your agents.

AI should only enhance the experience you provide to your customers. This makes AI a
valuable tool rather than something you have to compete with.


Most conversation intelligence platforms now include AI features that guide agents during live
customer interactions.

These features provide agents with the information and guidance they need to turn negative
customer sentiment into positive sentiment. This improves your customer service
performance, leads to happier customers, and makes your agents better at what they do.

AI-driven tools such as our Loris CoPilot can help agents surface information the customer
may not already have. By identifying what the customer is trying to achieve in the
conversation, Loris gives agents the information they need to solve customer problems.

This helps agents say the right thing at the right time, with empathy and a human touch.
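
As a rough illustration of the underlying idea (not a description of how Loris CoPilot actually
works), the sketch below scores the tone of each incoming customer message and surfaces a
suggested next step for the agent when a conversation turns negative. The word lists and
suggestions are invented for the example.

# Illustrative sketch of real-time agent guidance driven by message sentiment.
# The lexicons and suggestions below are placeholders, not a real model.

NEGATIVE_WORDS = {"angry", "frustrated", "terrible", "broken", "cancel", "refund"}
POSITIVE_WORDS = {"thanks", "great", "perfect", "love", "resolved"}

def sentiment_score(message):
    """Very crude sentiment: positive word count minus negative word count."""
    words = message.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def suggest_next_step(message):
    """Surface a prompt for the agent based on the customer's tone."""
    score = sentiment_score(message)
    if score < 0:
        return ("Acknowledge the frustration, apologize for the trouble, "
                "and offer a concrete next step or timeline.")
    if score > 0:
        return "Confirm the issue is resolved and ask if anything else is needed."
    return "Clarify what the customer is trying to achieve before proposing a fix."

if __name__ == "__main__":
    print(suggest_next_step("This is terrible, my order is broken and I want a refund"))
    print(suggest_next_step("Thanks, that fix was perfect"))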

4. Metrics will prioritize value over efficiency

There’s no doubt that AI can make your business more efficient, which can save you money.
However, the cost savings you get from greater efficiency won't necessarily lead to better
customer service interactions. And even with the approaches above, your efficiency gains will
eventually plateau.

Delivering real value to your customers is more lucrative in the long term because it helps you
grow your business. Great service that meets customer needs drives conversions, retention,
and loyalty. This often makes a greater impact on your bottom line than efficiency savings.

AI can be used to deliver value, as well as efficiency savings. For example, a conversational
intelligence platform like Loris allows you to monitor all your customer interactions across
your support center.

This means you can predict and identify issues that customers face and fix them before they
become larger problems. In doing so, your team can provide customers with more value.
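
One simplified way to picture this kind of monitoring (a sketch over made-up data, not Loris's
actual pipeline) is to count issue tags across recent conversations and flag any tag whose
volume spikes against its recent baseline:

# Sketch: flag customer issues that are spiking across recent conversations.
# The tagged conversations below are made-up sample data.

from collections import Counter

def spiking_issues(recent_tags, baseline_tags, ratio=2.0, min_count=3):
    """Return issue tags whose recent volume is at least `ratio` times the baseline."""
    recent = Counter(recent_tags)
    baseline = Counter(baseline_tags)
    spikes = []
    for tag, count in recent.items():
        expected = max(baseline.get(tag, 0), 1)  # avoid dividing by zero
        if count >= min_count and count / expected >= ratio:
            spikes.append((tag, count))
    return sorted(spikes, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    last_week = ["billing", "shipping", "login", "billing", "shipping"]
    today = ["shipping", "shipping", "shipping", "billing", "shipping", "login"]
    for tag, count in spiking_issues(today, last_week):
        print("Investigate '" + tag + "': " + str(count) + " reports today vs. baseline")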

American Express uses machine learning (ML) and AI to monitor for fraud in real time. The AI
is able to generate fraud decisions in milliseconds every time an American Express card is
used.

This isn’t something a human could ever do. So, in this situation, implementing AI doesn’t
save time or reduce the number of staff the company needs to serve its customers. But it
does protect its customers against fraud, providing a huge amount of value.
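
To give a flavor of what a real-time decision like this involves (a toy sketch only, not American
Express's actual system), a fraud check boils down to scoring each transaction and approving
or holding it within a tight latency budget:

# Toy sketch of millisecond-scale fraud scoring on each card transaction.
# The rules and threshold are illustrative; real systems use trained models
# over far richer features.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    home_country: str
    minutes_since_last_txn: float

def fraud_score(txn):
    """Combine a few hand-written risk signals into a 0..1 score."""
    score = 0.0
    if txn.amount > 5000:
        score += 0.4
    if txn.country != txn.home_country:
        score += 0.3
    if txn.minutes_since_last_txn < 2:
        score += 0.3
    return min(score, 1.0)

def decide(txn, threshold=0.6):
    return "HOLD_FOR_REVIEW" if fraud_score(txn) >= threshold else "APPROVE"

if __name__ == "__main__":
    print(decide(Transaction(42.50, "US", "US", 180)))   # low risk -> APPROVE
    print(decide(Transaction(8200.00, "FR", "US", 1)))   # several risk signals -> HOLD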

5. When AI is everywhere, people will be the differentiator

More than 77% of companies worldwide are either using or exploring the use of AI. This
means that AI is already everywhere we look. Some customers even expect to interact with AI
when engaging with a business.

The rise of AI-powered customer service means customers expect fast responses and quick
solutions to their problems.

However, customers also expect their inquiries or complaints to be handled by someone
knowledgeable. This is where the human element comes in.

Human agents are highly professional and knowledgeable about your product. They also have
high emotional intelligence, which AI can't compete with. Humans are also better able to
navigate complex customer complaints and inquiries.

Future Trends in AI-Driven Customer Insights 🚀

Andrea S.
Data Engineer | Solutions Architect | AI Engineer | Azure
June 27, 2024

In 2024, AI-driven customer insights are evolving rapidly, giving businesses unprecedented
capabilities to understand and serve their customers. Here are some key trends to watch:
1. Seamless Integration of Human and Digital Channels 🤖👥
AI is revolutionizing the customer experience by seamlessly blending human interactions with digital
channels. This integration ensures that customers receive a smooth, consistent experience whether
they're interacting with a chatbot or a human agent. Predictive AI plays a pivotal role by anticipating
when human intervention is needed, making the transition seamless and maintaining service quality.
This approach not only improves customer satisfaction but also enhances the efficiency of customer
service operations (CX Today, 2024).
2. Predictive and Proactive Customer Service 🔮📈
Predictive AI technology is set to transform customer service by identifying potential issues before they
arise and resolving them in real-time. This proactive approach helps in delivering a superior customer
experience and significantly reduces the chances of customer dissatisfaction. For example, AI can
analyze customer behavior and historical data to predict when a customer might need assistance,
thereby providing timely solutions (CX Today, 2024).
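As a deliberately simplified illustration of this idea, the sketch below scores customers on a few
behavioral signals and flags those who may benefit from proactive outreach; the signals, weights,
and threshold are hypothetical and not drawn from the cited sources.

# Sketch: flag customers who may need proactive assistance based on simple
# behavioral signals. Weights and thresholds are illustrative only.

def assistance_risk(customer):
    """Score 0..1: higher means more likely to need proactive outreach."""
    score = 0.0
    if customer["failed_logins_7d"] >= 3:
        score += 0.35
    if customer["support_tickets_30d"] >= 2:
        score += 0.35
    if customer["days_since_last_use"] > 14:
        score += 0.30
    return min(score, 1.0)

def customers_to_contact(customers, threshold=0.6):
    flagged = [(c["id"], assistance_risk(c)) for c in customers]
    return [(cid, score) for cid, score in flagged if score >= threshold]

if __name__ == "__main__":
    sample = [
        {"id": "c-101", "failed_logins_7d": 4, "support_tickets_30d": 2, "days_since_last_use": 3},
        {"id": "c-102", "failed_logins_7d": 0, "support_tickets_30d": 0, "days_since_last_use": 21},
    ]
    print(customers_to_contact(sample))  # only c-101 crosses the threshold
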
3. Hyper-Personalization and Customer Segmentation 🎯👥
AI-driven hyper-personalization is taking customer insights to a new level by combining data from
various sources such as behavioral, demographic, and transactional information. This allows businesses
to create highly personalized experiences for different customer segments, enhancing customer
engagement and loyalty. The ability to deliver tailored experiences based on deep insights into customer
preferences and behaviors is becoming a crucial competitive advantage (Dovetail, 2023).
4. AI-Powered Creative Processes 🧠🎨
Generative AI is revolutionizing creative processes by enabling the rapid creation of high-quality content.
This not only boosts productivity but also allows businesses to focus more on strategic tasks. The
integration of AI in creative workflows is expected to become widespread, helping companies to produce
personalized and engaging content at scale (Quirks, 2024).
5. Ethical AI and Regulation 🌐⚖️
With AI's growing influence, there is an increasing focus on ethical AI and regulatory frameworks. The
European Union's AI Act, for example, categorizes AI applications based on risk levels and imposes strict
regulations to ensure ethical use. This emphasis on AI safety and ethics aims to foster trust and
transparency in AI applications, which is critical for their acceptance and success (IBM, 2024).

Ladies and gentlemen, thank you for joining me today as we embark on a journey to explore the future
of contact centers, powered by the incredible capabilities of Artificial Intelligence.

Now, I want to take you back a few years. Imagine this: a world where every customer conversation feels
personal, predictive, and memorable. Where every interaction leaves a customer not just satisfied, but
amazed. We envisioned a time when contact centers were more than just a reactive place for solving
problems – they would become dynamic hubs of customer delight, driven by AI.

This dream wasn't just about technology for technology's sake. It was about people. Human agents,
empowered by AI, who love what they do because they’re equipped with insights and tools that help
them work smarter, not harder. Imagine an agent who no longer drowns in repetitive tasks, but instead,
focuses on creative problem-solving while AI handles the rest in the background. This is not science
fiction – this is happening now.

We imagined a platform that scales effortlessly. A system that gives decision-makers actionable insights
right at their fingertips. Imagine the power of making real-time decisions about your business, driven by
data that’s always a step ahead, thanks to AI.

But let’s fast-forward to today. We're not just talking about a distant dream anymore. The future we
imagined is here, and it’s evolving faster than ever before.

At the heart of this transformation is generative AI. Think of generative AI as a superpower – a force that
can unlock custom experiences for every customer, at every touchpoint, with precision and creativity. It
doesn’t just answer questions – it understands your customers, predicts their needs, and provides
solutions before they even know they need them. It’s like having a team of experts, ready to assist at
the exact moment they're needed, 24/7.
But how do we make this tangible for you, for your business?

We’ve designed AI solutions that offer flexibility and choice. Whether you’re looking to transform your
entire contact center ecosystem or simply enhance your current infrastructure, AI is adaptable. You can
integrate these cutting-edge tools into what you already have, or start fresh with a platform that’s built
for the future. And the transformation? It’s faster than you might think. In as little as four weeks, you can
have AI-powered chatbots, agent assist tools, and real-time insights at your fingertips. By week twelve,
you’ll be enhancing voice and chat capabilities, replacing outdated systems, and creating a contact
center experience that’s truly customer-first.

But this transformation isn’t just about technology – it’s about partnerships, expertise, and execution.
We’ve built an ecosystem of over 50 partners, with thousands of experts trained to guide you through
this AI revolution. These partners don’t just bring technology, they bring industry knowledge and the
operational know-how to make AI work for you, seamlessly and at scale.

So, what does success look like? Imagine a world where your contact center anticipates customer needs,
where agents are happier, more productive, and deeply engaged. Imagine making decisions that propel
your business forward because you have real-time insights guiding every step. AI isn’t just a tool – it’s a
catalyst for a new era of customer service.

The time to act is now. AI is no longer a distant vision, but a present-day reality that can redefine your
contact center. The future is personalized, predictive, and powered by intelligence beyond our
imagination. Let’s embrace this moment, innovate together, and create a future where every customer
interaction is not just a transaction, but a lasting impression.

Thank you!
