Artificial Intelligence (AI) has been thrust into the spotlight — bringing with it a host of phrases, acronyms, and concepts that, until recently, were hardly used outside of computer science.
It’s fast becoming essential to understand these terms. If this new lexicon is overwhelming you, don’t worry — we’ve got your back. Here’s your pocket dictionary of the most common, need-to-know terms in artificial intelligence.
An algorithm is a set of rules a computer must follow while executing operations. Algorithms tell a computer how to act in various situations.
Combining multiple algorithms allows applications to perform more sophisticated tasks without human intervention. For example, a chatbot can use algorithms to suggest products based on a shopper’s purchase history or route customers to a specific human agent whose specialty best matches the incoming question.
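To make the idea concrete, here’s a minimal sketch of the kind of keyword rule an algorithm for routing chatbot messages might follow. The specialties and keyword lists below are hypothetical examples, not any real product’s logic.

```python
# Hypothetical routing rules: each agent specialty maps to trigger keywords.
ROUTING_RULES = {
    "billing": ["invoice", "refund", "charge"],
    "shipping": ["delivery", "package", "tracking"],
}

def route_message(message: str) -> str:
    """Return the agent specialty whose keywords appear in the message."""
    text = message.lower()
    for specialty, keywords in ROUTING_RULES.items():
        if any(keyword in text for keyword in keywords):
            return specialty
    # No rule matched, so fall back to a general queue.
    return "general"
```

A message like “Where is my package?” would be routed to the shipping queue, while anything with no matching keyword falls through to a general agent.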
Amazon Lex is Amazon’s service for building voice and text CUIs (see conversational user interface). Lex is built on the same conversational technology that powers the Alexa voice assistant. Amazon’s own AI chatbot is still in the works, but it’ll likely be based on Lex’s technology.
An application programming interface is a set of procedures that allows an application to be accessed by another service.
Think of APIs as the technology that powers plug-ins (such as Grammarly). If an API doesn’t exist, the program can’t be used on another site.
Artificial general intelligence is a hypothetical kind of AI that's capable of understanding and learning any task that humans perform.
No AGI exists today — all current AI is considered narrow (see narrow AI).
Artificial intelligence is the ability of machines to perform tasks that would otherwise require human intelligence. Examples of AI include understanding human text and speech, detecting and translating languages, and creating personalized recommendations.
Bard is Google’s AI-powered chatbot, designed to rival OpenAI’s ChatGPT. The bot was originally built using the Language Model for Dialogue Applications (see LaMDA), but it currently uses the Pathways Language Model (see PaLM). It is set to be widely released in 2023 and will place AI-generated responses to requests directly into Google search results.
In machine learning, bias occurs when an algorithm's result is changed in favor of or against a given idea. Bias is a systematic error that takes place because of incorrect assumptions in an algorithm.
For example, if an algorithm were trained only on data about apples and no other fruit, it would assume that apples are the only type of fruit. Because of bias, AI tools like chatbots are more likely to give certain responses over others, even when those answers may be false.
Big data is the name given to enormous data sets that are too large to process using traditional computing. Through data mining, powerful AI software can analyze these large databases to identify patterns and draw conclusions.
Access to big data lets AI solutions grow more intelligent and deliver more human-like interactions.
Bing AI is a chatbot that was released by Microsoft in February 2023. Based on GPT-4, Bing AI offers conversational answers to traditional search engine questions. The chatbot currently has over 100M active users and is a direct competitor to Google’s Bard.
Black box describes an AI system whose inner workings are impossible to view. Humans can’t find out how black box AI comes to a specific decision — only inputs and outputs can be observed.
ChatGPT, for instance, is an AI black box. It’s impossible to tell which answer it will give or why it gives any specific answer over another.
A chatbot is a computer program that simulates human conversation. Chatbots can be used in a variety of ways, but in terms of customer support, they often act as a virtual assistant answering customer FAQs.
Different chatbots have different capabilities — with the most advanced versions capable of more sophisticated tasks like detecting buying intent and even recommending products to shoppers based on location, demographic data, or purchase history.
Short for Chat Generative Pre-Trained Transformer, ChatGPT is a chatbot released by OpenAI in November 2022 that became popular because of its ability to give detailed, natural responses to a wide range of prompts.
Get a comprehensive look at ChatGPT’s role in customer support — including its potential uses and limitations — in our blog post, What Does ChatGPT Mean for the Future of Customer Service?
Claude is a chatbot developed by Anthropic that launched in March 2023. Anthropic emphasizes that Claude is meant to be an ethical chatbot, using something called constitutional AI to provide outputs that are helpful, harmless, and honest.
Clustering is the organization of data by AI into subgroups that contain certain common elements. It's useful for finding additional answers to similar questions.
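As a toy illustration of clustering, here’s a sketch that groups customer questions whose word sets overlap. Real systems typically run algorithms like k-means over text embeddings; this greedy word-overlap grouping is only a simplified, hypothetical example of the idea.

```python
def cluster_questions(questions, threshold=0.3):
    """Greedily group questions whose words overlap with a cluster's first member."""
    clusters = []
    for q in questions:
        words = set(q.lower().split())
        for cluster in clusters:
            rep = set(cluster[0].lower().split())
            # Jaccard similarity: shared words divided by total distinct words.
            overlap = len(words & rep) / len(words | rep)
            if overlap >= threshold:
                cluster.append(q)
                break
        else:
            # No existing cluster was similar enough, so start a new one.
            clusters.append([q])
    return clusters
```

Questions like “where is my order” and “where is my package order” would land in the same subgroup, while “how do I reset my password” would start a new one — which is exactly what makes clustering useful for finding additional answers to similar questions.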
Conversational AI is a type of technology, like a chatbot, that simulates human conversation, making it possible for users to interact with and talk to it.
Learn all the differences between chatbots and conversational AI in our Chatbot vs Conversational AI blog post.
A conversational user interface (also known as CUI or Conversational UI) is what allows computers to mimic conversations with real humans. These interfaces use Natural Language Processing (see below) to interpret incoming voice or text and reply with a response.
The two primary types of CUIs are voice assistants (like Siri and Alexa) and chatbots.
Data mining is the analysis of large databases to generate new information. Through data mining, AI tools become more effective at solving a wider variety of problems.
A decision tree is a structure of responses that helps a chatbot give specific answers to customer questions. By asking a series of questions whose possible answers form branches, chatbots can use a decision tree to narrow down a customer’s goal.
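A simple way to picture a chatbot decision tree is as nested questions, where each answer leads either to another question or to a final response. The questions and answers in this sketch are hypothetical.

```python
# Hypothetical decision tree: each node asks a question, and each branch
# leads to another node or to a final answer (a plain string).
DECISION_TREE = {
    "question": "Is your issue about an order or an account?",
    "branches": {
        "order": {
            "question": "Has the order already shipped?",
            "branches": {
                "yes": "Here is your tracking link.",
                "no": "You can still edit or cancel the order.",
            },
        },
        "account": "Let me connect you with account support.",
    },
}

def traverse(tree, answers):
    """Walk the tree using the user's answers until a final answer is reached."""
    node = tree
    for answer in answers:
        if isinstance(node, str):
            break
        node = node["branches"][answer]
    return node
```

Answering “order” and then “yes” walks two branches down and ends at the tracking-link response — each answer prunes away the branches that no longer apply.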
Deep learning is a type of machine learning in which multiple layers of networks are used to train algorithms using large data sets. As opposed to traditional machine learning, deep learning can understand unstructured data more effectively, often leading to higher-quality results.
An evolutionary algorithm (EA) uses mechanisms inspired by nature — think survival of the fittest — to solve problems better. Chatbots that use EAs test out and compare different possible responses to a question to determine the optimal way to answer a prompt.
Explainable AI refers to transparent systems that let people oversee how decisions or predictions are made by AI. XAI is the opposite of black box AI, whose inner workings aren’t easy to understand.
XAI is also known as interpretable AI or explainable machine learning (XML).
Fuzzy logic is an approach to computing based on varying degrees of truth as opposed to a binary true or false approach. Whenever chatbots have to respond to unclear or vague instructions, those built to incorporate fuzzy logic will come up with better, more natural responses.
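Here’s a small sketch of what “degrees of truth” means in practice: instead of classifying a request as urgent or not, a membership function returns a value between 0 and 1. The thresholds below are hypothetical.

```python
def urgency_degree(hours_until_deadline: float) -> float:
    """Return how 'urgent' a request is, from 0.0 (not at all) to 1.0 (fully)."""
    if hours_until_deadline <= 1:
        return 1.0
    if hours_until_deadline >= 24:
        return 0.0
    # Linearly interpolate between fully urgent (1 hour) and not urgent (24 hours).
    return (24 - hours_until_deadline) / 23
```

A request due in 12 hours isn’t simply “urgent: yes/no” — it’s roughly half urgent, and downstream rules can act on that degree rather than a hard cutoff.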
Generative AI is an umbrella term for any artificial intelligence that can create new content (like text or images) using the data it was trained on. This is different from “traditional” AI, which uses patterns to make predictions.
ChatGPT and Bard are examples of advanced generative AI. What makes this technology appealing is that it can produce content that is indistinguishable from that created by humans — allowing people to have natural conversations.
Meanwhile, traditional AI is typically used in technology like the bubble tree. In this example, users can only select from a limited number of pre-defined options, with the program ultimately trying to predict the user’s end need based on a series of prompts (like an elaborate game of 21 Questions).
To see how generative AI is transforming customer service, sign up for our interactive webinar, Generative AI in Customer Support: Benefits & Practical Applications.
Short for Generative Pre-Trained Transformer 4, GPT-4 is a language model released by OpenAI in March 2023 capable of producing human-like responses. It underpins the paid version of ChatGPT, while the free version runs on its predecessor, GPT-3.5.
It performs at a much higher level than its predecessors.
Grounding is the process of determining how factual a response generated by a chatbot actually is. Generative AI chatbots can deliver convincing answers — even when they’re wrong — so companies need to ground responses to ensure high levels of accuracy.
A hallucination occurs when a chatbot provides a nonsensical, irrelevant, or blatantly false answer. Hallucinations happen because of limitations in a chatbot’s training data or LLM (see large language model).
Think of the most bizarre, least helpful answers you’ve received from a chatbot — those are hallucinations.
A heuristic is a problem-solving technique that’s meant to quickly find an acceptable solution when picking an optimal solution is too time-consuming. AI tools use heuristic shortcuts to determine the best decision based on available data.
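As a sketch of a heuristic shortcut: rather than scoring every support agent to find the true best match (which can be slow at scale), a system might simply accept the first agent whose workload is “good enough.” The agents, ticket counts, and cap below are hypothetical.

```python
# Hypothetical agent pool with current open-ticket counts.
AGENTS = [
    {"name": "Ana", "open_tickets": 7},
    {"name": "Ben", "open_tickets": 2},
    {"name": "Chen", "open_tickets": 5},
]

def pick_agent(agents, cap=4):
    """Greedy heuristic: accept the first agent under the workload cap."""
    for agent in agents:
        if agent["open_tickets"] <= cap:
            return agent["name"]
    # Nobody is under the cap, so fall back to the least-loaded agent.
    return min(agents, key=lambda a: a["open_tickets"])["name"]
```

The trade-off is classic heuristics: the answer is acceptable and fast, but not guaranteed to be the single optimal assignment.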
Intent is the goal a human has when interacting with a machine. When a customer asks a chatbot about the location of their package, for example, a powerful AI tool would be able to recognize the user’s intent as obtaining information about their order status.
By correctly identifying a user’s intent, a chatbot can generate specific responses tailored to a person’s unique needs, helping them accomplish a particular task more quickly.
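Production chatbots detect intent with trained NLU models, but a keyword-scoring sketch shows the basic idea. The intents and keyword lists here are hypothetical.

```python
# Hypothetical intents, each with keywords that signal the user's goal.
INTENTS = {
    "order_status": ["where", "package", "order", "tracking"],
    "refund_request": ["refund", "money", "back", "return"],
}

def detect_intent(message: str) -> str:
    """Return the intent whose keywords best match the message."""
    words = set(message.lower().replace("?", "").split())
    scores = {
        intent: len(words & set(keywords))
        for intent, keywords in INTENTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

“Where is my package?” scores highest for the order-status intent, so the bot can respond with tracking information instead of a generic reply.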
A knowledge base is a set of data available for a program to draw on to perform a task or give a response. The larger the knowledge base an AI application has access to, the wider the range of problems it can solve.
It’s important to note that an AI program can only pull from the knowledge base it was given. For many online companies, an FAQ page serves as the basis for their knowledge base.
A language model is a neural network trained to generate text. By looking at a question, previously selected words, and other contextual cues, it creates a response designed to mimic human language.
Generative AI tools, such as ChatGPT and Bard, use language models to create unique, rephrased answers to questions. This way, users get the same information without receiving cookie-cutter responses.
Short for Language Model for Dialogue Applications, LaMDA is a group of conversational language models developed by Google in 2021. The LaMDA name is also given to a chatbot built using these models.
In 2022, LaMDA grew in popularity after a Google engineer claimed the chatbot had become sentient.
A large language model is a deep-learning algorithm that recognizes and generates content after training on massive amounts of data. The larger the data set is, the more effective a language model will be at understanding, translating, and predicting text.
Robust LLMs are why chatbots like ChatGPT can deliver impressive responses to a wide range of topics.
Machine learning is a subfield of artificial intelligence that involves teaching computers to perform new tasks without requiring explicit programming.
Thanks to machine learning, chatbots can self-improve without constant human maintenance and identify additional questions to automate on their own.
Narrow AI is a kind of technology that uses a learning algorithm to perform a single task that humans can do. Narrow AI tools can't apply any knowledge gained from the execution of one task to others.
All AI in existence today is narrow.
Natural language generation (NLG) is a subset of NLP that focuses on the outputs a chatbot gives to people.
NLG determines how logical, appropriate, and human-like a chatbot’s replies are.
Natural language processing is a program’s ability to interpret written and spoken human language. It allows computers to understand what people are saying, including their tone and intent.
Natural language processing is what enables chatbots to detect how a customer feels or what they’re trying to achieve, whether they’re frustrated and want to complain or simply trying to complete a purchase.
Find out how NLP can be leveraged in customer service in our blog post, How Does an NLP Chatbot Actually Work?
Natural language understanding (NLU) is a subset of NLP concerned with how well a chatbot comprehends the meaning behind the words people are using.
NLU is how accurately an AI tool takes the words it’s given and converts them into messages a chatbot can recognize.
OpenAI is an AI research laboratory that developed GPT-4 and ChatGPT. Based out of San Francisco, OpenAI was founded in 2015 by a group of founders and backers that includes current CEO Sam Altman, Peter Thiel, and Elon Musk.
Short for Pathways Language Model, PaLM is a large language model developed by Google AI in 2022 that rivals GPT-4 in its ability to generate human-like responses.
It has replaced LaMDA as the basis for Google’s Bard chatbot.
Predictive analytics is the application of AI to collect and use data to predict future trends and events.
An example of predictive analytics in business is Netflix's algorithm that's capable of recommending additional shows and movies to watch based on a person's viewing history.
Check out 5 Ways to Use Predictive Analytics in Customer Service to see how your business can apply this technology.
A programming language is a code that software developers use to write computer programs and instructions. Just like you can’t have written words without the alphabet, you can’t have computer programs without a programming language.
Self-service AI describes customer service tools that use AI to automate customer interactions.
Self-service platforms let shoppers handle many tasks, such as getting order status, making payments, and asking for a refund, without needing a human support agent.
Speech recognition is the process of training a computer to understand and respond to human speech. Siri, Alexa, and Google Assistant are all examples of AI-powered speech recognition applications.
A token is a sequence of characters or a piece of a word that a chatbot can process to interpret what a human user is saying. Reading tokens instead of entire words makes it easier for chatbots to understand what a user writes, even if misspellings or foreign languages are present.
For example, if someone writes weress my odrer?, advanced chatbots leveraging tokens can piece together and accurately respond to this question.
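To see why breaking words into smaller pieces helps, consider comparing character pairs (bigrams) instead of whole words: the misspelling odrer still shares pieces with order, even though an exact word match would fail. Note that real chatbot tokenizers use algorithms like byte-pair encoding; this bigram comparison is only a toy illustration of subword matching.

```python
def bigrams(word: str) -> set:
    """Split a word into overlapping two-character pieces."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def similarity(a: str, b: str) -> float:
    """Dice coefficient over character bigrams, from 0.0 to 1.0."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))
```

An exact word match scores weress against wheres as zero, but the shared bigrams give a similarity above half — enough for a bot to guess what the user meant.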
Proposed by Alan Turing in 1950, the Turing test is a test of a computer’s ability to display intelligence that is indistinguishable from human intelligence.
While the test is not without criticism, it’s still regarded as an important tool in determining an AI tool’s power.
AI is ever-evolving, and we created this glossary to serve as a living document. We encourage you to bookmark this page and check back regularly for any updates.
If you still have questions about AI’s role in customer service, one of our automation experts will be happy to walk you through it. Schedule a call today.