Artificial Intelligence (AI) has been thrust into the spotlight — bringing with it a host of phrases, acronyms, and concepts that, until recently, were hardly used outside of computer science.
It’s fast becoming essential to understand these terms. If this new lexicon is overwhelming you, don’t worry — we’ve got your back. Here’s your pocket dictionary of common, need-to-know terms in artificial intelligence right now.
An algorithm is a set of rules a computer must follow while executing operations. Algorithms tell a computer how to act in various situations.
Combining multiple algorithms allows applications to perform more sophisticated tasks without human intervention. For example, a chatbot can use algorithms to suggest products based on a shopper’s purchase history or route customers to a specific human agent whose specialty best matches the incoming question.
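As a toy illustration, a routing rule like the one above could be written as a short set of keyword checks. The keywords and team names here are made up for the example:

```python
# A minimal sketch of a rule-based routing algorithm (hypothetical rules).
def route_ticket(question: str) -> str:
    """Route a customer question to a team based on simple keyword rules."""
    text = question.lower()
    if "refund" in text or "return" in text:
        return "billing"
    if "ship" in text or "deliver" in text:
        return "logistics"
    return "general"

print(route_ticket("Where is my delivery?"))  # logistics
```

Real chatbots chain many rules like these (or replace them with machine learning), but the core idea is the same: a fixed set of instructions tells the computer what to do in each situation.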
An application programming interface (API) is a set of procedures that allows one application’s features or data to be accessed by another service.
Think of APIs as the technology that powers plug-ins (such as Grammarly). If an API doesn’t exist, the program can’t be used on another site.
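For instance, one program can call another service’s API over the web. This sketch only builds the request; the URL and key are placeholders, not a real endpoint:

```python
import urllib.request

# Hypothetical endpoint and key — a real service documents its own URL,
# parameters, and authentication scheme.
url = "https://api.example.com/v1/products?query=shoes"
request = urllib.request.Request(
    url,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
)

# Actually sending it would be one more line, skipped here because the
# endpoint above is a placeholder:
# response = urllib.request.urlopen(request)
```

The calling program never sees the other service’s internal code; it only uses the documented procedures the API exposes.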
Artificial intelligence is the ability of machines to perform tasks that would otherwise require human intelligence. Examples of AI include understanding human text and speech, detecting and translating languages, and creating personalized recommendations.
Bard is Google’s AI-powered chatbot, designed to rival OpenAI’s ChatGPT. The bot is built using Google’s Language Model for Dialogue Applications (see LaMDA). It is set to be widely released in 2023 and will place AI-generated responses to requests directly into Google search results.
In machine learning, bias is a systematic error that skews an algorithm’s results in favor of or against a given idea. It usually stems from incorrect assumptions built into the algorithm or gaps in its training data.
For example, if the algorithm only had information on apples and no other fruit, it would assume that the apple is the only type of fruit. Because of bias, AI tools like chatbots are more likely to give certain responses over others, even when those answers may be false.
Big data is the name given to enormous data sets that are too large to process using traditional computing. Through data mining, powerful AI software can analyze these large databases to identify patterns and draw conclusions.
Access to big data lets AI solutions grow more intelligent and deliver more human-like interactions.
A chatbot is a computer program that simulates a human conversation. Chatbots can be used in a variety of ways, but in terms of customer support, they often act as a virtual assistant answering customer FAQs.
Different chatbots have different capabilities — with the most advanced versions capable of more sophisticated tasks like detecting buying intent and even recommending products to shoppers based on location, demographic data, or purchase history.
Short for Chat Generative Pre-Trained Transformer, ChatGPT is a chatbot released by OpenAI in November 2022 that became popular because of its ability to give detailed, natural responses to a wide range of prompts.
Get a comprehensive look at ChatGPT’s role in customer support — including its potential uses and limitations — in our blog post, “What Does ChatGPT Mean for the Future of Customer Service?”
A conversational user interface (also known as CUI or Conversational UI) is what allows computers to mimic conversations with real humans. These interfaces use natural language processing (see NLP) to interpret incoming voice or text and generate a reply.
The two primary types of CUIs are voice assistants (like Siri and Alexa) and chatbots.
Generative AI is an umbrella term for any artificial intelligence that can create new content (like text or images) using the data it was trained on. This is different from “traditional” AI, which uses patterns to make predictions.
ChatGPT and Bard are examples of advanced generative AI. What makes this technology appealing is that it can produce content that is indistinguishable from that created by humans — allowing people to have natural conversations.
Meanwhile, traditional AI is typically used in technology like the bubble tree. Here, users can only select from a limited number of predefined options, with the program trying to predict their end need based on a series of prompts (think of an elaborate game of 20 Questions).
Short for Generative Pre-Trained Transformer 4, GPT-4 is a language model released by OpenAI in March 2023 that is capable of producing human-like responses. It powers the latest version of ChatGPT and performs at a much higher level than its predecessors, GPT-3 and GPT-3.5.
A heuristic is a problem-solving technique that’s meant to quickly find an acceptable solution when picking an optimal solution is too time-consuming. AI tools use heuristic shortcuts to determine the best decision based on available data.
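A classic example is planning a delivery route: checking every possible ordering of stops takes far too long, so a greedy "always go to the nearest remaining stop" shortcut finds a good-enough route fast. A toy sketch with made-up coordinates:

```python
from math import dist

# A greedy nearest-neighbor heuristic for ordering delivery stops.
# It finds an acceptable route quickly — not necessarily the optimal one.
def greedy_route(start, stops):
    route, remaining = [start], list(stops)
    while remaining:
        nearest = min(remaining, key=lambda p: dist(route[-1], p))
        route.append(nearest)
        remaining.remove(nearest)
    return route

print(greedy_route((0, 0), [(5, 5), (1, 1), (2, 2)]))
# [(0, 0), (1, 1), (2, 2), (5, 5)]
```

The heuristic can miss the best answer on tricky inputs, but it trades a little quality for a huge speedup, which is exactly the bargain AI tools make when data or time is limited.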
Intent is the goal a human has when interacting with a machine. When a customer asks a chatbot about the location of their package, for example, a powerful AI tool would be able to recognize the user’s intent as obtaining information about their order status.
By correctly identifying a user’s intent, a chatbot can generate specific responses tailored to a person’s unique needs, helping them accomplish a particular task more quickly.
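In its simplest form, intent detection can be sketched as matching a message against known intent keywords. Real chatbots use trained language models rather than hand-written lists like this one:

```python
# A toy intent detector — the intents and keywords here are invented.
INTENT_KEYWORDS = {
    "order_status": ["where", "package", "tracking", "shipped"],
    "refund": ["refund", "money back", "return"],
}

def detect_intent(message: str) -> str:
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "unknown"

print(detect_intent("Where is my package?"))  # order_status
```

Once the intent is identified, the bot can branch to the right answer, like looking up the customer’s order status instead of giving a generic reply.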
A knowledge base is a set of data available for a program to draw on to perform a task or give a response. The larger the knowledge base an AI application has access to, the wider the range of problems it can solve.
It’s important to note that an AI program can only pull from the knowledge base it was given. For many online companies, an FAQ page serves as the basis for their knowledge base.
A language model is a neural network trained to generate sentences. By looking at a question, the words it has already produced, and contextual cues (such as ideal response length), it creates a reply designed to mimic human language.
Generative AI tools, such as ChatGPT and Bard, use language models to create unique, rephrased answers to questions. This way, users get the same information without receiving the same cookie-cutter responses.
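The underlying idea — predicting the next word from the words so far — can be shown with a tiny bigram model. Real chatbots use neural networks trained on vastly more text, but the prediction step is conceptually similar:

```python
import random
from collections import defaultdict

# A toy bigram language model built from a tiny made-up corpus.
corpus = "your order has shipped . your order is delayed .".split()

# For each word, record every word that follows it in the corpus.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def next_word(prev: str) -> str:
    """Sample a plausible next word given the previous one."""
    return random.choice(model.get(prev, ["<unknown>"]))

print(next_word("order"))  # "has" or "is"
```

Because the model samples among plausible continuations instead of always picking the same one, it can phrase the same information in different ways — the rephrasing behavior described above, in miniature.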
Short for Language Model for Dialogue Applications, LaMDA is a group of conversational language models developed by Google in 2021. The LaMDA name is also given to a chatbot built using these models.
In 2022, LaMDA grew in popularity after a Google engineer claimed the chatbot had become sentient.
A large language model is a deep-learning algorithm that recognizes and generates content after training on massive amounts of data. The larger the dataset is, the more effective a language model will be at understanding, translating, and predicting text.
Robust LLMs are why chatbots like ChatGPT can deliver impressive responses to a wide range of topics.
Machine learning is a subfield of artificial intelligence that involves teaching computers to perform new tasks without requiring explicit programming.
Thanks to machine learning, chatbots can self-improve without constant human maintenance and identify additional questions to automate on their own.
Natural language processing is a program’s ability to interpret written and spoken human language. It allows computers to understand what people are saying, including their tone and intent.
Natural language processing is what enables chatbots to detect how a customer feels or what they’re trying to achieve: whether they’re frustrated and want to complain, or are simply trying to complete a purchase.
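At its crudest, tone detection could be sketched as checking a message for emotionally charged words. Real NLP models learn these signals from data; the word list below is made up:

```python
# A toy frustration detector (real NLP systems are far more nuanced).
NEGATIVE_WORDS = {"frustrated", "angry", "terrible", "complain", "broken"}

def sounds_frustrated(message: str) -> bool:
    """Flag a message if it contains any known negative word."""
    words = {word.strip(".,!?") for word in message.lower().split()}
    return bool(words & NEGATIVE_WORDS)

print(sounds_frustrated("This is terrible, I want to complain!"))  # True
```

A chatbot that detects frustration early can escalate to a human agent instead of looping the customer through more automated prompts.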
OpenAI is an AI research laboratory that developed GPT-4 and ChatGPT. Based out of San Francisco, OpenAI was founded in 2015 by a group that includes current CEO Sam Altman, Peter Thiel, and Elon Musk.
A programming language is a code that software developers use to write computer programs and instructions. Just like you can’t have written words without the alphabet, you can’t have computer programs without a programming language.
A token is a sequence of characters or a piece of a word that a chatbot can process to interpret what a human user is saying. Reading tokens instead of entire words makes it easier for chatbots to understand what a user writes, even if misspellings or foreign languages are present.
For example, if someone writes “weress my odrer?”, advanced chatbots that leverage tokens can piece the question together and respond accurately.
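Splitting a word into tokens can be sketched as greedy longest-match against a vocabulary of known pieces. Real chatbots learn their vocabularies from data (for example, via byte-pair encoding); the vocabulary here is invented for the example:

```python
# A toy subword tokenizer using greedy longest-prefix matching.
# Real systems learn their vocabulary; this one is made up.
VOCAB = {"where", "wher", "ess", "my", "odr", "er", "order"}

def tokenize(word: str) -> list[str]:
    """Split a word into the longest known pieces, falling back to characters."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("weress"))  # ['w', 'er', 'ess']
```

Even though “weress” is not a real word, most of it still maps onto known pieces, which is why token-based models cope with misspellings better than word-by-word matching would.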
Devised by Alan Turing in 1950, the Turing test is a test of a computer’s ability to display intelligence that is indistinguishable from human intelligence.
While the test is not without criticism, it’s still regarded as an important tool in determining an AI tool’s power.
AI is ever-evolving, and we created this glossary to serve as a living document. We encourage you to bookmark this page and check back regularly for any updates.
If you still have questions about AI’s role in customer service, one of our automation experts will be happy to walk you through it. Schedule a call today.