To help you get your head around the (sometimes quite confusing) world of AI, we’ve put together a glossary full of clear and simple explanations for some of the key terms around AI technology. We’ll be keeping it updated, so feel free to pop back whenever you’re in need of a little AI understanding.
AI Agent
AI agents are autonomous entities, often software programs but sometimes entire systems (eg: self-driving cars), that act in an intelligent manner (ie: perceive and interact with their environment) to take actions that achieve specific goals. There are different types of agents, such as simple reflex agents, learning agents, and utility-based agents. Examples of agents include smart vacuums, self-driving cars, personal assistants on your phone or smart-home hub, and even computer programs that run on their own without human involvement.
AGI (Artificial General Intelligence)
A hypothetical type of AI with intelligence equal or superior to a human’s, exhibiting self-awareness, consciousness, the ability to adapt to its surroundings, and other uniquely human abilities.
AI (Artificial Intelligence)
A branch of computer science focused on building machines and systems that mimic human intelligence. AI is purpose-built to analyze data, generate content, and make decisions in ways that mirror human processes – ultimately becoming able to manage much larger quantities and different types of data faster and with fewer errors than humans.
Algorithm
A set of instructions used by computers to perform calculations and solve problems. As a building block of machine learning and a key part of AI, algorithms are used to replicate tasks humans would normally do, but much faster and at much greater scale. There are many different types of algorithms, such as linear regression, K-means clustering, support vector machines, and more.
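To make this concrete, here's a minimal sketch of one of the algorithms named above, linear regression, fitted by hand with closed-form least squares. The toy data is made up for illustration; real workflows would use a library rather than this bare-bones version.

```python
# Minimal sketch: a linear regression "algorithm" fitted by hand.
# Closed-form least squares for y = slope * x + intercept on toy data.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # toy data lying exactly on y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)    # 2.0 1.0
```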
API (Application Programming Interface)
A set of protocols and routines that allow different software applications to communicate with each other and transfer data between themselves. The most commonly used API style is REST (representational state transfer); APIs that follow it are often called RESTful APIs. APIs can be used to create, read, update, and delete data.
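As a rough illustration, here's how a RESTful API maps the create/read/update/delete operations onto HTTP methods. The endpoint URL is hypothetical, and the request is only built, never actually sent:

```python
# Minimal sketch of how a RESTful API maps CRUD operations to HTTP
# methods. The endpoint URL is made up; no network request is sent.
from urllib.request import Request

CRUD_TO_HTTP = {
    "create": "POST",
    "read": "GET",
    "update": "PUT",
    "delete": "DELETE",
}

def build_request(operation, resource_url, body=None):
    # Prepare (but do not send) an HTTP request for the given operation.
    method = CRUD_TO_HTTP[operation]
    return Request(resource_url, data=body, method=method)

req = build_request("read", "https://api.example.com/users/42")
print(req.method, req.full_url)  # GET https://api.example.com/users/42
```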
Catastrophic Interference/Catastrophic Forgetting
A phenomenon in which artificial neural networks forget, either in part or completely, what they have learned after they’re trained on a new task or given new information. As ANNs take in new information, new pathways form between the neurons, and sometimes old pathways are weakened or eliminated. This is unlike humans, who retain old knowledge even while learning new things; we don’t overwrite memories but add to them.
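A toy way to see the effect, assuming a single linear "neuron" rather than a full neural network: train it on one task, then on a conflicting task, and watch its knowledge of the first task get overwritten.

```python
# Toy illustration of catastrophic forgetting with one linear "neuron".
# Train weight w on task A (y = 2x), then on task B (y = -2x); the
# gradient updates for task B overwrite what was learned for task A.

def train(w, data, lr=0.1, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            w -= lr * (pred - y) * x   # gradient step on squared error
    return w

task_a = [(1, 2), (2, 4)]    # y = 2x
task_b = [(1, -2), (2, -4)]  # y = -2x

w = train(0.0, task_a)
error_a_before = abs(w * 1 - 2)   # near 0: task A learned

w = train(w, task_b)
error_a_after = abs(w * 1 - 2)    # large: task A forgotten
print(error_a_before, error_a_after)
```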
ChatGPT
An LLM-based chatbot created by OpenAI that interacts with human users in a conversational way via text commands and questions, called Prompts. GPT stands for generative pre-trained transformer, a type of Neural Network model using the Transformer Model architecture.
Computer Vision
A field of AI that trains computers to understand images, videos, and data from other visual sensors through identification and classification of seen objects. Examples of computer vision include facial recognition, self-driving cars, movement analysis in medicine and sports, and plant identification.
Concept Drift
A phenomenon in machine learning where the data changes over time, causing the model to become less accurate or even completely incorrect. Concept drift can occur when the relationship between input and target variables changes, learned concepts evolve beyond their original bounds, or statistical properties shift in unforeseen ways. Real-life examples include significant changes to laws, drastic shifts in customer behavior or economics such as those during the COVID-19 pandemic, or discoveries that invalidate previous facts, as happens in science and medicine.
Data Science
A field, often considered a subset of AI, where practitioners are subject-matter experts focused on managing, processing, and interpreting data at scale to drive decisions and extract insights. Data scientists use complex math and statistics, advanced computer programming, analytics, machine learning, and AI.
Deep Learning
A type of Machine Learning and AI that processes data in ways inspired by the human brain, using artificial Neural Networks to learn and improve their output from raw input.
Explainable AI (XAI)
A type of AI that’s designed so humans can trust its results and understand how and why its outputs were created. It’s increasingly important as government regulation grows for high-impact uses such as those within medicine, finance, and the military.
Fine-Tuning
Taking a Pre-Trained model and customizing it to perform specific tasks or to achieve different behaviors. Fine-tuning avoids having to train a new model from scratch, allowing for faster time to value with lower expense and risk. Fine-tuning adjusts Neural Network parameters using new data, whereas transfer learning keeps the existing network parameters and adds new layers trained on new data.
Foundation Model
A large AI model trained on large datasets and capable of a wide range of tasks and output types. Foundation models are pre-trained models that are subsequently specialized to power applications, instead of building a new model from scratch or using an existing model as-is. A foundation model provides a solid starting point for customization.
Generative Adversarial Network (GAN)
A type of AI in which two Neural Networks compete with each other to create new synthetic data that resembles the Training Data. There’s a generator that learns to produce the output, and a discriminator that learns to distinguish between true data and the generator’s output. Examples of GAN applications include face image generation, 3D object generation, and image-to-image translation (e.g. creating a photograph from a sketch).
Generative AI or Gen AI
A type of AI that produces outputs such as text and images from human inputs, called Prompts. Content is generated by the AI in real-time using specialized AI models.
Graphics Processing Unit (GPU)
Sometimes called a graphics card or video card, a GPU is a specialized computer chip designed to create images and graphics. Unlike a traditional CPU (central processing unit), a GPU excels at parallel processing and performing complex mathematical calculations quickly – which is especially valuable for powering AI.
Hallucination
When an AI provides an incorrect response as if it were correct. These can happen for a number of reasons, including prompts that are deliberately designed to confuse the AI, poor training or low-quality training data, and lack of context in the prompt.
Image Recognition and Object Recognition/Object Detection
These are types of computer vision AI focused on making sense of visual data, such as imagery and videos. They’re trained to detect and classify objects and are the foundations for computer vision services such as facial recognition, tracking objects and people in CCTV systems, and tagging images for search services.
Inference
After training, models are put into production, where they handle live data to generate outputs, make predictions, and solve tasks. Inference uses an inference engine that applies logical rules to the model to evaluate and analyze the new information.
Inpainting
A type of AI that modifies an existing image by adding data where it’s missing or modifying existing data within the borders of the original image. Common use cases include selecting a specific object or section of an image and changing it (eg: color, different item) or image restoration (eg: removing water stains on a photograph).
Large Language Model (LLM)
A type of AI that uses deep learning techniques specializing in Natural Language Processing (NLP) tasks. LLMs are often trained on massive datasets at high costs, using both Unsupervised and Reinforcement Learning from Human Feedback (RLHF) to train and optimize the model. LLMs power well-known AI services such as OpenAI’s ChatGPT and Google’s Bard.
Low-Code
An application and software development approach that requires only limited development effort, leveraging visual interfaces and model-driven processes. This reduces the need to employ technical specialists with advanced programming and engineering skills to integrate systems and implement software solutions.
Machine Learning (ML)
A subset and type of AI that learns and improves without explicit programming, relying on Algorithms to perform tasks such as finding patterns in historical data, making logical decisions, making predictions, and classifying data.
Meta Learning
A type of machine learning popularly known as “learning to learn”, as it focuses on creating models that can take concepts already learned and apply them to new and different tasks. Meta learning takes outputs from other algorithms and stacks or layers them, allowing the combined predictions to be put to new and better use.
Multimodal AI
A type of AI that can understand, use, and generate multiple modes of data – such as text, images and audio. Multimodal AI can be more useful than unimodal AI because it more closely mimics human ability and can be more accurate and precise. It also saves the user time and effort as a single tool can handle broad data types, meaning the user doesn’t have to access and rely on different or specialized tools.
Natural Language Processing (NLP)
A type of Machine Learning and subset of AI that gives computers the ability to understand, generate, and manipulate human language, and to communicate using it. NLP works with both written/typed and spoken language.
Neural Network
A Machine Learning model that’s structured to resemble a human brain and uses powerful, complex Algorithms to make sense of raw data, providing meaningful output that humans may not have calculated or discovered. A Neural Network has an input layer, many hidden layers consisting of nodes/neurons – often called the ‘black box’ – and an output layer.
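The layered structure above can be sketched in a few lines. This is a toy, hand-wired network (2 inputs, 2 hidden neurons, 1 output) with made-up weights, just to show how data flows from layer to layer; real networks learn their weights during training.

```python
# Minimal sketch of a neural network forward pass: a tiny hand-wired
# network with 2 inputs -> 2 hidden neurons -> 1 output.
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(inputs, hidden_weights, hidden_biases, out_weights, out_bias):
    # Hidden layer: each neuron computes a weighted sum plus bias,
    # passed through a nonlinear activation.
    hidden = [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(hidden_weights, hidden_biases)
    ]
    # Output layer combines the hidden activations the same way.
    return sigmoid(sum(w * h for w, h in zip(out_weights, hidden)) + out_bias)

y = forward(
    inputs=[1.0, 0.5],
    hidden_weights=[[0.4, -0.2], [0.3, 0.8]],
    hidden_biases=[0.1, -0.1],
    out_weights=[1.2, -0.6],
    out_bias=0.05,
)
print(y)  # a single value between 0 and 1
```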
No-Code
An application and software development approach that requires no coding, leveraging drag-and-drop style interfaces that allow nearly anyone to create and modify applications. Integrations and in-app options are limited relative to low-code and custom-code solutions, making no-code ideal for simple, low-maintenance, low-criticality use cases.
Outpainting
A type of AI that extends the original image beyond its borders by creating new content based on the existing image. For example, a closeup image of a person standing on a street could be expanded to include the buildings around her and a sky that was only partially visible.
Parameter
A numerical value that represents a weight or bias in a Neural Network. An AI model’s Algorithms adjust Parameters as it learns from the input data, and in response to hyperparameters set by the engineers, to minimize output errors and maximize quality and value.
Pre-Training and Pre-Trained Models
The process of training a Machine Learning model on vast datasets so it’s useful across multiple tasks. Pre-trained deep learning models are popular because they’re effective as they are, but can also be fine-tuned for specific requirements and outputs. Examples of Pre-Trained Models (PTM) include OpenAI’s GPT-3 and Google’s BERT.
Prompt
Usually a typed input in the form of a command or question given to an AI tool such as a Large Language Model (LLM) chatbot. Prompts can have different purposes: informational for general questions and facts, creative to generate content that’s imaginative or made up, or instructional where the content is directions such as a recipe or how-to guide. They can also have different styles, such as zero-shot where no context is given, one-shot where a single example or simple template is provided, and few-shot where multiple examples or a more complex template are used.
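The zero-, one-, and few-shot styles are easy to see side by side. Below is a sketch using a made-up sentiment-labeling task; the prompts are plain strings, and actually sending them to an LLM is out of scope here.

```python
# Minimal sketch of zero-, one-, and few-shot prompt styles for a
# hypothetical sentiment-labeling task.

def make_prompt(text, examples=()):
    # Each example is a (text, label) pair prepended as context.
    shots = "".join(f"Text: {t}\nSentiment: {s}\n\n" for t, s in examples)
    return f"{shots}Text: {text}\nSentiment:"

zero_shot = make_prompt("I love this phone.")
one_shot = make_prompt("I love this phone.",
                       [("The battery died fast.", "negative")])
few_shot = make_prompt("I love this phone.",
                       [("The battery died fast.", "negative"),
                        ("Great screen!", "positive")])
print(few_shot)
```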
Prompt Engineering
The practice of designing inputs for Gen AI to produce optimal outputs. Prompt engineering involves creating precise questions or instructions, refining previous Prompts, directing the behavior and outputs with examples, and working through objections or limits with creative inputs.
Red Team
A multidisciplinary, cross-functional group that is offensive in nature and mimics adversaries seeking to exploit an organization’s AI. The act of “red teaming” provides valuable insights into how well the AI handles actions such as prompt attacks, training data extraction, backdooring the model, data poisoning, and exfiltration. Red teams can also seek to validate whether the AI’s outputs are fair or biased, provide harmful or illegal content, or even perform poorly under benign personas and novel uses.
Reinforcement Learning (RL)
A Machine Learning (ML) training method that rewards desired outcomes and punishes undesired outcomes, similar to a human learning through trial and error. The agent, or system, that’s learning seeks to maximize rewards while it explores unknown states of its environment, leveraging what it’s learned so far.
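The reward-seeking trial and error above can be sketched with the simplest RL setting, a two-armed bandit. This is an illustrative toy (fixed rewards, an epsilon-greedy agent, and a seeded random generator), not a full RL environment:

```python
# Minimal sketch of reinforcement learning as trial and error:
# a 2-armed bandit with fixed rewards and an epsilon-greedy agent.
import random

random.seed(0)                     # deterministic run for illustration
rewards = [1.0, 5.0]               # arm 1 is better, but the agent doesn't know
estimates = [0.0, 0.0]             # the agent's learned value of each arm
counts = [0, 0]

for step in range(200):
    if random.random() < 0.1:      # explore: try a random arm
        arm = random.randrange(2)
    else:                          # exploit: pick the best-looking arm
        arm = max(range(2), key=lambda a: estimates[a])
    reward = rewards[arm]
    counts[arm] += 1
    # Update the running-average estimate for the chosen arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates, counts)  # the agent ends up favoring the better arm
```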
Reinforcement Learning from Human Feedback (RLHF)
A type of Machine Learning training that uses humans to influence the outputs and correct errors. Humans provide feedback, essentially rewarding positive outcomes and punishing negative outcomes, leading to improved performance and potentially less bias and increased safety.
Sentiment Analysis
A type of text analytics using Natural Language Processing (NLP) to determine whether a particular text is positive, negative, or neutral in tone. Identifying the emotional state is particularly useful for customer support, social media comments, product reviews, and emails.
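A crude version of this idea can be sketched with a hand-built word list. Real sentiment analysis uses trained NLP models; this lexicon approach is only a toy to show the positive/negative/neutral classification:

```python
# Minimal sketch of lexicon-based sentiment analysis using a tiny
# hand-built word list (real systems use trained NLP models instead).
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "poor"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support team was great and I love the product"))  # positive
print(sentiment("terrible experience and awful support"))              # negative
```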
Structured Data
Data that’s organized using a standardized format with well-defined structure, typically stored in a relational database, and often features rows and columns. Examples include dates and times, addresses, UPC codes, and stock market ticker prices.
Supervised Learning
A type of Machine Learning that uses labeled data to train its Algorithms to produce specific outputs. An example is spam filtering for email, where examples of spam are provided and, over time, the algorithms improve their ability to detect and correctly flag spam versus legitimate messages.
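The spam-filtering example can be sketched as a toy classifier trained on a handful of made-up labeled messages. It simply scores a new message by how many of its words appeared under each label in training; real filters use probabilistic or neural models:

```python
# Minimal sketch of supervised learning: a toy spam filter trained on
# labeled examples. It classifies by word overlap with each label's
# training vocabulary (real filters use probabilistic models).
from collections import Counter

training_data = [
    ("win money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in training_data:
    word_counts[label].update(text.split())

def classify(text):
    # Score each label by overlap with its training vocabulary.
    scores = {
        label: sum(counts[w] for w in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("claim your free money"))   # spam
print(classify("agenda for the meeting"))  # ham
```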
Training Data
Labeled examples used to train machine learning models. Vast quantities of information are fed into the models so they can learn to produce correct outputs on their own. For example, self-driving car systems are fed images and videos of cars and people so they know what to avoid, and thousands of pictures of different dogs allow a system to identify dog breeds at a vet clinic.
Transformer Model
A deep learning Neural Network architecture that specializes in Natural Language Processing (NLP) through mathematical techniques called attention or self-attention. A Transformer Model learns context and relationships between words and is able to create new text based on what it’s received and processed previously. Transformers use encoder-decoder layers, where the encoder converts variable-length input (split into units called Tokens) into fixed-length representations, and the decoder generates output sequences based on the representations obtained from the encoder.
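The attention computation at the heart of a Transformer can be shown on tiny hand-made vectors. This sketch assumes a single attention head and toy query/key/value vectors, using plain Python instead of a tensor library:

```python
# Minimal sketch of scaled dot-product attention on toy vectors.
import math

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    d = len(query)
    # Score each key against the query (scaled dot product).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)          # attention weights sum to 1
    # Output is the weighted mix of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))], weights

output, weights = attention(
    query=[1.0, 0.0],
    keys=[[1.0, 0.0], [0.0, 1.0]],
    values=[[10.0, 0.0], [0.0, 10.0]],
)
print(weights)  # the first key matches the query more closely
```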
Turing Test
A method to determine whether a machine can think like a human at such a high level that its performance is indistinguishable from an actual human’s. It was proposed by English scientist Alan Turing in 1950: a human judge asks questions of and interacts with two hidden agents, a computer and a human, and if the judge can’t reliably tell which is which, the machine passes. It’s a popular, but often criticized, method of attempting to determine whether a machine is truly intelligent.
Unstructured Data
Data of various formats that doesn’t follow a specific data model and isn’t stored in relational databases. Examples include image files, videos, text docs, and other rich media.
Unsupervised Learning
A type of Machine Learning that uses unlabeled data to train its Algorithms, which analyze and group the data on their own, without human intervention. Examples include clustering, anomaly detection, associations, and hierarchies.
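Clustering, the first example above, can be sketched with K-means on a handful of one-dimensional points. No labels are provided; the algorithm discovers the two groups itself. The data and starting centroids are made up for illustration:

```python
# Minimal sketch of unsupervised learning: K-means clustering of 1-D
# points into two groups, with no labels provided.

def kmeans_1d(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        clusters = [[], []]
        for p in points:
            nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.1]
centroids, clusters = kmeans_1d(points, centroids=[0.0, 5.0])
print(centroids)  # roughly [1.0, 9.5]
```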