The Great Big Amplience AI Glossary

To help you get your head around the (sometimes quite confusing) world of AI, we’ve put together a glossary full of clear and simple explanations for some of the key terms around AI technology. We’ll be keeping it updated, so feel free to pop back whenever you’re in need of a little AI understanding.



AI Agents

AI agents are autonomous entities, often software programs but sometimes entire systems (e.g. self-driving cars), that act in an intelligent manner (i.e. perceive and interact with their environment) to take actions that achieve specific goals. There are different types of agents, such as simple reflex agents, learning agents, and utility-based agents. Examples include smart vacuums, self-driving cars, personal assistants on your phone or smart home hub, and even computer programs that run on their own without human involvement.

AGI (Artificial General Intelligence)

A hypothetical type of AI whose intelligence equals or surpasses that of humans, exhibiting self-awareness, consciousness, the ability to adapt to its surroundings, and other uniquely human abilities.

AI (Artificial Intelligence)

A branch of computer science focused on building machines and systems that mimic human intelligence. AI is purpose-built to analyze data, generate content, and make decisions in ways that mirror human processes – ultimately becoming able to manage much larger quantities and different types of data faster and with fewer errors than humans.


Algorithm

A set of instructions used by computers to perform calculations and solve problems. Algorithms are the building blocks of machine learning and a key part of AI, used to replicate tasks humans would normally do, but much faster and with greater power. There are many different types of algorithms, such as linear regression, K-means clustering, support vector machines, and more.
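If you’re curious what an algorithm looks like in practice, here’s a small illustrative sketch (in Python) of one of the types mentioned above, linear regression, which fits a straight line through data points:

```python
def linear_regression(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    covariance = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    variance = sum((x - mean_x) ** 2 for x in xs)
    slope = covariance / variance
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Points lying on the line y = 2x + 1:
slope, intercept = linear_regression([1, 2, 3, 4], [3, 5, 7, 9])
```

The same recipe produces the same answer every time — that predictability is what makes it an algorithm rather than a guess.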


Alignment

Alignment is a field of AI safety research concerned with ensuring AI aligns with human values, preferences, and outcomes. Alignment is achieved when an AI fully meets its intended objectives; misalignment occurs when those objectives are missed, even partially. Super-alignment focuses on aligning superintelligent AI.

API (Application Programming Interface)

A set of protocols and routines that allow different software applications to communicate with each other and transfer data between themselves. The most common style of API is REST (representational state transfer); APIs built this way are often called RESTful APIs. APIs can be used to read, create, update, and delete data.


Catastrophic Interference/Catastrophic Forgetting

A phenomenon in which artificial neural networks forget – either in part or completely – what they have learned after they’re trained on a new task or learn new information. As ANNs are given new information, new pathways form between the neurons, and sometimes this causes old pathways to be eliminated or broken. This is unlike humans, who retain old knowledge even when learning new things; we don’t overwrite memories but add to them.


ChatGPT

An LLM-based chatbot created by OpenAI that interacts with human users in a conversational way via text commands and questions, called Prompts. GPT stands for generative pre-trained transformer, a type of Neural Network model using Transformer Model architecture.

Cognitive AI

A subfield of AI that endeavors to build systems that learn and think like humans: handling complex reasoning, understanding intent and context, and excelling at natural language processing and creation. They aim to be adaptive, interactive, and better at handling ambiguous situations than traditional AI. They are increasingly multi-modal, mirroring the human senses to provide human-like perception.

Computer Vision

A field of AI that trains computers to understand images, videos, and data from other visual sensors through identification and classification of seen objects. Examples of computer vision include facial recognition, self-driving cars, movement analysis in medicine and sports, and plant identification.

Concept Drift

A phenomenon in machine learning where the data changes over time, causing the model to become less accurate or even completely incorrect. Concept drift can be caused by changes in the relationship between input and target variables, learned concepts evolving beyond their original bounds, or statistical properties changing in unforeseen ways. Real-life examples include significant changes to laws, drastic shifts in customer behavior or economics (such as those during the COVID-19 pandemic), or discoveries that invalidate previous facts, as happens in science and medicine.


Data Science

Often considered a subset of AI, data science is a discipline whose practitioners are subject matter experts focused on managing, processing, and interpreting data at scale to drive decisions and extract insights. Data scientists use complex math and statistics, advanced computer programming, analytics, machine learning, and AI.

Deep Learning

A type of Machine Learning and AI that processes data in ways inspired by the human brain, using artificial Neural Networks to learn and improve their output from raw input.


Doomer

An individual (and movement) with a strongly pessimistic, even fatalistic, view on the future of the world and society. Doomers believe global problems such as climate change, nuclear weapons, and runaway AGI are likely to lead to the extinction of humanity and destruction of nature without swift, serious intervention. Specific to AI, they’re focused on slowing down technological advancements to allow regulation and research to provide safeguards and ensure positive outcomes, sometimes earning them the moniker “safetyist.”


Edge Computing

A distributed computing architecture or framework that shifts the processing, storage, and usage of data closer to the end user and consuming devices. This is contrary to more datacenter-centric approaches that rely heavily on networks and powerful servers. Edge computing is especially important for IoT (internet of things) and for applications and use cases that require minimal latency, use real-time data, and need to minimize data transfer.

Effective Accelerationism

A philosophical movement that promotes the unhindered progress of AI and emerging technologies, holding that societal progress and technological progress are inextricably linked. It has a small but growing following, especially in Silicon Valley and similar circles, and supporters often put “e/acc” on their social media profiles and related content.

Explainable AI (XAI)

A type of AI that’s programmed so humans can trust its results and understand how and why the outputs were created. It’s increasingly important as government regulations increase for high-impact uses such as those within medicine, finance, and military.


Fine Tuning

Using a Pre-Trained model and customizing it to perform specific tasks or to achieve different behaviors. Fine tuning avoids having to train a new model from scratch, allowing for faster time to value with lower expense and risk. Fine tuning adjusts a Neural Network’s parameters using new data, whereas transfer learning keeps the existing parameters and adds new layers trained on new data.

Foundation Model

A large AI model trained on vast datasets and capable of a wide range of tasks and types of output. Foundation models are pre-trained models that are subsequently specialized to power applications, instead of building a new model from scratch or using an existing model as-is. A foundation model provides a solid starting point for customization.

Frontier Model

Foundation models that have exceeded the existing state-of-the-art and push the boundaries of performance and capability. They’re often still under development and used for research rather than public use, and they introduce potential risks not posed by other, less powerful models. Frontier models feature massive parameter counts, deliver exceptional results even on complex tasks, consume large compute resources, and provide inspiration and new possibilities across numerous applications. But as they are new and largely unproven in the real world at scale, instabilities and errors occur, and their risks and disadvantages aren’t yet fully understood.

Function Calling

A feature of language models that allows developers to describe a function in their code and pass that description to the model in a request. The model can then request calls to external functions or APIs, providing dynamic experiences and extending the functionality of the model; it returns its chosen function and arguments as JSON. Common use cases include fetching data, integrating with external tools, automating tasks, and interacting with APIs.
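As a rough sketch of the flow (the function description format and names below are illustrative, not any particular vendor’s API): the developer describes a function, the model replies with JSON naming the function and its arguments, and the developer’s code makes the actual call.

```python
import json

# A hypothetical function description in the JSON-schema style many model
# APIs accept (names and fields here are illustrative).
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def get_weather(city):
    # Stand-in for a real weather API call.
    return {"city": city, "temp_c": 18, "conditions": "cloudy"}

# Suppose the model replied with this JSON, asking for a function call:
model_response = '{"name": "get_weather", "arguments": {"city": "London"}}'

# The developer's code parses the JSON and performs the actual call.
call = json.loads(model_response)
registry = {"get_weather": get_weather}
result = registry[call["name"]](**call["arguments"])
```

The result would then typically be sent back to the model so it can answer in natural language.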


Generative Adversarial Network (GAN)

A type of AI in which two Neural Networks compete with each other to create new synthetic data that resembles the Training Data. There’s a generator that learns to produce the output, and a discriminator that learns to distinguish between true data and the generator’s output. Examples of GAN include face image generation, 3D object generation, and image-to-image translation (e.g. creating a photograph from a sketch).

Generative AI or Gen AI

A type of AI that produces outputs such as text and images from human inputs, called Prompts. Content is generated by the AI in real-time using specialized AI models.

Graphics Processing Unit (GPU)

Sometimes called a graphics card or video card, a GPU is a specialized computer chip designed to create images and graphics. Unlike a traditional CPU (central processing unit), a GPU excels at parallel processing and performing complex mathematical calculations quickly – which is especially valuable for powering AI.



Hallucination

When an AI provides an incorrect response as if it were correct. These can happen for a number of reasons, including prompts that are deliberately designed to confuse the AI, poor training or low-quality training data, and lack of context in the prompt.

Human-AI Teaming

The collaboration of and interdependence between humans and AI while working on the same tasks and goals, leveraging the strengths of each to produce better outcomes than either could alone. Examples include digital assistants, self-driving or driver-assist technologies in vehicles, and medical and safety robots. Human-AI teaming reduces human capital needs and errors while delivering positive results faster; humans are the main beneficiary. When the benefits flow both ways, with the AI also gaining from human involvement, the relationship is closer to human-AI symbiosis, a broader concept in which humans and AI evolve positively together, each continually improved through engagement with the other.


Image Recognition and Object Recognition/Object Detection

These are types of computer vision AI focused on making sense of visual data, such as imagery and videos. They’re trained to detect and classify objects and are the foundations for computer vision services such as facial recognition, tracking objects and people in CCTV systems, and tagging images for search services.


Inference

After training, models are put into production, where they handle live data to generate outputs, make predictions, and solve tasks. Inference uses an inference engine that applies logical rules to the model to evaluate and analyze the new information.


Inpainting

A type of AI that modifies an existing image by adding data where it’s missing or changing existing data within the borders of the original image. Common use cases include selecting a specific object or section of an image and altering it (e.g. changing its color or swapping it for a different item), or image restoration (e.g. removing water stains from a photograph).


Large Language Model (LLM)

A type of AI that uses deep learning techniques specializing in Natural Language Processing (NLP) tasks. LLMs are often trained on massive datasets at high costs, using both Unsupervised and Reinforcement Learning from Human Feedback (RLHF) to train and optimize the model. LLMs power well-known AI services such as OpenAI’s ChatGPT and Google’s Bard.


LoRA (Low-Rank Adaptation)

Low-Rank Adaptation is a technique for fine-tuning pre-trained models for a specific task or with new information. It reduces the number of trainable parameters, making it an attractive alternative to training new models, retraining entire models, or other augmentation options such as RAG. The “low rank” refers to representing model parameters in a lower-dimensional subspace that captures the essential features of the data, achieved by decomposing the parameter matrix into low-rank matrices. The adaptation process helps the model generalize better to the new target distribution.


Low-Code

An application and software development approach that requires limited development to use, leveraging visual interfaces and model-driven processes. This reduces the need to employ technical specialists with advanced programming and engineering skills to integrate systems and implement software solutions.


Machine Learning (ML)

A subset and type of AI that learns and improves without explicit programming, relying on Algorithms to perform tasks such as finding patterns in historical data, making logical decisions, making predictions, and classifying data.

Meta Learning

A type of machine learning popularly known as “learning to learn”, as it focuses on creating models that can take concepts already learned and apply them to new and different tasks. Meta learning takes outputs from other algorithms and stacks or layers them, allowing the combined predictions to be put to new and better use.


Multimodal AI

A type of AI that can understand, use, and generate multiple modes of data, such as text, images, and audio. Multimodal AI can be more useful than unimodal AI because it more closely mimics human ability and can be more accurate and precise. It also saves the user time and effort, as a single tool can handle broad data types, meaning the user doesn’t have to rely on different or specialized tools.


Natural Language Processing (NLP)

A type of Machine Learning and subset of AI that gives computers the ability to understand, generate, and manipulate human language, and to communicate using it. NLP works with both written/typed and spoken language.

Neural Network

A Machine Learning model that’s structured to resemble a human brain and uses powerful, complex Algorithms to make sense of raw data, providing meaningful output that humans may not have calculated or discovered. A Neural Network has an input layer, many hidden layers consisting of nodes/neurons – often called the ‘black box’ – and an output layer.
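As an illustrative sketch (with hand-picked toy weights rather than learned ones), here’s how data flows from an input layer through one hidden layer to an output:

```python
import math

def sigmoid(x):
    # A common "activation function" that squashes any number into (0, 1).
    return 1 / (1 + math.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Input layer -> hidden layer (one sigmoid neuron per weight row) -> output.
    hidden = [sigmoid(sum(xi * wi for xi, wi in zip(x, weights)) + b)
              for weights, b in zip(w_hidden, b_hidden)]
    return sum(h * w for h, w in zip(hidden, w_out)) + b_out

# Hand-picked toy weights; a real network learns these values during training.
y = forward([1.0, 0.5],
            w_hidden=[[0.4, -0.6], [0.3, 0.8]],
            b_hidden=[0.0, 0.1],
            w_out=[1.0, -1.0],
            b_out=0.5)
```

Training a real network means adjusting those weights and biases, over many examples, so the outputs become useful.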

Neuromorphic Computing

An emerging field of computer engineering that mimics the human brain and nervous system in computing structure, functions, and algorithms. Neuromorphic computing promises a smaller physical and energy footprint than traditional architectures, as well as better fault tolerance and friendlier parallel processing. Neuromorphic systems also use spiking neural networks, whose neurons can produce a richer range of signals than a simple binary output, greatly expanding capability.


No-Code

An application and software development approach that requires no development to use, leveraging drag-and-drop style interfaces which allow nearly anyone to create and modify applications. Integrations and in-app options are limited relative to low-code and custom code solutions, making no-code ideal for simple, low-maintenance, low-criticality use cases.


On-Device ML/AI

An emerging method of running inference on local devices instead of cloud or on-premise infrastructure. The models used are often much smaller than their cloud-based siblings, but still provide value. On-device ML/AI provides near-zero latency, much stronger data privacy, offline functionality, and reduced network and compute demands.


Outpainting

A type of AI that extends the original image beyond its borders by creating new content based on the existing image. For example, a closeup image of a person standing on a street could be expanded to include the buildings around her and a sky that was only partially visible.



p(doom)

A term used to quantify the probability of AI, specifically AGI/superintelligence, creating a doomsday scenario and ultimately causing the extinction of humanity. There are no official criteria; rather, individuals or groups may decide on a p(doom) as a point of discussion to quickly convey their position on technology’s existential risk.


Parameter

A numerical value that represents a weight or bias in a Neural Network. An AI model’s Algorithms adjust Parameters as it learns from the input data, and in response to hyperparameters set by the engineers, to minimize output errors and maximize quality and value.

Pre-Training and Pre-Trained Models

The process of training a Machine Learning model on vast datasets so it’s useful across multiple tasks. Pre-trained deep learning models are popular because they’re effective as they are, but can also be fine-tuned for specific requirements and outputs. Examples of Pre-Trained Models (PTM) include OpenAI’s GPT-3 and Google’s BERT.


Prompt

Usually a typed input in the form of a command or question given to an AI tool such as a Large Language Model (LLM) chatbot. Prompts can have different purposes, including informational for general questions and facts, creative to generate content that’s imaginative or made up, or instructional where the content is directions such as a recipe or how-to guide. They can also have different styles, such as zero-shot where there’s no context, one-shot where an example is provided or a simple template is used, and few-shot where multiple examples or a more complex template is used.

Prompt Engineering

The practice of designing inputs for Gen AI to produce optimal outputs. Prompt engineering involves creating precise questions or instructions, refining previous Prompts, directing the behavior and outputs with examples, and working through objections or limits with creative inputs. 


Q(star)/Q-Learning/Model-Free Reinforcement Learning

A machine learning approach that doesn’t require a model of its environment and doesn’t predict outcomes, but instead the agent learns using a reward and punishment scheme of sorts through trial and error. The agent perceives its environment and learns directly from its experience, maximizing rewards. The benefits are numerous, including computational efficiency, reduced bias, supporting larger representations, and handling challenging scenarios.
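A minimal sketch of tabular Q-learning, assuming a toy five-state corridor with a reward at the right end (the environment and all numbers here are invented for illustration):

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, 1)                        # step left / step right
alpha, gamma = 0.5, 0.9                  # learning rate, discount factor
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(1000):                    # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS)       # explore at random (Q-learning is off-policy)
        s2 = min(max(s + a, 0), GOAL)    # walls at both ends of the corridor
        reward = 1.0 if s2 == GOAL else 0.0
        # Core update: nudge q toward the reward plus the best future value.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy prefers stepping right (toward the reward) in every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)}
```

Note there is no model of the corridor anywhere: the agent learns which moves pay off purely from trial, error, and reward.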


RAG (Retrieval-Augmented Generation)

A method of combining an LLM with external data sources, such as a vector database, to provide higher quality outputs and more contextual experiences in real time without having to retrain the underlying model. This gives users the ability to provide their own documents and data sources for a more dynamic, customized Gen AI experience than the static, parametric knowledge baked into the model.
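To illustrate the idea, here’s a toy sketch of the retrieval step using simple word overlap; a real RAG system would use vector embeddings and a vector database instead, and the documents below are invented:

```python
import re

# Toy document store (invented example content).
documents = [
    "Our returns policy allows refunds within 30 days of purchase.",
    "Shipping is free on orders over 50 dollars.",
    "Gift cards never expire and can be used online or in store.",
]

def words(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs):
    # Retrieval step: pick the document sharing the most words with the question.
    return max(docs, key=lambda d: len(words(question) & words(d)))

# Augmentation step: paste the retrieved context into the prompt for the LLM.
question = "How many days do I have to return a purchase?"
context = retrieve(question, documents)
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The LLM then generates its answer from the augmented prompt, grounding the response in your data rather than only its training.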

Red Team

A multidisciplinary and cross-functional group that is offensive in nature and that mimics adversaries seeking to exploit their AI. The act of “red teaming” provides valuable insights into how well the AI handles actions such as prompt attacks, training data extraction, backdooring the model, data poisoning, and exfiltration. Red teams can also seek to validate if the AI outputs are fair or biased, provide harmful or illegal content, or even perform poorly because of benign personas and novel uses.

Reinforcement Learning from Human Feedback (RLHF)

A type of Machine Learning training that uses humans to influence the outputs and correct errors. Humans provide feedback, essentially rewarding positive outcomes and punishing negative outcomes, leading to improved performance and potentially less bias and increased safety.

Reinforcement Learning (RL)

Machine Learning (ML) training method that rewards desired outcomes and punishes undesired outcomes, similar to a human learning through trial and error. The agent, or system, that’s learning seeks to maximize rewards while it explores unknown states of its environment, leveraging what it’s learned so far.



A method of reinforcement learning used to improve agent performance, often in a game or specific task. The agent plays against itself (a copy of its former self, specifically its learned policy), and each game creates new situations to learn from. Self-play can help agents discover skills without being explicitly trained, providing an ideal environment for development. Self-play can improve both physical skills, like those associated with world models, and language skills, like those associated with LLMs.

Sentiment Analysis

A type of text analytics using Natural Language Processing (NLP) to determine whether a particular text is positive, negative, or neutral in tone. Identifying the emotional state is particularly useful for customer support, social media comments, product reviews, and emails.
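A toy illustration of the idea using a fixed word list; real sentiment analysis uses trained NLP models rather than hand-written lexicons:

```python
# Hand-written word lists (illustrative only).
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "poor"}

def sentiment(text):
    # Count positive and negative words; the sign of the total sets the tone.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

Trained models go well beyond word counting, handling negation, sarcasm, and context, but the output categories are the same.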


State-of-the-Art (SOTA)

State-of-the-art refers to AI – often models – that represent the current best offerings that are widely available to the public and deployed into production systems. They have proven to be reliable and perform well on a variety of tasks.

Structured Data

Data that’s organized using a standardized format with well-defined structure, typically stored in a relational database, and often features rows and columns. Examples include dates and times, addresses, UPC, and stock market ticker price.

Supervised Learning

A type of Machine Learning that uses labeled data to train its Algorithms to produce specific outputs. Examples include spam filtering for emails, where examples of spam emails are provided and over time the algorithms improve their ability to detect and correctly flag spam emails versus legitimate emails.
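As a toy sketch of supervised learning on that spam example (a hand-made dataset and a deliberately simple word-count classifier, not a production technique):

```python
from collections import Counter

# A tiny labeled training set; the labels are what make this "supervised".
training = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to friday", "ham"),
    ("lunch with the team today", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    counts[label].update(text.split())

def classify(text):
    # Score each label by how often its training data used the message's words.
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)
```

Real spam filters use far larger datasets and more sophisticated models, but the principle is the same: labeled examples in, learned predictions out.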



Techno-Optimist

Individuals who believe technology and market capitalism will solve humanity’s problems, and that there is more good than bad associated with tech and markets. Over time, those systems have provided – and will continue to provide – increasingly positive outcomes. Techno-optimists are firm believers that technology delivers continual progress and beneficial solutions, and that humans are open to change and capable of developing and using technology for good.


TPU (Tensor Processing Unit)

Tensor Processing Units are specialized, application-specific integrated circuits (ASICs) designed by Google as an alternative to more traditional CPUs and GPUs. TPUs are matrix processors specialized for neural networks, leveraging a systolic array architecture that minimizes memory access during processing. CPUs are general-purpose processors leveraging the von Neumann architecture and rely heavily on memory, making them flexible but weak for ML/AI tasks. GPUs are better suited to ML/AI tasks, as they leverage arithmetic logic units (ALUs) and parallelism to churn through massive datasets like those of deep learning.

Training Data

Labeled examples used to train machine learning models. Vast quantities of information are fed into the models so they can learn to produce correct outputs on their own. For example, self-driving car systems are fed images and videos of cars and people so they know what to avoid, or thousands of pictures of different dogs allow a system to identify dog breeds at a vet clinic.

Transformer Model

A deep learning Neural Network architecture that specializes in Natural Language Processing (NLP) through mathematical techniques called attention or self-attention. A Transformer Model learns context and relationships between words and is able to create new text based on what it’s received and processed previously. Transformers use encoder-decoder layers, where the encoder converts variable-length input into fixed-length representations, called Tokens, and the decoder generates output sequences based on the representations obtained from the encoder.
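The attention mechanism itself is compact enough to sketch. This is a minimal, illustrative scaled dot-product attention with tiny hand-made vectors (real transformers learn the vectors and run many attention "heads" in parallel):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) applied to V."""
    d = len(keys[0])
    out = []
    for q in queries:
        # How strongly this query "attends" to each key.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        m = max(scores)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        weights = [e / sum(exps) for e in exps]
        # Output is the attention-weighted mix of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

out = attention(queries=[[1.0, 0.0]],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[1.0, 0.0], [0.0, 1.0]])
```

The weights always sum to one, so each output is a blend of the inputs, weighted by relevance — that blending is what lets transformers relate words to their context.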

Turing Test

A method to determine whether a machine can think like a human at such a high level that its performance is indistinguishable from an actual human’s. It was proposed by English scientist Alan Turing in 1950: a human judge asks questions of and interacts with two hidden agents, a computer and a human, and if the judge can’t reliably tell which is which, the computer passes. It’s a popular, but often criticized, method of attempting to determine whether a machine is truly intelligent.


Unstructured Data

Data of various formats that doesn’t follow a specific data model and isn’t stored in relational databases. Examples include image files, videos, text docs, and other rich media.

Unsupervised Learning

A type of Machine Learning that uses unlabeled data to train its Algorithms, which analyze and group the data on their own, without human intervention. Examples include clustering, anomaly detection, associations, and hierarchies.


Vector Database

A type of database that stores, indexes, and queries information as high-dimensional vector embeddings, or mathematical representations of data objects. Unlike a relational database with rows and columns, focusing on structured data, vector databases feature a multi-dimensional space and are highly performant, even for unstructured data. Vector databases are popular for search, recommendations, and AI solutions.
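A toy sketch of the core idea, using hand-made three-dimensional “embeddings” and cosine similarity (real vector databases use learned embeddings with hundreds of dimensions plus specialized indexes to search them fast):

```python
import math

# A toy in-memory "vector store": each item maps to a hand-made vector.
store = {
    "red summer dress": [0.9, 0.1, 0.0],
    "blue winter coat": [0.1, 0.9, 0.2],
    "scarlet party gown": [0.8, 0.2, 0.1],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vec, k=2):
    # Return the k items whose vectors are most similar to the query.
    return sorted(store, key=lambda name: cosine(query_vec, store[name]),
                  reverse=True)[:k]

# A query vector resembling "red dress"-like items:
results = nearest([0.85, 0.15, 0.05])
```

Because similar items get similar vectors, the search finds semantically related results (the scarlet gown) that a keyword match for “red” would miss.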


World Model

A representation of the environment that an AI system operates within, often focusing on spatial and temporal aspects. This is contrary to a language model that is focused on text, and sometimes imagery. The environment can be real or virtual, but either way the agent interacts with the environment and learns from its experiences, allowing it to predict outcomes and act accordingly.