
Speaking AI: Terms and Concepts for the "Emotionally Intelligent(s)".


The Brains Behind the Machines: What is AI?

At its most ambitious, AGI (Artificial General Intelligence) refers to AI that can think like humans, possessing the ability to understand, learn, and apply intelligence to any intellectual task. While true AGI is still a distant goal, current AI systems are incredibly powerful and often specialized.


When we talk about how AI arrives at its conclusions, we often refer to CoT (Chain of Thought), a technique in which the AI works through a problem step by step rather than jumping straight to an answer (a toy example follows). The systems doing that work are AI Models, trained systems designed for specific tasks, and AI Wrappers are the simpler applications built around those models to streamline how we interact with them.
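
For instance, a Chain of Thought prompt often just asks the model to show its intermediate steps. Here is a toy sketch in Python, where ask_llm is a made-up placeholder rather than a real API call:

```python
# Toy Chain of Thought example: the prompt nudges the model to reason
# step by step instead of answering in one leap.
# ask_llm() is a hypothetical stand-in for a call to a real language model.
def ask_llm(prompt: str) -> str:
    return "[the model's step-by-step answer would appear here]"

prompt = (
    "A store sells pens at $2 each. If I buy 3 pens and pay with a $10 bill, "
    "how much change do I get? Let's think step by step."
)
print(ask_llm(prompt))
```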


Training and Shaping AI: How Models Learn

AI doesn't just "know" things; it learns through extensive training. This learning process is crucial to its performance:


  • Training AI: This is the general process of teaching AI by adjusting its parameters based on data.

  • Supervised Learning: A common method where AI is trained on labeled data, meaning the correct output is provided for each input (a short sketch follows this list).

  • Unsupervised Learning: Here, AI finds patterns in unlabeled data, discovering structures without explicit guidance.

  • Reinforcement Learning: AI learns from rewards and penalties, often by interacting with an environment to achieve a goal.

  • Fine-tuning: This involves improving AI with specific training data, often to adapt a pre-trained model to a new task.
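
To make supervised learning concrete, here is a tiny sketch. It assumes Python and the scikit-learn library, which are not mentioned above, purely as a convenient illustration: the model is shown inputs together with the correct answers and adjusts its parameters to match them.

```python
# Minimal supervised learning: labeled examples in, adjusted parameters out.
from sklearn.linear_model import LogisticRegression

# Labeled training data: hours studied (input) -> passed the exam (label).
X = [[1], [2], [3], [8], [9], [10]]   # inputs
y = [0, 0, 0, 1, 1, 1]                # correct outputs supplied with each input

model = LogisticRegression()
model.fit(X, y)                       # "training": fit the parameters to the labels

print(model.predict([[7]]))           # inference on new data; likely prints [1]
```

The same pattern scales up to huge models: far more data and far more parameters, but still "show the model the right answers and let it adjust."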


The results of this training are captured in Parameters, the internal numerical values a model adjusts as it learns. When things go wrong, we might see Hallucination, where AI confidently generates false or fabricated information, a common challenge in AI development. Ensuring AI behaves in line with human values and goals is the aim of AI Alignment.


Communicating with AI: Language and Interaction

Our ability to interact with AI has vastly improved, thanks to advancements in natural language processing:


  • Chatbot: A familiar AI that simulates human conversation, commonly found in customer service.

  • NLP (Natural Language Processing): This field focuses on AI understanding human language, allowing for seamless communication.

  • LLM (Large Language Model): A powerful AI model trained on vast text data, capable of generating human-like text, translating languages, and answering questions.

  • Prompt Engineering: The art of crafting inputs to guide AI output, especially crucial for getting the best results from LLMs.

  • Vibe Coding: A more informal term for AI-assisted coding via natural language prompts, simplifying development.

  • Tokenization: The process of breaking text into smaller parts (tokens) for AI to process.

  • Embedding: A numerical (vector) representation of words or other data, allowing AI to capture the relationships between them (a rough sketch of tokenization and embeddings follows this list).
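
Here is a rough illustration of those last two terms in plain Python. Real systems use learned subword tokenizers and high-dimensional trained embeddings; the splitting rule and numbers below are made up purely to show the shape of the idea.

```python
# Illustrative only: naive whitespace tokenization and hand-written vectors.
text = "AI models process text as numbers"

# Tokenization: break the text into smaller pieces (tokens).
tokens = text.lower().split()
print(tokens)  # ['ai', 'models', 'process', 'text', 'as', 'numbers']

# Embedding: map each token to a vector of numbers so the model can compare
# meanings; these toy values are invented for illustration.
toy_embeddings = {
    "ai":     [0.9, 0.1, 0.3],
    "models": [0.8, 0.2, 0.4],
    "text":   [0.1, 0.9, 0.5],
}
print(toy_embeddings["ai"])
```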


The Inner Workings: Key AI Components and Processes

Behind the scenes, several components and processes power AI's capabilities:


  • Compute: The processing power required for AI models, often intensive.

  • GPU (Graphics Processing Unit): Specialized hardware for fast AI processing, essential for training and running complex models.

  • Neural Network: An AI model inspired by the human brain's structure, forming the backbone of many advanced AI systems (a toy single-neuron sketch follows this list).

  • Deep Learning: A subset of machine learning using neural networks with many layers to learn from vast amounts of data.

  • TPU (Tensor Processing Unit): Google's specialized AI processor, designed for high-performance machine learning tasks.

  • Transformer: A neural network architecture originally designed for language processing, foundational to most modern LLMs.
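
If you are curious what a single neural-network "neuron" actually does, here is a toy sketch in plain Python: multiply the inputs by learned weights, add them up, and pass the total through a simple non-linearity. Deep learning stacks many layers of these, with millions or billions of learned weights.

```python
# One toy neuron: weighted sum of inputs plus a bias, then a ReLU activation.
inputs  = [0.5, 0.8]
weights = [0.4, 0.7]     # the learned parameters ("weights") of this neuron
bias    = -0.3

total = sum(i * w for i, w in zip(inputs, weights)) + bias
output = max(0.0, total) # ReLU: keep positive values, clip negatives to zero
print(output)            # roughly 0.46
```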


Understanding AI's Decisions and Outputs

It's not enough for AI to just provide an answer; understanding why it gave that answer is becoming increasingly important:


  • Explainability: The degree to which an AI's decisions can be understood by humans, crucial for trust and debugging.

  • Inference: Making predictions on new data once an AI model has been trained.

  • Reasoning Model: A model built to work through problems with explicit, step-by-step logic before answering, which tends to produce more coherent decisions.

  • RAG (Retrieval-Augmented Generation): Combining search with generation: the AI retrieves relevant information first, then uses it to produce a more accurate, better-informed answer (a simplified sketch follows this list).

  • Generative AI: AI that creates text, images, music, and other new content.

  • Foundation Model: A large AI model adaptable to many tasks, serving as a base for various applications.

  • Ground Truth: Verified data that AI learns from, essential for accurate training.

  • Weights: The learned values inside a model that determine how strongly each input influences its outputs.

  • Context: Information AI retains for better responses, helping it understand the flow of a conversation or data.

  • Machine Learning: A broad field of AI where systems improve from data experience without explicit programming.

  • Computer Vision: AI that understands images and videos, enabling applications like facial recognition and autonomous driving.

  • MCP (Model Context Protocol): An open standard that lets AI models connect to external tools and data sources in a consistent way.
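
Since RAG comes up constantly, here is a highly simplified sketch of the idea in plain Python. The retrieve function below uses naive keyword matching and ask_llm is a made-up placeholder rather than a real search index or model API; the only point is the two-step flow of fetch first, then generate.

```python
# Simplified RAG: retrieve relevant text, then hand it to the model with the question.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are Monday to Friday, 9am to 5pm.",
]

def retrieve(question: str) -> str:
    # Pick the document sharing the most words with the question
    # (real systems compare embeddings instead of raw words).
    words = set(question.lower().split())
    return max(documents, key=lambda d: len(words & set(d.lower().split())))

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a call to a real language model.
    return f"[the model would answer here, grounded in: {prompt!r}]"

question = "How long does the refund policy give me to return something?"
context = retrieve(question)                              # step 1: retrieve
prompt = f"Using this context: {context}\nAnswer this question: {question}"
print(ask_llm(prompt))                                    # step 2: generate
```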


So, this is just a quick peek for those of us on the business side of things. By understanding the basics, you will be better equipped to talk about AI. You know, like when someone asks what you think about the game you didn't see last night, and you use the go-to "it's been that kind of season."

