
GETTING STARTED WITH AI

Core concepts, terminology, and mental models for working with AI tools effectively.

WHERE TO SET UP YOUR MASTER PROMPT

Configure once, benefit in every conversation: here's exactly where to find it

Settings → Personalisation → Custom instructions

Custom Instructions

What would you like Copilot to know about you?

I work at Anchor Capital as a portfolio manager. I prefer concise, data-driven outputs with South African financial context...
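Under the hood, most chat tools treat your custom instructions as a system message silently prepended to every new conversation. A minimal sketch of the idea, following the common chat-API message convention rather than any specific product's internals:

```python
# Your master prompt: written once in Settings, applied to every chat.
CUSTOM_INSTRUCTIONS = (
    "I work at Anchor Capital as a portfolio manager. "
    "I prefer concise, data-driven outputs with South African financial context..."
)

def build_conversation(user_message: str) -> list[dict]:
    """Assemble the messages the model actually sees for a new chat."""
    return [
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # set once
        {"role": "user", "content": user_message},           # typed each time
    ]

msgs = build_conversation("Summarise today's JSE movers.")
```

The point: you never retype your context, because every conversation starts with it already in place.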
LLM, Chatbot, GenAI: What's the Difference?
Concept · Session 1

These terms get thrown around interchangeably, which creates confusion when you're trying to understand what your tools actually do. Here's the distinction:

  • LLM (Large Language Model): The prediction engine. It processes text patterns and generates responses. Think of it as the brain. GPT-5, Claude, Gemini, and Grok are all LLMs.
  • Generative AI: The broader field, covering models that create text, images, code, and video. An LLM is one type of GenAI, but not the only type.
  • Chatbot: The interface. How you talk to the model. It's the front-end, not the intelligence behind it. The chat window you type into is the chatbot.
  • AI Assistant: A more capable chatbot that can reason, search, analyse, and act across tools. It doesn't just respond; it takes action.

When you use Copilot, Claude, or Gemini, you're using all of these at once: a Generative AI system built on an LLM, accessed through an AI Assistant interface. Knowing which layer you're talking about helps you understand what's possible and where to look when something doesn't work.
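The layering can be sketched as nested wrappers: a chatbot wraps an LLM and keeps the conversation history; an assistant adds tools on top. The class names and placeholder output below are purely illustrative, not any vendor's actual architecture:

```python
class LLM:
    """The prediction engine: text in, text out."""
    def generate(self, prompt: str) -> str:
        return f"<model output for: {prompt}>"  # placeholder response

class Chatbot:
    """The interface: wraps an LLM and keeps the conversation history."""
    def __init__(self, llm: LLM):
        self.llm = llm
        self.history: list[tuple[str, str]] = []

    def chat(self, message: str) -> str:
        self.history.append(("user", message))
        reply = self.llm.generate(message)
        self.history.append(("assistant", reply))
        return reply

class Assistant(Chatbot):
    """A chatbot that can also call tools (search, analysis, actions)."""
    def __init__(self, llm: LLM, tools: dict):
        super().__init__(llm)
        self.tools = tools
```

When a chat "forgets" context, the problem is usually in the Chatbot layer (history management), not the LLM itself, which is why knowing the layers helps you diagnose issues.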

Tokens: How AI Measures Everything
Concept · Session 1

Tokens are the unit of measurement for AI. Every word you send and every word you receive consumes tokens. 1 million tokens is roughly 750,000 words.

Why this matters: Token limits determine how much context the AI can hold in a single conversation. Think of it as the AI's working memory. If you upload a 200-page document, the AI might not be able to hold all of it at once. Newer models have larger context windows (GPT-5 handles up to 1M tokens), but there's always a ceiling.

The practical impact:

  • Long conversations eat through tokens. After 30+ messages, the AI starts losing track of what was said earlier.
  • If your AI seems to 'forget' something you told it, you've likely hit the context window. It isn't broken; it's out of working memory.
  • The fix: start a new conversation, re-upload the key documents, and reference the specific section you need. Fresh context produces better results every time.
  • Uploading a massive file and asking broad questions wastes tokens. Be targeted: point the AI at the exact section or data you need.
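As a back-of-envelope check, the 1-million-tokens ≈ 750,000-words rule of thumb above gives you a quick way to estimate whether a document will fit in a context window. Real tokenisers vary by model, and the 128k default window here is an illustrative assumption:

```python
def estimate_tokens(text: str) -> int:
    # Rule of thumb from above: 1 token ≈ 0.75 words.
    return round(len(text.split()) / 0.75)

def fits_context(text: str, window_tokens: int = 128_000) -> bool:
    # Leave ~20% headroom for your prompt and the model's reply.
    return estimate_tokens(text) <= window_tokens * 0.8

print(estimate_tokens("three word phrase"))  # 4 (3 words / 0.75)
```

If `fits_context` comes back False, that's your cue to extract the relevant section before uploading rather than sending the whole file.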

AI Agents vs Agentic AI

These two terms sound similar but describe fundamentally different capabilities.

AI Agent: A highly specialised assistant built for one specific task. You tell it what to do and it executes. Think of it as a dedicated team member for a particular job. It follows instructions precisely, but it doesn't decide what to work on next.

Agentic AI: A broader concept where AI systems act autonomously, make decisions, and chain multiple actions together without constant guidance. You tell it the goal and it figures out the steps. It plans, executes, adjusts, and delivers.

The quick comparison:

  • Normal AI Chat = a consultant you call for advice
  • AI Agent = a team member who carries out the plan you give them
  • Agentic AI = the team member who also plans the work, decides the approach, and executes end-to-end

This distinction matters because the tools are evolving fast. Copilot, Claude, and Gemini all now offer agent-like modes where the AI browses the web, reads documents, writes code, and chains tasks together. Understanding the difference helps you know when to give detailed instructions versus when to give a goal and step back.
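The distinction can be sketched in a few lines: an agent executes the plan it is given, while an agentic system writes the plan itself and then executes it. Everything here (`plan_for`, `execute`, the step strings) is illustrative; a real agentic system would use the LLM to do the planning:

```python
def execute(step: str) -> str:
    return f"done: {step}"  # stand-in for a real tool call

def plan_for(goal: str) -> list[str]:
    # Illustrative planner: decides the steps from the goal alone.
    return [f"research {goal}", f"draft {goal}", f"review {goal}"]

def ai_agent(plan: list[str]) -> list[str]:
    """AI Agent: you supply the plan; it executes each step precisely."""
    return [execute(step) for step in plan]

def agentic_ai(goal: str) -> list[str]:
    """Agentic AI: you supply only the goal; it plans, then executes."""
    return [execute(step) for step in plan_for(goal)]
```

Notice that `ai_agent` takes a plan as input while `agentic_ai` takes only a goal; that one difference in signature is the whole distinction.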

Model Variants: Speed vs Depth
Concept · Session 1

Every major AI platform offers multiple model variants, and choosing the right one for the task makes a real difference to both speed and output quality.

Fast models (GPT-5 Instant, Claude Haiku, Gemini Flash) are optimised for speed. They respond in seconds and handle straightforward tasks well. Use them for:

  • Quick questions and fact lookups
  • Email drafts and short-form writing
  • Simple summaries and brainstorming
  • Any task where speed matters more than depth

Deep models (GPT-5 Pro, Claude Opus, Gemini Ultra, Grok) are optimised for reasoning. They take longer but think more carefully. Use them for:

  • Complex financial analysis and multi-step reasoning
  • Research reports that require nuance and accuracy
  • Code generation and debugging
  • Any task where getting it right matters more than getting it fast

The practical takeaway: Most tools now have an 'Auto' mode that selects the right variant for you. If you're not sure, leave it on Auto. But when you know the task is complex, manually selecting the deep model will produce noticeably better results. The difference between a fast model and a deep model on a complex analysis task is the difference between a rough draft and a polished deliverable.
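The fast-versus-deep choice boils down to a simple routing rule. A hedged sketch of what an 'Auto' mode does conceptually; the keywords and labels are illustrative, and real routers use far richer signals than keyword matching:

```python
DEEP_SIGNALS = ("analysis", "debug", "research", "multi-step", "report")

def pick_variant(task: str) -> str:
    """Route a task description to a fast or deep model variant."""
    task_lower = task.lower()
    if any(signal in task_lower for signal in DEEP_SIGNALS):
        return "deep"   # e.g. GPT-5 Pro, Claude Opus, Gemini Ultra
    return "fast"       # e.g. GPT-5 Instant, Claude Haiku, Gemini Flash

print(pick_variant("Quick fact lookup"))                     # fast
print(pick_variant("Multi-step financial analysis of ..."))  # deep
```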

AI as Tool AND Partner
Concept · Session 1

This is the single most important mental model for getting value from AI. Get this right and everything else follows.

It is a tool, first and foremost. It gathers information, extracts insights, transforms data, automates repetitive work, and executes workflows at a speed no human can match. Use it to do the heavy lifting: data extraction, first drafts, research compilation, formatting, summarisation. This alone saves hours every week.

But it is also a strategic partner. Treat it as a trusted advisor: a savant intern with access to the sum of human knowledge who never sleeps, never gets tired, and works at the speed of thought. Ask it to challenge your assumptions. Use it to stress-test an investment thesis. Have it play devil's advocate on a client recommendation. It can see patterns across data sets that would take you days to find manually.

The people who get the most value understand both sides:

  • It's an extension of yourself, not a replacement for yourself
  • It's your backup for research, your sounding board for ideas, your first-draft machine for content
  • It has quirks: it can hallucinate facts, reflect biases in its training data, and lose context in long conversations
  • It needs verification on anything that matters. Trust but verify, every time

The wrong mental model is treating it as a search engine (too passive) or a magic oracle (too trusting). The right model is treating it as a highly capable colleague who occasionally needs fact-checking. You bring the judgment, the domain expertise, and the accountability. It brings the speed, the breadth, and the tireless execution. Together, you operate at a level neither could reach alone.