Carefully crafting the perfect prompt to get good results. Lots of trial and error.
Give the AI the right information and a simple prompt does the job. Context is everything.
Structured in a hierarchy, catalogued like a library. The AI can find what it needs quickly.
In formats the AI can easily parse — plain text, markdown, CSV, JSON.
Not stale. Reflects reality right now, not six months ago.
When you have good context, your prompt can be short. The AI already knows what it needs to know.
The total amount of information an AI model can "see" at one time. The entire grid is the window — like the AI's working memory.
Each square is a token — a small piece of text the model uses to understand and generate language. Roughly 100 tokens = 75 words.
Each square = one token. Filled squares = context in use. The whole grid = the context window.
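The "100 tokens ≈ 75 words" ratio above can be sketched as a rough heuristic. A real tokenizer would give exact counts; this is only the approximation the slide uses:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~100 tokens per 75 words (4/3 tokens per word)."""
    words = len(text.split())
    return round(words * 100 / 75)

# 75 words should land at roughly 100 tokens.
sample = " ".join(["word"] * 75)
print(estimate_tokens(sample))  # 100
```

Useful only for back-of-the-envelope sizing of what fits in a context window; actual tokenization varies by model and vocabulary.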
Absolute maximum number of tokens.
Binary — fits or doesn't.
Models have selective attention, like humans.
Early & late content gets priority.
Fits within limits, but too messy.
Degrades performance & accuracy.
Great performance requires staying within all three limits: capacity, attention, and cognitive load.
Notion
Google Drive
Google Calendar
Front / Gmail
Addepar
Airtable
Attio
Slack
AI can connect to all of them — but connecting isn't enough. It doesn't know which tool has the answer, or how our data is organized across them.
Front Copilot
Can draft email responses — but only has access to emails in Front. Missing context from Notion, Drive, Addepar.
Attio "Ask AI"
Can summarize household conversations — but only sees what's in Attio (emails). Missing everything in Notion meeting notes.
Gemini in Google
Smart in Drive and Gmail — but doesn't know which emails connect to which Drive folders or clients.
Notion AI
Good at drafting within Notion. Can plug into Google Workspace — but unfamiliar with those tools, missing other context.
Claude Chat can plug into everything via connectors — Notion, Gmail, Calendar, Slack, Attio, Airtable...
But it still doesn't know where to look. Without guidance, it searches randomly or asks you to specify.
We need to manage context actively rather than letting the model decide where to look. It's all about engineering the data pipelines.
Markdown
vs. Notion pages, Word docs, Google Docs
CSV
vs. Excel, Google Sheets
JSON
vs. custom databases
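The same record rendered in each of the three AI-friendly formats above. The household record itself is a hypothetical illustration, not real client data:

```python
import csv
import io
import json

# Hypothetical client record, serialized into the three formats.
record = {"household": "Smith", "advisor": "Jane Doe", "aum": 1200000}

# JSON: structured and unambiguous for machine parsing.
as_json = json.dumps(record, indent=2)

# CSV: tabular, trivially parsed row by row.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(record.keys()))
writer.writeheader()
writer.writerow(record)
as_csv = buf.getvalue()

# Markdown: readable by both humans and models.
as_markdown = (
    f"## Household: {record['household']}\n"
    f"- Advisor: {record['advisor']}\n"
    f"- AUM: ${record['aum']:,}\n"
)

print(as_json)
print(as_csv)
print(as_markdown)
```

All three are plain text, which is exactly why a model can consume them directly, unlike a proprietary document or database format.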
Systems that do both — beautiful for humans, structured for AI — are coming. And they'll be easier to build than you think.
Claude, ChatGPT, Gemini — the conversational interface.
Research, drafting, brainstorming, quick questions, analysis with uploaded files
Interact through:
Components:
The AI's instruction manual. Who we are, who it is, team roster, tech stack, how to look things up. Loaded automatically every conversation.
Slash commands anyone can run. /prep-client Smith triggers a full multi-system briefing in seconds.
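A slash command like /prep-client could be dispatched with a pattern like the one below. The command registry, names, and briefing logic are hypothetical sketches; the deck doesn't describe the actual implementation:

```python
# Minimal sketch of a slash-command dispatcher. The command name and
# the briefing stub are illustrative, not the firm's real code.
COMMANDS = {}

def command(name):
    """Decorator that registers a handler under a slash-command name."""
    def register(fn):
        COMMANDS[name] = fn
        return fn
    return register

@command("/prep-client")
def prep_client(arg: str) -> str:
    # A real handler would pull meeting notes, emails, and portfolio
    # data from each connected system and assemble the briefing.
    return f"Briefing for household: {arg}"

def dispatch(line: str) -> str:
    """Split '/name args' and route to the registered handler."""
    name, _, arg = line.partition(" ")
    return COMMANDS[name](arg.strip())

print(dispatch("/prep-client Smith"))  # Briefing for household: Smith
```

The point of the pattern: any team member types the same command and gets the same multi-system briefing, because the lookup logic lives in shared code rather than in each person's prompting habits.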
Organized, accessible, up-to-date markdown files. The AI pulls from these instead of searching blindly across tools.
Runs on a server, not a laptop. Available 24/7, recurring jobs, event-driven triggers.
Has its own accounts and credentials. No team member's personal data is exposed. We own the infrastructure.
Any team member can trigger tasks. Common context means consistent results across the firm.
Can run AI models locally on the server — faster, cheaper, and fully private for sensitive data.
Server-grade hardware means faster processing, larger context windows, and more concurrent tasks.
More context → More autonomy → More leverage for the team
Agents have immediate access to the organized, up-to-date context they need to act
Agents are trusted and secure — with proper credentials, guardrails, and oversight
The team collaborates on shared scripts, commands, and workflows that agents execute
We make the sophisticated simple,
so you can focus on what matters.