Your brain thinks in pictures. Your meeting notes should too.
The Problem
Meetings produce information that vanishes. The average person retains about 10% of what they hear three days later. Text-based meeting notes help, but they sit unread in shared drives, formatted identically to every other document competing for attention.
The cognitive science is clear. Dual coding theory (Paivio, 1971) shows that combining text with visuals increases recall to roughly 65% — a 6x improvement. When information is encoded through both verbal and visual channels, it creates redundant memory traces. Each channel reinforces the other.
But there's a gap. Professional graphic recording costs $500–$5,000 per session. Nobody has time to hand-draw visual summaries after every meeting. And existing AI tools just produce more text — summaries, bullet points, action items — competing in the same overcrowded verbal channel.
SketchScript fills that gap. Paste any transcript — a team meeting, a YouTube video, a podcast episode, a lecture — and get a hand-drawn visual summary in under two minutes.
Building It
The build happened in a single week in February 2026: a vibe coding sprint that took SketchScript from Oracle extraction to public launch.
It started as a web SaaS but quickly pivoted to a native macOS app. The reasons were practical: local processing keeps transcripts private (critical for corporate meetings), native distribution means better margins than web hosting, and Swift gives access to Apple's on-device ML frameworks. WhisperKit handles transcription entirely on-device — audio never leaves the machine.
The stack evolved through iteration. Claude analyzes transcripts and designs the visual layout. Gemini generates the actual sketchnote images. The combination works because each model handles what it's best at — Claude's reasoning identifies what matters in a meeting, Gemini's image generation renders it visually.
Alongside the native app, I built a Pickaxe-hosted online version for zero-friction access — no download required, just paste and go. An n8n onboarding pipeline handles the welcome sequence, cloned from the MyCityZen pattern that was already proven.
One detour: I tried adding Anime.js animations to the marketing site for a more dynamic feel. The experiment looked good in isolation but clashed with the site's information hierarchy. Rolled it back. Not every enhancement is an improvement.
How It Works
The process has four steps:

1. Analyze the transcript to identify key themes and decisions.
2. Design the visual layout using sketchnote vocabulary.
3. Generate the hand-drawn visual summary.
4. Refine with optional iterations based on feedback.
Each step involves a different model doing what it does best. The analysis phase uses Claude to extract meaning — not just keywords, but the narrative arc of the meeting, the decisions that were made, the tensions that surfaced. The design phase maps those elements to visual components: containers for grouped concepts, connectors for relationships, icons for recurring themes, typography hierarchy for importance. Generation renders the actual image. Refinement lets you iterate.
The Validation
Before building, I ran SketchScript through the RFU (Really Fucking Useful) audit — an 11-gate validation framework that tests whether a product solves a real problem for real people. It passed all 11 gates.
The problem statement went through a 5 Whys analysis to make sure we weren't solving a surface symptom. Why do people forget meetings? Because notes are text-only. Why doesn't that work? Because the brain processes visual information differently. Why don't people use visuals? Because it requires artistic skill and time. Why hasn't AI solved this? Because AI summarizers produce more text, not visuals. Why is that? Because nobody mapped the visual vocabulary of sketchnoting into a generative pipeline. Until Oracle did.
Two target personas emerged: Sarah, a project manager drowning in back-to-back meetings who needs her team to actually remember what was decided. Marcus, an L&D manager creating training materials who needs visual aids that don't require a design team.
The Science
SketchScript isn't just a convenience tool. It's built on established cognitive science.
Dual coding theory (Paivio, 1971) demonstrates that information encoded through both verbal and visual channels creates independent but connected memory representations. When one retrieval path fails, the other remains available. This isn't a marginal improvement — it's the difference between 10% and 65% recall.
The drawing effect (Wammes, Meade & Fernandes, 2016) shows that even simple visual representations of concepts outperform writing for memory retention. The act of translating abstract ideas into visual form forces deeper processing.
There's also a neurodiversity angle. Visual redundancy particularly benefits people with ADHD (who process visual information faster than sequential text), dyslexia (where text-heavy formats create cognitive overload), and autism profiles (where visual structure provides predictable information architecture). SketchScript doesn't just serve the general case — it makes information more accessible.
John Medina's Brain Rules puts it bluntly: vision trumps all other senses. Visual information is processed dramatically faster than text (the oft-cited figure is 60,000x). Meeting notes that ignore this are fighting the architecture of human cognition.
Try It
The free tier gives you 3 sketchnotes per month. Paste any transcript — meetings, YouTube videos, podcasts, lectures — and get a visual summary in under two minutes. No account required for the online version.