
ISSUE #2 - March 25, 2026
Working in media, I’ve noticed something that doesn’t get said enough: most data problems aren’t actually data problems.
Over on the beehiiv ad network, we’re in the data constantly: open rates, clicks, conversion rates, cost per acquisition. It’s all there. We’re really good at collecting the data.
What breaks down is never the data itself. It's everything that comes after it.
I work across several dozen active campaigns at any given time. When performance conversations get difficult, it's almost never because the numbers are wrong. It's because two people are looking at the same numbers and telling different stories about what they mean. One person sees noise, the other sees a trend. The data can't resolve that; you have to.
📌 In this issue:
What AI gets right
Where it stops
How to set it up
— Chris
📊 WHAT AI GETS RIGHT
I use Claude to pull and interpret Metabase data constantly. Not just to read dashboards, but to understand what the numbers mean in context. Give it a structured export and it surfaces patterns, flags anomalies, and produces a clean summary faster than any analyst working manually. The research layer, pattern recognition, growth opportunities, even the first draft. It handles all of it.

Most people stop there. That's the mistake.
AI produces fluent stories. You produce true ones. Those are not the same thing.
Fluent means well-structured and confident. True means right for this person, in this moment, with this relationship. AI is exceptional at the first half. It has no access to the second.
🧠 WHERE IT STOPS
At beehiiv, we have Metabase dashboards, built by genuinely talented people, that can show you every single data point you could want from a campaign. It's a lot. Overwhelming, honestly. I've spent more time than I want to admit just trying to understand what I'm looking at before I can do anything useful with it.
The same data that AI summarizes perfectly cannot tell me that this particular client has staked their internal reputation on Q2 numbers. It cannot tell me they're nervous going into the call and need honesty more than confidence. It cannot sit in my QBR and pitch $100K a month for Q2.
The prep work is solvable, but the strategic relationship work is not.
Those are two different jobs. Only one of them has been automated.
🎯 HOW TO SET IT UP RIGHT
Before I give AI anything to analyze, I ask myself two questions:
1/ Do I actually trust this data?
2/ Do I know what question it's supposed to answer?
If either feels uncertain, I fix that first. AI will analyze whatever you give it with equal confidence regardless of whether the inputs are sound. Confident garbage in, confident garbage out.
Once the data is clean and the question is clear, the handoff is simple. AI takes the analysis layer. You take everything that comes after.
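To make the handoff concrete, here's a minimal sketch of what a clean, structured export can look like before AI ever sees it. The file name, column names, and threshold are hypothetical placeholders, not our actual Metabase schema; the point is that you hand over a small, deliberate table instead of a raw dashboard dump.

```python
# Minimal sketch of the analysis-layer handoff (hypothetical schema).
# Assumes a CSV export with columns: campaign, week, impressions, clicks, conversions.
import pandas as pd

df = pd.read_csv("campaign_export.csv")  # placeholder file name

# Derive the rates the performance conversation actually turns on
df["ctr"] = df["clicks"] / df["impressions"]
df["cvr"] = df["conversions"] / df["clicks"]

# Week-over-week change per campaign
df = df.sort_values(["campaign", "week"])
df["ctr_wow"] = df.groupby("campaign")["ctr"].pct_change()

# Flag anything that moved more than 25% week over week (arbitrary threshold)
flagged = df[df["ctr_wow"].abs() > 0.25]

# Compact summary to paste into the prompt, instead of the whole dashboard
print(flagged[["campaign", "week", "ctr", "ctr_wow"]].to_string(index=False))
```

AI gets the flagged rows. Whether they're noise or a trend is still your call.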
What do you emphasize? What do you contextualize? What do you leave out entirely? Those questions don't have a prompt. They live in the relationship, the history, the read you have on the room.
Pick one recurring output you dread producing every week. Build a prompt around it — not a general one, but one that knows the context, the client, the stakes.
Here's the structure I use:
I'm preparing a [type of output] for [client name].
Their campaign goal is [goal]. Current performance is [data].
The context they care most about is [context].
Draft a narrative that emphasizes [angle] and addresses [concern].
Run it. Take what comes back and ask one question: is this the right story for this situation?
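If you reuse that structure every week, a tiny helper keeps the slots consistent. This is entirely hypothetical, the names and example values are invented for illustration, and pasting the template straight into Claude works just as well.

```python
# Hypothetical helper that fills the prompt template above.
# Every example value below is invented; swap in your own client and context.
def build_prompt(output_type, client, goal, data, context, angle, concern):
    return (
        f"I'm preparing a {output_type} for {client}.\n"
        f"Their campaign goal is {goal}. Current performance is {data}.\n"
        f"The context they care most about is {context}.\n"
        f"Draft a narrative that emphasizes {angle} and addresses {concern}."
    )

print(build_prompt(
    output_type="QBR summary",
    client="a hypothetical client",
    goal="lowering cost per acquisition",
    data="(paste the exported summary here)",
    context="their Q2 budget depends on these results",
    angle="the conversion trend",
    concern="the soft first two weeks",
))
```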
I'd bet that as you read this, the output you dread most kept coming to mind. Start with that one. Let me know how it goes.
— Chris
🛠️ THE TECH STACK
beehiiv — Where Actually Useful is built and sent. Best platform for newsletter operators serious about growth. → 20% off first 3 months
Wispr Flow — Every rough draft in this newsletter starts as dictation. → Try free
Granola — AI meeting notes that actually capture what happened. I stopped taking notes in calls the day I installed it. → Try Granola
🗞 IN THE NEWS
🛠️ Google’s back in the vibe coding arms race → LINK. Replit, Cursor, and Claude have all been at the forefront of vibe coding. Google launched Antigravity and it didn’t do so well initially. Looks like this is what they’ve been up to since that flop. (Also: Stitch by Google).
🤖 Garry Tan open sourced his entire Claude Code setup → LINK. 15 opinionated tools; if you're looking to go deep on Claude Code, this is about as good a starting point as you're going to find.

Thanks for reading!
If this was useful, forward it to one person you know will do something with it.
