
ISSUE #3 - April 1, 2026
A teammate asked me last week to walk her through best practices for using Claude. She already uses Claude for the standard stuff in her workflow, but wanted to go deeper and automate some of the monotonous tasks. By the end of our chat, she was triaging her inbox, running pacing checks, and had the skills installed.
The gap between where she was and where she is now wasn’t knowledge, it was setup.
📌 In this issue:
What skills actually are and why they matter
Where most people are working in the wrong place
Where to start if you're handing this to someone else
🔧 GIVE CLAUDE YOUR SYSTEMS
Claude becomes significantly more useful when you treat it as a compounding system rather than a very smart search engine.
You give it your systems. You tell it your constraints. You make it earn the context it has.
A great place to start? Skills.
Skills are files that set hard logic rules for Claude before it touches your data. Not prompts. Not instructions you paste every time. Permanent guardrails loaded into the tool.
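To make that concrete: a skill is typically a SKILL.md file with a short frontmatter block followed by the rules themselves. Here's a hypothetical pacing-check skill (not the exact one I installed for her) to show the shape:

```markdown
---
name: pacing-check
description: Rules Claude must follow before reporting any campaign pacing numbers.
---

# Pacing Check Rules

- Never report a number that wasn't pulled from an export uploaded in this chat.
- Flag any campaign pacing more than 15% behind goal before anything else.
- If the underlying data is more than 7 days old, say so and stop.
```

The point is that these rules load once and apply every time, instead of being re-pasted into each prompt.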
🗺️ THE THING THAT TRIPS EVERYONE
The second thing I showed her: where to actually work.
She'd been trying to use Cowork for everything. Cowork is project-specific. If your connectors aren't scoped to that project, they won't fire.
99% of the work happens in Claude Chat. Connectors live across all chats, and so does your context.
The one time you'd use Cowork is for a deeply specific project you return to repeatedly. For day-to-day work, it's just overhead.
🎯 WHERE TO START
At beehiiv we use a Metabase MCP to pull campaign data. The first question that comes up every time: can you actually trust what it's returning?
Fair question. I've seen it pull the wrong campaign, miss rows, and return stale numbers. My answer: export the file directly from the dashboard, upload it to Claude, and say "operate off this." If you want to cross-check, ask Claude to pull the same data from Metabase and compare it against what you uploaded. It'll surface the gaps.
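If you're curious what that comparison amounts to, here's a minimal sketch in Python. The column names (`campaign_id`, etc.) are hypothetical; swap in whatever your dashboard actually exports:

```python
import csv
import io


def diff_campaign_rows(export_csv: str, mcp_csv: str, key: str = "campaign_id"):
    """Compare a trusted dashboard export against an MCP pull, keyed by campaign.

    Returns three sorted lists: keys missing from the MCP pull, keys missing
    from the export, and keys present in both whose values disagree.
    """
    def load(text: str) -> dict:
        # Index each row by the campaign key for easy lookup.
        return {row[key]: row for row in csv.DictReader(io.StringIO(text))}

    a, b = load(export_csv), load(mcp_csv)
    missing_from_mcp = sorted(set(a) - set(b))
    missing_from_export = sorted(set(b) - set(a))
    mismatched = sorted(k for k in set(a) & set(b) if a[k] != b[k])
    return missing_from_mcp, missing_from_export, mismatched


# Example: the MCP pull disagrees on campaign 2 and has an extra campaign 3.
export = "campaign_id,opens\n1,100\n2,250\n"
mcp = "campaign_id,opens\n1,100\n2,240\n3,90\n"
print(diff_campaign_rows(export, mcp))  # ([], ['3'], ['2'])
```

That's all "surface the gaps" means here: set differences on the keys, plus a row-by-row equality check on the overlap.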
The more you do this, uploading clean exports alongside your questions, the more accurate your outputs get. Claude builds context from what you give it. Give it better inputs, get better answers.
The best part of the conversation was when she said she'd been embarrassed to ask basic questions because she didn't want to look like she didn't know what she was doing.
I get that. I asked genuinely dumb questions for the first two months. Still do.
The tool doesn't care. The gap between where you are and where you want to be closes faster than you'd think. You just have to actually start.
— Chris
🛠️ THE TECH STACK
beehiiv — Where Actually Useful is built and sent. Best platform for newsletter operators serious about growth. → 20% off first 3 months
Wispr Flow — Every rough draft in this newsletter starts as dictation. → Try free
Granola — AI meeting notes that actually capture what happened. I stopped taking notes in calls the day I installed it. → Try Granola
🗞 IN THE NEWS
🛠️ beehiiv launched an MCP → LINK. Your beehiiv newsletter already had the data via dashboards. Now, it has the reasoning layer on top. The OS just learned to think.
🤖 tenex x Claude Code workshop → LINK. Alex Lieberman’s (ex-Morning Brew) tenex is running an in-person workshop with Claude Code. If you’re in the NYC area and have the time, this is a must-attend.
🌎 Anthropic interviewed 81,000 people about AI → LINK. The largest qualitative study ever run on AI sentiment. What people want, what they fear most. Hope and alarm didn't split people into camps — they coexist in the same person.

Thanks for reading!
If this was useful, forward it to one person you know will do something with it.
