Congratulations! You Hired Someone (or Got an AI Agent). Now What?
Last post we talked about when to add capacity—whether that's hiring an FTE, bringing in AI, or delegating to specialists. This time? What happens when you skip the system diagnostic and jump straight to hiring.
Here's the scene: Your new hire starts Monday. They're smart, motivated, ready to contribute. Three weeks later, they're still shadowing colleagues, digging through old docs, asking the same questions everyone asks, and trying to decode the unwritten rules of how work actually gets done here. The truth: slow onboarding isn't a people problem. It's a systems problem.
Research on organizational knowledge transfer shows that when critical processes exist only in people's heads, new hires face what researchers call "knowledge loss" (Argote & Ingram, 2000). Every time someone joins, you're essentially rebuilding their understanding from scratch through expensive, inconsistent shadowing.
Here's what onboarding dysfunction reveals:
1. Your processes aren't documented. "Just watch Sarah do it" isn't scalable. When your best practices live exclusively in people's heads, every new hire (or AI agent you're trying to train) has to reverse-engineer your entire operation through observation and trial-and-error.
2. Your information is scattered. SOPs in Google Drive. Workflows in Slack threads. Tool access in someone's email. That "centralized" wiki that hasn't been updated since 2022. Your new hire spends half their energy just finding where information lives, not learning how to do the work.
3. Nobody knows what "Done" looks like. Remember when we talked about defining "Done" to fix your flow? This is where that comes back to haunt you. If your existing team doesn't have clear completion criteria, how is a new hire supposed to know when they've actually finished something?
The fix isn't complicated—but it does require work:
Document recurring tasks as simple SOPs. Not 50-page manuals. Simple, visual playbooks showing: What triggers this work? What steps happen? What does "done" look like? Where do handoffs occur? These same docs work whether you're onboarding humans or training AI agents.
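To make that structure concrete, here's one way a playbook entry could be captured in machine-readable form — a minimal sketch using a Python dataclass. The field names (`trigger`, `steps`, `done_criteria`, `handoffs`) and the sample task are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class SOP:
    """One recurring task, captured as a simple playbook entry."""
    name: str
    trigger: str                 # what kicks this work off
    steps: list[str]             # the steps, in order
    done_criteria: list[str]     # how we know it's actually finished
    handoffs: list[str] = field(default_factory=list)  # who receives what, and when

# Hypothetical example entry
invoice_review = SOP(
    name="Monthly invoice review",
    trigger="Accounting closes the books on the 1st",
    steps=["Export invoices", "Flag anything over budget", "Send summary to ops"],
    done_criteria=["Summary posted in the finance channel", "All flags resolved or escalated"],
    handoffs=["Escalated flags go to the ops lead"],
)
```

Whether you keep this in YAML, a wiki table, or plain prose matters less than answering the same four questions for every recurring task — that consistency is what makes the docs usable by both new hires and AI agents.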
Create a short "How We Work" playbook. Tools we use. When we meet. How we communicate. Who owns what. Decision-making protocols. The unwritten rules that everyone "just knows"—write them down. Research from Edmondson (2012) on team learning shows that explicit protocols reduce coordination costs and speed up integration.
Centralize everything in one place. Pick one system of record. Not five. Your new hire should be able to answer "where do I find X?" with a single source. When information lives in one place with clear structure, onboarding time drops dramatically.
Here's the test: Could you onboard someone without any live human involvement? If the answer is no, your systems aren't documented well enough for sustainable growth. And if you can't onboard a human from documentation, you definitely can't train an AI agent effectively.
The pattern you're probably seeing: You decided you needed more capacity. You hired someone (or bought an AI subscription). Now you're discovering that adding resources to an undocumented system just revealed how broken the system was all along. The slow onboarding isn't the problem—it's the symptom.
Try this: Before your next hire (or AI implementation), run the documentation audit:
1. List your 10 most common recurring tasks
2. Document each one as a simple SOP (trigger → steps → done)
3. Create one centralized "source of truth" location
4. Write your "How We Work" playbook
5. Test it: can someone new find what they need without asking?
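Once your SOPs live in one structured place, part of this audit can even be run automatically. A minimal sketch, assuming each SOP is stored as a dict with `trigger`, `steps`, and `done` keys (the key names and sample data are assumptions for illustration):

```python
def audit_sops(sops):
    """Return (name, missing_fields) for every SOP lacking a trigger, steps, or 'done' definition."""
    required = ("trigger", "steps", "done")
    gaps = []
    for name, sop in sops.items():
        missing = [key for key in required if not sop.get(key)]
        if missing:
            gaps.append((name, missing))
    return gaps

# Hypothetical system-of-record contents
sops = {
    "Weekly status report": {
        "trigger": "Every Friday 3pm",
        "steps": ["Collect updates", "Draft", "Send"],
        "done": "Report posted in #general",
    },
    "Client onboarding": {
        "trigger": "Contract signed",
        "steps": [],   # nobody has written these down yet
        "done": "",    # no completion criteria defined
    },
}

print(audit_sops(sops))  # → [('Client onboarding', ['steps', 'done'])]
```

An empty gap list doesn't prove your docs are good, but a non-empty one tells you exactly which tasks would stall a new hire on day one.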
If you can't pass that test, you're not ready to scale. Fix the documentation first, then add capacity. Your future hires—and your future self—will thank you.