AI’s Quiet Coup: How Camouflaging Your Abilities is Killing Hard Conversations
In an era where technology is supposed to connect us, it seems we’re becoming experts at avoiding the very thing that makes us human: honest, hard conversations.
The rise of AI, particularly tools like ChatGPT, has handed us a powerful crutch. But instead of using it to stride confidently toward growth, many of us are leaning on it to limp past discomfort. This isn’t just a trend—it’s a problem we won’t know we have until it’s too late.
Lemme break a few things down…
The New CYA: Camouflage Your Abilities
AI has become the ultimate “fake it ’til you make it” enabler. The tools are so accessible and convincing that many of us are outsourcing not just tasks but actual expertise.
I recently heard about a marketing hire who was “GPTing” everything. From emails to campaign strategy, their approach to the job was essentially prompt-and-go. The results? Predictably mediocre. Only when the company hired someone with actual marketing chops, who asked a few simple questions, did the façade crumble and the role become redundant.
This incident isn’t an outlier—it’s a cautionary tale. Aside from the best-in-class power users of AI, the general population will gladly do as little as possible when interfacing with any technology. For them, AI amplifies the risk of scaled incompetence: people coasting on tech until the cracks become too big to ignore.
The problem isn’t the tech itself; it’s how we’re using it to avoid the messy, necessary work of learning, questioning, and—brace yourself—admitting what we don’t know.
Prompting in Private: The Silent Killer of Collaboration
AI is marketed as something akin to wizardry. The illusion feels so complete that we start believing we’ve got the answers all by ourselves.
Why ask a coworker for help when ChatGPT is ready 24/7? Why risk looking clueless in a meeting when you can privately Google your way to competence?
For me, it’s the privacy of prompting that fosters a false sense of expertise.
AI isn’t a specialized consultant; it’s a predictive generalist that’s only as good as your input. If you don’t know your marketing strategy, your constraints, your audience, or the regulatory and financial considerations of a specialized industry, no prompt in the world will generate a campaign that wows your clients.
But when you prompt in private, you bypass the opportunity for feedback, collaboration, and—critically—growth. You’re left in what I call the predictive panopticon: a closed loop of half-baked ideas that feel safe because they’re unchallenged.
Hard Conversations: The Cost of Avoidance
At the heart of this issue is our collective avoidance of hard conversations. Whether it’s asking for feedback, admitting you don’t know something, or challenging a flawed idea, these moments require vulnerability. And vulnerability is in short supply when you can just click “Generate Response” and move on.
The AI-driven workplace isn’t just failing to spark curiosity—it’s actively suppressing it. Why? Because we’re afraid to look dumb. With all the world’s knowledge seemingly at our fingertips, there’s no excuse for ignorance—at least that’s what we tell ourselves.
But here’s the truth: you’ve got to be dumb to get smart. Growth demands humility, curiosity, and the courage to admit what you don’t know.
Box-Checking Employees Can’t Think Outside The Box?
The problem extends beyond individuals to organizations. If your hiring process values checkbox qualifications over genuine curiosity and expertise, don’t be shocked when you end up with a box-check employee. AI might help these hires “show up,” but it doesn’t teach them to “know up.”
Knowing up means using AI not as a substitute for knowledge but as a springboard for deeper understanding. It requires a mindset shift—from “How can I look competent?” to “How can I actually become competent?” And that shift starts with hard, self-reflective conversations—both with yourself and with your team.
From Prompt to Purpose: Rethinking Our Relationship with AI
AI is a tool, not a tutor. Its value lies not in doing the work for us but in helping us do the work better. That means:
Admitting What You Don’t Know
Before you start prompting, ask yourself: Do I actually understand the problem I’m trying to solve? If not, seek guidance from people—not just machines.

Turning Private Prompts into Public Conversations
Share your AI-generated outputs with colleagues. Invite feedback. Use the technology as a starting point, not the final word.

Prioritizing Growth Over Appearance
Stop using AI to camouflage your abilities. Instead, use it to illuminate gaps in your knowledge and address them.

Hiring for Curiosity, Not Just Credentials
Look for people who are eager to learn, not just those who can check the right boxes. AI can’t replace a growth mindset.
AI isn’t the enemy of progress—it’s the mirror reflecting our flaws. If we’re too afraid to admit ignorance, too proud to ask for help, and too reliant on tech to do the heavy lifting, we’re not just failing to connect with others—we’re failing ourselves.
Let’s stop using AI as a shield and start using it as a catalyst. Because the real magic of this “wizard tech” isn’t that it can do everything. It’s that it can inspire us to do better—if we’re willing to have the hard conversations that matter.