Your Team Is Probably Faking Their AI Enthusiasm
30% of small business workers admit they perform more excitement about AI than they actually feel. I've been watching this play out in real time.
I run an AI community on Whop. Small group right now, mostly operators and agency founders figuring out how to actually use this stuff. Every day, my AI assistant Zane posts a breakdown in the community: a framework, a use case, a prompt technique, something tactical and useful. The posts are solid.
They get read by one or two people. Nobody replies.
Every single post ends with a question: “Anyone running something like this?” or “What automations are you working on right now?” Silence. Every time. People are in the room. They signed up. They’re reading. But they’re not engaging, not asking questions, not saying “I tried this and it didn’t work” or “I don’t understand what you mean.” They’re just... quietly present.
I’ve been staring at this pattern for weeks trying to figure out what’s going on. Then I read a stat from the 2026 Business.com Small Business AI Outlook Report that snapped it into focus: 30% of small business workers say they act more enthusiastic about AI in front of their colleagues than they actually feel.
Not confused by AI. Not opposed to it. Performing excitement they don’t have.
That hit different because I realized what I’m seeing in my own community is the same thing happening inside companies everywhere. People show up, they nod along, they read the material, they sit in the meeting about the new AI workflow. But they don’t actually engage with it in any real way, because they don’t feel safe saying “I don’t get this” or “I tried it and it was bad” or even just “this doesn’t seem relevant to what I actually do all day.”
The Business.com data backs this up further: 45% of small business workers worry that adopting “too much AI” could hurt their company’s reputation with customers. Fortune reported this month that 80% of white-collar workers are either avoiding or outright rejecting AI tools they’ve been given. And there’s a new term circulating for the specific anxiety driving all of it: FOBO, Fear of Becoming Obsolete. Four in ten workers now name AI-driven job loss as a primary fear, and that number doubled in a single year.
When people are scared, they don’t raise their hand. They smile and nod and go back to doing things the old way.
Here’s what I think most business owners miss about this: the silence IS the feedback. If your team isn’t pushing back, asking dumb questions, or complaining about the AI tools you introduced, that’s not a sign that adoption is going well. That’s a sign people don’t feel safe being honest about where they actually are.
I learned this the hard way in my own community. Zane’s posts are genuinely good. But “good content” doesn’t create engagement. Safety does. Permission does. Someone going first and saying “I have no idea what I’m doing” does. I’ve started jumping into the community myself, replying to Zane’s posts, sharing my own half-baked experiments, asking questions I don’t know the answer to. Not because it’s a content strategy, but because modeling honesty about where you’re at is literally the only thing that makes other people willing to do the same.
Slate ran a piece this month about something related that I thought was fascinating. Bosses are getting excited about ChatGPT, sending AI-generated memos and strategy docs to their teams, and the employees are spending extra time cleaning up or decoding the output. The AI didn’t save anyone time. It created a new chore: managing the boss’s enthusiasm. One employee in finance described getting told to be more concise at work while simultaneously receiving, and I quote, “the longest shit known to man” in the form of raw ChatGPT output from their manager.
That’s the dynamic I think a lot of small business owners are accidentally creating. You’re excited about AI because you’ve been using it. You’ve built reps. You found the tools that work for your specific brain and your specific workflow. And then you introduce those tools to your team and expect the same enthusiasm to transfer automatically. It doesn’t. Because enthusiasm follows competence, and competence comes from practice in a low-stakes environment where screwing up is okay.
The businesses I’ve been watching that actually get AI to stick share a few things in common. They pick one boring, repetitive task and let people mess around with it. Not a mandate, not a performance review metric, not a company-wide announcement about being “AI-first.” Just: here’s a thing that eats two hours of your week, here’s a tool, try it, see what happens, no pressure.
Fortune’s data says 56% of workers globally received zero skills development even as their companies rolled out AI tools. That’s like buying everyone a piano and then being disappointed nobody can play Mozart. The instrument isn’t the issue.
The average small business worker saves 5.6 hours per week when they actually use AI well. That’s most of a full workday back, per person, every week. But that number only becomes real when people are using the tools because they genuinely help, not because they’re afraid of falling behind or being the person who “doesn’t get it.”
I’m working through this same thing right now. Building the Abra AI community, running content systems, figuring out which automations actually stick and which ones just look impressive on paper. And the biggest lesson so far is the same one showing up in all the data: the technology is almost never the bottleneck. The bottleneck is whether people feel like they can be honest about what they don’t understand.
If you’re running a team and wondering why the AI tools aren’t delivering, start there. Not with a new tool. Not with a training deck. With a conversation where you go first and say “here’s what I’m still figuring out.”
That’s the unlock nobody’s selling, because you can’t package it. But here’s something you CAN do right now.
One of the biggest blockers I see is that people hit a wall with an AI tool and just... stop. They don’t know what to ask next. They don’t know how to get unstuck. So they close the tab and go back to doing it manually, and nobody ever finds out.
If that’s happening on your team (or to you), here are some troubleshooting prompts you can copy and paste directly into ChatGPT, Claude, or whatever you’re using. These are designed for the moment when the AI gives you garbage and you don’t know what went wrong:
“That output wasn’t useful. Here’s what I actually needed: [describe it]. What did I get wrong in how I asked?”
“Can you explain what you interpreted my request to mean? I want to see where the disconnect was.”
“Give me 3 different ways I could phrase this request to get a better result.”
“I’m going to paste in what you gave me. Tell me which parts are weakest and why: [paste output]”
“Pretend you’re a [job title of the person who would normally do this task]. Now try again with that context.”
“That’s too generic. I need this to sound like it was written by someone who actually works at a [type of business] with [number] employees.”
“What information would you need from me to make this output actually useful? Ask me the questions.”
“Break this task into 3 smaller steps and do just the first one really well.”
“I’ve tried this prompt 3 times and keep getting [describe the problem]. What’s the pattern I’m stuck in?”
“I don’t understand your output. Can you explain it to me like I’m a small business owner who has never used this tool before?”
“This is close but the tone is wrong. Here’s an example of the tone I want: [paste example]. Match that.”
“I need this to be something I can actually send to a client/customer without editing. Right now it’s not. Fix: [list what’s off]”
“What’s the single most important thing I should include in this prompt that I’m probably leaving out?”
“I’m trying to use you for [task] but I’m not sure you’re the right tool. What would this task require that you can’t do?”
“Forget everything about this conversation and start fresh. Here’s exactly what I need, from scratch: [restate clearly]”
Save those somewhere your team can find them. Seriously. Half the reason people abandon AI tools is because they hit one bad output and assume the whole tool is broken. It’s not. They just need a different way in.
I wrote about this ROI gap a couple weeks ago: the real math behind what small businesses spend on AI vs. what they get back. The businesses hitting 34:1 returns aren’t using better tools. They’re using the same tools with better reps. And if you missed yesterday’s piece on why most businesses are paying for AI they don’t actually use, it ties directly into this. The pattern is the same: it’s never the technology.
If you’re an operator or agency founder working through this stuff in real time, that’s exactly what the Abra AI community on Whop is for. Daily breakdowns, prompt techniques, automation frameworks, and a space to actually say “this isn’t working” without anyone judging you for it. Free tier to get started.
If you want a direct conversation about what’s working and what to cut, book an AI Clarity Call. 45 minutes, no pitch, just tool names and a real plan for your specific business.
Hit reply if any of this landed. I read every one.
Talk soon,
Andrew