How AI helps us move faster at Sonary (and where it absolutely doesn’t)

Here’s my take: if we expect small business owners to trust us, we need to be open about how we operate. Sonary exists to help people make confident software decisions, and that only works if we hold ourselves to the same standard of clarity. Over the past year, that has meant being honest about something many teams are still hesitant to discuss openly: how we use AI to support our content.
We didn’t adopt AI because it was trendy, or because we didn’t like writing about business software. We adopted it because we were hitting limits. The sheer volume of content SMB buyers need — comparisons, guides, definitions, frameworks, explanations — is enormous. And while the heart of that work is human, plenty of the scaffolding around it is not. We needed a way to move faster without sacrificing the judgment, nuance, and detail-oriented approach that define our editorial process.
Early on, the output was exactly what you’d expect: inconsistent, tone-deaf in places, occasionally impressive, occasionally baffling. It felt a bit like onboarding a very enthusiastic junior writer who had read everything on the internet but understood almost none of it. Still, it showed flashes of real value, enough to keep going — even if those flashes were surrounded by moments when I had to explain, again, what “smart but not smug” means, or why we use sentence case, or why SMBs don’t want jargon dressed up as expertise.
Where AI actually helps and where it still falls short
What became clear quickly is that AI shines when the task isn’t about brilliance but about structure. When we need a clear starting point, a way to organize a messy thought, or a set of possible angles to pressure-test, AI is genuinely helpful. It can turn the “blank page” problem into something more manageable. It can give us the bones of a piece quickly — a rough outline, a working shape — so that human editors can spend their time adding depth, detail, and insight.
One of the early moments when this clicked for me actually happened right here, as we were shaping this article. I asked the AI for a proposed outline, half expecting to have to redo it completely. Instead, it came back with something surprisingly workable: logical sections, a clear narrative arc, even SEO elements in the right places. I didn’t use it verbatim — I rarely do — but it gave me momentum. It broke the inertia that often slows down the start of a reflective piece. I rewrote, restructured, and refined the whole thing, but the scaffolding made the work easier. It reminded me that sometimes the most challenging part isn’t writing — it’s just getting started.
And that, I’ve learned, is the correct framing: AI isn’t a writer in the room. It’s the scaffolding crew. It sets up the environment so the real work can happen more efficiently.
Where it fails is exactly where you’d expect it to. AI doesn’t reason. It can mimic the sound of reasoning, but it doesn’t understand why an argument works, where it’s fragile, or what a reader might question. When a piece depends on subtle judgment, lived experience, or a thoughtful arc, the model’s limitations show up quickly. I’ve seen drafts that look clean at first glance but unravel the moment you ask, “Does this actually make sense?”
A perfect example happened again while writing this article. I asked the AI for a rough word-count estimate — a simple request, the kind a human would answer with a quick skim. The AI responded confidently… and was wildly off. Google Docs reported 962 words; the model estimated more than 1,300. It wasn’t surprising, but it was telling. AI will answer with total confidence even when it shouldn’t. It doesn’t pause. It doesn’t hedge. It doesn’t look back at its work with the self-awareness humans naturally (though not always) apply. It simply outputs. And then it’s up to us to review the work.
Tone is another recurring challenge. Even with extensive training, reminders, and edits, the model drifts. One paragraph will sound perfectly aligned with Sonary’s smart-but-human style, and the next will slip into something generic or overly academic. I’ve had to nudge it back more times than I can count — “less fluffy,” “less promotional,” “less like you swallowed a marketing thesaurus.” These aren’t bugs; they’re inherent limitations. Models don’t internalize values. They approximate patterns.
And of course, there’s the personal voice problem. When I write something reflective or opinionated, an AI-generated version can gesture at my tone, but it can’t replicate it. It doesn’t know what I’ve experienced or how I actually think about software, or teams, or small businesses. It can simulate a voice, but it can’t bring lived experience into the room. Readers feel the difference instantly.
How we shaped AI into something useful
Behind the scenes, we’ve spent considerable time teaching the AI where those boundaries lie. Not through elaborate training pipelines, but through repetition, examples, and collaboration. We clarified what Sonary’s voice sounds like, what we refuse to publish, and where clarity matters more than cleverness. Over time, the agents became more consistent — not perfect, but steadier. Still, never so reliable that we’d trust them without a human reviewing every line. And that’s intentional. There’s no version of this workflow where the model gets the last word.
The work has shifted from “fixing the AI” to genuinely partnering with it. We bring the context, the data, the understanding of the SMB mindset, the nuance. The AI brings structure, speed, and a kind of relentless willingness to iterate without complaint. When it goes off track, we adjust. When it oversteps, we rein it in. When it gives us something useful, we run with it. It’s a dance — and it only works when we both stay in our lanes.
What we actually use AI for at Sonary
To make this practical, here’s where AI fits into our workflow today:
- Content ideas – generating angles, stress-testing headlines, and identifying questions SMBs are already asking. It helps us see opportunity faster, but humans decide what’s worth pursuing.
- Competitor research – summarizing positioning, surfacing repeated claims, and highlighting messaging patterns across tools. We use it to scan the landscape more efficiently, not to copy it.
- Outline building – structuring long-form guides and comparisons before drafting. This lets our editors focus on analysis instead of formatting.
- Fact-checking prompts – flagging areas that may need verification or deeper review. It helps us ask better questions, but it never replaces manual validation.
- Refreshing data – identifying sections that look outdated or potentially stale so we can revisit sources and update them properly.
- First-draft scaffolding – producing structured starting points that we heavily rewrite before anything goes live.
Every one of these still runs through a human filter. AI speeds up the process; it doesn’t own it.
Knowing when not to use AI, especially for SMBs
There are moments when AI simply isn’t the right tool. If a piece requires emotional intelligence, strategic judgment, or a point of view grounded in personal experience, AI creates noise instead of clarity. In those cases, it slows me down. I can almost feel the mismatch between what it can do and what the moment requires. Some tasks need a human brain from the first word to the last.
If you’re running a small business and trying to figure out where AI fits, I don’t think the answer is all that different from what we’ve learned internally. Use it to clear the runway, not to fly the plane. Delegate the parts of the work that rely on patterns and repetition. Keep the parts that rely on judgment, empathy, and experience. And don’t feel pressured to use AI everywhere — the best outcomes come from being selective, not maximalist.
AI isn’t going to decide your strategy or understand your customers better than you do. But it might give you the time and space to focus on those things.
Content is about trust
We don’t pretend to have all the answers, and our process will keep evolving as the technology does. But I do believe in being transparent about the journey. Content is a trust-building exercise, and trust comes from honesty. AI helps us operate more efficiently, but it doesn’t define who we are or replace the human responsibility we have to our readers.
As long as we maintain that balance — speed without shortcuts, structure without losing voice, experimentation without losing accountability — I’m confident we can keep delivering content that genuinely helps people make better decisions. And I’ll keep sharing what we learn along the way, because navigating this new era is easier when we talk openly about what’s working and what isn’t.
If you’re building content inside your own small business, the lesson isn’t to automate everything. It’s to identify where structure slows you down and let AI handle that layer — so you can focus on clarity, accuracy, and the relationships that actually drive growth.
AI won’t make your content thoughtful. But it can make the process lighter. And sometimes that’s enough to help you think better.