
Why most NZ SMB AI projects fail (and the one rule that fixes most of them)

Most small NZ AI projects don't fail because the tech doesn't work — they fail in predictable ways. One rule fixes most of them.

Ben Anderson · 22 April 2026 · 9 min read

I've seen enough small NZ AI projects up close now that I can usually predict how they're going to fail before they start. Not because the tech is bad — the tech is mostly fine. They fail because of a small set of recurring mistakes that have nothing to do with which model or which platform you picked. And there's one rule that catches most of them, which is the punchline I'll get to in a minute.

This post is for owners and operations leads who've either tried something with AI that didn't stick, or are about to try and want to skip the obvious traps. The pattern I see is consistent enough that it's worth naming the failure modes plainly.

Here's the rule first, so you have it in mind as we go: if a human still has to copy work from one tool to another, your AI isn't working yet.

Hold onto that. Most of what follows is variations on the same theme.

Failure mode 1: the demo that was never a workflow

The most common failure I see goes like this. Someone in the team — often the owner, sometimes a curious admin lead — discovers that ChatGPT or Claude can do a thing. Draft a quote. Summarise a meeting. Categorise some bank lines. The first time they see it work, the obvious leap is "we should be doing this every day."

Three weeks later, nobody is doing it every day. The demo worked. The workflow never showed up.

The reason is that a demo and a workflow are completely different beasts. A demo is one person, in front of a screen, asking the AI a question and getting a useful answer. A workflow is the same task happening reliably every Tuesday at 2pm whether or not the person who set it up is in the office, with a clear input, a clear output, and a clear handover to whatever happens next.

The gap between those two is not technical. It's process design. Who runs it. When. With what inputs. Where the output goes. Who reviews it. What happens if it fails. None of that is hard, but if it doesn't get answered, the demo never becomes the workflow.
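One lightweight way to make those questions unavoidable is to write the workflow down as data before building anything. A minimal Python sketch (the field names and the example workflow are mine, purely illustrative):

```python
from dataclasses import dataclass


@dataclass
class WorkflowSpec:
    """The process-design questions a demo never has to answer."""
    name: str
    owner: str               # who runs it
    schedule: str            # when, e.g. "Tuesdays 14:00"
    input_source: str        # where the input comes from
    output_destination: str  # where the output goes next
    reviewer: str            # who reviews the output
    on_failure: str          # what happens if it fails


def is_workflow(spec: WorkflowSpec) -> bool:
    # A demo becomes a workflow only when every question has an answer.
    return all(bool(value) for value in vars(spec).values())


quote_drafts = WorkflowSpec(
    name="weekly quote drafts",
    owner="the admin lead",
    schedule="Tuesdays 14:00",
    input_source="new enquiries in the shared inbox",
    output_destination="draft quotes in the CRM",
    reviewer="the owner",
    on_failure="email the owner that the run was skipped",
)
```

If you can't fill in every field, you don't have a workflow yet; you have a demo with ambitions.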

Failure mode 2: the AI that needs a human to copy work between tools

This is the one the rule catches directly, and it's the most common reason "automations" don't actually save time.

The pattern looks like this. Someone sets up an AI to draft customer follow-ups. Great. The AI generates a draft in ChatGPT. Someone copies it into Outlook. Then they update the CRM to mark the customer as followed up. Then they tick a box in the spreadsheet that tracks weekly follow-ups.

That's not automation. That's a human stepping through four tools by hand, with one of the steps being "ask the AI for a draft." The AI saved a few minutes of typing. The workflow is still entirely manual.

The real automation question isn't "did the AI do the writing?" It's "did the work move from one place to the next without a person carrying it across?" If a person is the bridge between two systems, you haven't automated anything important — you've just changed which keys they're pressing.

This is what the rule is pointing at. When you're evaluating whether an AI project is working, look at the human movements. If they're still copy-pasting, screenshotting, or re-keying between tools, the AI is doing the easy bit and you're still doing the hard bit. The leverage is in the connections, not the generation.
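To make the contrast concrete, here's what the follow-up example looks like when the connections do the carrying. This is a hedged sketch: the four functions are stand-ins for real email, CRM, and tracker integrations, with in-memory lists and dicts in place of the actual systems.

```python
# Illustrative end-to-end follow-up pipeline: the draft moves between
# systems on its own. In a real build these stubs would call your
# email, CRM, and tracking APIs.

sent_emails: list[tuple[str, str]] = []
crm: dict[str, str] = {}
tracker: list[str] = []


def draft_followup(customer: str) -> str:
    # Stand-in for the model call that writes the follow-up text.
    return f"Hi {customer}, just following up on your recent enquiry."


def send_email(customer: str, body: str) -> None:
    sent_emails.append((customer, body))   # e.g. Outlook / SMTP


def mark_followed_up(customer: str) -> None:
    crm[customer] = "followed up"          # e.g. CRM API call


def log_weekly(customer: str) -> None:
    tracker.append(customer)               # e.g. spreadsheet row


def run_followups(customers: list[str]) -> None:
    """The whole chain. No human carries the draft between tools."""
    for customer in customers:
        body = draft_followup(customer)
        send_email(customer, body)
        mark_followed_up(customer)
        log_weekly(customer)
```

The interesting line is `run_followups`: the draft, the send, the CRM update, and the tracker entry happen in one pass, which is exactly the bridge work the human was doing before.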

Failure mode 3: the "AI strategy" with no owner

Big version of the same problem. A business decides to "do AI." There's a workshop. There's a list of opportunities. There's enthusiasm. Six months later, almost nothing is in production.

What went wrong is almost always ownership. AI projects fail when nobody is on the hook for any specific outcome. "We're going to use AI for customer service" is not an outcome. "By end of next month, our after-hours enquiries get a triaged auto-reply that captures contact details and sends them to Sarah's inbox by 8am next morning" is an outcome.

The pattern that works in small teams: one person, one workflow, one measurable result, one deadline. If the project doesn't reduce to that shape, it doesn't ship. The ones I see succeed are the ones where someone owns the whole arc — the design, the testing, the rollout, the review — not the ones where it gets handed around between people.

This isn't an AI problem. It's a project-execution problem that AI projects suffer from acutely because the technology is novel enough that everyone wants to be involved in deciding things and nobody wants to be the person whose job depends on it shipping.

Failure mode 4: the automation that runs in the background until it doesn't

This one is sneakier and tends to bite teams that have actually had some success. They set up a few automations. The automations run for a while. Everyone gets used to them. Then six months later something has drifted — a tool changed, a process changed, the rules don't quite match reality anymore — and the automation has been quietly producing wrong outputs for weeks.

Sometimes nobody notices because nobody is reviewing. Sometimes the wrong output ends up in front of a customer. Sometimes a report has been showing the wrong number all quarter and a decision gets made on it.

Real automations need a heartbeat. Not a fancy dashboard — a simple "did this run, did it produce sensible output" check that someone actually looks at. Weekly is usually enough for small things. The teams that get this right treat automations like staff members: they have a job description, they have a manager, and someone notices if they don't show up.
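A heartbeat really can be that small. A sketch of the "did this run, did it produce sensible output" check, where the thresholds are illustrative defaults rather than recommendations:

```python
from datetime import datetime, timedelta


def heartbeat(last_run: datetime, output_rows: int,
              max_age: timedelta = timedelta(days=7),
              min_rows: int = 1) -> list[str]:
    """Return the list of problems; an empty list means healthy."""
    problems = []
    if datetime.now() - last_run > max_age:
        problems.append("hasn't run within the expected window")
    if output_rows < min_rows:
        problems.append("last run produced no sensible output")
    return problems
```

Whoever owns the workflow reads the result weekly. The point is that someone looks, not that the check is clever.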

If a human still has to copy work from one tool to another, your AI isn't working yet.

Ben Anderson

The rule, applied

So back to the rule. It's deliberately simple, and it does a lot of work. Let me show what applying it looks like across the failure modes above.

Failure mode 1 (demo that was never a workflow). Apply the rule: where does the output go? If a person has to copy the AI's response into the next tool, the workflow isn't real yet. Either close that gap or accept it's still a demo.

Failure mode 2 (the obvious one). Apply the rule directly. Map the actual sequence of clicks and tool transitions. Every place a human moves data between systems is a place where the AI is leaning on you, not the other way around. Close those gaps in priority order — the most-frequent transitions first.

Failure mode 3 (no owner). The rule helps even here. If you can't say where the output goes and who's accountable for it landing in the right place, you don't have a workflow you can own. Forcing the question makes the ownership gap visible.

Failure mode 4 (silent drift). The rule extends to the review loop. If the automation produces something but no human ever sees it, it doesn't really matter whether the output is right. Build a checkpoint where a person notices.

The reason the rule works isn't that copying between tools is the only failure mode. It's that the rule forces you to look at the real shape of the workflow — the actual movements of data and decisions — instead of the shape you wished it had. Most failed AI projects look fine in slides and broken in practice. The rule is a forcing function for honesty.

What "fixing it" actually looks like

Concretely, here's the pattern I use when fixing one of these projects, anonymised across a handful of small NZ teams I've worked with:

  1. Map the current workflow as it actually runs. Every click, every tool, every handover. Don't sanitise it. The first version is always uglier than people expect, which is the point.
  2. Find the human-copy steps. Highlight every place where a person is moving information between systems. These are the leverage points.
  3. Pick the most frequent one. The transition that happens most often is almost always where you should automate first. Volume × annoyance = priority.
  4. Close that gap. Sometimes that means an integration. Sometimes it means switching one of the tools out for one that connects properly. Sometimes it means a small AI agent that watches one inbox, does the categorisation, and posts the result into the right system. The how depends on the specifics.
  5. Add a heartbeat. Whoever owns the workflow gets a weekly summary. Did it run? Did the output look right? If not, what changed?
  6. Then look at the next human-copy step. Not the second-most-painful gap — the next-most-frequent one. Repeat.
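Steps 2 and 3 reduce to a few lines once the human-copy steps are written down. A toy sketch, with made-up steps and numbers, of scoring each transition by frequency and annoyance:

```python
# Each human-copy step: (description, times per week, annoyance 1-5).
# The steps and numbers are invented for illustration.
copy_steps = [
    ("paste the AI draft into Outlook",        25, 2),
    ("re-key supplier invoices into accounts",  8, 5),
    ("screenshot the weekly report for chat",   3, 1),
]


def priority(step: tuple[str, int, int]) -> int:
    _, frequency, annoyance = step
    return frequency * annoyance  # volume x annoyance


# Automate the highest-scoring transition first.
first_to_automate = max(copy_steps, key=priority)
```

In this made-up list the Outlook paste wins on both frequency and score, which is typical: the step people barely notice because they do it constantly is usually the one worth closing first.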

The result over a few months isn't a transformation. It's a quietly more reliable business where the team spends less time being a bridge between systems and more time on the work that actually needs them.

Why small NZ teams have an advantage here

One thing worth saying directly: small teams have a real advantage on this stuff that big organisations don't. You can map your workflows on a single page. You can change a process by walking across the office. You can decide on Monday and have something running by Friday. Almost none of this is true at scale.

The teams that win with AI in the next few years won't be the ones with the biggest budgets or the fanciest tools. They'll be the ones that took their actual workflows seriously, applied the rule, and closed the human-copy gaps one at a time. That work is more available to a 12-person business in Nelson than it is to a 1,200-person business anywhere.

If your AI projects haven't stuck yet, my bet is it's not because the tech failed you. It's because the workflow shape was never finished. The rule is a starting point. The rest is just doing the work.

If you want a hand applying this to your business — mapping the workflows, picking the right human-copy gap to close first, building something that actually stays on — that's the kind of thing I help with. The framework I use for picking the first automation is the natural next step if you've nodded at most of this. If you're further along and ready to think about workflows that run without you, managed agents are where that conversation goes next. Or just book a conversation and we'll work through it.

Tags

AI Implementation · Failure Modes · Workflow

Written by

Ben Anderson

Founder, Nelson AI

Ben builds practical AI and automation for New Zealand businesses — internal tools, web apps, and workflow automations scoped to what the work actually needs.

Get in touch
