Mule
Journal · 3 min read

AI in our process: what we actually use, and what we don't

Honest disclosure: where AI shows up in the work we ship, what a human still does, and the rule we won't break.

Broadband internet equipment on a rural island, bright midday sky.
Photo: U.S. Army / Sgt. Andres Chandler · Wikimedia Commons (Public domain)

People ask us this in nearly every brief call: do you use AI in your work? The honest answer is yes, in specific places, with rules. Here is the long version.

What AI does for us

Drafting. When the brief is locked, we use a frontier language model to write a first pass at headline options, page-section copy, and FAQ entries. We treat the output as a sketch, not a deliverable. A human edits at least a third of the words before anything ships.

Search and pattern-finding. When we audit a client's existing site, we use a model to find broken patterns we might miss reading manually: missing alt text, inconsistent heading order, copy that contradicts other copy. The model is faster than we are at scanning a 40-page site for that kind of inconsistency. We verify each issue before fixing.
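As an illustration only (this is not our actual tooling), two of those checks, images missing alt text and heading levels that jump, can be sketched with Python's standard-library HTML parser:

```python
from html.parser import HTMLParser


class AuditParser(HTMLParser):
    """Flags two audit patterns: <img> tags without an alt attribute,
    and heading levels that skip (e.g. an h2 followed by an h4)."""

    def __init__(self):
        super().__init__()
        self.issues = []
        self.last_heading = 0  # 0 means no heading seen yet

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.issues.append("img missing alt")
        elif tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            level = int(tag[1])
            # A jump of more than one level breaks the document outline.
            if self.last_heading and level > self.last_heading + 1:
                self.issues.append(
                    f"heading jump: h{self.last_heading} -> h{level}"
                )
            self.last_heading = level


def audit(html: str) -> list[str]:
    """Return a list of issue descriptions found in the given HTML."""
    parser = AuditParser()
    parser.feed(html)
    return parser.issues


print(audit('<h1>Hi</h1><h3>Oops</h3><img src="a.jpg">'))
# -> ['heading jump: h1 -> h3', 'img missing alt']
```

A real audit would still hand each flagged issue to a human for verification, exactly as described above; the script only narrows where to look.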

Image direction. When we mock up a design concept and need a placeholder image to test a layout, we generate an AI image as a sketch. The placeholder never ships. The real photograph or licensed image replaces it before launch.

Internal tooling. We use coding-assistant tools for the unglamorous parts of building: writing test fixtures, formatting JSON, scaffolding repetitive components. We do not ship code we do not understand.

What AI does not do for us

Final copy. Every published sentence on every site we ship has a human author. We do not pass AI-generated paragraphs off as ours. If a sentence reads like everyone else's small-business website, we rewrite it.

Photography. We do not publish AI-generated photos of real clients. Not the owner, not the staff, not the location, not the product. The pattern is too easy to spot and it burns the trust customers spend thirty seconds testing. When a project needs photography, we hire a real photographer (see the post on why we direct content but do not shoot it).

Customer-facing decisions. A model never decides which features go on which tier of your service business. A model never picks your tagline. A model never gives someone a refund. Those are decisions a human at our studio makes and signs.

The rule we will not break

The one absolute is: no fake reviews, ever. Not real-sounding ones, not "polished from a real comment" ones, not "draft a response from this customer's note" ones. The line between editing and fabricating is something a model cannot police, so we police it instead. If a client asks us to write a review on behalf of a customer, we decline and we explain why.

This costs us deals. We are fine with that. The day we ship a fake review is the day we are no longer Mule.

Why this disclosure exists

Because the industry is currently busy hiding it. Agencies say "human-crafted" while shipping content that is 90% machine-drafted. Studios say "we do not use AI" while their copywriter is pasting Claude output into a Google Doc. The whole thing is a soft lie, and customers are getting tired of soft lies.

Our position is that AI is a real tool, that it lets us deliver more scope per dollar than a studio that avoids it, and that the work is honest as long as a human is responsible for what ships. If you want a studio that pretends AI does not exist, we are not the right shop. If you want one that uses it, names it, and edits the output, we are.

Written by

Emile Holemans

Co-Founder & Creative Technologist

emile@mule-digital.com

Ready to build something?

Mule builds sites, brands, and digital strategy for rural and small-town businesses. Tiers from $799. We write back personally.