Why we don't lead with AI
We don't lead with AI in our marketing. It's the central technical unlock for the 2-week audit, but it's never the headline. This isn't false modesty — it's a deliberate read of how industrial buyers actually evaluate vendors.
The plant engineer's default skepticism
A plant engineer with 25 years of audits behind them has watched the industry cycle through Six Sigma, Lean, IoT, digital twins, and now AI. Every wave promised dramatic step-changes. Most delivered modest, real, but unspectacular improvements once the consultants left.
That engineer reads "AI-powered industrial energy audits" on a vendor page and sees a familiar pattern: marketing claim ahead of operational reality. Their default response isn't "tell me more." It's "show me the math."
Leading with AI signals to that buyer that we're selling them magic. Magic doesn't survive an engineering review at Hydro-Québec or a challenge from a P.Eng. peer reviewer. The audits we've won — and the ones we've lost — both teach the same lesson: industrial buyers don't buy capability claims, they buy defensible mechanisms.
What AI actually does in our pipeline
Our AI work is concrete and limited:
It accelerates data ingestion. Plant asset spreadsheets arrive in 50 different formats. Normalizing them by hand is a week of work. Our pipeline does it in hours, with audit-grade error tracking.
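As a minimal sketch of what "normalizing with audit-grade error tracking" means in practice, the toy below maps varied spreadsheet column names onto one canonical schema and records every unmapped or missing field instead of silently dropping it. The schema, aliases, and field names are illustrative assumptions, not Opnor's actual mapping.

```python
# Toy schema normalizer with error tracking. The canonical fields and
# alias table are illustrative, not the real pipeline's schema.
CANONICAL = {"asset_id", "rated_kw", "run_hours"}
ALIASES = {
    "tag": "asset_id", "equip_no": "asset_id",
    "kw": "rated_kw", "power_kw": "rated_kw",
    "hours": "run_hours", "annual_hours": "run_hours",
}

def normalize_row(row: dict) -> tuple[dict, list[str]]:
    """Map one spreadsheet row onto the canonical schema, logging every issue."""
    out, errors = {}, []
    for key, value in row.items():
        canon = ALIASES.get(key.strip().lower(), key.strip().lower())
        if canon in CANONICAL:
            out[canon] = value
        else:
            errors.append(f"unmapped column: {key!r}")
    for missing in CANONICAL - out.keys():
        errors.append(f"missing field: {missing}")
    return out, errors
```

The point is the error list: every row that can't be fully mapped surfaces in the audit trail rather than vanishing into a cleaned dataset.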
It runs reconciliation in parallel. The Audit Engine's reconciliation pipeline (top-down vs bottom-up, per carrier, per area) is deterministic math, not ML. The acceleration comes from running the search across all assets simultaneously instead of an auditor walking through them in spreadsheets.
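The deterministic core of that reconciliation can be sketched in a few lines: compare the metered total per carrier (top-down) against the sum of per-asset estimates (bottom-up) and report the gap. The field names and the 5% tolerance are hypothetical stand-ins, not the Audit Engine's actual parameters.

```python
# Sketch of a top-down vs bottom-up check per energy carrier.
# Field names and the 5% tolerance are illustrative assumptions.
def reconcile(billed: dict[str, float],
              asset_estimates: list[dict],
              tolerance: float = 0.05) -> dict[str, dict]:
    """Compare metered totals against summed asset estimates, per carrier."""
    results = {}
    for carrier, metered in billed.items():
        bottom_up = sum(a["kwh"] for a in asset_estimates if a["carrier"] == carrier)
        gap = metered - bottom_up
        results[carrier] = {
            "metered": metered,
            "bottom_up": bottom_up,
            "gap": gap,
            "within_tolerance": abs(gap) <= tolerance * metered,
        }
    return results
```

There is no model in the loop: the same inputs always produce the same gaps, which is what makes the result defensible in a peer review.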
It matches ECMs against rules. The 50+ ECM matching conditions in our library are hand-coded engineering rules — affinity laws, efficiency-class deltas, pre-defined leak rates. The matching itself is rule evaluation. Fast on a server, slow in a spreadsheet.
It drafts reports. The output of the audit pipeline is a structured data object. Generating the human-readable report from that structure — section ordering, recommendation phrasing, captioned tables — is the part where LLMs actually help. Final pass is always a human auditor and a P.Eng. signature.
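To make "structured data object in, report text out" concrete, here is a toy renderer for a single recommendation. In production the phrasing pass is LLM-assisted and always human-reviewed; this fixed template, and all its field names, are hypothetical stand-ins for that step.

```python
# Toy renderer: one entry of the structured audit object becomes report text.
# Field names are illustrative; the real pipeline's schema differs.
def draft_ecm_section(ecm: dict) -> str:
    return (
        f"Recommendation: {ecm['title']}\n"
        f"Estimated savings: {ecm['kwh_saved']:,} kWh/yr "
        f"(${ecm['dollars_saved']:,}/yr)\n"
        f"Basis: {ecm['basis']}"
    )
```

The key property is that every number in the prose traces back to a field in the structured object, so the P.Eng. reviewing the draft can check each figure against the pipeline output.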
Why subtle is the right framing
The framing that converts industrial buyers is measurable outcome + specific mechanism, in that order. We say "the audit takes 2 weeks instead of 10" (outcome), then explain what AI compresses (mechanism). We don't say "our AI finds savings" — that's a claim with no defensible mechanism.
On the homepage, AI is in the third line of the headline: "30 years of industrial audits. Subsidized by Hydro-Québec. Accelerated by AI." Order matters. Credibility (30 years) earns the right to claim acceleration (AI).
The downside of leading with AI
Pages that headline "AI-powered" do well in inbound from venture funds, journalists, and tech-curious sustainability teams. They do badly with the people who sign actual audit contracts: plant managers, operations VPs, energy directors at industrial corporates. Those buyers either skip the page or arrive at the meeting hostile.
We optimize the marketing for the buyer who signs. The technology depth is in these docs, where it can be evaluated by the engineer who reads them. The homepage is honest about the credibility we've earned in 30 years and the acceleration that comes from a specific software pipeline. The two layers — surface marketing and technical docs — speak to the right audiences without mixing the messages.
What this means for vendor evaluations
If you're evaluating Opnor against another energy-audit vendor and one of them is leading with "AI-discovers-ECMs" claims, the right move is to ask both:
— "Show me the formula behind a single ECM's savings number on my plant."
— "What part of the audit is AI doing, and what is your senior engineer doing?"
— "Who signs the report, and what's their accountability if the numbers are wrong?"
Vendors who can answer those questions concretely are running real engineering operations with software acceleration on top. Vendors who deflect or speak in capability metaphors are selling magic. Pick the former.
Open items
- Founder review — does this voice match what Daniel says in customer meetings?
- Add specific competitor names if Opnor wants to call out the AI-first vs engineering-first split
- Cross-link to the Audit Engine page once it has the AI-compression detail section