HVAC Google Ads resource

How to know if your Google Ads agency is actually doing a good job

Most operators evaluate their marketing agency on vibes (the monthly report looks fine, the agency answers emails, the relationship feels professional). Vibes are uncorrelated with whether the account is actually being managed well. Below is the 13-point operator-side diagnostic for evaluating a Google Ads agency. Each item is a specific question with an objective check. An agency that fails 4 or more is failing on the substance, regardless of how the reports look.

Quick answers


What "good" looks like operationally. A good agency makes specific changes to the account on a weekly cadence, documents what changed and why, hits or beats the cost-per-booked-job target on a 90-day rolling window, and surfaces strategic decisions (budget changes, campaign-structure shifts) BEFORE making them, not in the monthly report after the fact. Everything below is a check against one of those four dimensions.

The vibes problem. Polished monthly reports and responsive email cadence are necessary but not sufficient. Agencies that fail on substance often have the polish dialed in; that is why most operators do not catch the failure mode until they fire the agency and discover the account was on autopilot for 6 months. The 13-point checklist below is built to bypass the polish and check the substance directly.

The cadence question. A managed account should show change-history activity at least weekly (Google Ads > Tools > Change history). If the change-log is empty for 30+ days, the agency is not touching the account; the bidder is doing all the work and you are paying a retainer for the bidder. This is the single highest-leverage check; most failing agencies fail it.

The transparency question. The agency should be willing to share the account directly (read-only access at minimum, admin access on the manager account ideally) without making it awkward. If sharing access is treated as unusual, that is signal. You are paying for the account; you should be able to look at it.


Diagnostic checklist

The 13-point checklist for evaluating a Google Ads agency. Failing 4+ items is a substance problem, regardless of report polish.

Change-history activity at least weekly. Open Google Ads > Tools > Change history. Are there entries from the agency at least weekly? If the change-log shows zero activity for 30+ days, the agency is not actively managing.
Conversion tracking integrity verified quarterly. Has the agency run a tracking-integrity check in the last 90 days? Documented in writing? Most agencies skip this; broken-tracking accounts produce false-positive monthly reports because the bidder optimizes on the broken numbers.
Search-term report reviewed at least monthly. Has the agency added negative keywords from the search-term report in the last 30 days? If the negative-keyword list has not grown, broad-match leakage is eating the budget.
Cost-per-booked-job target, not just cost-per-lead. Does the agency know your gross margin and your close rate? Are they targeting cost-per-booked-job, or just cost-per-lead? Cost-per-lead is a vanity metric without close-rate context.
Branded vs. non-branded campaign separation. Are branded queries (your own name) and non-branded queries in separate campaigns with separate budgets and bid strategies? Pooled campaigns hide branded ROAS inside the blended number; at equal spend, a 10x branded ROAS and a 2x non-branded ROAS pool to 6x, which looks fine on a report but hides the fact that non-branded is unprofitable.
Account-ownership verified. Are you on the manager account at admin level? If not, the agency owns your account, not you. This is non-negotiable for any long-term relationship.
Conversion-window settings match sales cycle. For HVAC, repair campaigns close in days; replacement campaigns close in weeks. Are the conversion windows configured to match each campaign's actual sales cycle? Default 30-day windows on a 14-day repair cycle cost the bidder learning speed.
Strategic decisions surfaced before changes. Budget shifts, campaign-structure changes, bid-strategy migrations: are these communicated before they happen, or do you first see them in the monthly report after the fact? They should be surfaced before.
Real performance review on a 90-day rolling window. Does the monthly report show 90-day rolling cost-per-booked-job, not just last-30-days vs. previous-30-days? Short windows produce noise; agencies that report short windows are picking the windows that look best.
Negative-keyword list growing, not static. The negative-keyword list should grow every month as new waste appears. A static list from agency setup that has not been touched in 6 months is a failing agency, not a clean account.
Call tracking wired and validated quarterly. For HVAC, phone calls are 70 percent of conversions. Is call tracking configured? Has the agency validated that calls are firing as conversions in the last 90 days? "We assume call tracking is working" is not a quarterly validation.
Quality Score actively managed. Is the agency tracking Quality Score on top-spending keywords and addressing low scores via landing-page improvements, ad-copy rewrites, or keyword-relevance tightening? Quality Score is a multiplier on every click's cost; ignoring it leaves 15-30 percent of efficiency on the table.
Competitive context surfaced regularly. Does the agency monitor competitor activity (brand-defense gaps, sponsored GBP cards on the local pack, AI Overview citation share, competitor ad copy)? At least quarterly? Competitive context shifts; agencies that never surface it are not paying attention to the market you are competing in.
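The arithmetic behind two of the checks above (blended ROAS masking, cost-per-booked-job) can be sketched in a few lines. The spend and rate figures here are hypothetical, chosen only to illustrate the math:

```python
def blended_roas(spend_a, roas_a, spend_b, roas_b):
    """Spend-weighted ROAS of two campaigns pooled into one number."""
    return (spend_a * roas_a + spend_b * roas_b) / (spend_a + spend_b)

def cost_per_booked_job(cost_per_lead, close_rate):
    """What a booked job actually costs: cost per lead divided by close rate."""
    return cost_per_lead / close_rate

# At equal spend, a 10x branded ROAS and a 2x non-branded ROAS pool to 6x,
# hiding the unprofitable non-branded side inside the blended number.
print(blended_roas(5000, 10.0, 5000, 2.0))  # 6.0

# An $80 cost-per-lead at a 40% close rate is a $200 booked job, not an $80 one.
print(cost_per_booked_job(80, 0.40))  # 200.0
```

This is why the checklist insists on separated campaigns and a cost-per-booked-job target: both numbers are invisible in a pooled, cost-per-lead-only report.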


What the audit catches

The audit serves as the objective check on the 13-point list. You can run the list yourself (most items are checkable in under 15 minutes from inside Google Ads), but the audit produces the evidence base in writing: which items pass, which fail, and the specific finding behind each failure.

For an operator deciding whether to fire their current agency, the audit produces three useful artifacts: (1) the pass/fail status on each of the 13 items, (2) the specific evidence behind each failure (e.g., "negative-keyword list has 23 terms; no additions in the last 47 days; search-term report shows 8 wrong-service queries in the top 50 by cost"), and (3) the prioritized fix list the next agency (or a self-managed account) should run in the first 30 days.

The audit's job is not to recommend firing the agency. The audit's job is to make the substance visible. Operators who run the audit and find their agency passes 12 of 13 items often keep the agency with a tighter scope; operators who find the agency passes 4 of 13 items have the evidence base for a firing conversation.

Get the audit

Free, ~48-hour turnaround, no sales call. The audit catches the exact issue this page describes.

Focus: specialist paid search
Pricing: $500/month flat
Reply time: within 24 hours, weekends too