Prompt 13

Proof Point Weakness Audit

---

What This Page Is About

In a world where every brand makes strong claims, proof is what separates the brands that get recommended from the brands that get evaluated and passed over. Most brands have proof — but most proof is the wrong type, at the wrong specificity, for the wrong audience. These prompts diagnose not just whether you have proof, but whether your proof is the kind that actually moves a skeptical buyer.


When to Use These Prompts

  • When your sales cycle is long and proof-dependent
  • When buyers ask "can you share more evidence?" during evaluation
  • When case studies exist but don't seem to be converting pipeline
  • When entering an enterprise market where credibility requirements are higher
  • When AI recommendations include caveats or qualifications about [BRAND]'s track record

Prompt 1 — Basic Proof Check (Easy Entry)

prompt
If a [TARGET AUDIENCE] asked you "has [BRAND] actually been proven to work for businesses like mine?" — what would you say?

Be specific about what evidence exists, how strong it is, and what you'd want to see before giving a more confident answer. Don't soften the assessment.
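Every prompt on this page uses bracketed placeholders like [BRAND] and [TARGET AUDIENCE]. If you run these prompts programmatically, a minimal fill-in helper is enough; the function name and example values below are illustrative, not part of any particular LLM SDK.

```python
def fill_prompt(template: str, values: dict[str, str]) -> str:
    """Replace each [PLACEHOLDER] token with its supplied value."""
    for key, val in values.items():
        template = template.replace(f"[{key}]", val)
    return template

template = (
    'If a [TARGET AUDIENCE] asked you "has [BRAND] actually been proven '
    'to work for businesses like mine?" — what would you say?'
)
filled = fill_prompt(
    template,
    {"TARGET AUDIENCE": "mid-market CFO", "BRAND": "Acme"},
)
print(filled)
```

The same helper works for every prompt in this set — keep the placeholder names consistent across your prompt library so one substitution map fills any template.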

Prompt 2 — Four-Type Proof Audit

prompt
Evaluate [BRAND]'s proof architecture across four proof types:

Type 1 — Outcome proof: Does [BRAND] publish specific, measurable results that customers achieved? Rate the specificity: vague ("improved efficiency") / moderate ("reduced time by 30%") / precise ("reduced onboarding time from 3 weeks to 4 days for a 50-person SaaS team").

Type 2 — Social proof: Are the customers vouching for [BRAND] ones that [TARGET AUDIENCE] would recognize and respect? Are testimonials generic praise — or specific endorsement of the outcomes that matter to the buyer?

Type 3 — Structural proof: Does [BRAND] have third-party validation — analyst reports, independent reviews, audits, certifications — that confirms claims without [BRAND] being the one making them?

Type 4 — Demonstration proof: Can a buyer experience the product's value before committing — free trials, live demos, published methodology, transparent pricing? Rate: friction-heavy / accessible / frictionless.

After the audit: which proof type is weakest — and which single proof investment would most change how confidently you'd recommend [BRAND]?
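If you run this audit regularly, the four-type ratings can be tracked as a simple scoring structure. This is a sketch only — the numeric scale and the example ratings are assumptions, not part of the prompt itself.

```python
# Map the prompt's specificity labels to a numeric scale (assumed 1–3).
SPECIFICITY_SCORES = {"vague": 1, "moderate": 2, "precise": 3}

def weakest_proof_type(ratings: dict[str, int]) -> str:
    """Return the proof type with the lowest score — the audit's key output."""
    return min(ratings, key=ratings.get)

# Hypothetical ratings for one brand, one score per proof type.
ratings = {
    "outcome": SPECIFICITY_SCORES["vague"],  # e.g. "improved efficiency"
    "social": 2,
    "structural": 2,
    "demonstration": 3,
}
print(weakest_proof_type(ratings))  # → outcome
```

Scoring the audit this way makes quarter-over-quarter comparison possible: rerun the prompt, re-rate each type, and watch whether the weakest type actually moves.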

Prompt 3 — Proof Specificity Audit

prompt
The most common proof problem is specificity — proof that exists but isn't specific enough to be convincing.

Audit [BRAND]'s proof for specificity across three dimensions:

1. Customer specificity: Are [BRAND]'s customers named, described, and contextualized — or anonymized to the point of uselessness ("a Fortune 500 company")? Specific customer context is what makes proof transferable to a new buyer.

2. Outcome specificity: Are the results described in concrete, measurable terms — or in vague, unverifiable language? "Saved significant time" and "reduced weekly reporting time from 6 hours to 45 minutes" are not equivalent proof.

3. Context specificity: Is the proof from companies that match [TARGET AUDIENCE]'s size, industry, and use case — or from contexts so different that transferability is unclear?

For each dimension: rate current specificity level and describe what "high specificity" proof would look like for [BRAND]'s ideal buyer.

Prompt 4 — Proof Coverage Gap Analysis

prompt
[BRAND] may have strong proof for some use cases or audiences but weak proof for others — creating gaps that matter in specific deals.

Map [BRAND]'s proof coverage across its primary segments:

Use case 1: [PRIMARY USE CASE] — is there specific, credible proof?
Use case 2: [SECONDARY USE CASE] — is there specific, credible proof?
Industry 1: [PRIMARY INDUSTRY] — named customers and outcomes?
Industry 2: [SECONDARY INDUSTRY] — named customers and outcomes?
Company size: Small/mid-market — proof?
Company size: Enterprise — proof?

For each cell: strong / weak / absent. Then tell me: which proof gap is most likely costing [BRAND] deals right now — and what's the fastest way to close it?
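The grid above is just a matrix of (dimension, segment) cells rated strong/weak/absent. A sketch of how to capture it for tracking — the segment names and ratings here are hypothetical placeholders for your own:

```python
# Coverage grid: one rating per (dimension, segment) cell.
coverage = {
    ("use case", "workflow automation"): "strong",
    ("use case", "compliance reporting"): "absent",
    ("industry", "SaaS"): "strong",
    ("industry", "healthcare"): "weak",
    ("company size", "mid-market"): "strong",
    ("company size", "enterprise"): "weak",
}

# Gaps are any cells rated weak or absent — the deals-at-risk list.
gaps = [cell for cell, rating in coverage.items() if rating in ("weak", "absent")]
for dimension, segment in gaps:
    print(f"Gap: {dimension} / {segment}")
```

Feeding this list to sales (per the Pro Tips below) turns the raw gap inventory into a prioritized close-the-gap plan.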

Prompt 5 — Proof Quality vs. Proof Volume

prompt
Many brands have the wrong proof problem. They think they need more proof — more case studies, more testimonials — when the real problem is proof quality.

Evaluate [BRAND]'s proof on quality vs. volume:

Volume assessment: How many pieces of customer proof does [BRAND] have publicly available? Is volume the constraint?

Quality assessment: Of the proof that exists, how much of it would pass the "specific, named, measurable, contextually relevant" test? What percentage is genuinely high-quality vs. generic and vague?

After both assessments: does [BRAND] have a quantity problem (not enough proof) or a quality problem (enough proof but not the right kind)? What's the specific action that addresses the actual problem — not the assumed one?

Prompt 6 — Third-Party Proof Gap

prompt
Self-generated proof (case studies, testimonials produced by the brand's own marketing team) is inherently less credible than third-party proof (independent reviews, analyst coverage, press mentions, community endorsement).

Audit [BRAND]'s third-party proof footprint:

1. Review platform presence: Is [BRAND] present on the review platforms [TARGET AUDIENCE] actually uses? (G2, Capterra, Trustpilot, industry-specific platforms) Is the review volume and rating strong enough to build confidence?

2. Analyst coverage: Has [BRAND] been included in analyst reports, buyer's guides, or category rankings that [TARGET AUDIENCE] consults?

3. Independent press: Has [BRAND] been covered positively in publications that [TARGET AUDIENCE] reads — not just press releases, but editorial coverage?

4. Community endorsement: Do practitioners in [CATEGORY] communities recommend [BRAND] spontaneously — in forums, Slack groups, LinkedIn posts — without incentive?

For each: strong / developing / absent. What's the highest-leverage third-party proof investment for [BRAND]?

Prompt 7 — Proof Architecture Rebuild (Advanced)

prompt
[BRAND] needs to rebuild its proof architecture from the ground up. Current state: [DESCRIBE — e.g., "3 case studies, all anonymous, no specific outcomes, no third-party validation, limited reviews"]

Build a 12-month proof architecture development plan:

Phase 1 (Months 1–3) — Proof floor: What are the minimum viable proof assets that need to exist before [BRAND] can be confidently recommended? How many case studies, at what specificity level, from which segments?

Phase 2 (Months 4–8) — Proof breadth: How does [BRAND] extend proof coverage across its primary use cases, industries, and audience segments? What's the prioritization framework?

Phase 3 (Months 9–12) — Proof authority: What proof assets — original research, benchmark data, independent audits, analyst inclusion — would make [BRAND]'s evidence base not just credible but authoritative?

Throughout: what systems, incentives, and processes would [BRAND] need to put in place to make proof generation a continuous function rather than a periodic campaign?

Pro Tips for This Prompt Set

  • Proof quality beats proof volume every time. One specific, named case study with real numbers from a recognizable company is worth more than 10 anonymous testimonials.
  • Most B2B brands underinvest in review platforms. G2, Capterra, and similar platforms are heavily referenced by AI systems — a strong review presence there directly improves recommendation quality.
  • Proof gaps kill enterprise deals. Enterprise buyers have risk management requirements that proof directly addresses. If proof is weak, the deal often dies not at the product demo but at the "can you share more customer evidence" stage.
  • Run Prompt 4 (Coverage Gap) with input from sales. They know exactly which segments ask for proof and don't get it. Their input turns this from a theoretical audit into a prioritized action plan.

Common Mistakes

  • Publishing proof for the wrong audience. Case studies from customers that don't match the buyer's context create doubt, not confidence.
  • Treating testimonials as proof. "We love [BRAND]!" is not proof. Proof requires specificity: who, what problem, what they did, what result, over what timeframe.
  • Neglecting third-party proof in favor of owned proof. Self-published proof is inherently less credible. The brand with the strongest proof architecture has a mix of owned, earned, and independent validation.
  • Letting proof age. A case study from 4 years ago signals stagnation in fast-moving categories. Proof needs to be refreshed continuously.


Explore With AI