WalletGrower

Our editorial methodology

How we verify every rate, APR, fee, and bonus on WalletGrower, including the framework we built after a site-wide accuracy audit in April 2026 that found and corrected errors across 225+ articles.

Updated April 26, 2026 · What changed: Documented the 7-lesson anti-fabrication framework, added the monthly accuracy audit cadence, and disclosed the post-audit fix counts.

The short version

  • Every rate, APR, fee, and bonus on this site has a verified source. If we can't verify it, we don't publish it.
  • We re-verify every article on a monthly cadence: not "as needed," not "annually." The first of every month, automatically.
  • We disclose what changed. Every refresh shows an "Updated [date] · What changed: [specific change]" note. Date-bumping without a real change is forbidden.
  • Affiliate compensation does not affect rankings. Several of our top-recommended products pay us nothing. Several products we don't recommend would pay us more than the ones we do.

The April 2026 site-wide accuracy audit

In April 2026 we discovered a "plausibility-with-wrong-details" pattern across part of our content library: real companies, competent-sounding language, but specific structural fields (APRs, fees, subscription tiers, state availability, product menus, welcome bonuses, cosigner-release rules) that were sometimes incorrect. The trigger was a single article on Funding U, a real student lender whose ISA program, loan ceiling, and credit minimum we had misdescribed.

We pulled the entire library into review, treating every page as unpublished until verified. The site-wide audit covered:

  • ~225 CMS articles reviewed for structural-field accuracy (APRs, fees, tier names, state availability, product menus, welcome bonuses).
  • ~230 static pages reviewed for the same dimensions, plus dead-product and dead-brand mentions.
  • 15 product-data files (~162 product entries) re-validated against primary-source disclosures from each provider.
  • 14 dead-brand patterns + 8 stale-numerical patterns scanned across every published page.

Findings were either auto-corrected (clear-cut cases like a renamed product) or flagged for editor review (judgment calls where multiple interpretations were defensible). The full audit log is preserved internally for regulatory and editor reference.

The 7-lesson anti-fabrication framework

The audit produced a framework that now governs every new article and every refresh. Each lesson encodes a specific failure mode we observed.

1. Real brand + wrong details is the dominant failure mode

When LLMs are wrong, they're rarely wrong about whether a company exists. They're wrong about specific structural fields. We treat every rate, fee, limit, tier name, state list, and cosigner rule as suspect until it is verified against a source.

2. "Cons" sections are the most dangerous

It's easy to invent limitations that don't exist. Every "Cons" or "Watch-outs" claim must be tied to a primary source. If we can't source the criticism, we don't publish it.

3. TypeScript validation must be enforced on product data

Every product file now has a Zod schema that validates required fields at module-load time. Disabling type-checking on data files is forbidden.
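To illustrate the idea, here is a dependency-free sketch of the kind of load-time check described above. In production this is a Zod schema; the field names (`apyPercent`, `sourceUrl`) and the example entry below are hypothetical, not our real data model.

```typescript
// Hypothetical product-entry shape; real files carry many more fields.
interface ProductEntry {
  name: string;
  apyPercent: number; // must be finite and non-negative
  sourceUrl: string;  // primary-source disclosure (see lesson 7)
}

function validateProduct(entry: unknown): ProductEntry {
  const e = entry as Record<string, unknown>;
  if (typeof e?.name !== "string" || e.name.length === 0)
    throw new Error("product entry missing name");
  if (typeof e?.apyPercent !== "number" || !Number.isFinite(e.apyPercent) || e.apyPercent < 0)
    throw new Error(`invalid apyPercent for ${String(e?.name)}`);
  if (typeof e?.sourceUrl !== "string" || !e.sourceUrl.startsWith("https://"))
    throw new Error(`missing source URL for ${String(e?.name)}`);
  return e as unknown as ProductEntry;
}

// Validation runs at module-load time, so one bad entry fails the whole build:
const products: ProductEntry[] = [
  { name: "Example HYSA", apyPercent: 4.1, sourceUrl: "https://example.com/rates" },
].map(validateProduct);
```

Because the check runs when the module is imported, a malformed entry can never reach a rendered page; it breaks the build first.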

4. LLM training data biases rates ~100–150 bps high

HYSA APYs in training data reflect peak 2023–early 2024 conditions. We verified that bulk-seeded HYSAs ran 100–150 bps too high; CDs ran 60–80 bps high; specific products had outright hallucinated rates. We never accept a default rate without verification against the vendor's published disclosure.
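The drift check above reduces to simple basis-point arithmetic. A sketch, where the tolerance value and function names are illustrative rather than our actual configuration:

```typescript
// 1 percentage point = 100 basis points (bps).
const TOLERANCE_BPS = 5; // illustrative threshold, not a real config value

function driftBps(seededApyPct: number, verifiedApyPct: number): number {
  return Math.round((seededApyPct - verifiedApyPct) * 100);
}

function needsCorrection(seededApyPct: number, verifiedApyPct: number): boolean {
  return Math.abs(driftBps(seededApyPct, verifiedApyPct)) > TOLERANCE_BPS;
}
```

A seeded 5.00% APY against a verified 3.75% disclosure is +125 bps of drift, squarely in the 100–150 bps band the audit found.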

5. Brand transitions happen ~once every 2–3 months

In 2.5 years we've tracked: Marcus PL discontinued, Petal renamed Tilt, Mint shut down, Truebill became Rocket Money, Trim shut down, Digit became Oportun Set & Save, Personal Capital became Empower, Capital One acquired Discover, Apple Card moved from Goldman to JPMorgan. Before recommending any brand, we run a "still-active + product-still-exists" check.
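The "still-active" check is conceptually a lookup against a maintained watch-list. A minimal sketch, seeded with a few of the transitions listed above (the real list lives in our CMS and is far longer):

```typescript
// Keys are dead or renamed brands; values are the transition note we surface.
const brandTransitions = new Map<string, string>([
  ["Petal", "renamed Tilt"],
  ["Truebill", "became Rocket Money"],
  ["Personal Capital", "became Empower"],
  ["Mint", "shut down"],
]);

function stillActiveCheck(brand: string): { ok: boolean; note?: string } {
  const note = brandTransitions.get(brand);
  return note ? { ok: false, note } : { ok: true };
}
```

A flagged brand blocks the recommendation until an editor writes the transition note described in lesson 6.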

6. Categorical "dead product" risk and detail-level "drift" risk are different problems

Dead brands need editorial transition notes (a one-time fix). Drift needs disclosure plus recurring scans (an ongoing process). We use both: editorial transition notes for confirmed dead products, plus universal rates-change disclaimers on every page that contains specific numbers.

7. Per-entry source verification BEFORE generation is the only real prevention

Every product entry now stores a source URL alongside the data. If we can't source it, we don't publish it. Post-hoc auditing doesn't scale; gating publication on a source URL does.

Monthly accuracy scan

On the first of every month, an automated scan runs across every CMS article and static page. It checks for:

  • Dead-product mentions: 14 patterns covering brands and products that have shut down or transitioned
  • Stale numerical claims: 8 patterns flagging APYs, APRs, and fee figures that haven't been re-verified in 60+ days
  • Future-date references: a guardrail against accidentally publishing dates that haven't happened yet
  • Brand-transition flags: an automated check against a watch-list of companies under acquisition, restructuring, or product changes

Unambiguous fixes (e.g., a confirmed product shutdown) are auto-patched. Ambiguous findings are flagged for editor review. Every fix is logged with a timestamp and rationale.
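The scan loop can be sketched as a per-page pass over a small set of patterns. Everything here is illustrative: the regexes, the `PageRecord` fields, and the future-year pattern are stand-ins for the real 14 + 8 pattern lists, not our production rules.

```typescript
interface PageRecord {
  path: string;
  body: string;
  lastVerified: Date; // when this page's numeric claims were last checked
}

// Stand-in patterns; the real lists cover 14 dead-product and 8 stale-number cases.
const deadProductPatterns: RegExp[] = [/\bMarcus personal loan\b/i, /\bTruebill\b/i];
const numericClaimPattern = /\b\d+(\.\d+)?%\s*(APY|APR)\b/i;
const futureDatePattern = /\b20(2[7-9]|[3-9]\d)\b/; // years after 2026 (illustrative)

function scanPage(page: PageRecord, now: Date): string[] {
  const flags: string[] = [];
  for (const p of deadProductPatterns) {
    if (p.test(page.body)) flags.push(`dead-product: ${p.source}`);
  }
  const ageDays = (now.getTime() - page.lastVerified.getTime()) / 86_400_000;
  if (numericClaimPattern.test(page.body) && ageDays > 60) {
    flags.push("stale-numeric: re-verify rates");
  }
  if (futureDatePattern.test(page.body)) flags.push("future-date reference");
  return flags;
}
```

Each returned flag then routes to either the auto-patch path (unambiguous) or the editor-review queue (judgment call), as described above.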

How we evaluate financial products

Every product recommendation goes through the same structured evaluation. We compare products within their category using consistent criteria, drawing on:

Primary-source disclosures

Provider websites, rate-disclosure documents, official terms & conditions.

Direct testing where possible

For apps and services, we test signup flows, payout speeds, and customer support response times.

Regulatory standing

FDIC/NCUA membership, BBB rating, licensing in advertised states, regulatory actions.

Real customer reviews

Trustpilot, App Store, Google Play, and BBB review patterns, weighted by recency and volume.

We identify which specific user profiles each product serves best ("Best for families spending $500+/mo on groceries") rather than declaring one universal winner. The right answer depends on the user, and our content reflects that.

How rankings work, and how compensation does NOT affect them

Products are ranked based on their score across our evaluation criteria. Affiliate compensation does not influence rankings. Three concrete commitments:

  • A product that pays us a higher commission will not rank higher than a product that scores better on our criteria.
  • We frequently feature and recommend products from which we earn no compensation at all.
  • When two products score similarly, we publish the tradeoffs โ€” not a winner-takes-all summary that benefits whichever pays more.
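The first commitment has a simple mechanical reading: the commission field may exist in the data for disclosure purposes, but it never enters the ranking comparator. A sketch with hypothetical field names:

```typescript
interface ScoredProduct {
  name: string;
  score: number;         // composite of the evaluation criteria
  commissionUsd: number; // tracked for disclosure, ignored for ranking
}

// Sort strictly by score, descending; commissionUsd is never read here.
function rank(products: ScoredProduct[]): ScoredProduct[] {
  return [...products].sort((a, b) => b.score - a.score);
}
```

So a zero-commission product that scores 92 always ranks above a high-commission product that scores 80.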

Corrections and reader feedback

If you spot an error or believe a recommendation needs updating, email hello@walletgrower.com. We treat reader-flagged errors as priority items and aim to investigate within 24 hours. Verified corrections are made publicly with a "What changed" note on the affected page.
