Every SEO has been there. You present a clear fix to a problem, “Put the FDIC disclosure in text on the certificate of deposit (CD) product page”, and instead of agreement, the resistance begins:
- “Why should we change if AI should be able to figure it out?”
- “What about that competitor who doesn’t do this?”
- “Why do they still rank even though they skipped it?”
On the surface, these sound like fair questions. In reality, they’re stall tactics. They’re symptoms of something more profound: organizational drag, the cultural, compliance, and political friction that slows or blocks obvious SEO fixes.
The SEO is usually advocating for change. Everyone else is searching for reasons to avoid it.
The Case of the Missing Banks
In a recent webinar and in my Signal & Friction article, I used a real example from “best CD rates” results. Two banks that I knew had competitive rates weren’t listed:
- Bank A required a ZIP code to view rate pages. Google’s bots and AI agents, which pull financial data, can’t enter a ZIP code. No data = no rate validation = no listing.
- Bank B had its FDIC insurance statement only as an image in the footer, not in the main text of its CD rates page. Result? The AI didn’t see it.
Immediately, the chat questions from the webinar guests poured in:
- “Why didn’t the bot read the image?”
- “Can it read the alt text?”
- “If we put the FDIC statement in a FAQ, won’t that cover us?”
Totally legitimate questions, and I am glad they asked. But my point stood: the disclosure was not machine-readable, so the offer pages needed to be revised. That is why I could not resist a snarky response:
“Why should they? Why do we expect bots to work so hard to ingest and learn things about us? Why can’t we just put the criteria where it belongs—on the actual product page?”
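A quick way to see the problem for yourself: the short sketch below (illustrative only, assuming the requests and BeautifulSoup libraries; the URL and phrase are placeholders, not any bank’s real page) checks whether a disclosure phrase appears in a page’s visible text or only in image alt attributes.

```python
# Rough check: does a required disclosure exist as real, crawlable text on the
# page, or only inside an image? URL and phrase below are placeholders.
import requests
from bs4 import BeautifulSoup

URL = "https://example-bank.com/cd-rates"      # hypothetical product page
PHRASE = "FDIC"                                # disclosure keyword to look for

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# What a text-only parser "sees" on the page.
visible_text = soup.get_text(" ", strip=True)
in_text = PHRASE.lower() in visible_text.lower()

# Alt text on images is better than nothing, but it is not on-page body text.
in_alt = any(PHRASE.lower() in (img.get("alt") or "").lower()
             for img in soup.find_all("img"))

print(f"Disclosure in visible text: {in_text}")
print(f"Disclosure only in image alt text: {in_alt and not in_text}")
```

If the phrase only shows up in alt text (or not at all), you are counting on the bot to do extra work it has no incentive to do.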
What ChatGPT Revealed
When I asked ChatGPT why it chose the CDs it did, it revealed six clear screening criteria:
- Rate Competitiveness – Preference for APYs in the 4.40%–4.60% band, verified across multiple aggregators.
- Liquidity & Term Variety – Included short-term, one-year, and no-penalty CDs to match saver profiles.
- Minimum Deposit Requirements – Favored low-barrier entry ($500–$1,000).
- Institution Type & Access – Prioritized broadly available banks/credit unions with clear membership rules.
- FDIC/NCUA Insurance – Excluded anything not explicitly insured.
- Freshness & Consensus – Focused on late-August 2025 data, validated across trackers.
AI answers don’t work off gut feel — they work off eligibility.
Eligibility in Action
Behind every “best” or “top” answer sits a set of eligibility gates: visibility, completeness, thresholds, competitiveness, and consensus. The exact criteria shift by category (finance, travel, healthcare, e-commerce, SaaS), but the pattern is consistent: the moment a query includes a superlative adjective like best, top, fastest, cheapest, safest, you can expect multiple weighted filters to be applied. Miss one, and you’re out.
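Here is a toy sketch of that logic. It is my illustration, not ChatGPT’s actual code; the bank names, fields, and thresholds are assumptions loosely modeled on the CD criteria above. The point is the shape of the process: hard pass/fail gates, applied one after another.

```python
# Illustrative only: eligibility gates modeled as hard pass/fail filters.
# Candidate data and thresholds are invented for the example.
CANDIDATES = [
    {"bank": "Bank A", "apy": 4.55, "min_deposit": 500,  "insured": True,  "rate_visible": False},
    {"bank": "Bank B", "apy": 4.50, "min_deposit": 1000, "insured": None,  "rate_visible": True},
    {"bank": "Bank C", "apy": 4.45, "min_deposit": 500,  "insured": True,  "rate_visible": True},
]

GATES = {
    "rate competitiveness": lambda c: c["apy"] >= 4.40,
    "low barrier to entry": lambda c: c["min_deposit"] <= 1000,
    "explicitly insured":   lambda c: c["insured"] is True,   # "unknown" fails, just like a missing disclosure
    "rates are crawlable":  lambda c: c["rate_visible"],      # ZIP-gated rate pages fail here
}

for c in CANDIDATES:
    failed = [name for name, gate in GATES.items() if not gate(c)]
    verdict = "included" if not failed else f"excluded ({', '.join(failed)})"
    print(f"{c['bank']}: {verdict}")
```

Notice that “unknown” fails the insurance gate exactly like “no”: if the FDIC statement is not machine-readable, the system cannot treat it as true.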
I’ve broken down these Five Eligibility Gates in detail over on my Substack Signal & Friction, including real-world criteria pulled straight from ChatGPT’s CD rate answers.
The Myth of Superpowered Bots
Here’s the problem: many non-SEOs assume that search engines, and now the supposedly even smarter AI, are magical. That they’ll:
- Connect dots across scattered FAQs.
- Apply blanket rules (“FDIC applies everywhere”).
- Crawl every hidden nook and cranny of your data and edge cases, and elevate what they find.
Unfortunately, they aren’t and they won’t. The simple reality is that they don’t need your content that badly. Crawling, parsing, and massaging all that data to synthesize a result is expensive. They will take the path of least resistance, which is why they prefer content aggregators and sites that simplify the job for them.
AI-generated answers are unforgiving. They rely on structured logic rather than subjective judgment. Even if your content is indexed, you won’t appear unless you clear all the required gates. And the gates are getting stricter.
Organizational Drag
Here’s the uncomfortable part: most resistance to these fixes isn’t technical—it’s organizational.
In highly regulated industries, such as financial services, even minor adjustments trigger a rigorous compliance review process. Legal sign-offs. Committee debates. More reviews. By the time everyone’s finished, the opportunity is gone.
That’s where organizational drag sets in.
Instead of implementing changes, managers and stakeholders spend hours searching for reasons not to move:
- “What about Bank X? They don’t list it and they’re fine.”
- “Why can’t AI deduce it from our FAQ?”
- “Isn’t our brand big enough that we don’t need to spell it out?”
These aren’t really search questions. They’re justifications for inaction. And they rest on a convenient belief that AI’s “superpowers”—deduction, aggregation, brand prominence—will magically make the effort unnecessary.
But they won’t. The eligibility gates don’t bend. If you don’t meet them, you don’t get included. Full stop.
Every cycle spent justifying why not is another cycle competitors spend clearing gates and gaining share. Organizational drag doesn’t just slow you down; it sidelines you.
Callout: Healthcare Example
I advised a healthcare company to strengthen doctor bios, add schema indicating medical content, and surface signals of medical review to boost their E-E-A-T. The response? “But Mayo Clinic and the AMA don’t do that.”
Exactly. Their authority isn’t questioned — that’s why they keep showing up in AI answers. But for a challenger brand, the bar is higher. If you want to appear alongside established authorities, you must present yourself as equally credible. The gates aren’t applied equally — incumbents get trust by default, everyone else has to over-prove it.
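For readers wondering what “signals of medical review” can look like in markup, here is a minimal, hypothetical sketch using schema.org’s MedicalWebPage type with reviewedBy and lastReviewed, emitted as JSON-LD from Python. The topic, reviewer name, date, and specialty are invented; your clinical and compliance teams supply the real values.

```python
# Illustrative sketch of machine-readable "medical review" signals as
# schema.org JSON-LD. All names, dates, and values are invented.
import json

markup = {
    "@context": "https://schema.org",
    "@type": "MedicalWebPage",
    "about": "Knee replacement recovery",    # hypothetical page topic
    "lastReviewed": "2025-08-01",
    "reviewedBy": {
        "@type": "Physician",
        "name": "Dr. Jane Example",          # hypothetical reviewing physician
        "medicalSpecialty": "Orthopedic",
    },
}

# This JSON would sit inside a <script type="application/ld+json"> tag on the page.
print(json.dumps(markup, indent=2))
```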
Callout: B2B Example
In a B2B case, I showed a company exactly which changes were needed to get included in answers from the AI engines. The fix was simple: organize the product specifications on the product page into like-for-like clusters (all temperature ranges together, other technical specs grouped logically).
Their pushback? “But we already have that in our technical PDFs — organized from our ‘technical experience.’” In other words, they wanted AI to sift through multiple PDFs and piece the information together, while their competitors presented the information on the product page in clean, orderly clusters. And guess who got included in the results?
The simplest way I explained it: think about your last trip to Home Depot. You know the replacement part is there. The store has them “organized,” but you have to scan multiple end caps and walk aisles to find it. That’s what you’re asking AI to do with your PDFs. Meanwhile, your competitors are the ones who built the end cap display — easy, visible, and obvious.
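Here is the like-for-like idea reduced to a toy example (the spec names and values are invented): the same information once scattered across documents, and once grouped into clusters a parser can read in a single pass.

```python
# Illustrative only: identical specs, scattered vs. clustered. Values are invented.
scattered = [  # how specs read when spread across PDFs and pages
    "Operating temp: -20 to 60 C", "Weight: 1.2 kg", "Storage temp: -40 to 85 C",
    "IP rating: IP67", "Humidity: 5-95% RH",
]

clustered = {  # like-for-like groups presented on the product page itself
    "Temperature": {"Operating": "-20 to 60 C", "Storage": "-40 to 85 C"},
    "Environmental": {"IP rating": "IP67", "Humidity": "5-95% RH"},
    "Physical": {"Weight": "1.2 kg"},
}

for group, specs in clustered.items():
    print(group)
    for name, value in specs.items():
        print(f"  {name}: {value}")
```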
Final Thought
SEOs don’t cause the Whatabout and Whydothey Conundrum; it’s what SEOs are forced to answer when others resist the work.
The path forward is clear:
- Stop debating competitors’ shortcuts.
- Stop hoping AI will connect the dots for you.
- Clear the eligibility gates.
- Cut through organizational drag.
- Place the essential details directly on the product page and move on.
Because in the AI era, “best” isn’t subjective—it’s computed. And the only way in is to meet the requirements. No excuses.