
How a Leading Fintech Platform Cut Through One of the Most Crowded Categories to Lift AI Visibility 33%

Delia Rowland

March 6, 2026

5 minutes read

Case study

The Challenge

Treasury and cash management is one of the most saturated categories in B2B finance. 

The global treasury management software market is valued at roughly $6.9 billion, projected to nearly double over the next decade, with legacy players and fintech challengers all competing for the same buyers.

Our fintech client was already showing up in AI answers for its core products. But when buyers asked about treasury-specific solutions (one of the company's highest-value categories), it was barely being mentioned.

The brand wasn't unknown. It just wasn't showing up in the articles and roundups that AI tools like ChatGPT, Google, and Perplexity pull from when answering treasury-specific questions.

The Hypothesis

LLMs don't invent recommendations. They pull from sources they've indexed: third-party listicles, roundups, and comparison articles that together function like a dynamic knowledge graph.

If the right mentions could be placed in the sources LLMs actually cite for treasury prompts, visibility for that cluster should follow.

The Approach

The company partnered with Noble over a one-month pilot. Noble placed the client in 9 targeted articles, each selected based on relevance to the treasury prompt cluster and a track record of appearing in LLM citations.

Each article was tagged to a primary LLM target: ChatGPT, Google AI Overviews, or Perplexity.

Visibility was tracked daily using Profound, which measures share-of-voice across a defined prompt cluster (i.e., the percentage of AI answers that surface a given brand on any given day).
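Profound's exact scoring is proprietary, but a daily share-of-voice metric of this general shape can be sketched as follows. Everything here is illustrative: "Acme Treasury" stands in for the client, and the answer strings are made up.

```python
from collections import defaultdict

def daily_share_of_voice(answers, brand):
    """Fraction of AI answers per day that mention `brand`.

    `answers` is a list of (date, answer_text) pairs gathered by
    running each prompt in the cluster against the AI tools daily.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for day, text in answers:
        totals[day] += 1
        hits[day] += brand.lower() in text.lower()
    return {day: hits[day] / totals[day] for day in totals}

# Made-up answers; "Acme Treasury" is a hypothetical brand name.
answers = [
    ("2025-10-23", "Top picks: Kyriba, Acme Treasury, GTreasury"),
    ("2025-10-23", "Consider Kyriba or Trovata for cash visibility"),
]
print(daily_share_of_voice(answers, "Acme Treasury"))  # {'2025-10-23': 0.5}
```

Real trackers do fuzzier brand matching and weight prompts differently, but the core idea, mentions divided by total answers per day, is the same.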

The Results

Overall visibility

Across all tracked treasury and cash management prompts:

  • Before (Oct 23–31): visibility score ~0.054
  • After (Jan 10–31): visibility score ~0.072
  • Change: +33% relative increase
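The +33% figure is the standard relative change between the two window scores:

```python
before, after = 0.054, 0.072   # visibility scores from the two windows
relative_lift = (after - before) / before
print(f"{relative_lift:+.0%}")  # +33%
```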

That lift accumulated steadily as mentions went live: not as a single jump, but as a compounding effect across the pilot.

Where it actually came from

Across the 9 mentions placed, visibility in the week after each mention went live averaged 19% higher than in the week before, with each mention contributing incrementally to the total lift.

Two articles produced the largest individual jumps:

Mention                                        LLM Target            Local Uplift
Top Treasury Management Systems                Google AI Overviews   +47%
The Best Banks for Small Businesses in 2025    Google AI Overviews   +43%

Individual placements ranged in impact from single-digit to +47% local uplift. The highest-performing articles were high-fit, high-authority pages that already had multiple LLM citations, and they produced the sharpest visibility jumps in the exact week after their expected indexing dates.
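For concreteness, a "local uplift" of this kind can be read as the change in mean visibility between the week before and the week after a placement's assumed indexing date. A minimal sketch with made-up numbers (the real series is Profound's):

```python
def local_uplift(series, idx, window=7):
    """Percent change in mean visibility between the `window` days
    before a mention's indexing day (`idx`) and the days after it."""
    before = series[idx - window:idx]
    after = series[idx + 1:idx + 1 + window]
    return sum(after) / sum(before) - 1

# Stand-in daily scores: flat at 0.050, stepping to 0.060 after day 9.
vis = [0.050] * 10 + [0.060] * 10
print(f"{local_uplift(vis, 9):+.0%}")  # +20%
```

Because the two windows are the same length, the ratio of sums equals the ratio of means, so the function returns the week-over-week percent change directly.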

That said, in a category this crowded, the gap between a placement that moves the needle and one that doesn't can be hard to predict. What broader coverage gives you is more shots on goal: more chances to land on the pages that produce outsized impact.

LLM targeting changed the outcome

Grouping mentions by primary LLM target:

LLM                   Mentions   Average Local Uplift
Google AI Overviews   3          +31%
Perplexity            1          +29%
ChatGPT               3          +6%

ChatGPT-targeted placements underperformed for this prompt cluster. The likely reason is that treasury queries are search-adjacent, and Google AI Overviews and Perplexity draw more heavily from indexed web content when generating answers.

The implication isn't that ChatGPT doesn't matter, but that LLM targeting needs to be calibrated to the query type, and that calibration is part of what drives results.

Correlation with active mentions

As mentions accumulated and were indexed (assuming a 3-day lag from publication), visibility moved up in tandem. Correlation between active indexed mentions and daily visibility: 0.36 — moderate, positive, and consistent.

The Timeline

Oct 23–31: No mentions indexed. Baseline visibility ~0.054.

Nov–Dec: First articles go live. Visibility begins climbing. The two anchor articles produce +43–47% local lifts in their indexing windows.

Jan 10–31: The majority of mentions have matured. Visibility holds at ~0.072; the effect sustains rather than fading.

What This Means

  1. Crowded categories are winnable. Even in a market dominated by established players, a focused campaign on high-intent queries moved visibility 33% in three months. A saturated category doesn't put the results you're looking for out of reach.
  2. LLM targeting matters more than you think. Google AI Overviews and Perplexity outperformed ChatGPT 5-to-1 for this category. Your results will depend on where your buyers search and what kinds of questions they ask, but the point is that this is a variable worth paying attention to.
  3. Coverage compounds. Each placement builds on the last. The 33% lift wasn't one big win; it was 9 placements stacking on top of each other over time. So, casting a wider net across relevant sources gives you more chances for high-impact placements and a more durable overall lift.

Methodology Notes

  • Measurement period: October 23, 2025 – February 4, 2026 (100 daily observations)
  • Indexing assumption: 3-day lag from publication to LLM incorporation; 4- and 5-day lags tested as sensitivity checks with consistent directional results
  • Visibility metric: Daily share-of-voice score via Profound (treasury/cash management prompt cluster)
  • Limitations: Small sample (9 mentions, 7 with full pre/post windows), no control group, overlapping pre/post windows across mentions, 2 most recent mentions excluded from per-mention analysis

For the full technical methodology and raw data, contact Noble.
