Research · 3 min read

We Tested 15 European Clinics Across 375 AI Queries — Here's Who AI Recommends

Original research: we audited 15 aesthetic and dental clinics across 13 European cities on ChatGPT, Gemini, Claude, and Perplexity. The average AEO Score was just 33 out of 100.

By AEO Media

We wanted to answer a simple question: when someone asks AI "What's the best aesthetic clinic in [European city]?" — who does AI recommend?

To find out, we ran 375 real queries across ChatGPT, Gemini, Claude, and Perplexity for 15 clinics in 13 European cities. The results reveal a massive AI visibility gap in the healthcare and aesthetics industry.

The Headline Numbers

Metric                                  Result
Clinics audited                         15
Cities covered                          13
Total AI queries tested                 375
Average AEO Score                       33/100
Highest score                           58/100
Lowest score                            12/100
Clinics invisible to AI (score < 20)    4 out of 15

The average European clinic is invisible for roughly 7 out of every 10 AI queries about its services in its city.

Key Findings

1. Google Rankings Don't Predict AI Visibility

Several clinics with strong Google presence and hundreds of 5-star reviews scored below 25 on AI visibility. One clinic ranking #1 on Google for multiple high-intent keywords scored just 18/100 on AI — meaning AI almost never recommends them.

The disconnect is clear: what makes Google rank you and what makes AI recommend you are fundamentally different signals.

2. Structured Data Is Rare

Only 3 out of 15 clinics had proper Schema.org markup beyond basic Organization data. None had MedicalBusiness or FAQPage schema. This is the lowest-hanging fruit in AEO — it tells AI engines exactly what you do, where you are, and what services you offer.
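As a sketch of what that markup looks like, a clinic could embed JSON-LD combining the MedicalBusiness and FAQPage types. The clinic name, address, and question below are placeholders for illustration, not data from any audited clinic:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "MedicalBusiness",
      "name": "Example Aesthetic Clinic",
      "url": "https://example-clinic.example",
      "telephone": "+351-000-000-000",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "Lisbon",
        "addressCountry": "PT"
      }
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Do you offer dermal filler treatments?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Yes. Our Lisbon clinic offers dermal filler treatments performed by licensed practitioners."
          }
        }
      ]
    }
  ]
}
```

Placed in a `<script type="application/ld+json">` tag, this gives AI engines a machine-readable statement of who you are, where you are, and which patient questions you answer.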

3. Directory Presence Is Fragmented

Most clinics had Google Business profiles but were missing from 60-80% of the directories and platforms that AI engines cross-reference for authority signals. Inconsistent naming, outdated addresses, and missing phone numbers further diluted their entity clarity.

4. Content Isn't Optimized for AI Consumption

Clinic websites are built for humans and Google — not for AI. Most of the sites we reviewed rely on long-form pages without clear headings, FAQ sections, or structured answers to common patient questions. AI engines struggle to extract clean, quotable information from pages like these.

5. The Gap Between Best and Worst Is Enormous

The spread from 12/100 to 58/100 shows that even small improvements in AEO can dramatically change a clinic's AI visibility. The top-scoring clinic didn't have an explicit AEO strategy — it simply happened to have better structured data and more authoritative citations from press coverage.

What This Means for Clinic Owners

If you're running a clinic in Europe and relying solely on Google Ads and SEO, you're missing a rapidly growing channel. AI-powered recommendations are where an increasing share of high-intent patients are discovering their providers.

The clinics that invest in AEO now will dominate AI recommendations in their cities within months — because almost nobody else is doing it yet.

The Early Mover Window

This benchmark reveals an industry-wide blind spot. In markets where everyone scores 20-40, reaching 60-70 makes you the AI-recommended choice by default. That window won't stay open forever. As awareness of AEO grows, the cost of catching up will increase dramatically.

Methodology

  • Queries: 25 unique queries per clinic, covering services, location, comparisons, and specialty questions
  • Engines: ChatGPT (GPT-4), Gemini 2.5, Claude 3.5, Perplexity
  • Scoring: Based on mention rate, recommendation position (1st, 2nd, 3rd), sentiment, and citation quality
  • Period: February–March 2026
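The scoring dimensions above could be combined along these lines. The weights and position credits in this sketch are our illustrative assumptions, not the exact model used for the audit:

```python
# Illustrative AEO scoring sketch. Weights and position credits are
# hypothetical assumptions, not the audit's published formula.

def aeo_score(mentions, queries, positions, sentiment, citation_quality):
    """Combine mention rate, recommendation position, sentiment,
    and citation quality into a 0-100 score.

    mentions: number of queries where the clinic was mentioned
    queries: total queries tested (e.g. 25 per clinic)
    positions: recommendation rank (1, 2, 3, ...) for each mention
    sentiment, citation_quality: normalized 0.0-1.0 signals
    """
    mention_rate = mentions / queries if queries else 0.0
    # Assumed position credit: 1st place counts most, 3rd least.
    credit = {1: 1.0, 2: 0.6, 3: 0.3}
    position_score = (
        sum(credit.get(p, 0.0) for p in positions) / mentions
        if mentions else 0.0
    )
    raw = (0.5 * mention_rate
           + 0.2 * position_score
           + 0.15 * sentiment
           + 0.15 * citation_quality)
    return round(100 * raw)
```

With weights like these, a clinic mentioned in only a third of its 25 queries cannot score much above the 30s even with perfect sentiment, which is consistent with the low averages reported above.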

Want to see your clinic's AEO Score? Request a free AI visibility audit — we'll test your brand across all major AI engines and show you exactly where you stand versus competitors.

benchmark · clinics · AI visibility · research · AEO Score · Europe
