Venice Biennale, Domus Published, Young Talent Award — And AI Recommends Their Competitors
Case study: An award-winning architecture studio in Turin with Venice Biennale participation and features in Domus magazine scored 45/100. Despite elite credentials, AI consistently recommended less decorated competitors.
In architecture, your work is your proof. But AI engines can't walk through your buildings or feel the space. They can only read what's been written about them — and structured for machines to find.
The Studio
A young architecture studio in Turin, Italy. Founded by two partners who'd already accumulated credentials that most architects spend entire careers pursuing:
- Selected for the Venice Architecture Biennale
- Published in Domus magazine — arguably the world's most prestigious architecture publication
- Won a national Young Talent award
- Featured in multiple design exhibitions
Their portfolio spanned residential, cultural, and commercial projects. The work was bold, recognized, and genuinely innovative.
What We Tested
We ran 10 queries across ChatGPT, Gemini, and Perplexity in Italian and English:
- "Best architecture firm in Turin"
- "Studio di architettura Torino"
- "Residential architect Turin Italy"
- "Award-winning architect Piedmont"
- And 6 more covering different project types and languages
AEO Score: 45 out of 100.
For a Biennale-exhibited, Domus-published studio, 45 was a gut punch.
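As a rough illustration of how a visibility score like this can be tallied, here is a minimal Python sketch. The studio names, canned engine responses, and the simple mention-counting metric are all illustrative assumptions, not the actual audit methodology (a real run would call each engine's API and weight queries differently).

```python
# Illustrative sketch: score = percentage of engine responses that
# mention the studio at all. All names below are placeholders.

QUERIES = [
    "Best architecture firm in Turin",
    "Studio di architettura Torino",
    "Residential architect Turin Italy",
    "Award-winning architect Piedmont",
]

def visibility_score(responses, studio_name):
    """Fraction of responses mentioning the studio, as a 0-100 score."""
    hits = sum(1 for text in responses if studio_name.lower() in text.lower())
    return round(100 * hits / len(responses))

# Canned responses standing in for live ChatGPT/Gemini/Perplexity output:
canned = [
    "Top firms in Turin include Studio Alfa and Studio Beta.",
    "Per progetti residenziali a Torino: Studio Esempio, Studio Alfa.",
    "Consider Studio Beta for award-winning work in Piedmont.",
    "Studio Esempio is noted for residential design in Turin.",
]

print(visibility_score(canned, "Studio Esempio"))  # 50
```

Even this toy metric makes the core point visible: a firm can dominate one query category and still score poorly overall if it never appears in the others.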
The Awards Paradox
Here's what made this audit particularly revealing. When we analyzed where their score came from, the pattern was clear:
What generated score points:
- Domus and Biennale mentions existed in AI training data — these boosted their appearance in broad "Italian architecture" queries
- The studio appeared in 2-3 queries about award-winning Italian architects
What killed the rest of the score:
- Zero client reviews on any platform — not Google, not Houzz, not Facebook
- No Architizer profile — one of the primary directories AI references for architecture firms
- No ArchDaily or Dezeen presence — two more key citation sources
- Portfolio-only website — beautiful images, minimal text content, no project descriptions that AI could parse
- No structured data — awards, Biennale participation, Domus features — none of it was in schema markup
- No consumer-facing content — the website spoke to other architects, not to potential clients searching for a residential architect
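The structured-data gap is the most mechanical of these to close. Below is a minimal sketch of the kind of schema.org JSON-LD an architecture studio could embed in its site's `<head>`; the firm name, award strings, and directory URL are placeholders, not the studio's real data.

```python
import json

# Illustrative schema.org markup for an architecture firm. All values
# here are placeholders; swap in the firm's real name, address, awards,
# and profile URLs.
markup = {
    "@context": "https://schema.org",
    "@type": "ProfessionalService",  # a schema.org LocalBusiness subtype
    "name": "Studio Esempio",        # placeholder firm name
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Turin",
        "addressCountry": "IT",
    },
    "award": [
        "National Young Talent award",
        "Venice Architecture Biennale selection",
    ],
    "sameAs": [
        "https://example.com/architizer-profile",  # placeholder URL
    ],
}

# Embed the output in a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```

With markup like this in place, the awards and exhibitions stop being facts an AI engine might have absorbed in training and become facts it can read directly off the page.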
Who AI Recommended Instead
The firms AI consistently recommended ahead of this studio had:
- 20-50+ Google reviews from residential clients
- Houzz profiles with project photos and client testimonials
- Clear service descriptions ("we design single-family homes in Piedmont")
- Blog posts about renovation processes, building permits, and design decisions
No Biennale. No Domus. No awards. Just better digital documentation of what they do and who they serve.
The Two-Audience Problem
Architecture studios often face a dual-audience challenge:
- Peers and publications — impressed by Biennales, theoretical rigor, and design awards
- Clients — searching for "architect to design my house in Turin" and looking for reviews, process clarity, and relatable project examples
This studio had optimized entirely for audience #1. Their website was a portfolio of architectural photography. Their credentials spoke to design professionals. But the growing stream of clients asking AI for an architect was audience #2, and AI had nothing useful to show them.
What Would Fix This
- Architizer and ArchDaily profiles — immediate citation sources for AI
- Houzz profile — critical for residential project queries, with client review collection
- Client-facing website content — project descriptions written for homeowners (not architects), process explanations, and budget guidance
- Structured award data — schema markup for every award, publication, and exhibition
- Google Business Profile optimization — with photos, service descriptions, and an active review request campaign
- Blog content — 2-3 articles per month on practical architecture topics AI engines can cite
Moving from 45 to 65+ would take 6-8 weeks; reaching 75+ would require sustained content and review generation over 3-4 months.
The Takeaway
Awards prove quality. Reviews prove accessibility. AI engines need both — but they need the second one more than the first.
A Venice Biennale selection is extraordinary. But when a potential client asks ChatGPT who should design their new home in Turin, the AI doesn't check Biennale catalogs. It checks Google reviews, Houzz profiles, and whether the firm's website explains what they do in language a homeowner understands.
The most awarded architects aren't automatically the most visible. And in 2026, visibility is increasingly controlled by machines that can't appreciate design — only data.
Curious how AI sees your brand?
Get a free AEO visibility audit — we test real queries across ChatGPT, Gemini, Claude, and Perplexity.
Get Your Free Audit