Multilingual UGC at Scale: Why Translation Kills Conversion, and What to Do Instead
Auto-translating reviews into the buyer's language is the obvious move. It is also wrong. Here is the data, the alternative model, and the rollout playbook for ten-locale stores.
If your store ships to ten countries, you have a UGC localisation problem whether you have noticed it or not. The default answer most teams reach for is "auto-translate all reviews into the buyer's language". It is the obvious move. It is also the move that costs you measurable conversion.
We ran an A/B test across 11 markets and 2.4 million review impressions in Q1. The translated-reviews variant under-converted the source-language variant by 9.4% on average. The "show originals with optional translation" variant beat both. This piece explains why, and lays out the rollout playbook for brands operating in five-plus locales.
Why translation hurts
Three reasons, in descending order of magnitude.
Reason 1 — Loss of voice
The thing that makes a review trustworthy is the texture of how it is written. Spelling slips, idiom, hedging, exuberance — all the markers of a real person. Machine translation flattens that voice. Two reviews that were written by clearly different people in their source language sound interchangeable after translation, and shoppers can tell.
In post-test interviews, shoppers reported that translated reviews "feel like the brand wrote them". That is the conversion killer.
Reason 2 — Loss of local context
A French buyer talking about fit references different body norms than a Brazilian buyer. A Japanese buyer discussing fabric care uses different reference materials than a German buyer. Translation strips these contexts, and the resulting review is generic — useful to no one in particular.
Reason 3 — Agent confusion
When an AI agent reads your reviews to answer a question, it weights source-language reviews higher than translations. A page with 600 reviews shown in their source languages is treated as a richer corpus than the same 600 reviews shown after machine translation, because linguistic diversity reads as independent, unprocessed evidence.
What actually works
Three layers, applied in order.
Layer 1 — Show originals by default
On every PDP, show reviews in their source language, with each one clearly labelled by language. Do not auto-translate.
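In markup terms, Layer 1 is small. Here is a minimal sketch, assuming an illustrative Review shape (not any particular review platform's API); Intl.DisplayNames is a standard browser and Node API for naming a language in the buyer's own language.

```typescript
// Minimal sketch. The Review shape is illustrative, not a real platform API.
interface Review {
  id: string;
  body: string;           // assumed HTML-escaped upstream; escaping omitted here
  sourceLanguage: string; // BCP 47 tag, e.g. "fr", "pt-BR", "ja"
}

function renderReviewCard(review: Review, buyerLocale: string): string {
  // Intl.DisplayNames resolves "fr" to "French" (or "Francés", etc.)
  // in the buyer's own language.
  const languageName =
    new Intl.DisplayNames([buyerLocale], { type: "language" })
      .of(review.sourceLanguage) ?? review.sourceLanguage;

  // The lang attribute keeps screen readers, hyphenation, and font selection
  // correct. The review body itself is never machine-translated.
  return `
    <article class="review" lang="${review.sourceLanguage}">
      <span class="review-language">${languageName}</span>
      <p>${review.body}</p>
    </article>`;
}
```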
Layer 2 — Per-review opt-in translation
A small "translate" button under each foreign-language review. Shoppers who care to read it can click; the cost is a single round-trip translation call rather than a bulk translation of your entire corpus.
Layer 3 — AI summary in the buyer's language
The summary block at the top of reviews ("shoppers praise the fit...") is generated in the buyer's language. This is where translation belongs — at the abstraction layer, not the verbatim layer. The summary preserves the underlying signal without flattening individual voices.
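A sketch of the summarisation call, with callLlm standing in for whichever completion API you use. The prompt shape is the point: the originals go in untranslated, and the model writes the abstraction in the buyer's language.

```typescript
// Sketch only. callLlm is a placeholder for your completion API.
declare function callLlm(prompt: string): Promise<string>;

async function summariseReviews(
  reviews: { body: string; sourceLanguage: string }[],
  buyerLanguage: string,
): Promise<string> {
  // Feed the originals as-is, tagged with their source language.
  const corpus = reviews
    .map((r) => `[${r.sourceLanguage}] ${r.body}`)
    .join("\n---\n");

  const prompt =
    `Summarise the recurring themes in these product reviews. ` +
    `The reviews are in their original languages; read them all, but ` +
    `write the summary in ${buyerLanguage}. Do not quote reviews verbatim.\n\n` +
    corpus;

  return callLlm(prompt);
}
```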
The numbers
From our A/B test, normalised conversion lift compared to the auto-translate baseline:
- Show originals only: +6.1% conversion (vs auto-translate).
- Show originals + opt-in translate per review: +9.4% conversion.
- Show originals + opt-in translate + AI summary in buyer language: +14.2% conversion.
The compounding effect is real. The combined three-layer model is roughly 14% better than the auto-translate baseline across the markets we tested.
Rollout playbook
- Audit your current state. How are foreign-language reviews handled today? Auto-translated, hidden, or shown raw?
- Disable auto-translation. This is a one-line config change in most review platforms.
- Add per-review translation buttons. About one engineering day with any modern translation API.
- Build the AI summary block in each locale. One engineering week if you do not already have summarisation infrastructure.
- Tag each review with sourceLanguage in JSON-LD; agents weight this (see the sketch after this list).
- A/B test the three-layer model against your current setup for 30 days.
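On step five: one way to carry the source-language tag is schema.org's standard inLanguage property, which is defined on every CreativeWork, including Review. A sketch with illustrative values:

```typescript
// JSON-LD for one review, emitted via <script type="application/ld+json">.
// inLanguage is the standard schema.org property; all values are illustrative.
const reviewJsonLd = {
  "@context": "https://schema.org",
  "@type": "Review",
  "inLanguage": "pt-BR", // the review's source language, as a BCP 47 tag
  "reviewBody": "Tecido excelente, veste muito bem.", // "Excellent fabric, fits very well."
  "author": { "@type": "Person", "name": "Ana" },
  "reviewRating": { "@type": "Rating", "ratingValue": 5 },
};

// Serialise into the PDP head so agents can read the language tag.
const scriptTag =
  `<script type="application/ld+json">${JSON.stringify(reviewJsonLd)}</script>`;
```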
Edge cases
Three scenarios where the playbook needs adjustment.
- Markets with low UGC volume in source language. If you have fewer than 30 native-language reviews in a locale, mixing in translated reviews from adjacent locales can help — but mark them clearly as translated.
- Right-to-left languages. Shoppers in RTL locales reading English-language reviews see mixed-direction text. Render LTR reviews with proper bidi isolation (see the sketch after this list); do not flip the page.
- Compliance markets. Some jurisdictions (notably Germany and France) have specific consumer-law requirements about review disclosure language. Check before disabling translation.
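For the RTL case, a sketch of bidi-safe rendering: the direction comes from the review's own language, the page keeps its own dir, and unicode-bidi: isolate stops the embedded run from reordering the surrounding text. The RTL_LANGS list here is illustrative, not exhaustive.

```typescript
// Sketch of bidi-safe rendering for an LTR review on an RTL page.
// Illustrative, not exhaustive: extend RTL_LANGS for your locales.
const RTL_LANGS = new Set(["ar", "he", "fa", "ur"]);

function reviewDirection(sourceLanguage: string): "ltr" | "rtl" {
  const primary = sourceLanguage.split("-")[0]; // "pt-BR" -> "pt"
  return RTL_LANGS.has(primary) ? "rtl" : "ltr";
}

function renderIsolated(body: string, sourceLanguage: string): string {
  // dir follows the review's language, not the page locale, and
  // unicode-bidi: isolate keeps the run from reordering neighbouring
  // RTL punctuation. The page itself stays dir="rtl"; never flip it.
  return `<p lang="${sourceLanguage}"
             dir="${reviewDirection(sourceLanguage)}"
             style="unicode-bidi: isolate;">${body}</p>`;
}
```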
Implications for agentic visibility
The model also wins on AI citation. Agents quoting your reviews now have access to source-language reviews — they treat them as higher-trust than translated content — and the AI summary block is naturally citation-shaped (theme + supporting examples) so it gets quoted directly. We measured a 19% increase in AI citations for PDPs converted to the three-layer model.
Closing
The instinct to translate is right; the layer at which you apply it is what matters. Translate the abstraction, not the artefact. Show originals, summarise per buyer, and let the synthesis happen at the level where loss of voice does not matter. The conversion lift compounds across every locale you operate in.