Idukki essay · Idukki Strategy notebook

AI in the UGC loop, part 2 — tagging: a content dump becomes a catalogue

Time-to-tag is the most under-loved KPI in commerce ops. A folder of 400 untagged clips is dead inventory. Here is how AI tagging turns raw UGC into a shoppable catalogue — and why everything downstream depends on it.

Rohin Aggarwal · Co-founder, Idukki.io · May 15, 2026 · 8 min · From the Idukki desk

If part 1 solved your sourcing problem, you now have a new one: a folder of 400 unsorted UGC clips, growing by 50 a week, that nobody on the team has time to watch.

This is the silent killer of every UGC programme. The content is there. The rights are clear. The clips are good. And they sit in a folder for six weeks because nobody has time to watch them, identify the products, link them to SKUs, and write the alt text. Every day a clip sits untagged it loses value — the post that triggered the upload trends down, the product goes on sale then off sale, and the shopper who would have converted on it sees the same six-month-old hero shot instead. The technical name for this is time-to-tag, and it is the most under-loved number in ecommerce operations.

Why tagging is hard when humans do it

A single piece of UGC needs around a dozen metadata fields before it is useful: the products in frame mapped to your SKU IDs, category type, scene type, mood and aesthetic tags, fit context, occasion, colour palette, setting, language, brand-safety flags, and alt text.

A junior coordinator doing that by hand averages 90 to 180 seconds per asset on a good day. Multiply by a 400-clip backlog and you have burned 10 to 20 hours of their week before they touch this week’s intake. So they do not. The backlog grows. The clips age. The programme quietly stops being a revenue channel and becomes a folder.
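The backlog arithmetic above can be sketched in a few lines. The field names in this schema are illustrative, not Idukki's actual data model; the point is the shape of the metadata and the maths of manual throughput.

```python
from dataclasses import dataclass, field

@dataclass
class AssetTags:
    """The ~dozen metadata fields a UGC clip needs before it is shoppable.
    Field names are illustrative, not a real Idukki schema."""
    sku_ids: list[str] = field(default_factory=list)  # products in frame, mapped to catalogue SKUs
    category: str = ""
    scene: str = ""
    mood: list[str] = field(default_factory=list)
    fit_context: str = ""
    occasion: str = ""
    palette: list[str] = field(default_factory=list)
    setting: str = ""
    language: str = ""
    brand_safety_flags: list[str] = field(default_factory=list)
    alt_text: str = ""

def backlog_hours(clips: int, secs_per_asset: float) -> float:
    """Hours of manual coordinator time to clear a backlog at a given tagging speed."""
    return clips * secs_per_asset / 3600

# A 400-clip backlog at 90-180 s per asset: 10-20 hours of the week, gone
print(backlog_hours(400, 90), backlog_hours(400, 180))  # 10.0 20.0
```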

What AI changes in tagging

Visual recognition got genuinely good

The hardest part — "what SKU is this person wearing?" — used to be the bottleneck. Modern visual models trained on retail catalogues now handle occlusion, partial views, multiple products in one frame, and products being worn rather than laid flat. Our own first AI tagging pass shipped at 71% precision; we thought that was good, it was not, and we rebuilt it on Claude vision. That is the unlock: tagging stops being a typing job and becomes a review job.

Scene and mood tagging is now ambient

"This is a coffee-shop morning aesthetic, palette warm neutrals, mood cosy weekend" used to need a tagging rubric and a trained reviewer. Multimodal models do it in a single pass with a consistent vocabulary across thousands of assets. The downstream effect is large — your personalisation engine can finally ask "show cosy-weekend assets to shoppers browsing loungewear on a Sunday morning" and the system can actually answer.
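The "consistent vocabulary" point can be sketched as a constrained tagging pass: prompt the model with a fixed tag list, then validate its output against that list before anything lands in the catalogue. The model call here (`call_vision_model`) is a hypothetical stub, not a real API, and the vocabulary is invented for illustration.

```python
import json

# Hypothetical controlled vocabulary -- the fixed lists the model must pick from.
VOCAB = {
    "scene": ["coffee-shop", "street", "home", "beach", "gym"],
    "mood": ["cosy-weekend", "polished", "playful", "minimal"],
    "palette": ["warm-neutrals", "brights", "monochrome", "pastels"],
}

def build_prompt(vocab: dict[str, list[str]]) -> str:
    """Ask the model for JSON, restricted to the controlled vocabulary."""
    return "Tag this clip. Reply with JSON using ONLY these values:\n" + json.dumps(vocab)

def call_vision_model(prompt: str, clip_path: str) -> str:
    """Stub: a real system would send the clip's frames plus `prompt`
    to a multimodal model and return its JSON reply."""
    return '{"scene": "coffee-shop", "mood": "cosy-weekend", "palette": "warm-neutrals"}'

def validate(raw: str, vocab: dict[str, list[str]]) -> dict[str, str]:
    """Reject any tag outside the vocabulary, so freeform drift never lands."""
    tags = json.loads(raw)
    for fld, value in tags.items():
        if value not in vocab.get(fld, []):
            raise ValueError(f"out-of-vocabulary tag: {fld}={value}")
    return tags

tags = validate(call_vision_model(build_prompt(VOCAB), "clip_001.mp4"), VOCAB)
print(tags["mood"])  # cosy-weekend
```

Validation is the piece teams skip: without it, one model drift ("cozy-weekend" vs "cosy-weekend") quietly forks your vocabulary and breaks personalisation matching later.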

The reviewer becomes an editor, not a typist

In a healthy setup the system pre-fills 10 of the 12 fields. The human confirms the SKU match, fixes the one tag the model got wrong, and approves. Time per asset drops from 90–180 seconds to single digits. The same coordinator clears a 400-clip backlog in two afternoons.
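The editor-not-typist workflow can be sketched as confidence routing: the model pre-fills every field with a confidence score, and only low-confidence fields reach the human. The threshold and field names here are illustrative assumptions, not measured values.

```python
CONFIRM_THRESHOLD = 0.9  # illustrative; tune against your measured precision

def fields_for_review(prefill: dict[str, tuple[str, float]]) -> list[str]:
    """Return the fields a human must check; everything else auto-approves."""
    return [f for f, (_, conf) in prefill.items() if conf < CONFIRM_THRESHOLD]

prefill = {
    "sku_id":   ("SKU-4412", 0.97),
    "scene":    ("coffee-shop", 0.95),
    "mood":     ("cosy-weekend", 0.93),
    "alt_text": ("Model in beige knit cardigan at a cafe table", 0.71),
}
print(fields_for_review(prefill))  # ['alt_text']
```

The reviewer's queue shrinks from twelve fields per asset to the one or two the model was unsure about, which is where the 90-180 seconds collapses to single digits.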

~92% — Idukki AI tagging precision, up from 71% on the first version after a rebuild on Claude vision. Precision is what makes the reviewer’s job a confirm-and-correct pass instead of a re-do.
“Every dashboard, every personalisation rule, every brand-safety filter downstream of ingestion is paying interest on the quality of your tags.”

The one number to track

Track time-to-tag in hours at the 75th percentile. Not the average — the average lies, because weekend gaps drag it around. The P75 tells you "for three out of four assets, they are tagged and live within X hours of ingestion".

  • < 12 h: Excellent. UGC behaves like merchandising.

  • < 48 h: Healthy. Still timely on the storefront.

  • > 7 d: Broken. UGC is decoration, not a channel.

Representative 2026 bands for P75 time-to-tag — consolidated guidance, not Idukki-measured customer averages.
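P75 time-to-tag falls out of two timestamps per asset. A minimal sketch using the nearest-rank percentile, with made-up timestamps for illustration:

```python
import math
from datetime import datetime

def p75_time_to_tag(ingested: list[datetime], published: list[datetime]) -> float:
    """75th-percentile hours from ingestion to publish-ready (nearest-rank)."""
    hours = sorted(
        (pub - ing).total_seconds() / 3600
        for ing, pub in zip(ingested, published)
    )
    rank = math.ceil(0.75 * len(hours))  # nearest-rank P75
    return hours[rank - 1]

ingested = [datetime(2026, 5, 1, 9)] * 4
published = [
    datetime(2026, 5, 1, 15),  # 6 h
    datetime(2026, 5, 1, 21),  # 12 h
    datetime(2026, 5, 2, 9),   # 24 h
    datetime(2026, 5, 4, 9),   # 72 h -- the weekend gap that drags the mean
]
print(p75_time_to_tag(ingested, published))  # 24.0
```

Note how the 72-hour outlier pulls the mean to 28.5 hours while the P75 stays at 24 — exactly why the average lies about this metric.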

Why this post earns the rest of the series

Without good tagging the rest of the pipeline cannot function. Moderation has nothing to filter on. Personalisation falls back to "newest first" because there is nothing to match on. Reporting cannot tell you which mood, colour or creator type converts because there is no dimension to slice by. AI does not just speed tagging up — it makes the rest of the stack possible.

“We've had a great experience using Idukki for integrating live UGC on our WYO website. The platform is extremely user-friendly, and the entire setup and usage process has been smooth and efficient. It has helped us seamlessly showcase real-time content from our social media, adding a layer of authenticity to our product pages.”
Wear Your Opinion — verbatim, Shopify App Store review, April 1, 2026

Three things to do this quarter

  1. Measure your current P75 time-to-tag. Sample 30 assets and calculate the hours from ingestion to publish-ready. That is your baseline.
  2. Audit your tag vocabulary. Freeform tags create a personalisation problem later; move to a controlled vocabulary so AI tagging stays consistent.
  3. Run a side-by-side. Tag 25 fresh clips by hand and 25 with an AI assist plus human review. Compare time, accuracy and reviewer fatigue. The numbers make the business case for you.

Next: part 3, moderation — the brand-safety layer most teams skip, and the three-tier review queue that makes it operational. The product view of this stage lives on the AI tagging page.


Sources + note on numbers

  1. Wyzowl — State of Video Marketing. Video production and processing throughput context.
  2. Olapic / Social Native — visual commerce research. Visual UGC tagging and product-match research.
  3. Baymard Institute — product page UX research. PDP content freshness and shopper expectation data.
  4. Note on numbers: the 92% precision figure is Idukki’s own measured AI-tagging precision. Time-to-tag bands are representative 2026 guidance consolidated from the sources above, not verbatim customer-measured averages.
#ugc #ai-tagging #product-tagging #ai-in-ugc-loop
