Multi-Agent AI Pipeline with Componentized Prompts
AI-generated content at scale is not a prompting problem. It's a systems engineering problem - discrete components, clear separation of concerns, and full visibility into what's happening at each stage.
Multi-agent pipeline.
A concept agent reads Yangra's app content - titles, descriptions, articles - alongside category-specific rules, and generates a structured scene description. A separate output agent applies the style lock, color strategy, and composition rules to produce the illustration.
One model handling both interpretation and generation produced poor results and made failures impossible to isolate and fix. Separating them meant cleaner outputs, independent debugging, and no need to write a manual brief for every image.
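The two-agent split can be sketched roughly as follows. This is an illustrative sketch, not the production system: the function names, the `SceneDescription` fields, and the stub logic are assumptions standing in for real model calls.

```python
from dataclasses import dataclass

@dataclass
class SceneDescription:
    """Structured output of the concept agent (hypothetical fields)."""
    subject: str
    mood: str
    setting: str

def concept_agent(title: str, description: str, category_rules: dict) -> SceneDescription:
    # In production this would be an LLM call constrained to structured output;
    # here a stub shows the shape of the contract between the two agents.
    return SceneDescription(
        subject=title.lower(),
        mood=category_rules.get("mood", "neutral"),
        setting=category_rules.get("setting", "abstract"),
    )

def output_agent(scene: SceneDescription, style_lock: str) -> str:
    # In production this would apply style lock, color strategy, and
    # composition rules to build the image-generation prompt.
    return f"{style_lock}; {scene.subject}, {scene.mood} mood, {scene.setting}"

rules = {"mood": "energetic", "setting": "outdoor trail"}
scene = concept_agent("Morning Run", "A guided running session", rules)
prompt = output_agent(scene, "flat vector illustration, brand palette")
```

Because the interface between the agents is a typed structure rather than free text, a bad output can be traced to either the interpretation step or the generation step, never an opaque blend of both.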
Prompt componentization.
Rather than a monolithic prompt, the generation input is broken into discrete, independently tunable components: content description, style lock, composition rules, color strategy, and reference image sets. Each can be versioned, tested, and owned by a different team member.
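Componentization can be sketched as named, versioned prompt fragments assembled at generation time. The component names and version strings below are illustrative assumptions, not the real library.

```python
# Each component is stored under a name and a version, so a change to the
# color strategy can ship independently of the style lock or composition rules.
components = {
    "style_lock":  {"v1": "flat vector, 2px outline"},
    "composition": {"v1": "centered subject, negative space top-right"},
    "color":       {"v1": "palette A: teal with coral accents"},
}

def assemble_prompt(content_description: str, versions: dict) -> str:
    # Pick one version of each component and join them into the generation input.
    parts = [content_description] + [
        components[name][ver] for name, ver in versions.items()
    ]
    return "; ".join(parts)

prompt = assemble_prompt(
    "runner stretching at sunrise",
    {"style_lock": "v1", "composition": "v1", "color": "v1"},
)
```

Pinning explicit versions is what makes A/B testing a single component possible: swap `"color": "v1"` for a `"v2"` entry and every other part of the prompt stays byte-identical.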
Systematic variation.
Eight color strategies carved from the brand palette create visual differentiation without going off-brand. Five composition types and category-specific content templates ensure a movement scene conveys energy while a recovery scene conveys stillness. This is what keeps 200 illustrations from looking like 200 AI-generated copies of the same image.
Traceability.
Every generated image maps back to its full prompt chain - every component, every variable, every reference image. Click any output, see exactly what produced it, and identify which layer to adjust. Bulk generation enables systematic testing: compare a batch of ten images against another ten to evaluate whether a change produced real improvement rather than a one-off fluke.
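One way to implement that mapping is to store a generation record alongside each image. This is a sketch under assumed field names; the fingerprint is an illustrative convenience for comparing two prompt chains at a glance, not a feature the text describes.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class GenerationRecord:
    """Full prompt chain for one image: components, variables, references."""
    image_id: str
    component_versions: dict
    variables: dict
    reference_images: list

    def fingerprint(self) -> str:
        # Stable hash of the whole chain: two images with the same
        # fingerprint were produced by an identical prompt configuration.
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()[:12]

record = GenerationRecord(
    image_id="img_0042",
    component_versions={"style_lock": "v1", "color": "v3"},
    variables={"category": "recovery"},
    reference_images=["ref_a.png"],
)
```

With records like this, "click any output, see exactly what produced it" is a lookup, and a batch comparison reduces to diffing two sets of records.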