This is what the user sees when they switch to this agent.
When the LLM is wired up (Anthropic Claude Sonnet 4.6), this is the system prompt that defines this agent's personality and what it knows about.
Built-in interactive flows you can attach. The visualizer flow is the full image-to-image experience.
Inject the global Company Knowledge Base into this agent's context. Visualizer agents typically opt out; they use Style Examples instead.
When the user types free text into this agent, the router rules below check for keyword matches. If a rule matches, the user is dispatched to that rule's subagent; otherwise the agent answers from its own LLM knowledge plus the KB.
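The dispatch step described above can be sketched as a first-match keyword scan. This is an illustrative sketch only; the field names (keywords, targetAgentId) are assumptions, not the app's actual schema:

```typescript
// Hypothetical router rule shape; real field names may differ.
interface RouterRule {
  keywords: string[];     // terms to match, case-insensitive
  targetAgentId: string;  // subagent to dispatch to on match
}

// Returns the subagent id for the first rule whose keyword appears in the
// user's message, or null to fall through to the agent's own LLM + KB.
function route(message: string, rules: RouterRule[]): string | null {
  const text = message.toLowerCase();
  for (const rule of rules) {
    if (rule.keywords.some((k) => text.includes(k.toLowerCase()))) {
      return rule.targetAgentId;
    }
  }
  return null;
}
```

Rule order matters in this sketch: the first matching rule wins, so more specific rules should come before broader ones.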
Company Knowledge Base
A single document that gets injected into the system prompt of every agent that has "Inherit Knowledge Base" turned on. Use it to teach the AI everything about your business: services, pricing, process, voice, licensing, contact details.
Plain text or simple markdown. Anything you put here is appended to the agent's system prompt under a "COMPANY KNOWLEDGE BASE" header. Visualizer agents are excluded from this.
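The injection described above might look roughly like the sketch below, assuming hypothetical config fields (systemPrompt, inheritKb, isVisualizer); only the "COMPANY KNOWLEDGE BASE" header comes from the source:

```typescript
// Assumed agent config shape for illustration.
interface AgentConfig {
  systemPrompt: string;
  inheritKb: boolean;    // "Inherit Knowledge Base" toggle
  isVisualizer: boolean; // visualizer agents never receive the KB
}

// Append the KB under its header when the agent opts in.
function buildSystemPrompt(agent: AgentConfig, kbText: string): string {
  if (!agent.inheritKb || agent.isVisualizer || !kbText.trim()) {
    return agent.systemPrompt;
  }
  return `${agent.systemPrompt}\n\nCOMPANY KNOWLEDGE BASE\n${kbText}`;
}
```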
Style Examples (Visualizer)
Reference images the AI uses during image-to-image rendering. When a user picks one (e.g. "Contemporary Concrete"), the visualizer tries to replicate that look on their uploaded photo.
Inspiration Patterns
The small circles users see under the AI Designer compare frame. Clicking a circle adds that style description to the user's prompt; hovering pops up a 2.5× preview. Each item can use either a custom image URL or one of the built-in CSS pattern presets.
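The click behavior could be sketched as a simple prompt append; the InspirationPattern fields below are illustrative assumptions, not the app's real schema:

```typescript
// Assumed shape of one inspiration circle.
interface InspirationPattern {
  label: string;            // shown on hover
  styleDescription: string; // text appended to the user's prompt on click
  imageUrl?: string;        // custom preview image, or...
  cssPreset?: string;       // ...one of the built-in CSS pattern presets
}

// Append the clicked pattern's style description to the current prompt.
function applyPattern(prompt: string, p: InspirationPattern): string {
  return prompt ? `${prompt}, ${p.styleDescription}` : p.styleDescription;
}
```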
AI Engine Settings
Master configuration for image-to-image rendering on the AI Designer page. Change engine, API token, prompt augmentation, and image dimensions here. Stored locally; in production this maps to a Supabase config row.
FLUX Kontext from Black Forest Labs gives the best image-to-image fidelity (it preserves your photo's structure). Pollinations is a free fallback. Replicate and Hugging Face are alternative providers.
Browsers can't call api.bfl.ai directly (CORS). For local dev: run node proxy.js from the outputs folder and leave this as http://localhost:8787/v1. For production: deploy a Supabase Edge Function that forwards to BFL and put its URL here (see AI_DESIGNER_SETUP.md).
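A minimal local proxy in the spirit of proxy.js might look like the sketch below. The port, the permissive CORS headers, and the x-key header pass-through are assumptions; see the real proxy.js and AI_DESIGNER_SETUP.md for the authoritative setup:

```typescript
import http from "node:http";

// Sketch of a local dev CORS proxy: the browser calls
// http://localhost:8787/v1/..., and this forwards to api.bfl.ai
// with permissive CORS headers added to the response.
const UPSTREAM = "https://api.bfl.ai";
const PORT = 8787;

// Pure helper: map an incoming request path to the upstream URL.
function upstreamUrl(path: string): string {
  return UPSTREAM + path;
}

const server = http.createServer(async (req, res) => {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Access-Control-Allow-Headers", "*");
  if (req.method === "OPTIONS") {
    res.writeHead(204).end(); // answer the CORS preflight locally
    return;
  }
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const upstream = await fetch(upstreamUrl(req.url ?? "/"), {
    method: req.method,
    headers: {
      "content-type": req.headers["content-type"] ?? "application/json",
      "x-key": String(req.headers["x-key"] ?? ""), // assumed auth header
    },
    body: chunks.length ? Buffer.concat(chunks) : undefined,
  });
  res.writeHead(upstream.status, { "content-type": "application/json" });
  res.end(await upstream.text());
});

// server.listen(PORT); // uncomment to serve on http://localhost:8787
```

In production the same forwarding logic would live in a Supabase Edge Function, with its deployed URL entered in this field instead of localhost.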
This biases every render toward your brand aesthetic. Tune it if results look off: too cartoony, wrong vibe, etc.
The version SHA from a Replicate model page (e.g. a FLUX dev img2img model). Leave the default to use the prebuilt FLUX img2img endpoint.
Any img2img model on Hugging Face Hub. The page POSTs the user's photo as the input.
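A hypothetical sketch of that POST, using Hugging Face's hosted Inference API convention of sending the image as the raw request body. The function names are placeholders, and the token would come from the API token field above:

```typescript
// Build the hosted Inference API endpoint for a Hub model id.
const HF_BASE = "https://api-inference.huggingface.co/models";

function hfEndpoint(model: string): string {
  return `${HF_BASE}/${model}`;
}

// POST the user's uploaded photo as the model input; the response body
// is the rendered image. Error handling here is a minimal assumption.
async function renderWithHuggingFace(
  photo: Blob,
  model: string,
  token: string,
): Promise<Blob> {
  const res = await fetch(hfEndpoint(model), {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: photo, // raw image bytes are the img2img input
  });
  if (!res.ok) throw new Error(`Hugging Face request failed: ${res.status}`);
  return res.blob();
}
```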