Figma's new /figma-use skill lets agents write objects straight into a design file. Cursor 3's Design Mode lets them annotate the rendered DOM like a markup pencil. Stitch 2.0 redesigns in real time as you talk. The agent's primary surface this fortnight stopped being the chat box.
Figma's May 2026 release notes shipped a quiet but load-bearing change: an agent connected via the Figma MCP server can now create and modify real Figma objects using the file's existing components, variables, and styles. The mechanism is a skill called /figma-use — sometimes referred to as write to canvas in the developer docs — and Figma is publishing the SKILL.md in a public guide so teams can fork it and add their own conventions on top.
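To make the shape concrete, here's a sketch of the kind of tools/call request an agent might send through the Figma MCP server once /figma-use is active. The tool name, argument fields, and node id below are assumptions for illustration, not Figma's documented schema; the real contract lives in the published SKILL.md.

```typescript
// Hypothetical sketch only: the rough shape of a tools/call request an agent
// might send through the Figma MCP server with /figma-use active. Tool name,
// argument fields, and node id are assumptions, not Figma's schema.
const createInstance = {
  method: "tools/call",
  params: {
    name: "figma_create_instance", // hypothetical tool name
    arguments: {
      component: "Button/Primary/Large", // an existing library component
      parentNodeId: "12:345",            // placeholder layer id
      overrides: { label: "Get started" },
      spacing: "--space-4",              // the file's own variable, not a raw px value
    },
  },
};

console.log(JSON.stringify(createInstance, null, 2));
```

The point of the shape, whatever the real field names turn out to be: the agent asks for an instance of an existing component with an existing token, so the file's conventions are preserved rather than approximated.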
The shift is not "AI in Figma." Figma has had AI features for two years. The shift is that the agent now operates on the same surface a designer does — pushing instances of your Button/Primary/Large component, snapping to your spacing tokens, writing into the layer tree — instead of sitting outside the file and emitting code that someone has to translate back to design.
Cursor 3 — which launched on April 2 with a new Agents Window — included a feature called Design Mode that's still being absorbed by frontend teams. Inside the Agents Window, Design Mode opens an annotation layer over the rendered browser. Shift-drag selects a region, Option-click targets a specific element, and the selection (with full DOM context) lands in the agent chat as a structured reference. Sean Kim's writeup calls it the "browser annotation layer built into the IDE." It's the move that closes the loop between "what the page looks like" and "what the agent thinks it's editing."
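Cursor hasn't published the payload schema, but the structured reference plausibly looks something like the sketch below. Every field name here is an assumption; the shape is what matters: the agent gets a selector, the rendered markup, and the computed styles it will be asked to change.

```typescript
// Hypothetical sketch of the structured reference Design Mode might hand the
// agent after an Option-click. Cursor hasn't published this schema; every
// field name below is an assumption.
interface DesignModeSelection {
  selector: string;                       // CSS path to the selected element
  outerHTML: string;                      // the element as rendered
  computedStyles: Record<string, string>; // what the agent reasons over
  boundingBox: { x: number; y: number; width: number; height: number };
}

const heroSelection: DesignModeSelection = {
  selector: "main > section.hero",
  outerHTML: '<section class="hero">…</section>',
  computedStyles: { "padding-top": "16px", "padding-bottom": "16px" },
  boundingBox: { x: 0, y: 64, width: 1280, height: 480 },
};

console.log(heroSelection.selector);
```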
Google's Stitch — which began life as a text-to-UI experiment in Google Labs — shipped a 2.0 upgrade earlier this spring that has continued to ripple. The headline addition is a voice canvas: you talk and Stitch's agent makes real-time changes — "give me three different menu options," "show me this in a warmer palette," "make the spacing on the left more generous." The same release added multi-screen generation (up to five interconnected screens from one prompt) and an explicit DESIGN.md import/export so design rules round-trip between Stitch and any agent-aware code tool.
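Stitch's published DESIGN.md format isn't reproduced here, but the sketch below shows the kind of rules such a file might round-trip, held in a TypeScript constant so the examples in this piece stay in one language. The section names and rule syntax are assumptions.

```typescript
// A sketch of the kind of rules a DESIGN.md might carry between Stitch and a
// code tool. Section names and rule syntax are assumptions, not Stitch's
// published format.
const designMd = `
# DESIGN.md

## Palette
- primary: #0B5FFF
- surfaces: warm neutrals only

## Spacing
- scale: 4px base, tokens --space-1 through --space-5

## Voice
- buttons use verbs ("Get started", never "Submit")
`.trim();

console.log(designMd);
```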
The old prompt pattern was a paragraph of prose: "The hero is too tight, the spacing under the headline should be larger, and the call-to-action should sit further right." The new pattern, available in both Cursor 3's Design Mode and Figma's canvas chat, is to select the element first and then say one short thing: select the hero block, type "more breathing room above the CTA." The agent sees the selection in DOM or layer-tree form and operates on it directly.
This is the same shift Bret Victor argued for in 2012 — direct manipulation reduces the distance between intent and target — but it now applies to working with an agent, not a hand-built editor. The selection becomes the noun, your sentence becomes the verb, and the agent doesn't have to triangulate. If you've been writing long descriptive prompts, your sentences will get shorter as canvas-aware tooling spreads.
Several teams (notably the workflows shown in Figma's May release notes roundup) have started writing design briefs as annotated wireframes directly on the canvas — sticky notes, callouts, "this should feel quieter" — and then prompting the agent to "read the brief and propose three layouts." The agent treats the annotations as authored context rather than chat history that will scroll out of view.
A workflow surfacing in studio threads this week — and demonstrated in Figma's own release-notes livestream — is the canvas-to-code-to-canvas loop. The designer mocks the rough shape in Figma using the studio's existing components; the agent pulls the design tokens into v0 or Cursor via the registry/MCP bridge and generates working code; then it writes the implemented version back to Figma using /figma-use, so the dev-built component reappears as a clean Figma instance for design review. The design file and the deployed code stay in sync as a single source.
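As orchestration, the loop is three calls. The sketch below is hypothetical glue, with stubs standing in for the registry pull, the code tool, and the /figma-use write-back; none of these function names are published APIs from Figma, v0, or Cursor.

```typescript
// A hypothetical sketch of the canvas-to-code-to-canvas loop. All functions
// are stubs named for the step each performs, not published APIs.
type Tokens = Record<string, string>;

async function pullTokensFromRegistry(fileKey: string): Promise<Tokens> {
  return { "--space-4": "16px" }; // stub: the registry/MCP bridge answers this
}

async function generateComponent(frameId: string, tokens: Tokens): Promise<string> {
  return '<Button size="lg" />'; // stub: v0 or Cursor does the real work
}

async function writeBackToCanvas(fileKey: string, frameId: string, code: string): Promise<void> {
  // stub: the /figma-use step, returning the built component as a Figma instance
}

async function roundTrip(fileKey: string, frameId: string): Promise<void> {
  const tokens = await pullTokensFromRegistry(fileKey);
  const code = await generateComponent(frameId, tokens);
  await writeBackToCanvas(fileKey, frameId, code);
}
```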
Two side-effects worth noting. First, your design system stops being a slide deck and starts being a registry — Vercel documented this pattern for v0 a few weeks ago and the registry-plus-MCP shape has stabilized across the ecosystem. Second, the round trip rewards teams that have actually named their tokens. If your spacing scale is --space-3 instead of "looks about right," the agent can preserve your decisions on both sides of the loop. If it's not, the agent will guess, and your design system erodes one round trip at a time.
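What "named tokens" buys the round trip, concretely: a scale the agent can address by name instead of guessing pixel values. A minimal example, matching the custom properties used in the prompt example below:

```typescript
// A named spacing scale the agent can preserve on both sides of the loop.
// Token names mirror the CSS custom properties in the prompt example.
export const spacing = {
  "--space-3": "12px",
  "--space-4": "16px",
  "--space-5": "24px",
} as const;
```

An agent asked for "more breathing room" moves between adjacent named steps; an agent without the map invents a 14px margin, and the drift begins.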
Independent of the canvas story but in the same drift: OpenAI's product marketing team has been talking openly about running Codex on an hourly schedule to scan Slack, Gmail, Notion, Figma, and Google Drive and prep updates while the human sleeps. The framing matters: Codex is not an interactive partner in this workflow, it's a background process that has its own schedule. Design teams should pay attention — the same pattern works for visual housekeeping (closing out resolved review threads in Figma, flagging stale components, generating QA screenshots).
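The pattern itself is mundane to set up. A minimal sketch using node-cron (a real package); the CLI name and arguments are placeholders, not OpenAI's documented Codex setup.

```typescript
// A minimal sketch of the background-process pattern using node-cron.
// "your-agent-cli" and its arguments are placeholders; swap in whatever
// CLI or API your agent exposes.
import cron from "node-cron";
import { execFile } from "node:child_process";

// Top of every hour: run a housekeeping task and log the summary.
cron.schedule("0 * * * *", () => {
  execFile("your-agent-cli", ["run", "figma-housekeeping"], (err, stdout) => {
    if (err) {
      console.error("agent run failed:", err);
      return;
    }
    console.log(stdout); // e.g. a digest to post in the morning
  });
});
```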
A canvas-aware prompt, written for Cursor 3's Design Mode after selecting the hero block of a marketing page. Notice how short it is: the agent already has the selection.
```
// Selection: <section class="hero">...</section>
// Tokens available: --space-3, --space-4, --space-5

## Goal
Tighten the hero. Make the deck feel like it's been allowed to breathe,
and pull the CTA further from the edge of the wrap.

## Constraints
- Use only the spacing tokens above.
- Don't introduce new shadows or borders.
- Headline size stays put.

## Done when
Deck has --space-4 above and below, CTA sits at --space-5 from the right
edge of .wrap. No other visual changes.
```
The structure is deliberately minimal — Goal / Constraints / Done when. The Goal sentence is the only place where "feel" language lives; the Constraints and Done-when sections are concrete enough to let the agent verify its own output before reporting. With a canvas selection providing the noun, that's all the prompt has to carry.
Two weeks ago the daily question was "which prompt do I write to get a usable design system." Today the question is "which surface do I want the agent to work on — the canvas or the code?" That's a real promotion for designers: the agent is no longer downstream of the brief, it's a participant in the same artifact. The tradeoff, of course, is that the design file is now a working environment with edit history and merge conflicts, not a presentation deck. Studios that treat Figma as living source — registry-backed tokens, named components, a brief written on the canvas — will pull ahead this quarter. Studios that treat it as a polished handoff will keep waiting for the dev team to translate.