Keyframer: Empowering Animation Design using Large Language Models
Authors: Tiffany Tseng, Ruijia Cheng, Jeffrey Nichols
Large language models (LLMs) have the potential to impact a wide range of creative domains, as exemplified by popular text-to-image generators like DALL·E and Midjourney. However, the application of LLMs to motion-based visual design has not yet been explored and presents novel challenges, such as how users might effectively describe motion in natural language. Further, many existing generative design tools lack support for iterative refinement of designs beyond prompt engineering. In this paper, we present Keyframer, a design tool that leverages the code generation capabilities of LLMs to support design exploration for animations. Informed by interviews with professional motion designers, animators, and engineers, we designed Keyframer to support both the ideation and refinement stages of animation design by enabling users to explore design variants throughout their process. We evaluated Keyframer with 13 users with a range of animation and programming experience, examining their prompting strategies and how they considered incorporating design variants into their process. We share a series of design principles for applying LLMs to motion design prototyping tools and their potential implications for visual design tools more broadly.
Mapping the Design Space of User Experience for Computer Use Agents
February 12, 2026 · Research areas: Human-Computer Interaction; Tools, Platforms, Frameworks · Conference: IUI
Large language model (LLM)-based computer use agents execute user commands by interacting with available UI elements, but little is known about how users want to interact with these agents or what design factors matter for their user experience (UX). We conducted a two-phase study to map the UX design space for computer use agents. In Phase 1, we reviewed existing systems to develop a taxonomy of UX considerations, then refined it through…
Improving User Interface Generation Models from Designer Feedback
January 6, 2026 · Research areas: Data Science and Annotation; Human-Computer Interaction · Conference: CHI
Despite being trained on vast amounts of data, most LLMs are unable to reliably generate well-designed UIs. Designer feedback is essential to improving performance on UI generation; however, we find that existing RLHF methods based on ratings or rankings are not well aligned with designers’ workflows and ignore the rich rationale used to critique and improve UI designs. In this paper, we investigate several approaches for designers to give…