Analyzing LogoMotion Errors in Position, Scale, and Animation Logic

Written by mlmodel | Published 2025/06/16
Tech Story Tags: ai-logo-animation | logomotion | automatic-logo-animator | ai-for-designers | ai-motion-design | self-refining-code | visual-programming | ai-code-synthesis

TL;DR: LogoMotion frequently makes position and scale errors in animations due to misunderstanding from-to formats and improper use of percentages, but often fixes them on the first retry.

Table of Links

Abstract and 1 Introduction

2 Related Work

2.1 Program Synthesis

2.2 Creativity Support Tools for Animation

2.3 Generative Tools for Design

3 Formative Steps

4 Logomotion System and 4.1 Input

4.2 Preprocess Visual Information

4.3 Visually-Grounded Code Synthesis

5 Evaluations

5.1 Evaluation: Program Repair

5.2 Methodology

5.3 Findings

6 Evaluation with Novices

7 Discussion and 7.1 Breaking Away from Templates

7.2 Generating Code Around Visuals

7.3 Limitations

8 Conclusion and References

5.3 Findings

5.3.1 RQ3. What errors does LogoMotion synthesis make? LogoMotion made 42 position-based errors in total. Position errors appeared in 30.4% of the runs, meaning that nearly every run with a detected error contained a position error. These errors occurred when the left or top coordinate of an element's bounding box was off. LogoMotion made 26 scale-based errors in total, erring in 18.4% of the runs, so scale errors were less common than position errors. These errors occurred when the width or height of the bounding box was off. We did not detect any opacity errors in our test set.
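Although the paper does not show the checking code, the position and scale categories map naturally onto a bounding-box comparison: an element's rendered left/top and width/height are compared against its target layout. The sketch below is a hypothetical illustration of that idea, not LogoMotion's implementation; the box fields, the tolerance, and the function name are assumptions.

```javascript
// Hypothetical sketch of a bounding-box check (not LogoMotion's actual code).
// A "position error" corresponds to a mismatch in left/top; a "scale error"
// corresponds to a mismatch in width/height. The 1px tolerance is an assumption.
const TOLERANCE_PX = 1;

function classifyBoxErrors(rendered, target) {
  const errors = [];
  if (Math.abs(rendered.left - target.left) > TOLERANCE_PX ||
      Math.abs(rendered.top - target.top) > TOLERANCE_PX) {
    errors.push('position');
  }
  if (Math.abs(rendered.width - target.width) > TOLERANCE_PX ||
      Math.abs(rendered.height - target.height) > TOLERANCE_PX) {
    errors.push('scale');
  }
  return errors;
}

// Example: an element that ends 10px left of where it should be.
console.log(classifyBoxErrors(
  { left: 90, top: 40, width: 200, height: 80 },
  { left: 100, top: 40, width: 200, height: 80 }
)); // -> ['position']
```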

Common errors resulted from not following the from-to format that is common to animation libraries (CSS and anime.js). In spite of the prompt suggesting a from-to format, keyframes were often suggested as arrays with more than two values, so the element would not return to its original position. For example, if the generated animation set the translateX values to [10, -10, 0], the element would end with a -10 offset relative to its correct position.
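As a hedged illustration (the paper does not reproduce the generated code), the intended anime.js v3-style from-to format passes exactly two values per property, while a multi-step motion is safer expressed as object keyframes that explicitly end at the resting value. The selector, durations, and easings below are placeholders.

```javascript
import anime from 'animejs';

// Intended from-to format: exactly two values per property, so the element
// finishes at its original layout position (offset 0).
anime({
  targets: '#logo-text',      // placeholder selector
  translateX: [-120, 0],      // [from, to]
  opacity: [0, 1],
  duration: 800,
  easing: 'easeOutQuad',
});

// If a multi-step motion is really wanted, object keyframes make the final
// resting value explicit, so the element still ends with no residual offset.
anime({
  targets: '#logo-text',
  translateX: [
    { value: 10, duration: 200 },
    { value: -10, duration: 200 },
    { value: 0, duration: 200 },  // explicitly return to the original position
  ],
  easing: 'easeInOutSine',
});
```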

Another type of position error occurred when absolute and relative percentages were applied inconsistently. For example, a line layer in an animation could be instructed to stretch from 0% outward to 100%. This 100 percent was intended to be relative to the element's width or height, but was rendered as 100 percent of the canvas (an absolute value). An example of this mistake within the LLM response is provided below.

"I have made an assumption to change the 'translateX' value from '41.1%' to '50%', assuming that '50%' corresponds to the centered position in the layout."
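The ambiguity can be made concrete: a percentage width resolves against the parent container (here, the canvas), whereas scaleX is always relative to the element's own size. The following anime.js-style sketch is illustrative only; the selector and timings are assumptions.

```javascript
import anime from 'animejs';

// Ambiguous: a percentage width resolves against the PARENT (the canvas),
// so animating to '100%' can stretch a thin line layer across the whole canvas.
anime({
  targets: '#underline',       // placeholder selector for a line layer
  width: ['0%', '100%'],       // 100% of the parent, not of the element itself
  duration: 600,
  easing: 'easeOutQuad',
});

// Unambiguous: scaleX is relative to the element's own rendered width,
// so the line grows from nothing back to exactly its original size.
anime({
  targets: '#underline',
  scaleX: [0, 1],              // 0% to 100% of the element's own width
  duration: 600,
  easing: 'easeOutQuad',
});
```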

Another frequently encountered error was when GPT would return a looping animation. Looping animations, as briefly mentioned in our formative steps, are a common design pattern in animation; they were instantiated by defining a small periodic action with the loop parameter set to true. Looping animations generally left elements at small deltas from their intended positions, but these errors were easily resolved.
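The looping pattern described above looks roughly like the sketch below: with loop set to true, the small oscillation never settles, so the element is typically caught a few pixels from its resting position when the final layout is checked. The target, amplitude, and timing are assumptions.

```javascript
import anime from 'animejs';

// A small periodic "hover" motion with loop enabled. Because the animation
// never settles, the element sits at a small offset (here up to 4px) from
// its intended position whenever the final frame is inspected.
anime({
  targets: '#logo-icon',       // placeholder selector
  translateY: [0, -4],         // small periodic action
  direction: 'alternate',      // bounce back and forth
  loop: true,                  // loops indefinitely
  duration: 500,
  easing: 'easeInOutSine',
});
```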

5.3.2 RQ4. How capably does LogoMotion fix its errors? Many errors were simple enough that they took only one attempt from LogoMotion to solve. This is pictured in Figure 8 by the predominance of the green bar for "Solved in 1" at each value of k. Note that Figure 7 normalizes over the number of elements, because it reports the proportion of animation code runs made error-free, while Figure 8 aggregates across all errors on all design elements. This distinction is important because the one run that could not be resolved (Figure 7, k=4) had many elements whose individual errors were not resolved (Figure 8, k=4), making the solve rate at k=4 differ across the two graphs.

Authors:

(1) Vivian Liu, Columbia University (vivian@cs.columbia.edu);

(2) Rubaiat Habib Kazi, Adobe Research (rhabib@adobe.com);

(3) Li-Yi Wei, Adobe Research (lwei@adobe.com);

(4) Matthew Fisher, Adobe Research (matfishe@adobe.com);

(5) Timothy Langlois, Adobe Research (tlangloi@adobe.com);

(6) Seth Walker, Adobe Research (swalker@adobe.com);

(7) Lydia Chilton, Columbia University (chilton@cs.columbia.edu).


This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.

