AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents

Chris Rawles*1, Yifan Chang†2, Sarah Clinckemaillie†2, Jonathan Waltz2, Gabrielle Lau2, Marybeth Fair2, Robert Berry1, Wei Li1, Will Bishop1, Alice Li1, Folawiyo Campbell-Ajala1, Divya Tyamagundlu2, Daniel Toyama1, Timothy Lillicrap1, Oriana Riva1

1 Google DeepMind 2 Google
*Lead contributor †Equal contribution

Paper · Code · Data

Overview of AndroidWorld

Autonomous agents that execute human tasks by controlling computers can enhance human productivity and application accessibility. However, progress in this field will be driven by realistic and reproducible benchmarks. We present AndroidWorld, a fully functional Android environment that provides reward signals for 116 programmatic tasks across 20 real-world Android apps. Unlike existing interactive environments, which provide a static test set, AndroidWorld dynamically constructs tasks that are parameterized and expressed in natural language in unlimited ways, thus enabling testing on a much larger and more realistic suite of tasks. Reward signals are derived from the computer’s system state, making them durable across task variations and extensible across different apps. To demonstrate AndroidWorld’s benefits and mode of operation, we introduce a new computer control agent, M3A. M3A can complete 30.6% of AndroidWorld’s tasks, leaving ample room for future work. Furthermore, we adapt a popular desktop web agent to work on Android, which we find to be less effective on mobile, suggesting that future research is needed to achieve universal, cross-domain agents. Finally, we conduct a robustness analysis by testing M3A against a range of task variations on a representative subset of tasks, demonstrating that variations in task parameters can significantly alter a task’s complexity and, consequently, an agent’s performance, highlighting the importance of testing agents under diverse conditions.
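
To give a flavor of what a parameterized task with a durable reward can look like, here is a minimal sketch: a natural-language goal template, randomized parameters, and a success check that reads app state (here a SQLite database, a common Android storage mechanism) rather than the screen. The class, method names, and the `events` schema below are hypothetical illustrations, not the actual AndroidWorld API.

```python
import random
import sqlite3


class AddCalendarEvent:
    """Hypothetical task: create a calendar event; success is read from
    the app's on-device database, not from screenshots."""

    template = "Create a calendar event titled '{title}' on {date} at {time}."

    @staticmethod
    def generate_params() -> dict:
        # Randomized parameters make every instantiation a distinct task.
        return {
            "title": random.choice(["Team sync", "Dentist", "Gym"]),
            "date": f"2024-10-{random.randint(1, 28):02d}",
            "time": f"{random.randint(8, 17)}:00",
        }

    def __init__(self, params: dict):
        self.params = params
        self.goal = self.template.format(**params)

    def is_successful(self, db_path: str) -> bool:
        # Durable reward: inspect the calendar app's database (e.g. pulled
        # from the device via adb) instead of parsing the UI.
        with sqlite3.connect(db_path) as conn:
            count, = conn.execute(
                "SELECT COUNT(*) FROM events WHERE title = ?",
                (self.params["title"],),
            ).fetchone()
        return count > 0
```

Because the reward reads persistent system state, the same check scores every parameter variation of the task, which is what makes testing on millions of instantiations practical.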


Figure: example AndroidWorld tasks. (a) Record an audio clip and save it. (b) Add multiple expenses. (c) Create a marker in a map app. (d) Create multiple recipes. (e) Add a calendar event. (f) Create a playlist in VLC. (g) Send a received address to a contact. (h) Retrieve high-priority tasks due on a given date. (i) Retrieve sports-tracking stats.

Dataset

Key Features:

  • 📝 116 diverse tasks across 20 real-world apps
  • 🎲 Dynamic task instantiation for millions of unique variations (see the sketch after this list)
  • 🏆 Durable reward signals for reliable evaluation
  • 🌐 Open environment with access to millions of Android apps and websites
  • 💾 Lightweight footprint (2 GB memory, 8 GB disk)
  • 🔧 Extensible design to easily add new tasks and benchmarks
  • 🖥️ Integration with MiniWoB++ web-based tasks
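
To make the evaluation loop behind these features concrete, below is a toy, self-contained harness in the same spirit: sample a goal, let an agent act until it declares completion, then score from device state. `FakeEnv`, `NaiveAgent`, and `run_episode` are illustrative stand-ins, not the published AndroidWorld interfaces.

```python
import random
from dataclasses import dataclass


@dataclass
class Observation:
    ui_tree: str  # serialized accessibility tree, standing in for a screenshot


class FakeEnv:
    """Stand-in environment tracking one trivial piece of 'device state'."""

    def __init__(self):
        self.event_titles = set()

    def reset(self) -> Observation:
        self.event_titles.clear()
        return Observation(ui_tree="<home screen>")

    def step(self, action: dict) -> Observation:
        if action["type"] == "create_event":
            self.event_titles.add(action["title"])
        return Observation(ui_tree="<calendar app>")


class NaiveAgent:
    """Stand-in agent; a real agent such as M3A would query an LLM here."""

    def act(self, goal: str, obs: Observation) -> dict:
        title = goal.split("'")[1]  # crude goal parsing, for this demo only
        return random.choice(
            [{"type": "create_event", "title": title}, {"type": "done"}]
        )


def run_episode(env, agent, title: str, max_steps: int = 10) -> float:
    goal = f"Create a calendar event titled '{title}'."
    obs = env.reset()
    for _ in range(max_steps):
        action = agent.act(goal, obs)
        if action["type"] == "done":
            break
        obs = env.step(action)
    # Reward is read from (simulated) device state, never from the agent's
    # own claim of success.
    return 1.0 if title in env.event_titles else 0.0


print(run_episode(FakeEnv(), NaiveAgent(), "Team sync"))
```

In a real run, `FakeEnv` would be an Android emulator driven over adb and `NaiveAgent` an LLM-backed agent such as M3A; the scoring step would call a task's state-based success check as in the earlier sketch.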

Dataset Statistics

Figure: the distribution of task tags across AndroidWorld tasks.

Figure: the distribution of the number of steps taken to perform tasks.

Comparison to other datasets

Table: comparison of AndroidWorld to other datasets.

Citation

@misc{rawles2024androidworld,
      title={AndroidWorld: A Dynamic Benchmarking Environment for Autonomous Agents}, 
      author={Christopher Rawles and Sarah Clinckemaillie and Yifan Chang and Jonathan Waltz and Gabrielle Lau and Marybeth Fair and Alice Li and William Bishop and Wei Li and Folawiyo Campbell-Ajala and Daniel Toyama and Robert Berry and Divya Tyamagundlu and Timothy Lillicrap and Oriana Riva},
      year={2024},
      eprint={2405.14573},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}