March 20, 2024
We introduce SceneScript, a method that directly produces full scene models as a sequence of structured language commands using an autoregressive, token-based approach. Our proposed scene representation is inspired by recent successes in transformers and LLMs, and departs from more traditional methods, which commonly describe scenes as meshes, voxel grids, point clouds, or radiance fields. Our method infers the set of structured language commands directly from encoded visual data using a scene language encoder-decoder architecture. To train SceneScript, we generate and release a large-scale synthetic dataset called Aria Synthetic Environments, consisting of 100k high-quality indoor scenes with photorealistic, ground-truth-annotated renders of egocentric scene walkthroughs. Our method achieves state-of-the-art results in architectural layout estimation and competitive results in 3D object detection. Lastly, we explore a key advantage of SceneScript: the ability to readily adapt to new commands via simple additions to the structured language, which we illustrate for tasks such as coarse 3D object part reconstruction.
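To make the idea of a scene as a sequence of structured language commands concrete, here is a minimal sketch in Python. The command names (`make_wall`, `make_door`, `make_bbox`) and their parameters are illustrative assumptions, not SceneScript's exact schema; the point is that a scene becomes a flat token sequence an autoregressive model can predict, and that the vocabulary is extensible by simply adding new command types.

```python
from dataclasses import dataclass


@dataclass
class Command:
    """A single structured-language command: a name plus named parameters.

    This is a hypothetical schema for illustration only.
    """
    name: str
    params: dict


# A toy indoor scene described as an ordered list of commands.
scene = [
    Command("make_wall", {"x0": 0.0, "y0": 0.0, "x1": 4.0, "y1": 0.0, "height": 2.7}),
    Command("make_door", {"wall_id": 0, "offset": 1.2, "width": 0.9}),
    Command("make_bbox", {"class": "chair", "x": 1.5, "y": 2.0, "z": 0.0}),
]


def serialize(commands):
    """Flatten commands into a token sequence suitable for
    autoregressive, token-by-token prediction."""
    tokens = []
    for cmd in commands:
        tokens.append(cmd.name)
        for key, value in cmd.params.items():
            tokens.append(key)
            tokens.append(str(value))
        tokens.append("<end>")  # delimiter between commands
    return tokens


tokens = serialize(scene)
print(tokens[:4])  # first few tokens of the wall command
```

Extending the representation to a new task (e.g., coarse object part reconstruction) would then amount to adding a new command type such as `make_part` to the vocabulary, without changing the serialization machinery.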
Written by
Armen Avetisyan
Chris Xie
Henry Howard-Jenkins
Tsun-Yi Yang
Samir Aroudj
Suvam Patra
Fuyang Zhang
Duncan Frost
Luke Holland
Campbell Orme
Jakob Julian Engel
Edward Miller
Richard Newcombe
Vasileios Balntas
Publisher
arXiv