Limited Resources, Unlimited Impact with Multimodal AI Models

AI foundation models increasingly dominate academic and industrial fields, yet the research and development of these technologies is controlled by a small number of institutions capable of managing extensive computational and data resources. To counter this concentration, there is a critical need for techniques that can produce practical AI foundation models with standard computational and data resources, especially as scaling laws no longer offer a reliable roadmap for developing foundation models. Our community (LIMIT.Community) and our international lab (LIMIT.Lab) therefore aim to establish exactly those technologies that enable the construction of {Vision, Vision-Language, Multimodal} AI foundation models even when compute and data are limited. Drawing on our members' prior successes in (i) generative pre-training methods that apply horizontally across modalities, including image, video, 3D, and audio, and (ii) training high-quality AI models from extremely scarce data (even a single image), we are committed to building multimodal AI foundation models under severely limited resources. As of 2025, LIMIT.Lab is composed primarily of international research teams from Japan, the UK, and Germany. Through collaborative research projects and workshop organization, we actively foster global exchange in AI and related fields.

Check out upcoming events

Recent News

2025-06-01: Two workshops are accepted at ICCV 2025.
2025-06-01: Our official website is now live.

Our Members

Hirokatsu Kataoka
AIST / Oxford VGG
Yoshihiro Fukuhara
AIST
Rintaro Yanagi
AIST
Ryousuke Yamada
AIST / UTN FunAI Lab
Daichi Otsuka
AIST
Partha Das
UvA
Risa Shinoda
Osaka University / AIST
Nakamasa Inoue
Science Tokyo
Go Irie
TUS
Rio Yokota
Science Tokyo
Ikuro Sato
Science Tokyo
Rei Kawakami
Science Tokyo
Christian Rupprecht
Oxford VGG
Iro Laina
Oxford VGG
Yuki M. Asano
UTN FunAI Lab
Elliott (Shangzhe) Wu
Cambridge
Daniel Schofield
Oxford VGG
Martin R. Oswald
UvA
© 2025 LIMIT.Lab. All rights reserved.