Welcome to the Visual and General Intelligence (VisAGI) Lab! At the VisAGI Lab, we envision building general and robust AI systems that can be deployed to solve a wide range of real-world problems, ultimately progressing toward Artificial General Intelligence (AGI). Our long-term academic journey starts with a vision-centric approach, focusing on computer vision as the first step toward building more general and robust AI systems.

Our current research interests include, but are not limited to, the following areas:

  • Robust Visual Perception Models: Designing models that can accurately perceive and interpret visual data (e.g., images, video, and 3D), even under challenging conditions.
  • Multimodal Generative Models: Exploring state-of-the-art generative models, including diffusion models and multimodal large language models, that can generate, understand, and reason across multiple modalities.
  • Data-Centric AI: Developing methodologies that emphasize the strategic collection, curation, and utilization of data, which we see as essential for the next generation of AI breakthroughs.

Through these research directions, we strive to take meaningful steps toward realizing Artificial General Intelligence.

[Notice] We are always looking for self-motivated and passionate students to join our team. If you’re interested, please visit the Joining Us page for more information.

Latest News

[May 2025] Juhyun Park joined our lab as an undergraduate intern. Welcome!

[Apr. 2025] Our lab will receive high-performance computing resources (NVIDIA H100 GPUs) from the Ministry of Science and ICT this year.

[Mar. 2025] Prof. Park was invited to give a talk on ‘A Data-Centric Perspective on Vision-Centered AI’ at Yonsei University.

[Mar. 2025] Our KOALA model has been successfully transferred to a gaming startup to support their game development.

[Mar. 2025] Dohyun Kim and Wonjun Heo joined our lab as undergraduate interns. Welcome!

[Mar. 2025] The Visual & General Intelligence Lab has been newly established at the University of Seoul.

[Oct. 2024] Our team won third place in the OmniLabel Challenge at ECCV 2024, finishing behind a joint team from Meta and Google.

[Sep. 2024] Our paper on KOALA, a fast and memory-efficient diffusion model, was accepted to NeurIPS 2024.