Welcome to the Visual and General Intelligence (VisAGI) Lab! At the VisAGI Lab, we envision building general and robust AI systems that can be deployed to solve a wide range of real-world problems, ultimately progressing toward Artificial General Intelligence (AGI). Our long-term academic journey starts with a vision-centric approach, focusing on computer vision as the first step toward building more general and robust AI systems.

Our current research interests include, but are not limited to, the following areas:

  • Robust Visual Perception Models: Designing models that can accurately perceive and interpret visual data (e.g., images, video, and 3D), even under challenging conditions.
  • Multimodal Generative Models: Exploring state-of-the-art generative models, including diffusion models and multimodal large language models, that can generate, understand, and reason across multiple modalities.
  • Data-Centric AI: Developing methodologies that emphasize the strategic collection, curation, and utilization of data, which we see as essential for the next generation of AI breakthroughs.
  • Agentic AI: Building autonomous agents capable of multi-step reasoning, long-term planning, and tool-augmented execution to solve complex tasks.

Through these research directions, we strive to take meaningful steps toward realizing Artificial General Intelligence.

[Notice] We are always looking for self-motivated and passionate students to join our team. If you’re interested, please visit the Joining Us page for more information.

Latest News

[Jan. 2026] Chaeyun Park joined our lab as an undergraduate intern. Welcome!

[Dec. 2025] Prof. Park was invited to give a talk at KAIST.

[Sep. 2025] Donghyeop Woo joined our lab as an undergraduate intern. Welcome!

[Sep. 2025] Two papers on vision-language models have been accepted to the ICCV 2025 Workshop.

[Aug. 2025] Prof. Park was invited to give a talk on ‘Recent Vision-Language Foundation Models’ at ETRI and Gachon University.

[Jul. 2025] Our paper on language-based object detection has been accepted to IJCV 2025 (Q1, JCR: Top 3.3%).

[Jun. 2025] Our paper on flexibly controllable image captioning has been accepted to ICCV 2025 as a Highlight paper🌟.

[Jun. 2025] Junyong Lhim joined our lab as an undergraduate intern. Welcome!

[May 2025] Juhyun Park joined our lab as an undergraduate intern. Welcome!

[Apr. 2025] Our lab will receive high-performance computing resources (H100 GPUs) from the Ministry of Science and ICT this year.