AIVS Lab, Korea University
Who We Are
We study intelligent systems that tightly connect perception, reasoning, and action. Our research lies at the intersection of computer vision, multimodal foundation models, and embodied intelligence, with a strong emphasis on building AI systems that can operate reliably in the real world.
Rather than focusing solely on isolated models or benchmarks, we aim to develop robust, scalable, and deployable AI technologies that integrate sensing, decision-making, and control. Our work spans both fundamental AI research and system-level implementations, addressing challenges such as generalization under distribution shift, data efficiency, and real-world constraints. Our main research directions include:
- Designing generative and multimodal models that enable controllable and trustworthy perception.
- Developing Vision–Language–Action (VLA) frameworks for embodied agents and robotic systems.
- Building end-to-end AI systems that bridge simulation, learning, and physical execution.
If you are interested in pushing the boundaries of real-world AI systems and working on challenging problems at the intersection of theory and practice, we encourage you to contact us and explore opportunities to join the lab.
Highlights
- [Funded Projects] Currently conducting three nationally funded research projects in multimodal, generative, and embodied AI.
- [Publications] A strong publication record of international journal and conference papers, including multiple JCR top 10% journals.
- [Research Group] An active research group of 7 integrated MS/PhD students and 6 undergraduate researchers.
News
- 2025-12 [Congratulations] One paper has been accepted to Advanced Science (IF 14.1, JCR top 7%).
- 2025-04 [New Project] An IITP-funded project has been launched, focusing on Vision–Language–Action models (2025.04–2025.12).
- 2025-03 [New Project] An NRF-funded project has been launched, focusing on on-device generative AI (2025.03–2030.02).