News
2025 Publication Summary
In 2025, we have 1 NeurIPS spotlight, 1 ICCV highlight, 1 CVPR highlight, 1 CVPR Workshop spotlight, 1 BMVC oral, and 1 TPAMI paper, all first-authored by KHU VLL students.
In other words, every paper first-authored by KHU VLL students this year is an oral/highlight/spotlight or TPAMI paper.
We also have 1 AAAI paper and 1 CVPR poster paper first-authored by students from other labs.
For more details, refer to the Publication page.
Sep. 2025:
Our work on video explainable AI is accepted as a Spotlight paper at NeurIPS 2025.
"Disentangled Concepts Speak Louder Than Words: Explainable Video Action Recognition"
Jongseo, big congratulations! Now you know how to conduct high-impact research!
Aug. 2025:
A grant proposal, “Explainable and Robust Multi-modal Long-form Video Understanding with Life-long Learning (RS-2025-22362968)” has been accepted by NRF, Korea.
Jinwoo Choi is the PI of the grant.
Aug. 2025:
Press Coverage: Our recent work, MASH-VLM (CVPR 2025 Highlight), has been featured in the Digital Chosun Ilbo (August 4, 2025).
Digital Chosun Ilbo Article
Jul. 2025:
Our work on Egocentric Visual Query Localization (VQ2D of Ego4D) is accepted as an Oral paper at BMVC 2025.
"HERO-VQL: Hierarchical, Egocentric and Robust Visual Query Localization"
Congratulations to Hyogun, Soyeon, and Joohyun on their first oral paper!
Jul. 2025:
Our work on Video Continual Learning is selected as a Highlight presentation of ICCV 2025.
"ESSENTIAL: Episodic and Semantic Memory Integration for Video Class-Incremental Learning"
Congratulations to Jongseo and Kyungho on this well-deserved recognition!
May 2025:
A grant proposal, “Multi-Purpose Visual Information Coding for Human and Machine Vision (RS-2025-02216217),” has been selected as a Global Basic Research Lab project funded by NRF, Korea.
Jinwoo Choi is one of the Co-PIs of the grant.
Duration: June 2025 ~ May 2028
Apr. 2025:
Our work on mitigating action-scene hallucination in Video-LLMs is selected as a Highlight presentation of CVPR 2025.
Kyungho Bae, Jinhyung Kim, Sihaeng Lee, Soonyoung Lee, Gunhee Lee*, Jinwoo Choi*, "MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations"
Kyungho, big congratulations! Now you know how to conduct high-impact research!
Mar. 2025:
Geo Ahn, a second-year master's student, has started her research internship at NAVER Cloud, where she will conduct research on video understanding. Congratulations!
Feb. 2025:
Two papers have been accepted to CVPR 2025.
Kyungho Bae, Jinhyung Kim, Sihaeng Lee, Soonyoung Lee, Gunhee Lee*, Jinwoo Choi*, "MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations"
Congratulations to Kyungho!
Kyungho did this wonderful work during his internship at LG AI Research.
Seun-An Choe, Keon-Hee Park, Jinwoo Choi*, Gyeong-Moon Park*, "Universal Domain Adaptation for Semantic Segmentation"
This is a collaboration with Seun-An Choe, Keon-Hee Park, and Gyeong-Moon Park from KHU AGI Lab.
Congratulations, and thank you for the collaboration!
2024
Aug 2024:
Our work on disentangled video representation learning is selected as an Oral presentation of ECCV 2024.
"DEVIAS: Learning Disentangled Video Representations of Action and Scene"
Kyungho, Geo, and Youngrae, big congratulations on your FIRST top-conference oral paper!
Jul. 2024:
One paper on disentangled video representation learning has been accepted to ECCV 2024.
"DEVIAS: Learning Disentangled Video Representations of Action and Scene"
Kyungho, Geo, and Youngrae, big congratulations on your first top-conference paper!
Apr. 2024:
A grant proposal, “AI-based OTT user and content data analysis and content-based video recommendation systems (RS-2024-00353131),” has been accepted by IITP (Institute of Information & Communications Technology Planning & Evaluation), Korea.
Jinwoo Choi is one of the Co-PIs of the grant.
Funding amount: 400,000,000 KRW for KHU VLL out of 2,000,000,000 KRW in total
Duration: May 2024 ~ December 2027
Mar. 2024:
Prof. Jinwoo Choi serves as an Industry Chair of KCCV 2024.
Feb. 2024:
Two papers have been accepted to CVPR 2024.
"Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval"
"Open Set Domain Adaptation for Semantic Segmentation"
Congratulations!
Jan. 2024:
Yuri Kim (KHU BME/EE Dual Major) joined our lab as an undergraduate intern!