News
2025
May 2025:
A grant proposal, “Multi-Purpose Visual Information Coding for Human and Machine Vision” (RS-2025-02216217), has been selected as a Global Basic Research Lab funded by the National Research Foundation of Korea (NRF).
Jinwoo Choi is one of the Co-PIs of the grant.
Duration: June 2025 – May 2028
Apr. 2025:
Our work on mitigating action-scene hallucination in Video-LLMs has been selected for a Highlight presentation at CVPR 2025.
Kyungho Bae, Jinhyung Kim, Sihaeng Lee, Soonyoung Lee, Gunhee Lee*, Jinwoo Choi*, "MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations"
Kyungho, big congratulations! Now you know how to conduct high-impact research!
Mar. 2025:
Geo Ahn, a second-year master's student, begins her research internship at NAVER Cloud, where she will conduct research on video understanding. Congratulations!
Feb. 2025:
Two papers have been accepted to CVPR 2025.
Kyungho Bae, Jinhyung Kim, Sihaeng Lee, Soonyoung Lee, Gunhee Lee*, Jinwoo Choi*, "MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations"
Congratulations to Kyungho!
Kyungho did this wonderful work during his internship at LG AI Research.
Seun-An Choe, Keon-Hee Park, Jinwoo Choi*, Gyeong-Moon Park*, "Universal Domain Adaptation for Semantic Segmentation"
This is a collaboration with Seun-An Choe, Keon-Hee Park, and Gyeong-Moon Park from the KHU AGI Lab.
Congratulations, and thank you for the collaboration!
2024
Aug 2024:
Our work on disentangled video representation learning has been selected for an Oral presentation at ECCV 2024.
"DEVIAS: Learning Disentangled Video Representations of Action and Scene"
Kyungho, Geo, and Youngrae, big congratulations on your first top-conference oral paper!
Jul. 2024:
One paper on disentangled video representation learning has been accepted to ECCV 2024.
"DEVIAS: Learning Disentangled Video Representations of Action and Scene"
Kyungho, Geo, and Youngrae, big congratulations on your first top-conference paper!
Apr. 2024:
A grant proposal, “AI-based OTT user and content data analysis and content-based video recommendation systems” (RS-2024-00353131), has been accepted by IITP (Institute of Information & Communications Technology Planning & Evaluation).
Jinwoo Choi is one of the Co-PIs of the grant.
Funding amount: 400,000,000 KRW for KHU VLL, out of 2,000,000,000 KRW in total
Duration: May 2024 – December 2027
Mar. 2024:
Prof. Jinwoo Choi serves as an Industry Chair of KCCV 2024.
Feb. 2024:
Two papers have been accepted to CVPR 2024.
"Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval"
"Open Set Domain Adaptation for Semantic Segmentation"
Congratulations!
Jan. 2024:
Yuri Kim (KHU BME/EE Dual Major) joined our lab as an undergraduate intern!