We study vision and learning

We read, code, discuss, and write about computer vision and machine learning


News

2025

Apr. 2025:

Our work on mitigating action-scene hallucination in Video-LLMs has been selected as a Highlight presentation at CVPR 2025.
Kyungho Bae, Jinhyung Kim, Sihaeng Lee, Soonyoung Lee, Gunhee Lee*, Jinwoo Choi*, "MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations"
Kyungho, big congratulations! Now you know how to conduct high-impact research!

Mar. 2025:

Geo Ahn, a second-year master's student, has started a research internship at NAVER Cloud, where she will conduct research on video understanding. Congratulations!

Feb. 2025:

Two papers are accepted to CVPR 2025.
Kyungho Bae, Jinhyung Kim, Sihaeng Lee, Soonyoung Lee, Gunhee Lee*, Jinwoo Choi*, "MASH-VLM: Mitigating Action-Scene Hallucination in Video-LLMs through Disentangled Spatial-Temporal Representations"
Congratulations to Kyungho! Kyungho did this wonderful work during his internship at LG AI Research.
Seun-An Choe, Keon-Hee Park, Jinwoo Choi*, Gyeong-Moon Park*, "Universal Domain Adaptation for Semantic Segmentation"
This is a collaboration with Seun-An Choe, Keon-Hee Park, and Gyeong-Moon Park from the KHU AGI Lab. Congratulations, and thank you for the collaboration!

Jan. 2025:

Juan Lee (KHU CSE) joined our lab as an undergraduate intern!



2024

Dec. 2024:

Gangmin Choi (KHU EE) joined our lab as an undergraduate intern!

Nov. 2024:

Jiwook Han (KHU IEM/CSE Dual Major) joined our lab as an undergraduate intern!

Sep. 2024:

Wooil Lee (KHU SWCON) joined our lab as an undergraduate intern!

Aug. 2024:

Our work on disentangled video representation learning has been selected for an Oral presentation at ECCV 2024.
"DEVIAS: Learning Disentangled Video Representations of Action and Scene"
Kyungho, Geo, and Youngrae, big congratulations on your first top-conference oral paper!

Jul. 2024:

One paper on disentangled video representation learning has been accepted to ECCV 2024.
"DEVIAS: Learning Disentangled Video Representations of Action and Scene"
Kyungho, Geo, and Youngrae, big congratulations on your first top-conference paper!

Apr. 2024:

A grant proposal, “AI-based OTT user and content data analysis and content-based video recommendation systems (인공지능 기반 OTT 사용자 및 콘텐츠 데이터 분석과 비디오 추천시스템 개발), RS-2024-00353131,” has been accepted by IITP (Institute of Information & Communications Technology Planning & Evaluation, 정보통신기획평가원).
Jinwoo Choi is one of the Co-PIs of the grant.
Funding amount: 400,000,000 KRW for KHU VLL, out of 2,000,000,000 KRW in total
Duration: May 2024 to December 2027

Mar. 2024:

Prof. Jinwoo Choi serves as an Industry Chair of KCCV 2024.

Feb. 2024:

Two papers have been accepted to CVPR 2024.
"Do You Remember? Dense Video Captioning with Cross-Modal Memory Retrieval"
"Open Set Domain Adaptation for Semantic Segmentation"
Congratulations!

Jan. 2024:

Yuri Kim (KHU BME/EE Dual Major) joined our lab as an undergraduate intern!