Haojian Huang is a PhD student at The Hong Kong University of Science and Technology (GZ), supervised by Prof. Yingcong Chen. His research spans Trusted AI, Embodied AI, and Video Understanding & Generation. He previously interned at TeleAI and Huawei Noah's Ark Lab, collaborating closely with Associate Professor Mulin Chen and Principal Researcher Yinchuan Li. He will join Knowin.ai to deepen his focus on Embodied AI, aiming to build engaging and reliable intelligent systems that advance human well-being. Beyond research, he leads CareerSynapse, a dynamic, student-driven initiative that explores practical applications of agentic AI systems. He welcomes ANY AI research collaboration.
VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models
Temporal Regularization Makes Your Video Generator Stronger
Evidential Deep Learning for Robust Video Temporal Grounding
FineCLIPER: Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs
VideoGen-of-Thought: A Collaborative Framework for Multi-Shot Video Generation
DependEval: Benchmarking LLMs for Repository Dependency Understanding
Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning
Trusted Unified Feature-Neighborhood Dynamics for Multi-View Classification
Evidential Deep Partial Multi-View Classification With Discount Fusion
Towards Robust Uncertainty-Aware Incomplete Multi-View Classification
Recent Trends of Multimodal Affective Computing: A Survey from NLP Perspective
GaussianVTON: 3D Human Virtual Try-ON via Multi-Stage Gaussian Splatting Editing with Image Prompting