CVPR 2024 Workshop VLADR Submissions
Open6DOR: Benchmarking Open-instruction 6-DoF Object Rearrangement and A VLM-based Approach
Yufei Ding, Haoran Geng, Chaoyi Xu, Xiaomeng Fang, Jiazhao Zhang, Songlin Wei, Qiyu Dai, Zhizheng Zhang, He Wang
Published: 22 Apr 2024, Last Modified: 29 Apr 2024
VLADR 2024 Oral
Evolutionary Reward Design and Optimization with Multimodal Large Language Models
Ali Emre Narin
Published: 22 Apr 2024, Last Modified: 04 May 2024
VLADR 2024 Poster
Language-Driven Active Learning for Diverse Open-Set 3D Object Detection
Ross Greer, Bjørk Antoniussen, Andreas Møgelmose, Mohan Trivedi
Published: 22 Apr 2024, Last Modified: 23 Apr 2024
VLADR 2024 Poster
Driver Activity Classification Using Generalizable Representations from Vision-Language Models
Ross Greer, Mathias Viborg Andersen, Andreas Møgelmose, Mohan Trivedi
Published: 22 Apr 2024, Last Modified: 23 Apr 2024
VLADR 2024 Poster
DriVLMe: Enhancing LLM-based Autonomous Driving Agents with Embodied and Social Experiences
Yidong Huang, Jacob Sansom, Ziqiao Ma, Felix Gervits, Joyce Chai
Published: 22 Apr 2024, Last Modified: 04 May 2024
VLADR 2024 Poster
ATLAS: Adaptive Landmark Acquisition using LLM-Guided Navigation
Utteja Kallakuri, Bharat Prakash, Arnab Neelim Mazumder, Hasib-Al Rashid, Nicholas R Waytowich, Tinoosh Mohsenin
Published: 22 Apr 2024, Last Modified: 04 May 2024
VLADR 2024 Poster
Improving End-To-End Autonomous Driving with Synthetic Data from Latent Diffusion Models
Harsh Goel, Sai Shankar Narasimhan
Published: 22 Apr 2024, Last Modified: 11 May 2024
VLADR 2024 Poster
Explanation for Trajectory Planning using Multi-modal Large Language Model for Autonomous Driving
Takuya Nanri, Siyuan Wang, Akio Shigekane, Jo Nishiyama, CHU Tao, Kohei Yokosawa
29 Mar 2024 (modified: 27 Apr 2024)
Submitted to VLADR 2024
Ambiguous Annotations: When is a Pedestrian not a Pedestrian?
Luisa Schwirten, Jannes Scholz, Daniel Kondermann, Janis Keuper
Published: 22 Apr 2024, Last Modified: 03 May 2024
VLADR 2024 Poster
AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving
Mingfu Liang, Jong-Chyi Su, Samuel Schulter, Sparsh Garg, Shiyu Zhao, Ying Wu, Manmohan Chandraker
Published: 22 Apr 2024, Last Modified: 23 Apr 2024
VLADR 2024 Oral
DriveLM: Driving with Graph Visual Question Answering
Chonghao Sima, Katrin Renz, Kashyap Chitta, Li Chen, Hanxue Zhang, Chengen Xie, Jens Beißwenger, Ping Luo, Andreas Geiger, Hongyang Li
Published: 22 Apr 2024, Last Modified: 23 Apr 2024
VLADR 2024 Oral
Multi-Frame, Lightweight & Efficient Vision-Language Models for Question Answering in Autonomous Driving
Akshay Gopalkrishnan, Ross Greer, Mohan Trivedi
Published: 22 Apr 2024, Last Modified: 23 Apr 2024
VLADR 2024 Poster
Collision Avoidance Metric for 3D Camera Evaluation
Vage Taamazyan, Alberto Dall'Olio, Agastya Kalra
Published: 22 Apr 2024, Last Modified: 01 May 2024
VLADR 2024 Oral
Optimizing Visual Question Answering Models for Driving: Bridging the Gap Between Human and Machine Attention Patterns
Kaavya Rekanar, Martin Hayes, Ganesh Sistu, Ciaran Eising
Published: 22 Apr 2024, Last Modified: 30 Apr 2024
VLADR 2024 Poster
On the Safety Concerns of Deploying LLMs/VLMs in Robotics: Highlighting the Risks and Vulnerabilities
Xiyang Wu, Ruiqi Xian, Tianrui Guan, Jing Liang, Souradip Chakraborty, Fuxiao Liu, Brian M. Sadler, Dinesh Manocha, Amrit Bedi
Published: 22 Apr 2024, Last Modified: 02 May 2024
VLADR 2024 Poster
RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation
Hanxiao Jiang, Binghao Huang, Ruihai Wu, Zhuoran Li, Shubham Garg, Hooshang Nayyeri, Shenlong Wang, Yunzhu Li
Published: 22 Apr 2024, Last Modified: 24 Apr 2024
VLADR 2024 Oral