v2x-sim-visualizer

A visualization library for the V2X-Sim collaborative perception dataset and benchmark.


Keywords
computer-vision, deep-learning, autonomous-driving, collaborative-learning, knowledge-distillation, communication-networks, multi-agent-learning, multi-agent-system, 3d-object-detection, graph-learning, point-cloud-processing, v2x-communication, multi-agent-perception, 3d-scene-understanding, benchmark, collaborative-perception, dataset, machine-learning, multi-robot-systems, pytorch, simulation, v2x, vehicle-to-everything
License
Apache-2.0
Install
pip install v2x-sim-visualizer==0.0.1

Documentation

V2X-Sim: A Virtual Collaborative Perception Dataset and Benchmark for Autonomous Driving

Yiming Li, Zixun Wang, Ziyan An, Yiqi Zhong, Siheng Chen, Chen Feng

This repository provides a PyTorch benchmark implementation of the paper V2X-Sim: A Virtual Collaborative Perception Dataset and Benchmark for Autonomous Driving.

Abstract

Vehicle-to-everything (V2X), which denotes collaboration via communication between a vehicle and any entity in its surroundings, can fundamentally improve perception in self-driving systems. While single-agent perception is advancing rapidly, collaborative perception has seen little progress due to the shortage of public V2X datasets. In this work, we present V2X-Sim, the first public large-scale collaborative perception dataset for autonomous driving. V2X-Sim provides: 1) well-synchronized recordings from roadside infrastructure and multiple vehicles at an intersection to enable collaborative perception, 2) multi-modality sensor streams to facilitate multi-modality perception, and 3) diverse, well-annotated ground truth to support various downstream tasks including detection, tracking, and segmentation. We seek to inspire research on multi-agent, multi-modality, multi-task perception, and our virtual dataset holds promise for advancing collaborative perception before realistic datasets become widely available.

Dataset

You can find more detailed documentation and the download link on our website.
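
V2X-Sim recordings are organized in the nuScenes format, so the nuscenes-devkit can be used to browse them. Below is a minimal sketch; the version string and data root are placeholders for wherever you extract the download, and the per-agent organization may differ slightly from vanilla nuScenes.

    from nuscenes.nuscenes import NuScenes

    # Placeholder version/path; point these at your extracted V2X-Sim download.
    nusc = NuScenes(version='v1.0-mini', dataroot='/data/V2X-Sim', verbose=True)

    # Walk the first scene and list the sensor channels in its first sample.
    scene = nusc.scene[0]
    sample = nusc.get('sample', scene['first_sample_token'])
    print(list(sample['data'].keys()))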

Requirements

Tested with:

  • Python 3.7
  • PyTorch 1.8.0
  • Torchvision 0.9.0
  • CUDA 11.2
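
For reference, an environment matching the tested versions can be set up roughly as follows; the exact wheel for CUDA 11.2 depends on your platform, so check pytorch.org for the install command that matches your setup:

pip install torch==1.8.0 torchvision==0.9.0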

Benchmark

We implement when2com, who2com, V2VNet, lowerbound, and upperbound benchmark experiments on our dataset. See the detection, segmentation, and tracking directories for more details.
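
For orientation: lowerbound refers to single-agent perception with no collaboration, while upperbound fuses information from all agents. The toy PyTorch sketch below illustrates the basic idea of intermediate feature fusion behind such methods; the function name, shapes, and fusion modes are illustrative, not the benchmark's actual API.

    import torch

    def fuse_features(ego_feat, neighbor_feats, mode="mean"):
        """Toy multi-agent BEV feature fusion.

        ego_feat:       (C, H, W) feature map of the ego agent.
        neighbor_feats: list of (C, H, W) maps already warped into
                        the ego frame (the warping step is omitted here).
        """
        if not neighbor_feats or mode == "none":  # lowerbound: no collaboration
            return ego_feat
        stacked = torch.stack([ego_feat, *neighbor_feats])  # (N+1, C, H, W)
        if mode == "mean":                                  # simple average fusion
            return stacked.mean(dim=0)
        if mode == "max":                                   # element-wise max fusion
            return stacked.max(dim=0).values
        raise ValueError(f"unknown fusion mode: {mode}")

    # Example: ego plus two collaborators sharing 256-channel BEV maps.
    ego = torch.randn(256, 32, 32)
    others = [torch.randn(256, 32, 32) for _ in range(2)]
    fused = fuse_features(ego, others, mode="mean")
    print(fused.shape)  # torch.Size([256, 32, 32])

Methods such as V2VNet additionally learn the aggregation (e.g., with a graph neural network) rather than using a fixed mean or max, and handle the spatial warping of neighbor features explicitly.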

Acknowledgement

We are grateful to the many great open-source codebases without which this project would not have been possible.

Citation

If you find V2X-Sim useful in your research, please cite: