GriffinX

GriffinX is a middleware package for real-time event detection.

License: MIT

Install: pip install GriffinX==1.2

Documentation

🚗 🚡 Griffin: Drone-Car Collaboration

This is the official implementation of our paper "Griffin: Real-time Road Event Detection by Collaborative UAV and Ground Vehicle" (in progress).

[Paper]

⚡ Description

Autonomous vehicles rely heavily on sensing the environment for movement guidance and event detection. Yet their sensing capabilities may be restricted, potentially compromising accuracy and response time and leading to hazardous accidents. UAVs provide a more flexible and comprehensive view, but they suffer from limited resources and have never been used to guide ground vehicles in real time. To address this, we introduce Griffin, a novel system that achieves real-time road event detection by utilizing the resources of a drone and a ground vehicle in a collaborative fashion. Our contributions are as follows: 1) we collect a dataset of 350 bird's-eye-view road trajectory video clips that cover dissimilar lighting, flight settings, weather, and road conditions, and we use offline algorithms to assist human annotation and build the ground truth for the collected dataset; 2) we introduce a partitioning-based distributed image processing approach that adapts to the dynamic network condition and drone resource status and detects road events in a latency/accuracy-optimal fashion; 3) we implement the Griffin system and measure its performance in real-world usage scenarios and on our dataset. We evaluate the system's latency, energy consumption, and accuracy by deploying it on a Jetson Nano as the drone's processing unit and an edge server as the roadside unit or the car-mounted computer. Our results show that, compared with ground-vehicle-based sensing, Griffin can improve the response time for event detection by xxx percent while having an insignificant impact on the drone's flight time.

📹 Dataset

We contribute a dataset (GriffinWild) of ~500 high-quality videos (x frames) of drone-view road trajectories collected in the wild (captured from various sources) for public use. We classify the data into several categories that can be used for future drone-view research. It can be accessed at [GriffinWild].

⚡ System Overview

⚡ Dataset

We contribute a dataset (~500 video streams) of bird's-eye-view trajectories of ground vehicles on roads captured in the wild. We further annotate the vehicles with speeds estimated using computer vision algorithms. The dataset can be accessed at [Griffin-Dataset].

⚡ Implementation Modules

The following modules are in development for our proposed system; their implementations can be found in their respective folders.

  1. Speed Detection of Ground Vehicles in Bird's-Eye-View Datasets

We use YOLOv8 to track ground vehicles in bird's-eye-view trajectory datasets and then estimate the speed of each tracked vehicle.

[GIF: speed detection demo]
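As an illustration only, the snippet below is a minimal sketch of this pipeline, not the released implementation: it assumes the Ultralytics YOLOv8 tracking API, a hypothetical input clip, and a fixed pixels-per-meter calibration and frame rate.

```python
# Minimal sketch of ground-vehicle speed estimation from a bird's-eye-view video.
# Assumptions (not part of the released code): the Ultralytics YOLOv8 package,
# a fixed pixels-per-meter scale PPM, and a constant frame rate FPS.
import cv2
from ultralytics import YOLO

PPM = 8.0          # pixels per meter (assumed calibration)
FPS = 30.0         # video frame rate (assumed)

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("birdview_clip.mp4")   # hypothetical input clip

last_pos = {}      # track_id -> (cx, cy) in the previous frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = model.track(frame, persist=True, verbose=False)[0]
    if result.boxes.id is None:
        continue
    for box, tid in zip(result.boxes.xywh.cpu(), result.boxes.id.int().cpu()):
        cx, cy, tid = float(box[0]), float(box[1]), int(tid)
        if tid in last_pos:
            px, py = last_pos[tid]
            # pixel displacement per frame -> meters per second
            dist_px = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
            speed_mps = dist_px / PPM * FPS
            print(f"vehicle {tid}: {speed_mps * 3.6:.1f} km/h")
        last_pos[tid] = (cx, cy)
cap.release()
```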

  2. Speed Detection & Direction Estimation of the Drone

Using the Lucas-Kanade optical flow approach, we calculate the drone's speed from the video stream. Direction is estimated from the displacement of pixels in a user-selected ROI. The camera viewpoint is calibrated initially, when the user selects the reference origin of the frame, and a further parameter, pixel_per_meter (ppm), is ingested in real time from the video stream.

[GIF: birdview_1_1]
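The following sketch only illustrates the idea with OpenCV's Lucas-Kanade tracker; the ROI coordinates, pixel_per_meter value, frame rate, and input stream are hypothetical placeholders rather than values from our calibration procedure.

```python
# Minimal sketch: drone speed and heading from Lucas-Kanade optical flow (OpenCV).
# The ROI, pixel_per_meter, FPS, and input stream below are illustrative assumptions.
import math
import cv2
import numpy as np

ROI = (200, 150, 400, 300)     # x, y, w, h selected by the user (assumed)
pixel_per_meter = 10.0         # ppm calibration (assumed)
FPS = 30.0

cap = cv2.VideoCapture("drone_stream.mp4")     # hypothetical stream
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

x, y, w, h = ROI
mask = np.zeros_like(prev_gray)
mask[y:y + h, x:x + w] = 255
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100, qualityLevel=0.3,
                             minDistance=7, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok or p0 is None:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                         winSize=(15, 15), maxLevel=2)
    good_new, good_old = p1[st == 1], p0[st == 1]
    if len(good_new):
        # Median pixel displacement of tracked features inside the ROI.
        d = np.median(good_new - good_old, axis=0)
        speed_mps = np.hypot(d[0], d[1]) / pixel_per_meter * FPS
        heading_deg = math.degrees(math.atan2(-d[1], d[0]))   # image y-axis points down
        print(f"speed {speed_mps:.2f} m/s, heading {heading_deg:.1f} deg")
        p0 = good_new.reshape(-1, 1, 2)
    prev_gray = gray
cap.release()
```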

  3. Frame Partitioning and Model Offloading

This module is an offline demonstration of the basic concept of frame partitioning combined with model offloading; a minimal sketch of the idea follows.
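The sketch below is a simplified stand-in under stated assumptions: the frame is split into a grid of tiles, and each tile is either processed by a local lightweight detector or queued for offloading based on an uplink estimate. The detect_local and offload helpers are hypothetical placeholders, not our actual models or transport.

```python
# Minimal, offline sketch of frame partitioning + model offloading.
# detect_local() and offload() are hypothetical placeholders for the
# on-drone lightweight model and the edge-server transport, respectively.
import numpy as np

def detect_local(tile: np.ndarray) -> list:
    """Placeholder for a lightweight on-device detector."""
    return []

def offload(tile: np.ndarray) -> list:
    """Placeholder for sending a tile to the edge server and awaiting results."""
    return []

def partition(frame: np.ndarray, rows: int = 2, cols: int = 2):
    """Split a frame into a rows x cols grid of tiles."""
    h, w = frame.shape[:2]
    for r in range(rows):
        for c in range(cols):
            yield frame[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]

def process_frame(frame: np.ndarray, uplink_mbps: float, budget_mbps: float = 20.0):
    """Offload tiles while the estimated uplink budget allows, else run locally."""
    detections, used = [], 0.0
    for tile in partition(frame):
        tile_mb = tile.nbytes * 8 / 1e6          # rough per-tile size in megabits
        if used + tile_mb <= min(uplink_mbps, budget_mbps):
            detections += offload(tile)
            used += tile_mb
        else:
            detections += detect_local(tile)
    return detections

if __name__ == "__main__":
    frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # dummy frame
    print(process_frame(frame, uplink_mbps=15.0))
```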

  4. Energy Consumption Estimation for the Jetson Nano Device

We use [Jetson Power] to measure the energy consumption of the Jetson Nano/TX2 that is mounted on top of the drone.
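For reference only, the sketch below samples the board's INA3221 power monitor through sysfs and integrates the readings into joules; the sysfs path is an assumption that varies across Jetson models and JetPack versions, so it must be adjusted for the actual device.

```python
# Minimal sketch: estimate energy use on a Jetson board by sampling the
# on-board INA3221 power monitor via sysfs and integrating over time.
# NOTE: the sysfs path below is an assumption; it differs across Jetson
# models and JetPack versions, so adjust it for the actual device.
import time

POWER_NODE = "/sys/bus/i2c/drivers/ina3221x/6-0040/iio:device0/in_power0_input"

def read_power_mw(path: str = POWER_NODE) -> float:
    """Read instantaneous board power in milliwatts from sysfs."""
    with open(path) as f:
        return float(f.read().strip())

def measure_energy(duration_s: float = 10.0, interval_s: float = 0.1) -> float:
    """Sample power every interval_s seconds and integrate to joules."""
    energy_j, start = 0.0, time.time()
    while time.time() - start < duration_s:
        energy_j += read_power_mw() / 1000.0 * interval_s   # W * s = J
        time.sleep(interval_s)
    return energy_j

if __name__ == "__main__":
    print(f"estimated energy over 10 s: {measure_energy():.1f} J")
```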

  5. Real-Time Communication Channel and Data Transfer Between Edge Devices

We adapt part of the implementation from [Sky-Socket] to demonstrate a simple communication channel between the Jetson Nano and a computer, followed by partitioning and model offloading (as described in point 3 above). We plan to implement a robust TCP channel in the near future.
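A minimal illustration of such a channel is sketched below; it assumes a plain TCP socket carrying length-prefixed JPEG payloads and is not the Sky-Socket implementation itself. The host, port, and payload bytes are hypothetical.

```python
# Minimal sketch of a length-prefixed TCP frame channel (not Sky-Socket itself).
# The host/port and the JPEG payload are illustrative assumptions.
import socket
import struct

HOST, PORT = "192.168.1.10", 9999   # hypothetical edge-server address

def send_frame(sock: socket.socket, jpeg_bytes: bytes) -> None:
    """Send one JPEG-encoded frame, prefixed with its 4-byte big-endian length."""
    sock.sendall(struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, looping over partial receives."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

def recv_frame(sock: socket.socket) -> bytes:
    """Receive one length-prefixed frame from the peer."""
    header = recv_exact(sock, 4)
    return recv_exact(sock, struct.unpack(">I", header)[0])

if __name__ == "__main__":
    # Drone side (client): connect and push a dummy frame to the edge server.
    with socket.create_connection((HOST, PORT)) as s:
        send_frame(s, b"dummy jpeg bytes")
```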

  6. Training YOLOv5 with the VisDrone Dataset

We fine-tune existing YOLOv5 checkpoints on data from the VisDrone dataset [VisDrone]. We do this to improve ground vehicle (specifically car) detection on roads in real bird's-eye-view scenes. The code and checkpoints will be released shortly.
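A typical fine-tuning invocation might look like the sketch below; it assumes a local clone of the upstream YOLOv5 repository, whose train.py exposes a run() helper and which ships a VisDrone.yaml dataset definition. The hyperparameters here are illustrative, not the ones used for our checkpoints.

```python
# Illustrative fine-tuning sketch, assuming it is run from inside a local clone
# of the upstream YOLOv5 repository (its train.py exposes a run() helper and the
# repo ships a VisDrone.yaml dataset definition). Hyperparameters are examples.
import train   # yolov5/train.py

train.run(
    data="VisDrone.yaml",      # VisDrone dataset config bundled with YOLOv5
    weights="yolov5s.pt",      # start from a pretrained checkpoint
    imgsz=640,                 # input resolution
    epochs=50,                 # assumed training budget
    batch_size=16,
    project="runs/griffin",    # hypothetical output directory
    name="visdrone_finetune",
)
```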

🛠️ Ongoing & Future Work

We plan to implement a real-time communication channel between the drone and the car. We also intend to evaluate the drone's resources and network conditions in our experimental setup, after which we plan to simulate and test the proposed Griffin system. Our goal is to evaluate the accuracy and response time of event detection.