SeaDronesSee

Description:

SeaDronesSee is a large-scale dataset aimed at helping develop systems for Search and Rescue (SAR) using Unmanned Aerial Vehicles (UAVs) in maritime scenarios. Building highly complex autonomous UAV/drone systems that aid in SAR missions requires robust computer vision algorithms to detect and track objects or persons of interest. The dataset provides three tracks: object detection, single-object tracking and multi-object tracking. Each track comes with its own fully labeled dataset, and most tracks have a leaderboard.
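
As a quick illustration of how the object detection track can be consumed, the following Python sketch loads COCO-style annotations and iterates over the labeled images. The file names and directory layout are assumptions for illustration only; please refer to the official download instructions for the actual structure.

import json
from collections import defaultdict

# Minimal sketch, assuming COCO-style annotation files; the path below
# (annotations/instances_train.json) is a placeholder, not the official layout.
with open("annotations/instances_train.json") as f:
    coco = json.load(f)

# Group bounding boxes ([x, y, width, height]) by image id.
boxes_per_image = defaultdict(list)
for ann in coco["annotations"]:
    boxes_per_image[ann["image_id"]].append(ann["bbox"])

# Print how many annotated objects each training image contains.
for img in coco["images"]:
    print(img["file_name"], len(boxes_per_image[img["id"]]))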


News:

As the Maritime Computer Vision initiative grows, this page will gradually be integrated into the new MaCVi page. As a first step, the upload options will move from here to there, so from now on, please upload over there. The uploads and user database will remain the same and no changes are needed on your end, although you may have to re-enter your login credentials. If you experience any bug, don't hesitate to contact Benjamin or Lojze.
Just as an early teaser: the 2nd Workshop on Maritime Computer Vision is in the planning phase. It will feature new competitions and will be structured similarly to the last workshop. Stay tuned.
This is just a very preliminary announcement of the MaCVi initiative. You will find more information on the new MaCVi page (coming soon).
If the webserver is down, please do not hesitate to contact benjamin.kiefer ät uni-tuebingen.de.
Find material resulting from the workshop on the summary page.
Item | Start Time
Opening | 08:30
1st Keynote: Underwater computer vision challenges | 08:40
2nd Keynote: UAV-based object detection for maritime Search-And-Rescue missions | 09:10
Spotlight presentations of submitted papers | 09:40
Coffee break & poster session; simultaneous online meet-up | 10:20
3rd Keynote: Satellite-based marine litter detection | 10:45
Challenges overview and results | 11:15
Presentations of challenge winners | 12:15
5 minute break | 12:55
4th Keynote: Scaling aerial whale monitoring using active learning | 13:00
Panel Discussion with Sentient-Vision, Whaleseeker, and TBA | 13:30
Closing Remarks | 14:00
We are excited about the last announced keynote from SearchWing. Julian will talk about computer vision challenges in maritime search and rescue.
After careful analysis of the submitted predictions, we can finally announce the winners of each challenge track:

UAV-based (Binary) Object Detection v2:
1. USYD (AP: 0.6152), 2. Fraunhofer IOSB (AP: 0.6062), 3. BUPT MCPRL (AP: 0.5900)

UAV-based Multi-Object Tracking:
1. BUPT MCPRL (HOTA: 0.666), 2. NUDT (HOTA: 0.650), 3. VITA-ISPGroup (HOTA: 0.633)

USV-based Obstacle Segmentation:
1. BUPT MCPRL (Avg.: 93.5), 2. HKUST (Avg.: 93.2)

USV-based Obstacle Detection:
1. Fraunhofer IOSB (Avg.: 0.546), 2. Nvlab x Acvlab (Avg.: 0.514), 3. Ocean U. (Avg.: 0.492)

Congratulations to the winning teams! You will soon receive emails with an invitation to present during the workshop.

Furthermore, we are happy to announce that Sentient Vision Systems will be sponsoring prizes for the best three teams of the UAV-based Object Detection v2 track. Each of these teams will receive a GPU card selected and supplied by Sentient (details soon). Again, we thank all participants for their submissions and are looking forward to an eventful workshop with fruitful discussions. More on the workshop program soon.
We are happy to have Kevin Köser on board for the keynote presentations! He will talk about underwater computer vision challenges.
We sent out the decisions about acceptance of submitted papers. Please upload your final submissions by November 18th, incorporating the reviewers' suggestions as best as possible. Importantly, each accepted paper must have a corresponding in-person author registration by Nov 19th. If you cannot attend in person, you still have to obtain an in-person registration, but please let us know immediately so that we can plan accordingly for virtual attendance.
We sent out requests for short technical reports to those participants who achieved sufficient performance. We kindly ask you to send them in by November 8th.
The next exciting keynote will be a joint talk from Devis Tuia and Marc Rußwurm about satellite-based marine litter detection.
The challenge submission window is closed. Thank you all very much for the fair and close competition! We will work hard to give you feedback very soon.
We are happy to announce the first keynote talk, by Justine Boulet, on the exciting and relevant topic of whale detection.
We restructured the uploading for the Multi-Object Tracking track of the challenge. You can now upload JSON predictions and are provided with more thorough uploading instructions (a minimal sketch of such a JSON file is given at the end of this news section).
The Sensors Journal approached us. Authors of the best three papers will be invited to submit an extended version of their work to a Special Issue of the Sensors Journal free of charge.
The uploading options for the challenges Object Detection v2, Binary Object Detection v2, Obstacle Segmentation and Obstacle Detection are now open. Feel free to submit any model using the instructions given in the challenges overview.
We will host the 1st Workshop on Maritime Computer Vision (MaCVi) as part of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) in January '23. Stay tuned for updates.
The Boat-MNIST uploading option is open again.
The Boat-MNIST challenge is over. Congratulations to the winning groups 10, 106, 120, 50 and 107! Your prizes are on their way ;) We will reopen the uploading option for this track soon.
We created a GitHub repository for this benchmark. Over time, you will find code examples, baseline models and other helpful resources there.
The SeaDronesSee evaluation webpage is online. If you find any bugs, please send us an email. Thank you!
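
Regarding the JSON prediction uploads for Multi-Object Tracking mentioned above, here is a minimal, hypothetical Python sketch of how such a prediction file could be written. The field names (frame_id, track_id, bbox, score) and the box convention are assumptions for illustration only; the authoritative schema is given in the uploading instructions.

import json

# Hypothetical example predictions: one entry per detected object per frame.
# Field names and box convention ([x, y, width, height]) are assumed here.
predictions = [
    {"frame_id": 0, "track_id": 1, "bbox": [100.0, 150.0, 40.0, 80.0], "score": 0.91},
    {"frame_id": 1, "track_id": 1, "bbox": [104.0, 152.0, 40.0, 80.0], "score": 0.89},
]

with open("mot_predictions.json", "w") as f:
    json.dump(predictions, f)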

Datasets:

Object Detection v2: 8,930 train images, 1,547 validation images, 3,750 testing images

Object Detection: 2,975 train images, 859 validation images, 1,796 testing images

Single-Object Tracking: 58 training video clips, 70 validation video clips and 80 testing video clips

Multi-Object Tracking: 22 video clips with 54,105 frames

Multi-Spectral Object Detection: 246 train images, 61 validation images, 125 testing images

MODS Obstacle Detection and Segmentation: hosted as part of the upcoming Workshop.

DeepGTAV-SeaDronesSee: 90,000 synthetic images

Seagull - Traffic Monitoring and Surveillance: advertised here as part of the upcoming Workshop.

Boat-MNIST: 3,765 train images, 1,506 validation images, 2,259 testing images


We will continue to update this dataset to make it more versatile and to reflect real-world requirements in dynamic situations.

Citation:

If you find SeaDronesSee or this evaluation webpage useful, consider citing one of the following papers:

@article{kiefer20221st,
title={1st Workshop on Maritime Computer Vision (MaCVi) 2023: Challenge Results},
author={Kiefer, Benjamin and Kristan, Matej and Per{\v{s}}, Janez and {\v{Z}}ust, Lojze and Poiesi, Fabio and Andrade, Fabio Augusto de Alcantara and Bernardino, Alexandre and Dawkins, Matthew and Raitoharju, Jenni and Quan, Yitong and others},
journal={arXiv preprint arXiv:2211.13508},
year={2022} }

@inproceedings{varga2022seadronessee,
title={{SeaDronesSee}: A maritime benchmark for detecting humans in open water},
author={Varga, Leon Amadeus and Kiefer, Benjamin and Messmer, Martin and Zell, Andreas},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={2260--2270},
year={2022} }

If you use MODS, consider citing the following paper:

@article{bovcon2021mods,
title={MODS--A USV-oriented object detection and obstacle segmentation benchmark},
author={Bovcon, Borja and Muhovi{\v{c}}, Jon and Vranac, Du{\v{s}}ko and Mozeti{\v{c}}, Dean and Per{\v{s}}, Janez and Kristan, Matej},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2021},
publisher={IEEE} }