The MODS Obstacle Detection Benchmark is hosted here as part of the upcoming WACV '23 Workshop. The goal of MODS is to benchmark segmentation-based and detection-based obstacle detection methods for the maritime domain, specifically for use in unmanned surface vehicles (USVs). It contains 94 maritime sequences captured using a radio-controlled USV. Rather than relying on generic evaluation code, MODS scores models using metrics tailored to USV navigation.
For the obstacle detection task, you must develop an object detection method that detects obstacles on the water surface.
Please see the corresponding GitHub for instructions on how to set up your submission file. Create an object detection method that can detect two specific classes of obstacles on the water surface: vessels and persons. Other obstacles should be detected as well, but classified as belonging to the others category. An obstacle is anything the USV could crash into or should avoid (e.g. boats, swimmers, land, buoys).
MODS consists of 94 maritime sequences totalling approximately 8000 annotated frames with over 60k annotated objects. It provides annotations for two different types of obstacles: i) dynamic obstacles, which are objects floating on the water (e.g. boats, buoys, swimmers), and ii) static obstacles, which are all remaining obstacle regions (e.g. shoreline, piers). Dynamic obstacles are annotated with bounding boxes, while for static obstacles the boundary between obstacle and water is annotated with polylines. Only dynamic obstacles are relevant to the object detection challenge!
The algorithm should output detections of all waterborne objects of the above-mentioned semantic classes (vessel, person, and others) as rectangular, axis-aligned bounding boxes.
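The exact submission format is defined on the GitHub page linked above. Purely as an illustration, a per-frame prediction could be structured like the following; all field names here are hypothetical, not the official schema:

```python
# Hypothetical per-frame detection record; field names are illustrative only.
# Consult the challenge GitHub for the actual submission schema.
detection = {
    "frame": "seq01_0001.jpg",
    "detections": [
        {"class": "vessel", "bbox": [412, 230, 596, 318]},  # [x1, y1, x2, y2]
        {"class": "person", "bbox": [150, 260, 175, 320]},
        {"class": "others", "bbox": [700, 280, 740, 305]},  # e.g. a buoy
    ],
}
```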
The MODS evaluation protocol is designed to score predictions in a way that is meaningful for practical USV navigation. Detection methods are evaluated using the standard IoU metric for bounding-box detections. A detection counts as a true positive (TP) if its IoU with a ground-truth box is at least 0.3; otherwise it is counted as a false positive (FP). The final score is composed of three different metrics:
To determine the winner of the challenge, the average of the above three F1 metrics will be used as the overall measure of a method's quality. In case of a tie, the first of the F1 metrics specified above will be used to select the winner. The sketches below illustrate both the IoU matching and the ranking computation.
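As an illustration of the matching rule above, here is a minimal sketch of IoU computation and greedy TP/FP counting. The box format (x1, y1, x2, y2) and all function names are assumptions, not the official evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_detections(detections, ground_truths, threshold=0.3):
    """Greedily match detections to ground truths; IoU >= 0.3 counts as a TP."""
    matched_gt = set()
    tp, fp = 0, 0
    for det in detections:
        best_iou, best_gt = 0.0, None
        for i, gt in enumerate(ground_truths):
            if i in matched_gt:
                continue
            score = iou(det, gt)
            if score > best_iou:
                best_iou, best_gt = score, i
        if best_iou >= threshold:
            tp += 1
            matched_gt.add(best_gt)
        else:
            fp += 1
    fn = len(ground_truths) - len(matched_gt)  # unmatched ground truths
    return tp, fp, fn
```

Each F1 metric combines precision P = TP / (TP + FP) and recall R = TP / (TP + FN) as F1 = 2PR / (P + R); under those standard definitions, the ranking described above could be computed as follows (which three F1 scores go in is defined by the challenge):

```python
def f1(tp, fp, fn):
    """Standard F1 score from TP/FP/FN counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def ranking_key(f1_scores):
    """Leaderboard key: average of the three F1 metrics, first F1 breaks ties."""
    return (sum(f1_scores) / len(f1_scores), f1_scores[0])
```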
Furthermore, we require every participant to report the speed of their method, measured in frames per second (FPS), along with the hardware used. Lastly, you should indicate which datasets you used during training (including for pretraining) and whether you used metadata.
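A minimal sketch of one way to measure FPS, assuming a hypothetical `model.predict(frame)` interface and a list of evaluation `frames` (neither is part of the challenge tooling):

```python
import time

def measure_fps(model, frames, warmup=10):
    """Average inference throughput in frames per second."""
    for frame in frames[:warmup]:  # warm-up runs, excluded from timing
        model.predict(frame)
    start = time.perf_counter()
    for frame in frames:
        model.predict(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```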
To participate in the challenge, perform the following steps:

1. Download the MODS dataset on the dataset page.
2. Train your model; any external data may be used (for your convenience we provide the MODD2 dataset and the older MODD dataset).
3. Upload a zip file with your results and predictions on the upload page.