<html xml:lang="en" xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>ODAI</title>
<!-- For data table -->
<!-- <script src="css/jquery-3.2.1-dist.min.js"></script>
<script src="https://cdn.datatables.net/1.10.15/js/jquery.dataTables.min.js"></script>
<link rel="stylesheet" href="https://cdn.datatables.net/1.10.15/css/jquery.dataTables.min.css" />
<script src="https://cdn.datatables.net/buttons/1.4.0/js/dataTables.buttons.min.js"></script>
<script src="https://cdn.datatables.net/buttons/1.4.0/js/buttons.html5.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jszip/3.1.3/jszip.min.js"></script>
<script src="https://cdn.rawgit.com/bpampuch/pdfmake/0.1.32/build/pdfmake.min.js"></script>
<script src="https://cdn.rawgit.com/bpampuch/pdfmake/0.1.32/build/vfs_fonts.js"></script>
<link rel="stylesheet" href="https://cdn.datatables.net/buttons/1.4.0/css/buttons.dataTables.min.css" />
<link rel="stylesheet" href="https://cdn.datatables.net/1.10.16/css/jquery.dataTables.min.css"/> -->
<!-- <link rel="stylesheet" href="http://cdn.static.runoob.com/libs/bootstrap/3.3.7/css/bootstrap.min.css"> -->
<link rel="stylesheet" href="bootstrap-3.3.7-dist/css/bootstrap.min.css">
<link rel="stylesheet" type="text/css" href="css/mystyle.css"/>
<script src="http://cdn.static.runoob.com/libs/jquery/2.1.1/jquery.min.js"></script>
</head>
<body>
<div class="container" >
<div class="jumbotron">
<div class="content">
<h1 style="text-align:center; margin-top:80px; font-weight: bold; color:rgb(100,10,250); font-size: 48px;">
Object Detection in Aerial Images (ODAI)
</h1>
<!-- <p style="text-align:center; color:blue; position:relative; top:7%"> -->
<p style="text-align:center; margin-top:15px; color: rgb(10,10,250); font-size: 32px;">
<strong>
<span style="font-style:italic">A Contest on ICPR'2018 </span>
</strong>
</p>
<!-- <body background="images/largeimage2.jpg"></body> -->
<!-- <img src="images/largeimage2.jpg" class="img-responsive center-block" /> -->
</div>
</div>
<div class="row">
<div class="span6 offset2">
<ul class="nav nav-tabs">
<li class="active"> <a href="index.html">Home</a></li>
<!-- <li> <a href="dataset.html">Dataset</a></li> -->
<!-- <li><a href="tasks.html">Tasks</a></li> -->
<li><a href="evaluation.html">Evaluation</a></li>
<li><a href="results.html">Results</a></li>
<li><a href="contact.html">Contact</a></li>
<br />
</ul>
</div>
</div>
<div class="row">
<div class="span12">
<h2 style="text-align:left; margin-bottom:10px; margin-top:20px; ">
News
</h2>
<p>
<ul>
<li style="font-size:16px">
<strong>2018-03-27</strong> The submission deadline is extended to April 25.<strong class="news">New</strong>
</li>
<li style="font-size:16px">
<strong>2018-03-16</strong> Registration is open now. Please <a href="http://119.23.15.48:8001">register</a> to download the extra test images and submit your results.<strong class="news">New</strong>
</li>
<li style="font-size:16px">
<strong>2018-03-08</strong> The DOTA <a href="https://github.com/CAPTAIN-WHU/DOTA_devkit">development kit</a> is available now. It is helpful for working with ODAI! <strong class="news">New</strong>
</li>
<li>
<strong>2018-02-07</strong> The website for ODAI at <a href="http://www.icpr2018.org/">ICPR'2018</a> is online.
</li>
</ul>
</p>
<h2>
Registration
</h2>
<p>
To download the extra test images and submit your results, please register on the <a href="http://119.23.15.48:8001/login/?next=/l">contest page</a>.
</p>
<h2 style="text-align:left; margin-bottom:10px; margin-top:20px; ">
Description
</h2>
<!-- <p style="text-align:justify;">
Object detection in Earth Vision, also known as Earth Observation and Remote Sensing, refers to
localizing objects of interest (e.g., vehicles and airplanes) on the earth’s surface and predicting their
corresponding land-use categories. It contains <strong style="color:blue">2806</strong> aerial images from different sensors and platforms.
Each image is of the size in the range from about <strong style="color:blue"> 800 × 800 to 4000 × 4000 </strong> pixels
and contains objects exhibiting a wide variety of scales, orientations, and shapes. These DOTA images are then annotated
by experts in aerial image interpretation using 15 common object categories. The fully annotated DOTA images
contains <strong style="color:blue">188, 282</strong> instances, each of which is labeled by an arbitrary (8 d.o.f.) quadrilateral.
<!-- See <a href="dataset.html">Dataset</a> for details. -->
<!-- </p> -->
<!-- <p style="text-align:justify; font-size: 17px">
For more details, refer to the <a href="https://arxiv.org/abs/1711.10398"> <strong style="color:blue">arXiv preprint</strong></a> of DOTA.
</p> -->
<p>
Object detection in Earth Vision, also known as Earth Observation and Remote Sensing, refers to
localizing objects of interest (e.g., vehicles and airplanes) on the earth’s surface and predicting their
corresponding land-use categories. The task of object detection in aerial images is distinguished
from the conventional object detection task in the following respects:
<ul>
<li>
Object instances in aerial images exhibit very large scale variations.
</li>
<li>
Many small object instances are densely distributed in aerial images, for example, the ships in a harbor and the vehicles in a parking lot.
</li>
<li>
Objects in aerial images often appear in arbitrary orientations.
</li>
</ul>
</p>
<p>
This contest, organized at <a href="http://www.icpr2018.org/">ICPR'2018</a>, features a new large-scale image database for object detection in aerial images, named <a href="https://captain-whu.github.io/DOTA/index.html">DOTA</a>, with nearly <strong style="color:blue">3000</strong>
large-size images <strong style="color:blue">(up to 4000 × 4000 pixels)</strong> covering <strong style="color:blue">15</strong> object categories.
</p>
<p>
Through the dataset and the tasks, we aim to draw attention from a wide range of communities
and to call for further research and efforts on the problem of object detection in aerial
images.
<!-- We believe the contest will not only promote the development of algorithms for object
detection in Earth Vision, but also pose interesting algorithmic questions to general object
detection in computer vision. -->
</p>
<h2>
Timeline
</h2>
<ul>
<!-- <li>
<strong>Whole Train, validation sets and part of test images are avaliable</strong> February 1, 2018
</li> -->
<!-- <li>
<strong>Registration and Submission open</strong> February 1, 2018
</li> -->
<li>
<strong>Extra test images for ODAI-18 available</strong> March 16, 2018
</li>
<li>
<strong>Submission open</strong> March 16, 2018
</li>
<li>
<strong>Submission deadline</strong> April 25, 2018
</li>
<li>
<strong>Submission of contest report</strong> April 30, 2018
</li>
</ul>
<h2>
Tasks
</h2>
<p>
We propose <a href="https://captain-whu.github.io/DOTA/tasks.html">two tasks</a> for this contest, namely object detection with oriented bounding boxes (Task1)
and object detection with horizontal bounding boxes (Task2).
Task1 uses the initial annotations as ground truth, while Task2 uses the axis-aligned bounding boxes generated from them as ground truth.
We recommend testing your algorithms on Task1, although the results of Task2 are also of great practical value.
</p>
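<p>
As noted above, the Task2 ground truths are axis-aligned boxes derived from the oriented annotations. A minimal sketch of that conversion (the helper <code>quad_to_hbb</code> is a hypothetical name of ours, not part of the official DOTA devkit): each 8-d.o.f. quadrilateral is reduced to its axis-aligned hull in (x, y, w, h) format.
</p>

```python
# Hypothetical helper (not from the official DOTA devkit): derive a Task2
# horizontal ground-truth box in (x, y, w, h) format from a Task1 oriented
# annotation given as an 8-d.o.f. quadrilateral [x1, y1, ..., x4, y4].
def quad_to_hbb(quad):
    xs = quad[0::2]  # x1, x2, x3, x4
    ys = quad[1::2]  # y1, y2, y3, y4
    x, y = min(xs), min(ys)
    return (x, y, max(xs) - x, max(ys) - y)

# A rotated box whose axis-aligned hull is 40 wide and 30 tall at (10, 20):
print(quad_to_hbb([10, 35, 30, 20, 50, 35, 30, 50]))  # (10, 20, 40, 30)
```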
<!-- <h3>
Task1 - Detection with oriented bounding boxes
</h3>
<p>
The aim of this task is to locate the ground object instances with an oriented bounding box.
The oriented bounding box follows the same format with the original annotation
</p>
<h3>
Task2 - Detection with horizontal bounding boxes
</h3>
<p>
Detecting object with horizontal bounding boxes is usual in many previous contests for object detection.
The aim of this task is to accurately localize the instance in terms of horizontal bounding box with (x, y, w, h) format.
In the task, the ground truths for training and testing are generated by calculating the axis-aligned bounding boxes over original annotated bounding boxes.
</p> -->
<strong>
For more details and submission format, refer to the <a href="https://captain-whu.github.io/DOTA/tasks.html">Tasks Page </a> on DOTA.
</strong>
<h2>
Dataset
</h2>
<p>
The detection tasks of ODAI are based on the DOTA dataset.
Specifically, the Train and Validation sets are the same as those of <a href="https://captain-whu.github.io/DOTA/dataset.html">DOTA-v1</a>.
However, only part of the Test set images come from <a href="https://captain-whu.github.io/DOTA/dataset.html">DOTA-v1</a>; the remaining test images are not publicly available yet.
For the use of the data, please cite our <a href="https://arxiv.org/abs/1711.10398">article</a> and
follow the <a href="https://captain-whu.github.io/DOTA/dataset.html">usage license</a> described in <a href="https://captain-whu.github.io/DOTA/index.html">DOTA</a>.
The data of the DOTA are available at <strong style="color:blue"><a href="https://captain-whu.github.io/DOTA/dataset.html">DOTA Dataset Page</a></strong>.
</p>
<strong style="color:blue">NOTE: Besides the train/val set of DOTA-v1, extra data is also allowed for training your detector, but you must describe it in your submission.</strong>
<br><br>
<h2>
Communication
</h2>
<p>
For any problems in using DOTA or ODAI, you can join the WeChat group below to discuss.
</p>
<img src="images/wechat7.png" height="200" width="200" />
<h2>
Organizers
</h2>
<ul>
<li>
<strong><a href="http://captain.whu.edu.cn/xia_En.html">Gui-Song Xia</a></strong> Professor at LIESMARS, Wuhan University, China
</li>
<li>
<strong><a href="http://mclab.eic.hust.edu.cn/~xbai/">Xiang Bai</a></strong> Professor of the School of Electronic Information and Communications, Huazhong University of Science and Technology, China
</li>
<li>
<strong><a href="http://vision.cornell.edu/se3/people/serge-belongie/">Serge Belongie</a></strong> Professor at Cornell Tech and the Department of Computer Science at Cornell University, United States
</li>
<li>
<strong><a href="http://www.cs.rochester.edu/u/jluo/">Jiebo Luo</a></strong> Professor of Computer Science, University of Rochester, United States
</li>
<li>
<strong><a href="http://www.dlr.de/caf/en/desktopdefault.aspx/tabid-5242/8788_read-933/sortby-lastname/">Mihai Datcu</a></strong> Scientist with the German Aerospace Center (DLR), Germany
</li>
<li>
<strong><a href="http://www.dsi.unive.it/~pelillo/">Marcello Pelillo</a></strong> Professor of Computer Science, Ca’ Foscari University of Venice, Italy
</li>
<li>
<strong><a href="http://www.lmars.whu.edu.cn/prof_web/zhangliangpei/rs/xueshu.htm">Liangpei Zhang</a></strong> Professor at LIESMARS, Wuhan University, China
</li>
<li>
<strong><a href="http://captain.whu.edu.cn/hufan.html">Fan Hu</a></strong> Postdoctoral researcher at the Electronic Information School, Wuhan University, China
</li>
</ul>
<!-- <div class="section bibtex"> -->
<!-- <h3>
Citation
</h3>
<p> If you make use of the DOTA dataset, please cite our following paper: </p>
<pre>
@article{xia2017dota,
title={DOTA: A Large-scale Dataset for Object Detection in Aerial Images},
author={Xia, Gui-Song and Bai, Xiang and Ding, Jian and Zhu, Zhen and Belongie, Serge and Luo, Jiebo and Datcu, Mihai and Pelillo, Marcello and Zhang, Liangpei},
journal={arXiv preprint arXiv:1711.10398},
year={2017}
}
</pre> -->
<!-- </div> -->
<br>
</div>
</div>
</div>
<br>
<div align="center">
<a href="http://www.amazingcounters.com"><img border="0" src="http://cc.amazingcounters.com/counter.php?i=3220030&c=9660403" alt="AmazingCounters.com"></a>
</div>
<br>
</body>
</html>