3D Reconstruction in the Wild 2018
in conjunction with ECCV2018
(September 14, 2018)
- Jul. 21: Invited speakers page updated.
- Jul. 20: Submission page updated.
- Jul. 11: Submission deadline extended to Jul. 24 (23:59 Pacific Time).
- Jul. 9: Invited speakers page is available.
- Jun. 28: Submission deadlines for papers and supplementary material are finalized.
- Jun. 21: Submission instructions are available.
- Apr. 18: This site opened.
Call for Papers
Research on 3D reconstruction has long focused on recovering 3D information from multi-view images captured under ideal conditions. However, the assumption of ideal acquisition conditions severely limits where reconstruction systems can be deployed: typically, several external factors must be controlled, intrusive capture devices must be used, or complex hardware setups must be operated to acquire image data suitable for 3D reconstruction. In contrast, 3D reconstruction in unconstrained settings (referred to as 3D reconstruction in the wild) places few restrictions on the data acquisition procedure or the capture environment, but is therefore a far more challenging task.
The goal of this workshop is to foster the development of 3D reconstruction techniques that are robust and real-time, and that consequently perform well across environments with different characteristics. Toward this goal, we are interested in all components of the 3D reconstruction pipeline, from multi-camera calibration, feature extraction, matching, data fusion, depth learning, and meshing techniques to 3D modeling approaches capable of operating on image data captured in the wild. Topics of interest include, but are not limited to:
- Various environments / extreme conditions
  - 3D for agriculture, bio-imaging, and physics
  - features from images in heavy rain
  - features from backlit images
  - reconstruction of athletes in sports
  - reconstruction of planets
  - tracking in the snow
  - underwater camera calibration, refractive concerns, and autonomous underwater navigation
  - 3D from images captured by underwater cameras
  - 3D from images captured using drones
  - lighting/camera configuration
  - mapping, localization, and SLAM
- Stereo algorithms and calibration
  - 3D from unordered image sequences
- 3D from data (DNN and learning approaches)
  - depth from heavily incomplete data
  - heavily distorted image matching
  - structure from super-wide-baseline images
  - structure from remote sensing images
  - structure-from-motion and visual odometry
  - reconstruction of thin objects
  - fusion for unreliable depth sequences
  - geometry for unsynchronized multi-view capture
  - mesh generation for fast-deforming objects
  - mesh interpolation for deforming objects
  - benchmark datasets for challenging scenarios
  - fusion for heterogeneous images