Description of the 'SLAM / Sensor Fusion' category



Benchmarks

A Benchmark Comparison of Monocular Visual-Inertial Odometry Algorithms for Flying Robots


SLAM

Papers

CodeSLAM - Learning a Compact, Optimisable Representation for Dense Visual SLAM

QuadricSLAM: Constrained Dual Quadrics from Object Detections as Landmarks in Semantic SLAM

Global Pose Estimation with an Attention-based Recurrent Network


Visual Odometry / Ego-Motion Estimation

Papers

LIBVISO2 (2011)

Learning visual odometry with a convolutional network

Exploring Representation Learning With CNNs for Frame-to-Frame Ego-Motion Estimation

PoseNet: A convolutional network for real-time 6-DOF camera relocalization

SVO 2.0: Fast Semi-Direct Visual Odometry for Monocular, Wide Angle, and Multi-camera Systems

Learning to Fuse: A Deep Learning Approach to Visual-Inertial Camera Pose Estimation

DeepVO: Towards End-to-End Visual Odometry with Deep Recurrent Convolutional Neural Networks

UnDeepVO: Monocular Visual Odometry through Unsupervised Deep Learning

DeMoN: Depth and Motion Network for Learning Monocular Stereo

VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem

(ESP-VO) End-to-End, Sequence-to-Sequence Probabilistic Visual Odometry through Deep Neural Networks


(Note the severe rolling-shutter effect and image blur in the video.)

Towards Visual Ego-motion Learning in Robots

VidLoc: A Deep Spatio-Temporal Model for 6-DoF Video-Clip Relocalization

Toward Low-Flying Autonomous MAV Trail Navigation using Deep Neural Networks for Environmental Awareness


(GTC 2017)

Geometric Consistency for Self-Supervised End-to-End Visual Odometry

DepthNet: A Recurrent Neural Network Architecture for Monocular Depth Prediction
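Several of the learning-based entries above (DeepVO, ESP-VO, VINet, VidLoc) share the same basic recipe: a CNN encodes consecutive frame pairs and a recurrent network integrates those features over time to regress relative 6-DoF poses. The sketch below is a minimal, hypothetical PyTorch illustration of that recipe only; the layer sizes, the `RecurrentVO` name, and the plain Euler-angle output are my own simplifying assumptions, not any of the published architectures.

```python
# Minimal sketch of a DeepVO-style recurrent pose regressor (illustrative only).
import torch
import torch.nn as nn


class RecurrentVO(nn.Module):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        # Encoder over a pair of RGB frames stacked along the channel axis (6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature
        )
        self.rnn = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        self.pose_head = nn.Linear(hidden_size, 6)  # [tx, ty, tz, roll, pitch, yaw] per step

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) video clip.
        b, t = frames.shape[:2]
        # Stack each consecutive frame pair along channels: (batch, time-1, 6, H, W).
        pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)
        feats = self.encoder(pairs.flatten(0, 1)).flatten(1)  # (b*(t-1), 64)
        feats = feats.view(b, t - 1, -1)                      # (b, t-1, 64)
        hidden, _ = self.rnn(feats)                           # temporal integration
        return self.pose_head(hidden)                         # (b, t-1, 6) relative poses


if __name__ == "__main__":
    clip = torch.randn(1, 5, 3, 96, 128)   # dummy 5-frame clip
    print(RecurrentVO()(clip).shape)       # -> torch.Size([1, 4, 6])
```

The probabilistic (ESP-VO) and visual-inertial (VINet) variants in the list extend this pattern with uncertainty outputs and an IMU branch, respectively.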


Workshops & Misc.

Ronald Clark Website

CVPR 2018 Workshop: Deep Learning for Visual SLAM