IROS 2019: Robust Real-time RGB-D Visual Odometry in Dynamic Environments via Rigid Motion Model

In this paper, we propose a robust real-time visual odometry algorithm for dynamic environments via a rigid motion model updated by scene flow. The proposed algorithm consists of spatial motion segmentation and temporal motion tracking. The spatial segmentation first generates several motion hypotheses using grid-based scene flow and then clusters them, separating objects that move independently of one another. Further, we use a dual-mode motion model to consistently distinguis...
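The clustering of motion hypotheses described above can be sketched as follows. This is an illustrative greedy variant, not the paper's implementation: per-grid-cell 3-D scene-flow vectors are grouped so that cells whose flow agrees within a threshold share one rigid-motion hypothesis; the function name and threshold are assumptions for illustration.

```python
import numpy as np

def cluster_scene_flow(flow, threshold=0.1):
    """Greedy clustering of per-grid-cell 3-D scene-flow vectors.

    flow: (N, 3) array of scene-flow vectors, one per grid cell.
    A cell joins the cluster whose running mean flow is within
    `threshold` (Euclidean distance); otherwise it starts a new
    cluster, i.e. a new rigid-motion hypothesis.
    Returns an (N,) array of cluster labels.
    """
    labels = -np.ones(len(flow), dtype=int)
    means = []   # running mean flow of each cluster
    counts = []  # number of cells in each cluster
    for i, f in enumerate(flow):
        best, best_d = -1, threshold
        for k, m in enumerate(means):
            d = np.linalg.norm(f - m)
            if d < best_d:
                best, best_d = k, d
        if best == -1:
            means.append(f.copy())
            counts.append(1)
            labels[i] = len(means) - 1
        else:
            counts[best] += 1
            means[best] += (f - means[best]) / counts[best]
            labels[i] = best
    return labels
```

Cells on the static background and cells on an independently moving object end up with different labels; the largest cluster would typically be taken as the background motion for odometry.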

BMVC 2019: Edge Detection for Event Cameras using Intra-pixel-area Events

In this work, we propose an edge detection algorithm that estimates the lifetime of each event produced by a dynamic vision sensor, also known as an event camera. Unlike a traditional CMOS camera, the event camera generates sparse event data at pixels whose log-intensity changes. Due to this characteristic, at any specific instant there is, in theory, at most one event per pixel, which makes it difficult to grasp the scene captured by the camera at a particular moment. In this work, we present an algorith...
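A minimal sketch of the lifetime idea, under the common assumption (not necessarily the paper's exact model) that an event stays valid until the edge that triggered it has travelled one pixel, so its lifetime is the inverse of the local edge speed. The function names and the query-time rendering are illustrative:

```python
import numpy as np

def event_lifetimes(speeds_px_per_s):
    """Estimate per-event lifetimes from local edge speed.

    Assumes an event remains 'valid' until the moving edge that
    caused it has travelled one pixel, so lifetime = 1 / speed.
    `speeds_px_per_s` holds per-event edge speeds in px/s (> 0).
    """
    return 1.0 / np.asarray(speeds_px_per_s, dtype=float)

def active_events(timestamps, lifetimes, t):
    """Boolean mask of events still alive at query time t:
    an event fired at ts is active while ts <= t < ts + lifetime.
    Accumulating the active events at a fixed t yields a sharp
    edge map instead of a motion-blurred event accumulation."""
    ts = np.asarray(timestamps, dtype=float)
    return (ts <= t) & (t < ts + lifetimes)
```

Fast-moving edges produce short-lived events and slow edges long-lived ones, so the rendered edge thickness stays roughly constant regardless of motion speed.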

IROS 2018: Edge-Based Robust RGB-D Visual Odometry Using 2-D Edge Divergence Minimization

This paper proposes an edge-based robust RGB-D visual odometry (VO) using 2-D edge divergence minimization. Our approach focuses on enabling VO to operate in more general environments subject to low texture and changing brightness, by employing image edge regions and their image gradient vectors within the iterative closest point (ICP) framework. For more robust and stable ICP-based optimization, we propose an edge matching criterion based on image gradient vectors. In addition, to red...
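One way such a gradient-aware edge matching criterion can look in code — an illustrative brute-force sketch, not the paper's implementation: a correspondence is accepted only if the nearest edge point also has a consistent gradient direction, which rejects matches between differently oriented edges. The threshold value and function name are assumptions.

```python
import numpy as np

def match_edges(src_pts, src_grads, dst_pts, dst_grads, cos_thresh=0.9):
    """For each source edge point, find the nearest destination edge
    point whose image-gradient direction agrees (normalized dot
    product above cos_thresh). Returns (i, j) index pairs; a source
    point with no direction-consistent neighbour is skipped.
    Brute force for clarity; a k-d tree would be used in practice."""
    pairs = []
    for i, (p, g) in enumerate(zip(src_pts, src_grads)):
        g = g / (np.linalg.norm(g) + 1e-12)
        best_j, best_d = -1, np.inf
        for j, (q, h) in enumerate(zip(dst_pts, dst_grads)):
            h = h / (np.linalg.norm(h) + 1e-12)
            if g @ h < cos_thresh:  # reject mismatched edge orientation
                continue
            d = np.linalg.norm(p - q)
            if d < best_d:
                best_j, best_d = j, d
        if best_j >= 0:
            pairs.append((i, best_j))
    return pairs
```

Note how a geometrically closer point with an orthogonal gradient is rejected in favour of a farther point on a like-oriented edge, which is what stabilizes the ICP iterations.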

SMC 2017: Real-time Rigid Motion Segmentation using Grid-based Optical Flow

In this paper, we propose a rigid motion segmentation algorithm using grid-based optical flow. The algorithm selects several adjacent points among the grid-based optical flow vectors, based on a so-called entropy, and generates motion hypotheses between two images, thereby separating objects that move independently of each other. The grid-based entropy is accumulated as each new motion hypothesis is generated, and a high entropy value means that the motion has been estimated ina...
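One way to realize such an entropy score — an illustrative stand-in, not the paper's exact definition: take the Shannon entropy of the histogram of flow residuals under a motion hypothesis. Residuals piled into a single low bin (most grid cells agree with the hypothesis) give low entropy, while residuals spread across many bins give high entropy, flagging an inaccurately estimated motion. Bin count and range are assumptions.

```python
import numpy as np

def residual_entropy(residuals, bins=10, r_max=5.0):
    """Shannon entropy of the histogram of per-grid-cell flow
    residuals (in pixels) under one motion hypothesis. Low entropy:
    residuals concentrate in one bin, i.e. the hypothesis explains
    the flow well. High entropy: residuals scatter, i.e. the motion
    was estimated poorly and the hypothesis should be rejected."""
    hist, _ = np.histogram(np.clip(residuals, 0.0, r_max),
                           bins=bins, range=(0.0, r_max))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

Accumulating such a score per grid cell over successive hypotheses then indicates which cells are consistently hard to explain by a single rigid motion.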

MS Thesis 2017: Robust Visual Odometry via Rigid Motion Segmentation for Dynamic Environments

In this thesis, we propose a robust visual odometry algorithm for dynamic environments via rigid motion segmentation using grid-based optical flow. The algorithm first divides the image frame into a fixed-size grid, then calculates the three-dimensional motion of the grid cells, yielding a light computational load and uniformly distributed optical flow vectors. Next, it selects several adjacent points among the grid-based optical flow vectors based on a so-called entropy and generates motion hypotheses formed by thre...

ICROS 2017: Survey on Visual Odometry Technology for Unmanned Systems

This paper surveys visual odometry technology for unmanned systems. Visual odometry is one of the most important technologies for implementing vision-based navigation; therefore, it has been widely applied to unmanned systems in recent years. Visual odometry estimates the trajectory and pose of the system, and it can be classified into the following: 1) stereo vs. monocular, 2) feature-based or indirect vs. direct, and 3) linear vs. nonlinear, according to the number of cameras, information attributes, an...

ICCAS 2016: Exposure Correction and Image Blending for Planar Panorama Stitching

In this paper, we propose a planar panorama stitching method to blend consecutive images captured by a multirotor equipped with a fish-eye camera. In addition, we suggest an exposure correction method to reduce the brightness difference between contiguous images, and a drift error correction method to compensate for errors in the estimated position of the multirotor. In experiments, the multirotor flies at 35 meters above the ground, and the fish-eye camera mounted on a gimbal system takes pictures. Then we v...
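A minimal sketch of gain-based exposure correction between two consecutive frames, assuming a known overlap region; this is a hypothetical, simplified variant of such a correction, and the function names are illustrative. The gain is the ratio of mean intensities over the overlap, applied multiplicatively to the incoming frame:

```python
import numpy as np

def exposure_gain(prev_overlap, next_overlap):
    """Multiplicative gain that maps the next frame's exposure onto
    the previous frame's, estimated from mean intensities over the
    region where the two images overlap (assumed non-empty)."""
    return float(prev_overlap.mean() / (next_overlap.mean() + 1e-12))

def apply_gain(next_img, gain):
    """Apply the gain, round, and clip back to the 8-bit range."""
    scaled = np.rint(next_img.astype(float) * gain)
    return np.clip(scaled, 0, 255).astype(np.uint8)
```

Chaining this pairwise correction along the image sequence keeps brightness consistent across the stitched planar panorama.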