In September 2022, I joined Samsung Electronics in the System LSI Business of the Device Solutions Division. My current responsibility is evaluating and validating an indirect Time-of-Flight (iToF) sensor. The sensor emits modulated infrared pulses and measures the phase difference between the emitted signal and its reflection, from which it produces a frame-based depth map.
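The phase-to-depth relation behind an iToF sensor is compact: the reflected signal lags the emitted one by Δφ = 4πf·d/c (the light covers the distance twice), so d = c·Δφ/(4πf). Below is a minimal Python sketch of this conversion, assuming a continuous-wave sensor and a common four-sample demodulation scheme; the function names and the 100 MHz example frequency are illustrative, not the actual sensor's parameters.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def phase_to_depth(phase_rad: np.ndarray, f_mod_hz: float) -> np.ndarray:
    """Convert a measured phase shift to depth for a CW iToF sensor.

    The reflection lags the emitted signal by phase = 4*pi*f*d/c
    (round trip), hence d = c * phase / (4*pi*f).
    """
    return C * phase_rad / (4.0 * np.pi * f_mod_hz)

def four_phase_demodulation(q0, q90, q180, q270):
    """Recover the phase shift from four correlation samples taken at
    0/90/180/270 degrees (a common scheme; real sensors vary)."""
    phase = np.arctan2(q90 - q270, q0 - q180)
    return np.mod(phase, 2.0 * np.pi)  # wrap into [0, 2*pi)

if __name__ == "__main__":
    # Example: 100 MHz modulation gives an unambiguous range of
    # c / (2*f) = ~1.5 m; a phase of pi maps to half of that.
    print(phase_to_depth(np.pi, 100e6))  # ~0.75 m
```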
In the spring of 2015, I joined LARR (formerly known as ICSL) at Seoul National University to conduct research in robotics. I collaborated with Pyojin Kim, Changhyeon Kim, Haram Kim, and Youngseok Jang. My research interests centered on 3D vision, particularly robust visual odometry in dynamic environments [1, 2] and applications of event cameras [3], as well as 2D vision [4].
Abstract: This dissertation addresses the problem of estimating the angular velocity of an event camera robustly in dynamic environments where moving objects are present. These vision-based navigation problems have mainly been tackled with frame-based cameras. The traditional frame-based camera...
Abstract: Event cameras are bio-inspired sensors that capture intensity changes at each pixel individually and generate asynchronous, independent "events". Due to this fundamental difference from conventional cameras, most research on event cameras builds a global event frame by...
Abstract: This paper proposes real-time object depth estimation using only a monocular camera on an onboard computer with a low-cost GPU. Our algorithm estimates scene depth with a sparse feature-based visual odometry algorithm and detects/tracks objects' bounding boxes by util...
Abstract: In this paper, we propose a robust real-time visual odometry for dynamic environments via a rigid-motion model updated by scene flow. The proposed algorithm consists of spatial motion segmentation and temporal motion tracking. The spatial segmentation first generates several mo...
Abstract: In this work, we propose an edge detection algorithm that estimates the lifetime of events produced by a dynamic vision sensor, also known as an event camera. The event camera, unlike a traditional CMOS camera, generates sparse event data at pixels whose log-intensity changes. D...
Abstract: This paper proposes an edge-based robust RGB-D visual odometry (VO) using 2-D edge divergence minimization. Our approach focuses on enabling the VO to operate in more general environments subject to low texture and changing brightness, by employing image edge regions and th...
Abstract: In this paper, we propose a rigid motion segmentation algorithm using grid-based optical flow. The algorithm selects several adjacent points from the grid-based optical flow to estimate a motion hypothesis based on a so-called entropy, and generates motion hypotheses between two images, th...
Abstract: In this paper, we propose a robust visual odometry algorithm for dynamic environments via rigid motion segmentation using grid-based optical flow. The algorithm first divides the image frame into a fixed-size grid, then calculates the three-dimensional motion of the grids for light ...
Abstract: This paper surveys visual odometry technology for unmanned systems. Visual odometry is one of the most important technologies for implementing vision-based navigation; therefore, it has been widely applied to unmanned systems in recent years. Visual odometry estimates the trajectory and pose of t...
Abstract: In this paper, we propose a planar panorama stitching method to blend consecutive images captured by a multirotor equipped with a fish-eye camera. In addition, we suggest an exposure correction method to reduce the brightness difference between contiguous images, and a drift error corre...
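Several of the abstracts above share one building block: sparse optical flow computed on a fixed image grid, from which rigid motions are segmented. As a rough, self-contained illustration of that general idea (not any paper's actual algorithm), the Python sketch below tracks one point per grid cell with OpenCV's pyramidal Lucas-Kanade flow and flags cells whose motion deviates from the dominant one; the cell size, threshold, and median-based dominant-motion estimate are all placeholder simplifications.

```python
import cv2
import numpy as np

def grid_flow_segmentation(prev_gray, curr_gray, cell=32, thresh=2.0):
    """Toy grid-based motion segmentation.

    1. Place one tracking point at the center of each fixed-size grid cell.
    2. Track the points with pyramidal Lucas-Kanade optical flow.
    3. Flag cells whose flow deviates from the median (dominant/ego) motion
       as candidate moving objects. Real systems fit rigid-motion models
       instead of a simple median; this only conveys the general idea.
    """
    h, w = prev_gray.shape
    ys, xs = np.mgrid[cell // 2:h:cell, cell // 2:w:cell]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    pts = pts.reshape(-1, 1, 2)

    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    flow = (nxt - pts).reshape(-1, 2)[ok]
    tracked = pts.reshape(-1, 2)[ok]

    dominant = np.median(flow, axis=0)           # crude ego-motion estimate
    residual = np.linalg.norm(flow - dominant, axis=1)
    moving = residual > thresh                   # cells off the dominant motion
    return tracked[moving], tracked[~moving]

# Usage: pass two consecutive grayscale frames, e.g. from cv2.VideoCapture
# after cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).
```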