Until now, I’ve illustrated celestial bodies as completely spherical shapes: the Earth, the Sun, and Saturn. These bodies have enough gravity to pull themselves into nearly smooth spheres. However, Phobos, one of Mars’ moons, has an irregular shape due to its low mass and weak gravity, which aren’t strong enough ...
In the solar system, there are two main types of planets: rocky planets and gas giants, some of which have rings. Among them, Saturn is the best-known gas giant with prominent rings. This project focuses on creating a realistic 3D model of Saturn, with a detailed representation of its rings, surf...
This article provides an overview of interpolation methods, including flat, linear, and cubic interpolation, along with simple examples of how to use them. These interpolation methods compute a value between vertices, and more advanced methods aim to make smooth value...
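For reference, here is a minimal sketch of the three methods for interpolating between two values `a` and `b` with a parameter `t` in [0, 1]. The exact definitions (especially of the flat method) may differ slightly from the ones discussed in the article, and the function names are only for illustration.

```javascript
// Interpolate between a and b with t in [0, 1].
function flat(a, b, t) {
  // Flat interpolation: t is ignored and the left value is held
  // over the whole interval, producing a step-like result.
  return a;
}

function linear(a, b, t) {
  // Linear interpolation: a straight line between a and b.
  return a + (b - a) * t;
}

function cubic(a, b, t) {
  // Cubic (Hermite / smoothstep) interpolation: zero slope at both ends,
  // so neighboring segments join smoothly.
  const s = t * t * (3 - 2 * t);
  return a + (b - a) * s;
}

console.log(flat(0, 1, 0.25), linear(0, 1, 0.25), cubic(0, 1, 0.25));
// -> 0, 0.25, 0.15625
```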
The depth filter is a useful component in monocular camera visual odometry systems, such as SVO (Semi-Direct Visual Odometry) 1 2. It is designed to estimate depth values from a sequence of images captured by a single camera. The filter operates under the assumption of Gaussian measurement noi...
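To sketch the idea only (this is not SVO's actual depth filter, which additionally models outliers with a Gaussian-uniform mixture), the snippet below recursively fuses depth measurements under a purely Gaussian noise assumption; all names and numbers are illustrative.

```javascript
// Minimal recursive Gaussian depth fusion (illustrative only).
// Each estimate is { mu: mean depth, var: variance }.
function updateDepth(prior, z, measVar) {
  // Product of two Gaussians: the posterior mean is a variance-weighted
  // average of the prior mean and the new measurement z.
  const postVar = (prior.var * measVar) / (prior.var + measVar);
  const postMu = (measVar * prior.mu + prior.var * z) / (prior.var + measVar);
  return { mu: postMu, var: postVar };
}

// Start with a vague prior and refine it with noisy measurements.
let estimate = { mu: 5.0, var: 4.0 };
for (const z of [4.2, 4.5, 4.4, 4.35]) {
  estimate = updateDepth(estimate, z, 0.25);
}
console.log(estimate); // the variance shrinks as measurements accumulate
```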
In image processing, 2-D convolution is a highly useful operation. It can be used for tasks such as blurring, morphology, edge detection, and sharpening. In Python, a naive 2-D convolution incurs a huge computational cost for a large image. This post introduces the use of np.lib.stride_t...
In the previous article1, we created a realistic Earth. Unlike the Earth, which consists of solid material, the Sun is full of gas. In order to render gas flowing across the surface of the Sun, I’ll utilize fractal noise, a.k.a. fractal Brownian motion, which is mentioned here2. Also, I...
Let’s return to creating a realistic Earth using Three.js. Unlike the previous Earth1, we are going to render the Earth using a shader material. First, we will depict day and night with two different textures, since there are city lights at night. Second, I’ll make an effect for mountain shado...
Now, it’s time to design a custom pattern to draw with the shader. Since every vertex and fragment is “blind” to the others, we have to write code in a different manner from ordinary concurrent programming. Thus, the position and color of a vertex should be defined by its own attributes, and the sha...
A shader program consists of a vertex shader and a fragment shader. The vertex shader defines the geometric attributes of vertices, whereas a fragment shader defines their color. In this post, I’ll address how to create vertex and fragment shaders and how to use them.
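As a reference point, a minimal pair of shaders could look like the following, written as JavaScript template strings for later use with Three.js; the varying name vUv is my own choice.

```javascript
// Minimal vertex shader: transforms each vertex into clip space
// and passes its UV coordinate to the fragment shader.
const vertexShader = /* glsl */ `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

// Minimal fragment shader: colors each pixel from its UV coordinate.
const fragmentShader = /* glsl */ `
  varying vec2 vUv;
  void main() {
    gl_FragColor = vec4(vUv, 0.0, 1.0);
  }
`;
```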
GLSL (OpenGL Shading Language) is a programming language for writing shaders, small programs that describe the color attribute of each vertex in parallel. Because of this parallel computation, a vertex does not know the status of the other vertices nearby, i.e., vertices are blind to one another. But, if we use a uniform v...
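For instance, a single time uniform (called uTime here purely for illustration) is visible to every vertex and fragment at once, so it can drive a global animation:

```javascript
// A uniform is shared by every vertex and fragment in the same draw call.
const uniforms = { uTime: { value: 0.0 } };

const fragmentShader = /* glsl */ `
  uniform float uTime;                     // same value for every fragment
  void main() {
    float pulse = 0.5 + 0.5 * sin(uTime);  // global brightness pulse
    gl_FragColor = vec4(vec3(pulse), 1.0);
  }
`;

// Update the shared value once per frame from JavaScript.
function animate(timeMs) {
  uniforms.uTime.value = timeMs * 0.001;
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```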
A shader material is rendered with a custom shader. It requires a vertex shader and a fragment shader, written in GLSL (OpenGL Shading Language), which define the position of a vertex and its color, respectively. Since this code runs on the GPU using WebGL, a ShaderMaterial is rendered prope...
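A minimal sketch of wiring such shaders into a THREE.ShaderMaterial might look like this (the shader bodies and the uColor uniform are kept trivial for illustration):

```javascript
import * as THREE from 'three';

// A ShaderMaterial runs the supplied GLSL on the GPU via WebGL.
const material = new THREE.ShaderMaterial({
  uniforms: { uColor: { value: new THREE.Color(0xff8800) } },
  vertexShader: /* glsl */ `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform vec3 uColor;
    void main() {
      gl_FragColor = vec4(uColor, 1.0);
    }
  `,
});

// Attach the custom material to a mesh like any other material.
const mesh = new THREE.Mesh(new THREE.SphereGeometry(1, 32, 32), material);
```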
Three.js provides the material attribute for a 3D object, which determines how the object reflects light and how it is rendered by the camera. A material’s properties include base color, metalness, roughness, and so on. Moreover, we can decorate the surface of a 3D object by u...
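For example, a standard physically-based material with a texture map could be set up as follows (the texture path is a placeholder):

```javascript
import * as THREE from 'three';

// Base color, metalness, and roughness control how the surface reflects light.
const material = new THREE.MeshStandardMaterial({
  color: 0x2266ff,
  metalness: 0.1,
  roughness: 0.8,
});

// Optionally decorate the surface with a texture map (placeholder path).
new THREE.TextureLoader().load('textures/surface.jpg', (texture) => {
  material.map = texture;
  material.needsUpdate = true;
});
```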
Sun-Earth system
In this article, I’ll create a simple Sun-Earth system. First, create an orange sphere and a blue sphere, which represent the Sun and the Earth, respectively.
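A minimal sketch of the two spheres could look like this (the radii, colors, and positions are arbitrary choices):

```javascript
import * as THREE from 'three';

// Orange sphere for the Sun; MeshBasicMaterial is unaffected by lighting.
const sun = new THREE.Mesh(
  new THREE.SphereGeometry(2, 32, 32),
  new THREE.MeshBasicMaterial({ color: 0xffa500 })
);

// Blue sphere for the Earth, placed away from the Sun.
const earth = new THREE.Mesh(
  new THREE.SphereGeometry(0.5, 32, 32),
  new THREE.MeshStandardMaterial({ color: 0x2266ff })
);
earth.position.set(6, 0, 0);
```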
Create the scene environment
For Three.js to render a scene, it needs a scene, a camera, and a renderer. In JavaScript, you have to import Three.js according to the installation option you chose, as mentioned in the previous article.
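A minimal sketch of that setup, assuming Three.js was installed as an npm module, might look like this:

```javascript
import * as THREE from 'three';

// The scene holds the objects, the camera defines the viewpoint,
// and the renderer draws the scene into a <canvas>.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  75,                                      // vertical field of view (degrees)
  window.innerWidth / window.innerHeight,  // aspect ratio
  0.1,                                     // near clipping plane
  1000                                     // far clipping plane
);
camera.position.z = 10;

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

renderer.render(scene, camera);
```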
The above image is a snapshot of a solar system simulator.1 The posts in this series will be published in the order of the contents below.
https://portfolio.sangillee.com/apps/solar ↩
The autoUpdater module enables an Electron app to check for the latest version and update itself automatically, as sketched below.1
https://www.electronjs.org/docs/latest/api/auto-updater ↩
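Based on that API, a minimal main-process sketch might look like the following (the feed URL is a placeholder):

```javascript
const { app, autoUpdater, dialog } = require('electron');

app.whenReady().then(() => {
  // Point the updater at an update server (placeholder URL).
  autoUpdater.setFeedURL({ url: 'https://example.com/update/latest' });
  autoUpdater.checkForUpdates();
});

// When an update has been downloaded, offer to restart and install it.
autoUpdater.on('update-downloaded', (event, releaseNotes, releaseName) => {
  dialog
    .showMessageBox({
      type: 'info',
      buttons: ['Restart now', 'Later'],
      message: `Version ${releaseName} has been downloaded.`,
    })
    .then(({ response }) => {
      if (response === 0) autoUpdater.quitAndInstall();
    });
});

autoUpdater.on('error', (err) => console.error('Update error:', err));
```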
Abstract: This dissertation addresses the problem of estimating the angular velocity of an event camera robustly in dynamic environments where moving objects exist. These vision-based navigation problems have mainly been addressed with frame-based cameras. The traditional frame-based came...
Abstract: Event cameras are bio-inspired sensors that capture intensity changes of pixels individually and generate asynchronous and independent “events”. Due to the fundamental difference from conventional cameras, most research on event cameras builds a global event frame by...
Abstract: This paper proposes real-time object depth estimation using only a monocular camera on an onboard computer with a low-cost GPU. Our algorithm estimates scene depth from a sparse feature-based visual odometry algorithm and detects/tracks objects’ bounding boxes by util...
Abstract: In this paper, we propose a robust real-time visual odometry for dynamic environments via a rigid-motion model updated by scene flow. The proposed algorithm consists of spatial motion segmentation and temporal motion tracking. The spatial segmentation first generates several mo...
Abstract: In this work, we propose an edge detection algorithm that estimates the lifetime of events produced by a dynamic vision sensor, also known as an event camera. The event camera, unlike a traditional CMOS camera, generates sparse event data at pixels whose log-intensity changes. D...
Abstract: This paper proposes an edge-based robust RGB-D visual odometry (VO) using 2-D edge divergence minimization. Our approach focuses on enabling the VO to operate in more general environments subject to low texture and changing brightness, by employing image edge regions and th...
Abstract: Disclosed is an ego motion estimation method and apparatus, wherein the apparatus calculates a scene flow field from a plurality of spaces of an input image, clusters the plurality of spaces based on a scene flow, updates a probability vector map for clustered spaces, identifies a stati...
Abstract: In this paper, we propose a rigid motion segmentation algorithm using grid-based optical flow. The algorithm selects several adjacent points among the grid-based optical flows to estimate a motion hypothesis based on a so-called entropy, and generates motion hypotheses between two images, th...
Abstract: In this paper, we propose a robust visual odometry algorithm for dynamic environments via rigid motion segmentation using grid-based optical flow. The algorithm first divides the image frame into a fixed-size grid, then calculates the three-dimensional motion of the grids for light ...
Abstract: This paper surveys visual odometry technology for unmanned systems. Visual odometry is one of the most important technologies for implementing vision-based navigation; therefore, it has been widely applied to unmanned systems in recent years. Visual odometry estimates a trajectory and a pose of t...
Abstract: In this paper, we propose a planar panorama stitching method to blend consecutive images captured by a multirotor equipped with a fish-eye camera. In addition, we suggest an exposure correction method to reduce the brightness difference between contiguous images, and a drift error corre...
This is an application that simulates particles’ movement under Newton’s law of universal gravitation. Particles attract each other, and then they collide and merge or orbit the center of mass. Some particles can fade away toward the horizon of the universe. I designed and implemented the appl...
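As a rough illustration of the underlying physics only (not the app's actual code), each particle's acceleration can be accumulated from Newton's law F = G·m1·m2/r²:

```javascript
// Illustrative O(n^2) gravity step; G and the softening term are arbitrary.
const G = 1.0;
const SOFTENING = 1e-3; // avoids division by zero for very close particles

function gravityStep(particles, dt) {
  for (const p of particles) {
    let ax = 0, ay = 0;
    for (const q of particles) {
      if (q === p) continue;
      const dx = q.x - p.x;
      const dy = q.y - p.y;
      const r2 = dx * dx + dy * dy + SOFTENING;
      const invR3 = 1.0 / Math.pow(r2, 1.5);
      // a = G * m_other * r_vec / |r|^3
      ax += G * q.m * dx * invR3;
      ay += G * q.m * dy * invR3;
    }
    p.vx += ax * dt;
    p.vy += ay * dt;
  }
  for (const p of particles) {
    p.x += p.vx * dt;
    p.y += p.vy * dt;
  }
}

const particles = [
  { x: 0, y: 0, vx: 0, vy: 0, m: 100 },
  { x: 5, y: 0, vx: 0, vy: 4.5, m: 1 }, // roughly orbits the heavy particle
];
gravityStep(particles, 0.01);
```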