<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://yurimjeon1892.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://yurimjeon1892.github.io/" rel="alternate" type="text/html" /><updated>2025-12-18T15:27:31+00:00</updated><id>https://yurimjeon1892.github.io/feed.xml</id><title type="html">Yurim Jeon, Ph.D.</title><subtitle>Ph.D. in Electrical and Computer Engineering</subtitle><author><name>Yurim Jeon</name></author><entry><title type="html">Follow the Footprints: Self-supervised Traversability Estimation for Off-road Vehicle Navigation based on Geometric and Visual Cues</title><link href="https://yurimjeon1892.github.io/research/2024/03/01/research-traversability-estimation.html" rel="alternate" type="text/html" title="Follow the Footprints: Self-supervised Traversability Estimation for Off-road Vehicle Navigation based on Geometric and Visual Cues" /><published>2024-03-01T00:00:00+00:00</published><updated>2024-03-01T00:00:00+00:00</updated><id>https://yurimjeon1892.github.io/research/2024/03/01/research-traversability-estimation</id><content type="html" xml:base="https://yurimjeon1892.github.io/research/2024/03/01/research-traversability-estimation.html"><![CDATA[<div align="center">
    <div style="position: relative; padding-bottom: 56.25%; height: 0;">
        <iframe src="https://www.youtube.com/embed/zZ7iKr001Z4" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" style="position: absolute; width: 100%; height: 100%; left: 0; top: 0;"></iframe>
    </div>
</div>

<p><br /></p>

<div class="icon-container">
    <span class="link-with-icon">
        <i data-feather="paperclip"></i>
        <a href="https://arxiv.org/abs/2402.15363" target="_blank">Paper Link</a>
    </span> 
    <span class="link-with-icon">
        <i data-feather="github"></i>
        <a href="https://github.com/yurimjeon1892/FtFoot.git" target="_blank">Code</a>
    </span> 
    <span class="link-with-icon">
        <i data-feather="youtube"></i>
        <a href="https://youtu.be/zZ7iKr001Z4" target="_blank">Video</a>
    </span>    
</div>

<p><br /></p>

<p>In this study, we address the off-road traversability estimation problem: predicting the areas in which a robot can navigate in off-road environments. An off-road environment is an unstructured environment comprising a mixture of traversable and non-traversable spaces, which makes traversability estimation challenging. This study highlights three primary factors that affect a robot’s traversability in an off-road environment: surface slope, semantic information, and the robot platform. We present two strategies for estimating traversability, using a guide filter network (GFN) and a footprint supervision module (FSM). The first strategy builds a novel GFN from a newly designed guide filter layer. The GFN interprets the surface and semantic information in the input data and integrates them to extract features optimized for traversability estimation. The second strategy develops the FSM, a self-supervision module that utilizes the path traversed by the robot during a preliminary drive, known as a footprint. This enables traversability predictions that reflect the characteristics of the robot platform. Based on these two strategies, the proposed method overcomes the limitations of existing methods, which require laborious human supervision and lack scalability. Extensive experiments across diverse platforms (automobiles and unmanned ground vehicles) and terrains (herbfields, woodlands, and farmlands) demonstrate that the proposed method is compatible with various robot platforms and adaptable to a range of terrains.</p>
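<p>As a rough illustration of the footprint idea, the pre-driving trajectory can be rasterized into a grid of self-supervised labels, where traversed cells become positive examples. The function below is a minimal sketch under assumed grid parameters (function name, resolution, and footprint radius are hypothetical), not the paper's implementation:</p>

```python
import numpy as np

def footprint_labels(trajectory_xy, grid_shape, resolution, origin_xy,
                     footprint_radius=0.5):
    """Rasterize a pre-driving trajectory into self-supervised labels:
    1 = traversed by the robot (positive), 0 = unknown."""
    labels = np.zeros(grid_shape, dtype=np.uint8)
    half = int(round(footprint_radius / resolution))  # footprint half-width, in cells
    for x, y in trajectory_xy:
        # Convert world coordinates (meters) to grid indices.
        col = int((x - origin_xy[0]) / resolution)
        row = int((y - origin_xy[1]) / resolution)
        # Mark the square patch of cells covered by the robot's footprint.
        r0, r1 = max(0, row - half), min(grid_shape[0], row + half + 1)
        c0, c1 = max(0, col - half), min(grid_shape[1], col + half + 1)
        labels[r0:r1, c0:c1] = 1
    return labels
```

In the paper these footprint labels supervise a network instead of being used directly, which is what lets the learned predictions generalize beyond the driven path.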

<p>This paper was presented at the <strong>IEEE International Conference on Robotics and Automation (ICRA), May 2024</strong>.</p>]]></content><author><name>Yurim Jeon</name></author><category term="Research" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Development of perception system for unmanned vehicles in off-road scenarios</title><link href="https://yurimjeon1892.github.io/project/2023/02/01/project-off-road-perception.html" rel="alternate" type="text/html" title="Development of perception system for unmanned vehicles in off-road scenarios" /><published>2023-02-01T00:00:00+00:00</published><updated>2023-02-01T00:00:00+00:00</updated><id>https://yurimjeon1892.github.io/project/2023/02/01/project-off-road-perception</id><content type="html" xml:base="https://yurimjeon1892.github.io/project/2023/02/01/project-off-road-perception.html"><![CDATA[<p><strong>Government project, Seoul National University</strong></p>

<figure>
    <img src="/assets/off-road.png" />
</figure>

<p>Unstructured off-road environments present new challenges for autonomous driving. This project focuses on developing a perception system for unmanned vehicles in off-road scenarios.</p>

<p>Off-road environments have the following characteristics:</p>

<ul>
  <li>Ambiguous definition of traversable space: In off-road scenarios, driving intelligence must comprehensively analyze spatial and visual data to accurately distinguish traversable spaces.</li>
  <li>Environmental changes according to seasons: Even in the same area, the environment can appear completely different, such as dense foliage in summer versus snow-covered landscapes in winter.</li>
</ul>

<p>We successfully developed a perception system to support the safe operation of vehicles in off-road environments, completing unmanned exploration experiments in mountainous terrain.</p>]]></content><author><name>Yurim Jeon</name></author><category term="Project" /><summary type="html"><![CDATA[Unstructured off-road environments present new challenges for autonomous driving. This project focuses on developing a perception system for unmanned vehicles in off-road scenarios.]]></summary></entry><entry><title type="html">Development of automatic labeling tool for autonomous driving dataset generation</title><link href="https://yurimjeon1892.github.io/project/2022/11/01/project-auto-label.html" rel="alternate" type="text/html" title="Development of automatic labeling tool for autonomous driving dataset generation" /><published>2022-11-01T00:00:00+00:00</published><updated>2022-11-01T00:00:00+00:00</updated><id>https://yurimjeon1892.github.io/project/2022/11/01/project-auto-label</id><content type="html" xml:base="https://yurimjeon1892.github.io/project/2022/11/01/project-auto-label.html"><![CDATA[<p><strong>Thordrive</strong></p>

<p>Creating high-quality, large-scale datasets is crucial for artificial intelligence research. This project aims to develop an automatic labeling system to reduce human resource costs in dataset creation and enhance dataset quality.</p>

<p>As a deep learning engineer, I designed and developed a deep learning engine for multi-sensor object detection. This engine predicts object class, position, scale, rotation, and track IDs from raw collected data, generating high-quality annotations for autonomous driving research. Our automatic labeling system reduces human resource costs for dataset creation by up to 30%.</p>]]></content><author><name>Yurim Jeon</name></author><category term="Project" /><summary type="html"><![CDATA[Creating high-quality, large-scale datasets is crucial for artificial intelligence research. This project aims to develop an automatic labeling system to reduce human resource costs in dataset creation and enhance dataset quality.]]></summary></entry><entry><title type="html">EFGHNet: A Versatile Image-to-Point Cloud Registration Network for Extreme Outdoor Environment</title><link href="https://yurimjeon1892.github.io/research/2022/01/01/research-image-to-point-cloud-registration.html" rel="alternate" type="text/html" title="EFGHNet: A Versatile Image-to-Point Cloud Registration Network for Extreme Outdoor Environment" /><published>2022-01-01T00:00:00+00:00</published><updated>2022-01-01T00:00:00+00:00</updated><id>https://yurimjeon1892.github.io/research/2022/01/01/research-image-to-point-cloud-registration</id><content type="html" xml:base="https://yurimjeon1892.github.io/research/2022/01/01/research-image-to-point-cloud-registration.html"><![CDATA[<div align="center">
    <div style="position: relative; padding-bottom: 56.25%; height: 0;">
        <iframe src="https://www.youtube.com/embed/Xo7GRKyvKuo" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" style="position: absolute; width: 100%; height: 100%; left: 0; top: 0;"></iframe>
    </div>
</div>

<p><br /></p>

<div class="icon-container">
    <span class="link-with-icon">
        <i data-feather="paperclip"></i>
        <a href="https://ieeexplore.ieee.org/document/9799751" target="_blank">Paper Link</a>
    </span> 
    <span class="link-with-icon">
        <i data-feather="github"></i>
        <a href="https://github.com/yurimjeon1892/EFGH.git" target="_blank">Code</a>
    </span> 
    <span class="link-with-icon">
        <i data-feather="youtube"></i>
        <a href="https://youtu.be/Xo7GRKyvKuo" target="_blank">Video</a>
    </span>    
</div>

<p><br /></p>

<p>We present an accurate and robust image-to-point cloud registration method that is viable in both urban and off-road environments. Existing image-to-point cloud registration methods have focused on vehicle platforms on paved roads, so registration on unmanned ground vehicle (UGV) platforms for off-road driving remains an open problem. Our objective is a versatile solution for image-to-point cloud registration.</p>

<p>We present a method that stably estimates a precise transformation between an image and a point cloud in two phases: the two inputs are first aligned in a virtual reference coordinate system (virtual-alignment), and then compared and matched to complete the registration (compare-and-match). Our main contribution is the introduction of divide-and-conquer strategies to image-to-point cloud registration. The virtual-alignment phase effectively reduces relative pose differences without cross-modality comparison. The compare-and-match phase divides the matching of the image and point cloud into rotation and translation steps. By breaking down the registration problem, we can develop algorithms that operate robustly in various environments.</p>
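<p>The divide-and-conquer idea in the compare-and-match phase can be sketched as a sequential search: pick the best rotation first, then the best translation given that rotation, instead of searching the joint space. The discrete grids and toy score function below are illustrative stand-ins for the paper's learned cross-modal comparison:</p>

```python
def match_rotation_then_translation(score_fn, yaw_grid, trans_grid):
    """Sequential search: best yaw with translation held at zero,
    then best translation given that yaw. Searching the two subproblems
    separately costs |yaw_grid| + |trans_grid| evaluations rather than
    |yaw_grid| * |trans_grid| for a joint search."""
    best_yaw = max(yaw_grid, key=lambda y: score_fn(y, (0.0, 0.0)))
    best_trans = max(trans_grid, key=lambda t: score_fn(best_yaw, t))
    return best_yaw, best_trans

def toy_score(yaw, trans):
    # Toy cross-modal similarity that peaks at yaw = 30 deg, trans = (1.0, -0.5).
    return -abs(yaw - 30.0) - abs(trans[0] - 1.0) - abs(trans[1] + 0.5)

yaw_grid = [0.0, 10.0, 20.0, 30.0, 40.0]
trans_grid = [(x, y) for x in (-1.0, 0.0, 1.0) for y in (-0.5, 0.0, 0.5)]
```

The decomposition is only sound once the virtual-alignment phase has already reduced the relative pose difference, which is the role that phase plays in the paper.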

<p>We performed extensive experiments on four datasets (Rellis-3D, KITTI odometry, nuScenes, and KITTI raw). Experiments cover a variety of situations in which image-to-point cloud registration is applied, from image-based localization in off-road environments to camera-LiDAR extrinsic calibration in urban environments. The experiments demonstrate that the proposed method outperforms the existing methods in accuracy and robustness.</p>

<p>This paper is published in <strong>IEEE Robotics and Automation Letters (RA-L)</strong>, and presented at <strong>IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2022</strong>.</p>]]></content><author><name>Yurim Jeon</name></author><category term="Research" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">ABCD: Attentive Bilateral Convolutional Network for Robust Depth Completion</title><link href="https://yurimjeon1892.github.io/research/2021/06/01/research-depth-completion.html" rel="alternate" type="text/html" title="ABCD: Attentive Bilateral Convolutional Network for Robust Depth Completion" /><published>2021-06-01T00:00:00+00:00</published><updated>2021-06-01T00:00:00+00:00</updated><id>https://yurimjeon1892.github.io/research/2021/06/01/research-depth-completion</id><content type="html" xml:base="https://yurimjeon1892.github.io/research/2021/06/01/research-depth-completion.html"><![CDATA[<div align="center">
    <div style="position: relative; padding-bottom: 56.25%; height: 0;">
        <iframe src="https://www.youtube.com/embed/29uWojsPU4A" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" style="position: absolute; width: 100%; height: 100%; left: 0; top: 0;"></iframe>
    </div>
</div>

<p><br /></p>

<div class="icon-container">
    <span class="link-with-icon">
        <i data-feather="paperclip"></i>
        <a href="https://ieeexplore.ieee.org/document/9565353" target="_blank">Paper Link</a>
    </span> 
    <span class="link-with-icon">
        <i data-feather="github"></i>
        <a href="https://github.com/yurimjeon1892/ABCD.git" target="_blank">Code</a>
    </span> 
    <span class="link-with-icon">
        <i data-feather="youtube"></i>
        <a href="https://youtu.be/29uWojsPU4A" target="_blank">Video</a>
    </span>    
</div>

<p><br /></p>

<p>We propose a point-cloud-centric depth completion method, the attentive bilateral convolutional network for depth completion (ABCD). The proposed method uses LiDAR and camera data to improve the resolution of sparse depth information. Color images, long regarded as fundamental to depth completion, are inevitably sensitive to lighting and weather conditions.</p>

<p>We designed an attentive bilateral convolutional layer (ABCL) to build a depth completion network that is robust under diverse environmental conditions. The ABCL efficiently learns geometric characteristics by directly leveraging the 3D point cloud, and it enhances the representation of sparse depth information by highlighting core features while suppressing clutter. ABCD, with the ABCL as its building block, stably fills voids in sparse depth images even under unfamiliar conditions, with minimal dependency on unstable camera sensors. The proposed method is therefore expected to address depth completion failures caused by changes in the capture environment.</p>
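<p>The highlight-and-suppress behavior can be sketched as a softmax weighting of sparse neighbor depths by feature similarity, so that informative points dominate the aggregate while clutter is downweighted. This is a simplified NumPy stand-in for the ABCL (the function name and temperature parameter are illustrative), not the layer itself:</p>

```python
import numpy as np

def attentive_aggregate(values, features, query_feature, temperature=1.0):
    """Aggregate sparse neighbor depth values with softmax attention
    weights derived from feature similarity to the query point."""
    values = np.asarray(values, dtype=float)
    features = np.asarray(features, dtype=float)
    # Similarity between the query feature and each neighbor feature.
    sims = features @ np.asarray(query_feature, dtype=float) / temperature
    # Softmax (shifted by the max for numerical stability).
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    return float(weights @ values)
```

With a low temperature the most similar neighbor dominates; with a high temperature the aggregation approaches a plain average, so the attention interpolates between nearest-neighbor and uniform filling.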

<p>Through comparative experiments against other methods on the KITTI and VirtualKITTI2 datasets, we demonstrate the strong performance of the proposed method in diverse driving environments.</p>

<p>This paper is published in <strong>IEEE Robotics and Automation Letters (RA-L)</strong>.</p>]]></content><author><name>Yurim Jeon</name></author><category term="Research" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Research on human-level driving intelligence for autonomous driving of unmanned vehicles</title><link href="https://yurimjeon1892.github.io/project/2021/02/01/project-driving-intelligence.html" rel="alternate" type="text/html" title="Research on human-level driving intelligence for autonomous driving of unmanned vehicles" /><published>2021-02-01T00:00:00+00:00</published><updated>2021-02-01T00:00:00+00:00</updated><id>https://yurimjeon1892.github.io/project/2021/02/01/project-driving-intelligence</id><content type="html" xml:base="https://yurimjeon1892.github.io/project/2021/02/01/project-driving-intelligence.html"><![CDATA[<p><strong>Government project, Seoul National University</strong></p>

<p>While driving, humans encounter vast amounts of information, selectively focus on key details, and make quick decisions. This project aims to develop driving intelligence that replicates the efficient selection and concentration processes observed in human perception systems.</p>

<p>We applied the attention mechanism to driving intelligence, highlighting crucial information while disregarding less important details. This approach enables high-precision perception with fewer computations.</p>
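<p>The mechanism behind this selection-and-concentration behavior is standard scaled dot-product attention, sketched below in NumPy: each query assigns large weights to the keys most similar to it and near-zero weights to the rest, so computation concentrates on the relevant inputs. This is the generic formulation, not the project's specific network:</p>

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(q k^T / sqrt(d)) v: queries attend most strongly to the
    keys they are similar to, suppressing everything else."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

A query aligned with one key receives nearly all of its weight from that key's value, which is the "highlight crucial information, disregard the rest" behavior described above.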

<p>The developed perception system processes real-time 2D and 3D data in urban environments, delivering highly accurate perception results for safe autonomous driving.</p>]]></content><author><name>Yurim Jeon</name></author><category term="Project" /><summary type="html"><![CDATA[While driving, humans encounter vast amounts of information, selectively focus on key details, and make quick decisions. This project aims to develop driving intelligence that replicates the efficient selection and concentration processes observed in human perception systems.]]></summary></entry><entry><title type="html">Development of real-time object detection module using LiDAR sensor</title><link href="https://yurimjeon1892.github.io/project/2020/06/01/project-lidar-object-detection.html" rel="alternate" type="text/html" title="Development of real-time object detection module using LiDAR sensor" /><published>2020-06-01T00:00:00+00:00</published><updated>2020-06-01T00:00:00+00:00</updated><id>https://yurimjeon1892.github.io/project/2020/06/01/project-lidar-object-detection</id><content type="html" xml:base="https://yurimjeon1892.github.io/project/2020/06/01/project-lidar-object-detection.html"><![CDATA[<p><strong>Industry-academia cooperation project, Seoul National University</strong></p>

<div align="center">
    <div style="position: relative; padding-bottom: 56.25%; height: 0;">
        <iframe src="https://www.youtube.com/embed/pnsvPiWt4Ss" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" style="position: absolute; width: 100%; height: 100%; left: 0; top: 0;"></iframe>
    </div>
</div>

<p><br /></p>

<p>LiDAR sensors are resilient to variations in lighting and weather and provide accurate position measurements, enabling object detection algorithms to deliver precise distance information about surrounding objects. This makes LiDAR essential for autonomous driving. This project focused on developing a deep learning-based object detection algorithm using LiDAR for autonomous vehicles operating in urban environments. As the project lead, I was involved in all stages, from designing the deep learning model to implementing the source code for execution within a ROS environment.</p>

<p>Project objectives:</p>
<ul>
  <li>The final development outcome should be provided in the form of executable source code for the ROS environment.</li>
  <li>The developed object detection algorithm should meet the following performance criteria:
    <ul>
      <li>Execution time on NVIDIA GeForce GTX 1080 Ti or RTX 2080 with Intel Core i7 machines should be 50 ms or less.</li>
      <li>The mean average precision (mAP) difference compared to the state-of-the-art object detection algorithm should be 5% or less.</li>
    </ul>
  </li>
</ul>

<p>My contributions to the project include:</p>

<ul>
  <li>Designing the deep learning algorithm in Python using PyTorch and later converting it to C++ for ROS.</li>
  <li>Developing both the pre-processing and post-processing stages of the object detection algorithm in C++ to improve processing speed.</li>
  <li>Converting the deep learning model to ONNX format for enhanced compatibility. In cases where ONNX conversion was not feasible, I used CUDA for better computation speed.</li>
</ul>
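<p>A simple timing harness, such as the hypothetical helper below, is one way to check an inference function against the 50 ms budget; the actual profiling in this project was done in the C++/ROS pipeline on the target GPU hardware:</p>

```python
import statistics
import time

def mean_latency_ms(fn, warmup=3, iters=20):
    """Average per-call latency of fn in milliseconds. Warmup calls are
    discarded so one-time costs (allocation, JIT, cache fill) do not
    skew the measurement."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(samples)
```

For GPU models, each timed call must also synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock, otherwise only kernel-launch time is measured.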

<p>As a result, we achieved the following project objectives:</p>

<ul>
  <li>Execution time within 40 ms on the testbed.</li>
  <li>mAP difference of 5% or less compared to the state-of-the-art PointPillars.</li>
</ul>]]></content><author><name>Yurim Jeon</name></author><category term="Project" /><summary type="html"><![CDATA[LiDAR sensors are resilient to variations in light and weather, providing accurate position measurements that are crucial for object detection algorithms to deliver precise distance information about surrounding objects. Therefore, the use of LiDAR sensors is essential for autonomous driving. This project focuses on developing a deep learning-based object detection algorithm using LiDAR for autonomous vehicles operating in urban environments. As the project lead, I was involved in all stages, from designing the deep learning model to implementing the source code for execution within a ROS environment.]]></summary></entry><entry><title type="html">A BRIEF-Gist Based Efficient Place Recognition for Indoor Home Service Robots</title><link href="https://yurimjeon1892.github.io/research/2016/03/01/research-place-recognition.html" rel="alternate" type="text/html" title="A BRIEF-Gist Based Efficient Place Recognition for Indoor Home Service Robots" /><published>2016-03-01T00:00:00+00:00</published><updated>2016-03-01T00:00:00+00:00</updated><id>https://yurimjeon1892.github.io/research/2016/03/01/research-place-recognition</id><content type="html" xml:base="https://yurimjeon1892.github.io/research/2016/03/01/research-place-recognition.html"><![CDATA[<figure style="text-align: center;">
    <img src="/assets/place_recognition_overview.png" style="display: block; margin: 0 auto;" />
    <figcaption>Overview</figcaption>
</figure>

<div class="icon-container">
    <span class="link-with-icon">
        <i data-feather="paperclip"></i>
        <a href="https://ieeexplore.ieee.org/document/7832506" target="_blank">Paper Link</a>
    </span>    
</div>

<p><br /></p>

<p>Visual place recognition has been extensively researched over the last decade. Most previous works have concentrated on improving performance under environmental changes caused by illumination, weather, or season in outdoor scenes, where abundant features and textures are available for place recognition. In contrast, when a robot moves in a home environment, input images often contain fewer features and textures for place recognition, which degrades precision and recall performance.</p>

<p>This paper presents an efficient place recognition method based on the binary robust independent elementary features gist (BRIEF-Gist) descriptor for indoor home service robots. The proposed method extracts multiple BRIEF-Gist descriptors from an input image, which yields higher performance, and a simple data structure for fast comparison between images is also presented. In home environment experiments, the original BRIEF-Gist and the proposed method achieve maximum recall rates of 1.6% and 9.7%, respectively, both at 100% precision, while the local-feature-based DLoopDetector method stays below 5% in both recall and precision. We also measured computation time, defined as the average time to process one image against a map of 2058 places: the proposed method takes 31.4 ms without the proposed data structure and 10.5 ms with it.</p>
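<p>Because BRIEF-Gist descriptors are binary, comparing two places reduces to an XOR followed by a popcount, which is why matching a query against thousands of stored places stays cheap. The sketch below shows this core operation with a naive linear scan (helper names are illustrative, and the paper's fast data structure is omitted):</p>

```python
def hamming_distance(d1: bytes, d2: bytes) -> int:
    """Hamming distance between two binary descriptors:
    XOR corresponding bytes and count the set bits."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def best_match(query: bytes, database: list) -> int:
    """Index of the stored place whose descriptor is closest to the
    query, by brute-force linear scan over the database."""
    return min(range(len(database)), key=lambda i: hamming_distance(query, database[i]))
```

The paper's contribution on top of this primitive is organizing the stored descriptors so that the scan over 2058 places runs in 10.5 ms instead of 31.4 ms.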

<p>This paper was presented at the <strong>International Conference on Control, Automation and Systems (ICCAS)</strong>.</p>]]></content><author><name>Yurim Jeon</name></author><category term="Research" /><summary type="html"><![CDATA[Overview]]></summary></entry></feed>