Traditional tractor ground leveling is a manual operation with no electronic assistance. Automated Ground Leveling (AGL) would improve leveling quality and operator comfort. This paper outlines a machine learning approach using an Artificial Neural Network (ANN). The proposed AGL uses the tractor inclination angle and the leveling error as inputs; the target output is the raise or lower command for the tractor's scraper implement. The equations used to run the simulations are formulated, applied to the model, and verified during simulation; details are given in Section IV of this paper. A John Deere StarFire 6000 GPS receiver is proposed to obtain latitude, longitude, and altitude, and an IMU to obtain the tractor's pitch data. The proposed inputs and target output proved effective in producing a set of weights and biases that learns to control the scraper implement. Twenty (20) ANN trainings were conducted using the same set of training data. Of the twenty trainings, three sets of trained weights and biases outperformed the training set. The best set produced an RMS error of 0.50449, compared with an RMS error of 0.593 for the human training data, an improvement of about 14.9%. The trained network captures the goal of staying close to the ground reference line. The paper also provides a brief review of ANNs for clarity and applies them to the AGL.
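A minimal sketch of the kind of controller the abstract describes: a small feedforward ANN with the two stated inputs (tractor inclination angle and leveling error) and one raise/lower output, trained by backpropagation. The layer size, activation, learning rate, and the synthetic training data below are illustrative assumptions, not the paper's network or data.

# Sketch of an ANN for AGL: inputs [inclination_deg, leveling_error_m], output in [-1, 1]
# interpreted as a lower (-) / raise (+) command for the scraper implement.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set standing in for recorded operator commands
X = rng.uniform([-5.0, -0.2], [5.0, 0.2], size=(200, 2))
y = np.tanh(-4.0 * X[:, 1:2] - 0.05 * X[:, 0:1])

# One hidden layer with tanh units; weights trained by plain gradient descent on MSE
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    out = np.tanh(h @ W2 + b2)        # raise/lower command
    err = out - y
    # Backpropagation of the squared error through both tanh layers
    d_out = err * (1.0 - out**2)
    d_h = (d_out @ W2.T) * (1.0 - h**2)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)

rmse = np.sqrt(np.mean((np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2) - y) ** 2))
print(f"training RMS error: {rmse:.4f}")

The trained weights and biases (W1, b1, W2, b2) are what would be deployed on the tractor; at run time each new [inclination, leveling error] sample is pushed through the same two tanh layers to produce the command.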
Society of Instrument and Control Engineers of Japan, Sep 8, 2021
Observed color in images is easily affected by lighting conditions such as the position of the sun, the weather, or the time of day. For image recognition on an autonomous mobile robot navigating outdoors in particular, the effect of lighting conditions must be considered to achieve consistently robust color detection as the lighting changes. In this paper, we employ new reference color patches to perform robust color detection on the surrounding scene captured by an omnidirectional camera. To demonstrate the color-detection capability of the proposed approach, we apply it to the specified-person search task defined by the Tsukuba Challenge 2019 rules. Using the proposed method, we demonstrate that the mobile robot detected colors stably and identified the specified persons regardless of the changes in surrounding light that occur as the robot moves.
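A minimal sketch of the reference-patch idea: if patches of known color ride on the robot and are always visible to the camera, a per-channel correction fitted against them can map observed colors back to a lighting-independent space before thresholding. The patch values and the linear per-channel model below are illustrative assumptions, not the paper's calibration.

# Fit reference = gain * observed + offset per channel from the on-board patches,
# then apply the same map to scene pixels so color thresholds stay stable.
import numpy as np

# Hypothetical known patch colors (RGB, 0-1) and what the camera observed under current light
reference = np.array([[1.0, 1.0, 1.0], [0.5, 0.5, 0.5], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
observed  = np.array([[0.8, 0.9, 0.7], [0.4, 0.45, 0.35], [0.75, 0.05, 0.02], [0.05, 0.1, 0.7]])

gains, offsets = [], []
for c in range(3):
    # Least-squares fit of the linear map, one channel at a time
    A = np.column_stack([observed[:, c], np.ones(len(observed))])
    g, o = np.linalg.lstsq(A, reference[:, c], rcond=None)[0]
    gains.append(g); offsets.append(o)
gains, offsets = np.array(gains), np.array(offsets)

def correct(rgb):
    """Map an observed RGB value into the reference color space."""
    return np.clip(gains * rgb + offsets, 0.0, 1.0)

print(correct(np.array([0.6, 0.5, 0.4])))   # corrected color of some scene pixel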
... Practical knowledge can be incorporated into the identifier (9) by limiting the estimates of m, k, and b_i to realistic values. This enhancement improves the convergence and accuracy of the parameter estimates and helps to avoid the possibility of instability in ...
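A minimal sketch of the idea in this excerpt, assuming a mass-spring-damper style parameterization: run a gradient-type identifier and project each estimate back into a physically realistic range after every update. The bounds, the update rule, and the synthetic data are illustrative assumptions, not the paper's identifier.

# Identify m, b, k in m*x'' + b*x' + k*x = u, clipping each estimate to an assumed range
import numpy as np

true_m, true_b, true_k = 2.0, 0.8, 5.0
bounds = {"m": (0.5, 10.0), "b": (0.1, 5.0), "k": (1.0, 20.0)}   # assumed realistic ranges
theta = np.array([1.0, 1.0, 1.0])                                # initial [m, b, k] estimate

rng = np.random.default_rng(1)
lr = 0.02
for _ in range(5000):
    x, xd, xdd = rng.normal(size=3)                  # stand-in motion samples
    u = true_m * xdd + true_b * xd + true_k * x      # measured input force
    phi = np.array([xdd, xd, x])                     # regressor
    err = phi @ theta - u                            # prediction error
    theta -= lr * err * phi                          # gradient-type update
    # Projection step: keep each estimate inside its realistic range
    for i, key in enumerate(("m", "b", "k")):
        lo, hi = bounds[key]
        theta[i] = min(max(theta[i], lo), hi)

print("estimated m, b, k:", theta)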
Society of Instrument and Control Engineers of Japan, Sep 8, 2021
Stable perception of its environment is essential for navigating an autonomous mobile robot safely. Most autonomous mobile robots employ a LiDAR (light detection and ranging) sensor to sense their surroundings. A LiDAR can identify the surrounding environment stably and accurately under normal conditions, but it has a problem in rain because it detects raindrops as objects. When an autonomous mobile robot uses LiDAR to detect obstacles during rain, the laser emitted from the LiDAR may hit raindrops, resulting in false-positive readings. In this paper, we employ a 3D LiDAR and present a new method for the automatic detection and tracking of raindrops. We exploit the three-dimensional behavior of raindrops falling under gravity to reconstruct a rain-free view of the surrounding environment. We conducted experiments with actual and simulated raindrops and validated the effectiveness of the method.
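A minimal sketch of the gravity-based idea, assuming two consecutive point clouds: a return is treated as a raindrop candidate if its nearest neighbor in the previous scan lies almost directly above it by roughly the distance a drop would fall between scans. The thresholds, assumed fall speed, and nearest-neighbor association are illustrative assumptions, not the paper's tracking algorithm.

# Keep only the points of the current scan that do not behave like falling raindrops
import numpy as np

def filter_raindrops(prev_pts, curr_pts, dt, fall_speed=7.0, xy_tol=0.05, z_tol=0.15):
    """prev_pts, curr_pts: (N,3)/(M,3) arrays of x, y, z in meters; dt in seconds.
    fall_speed is an assumed raindrop terminal velocity in m/s."""
    expected_drop = fall_speed * dt
    keep = []
    for p in curr_pts:
        # Nearest previous point in the horizontal plane
        d_xy = np.linalg.norm(prev_pts[:, :2] - p[:2], axis=1)
        j = np.argmin(d_xy)
        dz = prev_pts[j, 2] - p[2]          # positive if the point moved downward
        is_raindrop = d_xy[j] < xy_tol and abs(dz - expected_drop) < z_tol
        if not is_raindrop:
            keep.append(p)
    return np.array(keep)

# Example: a static wall point plus one falling raindrop between scans 0.1 s apart
prev_scan = np.array([[5.0, 0.0, 1.0], [2.0, 1.0, 1.5]])
curr_scan = np.array([[5.0, 0.0, 1.0], [2.0, 1.0, 0.8]])
print(filter_raindrops(prev_scan, curr_scan, dt=0.1))   # only the wall point survives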
The multi-sensor collision avoidance system (CAS) presented in this paper is based on a fusion scheme that utilizes fuzzy clustering and estimation techniques. Measurements from radar, vision, and sonar are discriminated, fused, and used to estimate the relative motion between the prime and front vehicles. The estimated motion is then used to predict the possibility of collision with the front vehicle.
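A minimal sketch of the fusion-and-estimation layer only: variance-weighted fusion of radar, vision, and sonar range readings, a constant-velocity Kalman filter for relative range and range rate, and a simple time-to-collision check. The sensor variances, motion model, and sample rate are illustrative assumptions, and the paper's fuzzy-clustering discrimination step is not reproduced here.

import numpy as np

SENSOR_VAR = {"radar": 0.04, "vision": 0.25, "sonar": 0.5}   # assumed measurement variances (m^2)

def fuse(readings):
    """Variance-weighted average of per-sensor range readings {'radar': r, ...}."""
    w = np.array([1.0 / SENSOR_VAR[s] for s in readings])
    z = np.array([readings[s] for s in readings])
    return float(w @ z / w.sum()), 1.0 / w.sum()

def kalman_step(x, P, z, R, dt=0.1):
    """One predict/update step of a constant-velocity filter; state x = [range, range_rate]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.diag([0.01, 0.1])
    H = np.array([[1.0, 0.0]])
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                      # innovation against the fused range
    S = H @ P @ H.T + R
    K = P @ H.T / S
    return x + (K * y).ravel(), (np.eye(2) - K @ H) @ P

x, P = np.array([20.0, 0.0]), np.eye(2)
for step in range(30):                 # front vehicle closing at about 2 m/s
    z, R = fuse({"radar": 20.0 - 0.2 * step, "vision": 20.1 - 0.2 * step, "sonar": 19.8 - 0.2 * step})
    x, P = kalman_step(x, P, z, R)
rng_est, rate_est = x
ttc = rng_est / -rate_est if rate_est < 0 else float("inf")
print(f"range {rng_est:.1f} m, closing rate {-rate_est:.1f} m/s, time to collision {ttc:.1f} s")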
The Holy Grail of computer vision and image analysis is to develop an artificial visual intelligence system that can recognize, learn, and identify, in the general case, arbitrary objects in arbitrary situations. Cognitive science researchers have been studying for centuries how biological systems accomplish this, while research in computer vision and image analysis has been investigating how artificial systems can do the same. This paper proposes what could happen, and documents what has happened, when these fields, the biological, the psychological, and the artificial, are brought together to offer a solution to the intelligent image segmentation problem, a precursor to obtaining the grail.