
Search Results (474)

Search Parameters:
Keywords = Kinect

Article
IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning
by Jacky C.K. Chow, Derek D. Lichti, Jeroen D. Hol, Giovanni Bellusci and Henk Luinge
Robotics 2014, 3(3), 247-280; https://doi.org/10.3390/robotics3030247 - 11 Jul 2014
Cited by 42 | Viewed by 15998
Abstract
Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects’ depth maps are processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means of mapping large areas with many occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
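The point-to-plane ICP step mentioned in this abstract can be illustrated with a small linearized solver. This is a generic sketch that assumes matched correspondences and small rotation angles; it is not the paper's implicit-IEKF formulation, and all names are illustrative.

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One linearized point-to-plane ICP step: minimize
    sum(((R @ p + t - q) . n)^2) with R ~ I + [w]x for small rotations.
    Returns the rotation vector w and translation t."""
    A = np.hstack([np.cross(src, normals), normals])  # rows [p x n, n]
    b = np.einsum('ij,ij->i', dst - src, normals)     # (q - p) . n
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]

# Demo: recover a small known rigid motion from matched points.
rng = np.random.default_rng(0)
p = rng.normal(size=(200, 3))
n = rng.normal(size=(200, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
w_true = np.array([0.0, 0.0, 0.01])        # ~0.57 degrees about z
t_true = np.array([0.05, -0.02, 0.03])
theta = np.linalg.norm(w_true)             # exact rotation via Rodrigues
k = w_true / theta
K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
q = p @ R.T + t_true
w_est, t_est = point_to_plane_step(p, q, n)
```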

Article
Directional Joint Bilateral Filter for Depth Images
by Anh Vu Le, Seung-Won Jung and Chee Sun Won
Sensors 2014, 14(7), 11362-11378; https://doi.org/10.3390/s140711362 - 26 Jun 2014
Cited by 54 | Viewed by 9610
Abstract
Depth maps taken by the low-cost Kinect sensor are often noisy and incomplete. Thus, post-processing for obtaining reliable depth maps is necessary for advanced image and video applications such as object recognition and multi-view rendering. In this paper, we propose adaptive directional filters that fill the holes and suppress the noise in depth maps. Specifically, novel filters whose window shapes are adaptively adjusted based on the edge direction of the color image are presented. Experimental results show that our method yields higher-quality filtered depth maps than other existing methods, especially at the edge boundaries.
(This article belongs to the Section Physical Sensors)
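The filtering idea builds on the joint (cross) bilateral filter: depth pixels are averaged with weights guided by the registered color image, so holes get filled without blurring across color edges. Below is a plain, non-directional sketch of that baseline; the paper's contribution of adapting the window shape to the color-edge direction is deliberately omitted, and all parameter values are illustrative.

```python
import numpy as np

def joint_bilateral_depth(depth, gray, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Joint bilateral filter: smooth/fill a depth map using edges from
    the registered intensity image `gray` (values in [0, 1]).
    Zero depth values are treated as holes and excluded from the average."""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    dpad = np.pad(depth.astype(float), radius)            # pad holes with 0
    gpad = np.pad(gray.astype(float), radius, mode='edge')
    for i in range(h):
        for j in range(w):
            dwin = dpad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            gwin = gpad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            range_w = np.exp(-(gwin - gpad[i + radius, j + radius])**2
                             / (2 * sigma_r**2))
            wts = spatial * range_w * (dwin > 0)          # exclude holes
            s = wts.sum()
            out[i, j] = (wts * dwin).sum() / s if s > 0 else 0.0
    return out
```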

Article
A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors
by Jan Dupuis, Stefan Paulus, Jan Behmann, Lutz Plümer and Heiner Kuhlmann
Sensors 2014, 14(4), 7563-7579; https://doi.org/10.3390/s140407563 - 24 Apr 2014
Cited by 13 | Viewed by 8793
Abstract
The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the measuring systems used are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laser scanning system, to achieve low-resolution scans of the whole scene and highly detailed scans of selected objects, respectively. Afterwards, the high-resolution David objects are automatically assigned to their corresponding Kinect objects using surface feature histograms and SVM classification. The corresponding objects are aligned using an ICP implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.
(This article belongs to the Section Physical Sensors)

Article
2.5D Multi-View Gait Recognition Based on Point Cloud Registration
by Jin Tang, Jian Luo, Tardi Tjahjadi and Yan Gao
Sensors 2014, 14(4), 6124-6143; https://doi.org/10.3390/s140406124 - 28 Mar 2014
Cited by 31 | Viewed by 8758
Abstract
This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) is proposed to generate multi-view training galleries. The concept of a density- and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space, enabling data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on an in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.
(This article belongs to the Section Physical Sensors)
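The 2D principal component analysis (2DPCA) step mentioned in this abstract operates on image matrices directly rather than on flattened vectors. A generic sketch follows; it is not the paper's exact pipeline (which first applies a DCT), and all names are illustrative.

```python
import numpy as np

def two_d_pca(images, k):
    """2DPCA: learn an n x k projection from a stack of m x n feature
    images and return the reduced m x k features plus the projection."""
    X = np.asarray(images, dtype=float)            # shape (M, m, n)
    C = X - X.mean(axis=0)
    G = np.einsum('iab,iac->bc', C, C) / len(X)    # n x n image covariance
    _, vecs = np.linalg.eigh(G)                    # eigenvalues ascending
    V = vecs[:, ::-1][:, :k]                       # keep top-k eigenvectors
    return np.einsum('iab,bc->iac', X, V), V       # project each image
```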

Article
Vertical Dynamic Deflection Measurement in Concrete Beams with the Microsoft Kinect
by Xiaojuan Qi, Derek Lichti, Mamdouh El-Badry, Jacky Chow and Kathleen Ang
Sensors 2014, 14(2), 3293-3307; https://doi.org/10.3390/s140203293 - 19 Feb 2014
Cited by 22 | Viewed by 6698
Abstract
The Microsoft Kinect is arguably the most popular RGB-D camera currently on the market, partially due to its low cost. It offers many advantages for the measurement of dynamic phenomena since it can directly measure three-dimensional coordinates of objects at video frame rate using a single sensor. This paper presents the results of an investigation into the development of a Microsoft Kinect-based system for measuring the deflection of reinforced concrete beams subjected to cyclic loads. New segmentation methods for object extraction from the Kinect’s depth imagery and vertical displacement reconstruction algorithms have been developed and implemented to reconstruct the time-dependent displacement of concrete beams tested in laboratory conditions. The results demonstrate that the amplitude and frequency of the vertical displacements can be reconstructed with submillimetre and milliHz-level precision and accuracy, respectively.
(This article belongs to the Section Physical Sensors)
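Recovering the amplitude and frequency of a cyclic deflection from a sampled displacement series can be sketched with a simple FFT peak pick. This is a generic approach, not the authors' reconstruction algorithm; the 30 Hz rate mirrors the Kinect's frame rate and the numbers are illustrative.

```python
import numpy as np

def amplitude_frequency(disp, fs):
    """Estimate the dominant frequency (Hz) and single-sided amplitude
    of a cyclic displacement signal sampled at `fs` Hz."""
    disp = np.asarray(disp, dtype=float)
    disp = disp - disp.mean()                  # remove static deflection
    n = disp.size
    spec = np.fft.rfft(disp)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmax(np.abs(spec[1:])) + 1        # strongest non-DC bin
    amp = 2.0 * np.abs(spec[k]) / n            # single-sided amplitude
    return freqs[k], amp
```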

Article
Low-Cost 3D Systems: Suitable Tools for Plant Phenotyping
by Stefan Paulus, Jan Behmann, Anne-Katrin Mahlein, Lutz Plümer and Heiner Kuhlmann
Sensors 2014, 14(2), 3001-3018; https://doi.org/10.3390/s140203001 - 14 Feb 2014
Cited by 227 | Viewed by 17583
Abstract
Over the last few years, 3D imaging of plant geometry has become of significant importance for phenotyping and plant breeding. Several sensing techniques, like 3D reconstruction from multiple images and laser scanning, are the methods of choice in different research projects. The use of RGB cameras for 3D reconstruction requires a significant amount of post-processing, whereas, in this context, laser scanning entails high investment costs. The aim of the present study is a comparison between two current low-cost 3D imaging systems and a high-precision close-up laser scanner as a reference method. As low-cost systems, the David laser scanning system and the Microsoft Kinect device were used. The 3D measuring accuracy of both low-cost sensors was estimated based on the deviations of test specimens. Parameters extracted from the volumetric shape of sugar beet taproots, the leaves of sugar beets and the shape of wheat ears were evaluated. These parameters are compared regarding accuracy and correlation to reference measurements. The evaluation scenarios were chosen with respect to recorded plant parameters in current phenotyping projects. In the present study, low-cost 3D imaging devices have been shown to be highly reliable for the demands of plant phenotyping, with the potential to be implemented in automated application procedures, while saving acquisition costs. Our study confirms that a carefully selected low-cost sensor [...]
(This article belongs to the Section Remote Sensors)

Article
A Depth-Based Fall Detection System Using a Kinect® Sensor
by Samuele Gasparrini, Enea Cippitelli, Susanna Spinsante and Ennio Gambi
Sensors 2014, 14(2), 2756-2775; https://doi.org/10.3390/s140202756 - 11 Feb 2014
Cited by 181 | Viewed by 14956
Abstract
We propose an automatic, privacy-preserving fall detection method for indoor environments, based on the usage of the Microsoft Kinect® depth sensor in an “on-ceiling” configuration and on the analysis of depth frames. All the elements captured in the depth scene are recognized by means of an ad hoc segmentation algorithm, which analyzes the raw depth data directly provided by the sensor. The system extracts the elements and implements a solution to classify all the blobs in the scene. Anthropometric relationships and features are exploited to recognize one or more human subjects among the blobs. Once a person is detected, he or she is followed across frames by a tracking algorithm. The use of a reference depth frame, containing the set-up of the scene, allows one to extract a human subject even when he/she is interacting with other objects, such as chairs or desks. In addition, the problem of blob fusion is taken into account and efficiently solved through an inter-frame processing algorithm. A fall is detected if the depth blob associated with a person is close to the floor. Experimental tests show the effectiveness of the proposed solution, even in complex scenarios.
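The core decision rule described in this abstract — flag a fall when the person's depth blob ends up close to the floor — can be sketched in a few lines. The floor plane is written as n·x + d = 0 with n a unit normal pointing away from the floor; the 0.4 m threshold is an illustrative value, not the paper's.

```python
import numpy as np

def height_above_floor(points, n, d):
    """Signed distance of 3D points to the floor plane n . x + d = 0,
    with `n` a unit normal pointing up, away from the floor."""
    return points @ n + d

def is_fall(blob_points, n, d, head_thresh=0.4):
    """Flag a fall when the highest point of the tracked person blob
    drops below `head_thresh` metres above the floor (illustrative
    threshold, not from the paper)."""
    return float(np.max(height_above_floor(blob_points, n, d))) < head_thresh
```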

Article
Foreground Segmentation in Depth Imagery Using Depth and Spatial Dynamic Models for Video Surveillance Applications
by Carlos R. Del-Blanco, Tomás Mantecón, Massimo Camplani, Fernando Jaureguizar, Luis Salgado and Narciso García
Sensors 2014, 14(2), 1961-1987; https://doi.org/10.3390/s140201961 - 24 Jan 2014
Cited by 13 | Viewed by 7754
Abstract
Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions for indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture of Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms is able to obtain a more accurate segmentation of the foreground/background than other state-of-the-art approaches.
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2013)
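The background-subtraction stage this abstract builds on can be sketched with a per-pixel running Gaussian model over depth. This is a single-Gaussian simplification of the mixture-of-Gaussians algorithm the paper combines (its Bayesian-network prediction step is omitted), and the parameter values are illustrative.

```python
import numpy as np

class DepthBackground:
    """Per-pixel running Gaussian background model for depth frames.
    A pixel is foreground when its depth deviates from the background
    mean by more than k standard deviations."""
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 0.01)   # initial variance (m^2)
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        dist2 = (frame - self.mean) ** 2
        fg = dist2 > (self.k ** 2) * self.var         # foreground mask
        bg = ~fg
        # update the model only where the pixel matched the background
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (dist2 - self.var)[bg]
        return fg
```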

Article
Fall Risk Assessment and Early-Warning for Toddler Behaviors at Home
by Mau-Tsuen Yang and Min-Wen Chuang
Sensors 2013, 13(12), 16985-17005; https://doi.org/10.3390/s131216985 - 10 Dec 2013
Cited by 10 | Viewed by 8663
Abstract
Accidental falls are the major cause of serious injuries in toddlers, with most of these falls happening at home. Instead of providing immediate fall detection based on short-term observations, this paper proposes an early-warning childcare system to monitor fall-prone behaviors of toddlers at home. Using 3D human skeleton tracking and floor plane detection based on depth images captured by a Kinect system, eight fall-prone behavioral modules of toddlers are developed and organized according to four essential criteria: posture, motion, balance, and altitude. The final fall risk assessment is generated by a multi-modal fusion using either weighted-mean thresholding or support vector machine (SVM) classification. Optimizations are performed to determine the local parameters in each module and the global parameters of the multi-modal fusion. Experimental results show that the proposed system can assess fall risks and trigger alarms with an accuracy rate of 92% at a speed of 20 frames per second.
(This article belongs to the Collection Sensors for Globalized Healthy Living and Wellbeing)
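The weighted-mean thresholding variant of the fusion step described above reduces to a few lines: each behavioral module emits a risk score in [0, 1], and the alarm fires when the weighted mean crosses a threshold. The weights and threshold here are illustrative, not the paper's optimized values.

```python
import numpy as np

def fuse_risk(scores, weights, threshold=0.5):
    """Weighted-mean fusion of per-module fall-risk scores in [0, 1].
    Returns the fused score and whether an alarm should be raised."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    fused = float(weights @ scores / weights.sum())
    return fused, fused >= threshold
```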

Article
Assessing the Potential of Low-Cost 3D Cameras for the Rapid Measurement of Plant Woody Structure
by Charles Nock, Olivier Taugourdeau, Sylvain Delagrange and Christian Messier
Sensors 2013, 13(12), 16216-16233; https://doi.org/10.3390/s131216216 - 27 Nov 2013
Cited by 34 | Viewed by 8628
Abstract
Detailed 3D plant architectural data have numerous applications in plant science, but many existing approaches for 3D data collection are time-consuming and/or require costly equipment. Recently, there has been rapid growth in the availability of low-cost 3D cameras and related open-source software applications. 3D cameras may provide measurements of key components of plant architecture, such as stem diameters and lengths; however, few tests of 3D cameras for the measurement of plant architecture have been conducted. Here, we measured Salix branch segments ranging from 2–13 mm in diameter with an Asus Xtion camera to quantify the limits and accuracy of branch diameter measurement with a 3D camera. By scanning at a variety of distances, we also quantified the effect of scanning distance. In addition, we tested the ability of KinFu, a program for continuous 3D object scanning and modeling, and other similar software to accurately record stem diameters and capture plant form (<3 m in height). Given its ability to accurately capture the diameter of branches >6 mm, the Asus Xtion may provide a novel method for the collection of 3D data on the branching architecture of woody plants. Improvements in camera measurement accuracy and available software are likely to further improve the utility of 3D cameras for the plant sciences in the future.
(This article belongs to the Section Physical Sensors)

Article
A Telerehabilitation Program Improves Postural Control in Multiple Sclerosis Patients: A Spanish Preliminary Study
by Rosa Ortiz-Gutiérrez, Roberto Cano-de-la-Cuerda, Fernando Galán-del-Río, Isabel María Alguacil-Diego, Domingo Palacios-Ceña and Juan Carlos Miangolarra-Page
Int. J. Environ. Res. Public Health 2013, 10(11), 5697-5710; https://doi.org/10.3390/ijerph10115697 - 31 Oct 2013
Cited by 85 | Viewed by 12227
Abstract
Postural control disorders are among the most frequent motor symptoms associated with multiple sclerosis. This study aims to demonstrate the potential improvements in postural control among patients with multiple sclerosis who complete a telerehabilitation program, which represents a feasible alternative to physical therapy in situations where conventional treatment is not available. Fifty patients were recruited. The control group (n = 25) received physiotherapy treatment twice a week (40 min per session). The experimental group (n = 25) received monitored telerehabilitation treatment via videoconference using the Xbox 360® console and Kinect sensor; this group attended 40 sessions, four sessions per week (20 min per session). The treatment schedule lasted 10 weeks for both groups. Computerized dynamic posturography (the Sensory Organization Test) was used to evaluate all patients at baseline and at the end of the treatment protocol. Results showed an improvement in general balance in both groups. Visual preference and the contribution of vestibular information yielded significant differences in the experimental group. Our results demonstrate that a telerehabilitation program based on a virtual reality system allows one to optimize the sensory information processing and integration systems necessary to maintain balance and postural control in people with multiple sclerosis. We suggest that our virtual reality program enables anticipatory postural control and response mechanisms and might serve as a successful therapeutic alternative in situations where conventional therapy is not readily available.
(This article belongs to the Special Issue Advances in Telehealthcare)
Article
On the Use of a Low-Cost Thermal Sensor to Improve Kinect People Detection in a Mobile Robot
by Loreto Susperregi, Basilio Sierra, Modesto Castrillón, Javier Lorenzo, Jose María Martínez-Otzeta and Elena Lazkano
Sensors 2013, 13(11), 14687-14713; https://doi.org/10.3390/s131114687 - 29 Oct 2013
Cited by 17 | Viewed by 9673
Abstract
Detecting people is a key capability for robots that operate in populated environments. In this paper, we adopt a hierarchical approach that combines classifiers created using supervised learning in order to identify whether a person is in the view-scope of the robot or not. Our approach makes use of vision, depth and thermal sensors mounted on top of a mobile platform. The sensor suite combines the rich data source offered by a Kinect sensor, which provides vision and depth at low cost, with a thermopile array sensor. Experimental results obtained with a mobile platform on a manufacturing shop floor and in a science museum show that the false positive rate is drastically reduced compared with using any single cue. The performance of our algorithm improves on that of other well-known approaches, such as C4 and histograms of oriented gradients (HOG).
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2013)
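Why combining cues cuts the false positive rate so sharply can be seen with a back-of-the-envelope model: if a candidate is accepted only when every cue fires and the cues err independently, the per-cue false positive rates multiply. The numbers below are illustrative, not the paper's measured rates, and real cues are rarely fully independent.

```python
def cascade_false_positive_rate(fp_rates):
    """False positive rate of a conjunctive cascade: a detection is
    accepted only if every cue (e.g. vision, depth, thermal) fires.
    Assumes the cues' errors are independent -- an idealization."""
    rate = 1.0
    for r in fp_rates:
        rate *= r
    return rate
```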

Article
Human-Computer Interaction Based on Hand Gestures Using RGB-D Sensors
by José Manuel Palacios, Carlos Sagüés, Eduardo Montijano and Sergio Llorente
Sensors 2013, 13(9), 11842-11860; https://doi.org/10.3390/s130911842 - 6 Sep 2013
Cited by 75 | Viewed by 11655
Abstract
In this paper we present a new method for hand gesture recognition based on an RGB-D sensor. The proposed approach takes advantage of depth information to cope with the most common problems of traditional video-based hand segmentation methods: cluttered backgrounds and occlusions. The algorithm also uses colour and semantic information to accurately identify any number of hands present in the image. Ten different static hand gestures are recognised, including all different combinations of spread fingers. Additionally, movements of an open hand are followed and six dynamic gestures are identified. The main advantage of our approach is that the user's hands may be at any position in the image, without the need to wear any specific clothing or additional devices. Moreover, the whole method can be executed without any initial training or calibration. Experiments carried out with different users and in different environments prove the accuracy and robustness of the method, which can additionally be run in real time.
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2013)

Article
Measuring Accurate Body Parameters of Dressed Humans with Large-Scale Motion Using a Kinect Sensor
by Huanghao Xu, Yao Yu, Yu Zhou, Yang Li and Sidan Du
Sensors 2013, 13(9), 11362-11384; https://doi.org/10.3390/s130911362 - 26 Aug 2013
Cited by 31 | Viewed by 17006
Abstract
Non-contact human body measurement plays an important role in surveillance, physical healthcare, on-line business and virtual fitting. Current methods for measuring the human body without physical contact usually cannot handle humans wearing clothes, which limits their applicability in public environments. In this paper, we propose an effective solution that can measure accurate parameters of the human body with large-scale motion from a Kinect sensor, assuming that the people are wearing clothes. Because motion causes clothes to attach to the human body loosely or tightly, we adopt a space-time analysis to mine the information across the posture variations. Using this information, we recover the human body, regardless of the effect of clothes, and measure the human body parameters accurately. Experimental results show that our system can perform more accurate parameter estimation on the human body than state-of-the-art methods.
(This article belongs to the Special Issue Wearable Gait Sensors)

Article
Background Subtraction Based on Color and Depth Using Active Sensors
by Enrique J. Fernandez-Sanchez, Javier Diaz and Eduardo Ros
Sensors 2013, 13(7), 8895-8915; https://doi.org/10.3390/s130708895 - 12 Jul 2013
Cited by 72 | Viewed by 10583
Abstract
Depth information has been used in computer vision for a wide variety of tasks. Since active range sensors are currently available at low cost, high-quality depth maps can be used as relevant input for many applications. Background subtraction and video segmentation algorithms can be improved by fusing depth and color inputs, which are complementary and allow one to solve many classic color segmentation issues. In this paper, we describe one fusion method to combine color and depth based on an advanced color-based algorithm. This technique has been evaluated by means of a complete dataset recorded with Microsoft Kinect, which enables comparison with the original method. The proposed method outperforms the others in almost every test, showing more robustness to illumination changes, shadows, reflections and camouflage.
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2013)
