P. Martinet
  • 1 rue de la Noë
    BP 92101
    44321 Nantes Cedex 3
  • 0240376975


INRIA, Acentauri, Department Member
  • Homepage: http://www-sop.inria.fr/members/Philippe.Martinet/ Publications: http://www-sop.inria.fr/members/Philippe.M...
In this paper, we present a complete framework for autonomous indoor robot navigation. We show that autonomous navigation is possible in indoor situations using a single camera and natural landmarks. When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as...
Neither of the classical visual servoing approaches, position-based and image-based, is completely satisfactory. In position-based visual servoing the trajectory of the robot is well defined, but the approach suffers mainly from the image features leaving the visual field of the cameras. On the other hand, image-based visual servoing has generally been found satisfactory and robust in the presence of camera and hand-eye calibration errors. However, in some cases singularities and local minima may arise, and the robot may exceed its joint limits. This paper is a step towards a synthesis of both approaches that retains their particular advantages, i.e., the trajectory of the camera motion is predictable and the image features remain in the field of view of the camera. The basis is the introduction of three-dimensional information into the feature vector. Point depth and object pose produce useful behavior in the control of the camera. Using the task-function approach, we demonstrate...
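The image-based side of this synthesis rests on the classic control law v = -λ L⁺(s - s*). The NumPy sketch below illustrates it with the standard interaction matrix of an image point; the paper's hybrid feature vector (with depth and pose terms) is richer, and all function names here are ours.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Standard interaction matrix of a normalized image point (x, y)
    at depth Z, relating feature motion to the camera velocity screw."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classic image-based visual servoing law: v = -gain * L^+ (s - s*).
    Returns the 6-DOF camera velocity screw (vx, vy, vz, wx, wy, wz)."""
    error = np.asarray(s, float) - np.asarray(s_star, float)
    return -gain * np.linalg.pinv(L) @ error
```

For several points, the per-point matrices are stacked row-wise before taking the pseudo-inverse; the pseudo-inverse handles the resulting over-determined system in the least-squares sense.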
This paper deals with platooning navigation in the context of innovative solutions for urban transportation systems. More precisely, a sustainable approach centered on free-access automated electric vehicles is considered. To tackle the major problem of congestion in dense areas, cooperative navigation in a platoon formation is investigated. With the aim of ensuring formation stability, i.e. that longitudinal disturbances within the platoon do not grow when propagating down the chain, a global decentralized platoon control strategy is proposed. It is supported by inter-vehicle communications and relies on nonlinear control techniques. A wide range of experiments, carried out with up to four urban vehicles, demonstrates the capabilities of the proposed approach: two localization devices have been tested (RTK-GPS and monocular vision) along with two guidance modes (the path to be followed is either predefined or inferred on-line from the motion of the manually driven first vehicle).
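A toy simulation can illustrate the string-stability requirement that longitudinal disturbances not grow down the chain. The sketch below uses double-integrator vehicles and a simplified decentralized law combining predecessor and leader feedback; the gains, disturbance profile, and control structure are illustrative assumptions, not the paper's actual controller.

```python
import numpy as np

def simulate_platoon(n=4, d=5.0, kp=1.0, kd=2.0, kl=1.0, dt=0.01, T=30.0):
    """Toy platoon of n double-integrator vehicles at desired gap d.
    Each follower combines predecessor and leader feedback (one classic
    way to obtain string stability); the leader applies a brief
    acceleration disturbance. Returns the peak spacing error per follower."""
    steps = int(T / dt)
    x = -d * np.arange(n, dtype=float)   # initial positions at desired gaps
    v = np.zeros(n)
    max_err = np.zeros(n - 1)
    for k in range(steps):
        t = k * dt
        a = np.zeros(n)
        a[0] = 1.0 if 1.0 <= t < 2.0 else 0.0   # leader disturbance [m/s^2]
        for i in range(1, n):
            e_pred = (x[i - 1] - x[i]) - d       # gap error to predecessor
            e_lead = (x[0] - x[i]) - i * d       # gap error to leader
            a[i] = (kp * e_pred + kd * (v[i - 1] - v[i])
                    + kl * e_lead + kd * (v[0] - v[i]))
            max_err[i - 1] = max(max_err[i - 1], abs(e_pred))
        v += a * dt                              # explicit Euler integration
        x += v * dt
    return max_err
```

With the leader term included, the peak spacing errors stay small and do not blow up down the chain; removing it (kl = 0, no leader velocity feedback) is the textbook setting in which constant-spacing platoons lose string stability.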
In recent decades, robotics has played an important role in research across diverse knowledge domains, such as artificial intelligence, biology, neuroscience, and psychology. In particular, the study of knowledge representation and thinking has led to the proposal of cognitive architectures capturing essential structures and processes of cognition and behavior. Roboticists have also attempted to design automatic systems using these proposals. However, certain difficulties have been reported in obtaining efficient low-level processing while sensing or controlling the robot. The main challenges involve the treatment of the differences between the computational paradigms employed by the cognitive and the robotic architectures. The objective of this work is to propose a methodology for designing robotic systems capable of decision making and learning when executing manipulative tasks. The development of a system called the Cognitive Reaching Robot (CRR) is reported. CRR combines...
In this paper, the problem of controlling a motion by visual servoing around an unknown object is addressed. This work can be interpreted as an initial step towards the perception of an unmodeled object. The main purpose is to perform motion with respect to the object in order to discover several viewpoints of it. The originality of our work lies in the choice and extraction of visual features in accordance with the motions to be performed. The notion of an invariant feature is introduced to control the navigation task around the unknown object. A real-time experiment with a complex object is carried out and shows the generality of the proposed ideas.
This paper extends the recent work proposed in [21]. In that work, it was shown that three visual features (to control three degrees of freedom) obtained from the spherical projection of 3D spheres allow nice decoupling properties and global stability. However, even if such an approach is theoretically attractive, it is limited by a major practical issue, since spherical objects have to be observed while only three degrees of freedom can be controlled. In this paper, we show that similar properties can be obtained by observing a set of points. The basic idea is to build a virtual 3D sphere from two 3D points and to analyse its spherical projection. Furthermore, to control all six degrees of freedom, a 2-1/2-D control scheme is proposed which allows us to fully decouple rotational motions from translational motions.
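The virtual-sphere construction can be sketched numerically. The snippet below builds a sphere from two 3D points and computes one plausible choice of features from its spherical projection; the paper's exact feature definition may differ, and all names here are ours.

```python
import numpy as np

def spherical_projection(P):
    """Project a 3D point onto the unit sphere centred at the camera."""
    P = np.asarray(P, dtype=float)
    return P / np.linalg.norm(P)

def virtual_sphere(P1, P2):
    """Virtual 3D sphere built from two 3D points: centred at their
    midpoint, with radius half their distance (one plausible construction)."""
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    return 0.5 * (P1 + P2), 0.5 * np.linalg.norm(P2 - P1)

def sphere_features(center, radius):
    """Features of the spherical projection of a sphere: the projection is
    a circle on the unit sphere; one compact 3-vector choice is the unit
    direction of the centre scaled by the sine of the angular radius."""
    rho = np.linalg.norm(center)        # distance camera -> sphere centre
    sin_alpha = radius / rho            # sine of the angular radius
    return (center / rho) * sin_alpha
```
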
Parallel robots have proved that they can outperform serial ones in terms of rigidity and payload-to-weight ratio. Nevertheless, their workspace is largely reduced by the presence of singularities. In particular, Type 2 singularities (parallel singularities) separate the workspace into different aspects, each corresponding to one (or more) robot assembly modes. To enlarge the workspace, it has been proved that a mechanism can cross the singularity loci by using optimal motion planning. However, if the trajectory is not robust to modeling errors, the robot can stop at the singularity and remain blocked. Therefore, the objective of this paper is to present a new general procedure that allows a parallel manipulator to exit a Type 2 singularity. Two strategies are presented. The first one proposes the computation of an optimal trajectory that makes it possible for the robot to exit the singularity. This trajectory must respect a criterion that ensures the con...
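Detecting proximity to a Type 2 singularity is a prerequisite for such exit strategies. A standard numerical test, sketched below, monitors the smallest singular value of the direct-kinematics matrix A in the velocity relation A ẋ = B q̇: the determinant of A also vanishes at a Type 2 singularity, but the smallest singular value gives a scale-aware distance measure. This is a generic sketch, not the paper's criterion.

```python
import numpy as np

def near_type2_singularity(A, tol=1e-6):
    """Type 2 (parallel) singularities occur where the direct-kinematics
    matrix A loses rank; test proximity via its smallest singular value."""
    svals = np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)
    return svals[-1] < tol
```
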
Precision agriculture requires very accurate farm vehicle control along recorded paths, which are not necessarily straight lines. In this paper, we investigate the possibility of achieving this task with a CP-DGPS as the sole sensor. The vehicle heading is derived via a Kalman state reconstructor, and a nonlinear, velocity-independent control law is designed, relying on chained-systems properties. Field experiments demonstrating the capabilities of our guidance system are reported and discussed.
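The heading-reconstruction step can be sketched with a scalar Kalman filter that models the heading as a random walk and measures it from successive planar GPS fixes. This is a simplified stand-in for the paper's state reconstructor; the noise parameters and model are illustrative.

```python
import numpy as np

def heading_filter(positions, q=1e-3, r=1e-2):
    """Estimate vehicle heading from successive planar GPS fixes with a
    scalar Kalman filter (random-walk heading model, variance q; heading
    measured as the direction of motion between fixes, variance r)."""
    theta, P = 0.0, 1.0
    estimates = []
    for (x0, y0), (x1, y1) in zip(positions[:-1], positions[1:]):
        z = np.arctan2(y1 - y0, x1 - x0)    # heading measured from motion
        P += q                               # predict (random walk)
        innov = np.arctan2(np.sin(z - theta), np.cos(z - theta))  # wrap to (-pi, pi]
        K = P / (P + r)                      # Kalman gain
        theta += K * innov                   # update
        P *= (1.0 - K)
        estimates.append(theta)
    return estimates
```

In practice the measurement is only informative while the vehicle is moving; a real implementation would gate the update on the travelled distance between fixes.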


Abstract: In this paper we consider the problem of controlling a robotic system by using the projection of 3D straight lines in the image plane of central catadioptric systems. Most of the effort in visual servoing has been devoted to points; only a few works have investigated the use of lines in visual servoing with traditional cameras, and none has explored the case of omnidirectional cameras. First, a generic central catadioptric interaction matrix for the projection of 3D straight lines is derived from the projection model of an entire class of cameras. Then an image-based control law is designed and validated through simulation results and real experiments with a mobile robot. Keywords: omnidirectional camera, single viewpoint, visual servoing, straight lines. Acknowledgements: This work was funded in part by the OMNIBOT project of ROBEA: "Robotique mobile et Entités Artificielles".
We particularly thank Éric Marchand of IRISA/INRIA Rennes for providing us with the circle-tracking algorithm for omnidirectional images.
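For the conventional perspective case, the projection of a 3D line can be sketched from its Plücker coordinates: the image line is the trace of the interpretation plane through the optical centre, whose normal is the line's moment vector. The snippet below illustrates this; the paper's contribution, the generalization to the whole class of central catadioptric cameras, is not reproduced here.

```python
import numpy as np

def plucker_line(P1, P2):
    """Plucker coordinates (u, n) of the 3D line through P1 and P2:
    u is the unit direction and n = P1 x u the moment about the origin."""
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    u = (P2 - P1) / np.linalg.norm(P2 - P1)
    return u, np.cross(P1, u)

def image_line(n):
    """Central (perspective) projection of the line: normalized image
    points (x, y) satisfy a*x + b*y + c = 0, where (a, b, c) is the unit
    moment vector, i.e. the normal of the interpretation plane."""
    return n / np.linalg.norm(n)
```
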
In this paper, a novel approach is proposed for the kinematic calibration of parallel mechanisms with linear actuators at the base. The originality of the approach lies in observing the mechanism legs with a camera, without any modification of the mechanism. The calibration can hence be achieved online, as no calibration device is attached to the end-effector, and on any mechanism, since no additional proprioceptive sensors need to be installed. Because of the conditions of leg observability, several camera locations may be needed during the experiment. The associated calibration method does not, however, require any accurate knowledge of the successive camera positions; the experimental procedure is therefore easy to perform. The method is developed theoretically in the context of mechanisms with legs linearly actuated at the base, giving the necessary conditions of identifiability. Application to an I4 mechanism is carried out, with experimental results.
In this paper, we provide a comprehensive method for the physical model identification of parallel mechanisms. This includes both kinematic identification using vision and identification of the dynamic parameters. Careful attention is given to the issues of identifiability and excitation. Experimental results obtained on a H4 parallel robot show that kinematic identification improves the static positioning accuracy from roughly 1 cm down to 1 mm, and that the dynamic parameters are globally estimated with less than 10% relative error, yielding a similar error on the control-torque estimation.
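The dynamic-identification step typically exploits the fact that rigid-body dynamics are linear in the parameters, τ = W(q, q̇, q̈) φ. The sketch below solves the stacked regressor system in the least-squares sense and reports the condition number of W as a crude excitation measure; constructing the H4 regressor itself is beyond this sketch.

```python
import numpy as np

def identify_dynamic_parameters(W, tau):
    """Dynamic identification in linear-regressor form tau = W @ phi.

    W   : (m, p) regressor stacked over an exciting trajectory
    tau : (m,) measured joint torques
    Returns the least-squares estimate of phi and the condition number
    of W, whose size indicates how well the trajectory excites the model.
    """
    phi, _, _, svals = np.linalg.lstsq(W, tau, rcond=None)
    cond = svals[0] / svals[-1]
    return phi, cond
```
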
In this article, we present the kinematic calibration of a H4 parallel robot using a vision-based measuring device. Calibration is performed according to the inverse kinematic model method, using first the design model and then a model developed for calibration purposes. To do so, the end-effector pose has to be measured with the utmost accuracy. Thus, we first evaluate the practical accuracy of our vision-based measuring system, which achieves a precision on the order of 10 µm and 10⁻³ deg. Second, we calibrate the robot using our vision system, yielding a final end-effector positioning accuracy better than 0.5 mm.
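Inverse-kinematic-model calibration amounts to a least-squares fit of the geometric parameters to externally measured poses. The sketch below runs a Gauss-Newton loop on a planar 2R arm as a stand-in (the H4 model has far more parameters); since this toy forward model is linear in the link lengths, the fit converges in one step.

```python
import numpy as np

def calibrate_link_lengths(q_data, p_data, l_init, iters=20):
    """Gauss-Newton calibration of the two link lengths of a planar 2R arm
    from measured joint angles q and end-effector positions p (the latter
    standing in for vision-measured poses)."""
    l = np.asarray(l_init, dtype=float).copy()
    for _ in range(iters):
        J_rows, e_rows = [], []
        for (q1, q2), p in zip(q_data, p_data):
            c1, s1 = np.cos(q1), np.sin(q1)
            c12, s12 = np.cos(q1 + q2), np.sin(q1 + q2)
            pred = np.array([l[0] * c1 + l[1] * c12,
                             l[0] * s1 + l[1] * s12])
            J_rows.append(np.array([[c1, c12], [s1, s12]]))  # d pred / d l
            e_rows.append(np.asarray(p, float) - pred)
        J = np.vstack(J_rows)
        e = np.concatenate(e_rows)
        l += np.linalg.lstsq(J, e, rcond=None)[0]  # Gauss-Newton step
    return l
```
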
The Robot Programming Network (RPN) is an initiative for creating a network of robotics education laboratories with remote programming capabilities. It consists of both online open course materials and online servers that are ready to execute and test the programs written by remote students. The online materials include introductory course modules on robot programming, mobile robotics, and humanoids, aimed at teaching everything from basic concepts in science, technology, engineering, and mathematics (STEM) to more advanced programming skills. Students have access to the online server hosts, where they submit and run their code on the fly. The hosts run a variety of robot simulation environments, and access to real robots can also be granted upon successful completion of the course modules. The learning materials provide step-by-step guidance for solving problems of increasing difficulty. Skill tests and challenges are provided for checking success, and online competitions are scheduled for additional motivation and fun. The use of standard robotics middleware (ROS) allows the system to be extended to a large number of robot platforms and connected to other existing tele-laboratories, building a large social network for the online teaching of robotics.