In the present study we investigate age-related changes in hand preference for grasping and the influence of task demands on such preference. Children (2–11 years), young adults (17–28 years) and older adults (57–90 years) were examined in a grasp-to-eat and a grasp-to-construct task. The end-goal of these tasks differed (eat vs. construct), as did the nature of the task (unimanual vs. bimanual). In both tasks, ipsilateral and contralateral grasps were analyzed. Results showed a right-hand preference that did not change with age. Across the three age groups, a more robust right-hand preference was observed for the unimanual, grasp-to-eat task. To disentangle whether the nature (unimanual) or the end-goal (grasp-to-eat) of the task was the driver of this robust right-hand preference, a follow-up experiment was conducted in which young-adult participants completed a unimanual grasp-to-place task. This was contrasted with the unimanual grasp-to-eat task and the bimanual grasp-to-construct task. Rates of hand preference for the grasp-to-eat task remained the highest of the three grasping tasks. Together, the results demonstrate that hand preference remains stable from childhood to older adulthood, and they suggest a left-hemisphere specialization for grasping, particularly when bringing food to the mouth.
Previous developmental research suggests that motor experience supports the development of action perception across the lifespan. However, it is still unknown when the neural mechanisms underlying action-perception coupling emerge in infancy. The goal of this study was to examine the neural correlates of action perception during the emergence of grasping abilities in newborn rhesus macaques. Neural activity was recorded via electroencephalogram (EEG) while monkeys observed grasping actions, mimed actions and means-end movements during the first (W1) and second (W2) weeks of life. Event-related desynchronization (ERD) during action observation was computed from the EEG in the alpha and beta bands, two components of the sensorimotor mu rhythm associated with activity of the mirror neuron system (MNS). Results revealed age-related changes in the beta band, but not the alpha band, over anterior electrodes, with greater desynchronization at W2 than W1 for the observation of gra...
Evidence from recent neurophysiological studies on nonhuman primates as well as from human behavioral studies suggests that actions with similar kinematic requirements but different end-state goals are supported by separate neural networks. It is unknown whether these different networks supporting seemingly similar reach-to-grasp actions are lateralized, or if they are equally represented in both hemispheres. Recently published behavioral evidence suggests certain networks are lateralized to the left hemisphere. Specifically, when participants used their right hand, their maximum grip aperture (MGA) was smaller when grasping to eat food items than when grasping to place the same items. Left-handed movements showed no difference between tasks. The present study investigates whether the differences between grasp-to-eat and grasp-to-place actions are driven by an intent to eat, or if placing an item into the mouth (sans ingestion) is sufficient to produce asymmetries. Twelve right-hand...
This paper introduces a new modular approach to robotic grasping that allows for finding a trade-off between a simple gripper and more complex human-like manipulators. The modular approach aims to understand human grasping behavior in order to replicate grasping and skilled in-hand movements with an artificial hand using simple, robust, and flexible modules. In this work, the design of modular grasping devices capable of adapting to different requirements and situations is investigated. A novel algorithm that determines effective modular configurations for efficient grasps of given objects is presented. The resulting modular configurations are able to perform effective grasps that a human would consider "stable". Simulations were carried out to validate the efficiency of the algorithm. Preliminary results show the versatility of the modular approach to designing grippers.
Reach-to-grasp tasks have become popular paradigms for exploring the neural origin of hand and arm movement. This is typically investigated by correlating limb kinematics with electrophysiological signals from intracortical recordings. However, it has never been investigated whether reach and grasp movements could be well expressed in the muscle domain and whether this could bring improvements with respect to current joint-domain task representations. In this study, we trained two macaque monkeys to grasp 50 different objects, which resulted in a high variability of hand configurations. A generic musculoskeletal model of the human upper extremity was scaled and morphed to match the specific anatomy of each individual animal. The primate-specific model was used to perform three-dimensional reach-to-grasp simulations driven by experimental upper limb kinematics derived from electromagnetic sensors. Simulations enabled extracting joint angles from 27 degrees of freedom and the instantaneous length of 50 musculotendon units. Results demonstrated both a more compact representation and a higher decoding capacity of grasping tasks when movements were expressed in the muscle kinematics domain rather than in the joint kinematics domain. Accessing musculoskeletal variables might improve our understanding of coding in cortical hand-grasping areas, with implications for the development of prosthetic hands.
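The joint-domain versus muscle-domain comparison can be illustrated with a minimal synthetic sketch. Everything here is an assumption for illustration: the data are simulated (27 joint angles, 50 musculotendon lengths, the muscle features given a cleaner class signal by construction), and the decoder is a simple leave-one-out nearest-class-mean classifier, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 trials, each labelled with one of 5 grasp targets.
# Joint-domain features (27 DoF) and muscle-domain features (50 musculotendon
# lengths) are simulated so the muscle domain carries a cleaner signal.
n_trials, n_classes = 200, 5
labels = rng.integers(0, n_classes, n_trials)
prototypes_joint = rng.normal(size=(n_classes, 27))
prototypes_muscle = rng.normal(size=(n_classes, 50))
joints = prototypes_joint[labels] + rng.normal(scale=3.0, size=(n_trials, 27))
muscles = prototypes_muscle[labels] + rng.normal(scale=1.0, size=(n_trials, 50))

def nearest_prototype_accuracy(features, labels):
    """Leave-one-out nearest-class-mean decoding accuracy."""
    correct = 0
    for i in range(len(labels)):
        mask = np.arange(len(labels)) != i
        means = np.stack([features[mask][labels[mask] == c].mean(axis=0)
                          for c in range(n_classes)])
        pred = np.argmin(np.linalg.norm(means - features[i], axis=1))
        correct += pred == labels[i]
    return correct / len(labels)

acc_joint = nearest_prototype_accuracy(joints, labels)
acc_muscle = nearest_prototype_accuracy(muscles, labels)
print(f"joint-domain accuracy:  {acc_joint:.2f}")
print(f"muscle-domain accuracy: {acc_muscle:.2f}")
```

With these (assumed) noise levels, the muscle-domain features decode the grasp target more accurately, mirroring the qualitative direction of the paper's result on synthetic data only.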
Literature on mirror neurons has shown that seeing someone preparing to move generates, in the observer's motor areas, brain activity similar to that generated when the observer prepares his or her own actions. Thus, the mirroring of action is not limited to the execution phase but also involves the preparation process. Here we confirm and extend this notion by showing that, just as different brain activities prepare different voluntary actions, different brain activities also prepare the observation of different predictable actions. Videos of two different actions from an egocentric point of view were presented in separate blocks: (i) grasping of a cup and (ii) impossible grasping of a cup. Subjects passively observed the videos showing object-directed hand movements. Using event-related potentials, we found cortical activity before observation of the actions that was very similar to the activity recorded prior to the actual execution of the same action, in terms of both topography and latency. This anticipatory activity does not represent a general preparation state but an action-specific state, because it depends on the specific meaning of the forthcoming action. These results reinforce our knowledge about the correspondence between action, perception and cognition.
This paper presents a novel type of deployable grasping manipulator (DGM), the fingers of which are constructed of serially connected metamorphic mechanism modules (MMMs), the key components of this type of robotic manipulator. A systematic approach to the synthesis of the MMMs is proposed. Each MMM consists of one grasping sub-mechanism and two auxiliary sub-mechanisms, and the metamorphic principle is applied to the design of the grasping sub-mechanism to give it both deployment and grasping mobility. The design of the MMMs thus becomes a type-synthesis problem for the auxiliary sub-mechanisms based on the given metamorphic mobility of the grasping sub-mechanism. The auxiliary sub-mechanisms are exhaustively synthesised based on screw theory. Computer-aided design (CAD) models and physical prototypes are used to show the feasibility of the proposed mechanisms.
The human hand is Nature's most versatile and dexterous end-effector and has been a source of inspiration for roboticists for over 50 years. Recently, significant industrial and research effort has been put into the development of dexterous robot hands and grippers. Such end-effectors offer robust grasping and dexterous in-hand manipulation capabilities that increase the efficiency, precision, and adaptability of the overall robotic platform. This work focuses on the development of modular, sensorized objects that can facilitate benchmarking of the dexterity and performance of hands and grippers. The proposed objects aim to offer a minimal, sufficiently diverse solution; efficient pose tracking; and accessibility. The object manufacturing instructions, 3D models, and assembly information are made publicly available through a corresponding repository.
Grasp planning for multi-fingered hands is computationally expensive due to joint-contact coupling, surface nonlinearities and high dimensionality, and is thus generally not feasible for real-time implementations. Traditional planning methods based on optimization, sampling or learning work well for parallel grippers but remain challenging for multi-fingered hands. This paper proposes a strategy called finger splitting to plan precision grasps for multi-fingered hands, starting from optimal parallel grasps. The finger splitting is optimized by a dual-stage iterative optimization comprising a contact point optimization (CPO) and a palm pose optimization (PPO), to gradually split fingers and adjust both the contact points and the palm pose. The dual-stage optimization is able to consider both object grasp quality and hand manipulability, address the nonlinearities and coupling, and achieve efficient convergence within one second. Simulation results demonstrate the effectiveness of the proposed approach. The simulation video is available at [1].
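The dual-stage CPO/PPO scheme is an instance of alternating minimisation: each stage holds one block of variables fixed and minimises over the other. A toy sketch of that alternation, with the contact and palm variables reduced to 2-D vectors and a made-up quadratic cost standing in for the real constrained grasp objectives:

```python
import numpy as np

# Toy stand-in for the dual-stage scheme: alternately minimise a joint cost
# over "contact points" c and "palm pose" p (both reduced to 2-D vectors here).
# The real CPO/PPO stages solve constrained problems on object and hand
# geometry; this sketch only illustrates the alternation and its convergence.
def cost(c, p):
    return np.sum((c - p) ** 2) + np.sum((c - 1.0) ** 2) + 0.1 * np.sum(p ** 2)

c = np.zeros(2)   # contact-point variables (hypothetical)
p = np.zeros(2)   # palm-pose variables (hypothetical)
history = [cost(c, p)]
for _ in range(20):
    # "CPO" stage: minimise over c with p fixed (closed form for this quadratic).
    c = (p + 1.0) / 2.0
    # "PPO" stage: minimise over p with c fixed.
    p = c / 1.1
    history.append(cost(c, p))

# Each stage exactly minimises its block, so the cost never increases.
assert all(a >= b - 1e-12 for a, b in zip(history, history[1:]))
print(f"cost: {history[0]:.3f} -> {history[-1]:.3f}")
```

Because each block update is an exact minimiser, the cost sequence is monotone nonincreasing and converges quickly; the real method's fast convergence rests on the same structure, albeit with far harder subproblems.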
Artificial intelligence is essential to achieving reliable human-robot interaction, especially when it comes to manipulation tasks. Most of the state-of-the-art literature explores robotic grasping methods by focusing on the target object or the robot's morphology, without including the environment. In human cognitive development, these physical qualities are inferred not only from the object but also from the semantic characteristics of the surroundings. The same analogy can be used in robotic affordances for improving object grasps, where the perceived physical qualities of objects give valuable information about possible manipulation actions. This work proposes a framework able to reason about object affordances and grasping regions. Each calculated grasping area is the result of a sequence of concrete, ranked decisions based on the inference of different, highly related attributes. The results show that the system is able to infer good grasping areas for an object depending on its affordances, without any a priori knowledge of the object's shape or grasping points.
We have recently shown that actions with similar kinematic requirements, but different end-state goals may be supported by distinct neural networks. Specifically, we demonstrated that when right-handed individuals reach-to-grasp food items with intent to eat, they produce smaller maximum grip apertures (MGAs) than when they grasp the same item with intent to place it in a location near the mouth. This effect was restricted to right-handed movements; left-handed movements showed no difference between tasks. The current study investigates whether (and to which side) the effect may be lateralized in left-handed individuals. Twenty-one self-identified left-handed participants grasped food items of three different sizes while grasp kinematics were captured via an Optotrak Certus motion capture array. A main effect of task was identified wherein the grasp-to-eat action generated significantly smaller MGAs than did the grasp-to-place action. Further analysis revealed that similar to the fi...
The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. We modeled shape tuning in visual AIP neurons and its relationship with curvature and gradient information from the caudal intraparietal area (CIP). The main goal was to gain insight into the kinds of shape parameterizations that can account for AIP tuning and that are consistent with both the inputs to AIP and the role of AIP in grasping. We first experimented with superquadric shape parameters. We considered superquadrics because they occupy a role in robotics that is similar to AIP, in that superquadric fits are derived from visual input and used for grasp planning. We also experimented with an alternative shape parameterization that was based on an Isomap dimension reduction of spatial derivatives of depth (i.e., distance from the observer to the object surface). We considered an Isomap-based model because its parameters lacked discontinuities between similar shapes. When we matched the dimension of the Isomap to the number of superquadric parameters, the superquadric model fit the AIP data somewhat more closely. However, higher-dimensional Isomaps provided excellent fits. Also, we found that the Isomap parameters could be approximated much more accurately than superquadric parameters by feedforward neural networks with CIP-like inputs. We conclude that Isomaps, or perhaps alternative dimension reductions of visual inputs to AIP, provide a promising model of AIP electrophysiology data. Further work is needed to test whether such shape parameterizations actually provide an effective basis for grasp control.
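The Isomap-based parameterization can be sketched in a few lines. This is not the paper's pipeline: the "objects" below are synthetic Gaussian depth bumps controlled by one made-up latent parameter, the spatial derivatives are simple finite differences, and the embedding dimension is chosen arbitrarily.

```python
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# Hypothetical stand-in for the depth-derivative features: each "object" is a
# depth map on an 8x8 grid, and the features are its horizontal and vertical
# spatial derivatives, flattened into one vector per object.
n_objects = 100
t = rng.uniform(0.5, 2.0, n_objects)            # one latent shape parameter
x, y = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
depth = np.stack([np.exp(-(x**2 + y**2) / s) for s in t])   # smooth bumps
dx = np.diff(depth, axis=2).reshape(n_objects, -1)
dy = np.diff(depth, axis=1).reshape(n_objects, -1)
features = np.hstack([dx, dy])

# Isomap embedding of the derivative features, analogous in spirit to the
# paper's low-dimensional shape parameterization.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(features)
print(embedding.shape)
```

Because Isomap approximates geodesic distances on the feature manifold before embedding, nearby shapes map to nearby parameters without the discontinuities that plague fixed parametric families such as superquadrics — the property the abstract highlights.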
Reach-to-grasp movements performed without visual and haptic feedback of the hand are subject to systematic inaccuracies. Grasps directed at an object specified by binocular information usually end at the wrong distance with an incorrect final grip aperture. More specifically, moving the target object away from the observer leads to increasingly larger undershoots and smaller grip apertures. These systematic biases suggest that the visuomotor mapping is based on inaccurate estimates of an object's egocentric distance and 3D structure that compress the visual space. Here we ask whether the appropriate visuomotor mapping can be learned through an extensive exposure to trials where haptic and visual feedback of the hand is provided. By intermixing feedback trials with test trials without feedback, we aimed at maximizing the likelihood that the motor execution of test trials is positively influenced by that of preceding feedback trials. We found that the intermittent presence of feedback trials both (1) largely reduced the positioning error of the hand with respect to the object and (2) affected the shaping of the hand before the final grasp, leading to an overall more accurate performance. While this demonstrates an effective transfer of information from feedback trials to test trials, the remaining biases indicate that a compression of visual space is still taking place. The correct visuomotor mapping, therefore, could not be learned. We speculate that an accurate reconstruction of the scene at movement onset may not actually be needed. Instead, the online monitoring of the hand position relative to the object and the final contact with the object are sufficient for a successful execution of a grasp.
We tested whether the control of real actions in an ever-changing environment would show any dependence on prior actions elicited by instructional cues a few seconds before. To this end, adaptation of the functional magnetic resonance imaging signal was measured while human participants sequentially grasped three-dimensional objects in an event-related design, using grasps oriented along the same or a different axis of either the same or a different object shape. We found that the bilateral anterior intraparietal sulcus, an area previously linked to the control of visually guided grasping, along with other areas of the intraparietal sulcus, the left supramarginal gyrus, and the right mid superior parietal lobe showed clear adaptation following both repeated grasps and repeated objects. In contrast, the left ventral premotor cortex and the bilateral dorsal premotor cortex, the two premotor areas often linked to response selection, action planning, and execution, showed only grasp-selective adaptation. These results suggest that, even in real action guidance, parietofrontal areas demonstrate differential involvement in visuomotor processing dependent on whether the action or the object has been previously experienced.
Grasping is essential for primates in numerous behaviors, and a variety of grasping techniques are used for obtaining food. Among humans, several studies have shown that object properties such as size or form influence grasp patterns. Other work has examined individual variability in grasping strategies across age, and several studies have revealed similarities between great apes and humans in grip types. Results on hand preference, however, remain equivocal, and for non-human primates object parameters and age effects are rarely tested together, even though this is an important methodological consideration. The present study sought to determine whether grip type varied according to the age of the subject, the species (human versus chimpanzee), the size of the object and the hand used. Frame-by-frame analysis of hand contact strategies and statistical results indicated that (1) adults of both species used fewer contact strategies than juveniles, with greater variability of contacts for small than for large objects; (2) young juvenile chimpanzees and human children follow a similar development of grip types, i.e. more frequent use of precision grips with age; (3) juvenile chimpanzees used all five categories of grip, whereas adults used the "thumb-fingerpad(s)" grip more than "precision grips", in addition to the "power grip"; and (4) right-hand preference was greater for the grasping of small objects with "precision grips" in adults of both species. These results are discussed in relation to neurology, morphology and the evolution of grasping.
People have often been reported to look near their index finger's contact point when grasping. They have only been reported to look near the thumb's contact point when grasping an opaque object at eye height with a horizontal grip, that is, when the region near the index finger's contact point is occluded. To examine to what extent being able to see the digits' final trajectories influences where people look, we compared gaze when reaching to grasp a glass of water or milk that was placed at eye or hip height. Participants grasped the glass and poured its contents into another glass on their left. Surprisingly, most participants looked nearer to their thumb's contact point. To examine whether this was because gaze was biased toward the position of the subsequent action, which was to the left, we asked participants in a second experiment to grasp a glass and either place it or pour its contents into another glass either to their left or right. Most participants' ga...
In this paper we analyze in depth the bending and grasping movements of the thumb, index and middle fingertips against an object. The finger movement data are measured using a low-cost data glove, "GloveMAP", and characterized by the principal components of finger postural movement. For supervised classification, a collection of grasping features is categorized using the "EigenFingers" of the fingertip bending and grasping data. Finger activity is classified using Principal Component Analysis (PCA) for feature extraction and dimensionality reduction on the fingertip movement dataset. For grouping the grasping features, a Best Matching Unit method (PCA-BMU) is proposed, in which Euclidean distance selects the best-matching (winning) neuron for each group of features. The experimental results show that the first two principal components allow for distinguishing between three-finger grasps and represent the features needed for appropriate manipulation of the grasped object.
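The "EigenFingers" idea is PCA applied to finger-bend vectors. A minimal sketch on synthetic data (the three grasp-type mean bend patterns and the sensor noise level are assumptions, not GloveMAP measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finger-bend samples: 3 sensors (thumb, index, middle), 150
# trials drawn from three grasp types with different mean bend patterns.
means = np.array([[0.2, 0.8, 0.8],    # assumed bend pattern, grasp type A
                  [0.9, 0.9, 0.1],    # assumed bend pattern, grasp type B
                  [0.9, 0.9, 0.9]])   # assumed bend pattern, grasp type C
labels = np.repeat([0, 1, 2], 50)
data = means[labels] + rng.normal(scale=0.05, size=(150, 3))

# PCA via SVD of the centred data; the leading principal components play the
# role of the paper's "EigenFingers".
centred = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt[:2].T          # projection onto the first two PCs
explained = (S**2)[:2].sum() / (S**2).sum()
print(f"variance explained by two components: {explained:.3f}")
```

With three class means spanning a plane and low noise, the first two components capture nearly all the variance, which is why a two-component projection suffices to separate the grasp types in this sketch.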
Hemispatial neglect is a neurological disorder characterized by a failure to represent information appearing in the hemispace contralateral to a brain lesion. In addition to the perceptual consequences of hemispatial neglect, several authors have reported that hemispatial neglect impairs visually guided movements. Others have reported that the extent of the impairment depends on the type of visually guided task. Finally, in some cases, neglect has been shown to impair visual perception without affecting visuomotor control in relation to the very same stimuli. While neglect patients may be able to successfully pick up an object they have difficulty perceiving in its entirety, it does not mean that they are picking up the object in the same way that a neurologically intact individual would. In the current study, patients with hemispatial neglect were presented with irregularly shaped objects, directly in front of them, that lacked clear symmetry and required an analysis of their entir...
ModGrasp, an open-source virtual and physical rapid-prototyping framework that allows for the design, simulation and control of low-cost sensorised modular hands, was previously introduced by our research group. ModGrasp combines the rapid-prototyping approach with the modular concept, making it possible to model different manipulator configurations. Virtual and physical prototypes can be linked in a real-time one-to-one correspondence. In this work, the ModGrasp communication pattern is improved, becoming more modular, reliable and robust. In the previous version of the framework, each finger of the prototype was controlled by a separate controller board. In this work, each module, or finger link, is independent, being controlled by a self-reliant slave controller board. In addition, a newly redesigned multi-threading and multi-level software architecture with a hierarchical logical organisation is presented. In this regard, a new programming paradigm is delineated. The new archit...
The paper argues that an account of understanding should take the form of a Carnapian explication and acknowledge that understanding comes in degrees. An explication of objectual understanding is defended, which helps to make sense of the cognitive achievements and goals of science. The explication combines a necessary condition with three evaluative dimensions: an epistemic agent understands a subject matter by means of a theory only if the agent commits herself sufficiently to the theory of the subject matter, and to the degree that the agent grasps the theory (i.e., is able to make use of it), the theory answers to the facts and the agent's commitment to the theory is justified. The threshold for outright attributions of understanding is determined contextually. The explication has descriptive as well as normative facets and allows for the possibility of understanding by means of non-explanatory (e.g., purely classificatory) theories.
This paper presents an approach for grasp planning and grasp force optimization for polygon-shaped objects. The proposed approach is an intelligent rule-based method that determines the minimal number of fingers and the minimal contact forces required to securely grasp a rigid body in the presence of friction and under the action of an external force. This is accomplished by finding optimal contact points on the object boundary along with the minimal number of fingers required to achieve this goal. Our system handles each object independently: it generates a rule base for each object based on adequate values of external forces. The system uses a genetic algorithm as its search mechanism and a rule-evaluation mechanism called bucket brigade for reinforcement learning of the rules. The process consists of two stages: learning, then retrieval. Retrieval acts online, utilizing previous knowledge and experience embedded in the rule base. If retrieval fails in some case, learning is resumed until that case is resolved. The algorithm is very general and can be adapted to any object shape. The resulting rule base varies in size according to the degree of difficulty and dimensionality of the grasping problem.
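The genetic-search stage can be sketched in miniature. Everything below is a toy stand-in: contact points are indices on a discretised circle rather than a polygon, the fitness is a crude force-balance proxy (contact normals should cancel), and the bucket-brigade rule learning is omitted entirely.

```python
import math
import random

random.seed(0)

N, K = 36, 3    # boundary samples, fingers (assumed values)
normals = [(math.cos(2 * math.pi * i / N), math.sin(2 * math.pi * i / N))
           for i in range(N)]

def fitness(genome):
    # Higher is better: contact normals should roughly cancel out.
    sx = sum(normals[g][0] for g in genome)
    sy = sum(normals[g][1] for g in genome)
    return -(sx * sx + sy * sy)

pop = [random.sample(range(N), K) for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:20]                     # elitist selection
    children = []
    for _ in range(20):
        a, b = random.sample(survivors, 2)
        child = list(dict.fromkeys(a[:2] + b))[:K]   # crossover with dedup
        if random.random() < 0.3:
            child[random.randrange(K)] = random.randrange(N)   # mutation
        children.append(child)
    pop = survivors + children

best = max(pop, key=fitness)
print(round(-fitness(best), 4))   # residual force imbalance of the best grasp
```

Three random unit normals have an expected squared sum of 3; the elitist search drives this residual toward zero (the ideal here is an equilateral contact arrangement). The paper's system wraps such a search with per-object rule bases and bucket-brigade credit assignment, which this sketch does not attempt.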
The past decade has seen great progress in the development of adaptive, low-complexity, underactuated robot hands. An advantage of these hands is that they use under-constrained mechanisms and compliance, which facilitate grasping even under significant object pose uncertainties. However, for many minimal contact grasps such as precision fingertip grasps, these hands tend to move the object after a grasp is secured, to an equilibrium configuration determined by the elasticity of the mechanism and the contact forces exerted through the robot fingertips. In this paper, we present a methodology based on constrained optimization methods for deriving stable, minimal effort grasps for underactuated robot hands and compensating for post-contact, in-hand parasitic object motions. To do so, we compute the imposed object motions for different object shapes and sizes and we synthesize appropriate robot arm trajectories that eliminate them. The approach allows for the computation of these grasps and motions even for hands with complex, flexure-based, compliant members. The effectiveness of the proposed methods is validated using a redundant robot arm (Barrett WAM) and a two fingered, compliant, underactuated robot hand (Yale Open Hand model T42), for a series of simulated and experimental paradigms.
The embodied cognition hypothesis suggests that motor and premotor areas are automatically and necessarily involved in understanding action language, as word conceptual representations are embodied. This transcranial magnetic stimulation (TMS) study explores the role of the left primary motor cortex in action-verb processing. TMS-induced motor-evoked potentials from right-hand muscles were recorded as a measure of M1 activity, while participants were asked either to judge explicitly
In this work, the open-source plugin OpenMRH is presented for the Open Robotics Automation Virtual Environment (OpenRAVE), a simulation environment for testing, developing and deploying motion planning algorithms. The proposed plugin allows for fast, automated generation of different modular hand models. OpenMRH combines virtual-prototyping and modular concepts. Each modular model is generated by applying dynamically generated code, which is consistent with the standard syntax expected by OpenRAVE for simulated models. In this way, once the desired model is generated, an instance of OpenRAVE can be launched and the model can be visualised. Alternatively, the modular models can be generated from user-defined input specified via a graphical user interface (GUI). The generated models can be used for testing, developing and deploying grasp or motion planning algorithms. Two case studies are considered to validate the efficiency of the proposed model generator. In the first case study, a modular robotic hand model is generated with OpenMRH by using user-defined input parameters. In the second case study, another hand model is generated with OpenMRH by using algorithmically defined input parameters.
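The code-generation idea behind such a modular hand generator can be sketched as follows: each module (here, a finger phalanx) is emitted as a link in an OpenRAVE-style KinBody XML description. This is an illustrative sketch only; the tag layout below is schematic and is not the exact syntax or API that OpenMRH produces.

```python
# Illustrative sketch of modular model code generation: emit one XML body
# per finger phalanx. Tag and attribute names are schematic stand-ins for
# an OpenRAVE-style KinBody description, not OpenMRH's actual output.

def generate_hand_xml(n_fingers, n_phalanges, phalanx_len=0.04):
    """Build a hand model string from modular parameters (toy example)."""
    links = []
    for f in range(n_fingers):
        for p in range(n_phalanges):
            links.append(
                '  <Body name="finger{}_phalanx{}" type="dynamic">\n'
                '    <Geom type="box"><Extents>{} 0.01 0.01</Extents></Geom>\n'
                '  </Body>'.format(f, p, phalanx_len / 2)
            )
    return '<KinBody name="modular_hand">\n{}\n</KinBody>'.format("\n".join(links))

# Analogous to the first case study: user supplies the module counts.
xml = generate_hand_xml(n_fingers=3, n_phalanges=2)
```

The same generator could be driven by algorithmically chosen parameters, mirroring the second case study, since the model is just a function of the modular inputs.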
Can viewing our own body modified in size reshape the bodily representation employed for interacting with the environment? This question was addressed here by exposing participants to either an enlarged, a shrunken, or an unmodified view of their own hand in a reach-to-grasp task toward a target of fixed dimensions. When presented with a visually larger hand, participants modified the kinematics of their grasping movement by reducing maximum grip aperture. This adjustment carried over even when the hand was rendered invisible in subsequent trials, suggesting a stable modification of the bodily representation employed for the action. The effect was specific to the size of the grip aperture, leaving the other features of the reach-to-grasp movement unaffected. Reducing the visual size of the hand did not induce the opposite effect, although individual differences were found, which possibly depended on the degree of each subject's reliance on visual input. A control experiment suggested that the effect exerted by the vision of the enlarged hand could not be merely explained by simple global visual rescaling. Overall, our results suggest that visual information pertaining to the size of the body is accessed by the body schema and is prioritized over the proprioceptive input for motor control.
RNA secondary structure prediction is one of the main issues in bioinformatics. It seeks to elucidate structurally conserved regions within a set of RNA sequences. Unfortunately, finding an accurate conserved structure is a very hard task. In the present study, the prediction problem is considered as a multi-objective optimization process in which the structural conservation and the sensitivity of the multiple alignment are optimized. The proposed method, called GRASPMORSA, is based on an aggregate function and a GRASP procedure. The initial solutions are obtained by using a random progressive local/global algorithm, and then they are refined by an iterative realignment. Experiments on large-scale data have shown the efficacy and effectiveness of the proposed method and its capacity to reach good-quality solutions.
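The optimization scheme the abstract describes (a GRASP loop scoring candidates with a weighted aggregate of two objectives, followed by a swap-based refinement in place of iterative realignment) can be sketched on a toy selection problem. The data, weights, and neighborhood below are invented for illustration and have nothing to do with actual RNA alignments.

```python
# Toy GRASP with an aggregate (weighted-sum) objective: pick K items
# maximizing w1*conservation + w2*sensitivity. Greedy randomized
# construction builds a start solution; local search refines it by swaps.
import random

ITEMS = [(5, 1), (3, 4), (4, 2), (2, 5), (1, 3)]  # (conservation, sensitivity), invented
W = (0.6, 0.4)  # aggregation weights (assumed)
K = 3           # solution size

def aggregate(sol):
    return sum(W[0] * ITEMS[i][0] + W[1] * ITEMS[i][1] for i in sol)

def greedy_randomized(alpha=0.3, rng=random):
    """Construction phase: pick randomly from a restricted candidate list."""
    sol, cand = set(), set(range(len(ITEMS)))
    while len(sol) < K:
        scored = sorted(cand, key=lambda i: -aggregate({i}))
        rcl = scored[: max(1, int(alpha * len(scored)))]
        i = rng.choice(rcl)
        sol.add(i); cand.remove(i)
    return sol

def local_search(sol):
    """Swap-based refinement (stand-in for the iterative realignment step)."""
    best, improved = set(sol), True
    while improved:
        improved = False
        for i in list(best):
            for j in set(range(len(ITEMS))) - best:
                cand = (best - {i}) | {j}
                if aggregate(cand) > aggregate(best):
                    best, improved = cand, True
    return best

random.seed(0)
best = max((local_search(greedy_randomized()) for _ in range(20)), key=aggregate)
```

The aggregate function collapses the two objectives into one scalar, which is the simplest way to steer a GRASP toward a Pareto-reasonable compromise; GRASPMORSA's actual objectives operate on multiple alignments rather than item scores.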
This study explored the neurophysiological mechanisms underlying the what-decision of planning and execution of an overt goal-related manual action. We aimed to differentiate cerebral activity, by means of event-related potentials (ERPs), between predominantly self-regulated and instructed actions. In a bar-transport task, participants were given free or specified choices about the initial grip and/or final goal. The ERPs for action execution differed between free- and specified-goal conditions, but not between free- and specified-grasp conditions. We found differential activity for the goal specification in mid-frontal, mid-central, and mid-parietal regions from −1100 to −700 ms and −500 to 0 ms time-locked to grasping and in anterior right regions from −1900 to −1400 ms time-locked to movement end. There was no differential activity for grasp specifications. These results indicated that neural activity differed between free and specified actions, but only for goal conditions, suggesting different modes of operation dependent on goal-relatedness. To our knowledge, this was the first study to differentiate cerebral activity and its temporal organization underlying the what-decision involved in overt goal-related actions. Our results support the ideomotor theory by showing that neural processes underlying action preparation and execution depend on the anticipated action goal.