Humanoid Robots
A humanoid robot is a robot with its body shape built to resemble the human
body. The design may be for functional purposes, such as interacting with human
tools and environments, for experimental purposes, such as the study of bipedal
locomotion, or for other purposes. In general, humanoid robots have a torso, a head,
two arms, and two legs, though some forms of humanoid robots may model only part
of the body, for example, from the waist up. Some humanoid robots also have heads
designed to replicate human facial features such as eyes and mouths. Androids are
humanoid robots built to aesthetically resemble humans.
Purpose:
Humanoid robots are now used as research tools in several scientific areas.
Researchers study the human body structure and behavior (biomechanics) to build
humanoid robots. Conversely, the attempt to simulate the human body leads
to a better understanding of it. Human cognition is a field of study which is focused
on how humans learn from sensory information in order to acquire perceptual and
motor skills. This knowledge is used to develop computational models of human
behavior and it has been improving over time.
It has been suggested that very advanced robotics will facilitate the enhancement of
ordinary humans.
Although the initial aim of humanoid research was to build
better orthoses and prostheses for human beings, knowledge has since been transferred
between both disciplines. A few examples are powered leg prostheses for the
neuromuscularly impaired, ankle-foot orthoses, biologically realistic leg prostheses and
forearm prostheses.
Besides the research, humanoid robots are being developed to perform human tasks
like personal assistance, through which they should be able to assist the sick and
elderly, and dirty or dangerous jobs. Humanoids are also suitable for some
procedurally-based vocations, such as reception-desk administrators and automotive
manufacturing line workers. In essence, since they can use tools and operate
equipment and vehicles designed for the human form, humanoids could theoretically
perform any task a human being can, so long as they have the proper software.
However, the complexity of doing so is immense.
They are also becoming increasingly popular as entertainers. For example, Ursula, a
female robot, sings, plays music, dances and speaks to her audiences at Universal
Studios. Several Disney theme park shows utilize animatronic robots that look, move
and speak much like human beings. Although these robots look realistic, they have
no cognition or physical autonomy. Various humanoid robots and their possible
applications in daily life are featured in an independent documentary film
called Plug & Pray, which was released in 2010.
Humanoid robots, especially those with artificial intelligence algorithms, could be
useful for future dangerous and/or distant space exploration missions, without
the need to return to Earth once the mission is completed.
Body structure:
This structure involves a hollow, web-like body that can be 4D printed, i.e. smart
materials printed on a 3D printer. It can be built with various smart materials such as
shape-memory polymers, shape-memory alloys, and self-expanding and self-contracting
materials. The use of smart materials allows the robot to change its own shape and, in
principle, to transform into almost any shape. Since human body movement arises from
motion at the joints, structures such as the wrist, elbow, fingers, neck and back are
made of smart materials so that they can change shape and produce movement similar
to a human's. The material can be programmed to remember human-like movements and
to perform them when excited by an external stimulus such as light, heat or electricity.
The supporting structures that act as rigid elements, like bones in the human body, are
printed with hard polymers. The use of smart materials in the robot's body even allows
it to be folded and stored in a small bag; when required, applying heat or light makes
it take the robot's shape again.
With further advances, smart self-replicating materials and Micro-Electro-Mechanical
Systems (MEMS) can be used. This involves micro-sized actuators, sensors and
mechanisms along with magnetic and electromechanical components. The portion of the
humanoid that has to perform a movement can be designed with micro-actuators and
micro-magnets, so that magnetic forces actuate the motion. The complete system can be
programmed so that the MEMS devices interact with each other and produce the
movements of the robot.
Benefit:
1. Does not involve any complex mechanisms.
2. Does not require complex wiring systems.
3. Can replicate human body movement accurately.
4. Can be folded and stored in the smallest available space.
Applications:
1. Aerospace applications in satellites.
2. Automobiles, to perform the work of a driver.
3. Day-to-day human work in the house.
Sensors:
A sensor is a device that measures some attribute of the world. Being one of the three
primitives of robotics (the others being planning and acting), sensing plays an important role
in robotic paradigms.
Sensors can be classified according to the physical process with which they work or
according to the type of measurement information that they give as output. In this
case, the second approach was used.
Proprioceptive sensors
Proprioceptive sensors sense the position, orientation and speed of the
humanoid's body and joints.
In human beings, the otoliths and semicircular canals (in the inner ear) are used to
maintain balance and orientation. In addition, humans use their own proprioceptive
sensors (e.g. touch, muscle extension, limb position) to help with their orientation.
Humanoid robots use accelerometers to measure acceleration, from which
velocity can be calculated by integration; tilt sensors to measure inclination; force
sensors placed in the robot's hands and feet to measure contact force with the
environment; position sensors, which indicate the actual position of the robot (from
which velocity can be calculated by differentiation); or even speed sensors.
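As a rough sketch of the integration step mentioned above (the sensor values and the sampling interval are invented for illustration), velocity can be estimated from accelerometer samples with the trapezoidal rule:

```python
# Sketch: estimating velocity from accelerometer samples by numerical
# (trapezoidal) integration. Sample values and dt are hypothetical.

def integrate_acceleration(samples, dt, v0=0.0):
    """Return velocity estimates from acceleration samples taken every dt seconds."""
    velocities = [v0]
    for a_prev, a_next in zip(samples, samples[1:]):
        # Trapezoidal rule: average the two endpoint accelerations over the step.
        velocities.append(velocities[-1] + 0.5 * (a_prev + a_next) * dt)
    return velocities

accel = [0.0, 1.0, 1.0, 0.0]                 # m/s^2, invented readings
vel = integrate_acceleration(accel, dt=0.1)  # m/s, one estimate per sample
```

In practice, raw integration drifts as sensor noise accumulates, which is one reason real humanoids fuse accelerometer data with other proprioceptive sensors.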
Exteroceptive sensors
Arrays of tactels can be used to provide data on what has been touched. The Shadow
Hand uses an array of 34 tactels arranged beneath its polyurethane skin on each
fingertip. Tactile sensors also provide information about forces and torques
transferred between the robot and other objects.
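As a rough illustration of how a tactel array can localize contact (the grid, its size and the readings below are hypothetical, not the Shadow Hand's actual interface), the contact point can be estimated as the pressure-weighted centroid of the readings:

```python
# Sketch: locating a contact point on a hypothetical 2-D grid of tactels
# as the pressure-weighted centroid of the readings.

def contact_centroid(grid):
    """grid[row][col] holds a pressure reading; return the (row, col) centroid."""
    total = sum(sum(row) for row in grid)
    if total == 0:
        return None  # nothing is touching the array
    r = sum(i * v for i, row in enumerate(grid) for v in row) / total
    c = sum(j * v for row in grid for j, v in enumerate(row)) / total
    return (r, c)

readings = [
    [0, 0, 0],
    [0, 4, 4],
    [0, 0, 0],
]
centroid = contact_centroid(readings)  # falls between the two pressed tactels
```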
Vision refers to processing data from any modality which uses the electromagnetic
spectrum to produce an image. In humanoid robots it is used to recognize objects
and determine their properties. Vision sensors work most similarly to the eyes of
human beings. Most humanoid robots use CCD cameras as vision sensors.
Sound sensors allow humanoid robots to hear speech and environmental sounds,
and perform as the ears of the human being. Microphones are usually used for this
task.
Actuators
Actuators are the motors responsible for motion in the robot.
Humanoid robots are constructed in such a way that they mimic the human body, so
they use actuators that perform like muscles and joints, though with a different
structure. To achieve the same effect as human motion, humanoid robots use mainly
rotary actuators. They can be either electric, pneumatic, hydraulic, piezoelectric
or ultrasonic.
Hydraulic and electric actuators have a very rigid behavior and can only be made to
act in a compliant manner through the use of relatively complex feedback control
strategies. While electric coreless motor actuators are better suited for high speed
and low load applications, hydraulic ones operate well at low speed and high load
applications.
Piezoelectric actuators generate a small movement with a high force capability when
voltage is applied. They can be used for ultra-precise positioning and for generating
and handling high forces or pressures in static or dynamic situations.
Ultrasonic actuators are designed to produce movements in a micrometer order at
ultrasonic frequencies (over 20 kHz). They are useful for controlling vibration,
positioning applications and quick switching.
Pneumatic actuators operate on the basis of gas compressibility. As they are inflated,
they expand along the axis, and as they deflate, they contract. If one end is fixed, the
other will move in a linear trajectory. These actuators are intended for low speed and
low/medium load applications. Pneumatic actuators include
cylinders, bellows, pneumatic engines, pneumatic stepper motors
and pneumatic artificial muscles.
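Whatever the actuation technology, joint motion is usually closed around a feedback loop, as noted above for rigid hydraulic and electric actuators. A minimal sketch of a proportional-derivative (PD) position loop for a single rotary joint follows; the gains and the unit-inertia joint model are invented for illustration:

```python
# Sketch: PD position control of one rotary joint with a toy unit-inertia
# model. Gains (kp, kd), time step and dynamics are illustrative only.

def simulate_pd(target, steps=200, dt=0.01, kp=40.0, kd=6.0):
    """Drive a unit-inertia joint toward `target` (radians); return final angle."""
    angle, velocity = 0.0, 0.0
    for _ in range(steps):
        error = target - angle
        torque = kp * error - kd * velocity  # PD law: spring toward target, damp motion
        velocity += torque * dt              # semi-implicit Euler integration
        angle += velocity * dt
    return angle

final = simulate_pd(target=1.0)  # settles close to 1.0 rad after 2 simulated seconds
```

Compliance can then be tuned by lowering the proportional gain, which is the kind of feedback strategy the rigid hydraulic and electric actuators above require.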
GYNOIDS
...the great majority of robots were either machine-like, male-like or child-like for the
reasons that not only are virtually all roboticists male, but also that fembots posed
greater technical difficulties. Not only did the servo motor and platform have to be
‘interiorized’ (naizosuru), but the body [of the fembot] needed to be slender, both
extremely difficult undertakings.
Researchers have noted the connection between the design of feminine robots and
roboticists' assumptions about gendered appearance and labor. Fembots in Japan,
for example, are designed with slenderness and grace in mind, and they are
employed in ways that help to maintain traditional family structures and politics in a
nation that is seeing a population decline.
People also react to fembots in ways that may be attributed to gender stereotypes.
This research has been used to elucidate gender cues, clarifying which behaviors and
aesthetics elicit a stronger gender-induced response.
ATLAS
The design and production of Atlas was overseen by DARPA, an agency of
the United States Department of Defense, in cooperation with Boston Dynamics. One
of the robot's hands was developed by Sandia National Laboratories, while the other
was developed by iRobot. In 2013, DARPA program manager Gill Pratt compared the
prototype version of Atlas to a small child, saying that "a 1-year-old child can barely
walk, a 1-year-old child falls down a lot ... this is where we are right now."
Atlas is based on Boston Dynamics' earlier PETMAN humanoid robot, and has
four hydraulically-actuated limbs. Constructed of aluminum and titanium, it stands
approximately 5.9 feet tall, weighs 330 pounds (150 kg), and is illuminated with
blue LEDs. Atlas is equipped with two vision systems – a
laser rangefinder and stereo cameras, both controlled by an off-board computer –
and has hands with fine motor skill capabilities. Its limbs possess a total of
28 degrees of freedom. Atlas can navigate rough terrain and climb independently
using its arms and legs, although the 2013 prototype version was tethered to an
outside power supply.
In October 2013, Boston Dynamics uploaded a video showing that Atlas could withstand
being hit by projectiles and balance on one leg.
In 2014, Atlas robots programmed by six different teams competed in the DARPA
Robotics Challenge to test the robot's ability to perform various tasks, including
getting in and out of a vehicle and driving it, opening a door, and using a power tool.
A variety of other robots also competed. The contest was inspired by the
2011 Fukushima Daiichi nuclear disaster, and carried a US$2 million prize for the
winning team.
In the 2015 DARPA Robotics Challenge Finals, the Atlas run by IHMC Robotics (named
Running Man) came second behind the Korean team KAIST and their robot DRC-Hubo
by a margin of six minutes, completing the entire course in a time of 50:26.
Atlas, The Next Generation
On February 23, 2016, Boston Dynamics released a video of a new version of the Atlas robot
on YouTube. The new version of Atlas is designed to operate both outdoors and
inside buildings. It is specialized for mobile manipulation and is very adept at
walking over a wide range of terrain, including snow. It is electrically powered and
hydraulically actuated. It uses sensors in its body and legs to balance, and it
uses LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain, help
with navigation, and manipulate objects, even when the objects are being moved.
This version of Atlas is about 175 cm (5 ft 9 in) tall (about a head shorter than the
DRC Atlas) and weighs 180 lb (82 kg).
ASIMO
ASIMO (an acronym for Advanced Step in Innovative Mobility) is a humanoid
robot created by Honda in 2000. It is currently displayed in the Miraikan
museum in Tokyo, Japan.
Development:
Honda began developing humanoid robots in the 1980s, including several prototypes
that preceded ASIMO. It was the company's goal to create a walking robot. E0 was
the first bipedal (two-legged) model produced as part of the Honda E series,
an early experimental line of self-regulating humanoid walking robots with
wireless movements, created between 1986 and 1993. This was followed by the Honda
P series of robots produced from 1993 through 1997. The research made on the E-
and P-series led to the creation of ASIMO. Development began at Honda's Wako
Fundamental Technical Research Center in Japan in 1999 and ASIMO was unveiled
in October 2000.
Form:
ASIMO stands 130 cm (4 ft 3 in) tall and weighs 54 kg (119 lb). Research conducted
by Honda found that the ideal height for a mobility assistant robot was between
120 cm and the height of an average adult, which is conducive to operating door
knobs and light switches. ASIMO is powered by a rechargeable 51.8 V lithium-ion
battery with an operating time of one hour. Switching from a nickel-metal hydride battery in
2004 increased the amount of time ASIMO can operate before recharging. ASIMO
has a three-dimensional computer processor that was created by Honda and consists
of three stacked dies: a processor, a signal converter and memory. The computer
that controls ASIMO's movement is housed in the robot's waist area and can be
controlled by a PC, wireless controller, or voice commands.
Abilities:
ASIMO has the ability to recognize moving objects, postures, gestures, its
surrounding environment, sounds and faces, which enables it to interact with
humans. The robot can detect the movements of multiple objects by using visual
information captured by two camera "eyes" in its head and also determine distance
and direction. This feature allows ASIMO to follow or face a person when
approached. The robot interprets voice commands and human gestures, enabling it
to recognize when a handshake is offered or when a person waves or points, and then
respond accordingly. ASIMO's ability to distinguish between voices and other sounds
allows it to identify its companions. ASIMO is able to respond to its name and
recognizes sounds associated with a falling object or collision. This allows the robot
to face a person when spoken to or look towards a sound. ASIMO responds to
questions by nodding or providing a verbal answer in different languages and can
recognize approximately 10 different faces and address them by name.
There are sensors that assist in autonomous navigation. The two cameras inside the
head are used as a visual sensor to detect obstacles. The lower portion of the torso
has a ground sensor comprising one laser sensor and one infrared sensor. The
laser sensor is used to detect the ground surface. The infrared sensor, with automatic
shutter adjustment based on brightness, is used to detect pairs of floor markings to
confirm the navigable paths of the planned map. The pre-loaded map and the
detection of floor markings help the robot to precisely identify its present location
and continuously adjust its position. There are front and rear ultrasonic sensors
to sense the obstacles. The front sensor is located at the lower portion of the torso
together with the ground sensor. The rear sensor is located at the bottom of the
backpack.
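The navigation behavior described above can be sketched as one decision step of a sense-decide loop. The sensor interface, threshold and action names below are hypothetical, not Honda's actual control logic:

```python
# Sketch: one cycle of a sense-decide navigation loop combining a front
# ultrasonic range reading with floor-marking detection. All names,
# thresholds and actions are invented for illustration.

def navigation_step(ultrasonic_front_m, marking_pair_seen, on_planned_path):
    """Return an action string for one control cycle."""
    if ultrasonic_front_m < 0.5:
        return "stop"              # an obstacle is too close ahead
    if marking_pair_seen and not on_planned_path:
        return "correct-position"  # re-align with the pre-loaded map
    return "advance"               # path is clear and position is confirmed

action = navigation_step(2.0, marking_pair_seen=True, on_planned_path=True)
```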
CYBORG
The Cyborg Antenna is a new sensory organ created to extend the human limitations
of color perception. It is implanted and osseointegrated in Harbisson’s head and it
sprouts from within his occipital bone. It has been permanently attached to
Harbisson’s head since 2004 and it allows him to feel and hear colours as audible
vibrations inside his head, including colours invisible to the human eye such as
infrareds and ultraviolets. The antenna also allows internet connection and therefore
the reception of colour from other sensors or from satellites. Harbisson began
developing the antenna at college in 2003 with Adam Montandon and it was
upgraded by Peter Kese and Matias Lizana, among others. The antenna implant
surgery was repeatedly rejected by bioethical committees but was carried out
regardless by anonymous doctors.
Internet of Senses:
Harbisson has given permission to five friends, one on each continent, to send colours,
images, videos or sounds directly into his head. If he receives colours while asleep his
friends can colour and alter his dreams. The first public demonstration of a skull
transmitted image was broadcast live on Al Jazeera's chat show The Stream. The first
person to make a phone call directly into his skull was Ruby Wax.
SOPHIA
Features:
Cameras within Sophia's eyes combined with computer algorithms allow her to see.
She can follow faces, sustain eye contact, and recognize individuals. She is able to
process speech and have conversations using a natural language subsystem. Around
January 2018 Sophia was upgraded with functional legs and the ability to walk.
Sophia is conceptually similar to the computer program ELIZA, which was one of the
first attempts at simulating a human conversation. The software has been
programmed to give pre-written responses to specific questions or phrases, like
a chatbot. These responses are used to create the illusion that the robot is able to
understand conversation, including stock answers to questions like "Is the door open
or shut?" The information is shared in a cloud network which allows input and
responses to be analysed with blockchain technology.
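The pre-written response mechanism described above behaves much like a keyword-matched lookup table. A minimal sketch follows; the phrases and answers are invented for illustration, not Sophia's actual data:

```python
# Sketch: a tiny chatbot-style lookup table of pre-written responses.
# Keys and answers are hypothetical examples.

RESPONSES = {
    "is the door open or shut": "The door appears to be shut.",
    "hello": "Hello! Nice to meet you.",
}

def reply(utterance, fallback="I'm not sure how to answer that."):
    # Normalize: lowercase and trim trailing punctuation before the lookup.
    key = utterance.lower().strip("?!. ")
    return RESPONSES.get(key, fallback)

answer = reply("Is the door open or shut?")  # matches the stock answer
```

Such a table creates the illusion of understanding only for inputs it anticipates; anything else falls through to the fallback, which is the central limitation of this approach compared with genuine language understanding.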
David Hanson has said that Sophia would ultimately be a good fit to serve in
healthcare, customer service, therapy and education. Sophia runs on artificially
intelligent software that is constantly being trained in the lab, so her conversations
are likely to get faster, Sophia's expressions are likely to have fewer errors, and she
should answer increasingly complex questions with more accuracy.
What's the Future Role for Humanoid Robots?
It has been well documented that there will be an increase in the number of robots over
the next decade. According to the Boston Consulting Group, by 2025, robots will
perform 25% of all labor tasks. This is due to improvements in performance and
reduction in costs. The United States, along with Canada, Japan, South Korea, and
the United Kingdom, will be leading the way in robot adoption. The four industries
leading the charge are computer and electronic products; electrical equipment and
appliances; transportation equipment; and machinery. They will account for 75% of
all robotic installations by 2025.
The growth of robotics will also affect the service industry. A recent report from
Berg Insight expects the installed base of service robots to reach 264.3 million units
by 2026. In 2016, 29.6 million service robots were installed worldwide. The robots in
the service industry broke down into the following groups:
Floor cleaning robots accounted for 80% of total service robots, with 23.8 million
units.
University of Southern California Professor Maja Matarić has been pairing robots
with patients since 2014. Her robots have helped children with autism copy the motions
of socially assistive robots and, in 2015, the robots assisted stroke recovery patients
with upper-extremity exercises. The patients were more responsive to the exercises when
prompted and motivated by the robot.
ACKNOWLEDGEMENT
It gives us a great sense of pleasure to present this report on the B.Tech Seminar
undertaken during the third year of the B.Tech course. We owe a special debt of
gratitude to Professor Ms. Saumya Yadav, Department of Computer Science and
Engineering, IEC College of Engineering and Technology, for her constant support and
guidance throughout the course of our work. Her sincerity, thoroughness and
perseverance have been a constant source of inspiration for us. It is only through her
cognizant efforts that our endeavours have seen the light of day.
Signature:
Roll No.:1509010166
Date :
DECLARATION
I hereby declare that this submission is my own work and that, to the best of my
knowledge and belief, it contains no material previously published or written by
another person nor material which to a substantial extent has been accepted for the
award of any other degree or diploma of the university or other institute of higher
learning, except where due acknowledgment has been made in the text.
Signature
Date
SEMINAR REPORT ON
HUMANOID ROBOTS
By
RISHABH SINGH
(1509010166)
Bachelor of Technology
in
Computer Science
IEC COLLEGE OF ENGINEERING & TECHNOLOGY
April, 2018
REFERENCES
Asada, H. and Slotine, J.-J. E. (1986). Robot Analysis and Control. Wiley.
Arkin, Ronald C. (1998). Behavior-Based Robotics. MIT Press.
Brady, M., Hollerbach, J.M., Johnson, T., Lozano-Perez, T. and Mason, M. (1982), Robot
Motion: Planning and Control. MIT Press.
Horn, Berthold, K. P. (1986). Robot Vision. MIT Press.
Craig, J. J. (1986). Introduction to Robotics: Mechanics and Control. Addison Wesley.
Everett, H. R. (1995). Sensors for Mobile Robots: Theory and Application. AK Peters.
Kortenkamp, D., Bonasso, R., Murphy, R. (1998). Artificial Intelligence and Mobile Robots.
MIT Press.
Poole, D., Mackworth, A. and Goebel, R. (1998), Computational Intelligence: A Logical
Approach. Oxford University Press.
Russell, R. A. (1990). Robot Tactile Sensing. Prentice Hall.
Russell, S. J. & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Prentice Hall.
http://www.techentice.com/manav-indias-first-3d-printed-robot-from-iit-mumbai/
http://www.livemint.com/Industry/rc86Iu7h3rb44087oDts1H/Meet-Manav-Indias-first-3Dprinted-humanoid-robot.html
https://en.wikipedia.org/wiki/Humanoid_robot
http://seminartopics.info/engineering-topics/computer-it-topics/humanoid-robot/
www.robolaw.eu/...files/.../robolaw_d6.2_guidelinesregulatingrobotics_20140922.pdf
http://asimo.honda.com/
http://www.hansonrobotics.com/robot/sophia/
https://www.bostondynamics.com/atlas
https://en.wikipedia.org/wiki/Neil_Harbisson