Final Report
A SEMINAR REPORT
ON
“SELF DRIVING CARS USING ARTIFICIAL INTELLIGENCE”
Bachelor of Engineering
in
Computer Science and Engineering
Submitted By
Name: Poornima M
USN: 1BO16CS060
CERTIFICATE
This is to certify that the technical seminar work entitled “Self Driving Cars
Using Artificial Intelligence” is a bonafide work carried out by Poornima M
bearing the USN number 1BO16CS060 in partial fulfillment for the requirements
of Eighth Semester, Bachelor of Engineering in Computer Science and
Engineering of Visvesvaraya Technological University, Belagavi during the
year 2019-20. It is certified that all corrections and suggestions indicated for the
internal assessment have been incorporated in the report. This seminar report has
been approved as it satisfies the academic requirements in respect of the technical
seminar work prescribed for the Bachelor of Engineering degree.
The satisfaction and euphoria that accompanies the successful completion of any task would
be incomplete without mentioning the people who made it possible. With deep gratitude, I
acknowledge all the guidance and encouragement, which served as a beacon of light and
crowned my efforts with success. I thank each one of them for their valuable support.
I express heartfelt gratitude and humble thanks to Dr. Sasikumar M, Head of Department,
CSE, Brindavan College of Engineering, for the constant encouragement and help in carrying
out this seminar work.
I would like to express humble thanks to my seminar guide, Mr. Avinash N, Assistant Professor,
CSE, Brindavan College of Engineering, Bangalore, for guiding me and enabling me to complete
my seminar work successfully.
I take this opportunity to express sincere gratitude to Seminar Coordinator Mr. Avinash N,
Assistant Professor, CSE, Brindavan College of Engineering, Bangalore for encouraging me
throughout the seminar work.
I would like to mention my special thanks to all the faculty members of Computer/Information
Science and Engineering Department, Brindavan College of Engineering, Bangalore for their
invaluable support and guidance. Finally, I thank my family and friends, who have been
constantly encouraging and inspiring me throughout, and without whom this report would
never have seen the light of day.
POORNIMA M 1BO16CS060
Abstract
Since the invention of the car there has been a close relationship between humans and
automobiles. The invention of the car established the automobile industry, and the car reduced
the time needed to travel from one place to another. As more cars came onto the roads, many
accidents began to occur due to lack of driving knowledge, drunk driving and so on. With this in
view, Google took up a major project, the Google Driverless Car, in which Google put Artificial
Intelligence together with the Google Maps view into the car. An input video camera is fixed
beside the rear-view mirror inside the car, a LIDAR sensor is fixed on the top of the vehicle, a
RADAR sensor sits on the front of the vehicle, and a position sensor attached to one of the rear
wheels helps locate the car's position on the map.
TABLE OF CONTENTS
3 EXISTING SYSTEM
6 SYSTEM WORKING
7 APPLICATIONS
CONCLUSION
REFERENCES
LIST OF FIGURES
CHAPTER 1
INTRODUCTION
The inventions of the integrated circuit and later, the microcomputer, were major
factors in the development of electronic control in automobiles. The importance of the
microcomputer cannot be overemphasized as it is the “brain” that controls many systems in
today’s cars. For example, in a cruise control system, the driver sets the desired speed and
enables the system by pushing a button. A microcomputer then monitors the actual speed of
the vehicle using data from velocity sensors. The actual speed is compared to the desired
speed and the controller adjusts the throttle as necessary.
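The compare-and-adjust loop described above can be sketched as a simple proportional controller. This is an illustration only, not an actual automotive implementation: real cruise controls use PID control with actuator limits, and the gain value here is an arbitrary assumption.

```python
def throttle_adjustment(desired_speed, actual_speed, gain=0.05):
    """Proportional cruise control: the throttle correction grows with the speed error."""
    error = desired_speed - actual_speed  # positive when the car is too slow
    return gain * error

# Below the set speed -> positive correction; at the set speed -> no correction.
print(throttle_adjustment(100.0, 90.0))   # prints 0.5
print(throttle_adjustment(100.0, 100.0))  # prints 0.0
```

A negative return value (when the car is above the set speed) would correspond to easing off the throttle.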
A completely autonomous vehicle is one in which a computer performs all the tasks
that the human driver normally would. Ultimately, this would mean getting in a car, entering
the destination into a computer, and enabling the system. From there, the car would take
over and drive to the destination with no human input. The car would be able to sense its
environment and make steering and speed changes as necessary. This scenario would
require all of the automotive technologies mentioned above: lane detection to aid in passing
slower vehicles or exiting a highway; obstacle detection to locate other cars, pedestrians,
animals, etc.; adaptive cruise control to maintain a safe speed; collision avoidance to avoid
hitting obstacles in the road way; and lateral control to maintain the car’s position on the
roadway. In addition, sensors would be needed to alert the car to road or weather conditions
to ensure safe traveling speeds. For example, the car would need to slow down in snowy or
icy conditions. We perform many tasks while driving without even thinking about it.
Completely automating the car is a challenging task and is a long way off. However,
advances have been made in the individual systems.
Google’s robotic car is a fully autonomous vehicle equipped with radar and
LIDAR, and as such it can take in much more information, process it much more quickly and
reliably, make a correct decision about a complex situation, and then implement that
decision far better than a human can. Google anticipates that the increased accuracy of its
automated driving system could help reduce the number of traffic-related injuries and
deaths.
The Google car system combines information gathered from Google Street View with
artificial intelligence software that combines input from video cameras inside the car, a LIDAR
sensor on top of the vehicle, radar sensors on the front of the vehicle and a position sensor
attached to one of the rear wheels that helps locate the car's position on the map. As of 2010,
Google has tested several vehicles equipped with the system, driving 140,000 miles (230,000
km) without any human intervention, the only accident occurring when one of the cars was rear-
ended while stopped at a red light. Google anticipates that the increased accuracy of its automated
driving system could help reduce the number of traffic-related injuries and deaths, while using
energy and space on roadways more efficiently.
The combination of these technologies and other systems such as video based lane
analysis, steering and brake actuation systems, and the programs necessary to control all of
the components will together form a fully autonomous system. The problem is winning
people's trust in allowing a computer to drive a vehicle for them; because of this, research
and testing must be done over and over again to assure a near fool-proof final product. The
product will not be accepted instantly, but over time, as the systems become more widely
used, people will realize its benefits.
CHAPTER 2
LITERATURE SURVEY
Year: 2015
➢ Autonomous cars are the future smart cars, anticipated to be the driverless, efficient and
crash-avoiding ideal urban cars of the future.
➢ To reach this goal, automakers have started working in this area to realize the potential
and solve the current challenges so as to reach the expected outcome.
➢ In this regard, the first challenge is to customize and embed existing technology in
conventional vehicles to turn them into something close to the expected autonomous car.
➢ This transition of conventional vehicles into an autonomous vehicle by adopting and
implementing different upcoming technologies is discussed in this paper.
Year: 2019
➢ In recent days, technology has become an integral part of everyday life, and Artificial
Intelligence has become part and parcel of both manufacturing and service systems.
➢ Computerized object recognition is the future of automobiles. To go from human
object recognition to computerized object recognition is a huge step.
➢ Autonomous cars also bring advantages in fuel efficiency, comfort, and convenience,
which has led to vast research worldwide.
➢ One key factor for success in this field is creating better obstacle-detecting sensors,
and Artificial Intelligence (AI) paves the way for incorporating them.
Author: Jun Li, Hong Cheng, Hongliang Guo & Shaobo Qiu
Year: 2018
➢ With rapid economic development, intelligent vehicles are in urgent need. Along with
the sustained and rapid growth of car ownership, almost every country is facing severe
traffic congestion, road safety and environmental pollution problems. Relying on
advanced AI techniques, we can solve the aforementioned problems.
➢ At the beginning of 2015, Carnegie Mellon University and Uber secretly set up a
high-technology research and development institution in Pittsburgh to research and
develop automatic driving vehicles.
➢ The advanced AI technologies include deep neural network, recurrent neural network,
spiking neuron network and transfer learning and reinforcement learning on multi-
domain and multi-time level.
➢ In an AV, the driving environment perception, cognition map, path planning and strategy
control are equally important tasks. How to drive like a human being is the most
important task.
Title: Advancement of Driverless Cars and Heavy Vehicles using Artificial Intelligence
Year: 2019
➢ An autonomous vehicle has many external sensors connected to it. Through these
external sensors it perceives the environment and makes decisions accordingly.
➢ The basic requirements for an autonomous vehicle to work are cameras and sensory
circuits such as radar, laser, etc.
➢ The autonomous vehicle makes use of these components to interpret the world around it;
in technical terms this is called creating a DIGITAL MAP, using computer vision, a field
of machine learning and artificial intelligence.
➢ The very first step toward implementing this is object detection.
CHAPTER 3
EXISTING SYSTEM
The 2004 Grand Challenge was something of a mess. Each team grabbed some combination of
the sensors and computers available at the time, wrote their own code, and welded their own
hardware, looking for the right recipe that would take their vehicle across 142 miles of sand and
dirt of the Mojave. The most successful vehicle went just seven miles. Most crashed, flipped, or
rolled over within sight of the starting gate. But the race created a community of people—geeks,
dreamers, and lots of students not yet jaded by commercial enterprise—who believed the robot
drivers people had been craving for nearly forever were possible, and who were suddenly driven
to make them real. They came back for a follow-up race in 2005 and proved that making a car
drive itself was indeed possible: five vehicles finished the course. By the 2007 Urban Challenge,
the vehicles were not just avoiding obstacles and sticking to trails but following traffic laws,
merging, parking, even making safe, legal U-turns. When Google launched its self-driving car
project in 2009, it started by hiring a team of DARPA Challenge veterans.
CHAPTER 4
PROPOSED SYSTEM
1. The first step is basically to feed in hundreds of images of the different objects a self-driving
car will mostly see, such as traffic lights, people, footpaths, fellow vehicles and many more.
A. Computer vision:
An autonomous vehicle must drive its way to the desired destination without any external
help, and it has to do so safely by avoiding obstacles. Autonomous vehicles make use of sensors
such as radar and LIDAR to perceive their surroundings and build a digital map of them, from
which they make their own way.
B. Object detection:
Object detection is a technique under computer vision that is used to detect or locate
instances of an object in images or videos. Object detection typically leverages machine
learning and artificial intelligence. An advanced driver assistance system (ADAS) uses
obstacle-avoidance algorithms to perform operations such as detecting road lanes, detecting
pedestrians, and detecting traffic signals, and takes decisions accordingly. Object detection
technology can also be used in video surveillance and image processing.
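As a minimal sketch of how a detector's output might be consumed downstream, the snippet below filters detections by confidence. The detection records, labels and threshold here are hypothetical examples, not the output format of any particular library.

```python
def filter_detections(detections, min_confidence=0.5):
    """Keep only the detections whose confidence meets the threshold."""
    return [d for d in detections if d["confidence"] >= min_confidence]

# Hypothetical raw detections from one camera frame: label, confidence, box corners.
raw = [
    {"label": "pedestrian",    "confidence": 0.91, "box": (12, 40, 60, 140)},
    {"label": "car",           "confidence": 0.34, "box": (200, 80, 320, 160)},
    {"label": "traffic light", "confidence": 0.77, "box": (150, 5, 170, 45)},
]
kept = filter_detections(raw)
print([d["label"] for d in kept])  # the low-confidence 'car' is dropped
```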
C. Preprocessing data:
We made our own convolutional neural network to work with, and we will be using it
with the YOLO algorithm. For implementing computer vision in our model, we will use ImageAI,
a Python computer vision library for object detection and processing. We used Fig 4.1 as a
sample image to demonstrate how YOLO looks through the image only once: the algorithm goes
through the image and divides it into an A×A grid. Fig 4.2 shows the grid of the sample image
(3×3).
After dividing, YOLO performs image classification and localization on each grid cell and
predicts bounding boxes and probabilities. The colourful square frames in Fig 4.2 are bounding
boxes, while the text written above them is the probability of an object appearing in each box.
In order to train our model, we passed labelled data to it. The model divides the image into the
3×3 grid of Fig 4.2; each grid cell is treated as a class, and there are three classes from which an
object is to be classified. Fig 4.3 is the processed image from the model. From the image we can
see the classes are pedestrians, cars and footpath. For each grid cell the model makes a vector,
and for each object in a grid cell there is a label in that vector. If there is no object in a grid cell
the score will be zero; otherwise it will equal the Intersection over Union score. The main thing
YOLO does is build a CNN network to predict a (7, 7, 30) tensor; it uses the CNN to reduce the
dimensions to 7×7.
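The Intersection over Union score mentioned above can be computed directly from two bounding boxes. This is the standard formulation; boxes are assumed here to be given as (x1, y1, x2, y2) corner coordinates.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the overlap rectangle (empty if the boxes do not intersect).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 4x4 boxes overlapping in a 2x2 corner: intersection 4, union 28.
print(iou((0, 0, 4, 4), (2, 2, 6, 6)))  # prints 0.142857...
```

A predicted box is typically counted as a correct detection when its IoU with the ground-truth box exceeds a threshold such as 0.5.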
4.1 FLOWCHART
The first step is Start. Data computation then takes place, using AI on the data collected
from Google Maps and the hardware sensors, taking in the target, its path and its direction. The
next part is to look for an obstacle in the path to the target. If no obstacle is found, the car can
proceed to the destination; if it has arrived at the destination it stops, and if it has not arrived at
the target, data computation takes place again. If an obstacle is found on the way to the target,
the sensors are activated again to determine the obstacle, and the car turns in an appropriate
direction. If the obstacle is avoided, control goes back to data computation, from there to
following the target, then obstacle checking, and the same process repeats. If the obstacle is not
avoided, the sensors are activated again and the same process takes place.
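The flowchart's loop can be sketched as a toy simulation on a one-dimensional road. The road encoding and the step-around avoidance below are hypothetical simplifications for illustration, not the actual control logic.

```python
def drive(road, start, destination):
    """Follow the flowchart: compute the next step, avoid obstacles, stop on arrival."""
    pos, trace = start, []
    while pos != destination:
        nxt = pos + 1                     # data computation: next step toward the target
        if road[nxt] == "X":              # obstacle found: sensors determine it
            trace.append(("avoid", nxt))  # turn in an appropriate direction
            nxt += 1                      # simplistic avoidance: step around the cell
        pos = nxt
        trace.append(("at", pos))
    trace.append("stop")                  # arrived at the destination: stop
    return trace

# Road with one obstacle at cell 2; start at cell 0, destination at cell 4.
print(drive([".", ".", "X", ".", "."], 0, 4))
```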
most metals, by seawater, and by wet ground. Some of these reflections make the use
of radar altimeters possible. The radar signals that are reflected back towards the
transmitter are the desirable ones that make radar work. If the object is moving either
closer or farther away, there is a slight change in the frequency of the radio waves, due
to the Doppler effect.
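The Doppler effect mentioned here is quantifiable: for a monostatic radar, the two-way frequency shift for a target with radial speed v is approximately 2·v·f0/c. A quick check in Python (the 30 m/s closing speed is just an example figure):

```python
C = 299_792_458.0  # speed of light, m/s

def radar_doppler_shift(radial_speed_mps, carrier_hz):
    """Two-way Doppler shift seen by a monostatic radar: 2 * v * f0 / c."""
    return 2.0 * radial_speed_mps * carrier_hz / C

# A car closing at 30 m/s seen on a 24 GHz carrier shifts the return by ~4.8 kHz.
print(round(radar_doppler_shift(30.0, 24e9)))  # prints 4803
```

Measuring this shift in the returned signal is what lets a radar report the relative speed of another vehicle directly.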
Radar receivers are usually, but not always, in the same location as the
transmitter. Although the reflected radar signals captured by the receiving antenna are
usually very weak, these signals can be strengthened by the electronic amplifiers that
all radar sets contain. More sophisticated methods of signal processing are also nearly
always used in order to recover useful radar signals.
The weak absorption of radio waves by the medium through which they pass is
what enables radar sets to detect objects at relatively-long ranges at which other
electromagnetic wavelengths, such as visible light, infrared light, and ultraviolet light,
are too strongly attenuated. Such things as fog, clouds, rain, falling snow, and sleet
that block visible light are usually transparent to radio waves. Certain, specific radio
frequencies that are absorbed or scattered by water vapor, raindrops, or atmospheric
gases (especially oxygen) are avoided in designing radars except when detection of
these is intended.
Finally, radar relies on its own transmissions, rather than light from the Sun or
the Moon, or from electromagnetic waves emitted by the objects themselves, such as
infrared wavelengths (heat). This process of directing artificial radio waves towards
objects is called illumination, regardless of the fact that radio waves are completely
invisible to the human eye and cameras. High-tech radar systems are associated with
digital signal processing and are capable of extracting objects from very high noise
levels.
Here we use the M/A-COM SRS radar. Resistant to inclement weather and
harsh environmental conditions, these 24 GHz ultra-wideband (UWB) radar sensors
provide object detection and tracking. Parking assistance can be provided by rear-
mounted sensors with a 1.8 m range that can detect small objects in front of large
objects and measure direction of arrival. Sensors with the ability to scan out up to 30 m
provide warning of imminent collision so that airbags can be armed and seat restraints
pre-tensioned. The figure shows the RADAR waves in the system.
➢ Lidar
LIDAR (Light Detection And Ranging also LADAR) is an optical remote
sensing technology that can measure the distance to, or other properties of a target by
illuminating the target with light, often using pulses from a laser. LIDAR technology
has applications in geomatics, archaeology, geography, geology, geomorphology,
seismology, forestry, remote sensing and atmospheric physics, as well as in airborne
laser swath mapping (ALSM), laser altimetry and LIDAR Contour Mapping. The
acronym LADAR (Laser Detection and Ranging) is often used in military contexts. The
term "laser radar" is sometimes used, even though LIDAR does not employ microwaves
or radio waves and is therefore not, strictly speaking, a form of radar.
LIDAR uses ultraviolet, visible, or near infrared light to image objects and can
be used with a wide range of targets, including non-metallic objects, rocks, rain,
chemical compounds, aerosols, clouds and even single molecules. A narrow laser beam
can be used to map physical features with very high resolution. LIDAR has been used
extensively for atmospheric research and meteorology.
1. Laser — 600–1000 nm lasers are most common for non-scientific applications. They
are inexpensive but since they can be focused and easily absorbed by the eye the
maximum power is limited by the need to make them eye-safe. Eye-safety is often a
requirement for most applications. A common alternative, 1550 nm lasers, are eye-safe
at much higher power levels since this wavelength is not focused by the eye, but the
detector technology is less advanced and so these wavelengths are generally used at
longer ranges and lower accuracies. They are also used for military applications as 1550
nm is not visible in night vision goggles unlike the shorter 1000 nm infrared laser.
Airborne topographic mapping lidars generally use 1064 nm diode pumped YAG lasers,
while bathymetric systems generally use 532 nm frequency doubled diode pumped
YAG lasers because 532 nm penetrates water with much less attenuation than does
1064 nm.
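Ranging with these pulsed lasers follows the same time-of-flight arithmetic as radar: the measured delay covers the round trip, so range is c·t/2. A minimal sketch (the 400 ns delay is just an example figure):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(round_trip_seconds):
    """Range from a pulsed-laser time of flight: the pulse travels out and back."""
    return C * round_trip_seconds / 2.0

# A return arriving 400 ns after the pulse left corresponds to roughly 60 m.
print(round(lidar_range(400e-9), 1))  # prints 60.0
```

The nanosecond scale of these delays is why LIDAR receivers need very fast detectors and timing electronics.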
2. Scanner and optics — How fast images can be developed is also affected by the
speed at which it can be scanned into the system. There are several options to scan the
azimuth and elevation, including dual oscillating plane mirrors, a combination with a
polygon mirror, a dual axis scanner. Optic choices affect the angular resolution and
range that can be detected. A hole mirror or a beam splitter are options to collect a return
signal.
3. Photo detector and receiver electronics — two main photo detector technologies
are used in lidars: solid state photo detectors, such as silicon avalanche photodiodes, or
photomultipliers. The sensitivity of the receiver is another parameter that has to be
balanced in a LIDAR design.
4. Position and navigation systems — LIDAR sensors that are mounted on mobile
platforms such as airplanes or satellites require instrumentation to determine the
absolute position and orientation of the sensor. Such devices generally include a Global
Positioning System receiver and an Inertial Measurement Unit (IMU). 3D imaging can
be achieved using both scanning and non-scanning systems. "3D gated viewing laser
radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated
camera.
➢ Global Positioning System
The Global Positioning System (GPS) is a space-based global navigation
satellite system (GNSS) that provides location and time information in all weather,
anywhere on or near the Earth where there is an unobstructed line of sight to four or
more GPS satellites. A GPS receiver calculates its position by precisely timing the
signals sent by GPS satellites high above the Earth. Some applications use the precise
GPS time itself; these include time transfer, traffic signal timing, and synchronization
of cell phone base stations.
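The position fix works by trilateration: each timed signal yields a distance, and the distances pin down the receiver. A real GPS receiver solves in 3D for position plus its own clock bias using at least four satellites; the 2D, bias-free sketch below just illustrates the geometry, with arbitrary beacon positions chosen for the example.

```python
import math

def trilaterate_2d(p1, r1, p2, r2, p3, r3):
    """Solve for (x, y) from three circle equations by subtracting pairs to linearize."""
    # Subtracting circle equations eliminates the quadratic terms, giving A.[x,y] = b.
    ax, ay = 2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])
    b1 = r1**2 - r2**2 + p2[0]**2 - p1[0]**2 + p2[1]**2 - p1[1]**2
    cx, cy = 2 * (p3[0] - p2[0]), 2 * (p3[1] - p2[1])
    b2 = r2**2 - r3**2 + p3[0]**2 - p2[0]**2 + p3[1]**2 - p2[1]**2
    det = ax * cy - ay * cx  # assumes the three beacons are not collinear
    x = (b1 * cy - b2 * ay) / det
    y = (ax * b2 - cx * b1) / det
    return x, y

# Receiver actually at (3, 4); measured distances to beacons at (0,0), (10,0), (0,10).
r1 = math.hypot(3, 4)
r2 = math.hypot(3 - 10, 4)
r3 = math.hypot(3, 4 - 10)
print(trilaterate_2d((0, 0), r1, (10, 0), r2, (0, 10), r3))
```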
➢ Position sensor
A position sensor is any device that permits position measurement. Here we use
a rotary encoder, also called a shaft encoder, an electro-mechanical device that
converts the angular position or motion of a shaft or axle to an analog or digital code.
The output of incremental encoders provides information about the motion of the shaft
which is typically further processed elsewhere into information such as speed, distance,
RPM and position. The output of absolute encoders indicates the current position of the
shaft, making them angle transducers. Rotary encoders are used in many applications
that require precise, unlimited shaft rotation—including industrial controls, robotics,
special purpose photographic lenses, computer input devices (such as opto mechanical
mice and trackballs), and rotating radar platforms.
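The count-to-angle and count-to-speed conversions described here are simple ratios. A sketch, where the 1024 counts-per-revolution figure is an assumed resolution rather than a spec of the actual sensor:

```python
def encoder_angle_deg(counts, counts_per_rev=1024):
    """Convert incremental encoder counts to a shaft angle in degrees."""
    return (counts % counts_per_rev) * 360.0 / counts_per_rev

def encoder_speed_rpm(delta_counts, dt_seconds, counts_per_rev=1024):
    """Shaft speed from the count change over one sampling interval."""
    revolutions = delta_counts / counts_per_rev
    return revolutions / dt_seconds * 60.0

print(encoder_angle_deg(256))        # quarter turn -> prints 90.0
print(encoder_speed_rpm(1024, 0.5))  # one revolution in half a second -> prints 120.0
```

On the car, the shaft speed of the rear wheel combines with the known wheel circumference to give distance travelled, which is what refines the car's position on the map.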
➢ Cameras
Google has used three types of car-mounted cameras in the past to take Street
View photographs. Generations 1–3 were used to take photographs in the United States.
The first generation was quickly superseded and images were replaced with images
taken with 2nd and 3rd generation cameras. Second generation cameras were used to
take photographs in Australia.
Google Street View displays images taken from a fleet of specially adapted cars. Areas not
accessible by car, like pedestrian areas, narrow streets, alleys and ski resorts, are sometimes
covered by Google Trikes (tricycles) or a snowmobile. On each of these vehicles there are
nine directional cameras for 360° views at a height of about 8.2 feet (2.5 meters), GPS units
for positioning, and three laser range scanners for measuring up to 50 meters across 180° in
front of the vehicle.
There are also 3G/GSM/Wi-Fi antennas for scanning 3G/GSM and Wi-Fi hotspots. Recently,
'high quality' images are based on open source hardware cameras from Elphel.
• Steering
• Brake
CHAPTER 6
SYSTEM WORKING
Sophisticated software then processes all this sensory input, plots a path, and sends instructions
to the car’s actuators, which control acceleration, braking, and steering. Hard-coded rules,
obstacle avoidance algorithms, predictive modeling, and object recognition help the software
follow traffic rules and navigate obstacles.
CHAPTER 7
APPLICATIONS
1. Taxi services:
Another business that would be strongly affected is taxi services, which are based solely
on driving around people who do not have a car or do not want to drive. This type of service
could lower the number of vehicles on the road: not everyone would have to own a car, since
people could simply request an autonomous car to come and bring them around. Taxis also drive
around cities and wait in busy areas for people to request a cab.
2. Shipping:
Autonomous vehicles will have a huge impact on the land shipping industry. One way to
transport goods on land is by freight trucks. There are thousands of freight trucks on the road
every day, driving for multiple days to reach their destinations. An autonomous truck would be
able to drive to its destination without having to stop for sleep, food, or anything besides more
fuel. All that is necessary is someone to load the vehicle and someone to unload it.
3. Military applications:
The Army has autonomous resupply trucks that can be operated by remote control or in
convoys in “leader-follower” mode. Keeping soldiers safe continues to be the main reason the
military enlists unmanned vehicles into its ranks, especially for resupply missions. Automated
navigation system with real time decision making capability of the system makes it more
applicable in war fields and other military applications.
PROS:
a. It is very helpful for physically challenged people who cannot drive.
b. It improves fuel efficiency, as these vehicles lower the number of vehicles on the road.
c. It reduces the time required for parking, as the car can park itself without any human
interaction.
CONS:
a. The cost to own this car will be huge, as it uses many technologies.
b. The introduction of this car to society can make people lose their jobs, for example taxi
drivers and truck drivers.
c. Computer malfunctions can happen, and even just a minor glitch could easily cause a
serious accident.
d. Hackers getting into the vehicle's software and controlling or affecting its operation
would be a major concern.
e. Autonomous vehicles have difficulty operating in certain types of weather: heavy rain
interferes with roof-mounted laser sensors, and snow can interfere with cameras.
CHAPTER 9
EVALUATION RESULT
The idea of driverless cars has been around for decades in the world of film and
television. Some portrayals have been good, some have been troubling, but all have inspired
others to work towards turning fiction into reality. Examples of autonomous vehicles in film
and TV include The Love Bug, the Knight Rider TV series, Christine, and Minority Report.
2. Healthcare:
Google and artificial intelligence startup care.ai announced a partnership Oct. 24 to bring
autonomous monitoring technology to hospital rooms to prevent avoidable falls, protocol
breaches and other medical errors, and improve staff efficiency. Each "Self-Aware Room" will
be equipped with an AI sensor that combines care.ai's machine learning platform and library of
human behavioral data with Google's Coral Edge Tensor Processing Unit.
3. Food Delivery:
Look in the sky: it's a bird, it's a plane; no, it is an autonomous drone. Not just any autonomous
drone, but one carrying a freshly cooked, juicy hamburger and some mouth-watering, crispy
French fries. Meanwhile, look down at the road below: is it a speeding locomotive? No, it's a
self-driving, driverless car that is going to be the landing pad for the fast-food-carrying
delivery drone.
4. Mobile Work:
With falling prices in renewable energy, electric autonomous vehicles (or EAV) will
increasingly resemble mobile offices supported by redesigned service stations that evolve to
support live-work lifestyles. Today’s digital nomads will flourish as new industries emerge to
serve their need for balancing work and play. Companies like Shanghai-based Yanfeng
Automotive Interiors are already beginning to explore the transformation of the car as new modes
of living and working become better integrated.
5. Construction:
AVs are already a reality in many controlled environments including mining and
farming. Driverless trucks are being used to move iron ore in mines in Australia, and the
Canadian energy company Suncor Energy is working with Japan's Komatsu Ltd to automate its
trucks. AVs will impact all construction equipment, including tractors, bulldozers, dump trucks,
cranes, and excavators.
6. Travel:
Cross-country travel is now a rite of passage. Autonomous vehicles (AVs) will make
recreational travel even more compelling as frictionless car travel becomes a safe and convenient
alternative to the hassles of plane and train travel. The expanding range of electric AVs will
radically disrupt the hotel industry as people simply choose to sleep and eat in their vehicles.
Business travelers will have the option to avoid taking domestic flights entirely even as new
generations of Americans begin to explore the possibilities of cross-country travel without the
need for car ownership.
4. Dragomir Anguelov, Carole Dulong, Daniel Filip, Christian Frueh, Stéphane Lafon,
"Google Street View: Capturing the World at Street Level", International Journal of
Engineering Research & Technology (IJERT), Vol. 43, Issue 6, pp. 32–38, 2011.
5. Julien Moras, Véronique Cherfaoui, Philippe Bonnifait, "A Lidar Perception Scheme for
Intelligent Vehicle Navigation", 11th International Conference on Control Automation
Robotics & Vision (ICARCV), pp. 1809–1814, 2010.
6. A. Frome et al., "Large-Scale Privacy Protection in Google Street View", Proc. 12th IEEE
Int'l Conf. Computer Vision (ICCV 09), 2009.