Final Report PDF

The document is a seminar report on self-driving cars using artificial intelligence submitted by Poornima M. It discusses Google's driverless car project which uses technology like artificial intelligence, video cameras, LIDAR sensors, radar sensors, and position sensors. The sensors gather environmental data that is processed using AI to enable the car to drive autonomously by detecting lanes, obstacles, traffic conditions and avoiding collisions without human input. Google aims to reduce traffic accidents through increased accuracy of its automated driving system. The report provides an overview of the key components and functioning of self-driving cars.


VISVESVARAYA TECHNOLOGICAL UNIVERSITY

“JNANA SANGAMA” BELAGAVI-590018, KARNATAKA

A SEMINAR REPORT
ON
“SELF DRIVING CARS USING ARTIFICIAL INTELLIGENCE”

Submitted in partial fulfillment of the requirements for the award of

Bachelor of Engineering
in
Computer Science and Engineering

Submitted By
Name: Poornima M
USN: 1BO16CS060

Under The Guidance of


Mr. Avinash N
Asst. Prof., Dept. of CSE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Brindavan College of Engineering


DWARAKANAGAR, BAGALUR MAIN ROAD, YELAHANKA,
BANGALORE-63, 2019-20
Brindavan College of Engineering
Department of Computer Science and Engineering

CERTIFICATE

This is to certify that the technical seminar work entitled “Self Driving Cars
Using Artificial Intelligence” is a bonafide work carried out by Poornima M,
bearing the USN 1BO16CS060, in partial fulfillment of the requirements of the
Eighth Semester, Bachelor of Engineering in Computer Science and
Engineering of Visvesvaraya Technological University, Belagavi, during the
year 2019-20. It is certified that all corrections and suggestions indicated for the
internal assessment have been incorporated in the report. This seminar report has
been approved as it satisfies the academic requirements in respect of the technical
seminar work prescribed for the Bachelor of Engineering degree.

………………………. ……………………… ……………………….


Seminar Guide Seminar Coordinator Head Of Department
Mr. Avinash N Mr. Avinash N Dr. Sasikumar .M
Department of CSE Department of CSE Department of CSE
Acknowledgement

The satisfaction and euphoria that accompany the successful completion of any task would
be incomplete without mentioning the people who made it possible. With deep gratitude, I
acknowledge all those whose guidance and encouragement served as a beacon of light and
crowned my efforts with success. I thank each one of them for their valuable support.

I express my sincere thanks to Dr. R Prabhakara, Principal, Brindavan College of Engineering,
Bangalore, for providing the necessary facilities and motivation to carry out the seminar work
successfully.

I express my heartfelt gratitude and humble thanks to Dr. Sasikumar .M, Head of Department,
CSE, Brindavan College of Engineering, for the constant encouragement and help to carry out
the seminar work successfully.

I would like to express my humble thanks to my seminar guide, Mr. Avinash N, Assistant
Professor, CSE, Bangalore, for guiding me and facilitating the completion of my seminar work
successfully.

I take this opportunity to express sincere gratitude to the Seminar Coordinator, Mr. Avinash N,
Assistant Professor, CSE, Brindavan College of Engineering, Bangalore, for encouraging me
throughout the seminar work.

I would like to extend my special thanks to all the faculty members of the Computer/Information
Science and Engineering Department, Brindavan College of Engineering, Bangalore, for their
invaluable support and guidance. Finally, I thank my family and friends, who have constantly
encouraged and inspired me throughout; without them this report would never have seen the
light of day.

POORNIMA M 1BO16CS060
i
Abstract

Since the invention of the car there has been a close relationship between humans and cars:
the automobile industry was established around it, and the time needed to travel from one place
to another was greatly reduced. As cars filled the roads, many accidents began to occur due to
lack of driving skill, drunk driving, and so on. With this in view, Google took up a major project,
the Google Driverless Car, in which it put Artificial Intelligence technology, together with the
Google Maps view, into the car. An input video camera is fixed beside the rear-view mirror
inside the car, a LIDAR sensor is mounted on top of the vehicle, RADAR sensors are placed on
the front of the vehicle, and a position sensor attached to one of the rear wheels helps locate the
car's position on the map.

ii
TABLE OF CONTENTS

Chapter no Title Page No.


1 INTRODUCTION 1-2

2 LITERATURE SURVEY 3-4

3 EXISTING SYSTEM 5

4 PROPOSED SYSTEM 6-8

5 CONTROL UNIT 9-18

6 SYSTEM WORKING 19

7 APPLICATIONS 20-21

8 PROS AND CONS 22

9 EVALUATION RESULTS 23-25

CONCLUSION

REFERENCES

iii
LIST OF FIGURES

Figure No: Topic Page No:


1.1 Google car 2
3.1 A Photographic History of Self-Driving Cars 5
4.1 Sample image used 6
4.2 Gridded image 7
4.3 Processed Image 7
4.4 Elements of the Grid 7
4.1.1 Obstacle avoidance algorithm flow chart 8
5.1 Radar 9
5.2 RADAR waves in autonomous cars 11
5.3 Lidar used for 3D imaging 12
5.4 3-D map of car surroundings 13
5.5 Google Map 14
5.6 Street View camera system 15
5.7 Street View 17
5.8 Hardware assembly of the system 18
6.1 Working 19
7.1 Taxi services 20
7.2 Shipping 20
7.3 Military Applications 21
7.4 Transportation in hazardous places 21
9.1 Film and TV 23
9.2 Healthcare 23
9.3 Food Delivery 24
9.4 Mobile Work 24
9.5 Construction 25
9.6 Travel 25

iv
CHAPTER 1
INTRODUCTION
The inventions of the integrated circuit and later, the microcomputer, were major
factors in the development of electronic control in automobiles. The importance of the
microcomputer cannot be overemphasized as it is the “brain” that controls many systems in
today’s cars. For example, in a cruise control system, the driver sets the desired speed and
enables the system by pushing a button. A microcomputer then monitors the actual speed of
the vehicle using data from velocity sensors. The actual speed is compared to the desired
speed and the controller adjusts the throttle as necessary.
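That closed loop can be sketched in a few lines. This is only an illustrative proportional controller: the gain `kp`, the units, and the throttle range are assumed values, not taken from any real cruise-control system.

```python
def cruise_control_step(desired_speed, actual_speed, throttle, kp=0.05):
    """One iteration of a simple proportional cruise controller.

    The microcomputer compares the actual speed (from velocity sensors)
    against the driver-set speed and nudges the throttle to close the gap.
    """
    error = desired_speed - actual_speed        # km/h difference
    throttle += kp * error                      # proportional correction
    return max(0.0, min(1.0, throttle))         # throttle clamped to [0, 1]

# Car at 90 km/h with the driver having set 100 km/h: throttle opens further.
new_throttle = cruise_control_step(100.0, 90.0, 0.30)
```

A real controller would add integral and derivative terms to remove steady-state error, but the compare-and-correct structure is the same.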
A completely autonomous vehicle is one in which a computer performs all the tasks
that the human driver normally would. Ultimately, this would mean getting in a car, entering
the destination into a computer, and enabling the system. From there, the car would take
over and drive to the destination with no human input. The car would be able to sense its
environment and make steering and speed changes as necessary. This scenario would
require all of the automotive technologies mentioned above: lane detection to aid in passing
slower vehicles or exiting a highway; obstacle detection to locate other cars, pedestrians,
animals, etc.; adaptive cruise control to maintain a safe speed; collision avoidance to avoid
hitting obstacles in the road way; and lateral control to maintain the car’s position on the
roadway. In addition, sensors would be needed to alert the car to road or weather conditions
to ensure safe traveling speeds. For example, the car would need to slow down in snowy or
icy conditions. We perform many tasks while driving without even thinking about it.
Completely automating the car is a challenging task and is a long way off. However,
advances have been made in the individual systems.
Google’s robotic car is a fully autonomous vehicle which is equipped with radar and
LIDAR and such can take in much more information, process it much more quickly and
reliably, make a correct decision about a complex situation, and then implement that
decision far better than a human can. Google anticipates that the increased accuracy of its
automated driving system could help reduce the number of traffic-related injuries and
deaths.
SELF DRIVING CARS USING ARTIFICIAL INTELLIGENCE

The Google car system combines information gathered from Google Street View with
artificial intelligence software that merges input from video cameras inside the car, a LIDAR
sensor on top of the vehicle, radar sensors on the front of the vehicle and a position sensor
attached to one of the rear wheels that helps locate the car's position on the map. As of 2010,
Google has tested several vehicles equipped with the system, driving 140,000 miles (230,000
km) without any human intervention, the only accident occurring when one of the cars was rear-
ended while stopped at a red light. Google anticipates that the increased accuracy of its automated
driving system could help reduce the number of traffic-related injuries and deaths, while using
energy and space on roadways more efficiently.

Fig 1.1: Google Car

The combination of these technologies and other systems such as video based lane
analysis, steering and brake actuation systems, and the programs necessary to control all of
the components will make up a fully autonomous system. The main problem is winning
people's trust in allowing a computer to drive a vehicle for them; because of this, research
and testing must be repeated over and over to assure a near fool-proof final product. The
product will not be accepted instantly, but over time, as the systems become more widely
used, people will realize its benefits.

Dept of CSE, BrCE 2019-20 Page 2


CHAPTER 2

LITERATURE SURVEY

Title: “Autonomous vehicles: The future of automobiles”

Author: M V Rajasekhar, Anil Kumar Jaswal

Year: 2015

➢ Autonomous cars are the anticipated smart cars of the future: driverless, efficient,
crash-avoiding ideal urban cars.
➢ To reach this goal, automakers have started working in this area to realize the potential
and solve the current challenges, so as to reach the expected outcome.
➢ In this regard, the first challenge is to customize and embed existing technology in
conventional vehicles to translate them into something near the expected autonomous car.
➢ This transition of conventional vehicles into autonomous vehicles by adopting and
implementing different upcoming technologies is discussed in this paper.

Title: “Artificial Intelligence in Autonomous Vehicles”


Author: Vinyas D Sagar, Dr T S Nanjundeswaraswamy

Year: 2019

➢ In recent times, technology has become an integral part of everyday life, and Artificial
Intelligence has become part and parcel of both manufacturing and service systems.
➢ Computerized object recognition is the future of automobiles. Going from human
object recognition to computerized object recognition is a huge step.
➢ Autonomous cars also bring advantages in fuel efficiency, comfort, and
convenience, leading to vast research worldwide.
➢ One key factor for success in this field is creating better obstacle-detecting
sensors, and Artificial Intelligence (AI) paves the way for incorporating this.

Title: “Survey on Artificial Intelligence for Vehicles”

Author: Jun Li, Hong Cheng, Hongliang Guo & Shaobo Qiu

Year: 2018

➢ With rapid economic development, intelligent vehicles are urgently needed. Along with
the sustained and rapid growth of car ownership, almost every country is facing severe
traffic congestion, road safety and environmental pollution problems. Relying on
advanced AI techniques, we can address these problems.
➢ At the beginning of 2015, Carnegie Mellon University and Uber secretly set up a
high-technology research and development center in Pittsburgh to research and
develop automated driving vehicles.
➢ The advanced AI technologies include deep neural networks, recurrent neural networks,
spiking neural networks, transfer learning and reinforcement learning at multi-
domain and multi-time levels.
➢ In an AV, driving environment perception, cognitive mapping, path planning and strategy
control are equally important tasks [42,43,44]. How to drive like a human
being is the most important task.

Title: “Advancement of Driverless Cars and Heavy Vehicles using Artificial Intelligence”

Author: Balika J. Chelliah, Vishal Chauhan, Shivendra Mishra, Vivek Sharma

Year: 2019

➢ An autonomous vehicle has many external sensors connected to it. Through these
external sensors it perceives the environment and makes decisions accordingly.
➢ The basic requirements for an autonomous vehicle to work are cameras and sensory
circuits such as radar, laser, etc.
➢ The autonomous vehicle makes use of these components to interpret the world around it;
in technical terms this is called creating a DIGITAL MAP, using computer vision, a field
of machine learning and artificial intelligence.
➢ The very first step toward implementing this is object detection.



CHAPTER 3
EXISTING SYSTEM



People have been dreaming about self-driving cars for nearly a century, but the first vehicle that
anyone really deemed “autonomous” was the Stanford Cart. First built in 1961, it could navigate
around obstacles using cameras and an early version of artificial intelligence by the early 70s.
One problem: it needed 10 to 15 minutes to plan every one-meter move.

Fig 3.1: A Photographic History of Self-Driving Cars

The 2004 Grand Challenge was something of a mess. Each team grabbed some combination of
the sensors and computers available at the time, wrote their own code, and welded their own
hardware, looking for the right recipe that would take their vehicle across 142 miles of sand and
dirt of the Mojave. The most successful vehicle went just seven miles. Most crashed, flipped, or
rolled over within sight of the starting gate. But the race created a community of people—geeks,
dreamers, and lots of students not yet jaded by commercial enterprise—who believed the robot
drivers people had been craving for nearly forever were possible, and who were suddenly driven
to make them real. They came back for a follow-up race in 2005 and proved that making a car
drive itself was indeed possible: five vehicles finished the course. By the 2007 Urban Challenge,
the vehicles were not just avoiding obstacles and sticking to trails but following traffic laws,
merging, parking, even making safe, legal U-turns. When Google launched its self-driving car
project in 2009, it started by hiring a team of DARPA Challenge veterans.
CHAPTER 4
PROPOSED SYSTEM



The main objective of this system is to train our own CNN model and to test hundreds of images
of different objects with our object detection algorithms. The specific objectives are:

1. To feed in hundreds of images of the different objects that a self-driving car will mostly
see, such as traffic lights, people, footpaths, fellow vehicles, and many more.

A. Computer vision:
An autonomous vehicle must drive its way to the desired destination without any external
help, and it has to do so safely by avoiding any obstacles. Autonomous vehicles make use of
sensors such as radars and lidars to perceive their surroundings and build a digital map of them,
in order to make their way on their own.

B. Object detection:
Object detection is a technique under computer vision that is used to detect or locate
instances of an object in images or videos. Object detection typically leverages machine
learning and artificial intelligence. An Advanced Driver Assistance System (ADAS) uses an
obstacle avoidance algorithm to perform operations such as detecting road lanes, detecting
pedestrians, detecting traffic signals, and taking decisions accordingly. Object detection
technology can also be used in video surveillance and image processing.

C. Preprocessing data:
We made our own convolutional neural network to work with. We will be using it with
the YOLO algorithm. For implementing computer vision in our model, we will be using ImageAI,
a Python computer vision library used for object detection and processing. We used Fig 4.1 as a
sample image to demonstrate how YOLO looks through the image only once: the algorithm
goes through the image and divides it into an A×A grid. Fig 4.2 shows the image grid of the
sample image (3×3).

Fig4.1:Sample image used



Fig 4.2: Gridded image

Fig 4.3: Processed Image

After dividing, YOLO implements image classification and localization on each grid cell and
predicts the bounding boxes and their probabilities. The colourful square frames in Fig 4.2 are
bounding boxes, while the text written above them is the probability of an object appearing in
each box. In order to train our model, we passed labelled data to it. The model divides the
image into a 3×3 grid (Fig 4.2); each grid cell is treated as a class, so there are three classes
from which an object is to be classified. Fig 4.3 is the processed image from the model. From
the image we can see the classes are pedestrians, cars, and footpath respectively. For each grid
cell the model makes a vector, and in each cell there is a label vector for each object. If there is
no object in a grid cell, the score will be zero; otherwise it will equal the Intersection over
Union (IoU) score. The main thing YOLO does is build a CNN network to predict a (7, 7, 30)
tensor; it uses the CNN to reduce the spatial dimension to 7×7.
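The Intersection over Union score used above can be computed directly from two bounding boxes. A minimal sketch, assuming the common `(x1, y1, x2, y2)` corner convention for boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (may be empty).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A predicted box that exactly matches the ground-truth label scores 1.0; disjoint boxes score 0.0, which is the "no object" case described above.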

Fig 4.4: Elements of the Grid




4.1 FLOWCHART

OBSTACLE AVOIDANCE ALGORITHM FLOW CHART

Fig 4.1.1: Obstacle avoidance algorithm flow chart

The first step is to start. Then data computation takes place, using AI on the data collected
from Google Maps and the hardware sensors, taking in the target, its path, and its direction.
The next part is to find any obstacle in the path to the target. If no obstacle is found, the car
can proceed toward the destination; if it has arrived at the destination, it stops, and if it has
not yet arrived, data computation takes place again. If an obstacle is found on the way to the
target, the sensors are activated again to determine the obstacle, and the car turns in the
appropriate direction. If the obstacle is avoided, control goes back to data computation, from
there to following the target, then obstacle checking, and so on. If the obstacle is not avoided,
the sensors are activated again and the same process repeats.
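The loop in the flow chart can be sketched on a toy one-dimensional road. Everything here is a hypothetical stand-in: real sensing, path planning, and steering are far richer, but the compute / check-obstacle / avoid / advance / stop cycle is the same.

```python
def navigate(start, target, obstacles, max_steps=50):
    """Follow the flow chart on a 1-D road of integer cells.

    Each iteration mirrors the chart: check whether we have arrived (stop),
    otherwise compute the next step toward the target, and if the sensors
    report an obstacle in that cell, "turn in the appropriate direction"
    (modelled here as hopping over the blocked cell).
    """
    pos, path = start, [start]
    for _ in range(max_steps):
        if pos == target:
            return path                # arrived at the destination: stop
        nxt = pos + 1                  # planned direction toward the target
        if nxt in obstacles:           # obstacle detected by the sensors
            nxt += 1                   # avoidance manoeuvre (sketch only)
        pos = nxt
        path.append(pos)               # back to data computation
    return path                        # give up after max_steps
```

For example, with an obstacle at cell 2, the car travels 0, 1, then skips to 3, 4, 5.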



CHAPTER 5
CONTROL UNIT

5.1 HARDWARE SENSORS


➢ Radar
Radar is an object-detection system which uses electromagnetic waves,
specifically radio waves, to determine the range, altitude, direction, or speed of both
moving and fixed objects such as aircraft, ships, spacecraft, guided missiles, motor
vehicles, weather formations, and terrain.

Fig 5.1 Radar


The radar dish, or antenna, transmits pulses of radio waves or microwaves which
bounce off any object in their path. The object returns a tiny part of the wave's energy
to a dish or antenna which is usually located at the same site as the transmitter. The
modern uses of radar are highly diverse, including air traffic control, radar astronomy,
air-defense systems, antimissile systems; nautical radars to locate landmarks and other
ships; aircraft anti collision systems; ocean-surveillance systems, outer-space
surveillance and rendezvous systems; meteorological precipitation monitoring;
altimetry and flight-control systems; guided-missile target-locating systems; and
ground-penetrating radar for geological observations. High tech radar systems are
associated with digital signal processing and are capable of extracting objects from very
high noise levels.
A radar system has a transmitter that emits radio waves called radar signals in
predetermined directions. When these come into contact with an object they are
usually reflected and/or scattered in many directions. Radio waves are reflected
especially well by materials of considerable electrical conductivity, such as
most metals, seawater, and wet ground. Some of these reflections make the use
of radar altimeters possible. The radar signals that are reflected back towards the
transmitter are the desirable ones that make radar work. If the object is moving either
closer or farther away, there is a slight change in the frequency of the radio waves, due
to the Doppler effect.
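Both relations in this paragraph reduce to short formulas: range follows from the pulse's round-trip time, and radial speed from the Doppler shift. A sketch, where the 24 GHz carrier below is chosen only because it matches the automotive sensors discussed later:

```python
C = 299_792_458.0  # speed of light in m/s

def radar_range_m(round_trip_s):
    """Range from the pulse round-trip time: the echo travels out and back,
    so the one-way distance is half the total path."""
    return C * round_trip_s / 2.0

def radial_speed_mps(carrier_hz, doppler_shift_hz):
    """Target speed toward or away from the radar from the two-way Doppler
    shift, using the non-relativistic approximation v = c * df / (2 * f)."""
    return C * doppler_shift_hz / (2.0 * carrier_hz)

# A 1 microsecond round trip puts the reflector roughly 150 m away.
distance = radar_range_m(1e-6)
# A 4 kHz shift on a 24 GHz carrier corresponds to about 25 m/s (~90 km/h).
speed = radial_speed_mps(24e9, 4000.0)
```

The factor of two in both formulas comes from the signal traversing the transmitter-to-target path twice.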
Radar receivers are usually, but not always, in the same location as the
transmitter. Although the reflected radar signals captured by the receiving antenna are
usually very weak, these signals can be strengthened by the electronic amplifiers that
all radar sets contain. More sophisticated methods of signal processing are also nearly
always used in order to recover useful radar signals.
The weak absorption of radio waves by the medium through which it passes is
what enables radar sets to detect objects at relatively-long ranges at which other
electromagnetic wavelengths, such as visible light, infrared light, and ultraviolet light,
are too strongly attenuated. Such things as fog, clouds, rain, falling snow, and sleet
that block visible light are usually transparent to radio waves. Certain, specific radio
frequencies that are absorbed or scattered by water vapor, raindrops, or atmospheric
gases (especially oxygen) are avoided in designing radars except when detection of
these is intended.
Finally, radar relies on its own transmissions, rather than light from the Sun or
the Moon, or from electromagnetic waves emitted by the objects themselves, such as
infrared wavelengths (heat). This process of directing artificial radio waves towards
objects is called illumination, regardless of the fact that radio waves are completely
invisible to the human eye or cameras.
Here we use the M/A-COM SRS radar. Resistant to inclement weather and
harsh environmental conditions, its 24 GHz ultra-wideband (UWB) radar sensors
provide object detection and tracking. Parking assistance can be provided by rear-
mounted sensors with a 1.8 m range that can detect small objects in front of large
objects and measure the direction of arrival. Sensors with the ability to scan out up to
30 m provide warning of imminent collision so that airbags can be armed and seat
restraints pre-tensioned. The figure shows the RADAR waves in the system.




Fig 5.2 RADAR waves in autonomous cars

➢ Lidar
LIDAR (Light Detection And Ranging also LADAR) is an optical remote
sensing technology that can measure the distance to, or other properties of a target by
illuminating the target with light, often using pulses from a laser. LIDAR technology
has application in geomatics, archaeology, geography, geology, geomorphology,
seismology, forestry, remote sensing and atmospheric physics, as well as in airborne
laser swath mapping (ALSM), laser altimetry and LIDAR Contour Mapping. The
acronym LADAR (Laser Detection and Ranging) is often used in military contexts. The
term "laser radar" is sometimes used even though LIDAR does not employ microwaves
or radio waves and is not therefore in reality related to radar.
LIDAR uses ultraviolet, visible, or near infrared light to image objects and can
be used with a wide range of targets, including non-metallic objects, rocks, rain,
chemical compounds, aerosols, clouds and even single molecules. A narrow laser beam
can be used to map physical features with very high resolution. LIDAR has been used
extensively for atmospheric research and meteorology.




Fig 5.3 Lidar used for 3D imaging


Downward-looking LIDAR instruments fitted to aircraft and satellites are used
for surveying and mapping. A recent example being the NASA Experimental Advanced
Research Lidar. In addition LIDAR has been identified by NASA as a key technology
for enabling autonomous precision safe landing of future robotic and crewed lunar
landing vehicles. Wavelengths in a range from about 10 micrometers to the UV (ca.250
nm) are used to suit the target. Typically light is reflected via backscattering.
There are several major components to a LIDAR system:

1. Laser — 600–1000 nm lasers are most common for non-scientific applications. They
are inexpensive but since they can be focused and easily absorbed by the eye the
maximum power is limited by the need to make them eye-safe. Eye-safety is often a
requirement for most applications. A common alternative, 1550 nm lasers, are eye-safe
at much higher power levels since this wavelength is not focused by the eye, but the
detector technology is less advanced, so these wavelengths are generally used at
longer ranges and lower accuracies. They are also used for military applications, as 1550
nm is not visible in night vision goggles, unlike the shorter 1000 nm infrared laser.
Airborne topographic mapping lidars generally use 1064 nm diode-pumped YAG lasers,
while bathymetric systems generally use 532 nm frequency-doubled diode-pumped
YAG lasers, because 532 nm penetrates water with much less attenuation than does
1064 nm.
2. Scanner and optics — How fast images can be developed is also affected by the
speed at which it can be scanned into the system. There are several options to scan the
azimuth and elevation, including dual oscillating plane mirrors, a combination with a




polygon mirror, a dual axis scanner. Optic choices affect the angular resolution and
range that can be detected. A hole mirror or a beam splitter are options to collect a return
signal.

Fig 5.4 3-D map of car surroundings

3. Photo detector and receiver electronics — two main photo detector technologies
are used in lidars: solid state photo detectors, such as silicon avalanche photodiodes, or
photomultipliers. The sensitivity of the receiver is another parameter that has to be
balanced in a LIDAR design.
4. Position and navigation systems — LIDAR sensors that are mounted on mobile
platforms such as airplanes or satellites require instrumentation to determine the
absolute position and orientation of the sensor. Such devices generally include a Global
Positioning System receiver and an Inertial Measurement Unit (IMU).3D imaging can
be achieved using both scanning and non-scanning systems. "3D gated viewing laser
radar" is a non-scanning laser ranging system that applies a pulsed laser and a fast gated
camera.
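Each LIDAR return is a range plus the beam's pointing angles, and converting returns into Cartesian points is what accumulates into a 3-D map of the surroundings like the one in Fig 5.4. A sketch of that conversion; the axis and angle conventions here are one common choice, not any particular sensor's datasheet:

```python
import math

def lidar_point(range_m, azimuth_deg, elevation_deg):
    """Convert one range/angle LIDAR return into an (x, y, z) point in the
    sensor frame, taken here as x forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# A return 10 m straight ahead at the horizon maps to (10, 0, 0).
point = lidar_point(10.0, 0.0, 0.0)
```

Running this over every return in a full scanner rotation yields a point cloud; registering successive clouds as the car moves builds the map.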
➢ Global Positioning System
The Global Positioning System (GPS) is a space-based global navigation
satellite system (GNSS) that provides location and time information in all weather,
anywhere on or near the Earth where there is an unobstructed line of sight to four or
more GPS satellites. A GPS receiver calculates its position by precisely timing the
signals sent by GPS satellites high above the Earth.




Fig 5.5 Google Map

Each satellite continually transmits messages that include


• The time the message was transmitted
• Precise orbital information (the ephemeris)
• The general system health and rough orbits of all GPS satellites
The receiver uses the messages it receives to determine the transit time of each
message and computes the distance to each satellite. These distances along with the
satellites' locations are used with the possible aid of trilateration, depending on which
algorithm is used, to compute the position of the receiver. This position is then
displayed, perhaps with a moving map display or latitude and longitude; elevation
information may be included. Many GPS units show derived information such as
direction and speed, calculated from position changes. Three satellites might seem
enough to solve for position, since space has three dimensions and a position near the
Earth's surface can be assumed. However, even a very small clock error, multiplied by
the very large speed of light (the speed at which satellite signals propagate), results in
a large positional error. Therefore receivers use four or more satellites to solve for the
receiver's location and time. The very accurately computed time is effectively hidden
by most GPS applications, which use only the location. A few specialized GPS




applications do however use the time; these include time transfer, traffic signal timing,
and synchronization of cell phone base stations.
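The trilateration step mentioned above is easiest to see in the plane, where "distance from known points" means intersecting circles, and subtracting the first circle equation from the others leaves a linear system. A 2-D sketch of only that geometric step; the real receiver works in 3-D and additionally solves for its clock error, which is why a fourth satellite is needed:

```python
def trilaterate_2d(p1, d1, p2, d2, p3, d3):
    """Position (x, y) from three known points and measured distances.

    Subtracting the circle equation of p1 from those of p2 and p3 cancels
    the quadratic terms, leaving a 2x2 linear system solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1            # nonzero when the points are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With noisy pseudoranges, real receivers solve the over-determined version of this system by least squares rather than exactly.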
➢ Position sensor
A position sensor is any device that permits position measurement. Here we use
a rotary encoder, also called a shaft encoder: an electro-mechanical device that
converts the angular position or motion of a shaft or axle to an analog or digital code.
The output of incremental encoders provides information about the motion of the shaft,
which is typically further processed elsewhere into information such as speed, distance,
RPM and position. The output of absolute encoders indicates the current position of the
shaft, making them angle transducers. Rotary encoders are used in many applications
that require precise unlimited shaft rotation, including industrial controls, robotics,
special-purpose photographic lenses, computer input devices (such as opto-mechanical
mice and trackballs), and rotating radar platforms.
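The encoder's count-to-angle and count-to-speed conversions are simple ratios. A sketch, where the 1024 counts-per-revolution resolution and 0.3 m wheel radius are assumed illustrative values, not the specification of the actual sensor:

```python
import math

def shaft_angle_deg(count, counts_per_rev=1024):
    """Shaft angle implied by an absolute encoder reading."""
    return 360.0 * (count % counts_per_rev) / counts_per_rev

def wheel_speed_mps(delta_count, dt_s, counts_per_rev=1024, wheel_radius_m=0.3):
    """Vehicle speed from incremental counts on a rear wheel:
    counts -> revolutions -> circumference distance, divided by elapsed time."""
    revolutions = delta_count / counts_per_rev
    return revolutions * 2.0 * math.pi * wheel_radius_m / dt_s

# One full wheel revolution (1024 counts) in 1 s on a 0.3 m wheel: ~1.88 m/s.
speed = wheel_speed_mps(1024, 1.0)
```

Integrating such wheel counts over time (odometry) is what lets the position sensor help locate the car on the map between GPS fixes.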
➢ Cameras
Google has used three types of car-mounted cameras in the past to take Street
View photographs. Generations 1–3 were used to take photographs in the United States.
The first generation was quickly superseded and images were replaced with images
taken with 2nd and 3rd generation cameras. Second generation cameras were used to
take photographs in Australia.

Fig 5.6 Street View camera system




The system is a rosette (R) of 15 small, outward-looking cameras using 5-megapixel
CMOS image sensors and custom, low-flare, controlled-distortion lenses.
The shadows caused by the 1st, 2nd and 4th generation cameras are occasionally
viewable in images taken in mornings and evenings. The new 4th generation HD
cameras will be used to completely replace all images taken with earlier generation
cameras.
Thus the total sensor components, assembled on the car, can be explained using
the figure below. All the components have already been explained.

5.2 LOGIC PROCESSING UNIT

➢ Google Street View


Google Street View is a technology featured in Google Maps and Google Earth that
provides panoramic views from various positions along many streets in the world. It was
launched on May 25, 2007, originally only in several cities in the United States, and has since
gradually expanded to include more cities and rural areas worldwide.

Google Street View displays images taken from a fleet of specially adapted cars. Areas not
accessible by car, such as pedestrian zones, narrow streets, alleys and ski resorts, are sometimes
covered by Google Trikes (tricycles) or a snowmobile. Each of these vehicles carries nine
directional cameras for 360° views at a height of about 8.2 feet (2.5 meters), GPS units for
positioning, and three laser range scanners that measure up to 50 meters across 180° in front
of the vehicle.

There are also 3G/GSM/Wi-Fi antennas for scanning 3G/GSM and Wi-Fi hotspots. More recently,
'high quality' images have been based on open-source hardware cameras from Elphel.


Fig 5.7 Street View


Where available, Street View images appear after zooming in beyond the highest
zoom level in maps and satellite images, and also by dragging a "pegman" icon onto
a location on a map. Using the keyboard or mouse, the horizontal and vertical viewing
direction and the zoom level can be selected. A solid or broken line in the photo shows
the approximate path followed by the camera car, and arrows link to the next photo in
each direction. At junctions and crossings of camera car routes, more arrows are shown.

➢ Artificial intelligence software


Artificial intelligence is the intelligence of machines and the branch of computer
science that aims to create it. AI textbooks define the field as "the study and design of
intelligent agents", where an intelligent agent is a system that perceives its environment
and takes actions that maximize its chances of success. John McCarthy, who coined the
term in 1956, defines it as "the science and engineering of making intelligent machines".
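The textbook definition above — an agent that perceives its environment and picks the action maximizing its chances of success — can be illustrated with a minimal sketch. The percepts, action set, and utility function here are invented for the example; Google's actual driving software is not public.

```python
# Illustrative only: a minimal "intelligent agent" loop matching the
# textbook definition. Percepts, actions, and scores are hypothetical.
def utility(percept, action):
    # Higher is better: braking is the right choice near an obstacle.
    if percept["obstacle_distance_m"] < 10:
        return 1.0 if action == "brake" else 0.0
    return 1.0 if action == "cruise" else 0.5

def choose_action(percept, actions=("cruise", "brake", "steer_left")):
    # The agent selects the action that maximizes its expected success.
    return max(actions, key=lambda a: utility(percept, a))

print(choose_action({"obstacle_distance_m": 5}))   # brake
print(choose_action({"obstacle_distance_m": 50}))  # cruise
```

A real self-driving stack replaces this hand-written utility with learned models and far richer percepts, but the perceive-evaluate-act structure is the same.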
The details of the software used here are a trade secret of Google. The hardware
components are placed in the vehicle boot, as shown below.


Fig 5.8 Hardware assembly of the system

5.3 PROCESSOR UNIT


➢ Xeon Processor
The Xeon processor is a multi-core enterprise processor built on 32-nanometer
process technology. It has up to 8 execution cores, each supporting two threads (Intel
Hyper-Threading). The main features of the Xeon processor are:

• 46-bit physical addressing and 48-bit virtual addressing

• A 32-KB instruction and 32-KB data first-level (L1) cache for each core

• A 256-KB shared instruction/data mid-level (L2) cache for each core

Two processors are needed here: one for handling real-time sensor values and one for
general operation.
➢ Cortex Coprocessors
Two separate Cortex-A9 processors are used for:

• Steering

• Braking

The ARM Cortex-A9 MPCore is a 32-bit multicore processor providing up to 4
cache-coherent Cortex-A9 cores, each implementing the ARM v7 instruction set
architecture. It is a high-performance ARM processor available in 1–4 core versions,
and it works on the high-speed AXI (Advanced eXtensible Interface) bus architecture.
Its main feature is increased peak performance for the most demanding applications.



CHAPTER 6

SYSTEM WORKING

Fig 6.1: working


Autonomous cars rely on sensors, actuators, complex algorithms, machine learning systems,
and powerful processors to execute software. Autonomous cars create and maintain a map of
their surroundings based on a variety of sensors situated in different parts of the vehicle. Radar
sensors monitor the position of nearby vehicles. Video cameras detect traffic lights, read road
signs, track other vehicles, and look for pedestrians. Lidar (light detection and ranging) sensors
bounce pulses of light off the car's surroundings to measure distances, detect road edges, and
identify lane markings. Ultrasonic sensors in the wheels detect curbs and other vehicles when
parking.
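The lidar ranging principle mentioned above is time-of-flight: a pulse travels to the target and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (function name is illustrative):

```python
# Time-of-flight ranging as used by lidar: distance = c * t / 2,
# because the pulse covers the distance twice (out and back).
C = 299_792_458.0  # speed of light in m/s

def lidar_distance_m(round_trip_time_s):
    return C * round_trip_time_s / 2.0

# A pulse returning after ~200 nanoseconds indicates a target ~30 m away.
print(round(lidar_distance_m(200e-9), 2))  # 29.98
```

Radar works on the same principle with radio waves, while ultrasonic parking sensors use sound, whose far lower speed makes them practical only at short range.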

Sophisticated software then processes all this sensory input, plots a path, and sends instructions
to the car’s actuators, which control acceleration, braking, and steering. Hard-coded rules,
obstacle avoidance algorithms, predictive modeling, and object recognition help the software
follow traffic rules and navigate obstacles.
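The sense-fuse-plan-actuate loop described above can be sketched in miniature. All sensor fields, thresholds, and command names below are invented for illustration; a production stack would use learned models and far richer state.

```python
# A highly simplified sketch of the sense -> plan -> act loop.
def plan(fused_map):
    """Pick actuator commands from a fused view of the surroundings."""
    if fused_map["pedestrian_ahead"]:
        return {"throttle": 0.0, "brake": 1.0, "steering_deg": 0.0}
    if fused_map["lane_offset_m"] > 0.2:  # drifting right of lane center
        return {"throttle": 0.3, "brake": 0.0, "steering_deg": -2.0}
    return {"throttle": 0.3, "brake": 0.0, "steering_deg": 0.0}

def control_step(radar, camera, lidar):
    # Fuse the sensor readings into one map of the surroundings,
    # then plan and return the commands sent to the actuators.
    fused = {
        "pedestrian_ahead": camera["pedestrian"],
        "lane_offset_m": lidar["lane_offset_m"],
        "lead_vehicle_m": radar["lead_vehicle_m"],
    }
    return plan(fused)

cmd = control_step(radar={"lead_vehicle_m": 40.0},
                   camera={"pedestrian": False},
                   lidar={"lane_offset_m": 0.5})
print(cmd["steering_deg"])  # -2.0 (steer back toward lane center)
```

In a real vehicle this loop runs many times per second, and the planner's output goes to drive-by-wire actuators for throttle, brake, and steering.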
CHAPTER 7

APPLICATIONS
1. Taxi services:

Another business that would be strongly affected is taxi services, which are based solely on
driving around people who do not have a car or do not want to drive. This type of service
could lower the number of vehicles on the road: not everyone would have to own a car, since
people could call to request an autonomous car to pick them up. Taxis also drive around
cities and wait in busy areas for people to request a cab.

Fig 7.1: Taxi services

2. Shipping:

Autonomous vehicles will have a huge impact on the land shipping industry. One way to
transport goods on land is by freight truck, and there are thousands of freight trucks on the road
every day, driving for multiple days to reach their destinations. An autonomous truck is able to
drive to its destination without having to stop for sleep, food, or anything besides more fuel. All
that is necessary is someone to load the vehicle and someone to unload it.

Fig 7.2: Shipping



3. Military applications:

The Army has autonomous resupply trucks that can be operated by remote control or in
convoys in "leader-follower" mode. Keeping soldiers safe continues to be the main reason the
military enlists unmanned vehicles into its ranks, especially for resupply missions. An
automated navigation system with real-time decision-making capability makes it well suited
to battlefields and other military applications.

Fig 7.3: Military Applications

4. Transportation in hazardous places :


The complete real-time decision-making capability and sensor-guided navigation will lead to
the replacement of human drivers for transportation in hazardous places. Because these cars
replace human drivers, they can travel into hazardous areas to assess the danger of the place
without harming any human.

Fig 7.4: Transportation in hazardous places



CHAPTER 8

PROS AND CONS OF SELF DRIVING CARS USING


ARTIFICIAL INTELLIGENCE

PROS:

a. It is very helpful for physically challenged people who cannot drive.

b. It reduces traffic collisions.

c. It improves fuel efficiency, as these vehicles lower the number of vehicles on the road.

d. It reduces the time required for parking, as the car can park itself without any human
interaction.

e. Speed limits can be increased.

CONS:

a. The cost of owning this car will be huge, as it uses many technologies.

b. The introduction of this car to society could make people lose their jobs, for example
taxi drivers and truck drivers.

c. Computer malfunctions can happen; even a minor glitch could easily cause a serious
accident.

d. Hackers getting into the vehicle's software and controlling or affecting its operation
would be a major concern.

e. Autonomous vehicles have difficulty operating in certain types of weather: heavy rain
interferes with roof-mounted laser sensors, and snow can interfere with cameras.
CHAPTER 9

EVALUATION RESULT

1. Film & TV:

The idea of driverless cars has been around for decades in the world of film and
television. Some portrayals have been good, some troubling, but all have inspired others to
work towards turning fiction into reality. Examples of autonomous vehicles in film and TV
include The Love Bug, the Knight Rider TV series, Christine, and Minority Report.

Fig 9.1: Film and TV

2. Healthcare:

Google and the artificial intelligence startup care.ai announced a partnership on Oct. 24 to
bring autonomous monitoring technology to hospital rooms to prevent avoidable falls, protocol
breaches and other medical errors, and to improve staff efficiency. Each "Self-Aware Room" will
be equipped with an AI sensor that combines care.ai's machine learning platform and library of
human behavioral data with Google's Coral Edge Tensor Processing Unit.

Fig 9.2: Healthcare



3. Food Delivery:
Look in the sky: it's a bird, it's a plane... no, it is an autonomous drone. Not just any
autonomous drone, but one carrying a freshly cooked, juicy hamburger and some mouth-watering,
crispy French fries. Meanwhile, look down at the road below: is it a speeding locomotive? No,
it is a self-driving, driverless car that is going to be the landing pad for the fast-food-carrying
driverless drone.

Fig 9.3: Food Delivery

4. Mobile Work:

With falling prices in renewable energy, electric autonomous vehicles (EAVs) will
increasingly resemble mobile offices supported by redesigned service stations that evolve to
support live-work lifestyles. Today's digital nomads will flourish as new industries emerge to
serve their need for balancing work and play. Companies like Shanghai-based Yanfeng
Automotive Interiors are already beginning to explore the transformation of the car as new
modes of living and working become better integrated.

Fig 9.4: Mobile Work


5. Construction:

AVs are already a reality in many controlled environments, including mining and
farming. Driverless trucks are being used to move iron ore in mines in Australia, and the
Canadian energy company Suncor Energy is working with Japan's Komatsu Ltd to automate its
trucks. AVs will impact all construction equipment, including tractors, bulldozers, dump trucks,
cranes, and excavators.

Fig 9.5: Construction

6. Travel:

Cross-country travel is now a rite of passage. Autonomous vehicles (AVs) will make
recreational travel even more compelling as frictionless car travel becomes a safe and convenient
alternative to the hassles of plane and train travel. The expanding range of electric AVs will
radically disrupt the hotel industry as people simply choose to sleep and eat in their vehicles.
Business travelers will have the option to avoid taking domestic flights entirely even as new
generations of Americans begin to explore the possibilities of cross-country travel without the
need for car ownership.

Fig 9.6: Travel



CONCLUSION
Currently, many different technologies are available that can assist in creating
autonomous vehicle systems. Items such as GPS, automated cruise control, and lane-keeping
assistance are available to consumers on some luxury vehicles. The combination of these
technologies with other systems, such as video-based lane analysis, steering and brake actuation
systems, and the programs necessary to control all of the components, will become a fully
autonomous system. The challenge is winning people's trust to allow a computer to drive a
vehicle for them; because of this, research and testing must be done over and over again to
assure a near-foolproof final product. The product will not be accepted instantly, but over
time, as the systems become more widely used, people will realize its benefits. The
implementation of autonomous vehicles will raise the problem of replacing humans with
computers that can do the work for them. There will not be an instant change in society, but it
will become more apparent over time as they are integrated into society.
REFERENCES

1. Thorsten Luettel, Michael Himmelsbach, and Hans-Joachim Wuensche, "Autonomous
Ground Vehicles - Concepts and a Path to the Future", Proceedings of the IEEE, Vol. 100,
2012.

2. S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and
Autonomous Agents), 2005.

3. Nilotpal Chakraborty, Raghvendra Singh Patel, "Intelligent Agents and Autonomous
Cars: A Case Study", International Journal of Engineering Research & Technology
(IJERT), ISSN: 2278-0181, Vol. 2, Issue 1, January 2013.

4. Dragomir Anguelov, Carole Dulong, Daniel Filip, Christian Frueh, Stéphane Lafon,
"Google Street View: Capturing the World at Street Level", IEEE Computer, Vol. 43,
Issue 6, pp. 32-38, 2010.

5. Julien Moras, Véronique Cherfaoui, Philippe Bonnifait, "A Lidar Perception Scheme for
Intelligent Vehicle Navigation", 11th International Conference on Control Automation
Robotics & Vision (ICARCV), pp. 1809-1814, 2010.

6. A. Frome et al., "Large-Scale Privacy Protection in Google Street View", Proc. 12th IEEE
Int'l Conf. Computer Vision (ICCV 09), 2009.

7. Rolf Isermann, "Fault-Tolerant Drive-by-Wire Systems", Vol. 22, Issue 5, pp. 64-81,
2002.
