Azad Technical Campus: Digital Image Processing
SEMINAR REPORT
ON
DIGITAL IMAGE PROCESSING
ABSTRACT
From 2000 through 2016, digital imaging almost completely replaced the
magnificent silver halide technology as the de facto means of capturing,
storing, and viewing images. The motion picture industry is well on its way to
replacing film, despite some of film's superior imaging characteristics. What
probably few foresaw was the complete dominance of the cell phone camera: a
smart phone linked to the global network, loaded with application packages
that ease navigation of our complex world, combined with a camera that takes
reasonably good images. Such a transformation over so short a period is a
testimony to the power of technology and marketing. In this report, the focus
is on the digital camera, be it a simple point-and-shoot, a sophisticated
single-lens reflex digital camera with 24 million pixels, or a large-format
camera with untold pixels. All of these cameras are based on an imaging
sensor, using either CCD or CMOS technology. Each camera has to find ways to
prevent the color aliasing introduced by the color filter array (CFA), improve
color reproduction, adjust for different illuminants, reduce noise, and
improve the sharpness of the final image. There seems to be no limit to the
technologies incorporated into digital cameras, including face detection,
automatic red-eye removal, and stabilizing mechanisms that remove jitter
caused by camera motion. Here, the focus is on six major aspects of digital
imaging: sampling and aliasing, image sharpness and enhancement, noise in
digital cameras, noise-reduction methods within and outside the camera,
exposure latitude, and, finally, the concept of ISO speed for a camera and a
sensor.
Contents
What is Image Processing?
Introduction to Image Processing
History
Basic Concepts of Image Processing
Image Functions
Digital Image Properties
Topological Image Properties
Purpose of Image Processing
Types of Image Processing
Current Research in Image Processing
Future Scope
Applications
Advantages
Disadvantages
Conclusion
References
Computer vision tries to imitate human cognition and the ability to make
decisions according to the information contained in the image.
This course deals almost exclusively with low-level image processing;
high-level image processing is discussed in the course Image Analysis and
Understanding, which is a continuation of this course.
History:
Many of the techniques of digital image processing, or digital picture
processing as it was often called, were developed in the 1960s at the Jet
Propulsion Laboratory, MIT, Bell Labs, the University of Maryland, and a few
other places, with applications to satellite imagery, wire-photo standards
conversion, medical imaging, videophone, character recognition, and photo
enhancement. But the cost of processing was fairly high with the computing
equipment of that era. In the 1970s, digital image processing proliferated as
cheaper computers became available.
In this context, imaging means creating a film or electronic image of any
picture or paper form. It is accomplished by scanning or photographing an
object and turning it into a matrix of dots (a bitmap), the meaning of which
is unknown to the computer and apparent only to the human viewer. Scanned
images of text may be encoded into computer data (ASCII or EBCDIC) with
page-recognition software (OCR).
Basic Concepts:
A signal is a function depending on some variable with physical meaning.
Signals can be:
One-dimensional (e.g., dependent on time),
Two-dimensional (e.g., images dependent on two co-ordinates in a plane),
Three-dimensional (e.g., describing an object in space),
Or higher-dimensional.
Image functions:
The image can be modeled by a continuous function of two or three variables:
the arguments are co-ordinates x, y in a plane, and if images change in time a
third variable t may be added.
The image function values correspond to the brightness at image points.
The function value can express other physical quantities as well (temperature,
pressure distribution, distance from the observer, etc.).
Brightness integrates different optical quantities; using brightness as a
basic quantity allows us to avoid describing the very complicated process of
image formation.
The image on the human eye retina or on a TV camera sensor is intrinsically
2D. We shall call such a 2D image, bearing information about the brightness at
points, an intensity image.
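As a minimal sketch of these ideas (assuming Python with NumPy, and a purely
hypothetical brightness function), the code below samples a continuous image
function f(x, y) on a regular grid and quantizes the result into an 8-bit
digital intensity image:

import numpy as np

def f(x, y):
    # Hypothetical continuous image function: brightness as a smooth
    # pattern over the plane, scaled to the range 0..1.
    return 0.5 * (1.0 + np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y))

# Sample f on a regular 256x256 grid over the unit square; this sampling
# step is what turns the continuous image function into a digital image.
xs = np.linspace(0.0, 1.0, 256)
ys = np.linspace(0.0, 1.0, 256)
X, Y = np.meshgrid(xs, ys)
intensity = f(X, Y)

# Quantize brightness to 8-bit grey levels, as a typical intensity image
# would be stored.
digital_image = np.round(255 * intensity).astype(np.uint8)
print(digital_image.shape, digital_image.dtype)   # (256, 256) uint8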
Current Research
Extensive research is being carried out in image processing techniques.
1. Cancer Imaging: tools such as PET, MRI, and computer-aided detection help
to diagnose tumours and monitor their development.
2. Brain Imaging: focuses on the normal and abnormal development of the brain,
brain ageing, and common disease states.
3. Image Processing: this research incorporates structural and functional MRI
in neurology, analysis of bone shape and structure, development of functional
imaging tools in oncology, and PET image-processing software development.
4. Imaging Technology: developments in imaging technology have created the
need to establish whether new technologies are effective and cost-beneficial.
This work covers the following areas:
Magnetic resonance imaging of the knee
Computer-aided detection in mammography
Endoscopic ultrasound in staging oesophageal cancer
Magnetic resonance imaging in low back pain
5. Ophthalmic Imaging: this work falls under two categories.
Future
We are all in the midst of a revolution ignited by the rapid development of
computer technology and imaging. Contrary to common belief, computers are not
yet able to match humans in calculations related to image processing and
analysis. But with the increasing sophistication and power of modern
computing, computation will go beyond the conventional von Neumann sequential
architecture and may embrace optical execution as well. Parallel and
distributed computing paradigms are anticipated to improve response times for
image processing tasks.
2. Face Detection
Face detection is a computer technology that determines the locations and
sizes of human faces in arbitrary (digital) images. It detects facial features
and ignores anything else, such as buildings, trees, and bodies. Face
detection can be regarded as a specific case of object-class detection: in
object-class detection, the task is to find the locations and sizes of all
objects in an image that belong to a given class; examples include upper
torsos, pedestrians, and cars. Face detection can also be regarded as a more
general case of face localization: in face localization, the task is to find
the locations and sizes of a known number of faces (usually one), whereas in
face detection one does not have this additional information. Face detection
is used in biometrics, often as a part of (or together with) a facial
recognition system. It is also used in video surveillance, human-computer
interfaces, and image database management. Some recent digital cameras use
face detection for autofocus [1]. Face detection is also useful for selecting
regions of interest in photo slideshows that use a pan-and-scale Ken Burns
effect.
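As a minimal sketch of how face detection is commonly done in practice
(assuming Python with the OpenCV package; the Haar-cascade approach shown here
is one widely used method, not necessarily the one used by any particular
camera, and "photo.jpg" is a placeholder input file):

import cv2

# Load OpenCV's bundled frontal-face Haar cascade (ships with opencv-python).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

# Face detection is performed on the greyscale version of the image.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is returned as a bounding box (x, y, width, height).
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")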
4. Medical Imaging
Medical imaging refers to the techniques and processes used to create images
of the human body (or parts thereof) for clinical purposes (medical procedures
seeking to reveal, diagnose, or examine disease) or for medical science
(including the study of normal anatomy and physiology). As a discipline, and
in its widest sense, it is part of biological imaging and incorporates
radiology (in the wider sense), radiological sciences, endoscopy, (medical)
thermography, medical photography, and microscopy (e.g. for human pathological
investigations). Medical imaging is often taken to mean the set of techniques
that noninvasively produce images of the internal aspect of the body. In this
restricted sense, medical imaging can be seen as the solution of mathematical
inverse problems: the cause (the properties of living tissue) is inferred from
the effect (the observed signal). In the case of ultrasonography, the probe
consists of ultrasonic pressure waves, and echoes from inside the tissue
reveal the internal structure. In the case of projection radiography, the
probe is X-ray radiation, which is absorbed at different rates by different
tissue types such as bone, muscle, and fat.
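To make the inverse-problem view concrete, one common simplified formulation
(the symbols below are illustrative assumptions, not tied to any specific
modality) models the observed signal g as a forward operator A applied to the
unknown tissue properties f plus noise n, and reconstructs an estimate of f,
for example by regularized least squares:

g = A f + n, \qquad
\hat{f} = \arg\min_{f} \; \| A f - g \|_2^2 + \lambda \, \| f \|_2^2

Here A encodes the physics of the probe (e.g. X-ray absorption along
projection paths), and the regularization weight \lambda controls how strongly
prior smoothness assumptions influence the reconstruction.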
5. Microscope Image Processing
Microscope image processing is a broad term that covers the use of digital
image processing techniques to process, analyze, and present images obtained
from a microscope. Such processing is now commonplace in a number of diverse
fields such as medicine, biological research, cancer research, drug testing,
and metallurgy. A number of microscope manufacturers now specifically design
in features that allow their microscopes to interface with an image processing
system. Until the early 1990s, most image acquisition in video microscopy was
typically done with an analog video camera, often a simple closed-circuit TV
camera. While this required a frame grabber to digitize the images, video
cameras provided images at full video frame rate (25-30 frames per second),
allowing live video recording and processing. While the advent of solid-state
detectors yielded several advantages, the real-time video camera was actually
superior in many respects.
6. Lane Departure Warning System
In road-transport terminology, a lane departure warning system is a mechanism
designed to warn a driver when the vehicle begins to move out of its lane
(unless a turn signal is on in that direction) on freeways and arterial roads.
The first production lane departure warning system in Europe was developed by
Iteris for Mercedes Actros commercial trucks. The system debuted in 2000 and
is now available on most trucks sold in Europe. In 2002, the Iteris system
became available on Freightliner Trucks' trucks in North America. In all of
these systems, the driver is warned of unintentional lane departures by an
audible rumble-strip sound generated on the side of the vehicle that is
drifting out of the lane. If a turn signal is used, no warnings are generated.
7. Mathematical Morphology
Mathematical morphology (MM) is a theory and technique for the analysis and
processing of geometrical structures, based on set theory, lattice theory,
topology, and random functions. MM is most commonly applied to digital images,
but it can be employed as well on graphs, surface meshes, solids, and many
other spatial structures. Topological and geometrical continuous-space
concepts such as size, shape, convexity, connectivity, and geodesic distance
can be characterized by MM on both continuous and discrete spaces. MM is also
the foundation of morphological image processing, which consists of a set of
operators that transform images according to the above characterizations. MM
was originally developed for binary images and was later extended to grayscale
functions and images. The subsequent generalization to complete lattices is
widely accepted today as MM's theoretical foundation.
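As a minimal sketch of morphological image processing on a binary image
(assuming Python with NumPy and SciPy; the input image and structuring element
below are hypothetical):

import numpy as np
from scipy import ndimage

# A small hypothetical binary image: a 3x3 square of foreground pixels
# plus one isolated noise pixel.
binary = np.zeros((9, 9), dtype=bool)
binary[3:6, 3:6] = True      # square object
binary[1, 7] = True          # isolated noise pixel

# 3x3 square structuring element.
structure = np.ones((3, 3), dtype=bool)

# Erosion shrinks objects (the noise pixel disappears and the square shrinks
# to its centre pixel); the following dilation grows the square back, so the
# two steps together form a morphological opening that removes the noise.
eroded = ndimage.binary_erosion(binary, structure=structure)
opened = ndimage.binary_dilation(eroded, structure=structure)

print("foreground pixels:", binary.sum(), eroded.sum(), opened.sum())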
Advantages
It is more accurate than the overlapping method because it is based on minutiae.
It is an interactive method for recognizing fingerprints.
Disadvantages
It is more time-consuming than the former method.
The program is more complex.
CONCLUSION
References
R. Gonzalez and R. Woods, Digital Image Processing, 2nd Edition, Prentice
Hall, 2002.