Module 1: Image Representation and Modeling
Jeevan K M, Asst. Professor, Department of Electronics & Communication, Sree Narayana Gurukulam College of Engineering, Kadayiruppu
What is an image?
An image is a two-dimensional function that represents a measure of some characteristic, such as brightness or colour, of a viewed scene. It is the projection of a 3D scene onto a 2D projection plane: a two-variable function f(x,y), where each position (x,y) in the projection plane has a value f(x,y) giving the light intensity at that point. Formally: an image is a two-dimensional function, f(x,y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or grey level of the image at that point.
Image
1. Analog image: the type of image that we, as humans, look at directly.
Images
2. Digital image: A digital image is composed of picture elements called pixels. Pixels are the smallest samples of an image. A digital image is a matrix of many small elements, or pixels, each represented by a numerical value. The pixel value is related to the brightness or colour that we see when the digital image is converted into an analog image for display and viewing.
[Figure: analog image vs. digital image]
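To make the "matrix of pixels" idea concrete, here is a minimal sketch in Python: a toy 4×4 grayscale image stored as a list of lists, where each entry is an 8-bit intensity value. The pixel values are made up purely for illustration.

```python
# A toy 4x4 grayscale "digital image": each entry is one pixel,
# an 8-bit intensity value in [0, 255] (0 = black, 255 = white).
# The pixel values here are invented for illustration.
image = [
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 64, 128, 192, 255],
    [  0,   0, 128, 128],
]

rows = len(image)      # M: number of rows
cols = len(image[0])   # N: number of columns
print(f"{rows}x{cols} image, pixel (1,2) has intensity {image[1][2]}")
# → 4x4 image, pixel (1,2) has intensity 160
```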
Sampling
Quantisation
2. Image preprocessing: improve the image in ways that increase the chances of success of the other processes.
3. Image segmentation: partition an input image into its constituent parts or objects.
4. Image representation: convert the input data to a form suitable for computer processing.
5. Image description: extract features that result in some quantitative information of interest, or features that are basic for differentiating one class of objects from another.
6. Image recognition: assign a label to an object based on the information provided by its descriptors.
7. Image interpretation: assign meaning to an ensemble of recognized objects.
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database.
1. Image Sensing
Two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image. The second is a digitizer: a device for converting the output of the physical sensing device into digital form.
Eg: Digital camera: the sensors produce an electrical output proportional to light intensity, and the digitizer converts this output to digital data.
3. Computer
A general-purpose computer, which can range from a PC to a supercomputer.
4. Software
Software for image processing consists of specialized modules that perform specific tasks. A well-designed package includes the capability for the user to write minimal code by utilizing the specialized modules. Eg: MATLAB.
5. Mass Storage
Mass storage is a must in image processing. Digital storage for image processing applications falls into 3 categories:
Short-term storage: used during processing. Eg: frame buffers, which store one or more images and can be accessed rapidly, usually at video rates.
On-line storage: where frequently accessed data is stored. Eg: magnetic disks, optical disks.
Archival storage: characterized by infrequent access. Eg: magnetic tapes and optical disks.
An image of 1024×1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space.
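The one-megabyte figure above can be checked with a short calculation. This sketch follows the slide's example, taking 1 MB = 2^20 bytes:

```python
# Storage needed for a 1024x1024 image with 8 bits per pixel,
# following the slide's example (1 MB = 2**20 bytes here).
M, N, k = 1024, 1024, 8      # rows, columns, bits per pixel
bits = M * N * k             # total bits: b = M * N * k
bytes_needed = bits // 8
megabytes = bytes_needed / 2**20
print(f"{bits} bits = {bytes_needed} bytes = {megabytes} MB")
# → 8388608 bits = 1048576 bytes = 1.0 MB
```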
6. Image Displays
Mainly colour TV monitors.
7. Hard Copy Devices
Devices for recording images include laser printers, film cameras, inkjet units, etc.
1. Contrast: used to emphasize the difference in grey level of the object. It depends on the brightness of the background (simultaneous contrast).
2. Acuity: the ability to detect details in the image. The eye is less sensitive to slow and fast changes in brightness in the image plane, and more sensitive to intermediate changes.
3. Resolution: the degree of distinguishable detail.
There is no sense in representing visual information with higher resolution than that of the viewer.
Resolution is best at a distance of about 250 mm from the eye under illumination of about 500 lux.
4. Object border
Boundaries of objects and simple patterns such as blobs or lines enable an effect similar to conditional contrast. Eg: the Ebbinghaus illusion.
Lens
Made up of concentric layers of fibrous cells. Suspended by fibers attached to the ciliary body. Composed of 60-70% water, about 6% fat, and more protein than any other tissue in the eye.
Cones
Located in the central portion of the retina, called the fovea. The muscles controlling the eye rotate the eyeball until the image falls on the fovea. Cones are highly sensitive to colour, and number around 6-7 million. Each cone is connected to its own nerve end, hence humans can resolve fine details using cones. Cones help us to see objects in bright light; cone vision is called photopic or bright-light vision.
Rods
They are distributed over the retinal surface
Rods: their density increases from the centre of the retina out to approximately 20° off-axis, then decreases.
Radiant Energy → Light Receptor → Brain
The distance between the centre of the lens and the retina along the visual axis is approximately 17 mm. The range of focal lengths is approximately 14 mm to 17 mm. For example, when an observer looks at a 15 m tall object from a distance of 100 m, the height h of the retinal image (in mm) satisfies 15/100 = h/17. Perception takes place by excitation of receptors, which transform radiant energy into electrical impulses that are decoded by the brain.
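The similar-triangles relation 15/100 = h/17 from the slide can be evaluated directly. A minimal sketch, using the slide's numbers:

```python
# Worked example from the slide: an observer looks at a 15 m tall
# object from 100 m away; the lens-to-retina distance is about 17 mm.
# By similar triangles, 15/100 = h/17, where h is the height of the
# retinal image in mm.
object_height_m = 15.0
object_distance_m = 100.0
focal_distance_mm = 17.0

h = object_height_m / object_distance_m * focal_distance_mm
print(f"retinal image height ≈ {h:.2f} mm")
# → retinal image height ≈ 2.55 mm
```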
Brightness Adaptation and Discrimination
Digital images are displayed as a discrete set of intensities. The dynamic range of light intensity to which the eye can adapt is enormous, on the order of 10^10, from the scotopic threshold to the photopic (glare) limit.
Light intensity versus subjective brightness: subjective brightness (intensity as perceived by the human visual system) is a logarithmic function of light intensity.
Solid curve: the range of intensities to which the visual system can adapt [photopic vision].
The transition from scotopic to photopic vision is gradual (-3 to -1 in the log scale). The human visual system cannot operate over this entire range simultaneously; it accomplishes this large variation by changing its overall sensitivity, a phenomenon known as brightness adaptation.
When compared with the total adaptation range, the total range of distinct intensity levels the eye can discriminate simultaneously is rather small. For any given set of conditions, the current sensitivity level of the visual system is called the brightness adaptation level. The short curve through Ba represents the range of subjective brightness that the eye can perceive when adapted to the level Ba (it is restricted below by the level Bb).
Consider a flat, uniformly illuminated area large enough to occupy the entire field of view, illuminated from behind by a light source of intensity I. Then add an increment of illumination ΔI in the form of a short-duration flash that appears as a circle in the centre. If ΔI is not bright enough: no perceivable change. When ΔI gets stronger: a perceived change.
Weber ratio: the quantity ΔIc/I, where ΔIc is the increment of illumination discriminable 50% of the time with background illumination I.
A small value of ΔIc/I: a small percentage change in intensity is discriminable. This represents good brightness discrimination. A large value of ΔIc/I: a large percentage change in intensity is required. This represents poor brightness discrimination.
Brightness discrimination is poor (large Weber ratio) at low levels of illumination, and improves as background illumination increases. Rods: poor discrimination. Cones: better discrimination.
[Figure: Weber ratio ΔIc/I as a function of intensity I]
Illumination (examples): sun on earth: 90,000 lm/m² on a sunny day, 10,000 lm/m² on a cloudy day; moon on a clear evening: 0.1 lm/m²; a commercial office: about 1,000 lm/m².
Reflectance (examples): 0.01 for black velvet, 0.65 for stainless steel, 0.80 for flat-white wall paint, 0.90 for silver-plated metal, and 0.93 for snow.
Monochrome image: the intensity of a monochrome image at any coordinates (x0, y0) is called the gray level l of the image at that point: l = f(x0, y0). The range of l is given by Lmin ≤ l ≤ Lmax, where Lmin is a positive value and Lmax is a finite value greater than Lmin. Lmin = imin × rmin and Lmax = imax × rmax. The interval [Lmin, Lmax] is called the gray scale. It is common to shift this interval to [0, L-1], where l = 0 is considered black and l = L-1 is considered white, with intermediate values providing different shades of gray.
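The shift from the physical interval [Lmin, Lmax] to the conventional gray scale [0, L-1] can be sketched as a linear rescaling. The numeric values of Lmin and Lmax below are hypothetical, chosen only to illustrate the mapping:

```python
# Sketch: linearly rescale intensities from [Lmin, Lmax] onto the
# conventional gray scale [0, L-1] (here L = 256: 0 = black, 255 = white).
# Lmin and Lmax are hypothetical scene intensities, not from the slide.
L = 256
Lmin, Lmax = 0.02, 0.85

def to_gray_level(l):
    """Map a physical intensity l in [Lmin, Lmax] to an integer gray level."""
    return round((l - Lmin) / (Lmax - Lmin) * (L - 1))

print(to_gray_level(Lmin), to_gray_level(Lmax))
# → 0 255
```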
1. Sampling 2. Quantization
An image is continuous with respect to the x and y coordinates as well as in amplitude.
Sampling: digitizing the coordinate values.
Quantization: digitizing the amplitude values.
Example:
Figure a: a continuous image.
Figure b: a one-dimensional representation, a plot of the amplitude (intensity level) values of the continuous image along the line AB.
Figure c: equally spaced samples along the line AB (sampling). The vertical tick marks give the spatial location of each sample; the small white squares are the samples. To get a digital function, the intensity values must also be converted to discrete quantities (quantization). The intensity scale is divided into 8 discrete levels, ranging from black to white, and the continuous intensity levels are quantized by assigning one of the eight values to each sample.
Figure d: the digital samples obtained after both sampling and quantization.
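The two steps above can be sketched on a one-dimensional "scan line", standing in for the profile along AB. Both the continuous signal f(t) and the parameter values here are illustrative assumptions, not taken from the figure:

```python
import math

# Sketch of sampling and quantization on the intensity profile along a
# hypothetical scan line AB, modeled as a continuous function f(t) on [0, 1].
def f(t):
    # illustrative continuous intensity profile, values in [0, 1]
    return 0.5 + 0.5 * math.sin(2 * math.pi * t)

num_samples = 8   # sampling: digitize the coordinate
num_levels = 8    # quantization: digitize the amplitude (8 levels, as in Fig. c)

# Sampling: evaluate f at equally spaced positions along the line.
samples = [f(i / (num_samples - 1)) for i in range(num_samples)]
# Quantization: assign each sample to one of 8 discrete levels 0..7.
quantized = [min(int(s * num_levels), num_levels - 1) for s in samples]
print(quantized)
```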
1. Continuous image
Each element of this matrix is called an image element, picture element, or pixel. Sometimes the digital image is represented with the following matrix notation:

f(x,y) = | f(0,0)     f(0,1)     ...  f(0,N-1)   |
         | f(1,0)     f(1,1)     ...  f(1,N-1)   |
         | ...        ...        ...  ...        |
         | f(M-1,0)   f(M-1,1)   ...  f(M-1,N-1) |
Size of a Digital Image
Z: the set of integers. R: the set of real numbers.
The sampling process partitions the xy-plane into a grid; the coordinates of the centre of each cell in the grid are a pair of elements from the Cartesian product Z², i.e. (zi, zj), where zi and zj are integers from Z. If the intensity levels are also integers, we can replace R with Z: the digital image then becomes a 2-D function whose coordinate and intensity values are integers.
The values of M and N are positive integers. Due to processing, storage, and sampling hardware considerations, the number of gray levels is typically an integer power of 2: L = 2^k, where k is the number of bits required to represent a gray value. The discrete levels should be equally spaced integers in the interval [0, L-1].
The range of values spanned by the gray scale is called the dynamic range of an image. The dynamic range of an imaging system is defined as the ratio of the maximum intensity to the minimum detectable intensity level in the system.
Dynamic range establishes the lowest and highest intensities that a system can represent. The upper limit is determined by saturation and the lower limit by noise.
Contrast: the difference in intensity between the highest and lowest intensity levels in the image.
When an appreciable number of pixels in an image span a high dynamic range, the image has high contrast. An image with a low dynamic range has a dull, washed-out gray look.
The number of bits, b, required to store a digitized image is b = M × N × k; if M = N, then b = N²k.
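The relations L = 2^k and b = N²k can be sketched together. The image size and gray-level count below are illustrative choices:

```python
# Sketch relating the number of gray levels L to bits per pixel k
# (L = 2**k) and to total storage b = N*N*k for a square N x N image.
# N and L below are illustrative values.
def bits_per_pixel(L):
    """Bits k needed for L equally spaced levels, assuming L is a power of 2."""
    k = L.bit_length() - 1
    assert 2**k == L, "L must be a power of 2"
    return k

N = 512
L = 256
k = bits_per_pixel(L)   # k = 8 since 2**8 = 256
b = N * N * k           # b = N^2 * k for a square image (M = N)
print(f"k = {k} bits/pixel, b = {b} bits total")
# → k = 8 bits/pixel, b = 2097152 bits total
```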
Resolution
Resolution (how much of the image's detail you can see) depends on the sampling rate and the number of gray levels. The higher the sampling rate and the finer the gray scale, the better the digitized image approximates the original. However, the finer the sampling and quantization scales, the bigger the size of the image.
Spatial resolution: the smallest detectable detail in an image.
Gray-level resolution: similarly, the smallest detectable change in gray level.
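Reducing the sampling rate reduces spatial resolution. A minimal sketch, keeping every 2nd sample in each direction of a toy image (the pixel values are made up):

```python
# Sketch: lowering spatial resolution by subsampling, i.e. keeping
# every 2nd pixel in each direction. The 4x4 input values are invented.
image = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
    [15, 25, 35, 45],
    [55, 65, 75, 85],
]

step = 2
subsampled = [row[::step] for row in image[::step]]
print(subsampled)   # a coarser 2x2 image
# → [[10, 30], [15, 35]]
```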
2. Diagonal neighbors: A pixel p has 4 diagonal neighbors, with coordinates (x+1,y+1), (x+1,y-1), (x-1,y+1) and (x-1,y-1). Denoted by ND(p).
3. 8-neighbors: the 4-neighbors of p together with its 4 diagonal neighbors, i.e. N8(p) = N4(p) ∪ ND(p). For example, one row of the 3×3 neighborhood grid consists of (x+1,y-1), (x+1,y), (x+1,y+1).
2. 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
3. m-adjacency (mixed adjacency)
Two pixels p and q with values from V are m-adjacent if
* q is in the set N4(p), or
* q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
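The neighborhood sets above translate directly into code. A minimal sketch of N4, ND, N8, and an 8-adjacency check; the toy pixel-value dictionary is an assumption for illustration:

```python
# Sketch of the neighborhood sets from the slides: N4(p), ND(p), and
# N8(p) = N4(p) ∪ ND(p) for a pixel p = (x, y).
def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    return n4(x, y) | nd(x, y)

def adjacent8(p, q, value, V):
    """8-adjacency: both pixel values are in V and q is in N8(p)."""
    return value[p] in V and value[q] in V and q in n8(*p)

# Toy pixel values, invented for illustration.
value = {(0, 0): 1, (0, 1): 1, (1, 1): 0}
print(adjacent8((0, 0), (0, 1), value, V={1}))
# → True
```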
2. Logical operations
AND, OR, Complement (NOT), XOR
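These operations are applied pixel-wise on binary images. A minimal sketch on two small binary images represented as lists of 0/1 values (the images are made up for illustration):

```python
# Sketch: pixel-wise logical operations (AND, OR, XOR, NOT) on two
# toy binary images. The images are invented for illustration.
A = [[1, 1, 0],
     [0, 1, 0]]
B = [[1, 0, 0],
     [0, 1, 1]]

def pixelwise(op, X, Y):
    """Apply a two-argument operation to corresponding pixels of X and Y."""
    return [[op(a, b) for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

AND = pixelwise(lambda a, b: a & b, A, B)
OR = pixelwise(lambda a, b: a | b, A, B)
XOR = pixelwise(lambda a, b: a ^ b, A, B)
NOT_A = [[1 - a for a in row] for row in A]   # complement of A
print(AND, OR, XOR, NOT_A)
```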
END