Question 1- Explain the Process of Formation of an Image in the Human Eye
The eye is an optical image-forming system. Several parts of the eye play important roles in the formation of an image on the retina (the back surface of the eye, which consists of layers of cells whose function is to transmit to the brain the information corresponding to the image formed on it). Those parts of the eye that do not take an active part in forming the image on the retina have other important functions, such as providing mechanical support to the structures of the eye or supplying the tissues with fluids, nutrients, etc. A ray diagram can be used to show how light passes from a point on a real object (located somewhere in space outside the body) to the corresponding position on the image of the object on the retina at the back of the eye. Notes about the basic ray diagram of image formation within the human eye:

I. Light leaves the object, propagating in all directions: It is assumed for simplicity that this is a scattering object, meaning that after light in the area (which may be called "ambient light") reaches the object, it leaves the surface of the object travelling in a wide range of directions. Light leaving the object in all directions is represented in the diagram by the small arrows pointing upwards, up-left and up-right (small pink arrows), and downwards, down-left and down-right (small green arrows).

II. Some of the light leaving the object reaches the eye: Although the object scatters light in all directions, only a small proportion of the light scattered from it reaches the eye. The rays drawn in the diagram represent the direction of travel of this light.

III. Light changes direction when it passes from the air into the eye: When light travelling away from the object, towards the eye, arrives at the eye, the first surface it reaches is the cornea. This change in direction is due to refraction (i.e. the re-direction of light as it passes from one medium into another, different, medium). The parts of the eye responsible for most of the refraction of light passing through the eye are the cornea and the lens. Most of the refraction (the bending, or "re-directing", of the light) occurs at the interface between the air outside the eye and the cornea. The lens is important for accommodation, or "focusing".
Another method for removing noise is to evolve the image under a smoothing partial differential equation similar to the heat equation; this approach is called anisotropic diffusion. With a spatially constant diffusion coefficient it is equivalent to the heat equation, i.e. linear Gaussian filtering, but with a diffusion coefficient designed to detect edges, the noise can be removed without blurring the edges of the image.
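A minimal sketch of this idea in the spirit of Perona-Malik diffusion, a standard edge-preserving variant; the step size, number of iterations and conductance function below are illustrative choices, not values specified in the text:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, step=0.2):
    """Perona-Malik style diffusion: smooth the image while suppressing
    diffusion across strong gradients, so edges are preserved."""
    u = img.astype(float).copy()

    def g(d):
        # Conductance is small where the local difference is large (an edge).
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        # Differences towards the four neighbours (zero flux at the border).
        dn = np.zeros_like(u); dn[1:, :] = u[:-1, :] - u[1:, :]
        ds = np.zeros_like(u); ds[:-1, :] = u[1:, :] - u[:-1, :]
        de = np.zeros_like(u); de[:, :-1] = u[:, 1:] - u[:, :-1]
        dw = np.zeros_like(u); dw[:, 1:] = u[:, :-1] - u[:, 1:]
        u += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

With a constant conductance (g returning 1 everywhere) each update reduces to an explicit step of the heat equation, i.e. the linear Gaussian smoothing mentioned above.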
Question 3- Which are the two quantitative approaches used for the evaluation of image features?
The theory of histogram modification of continuous real-valued pictures is developed. It is shown that the transformation of gray levels taking a picture's histogram to a desired histogram is unique under the constraint that the transformation be monotonic increasing. Algorithms for implementing this solution on digital pictures are discussed. A gray-level transformation is useful for increasing visual contrast, but may destroy some of the information content. It is shown that solutions to the problem of minimizing the sum of the information loss and the histogram discrepancy are solutions to certain differential equations, which can be solved numerically.
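A discrete analogue of this monotonic gray-level transformation is histogram matching via the cumulative distribution functions of the two pictures. The sketch below is only an illustration of that idea, not the continuous variational solution described above; the 8-bit gray-level range and the interpolation step are assumptions.

```python
import numpy as np

def match_histogram(src, ref, levels=256):
    """Map the gray levels of `src` so its histogram approximates that of `ref`.
    The mapping is monotonic non-decreasing because both CDFs are non-decreasing."""
    src_hist, _ = np.histogram(src, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(ref, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / src.size
    ref_cdf = np.cumsum(ref_hist) / ref.size
    # For each source level, find the reference level whose CDF value is closest
    # (np.interp over the non-decreasing reference CDF).
    mapping = np.round(np.interp(src_cdf, ref_cdf, np.arange(levels)))
    return mapping[src.astype(int)].astype(np.uint8)
```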
Question 4- Which are the two quantitative approaches used for the evaluation of image features?
This is a current research project at IMM led by Prof. Per Christian Hansen. Digital image restoration - in which a noisy, blurred image is restored on the basis of a mathematical model of the blurring process - is a well-known example of a 2-D deconvolution problem. A recent survey of this topic, including a discussion of many practical aspects, can be found in [1]. There are many sources of blur. Here we focus on atmospheric turbulence blur, which arises, e.g., in remote sensing and astronomical imaging due to long-term exposure through the atmosphere, where the turbulence in the atmosphere gives rise to random variations in the refractive index. For many practical purposes, this blurring can be modelled by a Gaussian point spread function, and the discretized problem is a linear system of equations whose coefficient matrix is a block Toeplitz matrix with Toeplitz blocks. Discretizations of deconvolution problems are solved by regularization methods - such as those implemented in the Matlab package Regularization Tools - that seek to balance the noise suppression and the loss of details in the restored image. Unfortunately, classical regularization algorithms tend to produce smooth solutions, and as a consequence it is difficult to recover sharp edges in the image. We have developed a 2-D version [2] of a new algorithm [3] that is much better able to reconstruct the sharp edges that are typical of digital images. The algorithm, called PP-TSVD, is a modification of the truncated-SVD method and incorporates the solution of a linear l1 problem, and it includes a parameter k that controls the amount of noise reduction. The algorithm is implemented in Matlab and is available as the Matlab function pptvsd. The four images at the top of this page show various fundamental solutions that can be computed by means of the PP-TSVD algorithm. The underlying basis functions are delta functions, piecewise constant functions, piecewise linear functions, and piecewise second-degree polynomials, respectively. We are currently investigating the use of the PP-TSVD algorithm in such areas as astronomy and geophysics.
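To make the regularization idea concrete, here is a small truncated-SVD sketch for a 1-D Gaussian deblurring problem. This is plain TSVD on a toy problem, not the PP-TSVD algorithm described above; the blur width, noise level and truncation parameter k are illustrative choices.

```python
import numpy as np

def gaussian_blur_matrix(n, sigma=2.0):
    """Toeplitz matrix whose rows are a sampled, row-normalized Gaussian PSF."""
    idx = np.arange(n)
    A = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2))
    return A / A.sum(axis=1, keepdims=True)

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: keep only the k largest singular values,
    which suppresses the noise amplified by the small ones."""
    U, s, Vt = np.linalg.svd(A)
    coeffs = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeffs

# Toy experiment: blur a piecewise-constant signal, add noise, then restore it.
n = 200
x_true = np.zeros(n); x_true[60:140] = 1.0
A = gaussian_blur_matrix(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-3 * rng.standard_normal(n)
x_tsvd = tsvd_solve(A, b, k=30)   # larger k keeps more detail but also more noise
```

Note how the plain TSVD reconstruction rounds off the jumps in x_true; recovering such sharp edges is exactly what the PP-TSVD modification is designed to improve.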
Question 5- Which are the two quantitative approaches used for the evaluation of image features?
An edge in a continuous-domain edge segment F(x,y) can be detected by forming the continuous one-dimensional gradient G(x,y) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along orthogonal axes as

    G(x, y) = (∂F/∂x) cos θ + (∂F/∂y) sin θ

In the discrete domain, a row gradient G_R(j,k) and a column gradient G_C(j,k) are formed and combined into the gradient amplitude

    G(j, k) = [ G_R(j,k)^2 + G_C(j,k)^2 ]^(1/2)

For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination

    G(j, k) ≈ |G_R(j,k)| + |G_C(j,k)|

The orientation of the spatial gradient with respect to the row axis is

    θ(j, k) = arctan( G_C(j,k) / G_R(j,k) )

The remaining issue for discrete-domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials above. The simplest method of discrete gradient generation is to form the running difference of pixels along the rows and columns of the image. The row gradient is defined as

    G_R(j, k) = F(j, k) - F(j, k-1)

and the column gradient is

    G_C(j, k) = F(j, k) - F(j+1, k)

Diagonal edge gradients can be obtained by forming running differences of diagonal pairs of pixels. This is the basis of the Roberts cross-difference operator, which is defined in magnitude form as

    G(j, k) = |F(j, k) - F(j+1, k+1)| + |F(j, k+1) - F(j+1, k)|

and in square-root form as

    G(j, k) = { [F(j, k) - F(j+1, k+1)]^2 + [F(j, k+1) - F(j+1, k)]^2 }^(1/2)

Prewitt has introduced a pixel edge gradient operator described by the pixel numbering A0, A1, ..., A7 arranged clockwise around the centre pixel (A0 at the upper left). The Prewitt operator square-root edge gradient is defined as

    G(j, k) = [ G_R(j,k)^2 + G_C(j,k)^2 ]^(1/2)

with

    G_R = (1/(K+2)) [ (A2 + K·A3 + A4) - (A0 + K·A7 + A6) ]
    G_C = (1/(K+2)) [ (A0 + K·A1 + A2) - (A6 + K·A5 + A4) ]

where K = 1. In this formulation, the row and column gradients are normalized to provide unit-gain positive and negative weighted averages about a separated edge position. The Sobel operator edge detector differs from the Prewitt edge detector in that the values of the north, south, east and west pixels are doubled (i.e., K = 2). The motivation for this weighting is to give equal importance to each pixel in terms of its contribution to the spatial gradient.

C) Second-Order Derivative Edge Detection

Second-order derivative edge detection techniques employ some form of spatial second-order differentiation to accentuate edges. An edge is marked if a significant spatial change occurs in the second derivative. We will consider the Laplacian second-order derivative method. The edge Laplacian of an image function F(x,y) in the continuous domain is defined as

    G(x, y) = -∇²F(x, y)

where the Laplacian is

    ∇²F = ∂²F/∂x² + ∂²F/∂y²
The Laplacian G(x,y) is zero if F(x,y) is constant or changing linearly in amplitude. If the rate of change of F(x,y) is greater than linear, G(x,y) exhibits a sign change at the point of inflection of F(x,y). The zero crossing of G(x,y) indicates the presence of an edge. The negative sign in the definition above is present so that the zero crossing of G(x,y) has a positive slope for an edge whose amplitude increases from left to right or from bottom to top in the image. Torre and Poggio have investigated the mathematical properties of the Laplacian of an image function. They have found that if F(x,y) meets certain smoothness constraints, the zero crossings of G(x,y) are closed curves. In the discrete domain, the simplest approximation to the continuous Laplacian is to compute the difference of slopes along each axis:

    G(j, k) = [F(j, k) - F(j, k-1)] - [F(j, k+1) - F(j, k)] + [F(j, k) - F(j+1, k)] - [F(j-1, k) - F(j, k)]

This four-neighbor Laplacian can be generated by the convolution operation

    G(j, k) = F(j, k) ⊛ H(j, k)

with the impulse response array

    H = [  0  -1   0
          -1   4  -1
           0  -1   0 ]

The four-neighbor Laplacian is often normalized to provide unit-gain averages of the positive weighted and negative weighted pixels in the 3 x 3 pixel neighborhood. The gain-normalized four-neighbor Laplacian impulse response is defined by

    H = (1/4) [  0  -1   0
                -1   4  -1
                 0  -1   0 ]

Prewitt has suggested an eight-neighbor Laplacian defined by the gain-normalized impulse response array

    H = (1/8) [ -1  -1  -1
                -1   8  -1
                -1  -1  -1 ]
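To tie the two families together, here is a small sketch of a first-order detector (Sobel, i.e. Prewitt with K = 2) and of the gain-normalized four-neighbor Laplacian with a simple zero-crossing test. The zero padding, the threshold value and the horizontal-only crossing test are simplifying assumptions made for illustration.

```python
import numpy as np

def filter2d(img, kernel):
    """Same-size 2-D filtering with zero padding; the kernel is applied as
    written (cross-correlation), which is what the masks above assume."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_edges(img, threshold=0.25):
    """First-order detection: Sobel row/column gradients (K = 2),
    gain-normalized by 1/(K+2), thresholded on the gradient amplitude."""
    k = 2.0
    row_kernel = np.array([[-1, 0, 1],
                           [-k, 0, k],
                           [-1, 0, 1]]) / (k + 2)
    col_kernel = np.array([[ 1,  k,  1],
                           [ 0,  0,  0],
                           [-1, -k, -1]]) / (k + 2)
    gr = filter2d(img, row_kernel)
    gc = filter2d(img, col_kernel)
    amplitude = np.sqrt(gr ** 2 + gc ** 2)   # or |gr| + |gc| for speed
    return amplitude > threshold

def laplacian_zero_crossings(img):
    """Second-order detection: gain-normalized four-neighbor Laplacian,
    followed by a simple test for sign changes between horizontal neighbors."""
    h = np.array([[ 0, -1,  0],
                  [-1,  4, -1],
                  [ 0, -1,  0]]) / 4.0
    g = filter2d(img, h)
    crossings = np.zeros(img.shape, dtype=bool)
    crossings[:, 1:] = np.sign(g[:, 1:]) != np.sign(g[:, :-1])
    return crossings
```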
Question 6- Explain Region Splitting and Merging with an example.
The basic idea of region splitting is to break the image into a set of disjoint regions which are coherent within themselves. Initially, take the image as a whole to be the area of interest. Look at the area of interest and decide whether all pixels contained in the region satisfy some similarity constraint. If TRUE, then the area of interest corresponds to a region in the image. If FALSE, split the area of interest (usually into four equal sub-areas) and consider each of the sub-areas as the area of interest in turn. This process continues until no further splitting occurs. In the worst case this happens when the areas are just one pixel in size. This is a divide-and-conquer, or top-down, method.

If only a splitting schedule is used, the final segmentation would probably contain many neighbouring regions that have identical or similar properties. Thus, a merging process is used after each split which compares adjacent regions and merges them if necessary. Algorithms of this nature are called split-and-merge algorithms. To illustrate the basic principle of these methods, let us consider an imaginary image:

1. Let I denote the whole image, shown in Fig. (a).
2. Not all the pixels in I are similar, so the region is split as in Fig. (b).
3. Assume that all pixels within regions I1, I2 and I3 respectively are similar, but those in I4 are not.
4. Therefore I4 is split next, as in Fig. (c).
5. Now assume that all pixels within each region are similar with respect to that region, and that after comparing the split regions, regions I43 and I44 are found to be identical. These are thus merged together, as in Fig. (d).
We can describe the splitting of the image using a tree structure, in the form of a modified quadtree. Each non-terminal node in the tree has at most four descendants, although it may have fewer due to merging. See Fig.
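A minimal sketch of the quadtree split step followed by a simple merge pass. The intensity-range homogeneity test, the thresholds, and the mean-based merge criterion are illustrative assumptions; a full split-and-merge implementation would also require merged leaves to be spatially adjacent.

```python
import numpy as np

def homogeneous(block, threshold=10):
    """Similarity test: the region is coherent if its intensity spread is small."""
    return block.max() - block.min() <= threshold

def split(img, r=0, c=0, h=None, w=None, threshold=10, min_size=2):
    """Quadtree split: return a list of (row, col, height, width) leaf regions
    that each satisfy the homogeneity test (or cannot be split further)."""
    if h is None:
        h, w = img.shape
    if homogeneous(img[r:r + h, c:c + w], threshold) or min(h, w) <= min_size:
        return [(r, c, h, w)]
    h2, w2 = h // 2, w // 2
    return (split(img, r,      c,      h2,     w2,     threshold, min_size) +
            split(img, r,      c + w2, h2,     w - w2, threshold, min_size) +
            split(img, r + h2, c,      h - h2, w2,     threshold, min_size) +
            split(img, r + h2, c + w2, h - h2, w - w2, threshold, min_size))

def merge(img, leaves, threshold=10):
    """Greedy merge: group leaves whose mean intensities are within the threshold."""
    groups = []
    for r, c, h, w in leaves:
        mean = img[r:r + h, c:c + w].mean()
        for g in groups:
            if abs(g["mean"] - mean) <= threshold:
                g["regions"].append((r, c, h, w))
                break
        else:
            groups.append({"mean": mean, "regions": [(r, c, h, w)]})
    return groups

# Example: a synthetic image with one bright rectangle, in the spirit of the
# imaginary image described above.
img = np.zeros((64, 64), dtype=float)
img[16:48, 24:56] = 200.0
segments = merge(img, split(img))
```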