Module 3 Question Bank-RE
1. Enumerate Reverse Engineering hardware based on classification, concepts, and merits & demerits
RE hardware is used for RE data acquisition. There are three main technologies for RE data
acquisition: contact, noncontact and destructive.
Contact Methods Contact methods use sensing devices with mechanical arms, coordinate measuring machines (CMMs), and computer numerical control (CNC) machines to digitize a
surface. There are two types of data collection techniques employed in contact methods: (i) point-
to-point sensing with touch-trigger probes and (ii) analogue sensing with scanning probes. In the
point-to-point sensing technique, a touch-trigger probe is used that is installed on a CMM or on
an articulated mechanical arm to gather the coordinate points of a surface. A CMM with a touch-
trigger probe can be programmed to follow planned paths along a surface.
Advantages: (i) high accuracy, (ii) low costs, (iii) ability to measure deep slots and pockets, and (iv)
insensitivity to color or transparency.
Disadvantages: (i) slow data collection and (ii) distortion of soft objects by the probe.
Noncontact Methods In noncontact methods, 2-D cross-sectional images and point clouds that
represent the geometry of an object are captured by projecting energy sources (light, sound, or
magnetic fields) onto an object; then either the transmitted or the reflected energy is observed.
The geometric data for an object are finally calculated by using triangulation, time-of-flight, wave-
interference information, and image processing algorithms. There is no contact between the RE
hardware and an object during data acquisition.
Advantages: (i) no physical contact; (ii) fast digitizing of substantial volumes; (iii) good accuracy
and resolution for common applications; (iv) ability to detect colors; and (v) ability to scan highly
detailed objects, where mechanical touch probes may be too large to accomplish the task.
Disadvantages: (i) possible limitations for colored, transparent, or reflective surfaces and (ii) lower
accuracy.
The RE destructive method is useful for reverse engineering small and complex objects in which
both internal and external features are scanned. A CNC milling machine exposes 2-D cross-
sectional (slice) images, which are then gathered by a CCD camera. The scanning software
automatically converts the digital bitmap image to edge detected points, as the part is scanned.
2. Explain the Triangulation method using single and double camera arrangement.
Triangulation is a method that employs the locations and angles between light sources and photosensitive devices, such as a charge-coupled device (CCD) camera, to calculate coordinates. The figure shows two variants of triangulation schemes using CCD cameras: a single CCD camera and a double CCD camera arrangement.
In a single camera system, a device transmits a light spot (or line) on the object at a defined angle.
A CCD camera detects the position of the reflected point (or line) on the surface. In a double
camera system, two CCD cameras are used. The light projector is not involved in any measuring
functions and may consist of a moving light spot or line, moving stripe patterns, or a static arbitrary
pattern. A high-energy light source is focused and projected at a prespecified angle (θ) onto the surface of an object. A photosensitive device senses the reflection from the illuminated point on the surface. Because the fixed baseline length (L) between the light source and the camera is known from calibration, the position of the illuminated point (Pi) with respect to the camera coordinate system can be calculated by geometric triangulation from the known angle (θ), the focal length of the camera (F), the image coordinate of the illuminated point (P), and the fixed baseline length (L), as follows:
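A standard reconstruction of this relation, assuming θ is measured between the projected beam and the baseline and P is the image coordinate of the spot on the sensor:

Z = \frac{F\,L}{P + F\cot\theta}, \qquad X = \frac{P\,Z}{F}

Differentiating with respect to P gives \Delta Z \approx \frac{Z^{2}}{F\,L}\,\Delta P, which matches the error behaviour described next.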
The error in the Z measurement is directly proportional to Z² but inversely proportional to the
focal length and the baseline length. Therefore, increasing the baseline length can produce higher
accuracy in the measurement.
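As a quick numeric check of this statement, the following sketch evaluates ΔZ ≈ Z²·ΔP/(F·L) for a few baseline lengths; all values are illustrative assumptions, not figures from the text.

```python
# Illustrative only: depth-error estimate for single-camera laser triangulation,
# using dZ ≈ Z^2 * dP / (F * L). Every number below is an assumed example value.

def depth_error(z_mm, pixel_error_mm, focal_mm, baseline_mm):
    """Approximate uncertainty in Z for a given spot-detection error on the sensor."""
    return (z_mm ** 2) * pixel_error_mm / (focal_mm * baseline_mm)

z = 500.0    # distance to the surface point (mm)
dP = 0.005   # spot-detection error on the sensor (mm)
F = 25.0     # camera focal length (mm)

for L in (50.0, 100.0, 200.0):
    print(f"L = {L:5.1f} mm -> dZ ≈ {depth_error(z, dP, F, L):.3f} mm")
# Doubling the baseline halves the estimated depth error, as stated above.
```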
3. Explain structured-light techniques used in RE hardware
In structured-light techniques a light pattern is projected at a known angle onto the surface of
interest and an image of the resulting pattern, reflected by the surface, is captured. The image is
then analyzed to calculate the coordinates of the data point on the surface. A light pattern can be
(i) a single point; (ii) a sheet of light (line); and (iii) a strip, grid, or more complex coded light as
shown in the figure.
The most commonly used pattern is a sheet of light that is generated by fanning out a light beam.
When a sheet of light intersects an object, a line of light is formed along the contour of the object.
This line is detected and the X,Y,Z coordinates of hundreds of points along the line are
simultaneously calculated by triangulation. The sheet of light sweeps the object as the linear slide
carrying the scanning system moves it in the X direction while a sequence of images is taken by
the camera in discrete steps. An index number k is assigned to each of the images in the order
they are taken. Therefore, each k corresponds to the X position of the sheet of light. For each
image k, a set of image coordinates (i, j) of the pixels in the illuminated stripe is obtained. The
triples (i, j, k)’s are the range image coordinates; they are transformed to (x, y, z) world coordinates
using a calibration matrix. To improve the capturing process, a light pattern containing multiple
strips is projected onto the surface of an object. Structured-light systems have the following strong advantages compared to laser systems: (i) the data acquisition is very fast (up to millions of points per second); (ii) color texture information is available; and (iii) structured-light systems do not use a laser.
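To make the (i, j, k) to (x, y, z) step concrete, a minimal sketch is given below; the calibration matrix values are placeholder assumptions, since a real matrix comes from calibrating the camera and linear slide.

```python
import numpy as np

# Sketch: map range-image coordinates (i, j, k) to world coordinates (x, y, z)
# with a homogeneous calibration matrix. The matrix values are placeholders.
T = np.array([
    [0.00, 0.00, 0.50,   0.0],   # x comes mainly from the slide index k
    [0.10, 0.00, 0.00, -40.0],   # y from the image column i
    [0.00, 0.10, 0.00, -50.0],   # z from the image row j
    [0.00, 0.00, 0.00,   1.0],
])

def range_to_world(ijk):
    """Convert an (N, 3) array of (i, j, k) triples to (N, 3) world coordinates."""
    ijk = np.asarray(ijk, dtype=float)
    homogeneous = np.hstack([ijk, np.ones((ijk.shape[0], 1))])
    world = homogeneous @ T.T
    return world[:, :3] / world[:, 3:4]

stripe_pixels = [(120, 340, 0), (121, 341, 0), (122, 343, 1)]
print(range_to_world(stripe_pixels))
```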
4. Explain Interferometry techniques used in RE hardware to capture the data
Interferometry is a technique in which structured-light patterns are projected onto a surface to produce shadow moiré effects. The light contours produced by moiré effects are captured in an image and analyzed to determine the distances between the lines. This distance is proportional to the height of the surface at the point of interest, so the surface coordinates can be calculated. The moiré technique gives accurate results for 3-D reconstruction and measurement of small objects and surfaces. However, it has limitations for larger objects because precision is sacrificed for range. The figure shows the formation of moiré fringes by superimposing a line pattern with concentric circles and two other line patterns that vary in line spacing and rotation.
High-resolution X-ray CT and micro-CT scanners can resolve details as small as a few tens of microns, even when the objects being imaged are made of high-density materials. The technique is applicable to a wide range of materials, including rock, bone, ceramic, metal, and soft tissue.
Magnetic resonance imaging (MRI) is a state-of-the-art imaging technology that uses magnetic
fields and radio waves to create high-quality, cross-sectional images of the body without using
radiation. Compared to CT, MRI gives superior quality images of soft tissues such as organs,
muscle, cartilage, ligaments, and tendons in many parts of the body. CT and MRI are powerful techniques for medical imaging and reverse engineering applications; however, they are the most expensive in terms of both hardware and software for data processing.
7. Explain Destructive method of capturing the data in RE
Destructive Method The RE destructive method is useful for reverse engineering small and
complex objects in which both internal and external features are scanned. A CNC milling
machine exposes 2-D cross-sectional (slice) images, which are then gathered by a CCD camera.
The scanning software automatically converts the digital bitmap image to edge detected
points, as the part is scanned. In RP processes, the part is built layer-by-layer based on 2-D
slice data. The destructive RE process is the reverse of this. To remodel the part, 2-D slice
images of the part are gathered by destroying the part layer-by-layer. The disadvantage of this
method is the destruction of the object. However, the technique is fast. The accuracy is
acceptable; the repeatability is ±0.0127 mm (CGI systems). The layer thickness is from 0.0127–
0.254 mm. The method allows capturing internal structures. In addition, a destructive system
can work with any machinable object, including parts made from aluminum alloys, plastics,
steel, cast iron, stainless steel, copper, and wood.
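A minimal sketch of this slice-to-points idea is shown below, assuming OpenCV is available and the slice photographs are saved as slice_*.png; the file names, Canny thresholds, and pixel size are assumptions, while the layer thickness uses the lower bound quoted above.

```python
import glob

import cv2
import numpy as np

# Sketch of the destructive-RE idea: each milled layer is photographed, edge
# detection turns the bitmap into boundary points, and the layer index gives Z.
LAYER_THICKNESS_MM = 0.0127   # lower bound quoted in the text
PIXEL_SIZE_MM = 0.05          # assumed camera calibration

points = []
for k, path in enumerate(sorted(glob.glob("slice_*.png"))):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)           # edge-detected pixels of this slice
    for row, col in np.argwhere(edges > 0):   # (row, col) of every edge pixel
        points.append((col * PIXEL_SIZE_MM,        # x
                       row * PIXEL_SIZE_MM,        # y
                       k * LAYER_THICKNESS_MM))    # z from the layer index

np.savetxt("slices_point_cloud.xyz", np.array(points))
```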
8. Explain the RE software classification based on application
Reverse Engineering Software Classification There is no single RE software package that can completely satisfy the requirements of RE data processing and geometric modeling. The selection of RE software depends on the specific requirements of RE projects. Based on applications, RE software can be classified into the following groups: hardware control, CAD entity manipulation, polygon manipulation, polygon and NURBS surface construction, 2-D scan image processing and 3-D modeling, 3-D inspection, and NURBS surface and solid modeling.
9. Describe Four phases of the RE data processing chain with fundamental RE operations
The complete RE data processing chain, from scan data to the final NURBS model, can be divided into four main phases: points and images, polygon, curves, and NURBS surfaces.
11. Enumerate fundamental Reverse Engineering operations in the Points and Images Phase
Data Registration-Data registration is needed to combine, align, or merge the point clouds captured in a series of scans so that all point clouds are arranged in their proper orientation relative to one another in a common coordinate system. There are two data registration approaches: manual and automatic alignment. In manual alignment, landmark points are manually assigned on the fixed and floating point clouds and are used as references for alignment. In the automatic alignment operation, the tolerance between the fixed and floating point clouds is used as the constraint for the alignment process.
Data Optimization-Noise reduction tools are used for both manually and automatically removing the noise in scanned data. With automatic approaches, the noise removal operation determines where the points should lie and then moves them to these locations based on statistical information about the point data.
Sampling Points-The sampling function is used to minimize the number of points in the point cloud data and to make the data well structured so that it is easier to handle. There are three sampling methods: curvature, random, and uniform sampling, which select points on a curvature, random, and proportional basis, respectively.
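The sketch below illustrates two of these point-phase operations, statistical noise removal and uniform (grid-based) sampling, on an (N, 3) NumPy point cloud; the neighbour count, standard-deviation ratio, and cell size are illustrative assumptions rather than values from the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k neighbours is unusually large."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)      # first neighbour is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

def uniform_sample(points, cell=1.0):
    """Keep one point per cubic cell of side `cell` (fewer, well-structured points)."""
    keys = np.floor(points / cell).astype(int)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

cloud = np.random.rand(10000, 3) * 100.0        # stand-in for scanned data
cloud = remove_outliers(cloud)
cloud = uniform_sample(cloud, cell=2.0)
print(cloud.shape)
```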
12. Explain NURBS Surface Phase in Reverse engineering operation
The main objective of the NURBS surface phase in the RE data processing chain is to prepare
a patch structure of quadrangular shapes for NURBS surface construction. The patches (or loops) can be drawn on the polygon model manually, semiautomatically, or automatically based on a target patch count and the curvature of a model. The most useful RE operations in
the NURBS surface phase are curvature detection, patch editing, patch template reuse, and
NURBS patch merging. Curvature detection identifies components of the model based on
surface curvature. The boundary (or feature) curves that define the components of the model
are automatically detected. They are then manually edited to produce optimal curves. The
boundary curves eventually define panels of rectangular patches. Preparing a good boundary
curve layout ensures that the future patch structure maintains all necessary topological detail.
Patch editing helps the optimal organization of a patch structure. NURBS patch merging
combines two or more NURBS patches into a single patch. This helps reduce the number of
patches on the model and makes the data structure simpler for the final NURBS surface model.
To compare the systems demonstrated meaningfully, it will be necessary first to have prepared a
matrix of desired system attributes and their overall importance. The importance of a particular
attribute is usually ranked on a scale from 1–5 where, for example, 1 may mean “not at all important”
and 5 may mean “essential”. Then as the team watches the demonstrations and questions the vendor
about its product, each member should be trying to form an opinion of the capability of the vendor
against each of the attributes. As soon as possible after the vendor visit and certainly within days
rather than weeks, the team should meet and agree on a score for the vendor against each attribute.
This can be done as a mean and standard deviation of each team member’s individual scores or by
discussion until a consensus is reached. Finally, the scores against each attribute are multiplied by the
importance of each attribute and the resultant weighted scores are then totalled for each vendor. This
gives the relative ranking of each vendor and may be used to reduce the list of potential vendors to
two or three.
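The arithmetic behind the weighted ranking can be sketched in a few lines; the attributes, importance values, and scores below are invented placeholders, not figures from the sample grid.

```python
# Sketch of the rank-weighting arithmetic: the team's agreed score for each
# attribute is multiplied by that attribute's importance (1-5) and totalled
# per vendor. All names and numbers below are invented placeholders.
importance = {"accuracy": 5, "speed": 3, "ease_of_use": 4, "price": 2}

vendor_scores = {
    "vendor_1": {"accuracy": 4, "speed": 3, "ease_of_use": 4, "price": 3},
    "vendor_2": {"accuracy": 5, "speed": 2, "ease_of_use": 3, "price": 4},
    "vendor_3": {"accuracy": 2, "speed": 3, "ease_of_use": 2, "price": 5},
}

totals = {
    vendor: sum(importance[attr] * score for attr, score in scores.items())
    for vendor, scores in vendor_scores.items()
}
for vendor, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(vendor, total)
```

The sensitivity analysis described below can be approximated by adding a small random perturbation to each importance value and re-computing the totals to see whether the vendor ordering changes.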
Figure shows a sample rank weighting grid. In this example, vendor 1 and vendor 2 both have
comparable scores and should be considered; vendor 3 is poorer and should be rejected. The nature
of the process described here requires that members of the team make fairly subjective decisions
about a particular vendor’s capability against an attribute and about the importance of an attribute.
Clearly, therefore, the outcome of this process should not be regarded as absolute truth. It is
sometimes useful to perform a sensitivity analysis on the results to see how the outcome varies with
small changes in the assumptions. For example, the value of each importance factor could be
randomly varied by plus or minus an amount, and the totals re-evaluated to see whether the outcome
changes. Similarly, if team means and standard deviations were calculated, a vendor's score can be varied randomly wherever the standard deviation is above an agreed value, since this indicates a lack of agreement among the team members. For this process to be useful, it is essential that the same people
see all the vendors and contribute to the scoring process. Otherwise, the outcome may be more indicative of who attended each visit than of the relative merits of the systems under consideration.
14. Explain Post-processing the Captured Data in Reverse Engineering
Post-processing the Captured Data There are a number of routes that can be followed once a set of data has been acquired. Some vendors call these routes workflows and offer workflow templates to guide the user through the process steps to achieve a desired output. The figure
shows a generalized workflow, which is intended to help clarify some of the following
discussion. To keep the role activity diagram reasonably simple, some detail has been omitted.
The primary output from any of the scanning or measuring techniques discussed before is a
very large set of X-Y-Z coordinates. File sizes can run to hundreds of megabytes in some
systems. But little can be done with the data without some sophisticated software to post
process the points. There are a number of classes of software for dealing with the point cloud.
The first class in terms of the workflow of reverse engineering is those that manipulate the
points in some way, usually leading to a polygon/triangulated mesh. This may be sufficient for
some purposes. For example, rapid prototyping or finite element analysis may be possible
directly from the polygon mesh.
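As an illustration of this first class of software, the sketch below turns an X-Y-Z point file into a simple triangulated mesh; it assumes a roughly single-valued (2.5-D) surface so that the XY projection can be Delaunay-triangulated, and the file names are placeholders. Dedicated RE packages perform this step far more robustly on full 3-D, multi-view data.

```python
import numpy as np
from scipy.spatial import Delaunay

# Sketch of the "points -> polygon mesh" step for a roughly 2.5-D surface:
# triangulate the XY projection and reuse the triangles for the 3-D points.
points = np.loadtxt("scan.xyz")          # assumed N x 3 text file of X-Y-Z points

tri = Delaunay(points[:, :2])            # connectivity from the XY projection
faces = tri.simplices                    # (M, 3) vertex indices per triangle

# Write a minimal ASCII OBJ so the mesh can be opened in a viewer or RP tool.
with open("scan_mesh.obj", "w") as f:
    for x, y, z in points:
        f.write(f"v {x} {y} {z}\n")
    for a, b, c in faces:                # OBJ indices are 1-based
        f.write(f"f {a + 1} {b + 1} {c + 1}\n")
```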