CHAPTER -1
INTRODUCTION
1.1 Introduction to WSN
A wireless sensor network (WSN) can be defined as a network of devices that communicate the information gathered from a monitored field through wireless links. A central goal of WSN technology is to enhance network lifetime by reducing the energy consumption of the sensor network. Wireless sensor nodes are typically dispersed over a sensing area to monitor earthquakes, battlefields, industrial environments, habitats, agricultural fields, physical atmospheric conditions and smart homes.
Sensor nodes sense the environment, gather information and transmit it to the base station (BS) through a wireless link. Owing to advances in Micro-Electro-Mechanical Systems (MEMS) technology, it is now possible to set up thousands or even millions of sensor nodes. Such dense deployment of a WSN makes it quite difficult to recharge node batteries. A key subject for WSNs is therefore to curtail the power expenditure of sensor nodes and so prolong network lifetime; many clustering-based algorithms have been proposed to this end.
This work aims to trim down the energy consumption of sensor nodes by logically dividing the network into four regions, each using a different communication hierarchy. Nodes in region 1 communicate directly with the BS, while nodes in region 2 communicate directly with a gateway node. Nodes in the other two regions use a clustering hierarchy, and their sensor nodes transmit data to the gateway node through their cluster heads (CHs). The gateway node assists in defining clusters and issues a TDMA schedule for the CHs; each CH in turn issues its own TDMA schedule for its member nodes.
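As a minimal sketch of this forwarding rule (written in Tcl to match the implementation language; the region numbers and the proc name next_hop are illustrative choices of ours, not part of NS2 or fixed by this report), the next hop of a node can be resolved from its region:

proc next_hop {region} {
    # Region-to-next-hop mapping described in the text above.
    switch -- $region {
        1       { return "BS" }        ;# region 1: directly to the base station
        2       { return "GATEWAY" }   ;# region 2: directly to the gateway node
        3 -
        4       { return "CH" }        ;# regions 3 and 4: via the cluster head
        default { error "unknown region: $region" }
    }
}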
Figure 1 shows a typical sensor network structure, in which sensor nodes collect information from the field and transmit the sensed data back to the base station. Many sensor systems are deployed in harsh, unattended and often adversarial physical environments such as battlefields and deserts.
Recently, a number of energy-efficient routing protocols designed for WSNs have been proposed. The existing routing protocols can be classified into two categories. The first category is the non-hierarchical routing protocols, in which a source node floods an advertisement message towards the destination; when a destination receives an advertisement message from a neighbour, it sets up a route through that neighbour for sending data. The main problem with non-hierarchical routing protocols is that they have trouble deciding routes. The second category is the hierarchical (or clustering) routing protocols, which organize sensor nodes into clusters based on the received signal strength and use local cluster heads as routers to the base station.
WSNs are suitable for hard-to-reach places such as areas over the sea, mountains, rural areas or deep forests.
9. Industrial monitoring:
Data logging: Wireless sensor networks are also employed to gather data for the monitoring of environmental information. This can be as simple as monitoring the temperature in a fridge, or as critical as monitoring the water level in the overflow tanks of nuclear power plants.
Clustering is a technique used in mobile ad hoc networks for the best utilization of the available bandwidth and also for prolonging the network lifetime. The entire group of nodes participates in the election of the Cluster Head, which is most often used for network administration but can also be used for the task of intrusion detection, i.e. network security.
Disadvantages:
1. The main features used were Battery Backup of the node, Fairness, Hop Count and Mobility, but none of the existing approaches for clustering in WSN used all of these important features together in a single approach to choose a Cluster Head.
2. The clustering depends on criteria such as Communication Range, Hop Count, Battery Power, Relative Velocity and Fairness.
An effective clustering technique (TEEN) has been proposed for the election of Cluster Heads and a Super Cluster Head. It uses five parameters at a time to select a Cluster Head: Communication Range, Hop Count, Battery Power, Relative Velocity and Fairness. The Super Cluster Head, as already defined, is a node that is part of the network but is not itself a Cluster Head; it is used to detect a misbehaving Cluster Head that is not performing the task of intrusion detection properly. A node is selected as the Super Cluster Head only if it has the maximum battery power and is not a Cluster Head.
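A minimal Tcl sketch of this election rule follows; the node-record layout {id battery isCH} is an assumption made for illustration, not a structure defined in this report:

proc elect_sch {nodes} {
    # Return the id of the non-cluster-head node with maximum battery power.
    set sch ""
    set best -1.0
    foreach n $nodes {
        lassign $n id battery isCH   ;# assumed record layout: {id battery isCH}
        if {!$isCH && $battery > $best} {
            set best $battery
            set sch $id
        }
    }
    return $sch
}

For example (hypothetical data), elect_sch {{1 50.0 0} {2 88.0 1} {3 92.0 0}} returns 3: node 2 has more energy but is excluded because it is already a Cluster Head.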
Advantages:
1. This technique also appears to increase the network lifetime because of the slower dissipation of energy.
The Super Cluster Head introduced in the proposed technique adds one more authority entity for detecting intruding Cluster Head nodes, which is very important from the security point of view in WSN.
The work carried out in each project phase is outlined below:
Designing the overall functional view, i.e. the system architecture of the project.
Describing the language and platform used in the project implementation.
Identification and design of the modules to be implemented.
Implementing the applications for accessing and controlling the different types of services.
Testing the implemented system.
CHAPTER -2
LITERATURE SURVEY
A literature survey is mainly carried out in order to analyze the background of the current project, which helps to find the flaws in the existing system and guides us on which unsolved problems we can work. The following topics therefore not only illustrate the background of the project but also uncover the problems and flaws that motivated the solutions proposed in this project. A variety of research has been done on power-aware scheduling; the following section explores different references that discuss several topics related to it.
Paper [1] introduces an energy factor when choosing a cluster head, and conducts simulations comparing the LEACH protocol with the improved algorithm in terms of network lifetime, network stability and energy consumption.
In paper [2], LEACH arranges the nodes in the network into small clusters and chooses one of them as the cluster head. A node first senses its target and then sends the relevant information to its cluster head; the cluster head then aggregates and compresses the information received from all the nodes and sends it to the base station.
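For reference, LEACH's randomized cluster-head election rule (standard in the LEACH literature, summarized here rather than taken from [2]) elects node n as cluster head in round r if a uniform random number in [0, 1] falls below the threshold

    T(n) = p / (1 - p * (r mod (1/p)))   if n is in G,   and   T(n) = 0 otherwise,

where p is the desired fraction of cluster heads and G is the set of nodes that have not served as cluster head in the last 1/p rounds.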
In paper [4], two techniques are proposed for the detection of the JF Periodic Dropping attack in MANETs: the Cluster Based Intrusion Detection and Prevention Technique and the Super Cluster Based Intrusion Detection and Prevention Technique, both of which result in better network performance.
Paper [5] presents a review of various clustering schemes for Mobile Ad hoc Network. The
analysis of different clustering schemes is done and suggestions for improvement are given.
In paper [6], a new weighted distributed clustering algorithm called CBMD is proposed. Connectivity (C), residual battery power (B), average mobility (M), and distance (D) of the nodes are the parameters used to choose cluster heads, so that fewer clusters are formed in order to minimize the overhead involved and stable clusters are formed that maximize the lifespan of the mobile nodes.
In paper [7], a k-hop cluster maintaining mechanism for mobile ad hoc networks (KCMM) is proposed. It is based on the Max-Min heuristic algorithm and increases the stability of multi-hop clusters in large-scale and dense scenarios.
In paper [8], to enhance network stability, an Enhanced Sectorized Clustering Scheme based on Transmission Range for MANETs (ESCS) is proposed, which considers the residual energy and transmission range of the cluster head to be elected.
In paper [9] Weighted Clustering Algorithm (WCA) is used for cluster formation. Cluster
maintenance is done using Mobility Prediction. This reduces the overhead in communication.
In paper [10], selective prioritized clustering (SPC) is proposed. This algorithm makes use of hierarchical clustering and topology control to divide the network into clusters, and further into sub-clusters having two cluster heads each.
In paper [11] CBMD algorithm is proposed which uses connectivity (C), residual battery power
(B), average mobility (M) and distance (D) of the nodes to choose locally optimal cluster heads.
The algorithm produces a global path.
In paper [12] a new clustering protocol for MANETs is proposed. It uses the routing information
for maintenance of clusters formed.
In paper [13], IWCA is proposed which keeps a node with weak battery power from being
elected as a CH.
In paper [14] a Flexible Weighted Clustering Algorithm based on Battery Power (FWCABP) is
proposed which increases the stability. It avoids the node with weak battery power from being
elected as a cluster-head.
The Robust Clustering Algorithm proposed in paper [15] uses three parameters to choose the Cluster Head: remaining power, mobility prediction and workload.
Summary
This chapter mainly discusses the papers and websites that were referred to while preparing this dissertation report. All these papers and websites provide information related to the learning of collective behaviour, the existing solutions, the methods used, and their advantages and limitations.
CHAPTER-3
SYSTEM REQUIREMENTS SPECIFICATION
A functional requirement defines a function of a software system and how the system must behave when presented with specific inputs or conditions. These may include calculations, data manipulation and processing, and other specific functionality. In this system the following are the functional requirements:
1. Cluster Head selection based on the important parameters i.e. Communication range, Hop
Count, Battery Power, Relative Velocity and Fairness
Product Requirements
Portability: Since the software is developed in NS2, it can be executed on any platform on which NS2 is available, with minor or no modifications.
Correctness: It follows a well-defined set of procedures and rules to compute, and rigorous testing is performed to confirm the correctness of the data.
Ease of Use: The front end is designed in such a way that it provides an interface which allows the user to interact in an easy manner.
Modularity: The complete product is broken up into many modules, and well-defined interfaces are developed to exploit the flexibility of the product.
Robustness: The software is developed in such a way that the overall performance is optimized and the user can expect results within a limited time with the utmost relevancy and correctness.
Non-functional requirements are also called the qualities of a system. These qualities can be divided into execution qualities and evolution qualities. Execution qualities, such as security and usability, are observed during run time, whereas evolution qualities involve testability, maintainability, extensibility and scalability.
Organizational Requirements
Process Standards: IEEE standards are used to develop the application, these being the standards used by most standard software developers all over the world.
Design Methods: Design is one of the important stages in the software engineering process. This stage is the first step in moving from the problem to the solution domain; in other words, starting with what is needed, design takes us towards how to satisfy those needs.
The design of the system is perhaps the most critical factor affecting the quality of the software, and it has a major impact on the later phases, particularly testing and maintenance. We have to design the product according to standards that are understood by the developers of the team.
User Requirements
Mission profile or scenario: It describes the procedures used to accomplish the mission objective. It also determines the effectiveness and efficiency of the system.
Performance and related parameters: It points out the critical system parameters required to accomplish the mission.
Utilization environments: It gives a brief outline of system usage and identifies appropriate environments for effective system operation.
NS2 Simulator:
NS began as a variant of the REAL network simulator in 1989 and has evolved substantially over the past few years. In 1995 ns development was supported by DARPA through the VINT project at LBL, Xerox PARC, UCB, and USC/ISI. Currently ns development is supported through DARPA with SAMAN and through NSF with CONSER, both in collaboration with other researchers including ACIRI. Ns has always included substantial contributions from other researchers, including wireless code from the UCB Daedalus and CMU Monarch projects and Sun Microsystems. For documentation on recent changes, see the version 2 change log.
In 1997, the DARPA Virtual Inter Network Test bed (VINT) project was initiated,
including LBNL, Xerox PARC, UC Berkeley, and USC's Information Sciences Institute (ISI).
The bulk of ns-2 development occurred during this timeframe. Software maintenance activities
also migrated to ISI during this time period, eventually led by John Heidemann. After the
conclusion of the VINT project, ns-2 continued to be funded during the 2001-04 timeframe by
the DARPA SAMAN and NSF CONSER awards to USC/ISI.
Presently, ns-2 consists of over 300,000 lines of source code, and there is probably a
comparable amount of contributed code that is not integrated directly into the main distribution
(many forks of ns-2 exist, both maintained and unmaintained). It runs on GNU/Linux, FreeBSD,
Solaris, Mac OS X and Windows 95/98/NT/2000/XP. It is licensed for use under version 2 of the
GNU General Public License.
GNU Plot:
Gnuplot is a command-line program that can generate two- and three-dimensional plots of functions, data, and data fits. It is frequently used for publication-quality graphics as well as in education. The program runs on all major computers and operating systems (GNU/Linux, Unix, Microsoft Windows, Mac OS X, and others). It is a program with a fairly long history, dating back to 1986. Despite its name, this software is not distributed under the GNU General Public License (GPL), opting for its own more restrictive open-source license instead.
Gnuplot can produce output directly on screen, or in many graphics file formats, including Portable Network Graphics (PNG), Encapsulated PostScript (EPS), Scalable Vector Graphics (SVG), JPEG and many others. It is also capable of producing LaTeX code that can be included directly in LaTeX documents, making use of LaTeX's fonts and powerful formula abilities. The program can be used both interactively and in batch mode using scripts; it is well supported and documented, and extensive help can also be found on the Internet.
Gnuplot is used as the plotting engine of Maxima and gretl, and it can be used from various languages, including Perl (via CPAN), Python (via gnuplot-py and SAGE), Java (via JGNUplot), Ruby (via Ruby Gnuplot), Ch (via Ch Gnuplot), and Smalltalk (Squeak and GNU Smalltalk). Gnuplot also supports piping, and is itself programmed in C.
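As a small illustration of that piping support, the following Tcl fragment (a sketch of ours; it assumes gnuplot is on the PATH and that energy.dat holds two whitespace-separated columns) drives gnuplot through a pipe:

set gp [open "|gnuplot" w]
puts $gp "set terminal png"
puts $gp "set output 'energy.png'"
puts $gp "set xlabel 'Number of Nodes'"
puts $gp "set ylabel 'Energy Consumption (J)'"
puts $gp "plot 'energy.dat' using 1:2 with linespoints title 'Energy'"
close $gp   ;# closing the pipe flushes the commands and runs gnuplot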
Hardware Requirements
RAM: 3 GB
Storage: 20 GB
Monitor: 15”
Mouse: 3 buttons
Software Requirements
Platform: Ubuntu
Language: TCL/C++
IDE/Tool: NS2
Summary
This chapter gives details of the functional requirements, non-functional requirements, resource requirements, hardware requirements and software requirements. The non-functional requirements in turn comprise product requirements, organizational requirements, user requirements and basic operational requirements.
CHAPTER-4
SYSTEM ANALYSIS
System analysis is the process by which we learn about the existing problems, define objects and requirements, and evaluate the solutions. It is a way of thinking about the organization and the problems it involves, and a set of technologies that helps in solving these problems. The feasibility study plays an important role in system analysis, as it gives the target for design and development.
Depending on the results of the initial investigation, the survey is now expanded into a more detailed feasibility study. A FEASIBILITY STUDY is a test of a system proposal according to its workability, its impact on the organization, its ability to meet needs and its effective use of resources. It determines and evaluates the performance and cost-effectiveness of each proposed system.
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system was well within the budget; this was achieved because most of the technologies used are freely available, and only the customized products had to be purchased.
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would in turn place high demands on the client. The developed system must therefore have modest requirements; only minimal or no changes are required for implementing this system.
The aspect of this study is to check the level of acceptance of the system by the user. This includes
the process of training the user to use the system efficiently. The user must not feel threatened by
the system, instead must accept it as a necessity. The level of acceptance by the users solely
depends on the methods that are employed to educate the user about the system and to make him
familiar with it. His level of confidence must be raised so that he is also able to make some
constructive criticism, which is welcomed, as he is the final user of the system.
Summary
The main aim of this chapter is to find out whether the system is feasible or not. For this purpose, different kinds of analysis, such as performance analysis, technical analysis and economical analysis, are performed.
CHAPTER-5
SYSTEM DESIGN
Design is a creative process, and a good design is the key to an effective system. System design is defined as "the process of applying various techniques and principles for the purpose of defining a process or a system in sufficient detail to permit its physical realization".
Various design features are followed to develop the system. The design specification describes
the features of the system, the components or elements of the system and their appearance to
end-users.
A set of fundamental design concepts has evolved over the past three decades. Although
the degree of interest in each concept has varied over the years, each has stood the test of time.
Each provides the software designer with a foundation from which more sophisticated design
methods can be applied. The fundamental design concepts provide the necessary framework for
“getting it right”. The fundamental design concepts such as abstraction, refinement, modularity, software architecture, control hierarchy, structural partitioning, data structure, software procedure and information hiding are applied in this project to get it right as per the specification.
Input design is the process of converting the user-oriented inputs into a computer-based format. The goal of designing input data is to make the automation as easy and error-free as possible. To provide a good input design for the application, easy data-input and selection features are adopted. Input design requirements such as user friendliness, a consistent format and an interactive dialogue, giving the right message and help to the user at the right time, are also considered in the development of the project. Input design is a part of the overall system design that requires very careful attention. Often the collection of input data is the most expensive part of the system, and it needs to be routed through a number of modules.
This is the point where the user is ready to send the data to the destination machine along with a known IP address; if the IP address is unknown, the transfer is prone to error.
A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs; output is the most important and direct source of information for the user. Efficient and intelligent output improves the system's relationship with the source and destination machines. Outputs from computers are required primarily to ensure that the destination receives the same packets that the user sent, rather than corrupted or spoofed packets. They are also used to provide a permanent copy of these results for later consultation.
The design method that has been followed to design the architecture of the system is the MVC design pattern. Swing uses the model-view-controller (MVC) architecture as the fundamental design behind each of its components. Essentially, MVC breaks a GUI component into three elements: a model, a view, and a controller. Each of these elements plays a crucial role in how the component behaves.
Swing actually makes use of a simplified variant of the MVC design called the model-delegate. This design combines the view and the controller into a single element, known as the UI delegate, that draws the component on the screen and handles GUI events. Communication between the model and the UI delegate then becomes a two-way street. Each Swing component contains a model and a UI delegate: the model is responsible for maintaining information about the component's state, while the UI delegate is responsible for maintaining information about how to draw the component on the screen. The UI delegate (in conjunction with AWT) reacts to various events that propagate through the component.
Model
The model is the piece that represents the state and low-level behavior of the component.
It manages the state and conducts all transformations on that state. It encompasses the state data
for each component. There are different models for different types of components. For example, the model of a scrollbar component might contain information about the current position of its adjustable “thumb”, its minimum and maximum values, and the thumb’s width. A menu, on the
other hand, may simply contain a list of the menu items the user can select from. The system
itself maintains links between model and views and notifies the views when the model changes
state.
View
The view refers to how you see the component on the screen. It is the piece that manages
the visual display of the state represented by the model. Almost all window frames will have a
title bar spanning the top of the window. However the title bar may have a close box on the left
side or on the right side. These are the examples of different types of views for the same window
object. A model can have more than one view, but that is typically not the case in the Swing set.
Controller
The controller is the piece that manages user interaction with the model. It provides the
mechanism by which changes are made to the state of the model. It is the portion of the user
interface that dictates how the component interacts with events.
The view cannot render the scrollbar correctly without obtaining information from the
model first. In this case the scrollbar will not know where to draw its “thumb” unless it can
obtain its current position and width relative to the minimum and maximum. Likewise the view
determines if the component is the recipient of user events, such as mouse clicks. The view
passes these events on to the controller, which decides how to handle them best. Based on the
controller’s decision the values in the model may need to be altered. If the user drags the
scrollbar thumb, the controller will react by incrementing the thumb’s position in the model. At
that point the whole cycle can repeat.
The JFC user interface component can be broken down into a model, view, and
controller. The view and controller are combined into one piece, a common adaptation of the
basic MVC pattern. They form the user interface for the component.
A system development method is a process through which a product is completed or freed from any problem. The software development process is described as a number of phases, procedures and steps that together deliver the complete software, and it follows a series of steps used for product progress. The development method followed in this project is the waterfall model.
The waterfall model is a sequential software development process, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of requirements initiation, analysis, design, implementation, testing and maintenance.
Requirement Analysis: This phase is concerned with collecting the requirements of the system. It involves generating the requirements document and a requirements review.
System Design: Keeping the requirements in mind, the system specifications are translated into a software representation. In this phase the designer emphasizes algorithms, data structures, software architecture, etc.
Coding: In this phase the programmer starts coding in order to give a full sketch of the product; in other words, the system specifications are converted into machine-readable computer code.
Implementation: The implementation phase involves the actual coding or programming of the software. The output of this phase is typically the libraries, executables, user manuals and additional software documentation.
Testing: In this phase all programs (models) are integrated and tested to ensure that the complete
system meets the software requirements. The testing is concerned with verification and
validation.
Maintenance: The maintenance phase is the longest phase in which the software is updated to
fulfill the changing customer need, adapt to accommodate change in the external environment,
correct errors and oversights previously undetected in the testing phase, enhance the efficiency of
the software.
[System architecture: the Configuration module (No. of Nodes, Area, Initial Energy, Range) generates a Tcl script that drives the NS2 Simulator, which performs Cluster Routing.]
Here the main modules are cluster creation and cluster routing.
Configuration: In this module the network is created from the number of nodes, the area, the range and the initial energy; the created network is simulated using a Tcl script (see the sketch after this list).
Node: This module indicates that a node has been added to the network by the user.
Cluster creation: In this module the clustering of nodes is done and a cluster head is selected for every cluster group; based on energy consumption, a super cluster head node is elected, which monitors the cluster heads.
Cluster Routing: Routing to the cluster head, the super cluster head and the sink is done in this module.
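The following OTcl fragment sketches how the Configuration module's parameters could map onto an NS2 wireless setup. The concrete values (50 nodes, a 1000 m x 1000 m area, 100 J initial energy) and the AODV routing agent are placeholders, not values fixed by this report:

set ns [new Simulator]
set topo [new Topography]
$topo load_flatgrid 1000 1000               ;# area: placeholder 1000 m x 1000 m
create-god 50                               ;# required for NS2 wireless runs
$ns node-config -adhocRouting AODV \
        -llType LL -macType Mac/802_11 \
        -ifqType Queue/DropTail/PriQueue -ifqLen 50 \
        -antType Antenna/OmniAntenna \
        -propType Propagation/TwoRayGround \
        -phyType Phy/WirelessPhy \
        -channelType Channel/WirelessChannel \
        -topoInstance $topo \
        -energyModel EnergyModel -initialEnergy 100.0 \
        -agentTrace ON -routerTrace ON
for {set i 0} {$i < 50} {incr i} {
    set node_($i) [$ns node]                ;# create the configured nodes
}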
A class diagram in the Unified Modelling Language (UML) is a type of static structure
diagram that describes the structure of a system by showing the system's classes, their attributes,
and the relationships between the classes.
[Class diagram: Main (createNetwork, clusterNetwork, sendPacket, viewReport) depends on Simulator (createNode, clusterNetwork, retrieveLogs), which aggregates 0..* Node (setPos, setRange, setRole, forward); Node is specialized by Cluster Head (forward) and Super Cluster Head (forward); Sink (collectPacket) depends on Report (measureEnergyConsumed, measureLifetime).]
Main: The operations in this class are createNetwork, clusterNetwork, sendPacket and viewReport. This class has a dependent class called Simulator.
Simulator: The operations in this class are createNode, clusterNetwork and retrieveLogs. This class aggregates the Node class.
Node: The operations in this class are setPos, setRange, setRole and forward. This class is generalized into the Cluster Head and Super Cluster Head classes.
Sink: This class has a dependent class called Report. The operation in this class is collectPacket.
Report: The operations in this class are measureEnergyConsumed and measureLifetime.
[Use-case diagram: the Admin actor performs Create Network, Add Nodes, Cluster Network, Send Packets, Measure Energy Consumption and Measure Lifetime.]
[Sequence diagram 1: createNetwork goes from Admin to Main, which creates the Simulator; the Simulator creates each Node (createNode, setPos, setRange) and networkCreated is returned up the chain.]
Each object interacts with the other objects in a sequential order through messages, as shown above.
[Sequence diagram 2: clusterNetwork is passed from Main to the Simulator, which assigns roles with setRole(CH, SCH); success is returned up the chain.]
Each object interacts with the other objects in a sequential order through messages, as shown above.
[Sequence diagram 3: sendPacket triggers forwarding from Node to Cluster Head to Super Cluster Head to the Sink, which collects the packet (collectPacket).]
Here Admin, Main, Node, Cluster Head, Super Cluster Head and Sink are the objects. Each object interacts with the other objects in a sequential order through messages, as shown above.
[Sequence diagram 4: viewReport triggers measureEnergyConsumption, retrieveLogs (returning logs) and measureLifetime, after which the report is returned.]
Each object interacts with the other objects in a sequential order through messages, as shown above.
A context-level or level 0 data flow diagram shows the interaction between the system and external agents which act as data sources and data sinks. On the context diagram (also known as the Level 0 DFD) the system's interactions with the outside world are modeled purely in terms of data flows across the system boundary. The context diagram shows the entire system as a single process and gives no clues as to its internal organization.
Here in level 0, the nodes in the network are taken as input and the clustered network is taken as output. The first main process is Clustering.
[Level 0 DFD, process 2 (Cluster Routing): Sender Packets → Routing → Packet Reaches Sink.]
Here in level 0, the sender's packets are taken as input and the packets reaching the sink are taken as output. The second main process is Cluster Routing.
The Level 1 DFD shows how the system is divided into sub-systems (processes), each of
which deals with one or more of the data flows to or from an external agent, and which together
provide all of the functionality of the system as a whole. It also identifies internal data stores that
must be present in order for the system to do its job, and shows the flow of data between the
various parts of the system.
Level 1 works with the sub-processes of each main process. The sub-processes of the first main process are cluster head selection and super cluster head selection.
[Level 1 DFD, process 2: Sender Packets → Routing to Cluster Head (2.0) → Routing to Super Cluster Head (2.1) → Routing to Sink (2.2) → Packet Reaches Sink.]
The sub-processes of the second main process are routing to the cluster head, routing to the super cluster head and routing to the sink.
Summary
This chapter mainly concentrates on a few fundamental design concepts such as input and output design, the system architecture, the class diagram, the sequence diagrams, the use-case diagram and the data flow diagrams.
Chapter 6
IMPLEMENTATION
Implementation is the stage of the project where the theoretical design is turned into a
working system. At this stage the main workload and the major impact on the existing system
shifts to the user department. If the implementation is not carefully planned and controlled, it can
cause chaos and confusion.
The implementation stage involves:
Careful planning.
Investigation of the system and constraints.
Design of methods to achieve the changeover.
Evaluation of the changeover method.
Correct decisions regarding the selection of the platform.
Appropriate selection of the language for application development.
The advantages of TCL over Perl include:
TCL is simpler: those without a C/Unix background generally find TCL syntax far easier to learn and retain.
TCL is smaller.
TCL is easier to extend, embed, and customize.
TCL source code traditionally is a model of lucidity; Perl source code traditionally is dense in magic.
Tcl/Tk is far more portable than Perl/Tk and generally more current.
TCL networking is more succinct and less intimidating.
TCL's exec, open and socket are gems of accessible and portable functionality, in comparison to the analogous Perl offerings.
TCL's unified channel API makes life much easier, particularly on Windows (see the example after this list).
As of spring 2001, TCL's Unicode capabilities are considerably more mature.
As of spring 2001, TCL's threading savvy (read "TCL and threads") is considerably more mature.
Subjective stuff: some people find TCL a better fit to their own sensibilities.
You can read your own code six months after you've forgotten how the program worked.
(file)event, trace and friends often solve requirements for functionality better than threads.
TCL is way ahead of Perl in VFS capabilities; fuse provides an example of the potential consequences.
As "TCL's string handling has been written by paranoiacs", to quote DKF, TCL is immune to many "format string vulnerabilities".
All commands defined by TCL itself generate error messages on incorrect usage.
Extensibility, via C, C++, Java, and TCL.
Interpreted language using byte code.
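As a small, self-contained illustration of the socket, fileevent and channel features praised above (the port number 12345 is arbitrary), a line-oriented echo server takes only a few lines of Tcl:

proc accept {chan addr port} {
    # New connection: switch to line buffering and register a read handler.
    fconfigure $chan -buffering line
    fileevent $chan readable [list echo $chan]
}
proc echo {chan} {
    # Echo each received line back; close the channel on end of file.
    if {[gets $chan line] < 0} { close $chan; return }
    puts $chan $line
}
socket -server accept 12345
vwait forever   ;# enter the event loop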
The NS2 simulator is an open-source simulation tool that runs on Linux. It is a discrete event simulator targeted at networking research, and it provides substantial support for the simulation of routing, multicast protocols and IP protocols such as UDP, TCP, RTP and SRM over wired and wireless (local and satellite) networks.
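A minimal event-driven script (a sketch of ours, with arbitrary times and file names) shows the discrete-event style: events are scheduled on a virtual clock, and the run ends when a scheduled finish procedure executes.

set ns [new Simulator]
set tracefile [open out.tr w]
$ns trace-all $tracefile
proc finish {} {
    global ns tracefile
    $ns flush-trace
    close $tracefile
    exit 0
}
$ns at 0.5 "puts {event fired at t = 0.5 s}"
$ns at 2.0 "finish"
$ns run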
Cluster_Formation ( )
Notation:
1. hop_count: the number of hops a node needs to reach the cluster head (a lower hop_count is more efficient)
2. rel_vel: the relative velocity of a node in the network (a higher value is preferable)
3. COM_RANGE: the node is present in the communication range
4. NCOM: the node is not present in the communication range
5. G: the set of nodes that have not been cluster heads in the last r rounds
6. BB: the node's battery backup in joules
7. min_dist: the distance calculated between each node in the network and the chosen cluster head
8. temp: the distance calculated between the cluster head and another node
Start
if (hop_count < hop_count_limit) and (temp < min_dist) then
the node is COM_RANGE
call ELECT_CLUSTER_HEAD ( )
otherwise
the node is NCOM
End
ELECT_CLUSTER_HEAD ( )
// p_thresh is the threshold value for a node to become cluster head; here it is 0.00030
// p holds when min_dist is satisfied, hop_count <= hop_count_limit, rel_vel > rel_vel_limit, the node belongs to the set G, and BB > BB_thresh
// node_type shows the type of the node: a normal node (N) in the network or a cluster head
if (p <= p_thresh) then
if node_type = N then
node (z) can perform the task of cluster head, and it checks the following condition periodically for itself:
if (p > p_thresh) then
call ELECT_CLUSTER_HEAD ( )
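A Tcl rendering of the eligibility test above might look as follows. Only p_thresh = 0.00030 is given in the pseudocode; the remaining limit values and the procedure name eligible_as_ch are assumptions for illustration, and the combined condition that the pseudocode calls p is evaluated directly:

proc eligible_as_ch {hop_count rel_vel in_G battery min_dist_ok} {
    set hop_count_limit 3       ;# assumed limit
    set rel_vel_limit   1.0     ;# assumed limit
    set BB_thresh       10.0    ;# assumed battery threshold, joules
    expr {$min_dist_ok
          && $hop_count <= $hop_count_limit
          && $rel_vel > $rel_vel_limit
          && $in_G
          && $battery > $BB_thresh}
}

A node for which eligible_as_ch returns 1 can perform the task of cluster head, re-checking the condition periodically as described above.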
Summary
This chapter gives implementation details of the two major subsystems which are
developed for this project. With the help of data flow diagram, it also specifies the logic of
implementation for the different modules that have been specified during the system design.
Along with these, this chapter also highlights some of the important features of the platform and
language used for implementation purpose.
Chapter 7
TESTING
System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that all the system elements have been properly integrated and perform their allocated functions. The testing process is carried out to make sure that the product does exactly what it is supposed to do. Testing is the final verification and validation activity within the organization itself, and in the testing stage the goals below are pursued through unit, integration, system, validation and output testing.
7.1 Unit Testing
Here each module that comprises the overall system is tested individually. Unit testing focuses verification efforts on the smallest unit of software design in each module; it is also known as "module testing". The modules of the system are tested separately, and this testing is carried out during programming itself. Unit testing exercises specific paths in a module's control structure to ensure complete coverage and maximum error detection. This test focuses on each module individually, ensuring that it functions properly as a unit; hence the name unit testing. In this step each module was found to work satisfactorily with regard to the expected output from the module. The individual blocks of code are checked for their working here, so that when functional testing is carried out, the units that are part of these functionalities have already been tested.
The following unit testing table shows the functions that were tested at the time of
programming. The first column lists all the functions which were tested and the second column
gives the description of the tests done.
7.2 Integration Testing
After successful completion of unit testing or module testing, the individual functions are integrated into classes. Then the integration of the different classes takes place, and finally the front end is integrated with the back end.
At the start of coding phase only the functions required in different parts of the program
are developed. Each of the functions is coded and tested independently. After verification
of correctness of the different functions, they are integrated into their respective classes.
Here the different classes are tested independently for their functionality. After
verification of correctness of outputs after testing each class, they are integrated together
and tested again.
7.3 System Testing
After the software has been integrated, a set of high-order tests is conducted. All the modules are combined and tested as a whole. Here correction is difficult, because the isolation of errors is complicated by the vast expanse of the entire program.
7.4 Validation Testing
At the culmination of integration testing, the software is completed and assembled as a package, and interfacing errors are uncovered and corrected. Validation testing can be defined in many ways; here, the testing validates that the software functions in a manner that can reasonably be expected by the customer.
Test case: Working of simrun
Input: Normal and sensor nodes sense events and forward the packets to the routers.
Expected output: The packet transfer and the sensor range should be displayed.
Result: Success

Test case: Working of plot graph
Input: The user runs the simulation and types ./plotgraph.sh.
Expected output: A graph of No. of Nodes v/s Energy Consumption is displayed, and a graph of No. of Nodes v/s Lifetime is displayed.
Result: Success
After performing validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output in the specified format. Output testing therefore involves first asking the users about the format required by them, and then testing the output generated or displayed by the system under consideration. The output format is considered in two ways: the format on screen and the printed format.
The system under consideration was tested for user acceptance by staying constantly in touch with the prospective system users at the time of development, and by making changes wherever required.
White box testing (clear box testing, glass box testing, transparent box testing or structural testing) uses an internal perspective of the system to design test cases based on internal structure. It requires programming skills to identify all paths through the software. The tester
structure. It requires programming skills to identify all paths through the software. The tester
chooses test case inputs to exercise paths through the code and determines the appropriate
outputs. While white box testing is applicable at the unit, integration and system levels of the
software testing process, it is typically applied to the unit. While it normally tests paths within a
unit, it can also test paths between units during integration, and between subsystems during a
system level test.
Though this method of test design can uncover an overwhelming number of test cases, it
might not detect unimplemented parts of the specification or missing requirements, but one can
be sure that all paths through the test object are executed. Using white box testing we can derive
test cases that:
Guarantee that all independent paths within a module have been exercised at least once.
Exercise all logical decisions on their true and false sides.
Execute all loops at their boundaries and within their operational bounds.
Exercise internal data structures to ensure their validity.
Black box testing focuses on the functional requirements of the software. It is also known
as functional testing. It is a software testing technique whereby the internal workings of the item
being tested are not known by the tester. For example, in a black box test on software design the
tester only knows the inputs and what the expected outcomes should be and not how the program
arrives at those outputs.
The tester never examines the programming code and does not need any further knowledge of the program other than its specifications. Black box testing enables us to derive sets of input conditions that will fully exercise all the functional requirements of a program. It is not an alternative to the white box technique; rather, it is a complementary approach that is likely to uncover a different class of errors, in the following categories:
Interface errors.
Performance errors.
Errors in objects.
Advantages
The test is unbiased as the designer and the tester are independent of each other.
The tester does not need knowledge of any specific programming languages.
The test is done from the point of view of the user, not the designer.
Test cases can be designed as soon as the specifications are complete.
Preparation of test data plays a vital role in the system testing. After preparing the test
data, the system under study is tested using that test data. While testing the system by using test
data, errors are again uncovered and corrected by using above testing steps and corrections are
also noted for future use.
Live test data are those that are actually extracted from organization files. After a system
is partially constructed, programmers or analysts often ask users to suggest data for test from
their normal activities. Then, the systems person uses this data as a way to partially test the
system. In other instances, programmers or analysts extract a set of live data from the files that
they have entered themselves.
It is difficult to obtain live data in sufficient amounts to conduct extensive testing, and although such realistic data show how the system will perform for typical processing requirements, they generally will not test all the combinations or formats that can enter the system, even assuming that the live data entered are in fact typical. This bias toward typical values therefore does not provide a true system test and in fact ignores the cases most likely to cause system failure.
Artificial test data are created solely for test purposes, since they can be generated to test all combinations of formats and values. In other words, artificial data, which can quickly be prepared by a data-generating utility program in the information systems department, make possible the testing of all logic and control paths through the program.
The most effective test programs use artificial test data generated by persons other than
those who wrote the programs. Often, an independent team of testers formulates a testing plan,
using the systems specifications.
Quality assurance consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that the product quality is meeting its goals. It is an "umbrella activity" that is applied throughout the engineering process. Software quality assurance encompasses:
Analysis, design, coding and testing methods and tools.
Formal technical reviews that are applied during each software engineering step.
A multitiered testing strategy.
Control of software documentation and the changes made to it.
A procedure to ensure compliance with software development standards.
Measurement and reporting mechanisms.
An important objective of quality assurance is to track the software quality and assess the impact of methodological and procedural changes on improved software quality. The factors that affect quality can be categorized into two broad groups; among them are the generic risks, such as product size risk, business impact risks, customer-related risks, process risks, technology risks, development environment risks and security risks. This project was developed by considering all of these important issues.
Summary
This chapter deals with several kinds of testing, such as unit testing, which is a method of testing the correct functioning of a particular module of the source code and is also referred to as module testing. It also gives brief details of integration testing, in which individual software modules are combined and tested as a group. Other than these two main kinds of testing, several other types, such as validation testing, output testing, user acceptance testing and the preparation of test data, are also discussed. The chapter also focuses on assuring the quality of the software.
CHAPTER-8
INTERPRETATION OF RESULTS
The following snapshots show the results or outputs that are obtained after the step-by-step execution of all the modules of the system.
Summary
This chapter gives a brief interpretation of the expected and obtained results when each and every module is executed in its proper sequence.
Chapter 9
CONCLUSION
9.1 Conclusion
The proposed TEEN clustering technique is very effective because it considers all five parameters at a time: Communication Range, Hop Count, Battery Power, Relative Velocity and Fairness, all of which play an important role in choosing the appropriate nodes as Cluster Heads. The Super Cluster Head elected is the normal node (other than the Cluster Heads) having the maximum battery power left after the Cluster Head election process, and it can detect misbehaving Cluster Heads. The efficient node that passes the criteria becomes the Cluster Head. The proposed clustering technique increases the network lifetime, as all the nodes acting as Cluster Heads dissipate energy at a very low rate.
REFERENCES
[1] LI XingGuo, WANG JunFeng, BAI LinLin, "LEACH Protocol and its Improved Algorithm in Wireless Sensor Networks", 2016.
[3] Arati Manjeshwar and Dharma P. Agrawal, "TEEN: A Routing Protocol for Enhanced Efficiency in Wireless Sensor Networks".
[4] Avita Katal, Mohammad Wazid, Roshan Singh Sachan, R. H. Goudar, "Two Way Intrusion Detection and Prevention Techniques for JF Periodic Dropping Attack", IEEE International Conference on Communication and Signal Processing (ICCSP), 2013.
[6] Abdel Rahman Hussein, Sufian Yousef, Samir Al-Khayatt, Omar S. Arabeyyat, "An Efficient Weighted Distributed Clustering Algorithm for Mobile Ad hoc Networks", IEEE International Conference on Computer Engineering and Systems (ICCES), 2010.
[7] Xufeng Ma, "A K-hop Cluster Maintaining Mechanism for Mobile Ad Hoc Networks", IEEE 7th International Conference on Wireless Communications, Networking and Mobile Computing (WiCOM), 2011.
[10] Vijeesh T., Niranjan Kumar Ray, Ashok Kumar Turuk, "SPC: The Selective Prioritized Clustering Algorithm for MANETs", IEEE International Symposium on Electronic System Design, 2010.
[11] Hussein, A., Yousef, S., Al-Khayatt, S., Arabeyyat, O. S., "An Efficient Weighted Distributed Clustering Algorithm for Mobile Ad hoc Networks", International Conference on Computer Engineering and Systems (ICCES), pp. 221-228, 2010.
[13] Jing An, Chang Li, Bin Li, "An Improved Weight Based Clustering Algorithm in Mobile Ad Hoc Networks", IEEE Youth Conference on Information, Computing and Telecommunication (YC-ICT), 2009.
[15] Zhaowen Xing, Le Gruenwald, K. K. Phang, "A Robust Clustering Algorithm for Mobile Ad Hoc Networks", Handbook of Research on Next Generation Networks and Ubiquitous Computing, 2008.