Selenium Test Automation Framework in On-Line Based Application
Revathi K.1, Prof. V. Janani2
1PG Scholar, 2Assistant Professor, Department of Computer Science and Engineering,
Adhiyamaan College of Engineering, Hosur (India).
ABSTRACT
Software testing is an important method for finding bugs and improving software quality. At present, many applications are built as web applications that run in a web browser. Web applications are becoming so complex that they are difficult to test manually: manual testing increases time and cost, and accurate results cannot be guaranteed. These problems can be avoided by using test automation. The objective of this paper is to automate the testing of web applications using the software testing tool Selenium. Selenium is a set of testing tools that runs across multiple browsers and operating systems and supports many programming languages. It encloses almost all the features needed to automate tests and is used to create test cases for web applications.
Keywords: Test Automation, Selenium IDE, Selenium RC, Web Driver, Selenium Grid
I INTRODUCTION
Software testing is an important part of the software process; its purpose is to find errors and improve quality. The process of testing software in a well-planned and efficient way is known as the software testing life cycle (STLC). It can be divided into a number of phases: planning, analysis, design, test execution, cycles, test closure, and final testing. Both manual and automated testing help in testing a software application. In manual testing, the application is tested by hand, without any software tool; it takes more time, execution speed is slow, and errors occur easily. Manual testing has distinct phases such as unit testing, integration testing, system testing, and user acceptance testing. Automation testing, also known as test automation, increases test coverage, improves accuracy, and saves time. Test automation uses testing tools to reduce manual work; it is more reliable and faster than manual testing and needs fewer resources per task. Automated tests can be reused on different versions of an application, and more tests can be run in less time. Many automation testing tools are available in the market, and several factors must be considered when selecting one: ease of integration, compatibility with the design and implementation of the application, performance of the tests, and maintenance. All of these are offered by the automation testing tool Selenium. Selenium is not a single tool but a set of software tools: the IDE, Remote Control, WebDriver, and Grid. It is a tremendous software testing suite for web applications.
1.1 Advantages
The code for the same object can be reused across different applications, so duplication of work is minimized at every level.
The scripts are of uniform quality, since they make use of the same code.
In automated testing the same steps are executed identically every time, whereas a manual tester can make many mistakes.
Simple modifications to the application can be easily handled in the code.
Test cases are stored and maintained, so if an error occurs it can easily be checked.
II TECHNICAL OVERVIEW
Web testing focuses entirely on web-based applications. Its aims are to reduce the effort required to test web applications, minimize cost, increase software quality, and enable reuse of test cases. Several kinds of web testing are available: functional testing, compatibility testing, load testing, regression testing, and performance testing.
Functional Testing: a software testing process used to test the functionality of the application. It checks the validations on all fields and verifies page redirection, business logic, and calculations.
Compatibility Testing: web-based applications are tested on different browsers to make sure the application is reliable on all of them, and that it is compatible with different devices such as mobiles, notebooks, etc.
Performance Testing: the performance of the web application is tested. This is the process of determining the speed, scalability, and reliability of the software. Load and stress tests are two types of performance test.
Load Testing: testing with the goal of determining how well the product handles competition for system resources, which may take the form of network traffic, CPU utilization, or memory allocation; for example, multiple applications running concurrently on one computer.
Stress Testing: this test is conducted to evaluate the behaviour of the system when it is pushed to and beyond its breaking point, and to determine whether the system manages to recover gracefully.
It is difficult to test complex, highly interactive websites and web applications manually. This difficulty can be avoided by using web test automation, which provides the ability to reuse tests across multiple browsers, platforms, and programming languages.
Features:
It saves time.
It minimizes cost.
It improves accuracy.
Less effort yields more results.
Selenium was created by Jason Huggins, working at ThoughtWorks, in 2004. He was working on a web application that required frequent testing. Realizing that repetitive manual testing was becoming more and more inefficient, he created a JavaScript program that would automatically control the browser's actions and named it JavaScriptTestRunner. He later made it open source, and it was subsequently renamed Selenium Core. Selenium is an open-source browser automation tool, commonly used for testing web applications. It automates control of a web browser so that repetitive tasks can be automated. Selenium is a set of testing tools that works with multiple browsers and operating systems, with tests written in different languages such as C#, Java, Ruby, and Python.
Fig. 1: The Selenium suite (Selenium IDE, Selenium RC, Selenium WebDriver, Selenium Grid)
3.1 Selenium IDE
Selenium IDE (Integrated Development Environment) is a tool for developing Selenium test cases. It was originally created by Shinya Kasatani, under the name Selenium Recorder, and donated to the Selenium project in 2006. It is implemented as a Firefox plug-in that allows recording, editing, and debugging of Selenium test cases. When Firefox starts, the recording option is automatically turned on; this option allows the user to record any action done inside the web page. In Selenium IDE, scripts are recorded in Selenese, a special test scripting language consisting of a set of Selenium commands, used to test web applications. Selenese commands are classified as Actions (e.g., click, type), Accessors (e.g., storeTitle), and Assertions (e.g., assertText).
Fig. 2: Architecture of Selenium IDE - the browser runs Selenium IDE (which includes Selenium Core) and communicates over HTTP with the web server hosting the web application under test.
3.1.2 Features
It offers simple and easy record and playback.
Selenium IDE supports intelligent field selection options such as IDs, XPath, and names.
It saves test scripts in several formats, such as Selenese and Ruby.
It allows customization through plug-ins.
It provides options for adding different asserts to scripts.
It allows setting breakpoints and debugging scripts.
It also supports auto-completion of commands.
3.1.3 Limitations
Selenium IDE works only in Mozilla Firefox and cannot be used with other browsers.
There is no option to verify images.
It can execute only scripts created in Selenese.
It is difficult to check complex test cases involving dynamic content.
3.2 Selenium RC
To overcome the limitations of Selenium IDE, ThoughtWorks engineer Paul Hammant decided to create a server that would act as an HTTP proxy to "trick" the browser into believing that Selenium Core and the web application being tested come from the same domain. This system became known as Selenium Remote Control (RC). It makes it possible to run tests inside any JavaScript-compatible browser using a wide range of programming languages. Selenium RC has two components:
Selenium Server: uses Selenium Core and the browser's built-in JavaScript interpreter to process Selenese commands (such as click and type) and report back results.
Selenium Client Libraries: the APIs that the programming languages use to communicate with the Selenium Server.
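To make the client-library idea concrete, the sketch below uses the legacy Python RC client from the Selenium 1.x/2.x era. It assumes a Selenium Server is already running on localhost:4444; the URL and element locators are hypothetical placeholders, not taken from the paper.

```python
# Legacy Selenium RC sketch (Python client). Assumes a Selenium Server
# on localhost:4444; the site and locators below are made up.
from selenium import selenium  # RC client shipped with Selenium 1.x/2.x

sel = selenium("localhost", 4444, "*firefox", "http://www.example.com/")
sel.start()                          # asks the server to launch a browser session
sel.open("/login")                   # navigate relative to the base URL
sel.type("id=username", "testuser")  # Selenese 'type' command via the client API
sel.click("id=submit")               # Selenese 'click' command
sel.wait_for_page_to_load("30000")   # wait up to 30 s for the next page
print(sel.get_title())               # an Accessor: read the page title
sel.stop()                           # end the browser session
```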
3.2.1 Architecture of RC
Fig. 3: Architecture of Selenium RC - the Selenium Server acts as an HTTP proxy between the browser and the web application under test.
3.2.2 Features
It has faster execution speed than the IDE.
It supports cross-browser and cross-platform testing.
It has a mature and complete API.
3.2.3 Limitations
Selenium RC is slow.
It has limited support for drag and drop of objects.
It struggles when running concurrent tests.
It does not allow simultaneous tests across different operating systems and browsers.
3.3 Selenium WebDriver
Simon Stewart created WebDriver in 2006, when browsers and web applications were becoming more powerful and more restrictive towards JavaScript programs like Selenium Core. It was the first cross-platform testing framework that could control the browser itself, and it provides a simpler, more concise programming interface. It supports dynamic web pages, where elements of a page may change without the page itself being reloaded. WebDriver is the name of the key interface against which tests should be written in Java. Selenium WebDriver is the successor to Selenium RC. It does not need a special server to execute tests: it starts a browser instance directly and controls it. Selenium Grid can be used with WebDriver to execute tests on remote systems.
Selenium WebDriver makes direct calls to the browser using each browser's native support for automation. Because there are many browsers and many programming languages, a common specification is needed, and this is provided by the WebDriver API; each browser has to implement this API through its remote WebDriver. The language bindings send commands to the common driver API; on the other end, a driver listens for those commands, executes them in the browser via the remote WebDriver, and returns the result/response through the API to the calling code. The WebDriver API communicates using a common wire protocol, the JSON Wire Protocol, a RESTful web service using JSON over HTTP.
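As a concrete illustration, a minimal WebDriver test in the Selenium 2/3-era Python bindings might look like the sketch below; the bindings translate each call into JSON Wire Protocol requests behind the scenes. The URL and locators are hypothetical.

```python
# Minimal Selenium WebDriver sketch (Python bindings, Selenium 2/3 era).
# The URL and element locators are placeholders, not from the paper.
from selenium import webdriver

driver = webdriver.Firefox()                  # starts a browser instance directly
driver.get("http://www.example.com/login")    # command sent over the wire protocol
field = driver.find_element_by_name("username")
field.send_keys("testuser")
driver.find_element_by_id("submit").click()
assert "Dashboard" in driver.title            # simple verification step
driver.quit()                                 # closes the browser, ends the session
```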
3.3.2 Features
It allows you to execute tests against different browsers.
You can use a programming language of your own choice to create test scripts.
Its architecture is simpler than that of Selenium RC.
It drives the browser directly, using the browser's own engine to control it.
It supports the headless HtmlUnit browser.
3.3.3 Limitations
Selenium WebDriver cannot immediately support new browsers, because it operates at the operating-system level and different browsers communicate differently with the operating system.
Built-in commands are not available.
3.4 Selenium Grid
Selenium Grid allows tests to be run on different machines against different browsers in parallel: multiple tests run at the same time against different machines running different browsers and operating systems. Selenium Grid supports distributed test execution. It is a server that allows tests to use web-browser instances running on remote machines. One server acts as the hub; tests contact the hub to obtain access to browser instances. The hub keeps a list of servers that provide access to browser instances and lets tests use these instances. Tests can thus run in parallel on multiple machines, and different browser versions can be managed centrally. Selenium Grid has two versions: the older Grid 1 and the newer Grid 2.
Selenium Grid uses a hub-node concept: tests are launched on a single machine called the hub, but execution is carried out by different machines called nodes.
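A hedged sketch of how a test targets a grid rather than a local browser, using the Python bindings' remote driver: the hub URL is an assumption, and the hub forwards the session to a matching node.

```python
# Sketch: pointing a WebDriver test at a Selenium Grid hub (Python bindings).
# The hub address is hypothetical; nodes registered with the hub run the browsers.
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

driver = webdriver.Remote(
    command_executor="http://grid-hub.example.com:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.FIREFOX,  # hub picks a matching node
)
driver.get("http://www.example.com/")
print(driver.title)    # the command ran on whichever node the hub selected
driver.quit()
```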
3.4.1 Features
Test runs can be scaled out by distributing tests over a number of machines, with executions done in parallel.
It manages multiple environments from a central point, making it easy to run tests against a large combination of browsers and operating systems.
Maintenance time for the grid is reduced, for instance by allowing you to implement regular hooks to leverage a virtual infrastructure.
3.4.2 Limitations
Selenium Grid by itself cannot run multiple tests in parallel; a framework such as TestNG or JUnit is used to feed multiple tests to the grid.
IV RELATED WORK
[2] This paper described the drawbacks of the Selenium IDE tool, identified the problems, and implemented solutions in Selenium. If a testing team uses Selenium IDE alone as its test automation tool, functionality cannot be tested on all browsers; for that the authors combined Selenium IDE with WebDriver, because Selenium WebDriver is compatible with all browsers. Integrating Selenium IDE and WebDriver into a single package allows tests recorded in the IDE to be run as WebDriver tests from a single UI. The paper describes recording and running test scripts from Selenium IDE in other browsers such as IE and Chrome, which is possible only with WebDriver, and which also extends browser coverage.
[5] This paper discusses the Selenium framework. Selenium is a web automation framework that supports different platforms and frameworks according to the programming language used by the programmer. Selenium is a set of testing tools, each with different features useful to developers. Selenium IDE is used for record and playback, and developers who are new to development can also use it easily for their work. Developers who are proficient in a programming language can use Selenium RC or WebDriver, and Selenium Grid can be used to run Selenium tests in parallel. By choosing the proper framework, one can save time as well as money and improve software quality.
[7] This paper introduced a new automation framework integrating Selenium and JMeter. The framework shares test steps and test data, which makes it convenient to switch between various types of testing for a web application. It supports multiple browsers and operating systems. Using this framework, one can efficiently improve the extensibility and reusability of automated tests.
V CONCLUSION
This paper has discussed the Selenium framework. The main benefit of using automated tools is to avoid manual effort. Selenium is a web-based automation framework that supports multiple platforms and programming languages; its features include record and playback and running tests in parallel. It can reduce testing time, it is free software, and it is easy for developers and programmers to use. A future enhancement would be for Selenium to test Windows-based applications as well. Today, Selenium is the best available tool for testing web applications.
REFERENCES
[1] Y. C. Kulkarni, "Automating the web applications using the selenium RC", ASM's International Journal of Ongoing Research in Management and IT, e-ISSN 2320-0065, 2011.
[2] Nidhika Uppal, Vinay Chopra, "Design and Implementation in Selenium IDE with Web Driver", International Journal of Computer Applications (0975-8887), Vol. 46, No. 12, May 2012.
[3] Rigzin Angmo, Monika Sharma, "Selenium Tool: A Web based Automation Testing Framework", International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS), 2014.
[4] Sherry Singla, Harpreet Kaur, "Selenium keyword automation testing framework", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 4, 2014.
[5] Monika Sharma, Rigzin Angmo, "Web based Automation Testing and Tools", International Journal of Computer Science and Information Technology (IJCSIT), Vol. 5(1), 2014, ISSN 0975-9646, pp. 908-912.
[6] Chandraprabha, Ajeet Kumar, Sajal Saxena, "Systematic Study of a Web Testing Tool: Selenium", International Journal of Advance Research in Science and Engineering (IJARSE), Vol. 2, Issue 11, November 2013.
[7] Fei Wang, Wencai Du, "A Test Automation Framework Based on WEB", Proc. IEEE 11th International Conference on Computer and Information Science (ICIS 2012), IEEE Press, 2012, pp. 683-687, doi:10.1109/ICIS.2012.21.
[8] Rasul Niyazimbetov, "Web application testing solutions with selenium".
[9] C. McMahon, "History of a Large Test Automation Project Using Selenium", 2009.
Purushothaman B
PG Scholar, Department of Computer Science and Engineering
Adhiyamaan College of Engineering, Hosur, Tamilnadu (India)
ABSTRACT
Data mining is the extraction of information from huge sets of data. Clustering is the process of grouping or aggregating data items. Sentence clustering is used in a variety of applications, such as classifying and categorizing documents, automatic summary generation, and organizing documents. In comparison with hard clustering methods, in which a pattern belongs to a single cluster, fuzzy clustering algorithms allow patterns to belong to all clusters with differing degrees of membership. This is important in domains such as sentence clustering, since a sentence is likely to be related to more than one theme or topic present within a document or set of documents. The size of the clusters may vary from one cluster to another. Traditional (hard) clustering algorithms have problems in clustering such input data: instability of clusters, complexity, and sensitivity. To overcome these drawbacks, this paper proposes the Fuzzy Relational Eigenvector Centrality-based Clustering Algorithm (FRECCA) for the clustering of sentences. The content of text documents has a hierarchical structure, and many terms in a document are related to more than one theme; hence FRECCA is a useful algorithm for natural-language documents.
Keywords - Data mining, FRECCA, Fuzzy clustering, Hard clustering, Sentence level clustering.
1. INTRODUCTION
Data mining is the practice of automatically searching large stores of data to discover patterns [5] and trends that go beyond simple analysis. Data mining is also known as knowledge discovery in data. The extraction of hidden predictive information from large databases is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Data mining is accomplished by building models: a model performs some action on data based on some algorithm, and the notion of automatic discovery refers to the execution of data mining models. Data mining techniques can be divided into supervised and unsupervised; clustering is one of the unsupervised techniques. Clustering is the process of grouping a set of objects in such a way that objects in the same group are more similar to each other than to those in other clusters. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Clustering has become an increasingly important topic with the explosion of information available via the Internet, and it is an important tool in text mining and knowledge discovery. Representing data by fewer clusters necessarily loses certain fine details but achieves simplification: many data objects are represented by few clusters, and hence the data is modelled by its clusters.
There are several algorithms available for clustering. Each algorithm clusters or groups similar data objects in a useful way; the task involves dividing the data into groups called clusters. Applications of clustering include bioinformatics, business modelling, image processing, etc. In general, the text mining process focuses on the statistical study of terms or phrases, which helps us to understand the significance of a word within a document; clustering can take place even when two words do not have similar meanings. Clustering can be considered the most important unsupervised learning framework: a cluster is a group of data items that are "similar" to one another and "dissimilar" to the objects belonging to other clusters. Sentence clustering is used in a variety of text mining applications, and the output of clustering should be related to the query specified by the user.
Similarity between sentences [2] is measured in terms of some distance function, such as Euclidean distance or Manhattan distance. The choice of measure is based on the requirements that determine cluster size, and it shapes the success of a clustering algorithm in the specific application domain. Current sentence clustering methods usually represent sentences as a term-document matrix and run a clustering algorithm on it. Although these methods can group the documents satisfactorily, it is still hard for people to capture the meanings of the documents, since there is no satisfactory interpretation for each document cluster.
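To make the distance and similarity notions concrete, the following sketch (an illustration, not the method of any cited paper) compares two sentences as term-frequency vectors under Euclidean distance and cosine similarity; the vocabulary and sentences are made up.

```python
# Sketch: comparing two sentences as term-frequency vectors over a shared
# vocabulary; both the vocabulary and the sentences are invented examples.
import numpy as np

def term_vector(sentence, vocabulary):
    """Count how often each vocabulary term occurs in the sentence."""
    words = sentence.lower().split()
    return np.array([words.count(term) for term in vocabulary], dtype=float)

vocab = ["data", "mining", "clusters", "sentences"]
a = term_vector("Data mining groups data into clusters", vocab)
b = term_vector("Clustering sentences is a data mining task", vocab)

euclidean = np.linalg.norm(a - b)                         # distance: lower = closer
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))  # similarity: higher = closer
print(euclidean, cosine)
```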
A similarity measure, generally defined on the attributes of a data set, has a major impact on clustering results and must be selected according to the clustering needs. Moreover, not every similarity measure can be used with every clustering algorithm. For instance, similarity metrics that are only defined between data objects cannot be used with algorithms that define pseudo-points in the data space during the clustering process, such as k-means [13]. Nowadays, a large amount of data is available in the form of text, and it is very difficult for human beings to find useful and significant data manually. This problem can be solved with the help of text summarization algorithms.
Text summarization is the process of condensing an input text file into a shorter version while preserving its overall content and meaning, using natural language processing. The raw, unlabeled data from a large dataset can be classified initially in an unsupervised fashion by clustering: the assignment of a set of observations [9] into clusters so that observations in the same cluster can in some sense be treated as similar. The outcome of the clustering process and the efficiency of its domain application are generally determined by the algorithms used, and different algorithms have been applied to this problem. One proposal describes a system consisting of two steps. The first step implements the phases of natural language processing: splitting, tokenization, part-of-speech tagging, and parsing. The second step implements the Expectation Maximization (EM) clustering algorithm to find the similarity between sentences. This is important in domains such as sentence clustering, since a sentence is likely to be related to more than one theme or topic present within a document or set of documents.
Hierarchical clustering outputs a hierarchy, a structure that is more informative than the unstructured set of clusters returned by flat clustering. Hierarchical clustering does not require us to pre-specify the number of clusters, and most hierarchical algorithms that have been used in Information Retrieval (IR) are deterministic. These advantages come at the cost of lower efficiency.
k-means [13] is one of the partitioning-based clustering methods. Partitioning methods generally result in a set of M clusters, each object belonging to one cluster. Each cluster may be represented by a centroid or cluster representative, some sort of summary description of all the objects contained in the cluster. In the k-means case, a cluster is represented by its centroid, which is the mean (usually a weighted average) of the points within the cluster. Each point is assigned to the cluster with the closest centroid, and the number of clusters, K, must be specified in advance. This obviously does not work well with categorical attributes, but it makes good geometric and statistical sense for numerical attributes. k-means [13] has problems when clusters have differing sizes, densities, or non-globular shapes, and when the data contains outliers.
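For illustration, a minimal k-means sketch follows. It shows the assign/update iteration and why K must be fixed in advance; it is a bare-bones teaching version (no convergence test or empty-cluster handling), with made-up data.

```python
# Minimal k-means sketch: alternate between assigning points to the nearest
# centroid and moving centroids to the mean of their points. K is fixed in
# advance and results depend on the random initialization.
import numpy as np

def kmeans(points, k, iterations=100, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iterations):
        # Assignment step: nearest centroid per point
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: centroid = mean of its assigned points
        centroids = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

data = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.1], [4.8, 5.3]])
print(kmeans(data, k=2))
```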
When medoids [10] are selected, clusters are defined as subsets of points close to their respective medoids, and the objective function is defined as the averaged distance (or another dissimilarity measure) between a point and its medoid. A k-medoid [10] is the most appropriate data point within a cluster to represent it. Representation by k-medoids has two advantages: first, it imposes no limitations on attribute types; second, the choice of medoids is dictated by the location of a predominant fraction of points inside a cluster, and therefore it is less sensitive to the presence of outliers. Like k-means, methods based on k-medoids [10] are highly sensitive to the initial (random) selection of centroids, and in practice it is often necessary to run the algorithm several times from different initializations. To overcome these problems, Affinity Propagation simultaneously considers all data points as potential centroids (or exemplars). Treating each data point as a node in a network, Affinity Propagation recursively transmits real-valued messages along the edges of the network until a good set of exemplars (and corresponding clusters) emerges. These messages are updated using simple formulas that minimize an energy function based on a probability model.
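A short sketch of Affinity Propagation on a precomputed similarity matrix, using scikit-learn as a stand-in implementation; the toy matrix below is invented for illustration only.

```python
# Sketch: Affinity Propagation over a precomputed similarity matrix,
# via scikit-learn; the 4x4 matrix is a made-up example.
import numpy as np
from sklearn.cluster import AffinityPropagation

# Symmetric pairwise similarities between 4 data points (higher = more alike)
S = np.array([[1.0, 0.9, 0.1, 0.2],
              [0.9, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.8],
              [0.2, 0.1, 0.8, 1.0]])

ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(S)
print(labels, ap.cluster_centers_indices_)  # cluster labels and exemplar indices
```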
The vector space model has been successful in IR because it adequately captures much of the semantic [14] content of document-level text. Documents that are semantically related are likely to contain many words in common and are thus found to be similar according to popular vector space measures such as cosine similarity [7], which are based on word co-occurrence. However, while the assumption that (semantic) similarity can be measured in terms of word co-occurrence may be valid at the document level, it does not hold for small text fragments such as sentences, since two sentences may be semantically related despite having few, if any, words in common. To solve this problem, a number of sentence similarity measures have recently been proposed. Rather than representing sentences in a common vector space, these measures define sentence similarity as some function of inter-sentence word-to-word similarities, where these similarities are in turn usually derived either from distributional information [14] from some corpora (corpus-based measures) or from semantic information represented in external sources such as WordNet (knowledge-based measures).
Ruspini [11] introduced the fuzzy c-partition p = (p1, p2, ..., pc), extending crisp partitions by allowing the pi(x) to be functions taking values in the interval [0, 1] such that p1(x) + ... + pc(x) = 1; he was the first to apply fuzzy sets to clustering. In fuzzy object-data clustering, on the other hand, the problem of classifying N objects into C types is typically solved by first finding C prototypes that best represent the characteristics of as many groups of objects, and then building a cluster around each prototype by assigning each object a membership degree that is higher the greater its similarity to the prototype. A prototype may be a cluster centre, the most centrally located [12] object in a cluster, a probability distribution, etc., depending on the type of data available and the specific algorithm adopted. It should be noted that knowledge of the prototypes, which are a condensed representation of the key characteristics of the corresponding clusters, is also an important factor. The distance calculations for stable clusters happen in an iterative process: as the number of iterations increases, the cluster centres are progressively refined. In the FCM algorithm, a data item may belong to more than one cluster with different degrees of membership. Several popular robust clustering methods [16] have been analyzed, establishing the connection between fuzzy set [15] theory and robust statistics. The rough-based fuzzy c-means [3] algorithm extends FCM to arbitrary (non-Euclidean) dissimilarity data; the resulting fuzzy relational data clustering algorithm can handle datasets containing outliers and can deal with all kinds of relational data. Parameters such as the fuzzification degree greatly affect the performance of FCM [12].
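To make the notion of membership degrees concrete, here is a sketch of the standard FCM membership update (not the rough or relational variants discussed above); the points and centres are made up.

```python
# Sketch of the standard fuzzy c-means membership update: each point gets a
# degree of membership in every cluster, controlled by the fuzzifier m > 1.
import numpy as np

def fcm_memberships(points, centers, m=2.0):
    # d[i, j] = distance from point i to cluster centre j
    d = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
    d = np.fmax(d, 1e-12)                      # avoid division by zero
    # u[i, j] = 1 / sum_k (d[i, j] / d[i, k]) ** (2 / (m - 1))
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)             # each row sums to 1 across clusters

pts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
centres = np.array([[0.5, 0.0], [5.0, 5.0]])
print(fcm_memberships(pts, centres))           # soft assignments, not 0/1 labels
```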
IV PROPOSED ALGORITHM
In this work, we analyze how one can take advantage of efficiency and stability of clusters when the data to be clustered are available in the form of similarity [2] relationships between pairs of objects. More precisely, we propose a new fuzzy relational clustering algorithm [1], based on the existing fuzzy c-means (FCM) algorithm, which does not require any restriction on the relation matrix. FRECCA outputs clusters grouped from the text data present in the given documents. In the FRECCA algorithm, the PageRank algorithm is used as the similarity [2] measure.
4.1 PageRank
We describe the application of the algorithm to data sets and show that it performs better than other fuzzy clustering algorithms. The proposed algorithm uses PageRank [1] together with the Gaussian mixture model approach. PageRank serves as a graph centrality measure: it determines the importance of a particular node within a graph, and the importance of a node is used as a measure of centrality. The algorithm assigns a numerical score (from 0 to 1), known as the PageRank score, to every node in the graph. Each sentence is represented by a node on a graph, and edges are weighted with values representing the similarity [4] between sentences. PageRank can be used within the Expectation-Maximization algorithm to optimize the parameter values and to form the clusters. A graph representation of the data objects is used along with the PageRank algorithm, operating within Expectation-Maximization, a general-purpose framework for learning from incomplete data. Each sentence in a document is represented by a node in the directed graph, and the edge weights indicate object similarity [4].
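The following sketch illustrates PageRank by power iteration over a weighted sentence-similarity graph. It shows only the centrality computation, not the full FRECCA algorithm; the similarity matrix and damping factor are illustrative assumptions.

```python
# Sketch: PageRank by power iteration over a weighted similarity graph
# (nodes = sentences, edge weights = similarities). Toy values throughout.
import numpy as np

def pagerank(W, damping=0.85, iterations=100):
    n = len(W)
    # Column-normalize so each node distributes its weight to its neighbours
    T = W / W.sum(axis=0, keepdims=True)
    r = np.full(n, 1.0 / n)
    for _ in range(iterations):
        r = (1 - damping) / n + damping * T @ r
    return r / r.sum()               # scores in (0, 1), summing to 1

sim = np.array([[0.0, 0.8, 0.1],    # invented symmetric sentence similarities
                [0.8, 0.0, 0.3],
                [0.1, 0.3, 0.0]])
print(pagerank(sim))                # higher score = more central sentence
```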
4.2 EM Algorithm
EM is an unsupervised method that does not need any training phase; it tries to find the parameters of the probability distribution that maximize the likelihood of the data, so its main role is parameter estimation. It is an iterative method, mainly used to find the maximum-likelihood parameters of a model. The E-step computes cluster membership probabilities; the probabilities calculated in the E-step are then used to re-estimate the parameters in the M-step.
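As a sketch of the E-step quantities described here, a ready-made EM implementation such as scikit-learn's GaussianMixture can expose the soft membership probabilities; the one-dimensional data below is invented.

```python
# Sketch: EM-fitted Gaussian mixture yielding soft cluster-membership
# probabilities (the E-step quantities); toy 1-D data, two components.
import numpy as np
from sklearn.mixture import GaussianMixture

X = np.array([[0.1], [0.3], [0.2], [5.0], [5.2], [4.9]])
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

probs = gmm.predict_proba(X)   # membership probabilities per cluster
print(np.round(probs, 3))      # each row sums to 1 across the two clusters
```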
V RESULTS
Figure 1 shows the purity comparison and Figure 2 the entropy comparison of the various clustering algorithms.
VI CONCLUSION
This paper has reviewed numerous clustering algorithms. All of them require the number of clusters to be assumed in advance, so an algorithm that finds an optimal solution is very important. Analysis of the various methods makes clear that each has its own advantages and disadvantages, and the quality of the clusters depends on the particular application. When object relationships have no metric characteristics, ARCA is a better choice. Among the different fuzzy clustering techniques, the FRECCA algorithm is superior to the others: it is able to overcome the problems in sentence-level clustering. However, when time is a critical factor, fuzzy-based approaches cannot be adopted. Good clustering of text requires effective feature selection and a proper choice of algorithm for the task at hand. It is observed from the above analysis that fuzzy-based clustering approaches provide significant performance and better results.
REFERENCES
[1] Andrew Skabar and Khaled Abdalgader, "Clustering Sentence-Level Text Using a Novel Fuzzy Relational Clustering Algorithm", IEEE Trans. Knowledge and Data Eng., vol. 25, no. 1, 2013.
[2] Chen Y., Garcia E. K., Gupta M. R., Rahimi A. and Cazzanti L., "Similarity-Based Classification: Concepts and Algorithms", J. Machine Learning Research, vol. 10, pp. 747-776, 2009.
[3] Corsini P., Lazzerini B. and Marcelloni F., "A New Fuzzy Relational Clustering Algorithm Based on the Fuzzy C-Means Algorithm", Soft Computing, vol. 9, pp. 439-447, 2005.
[4] Hatzivassiloglou V., Klavans J. L., Holcombe M. L., Barzilay R., Kan M. and McKeown K. R., "SIMFINDER: A Flexible Clustering Tool for Summarization", Proc. NAACL Workshop on Automatic Summarization, pp. 41-49, 2001.
[5] Hofmann T. and Buhmann J. M., "Pairwise Data Clustering by Deterministic Annealing", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 1, pp. 1-14, 1997.
[6] Lee D. and Seung H., "Algorithms for Non-Negative Matrix Factorization", Advances in Neural Information Processing Systems, vol. 13, pp. 556-562, 2001.
[7] Li Y., McLean D., Bandar Z. A., O'Shea J. D. and Crockett K., "Sentence Similarity Based on Semantic Nets and Corpus Statistics", IEEE Trans. Knowledge and Data Eng., vol. 18, no. 8, pp. 1138-1150, 2006.
[8] Luxburg U. V., "A Tutorial on Spectral Clustering", Statistics and Computing, vol. 17, no. 4, pp. 395-416, 2007.
[9] MacQueen J. B., "Some Methods for Classification and Analysis of Multivariate Observations", Proc. Fifth Berkeley Symp. on Math. Statistics and Probability, pp. 281-297, 1967.
[10] Noor Kamal Kaur, Usvir Kaur and Dheerendra Singh, "K-Medoid Clustering Algorithm - A Review", IJCAT, vol. 1, issue 1, ISSN 2349-1841, 2014.
[11] Ruspini E. H., "A New Approach to Clustering", Information and Control, vol. 15, pp. 22-32, 1969.
[12] Subhagata Chattopadhyay, "A Comparative Study of Fuzzy C-Means Algorithm and Entropy-Based Fuzzy Clustering Algorithms", Computing and Informatics, vol. 30, pp. 701-720, 2011.
[13] Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman and Angela Y. Wu, "An Efficient k-Means Clustering Algorithm: Analysis and Implementation", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 7, 2002.
[14] Wang D., Li T., Zhu S. and Ding C., "Multi-Document Summarization via Sentence-Level Semantic Analysis and Symmetric Matrix Factorization", Proc. 31st Ann. Int'l ACM SIGIR Conf. on Research and Development in Information Retrieval, pp. 307-314, 2008.
[15] Yang M.-S., "A Survey of Fuzzy Clustering", Math. and Computer Modelling, vol. 18, no. 11, pp. 1-16, 1993.
[16] Yu S. X. and Shi J., "Multiclass Spectral Clustering", Proc. IEEE Ninth Int'l Conf. on Computer Vision, pp. 11-17, 2003.
C. Y. Chen1, T. C. Chen2
1Professor, Department of Civil and Water Resources Engineering, National Chiayi University (Taiwan R.O.C.)
2Jhongpu Fire Branch, Chiayi County Fire Bureau Third Corps (Taiwan R.O.C.)
ABSTRACT
The sedimentary hazard emergency rescue operation in Taihe Village, Meishan Township, Chiayi County during Typhoon Morakot (August 7-9, 2009) is reviewed and studied. A questionnaire was designed and conducted to survey the firefighters who had participated in sedimentary disaster prevention education, disaster preparedness, and emergency rescue and response prior to and during Typhoon Morakot, and the survey results were integrated into the case study. The analysis shows that the problems encountered during sedimentary hazard rescues during Typhoon Morakot included: (1) the magnitude of the hazard exceeded the capacity of the firefighters; (2) a shortage of trained professionals; (3) a shortage of rescue equipment; (4) destroyed roads in mountainous areas; (5) communications cut off during severe weather conditions; (6) recurrence of hazards; (7) difficulty rescuing buried persons; (8) inefficient administrative processes; and (9) inadequate integration of the different rescue teams.
I. INTRODUCTION
There were 2,620 fatal landslides recorded worldwide during 2004-2010, causing a total of 32,322 recorded fatalities (Petley, 2012). Landslides are recognized as a major global hazard. Climate change, by increasing the potential for extreme rainfall, may be a contributing factor to landslides (Nadim et al., 2006). In the face of multiple, compound hazards, including floods, landslides and debris flows, and breached natural dams (Chen et al., 2011), relief workers can feel helpless. The huge landslide in Southern Leyte, Philippines in 2006 left 139 dead and 980 missing (Orense and Sapuay, 2006; Evans et al., 2007; Catane et al., 2007, 2008; Lagmay et al., 2008). Only 20 people were rescued, two of whom eventually died, in the secondary hazard induced by the breaching of the landslide dam (Catane et al., 2007).
Taiwan is regularly struck by powerful typhoons. Since 2000, typhoons Toraji (2001), Nari (2001), Mindulle (2004), and Morakot (2009) have caused numerous sedimentary hazards such as landslides and debris flows. Between 2006 and 2010 there were 305 sedimentary hazards involving rescue operations in Taiwan, injuring 70 people and causing 619 deaths (National Fire Agency, 2014). Thirty-six of these hazards, including 30 caused by Typhoon Morakot, occurred in Chiayi County in south-central Taiwan.
Typhoon Morakot made landfall on Taiwan from 7 to 9 August 2009, bringing the highest rainfall recorded in the past 50 years to southern and south-central Taiwan (Chien and Kuo, 2011). The massive rainfall caused immense damage to the natural and human landscape. A total of 9,333 landslides (2.26 km2) were identified by change-detection analysis of satellite images (Tsai et al., 2010). Numerous sedimentary hazards occurred, resulting in injuries, destroyed roads, and broken bridges. At Shaolin Village in Kaohsiung County in southern Taiwan, a giant landslide dam breach caused 398 deaths and buried at least 169 buildings during Typhoon Morakot (CEOC, 2014; Tsou et al., 2011); no one was rescued from the area buried by the debris masses. Information on landslide locations and magnitudes was urgently needed during the emergency rescue (Zhang et al., 2010). In such large-scale landslide disasters, speed, accuracy, and the appropriate allocation of resources are crucial (Lagmay et al., 2008). "A systematic and technically informed approach to search and rescue missions in large-scale landslide disaster, and the formulation of better disaster management policies are needed" (Lagmay et al., 2008).
Given the urgent need for deeper assessment of disaster procedures and processes, this study reviews the sedimentary hazard emergency rescue procedure in the Taihe area of Chiayi County in south-central Taiwan during Typhoon Morakot in 2009. The purpose of this study is to learn what problems were encountered during the rescue process and to offer recommendations for ameliorating them. In doing so, it makes contributions both to the literature on disaster response and to the practical needs of disaster responders.
Fig. 1: Flowchart of the emergency rescue operation procedure for sedimentary hazards (revised after Chiayi County Fire Bureau, http://w3.cycfd.gov.tw/). The procedure runs from acceptance of the case, notification and dispatch of the fire branch, and field reconnaissance, through identification of the buried position and the choice of excavation methodology (with engineering support where the evaluation requires it), to excavation, the unearthing of victims, and the checking of vital signs, ending either with return of the victim to the family or with medical treatment.
2. Identification of possible locations of debris masses and buried people: the most direct method is to query eyewitnesses or persons in the area; it is also possible to survey victims' locations using field topography and scientific instruments.
3. Deciding how to excavate: excavation by manual labor is required at the beginning, to prevent heavy machinery from further injuring victims. If 72 hours of manual labor pass without a rescue, the consent of the victim's family is required before heavy machinery can be used to excavate. Engineering support is required to prevent further collapse of debris masses, and excavation downward from the top of collapsed buildings is prohibited to prevent further hazard. The field commander is responsible for deciding on the type of excavation and for determining whether to re-survey the possible locations of victims.
4. Emergency medical treatment: victims who display no vital signs are returned to their families. If a victim has vital signs, or the family demands resuscitation attempts, the victim is taken to hospital immediately. The rescue action finishes at this stage.
Repeated hazards (collapse of buried buildings or further slope slip) can occur under continued rainfall, during heavy-machine excavation, or as a result of earthquake aftershocks. Firefighters are generally not professionally trained to judge secondary hazards and the necessary engineering measures. According to the "Operating Procedures for Prevention of Repeated Debris Flow Disasters and for Recovery and Reconstruction" issued by the Soil and Water Conservation Bureau (SWCB) in Taiwan [15], SWCB workers and professional engineers should determine the possibility of repeated hazards at the scene. Emergency engineering measures are usually necessary to avoid repeated hazards. The procedure for identification and emergency engineering is:
1. Field investigation
(1) Hazard identification: contact the village head to confirm the hazards and their magnitude.
(2) Professional investigation: the SWCB is the department responsible for debris flow hazard prevention and mitigation in Taiwan. The SWCB, local government, and professional engineers investigate the hazard magnitude and the endangered area, and give professional suggestions for the safety of rescuers.
(3) GPS orientation: GPS is used to locate debris flows, landslides, and hazard spots.
(4) Items for investigation: site location, type of disaster, affected areas, injured people, damage, magnitude (volume of debris masses), estimated losses, and suggested engineering measures.
2. Emergency engineering measures and response
(1) Emergency soil and water conservation engineering measures: the SWCB and local government perform emergency soil and water conservation measures, such as strengthening buildings, stabilizing slopes, and other engineering disaster prevention measures, for endangered sedimentary hazard areas.
(2) Rush repairs: local government makes urgent repairs to blocked roads, damaged bridges, communications, and community facilities.
(3) Temporary protection measures: local government should install temporary protection measures in severely damaged areas and erect warning signals.
(4) Delineation of restricted areas: local government should demarcate restricted areas to cordon off hazard areas and prohibit people from entering, or ask them to leave.
(5) Checking the drainage system: the SWCB and local government should check and maintain the detention, deposition, and retention ponds in their jurisdiction so that drainage remains unobstructed, to prevent floods induced by riverbed deposition.
(6) Riverbed debris dredging: the responsible departments should dredge the debris in riverbeds to avoid further hazards.
(7) Building reinforcement: the responsible departments should dismantle broken structures and strengthen temporary support and protection measures for damaged buildings.
(8) Spoil and debris disposal: the responsible departments should establish safe deposition areas and exchange or recycling systems for material from landslides, road blockages, and dredging, to prevent further hazards.
Fig. 2: Site location and landslides after Typhoon Morakot in the study area
Typhoon Morakot brought torrential rainfall of up to 1,812 mm to the Taihe area during 7-10 August 2009, equivalent to over 50% of the annual rainfall. Three potential debris flow creeks in the area had been identified by the SWCB (http://www.swcb.gov.tw). Morakot induced 243 landslides and debris flows in the area, with a total area of 2.75 km2, as interpreted from SPOT 5 images (Chen and Huang, 2013). Villages in the mountainous areas of Chiayi County were isolated, with communications cut off, by the numerous landslides and debris flows caused by Morakot, and emergency rescue was initially unavailable. At 7:05 a.m. on 9 August the Chiayi County Fire Bureau received a report that Taihe and nearby villages had been cut off by landslides and that four people had been buried by debris. The Fire Bureau put together a rescue team on 9 August, first driving and then walking towards the scene, but the team was unable to reach it; in the end the firefighters walked to the village along a historical foot track.
The severe weather also prevented helicopters from bringing food and other necessities to the hazard scenes until 10 August, when three firefighters were flown to the village. A shovel loader was airlifted in by helicopter so that the blocked road could be rushed through urgently. A rough emergency road finally enabled a team of seven firefighters, one sniffer dog and two trainees, 30 soldiers, and two big excavators to reach the area on the 26th. Excavation started on the morning of the 26th, and by the following afternoon the bodies of the four victims had been unearthed (Fig. 3).
Fig. 3: Sedimentary hazard emergency rescue in Taihe Village: (a) excavators removing debris; (b) a sniffer dog searching for buried victims; (c) a shovel loader airlifted by helicopter so the blocked road could be rushed through urgently; (d) relief workers excavating debris masses using hand tools (August 2009, Chiayi County Fire Bureau)
The rescue action lasted 20 days, from 9 to 28 August 2009. The Chiayi County Fire Bureau sent 223 personnel and 40 vehicles, and support organizations assisted with a further 189 personnel and 65 vehicles (Chen and Chen, 2011). In addition, there were four evacuation shelters and six public buildings available as temporary shelters in the village, but most of the shelters were damaged or cut off by landslides (Fig. 4).
B. Training-related factor
2. The official training of firefighters is sufficient to enable them to cope with sedimentary hazards. (60.5% selected disagree/strongly disagree)
C. Equipment-related factor
1. The vehicles and equipment of the fire department are sufficient to conduct sedimentary hazard emergency rescue actions. (60.9% selected disagree/strongly disagree)
2. Firefighters can maintain vehicles and detection equipment and stock enough fuel for emergency rescue in advance. (77.4% selected agree/strongly agree)
3. The fire department can quickly supply and repair vehicles, equipment, and food for emergency rescue. (50.7% selected disagree/strongly disagree)
4. Emergency rescue equipment needed at the scene can be rapidly supplied. (38.3% selected disagree/strongly disagree)
D. Available resources
1. The fire department has established a detailed list of residents and manpower, and established emergency channels of communication. (73.3% selected agree/strongly agree)
2. The fire department has established a detailed list of resources in the jurisdiction to requisition for emergency rescue. (83.1% selected agree/strongly agree)
3. The fire department has established a detailed list of resources that can be contacted immediately and dispatched to support emergency rescue. (45.9% selected disagree/strongly disagree)
4. Volunteer firefighters, volunteers, the community Neighborhood Rescue Team, and the financial resources of people in the jurisdiction can help in sedimentary hazard rescue. (73.3% selected agree/strongly agree)
5. The fire department and non-governmental organizations have signed contracts to supply heavy machines for emergency rescue use. (64.2% selected agree/strongly agree)
E. Coordination of work
1. The fire department should cooperate with other departments to perform disaster prevention education, promotion, and evacuation drills periodically. (82.7% selected agree/strongly agree)
2. The fire department can cooperate well with other rescue teams (military, NGOs) and coordinate the division of rescue work. (45.7% selected disagree/strongly disagree)
3. The procedure for requesting support is varied and time-consuming, and dispatch requires a top official to integrate it. (75.3% selected agree/strongly agree)
F. Other factors
1. The fire department should pre-plan emergency rescue and response strategies for potential sedimentary hazards. (89.7% selected agree/strongly agree)
2. The greater magnitude of sedimentary hazards compared to the ordinary duties of the fire department means that sedimentary hazard rescue is not mature in all aspects. (95.9% selected agree/strongly agree)
3. Information on sedimentary hazards is unclear and needs verification, causing difficulties for firefighters in rescues. (89.3% selected agree/strongly agree)
4. The food and water firefighters need can be supplied quickly during sedimentary hazard emergency rescue. (38.3% selected agree/strongly agree)
Interviewee responses were on a 1-5 scale ranging from "strongly agree" to "strongly disagree". Table 2 shows the aptitude trend analysis of the questionnaire. The designed middle value was 3 (no opinion); mean values lower than 3 represent agreement with a statement, while mean values greater than 3 represent disagreement.
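As a minimal sketch of this trend computation (with invented responses, since the raw data are not reproduced here):

```python
# Sketch of the aptitude-trend computation: responses are coded
# 1 (strongly agree) to 5 (strongly disagree); the values are made up.
import numpy as np

responses = np.array([1, 2, 2, 3, 1, 2, 4, 2])  # hypothetical answers to one item
mean = responses.mean()
trend = "agreement" if mean < 3 else "disagreement" if mean > 3 else "no opinion"
print(f"mean = {mean:.2f} -> overall {trend}")
```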
VI. CONCLUSION
The questionnaire results show that the relief workers lacked sedimentary hazard emergency rescue training and equipment. The extreme rainfall in the mountainous areas interrupted the rescue action by cutting off communications. Information was unclear, increasing the risks for relief workers, and the buried victims were difficult to rescue under these straitened circumstances. Requests for support from other agencies were time-consuming, and horizontal communication across the various units was hard to integrate during the rescue action.
The suggestions proposed for addressing these difficulties include promoting professional training for relief workers, strengthening emergency rescue equipment, and establishing rescue resources prior to disasters. Evaluation using advanced equipment and the implementation of hazard inspection, reporting, and notification mechanisms during rescues are also suggested. A disaster incident command system, promotion of the disaster response level of the EOC, and standardized military emergency rescue equipment and communications are needed as well. Finally, emergency rescue efficiency can be enhanced by workshops for sharing sedimentary hazard emergency rescue experience.
VII. ACKNOWLEDGEMENTS
This work was supported in part by the Ministry of Science and Technology in Taiwan under contract No. NSC 102-2221-E-415-008-MY3.
REFERENCES
[1] Petley, D., “Global patterns of loss of life from landslides”, Geology 40, 2012, 927-930.
[2] Nadim, F., Kjekstad, O., Peduzzi, P., Herold, C. and Jaedicke, C., “Global landslide and avalanche hotspots”,
Landslides 3(2), 2006, 159-173.
[3] Chen, Y. S., Kuo, Y. S., Lai, W. C., Tsai, Y. J., Lee, S. P., Chen, K. T. and Shieh, C. L., “Reflection of
typhoon Morakot-the challenge of compound disaster simulation”, Journal of Mountain Science 8(4), 2011,
571-581.
[4] Orense, R. P. and Sapuay, S. E., “Preliminary report on 17 February 2006 Leyte Philippines landslide”, Soils
and Foundations 45, 2006, 685-693.
[5] Evans, S. G., Guthrie, R. H., Roberts, N. J. and Bishop, N. F., “The disastrous 17 February 2006
rockslide-debris avalanche on Leyte Island, Philippines: a catastrophic landslide in tropical mountain
terrain”, Natural Hazards and Earth System Sciences 7, 2007, 89-101.
[6] Catane, S. G., Cabria, H. B., Tomarong, C. P., Saturay, R. M., Zarco M. A. H. and Pioquinto, W. C.,
“Catastrophic rockslide-debris avalanche at St. Bernard, Southern Leyte, Philippines”, Landslides 4(1), 2007,
85-90.
[7] Catane, S. G., Cabria, H. B., Zarco, M. A. H., Saturay, R. M. and Mirasol-Robert, A. A., "The 17 February 2006 Guinsaugon rock slide-debris avalanche, Southern Leyte, Philippines: deposit characteristics and failure mechanism", Bulletin of Engineering Geology and the Environment 67(3), 2008, 305-320.
[8] Lagmay, A. M., Tengonciang, A. M., Rodolfo, R. S., Soria, J. L., Baliatan, E. G., Paguican, E. R., Ong, J. B.,
Lapus, M. R., Fernandez, D. F., Quimba, Z. P. and Uichanco, C. L., “Science guides search and rescue after
the 2006 Philippine landslide”, Disasters 32(3), 2008, 416-433.
[9] National Fire Agency, Ministry of the Interior, available at: http://www.nfa.gov.tw/main/ (accessed 10 July
2014).
[10] Chien, F. C. and Kuo, H. C., “On the extreme rainfall of Typhoon Morakot 2009”, Journal of Geophysical
Research 116, 2011, D05104, doi:10.1029/2010JD015092.
[11] Tsai, F., Hwang, J. H., Chen, L. C. and Lin, T. H., “Post-disaster assessment of landslides in southern
Taiwan after 2009 Typhoon Morakot using remote sensing and spatial analysis”, Natural Hazards and Earth
System Science 10, 2010, 2179-2190.
[12] CEOC, Central Emergency Operation Center, available at: http://210.69.173.37/eoc/ (accessed 6 July
2014).
[13] Tsou, C. Y., Feng, Z. Y. and Chigira, M., "Catastrophic landslide induced by Typhoon Morakot, Shiaolin, Taiwan", Geomorphology 127(3-4), 2011, 166-178.
[14] Zhang, W., Lin, J., Peng, J. and Lu, Q., “Estimating Wenchuan Earthquake induced landslides based on
remote sensing”, International Journal of Remote Sensing 31(13), 2010, 3495-3508.
[15] SWCB, Soil and Water Conservation Bureau, available at: http://www.swcb.gov.tw (accessed 20 August
2014).
[16] Chen, C. Y. and Huang, W. L., “Land use change and landslide characteristics analysis for
community-based disaster mitigation”, Environmental Monitoring and Assessment 185(5), 2013,
4125-4139.
[17] Chen, T. C. and Chen, C. Y., “Sediment disaster emergent measures study- An example of Thaihe Village,
Meishan Township in Chiayi County during Typhoon Morakot”, 2011 Conference for Disaster Management
in Taiwan, 17-18 November, Taipei, Taiwan (in Chinese).
182 | P a g e
International conference on Science, Technology and Management ICSTM-2015
[18] Pohl C. and Van Genderen, J. L., “Multisensor image fusion in remote sensing: concepts, methods and
applications”, International Journal of Remote Sensing 19(5), 1998, 823-854.
[19] Queensland Government, State Disaster Management Group, available at:
http://www.emergency.qld.gov.au/emq/css/landslides.asp (accessed 12 December 2014).
[20] The Landslide Blog-AGU Blogosphere, A huge landslide on Freeway No.3 in Taiwan, available at:
http://www.landslideblog.org/2010/04/huge-landslide-in-freeway-No.3-in-taiwan.html (accessed 12 July 12
2014).
[21] Taiwan News, 2010, Yilan County: More body parts from Suhua Highway accident victims found,
available at: http://www.taiwannews.com.tw/etn/news_content.php?id=1414401 (accessed 12 July 2014).
[22] Hall, R. A. and Cular, A., “Civil-military relations in disaster rescue and relief activities: response to the
mudslide in southern Leyte, Philippines, Scientia Militaria”, South African Journal of Military Studies 38(2),
2010, 62-85.
Biographical Notes
Dr. C. Y. Chen is a Professor in the Department of Civil and Water Resources Engineering, National Chiayi University, Chiayi City, Taiwan, R.O.C.
Mr. T. C. Chen is a Leader in the Jhongpu Fire Branch, Chiayi County Fire Bureau Third Corps, Chiayi County, Taiwan, R.O.C.
ABSTRACT
As technology scales, the physical size of the Integrated Circuit (IC) is reduced by shrinking the transistors and the Input/Output (I/O) pin count. Minimizing the I/O pins also reduces the internal interconnect delays. Integrated circuits play a major role in many applications because they let faster devices communicate with slower devices, and let devices communicate with each other over a serial data bus without data loss. Hence, to allow serial communication and to reduce the interconnect delays, the I2C (Inter-Integrated Circuit) protocol is considered. The I2C controller provides support for a communication link between integrated circuits and memory units on a board. I2C is a two-wire, bidirectional serial bus that provides effective data communication between two devices. The I2C bus supports many devices, and each device is recognized by its unique address. Secure Digital is the most widely used portable memory standard, offering an ultra-compact and rugged architecture, a simple interface, high security, low power consumption and reliable operation. This module was designed in Verilog HDL and synthesized using Xilinx ISE Design Suite 13.2.
Keywords: Inter-Integrated Circuit, Finite State Machine, Serial Data, Serial Clock, FPGA, Verilog.
I. INTRODUCTION
The physical size and power requirement of ICs have been reduced over the years, mainly because more transistors can be integrated into a smaller area and fewer interconnection wires are needed between ICs. The actual circuitry of an IC is much smaller than its packaging, yet the package requires a larger area because of the interconnection wires between ICs. These wiring requirements can be reduced by using the Inter-Integrated Circuit (I2C) bus. This communication follows a dedicated protocol, the I2C protocol. The I2C bus physically consists of two active wires and a ground connection. The two active wires are Serial Clock (SCL) and Serial Data (SDA); both are bidirectional, half-duplex lines that carry information between the devices connected to the bus. Each device is recognized by a unique address, whether it is a microcontroller, LCD driver, memory or keyboard interface, and can operate as either a transmitter or a receiver, depending on its function. Devices can easily be added to or removed from the I2C bus, which is very useful for
low-maintenance control applications in embedded systems. There are many reasons for using a serial interface design; important applications of serial communication include sensors communicating with a personal computer. Many common embedded system peripherals, such as analog-to-digital and digital-to-analog converters, LCDs, and temperature sensors, support serial interfaces. The objective of this work is to design and analyse a data transmitter and receiver using the I2C bus protocol in the Verilog hardware description language, synthesized with the Xilinx software.
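Purely as an illustration of the transaction sequence described above (the actual design of this paper is in Verilog HDL, not reproduced here), the following Python sketch models a master writing one byte to an addressed slave; the class names and register-file slave model are hypothetical.

# Behavioral sketch of an I2C master write against a simple register-file
# slave model. START/STOP conditions are noted in comments rather than
# modeled as SDA/SCL waveforms.

class I2CSlaveModel:
    def __init__(self, address):
        self.address = address          # 7-bit slave address
        self.memory = {}                # register map: offset -> data byte

class I2CBusModel:
    def __init__(self, slaves):
        self.slaves = {s.address: s for s in slaves}

    def master_write(self, address, offset, data):
        # START: SDA falls while SCL is high (implicit here).
        slave = self.slaves.get(address)
        if slave is None:
            return False                # address byte not acknowledged (NACK)
        # Address byte = 7-bit address + R/W bit (0 = write); the slave ACKs,
        # then each transferred data byte is acknowledged in turn.
        slave.memory[offset] = data & 0xFF
        # STOP: SDA rises while SCL is high (implicit here).
        return True

slave = I2CSlaveModel(address=0x50)     # hypothetical 7-bit slave address
bus = I2CBusModel([slave])
# Write one byte to memory address 00010110, as in the simulation described later.
print(bus.master_write(0x50, offset=0b00010110, data=0xA5))   # True: write ACKed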
I2C is a serial communication standard protocol, designed primarily for simple but efficient integrated circuit (IC) control.
There are only two situations where the SDA line may change while SCL is high: the START and STOP conditions.
The above simulation output gives a detailed picture of data being written into the memory address 00010110, after which the slave responds with an acknowledge indicating that the data has been received.
The design summary indicated in Table 1 shows that global clock utilization is 8% and bonded input/output block utilization is 28%; hence the system interconnect will be minimized by reducing the flip-flops.
VI. CONCLUSION
Designing the I2C controller in Verilog HDL simplifies the design process. The result shows successful storage of the data transmitted by the master, and the leakage power dissipation obtained from synthesis is 0.052 W. The result also shows that minimal resources are utilized in designing the I2C master, as only 2% of slices, 1% of flip-flops and 2% of LUTs are used. The logic synthesis tool will optimize the circuit in area and timing for the new technology. The design of an I2C master controller has immense future applications, as the number of devices connected to a system is only going to increase, so there is always a need for a system that supports multiple protocols. The drawback of the designed I2C controller is that its bonded I/O utilization is higher than that of the existing design. Future work includes dumping the Verilog code onto an FPGA to realize the exact hardware of the circuit, and verifying the I2C bus using a SystemVerilog-based open verification methodology.
ABSTRACT
Optimized local Ternary Patterns a new model for texture analysis, already many texture model has been
introduce in few years, but more simple and efficient method is Local Binary Pattern (LBP). LBP has some
problem like feature vector generation and to handle the challenges like gray scale variation, illumination
variation, rotation variation and noise. Optimal Local Ternary Pattern(OLTP) introduce for feature vector
generation. The proposed approach LTP extended from LBP. LBP and LTP still have a challenge in noise, so
new method has been introduce to reduce the noise namely NRLBP and ENRLBP to capture line patterns both
are more resistant to noise compared with LBP, LTP and many other variants. Already the experiment result
also refered in proposed texture model improves the classification accuracy and speed the classification
process.
Keywords: Center-Symmetric Local Ternary Pattern, Extended Noise-Resistant Local Binary Pattern, Fuzzy Local Binary Pattern, Local Binary Pattern, Local Ternary Pattern, Noise-Resistant Local Binary Pattern and Optimal Local Ternary Pattern.
I INTRODUCTION
Image processing takes an image as input and extracts useful information from the digital image. Image segmentation, image compression and image correspondence are examples of image analysis. Feature extraction is a sub-process of image analysis that extracts features such as color, texture and shape from a digital image. Texture is defined as a widely variable structure composed of a large number of more or less similar elements or patterns. Textures take different shapes, and a single model is not adequate for every variety of texture. Texture analysis methods fall into several categories, including statistical methods and signal processing methods.
The LBP operator transforms an image into an array or image of integer labels describing the micropatterns formed by each pixel and its immediate neighbours [2]. LBP encodes the sign of the pixel-to-pixel differences in a neighbourhood into a binary code. The histogram of such codes in an image block can be used in texture classification [2], dynamic texture recognition, facial analysis, human detection and many other tasks. LBP is less sensitive to illumination variation, and by extracting the histogram of micropatterns in a patch, location information is preserved.
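As a minimal sketch of this encoding (assuming the common 3x3 neighbourhood and a clockwise bit ordering, which the text does not fix), in Python:

def lbp_code(patch):
    # Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    # and read the resulting signs as an 8-bit binary code.
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, p in enumerate(neighbours):
        if p >= center:                 # sign of the pixel-to-pixel difference
            code |= 1 << bit
    return code

patch = [[52, 60, 61],
         [49, 55, 70],
         [40, 44, 58]]
print(lbp_code(patch))   # one integer label in [0, 255] for this micropattern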
II. EXISTING WORK
Even ten years after its introduction, there have been various extensions and modifications of the original LBP operator, because it is computationally simple and very robust to rotational and gray-scale variations. Some recent developments in medical imaging [1], moving object detection [7] and facial expression recognition [8] prove that the LBP texture model is still receiving a lot of attention. However, the LBP texture model is considered sensitive to noise, especially in uniform regions [9]. Moreover, it supports only a binary-level comparison for encoding, and is thereby inadequate to represent the local texture information.
In order to reduce the high dimensionality of LTP, the Center-Symmetric LTP was proposed in [2]: instead of the pixel difference between each neighboring pixel and the central pixel, the pixel difference between diagonal neighbors is calculated. In Local Adaptive Ternary Patterns [2] and extended LTP [2], instead of using a constant threshold, the threshold is calculated for each window using some local statistics, which makes them less sensitive to illumination variations. In the Local Triplet Pattern [2], equality is modeled as a separate state and a tri-state pattern is formulated; it can be viewed as a special case of LTP [2]. LTP and its variants partially solve the noise-sensitivity problem. However, they lack a mechanism to recover corrupted image patterns. Here a Noise-Resistant LBP (NRLBP) and an Extended Noise-Resistant LBP (ENRLBP) are proposed to address this issue.
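A minimal sketch of the LTP encoding step, assuming the same 3x3 neighbourhood ordering as above and a threshold t: differences within plus or minus t take the ternary state 0, and the ternary code is split into the usual positive and negative binary halves.

def ltp_codes(patch, t=10):
    # 3x3 LTP: three-state encoding of neighbour-minus-centre differences,
    # split into the conventional positive and negative LBP halves.
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    pos, neg = 0, 0
    for bit, p in enumerate(neighbours):
        d = p - center
        if d > t:            # clearly brighter: ternary state +1
            pos |= 1 << bit
        elif d < -t:         # clearly darker: ternary state -1
            neg |= 1 << bit
        # otherwise |d| <= t: ternary state 0, absent from both halves
    return pos, neg

patch = [[52, 60, 61],
         [49, 55, 70],
         [40, 44, 58]]
print(ltp_codes(patch, t=10))   # (positive-half code, negative-half code)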
III. PROPOSED WORK
The proposed NRLBP corrects noisy non-uniform patterns back to uniform patterns. Figure 1 shows the histograms of LBP, LTP and NRLBP. The threshold t is chosen as 10 for LTP and NRLBP. The LTP histogram is the concatenation of the positive LBP histogram and the negative LBP histogram. The last bin of each histogram corresponds to non-uniform patterns, and the other bins correspond to uniform patterns. Clearly, compared with the LBP and LTP histograms, non-uniform patterns in the NRLBP histogram are reduced significantly, from about 35% to only about 10%.
The proposed NRLBP corrects a large number of non-uniform patterns corrupted by noise back to uniform patterns. NRLBP differs from LBP and LTP in many other aspects besides its noise resistance and error-correction capability. The LBP code belongs to the NRLBP code set if it is uniform; the only exception is when the LBP code is non-uniform and is corrected back to a uniform code in NRLBP.
Compared with LTP, the treatment of the uncertain state is totally different in NRLBP. For LTP, all uncertain bits are set to 0 for the positive half and 1 for the negative half, whereas the proposed NRLBP does not rush to a decision on the uncertain bits: they are treated as if they could be encoded as 1 and/or 0, and their values are determined based on the other bits of the code. The number of histogram bins is also different: the LTP histogram consists of 118 bins, whereas the NRLBP histogram has only 59 bins. For implementation, a look-up table from the uncertain code to the feature vector of the NRLBP histogram can be precomputed. Then, the feature vector of a local image patch can easily be obtained by summing the feature vectors of each pixel in the patch.
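A rough sketch of this idea, assuming the standard definition of a uniform code (at most two 0/1 transitions around the circular 8-bit pattern): small differences become uncertain bits that are tried both ways, and only uniform outcomes are kept. In practice, as noted above, a precomputed look-up table replaces this per-pixel enumeration.

from itertools import product

def transitions(code, nbits=8):
    # Number of 0/1 transitions around the circular binary code.
    return sum(((code >> i) & 1) != ((code >> ((i + 1) % nbits)) & 1)
               for i in range(nbits))

def nrlbp_codes(patch, t=10):
    # Small differences become uncertain bits (X); each X is tried as both
    # 0 and 1, and only the assignments giving uniform codes are kept.
    center = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    fixed, uncertain = 0, []
    for bit, p in enumerate(neighbours):
        d = p - center
        if d > t:
            fixed |= 1 << bit           # definitely 1
        elif -t <= d <= t:
            uncertain.append(bit)       # uncertain bit X
    codes = set()
    for assignment in product((0, 1), repeat=len(uncertain)):
        code = fixed
        for bit, v in zip(uncertain, assignment):
            code |= v << bit
        if transitions(code) <= 2:      # keep uniform codes only
            codes.add(code)
    return codes                        # empty set -> count as non-uniform

patch = [[52, 60, 61],
         [49, 55, 70],
         [40, 44, 58]]
print(sorted(nrlbp_codes(patch, t=10)))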
The following texture model, Optimized Local Ternary Patterns (OLTP), which is rotation invariant, gray-scale invariant, histogram equalization invariant and noise resistant, is proposed. The OLTP operator uses only an optimal set of patterns for describing local image texture. This newly proposed texture model uses a total of 24 unique optimal patterns for texture representation. All other patterns are termed "suboptimal" patterns and grouped under one label, 25.
Therefore the dimension of the pattern spectrum is reduced from 6561 (the 3^8 possible 8-digit ternary patterns) to 25, and with an optimal set of patterns at that. Among these 24 unique optimal patterns, 17 have a maximum of 2 transitions in their sub-patterns, with the remaining 3 and 4 patterns grouped by other criteria; the original work tabulates the pattern strings with details of their uniformity, level of optimality and whether they are optimal patterns. Selected texture images from the Brodatz album, and the corresponding pattern spectra of the optimal patterns obtained through the proposed OLTP texture model, are also shown there.
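As an illustrative sketch only (the exact "level of optimality" criterion of the paper is not reproduced here), counting digit transitions around a circular ternary pattern string and mapping each pattern into the 25-bin spectrum could look like this; the two-entry optimal list is a hypothetical stand-in for the paper's 24 patterns.

def ternary_transitions(pattern):
    # Count circular transitions between adjacent digits of an 8-digit
    # ternary pattern string (digits in {0, 1, 2}).
    n = len(pattern)
    return sum(pattern[i] != pattern[(i + 1) % n] for i in range(n))

def spectrum_label(pattern, optimal_patterns):
    # Map a ternary pattern to one of 25 bins: its own label if optimal,
    # otherwise the shared "suboptimal" label 25.
    if pattern in optimal_patterns:
        return optimal_patterns.index(pattern) + 1   # labels 1..24
    return 25

optimal = ["00000000", "11111111"]          # hypothetical stand-in entries
print(ternary_transitions("00011000"))      # 2 transitions
print(spectrum_label("00011000", optimal))  # 25: suboptimal under this stand-in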
IV. CONCLUSION
This study proposed a new spatial texture modeling approach called Optimized Local Ternary Patterns (OLTP). It also introduced a new concept called "Level of Optimality", which is very simple and computationally efficient, to select the optimal patterns among the uniform patterns. On the one hand, like the conventional LBP approach, the proposed OLTP method has the properties of rotation invariance and gray-scale invariance.
On the other hand, like LTP, it also has the ability to withstand noise. LBP is sensitive to noise: even a small amount of noise may change the LBP pattern significantly. LTP partially solves this problem by encoding small pixel differences into the same state. However, both LBP and LTP treat corrupted patterns as they are, and lack a mechanism to recover the underlying local image structures. As the small pixel difference is most vulnerable to noise, we encode it as an uncertain bit first, and then determine its value based on the other bits of the LBP code to form a code of the local image structure [2].
The proposed approaches show stronger noise resistance than other approaches. Gaussian noise and uniform noise of different levels were injected into the AR database for face recognition and the Outex-13 dataset for texture recognition. Compared with FLBP, the proposed approaches are much faster and achieve comparable or slightly better performance. They consistently achieve better performance than all other approaches.
Further, it was also experimentally proved that this newly proposed texture model is histogram equalization invariant. The quality of the proposed approach was validated with a large number of experiments to prove that OLTP is robust to gray-scale variation, rotation variation, histogram equalization and noise.
On one side, the proposed OLTP texture method gives better classification accuracy than the recently introduced LTP texture approach; on the other side, it uses only half the number of uniform patterns of the LTP method. It was experimentally proved in [1] that the optimal patterns of the proposed texture model OLTP are fundamental properties of textures and are the dominant patterns among the uniform patterns of the LTP model. Since the proposed OLTP is robust in every aspect, it can be a good replacement for both LBP and LTP. In future work,
the proposed texture model OLTP can be tested for image texture segmentation problems. The proposed
approach can also be checked for color texture images.
REFERENCES
[1] J. Ren, X. Jiang, and J. Yuan, "Noise-resistant local binary pattern with an embedded error-correction mechanism," IEEE Trans. Image Process., vol. 22, no. 10, pp. 4049-4060, Oct. 2013.
[2] M. Raja and V. Sadasivam, "Optimized local ternary patterns: A new texture model with set of optimal patterns for texture analysis," J. Comput. Sci., vol. 9, no. 1, pp. 1-14, 2013.
[3] T. Ahonen and M. Pietikainen, "Soft histograms for local binary patterns," Proceedings of the Finnish Signal Processing Symposium (FSPS '07), Oulu, Finland, 2007, pp. 1-4.
[4] P. Brodatz, Textures: A Photographic Album for Artists and Designers, 1st Edn., Dover Publications, New York, 1966, ISBN-10: 0486216691, pp. 112.
[5] J. M. Coggins and A. K. Jain, "A spatial filtering approach to texture analysis," Patt. Recog. Lett., 3: 195-203, 1985. DOI: 10.1016/0167-8655(85)90053-4
[6] J. G. Daugman, "Two-dimensional spectral analysis of cortical receptive field profiles," Vis. Res., 20: 847-856, 1980. DOI: 10.1016/0042-6989(80)90065-6
[7] K. S. Fu, Syntactic Pattern Recognition and Applications, 1st Edn., Prentice Hall, Englewood Cliffs, N.J., 1982, ISBN-10: 0138801207, pp. 596; M. M. Galloway, "Texture analysis using gray level run lengths," Comput. Graphics Image Proces., 4: 172-179, 1975. DOI: 10.1016/S0146-664X(75)80008-6
[8] F. Ahmed, E. Hossain, A. S. M. H. Bari and A. S. M. Shihavuddin, "Compound local binary pattern (CLBP) for robust facial expression recognition," Proceedings of the IEEE 12th International Symposium on Computational Intelligence and Informatics, Nov. 21-22, 2011, IEEE Xplore Press, Budapest, pp. 391-395. DOI: 10.1109/CINTI.2011.6108536
[9] R. M. Haralick, K. Shanmugam and I. Dinstein, "Textural features for image classification," IEEE Trans. Syst. Man Cybernetics, 3: 610-621, 1973. DOI: 10.1109/TSMC.1973.4309314
[10] D. J. Heeger and J. R. Bergen, "Pyramid-based texture analysis/synthesis," Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, Aug. 06-11, 1995, ACM Press, USA, pp. 229-238. DOI: 10.1145/218380.218446
[11] G. Madasamy Raja and V. Sadasivam, J. Comput. Sci., 9(1): 1-15, 2013.
Neharika Nigam1, Kirti Bansal2, Rajani Sharma3, Rajender Kumar Trivedi4
1,2 B.Tech (CSE) Student, Graphic Era University (India)
3 M.Tech (CSE), Graphic Era University (India)
4 MCA, Graphic Era University (India)
ABSTRACT
With the advent of social networking, especially Facebook, we are busy every day posting our details, adding and rejecting people as friends, and creating a so-called virtual social human network which shares ideas, feelings, statuses, pictures and so on. This paper presents a practical implementation of how to obtain the detailed status of a Facebook account: the total number of people in our network, our friends, their numbers gender-wise, their statuses, their IDs, the pictures shared, likes and unlikes, and the different reactions of people in our network to our posts can all be studied as a summary with the help of the analysis software R and its additional Rfacebook package.
General Terms: Facebook, Social Networking Sites, Rfacebook Package, RCurl, Fetching.
Keywords: Rtool, Graph API, Fqlquery, Datamining, Opinion Mining.
I INTRODUCTION
Social networking sites cast a vast influence on the life of the common man and mould him into a social man. They have emerged as a paramount and effectual means for people to get linked and to use these connections effectively. They provide a platform where shared views, interests and real-life connections coalesce into communities. These sites are a chunk of everyday life and have brought sweeping changes in communication among people of various age groups, especially Facebook. Facebook is a social networking site built around online communities, and it began as a craze. It contains a wide variety of data, which makes analysis a time-consuming process. To overcome this, we use the R language, an open-source software environment for statistical computation and graphics, along with FQL queries (queries for Facebook data) and the Graph API. The R language is widely used among data miners for data analysis, converting raw data into useful information. Opinion mining also plays a very important part in the data extraction, as it gives us a full review of the various sites prevailing on Facebook and also helps us elaborate an overall numeric rating of the sites. It aids us in deciding whether a site is good or bad and gives us an opportunity to express our opinions about the sites, i.e. what
improvements need to be made so that the sites can prove fruitful and be easily admired by the people.
II PROBLEM STATEMENT
Everything in this world has two sides, like a coin: one positive and one negative. Similarly, Facebook, which we use in our day-to-day life, has an adverse backlash on people's lives due to the vast data present or shared by people; it takes a lot of time to access. For instance, if a user wants to see all the existing pages of any site, he has to search for them in Facebook's search engine individually rather than retrieving all the pages simultaneously. Besides this, if you want to chart the difference between two sites through a simple Facebook login, it becomes a time-consuming process. All of this can be resolved by queries, which help us find things easily and save time.
To fetch data from Facebook, first log in to your Facebook account, open a new tab next to that Facebook page, and paste the following link: https://developers.facebook.com/tools/explorer?method=GET&path=me%3Ffields%3Did%2Cname%2Cfriends&version=v2.1. Click "Get Access Token" to obtain a token.
After this, we need to select either an FQL query or the Graph API and insert the query, which enables the user to fetch the data in a few seconds. In this way you can obtain the information about your friends.
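The paper performs this step inside R with the Rfacebook package; purely as an illustration, a roughly equivalent HTTP call in Python would look like the sketch below. The token value is a placeholder, and the field list mirrors the link above.

import requests

ACCESS_TOKEN = "PASTE-YOUR-TOKEN-HERE"   # placeholder: copied from the explorer

# Same query as in the link above: the logged-in user's id, name and friends,
# against version v2.1 of the Graph API.
resp = requests.get(
    "https://graph.facebook.com/v2.1/me",
    params={"fields": "id,name,friends", "access_token": ACCESS_TOKEN},
)
data = resp.json()
print(data.get("name"))      # account name
print(data.get("friends"))   # friends data, if the token permits it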
IV IMPLEMENTATION
Site       Like count    Page count
Flipkart   4000346       28
Amazon     25690818      30
Snapdeal   2708634       16
Myntra     2582581       16
Jabong     3247554       22
The above-illustrated data depicts each site name along with its like count. This observation tells us that Amazon is the most admired Facebook page and Flipkart is the least admired page on Facebook.
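The paper does this comparison in R; just as an illustration, ranking the like counts from the table above takes a few lines of Python:

# Like counts copied from the table above.
likes = {"Flipkart": 4000346, "Amazon": 25690818, "Snapdeal": 2708634,
         "Myntra": 2582581, "Jabong": 3247554}

# Sort sites from most-liked to least-liked.
for site, count in sorted(likes.items(), key=lambda kv: kv[1], reverse=True):
    print(site, count)        # Amazon ranks first on like count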
The goal is to emphasize an understanding of how R works. R is a free programming language and tool, widely used among data miners to provide a statistical view of various things. Here, we use it for analyzing the data; the toolchain includes two packages, Rfacebook and RCurl.
i) The RCurl package provides HTTP facilities, allowing us to download files from web servers, post forms, upload files, use binary content, handle redirects, perform password authentication, etc.
ii) The Rfacebook package provides access to the Graph API along with FQL queries within the R language. It includes a series of functions that allow R users to extract their private information, search for public Facebook pages and capture their data, and update their comments regarding a site.
Not only does this tell us about the status of sites, i.e. how many people share, like and comment, the total count, comment count, click count, etc., it also helps the user determine whether a site is useful or not. It enables users to give suggestions about a site and the improvements to be made, resulting in a good platform for users, and it helps in determining the mood or nature of a person: the liking, sharing or commenting on pictures indicates what kind of person the user is and what his mental state is.
VI CONCLUSION
The aim of this project is the classification of various Facebook pages. Nowadays, people are increasingly on Facebook, as it provides them a user-friendly platform to express their opinions about various sites and what changes need to be made. So, it is essential to devise a tool to retrieve the data more effectively. In this paper, for correctly analyzing the popularity of sites, the emotions of people, friend counts, etc., we use an approach that combines various tools including the R tool, the Graph API and FQL queries. This approach can be used together with the concept of opinion or data mining to validate the results. Our bag of opinions was the result of a deep analysis of various Facebook sites, their likes and their numbers of pages. This work can be further strengthened with the use of a Facebook query analyzer, which is the future scope of this work and can add further results to our objective.
ABSTRACT
Elder games provide an environment for coming together, leisure and exercise, refreshing the lives of the elderly population. Designing and developing computer game applications specifically for the elderly population could bring them happiness and at the same time keep them engaged in training or exercise, without getting bothered or bored. However, the development of games for the elderly requires a thoughtful design approach to make the games useful for them. Therefore, the significant issues related to the design and development of games for the elderly are studied and presented in this paper.
I INTRODUCTION
Researchers in [1], who conducted several focus groups with elderly people, found that more than 50% of
problems reported by participants in using technological tools related to usability, and could be solved by
improving the design (25%) or by providing training (28%). Input and output devices are particularly delicate,
because they involve an interaction with the sensory or perceptual system of the user, which undergoes several
changes with age that can hamper usability.
II USABILITY CHARACTERISTICS
The researchers in [1] considered usability as the possibility of having access to a product, and defined utility as the capability to provide the functionality the product possesses. They also identified five characteristics related to usability which are particularly important when speaking about older adults:
Learnability: how difficult it is to learn to use a device, and to understand and integrate its operating instructions. The time needed to complete a task correctly, and the results obtained in a certain amount of time, are possible measures of learnability.
Efficiency: the extent to which technological applications satisfy users' needs, avoiding loss of time, frustration and dissatisfaction. It can be measured by an experienced user's performance on a specific task.
Memorability: elderly users’ memorability of a device’s functioning is very important in order to avoid
frustration and loss of time. A simple measure of this characteristic can be obtained by considering the time
needed to perform a previously experienced task.
Errors: how easily a product can induce errors for elderly users and how easily it recovers from them.
203 | P a g e
International conference on Science, Technology and Management ICSTM-2015
Satisfaction: Users' attitude towards and adoption of technological applications could be influenced by the pleasure derived from their usage.
The above-mentioned points are to be considered essentially while designing games for elders.
III UNIVERSAL DESIGN
The intention of the principles of universal design is to ensure that a product suits all ages, different body sizes and functional abilities. Whenever a product is designed, the guidelines for universal design should be considered to fit different users' needs.
IV DESIGN TO EVERYONE
In addition to the universal design principles, seven principles of “design to everyone” have been developed by
North Carolina State University [2]. It includes:
Principle 1: Equitable use
Principle 2: Flexibility in use
Principle 3: Simple and intuitive use
Principle 4: Perceivable information
Principle 5: Tolerance for error
Principle 6: Low physical effort
Principle 7: Size and space for approach and use
A game designed for the elderly has to ensure that safety and security are taken care of whenever the user interacts with it. The designed product should not decrease the user's ability to make social contact, but rather increase it.
The design should be adjustable to elderly users. This is an important principle: elderly people and those suffering from dementia are a mixed group with different mental and physical capabilities, and their functional abilities are in constant change as well.
Regardless of the users' expertise, skills, comprehension of language or concentration, the product should be simple and easy to use. For people with reduced cognitive functions, the product should be usable with minimal or no learning time. The user group should feel familiar with the product, to avoid confusion and stress, and consequently feel motivated. [2]
All the important information should be placed within the field of vision of the user, using words that are understandable and simple. It is important to catch the elderly user's attention. This can be done by a combination of several elements; one example is the combination of suitable images, sounds and colors to help the elderly user complete a given task.
The design should ensure error reduction, quick usage and reliability. Even if errors do occur, the product should be designed to give positive feedback to the user, not warnings.
The product should not demand a high physical effort to be used effectively and comfortably. It should be usable in any naturally occupied position. This is considered an important principle, as elderly people experience difficulties coordinating their movements along with their reduced physical strength.
It is recommended that the designed product have an appropriate size and space to suit both approach and usage regardless of the user's mobility, body size or posture. Different hand sizes, handgrip abilities, and whether users are confined to bed, seated or standing upright should always be important considerations for usability [3].
Persuasive technology can be defined as a set of technologies that attempts to change people's attitudes and behavior through persuasion and social influence, but without making use of coercion or deception [4]. Those changes should be voluntarily accepted by the subjects. Persuasive technology has great potential to motivate and encourage old-aged people to change their sedentary lifestyle and become more physically active. Nevertheless, making them change is a complex process: appropriate persuasive methods should be used at the right time to persuade them to adopt healthier behaviors.
Provide information at opportune moments. Do not disturb users with annoying messages at inappropriate times.
In the project UP Health (Ubiquitously Persuasive Health Promotion) [9], the authors designed a notification
system focused on minimizing the possibility of interrupting users.
Use social influence.
There are several types of social influence that can promote behavior changes.
Social facilitation suggests that people get more involved while performing an activity when other people are
participating too or if they are being observed [10]. For example, some studies show that people exercise more
effectively when they do it with others [11]. In the project Jogging over a distance [12], jogging partners who
are not in the same location use an audio system to be in contact in order to socialize and to motivate each other.
Social Comparison is a theory that explains how individuals evaluate their own opinions and behaviors by
comparing themselves to other people [13].
Based on the findings of the research in [14], the following conclusions were posited. There exists a potential market for elderly use of video games. Game developers should take into account a strong preference for familiar content, a distaste for violent content, and a preference for educational or historical information. In order to better target this population, a high degree of instructional support must also be provided.
Another study [15] stated that although seniors are quite diverse in abilities and experience, older age is generally associated with a number of well-documented changes in sensory-perceptual processes, motor abilities, response speed and cognitive processes, all of which impose requirements on interfaces that are to be pleasurably used by the growing elderly population.
The game should be easy to use, with the possibility to adjust the game's controls, features and communication. There are many commonalities that can be incorporated as design implications for future elderly games.
It is well known that older people have difficulty coping with new technology such as computers, the internet, touch screens, modern interfaces and advanced mobile functions. Senior people are often reluctant to learn a new piece of technology. Therefore, it is quite logical to use familiar devices or tools such as the television, the remote control and, for some of them, the computer [16]. Today, with the advance of innovations such as Kinect, a whole range of possibilities is offered for the creation of games. The main advantage of Kinect technology is that it allows senior people to use their body movements to navigate through a selection of menus, for instance.
IX CUSTOMIZABILITY OF A GAME
The customizability of a game depending on the user group is important [17]. A simple game may be constructed on themes based on the player groups: for kids it can be a fairy tale based on cartoon characters, and for older adults it can be a city excursion. According to [18], elderly women like to socialize
while men still like competing with others or with themselves. Thus, it is important to implement a score function for competition.
[15] Allowing the user easy control of font, color and contrast settings, as well as window resizing, scroll rate and zooming, is generally recommended. These adjustments should not exceed appropriate boundaries for the playability of a game on a given system; e.g., a 200-point font on a portable game device will not increase readability. At any moment, the user should be able to directly undo the adjustments by means of a single click.
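A minimal sketch of such clamped, undoable display settings (the class name, bounds and fields below are illustrative assumptions, not from the cited study):

class DisplaySettings:
    # Hypothetical bounds keeping adjustments within playable limits.
    FONT_MIN, FONT_MAX = 12, 48

    def __init__(self):
        self.font_size = 16
        self.history = []              # previous values, for one-click undo

    def set_font_size(self, size):
        self.history.append(self.font_size)
        # Clamp so a request like 200 pt cannot break the layout.
        self.font_size = max(self.FONT_MIN, min(self.FONT_MAX, size))

    def undo(self):
        if self.history:
            self.font_size = self.history.pop()

settings = DisplaySettings()
settings.set_font_size(200)
print(settings.font_size)   # 48: clamped to the playable maximum
settings.undo()
print(settings.font_size)   # 16: restored with a single "click"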
[19] For the elderly, it may be a challenge to be steady with the mouse or any other control device. Small targets and moving interface elements are known to be difficult for older people, and are best avoided. In related research it was found that pen input is accessible, even for seniors who lack computer experience [20]. Furthermore, in another study [21], based on the CogStim Game, the researchers reported that an 85-year-old participant found it fun to draw using a stylus on a Tablet PC, and that drawing with a stylus on a smooth tablet was much easier than drawing with a pencil on paper. Some participants wanted to finger paint. Therefore, they designed interaction methods using touch or pen input for the CogStim Game.
Hidden resources, and resources that appear only after prior ones have been successfully conquered, mean that new challenges are introduced at an appropriate pace, an important game design technique also advocated in [22].
Older adults might easily forget instructions and information during play. Therefore, there is a need to recall the game situation and, where necessary, repeat the information [23]. In addition, it is important to present information about the game at a slow pace, with an option to restart easily if memory loss has occurred. Messages should be short and concise and displayed on the screen. One interesting feature would be a system able to detect the player's memory loss, for instance from the player's position or posture, so that it can trigger a break in the game play and allow the movement to be restarted.
[23] People might experience some difficulty reading or interpreting body language. Therefore, as a requirement, the screen should be large enough and clear, and should be positioned at a readable distance. Additionally, visual messages such as icons and illustrations must be simple and easily interpretable.
Researchers in [24] proposed 9 design criteria within a human-factors perspective in order to compensate for age-related deficits in the visual system:
- increasing the illumination of environment or task context;
- increasing the levels of luminance contrast;
- minimizing the need to use a device excessively close to the eyes;
- adapting the font size;
- minimizing glare;
- minimizing the use of peripheral vision;
- adopting marking strategies to enhance motion perception;
- using great color contrast;
- optimizing the legibility of spatial forms using computer capabilities.
XV MOTION REQUIREMENT
[16] Seniors often experience joint pain, problems with mobility, and a lack of general flexibility. Therefore, it is important to understand their possible physical limitations in order to avoid injuries. Hence, the game should be designed with several levels of difficulty in mind. Very basic and simple dance steps are possible; according to the confidence of the player, the number of steps or the speed can be increased. It is crucial that the patient feels that it is quite safe to play. Other external parameters should also be taken into account, such as sufficient light in the room, the environment setting and so forth.
XVI HEARING
The anatomical changes in the ear with age affect absolute sensitivity, frequency and intensity discrimination,
sound localization and speech recognition. For instance, [25] observed that computer-generated speech, which
does not match the rhythm properties of natural verbal production, can be problematic for elderly drivers.
In [24] the researcher proposed 9 design criteria:
- increasing stimulus intensity,
- controlling background noise,
- avoiding the need to detect/identify high-frequency stimuli,
- avoiding long-term exposure to high levels of noise,
- avoiding signal locations with low frequency sound sources,
- using redundant and semantically well-structured speech materials,
- adapting the rate of words per minute,
XVII MOTIVATION
In [26], the researchers investigated ways to incorporate reward schemes to increase motivation and encourage
game participation. For example, a personally motivating reward could be an automatic direct dial to a
grandchild if they complete the daily game session. Another example would be putting an algorithm into the
game so that if a user completes a certain session, their family members will then be notified by an instant short
message or an email. The reward structure could also be accumulating points to exchange for coupons or gift
cards that could be redeemed at participating local businesses.
[16] In order to ensure the success of a senior dance or sport game, it is crucial to make sure that people are motivated enough to play it again and again. One aspect to consider is the perceived success the player feels: studies highlight that if seniors feel they will not be able to complete the game, they will feel unmotivated. Thus, it is important that the game offer levels that they can reach easily. The game should also not openly expose their failures to other residents. In addition, staff should offer praise and encourage people to use the game as a social tool connecting the community. The game system can also be used as a tool in distraction therapy for pain and anxiety [27].
XVIII CONCLUSION
Game systems for elders help to reduce isolation and increase life expectancy. Games are also found to help older people better age in place by providing ways to help them form, keep and exercise social relationships and take care of their wellbeing. While designing a game for the elderly, their issues and problems should be taken care of. This paper has presented a study of such design issues for developing games for the elderly.
REFERENCES
[1] Fisk, A. D., Rogers, W. A., Charness, N., Czaja, S. J., and Sharit, J. (2004). Designing for older adults: Principles and Creative Human Factors Approaches, CRC Press.
[2] Askedal, Are elderly sufferers of dementia able to play a reminiscence game on a tablet device independently?, Master's Thesis, Master of Science in Media Technology, Department of Computer Science and Media Technology, Gjøvik University College, 2011.
[3] Maki, O. Topo, P. User needs and user requirements of people with dementia: multimedia application
for entertainment. In: Topo, P. Östlund, B. Assistive Technology research series. Volume 24. Dementia
design and technology. Time to get involved. Amsterdam: IOS-press; 2009. p.61-78.
[4] B.J. Fogg. “Persuasive Technology. Using Computers to Change What We Think and Do”, Magazine
Ubiquity, Volume 2002 Issue December, December 1 - December 31, 2002.
[5] S. Consolvo, D. W. McDonald, J. A. Landay. “Theory-Driven Design Strategies for Technologies that
Support Behavior Change in Everyday Life.”, CHI '09 Proceedings of the 27th international conference
on Human factors in computing systems.
[6] J. J. Lin, L. Mamykina, S. Lindtner, G. Delajoux, H. B. Strub. "Fish'n'Steps: Encouraging physical activity with an interactive computer game," Proceedings of the 8th International Conference: UbiComp 2006, Orange County, CA, USA, September 17-21, 2006, Springer, pp. 261-278.
[7] I. M. Albaina et al, “Flowie: A Persuasive Virtual Coach to Motivate Elderly Individuals to Walk”,
Pervasive Computing Technologies for Healthcare, 2009, p 1-7.
[8] S. Consolvo, et al, “Activity Sensing in the Wild: A Field Trial of UbiFit Garden.2, Proceeding of the
twenty-sixth annual SIGCHI conference on Human factors in computing systems 2008.
[9] Sohn, J. Lee. “UP Health: Ubiquitously Persuasive Health Promotion with an Instant Messaging
System.”, Proceeding CHI EA '07.
[10] B.J. Fogg, D. Eckles. Mobile persuasion. 20 perspectives on the Future of Behavior Change, Persuasive
Technology Lab, Stanford University, 2007,Stanford Captology Media. ISBN-10: 78097950251,
ISBN-13: 978-0-9795025-2-1
[11] G. Roberts, K. Spink, C. L. Pemberton. Learning Experiences in Sport Psychology , 2nd edition, 1999,
ISBN: 0-88011-932-2
[12] F. Mueller, S. O'Brien, A. Thorogood. “Jogging over a distance: supporting a "jogging together"
experience although being apart.”, CHI 2007. Publisher: Stanford Captology Media ISBN-10:
78097950251 and ISBN-13: 978-0-9795025-2-1
[13] L. Festinger. “A theory of social comparison processes.”, Sage social science collection, Human
Relations May 1954 vol. 7 no. 2 117-140
[14] Courtney Aison et al, Appeal and Interest of Video Game Use Among the Elderly, The Harvard
Graduate School of Education, May, 2002
[15] Wijnand Ijsselsteijn et al, Digital Game Design for Elderly Users, Retrieved from
www.nus.edu.sg/nec/InnoAge/documents/Digital Game Design for Elderly.pdf on 27 Mar 2012.
[16] Aurelie Aurilla, Bechina Arntzen, Game based Learning to Enhance Cognitive and, Physical
Capabilities of Elderly People:, Concepts and Requirements, World Academy of Science, Engineering
and Technology, 2011.
[17] A. Al Mahmud et al., Designing social games for children and older adults: Two related case studies,
Entertainm. Comput.(2010)
[18] K. Ogomori, M. Nagamachi, K. Ishihara, S. Ishihara, and M. Kohchi, "Requirements for a Cognitive
Training Game for Elderly or Disabled People," in Biometrics and Kansei Engineering (ICBAKE),
2011 International Conference on, 2011, pp. 150-154.
[19] Rogers, W., & Fisk, A. (2000). Human factors, applied cognition, and aging. In: F.I.M. Craik & T.A.
Salthouse (Eds.), The Handbook of Aging and Cognition. Mahwah, NJ: LEA
[20] Kim, H., Cho, Y.S., Guha, A., Do, E.Y.-L.: ClockReader: Investigating Senior Computer Interaction
through Pen-based Computing CHI Workshop on Senior-Friendly Technologies: Interaction Design for
the Elderly, pp. 30-33, Atlanta, GA (2010)
[21] Hyungsin Kim1, Viraj Sapre1, Ellen Yi-Luen, Games for Health: Design Cognition-focused
Interventions to Enhance Mental Activity, HCI (23) 2011: 415-419
[22] R.J. Sluis, I. Weevers, et al, Read-It: five-to-seven-year-old children learn to read in a tabletop
environment, in: Proceedings of the IDC’04, 2004.
[23] Aurelie Aurilla, Bechina Arntzen, Game based Learning to Enhance Cognitive and, Physical
Capabilities of Elderly People:, Concepts and Requirements, World Academy of Science, Engineering
and Technology, 2011.
[24] Schieber, F. Human Factors and Aging: Identifying and Compensating for Age-related Deficits in Sensory and Cognitive Function. In K. W. Schaie and N. Charness (Eds.), Impact of Technology on Successful Aging, 85-99. New York: Springer. (2003)
[25] Kiss, I., Ennis, T. Age-related decline in perception of prosodic affect. Applied Neuropsychology, 8, 251-254, (2001).
[26] Kim, H., Cho, Y.S., Guha, A., Do, E.Y.-L.: ClockReader: Investigating Senior Computer Interaction
through Pen-based Computing CHI Workshop on Senior-Friendly Technologies: Interaction Design for
the Elderly, pp. 30-33, Atlanta, GA (2010)
[27] C. Watters, et al., "Extending the Use of Games in Health Care," in Proceedings of the 39th Hawaii International Conference on System Sciences, Hawaii, 2006.
ABSTRACT
The design of high-performance, low-power clocked storage elements is essential and critical to achieving maximum levels of performance and reliability in modern VLSI systems such as Systems on Chips (SoCs). The TSPC (True Single-Phase Clock) D flip-flop offers advantages in speed and power over the normal D flip-flop design. Chip manufacturing technology is on the threshold of a major evolution that shrinks chips in size; the design is implemented at layout level to develop a low-power-consumption chip using recent CMOS micron layout tools. This paper compares two architectures of a 3-bit counter, one using the normal D flip-flop design and one using the TSPC D flip-flop design, in terms of speed, power consumption and CMOS layout using 45 nm CMOS technology. The Microwind CMOS layout design tool allows the designer to design and simulate an integrated circuit at the physical description level.
I. INTRODUCTION
Counters are sequential circuits that keep track of the number of pulses applied to their inputs. They occur frequently in real-world, practical digital systems, with applications in computer systems, communication equipment, scientific instruments, and industrial control, to name a few. Many counter designs have been proposed in the literature and patents, and/or used in practice. Counters are usually classified into synchronous counters, such as ring counters and twisted counters, and asynchronous counters, such as ripple counters. In CPUs, microcontrollers, DSPs and many other digital designs which include a program counter and a timer counter, synchronous counters are usually preferred. Counters are often clocked at a very high rate, usually with an activity factor of 1; in a good design, however, the activity factor can be substantially less than 1 and data-dependent, leading to lower power consumption.
A counter is a logic circuit that counts the number of occurrences of an input. Each count, a binary number, is
called a state of the counter. Hence a counter counting in terms of n bits has 2^n different states. The number of different states of a counter is known as the modulus of the counter; thus, an n-bit counter is a modulo-2^n counter.
This type of asynchronous counter is also known as a serial or ripple counter. The name asynchronous comes from the fact that this counter's flip-flops are not clocked at the same time: the clock input is applied only to the first flip-flop, also called the input flip-flop, in a cascaded arrangement. The purpose of this work is to design, with Microwind, a 3-bit asynchronous counter with a reset function. This counter raises its output at a falling edge of the clock. The 3-stage asynchronous counter displays the numbers 0 to 7, using a chain of three D-register cells. The D-register design has been implemented using two latches built with CMOS inverters and pass transistors.
A digital asynchronous counter is a semiconductor device used for counting the number of times a digital event has occurred. The term ripple counter comes from the way the clock information ripples through the counter. To design a 4-bit asynchronous counter we need to cascade four D registers; the clock signal of each stage is simply provided by the previous stage. With this configuration the counter raises its output at a falling edge of the clock. The counter's output is incremented by one LSB every time the counter is clocked. The 4-stage ripple counter displays the numbers 0 to 15, using a chain of four D-register cells. In a counter like this, after each clock input the count has to wait for a time period equal to the sum of the propagation delays of all the flip-flops before the next clock pulse can be applied. The propagation delay of each flip-flop, of course, depends on the logic family to which it belongs.
Simply put, to operate on n-bit values we can connect n 1-bit counters; the 3-bit counter is constructed using three 1-bit registers, as in our case.
Fig 1: An implementation of a master-slave D flip-flop using CMOS logic gates and pass transistors, with reset facility
The architecture is based on inverters and pass transistors and is constructed from two memory loop circuits in series. The cell structure includes a master memory cell (left) and a slave memory cell (right). When the clock is high, the master latch is updated to the new value of the input D, while the slave latch produces the previous value of D on the output Q. When the clock goes low, the master latch turns to its memory state and the slave circuit is updated. The change of the clock from 1 to 0 is therefore the active edge of the clock, making this a negative-edge-triggered flip-flop. The reset function is obtained by a direct ground connection of the master and slave memories using nMOS devices. This added circuit implements an asynchronous reset, meaning that Q is reset to 0 as soon as Reset is set to 1, without waiting for an active edge of the clock.
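A small behavioral sketch of this element (Python, for illustration only; the design in this paper is at CMOS layout level): a negative-edge-triggered D flip-flop with asynchronous reset, with the master-slave action collapsed into a single edge-detecting update.

class DFlipFlop:
    # Negative-edge-triggered D flip-flop with asynchronous reset.
    def __init__(self):
        self.q = 0
        self._prev_clk = 0

    def tick(self, clk, d, reset=0):
        if reset:                                # asynchronous: ignores the clock
            self.q = 0
        elif self._prev_clk == 1 and clk == 0:   # falling edge: the active edge
            self.q = d                           # slave outputs what the master held
        self._prev_clk = clk
        return self.q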
Fig 2: An implementation of a TSPC D flip-flop with reset, triggered on the negative edge of the clock
III. THREE BIT COUNTER DESIGN USING MASTER SLAVE D FLIP FLOP AND TSPC D
FLIP FLOP
The following is a 3-bit asynchronous binary counter. It has 8 states, owing to the three flip-flops, and displays
the binary numbers 000 to 111. The counter is constructed from D flip-flops in a master-slave arrangement; such
a master-slave D flip-flop is called a D register, and the counter uses three of them. Only the first flip-flop is
connected to the clock; the other flip-flops are clocked by the previous flip-flop's output, and Reset is connected
to all the flip-flops. When the least significant bit makes a transition, the information ripples through all the
stages of the flip-flops: the clock input of each subsequent flip-flop comes from the output of its immediately
preceding flip-flop. For instance, the output of the first register acts as the clock input to the second register, and
the output of the second register feeds the clock input of the third. As a natural consequence, the three registers
do not change state at the same time: the second register can change state only after the output of the first
register changes, because it gets its clock input from the output of the first register and not from the input clock.
The counter's output is incremented by one LSB every time the counter is clocked. The 3-stage ripple counter
displays numbers from 0 to 7, using a chain of three D-register cells. Q0, Q1 and Q2 are the three output bits of
the counter.
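Under this wiring (first stage on the external clock, each later stage clocked by the previous output), the 0-to-7 sequence can be reproduced with a short behavioural sketch reusing the DFlipFlop model above. Toggling is obtained by feeding each register's inverted output back to its D input, a standard divide-by-two arrangement that the paper does not spell out and is assumed here; Q0 is taken as the LSB.

```python
class RippleCounter3:
    """Behavioural sketch of the 3-bit ripple counter: only the first
    register sees the external clock; each later stage is clocked by
    the previous stage's output (assumed toggle wiring: D = not Q)."""

    def __init__(self):
        self.ffs = [DFlipFlop() for _ in range(3)]  # model from the sketch above

    def clock(self, reset=0):
        """Apply one falling edge of the external clock and let the
        resulting transitions ripple through the later stages."""
        edge = True                      # a falling edge reaches stage 0
        for ff in self.ffs:
            if not edge:
                break                    # no falling edge: later stages hold
            old = ff.q
            ff.step(d=1 - ff.q, clk=1, reset=reset)  # clock-high phase
            ff.step(d=1 - ff.q, clk=0, reset=reset)  # falling edge: toggle
            edge = (old == 1 and ff.q == 0)  # a 1 -> 0 clocks the next stage

    def value(self):
        return sum(ff.q << i for i, ff in enumerate(self.ffs))  # Q0 = LSB

c = RippleCounter3()
counts = []
for _ in range(8):
    counts.append(c.value())
    c.clock()
print(counts)   # [0, 1, 2, 3, 4, 5, 6, 7]
```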
Now that we have designed all the components of the counter, we can assemble it according to the schematic
diagram seen in the introduction. The first stage receives the clock signal. For the reset, we use the reset inputs of
our D registers and connect them together; however, we need to change the position of the reset nMOS of each D
register in order to optimize the layout. Thus we have no problems with the Q outputs of the counter when the
reset is used. The counter is first designed in 90 nm technology using both the conventional and the TSPC D
flip-flop, and is then compared with the 45 nm counter, all simulated with the Microwind tools. The counter
design is shown in the figures, and the possible output combinations are listed below.
Clock pulse   Q0   Q1   Q2
0             0    0    0
1             0    0    1
2             0    1    0
3             0    1    1
4             1    0    0
5             1    0    1
6             1    1    0
7             1    1    1
Fig 4: CMOS layout of the 3-bit counter based on the D flip-flop, in 90nm technology
Fig 5: Simulation of the 3-bit counter based on the D flip-flop, in 90nm technology
Fig 6: CMOS layout of the 3-bit counter based on the TSPC D flip-flop, in 90nm technology
Fig 7: Simulation of the 3-bit counter based on the TSPC D flip-flop, in 90nm technology
Fig 8: Voltage and current vs. time simulation of the 3-bit counter, in 90nm technology
Fig 9: CMOS layout of the 3-bit counter based on the TSPC D flip-flop, in 45nm technology
Fig 10: Simulation of the 3-bit counter based on the TSPC D flip-flop, in 45nm technology
Fig 11: Voltage and current vs. time simulation of the 3-bit counter, in 45nm technology
V CONCLUSION
This paper compares two design technologies, 45 nm and 90 nm. The 3-bit asynchronous counter is designed
using both a simple D flip-flop and a TSPC D flip-flop in 90 nm technology, and the results are shown in the
following table.
Table: Comparison between the 3-bit counter designs using the D flip-flop and the TSPC D flip-flop
in 90nm CMOS technology
The TSPC-based counter gives the best results compared with the D-FF-based counter: it requires fewer
transistors, which yields a shorter execution time, and less layout area is needed to design the circuit. Similarly,
the counters designed in the two different technologies are compared on the following factors. The above table
compares the 45 nm and 90 nm technologies; the results show that the 45 nm design requires a lower supply
voltage to operate the circuit and, owing to its compact design, has a smaller layout area and lower power
consumption than the 90 nm design. For these reasons, 45 nm technology is preferred for low-power circuits.
ABSTRACT
The wireless communication channel is nowadays the primary means of communication. Data is DPSK-modulated
and Hamming-encoded before being given as input to the Rayleigh-faded wireless communication channel. This
paper focuses mainly on techniques for regenerating, at the receiver, the data that was phase-modulated at the
source. The generated data is passed through the channel, undergoing various transmission impairments. To
regenerate the data accurately at the receiver side, equalizers are recommended; the decision feedback equalizer is
one such equalizer that reproduces the data accurately. It handles all types of noise components, suppresses
AWGN, and counters the Doppler effect and ISI most effectively.
I DATA GENERATION
The input to the system is a message of L characters. The model converts the given text into ASCII
code and supplies the bit stream to the DPSK encoder [3].
The general representation of a set of M-ary phase-signaling waveforms is [6][3]

S(t) = Re{ U(t) exp[j(2πf_c t + 2π(m−1)/M + θ)] },   0 ≤ t ≤ T,   m = 1, 2, …, M,

where θ is the initial phase and U(t) is a rectangular pulse with amplitude A. Expanding,

S(t) = A cos[2πf_c t + 2π(m−1)/M + θ]
     = A_cm cos[2πf_c t] − A_em sin[2πf_c t],

where

A_cm = A cos[2π(m−1)/M + θ]   and   A_em = A sin[2π(m−1)/M + θ].

If θ = π/4 and M = 4, then A_cm and A_em each take the values ±A/√2.
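As a quick numerical check of the decomposition above, the following sketch evaluates A_cm and A_em for M = 4 and θ = π/4; the unit amplitude A = 1 is an assumed illustrative value.

```python
import numpy as np

# Numerical check of the quadrature decomposition for M = 4, theta = pi/4.
M, theta, A = 4, np.pi / 4, 1.0
for m in range(1, M + 1):
    phase = 2 * np.pi * (m - 1) / M + theta
    A_cm = A * np.cos(phase)   # in-phase amplitude
    A_em = A * np.sin(phase)   # quadrature amplitude
    print(f"m={m}: A_cm={A_cm:+.3f}, A_em={A_em:+.3f}")
# Each amplitude comes out as +/- A/sqrt(2) ~= +/-0.707, as stated above.
```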
The mapping, or assignment, of information bits to the M possible phases is commonly done using the logic
described hereunder, as shown in Fig (1).
In four-phase PSK, sets of two successive bits are mapped onto the four possible phases. When a pair of bits is
encoded, say 01, the phase increment corresponding to this combination, i.e. 5π/4, is added to the phase shift of
the previous bit interval, say 7π/4, to give the phase shift of the present interval [6].
Thus, while in the previous bit interval a sinusoid of signaling frequency f_c with phase shift 7π/4 was transmitted,
in the present interval the sinusoid is transmitted with a phase shift of 7π/4 + 5π/4 = 3π ≡ π (mod 2π).
When θ > 0 is used, the signaling phase is shifted in every signaling interval, even when a long string of zeroes
occurs in the information. This results in a signal spectrum whose width is approximately equal to 1/T (T is the
signaling interval). The spectral components above and below the carrier are used to maintain synchronization at
the receiver, so their presence in the received signal is important; hence a non-zero value of θ (θ = π/4) is used in
such a case. With M equal to four and θ equal to π/4, eight possible phase shifts exist.
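The differential rule can be sketched in a few lines. Only the pair 01 → 5π/4 (and the example starting phase 7π/4) is given in the text; the remaining dibit-to-increment entries below are illustrative assumptions drawn from the phase set {π/4, 3π/4, 5π/4, 7π/4}.

```python
import numpy as np

# Differential QPSK phase accumulation as described above. Only
# '01' -> 5*pi/4 is given in the text; the other entries are assumptions.
PHASE_INC = {'00': np.pi / 4, '01': 5 * np.pi / 4,
             '11': 3 * np.pi / 4, '10': 7 * np.pi / 4}

def dqpsk_phases(bits, theta_prev=7 * np.pi / 4):
    """Return the absolute carrier phase used in each dibit interval."""
    phases = []
    for i in range(0, len(bits) - 1, 2):
        theta_prev = (theta_prev + PHASE_INC[bits[i:i + 2]]) % (2 * np.pi)
        phases.append(theta_prev)
    return phases

# Worked example from the text: previous phase 7*pi/4, dibit '01'
print(dqpsk_phases('01'))   # [3.1415...]: 7*pi/4 + 5*pi/4 = 3*pi = pi (mod 2*pi)
```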
The sequence of pulses {a_k}, written as Σ_i S_i δ(t − iT), from the DPSK encoder is fed to the radio equipment's
transmitter filter G, which has an impulse response g(t) and a transfer function G(f). The output of this filter is
real-valued and is given by

S(t) = Re[ Σ_i S_i δ(t − iT) * g(t) ]          (* denotes convolution)
     = ½ Σ_i S_i g(t − iT) + ½ Σ_i S_i* g(t − iT),

using the identity Re(z) = (z + z*)/2.
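A sketch of this filtering step follows; the rectangular g(t) and the oversampling factor are assumptions for illustration, not the paper's actual radio filter.

```python
import numpy as np

# Transmit filtering: complex symbol impulses S_i are placed T seconds
# apart and convolved with the filter impulse response g(t); the real
# part is what is transmitted, as in the equation above.
sps = 8                                    # samples per symbol interval T (assumed)
symbols = np.exp(1j * np.array([np.pi / 4, np.pi, 7 * np.pi / 4]))  # example S_i
impulses = np.zeros(len(symbols) * sps, dtype=complex)
impulses[::sps] = symbols                  # sum_i S_i delta(t - iT)
g = np.ones(sps)                           # assumed rectangular g(t)
s_baseband = np.convolve(impulses, g)      # sum_i S_i g(t - iT)
s_t = s_baseband.real                      # Re{...}: the transmitted waveform
```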
II DATA TRANSMISSION
When S(t) is fed into a single Rayleigh fading HF channel the output would be [3][7][4]
y(t_i) = μ a_i
In the absence of noise, in the i-th signaling interval the received signal can be simply represented as [9]

R_i(t) = sin(2πf_c t + θ + φ_i)

where f_c is the carrier frequency, carrying a total phase shift of (θ + φ_i); φ_i is the information-bearing phase
shift of the i-th interval; and θ is an unknown phase shift representing the path delay and the relative phase
difference between the transmitter and receiver oscillators.
R < 0 and Q < 0 → 01        R < 0 and Q > 0 → 00
R > 0 and Q < 0 → 11        R > 0 and Q > 0 → 10
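The decision table above translates directly into code; here R and Q are taken to be the in-phase and quadrature detector outputs, a naming assumption based on context.

```python
def dibit_decision(R, Q):
    """Quadrant decision rule from the table above: the signs of the
    detector outputs R and Q select the detected dibit."""
    if R < 0:
        return '01' if Q < 0 else '00'
    return '11' if Q < 0 else '10'

assert dibit_decision(-0.3, 0.8) == '00'   # R < 0, Q > 0
assert dibit_decision(0.5, -0.2) == '11'   # R > 0, Q < 0
```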
The period of integration, T2 − T1, makes it possible to avoid the use of a band-pass filter to separate out noise
(if any) from the received signal.
During data demodulation, the carrier f_c generated at the receiver should not run freely. It must be locked to a
fixed phase at the beginning of every signaling interval. This is necessary to avoid a differential phase shift in the
locally generated carrier multiplying the incoming phase in successive signaling intervals. Such a differential
phase shift would otherwise add to the information-bearing phase shift (φ_i − φ_{i−1}) and cause erroneous
decisions at the demodulator.
REFERENCES
[1] M. Patzold, U. Killat, and F. Laue, "A deterministic digital simulation model for Suzuki processes with
application to a shadowed Rayleigh land mobile radio channel," IEEE Trans. Veh. Technol., vol. 45, pp. 318–331,
May 1996.
[2] ieeexplore.ieee.org/iel5/5/31280/01455678.pdf
[3] S. Venkateswarlu and J. K. R. Sastry, "Justification of Rayleigh faded channel for data transmission in wireless
environment," IJETT, vol. 14, no. 4, pp. 106–109, Aug. 2014.
[4] S. Venkateswarlu and L. S. S. Reddy, "A channel model over a frequency selective Rayleigh faded channel for
data transmission," AOM/IAOM International Conference, Virginia, US, 12th–15th Aug. 2005.
[5] http://www.lightwaveonline.com/articles/wdm/print/volume-4/issue-7/dpsk-offers-alternative-high150speed-
signal-modulation-54922077.html
[6] S. Venkateswarlu and J. K. R. Sastry, "A qualitative analysis of Rayleigh faded frequency selective channel
simulator," Int. J. Appl. Eng. Res., ISSN 0973-4562, vol. 9, no. 22, pp. 10281–10286, 2014.
[7] E. F. Casas and C. Leung, "A simple digital fading simulator for mobile radio," in Proc. IEEE Vehicular
Technology Conf., Sept. 1988, pp. 212–217.
[8] T. Eyceoz, A. Duel-Hallen, and H. Hallen, "Deterministic channel modeling and long range prediction of fast
fading mobile radio channels," IEEE Commun. Lett., vol. 2, pp. 254–256, Sept. 1998.
[9] M. J. Gans, "A power-spectral theory of propagation in the mobile radio environment," IEEE Trans. Veh.
Technol., vol. VT-21, pp. 27–38, Feb. 1972.
ABSTRACT
The main aim of this paper is to reduce the noise introduced by image enhancement methods based on the random
spray sampling technique. Owing to the nature of sprays, the output images of spray-based methods show noise
with an unknown statistical distribution, whereas the non-enhanced image is either free of noise or affected by
noise at non-perceivable levels. The dual-tree complex wavelet transform (CWT) is a relatively recent enhancement
to the discrete wavelet transform (DWT), with important additional properties: it is nearly shift-invariant and
directionally selective in two and higher dimensions. Across the six orientations of the DTCWT, the standard
deviation of the non-enhanced image coefficients can be computed and then normalized for each level of the
transform. The result is a map of the directional structures present in the non-enhanced image. This map is then
used to shrink the coefficients of the enhanced image. According to data directionality, the shrunk coefficients
and the coefficients of the non-enhanced image are mixed. Finally, the output image is computed by applying the
inverse transforms. The theoretical analysis of the new algorithm is verified via computer simulations.
I INTRODUCTION
The dual-tree complex wavelet transform (CWT) is a relatively recent enhancement to the discrete wavelet
transform (DWT), with important additional properties: it is nearly shift-invariant and directionally selective in
two and higher dimensions. Image enhancement algorithms based on random spray sampling raise specific
image-quality problems; to remove them, this paper introduces a novel multi-resolution denoising method. The
proposed approach can also be applied to other image enhancement methods that either introduce or exacerbate
noise. This work builds on and expands a previous article by Fierro et al. [1].
Random sprays are two-dimensional collections of points with a given spatial distribution around the origin.
Sprays can be used to sample an image support in place of other techniques, and have previously been used in
works such as Provenzi et al. [2], [3] and Kolås et al. [4]. Random sprays have been partly inspired by the
Human Visual System (HVS). In particular, a random spray is not dissimilar from the distribution of photoreceptors
in the retina, although the underlying mechanisms are vastly different. Due to the peaked nature of sprays, a
common side effect of image enhancement methods that use spray sampling is the introduction of undesired noise
in the output images. The magnitude and statistical characteristics of this noise are not known exactly, because
they depend on several factors such as image content, spray properties and algorithm parameters.
Some of the most commonly used transforms for shrinkage-based noise reduction are the Wavelet Transform
(WT) [5]–[7], the Steerable Pyramid Transform [8]–[10], the Contourlet Transform [11]–[13] and the Shearlet
Transform [14]–[16]. With the exception of the WT, all of these transforms lead to over-complete data
representations. Over-completeness is an important characteristic, as it is usually associated with the ability to
distinguish data directionality in the transform space. Independently of the specific transform used, the general
assumption in multi-resolution shrinkage is that image data gives rise to sparse coefficients in the transform
space. Denoising can thus be achieved by shrinking those coefficients that compromise data sparsity, a process
usually improved by an elaborate statistical analysis of the dependencies between coefficients at different scales.
Yet, while effective, traditional multi-resolution methods are designed to remove only one particular type of
noise (e.g. Gaussian noise), and only the input image is assumed to be given. Due to the unknown statistical
properties of the noise introduced by the use of sprays, traditional approaches do not find the expected
conditions, and their action becomes much less effective. The proposed approach still performs noise reduction
via coefficient shrinkage, yet an element of novelty is introduced in the form of partial reference images.
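As a stand-in illustration of the shrinkage operation that these methods share, the sketch below shows plain soft-thresholding with a fixed threshold; the proposed method replaces the fixed threshold with directionality-driven weights, so this is only the generic operation, not the paper's algorithm.

```python
import numpy as np

# Generic coefficient shrinkage: small transform coefficients (assumed
# to carry mostly noise) are pulled toward zero, while the large,
# sparse signal coefficients survive nearly intact.
def soft_threshold(coeffs, t):
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

c = np.array([5.0, -0.2, 0.1, -4.0, 0.3])
print(soft_threshold(c, 0.5))   # -> [ 4.5 -0.   0.  -3.5  0. ]
```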
The dual-tree CWT is nearly shift-invariant and directionally selective in two and higher dimensions, and it
achieves this with a redundancy factor of only 2^d for d-dimensional signals, which is substantially lower than
that of the un-decimated DWT. The multidimensional (M-D) dual-tree CWT is non-separable, but is based on a
computationally efficient, separable filter bank (FB). We use the complex-number symbol C in CWT to avoid
confusion with the often-used acronym CWT for the (different) continuous wavelet transform.
For each level of the transform, the energy of the image in orientation k ∈ {1, 2, …, 6} is computed as the sum
of the squares of the real and imaginary coefficients.
Coefficients associated with non-directional data will have similar energy in all directions. On the other hand,
directional data will give rise to high energy in one or two directions, according to its orientation. The standard
deviation of the energy across the six directions k = 1, 2, …, 6 is hence computed as a measure of directionality.
Since the input coefficients are not normalized, it naturally follows that the standard deviation is also
non-normalized. The Michaelis-Menten function [20] is thus applied to normalize the data range. This function is
sigmoid-like and has been used to model the cone responses of many species:

f(x) = x^γ / (x^γ + μ^γ),

where x is the quantity to be compressed, γ is a real-valued exponent and μ is the data's expected value or its
estimate. A normalized map of directionally sensitive weights for a given level j is obtained by applying this
function to the standard deviation of the energies, where the choice of γ depends on j as explained later on. A
shrunk version of the enhanced image's coefficients is then computed by weighting them with this map,
according to data directionality.
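A sketch of this directionality map follows, assuming coeffs holds the six complex DTCWT subbands of one level as an array of shape (6, H, W); the value of γ and the use of the mean as the estimate of μ are assumptions.

```python
import numpy as np

def direction_map(coeffs, gamma=2.0):
    """Directionality map for one transform level, as described above."""
    energy = np.abs(coeffs) ** 2      # sum of squared real/imaginary parts
    sigma = energy.std(axis=0)        # std dev across the 6 orientations
    mu = sigma.mean() + 1e-12         # expected value of the data (estimate)
    x = sigma ** gamma
    return x / (x + mu ** gamma)      # Michaelis-Menten normalization to [0, 1)

rng = np.random.default_rng(0)
subbands = rng.standard_normal((6, 4, 4)) + 1j * rng.standard_normal((6, 4, 4))
w = direction_map(subbands)           # directionally sensitive weights
```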
Since the main interest is retaining directional information, a rank is obtained for each of the non-enhanced
coefficients by means of the function ord, which returns the index of a coefficient in the set k = 1, 2, …, 6 when
the set is sorted in descending order. The output coefficients are then computed from these ranks. The meaning
of the whole sequence can be roughly expressed as follows: where the enhanced image shows directional
content, shrink the two most significant coefficients and replace the four less significant ones with those from
the non-enhanced image. The reason why only the two most significant coefficients are taken from the shrunk
ones of the enhanced image lies in the nature of "directional content": for the content of an image to be
directional, the responses across the six orientations of the DTCWT need to be highly skewed. In particular, any
data orientation can be represented by a strong response on two adjacent orientations, while the remaining
coefficients will be near zero. As a result, the two significant coefficients are carried over almost un-shrunk.
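The mixing rule for one spatial location can be sketched as follows; the array names and the use of coefficient magnitude for ranking are assumptions made for illustration.

```python
import numpy as np

def mix_coefficients(shrunk_enh, non_enh):
    """Mixing rule described above, for length-6 complex arrays holding
    one coefficient per orientation: keep the two strongest coefficients
    from the shrunk enhanced image, take the four weakest from the
    non-enhanced image."""
    order = np.argsort(-np.abs(non_enh))     # ranks, descending magnitude
    out = non_enh.copy()                     # four least significant: non-enhanced
    out[order[:2]] = shrunk_enh[order[:2]]   # two most significant: shrunk enhanced
    return out
```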
IV RESULTS
Fig: Original image and enhanced image
V CONCLUSION
This paper presents a noise reduction method based on the shrinkage of Dual-Tree Complex Wavelet Transform
coefficients. The main point of novelty is its application as post-processing on the output of an image
enhancement method (both the non-enhanced image and the enhanced one are required), together with the lack
of assumptions on the statistical distribution of the noise; the non-enhanced image is assumed to be either
noise-free or affected only by non-perceivable noise. The images are first converted to a color space with distinct
chromatic and achromatic axes, based on properties of the Human Visual System, but only the achromatic part
becomes the object of the noise reduction process. To achieve denoising, the proposed method exploits the
data-orientation discriminating power of the Dual-Tree Complex Wavelet Transform to shrink coefficients from
the enhanced, noisy image. According to data directionality, the shrunk coefficients are mixed with those from
the non-enhanced, noise-free image. The output image is then computed by inverting the Dual-Tree Complex
Wavelet Transform and the color transform. Since at the time of writing no directly comparable method was
known to the authors, performance was tested in a number of ways, both subjective and objective, both
quantitative and qualitative. Subjective tests include a user panel test and close inspection of image details;
objective tests include scan-line analysis for images without a known prior, and computation of PSNR and SSIM
on images with a full reference. The proposed algorithm produces good-quality output by removing noise
without altering the underlying directional structures in the image. Although designed to tackle a quality
problem specific to spray-based image enhancement methods, the approach also proved effective on compression
noise and on latent noise brought to the surface by histogram equalization. Its main limitations are that it
requires two input images (one non-enhanced and one enhanced) and that its iterative nature expands
computation time considerably with respect to one-pass algorithms.
REFERENCES
[1] M. Fierro, W.-J. Kyung, and Y.-H. Ha, "Dual-tree complex wavelet transform based denoising for random
spray image enhancement methods," in Proc. 6th Eur. Conf. Colour Graph., Image. Vis., 2012, pp. 194–199.
[2] E. Provenzi, M. Fierro, A. Rizzi, L. De Carli, D. Gadia, and D. Marini, "Random spray retinex: A new retinex
implementation to investigate the local properties of the model," IEEE Trans. Image Process., vol. 16, no. 1,
pp. 162–171, Jan. 2007.
[3] E. Provenzi, C. Gatta, M. Fierro, and A. Rizzi, "A spatially variant white-patch and gray-world method for
color image enhancement driven by local contrast," IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 10,
pp. 1757–1770, 2008.
[4] Ø. Kolås, I. Farup, and A. Rizzi, "Spatio-temporal retinex-inspired envelope with stochastic sampling: A
framework for spatial color algorithms," J. Imag. Sci. Technol., vol. 55, no. 4, pp. 1–10, 2011.
[5] H. A. Chipman, E. D. Kolaczyk, and R. E. McCulloch, "Adaptive Bayesian wavelet shrinkage," J. Amer.
Stat. Assoc., vol. 92, no. 440, pp. 1413–1421, 1997.
[6] A. Chambolle, R. De Vore, N.-Y. Lee, and B. J. Lucier, "Nonlinear wavelet image processing: Variational
problems, compression, and noise removal through wavelet shrinkage," IEEE Trans. Image Process., vol. 7,
no. 3, pp. 319–335, Mar. 1998.
[7] Cho, T. D. Bui, and G. Chen, "Image denoising based on wavelet shrinkage using neighbor and level
dependency," Int. J. Wavelets, Multiresolution Inf. Process., vol. 7, no. 3, pp. 299–311, May 2009.
[8] E. P. Simoncelli and W. T. Freeman, "The steerable pyramid: A flexible architecture for multi-scale derivative
computation," in Proc. 2nd Annu. Int. Conf. Image Process., Oct. 1995, pp. 444–447.
[9] Rooms, W. Philips, and P. Van Oostveldt, "Integrated approach for estimation and restoration of photon-
limited images based on steerable pyramids," in Proc. 4th EURASIP Conf. Focused Video/Image Process.
Multimedia Commun., vol. 1, Jul. 2003, pp. 131–136.
[10] Rabbani, "Image denoising in steerable pyramid domain based on a local Laplace prior," Pattern Recognit.,
vol. 42, no. 9, pp. 2181–2193, Sep. 2009.
[11] S. Foucher, G. Farage, and G. Benie, "SAR image filtering based on the stationary contourlet transform,"
in Proc. IEEE Int. Geosci. Remote Sens. Symp., Jul.–Aug. 2006, pp. 4021–4024.
[12] W. Ni, B. Guo, Y. Yan, and L. Yang, "Speckle suppression for SAR images based on adaptive shrinkage in
contourlet domain," in Proc. 8th World Congr. Intell. Control Autom., vol. 2, 2006, pp. 10017–10021.
[13] K. Li, J. Gao, and W. Wang, "Adaptive shrinkage for image denoising based on contourlet transform," in
Proc. 2nd Int. Symp. Intell. Inf. Technol. Appl., vol. 2, Dec. 2008, pp. 995–999.
[14] Q. Guo, S. Yu, X. Chen, C. Liu, and W. Wei, "Shearlet-based image denoising using bivariate shrinkage
with intra-band and opposite orientation dependencies," in Proc. Int. Joint Conf. Comput. Sci. Optim., vol. 1,
Apr. 2009, pp. 863–866.
[15] X. Chen, C. Deng, and S. Wang, "Shearlet-based adaptive shrinkage threshold for image denoising," in
Proc. Int. Conf. E-Bus. E-Government, Nanchang, China, May 2010, pp. 1616–1619.
[16] Zhao, L. Lu, and H. Sun, "Multi-threshold image denoising based on shearlet transform," Appl. Mech.
Mater., vols. 29–32, pp. 2251–2255, Aug. 2010.
[17] N. G. Kingsbury, "The dual-tree complex wavelet transform: A new technique for shift invariance and
directional filters," in Proc. 8th IEEE Digit. Signal Process. Workshop, Aug. 1998, no. 86, pp. 1–4.
[18] W. Freeman and E. Adelson, "The design and use of steerable filters," IEEE Trans. Pattern Anal. Mach.
Intell., vol. 13, no. 9, pp. 891–906, Sep. 1991.
[19] M. Livingstone, Vision and Art: The Biology of Seeing. New York: Harry N. Abrams, 2002.
[20] L. Michaelis and M. L. Menten, "Die Kinetik der Invertinwirkung," Biochem. Z., vol. 49, pp. 333–369,
1913.