
Bioinformatics vs. AI

Diwaker Singh, Reg. Id. 10808015

B.Tech-MBA (CSE), Lovely Professional University, Punjab

Abstract- This term paper aims to provide an overview of the ways in which techniques from artificial intelligence can be usefully employed in bioinformatics, both for modelling biological data and for making new discoveries. The paper covers three techniques: symbolic machine learning approaches (nearest-neighbour and identification tree techniques); artificial neural networks; and genetic algorithms. Each technique is introduced and then supported with examples taken from the bioinformatics literature.

INTRODUCTION

This term paper deals with the topic Bioinformatics vs. Artificial Intelligence. The first part explains the basic definitions of the terms Bioinformatics and Artificial Intelligence. The second part compares and contrasts Bioinformatics and AI, and deals with the application of AI techniques in the field of Bioinformatics. From this discussion, the difference between the two should become clear.

I. WHAT ARE BIOINFORMATICS AND AI?

a. BIOINFORMATICS

Bioinformatics is the field of science in which biology, computer science, and information technology merge to form a single discipline. The ultimate goal of the field is to enable the discovery of new biological insights as well as to create a global perspective from which unifying principles in biology can be discerned. At the beginning of the "genomic revolution", a bioinformatics concern was the creation and maintenance of a database to store biological information, such as nucleotide and amino acid sequences. Development of this type of database involved not only design issues but also the development of complex interfaces whereby researchers could both access existing data and submit new or revised data. Ultimately, however, all of this information must be combined to form a comprehensive picture of normal cellular activities so that researchers may study how these activities are altered in different disease states. Therefore, the field of bioinformatics has evolved such that the most pressing task now involves the analysis and interpretation of various types of data, including nucleotide and amino acid sequences, protein domains, and protein structures. The actual process of analyzing and interpreting data is referred to as computational biology. Important sub-disciplines within bioinformatics and computational biology include:

- the development and implementation of tools that enable efficient access to, and use and management of, various types of information;

- the development of new algorithms (mathematical formulas) and statistics with which to assess relationships among members of large data sets, such as methods to locate a gene within a sequence, predict protein structure and/or function, and cluster protein sequences into families of related sequences.

b. ARTIFICIAL INTELLIGENCE

Artificial Intelligence (AI) is the area of computer science focusing on creating machines that can engage in behaviours that humans consider intelligent. The ability to create intelligent machines has intrigued humans since ancient times, and today, with the advent of the computer and 50 years of research into AI programming techniques, the dream of smart machines is becoming a reality. Researchers are creating systems which can mimic human thought, understand speech, beat the best human chess player, and perform countless other feats never before possible.

II. FUNCTIONS OF BIOINFORMATICS

In Bioinformatics we deal with storing and analysing data, which can be utilised for the following aspects of Bioinformatics:

- Protein structure and function prediction
- Automated genome annotation
- Biological network inference
- Comparative genomic analyses
- Scientific literature and textual annotation mining
- Integrative systems biology approaches
- Chemoinformatics and drug discovery applications
- Personalized medicine applications

III. WHY USE AI IN BIOINFORMATICS?

The above-mentioned tasks in Bioinformatics require a huge amount of data to be stored and analysed by experts. If done manually, this consumes a huge amount of time and effort, and even if we use computers for this work we still need a good team and an efficient algorithm to accomplish these tasks, as well as biologists to draw inferences from the available data. The words "efficient" and "drawing inference" lead to the introduction of Artificial Intelligence in this field. In other words, the use of artificial intelligence has arisen from the need of biologists to utilise and help interpret the vast amounts of data that are constantly being gathered in genomic research. The underlying motivation for many of the bioinformatics and DNA sequencing approaches is the evolution of organisms and the complexity of working with erroneous data. We know that artificial intelligence can be used to design expert systems which require minimal human resources (even a single person can perform the task of 100 persons using such a system), can deliver the most accurate result possible, and provide an inference mechanism which can be used to extract new information from previously gathered data.

There are several important problems where AI approaches are particularly promising:

- Prediction of protein structure
- Semi-automatic drug design
- Knowledge acquisition from genetic data

IV. FUNCTIONS OF AI IN BIOINFORMATICS

a. Data Mining
b. Bio-Medical Informatics

AI provides several powerful algorithms and techniques for solving important problems in bioinformatics and chemoinformatics. Approaches like Neural Networks, Hidden Markov Models, Bayesian Networks and Kernel Methods are ideal for areas with lots of data but very little theory. The goal in applying AI to bioinformatics and chemoinformatics is to extract useful information from the wealth of available data by building good probabilistic models. Data Mining is an AI-powered tool that can discover useful information within a database, which can then be used to improve actions. Bio-Medical Informatics in the field of AI combines the expertise of medical informatics in developing clinical applications with the principles that have guided bioinformatics, creating a synergy between the two areas of application.

V. AN EXAMPLE

The following example shows how AI techniques can be used in the field of Bioinformatics.

Nearest neighbour approach

We first introduce decision trees. A decision tree has the following properties: each node is connected to a set of possible answers; each non-leaf node is connected to a test which splits its set of possible answers into subsets corresponding to different test results; and each branch carries a particular test result's subset to another node. To see how decision trees are useful for nearest neighbour calculations, let us consider 8 blocks of known width, height and colour. A new block then appears of known size but unknown colour. On the basis of existing information, can we make an informed guess as to what the colour of the new block is? To answer this question, we need to assume a consistency heuristic, as follows: find the most similar case, as measured by known properties, for which the property is known; then guess that the unknown property is the same as that known property. This is the basis of all nearest neighbour calculations. Although such nearest neighbour calculations can be performed by keeping all samples in memory and calculating the nearest neighbour of a new sample only when required, by comparing the new sample with previously stored samples, there are advantages in storing the information about how to calculate nearest neighbours in the form of a decision tree, as will be seen later. For our example problem above, we first need to calculate, for the 8 blocks of known size and colour, a decision space using width and height only (since these are the known properties of the ninth block), where each of the 8 blocks is located as a point within its own unique region of space. Once the 8 blocks have been assigned a unique region, we then calculate, for the ninth block of known width and height, a point in the same feature space. Then, depending on what the colour of its nearest neighbour is in the region it occupies, we allocate the colour of the nearest neighbour to the ninth block. Notice that the problem consisted of asking for an informed guess, not a provably correct answer. That is, in many applications it is important to attribute a property of some sort to an object when the object's real property is not known. Rather than having to leave the object out of consideration, an attribution of a property to the object with unknown property may be useful and desirable, even if it cannot be proved that the attributed property is the real property. At least, with nearest neighbour calculations, there is some systematicity (the consistency heuristic) in the way that an unknown property is attributed.
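As a simple illustration of the consistency heuristic, the following sketch (in Python, with block co-ordinates and colours invented for illustration, though chosen to be consistent with the midpoints quoted in the figures below) finds the nearest neighbour of a new block by brute force and copies its colour:

```python
# Minimal sketch (hypothetical block data): brute-force nearest-neighbour lookup
# using the consistency heuristic in the width-height feature space.
from math import hypot

# Eight blocks of known width, height and colour (values are illustrative only).
known_blocks = [
    {"width": 1, "height": 1, "colour": "blue"},
    {"width": 2, "height": 2, "colour": "red"},
    {"width": 4, "height": 1, "colour": "green"},
    {"width": 5, "height": 2, "colour": "yellow"},
    {"width": 2, "height": 5, "colour": "orange"},
    {"width": 3, "height": 6, "colour": "purple"},
    {"width": 4, "height": 5, "colour": "brown"},
    {"width": 6, "height": 6, "colour": "grey"},
]

def guess_colour(width, height, blocks=known_blocks):
    """Consistency heuristic: copy the colour of the most similar known block,
    where similarity is Euclidean distance over the known properties."""
    nearest = min(blocks, key=lambda b: hypot(b["width"] - width, b["height"] - height))
    return nearest["colour"]

# Ninth block: known size (width 1, height 4), unknown colour.
print(guess_colour(width=1, height=4))  # an informed guess, not a provable answer
```

This brute-force version has to compare the new block against every stored block; the decision-tree construction described next avoids exactly this cost.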

So, for our problem above, we shall divide up the 8 blocks in advance of nearest neighbour calculation (i.e. before we calculate the nearest neighbour of the ninth block). To do this, we divide the 8 blocks by height, followed by width, then height, width, and so on, until only one block remains in each set. We ensure that we divide so that an equal number of cases falls on either side. The eight blocks with known colour are first placed on the feature space, using their width and height measures as co-ordinates (Figure 1(a)).

Figure 1(a)

The ninth object (width 1, height 4) is also located in this feature space, but it may not be clear what its nearest neighbour is (i.e. which region of space it occupies); that is, the object could be orange or red. To decide which, the 8 blocks of known size are first divided into two equal subsets using, say, the height attribute. The tallest of the shorter subset has height 2 and the shortest of the taller subset has height 5. The midpoint 3.5 is therefore chosen as the dividing line (Figure 1(b)).

Figure 1(b)

The next step is to use the second attribute, width, to separate each of the two subsets from Figure 1(b) into further subsets (Figure 1(c)). For the shorter subset, the wider of the two narrow blocks is 2 and the narrower of the two wide blocks is 4. The midpoint is therefore 3. For the taller subset, a similar form of reasoning leads to the midpoint being 3.5. Note that the two subsets have different midpoints.

Figure 1(c)

Since each block does not yet occupy its own region of space, we return to height (since there are only two known attributes for each object) and split each of the four subsets once more (Figure 1(d)).

Figure 1(d)

For the two taller subsets, the midpoints are both, coincidentally, 5.5 (between 5 and 6). For the two shorter subsets, the midpoints are both, coincidentally, 1.5. Each block now has its own region of space. Once we have divided up the cases, we can then generate a decision tree, using the midpoints discovered as test nodes (Figure 1(e)), in the order in which they were found. Once we have the tree, we can then trace a path down the tree for the ninth block, following the appropriate paths depending on the outcome of each test, and allocate a colour to this block (orange).

Figure 1(e)

While this conclusion may have been reached through a visual inspection of the feature space alone (Figure 1(a)), in most cases we will be dealing with a multi-dimensional feature space, so some systematic method for calculating nearest neighbours will be required which takes into account many more than two dimensions. Also, nearest neighbour techniques provide only approximations as to what the missing values may be. Any data entered into a database system, or any inferences drawn from data obtained from nearest neighbour calculations, should always be flagged to ensure that these approximations are not taken to have the same status as facts. New information may come in later which requires these approximations to be deleted and replaced with real data. We could, of course, leave calculating the nearest neighbour of a new sample until the new sample enters the system. But if there are thousands of samples and hundreds of attributes, calculating the nearest neighbour each time a new sample arrives with unknown properties may prove to be a bottleneck for database entry. Nearest neighbour calculations can be used to infer the missing values of database attributes, for instance, and if we are dealing with real-time databases the overheads in calculating the nearest neighbour of new records with missing values could be a real problem. Instead, it is more efficient to compute decision trees in advance for a number of attributes which are known to be noisy, so that entering a new record is delayed only by the time it takes to traverse the relevant decision tree (as opposed to calculating the nearest neighbour from scratch). Also, once the decision tree is computed, the information as to why a new sample is categorised with a nearest neighbour is readily available in the decision tree.

Analysing such trees can shed new light on the domain in question. A decision tree therefore represents knowledge about nearest neighbours.
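The following sketch (again with invented block data, chosen to match the midpoints quoted above) shows one way the alternating height/width splits and the resulting decision tree could be implemented; traversing the tree classifies the ninth block as orange without comparing it against all eight stored blocks:

```python
# Minimal sketch (assumed data): build a decision tree by splitting alternately on
# height and width so that an equal number of blocks falls on either side, then
# classify the ninth block by traversing the tree.

# (width, height, colour) -- illustrative values only
blocks = [(1, 1, "blue"), (2, 2, "red"), (4, 1, "green"), (5, 2, "yellow"),
          (2, 5, "orange"), (3, 6, "purple"), (4, 5, "brown"), (6, 6, "grey")]

def build_tree(cases, attr=1):                   # attr 1 = height, attr 0 = width
    if len(cases) == 1:
        return cases[0][2]                       # leaf: the remaining block's colour
    cases = sorted(cases, key=lambda c: c[attr])
    half = len(cases) // 2
    midpoint = (cases[half - 1][attr] + cases[half][attr]) / 2.0
    return {"attr": attr, "midpoint": midpoint,  # test node using the midpoint
            "below": build_tree(cases[:half], 1 - attr),
            "above": build_tree(cases[half:], 1 - attr)}

def classify(tree, width, height):
    while isinstance(tree, dict):                # traverse until a leaf is reached
        value = (width, height)[tree["attr"]]
        tree = tree["below"] if value < tree["midpoint"] else tree["above"]
    return tree

tree = build_tree(blocks)     # first split on height at 3.5, then width, then height
print(classify(tree, 1, 4))   # ninth block (width 1, height 4) -> "orange"
```

Because each split halves the remaining cases, classifying a new sample requires only as many tests as the tree is deep, rather than a comparison against every stored sample.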

Nearest neighbour example in bioinformatics

Typically, bioinformatics researchers want to find the most similar biosequence to another biosequence, and such biosequences can contain hundreds and possibly thousands of attributes, i.e. positions, which are candidates for helping to identify similarities between biosequences. There will typically be many more attributes than sequences, and therefore the choice of specific attributes to use as tests would be completely arbitrary and random. For this reason, nearest neighbour calculations usually take into account all the information in all positions before attributing a missing value.
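As a rough sketch of this idea, the toy sequences below (hypothetical, not taken from any real database) are compared position by position, and the query is assigned the most similar stored sequence:

```python
# Minimal sketch (hypothetical sequences): nearest-neighbour search over aligned
# biosequences, comparing all positions rather than an arbitrary subset of them.

def distance(seq_a, seq_b):
    """Number of mismatching positions between two equal-length sequences."""
    return sum(a != b for a, b in zip(seq_a, seq_b))

def nearest_sequence(query, database):
    """Return the stored sequence most similar to the query."""
    return min(database, key=lambda seq: distance(query, seq))

database = ["ACGTACGTAC", "ACGAACGTTC", "TTGTACGAAC"]   # toy examples only
print(nearest_sequence("ACGAACGTTG", database))          # -> "ACGAACGTTC" (1 mismatch)
```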

VI. ANOTHER EXAMPLE

Neural Networks in Bioinformatics

Artificial Neural Networks (ANNs) were originally conceived in the 1950s and are computational models of human brain function. They are made up of layers of processing units (akin to neurons in the human brain) and connections between them, collectively known as weights. For the sake of exposition, only the most basic neural network architecture, consisting of just an input layer of neurons and an output layer of neurons, will be considered first. ANNs are simulated by software packages which can be run on an average PC. A processing unit is based on the neuron in the human brain and is analogous to a switch. Put simply, the unit receives incoming activation, either from a dataset or from a previous layer, and makes a decision whether to propagate the activation. Units contain an activation function which performs this calculation, and the simplest of these is the step or threshold function (Figure 2).

Figure 2

However, the most common activation function used is the sigmoid function (Figure 3). This is a strictly increasing function which exhibits smoothness and asymptotic properties. Sigmoid functions are differentiable. The use of such sigmoid activation functions in multilayer perceptron networks with backpropagation contributes to stability in neural network learning.

Figure 3
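A minimal sketch of these two activation functions, assuming a single scalar activation value, is given below:

```python
# Minimal sketch: the two activation functions mentioned above -- a hard
# threshold (step) function and the smooth, differentiable sigmoid.
import math

def step(activation, threshold=0.0):
    """Propagate a full signal only if the incoming activation exceeds the threshold."""
    return 1.0 if activation > threshold else 0.0

def sigmoid(activation):
    """Strictly increasing, smooth and asymptotic; differentiable, as backpropagation requires."""
    return 1.0 / (1.0 + math.exp(-activation))

for a in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(a, step(a), round(sigmoid(a), 3))
```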

Units are connected by weights which propagate signals from one unit to the next (usually from layer to layer). These connections are variable in nature and determine the strength of the activation which is passed from one unit to the next. The modification of these weights is the primary way in which the neural network learns the data. This is accomplished in supervised learning by a method known as backpropagation, where the output from the neural network is compared with the desired output from the data, and the resulting error is used to change the weights so as to minimise the error.

The task of the ANN is to modify the weights between the input nodes and the output node (or output nodes, if there is more than one output node in the output layer) through repeated presentation of the samples with desired output. The process through which this happens is, first, feedforward, whereby the input values of a sample are multiplied by initially random weights connecting the input nodes to the output node; second, comparing the output node value with the desired (target) class value (typically 0 or 1) of that sample; and third, back-propagating an error adjustment to the weights so that the next time the sample is presented, the actual output is closer to the desired output. This is repeated for all samples in the data set and results in one epoch (the presentation of all samples once). Then the process is repeated for a second epoch, a third epoch, etc., until the feed-forward back-propagating (FFBP) ANN manages to reduce the output error for all samples to an acceptably low value. At that point, training is stopped and, if needed, a test phase can begin, whereby samples not previously seen by the trained ANN are fed into the ANN, the weights are clamped (i.e. no further adjustment can be made to the weights), and the output node value for each unseen sample is observed. If the class of the unseen samples is known, then the output node values can be compared with the known class values to determine the validity of the network. Network validity can be measured in terms of how many unseen samples were falsely classified as positive by the trained ANN when they are in fact negative (false positive rate) and vice versa (false negative rate). If the class of an unseen sample is not known, then the output node values make a prediction as to the class into which the sample falls. Such predictions may need to be tested empirically.

More formally and very generally, the training phase of an ANN starts by allocating random weights w_1, w_2, ..., w_n to the connections between the n input units and the output unit. Second, we feed the first pattern p of bits x_1(p), x_2(p), ..., x_n(p) into the network and compute an activation value for the output unit as the weighted sum O(p) = Σ_i w_i(p) x_i(p). That is, each input value is multiplied by the weight connecting its input node to the output node, and all weighted values are then summed to give us a value for the output node. Third, we compare the output value for the pattern with the desired output value and update each weight prior to the input of the next pattern: w_i(p) = w_i(p) + Δw_i(p), where Δw_i(p) is the weight correction for pattern p, calculated as Δw_i(p) = η x_i(p) e(p), with η the learning rate and e(p) = O_D(p) − O(p), where O_D(p) is the desired output for the pattern and O(p) is the actual output. This is carried out for every pattern in the dataset (usually with shuffled, or random, ordering). At that point we have one epoch. The process is then repeated from the second step above for a second epoch, and a third, etc.

Typically, an ANN is said to have converged, or learned, when the sum of squared errors (SSE) on the output nodes for all patterns in one epoch is sufficiently small (typically 0.001 or below). The equations above constitute the delta learning rule, which can be used to train single-layer networks. A slightly more complex set of equations exists for learning in ANNs with more than one layer, making use of the sigmoid function described earlier.
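The following sketch implements the delta learning rule described above for a single output unit. The patterns are synthetic (their targets are generated from a hidden linear rule), so that a single-layer network can actually drive the sum of squared errors below the 0.001 threshold:

```python
# Minimal sketch of the delta learning rule: a single output unit with weights
# w_1..w_n trained on toy patterns whose targets follow a hidden linear rule.
import random

random.seed(0)
true_weights = [0.4, -0.2, 0.7]                               # used only to generate toy targets
patterns = []
for _ in range(20):
    x = [random.uniform(0, 1) for _ in range(3)]
    target = sum(tw * xi for tw, xi in zip(true_weights, x))  # O_D(p)
    patterns.append((x, target))

weights = [random.uniform(-0.5, 0.5) for _ in range(3)]       # w_1, w_2, w_3 start random
eta = 0.1                                                     # learning rate

for epoch in range(10000):
    random.shuffle(patterns)                                  # shuffled ordering, as in the text
    sse = 0.0
    for x, desired in patterns:
        output = sum(w * xi for w, xi in zip(weights, x))     # feed-forward weighted sum O(p)
        error = desired - output                              # e(p) = O_D(p) - O(p)
        weights = [w + eta * error * xi for w, xi in zip(weights, x)]  # delta rule update
        sse += error ** 2
    if sse < 0.001:                                           # convergence on sum of squared errors
        break

print("epochs:", epoch + 1, "SSE:", round(sse, 5), "weights:", [round(w, 3) for w in weights])
```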

While many different types of neural network exist, they can generally be distinguished by the type of learning involved. There are two basic types of learning: supervised and unsupervised. In supervised learning, the required behaviour of the neural network is known (as described above). For instance, the input data might be the share prices of 30 companies, and the output may be the value of the FTSE 100 index. With this type of problem, past information about the companies' share prices and the FTSE 100 can be used to train the network. New prices can then be given to the neural network and the FTSE 100 predicted. With unsupervised learning, the required output is not known, and the neural network must make some decisions about the data without being explicitly trained. Generally, unsupervised ANNs are used for finding interesting clusters within the data. All the decisions made about those features within the data are found by the neural network. Figure 4 illustrates the architecture for a two-layer supervised ANN consisting of an input layer, a hidden layer and an output layer.

Figure 4

Unsupervised neural networks have been frequently used in bioinformatics, as they are a well-tested method for clustering. The most common technique used is the self-organising feature map (SOM or SOFM), and this learning algorithm consists of units and layers arranged in a different manner to the feedforward backpropagation networks described above. The units are arranged in a matrix formation known as a map, and every input unit is connected to every unit in this map. These map units then form the output of the neural network (Figure 5).

Figure 5
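The sketch below is a deliberately simplified self-organising map (fixed learning rate and neighbourhood radius, hypothetical input profiles); it shows the essential SOFM step of finding the best-matching unit on the map and pulling that unit and its neighbours towards the input:

```python
# Minimal sketch (not the paper's implementation): a tiny self-organising feature
# map that clusters hypothetical profiles onto a 3x3 map of units, each unit
# holding a weight vector of the same dimension as the input.
import random

random.seed(1)
dim, rows, cols = 4, 3, 3
# map units: (row, col) -> weight vector, initialised randomly
units = {(r, c): [random.random() for _ in range(dim)] for r in range(rows) for c in range(cols)}

def best_matching_unit(x):
    """Unit whose weight vector is closest (squared Euclidean distance) to the input."""
    return min(units, key=lambda u: sum((w - xi) ** 2 for w, xi in zip(units[u], x)))

def train(data, epochs=50, eta=0.3, radius=1):
    for _ in range(epochs):
        for x in data:
            br, bc = best_matching_unit(x)
            for (r, c), w in units.items():
                if abs(r - br) <= radius and abs(c - bc) <= radius:   # neighbourhood on the map
                    units[(r, c)] = [wi + eta * (xi - wi) for wi, xi in zip(w, x)]

# toy input profiles forming two obvious groups
data = [[1, 1, 0, 0], [0.9, 1, 0.1, 0], [0, 0, 1, 1], [0.1, 0, 0.9, 1]]
train(data)
for x in data:
    print(x, "->", best_matching_unit(x))       # similar inputs map to nearby units
```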

The SOFM relies on the notion that similar individuals in the input data will possess similar feature characteristics. The weights are trained to group together those individual records which possess similar features. It is this automated clustering behaviour which is of interest to bioinformatics researchers.

Numerous advantages of ANNs have been identified in the AI literature. Neural networks can perform with better accuracy than equivalent symbolic techniques (for instance decision trees) on the same data. Also, while identification tree approaches such as See5 can identify dominant factors, the importance of which can be represented by their positions high up in the tree, ANNs may be able to detect non-dominant relationships (i.e. relationships involving several factors, each of which by itself may not be dominant) among the attributes. However, there are also a number of disadvantages. Data often has to be pre-processed to conform to the requirements of the input nodes of ANNs (e.g. normalised and converted into binary form). Training times can be long in comparison with symbolic techniques. Finally, and perhaps most importantly, solutions are encoded in the weights and therefore are not as immediately obvious as the rules and trees produced by symbolic approaches.

Leukaemia Dataset

The leukaemia dataset consists of 72 individuals, with 7129 gene expression values collected from each of them. Classifications consist of individuals suffering from one of two types of leukaemia, ALL or AML. Distinguishing between these two types is very important, as they respond very differently to different types of therapy, and therefore the survival rate may well be improved by the correct classification of the leukaemia. The outward symptoms are very similar, so a method which can differentiate between the two based on gene expression profiles could help patients with the disease.

One approach uses several neural networks trained on different aspects of the data in conjunction to give a final classification. It uses the original data and two Fourier transforms of the data to train three separate "experts"; a gating network then uses a majority vote to determine the final classification of these networks. This approach yields a 90% classification rate (therefore only a 10% error) from the final network, in contrast to 75% for each individual expert.

Another approach has been to apply many different feature selection algorithms and a variety of neural networks to this leukaemia dataset. Feature selection is a method of reducing the number of features or attributes within the dataset, which reduces the time taken to train any algorithm and can be very important for neural networks. A standard neural network with three layers and 5-15 hidden units performed very well, achieving an error rate of just 2.8% on the test dataset when coupled with Pearson rank correlation feature selection. It is obvious from these approaches that neural methods are capable of classifying this data with very good accuracy. The problem with using neural networks in this manner is that, whilst the network itself can be used as a method for predicting classifications, the attributes which have been used to make the classifications cannot be easily determined. That is, the model can be used to predict the class, but we cannot easily find the genes which are responsible for that classification. This is especially the case where networks with hidden layers have been used (as in the previous experiments), because the hidden layer can find non-linear relationships between combinations of attributes and classes. Such non-linear relationships cannot be easily described in the rule format shown earlier. These were only a few examples; there are many more which demonstrate the importance of AI in Bioinformatics.
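As a rough sketch of the correlation-based feature selection step described above (using synthetic stand-in data of the same dimensions, not the real leukaemia measurements), genes can be ranked by the absolute Pearson correlation of their expression values with the class labels, and only the top-ranked genes passed on to a neural network:

```python
# Minimal sketch (synthetic stand-in data, not the actual leukaemia dataset):
# rank genes by the absolute Pearson correlation between expression values and
# the ALL/AML class labels, then keep only the top-ranked genes.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 72, 7129                      # dimensions quoted in the text
labels = rng.integers(0, 2, size=n_samples)        # 0 = ALL, 1 = AML (synthetic)
expression = rng.normal(size=(n_samples, n_genes))
expression[:, :10] += labels[:, None] * 2.0        # make the first 10 genes informative

def pearson_scores(X, y):
    """|Pearson correlation| of every gene (column of X) with the class labels y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return np.abs(r)

scores = pearson_scores(expression, labels)
top_genes = np.argsort(scores)[::-1][:50]          # reduced feature set for a neural network
print(top_genes[:10])                              # the informative genes should rank highly
```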

VII. WORKING EXAMPLE

Functional Genomics and the Robot Scientist:
- Robot scientist developed by University of Wales researchers
- Designed for the study of functional genomics
- Tested on yeast metabolic pathways
- Utilizes logical and association-based knowledge representation schemes

Figure 6.a and Figure 6.b: The Robot Scientist and Yeast Metabolic Pathways

CONCLUSION

In a nutshell, I want to conclude that although Bioinformatics is a totally different field, it depends on Artificial Intelligence for speed, accuracy, inference and prediction. Artificial Intelligence aims at providing tools and techniques which can help Bioinformatics accomplish its tasks with convenience.

ACKNOWLEDGMENT

I, Diwaker Singh, student of B.Tech-MBA (CSE), would like to thank my teacher for the corresponding subject, Mr. Vijay Kumar Garg, for his guidance during the preparation of this term paper. This term paper has surely enhanced my knowledge. I would also like to thank my friends who helped me in gathering information about the topic.



