

Anal. Chem. 2010, 82, 4307–4313

Computational Neural Networks Driving Complex Analytical Problem Solving

Grady Hanrahan

California Lutheran University



Neural network computing demonstrates advanced analytical problem solving abilities to meet the demands of modern chemical research. (To listen to a podcast about this article, please go to the Analytical Chemistry multimedia page at pubs.acs.org/page/ancham/audio/index.html.)

Learning systems inspired by the brain's neural structure exhibit intelligent behaviour without structured symbolic expressions.1 They have the ability to learn by example through highly interconnected processing elements, which is a key feature of the neural network computing paradigm.2 Foundational concepts can be traced back to seminal work by McCulloch and Pitts on the development of a sequential logic model of a neuron, in which simplified diagrams representing the functional relationships between neurons conceived as binary elements were introduced.3 Subsequent developments by Rosenblatt,4 Widrow and Hoff,5 and Minsky and Papert6 introduced the scientific community to a network based on the perceptron, the basic framework by which modern day neural network developments are conceived and complex analytical problems are solved.
Although the broad range and applicability of neural networks is established, the data-rich needs of modern chemical research demand new and more efficient networks. Fortunately, the recent development of novel learning algorithms and hybrid neural techniques, together with gains in raw computing power and speed, has opened up several fundamental lines of analytical chemistry research. For example, neural networks have provided a powerful inference engine for regression analysis and analyte quantitation,7,8 enhanced the prediction of chemical and physical properties from mid- and near-IR spectroscopy,9,10 proved effective in quantitative structure-activity relationship (QSAR) studies,11,12 and improved pattern recognition and classification capabilities.13,14 Neural networks can be regarded as an extension of the many conventional statistical tools in everyday research, with superior performance dependent upon the interconnectivity between layers and the nature and level of pre-processing of the input data. The intent of this article is to provide broad insight into the development and application of computational neural network tools that can increase the robustness and transparency of neural computing techniques as methods for routine chemical analysis and experimentation. (Table 1 provides definitions of important network terminology.)

THE BIOLOGICAL MODEL
Proper understanding of neuronal and synaptic physiology and knowledge of the complex interconnections between neurons in the brain are central to comprehending how computers can exhibit even minimal neural-network-like behaviour.15 Four main regions comprise a prototypical neuron's structure (Figure 1): the cell body (soma), dendrites, axons, and synaptic knobs. The soma and dendrites represent the location of input reception, integration, and coordination of signals arising from pre-synaptic nerve terminals. The physical and neurochemical characteristics that arise from the movement from the pre-synaptic to the post-synaptic membrane determine the strength and polarity of the electric input signals, or action potentials, that convey information in the brain. Signal propagation from the dendrite and soma occurs through the axon and down its length. Ostensibly, each neuron operates like a simple processor, with the connection structure between neurons being dynamic in nature. This adaptive connectivity provides the human brain with the ability to learn.
Table 1. Key Concept and Terminology Table

Figure 1. Neurons organized in a connected network, both receiving and sending impulses.

Figure 2. A basic multiple-input computational neuron model. Individual scalar inputs are weighted with appropriate elements w1, w2, w3, ..., wR of the weight matrix W. The sum of the weighted inputs and the bias b (equal to 1) forms the net input n, proceeds into a transfer function f, and produces the scalar neuron output a. The terms x1, x2, x3, ..., xR are the individual inputs.

THE COMPUTATIONAL MODEL
A biological neuron represented in simplified computational form is the mathematical building block of neural network models. The basic operation of these models can be illustrated by examining the multiple-input artificial neuron shown in Figure 2 and considering how interconnected processing neurons work together to produce an output function. The output of a neural network relies on the functional cooperation of the individual neurons within the network with parallel information processing. The individual inputs x1, x2, x3, ..., xR are each weighted with appropriate elements w1, w2, w3, ..., wR of the weight matrix W. The sum of the weighted inputs and the bias forms the net input n, proceeds into a transfer function f, and produces the scalar neuron output a, written as:

a = f(Wx + b)    (1)
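For readers who prefer to see the arithmetic spelled out, the following minimal Python sketch evaluates this single-neuron model with a hard threshold at zero; the input, weight, and bias values are arbitrary illustrative numbers, not taken from the article.

```python
import numpy as np

# Arbitrary illustrative values: three scalar inputs, their weights, and a bias.
x = np.array([0.5, -1.2, 3.0])   # inputs x1, x2, x3
W = np.array([0.8, 0.1, -0.4])   # weights w1, w2, w3
b = 1.0                          # bias

n = W @ x + b                    # net input n = Wx + b

# Hard-threshold transfer function described in the text:
# output 0 when the net input is below zero, otherwise 1.
a = 1 if n >= 0 else 0
print(f"net input n = {n:.2f}, neuron output a = {a}")
```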

Revisiting the biological neuron pictured above, the weight w corresponds to synapse strength, the summation represents the cell body and the transfer function, and a represents the axon signal. The output is binary and depends on whether the input meets a specified threshold T. If the total net input is <0, then the output of the neuron is 0; otherwise it is 1.

A closer look at the transfer function reveals greater insight into the way signals are processed by neurons. The transfer function is defined in the N-dimensional input space (also termed the parameter space), with the choice of transfer function strongly influencing the complexity and performance of neural networks. In general, the log-sigmoid function given by

a = 1/(1 + e^(-n))    (2)
is one of the most widely employed transfer functions. It takes the input (n) and compresses the output (a) into a defined range (0-1). The purpose of the sigmoid function is to generate a degree of nonlinearity between the neuron's input and output. Models using sigmoid transfer functions often have enhanced generalized learning characteristics and produce models with improved accuracy.16 Other commonly reported transfer functions include the hyperbolic tangent and the simple limiter function.17 Investigators are now looking beyond these commonly used functions because of a growing understanding that the choice of transfer function is just as important as the network architecture and learning algorithms. For example, radial basis functions (RBFs), real-valued functions whose value depends only on the distance from the origin, are increasingly used in modern applications. In contrast to sigmoid functions, RBFs have radial symmetry about a center, are assumed to be Gaussian-shaped, and have values decreasing monotonically with the distance between the input vector and the center of each function.
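These transfer functions are simple to express directly; the short sketch below (illustrative only, with an arbitrarily chosen RBF center and width) evaluates the log-sigmoid of eq 2 alongside the hyperbolic tangent, a simple limiter, and a Gaussian RBF.

```python
import numpy as np

def log_sigmoid(n):
    """Log-sigmoid (eq 2): compresses any net input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

def hard_limit(n):
    """Simple limiter: binary output about a zero threshold."""
    return np.where(n >= 0, 1.0, 0.0)

def gaussian_rbf(x, center, width=1.0):
    """Gaussian RBF: value depends only on the distance between the
    input vector and the function's center, decreasing monotonically."""
    return np.exp(-np.linalg.norm(x - center) ** 2 / (2.0 * width ** 2))

n = np.linspace(-5, 5, 5)
print(log_sigmoid(n))      # smooth values between 0 and 1
print(np.tanh(n))          # hyperbolic tangent, range (-1, 1)
print(hard_limit(n))       # 0/1 step output
print(gaussian_rbf(np.array([0.5, 0.5]), center=np.zeros(2)))
```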

NETWORK ARCHITECTURES
Knowledge of network connectivity and layer arrangement complements our understanding of the basic operation of a computational neuron and its ability to model complex analytical processes. The modified Rosenblatt's model of a neuron (Figure 2) is simple in structure but has a great deal of computing potential. However, its applications are limited because it only generates a binary output with fixed weight and threshold values. Connecting multiple neurons feeding forward to one output layer reveals the true computing power of neural networks. Single-layer perceptrons are easy to set up and train, and they have explicit links to statistical models, e.g., sigmoid output functions that allow a link to posterior probabilities. Nonetheless, they do have their limitations, including the inability to solve exclusive disjunction (XOR) binary functions. In 1969, Minsky and Papert created the multilayer perceptron (MLP), the first solution to the XOR problem, by combining perceptron unit responses into a much more practical structure using a second layer of units.6 More recent work by Pina et al.15 developed simple (photo)chemical systems that, when performing as threshold devices and as XOR logic gates, were capable of integrating the effects of two inputs into a single output.
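To make the XOR argument concrete, the sketch below hard-codes a two-input MLP with two hidden threshold units; the weights are a standard textbook construction rather than values from any of the cited studies, and the network reproduces the XOR truth table that no single-layer perceptron can.

```python
import numpy as np

def step(n):
    # Hard-threshold transfer function (output 1 when net input >= 0).
    return (n >= 0).astype(float)

# Hidden layer: first unit acts as OR, second as AND (textbook weights).
W_hidden = np.array([[1.0, 1.0],
                     [1.0, 1.0]])
b_hidden = np.array([-0.5, -1.5])

# Output layer: "OR and not AND", i.e., XOR.
W_out = np.array([1.0, -1.0])
b_out = -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(W_hidden @ np.array(x) + b_hidden)   # hidden-layer outputs
    y = step(W_out @ h + b_out)                   # network output
    print(x, "->", int(y))                        # prints 0, 1, 1, 0
```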
MLPs consist of an input layer, one or more hidden layers, and an output layer and can be employed to approximate any continuous function. The generalized structure of a MLP with multiple inputs/outputs is presented in Figure 3. Close examination reveals one neuron in the input layer for each predictor variable. If categorical variables are considered, N-1 neurons are used to signify the N categories of the variable. The input layer standardizes the individual input values (x1 ... xR) to a range of -1 to 1. This layer distributes the values to each of the neurons in the hidden layer. In addition to the predictor variables, a constant input of 1.0 (bias) is fed to the hidden layer. In the hidden layer, the value from each input neuron is multiplied by a given weight, and the resulting weighted values are summed. The weighted sum is then fed into a selected transfer function with final distribution to the output layer. As an example study, Gemperline and colleagues compared weighted and unweighted linear calibration methods with second-order calibration methods using quadratic principal component scores and nonlinear calibration methods employing MLPs.18 The group demonstrated that neural networks with appropriate architecture could be used to develop linear and nonlinear calibration models that performed as well as those developed by principal component regression or partial least squares.

Figure 3. Generalized architecture of a MLP, which consists of three neural layers: input, hidden, and output. Shown is a MLP with multiple inputs and outputs, with each layer connected to the subsequent layer by connection weights.

The form of nonlinear mapping that MLPs offer lends itself well to pattern recognition tasks common to both classification and regression. Traditionally, classification is accomplished using one or several well-established techniques, including statistical discriminant analysis, principal component analysis (PCA), K-NN, SIMCA, and hierarchical clustering.19 A comprehensive review by Brown et al.20 noted, "The most novel research in pattern recognition involved work with artificial neural networks." Work by Baczek et al.21 used a neural network based on a MLP for evaluation of peptide MS/MS spectra in proteomics data. The appropriately trained and validated neural network demonstrated efficient classification of the manually interpreted MS/MS spectra as "good" or "bad" and enabled the automatic processing of large amounts of MS/MS spectra, hence proving the utility of neural networks in processing routine proteomic data. In terms of calibration models, the previously mentioned study by García-Reiriz et al.7 successfully used unfolded PCA, residual bilinearization, and neural networks based on radial basis functions to examine four-way kinetic-excitation-emission fluorescence data to determine levels of malonaldehyde in olive oil samples.

In contrast to the strictly feedforward approach described above, recurrent neural networks (RNNs) have at least one feedback (closed loop) connection. Such networks have activation feedback that symbolizes short-term memory, which allows them to perform sequence recognition, sequence reproduction, and temporal association. Feedback is modified by a set of weights to enable automatic adaptation through a learning process (e.g., backpropagation). A state layer is updated with the external input of the network, as well as with activation from the previous forward propagation. Internal states retain previous information and use this historical record, or "memory", when functioning under new inputs. Interest in such memories has been spurred by the seminal work of Hopfield in the early 1980s, which showed how a simple discrete nonlinear dynamical system could exhibit associative recall of stored binary patterns through collective computing.22,23
Depending on the density of the feedback connections, present RNNs can take on a simple (partial) recurrent architecture (e.g., Elman model, Jordan model) or a fully (total) recurrent architecture (Hopfield model). In simple RNNs, partial recurrence is created by feeding back delayed hidden unit outputs or the outputs of the network as additional model inputs.24 Groups are updated in the order in which they appear in the network's group array. Hopfield networks use fully interconnected architectures and learning algorithms that can deal with time-varying input and/or output in complex fashion.22 For example, Yang and Griffiths25 employed a Hopfield network for encoding FTIR spectra and identifying compounds in open-path FTIR measurements. The Hopfield network was able to identify very similar spectra (e.g., the vapor-phase spectra of ethanol and 1-butanol) when they were stored as prototype patterns in the network. The group's previous attempts to use multilayer feedforward neural networks for identification were unsuccessful.

Kohonen's self-organizing map (SOM) is one of the most popular neural network methods used to classify and cluster data.26 In neural computing terms, self-organization is the modification of synaptic strengths and underlies the dynamics and behavioral modification of pattern recognition of higher-level representations. The SOM is a topology-preserving map that displays a feedforward structure with a single computational layer arranged in rows and columns in which each neuron is fully connected to all the sources in the input layer (Figure 4). The SOM combines two paradigms of unsupervised learning: 1) clustering (grouping data by similarity) and 2) projection methods (extraction of explanatory variables). With these functional characteristics comes the ability to map nonlinear relationships among high-dimensional input data into simple geometric relationships on a 2D grid.27 Some in the analytical community believe that SOMs resemble biological neurons most closely and provide greater flexibility than traditional MLP networks for adaptation to different types of problems, including classification activities.28,29 In terms of example applications, Lavine et al.30 demonstrated the utility of SOMs for spectral pattern recognition in two separate studies. In the first, Raman spectroscopy and SOMs differentiated six common household plastics by type for recycling purposes. The second study developed a method to differentiate acceptable lots from unacceptable lots of the food additive avicel using diffuse reflectance NIR spectroscopy and SOMs. Latino and Aires-de-Sousa31 performed classification of photochemical and metabolic reactions by Kohonen SOMs and random forests by inputting the difference between the 1H NMR spectra of the products and the reactants.

Figure 4. A Kohonen's SOM displaying a feedforward structure with a single computational layer arranged in rows and columns.
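A minimal SOM in the spirit of Figure 4 can be written in a few lines; in the sketch below the map size, learning-rate and neighborhood schedules, and the random input data are all arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 6 x 6 grid of neurons, each fully connected to a 3-dimensional input.
rows, cols, dim = 6, 6, 3
weights = rng.random((rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

data = rng.random((500, dim))            # unlabeled input vectors

for t, x in enumerate(data):
    lr = 0.5 * np.exp(-t / 500)          # decaying learning rate
    sigma = 2.0 * np.exp(-t / 500)       # decaying neighborhood width

    # Best-matching unit: the neuron whose weight vector is closest to x.
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)

    # Move the BMU and its grid neighbors toward x (topology preservation).
    grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
    weights += lr * h[..., None] * (x - weights)

print(weights[0, 0])   # weight vector of the top-left map neuron
```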
MODEL DESIGN AND NETWORK TRAINING
Neural network models can approximate virtually any measurable function up to an arbitrary degree of accuracy.32 With this flexibility comes the task of selecting the most appropriate model for a given application. This is especially true for applications in analytical chemistry given the unpredictable availability, quality, representativeness, and size of input data sets. Moreover, the relationships between given input variables and the output properties afford a plenitude of available information from which to model nonlinear chemical processes.33 Analytical chemists can develop and implement effective neural network techniques based on a sound model framework such as that shown in Figure 5.

Figure 5. Model framework for neural network development and final application.

Because variability and missing data affect learning algorithms, data pre-processing techniques such as data imputation, linear and logarithmic scaling, and PCA are essential.34 In addition, the inherent complexity of modern analytical data sets requires suitable input feature selection to ensure robust model development. For example, MLPs are ideally suited for various classification tasks in which the MLP classifier performs mapping from an input (feature) space into an output space. Support vector machine (SVM) models, considered by many to have a close relation to MLP networks, provide an alternative training method for radial basis function and MLP classifiers by solving a quadratic programming problem with linear constraints. SVMs provide a nonspecific mechanism to fit the surface of the hyperplane to the data through a kernel function, which, if we again consider classification, implicitly defines pattern classes by introducing the concept of similarity between data.35

Model generalization is an important aspect of network development; detailed understanding of this concept is vital for proper data subset selection. Generalization refers to the ability of model outputs to approximate target values given inputs that are not in the training set. In practice, a good balance in the allocation of the input data set is important. The training and test sets (split roughly 70/30 between them) routinely comprise 90% of the total, with the additional 10% set aside for the validation procedure. The training data set is used for model fitting in computing the gradient and updating the network weights and biases. The validation set is used in modeling assessment, during which the error on the validation set is monitored during the training process. Finally, the test data set is used after the training process to make a final assessment of the fit model and success in generalization. Because supervised networks require training and testing data sets, statistical resampling methods such as bootstrap and jackknife methods are often needed for unbiased selection from the available data. By using statistical resampling in conjunction with initial state sampling, a computational neural network can be trained on an ensemble of different training sets and the performance (cross-validation) evaluated on the complement testing sets.36
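The data-allocation and resampling bookkeeping described above might be sketched as follows; the data set here is synthetic, and the 70/30/10 proportions simply follow the split discussed in the text.

```python
import numpy as np

rng = np.random.default_rng(42)

n_samples = 200                            # hypothetical data set size
X = rng.random((n_samples, 8))             # e.g., eight descriptor inputs
y = rng.random(n_samples)                  # target property

idx = rng.permutation(n_samples)
n_val = int(0.10 * n_samples)              # 10% held out for validation
n_train = int(0.70 * (n_samples - n_val))  # remaining 90% split ~70/30

val_idx = idx[:n_val]
train_idx = idx[n_val:n_val + n_train]
test_idx = idx[n_val + n_train:]

# A simple bootstrap resample of the training set (sampling with replacement),
# one member of the ensemble of training sets mentioned in the text.
boot_idx = rng.choice(train_idx, size=train_idx.size, replace=True)

print(len(train_idx), len(test_idx), len(val_idx))   # e.g., 126, 54, 20
```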
Learning and organization are essential to neural network architecture, with the choice of a learning algorithm central to proper network development. Neural network connection strengths are adjusted iteratively according to prediction error; performance is improved by sufficient and properly processed input data fed into the model, as well as an appropriately defined learning rule. Hebb's rule,37 which states that in biological systems, learning proceeds via the adaptation of the strengths of the synaptic interactions between neurons, has been modified numerous times. The most popular supervised learning algorithm is backpropagation, which is used for learning in feedforward networks and was developed to overcome some of the limitations of the perceptron by allowing training for multilayer networks. In order to perform backpropagation, one must minimize the overall error E by manipulating weights and employing gradient descent in the weight space to locate the optimal solution. In unsupervised learning, the weights and biases are modified in response to network inputs only. The central backbone to all unsupervised learning algorithms is Hebb's rule. Oja subsequently proposed a similar rule that could be used for the unsupervised training of a single nonlinear unit.38
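A stripped-down version of backpropagation, gradient descent on the overall error E for a single-hidden-layer network, is sketched below; the sine-curve training data, layer sizes, and learning rate are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression problem: learn y = sin(x) on [0, pi] (illustrative data only).
X = rng.uniform(0, np.pi, size=(200, 1))
y = np.sin(X)

def sigmoid(n):
    return 1.0 / (1.0 + np.exp(-n))

# One hidden layer with 8 log-sigmoid units and a linear output unit.
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.1

for epoch in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)               # hidden-layer activations
    y_hat = h @ W2 + b2                    # network output
    err = y_hat - y
    E = np.mean(err ** 2)                  # overall error E (mean square error)

    # Backward pass: gradients of E with respect to each weight and bias
    d_out = 2 * err / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * h * (1 - h)   # chain rule through the sigmoid
    dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0)

    # Gradient descent in weight space
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training MSE: {E:.4f}")
```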
In terms of complexity, conventional methods of training are considered computationally intensive optimization problems based on a set of training cases. Bayesian learning presents an alternative form of training in which "prior" probabilities, those based on existing information (e.g., previously collected data, expert judgment), are assigned.39,40 An application by Moczko et al.41 used Bayesian regularization to develop a neural network for processing 3D spectrofluorimeter data as a response to simulating changes in experimental parameters. The optimal model showed great ability in adapting and modeling the nonlinear changes in a mixture of fluorescent dyes. When applied to MLP neural networks, the Bayesian approach with Markov chain Monte Carlo approximation has been shown to provide better generalization capabilities than standard neural networks,42 as well as to permit propagation of uncertainty in quantities that are unknown to other assumptions in the models of interest.40

The process of choosing the proper architecture as part of the model selection procedure requires the selection of both the appropriate number of hidden nodes and the connections within. Determining the best number of hidden nodes depends on a complex array of factors including: 1) the number of input and output units, 2) the number of training patterns, 3) the amount of noise present in the training data, 4) the complexity of the function to be learned, and 5) the training algorithm employed. Choosing too many results in overfitting and poor generalization unless some form of overfitting prevention (e.g., regularization) is applied. The selection of too few hidden nodes will likely result in high training and generalization errors as a result of underfitting. Hwang and Ding43 realized the difficulty in constructing prediction and confidence intervals for neural networks and showed that by constructing asymptotically valid prediction intervals, one could use these intervals to choose the correct number of nodes. In cross-validation and resampling methods, estimates of generalization errors can be used as model selection criteria. More established information criteria make explicit use of the number of network weights in a model. Some form of measured fit with a penalty term is employed to find an optimal trade-off between an unbiased approximation of the underlying model and the loss of accuracy caused by parameter estimation.44

Model validation and sensitivity analysis are two final steps that are routinely employed before models can be used for relevant applications. The validation process typically involves subjecting the neural network model to an independent set of validation data that were not used during training. This is necessary to provide an expectation of model performance after deployment: in other words, a network's ability to produce accurate solutions to examples not housed in the data reserved for the training process.45,46
Various error measures are used to assess the accuracy of the model, including the mean square error, with output values compared to target values and the prediction error calculated. Other measures include the root mean squared error, mean absolute error, and the mean relative error. Sensitivity analysis is a method used to extract the features most accountable for the decision of the neural network by measuring the change in the network's response with a change in each input variable or the gradient of the network output.47 For example, Judge et al.13 employed neural networks with sensitivity analysis as pattern recognition tools to elucidate the relationship between functional groups and IR spectral features. Here, the partial derivative of the network's output with respect to each input variable was evaluated.
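A generic way to approximate such input sensitivities is by finite differences on a trained network's forward pass; the sketch below is illustrative only (the `predict` function and its random weights stand in for any trained model and are not the procedure of ref 13).

```python
import numpy as np

def predict(x, W1, b1, W2, b2):
    """Forward pass of a small trained network (weights assumed to be given)."""
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    return (h @ W2 + b2)[0]

def sensitivities(x, params, eps=1e-4):
    """Approximate the partial derivative of the network output with respect
    to each input variable using central finite differences."""
    grads = np.zeros_like(x)
    for i in range(x.size):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        grads[i] = (predict(x_plus, *params) - predict(x_minus, *params)) / (2 * eps)
    return grads

# Arbitrary example weights for a 4-input, 6-hidden-unit, 1-output network.
rng = np.random.default_rng(7)
params = (rng.normal(size=(4, 6)), np.zeros(6), rng.normal(size=(6, 1)), np.zeros(1))
x0 = np.array([0.2, -1.0, 0.5, 1.5])
print(sensitivities(x0, params))   # one sensitivity value per input variable
```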
HYBRID NEURAL MODELS
The analytical chemistry community is quickly moving toward hybrid neural models. Such models combine two or more paradigms to realize powerful analytical problem solving strategies by overcoming common weaknesses of single neural network approaches. For example, evolutionary artificial neural networks evolve toward the fittest in a task environment, thus eliminating the tedious trial-and-error approach of manually finding an optimal model for a given task at hand.48,49 One of the most popular hybrid models involves the incorporation of genetic algorithms (GAs). A GA is a highly parallel search tool that emulates evolution according to the Darwinian survival of the fittest principle. A population of individuals, each representing the parameters of the problem to be optimized as a string of numbers or binary digits, undergoes a process analogous to evolution to derive an optimal or near-optimal solution. The parameters stored by each individual are used to assign its fitness: a single numerical value indicating how well the solution using that set of parameters performs. New individuals are generated from members of the current population by processes akin to asexual and sexual reproduction.50
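The GA loop described above can be sketched compactly; in the example below the fitness function, population size, and mutation scale are hypothetical placeholders rather than any of the published optimizations.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(params):
    """Placeholder fitness: how well a candidate parameter set performs.
    In practice this might be, e.g., a trained network's validation error."""
    target = np.array([0.7, 0.2, 0.9])
    return -np.sum((params - target) ** 2)

pop_size, n_params, n_generations = 30, 3, 100
population = rng.random((pop_size, n_params))      # individuals = parameter strings

for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in population])
    # Selection: keep the fittest half (survival of the fittest).
    parents = population[np.argsort(scores)[::-1][: pop_size // 2]]
    # Crossover (sexual reproduction): mix parameters from two parents.
    pairs = rng.integers(0, len(parents), size=(pop_size - len(parents), 2))
    mask = rng.random((len(pairs), n_params)) < 0.5
    children = np.where(mask, parents[pairs[:, 0]], parents[pairs[:, 1]])
    # Mutation (asexual variation): small random perturbations.
    children += rng.normal(0, 0.05, children.shape)
    population = np.vstack([parents, np.clip(children, 0, 1)])

best = population[np.argmax([fitness(ind) for ind in population])]
print("best parameter set:", best)
```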
A 2003 study by Petritis et al.51 used neural networks for predicting the reversed-phase LC retention times of peptides enzymatically digested from proteome-wide proteins. In this study, a GA was developed to normalize the peptide retention data into a range (0-1), improving the peptide elution time reproducibility to ∼1%. A more recent study by our group used a neural network-GA approach to optimize on-capillary dipeptide derivatization.52 More specifically, for the maximum conversion of the dipeptide D-Ala-D-Ala by phthalic anhydride, genetic optimization proved valuable in the determination of effective network structure with three defined parameter inputs: 1) phthalic anhydride injection volume, 2) time of injection, and 3) voltage. Results obtained from the hybrid approach proved superior to a feedforward neural model with backpropagation in terms of training data and predictive ability.

The development of adaptive neuro-fuzzy inference systems (ANFIS) has also been well received in the analytical community. A fuzzy system uses membership functions, fuzzy logic operators, and if-then rules to map a given input to an output.53 Associated fuzzy sets are sets whose elements have degrees of membership defined with membership functions valued in the real unit interval.54 Each fuzzy rule describes a local behavior of the system under study. The ANFIS approach has been particularly useful in QSAR studies. Here, the premise behind the ANFIS approach is to develop a model or mapping that will correctly associate the inputs (descriptors) with the target (activity), as evident in recent papers by Mazzatorta et al.55 and Jalali-Heravi and Kyani.11 Both studies conveyed the strengths of the hybrid ANFIS approach, including fast and accurate learning, good generalization capabilities, and the ability to accommodate both data and existing expert knowledge about the problem under consideration.

OUTLOOK
The analytical problem solving performance of computational neural networks is becoming increasingly evident, with accurate and efficient modeling of complex problems realized across nearly the whole spectrum of the analytical chemistry discipline. Such efforts have been catalyzed by the upsurge in computational power and availability and the co-evolution of software, algorithms, and methodologies. Much less developed is the goal of perfecting neural networks to create a truly remarkable artificial intelligence system. Non-biological intelligence is now a mainstream topic, with Spector arguing for increased awareness of evolved artificial intelligence in the sciences.56 He writes, "The problem-solving performance of evolutionary algorithms has advanced significantly in the past decade or so, to the extent that human-competitive results have recently been achieved in several areas of science and engineering."

Although the power of traditional GAs in network optimization has been realized, they have not produced artificial intelligence in practice. From this perspective, I believe the future of neural networks and their relevance in analytical chemistry lie in the willingness and ability of users to adopt more advanced evolutionary computational techniques, including evolutionary programming for the design of appropriate network structure. Whether or not the computational power of such techniques is sufficient for the design and construction of truly intelligent neural systems is of obvious debate. But perhaps their real contribution will be to open the neural network "black box", whose inner workings have traditionally been concealed from the researcher. By encouraging such progression to continue, the analytical chemistry community will no doubt gain a more complete picture of how neural networks function and how powerful they can be in solving complex chemical problems.

ACKNOWLEDGMENT
The author acknowledges support from the John Stauffer Charitable Trust and the National Science Foundation (CHE-0922558). He thanks Jennifer Arceo for assistance with the literature search and his collaborators for their continued acceptance of this field.

Grady Hanrahan is the John Stauffer Endowed Professor of Applied Analytical Chemistry at California Lutheran University. His research focuses on the development of chemometric and neural computational techniques for instrument optimization and modeling of complex chemical and environmental systems. Address correspondence to him at California Lutheran University, Department of Chemistry, Thousand Oaks, California 91360 (ghanraha@clunet.edu).
fuzzy sets are sets whose elements have degrees of membership (1) Rumelhart, D. E.; McClelland, J. L. Parallel Distributed Processing: Explora-
(1) Rumelhart, D. E.; McClelland, J. L. Parallel Distributed Processing: Explorations in the Microstructure of Cognition; MIT Press: Cambridge, MA, 1986.


(2) Cottrell, G. W. Science 2006, 313, 454–455.
(3) McCulloch, W.; Pitts, W. Bull. Math. Biophys. 1943, 5, 115–133.
(4) Rosenblatt, F. Psychol. Rev. 1958, 65, 386–408.
(5) Widrow, B.; Hoff, M. E. IRE WESCON Convention Record 1960, 4, 96–104.
(6) Minsky, M. L.; Papert, S. A. Perceptrons; MIT Press: Cambridge, MA, 1969.
(7) García-Reiriz, A.; Damiani, P. C.; Olivieri, A. C.; Cañada-Cañada, F.; Muñoz de la Peña, A. Anal. Chem. 2008, 80, 7248–7256.
(8) García-Reiriz, A.; Damiani, P. C.; Culzoni, M. J.; Goicoechea, H. C.; Olivieri, A. C. Chemom. Intell. Lab. Syst. 2008, 92, 61–70.
(9) Janik, L. J.; Forrester, S. T.; Rawson, A. Chemom. Intell. Lab. Syst. 2009, 97, 179–188.
(10) Rantanen, J.; Räsänen, E.; Antikainen, O.; Mannermaa, J.-P.; Yliruusi, J. Chemom. Intell. Lab. Syst. 2001, 56, 51–58.
(11) Jalali-Heravi, M.; Kyani, A. QSAR Comb. Sci. 2007, 26, 1046–1059.
(12) Mosier, P. D.; Counterman, A. E.; Jurs, P. C.; Clemmer, D. E. Anal. Chem. 2002, 74, 1360–1370.
(13) Judge, K.; Brown, C. W.; Hamel, L. Anal. Chem. 2008, 80, 4168–4192.
(14) Sugimoto, M.; Kikuchi, S.; Arita, M.; Soga, T.; Nishioka, T.; Tomita, M. Anal. Chem. 2005, 77, 78–84.
(15) Pina, F.; Melo, M. J.; Maestri, M.; Passaniti, P.; Balzani, V. J. Am. Chem. Soc. 2000, 122, 4496–4498.
(16) Schalkoff, R. J. Artificial Neural Networks; McGraw-Hill: New York, 1997.
(17) Burden, F. R. J. Chem. Inf. Comput. Sci. 1994, 34, 1229–1231.
(18) Gemperline, P. J.; Long, J. R.; Gregoriou, V. G. Anal. Chem. 1991, 63, 2313–2323.
(19) Lavine, B.; Workman, J. Anal. Chem. 2006, 78, 4137–4145.
(20) Brown, S. D.; Blank, T. B.; Sum, S. T.; Weyer, L. G. Anal. Chem. 1994, 63, 207R–228R.
(21) Baczek, T.; Buciński, A.; Ivanov, A. R.; Kaliszan, R. Anal. Chem. 2004, 76, 1726–1732.
(22) Hopfield, J. J. Proc. Natl. Acad. Sci. U.S.A. 1982, 79, 2554–2558.
(23) Hopfield, J. J. Proc. Natl. Acad. Sci. U.S.A. 1984, 81, 3088–3092.
(24) Elman, J. L. Cogn. Sci. 1990, 14, 179–211.
(25) Yang, H.; Griffiths, P. R. Anal. Chem. 1999, 71, 3356–3364.
(26) Kohonen, T. Self-Organizing Maps, 3rd ed.; Springer-Verlag: New York, 2001.
(27) Hammer, B.; Micheli, A.; Sperduti, A.; Strickert, M. Neural Networks 2004, 17, 1061–1085.
(28) Zupan, J.; Novič, M.; Ruisánchez, I. Chemom. Intell. Lab. Syst. 1997, 38, 1–23.
(29) Zupan, J.; Gasteiger, J. Neural Networks in Chemistry and Drug Design; Wiley: New York, 1999.
(30) Lavine, B. K.; Davidson, C. E.; Westover, D. J. J. Chem. Inf. Comput. Sci. 2004, 44, 1056–1064.
(31) Latino, D. A. R. S.; Aires-de-Sousa, J. Anal. Chem. 2007, 79, 854–862.
(32) Harrington, P. B.; Urbas, A.; Wan, C. Anal. Chem. 2000, 72, 5004–5013.
(33) Harrington, P. B. Anal. Chem. 1998, 70, 1297–1306.
(34) Pelckmans, K.; De Brabanter, J.; Suykens, J. A. K.; De Moor, B. Neural Networks 2005, 18, 684–692.
(35) Amari, S.; Wu, S. Neural Networks 1999, 12, 783–789.
(36) Piotrowski, P. L.; Sumpter, G. G.; Malling, H. V.; Wassom, J. S.; Lu, P. Y.; Brothers, R. A.; Sega, G. A.; Martin, S. A.; Parang, M. J. Chem. Inf. Model. 2007, 47, 676–685.
(37) Hebb, D. O. The Organization of Behaviour; John Wiley & Sons: New York, 1949.
(38) Oja, E. J. Math. Biol. 1982, 15, 267–273.
(39) Mackay, D. J. C. Neural Comput. 1992, 4, 415–447.
(40) Lampinen, J.; Vehtari, A. Neural Networks 2001, 14, 257–274.
(41) Moczko, E.; Meglinski, I. V.; Bessant, C.; Piletsky, S. A. Anal. Chem. 2009, 81, 2311–2316.
(42) Penny, W. D.; Roberts, S. J. Neural Networks 1999, 12, 877–892.
(43) Hwang, J. T. G.; Ding, A. A. J. Am. Stat. Assoc. 1997, 92, 748–757.
(44) Anders, U.; Korn, O. Neural Networks 1999, 12, 309–323.
(45) Olivieri, A. C.; Faber, N. M. Validation and Error. In Comprehensive Chemometrics; Brown, S., Tauler, R., Walczak, B., Eds.; Elsevier: Amsterdam, 2009; pp 91–120.
(46) Maier, H. R.; Dandy, G. C. Environ. Model. Softw. 2000, 15, 101–124.
(47) Harrington, P. D. B.; Urbas, A.; Wan, C. Anal. Chem. 2000, 72, 5004–5013.
(48) Holena, M.; Cukic, T.; Rodemerck, U.; Linke, D. J. Chem. Inf. Model. 2008, 48, 274–282.
(49) Scott, D. J.; Manos, S.; Coveney, P. V. J. Chem. Inf. Model. 2008, 48, 262–273.
(50) Goodacre, R.; Shann, B.; Gilbert, R. J.; Timmons, E. M.; McGovern, A. C.; Alsberg, B. K.; Kell, D. B.; Logan, N. A. Anal. Chem. 2000, 72, 119–127.
(51) Petritis, K.; Kangas, L. J.; Ferguson, P. L.; Anderson, G. A.; Paša-Tolić, L.; Lipton, M. S.; Auberry, K. J.; Strittmatter, E. F.; Shen, Y.; Zhao, R.; Smith, R. D. Anal. Chem. 2003, 75, 1039–1048.
(52) Riveros, T.; Hanrahan, G.; Muliadi, S.; Arceo, J.; Gomez, F. A. Analyst 2009, 134, 2067–2070.
(53) Otto, M. Anal. Chem. 1990, 62, 797A–802A.
(54) Amini, M.; Abbaspour, K. C.; Berg, M.; Winkel, L.; Hug, S. J.; Hoehn, E.; Yang, H.; Johnson, A. Environ. Sci. Technol. 2008, 42, 3669–3675.
(55) Mazzatorta, P.; Benfenati, E.; Neagu, C.-D.; Gini, G. J. Chem. Inf. Comput. Sci. 2003, 43, 513–518.
(56) Spector, L. Artif. Intell. 2006, 170, 1251–1253.

