Van Leeuwen et al.
CML++ filter
Visual Illusions, Solid/outline-Invariance, and Nonstationary Activity Patterns
Cees van Leeuwen
University of Sunderland, UK
cees-van.leeuwen@sunderland.ac.uk
Steven Verver
Basket Builders, BV., Amsterdam, NL
Martijn Brinkers
Tryllian, Amsterdam, NL
Keywords:
Boundary Contour system, perception, neural network, simulation
Acknowledgement: The authors would like to thank the reviewers of Connection Science for their helpful suggestions.
Abstract
Coupled Map Lattices (CML) offer a new framework for modelling visual information processes. The
framework involves computing with nonstationary patterns of synchronised activity. In this framework,
structural features of the visual field emerge through the lateral interaction of locally coupled nonlinear
maps. Invariant representations develop independently of top-down or re-entrant feedback. These
representations distort certain features of the pattern, giving rise to visual field illusions. Boundary
contours, among other features, are emphasised, which suggests that special cases of the boundary-contour
problem could be solved by the system. Simulation studies were performed to test the hypothesis that the system
represents visual patterns in a solid/outline-invariant manner. A standard back-propagation neural network
trained with a CML-filtered set of solid images and tested with CML-filtered outline versions of the same
set of images (or vice versa) showed perfect generalisation. Generalisation failed to occur for unfiltered or
contour-filtered images. The CML representations, therefore, were concluded to be solid/outline invariant.
1. Introduction
Edge, or boundary, detection is commonly understood to be the first, primitive stage of visual object
recognition (Marr, 1982). Literally dozens of edge detection algorithms have been described in the
literature (see Shin et al., 1998, and Khvorostov et al., 1996, for reviews). Most of these compute gradients
for each local region of an image (e.g. Canny, 1983). Representations based on gradients differ between
solid and outline versions of the same image. While seeing their equivalence is an easy task for human
observers (Kennedy, Nicholls, & Desrochers, 1995), it is nontrivial for machines, as the overlap between
solid and outline versions of the same image is minimal.
Starting from gradient information, subsequent processing would need to take nonlocal
information into account in order to complete the task. One way in which this problem can be solved is by
distinguishing boundary from internal contours. In this perspective, solid/outline invariance is a special
case of the boundary-contour detection problem. By far the most successful algorithm for detecting
boundary contours is that of Grossberg and Mingolla (1985). In this system, boundary contours are obtained
through a combination of lateral competition and interactive feedback. The latter involves top-down
mediation from stored knowledge of object shapes.
Such a process goes through several iterative feedback cycles before reaching an equilibrium
that represents the solution to the problem. Although we are, in general, sympathetic to the notion of
interactive processing, such a solution to our problem is counterintuitive from a psychological point of
view. Human observers can easily perceive the equivalence of solid and outline versions, even of entirely
unfamiliar shapes. Hebb (1937) reared rats in the dark. In the absence of any prior exposure to patterns,
the animals perceived the identity of solid and outline triangles without any difficulty.
More generally, we doubt that models whose relevant system states are static equilibria are
the right kind of approach for visual information processing. We believe that computation with
nonstationary, chaotic patterns of activity can be more flexible and efficient (van Leeuwen, Steyvers, &
Nooter, 1997). Solid/outline invariance was chosen to demonstrate the viability of this approach. The
approach circumvents the need for gradient detection, static equilibria, and iterative feedback. It is agreed
that the invariance requires nonlocal information. But in the proposed system, this information is readily
obtained from lateral interactions only.
2. CML
We propose coupled map lattices (CML) as models of the perceptual system. CMLs consist of
coupled units with an activation function updated in discrete time. The activation function is an arbitrary
nonlinear function of previous activation and input. The presently used system is shown in Equations 1-2.
In Equation 1, the activation value of the i-th unit at time t+1, x_i(t+1), is a nonlinear function of the
netinput net_i. The netinput of the i-th unit is a weighted sum of the activation value of that unit at time t
and those of all connected units. For each individual unit, the activation function is controlled by a
parameter A_i, which modulates the behaviour of the activation function of the i-th unit. The parameter
c_i,j represents the connection strength between the i-th and j-th units. B(i) is the set of units connected to
unit i, and n is the number of units in B(i).
Equations 1-2 can be considered a coupled version of the well-known logistic map. The dynamical
properties of such systems are relatively well explored (Kaneko, 1983, 1984, 1989a, 1989b; Waller &
Kapral, 1984; Schult et al., 1987). In a near-chaotic regime called spatiotemporal intermittency, these systems
produce and annihilate activity patterns spontaneously, without ever reaching a stationary equilibrium.
Such regimes may have a degree of flexibility that is optimal for information processing.
In these regimes, network units have a tendency to move in and out of synchronised states
intermittently. The A and c parameters can be used to induce local biases on synchronisation behaviour.
When two connected units have the same A value, they tend to synchronise. Units which run with a lower
A-value have an increased tendency to synchronise through their coupling, for a given value of the coupling
strength c. The larger the value of c, the more frequent, global, and persistent the synchronisations are.
With fixed, uniform values of c, the strongly chaotic oscillation induced by a high A value biases a unit
towards brief and infrequent synchronisation.
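These synchronisation tendencies can already be seen in a two-unit reduction of Equations 1-2 (a minimal sketch of our own, not taken from the paper; with a single neighbour, n − 1 = 1, so the netinput is simply a c-weighted mixture of the two activations). With symmetric coupling c = 0.5 the two units receive identical netinputs and synchronise in a single step, however different their initial states:

```python
def step(x1, x2, a=3.7, c=0.5):
    """One update of two mutually coupled logistic units
    (two-unit reduction of Equations 1-2, with n - 1 = 1)."""
    net1 = (1.0 - c) * x1 + c * x2  # Equation 2 for unit 1
    net2 = (1.0 - c) * x2 + c * x1  # Equation 2 for unit 2
    return a * net1 * (1.0 - net1), a * net2 * (1.0 - net2)  # Equation 1

x1, x2 = 0.2, 0.9              # very different initial activations
for _ in range(3):
    x1, x2 = step(x1, x2)
print(abs(x1 - x2))            # 0.0: fully synchronised
```

With lower values of c, the units drift apart and re-synchronise only intermittently, in line with the description above.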
These observations were considered relevant for visual information processing. Synchronisation
of oscillatory activity has been claimed as the mechanism for feature binding (Phillips & Singer, 1997;
Singer, 1990). This claim has been disputed by those who emphasise that synchronisation mechanisms in
neural networks are in general too slow to capture the fast feature binding in the visual system, and in
particular in the primary visual cortex (Lamme & Spekreijse, in press). Synchronisation in intermittent
systems, however, will be shown to be fast enough to qualify as a binding mechanism in visual systems.
For a simple visual model, it may be assumed that sensory input operates on the oscillation
parameters A of the units. An input array of pixels representing sensory stimulation can be mapped
topographically onto a slab of locally connected units. We assume that the A-values of the units remain in
the chaotic range (for instance, [3.7, 3.8]). With no stimulation, A is set at its maximum; with stimulation,
the A-value is proportionally lowered. As a result, there will be an increased synchronisation bias when input
is of higher intensity and when the input to neighbouring units is the same.
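The stimulation-to-parameter mapping just described can be written as a one-liner (our own illustration; the linear form and the 0-1 intensity scale are assumptions, with only the endpoints 3.7 and 3.8 taken from the text):

```python
def a_value(intensity, a_min=3.7, a_max=3.8):
    """Map stimulus intensity in [0, 1] onto the oscillation parameter A:
    no stimulation -> maximum A (3.8); stimulation lowers A proportionally,
    keeping it inside the chaotic range [3.7, 3.8]."""
    return a_max - (a_max - a_min) * intensity

a_white = a_value(0.0)  # unstimulated unit: A = 3.8
a_black = a_value(1.0)  # fully stimulated unit: A = 3.7
```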
With fixed weights, as in Equations 1-2, the system acts as a non-adaptive filter for sensory
information. It is also possible, however, to extend the system of Equations 1-2 by an adaptive algorithm.
Like neural networks, these systems enable Hebbian adaptive coupling mechanisms to control the
formation and storage of patterns. Adaptive CML systems (CML++) were introduced in our earlier
studies. Several alternative algorithms were found to have the same kind of adaptive behaviour. We
present one as an example in Equations 3-5 (Simionescu & van Leeuwen, submitted).
x_i(t+1) = A_i net_i(t) (1 - net_i(t))                                                   (1)

net_i(t) = Σ_{j ∈ B(i)} [ ((1 - c_i,j) / (n - 1)) x_i(t) + (c_i,j / (n - 1)) x_j(t) ]    (2)

diff_i,j(t+1) = G diff_i,j(t) + (1 - G) d_i,j                                            (3)

w_i,j = 1 - 1 / (1 + exp(-H_1 (2 diff_i,j / H_2 - 1)))                                   (4)

c_i,j = w_i,j C_max                                                                      (5)
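A direct transcription of the nonadaptive system of Equations 1-2 might look as follows (a sketch in plain Python, not the authors' code; leaving border units unchanged and counting n as the neighbourhood including the unit itself are our assumptions):

```python
def cml_step(x, a, c):
    """One synchronous update of Equations 1-2 on a 2-D lattice of
    activations x with per-unit parameters a and uniform coupling c.
    Each interior unit is coupled to its eight direct neighbours."""
    rows, cols = len(x), len(x[0])
    new = [row[:] for row in x]        # border units left unchanged here
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            nbrs = [x[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)]
            n = len(nbrs) + 1          # B(i) taken to include unit i itself
            net = sum(((1 - c) / (n - 1)) * x[i][j]
                      + (c / (n - 1)) * xn for xn in nbrs)   # Equation 2
            new[i][j] = a[i][j] * net * (1 - net)            # Equation 1
    return new

# Activity stays bounded in [0, 1] while evolving from step to step:
x = [[0.5] * 6 for _ in range(6)]
a = [[3.75] * 6 for _ in range(6)]
for _ in range(100):
    x = cml_step(x, a, 0.2)
```

Because the netinput is a convex combination of activations, x remains in [0, 1] for A in the chaotic range.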
The system updates the values of its connection strength parameters c_i,j according to the
coherence of the activity in the i-th and j-th units. The connections between units are changed dynamically
during a run. In Equation 3, d_i,j is the current difference between units i and j, and diff_i,j represents the
history of differences between those units. Equation 3, through the parameter G, acts as a leaky
integrator and determines the flexibility of the weights. A low value of G gives a large influence to the
current difference d_i,j; weight updates will then be faster, but less smooth, than with high values of G. Note
that with G = 1 and all initial values of diff_i,j set to zero, all coupling strengths c_i,j take the same
uniform value C_max. With these parameter settings, the adaptive system of Equations 1-5 behaves
nonadaptively, as in Equations 1-2. In the present simulations, adaptive characteristics are not the focus of
investigation. We therefore used the simple, nonadaptive system of Equations 1-2 with uniform
connection weights.
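The adaptive update of Equations 3-5 for a single pair of units can be sketched as follows (our own transcription; the values of H1, H2 and the difference sequence are illustrative assumptions). Note how G = 1 freezes diff at its initial value of zero, so that, for a reasonably large H1, the coupling stays at (approximately) the uniform value C_max, recovering the nonadaptive system:

```python
import math

def update_coupling(diff, d, G, H1, H2, c_max):
    """One adaptive update of Equations 3-5 for a single unit pair.
    diff: leaky-integrated history of differences; d: current difference."""
    diff = G * diff + (1.0 - G) * d                                  # Equation 3
    w = 1.0 - 1.0 / (1.0 + math.exp(-H1 * (2.0 * diff / H2 - 1.0)))  # Equation 4
    return diff, w * c_max                                           # Equation 5

diff = 0.0
for d in (0.3, 0.8, 0.1):   # arbitrary difference values (illustrative)
    diff, c = update_coupling(diff, d, G=1.0, H1=10.0, H2=1.0, c_max=0.2)
# With G = 1, diff stays 0 and c remains approximately c_max = 0.2
```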
3. Illustrations of CML dynamics: Müller-Lyer and Ehrenstein illusion
Some illustrations of the spatiotemporal dynamics of CML systems will be helpful before we turn to
solid/outline invariance. Consider the time course of activation in Figures 1 and 2, where
two familiar illusions, the Müller-Lyer and the Ehrenstein illusion, respectively, are presented to the system.
Let each pixel in an input picture array map onto the A-parameter of a corresponding unit in a
two-dimensional lattice: A_i = 3.7 when the i-th pixel is black; A_i = 3.8 when the i-th pixel is white. This results
in a rapidly evolving pattern of activation in the x_i values of the CML. With iterations of the system,
progressive distortion of the original input structure can be observed in the pattern. As shown in Figure 1,
the protruding wings of the Müller-Lyer figure are annihilated in several steps, and the surviving central
horizontal line will be smaller or larger than the original, depending on whether the wings were bent inward or
outward.
Meanwhile, the system selectively enhances certain spatial contrasts. This effect propagates
through the system in the form of what resembles travelling and standing waves in a field. These may be
relevant to explain a number of visual field illusions, including the Ehrenstein illusion (Figure 2).
____________________________________
Insert Figure 1 here
____________________________________
____________________________________
Insert Figure 2 here
____________________________________
Visual field effects have been assumed in the Gestalt literature in order to explain the origin of perceptual
structuring principles, including ones that lead to visual illusions. The currently proposed
approach is certainly in accordance with this idea, as the CML activity patterns constitute a discrete analogue of
fields. The activity patterns systematically distort the image in the direction of what may be called
global goodness. Protruding parts, for instance, are often annihilated by the system; it therefore tends
towards a more convex image (a tendency akin to the Gestalt principle of convexity). Another principle
that could be covered by CML activity patterns is that of symmetry. Compare, for instance, in Figures 3
and 4 the CML patterns created by symmetrical and asymmetrical figures. We observe that those of the
symmetrical figures are all characterised by a "holographic regularity". In the literature on symmetry
perception, holographic regularity is considered essential for symmetry detection (van der Helm, 2000).
These demonstrations, therefore, suggest that CML patterns capture Gestalt field effects in a
manner not easily covered by other models. A systematic exploration of these "distortions" is therefore
still wanting.
We will have to postpone further discussion of the distortion effects, however, in
order to deal with a more urgent issue. We need to show that the distortions are not arbitrary, but
actually produce invariance. In fact, one of the critiques against the Gestalt approach (for instance, from
the viewpoint of Gibsonian ecological realism) has been that such distortions lead us away from the
invariances contained in the environment of the individual. The present study provides an occasion to
demonstrate that the creation of distortion and the detection of invariance may be two sides of the same
coin, at least where chaotic systems are concerned.
4. Simulations
We investigate the hypothesis that CML systems represent image structure in a solid/outline-invariant
way. Lacking an overall performance criterion (Heath, Sarkar, Sanocki, et al., 1997), we used a standard
back-propagation algorithm for this purpose (Plunkett & Elman, 1997). Two sets of eight pictures each
were used as input. Each picture contained 50 x 50 binary pixel values (representing black or white). The
set in Figure 3 contains the solid versions, the one in Figure 4 the outline versions of the same figures.
In one condition, the figures were filtered with a CML network before being offered as input to
the back-propagation algorithm. Typical outputs resulting from applying the CML filter to solid figures
are shown in Figure 3, and to outline ones in Figure 4.
In a second condition, the original unfiltered images of Figures 3 and 4 were used. Finally, in a
third condition, we used an edge-detector as a filter. For this purpose, we used a gradient-based edge
detector (Canny, 1983). The output of this filter is shown for solid images in Figure 3 and for outline ones
in Figure 4.
We compared the generalisation performance of the back-propagation algorithm in these three
conditions. In each condition, either a classification of the solid figures was trained and subsequently
tested on the outline figures, or vice versa. For the CML condition, a 50 x 50 CML filter was used, in which
each unit (except on the borders of the lattice) was connected to its eight direct neighbours (Figure 5).
Initial values of the activation x_i were homogeneous (at 0.5). The value of C_max was set to 0.2.
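The construction of the samples described below can be outlined schematically (a sketch; `cml_iterate`, standing for one CML update given the input picture, is a hypothetical helper, not a function defined in the paper):

```python
def collect_samples(image, cml_iterate, n_transient=90, n_samples=10):
    """Run the CML filter on one input picture and keep the x-values of
    iterations 90-99 as ten samples, discarding the initial transients."""
    state = [[0.5] * len(row) for row in image]  # homogeneous start at 0.5
    samples = []
    for t in range(n_transient + n_samples):
        state = cml_iterate(state, image)
        if t >= n_transient:
            # flatten the lattice into one input vector for the network
            samples.append([v for row in state for v in row])
    return samples
```

Applied to the eight 50 x 50 pictures of a set, this yields 80 learning samples, 10 per category.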
____________________________________
Figure 3
____________________________________
____________________________________
Figure 4
____________________________________
____________________________________
Figure 5
____________________________________
The eight figures in a set each represent a different category, which was trained by the back-propagation
algorithm. The back-propagation network consisted of 2500 input units (corresponding to the
2500 pixels in the image), 32 hidden units, and 8 output units (corresponding to the 8 categories). The
network was run with the Tlearn simulation environment (Plunkett & Elman, 1997).
In the CML condition, the input to the algorithm consists of subsequent samples from the x-values
of the CML units after the pictures were fed to the CML. Samples were taken at iterations 90-99 to
avoid initial transients. The training set for the back-propagation algorithm thus consisted of 80 learning
samples, 10 for each category. The task consists of mapping the filtered input to the 8 categories. The
output unit representing the correct category had a target activation of 0.9, while all other
output units had a target activation of 0.1. The same procedure was used in the unfiltered and in the Canny
edge-detector filtered conditions.
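The target coding can be written explicitly (a sketch; the 0-based category index is our convention, whereas the paper numbers the categories 1-8):

```python
def target_vector(category, n_categories=8):
    """Target pattern for one training sample: the output unit of the
    correct category is set to 0.9, all other output units to 0.1."""
    return [0.9 if k == category else 0.1 for k in range(n_categories)]

print(target_vector(2))  # [0.1, 0.1, 0.9, 0.1, 0.1, 0.1, 0.1, 0.1]
```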
After training was successfully completed, generalisation of the representations was tested. In the
CML filtering condition, the input to the back-propagation for the test set was created in the same way as
the training set: x-values sampled at iterations 90-99 were used for testing the network.
5. Results
The results for the CML-filtered images condition are shown in Tables 1 and 2, those for the unfiltered images
condition in Tables 3 and 4, and those for the Canny edge-detector filtered images condition in Tables 5 and
6. During training, the back-propagation network converged to low error values (MSE < 0.1 after 1,000,000 epochs)
in all conditions. Training, therefore, was successful in all conditions. Generalisation performance,
however, differs markedly between the conditions.
CML-filtered images condition: Table 1 (upper half) shows the performance of the network on the training
set of solid figures. The winner column gives the highest activation value among the output units. Each row in the table
shows the average results over the 10 inputs per category. The error column shows the difference between
the actual activation of the appropriate category for the item and its target value. All categories were
recognised properly, even by the criterion that the activation of the correct category should be the
highest and its value should be at least 0.7.
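The recognition criterion just stated can be made explicit (a sketch; the 0-based category index is our convention):

```python
def recognised(outputs, correct, threshold=0.7):
    """Criterion used in the text: the correct category's output unit must
    have the highest activation AND its value must be at least 0.7."""
    winner = max(range(len(outputs)), key=lambda k: outputs[k])
    return winner == correct and outputs[correct] >= threshold

# The 'cat 1' row of Table 1 (training set, solid figures):
row = [0.8772, 0.1015, 0.0564, 0.1088, 0.0938, 0.1099, 0.11, 0.1036]
print(recognised(row, 0))  # True
```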
____________________________________
Insert Table 1 here
____________________________________
____________________________________
Insert Table 2 here
____________________________________
The lower half of Table 1 shows the performance of the network on the test set of outline figures. All
categories were recognised properly according to our criterion and error was low. This result demonstrates
that generalisation from solid to outline versions of the images in the CML condition was good. The
results are similar when the outline figures are used as the training set (upper half of Table 2) and the solid
ones as test set (lower half of Table 2). We conclude that CML filters offer a solid/outline invariant
representation.
Unfiltered images condition: The upper half of Table 3 shows the performance of the network on the
training set of solid figures and the upper half of Table 4 that of the training set of outline figures. All
categories were recognised properly.
____________________________________
Insert Table 3 here
____________________________________
____________________________________
Insert Table 4 here
____________________________________
Generalization performance, however, is poor, both from solid to outline (lower half of Table 3) and from
outline to solid (lower half of Table 4). Only two of the eight categories are recognised properly, for the
outline test set (Cat. 1 and 5) as well as for the solid one (Cat. 1 and 4).
Edge-filtered images condition: A contour-invariant representation does not require that the filter
detect edges (as an edge-detection algorithm does), but that it provide identical output when the
contours of two images are the same. The previous results demonstrate that this effect is indeed achieved
with CML filtering. An edge-detection algorithm does not provide solid/outline invariance: it will detect the
edges of solid images, but, trivially, with contour images it detects two edges. It remains to be shown,
however, that the output generated by an edge detector does not provide information from which outline
invariance can easily be detected. In the edge-filtered images condition, the back-propagation algorithm was trained
with the output of an edge-detection algorithm on solid images (Figure 3) and tested with the output of the
edge-detection algorithm on outline images (Figure 4), as well as vice versa.
The results are shown in Tables 5-6. The network is able to learn the training set of both solid images
(upper half of Table 5) and outline images (upper half of Table 6) perfectly. Generalisation,
however, is lacking in these conditions. The test sets of outline images (lower half of Table 5) and of
solid ones, again, result in nearly random classification. By our criterion, none of the figures is categorised
properly. Even by a more relaxed criterion, which considers only which category has the highest activation
regardless of its value, only three of the eight images (Cat. 3, 4, and 7) are classified correctly for the
outline images and one (Cat. 2) for the solid ones. It may, therefore, be concluded that Canny edge-detection
filtering does not support contour invariance.
____________________________________
Insert Tables 5-6 here
____________________________________
6. Conclusions and discussion
A back-propagation algorithm successfully classified the images filtered by a CML. This observation
implies that its patterns of activation, in spite of their chaotic character, represent information about the
visual pattern. Our study shows that the CML activation preserves a solid/outline-invariant representation
of the pattern. Trained classification of solid images generalises to outline ones, and vice versa. As
solid/outline information has non-local characteristics, the present study constitutes evidence that CMLs process
structural information of visual patterns.
That outline invariance is reached by lateral interaction reduces the role of interactive feedback
in the perception of structure. Outline invariance can be treated as a special case of the boundary-contour
problem. Normally, boundary-contour detection requires extensive, interactive feedback
processes. The present approach shows that this is not needed for this special case. Instead, the
spontaneous pattern-forming capacities of chaotic activation are used for this purpose. Moreover, they are
deployed rapidly, without the system having to settle into a static equilibrium.
We may, therefore, conclude that CML systems have advantages for processing structural
information that suggest them as candidate models for human visual processing. This enables a new
perspective on the role of top-down feedback in visual perception. There are field-like effects in the
patterns produced by the model, leading to Gestalt organisation. These effects occur spontaneously,
without feedback. It might be that much more than initially thought could be left to self-organisation,
when the rich, complex dynamics of chaotic activity patterns is used. The advantage is that certain
invariances can be obtained independently of experience with shapes. They are obtained early in visual
processing and form part of its innate basis (Hebb, 1937).
We may carry the speculation one step further and suggest that what the visual system does is
provide solid/outline-invariant representations rather than extract contours. This assumption does not imply that
we are unable to distinguish between solid and outline objects, because the visual system still has parallel
systems for features such as colour (Grossberg & Mingolla, 1985). The existence of these parallel systems
should explain, among other things, why visual orientation illusions, for instance the Bourdon illusion, differ
between solid and outline versions of the same figure (Rozvany & Day, 1980; Wenderoth & O’Connor,
1987).
What is known about early visual processing does not necessarily imply that the extraction of local
gradients is the most adequate description of its function. Not only is contour extraction plus subsequent
processing hopelessly inefficient; as a metaphor for human visual information processing it may actually
be misleading. Gradient extraction, one might ask rhetorically, for whom? The same holds for the subsequent
processing that should lead to the elimination of internal contours. These transformations can only lead
from one "picture in the head" to another, unless an invariant can finally be detected. But then, it is more
likely that the visual system uses its representational capacities in a more economical way, and brings
about the invariant directly.
References
Canny, J. F. (1983). Finding edges and lines in images. Technical Report 720, MIT AI Lab
Grossberg, S., & Mingolla, E. (1985). Neural dynamics of perceptual grouping: Textures, boundaries, and
emergent segmentations. Perception and Psychophysics, 38, 141-171.
Gu, Y., Tung, M., Yuan, J.M., Feng, D.H. & Narducci, L.M. (1984). Crises and hysteresis in coupled
logistic maps. Physical Review Letters, 52, 701-704.
Heath, M., Sarkar, S., Sanocki, T., and Bowyer, K.W. (1997). A Robust Visual Method for Assessing the
Relative Performance of Edge Detection Algorithms. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 19 (12), 1338-1359.
Hebb, D.O. (1937). The innate organization of visual activity: 1. Perception of figures by rats reared in
total darkness. Journal of Genetic Psychology, 51, 101-126.
Hogg, T., & Huberman, B.A. (1984). Generic behavior of coupled oscillators. Physical Review A, 29, 275-281.
Kaneko, K. (1983). Transition from torus to chaos accompanied by frequency lockings with symmetry
breaking. Progress of Theoretical Physics, 69, 1427-1442.
Kaneko, K. (1984). Period-doubling of kink-antikink patterns, quasiperiodicity in antiferro-like structures
and spatial intermittency in coupled logistic lattice. Progress of Theoretical Physics, 72, 480-486.
Kaneko, K. (1989a). Chaotic but regular posi-nega switch among coded attractors by cluster-size variation.
Physical Review Letters, 63, 219-223.
Kaneko, K. (1989b). Clustering, coding, switching, hierarchical ordering and control in a network of
chaotic elements. Physica D, 41, 137-172.
Kennedy, J.M., Nicholls, A., & Desrochers, M. (1995). From line to outline. In: Ch. Lange-Kuettner, G.V.
Thomas, et al. (Eds.), Drawing and looking: Theoretical approaches to pictorial representation in
children. The developing body and mind. (pp. 62-74). London, UK: Harvester Wheatsheaf.
Khvorostov, P.V., Braun, M., & Poon, C.S. (1996). Edge quality metric for arbitrary 2D edges. Optical
Engineering, 35(11), 3222-3226.
Lamme, V.A.F. & Spekreijse, H. (in press). Neuronal synchrony does not represent texture segregation.
Nature.
Marr, D. (1982). Vision. New York, NY: W. H. Freeman.
Phillips, W.A., & Singer, W. (1997). In search of common foundations for cortical computation. Behavioral
and Brain Sciences, 20, 657-722.
Plunkett, K., & Elman, J.L. (1997). Exercises in Rethinking Innateness: A Handbook for Connectionist
Simulations. MIT Press. The Tlearn simulation environment is distributed freely via the Internet
and can be found on the WWW at http://crl.ucsd.edu/innate/index.shtml.
Rozvany, G.I., & Day, R.H. (1980). Determinants of the Bourdon effect. Perception & Psychophysics, 28(1), 39-44.
Schult, R.L., Creamer, D.B., Henyey, F.S. & Wright, J.A. (1987). Symmetric and non-symmetric coupled
logistic maps. Physical Review A, 35, 3115-3118.
Simionescu, I., & van Leeuwen, C. (submitted) Robust observables for intermittency and clustering in a
family of dynamical connectionist models.
Singer, W. (1990). Search for coherence: a basic principle of cortical self-organization. Concepts in
neuroscience, 1, 1-26.
Shin, M., Goldgof, D., & Bowyer, K.W. (1998). An objective comparison methodology of edge
detection algorithms for structure from motion task. Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, pp. 190-195.
Van der Helm, P.A. (2000). Simplicity versus likelihood in visual perception: from surprisals to precisals.
Psychological Bulletin, in press.
Van Leeuwen, C. (1998). Visual perception at the edge of chaos. In J.S. Jordan (Ed.), Systems Theories
and Apriori Aspects of Perception. Amsterdam, NL: Elsevier, pp. 289-314.
Van Leeuwen, C. & Raffone, A. (submitted). Binding processes in short, medium, and long-term memory
Van Leeuwen, C., Steyvers, M., & Nooter, M. (1997). Stability and intermittency in large-scale coupled
oscillator models for perceptual segmentation. Journal of Mathematical Psychology, 41, 319-344.
Waller, I., & Kapral, R. (1984). Spatial and temporal structure in systems of coupled nonlinear oscillators.
Physical Review A, 30, 2049-2055.
Wenderoth, P., & O'Connor, T. (1987). Outline- and solid-angle orientation illusions have different
determinants. Perception & Psychophysics, 41(1), 45-52.
Yamada, T., & Fujisaka, H. (1983). Stability theory of synchronized motion in coupled-oscillator systems.
Progress of Theoretical Physics, 70, 1240-1248.
Figure Captions
Figure 1
Müller-Lyer applied to the CML filter. The leftmost picture shows the original picture, the
other ones from left to right display the x values of the CML system after 10, 50 and 100
iterations.
Figure 2
Ehrenstein illusion applied to the CML filter. The leftmost picture shows the original
picture, the other ones from left to right display the x values of the CML system after 10,
50 and 100 iterations.
Figure 3
Figure categories used in the simulations: Solid images. From left to right, respectively:
Original picture, CML filtered, Canny edge detector filtered.
Figure 4
Figure categories used in the simulations: Outline images. From left to right,
respectively: Original picture, CML filtered, Canny edge detector filtered.
Figure 5
Connectivity pattern in coupled map lattice.
Figures

Figure 1
Figure 2

Figure 3 (solid images): Cross - cat 1; Triangle - cat 2; Circle - cat 3; Line - cat 4;
Irregular 1 - cat 5; Irregular 2 - cat 6; Irregular 3 - cat 7; Irregular 4 - cat 8

Figure 4 (outline images): Cross - cat 1; Triangle - cat 2; Circle - cat 3; Line - cat 4;
Irregular 1 - cat 5; Irregular 2 - cat 6; Irregular 3 - cat 7; Irregular 4 - cat 8

Figure 5
Table 1
CML-filtered patterns
CML of radius 2, outputs of iterations 90-99, 32 hidden back-prop units

Training set: Solid figures (MSE = 0.099548)

Input   Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6  Unit 7  Unit 8  Error    Winner
cat 1   0.8772  0.1015  0.0564  0.1088  0.0938  0.1099  0.11    0.1036  0.0228   0.8772
cat 2   0.0867  0.883   0.1061  0.1064  0.0196  0.0862  0.0634  0.1056  0.017    0.883
cat 3   0.0205  0.1201  0.8877  0.0585  0.1058  0.1212  0.0888  0.0964  0.0123   0.8877
cat 4   0.1212  0.1157  0.0555  0.8835  0.1155  0.1046  0.0631  0.1132  0.0165   0.8835
cat 5   0.1092  0.0829  0.1035  0.1053  0.8798  0.0512  0.0965  0.0982  0.0202   0.8798
cat 6   0.105   0.0876  0.1095  0.0587  0.0561  0.8758  0.1156  0.0732  0.0242   0.8758
cat 7   0.1159  0.0889  0.0997  0.029   0.0661  0.1093  0.875   0.0907  0.025    0.875
cat 8   0.1013  0.1006  0.0881  0.0818  0.1076  0.0815  0.0683  0.8776  0.0224   0.8776
Average                                                                 0.02005  0.87995

Test set: Outline figures

Input   Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6  Unit 7  Unit 8  Error    Winner
cat 1   0.8524  0.0986  0.0665  0.1091  0.1085  0.1091  0.1088  0.1016  0.0476   0.8524
cat 2   0.1106  0.8037  0.1194  0.1465  0.0231  0.096   0.0588  0.1082  0.0963   0.8037
cat 3   0.0293  0.0981  0.7552  0.048   0.0504  0.2234  0.0688  0.4505  0.1448   0.7552
cat 4   0.125   0.1112  0.0692  0.8676  0.119   0.0927  0.0747  0.1187  0.0324   0.8676
cat 5   0.1266  0.1325  0.0744  0.0927  0.7583  0.0537  0.109   0.1703  0.1417   0.7583
cat 6   0.2166  0.197   0.0603  0.0931  0.0217  0.9059  0.0756  0.1256  -0.0059  0.9059
cat 7   0.1743  0.0515  0.1693  0.0132  0.1737  0.1261  0.8573  0.0457  0.0427   0.8573
cat 8   0.1958  0.1203  0.05    0.1218  0.1426  0.136   0.0534  0.7234  0.1766   0.7234
Average                                                                 0.084525 0.815475
Table 2
CML-filtered patterns
CML of radius 2, outputs of iterations 90-99, 32 hidden back-prop units

Training set: Outline figures (MSE = 0.076286)

Input   Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6  Unit 7  Unit 8  Error      Winner
cat 1   0.8853  0.0774  0.1     0.1015  0.1015  0.097   0.1008  0.1033  0.0147     0.8853
cat 2   0.0846  0.8903  0.0975  0.1061  0.1125  0.11    0.0537  0.1075  0.0097     0.8903
cat 3   0.1009  0.0743  0.8898  0.1026  0.0758  0.0726  0.1016  0.0939  0.0102     0.8898
cat 4   0.1028  0.1072  0.1095  0.89    0.1143  0.1039  0.0511  0.097   0.01       0.89
cat 5   0.0939  0.1068  0.0887  0.1019  0.8931  0.0841  0.107   0.054   0.0069     0.8931
cat 6   0.1051  0.117   0.1011  0.0577  0.086   0.8795  0.0961  0.083   0.0205     0.8795
cat 7   0.1049  0.0558  0.0995  0.0591  0.1032  0.1209  0.8867  0.1136  0.0133     0.8867
cat 8   0.1039  0.0987  0.1047  0.1066  0.08    0.0619  0.1044  0.8814  0.0186     0.8814
Average                                                                 0.0129875  0.8870125

Test set: Solid figures

Input   Unit 1  Unit 2  Unit 3  Unit 4  Unit 5  Unit 6  Unit 7  Unit 8  Error      Winner
cat 1   0.8913  0.0751  0.1282  0.119   0.1148  0.0648  0.1542  0.1595  0.0087     0.8913
cat 2   0.0414  0.8051  0.2055  0.1298  0.1046  0.074   0.0662  0.2444  0.0949     0.8051
cat 3   0.0882  0.099   0.8287  0.1321  0.0972  0.1212  0.1483  0.0915  0.0713     0.8287
cat 4   0.1118  0.0956  0.1003  0.8908  0.1113  0.1051  0.0561  0.1356  0.0092     0.8908
cat 5   0.0959  0.1024  0.0799  0.1184  0.8624  0.0919  0.1323  0.0548  0.0376     0.8624
cat 6   0.1777  0.1526  0.1391  0.0532  0.1035  0.8113  0.1023  0.0869  0.0887     0.8113
cat 7   0.1168  0.0301  0.1371  0.0336  0.1532  0.1315  0.8422  0.121   0.0578     0.8422
cat 8   0.1013  0.0998  0.0864  0.0986  0.0973  0.1551  0.1444  0.7608  0.1392     0.7608
Average                                                                 0.063425   0.836575
Table 3
Non-filtered patterns: 32 hidden back-prop units.

Training set: Solid figures (MSE = 0.043907)

Input    1        2        3        4        5        6        7        8        Error    Winner
cat 1    0.894    0.103    0.105    0.103    0.1      0.089    0.104    0.095    0.006    0.894
cat 2    0.108    0.896    0.103    0.101    0.014    0.106    0.107    0.104    0.004    0.896
cat 3    0.096    0.099    0.896    0.098    0.106    0.099    0.1      0.098    0.004    0.896
cat 4    0.103    0.101    0.098    0.9      0.103    0.099    0.104    0.098    0        0.9
cat 5    0.108    0.002    0.103    0.101    0.895    0.106    0.107    0.103    0.005    0.895
cat 6    0.098    0.1      0.1      0.1      0.105    0.897    0.103    0.097    0.003    0.897
cat 7    0.1      0.099    0.099    0.099    0.101    0.103    0.895    0.102    0.005    0.895
cat 8    0.093    0.103    0.096    0.098    0.101    0.1      0.076    0.902    -0.002   0.902
Average error: 0.003125; average winner: 0.896875

Test set: Outline figures

Input    1        2        3        4        5        6        7        8        Error    Winner
cat 1    0.829    0.023    0.061    0.085    0.419    0.085    0.076    0.036    0.071    0.829
cat 2    0.18     0.153    0.047    0.418    0.116    0.048    0.137    0.097    0.747    0.418
cat 3    0.011    0.027    0.483    0.322    0.413    0.045    0.289    0.055    0.417    0.483
cat 4    0.239    0.078    0.061    0.283    0.191    0.066    0.14     0.071    0.617    0.283
cat 5    0.172    0.002    0.106    0.12     0.857    0.168    0.084    0.109    0.043    0.857
cat 6    0.028    0.033    0.174    0.244    0.329    0.087    0.259    0.072    0.813    0.329
cat 7    0.41     0.01     0.012    0.082    0.68     0.154    0.089    0.119    0.811    0.68
cat 8    0.078    0.393    0.065    0.1      0.04     0.131    0.081    0.511    0.389    0.511
Average error: 0.4885; average winner: 0.54875
Table 4
Non-filtered patterns: 32 hidden back-prop units.

Training set: Outline figures (MSE = 0.037272)

Input    1        2        3        4        5        6        7        8        Error    Winner
cat 1    0.898    0.097    0.101    0.111    0.102    0.099    0.107    0.101    0.002    0.898
cat 2    0.1      0.886    0.074    0.102    0.094    0.107    0.102    0.107    0.014    0.886
cat 3    0.102    0.094    0.891    0.104    0.087    0.1      0.104    0.102    0.009    0.891
cat 4    0.101    0.107    0.109    0.894    0.104    0.1      0.015    0.097    0.006    0.894
cat 5    0.099    0.099    0.1      0.102    0.901    0.101    0.099    0.099    -0.001   0.901
cat 6    0.101    0.101    0.102    0.106    0.097    0.896    0.104    0.099    0.004    0.896
cat 7    0.1      0.11     0.111    0.033    0.106    0.1      0.894    0.097    0.006    0.894
cat 8    0.1      0.1      0.101    0.102    0.104    0.1      0.103    0.9      0        0.9
Average error: 0.005; average winner: 0.895

Test set: Solid figures

Input    1        2        3        4        5        6        7        8        Error    Winner
cat 1    0.881    0.086    0.109    0.108    0.12     0.093    0.112    0.099    0.019    0.881
cat 2    0.131    0.242    0.026    0.258    0.198    0.211    0.028    0.096    0.658    0.258
cat 3    0.309    0.126    0.095    0.026    0.043    0.242    0.142    0.394    0.805    0.394
cat 4    0.14     0.17     0.092    0.832    0.14     0.096    0.013    0.083    0.068    0.832
cat 5    0.097    0.141    0.431    0.081    0.618    0.187    0.055    0.012    0.282    0.618
cat 6    0.008    0.079    0.181    0.701    0.092    0.197    0.042    0.227    0.703    0.701
cat 7    0.776    0.057    0.211    0.066    0.106    0.318    0.193    0.106    0.707    0.776
cat 8    0.281    0.144    0.077    0.024    0.04     0.241    0.133    0.462    0.438    0.462
Average error: 0.46; average winner: 0.61525
Table 5
Canny-filtered patterns: 32 hidden units.

Training set: Solid figures (MSE = 0.000005)

Input    1        2        3        4        5        6        7        8        Error    Winner
cat 1    0.9      0.1      0.1      0.1      0.1      0.1      0.1      0.1      0        0.9
cat 2    0.1      0.9      0.1      0.1      0.1      0.1      0.1      0.1      0        0.9
cat 3    0.1      0.1      0.9      0.1      0.1      0.1      0.1      0.1      0        0.9
cat 4    0.1      0.1      0.1      0.9      0.1      0.1      0.1      0.1      0        0.9
cat 5    0.1      0.1      0.1      0.1      0.9      0.1      0.1      0.1      0        0.9
cat 6    0.1      0.1      0.1      0.1      0.1      0.9      0.1      0.1      0        0.9
cat 7    0.1      0.1      0.1      0.1      0.1      0.1      0.9      0.1      0        0.9
cat 8    0.1      0.1      0.1      0.1      0.1      0.1      0.1      0.9      0        0.9
Average error: 0; average winner: 0.9

Test set: Outline figures

Input    1        2        3        4        5        6        7        8        Error    Winner
cat 1    0.15     0.155    0.015    0.052    0.261    0.082    0.521    0.048    0.75     0.521
cat 2    0.043    0.324    0.42     0.087    0.04     0.018    0.098    0.078    0.576    0.42
cat 3    0.046    0.463    0.306    0.605    0.171    0.093    0.329    0.05     0.594    0.605
cat 4    0.423    0.528    0.101    0.613    0.191    0.309    0.176    0.072    0.287    0.613
cat 5    0.387    0.124    0.067    0.331    0.391    0.164    0.062    0.15     0.509    0.391
cat 6    0.465    0.04     0.045    0.258    0.173    0.114    0.354    0.04     0.786    0.465
cat 7    0.183    0.421    0.013    0.049    0.214    0.037    0.615    0.035    0.285    0.615
cat 8    0.56     0.187    0.074    0.775    0.485    0.091    0.24     0.074    0.826    0.775
Average error: 0.576625; average winner: 0.550625
Table 6
Canny-filtered patterns: 32 hidden units.

Training set: Outline figures (MSE = 0.000007)

Input    1        2        3        4        5        6        7        8        Error    Winner
cat 1    0.9      0.1      0.1      0.1      0.1      0.1      0.1      0.1      0        0.9
cat 2    0.1      0.9      0.1      0.1      0.1      0.1      0.1      0.1      0        0.9
cat 3    0.1      0.1      0.9      0.1      0.1      0.1      0.1      0.1      0        0.9
cat 4    0.1      0.1      0.1      0.9      0.1      0.1      0.1      0.1      0        0.9
cat 5    0.1      0.1      0.1      0.1      0.9      0.1      0.1      0.1      0        0.9
cat 6    0.1      0.1      0.1      0.1      0.1      0.9      0.1      0.1      0        0.9
cat 7    0.1      0.1      0.1      0.1      0.1      0.1      0.9      0.1      0        0.9
cat 8    0.1      0.1      0.1      0.1      0.1      0.1      0.1      0.9      0        0.9
Average error: 0; average winner: 0.9

Test set: Solid figures

Input    1        2        3        4        5        6        7        8        Error    Winner
cat 1    0.095    0.523    0.123    0.283    0.345    0.568    0.214    0.515    0.805    0.568
cat 2    0.035    0.606    0.155    0.175    0.077    0.097    0.101    0.03     0.294    0.606
cat 3    0.089    0.214    0.184    0.155    0.071    0.47     0.122    0.055    0.716    0.47
cat 4    0.166    0.752    0.374    0.119    0.34     0.305    0.026    0.155    0.781    0.752
cat 5    0.089    0.194    0.43     0.185    0.313    0.018    0.191    0.35     0.587    0.43
cat 6    0.357    0.187    0.356    0.657    0.317    0.054    0.033    0.036    0.846    0.657
cat 7    0.02     0.809    0.051    0.197    0.199    0.288    0.55     0.152    0.35     0.809
cat 8    0.159    0.316    0.31     0.152    0.589    0.364    0.1      0.487    0.413    0.589
Average error: 0.599; average winner: 0.610125