

Rocha and Scheirer EURASIP Journal on Image and Video Processing (2015) 2015:24

DOI 10.1186/s13640-015-0080-7

EDITORIAL Open Access

Large-scale learning for media understanding

Anderson Rocha1* and Walter J. Scheirer2

*Correspondence: anderson.rocha@ic.unicamp.br
†Equal contributors
1 Institute of Computing, University of Campinas, Av. Albert Einstein, 1251, Cidade Universitária Zeferino Vaz, 13084-971 Campinas, SP, Brazil
Full list of author information is available at the end of the article

© 2015 Rocha and Scheirer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

1 Editorial

The remarkable growth in computational power over the last decade has enabled important advances in machine learning, allowing us to achieve impressive results across all areas of image and video processing. Numerical methods that were once thought to be intractable are now commonly deployed to solve problems as diverse as 3D modeling from 2D data (photo tourism [1]), object recognition (logo detection [2]), human biometrics (face recognition [3]), and video surveillance (automatic threat detection [4]). Despite this tremendous progress, there are many open questions related to understanding visual data: we are still far from matching human visual ability in all of these areas.

It is fair to scrutinize this observation in some detail: where are researchers and current methodologies falling short? From our perspective, the practicalities of real-world problems are often obscured by the mischaracterization of good results in limited contexts, theoretical frameworks built around artificial problems, and a deep sea of technical minutiae. Assumptions are a necessary component of problem solving, but poor ones lead our algorithms and subsequent analyses astray. Similarly, good theory is important, but we should not lose sight of the fundamental problem we are trying to solve by abstracting it away. Such circumstances occur more often than one might think.

How should a researcher charged with the design of an algorithm that must actually work outside of the laboratory avoid these dilemmas? Perhaps unsurprisingly, we, as academics, are typically more critical than constructive when evaluating new work; this is especially true of paper reviews and survey articles. In response to this, we submit the following researchers' guide to everyday machine learning as a gentle nudge forward:

1. Do not make the problem easy. The development of a capability for machines to understand scenes is a primary goal in computer vision. This problem is an exceedingly difficult one, requiring generalization to novel instances of known object classes amidst a practically infinite number of unknown object classes. However, to reach this goal, researchers have routinely targeted a much simpler problem: classification, which assumes that all object classes are known at training time. The difference in performance between an algorithm evaluated in this closed set regime and again in the actual open set one is dramatic. Recent work has shown that even when a data set as simple as the MNIST database of handwritten digits is re-contextualized into an open set problem, where not all digits are known at training time, the performance of state-of-the-art supervised learning methods drops precipitously [5]. Other works have also shown the importance of open set classifiers in general recognition problems [6]. Therefore, always make sure to solve the original problem, and not one that is artificially easier (a minimal open set evaluation sketch follows this list).

2. Test the perceptual thresholds of models. A major shortcoming of current evaluation practices in visual learning is that they neglect an obvious frame of reference: that of the human observer. For example, the empirical gains achieved through deep learning architectures on benchmark data sets in computer vision, which have been characterized in the popular press as sometimes even mimicking human levels of understanding [7], are indeed impressive. However, there is growing concern that such methods are actually inconsistent with human behavior, based on observed patterns of error [8]. Indeed, it is trivial to fool even the best deep learning algorithms into making mistakes humans never would by using a hill-climbing strategy that adds subtle distortions to an out-of-class image [9]. A better tactic is to follow the precise methods of visual psychophysics [10] and probe the perceptual thresholds of a model to understand its limits in a controlled manner. This will yield a quick answer as to whether or not it is consistent with human behavior (see the threshold-probing sketch after this list).

3. Move away from a strict adherence to data sets. Related to the above observation, we have observed that posting good numbers on a benchmark data set is no longer a means to an end, but an end in itself. It is generally not true that a good result on a particular data set means that the algorithm which produced it will always perform well on images from outside of that data set. Well before all of the excitement over the performance of deep learning architectures on the ImageNet challenge [11], Torralba and Efros [12] questioned the field's singular focus on such narrow problems, arguing that all data sets in computer vision contain some measure of easily learned bias that can inevitably lead to false conclusions. Bias becomes evident when testing an algorithm's cross-data set generalization ability, or, in other words, training a model on one data set and applying it to another (a skeleton of this protocol appears after this list). We recommend that researchers go even further by testing their algorithms on data from sources external to any data set; if an algorithm fails when presented with frames from a live camera, more work needs to be done.

4. Avoid dogma (but do so in a principled way). Like any academic field, machine learning has its share of subdisciplines, each with its own prescriptions for problem solving. Sometimes, these subdiscipline-specific views become stumbling blocks to general progress. An example of this is the topic of convex optimization, which has come to be the dominant mode of optimization for visual recognition problems. More often than not, it is frowned upon to propose an algorithm that may get trapped in local minima, even if it demonstrates superior empirical performance over the state-of-the-art. Thankfully, the reemergence of artificial neural networks, which are non-convex, has loosened this tension by demonstrating the utility of complex and hierarchical network structures that are not amenable to convex optimization [13]. Hence, strive to design an algorithm that works to your performance specification, and not one that is unnecessarily constrained by theory. However, if a theory does lead you to a good solution for particular cases, take advantage of it.

5. Seek different evidence when characterizing visual data. There is no silver bullet to solve all problems, especially when describing images. Different problems often demand different forms of image description. However, even within a single problem, it is hard to think of a simple descriptor that captures all the nuances and cues present in an image. Consider the example of content-based image retrieval. Using a color descriptor is not enough to capture all possible class variability. Including other complementary features, such as shape and texture, is key for a successful retrieval system. In a remote-sensing image-classification system, the RGB color channels are just one way to capture image information. Infrared channels can also play an important role, and each channel can have its own custom-tailored descriptors. Therefore, we recommend thinking of possible complementary features when dealing with visual problems, along with innovative ways of combining them (a toy fusion example follows this list). Sometimes what seems unsolvable using just one piece of visual evidence becomes much easier when considering evidence from different and complementary features and sensors.

6. Be aware of machine-learning black-boxes. With the ever-increasing need for processing vast amounts of data, researchers often rely on off-the-shelf machine learning solutions to tackle their problems using so-called black-boxes. Although it is quick and easy to turn to such solutions, this comes at a price: if the underlying problem is poorly understood, the default parameters of a chosen black-box model will likely result in poor performance. Hence, we recommend that researchers pay close attention to the intrinsic properties of their problems and carefully choose the learning algorithm and its parameters when actually implementing a solution. Sometimes just a small amount of parameter tuning can save weeks of processing and yield very good classification results (see the tuning sketch after this list).

7. Think of new useful applications. Researchers these days concentrate on just a handful of well-known applications. Digital photo tagging is, without a doubt, a great application, but it is not the only one we should be working on. Get creative when demonstrating the capabilities of a new algorithm. Some interesting applications that we have seen lately include the following: shellfish detection for the protection of fisheries [14], digital restoration of historical documents [15], and steering headlight beams around raindrops [16]. These are a good start, but there is certainly much more over the horizon.
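Following up on point 1, the sketch below contrasts a closed set evaluation with an open set one. It is only an assumption-laden illustration: scikit-learn's small bundled digits data stands in for MNIST, the set of known classes and the 0.9 confidence threshold are arbitrary choices, and the simple rejection rule is a placeholder for the probability-of-inclusion model of [5].

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

known = [0, 1, 2, 3, 4]                          # classes seen at training time (assumption)
train_mask = np.isin(y_train, known)
clf = SVC(probability=True).fit(X_train[train_mask], y_train[train_mask])

# Closed set evaluation: only known classes appear at test time.
closed_mask = np.isin(y_test, known)
closed_acc = clf.score(X_test[closed_mask], y_test[closed_mask])

# Open set evaluation: every class appears; low-confidence predictions are
# rejected as "unknown" (label -1), and unknown-class samples count as
# correct only when they are rejected.
probs = clf.predict_proba(X_test)
pred = np.where(probs.max(axis=1) > 0.9, clf.classes_[probs.argmax(axis=1)], -1)
truth = np.where(closed_mask, y_test, -1)
open_acc = float(np.mean(pred == truth))

print(f"closed set accuracy: {closed_acc:.3f}  open set accuracy: {open_acc:.3f}")
```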
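Point 2 can be approximated in code with a psychophysics-style sweep: vary a single stimulus parameter over a controlled range, record accuracy at each level, and read the model's tolerance threshold off the resulting curve. The additive Gaussian noise and the levels below are arbitrary stand-ins for a properly calibrated stimulus set, and the classifier is again only a convenient example.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC().fit(X_train, y_train)

# Sweep the perturbation strength and record accuracy at each level; the knee
# of this curve approximates the model's tolerance threshold for this stimulus.
for sigma in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]:
    noisy = X_test + rng.normal(0.0, sigma, size=X_test.shape)
    print(f"noise sigma = {sigma:4.1f}   accuracy = {clf.score(noisy, y_test):.3f}")
```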
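The cross-data set protocol referenced in point 3 [12] reduces to a few lines once two corpora are available. The two loader functions here are hypothetical placeholders for independently collected data sets that share a label space; the classifier choice is incidental.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def cross_dataset_report(load_dataset_a, load_dataset_b):
    """Train on corpus A; report held-out accuracy on A and accuracy on corpus B."""
    Xa, ya = load_dataset_a()                    # hypothetical loader: training corpus
    Xb, yb = load_dataset_b()                    # hypothetical loader: external corpus
    Xtr, Xte, ytr, yte = train_test_split(Xa, ya, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print("within-set accuracy:", model.score(Xte, yte))   # the usual benchmark number
    print("cross-set accuracy :", model.score(Xb, yb))     # the number that exposes bias
```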
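Point 5 in toy form: describe an image with two complementary cues, a color histogram and a crude gradient-based texture histogram, and fuse them by simple concatenation. Real systems would use stronger descriptors and possibly late fusion; this sketch only shows the pattern.

```python
import numpy as np

def color_histogram(img, bins=8):
    # img: H x W x 3 array with values in [0, 1]; one histogram per channel.
    return np.concatenate([
        np.histogram(img[..., c], bins=bins, range=(0.0, 1.0), density=True)[0]
        for c in range(3)])

def texture_histogram(img, bins=8):
    # Histogram of gradient magnitudes of the gray image as a cheap texture cue.
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return np.histogram(mag, bins=bins, range=(0.0, mag.max() + 1e-8), density=True)[0]

def describe(img):
    # Early fusion: concatenate the complementary descriptors into one vector.
    return np.concatenate([color_histogram(img), texture_histogram(img)])

example = np.random.default_rng(0).random((64, 64, 3))
print(describe(example).shape)                   # (32,): 24 color bins + 8 texture bins
```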
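Finally, for point 6, a small comparison between a black-box classifier left at its default parameters and the same model after a modest grid search. The grid is deliberately tiny and illustrative; in practice, the parameter ranges should reflect the problem at hand.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

default_acc = SVC().fit(X_train, y_train).score(X_test, y_test)

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100],
                              "gamma": ["scale", 0.001, 0.01]}, cv=3)
search.fit(X_train, y_train)
tuned_acc = search.score(X_test, y_test)

print(f"default parameters: {default_acc:.3f}")
print(f"after grid search : {tuned_acc:.3f}   best: {search.best_params_}")
```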
With the above advice setting the stage, this special issue examines emerging questions and algorithms related to complex visual processing tasks where machine learning is applicable. This spans a number of important problems at multiple stages of the image analysis pipeline, from features to decision-making strategies, all the way through to end-user applications. This issue brings together seven articles describing original research that is closely matched to these stages.

At the cusp of current visual recognition capabilities are algorithms that learn features, instead of just blindly applying hand-tuned features that are not domain specific. In Hyperspectral Image Classification via Contextual Deep Learning, Ma et al. examine the applicability of this paradigm to hyperspectral imaging, and report promising results for classification in remote sensing, where this modality is commonly deployed. This article presents a highly effective, yet highly parameterized, learning-based framework; what options do we have to tune it? In their article On the Optical Flow Model Selection Through Metaheuristics, Pereira et al. propose the use of methods from the area of evolutionary computing to optimize parameter sets during training to minimize error. The results after searching the parameter space this way are remarkably better.

Making a good decision is just as critical as learning a good representation. Chen et al. introduce us to a new scalable strategy for large-scale learning in A Robust SVM Classification Framework Using PSM for Multi-class Recognition. The final supervised classification step of an image analysis pipeline is often a bottleneck; robustness and speed in the overarching framework are the key to solving this, according to Chen et al.

As any practitioner of machine learning knows, there are some situations in which we do not have labels for all of our training data. A viable solution in such a case is to learn from labeled and unlabeled images via semi-supervised learning. In A Semi-Supervised Learning Algorithm for Relevance Feedback and Collaborative Image Retrieval, Pedronette et al. highlight the power of this approach for communities of users participating in collaborative image retrieval.

Once we have tools that can learn over large amounts of data, a host of previously unapproachable problems can be solved. There is, for instance, an immediate need for medical imaging algorithms that can support doctors in making accurate diagnoses. In Oriented Relative Fuzzy Connectedness: Theory, Algorithms, and its Applications in Hybrid Image Segmentation Methods, Bejar and Miranda describe a new method for segmentation that is applied to brain and chest images from MRI and CT scans. And, at the cellular level, Xu et al. target the identification of antinuclear antibodies as evidence of autoimmune diseases with their work HEp-2 Cells Classification Based on a Linear Local Distance Coding Framework. And finally, Islam et al. take us on a global tour to witness the regional diversity and often surprising cultural homogeneity of the human face, as represented by models built from geo-tagged face images in their article Large-Scale Geo-Facial Image Analysis.

We hope that you enjoy this special issue as much as we did while putting it together.

Acknowledgements
The editors would like to thank all of the reviewers for their hard work and insightful comments, which have allowed us to assemble an outstanding special issue. Their contribution is much appreciated. Prof. Anderson Rocha also thanks the financial support of the Brazilian Coordination for the Improvement of Higher Level Education Personnel (CAPES) through the DeepEyes project.

Author details
1 Institute of Computing, University of Campinas, Av. Albert Einstein, 1251, Cidade Universitária Zeferino Vaz, 13084-971 Campinas, SP, Brazil.
2 Department of Computer Science and Engineering, University of Notre Dame, Fitzpatrick Hall of Engineering, 46556 Notre Dame, Indiana, USA.

Received: 14 July 2015   Accepted: 15 July 2015

References
1. K Matzen, N Snavely, in ECCV. Scene chronology, (2014)
2. R Pandey, D Wei, V Jagadeesh, R Piramuthu, A Bhardwaj, in IEEE ICIP. Cascaded sparse color-localized matching for logo retrieval, (2014)
3. Y Taigman, M Yang, M Ranzato, L Wolf, in IEEE CVPR. Deepface: closing the gap to human-level performance in face verification, (2014)
4. P Turaga, R Chellappa, VS Subrahmanian, O Udrea, Machine recognition of human activities: a survey. IEEE T-CSVT. 18(11), 1473–1488 (2008)
5. LP Jain, WJ Scheirer, TE Boult, in ECCV. Multi-class open set recognition using probability of inclusion, (2014)
6. WJ Scheirer, A Rocha, A Sapkota, TE Boult, Toward open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI). 35(7), 1757–1772 (2013)
7. J Markoff, Researchers announce advance in image-recognition software. (The New York Times, 2014)
8. C Szegedy, W Zaremba, I Sutskever, J Bruna, D Erhan, I Goodfellow, R Fergus, in International Conference on Learning Representations (ICLR). Intriguing properties of neural networks, (2014)
9. C-Y Tsai, DD Cox, Are deep learning algorithms easily hackable? http://deeplearning.twbbs.org/. Accessed March 16, 2015
10. Z-L Lu, B Dosher, Visual psychophysics: from laboratory to theory. (The MIT Press, Cambridge, MA, 2013)
11. O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, MS Bernstein, AC Berg, L Fei-Fei, Imagenet large scale visual recognition challenge. CoRR (2014). abs/1409.0575
12. A Torralba, AA Efros, in IEEE CVPR. Unbiased look at dataset bias, (2011)
13. Y Bengio, Y LeCun, in Large Scale Kernel Machines. Scaling learning algorithms towards AI (MIT Press, Cambridge, MA, 2007), pp. 321–358
14. M Dawkins, C Stewart, S Gallager, A York, in IEEE WACV. Automatic scallop detection in benthic environments, (2013)
15. K Pal, C Schüller, D Panozzo, O Sorkine-Hornung, T Weyrich, Content-aware surface parameterization for interactive restoration of historical documents. Computer Graphics Forum (Proc. Eurographics). 33(2) (2014)
16. R Tamburo, E Nurvitadhi, A Chugh, M Chen, A Rowe, T Kanade, SG Narasimhan, in ECCV. Programmable automotive headlights, (2014)
