
A maximum partial entropy-based method for multiple-instance concept learning

Published in: Applied Intelligence

Abstract

Multiple-instance (MI) learning aims to identify an underlying concept from collectively labeled data. A training sample is a set of unlabeled instances, known as a bag. A bag as a whole is labeled positive if at least one of its instances is positive, and negative otherwise. Given such training samples, the goal is to learn a description of the instance(s) common to the positive bags, i.e., the underlying concept responsible for the positive label. In this work, we introduce a learning scheme for MI concept learning based on the notion of partial entropy. Partial entropy accentuates intra-class information by focusing on the information contributed by the positive class in proportion to the total entropy; maximizing it equalizes the likelihoods of intra-class outcomes within the positive class, which essentially reflects the intended concept. When coupled with a distance-based probabilistic model for MI learning, this is equivalent to seeking a concept estimate that equalizes the intra-class distances while the distance to negative bags is restrained. The optimization produces patterns that are similar to at least one instance from each positive bag while dissimilar from all instances in negative bags; these generated patterns correspond to prototypical concepts. Maximum partial entropy is conceptually simple, and experimental results on several MI datasets demonstrate its effectiveness in learning an explicit representation of the concept and its competitive performance when applied to classification tasks.
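The scheme described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the similarity kernel, the bandwidth `sigma`, the penalty weight `lam`, the synthetic data, and the candidate search over positive-bag instances (in the style of diverse-density methods) are all illustrative assumptions. It shows the core idea only: normalize a candidate concept's bag-level likelihoods over the positive bags, score the candidate by the entropy of that distribution (maximal when intra-class likelihoods are equal), and penalize similarity to negative bags.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic MI data: each positive bag hides one instance near
# the true concept among background instances; negative bags contain only
# background instances drawn from a region away from the concept.
true_concept = np.array([2.0, 2.0])
pos_bags = [np.vstack([true_concept + 0.1 * rng.standard_normal(2),
                       rng.uniform(-4, 0, size=(4, 2))])
            for _ in range(6)]
neg_bags = [rng.uniform(-4, 0, size=(5, 2)) for _ in range(6)]

def similarity(bag, t, sigma=1.0):
    # Distance-based probabilistic model: a bag's affinity to a candidate
    # concept t is governed by its closest instance (the MI assumption).
    d2 = ((bag - t) ** 2).sum(axis=1)
    return np.exp(-d2.min() / sigma ** 2)

def objective(t, lam=5.0):
    s = np.array([similarity(b, t) for b in pos_bags])
    p = s / s.sum()  # likelihoods of intra-class outcomes (positive bags)
    partial_entropy = -(p * np.log(p + 1e-12)).sum()  # max when p is uniform
    neg_penalty = sum(similarity(b, t) for b in neg_bags)
    return partial_entropy - lam * neg_penalty

# Candidate concepts: all instances from positive bags; keep the one that
# maximizes the penalized partial entropy.
candidates = np.vstack(pos_bags)
best = max(candidates, key=objective)
```

On this toy data, `best` lands on one of the near-concept instances: equalizing the positive-bag likelihoods favors a point roughly equidistant from (and close to) some instance in every positive bag, while the penalty keeps it away from negative instances.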

Fig. 1 and Fig. 2 (figure images not reproduced here)



Author information

Corresponding author: Iker Gondra.

Additional information

This work was supported by two Discovery Grants (#400297, #327476) from the Natural Sciences and Engineering Research Council (NSERC) of Canada.


Cite this article

Xu, T., Gondra, I. & Chiu, D.K. A maximum partial entropy-based method for multiple-instance concept learning. Appl Intell 46, 865–875 (2017). https://doi.org/10.1007/s10489-016-0873-0
