Research article
DOI: 10.1145/2783258.2783304

Deep Model Based Transfer and Multi-Task Learning for Biological Image Analysis

Published: 10 August 2015

Abstract

A central theme in learning from image data is to develop appropriate image representations for the specific task at hand. Traditional methods used handcrafted local features combined with high-level image representations to generate image-level representations. A practical challenge is thus to determine which features are appropriate for a specific task. For example, in the study of gene expression patterns in Drosophila melanogaster, texture features based on wavelets were particularly effective for determining developmental stages from in situ hybridization (ISH) images. Such an image representation is, however, not suitable for controlled vocabulary (CV) term annotation, because each CV term is often associated with only a part of an image. Here, we developed problem-independent feature extraction methods to generate hierarchical representations for ISH images. Our approach is based on deep convolutional neural networks (CNNs) that act on image pixels directly. To make the extracted features generic, the models were trained on a natural image set with millions of labeled examples. These models were then transferred to the ISH image domain and used directly as feature extractors to compute image representations. Furthermore, we employed a multi-task learning method to fine-tune the pre-trained models with labeled ISH images, and also extracted features from the fine-tuned models. Experimental results showed that feature representations computed by deep models based on transfer and multi-task learning significantly outperformed other methods for annotating gene expression patterns at different stage ranges. We also demonstrated that the intermediate layers of the deep models produced the best gene expression pattern representations.
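
The transfer step described in the abstract, using a network pre-trained on natural images as a fixed feature extractor and reading activations from an intermediate layer, can be pictured with a short sketch. This is illustrative only, not the authors' code: it assumes a torchvision AlexNet pre-trained on ImageNet as the transferred model, and the function name `extract_features` and the layer cutoff are choices made for the example.

```python
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

# Load a CNN pre-trained on a natural image set with millions of
# labeled examples (ImageNet) and freeze it: in the transfer setting
# it is used purely as a feature extractor, with no retraining.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()
for p in model.parameters():
    p.requires_grad = False

# Standard ImageNet preprocessing for the transferred model.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path: str, up_to_layer: int = 10) -> torch.Tensor:
    """Return activations from an intermediate convolutional layer.

    The paper reports that intermediate layers give the best gene
    expression pattern representations, so this sketch stops the
    forward pass partway through the convolutional stack instead of
    reading the final classifier output. The cutoff index is an
    assumption for illustration.
    """
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        for i, layer in enumerate(model.features):
            x = layer(x)
            if i == up_to_layer:
                break
    return torch.flatten(x, start_dim=1)  # image-level feature vector
```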
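The multi-task fine-tuning step can be sketched in the same spirit. Again, this is a hypothetical illustration rather than the paper's implementation: the shared-trunk, per-task-head design, the `MultiTaskCNN` class, and the head sizes are all assumptions for the example. The tasks here stand in for annotating CV terms at different stage ranges, with each CV term treated as an independent binary label.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiTaskCNN(nn.Module):
    """Pre-trained trunk shared across tasks, one output head per task.

    All tasks (stage ranges) fine-tune the transferred convolutional
    layers jointly, while each task keeps its own linear head for
    multi-label CV term annotation. Task count and sizes are
    illustrative placeholders.
    """

    def __init__(self, num_terms_per_task=(10, 10, 20)):
        super().__init__()
        backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        self.trunk = backbone.features        # shared, fine-tuned jointly
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        self.heads = nn.ModuleList(
            nn.Linear(256 * 6 * 6, n) for n in num_terms_per_task
        )

    def forward(self, x, task: int) -> torch.Tensor:
        z = torch.flatten(self.pool(self.trunk(x)), 1)
        return self.heads[task](z)  # logits for this task's CV terms

model = MultiTaskCNN()
criterion = nn.BCEWithLogitsLoss()  # each CV term is an independent label
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

# One fine-tuning step on a placeholder batch standing in for labeled
# ISH images of task 0 (random tensors here, purely for illustration).
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4, 10)).float()
loss = criterion(model(images, task=0), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because the trunk parameters receive gradients from every task's head, the shared layers are regularized across stage ranges, which is the usual rationale for multi-task fine-tuning when each individual task has limited labeled data.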


Published In

KDD '15: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
August 2015, 2378 pages
ISBN: 9781450336642
DOI: 10.1145/2783258

Publisher

Association for Computing Machinery, New York, NY, United States

    Author Tags

    1. bioinformatics
    2. deep learning
    3. image analysis
    4. multi-task learning
    5. transfer learning

    Acceptance Rates

KDD '15 paper acceptance rate: 160 of 819 submissions (20%)
Overall acceptance rate: 1,133 of 8,635 submissions (13%)
