DOI: 10.1145/3295500.3356180
Research article · Public Access

Etalumis: bringing probabilistic programming to scientific simulators at scale

Published: 17 November 2019

Abstract

Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models. However, applications to science remain limited because of the impracticality of rewriting complex scientific simulators in a PPL, the computational cost of inference, and the lack of scalable implementations. To address these limitations, we present a novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines for tractable inference. To guide IC inference, we perform distributed training of a dynamic 3DCNN-LSTM architecture with a PyTorch-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global mini-batch size of 128k, achieving a performance of 450 Tflop/s through enhancements to PyTorch. We demonstrate a Large Hadron Collider (LHC) use case with the C++ Sherpa simulator and achieve the largest-scale posterior inference in a Turing-complete PPL.
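The abstract's key architectural idea is the execution protocol: rather than rewriting a simulator in a PPL, every random draw inside the simulator is routed to the inference engine over a messaging layer, so the engine can record and steer the full execution trace. The sketch below illustrates that idea only; it is not the paper's implementation. The actual protocol serializes messages in a binary format over ZeroMQ, whereas this toy uses JSON, and the names `ppl_server`, `simulator`, and the `beam_energy` latent are invented for the example.

```python
# Toy illustration of a probabilistic execution protocol: the simulator
# asks the inference engine for every random draw instead of drawing
# locally, so the engine owns the randomness and records the trace.
import json
import random
import zmq

# --- inference-engine side: owns the randomness, records the trace ----
def ppl_server(n_draws, port=5555):
    sock = zmq.Context().socket(zmq.REP)
    sock.bind(f"tcp://*:{port}")
    trace = []
    for _ in range(n_draws):
        msg = json.loads(sock.recv())          # {"op": "sample", ...}
        value = random.gauss(*msg["params"])   # here: draw from the prior;
        trace.append((msg["addr"], value))     # an IC engine would draw from
        sock.send(json.dumps(value).encode())  # a learned proposal instead
    return trace

# --- simulator side: each latent variable becomes a protocol round-trip
def simulator(port=5555):
    sock = zmq.Context().socket(zmq.REQ)
    sock.connect(f"tcp://localhost:{port}")
    def sample(addr, mu, sigma):
        sock.send(json.dumps({"op": "sample", "dist": "normal",
                              "params": [mu, sigma],
                              "addr": addr}).encode())
        return json.loads(sock.recv())
    return sample("beam_energy", 100.0, 5.0)   # hypothetical latent
```

Run `ppl_server` and `simulator` in separate processes; because the engine answers every `sample` request, it can replay, condition, or guide the simulator without modifying the simulator's control flow.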
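On the training side, "distributed training ... with a PyTorch-MPI-based framework" and a "global mini-batch size of 128k" describe synchronous data-parallel SGD: each rank computes gradients on a local data shard, and gradients are averaged with an all-reduce before every optimizer step. A minimal sketch of that pattern follows, assuming a PyTorch build with MPI support; the tiny MLP is a stand-in for the paper's dynamic 3DCNN-LSTM inference network.

```python
# Hedged sketch of synchronous data-parallel training over MPI.
# Launch with e.g.: mpirun -n 4 python train_sketch.py
import torch
import torch.distributed as dist

dist.init_process_group(backend="mpi")  # requires PyTorch built with MPI
world = dist.get_world_size()
torch.manual_seed(dist.get_rank())      # each rank gets its own data shard

model = torch.nn.Sequential(            # placeholder inference network
    torch.nn.Linear(64, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
local_batch = 128                       # global batch = world * local_batch

for step in range(100):
    x = torch.randn(local_batch, 64)    # synthetic per-rank mini-batch
    y = torch.randn(local_batch, 1)
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    for p in model.parameters():        # average gradients across ranks:
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world                 # -> one synchronous global step
    opt.step()
```

At the paper's scale this loop is where the engineering lives: 1,024 nodes with a per-node batch of 128 yield the reported 128k global mini-batch, and the communication and framework enhancements to PyTorch are what sustain 450 Tflop/s.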


Information

Published In

SC '19: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis
November 2019, 1921 pages
ISBN: 9781450362290
DOI: 10.1145/3295500
This work is licensed under a Creative Commons Attribution 4.0 International License.

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 17 November 2019

Author Tags

  1. deep learning
  2. inference
  3. probabilistic programming
  4. simulation

Qualifiers

  • Research-article

Conference

SC '19

Acceptance Rates

Overall Acceptance Rate 1,516 of 6,373 submissions, 24%


Bibliometrics

Article Metrics

  • Downloads (last 12 months): 616
  • Downloads (last 6 weeks): 59
Reflects downloads up to 30 Aug 2024

Citations

Cited By

  • (2024) StarfishDB: A Query Execution Engine for Relational Probabilistic Programming. Proceedings of the ACM on Management of Data 2(3), 1-31. DOI: 10.1145/3654988
  • (2023) Probabilistic Programming with Stochastic Probabilities. Proceedings of the ACM on Programming Languages 7(PLDI), 1708-1732. DOI: 10.1145/3591290
  • (2023) High Throughput Training of Deep Surrogates from Large Ensemble Runs. Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, 1-16. DOI: 10.1145/3581784.3607083
  • (2023) A Surrogate Model for Studying Solar Energetic Particle Transport and the Seed Population. Space Weather 21(12). DOI: 10.1029/2023SW003593
  • (2023) AquaSense: Automated Sensitivity Analysis of Probabilistic Programs via Quantized Inference. Automated Technology for Verification and Analysis, 288-301. DOI: 10.1007/978-3-031-45332-8_16
  • (2022) Rethinking variational inference for probabilistic programs with stochastic support. Proceedings of the 36th International Conference on Neural Information Processing Systems, 15160-15175. DOI: 10.5555/3600270.3601373
  • (2022) Rare and Different: Anomaly Scores from a combination of likelihood and out-of-distribution models to detect new physics at the LHC. SciPost Physics 12(2). DOI: 10.21468/SciPostPhys.12.2.077
  • (2022) Guaranteed bounds for posterior inference in universal probabilistic programming. Proceedings of the 43rd ACM SIGPLAN International Conference on Programming Language Design and Implementation, 536-551. DOI: 10.1145/3519939.3523721
  • (2022) Technology readiness levels for machine learning systems. Nature Communications 13(1). DOI: 10.1038/s41467-022-33128-9
  • (2022) Data-centric Engineering: integrating simulation, machine learning and statistics. Challenges and opportunities. Chemical Engineering Science 249, 117271. DOI: 10.1016/j.ces.2021.117271
