Navdeep Gill

Navdeep Gill works at the intersection of Responsible AI, AI Governance, and AI Risk Management, developing strategies and solutions that ensure AI systems meet the highest standards of transparency, fairness, accountability, and robustness. At ServiceNow, he collaborates with cross-functional teams to operationalize AI policies and risk frameworks, helping scale responsible AI practices across enterprise applications.

During his time at H2O.ai, Navdeep contributed to a wide range of initiatives spanning machine learning interpretability, automated machine learning (AutoML), and GPU-accelerated model training. He played a key role in embedding interpretability into H2O's flagship AutoML platform, Driverless AI, and was instrumental in the development of H2O AutoML, H2O4GPU, and H2O-3, projects that advanced distributed and accelerated machine learning for industry-scale data science.

In earlier work at Cisco and FICO, Navdeep designed and deployed machine learning solutions in the domains of contact center analytics and credit risk management. His work included projects on customer acquisition, account management, churn prediction, sentiment analysis, and operational optimization, applying advanced analytics to improve business outcomes.

Before entering the tech industry, Navdeep conducted research in cognitive neuroscience and visual psychophysics. At the University of California, San Francisco, he studied neural mechanisms of memory and attention, focusing on how these functions change with aging and dementia. At the Smith-Kettlewell Eye Research Institute, he explored how the brain perceives depth in 3D space, with a focus on the effects of brain injuries on visual perception and eye movement control.

His contributions span the fields of Responsible AI, AI Governance, AI Risk Management, Machine Learning, Neuroscience, and Cognitive Psychology, with publications and talks addressed to technical, academic, and industry audiences.

Education

California State University, East Bay
Hayward, CA
M.S. in Statistics with Designated Emphasis in Computational Statistics
Graduated 2014

B.S. in Statistics; B.A. in Psychology; Minor in Mathematics
Graduated 2012

Industry Experience

ServiceNow
Santa Clara, CA
Senior Staff Product Manager - Responsible AI
2024 - present

H2O.ai
Mountain View, CA
Lead Data Scientist
2021 - 2024
Senior Data Scientist
2017 - 2021
Data Scientist
2015 - 2017

Cisco
San Jose, CA
Data Scientist
2014 - 2015

FICO
San Rafael, CA
Data Scientist
2013 - 2014

Academic Experience

University of California, San Francisco
San Francisco, CA
Research Assistant
2012 - 2013

Smith-Kettlewell Eye Research Institute
San Francisco, CA
Research Assistant
2011 - 2012

Presentations

  • A Brief Overview of AI Governance for Responsible Machine Learning Systems
  • Ideas on Machine Learning Interpretability
  • Driverless AI Hands-On Focused on Machine Learning Interpretability
  • Secure Machine Learning
  • Interpretability for Generative AI
  • Actionable Strategies for Mitigating Risks and Driving Adoption with Responsible Machine Learning
  • Responsible Machine Learning with H2O Driverless AI
  • Deep Dive into Responsible Machine Learning with H2O Driverless AI
  • Fairness in AI and Machine Learning
  • From R Script to Production Using rsparkling

Media

  • Data Scientisting & Data Science | Machine Learning Interpretability | Open Source
  • AI Responsibility and Enterprise Automation | AI LA's Responsible AI Symposium 2021

Publications

Conference Presentations

  • Gill, N., Montgomery, K. (2024). Interpretability for Generative AI. H2O GenAI Day, Atlanta, GA, January 23.
  • Gill, N. (2023). Guardrails for LLMs. H2O Open Source GenAI World, San Francisco, CA, November 7.
  • Gill, N., Mathur, A. (2022). Incorporating AI Governance to Increase Adoption in Business Applications. MLOps World 2022, New York, NY, July 14.
  • Gill, N., Tanco, M. (2021). Security Audits for Machine Learning Attacks. MLOps World 2021, June 16.
  • Gill, N. (2021). Training Understandable, Fair, Trustable and Accurate Predictive Modeling Systems. Duke Machine Learning Day, Durham, NC, March 27.
  • Gill, N. (2019). Human Centered Machine Learning. Artificial Intelligence Conference, San Jose, CA, September 11.
  • Gill, N. (2019). Interpretable Machine Learning Using rsparkling. Symposium on Data Science and Statistics, Bellevue, WA, May 31.
  • Gill, N. (2019). Practical Machine Learning Interpretability Techniques. GPU Technology Conference, San Jose, CA, March 21.
  • Gill, N. (2018). Distributed Machine Learning with H2O. Joint Statistical Meetings, Vancouver, Canada, August 1.
  • Gill, N. (2018). H2O AutoML. Symposium on Data Science and Statistics, Reston, VA, May 16.
  • Hall, P., Gill, N., Chan, M. (2018). Practical Techniques for Interpreting Machine Learning Models: Introductory Open Source Examples Using Python, H2O and XGBoost. 1st ACM Conference on Fairness, Accountability, and Transparency, New York, NY, February 23-24.
  • Gill, N., Hall, P., Chan, M. (2017). Driverless AI Hands-On Focused on Machine Learning Interpretability. H2O World, Mountain View, CA, December 11.
  • Gill, N. (2017). From R Script to Production Using rsparkling. Spark Summit, San Francisco, CA, June 14.
  • Gill, N. (2016). Scalable Machine Learning in R with H2O. useR! Conference, Stanford University, Palo Alto, CA, July 11.
  • Greenberg, Z., Gill, N., Porat, S., Samaha, J., Kader, T., Voytek, B., Gazzaley, A. (2013). Increased Visual Cortical Noise Decreases Cued Visual Attention Distribution. 20th Annual Meeting of the Cognitive Neuroscience Society, San Francisco, CA, April 13-16.
  • Voytek, B., Porat, S., Chamberlain, J., Balthazor, J., Greenberg, Z., Gill, N., Gazzaley, A. (2013). Examining the Efficacy of the iPad and Xbox Kinect for Cognitive Science Research. 2nd Annual Meeting of the Entertainment Software and Cognitive Neurotherapeutics Society, Los Angeles, CA, March 15-17.
  • Tyler, C.W., Gill, N., Nicholas, S. (2012). Hysteresis in Stereoscopic Surface Interpolation: A New Paradigm. 12th Annual Meeting of the Vision Sciences Society, Naples, FL, May 11-16.
  • Gill, N., Fencsik, D. (2012). Effects of Disruptions on Multiple Object Tracking. California Cognitive Science Conference, UC Berkeley, CA, April 28.
  • Gill, N., Fencsik, D. (2011). Effects of Distractions on Recovery Time. Psychology Undergraduate Research Conference, UC Berkeley, CA, May 1.

Patents

  • Chan, M., Gill, N., & Hall, P. (2024). Model Interpretation. U.S. Patent No. 11,922,283. Washington, DC: U.S. Patent and Trademark Office.
  • Chan, M., Gill, N., & Hall, P. (2022). Model Interpretation. U.S. Patent No. 11,386,342. Washington, DC: U.S. Patent and Trademark Office.