Vol-2560
urn:nbn:de:0074-2560-0
Copyright © 2020 for the individual papers by the papers' authors.
Copyright © 2020 for the volume as a collection by its editors.
This volume and its papers are published under the Creative Commons License Attribution 4.0 International (CC BY 4.0).
SafeAI 2020
Artificial Intelligence Safety 2020
Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI 2020)
co-located with the 34th AAAI Conference on Artificial Intelligence (AAAI 2020)
New York, NY, USA, February 7, 2020.
Edited by
Huáscar Espinoza *
José Hernández-Orallo **
Xin Cynthia Chen ***
Seán S. ÓhÉigeartaigh ****
Xiaowei Huang *****
Mauricio Castillo-Effen ******
Richard Mallah *******
John McDermid ********
* CEA LIST, Gif-sur-Yvette, France, huascar.espinoza@cea.fr
** Universitat Politècnica de València, Spain, jorallo@upv.es
*** University of Hong Kong, China, cyn0531@hku.hk
**** University of Cambridge, Cambridge, United Kingdom, so348@cam.ac.uk
***** University of Liverpool, Liverpool, United Kingdom, xiaowei.huang@liverpool.ac.uk
****** Lockheed Martin, Advanced Technology Laboratories, Arlington, VA, USA, mauricio.castillo-effen@lmco.com
******* Future of Life Institute, USA, richard@futureoflife.org
******** University of York, United Kingdom, john.mcdermid@york.ac.uk
Table of Contents
Session 1: Adversarial Machine Learning
Session 2: Assurance Cases for AI-based Systems
Session 3: Considerations for the AI Safety Landscape
Session 4: Fairness and Bias
-
Fair Enough: Improving Fairness in Budget-Constrained Decision Making Using Confidence Thresholds
41-53
Michiel Bakker,
Humberto Riveron Valdes,
Duy Patrick Tu,
Krishna Gummadi,
Kush Varshney,
Adrian Weller,
Alex Pentland
-
A Study on Multimodal and Interactive Explanations for Visual Question Answering
54-62
Kamran Alipour,
Jurgen P. Schulze,
Yi Yao,
Avi Ziskind,
Giedrius Burachas
-
You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods
63-73
Botty Dimanov,
Umang Bhatt,
Mateja Jamnik,
Adrian Weller
Session 5: Uncertainty and Safe AI
-
A High Probability Safety Guarantee for Shifted Neural Network Surrogates
74-82
Melanie Ducoffe,
Sebastien Gerchinovitz,
Jayant Sen Gupta
-
Benchmarking Uncertainty Estimation Methods for Deep Learning With Safety-Related Metrics
83-90
Maximilian Henne,
Adrian Schwaiger,
Karsten Roscher,
Gereon Weiss
-
PURSS: Towards Perceptual Uncertainty Aware Responsibility Sensitive Safety with ML
91-95
Rick Salay,
Krzysztof Czarnecki,
Maria Elli,
Ignacio Alvarez,
Sean Sedwards,
Jack Weast
Poster Papers
-
Simple Continual Learning Strategies for Safer Classifiers
96-104
Ashish Gaurav,
Sachin Vernekar,
Jaeyoung Lee,
Vahdat Abdelzad,
Krzysztof Czarnecki,
Sean Sedwards
-
Fair Representation for Safe Artificial Intelligence via Adversarial Learning of Unbiased Information Bottleneck
105-112
Jin-Young Kim,
Sung-Bae Cho
-
Out-of-Distribution Detection with Likelihoods Assigned by Deep Generative Models Using Multimodal Prior Distributions
113-116
Ryo Kamoi,
Kei Kobayashi
-
SafeLife 1.0: Exploring Side Effects in Complex Environments
117-127
Carroll Wainwright,
Peter Eckersley
-
(When) Is Truth-telling Favored in AI Debate?
128-137
Vojtech Kovarik,
Ryan Carey
-
NewsBag: A Benchmark Multimodal Dataset for Fake News Detection
138-145
Sarthak Jindal,
Raghav Sood,
Richa Singh,
Mayank Vatsa,
Tanmoy Chakraborty
-
Algorithmic Discrimination: Formulation and Exploration in Deep Learning-based Face Biometrics
146-152
Ignacio Serna,
Aythami Morales,
Julian Fierrez,
Manuel Cebrian,
Nick Obradovich,
Iyad Rahwan
-
Guiding Safe Reinforcement Learning Policies Using Structured Language Constraints
153-161
Bharat Prakash,
Nicholas Waytowich,
Ashwinkumar Ganesan,
Tim Oates,
Tinoosh Mohsenin
-
Practical Solutions for Machine Learning Safety in Autonomous Vehicles
162-169
Sina Mohseni,
Mandar Pitale,
Vasu Singh,
Zhangyang Wang
-
Continuous Safe Learning Based on First Principles and Constraints for Autonomous Driving
170-177
Lifeng Liu,
Yingxuan Zhu,
Tim Yuan,
Jian Li
-
Recurrent Neural Network Properties and their Verification with Monte Carlo Techniques
178-185
Dmitry Vengertsev,
Elena Sherman
-
Toward Operational Safety Verification Via Hybrid Automata Mining Using I/O Traces of AI-Enabled CPS
186-194
Imane Lamrani,
Ayan Banerjee,
Sandeep Gupta
2020-02-27: submitted by José Hernández-Orallo,
metadata incl. bibliographic data published under Creative Commons CC0
2020-02-27: published on CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073)