Artificial superintelligence is expected by many experts to arrive before or around the turn of the century. Solving the AI-safety problem – making sure artificial superintelligences don't act in ways that create morally disastrous outcomes – is of immense importance. One relevant sub-problem that has been raised concerns the simulations run by such a superintelligence: what should we do given the possibility that these simulations have moral status? I approach this question by first situating it within the broader context of the AI-safety problem and using that framework's resources to formulate it as a decision problem under uncertainty. By considering debates regarding the grounds of moral status, as well as the computational realizability of mental properties, I further formalize the decision problem. To make progress on solving it, I identify questions on which we need headway in decision theory, ethics, and the philosophy of mind.