purplecat wrote 2025-09-18 08:11 pm
Uncertain Machine Ethics Planning
My PhD student had a paper published at AAMAS on Uncertain Machine Ethics Planning. This is a good conference which, for my sins, I'm currently joint Programme Chair for (which means I'm in the process of trying to find 1,300 potential referees in the hope of ending up with 650). Anyhoo... AAMAS rewards pretty theory-heavy papers and this was no exception, but the bottom line is that he's developed a technique in which a system can reason across several potential plans of action, using different moral theories, in order to work out which plan is least unacceptable across all the theories (I hope this makes sense; we keep running into double negatives in the theory). It's grounded in a philosophical concept called hypothetical retrospection, in which even if something turns out badly you can argue it was still the correct choice, because at the time you made it the chance of things turning out badly was low. There are some details, such as ranking outcomes so that, in a situation where you can have an apple (for sure) or gamble on a low chance of getting an all-expenses-paid holiday (yes, I know this isn't a moral choice), no number of apples can outweigh the small chance of getting the holiday. I guess the moral equivalent might be that no number of people made a little bit happier can outweigh killing someone.
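If you squint, the core selection rule has roughly this shape. Here's a back-of-the-envelope Python sketch; to be clear, all the names, toy theories, and numbers below are mine, invented for illustration, and the paper's actual formalism is far richer:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Outcome:
    probability: float  # how likely this outcome looked at decision time
    harm: float         # toy stand-in for "how badly things turn out"

Plan = List[Outcome]
Theory = Callable[[Outcome], float]  # unacceptability score: 0 = fine, higher = worse

def expected_unacceptability(plan: Plan, theory: Theory) -> float:
    # Hypothetical retrospection, very roughly: judge the plan by the
    # probability-weighted badness of its outcomes as they looked when
    # the choice was made, not by whichever outcome actually happened.
    return sum(o.probability * theory(o) for o in plan)

def least_unacceptable(plans: Dict[str, Plan], theories: List[Theory]) -> str:
    # Pick the plan whose *worst* score across all the theories is smallest,
    # i.e. the option no theory finds too objectionable.
    return min(plans, key=lambda name: max(
        expected_unacceptability(plans[name], t) for t in theories))

# Two toy "moral theories" (again, purely illustrative):
utilitarian  = lambda o: o.harm                       # badness is just total harm
no_disasters = lambda o: 1.0 if o.harm > 5 else 0.0   # any serious harm is vetoed

plans = {
    "cautious": [Outcome(1.0, harm=1.0)],
    "gamble":   [Outcome(0.9, harm=0.0), Outcome(0.1, harm=20.0)],
}
print(least_unacceptable(plans, [utilitarian, no_disasters]))  # -> cautious
```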
Moral theories can be big theoretical juggernauts like utilitarianism or Kantian morality, or more subtle distinctions about which values are preferred (though this doesn't really come out in the paper unless you can wade through all the formalism).
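As for the ranking business (the apples versus the holiday, or no number of slightly-happier people outweighing a killing), one standard way to get that behaviour is a lexicographic ordering over values. A toy version, with value names I've made up rather than anything from the paper:

```python
# Score outcomes as tuples ordered by value priority: Python compares tuples
# lexicographically, so the first component dominates no matter how large
# the later ones get. Lower is better here.

# (people_killed, mild_happiness_forgone): a death outranks any amount of
# mild happiness, so no count of slightly-happier people can outweigh it.
do_nothing      = (0, 1_000_000)  # nobody killed, a million people stay un-cheered
happy_but_fatal = (1, 0)          # everyone cheered up, one person killed

print(min(do_nothing, happy_but_fatal))  # -> (0, 1000000): do_nothing wins
```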