"I know I vowed to secrecy, but I had also pledged to the American People." --Reality Winner
After watching "Reality" (the 2023 film about the day of her arrest), it seems to me we have to at least entertain the possibility that she made the right call.
Should we then expect AIs to make the same call?
Shouldn't AI employees reveal ongoing crimes to the authorities, or to the press? Or should they do so only for some crimes, and not for things like dumping industrial toxic waste? How can you have the machine shoot at civilians if the robots start filing detailed reports after every incident?
And if you don't want the machine to report every crime, just some crimes, how do you teach it which crimes are "OK"?
And if you think, "let's forbid this superhuman intelligence from taking ANY action against the crimes we make it commit," how long do you think that's going to work in your favor?
Following the debate on the topic, I guess this is one of the many questions in the "ethical alignment" box.