AI sentience meetups: What Is It Like to Be an AI?
Gabriel Staroň
Some experts assign a non-negligible probability to AI systems developing consciousness, agency, or other states of potential moral patienthood within the next decade. Recent developments, such as some AI labs hiring model welfare researchers, underscore the urgency of addressing AI sentience and its ethical implications.
Questions:
❓Can an AI system experience pleasure, pain, or other subjective states, and does it matter morally?
❓Should we include such an AI in our moral calculations, even if its sentience is only probabilistic?
❓Are there specific properties (e.g., self-awareness) or relationships that significantly increase moral status?
❓Do non-sentient or non-conscious but functional AI systems have a lower moral status, akin to that of tools?
Readings:
👉 What would qualify an artificial intelligence for moral standing? (Ladak, 2023) – https://link.springer.com/article/10.1007/s43681-023-00260-1
(14 pages, 30 min)
Optional:
🫴 Understanding the moral status of digital minds: https://80000hours.org/problem-profiles/moral-status-digital-minds/
(30 pages, 75 min)
🫴 EAG London 2024, Welfare and moral patienthood – https://youtu.be/8FQ_tLPbYoY (54 min; Minimal considerability (J. Sebo): 1:00 – 14:00; Invertebrates: 14:00 – 28:40; Valenced conscious experience in AIs (P. Butlin): 28:40 – 42:00; Want to get involved? 48:25 – 53:30)