We use terms like “sentience,” “consciousness,” and “self-awareness” when discussing advanced AI, but there’s little agreement on what these actually mean—or how we’d ever know if a machine had them.
- What does “sentience” in a machine mean to you?
- Is it just a matter of sufficient complexity and computation, or is something fundamentally missing from computation alone?
- What tests or indicators would convince you (or society) that a machine is sentient?
- If we did conclude a machine was sentient, how should that change the way we treat AI systems?
Curious to hear both philosophical takes and practical perspectives.