AI and the Sense of Self
How AI will cause us to question what it means to be human
Advanced general intelligence, demonstrated through abilities like language and complex problem-solving, used to be unique to humans. The rise of intelligence in computers raises many questions about what it means to be human.
Descartes viewed the mind (“res cogitans”) as a non-physical substance, separate from the physical world. While this viewpoint is less common nowadays among scientists and philosophers, it’s still implicitly held by many people. AI models can now think, in the sense of communicating and solving problems, which calls this viewpoint into question. If thinking can be performed by a physical process in AI, presumably it can be performed by a physical process in human minds too.
People still view themselves as having an “essential self” that includes their thoughts and goals, but AI will affect this perception. People will become more plugged into AI over time, initially through devices (such as smart glasses) and eventually through more direct brain-computer interfaces. AI will enhance a person’s thinking abilities and help them with their goals, and perhaps people will extend their sense of self to include the AI. But trying to include an external service within the sense of self will weaken the whole concept: if an external, changing service is part of the self, the self cannot be so absolute. This may eventually lead more people to recognize that none of their thoughts define them, regardless of where those thoughts arise. People may realize that the self is more flexible than assumed, which could lead many to the Buddhist conclusion that there is no essential or separate self, just different processes interacting.
People and society generally assume there is true free will: that we have something inside us that chooses what to do, separate from the regular chain of physical cause and effect. People get angry at a human who does wrong in a way they wouldn’t get angry at an animal that does wrong, because they assign free will to the human. As AI models act more independently, will people get angry at a model when it does something wrong and feel gratitude toward it when it does something right? People don’t think an AI model could have done otherwise, but they may start assigning it moral responsibility anyway. If so, this could help spread compatibilist views of free will, in which moral responsibility is assigned even if everything is determined. And if people don’t assign moral responsibility to AI models (which will seem very similar to humans), they may change how they assign responsibility to humans too.
Related story:


