By Jason Lim
By AI, I actually mean augmented intelligence, not artificial intelligence. And by augmented intelligence, I mean technology that will allow us to do what we already do, only faster, higher, stronger and more. In short, AI doesn't replace the human; it makes us better in every way.
While this seems less threatening than artificial intelligence, with its PR baggage of Skynet and the Terminators, HAL 9000 and Dave, augmented intelligence may be the greater threat to what it means to be a human being, because it can fundamentally reshape how we define ourselves.
Arati Prabhakar, the former director of DARPA, writes in Wired, "What's drawing us forward is the lure of solutions to previously intractable problems, the prospect of advantageous enhancements to our inborn abilities, and the promise of improvements to the human condition. But as we stride into a future that will give our machines unprecedented roles in virtually every aspect of our lives, we humans ― alone or even with the help of those machines ― will need to wrangle some tough questions about the meaning of personal agency and autonomy, privacy and identity, authenticity and responsibility. Questions about who we are and what we want to be."
The critical question ― and may I pose it as an existential one for human beings ― is whether the way we experience the "self" and exercise our agency can survive AI.
In his TED Talk titled "Your brain hallucinates your conscious reality," Anil Seth explains how reality is a controlled hallucination created by the brain in order to understand the dangers and opportunities in our environment and make decisions that optimize our survival. The brain creates this hallucination by taking sensory inputs (electrical impulses communicated to the brain through our five senses) and interpreting them in the context of some biological or conditioned cognitive framework to make sense of those inputs. As such, reality is a hallucination constructed in equal parts from external inputs and preconceptions.
We construct our sense of self in a similar fashion. Seth states, "Now I'm going to tell you that your experience of being a self, the specific experience of being you, is also a controlled hallucination generated by the brain. This seems a very strange idea, right? Yes, visual illusions might deceive my eyes, but how could I be deceived about what it means to be me? For most of us, the experience of being a person is so familiar, so unified and so continuous that it's difficult not to take it for granted. But we shouldn't take it for granted. There are in fact many different ways we experience being a self. There's the experience of having a body and of being a body. There are experiences of perceiving the world from a first person point of view. There are experiences of intending to do things and of being the cause of things that happen in the world. And there are experiences of being a continuous and distinctive person over time, built from a rich set of memories and social interactions. Many experiments show, and psychiatrists and neurologists know very well, that these different ways in which we experience being a self can all come apart. What this means is the basic background experience of being a unified self is a rather fragile construction of the brain."
The question then becomes, "If AI can change the way we construct this hallucination called reality, how might AI change the way we construct the hallucination called the 'self'?"
After all, AI is all about enhancing the cognitive functioning of the brain. But if the natural cognitive functioning of the brain is what gives us our sense of self and our capacity to wield agency in our lives, then what happens to "us" when our cognitive functioning is enhanced? Or do we experience ourselves in totally different and unpredictable ways that fundamentally affect how we define the self and agency? At that point, are we no longer us?
Then what about inequality? We know that some of us will be blessed enough to be "augmented" while the vast majority of others won't have the opportunity. Moreover, a select few will be augmented with better algorithms and interfaces, while others will be less augmented. If the super-augmented experience their selves in different ways from the normal-augmented and the non-augmented, then what happens to our collective human experience? At that point, we would be constructing substantially different hallucinations from one another and insisting that ours is the true reality. Will that be the dawn of a new religious war?
At this point, no one knows. But one thing is certain. The way we human beings have evolved to perceive reality is a product of our evolutionary journey as biological organisms. When we develop technology to change how we perceive that reality, then we are not "augmenting" humans. We are evolving ourselves into a different reality.
As Prabhakar wrote, "We and our technological creations are poised to embark on what is sure to be a strange and deeply commingled evolutionary path."
Jason Lim is a Washington, D.C.-based expert on innovation, leadership and organizational culture. He has been writing for The Korea Times since 2006. Reach him at jasonlim@msn.com, facebook.com/jasonlimkoreatimes or @jasonlim2012.