System, Being and Consciousness
One of the current hot topics among researchers of artificial intelligence (AI) is consciousness. Consciousness has been an object of interest for philosophers for several centuries; it is only in recent times that scientific curiosity about it has gathered momentum.
The reason for this newfound interest is rapid advances in "autonomous" machines, which can be programmed to act rationally and make decisions "on their own". Rationality is fundamentally driven by two elements: self-interest and utility maximization. All living beings are rational in this sense. But there are several nuances as well.
Utility maximization is an optimization problem at its core, and depends on how much information the autonomous agent has about the situation, how much it can afford to compute, and so on.
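To make this concrete, here is a minimal sketch of rational choice as expected-utility maximization. Everything in it -- the actions, beliefs, and utility values -- is invented for illustration; a real agent's information would be incomplete and its computation bounded, which is exactly where the difficulty lies.

```python
# A minimal sketch of rational choice as expected-utility maximization.
# All actions, probabilities and utilities below are illustrative only.

def expected_utility(action, beliefs, utility):
    """Average utility over outcomes, weighted by the agent's beliefs."""
    return sum(p * utility[(action, outcome)]
               for outcome, p in beliefs[action].items())

def best_action(actions, beliefs, utility):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, utility))

# Hypothetical example: choosing a route with uncertain traffic.
actions = ["shortcut", "main_road"]
beliefs = {  # the agent's (limited) information about each outcome
    "shortcut":  {"fast": 0.6, "stuck": 0.4},
    "main_road": {"fast": 0.3, "stuck": 0.7},
}
utility = {
    ("shortcut", "fast"): 10, ("shortcut", "stuck"): -5,
    ("main_road", "fast"): 8, ("main_road", "stuck"): 2,
}
print(best_action(actions, beliefs, utility))  # shortcut (EU 4.0 vs 3.8)
```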
And autonomy is not all about rationality. Humans and other animals that behave autonomously exhibit several other characteristics beyond being driven (just) by self-interest. At the very least, many animals acknowledge the presence of others, and others' self-interests, in their own processes of utility maximization.
Including others' interests in one's own calculations opens up the vast area of ethics and morality. Such questions are now becoming mainstream in AI research as we make advances in autonomous behaviour.
A commonly cited moral dilemma in AI is a variant of the trolley problem. Suppose you are sitting in an autonomous, self-driving car that is taking you to work. The car suddenly encounters a pregnant woman crossing the street right in front of a bridge. The car has two choices: hit the woman (which would be fatal for her) and save the occupant of the car, or swerve to avoid her and plunge into the ditch, almost certainly killing the occupant. Which choice should it (be programmed to) favour?
Similarly, there was a recent incident in the US in which a police robot was sent to kill a gunman who was shooting at police officers. While the operation was justified on the grounds of not risking officers' lives, other camps noted that a human entering the building would have carried some hope of convincing the gunman to surrender, whereas the robot was only optimizing to kill.
The human could have made a "conscious connection" with the assailant, and might have found an optimal solution at a completely different epistemological level, one that was not even accessible to the robot.
The debate about machine consciousness began alongside the initial advances in AI in the 1970s. In response to those debates, Roger Penrose wrote a book called The Emperor's New Mind (1989), where he asserted that consciousness was beyond the capabilities of what a computer can do -- even theoretically.
AI research hit a plateau around that time, because much of its logic and reasoning was modeled with closed-world operations (treating ignorance as falsity), which limited its expressive abilities. By the 1990s, however, AI found a new lease of life with advances in the theory of agency -- autonomous agents working in an open-world environment.
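The difference is easy to see in a toy example (the knowledge base and queries below are invented). Under the closed-world assumption, anything not known to be true is reported false; an open-world reasoner instead distinguishes "false" from "unknown".

```python
# Toy contrast between closed-world and open-world query answering.
# The knowledge base and queries are invented for illustration.

known_true = {"bird(tweety)", "bird(polly)"}

def closed_world(query):
    # Closed-world assumption: ignorance is treated as falsity.
    return query in known_true

def open_world(query, known_false=frozenset()):
    # Open-world: a query may be true, false, or simply unknown.
    if query in known_true:
        return True
    if query in known_false:
        return False
    return "unknown"

print(closed_world("bird(hedwig)"))  # False -- absence of knowledge read as falsity
print(open_world("bird(hedwig)"))    # 'unknown' -- leaves room for unstated facts
```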
The expressiveness of intelligent agents was so much richer than that of the previous form of AI that many researchers refused to believe Penrose's claim, contending that computation can indeed create behaviours that may be termed consciousness. This position is also called the "strong AI" claim.
Another skeptical attack against computational consciousness and strong AI is John Searle's Chinese Room argument. Imagine a person sitting in a room that has a single window, through which others ask questions in Chinese. The person in the room cannot speak or read Chinese, but has access to a vast library and a set of rules defining how to manipulate any given sequence of symbols written in Chinese. The person can hence provide answers to all the questions asked. Would we call this person "conscious" in the sense of being able to converse in Chinese?
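The setup can be caricatured in a few lines of code. The tiny rule table below is a hypothetical stand-in for Searle's vast library; the point is only that the responder maps symbols to symbols without any representation of what they mean.

```python
# A caricature of the Chinese Room: answers are produced by rule-governed
# symbol lookup, with no representation of meaning anywhere in the program.
# This tiny table is a stand-in for the "vast library" in the argument.

rule_book = {
    "你好吗?": "我很好，谢谢。",   # "How are you?" -> "I am fine, thanks."
    "你会说中文吗?": "会。",       # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(question: str) -> str:
    # Pure symbol manipulation: match the pattern, emit the answer.
    return rule_book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # passes as conversation, understands nothing
```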
Searle's argument basically says that a computer, no matter how "intelligently" it may appear to behave, is only manipulating symbols according to rules. It is not really "aware" of the meaning of what it is doing.
Despite this skepticism, computational consciousness is a hot area of research. Penrose, for example, received a number of criticisms for his first book -- especially criticisms alleging that he had left the path of science and was speaking like the "Eastern mystics". In response, Penrose came back with another book, Shadows of the Mind (1994), where he distanced himself from the "Eastern mystics" and asserted that mind is a result of quantum mechanical effects whose mathematical expressiveness is richer than the set of computable numbers (basically, everything that computers can do).
~*~*~*~*~*~
In this post, I'd like to shift focus to the much-maligned "Eastern mystics" and their view of consciousness, and to why speaking like the Eastern mystics does not automatically make one unscientific.
I've argued in earlier posts that Eastern hermeneutics is based on the "system" as the fundamental building block of the universe -- in contrast to the "particle" that forms the basis of Western thought.
Conventionally (in Western thought) a "system" is seen as a composite entity, comprising several parts and the interactions between them. Hence the assertion that a "system" is a foundational element seems to make no sense at first glance. It seems to simply beg the question: if everything is made of systems, what are systems themselves made of?
This is where it is important to understand Eastern hermeneutics.
The building block of the universe in Eastern thought is of course not called a "system" -- it is called a "being" (in Sanskrit, Asthita -- that which has dynamics). A "being" is something that can exist in different states. The dynamics of a being is an integral, autonomous, inseparable characteristic of the being.
Hence, a "system" is not built from particles and interactions, but by beings autonomously coming together (driven by their dynamics) to form bigger beings.
The dynamics of a being gives it certain characteristics which enable it to combine with other beings to form bigger and richer beings -- or even to annihilate or subdue one another. Viruses (essentially genetic material wrapped in a protein coat), for instance, have dynamics that make them combine with elements of another being's body, resulting in either an infection or a routine part of the larger being's metabolism. Drugs are beings (molecules) that can dock to viruses, thus subduing their dynamics. Different atoms in a chemical soup are beings with different affinities and disaffinities towards one another, which make them eventually combine into one or more complex compounds.
In the theory of beings, structure and dynamics are inseparable. They are encapsulated as a single abstraction, collectively forming the being.
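To make this encapsulation concrete, a "being" could be sketched as an object that carries its state and its own dynamics together, with the dynamics belonging to the being rather than imposed on it from outside. The class below and its toy dynamics are my own illustrative assumptions, not an established formalism.

```python
# A hypothetical sketch of a "being": state and dynamics encapsulated as a
# single abstraction. Purely illustrative, not an established formalism.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Being:
    state: Any
    dynamics: Callable[[Any], Any]  # how the being evolves, on its own

    def step(self):
        # The being changes state autonomously: its dynamics is part of it,
        # not an interaction imposed from outside.
        self.state = self.dynamics(self.state)
        return self.state

# A toy being that oscillates between two states.
flip = Being(state=0, dynamics=lambda s: 1 - s)
print(flip.step(), flip.step(), flip.step())  # 1 0 1
```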
In such a theory, "consciousness" is basically a graded function of the "awareness" possessed by the being. The more aware a being is, the more conscious it is. And of course, the limit to awareness is the universal consciousness or the universal self. The universe itself is a being -- the largest possible and most expressive being that can ever be.
Note that the theory of beings does not directly answer whether consciousness is computationally tractable. This is because computable numbers are not "beings" per se: they do not possess innate dynamics. When computation is "embodied" in a machine, the machine is a "being" in the sense of possessing dynamics. However, such a being is not built by the algebra of being-composition -- that is, larger beings formed by the autonomous composition of smaller beings.
When a larger being is composed by the autonomous composition of smaller beings, the autonomy displayed by the larger being is a function of the autonomy possessed by its constituent beings. Our own autonomous behaviour as humans, for instance, is a (very complex) function of the autonomy possessed by the trillions of cells that we are made of.
This property does not hold for a robot that is designed with a top-down teleological objective and embodied in software that encodes behaviours for rational choice.
In the "Eastern mystics" view, there are no "machines" in nature -- only "societies". Machines are top-down constructs, built by a creator for a teleological objective. In contrast, societies are bottom-up phenomena, where several autonomous beings come together to result in a complex and rich ensemble.
To come back to the original question of whether machines can be made truly conscious, or whether consciousness is computationally tractable: perhaps we should first define a mathematics of beings and of the autonomous composition of beings, and build machines (which would be more like societies) based on such mathematics, as sketched below.
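As a very rough first gesture at what such a mathematics might look like (reusing the hypothetical Being sketch from earlier, with the coupling function purely invented), composition could be an operator that forms a larger being whose state collects its constituents' states and whose dynamics is built from their autonomous dynamics:

```python
# A rough, hypothetical sketch of "autonomous composition": the larger
# being's dynamics is literally a function of its constituents' dynamics,
# so its autonomy derives from theirs. A gesture at the idea, not a theory.

def compose(beings, coupling):
    def joint_dynamics(states):
        # Each constituent first evolves by its own dynamics...
        stepped = [b.dynamics(s) for b, s in zip(beings, states)]
        # ...then a coupling mediates how they influence one another.
        return coupling(stepped)
    return Being(state=[b.state for b in beings], dynamics=joint_dynamics)

# Toy example: two flip-flop beings coupled by a pull towards consensus.
a = Being(state=0, dynamics=lambda s: 1 - s)
b = Being(state=1, dynamics=lambda s: 1 - s)
society = compose([a, b], coupling=lambda states: [max(states)] * len(states))
print(society.step())  # [1, 1]
print(society.step())  # [0, 0]
```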
Maybe then, we might be able to build truly conscious machines someday.