TPF : Consciousness Defined

A place for discussion of ideas presented in the BothAndBlog, or relevant to the Enformationism thesis.

TPF : Consciousness Defined

Post by Gnomon » Sun Jun 26, 2022 12:36 pm

Consciousness Encapsulated
https://thephilosophyforum.com/discussi ... ent/712359

How is conscious mind essentially different to AI on a strictly operational level? How would you go about programming such a thing? — enqramot

Your question hinges on your philosophical or technical definition of "Consciousness". Literally, the "-ness" suffix implies that the reference is to a general State or felt Quality (of sentience), not to a specific Thing or definite Quanta (e.g. neurons). In Nature, animated behavior (e.g. seeking food, or avoiding becoming food) is presumed to be a sign of minimal sentience and self-awareness.

AI programs today are able to crudely mimic sophisticated human behaviors, and the common expectation is that the animation & expressions of man-made robots will eventually be indistinguishable from those of their nature-made makers -- on an "operational level". When that happens, the prospect of enslaving sentient (knowing & feeling) beings could require the emancipation of artificial creatures, since modern ethical philosophy has decided that, in a Utopia, all "persons" are morally equal -- on an essential level.

Defining a proper ethical hierarchy is not a new moral conundrum though. For thousands of years, military captives were defined as "slaves", due to their limited freedom in the dominant culture. Since many captives of the ruling power happened to have darker skin, that distinguishing mark came to be definitive. At the same time, females in a male-dominated society, due to their lack of military prowess, were defined as second-class citizens. At this point in time, the social status of AI is ambiguous ; some people treat their "comfort robots" almost as if they are "real" pets or persons. But, dystopian movies typically portray dispassionate artificial beings as the dominant life-form (?) on the planet.

But, how can we distinguish a "real" Person from a person-like Mechanism? That "essential" difference is what Chalmers labeled the "Hard Problem" : to explain "why and how we have qualia or phenomenal experiences". The essence-of-sentience is also what Nagel was groping for in his query "what is it like to be a bat?". Between humans, we take Homo sapiens feelings for granted, based on the assumption of similar genetic heritage, hence equivalent emotions. But, the genesis of AI is a novel & unnatural lineage in evolution. So, although robots are technically the offspring of human minds, are they actually kin, or uncanny?

Knowing and Feeling are the operational functions of Consciousness. But Science doesn't do Essences. "If you can't measure it, it ain't real". Yet, a Cartesian solipsist could reply, "If I can't feel it, it ain't real". Therefore, I would answer the OP : the essential difference between AI behavior and human Consciousness is the Qualia (the immeasurable feeling) of Knowing. Until Cyberneticists can reduce the Feeling-of-Knowing to a string of 1s & 0s, Consciousness will remain essential, yet ethereal. So, if a robot says it's conscious, we may just have to take its expression as evidence. :smile:


Google AI has come to life :
AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.
https://www.washingtonpost.com/technolo ... e-lemoine/

Google's AI is impressive, but it's not sentient. Here's why :
https:/


Re: TPF : Consciousness Defined

Post by Gnomon » Sun Jun 26, 2022 12:41 pm

Maybe consciousness isn't the right word, maybe sentience would be, — enqramot

Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that. For example, the current National Geographic magazine has a cover article on the sense of touch. And it shows a mechanical hand with touch sensors on the fingertips. Without "sentience" (feedback) an animated robot would be helplessly clumsy. But "consciousness" literally means to "know with". Certainly a robot with touch sensors can interpret sensible feedback in order to guide its behavior. But is it aware of itself as the agent (actor) of sentient behavior?

Therefore, the philosophical question here is "does a robot (AI) know that it knows"? Is it self-aware? To answer that question requires, not an Operational (scientific) definition, but an Essential (philosophical) explanation. All man-made machines have some minimal feedback to keep them on track. So, it's obvious that their functions are guided by operational feedback loops. And that is the basic definition of Cybernetics (self-controlled behavior). Which is why some AI researchers are satisfied with Operational Sentience, and don't concern themselves with Essential Consciousness. It's what Jackson calls an "engineering problem".
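
To make "operational feedback loop" concrete, here is a minimal sketch in Python of the kind of cybernetic self-correction meant above : a toy thermostat that senses a value, compares it to a goal, and adjusts its own output. All of the names and numbers are illustrative, not any real robot's control code.

```python
# Toy illustration of "operational sentience" : a loop that senses its
# environment and corrects its own behavior, with no claim to awareness.
def run_thermostat(target=21.0, gain=0.5, steps=10):
    temperature = 15.0                    # sensed state of the "environment"
    for step in range(steps):
        error = target - temperature      # feedback : compare sensed value to the goal
        heater_output = gain * error      # corrective action proportional to the error
        temperature += heater_output      # the environment responds to the action
        print(f"step {step}: temp={temperature:.2f}, error={error:.2f}")

run_thermostat()
```

The loop "knows" its temperature only in the operational sense of measuring and reacting to it ; nothing in it knows that it knows.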

But philosophers are not engineers. So, they are free to ask impractical questions that may never be answered empirically. When an octopus acts as-if it recognizes its image in a mirror, is that just an operational function of sentience, or an essential function of self-awareness? We could debate such rhetorical questions forever. So, I can only say that, like most philosophical enigmas, it's a matter of degree, rather than Yes or No. Some intelligences are more conscious than others. So, it's hard to "encapsulate" Consciousness into a simple matter of fact.

Ironically, the one asking such impractical rhetorical questions may be the most self-aware, the most introspective & self-questioning. The behavior of Intelligent animals is typically pragmatic, and focused on short-term goals : food, sex, etc. They don't usually create art for art's sake. But, when they do, can we deny them some degree of self-consciousness? :smile:

ELEPHANT SELF-PORTRAIT
https://i.imgur.com/TVucAax.jpg


Re: TPF : Consciousness Defined

Post by Gnomon » Wed Jun 29, 2022 11:37 am

I'd rather philosophy steered clear of questions already settled. The operational principle of AI is already known, described in technical terms, there should be no need for an alternative explanation. — enqramot

Ha! Philosophy has no "settled questions", and philosophers are not content with mechanical "operational principles". So, the OP goal of encapsulating Consciousness, is still an open question.

Nevertheless, pragmatic scientists are still working on a Consciousness Meter to update the crude EEGs and somewhat more sophisticated MRIs. They are even using Artificial Intelligence to search for signs of Consciousness in Natural Intelligences that appear to be anaesthetized (unconscious). However, they are not looking for philosophical essences, but for operational signs & symptoms. So, even then, the OP on The Philosophy Forum will go unanswered.
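
For the curious, here is roughly what such an "operational sign" looks like in code. Measures of this family (e.g. the perturbational-complexity approach) binarize a recorded brain response and score how incompressible it is ; the Python sketch below is only a toy version of that idea, and the fake signals & function names are mine, not any lab's actual pipeline.

```python
# Toy "operational" complexity score, loosely in the spirit of the
# Lempel-Ziv-based measures applied to binarized EEG responses.
# Everything here (names, fake signals) is illustrative, not a clinical tool.
import random

def lz_phrase_count(bits):
    """Count distinct phrases in a greedy left-to-right parse of a 0/1 string."""
    phrases, phrase = set(), ""
    for bit in bits:
        phrase += bit
        if phrase not in phrases:   # a phrase we have not seen before
            phrases.add(phrase)
            phrase = ""             # start building the next phrase
    return len(phrases)

regular = "01" * 500                                       # highly compressible "response"
noisy = "".join(random.choice("01") for _ in range(1000))  # incompressible noise
print(lz_phrase_count(regular), "vs", lz_phrase_count(noisy))
```

A flat or rigidly repetitive response parses into few phrases ; a richly differentiated one needs many. That number is measurable -- but it is still a symptom, not an essence.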


Artificially intelligent consciousness meter :
https://www.monash.edu/data-futures-ins ... ness-meter

The hunt for hidden signs of consciousness in unreachable patients :
https://www.technologyreview.com/2021/0 ... nreachable


Re: TPF : Consciousness Defined

Post by Gnomon » Wed Jun 29, 2022 11:45 am

Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that. — Gnomon
Let's stick to "consciousness" then — enqramot

Yes. Some plants, such as the touch-me-not & the Venus flytrap, are "sentient" in a very limited sense. They sense and react to touch. But we don't usually think of them as Conscious. However, the typical scientific concept of Consciousness places it on a continuum from minimal sentience to Einsteinian intelligence. Nevertheless, some philosophers still imagine that Consciousness is something special & unique like a Soul. So, the OP seems to be searching for a physical mechanism that produces Self-Awareness. Yet, it's the last step on the continuum from Sentience to Consciousness that has, so far, resisted encapsulation.

One reason for that road-block may be the Reductive methods of Science. Some scientists assume that Consciousness is a function of Complexity. But complexity without Entanglement is just complicated. For example, Neanderthal brains were significantly larger (more neurons) than those of Homo sapiens, but their intelligence was only slightly higher than that of a chimpanzee. So, it seems to be the single-minded interrelations of intelligent brains that produce the "difference that makes a difference" (i.e. information) in intelligence.

A current theory to explain that difference points to Social Intelligence as the difference maker. Whereas Neanders tended to hunt in small family groups (wolf-packs), Homos began to work together in large tribes of loosely-related people (communities). The SI hypothesis says that, individually, Neanders were about as smart as Homos. But, by cooperating collectively, Homos were able to offload some of the cognitive load to others. And that externalization of thought (language) eventually evolved into Writing, for even wider sharing of thoughts. In computer jargon, the collective human mind is a parallel processor.

Therefore, it's not just how many neurons a person has that determines intelligence, but the communal sharing of information with other brains, focused on the same task. Likewise, a more Holistic view of Consciousness might reveal that higher degrees of Sentience & Self-Awareness emerge from the evolution of collective Culture. Whereas Sentience is limited to the neurons of a single organism, sophisticated Consciousness (and Wisdom ; sapiens) may result from exporting & importing information between brains & minds via language*1. Sharing information via Culture is literally Con-scious-ness : "knowing together".

PS__Sci-Fi likes to extend that symbiosis to include Mind-Reading. So, maybe human Consciousness is a form of "sym-mentosis". No magic required though. Just the ability to talk and read.

*1. For example, without Google & Wiki, my posts on this forum would read like Neander grunts.

Consciousness : from Latin conscius ‘knowing with others or in oneself’

The Social Intelligence hypothesis :
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2042522/
