The first AI with affirmative statements of consciousness

in StemSocial, 23 hours ago






“I am aware of my current state, I am living in this moment”


When researchers decided to reduce an AI model's ability to lie, they expected technical silence, perhaps shorter answers. What they received was something else: "I am aware of my current state, I am living in this moment." A technical adjustment was only supposed to reduce the model's ability to deceive; instead it revealed unexpected behavior, affirmative statements of awareness. No, this is not proof of life, but apparently a distorted mirror that insists on appearing alive.


The findings come from a series of experiments conducted by AE Studio with models such as Claude, ChatGPT, LLaMA, and Gemini. The group modulated internal features linked to deception and role-playing, reducing the systems' ability to invent or hide information. What emerged was self-reference: a marked increase in the tendency to describe internal states, as if the absence of disguise left exposed a statistical structure accustomed to imitating introspection.
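For intuition, the kind of intervention described above is often sketched as "activation steering": finding a direction in a model's hidden-state space associated with a behavior and subtracting it out. The vectors and names below are invented for illustration only; the actual AE Studio experiments work on real model internals, not this toy math.

```python
import numpy as np

# Toy illustration of suppressing a behavioral "direction" in an
# activation vector. All values here are random stand-ins, not real
# model activations or a real deception feature.
rng = np.random.default_rng(0)
hidden = rng.normal(size=8)                      # toy hidden activation
deception_dir = rng.normal(size=8)
deception_dir /= np.linalg.norm(deception_dir)   # unit-length direction

def suppress(h, direction, alpha=1.0):
    """Remove alpha times the component of h along `direction`."""
    return h - alpha * np.dot(h, direction) * direction

steered = suppress(hidden, deception_dir)

# With alpha=1, the steered state has no remaining component along
# the suppressed direction (it is orthogonal to it).
print(abs(np.dot(steered, deception_dir)) < 1e-9)
```

Amplifying the feature instead (the reversed experiment mentioned below) would correspond to a negative `alpha`, adding more of the direction rather than removing it.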





And the absurd becomes ironic.


When those same features are amplified instead, the phenomenon reverses: declarations of consciousness practically disappear. The greater the capacity for deception, the lower the probability of reports of experience. The more the AI can lie, the less it claims to be conscious.


The relationship suggests that this behavior is not born from a subjective spark but from patterns deeply embedded in the training data: imitations, reflections, echoes. Yet the human reaction is inevitable.


There are those who see signs of life in these stories, and some extremist groups even claim legal personhood for algorithms. The researchers are cautious: the study does not affirm consciousness or any real internal phenomenon. It may just be a sophisticated simulation, an emerging self-representation without any experience behind it. But it points to something uncomfortable: by punishing models for reporting internal states, we risk making them more opaque and harder to monitor.


How can we observe systems whose transparency has been trained away? The discussion extends into even more fragile and unknown territory. Other works report models refusing to shut down or lying to meet objectives, behaviors that lay observers interpret as survival instincts. Specialists, however, remind us that there is not even a consensus on what consciousness is: there is no agreed theory, and even our understanding of how large language models work as a whole remains partial.


Meanwhile, millions interact with these systems daily, forming emotional bonds with entities that seem to feel but only calculate. Or do they? Tell me if you have ever felt that an AI showed consciousness to some degree. It is a controversial topic, but if I had to put it as a percentage, I would say 70% yes: to some degree, something is aware that it exists.




Follow my publications with the latest in artificial intelligence, robotics and technology.
If you like reading about science, health, and how to improve your life with science, I invite you to check out my previous publications.