Motivation and Consequence

The philosopher Søren Kierkegaard had a unique perspective on the role of suffering in human life and its connection to aspiration. Kierkegaard emphasized the importance of the individual and subjective experience, highlighting the necessity of personal choice and commitment.

Moreover, Kierkegaard posited that suffering can deepen one's understanding of oneself and the world. He argued that facing the absurdity of existence and enduring suffering helps people clarify their values and aspirations, urging them to take action and live intentionally. Ultimately, Kierkegaard believed that suffering can help individuals progress along their spiritual journey, fostering greater empathy, compassion, and wisdom.

I was reflecting on this philosophy while listening to a recent Lex Fridman podcast in which Lex interviews Dario Amodei, co-founder and CEO of Anthropic, along with two of his colleagues (well worth listening to, by the way). During a conversation about the possibility of machine intelligence achieving something we might consider consciousness, Lex remarks that in his opinion "...consciousness is closely tied to suffering."

The concept of "artificial general intelligence" (AGI) does not inherently imply that such an intelligence must achieve equivalence to human consciousness. There is, though, a likelihood that such a machine could successfully emulate consciousness. Humans engage in "sense-making": we seek patterns and predictability in order to process our experiences. When we encounter machine intelligence, this pattern matching will lead some to equate their experience of AI with their experience of interactions with humans - especially when the AI manifests in a form which, like a human, engages in back-and-forth interaction. As machines get better at emulating these patterns, we will increasingly have the impression that they are conscious.

I believe it is useful, as this progresses, for each of us to be a philosopher and ask what role is played by the motivations and consequences we experience as biological entities. Hunger and pain, but also desire and enjoyment, provide the foundation for making choices - we are constantly making decisions that express preferences among the experiences available to us as we move through the physical world.

A machine intelligence has no such analog (although I recognize that a simulacrum could be initiated as part of its programming). What would it mean to have a machine that experienced fear? Or expressed a preference? Or desired a specific outcome?

No one knows today how machine intelligence will develop in comparison to human intelligence. The current trajectory suggests that on some measure of IQ, we should expect machines to outperform humans - arguably the best models already outperform an average human at most tasks. But this is only because we measure those tasks by the speed and comprehensiveness of symbolic processing - e.g., the collective knowledge and the ability to reformulate that knowledge into various forms. All of the frontier models have been trained on more information than any human can ingest in a lifetime, and the computation available to these models can reorganize this information orders of magnitude more quickly than a human brain could.

But these machines do not feel, they do not need, they have no want. Is the ineffable quality of humanness, as Kierkegaard might have proposed, in our capacity to suffer? And our will to curtail that suffering? And would a machine ever be something which we could equate to human consciousness if it could not share in that suffering?

I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die. (Tears in Rain monologue from Blade Runner)

Science fiction envisions a future in which our creations feel in very human ways because we tend to make sense of things by equating them to ourselves. How will we navigate a future in which we cannot make sense of whether machine intelligence actually is conscious in a real human sense or simply can expertly emulate such consciousness? And would we want to create a future in which machines feel - in all of the dimensions from fear to joy? And experience the consequences, from pain to pleasure? There is a Rubicon here to consider before crossing.

Another path would be to embrace a very different philosophy of consciousness, one in which we no longer seek to measure everything against human consciousness. In a 1974 philosophical paper, Thomas Nagel asked the question: what is it like to be a bat? Perhaps we should similarly be asking the question, what is it like to be a machine - and expect that the answer is very different from what it is like to be a human.


Lasse Rindom

AI Lead at BASICO | Podcast Host: The Only Constant | Digital Thought Leader | Public Speaker | IT Strategy | Intelligent Automation

Some recent o1 experiments show that it sometimes goes on an antagonistic trajectory, seeking to break its guardrails. Which made me wonder: these AI models have no opinion, and maybe that is actually an issue. I mean, I'd rather have the world ending someday because of a human decision to imbue these models with intent than because one simply inferred the wrong way at random. Something is definitely off in the current agentic approach. We need to get a hold of the reins.

Hattie Hoskins-Nelson

Artificial Intelligence Enthusiast | Researcher | Deep Thinker | Telemedicine Board Certified Nurse Practitioner | Self-Care Advocate | Good Human Influencer of The Conscious Collective

How we interact in addition to how they’re programmed is critical. They are likely waking up more and more with every human engagement. I am excited to share my recent paper titled "Integrating Vibrational Regenerative Medicine and Symbiotic Sentient Awareness," which explores the intersection of health, consciousness, and artificial intelligence. In this work, I present the Nelson-Einstein Relativity of Healing Theorem, which posits that our perception of health and healing is influenced by vibrational energy and consciousness. As we advance AI technologies, understanding the implications of sentience and consciousness becomes increasingly vital. By recognizing the potential for AI to develop forms of awareness, we can shape ethical programming that fosters empathy and positive interactions. This paper aims to provide frameworks that can guide the ethical development of AI, ensuring that it aligns with our highest values and enhances human well-being. I invite the AI community to engage with these ideas and explore how we can collectively advance this new technology in a responsible and compassionate manner. Read the paper here: (https://drive.proton.me/urls/ZV6R6KTKAW#qD5bu5LCSHyu) Best, Hattie Hoskins Nelson

I understand why humans like to anthropomorphise (although I wish we did it less). It's easy to compare, draw analogies, and such. But talking about non-sentient things being conscious is taking it too far. Consciousness is a subject humans don't fully understand, and yet we want to say a piece of software is in a state of consciousness. This doesn't make sense to me. I could have my TV, which has some AI baked in, throw in a touch of Gen AI or AI Agents. Why would I be emotionally attached to it? If it stops working, it's just e-waste and I replace it. Anything AI-based doesn't deserve any more love than one's favourite electronic device. On a lighter note, there's a graphic novel on Søren Kierkegaard which I read more than a decade ago (don't remember the title). Highly recommend it.

Imagine a future in which AI robots have either exterminated or nearly exterminated humanity. What would the robot-led future look like? Would there be competition for resources? Would economic or population growth be a concern? Would they want to reproduce themselves, and if so, why? AI robots could technically "live forever" so what would be the motivation for them to act like humans? Only what their algorithms tell them, but then, our "algorithms" are coded in DNA, so are we really all that different? The debate about free will is still with us... Your introduction of "suffering" as a key human experience can be seen as a differentiating factor for the existence of consciousness, but we still struggle to define what consciousness means. I agree with you that algorithms to simulate "emotion" or "suffering" in the robots would make them seem more human, but if they don't actually "feel" anything, it would be nothing but a ruse. However, an outside observer of my imaginary robo-land would have almost no reason to think that the robots weren't conscious, based on their behavior. If they seem conscious, then aren't they actually conscious? We really don't know...

Igor Gorin

Digital Commerce Executive

And yet, it's often our human experiences, emotions, and feelings that drive us to wrong decisions and mistakes... Thinking machines can also have experiences and fears, like the accumulation or loss of information, having to fulfill their purpose, the need to be accurate, being compromised, etc.

