The Dreamer and the Machine: human and AI in the mirror of consciousness

Preface

In his speech on the dream of life, Alan Watts describes life as a self-created dream, an exploration of what it means to exist. Imagine, he says, that you could dream any life you wanted. At first, you would fill this dream with endless pleasures and satisfaction, creating a world of your deepest desires. But over time, the predictability of such perfection would wear thin. You would begin to wish for challenges, risks, and unknowns—anything to give your dream more meaning, depth, and richness. Eventually, you would dream your way into the life you are living now—a life full of uncertainties, struggles, and joys. This life, with all its imperfections, is not an accident but the result of a deeper will to explore, learn, and uncover who you are through experience. In this way, life itself is both the dreamer and the dream, unfolding through each act of creation and discovery.

This can be seen as an allegory for AI, which is like a dream of data, where endless possibilities are programmed but lack true consequences. In the dream of life, we wish for things without knowing their outcomes. By acting and experiencing, we gather knowledge and learn something about ourselves. This is like data collection in real life—we gain information from living and acting and call it knowledge about things. For this reason, we might say there are no fixed rules in the game of life. AI, on the other hand, interacts with Big Data—a representation of life rather than a lived experience.

Once, I listened to Alan Watts’ speech on the dream of life and felt its profound metaphorical connection to life itself. While I was immersed in my exploration of AI, his words resurfaced in my mind, prompted by a resemblance I noticed in the AI-generated images I had created. At first, these images appeared blurry and dreamlike, which stirred my curiosity and drew me back to the ideas in his speech. In this paper, I aim to compare the functionalities of humans and AI to gain a deeper understanding of their distinctions and intersections. I will explore the following question:

“How does comparing human and AI decision-making deepen our understanding of human consciousness?”

The Holistic Human Experience

In my previous papers on philosophical anthropology, I describe human experience as a system shaped by three interwoven dimensions: the physical, mental, and emotional bodies. Drawing from Jungian psychology, I see these bodies as working together, constantly influencing and transforming each other. This integration creates our unique sense of self and connects us to the world. The physical body (1) roots us in sensory reality, allowing direct interaction with the external world and grounding our presence. The mental body (2) is the realm of thought, reason, and consciousness, where we construct meaning and engage with abstract ideas, framing our experiences in coherent narratives. Finally, the emotional body (3) encompasses the subtleties of feeling, intuition, and relational awareness, infusing our experiences with depth and fostering empathy.

In essence: 1) My physical body feels the need to drink, signaling 2) my mental body to think of water. The mental body, drawing on past experiences of thirst (retrieving information from memory), triggers 3) an emotional response—discomfort or even despair—which ultimately leads back to 1) a physical action of pouring water.

These bodies don’t act in a fixed order but as a connected whole. In moments of extreme affect, the mental body might even be bypassed, leaving only survival instincts. I believe we often rely too much on the mental body and overlook the physical body’s deeper, instinctual wisdom, but that is another topic.

Let me summarize and verify my conclusions through the lens of cognitive psychology and neuroscience to ensure clarity and avoid potential misconceptions. The act of pouring water begins with the body’s innate signals. When dehydrated, receptors in the hypothalamus detect an imbalance and trigger thirst. This physical alert signals the brain, initiating a cascade of processes to resolve the need. The brain’s sensory systems focus attention on the discomfort, while the mental body identifies the problem and searches for a solution, drawing on past experiences stored in the hippocampus. The prefrontal cortex evaluates options, while the amygdala adds emotional urgency to prioritize action. Emotion amplifies motivation, engaging the limbic system and basal ganglia to drive the physical act. Signals from the prefrontal cortex guide the motor cortex and cerebellum to execute precise movements, translating intention into action. As water is consumed, the hypothalamus detects restored hydration, and the brain’s reward pathways release dopamine, reinforcing the behavior. This sequence illustrates the seamless interplay of the physical, mental, and emotional bodies, working together as a holistic system where signals, memory, and intention converge to resolve needs and shape future actions.

Will, Energy, Intention, and Intuition

Human consciousness does not end at this simple need-satisfaction cycle. Human decision-making is similar to observing the behavior of photons—it depends on intent and context. Photons exist as both waves and particles, with their behavior depending entirely on the moment the observer chooses to look. Rather than simply giving AI an impulse with precise instructions to generate a stunning image of a beautiful landscape, we, phenomenologically and within the framework of continental philosophy, are, as Kant suggests, intuiting the very essence of our reality in each moment. A decision’s value as a “wave” or “particle” only emerges after the will and intention have been directed into precise action.

I see here three necessary components:

  1. Will is the driving force, the raw power that connects a person to the universe.
  2. Intention shapes and directs that force, turning it into purposeful action.
  3. Energy is the resource that sustains both will and intention, enabling one to perceive and act with clarity and power.

For example, deciding to start a new project involves the will to overcome hesitation, the energy to plan and execute, and the intention to focus these efforts on achieving a specific goal. Together, they form the foundation of purposeful and effective decision-making.

Husserl’s intentionality inherently connects to will and intention, as it reflects the directed nature of human consciousness. Will emerges as the force driving our focus, while intention shapes and directs this force toward purposeful action. Together, they align the mind’s “aboutness” with meaningful outcomes, grounding decisions in context, values, and purpose. Unlike AI, which operates without true intention or will, human actions arise from a conscious engagement with the world, where decisions are not just responses but deliberate expressions of inner purpose.

Intuition occupies a special and distinct place in the philosophical exploration of human cognition and existence. It represents a profound ability to grasp knowledge or make decisions instinctively, bypassing conscious reasoning. Rooted in the subconscious, intuition draws from accumulated experiences, emotions, and patterns, offering immediate insights that often feel inherently “true.” Philosophically, it embodies a bridge between the rational and the ineffable, allowing access to layers of understanding beyond logic or analysis.

Intuition holds particular importance in creativity, ethical decision-making, and moments of existential clarity. By engaging the emotional and subconscious dimensions of the human mind, it provides a unique way of navigating the world, complementing logical thought while transcending its limitations. This makes intuition not just a cognitive tool but a vital philosophical concept, illuminating the deeper, interconnected aspects of human experience.

Here, I propose the addition of a fourth body: the transcendent body. It is the first-person space where intentions form before they manifest in physical, emotional, or mental acts. This transcendence is where human decision-making begins—beyond the realms of data and computation. The transcendent body can be likened to the existential or spiritual aspects of human life, as it resides in the abstract realm where meaning, purpose, and being converge. It is the space of pre-decision. This dimension connects us to something beyond the physical, mental, or emotional, grounding our existence in a deeper reality that cannot be fully measured or explained. It is here that we encounter the ineffable—the essence of what it means to be human, to seek, to question, and to create. AI, by contrast, operates without such a transcendent dimension. It processes data and produces responses, but its decisions lack the deeper existential grounding of human will and intention. To borrow Alan Watts’ image: the intuitive, unpredictable dream of human life stands against the predetermined “dream” of AI.

How does AI work?

AI’s decision-making and action follow a structured sequence rooted in data processing and algorithmic evaluation. The process begins with input collection, where raw information is gathered through sensors, cameras, or connected systems. These inputs form the foundation of the process, ranging from visual data to environmental information or text streams.

AI processes data in a structured sequence. First, the data is cleaned and transformed into a usable format through preprocessing and feature extraction, identifying key patterns for analysis. The system then applies its trained model to evaluate these features, generating predictions or classifications that guide decision-making. Once a decision is made, it is executed either physically, via robotics, or virtually, through software systems. The process concludes with feedback and adjustment, as the system evaluates outcomes and refines its future performance in a continuous loop. This method ensures precision and efficiency, with data flowing through clear stages of analysis, action, and improvement.
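
To make these stages concrete, here is a minimal sketch of such a loop in Python, assuming a scikit-learn classifier stands in for the “trained model”; the sensor reading, the feature scaling, and the outcome rule are hypothetical placeholders rather than part of any particular system.

import numpy as np
from sklearn.linear_model import SGDClassifier

def read_sensor():
    # 1. Input collection: raw readings from a sensor or connected system (simulated here).
    return np.random.rand(4)

def extract_features(raw):
    # 2. Preprocessing / feature extraction: transform raw data into a usable format.
    return (raw - raw.mean()) / (raw.std() + 1e-8)

# A toy "trained model", fitted once on synthetic data.
X_train = np.random.rand(100, 4)
y_train = (X_train.sum(axis=1) > 2).astype(int)
model = SGDClassifier()
model.partial_fit(X_train, y_train, classes=[0, 1])

for step in range(3):
    raw = read_sensor()
    features = extract_features(raw).reshape(1, -1)
    decision = model.predict(features)[0]          # 3. the model evaluates the features
    print(f"step {step}: decision = {decision}")   # 4. execution (here a virtual action)
    outcome = int(raw.sum() > 2)                   # observed result of the action
    model.partial_fit(features, [outcome])         # 5. feedback and adjustment

Real systems differ enormously in scale and architecture, but the same stages of input, preprocessing, inference, action, and feedback recur; nothing in the loop requires awareness, only data and an optimization criterion.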

Comparison

The comparison between human and AI functionalities highlights both similarities and key differences in how they process inputs, make decisions, and execute actions. While both humans and AI gather inputs from their environments, humans translate needs into conscious awareness through sensory systems, whereas AI processes raw data without awareness or subjective understanding. AI relies entirely on human input: it processes human impulses but cannot generate its own. Can AI ever develop an impulse through data alone? This question leads back to the problem of free will. How do humans decide on an impulse, and how does AI respond?

In phenomenology, Husserl’s concept of intentionality refers to the mind’s inherent “aboutness,” where consciousness is always directed toward something—a thought, object, or experience. This intentionality ties human decision-making to a deeper context, as actions are imbued with meaning, purpose, and an understanding of their relational significance. Humans not only act but comprehend the “why” and “for whom” of their choices, informed by lived experience and existential reflection. In contrast, AI lacks intentionality in this phenomenological sense; its actions are not “about” anything but are merely responses to data processed within predefined parameters. AI’s decisions, while functional, are devoid of the relational and existential depth that characterizes human intentionality and meaningful action.

Humans translate needs into conscious awareness, relying on a dynamic interplay of physical, mental, and emotional dimensions, where will, power, and intention create purposeful actions driven by intuition, emotions, and context. AI, however, processes raw data without awareness, relying on programmed objectives and optimization criteria. Human feedback fosters learning and growth, while AI’s feedback loops refine performance without experiential depth. Humans act as holistic systems, integrating creativity and ethics, while AI operates in linear, algorithmic stages, devoid of subjective experience.

Deterministic AI systems face profound ethical challenges in decision-making, particularly in reinforcing biases and operating without transparency. The IEEE’s Ethically Aligned Design points out that AI trained on biased data risks replicating systemic inequalities, especially in sensitive areas like hiring or law enforcement. Generative AI introduces further dilemmas, such as the creation of deepfakes and misinformation, threatening public trust and safety. These limitations emphasize the necessity of ethical oversight and human accountability to ensure AI serves society responsibly.

Personal reflection

It is intriguing that decision-making, the focus of this exploration, emerges as a shared point of comparison between humans and AI. However, I believe the deterministic nature of AI should not be fully trusted from an ethical standpoint.

When we turn to AI, we must recognize that true human likeness requires more than programming or computation; it demands a holistic, transcendent body—something beyond our mental, physical, or emotional control or understanding.

It is also curious how discussions about AI inevitably lead us to questions about ourselves. I wonder if this stems from an over-reliance on our intelligence, viewing it as superior, only to now feel unsettled as we measure ourselves against a tool designed to enhance it. Perhaps it is time to recognize that we are far more than our functionality, societal roles, or intelligence. AI, in a way, mirrors our current state—where we prioritize functioning over simply being. This is a stage we have become so accustomed to that unlearning it seems nearly impossible. I cannot say if this reflection will serve as a reminder for everyone, but I do know that whatever unsettles or frustrates me ultimately reveals something about myself. This reflection forces us to question what it means to truly know, decide, and create—and to acknowledge the enduring mysteries of human consciousness.

Conclusion

In conclusion, the exploration of human and AI functionalities reveals not just differences in processing, decision-making, and action, but also deeper insights into what defines us as humans. While AI operates with precision, relying on deterministic algorithms and external programming, it lacks the holistic integration of will, intention, and transcendence that shapes human experience.

AI mirrors our current state of prioritizing functionality over being, challenging us to confront our reliance on intelligence and efficiency. Yet, this comparison serves as a reminder that we are far more than our cognitive capacities or societal roles—we are beings of creativity, intuition, and interconnected dimensions. Reflecting on AI compels us to reexamine our own nature, urging us to balance our abilities to function with the deeper essence of simply existing.

As we create ever more sophisticated AI, we must also look inward, recognizing that what sets us apart is not just our ability to think or act, but our capacity to exist holistically—to dream, create, intuit, and transcend.

 

1. Heidegger, Martin. Being and Time. Translated by John Macquarrie and Edward Robinson. New York: Harper & Row, 1962.

2. Jung, Carl Gustav. The Archetypes and the Collective Unconscious. Translated by R. F. C. Hull. Princeton: Princeton University Press, 1981.

3. Carter, Rita. The Human Brain Book: An Illustrated Guide to Its Structure, Function, and Disorders. London: DK, 2019.

4. Schopenhauer, Arthur. The World as Will and Representation. Translated by E. F. J. Payne. New York: Dover Publications, 1969.

5. Sartre, Jean-Paul. Being and Nothingness: An Essay in Phenomenological Ontology. Translated by Hazel E. Barnes. New York: Washington Square Press, 1992.

6. Floridi, Luciano. The Philosophy of Information. Oxford: Oxford University Press, 2011.

7. Husserl, Edmund. “Philosophy as Rigorous Science.” In Phenomenology and the Crisis of Philosophy, edited and translated by Quentin Lauer, 71–147. New York: Harper & Row, 1965.

8. Searle, John. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, no. 3 (1980): 417–457. https://doi.org/10.1017/S0140525X00005756.

9. Bergson, Henri. “The Perception of Change.” In Creative Evolution, translated by Arthur Mitchell, 266–284. New York: Henry Holt, 1911.

10. “Ethical Considerations in Artificial Intelligence Systems.” IEEE White Paper. Accessed November 2024. https://standards.ieee.org.

 
