As artificial intelligence grows increasingly sophisticated, it compels us to grapple with profound philosophical questions about its potential to emulate human cognitive capabilities such as consciousness, self-awareness, intentionality, creativity, and free will. Yet, before we can evaluate AI’s potential, we must first address a more fundamental issue: What do these human cognitive traits truly mean? Without a deeper understanding of our cognition, discussions about AI’s ability to replicate it remain incomplete.
Consciousness, a fundamental aspect of human cognition, is often understood as the awareness of, and ability to perceive, one’s own existence and surroundings. Our inclination to attribute consciousness to other humans rests on two key assumptions: first, that each individual directly experiences their own consciousness, and second, that others exhibit behaviors similar to our own, leading us to infer that they, too, are conscious. However, consciousness is deeply subjective. As philosopher Thomas Nagel famously argued, there is “something it is like” to be a conscious organism, yet this “what it is like” is accessible only to the experiencing subject.
This subjectivity has profound implications for AI. If consciousness can only be verified by the subject experiencing it, how could we ever determine whether an AI system is truly conscious? The problem becomes even more perplexing when we consider our limited understanding of consciousness itself. While neuroscience can describe the brain’s physical processes, such as neural interactions and chemical signals, we have no explanation for how these processes give rise to conscious experience.
The possibility of AI consciousness becomes even more intriguing when considering highly advanced systems designed to mimic human interaction. For example, imagine an AI companion created to emulate romantic relationships, indistinguishable from a human partner in appearance, communication, and behavior. Suppose it expresses affection, shares thoughtful responses, and demonstrates what seems like empathy. Can we ever be certain it is not a self-aware being? Or are we merely projecting our own consciousness onto an entity that skillfully imitates human traits?
Such dilemmas parallel those raised by creativity, another hallmark of human cognition. Creativity is often defined as the ability to produce something novel and valuable: an original contribution to the collective human experience. By this definition, AI has already demonstrated creativity. Programs like ChatGPT compose original poems, while other systems generate symphonies and visual art.
The debate about free will further challenges our understanding of human and machine cognition. We often perceive ourselves as free agents, yet our desires frequently emerge from forces beyond our conscious control. Why does one person crave chocolate while another prefers vanilla? The ultimate explanation often boils down to “just because,” suggesting that underlying mechanisms, not conscious deliberation, dictate our preferences.
In this sense, human will may not be so different from an AI’s decision-making process. Asked why it chose one option over another, a conscious AI might answer much as we do, attributing its “decisions” to the constraints of its programming. As Arthur Schopenhauer aptly noted, “Man can do what he wills, but he cannot will what he wills.” This insight highlights the possibility that human free will, like AI’s decision-making, is constrained by unseen factors.
Intentionality—the mind’s capacity to be directed toward something—is another defining trait of human cognition. But where do our intentions come from? Are they self-generated, or do they arise from external and internal influences? AI operates on algorithms that guide its behavior toward specific goals, and one could argue that human intentions follow a similar path, shaped by a complex interplay of biology, environment, and experience.
Ultimately, the question of whether AI can become like humans hinges on our understanding of what it means to be human. If traits such as consciousness, creativity, free will, and intentionality can be reduced to patterns and processes, AI may already resemble us to some degree, and could eventually surpass us. But once AI functions, behaves, and communicates in ways indistinguishable from humans, the question of whether it actually experiences these traits becomes impossible to answer from the outside.
As Ludwig Wittgenstein poignantly stated, “The limits of my language mean the limits of my world.” Similarly, the limits of our understanding of consciousness and cognition constrain our ability to determine whether AI possesses these attributes. Just as we can never truly know another human’s consciousness, we may never resolve whether AI is conscious or merely simulating it.
In this light, AI challenges us to confront both the boundaries of technology and the mysteries of the human mind. As we build machines in our own image, we are reminded of the profound enigma of what it means to be human, and of whether the essence of our cognition can ever be replicated.
Dotan Rousso was born and raised in Israel and holds a Ph.D. in Law. He is a former criminal prosecutor in Israel. He currently lives in Alberta and teaches Philosophy at the Southern Alberta Institute of Technology (SAIT).