Things AI Cannot Replace: Creative Friction, Empathy Work, and Emotional Public Spheres

[Note: Roughly adapted from my Korean language article at slownews.kr. Intended to serve as a discussion starter for any ‘Media and Society’ type classes that aim to go deeper than hey-algorithms-are-bad.]

In the current discourse around AI, two extremes stand out. One is excessive optimism: the belief that AI can replace anything. The other is excessive pessimism: the belief that the AI bubble will burst and lead to dystopia. Of course, both are premature. What we need instead is reflexivity: questioning our basic assumptions, including how we ourselves are implicated in the mechanism. Rather than rushing to declare that everything will evolve or collapse, we need to carefully ask which functions of communication we actually intended to have, and still need right now. Then we can map out where current advances in AI fit, now and in the future.

Creative Friction

One major limitation of today’s AI, specifically generative large language models, is that they are designed to eliminate what could be called “creative friction.” Humans learn through mistakes and grow through relationships, especially when those relationships involve conflict. But friction becomes ‘creative’ only when there is a shared goal, a future point of agreement; otherwise, it’s a death match. In simple terms, it’s like being in the same boat – if it sinks, everyone sinks together. That shared stake compels members to turn conflict toward solutions.

Whether in personal interaction or in the workplace, building relationships and working through conflicts are essential to creating something together. AI, on the other hand, does not require fostering that kind of connection. For hundreds of thousands of years, human relationships have been built on the assumption that we are all just humans, part of the same shared group called the human race, potentially even a community. That understanding, combined with our advanced brains, gave us a unique capability called ‘empathy.’ If you are happy, I can imagine that I would feel the same happiness in your situation. If you are in pain, I put myself in your shoes to figure out what causes that pain, so I can keep it from happening to me. The reason humans have used animals as resources is that no such assumption applies to them.

But AI does not share this fundamental premise of empathy, because it has no basic need to feel happiness, to avoid pain, or to explore other humans’ conditions in order to do so. When humans argue, they are still part of the same community called humanity, imagining shared consequences (yes, of course, some members of the human race somehow lack that capacity). LLM-based AI, in its current iteration, simply uses human language without sharing that common-stake assumption. Creating a new shared premise between humans and non-humans – something that would allow true partnership – is, at least for now, impossible. Anyone claiming otherwise is being dishonest.

AI might be able to imitate human conflict as language output, but it does not imagine or participate in the shared future that gives those conflicts any meaning. It is just a tool, and is unlikely to be recognized by humans as a true partner in such creative friction. A useful thought experiment is the TV series Westworld, where robots live in a theme park as if they were human. Believably, the human visitors to the park often end up behaving cruelly toward them. It is like a physical version of The Sims, really: because the robots are treated as objects toward which no responsibility is owed, people easily reveal their violent tendencies. Even when humans form “relationships” with the robots, those relationships are merely instrumental, breaking off at the end of a weekend of fun. In such environments, human cruelty emerges more easily.

AI Transparency

In contexts such as streamlining work, AI is an excellent tool. In those cases, however, transparency becomes even more important for balancing effectiveness with relationship building. If you use AI for work or communication, you should disclose that fact and indicate exactly where it was used. This shows that you still value your own and others’ thoughts and emotions as humans and engage accordingly.

For example, what if someone generates the topics to discuss with me using Claude and simply passes them along? I would conclude that this person has no interest in seriously thinking things through with me. But if that person used AI to gather a pool of topics based on recent trends and phenomena, then assessed and selected the most pressing issues based on their own moral judgment, it shows they are simply good at using AI tools.

Ultimately, the core question is to what extent a human’s “value judgment” is reflected in the process. I discuss social issues on the trust that the topic proposed to me was deemed valuable by a fellow human being. I expect the ideas we create to contribute to improvements in our shared human condition. On the other hand, if someone suggests a topic simply because “Claude said so,” I would probably respond via Claude as well, and the world would sink into an AI slop hole.

What Makes Humans Human

The paradox of humans and robots – humans confronting physical AI – is a staple of popular culture. Humans are often depicted as more instrumental and cruel, while robots, initially acting on programmed purpose, end up behaving more “humanely.” Long before and well after Blade Runner, various works show that when humans treat robots purely as objects or tools, they themselves eventually converge into something instrumental and inhuman.

We know well that being human isn’t inherently unique or special. When humans stop treating other humans as human, our strengths simply vanish, and humanity can be claimed by others who do. If we treat humans as tools, the premise that a human should be treated “better” than any other tool disappears. In a relationship defined by utility, AI can easily become more important, superior, and better than a human.

What makes humans uniquely human-like is simply their strong favoritism toward humans. We choose to do something for the sake of humans simply because they are human. This is different from general philanthropy. This partiality, the idea that being human inherently grants a certain status, is a scaled-up type of empathy.

In everyday talk, “sympathy” and “empathy” are often used interchangeably. The concepts differ, however: sympathy is feeling for someone from the outside, while empathy asks, “what would I do in that situation?” Empathy is the foundation of sociality, namely the ability to do something with others. Humans don’t have social roles genetically hardwired like ants or bees; we learn them. We don’t become enslaved to fixed functions because, through empathy, we creatively imagine wider social contexts.

Care as Empathy Work

In recent articles, Dr. SH Lee at the ILO emphasized the potential for AI-driven innovation in caregiving work (2025.02), specifically the role of ‘physical AI’ (2026.02). This is part of what we call “Aging in Place.” For example, a smartwatch can now immediately call emergency services if an elderly person falls. While we might not usually call such a simple function AI, this feature, introduced by the Apple Watch, is indeed a critical AI function: it must intelligently judge whether a change in position detected by the sensors should be classified as a fall.
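To make the judgment concrete, the classification problem can be sketched as a toy heuristic: a sudden high-g impact followed by stillness. This is a hedged illustration only, not Apple’s actual algorithm – the threshold values (`impact_g`, `still_g`) and function names here are invented for the example, and real detectors rely on trained models over many sensor features, not two thresholds.

```python
import math

def accel_magnitude(x, y, z):
    """Total acceleration in g, combining all three axes."""
    return math.sqrt(x * x + y * y + z * z)

def looks_like_fall(samples, impact_g=2.5, still_g=1.2, still_window=5):
    """Toy heuristic: a high-g impact spike followed by a stretch of
    near-stillness (the person is down and not moving).

    `samples` is a time-ordered list of (x, y, z) accelerometer
    readings in g. All thresholds are illustrative assumptions.
    """
    mags = [accel_magnitude(*s) for s in samples]
    for i, m in enumerate(mags):
        if m >= impact_g:  # candidate impact spike
            after = mags[i + 1 : i + 1 + still_window]
            # Require a full window of near-stillness after the spike.
            if len(after) == still_window and all(a <= still_g for a in after):
                return True
    return False
```

Even this crude sketch shows why the feature counts as AI in the broad sense: the device must distinguish a fall from, say, clapping or sitting down hard, which is a judgment under uncertainty rather than a fixed sensor trigger.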

Since falls among the elderly are life-threatening, I see this as a way for AI and care to be practically linked. Medical demand for such features will increase, and AI will meet that demand. Another aspect is daily convenience: adjusting room temperature or turning off lights via voice when you are less mobile. While these are now commonplace, AI provides great help in these daily dimensions.

However, caregiving involves another crucial dimension: the psychological. Aging brings two major psychological challenges: social helplessness (the feeling of no longer functioning as a proper member of society) and social isolation (the feeling of being disconnected). Can AI meet the demand for this communion?

When a person wants to do something but can’t anymore, they need help accepting the natural process of aging. When they want to interact with others but can’t, they need a way to connect. We need to feel and act as humans; at least for now, it seems difficult for language models that merely imitate humans to fill that void. A human caregiver, whether it’s their assigned job or not, performs that empathy work naturally to some extent.

Language models mimic human speech and the underlying thought process, but they cannot replace the ultimate goal of those actions: the reassurance of being recognized as a person by another person. In traditional psychotherapy, the most important premise is acknowledging that your concerns are ones that any human would have in that situation. Only then can treatment, whether cognitive restructuring or medication, begin. The starting point is human trust between the client and the counselor.

AI can be useful for low-level tasks, such as basic surveys or counselor training. But for serious therapy, one must treat the client as an equal, form a rapport, and, above all, be free of prejudice. The problem with AI is that if the training data contains bias, that bias is inevitably reflected. Imagine learning from data biased against schizophrenia or alcoholism.

Crucially, a counselor must also be able to challenge and restrain a client’s thoughts, not just support them. This leads back to the concept of creative friction.

The Metaverse?

Some of the shortcomings above could be addressed if we simply used AI technology to connect people to people. Realistically, this works only on the trust that there is another human like me at the other end of the connection. In other words, a “Human-to-AI” relationship is unlikely to achieve that goal once the cover is blown. To revisit the caregiving example: a fellow human saying, “That’s just what happens when you get old; my joints ache too,” is far more impactful than an AI generating a search-result response.

Meaningful social relationships must be backed by actual existence (intentionally parasocial fandoms aside). Can we experience the density and reality of social relations through a simulated personality made only of language? I doubt it. AI as a tool to connect humans is entirely possible and full of potential. But it cannot “replace” the human.

The Habermas Machine

Ultimately, the problem of creating a public sphere is an extension of individual life and workplace experiences. In a functioning democracy, desirable institutions aren’t created out of thin air through scientific or normative thinking; they must effectively address the concrete lived experiences, needs, and conflicts of society’s members. We need a “middle realm” that performs a public function: connecting concrete life worlds to abstract political and economic systems.

The late Jürgen Habermas (1929–2026) theorized this middle realm as the Öffentlichkeit (the original term for the ‘public sphere,’ without the spatial metaphor). He provided a philosophical norm according to which public opinion is formed through communicative action grounded in rationality rather than instrumental action. However, Habermas emphasized modern rationality so much that he overlooked a key point: communication in the lifeworld is shaped more by individual emotions than by rational norms.

Suppose we argue for strengthening labor rights. Do people fight for it just because they learned it in a textbook or because it’s logically “right”? Not many.

Instead, it’s closer to this: The experience of working hard and not being treated fairly. The humiliation of being treated as an inferior. The anxiety of being fired at someone’s whim. The horror of hearing about a colleague who wasn’t compensated for an injury. The nausea of realizing that if these backward systems aren’t fixed, you’ll be next. The accumulation of these sensual and emotional memories is what moves the body.

The process of creating public opinion cannot be entirely rational; rather, it is closer to the formation of collective emotions. Yet many assume AI could one day deliver a “perfectly organized public opinion” through rational language. This is neither desirable nor possible with current technology.

So, how do we use AI as a tool for the public sphere? There is an interesting experiment called the “Habermas Machine.” Roughly summarized, the experiment found that AI is very efficient as a discussion moderator. It cannot provide lived experience or suggest a direction based on values, but it functions well as a manager of the “middle ground.” As a debate leader, it would be no good, but as an MC it could be excellent.

This brings us back to the start: If we outsource our human value-based decisions to AI, we are in trouble. But as an MC or a secretary who organizes and manages so that humans can properly coordinate and decide? Despite many limitations, it can be quite good.

On a personal note, when I have to summarize really long student essays, I sometimes use AI. Some students use AI to write them, too. What matters isn’t whether AI was used, but whether the result contains a good proposal or creative idea rooted in value orientation and faces the limitations of reality.

A good essay asks: “WHY must we act in a certain way?” Mechanically summarizing the pros and cons of an issue matters less; the value judgment is what matters. Choosing what to do, and the normative reasoning behind it, is always paramount. Using AI as a tool to help polish the writing is perfectly fine. Similarly, AI is great for literature reviews: summarizing articles or papers to see whether they are worth reading. Even then, however, comparing those summaries with the original human-written abstracts is needed.

The Reward of Experiencing

We read grand literary works, watch two-hour movies, and read detailed papers not just for the synopsis or conclusion. We do it to enter the experience of the process: the flow of thought, the prose, and the unfolding logic.

If “learning” is about grasping content, “experience” is entering the space of the content in all its messy intermediate states, layers of contradiction, and the vividness of everything somehow coming together. Experience is a form of open observation, essential for generating better thoughts.

Think of a trip to Paris. Summarizing information about the landmarks and Michelin-starred restaurants is simply “learning.” Experiencing is breathing in the smell of the dirty subway, smelling the neighborhood bakery, wandering through messy yet systematic alleys, and feeling the sense of distance of the crowded cafés with your skin. The reward of experience follows afterward.

How does life and thought flow in such a space? AI can help you “learn” the plot of Tolstoy’s Resurrection, but it cannot help you “experience” the hypocrisy of 19th-century Russian nobles or the suffering of the masses from within their vivid lives.

To emphasize again: AI can improve efficiency as a tool. But it is dangerous to mistake AI for something that can replace human connection, where that connection is part of the goal. AI is a tool; it is not a partner in a relationship that requires and results in communion. You can replace the tool, but you cannot replace the relationship.

If we let future versions of AI replace humans at one end of personal connections requiring communion, we may no longer need human-to-human relationships. That day may come. At that point, the rationale for our existence diminishes, and humanity might as well go extinct. If we want to avoid such a catastrophe, we must never stop reflecting on what we want from fellow humans, and keep thinking about the importance of human experience beyond the ‘learning.’ We must move forward, but always look back.

