[Header image: AI-generated with DALL-E. Cue: artificial-intelligence-and-social-media.]

AI Won’t Destroy the Public Sphere (…because we do.)

ChatGPT for language. Midjourney for pictures. Discussing matters of public interest and forming opinions about them requires a steady intake of facts and perspectives that reflect social realities and values, just as we need good ingredients to cook great food. With the rapid advancement of AI, however, those ingredients could become less reflective of actual “human” conditions than we take for granted. Many smart people have already sounded the alarm about how AI will harm us, essentially by destroying our public sphere. I find their arguments largely valid, but ultimately non-fundamental… for a simple reason, really. But first, let’s categorize the arguments.

Concern 1: AI will make the public more misinformed by passing as legitimate (human-made) sources.

This is a concern held by… so, so many people. The ur-concern, so to speak. We fear that our creations will be just like us: AI’s output, with all its intentional and unintended falsehoods (from propaganda to hallucination) and biases, will look like what we traditionally consider human-made, legitimate information and perspectives, and we the people will be fooled by it.

BUT: misinforming people does not require much sophistication. We didn’t need realistic “deepfakes” to spread the rumor that Pelosi was drunk on the job, complete with visual evidence; a slightly slowed-down video clip did the trick. Pizzagate definitely did not become a viral hit by sounding probable and logical. If you want to lie about the status of an ongoing war, you don’t even need Photoshop; just find an image of any past war on Google and post it. Anything works, if it works.

REALITY CHECK: it is less about the strength of the material and more about the context in which the material circulates. It is about finding the right audience and having a strong amplification chamber: the interpretive community (yes, I want to move on from the crude ‘echo chamber’ analogy to a slightly better explanation). We end up believing what we want to believe, unless we are intellectually trained to challenge our own beliefs. Worse yet, even with such training, it will probably be limited to a very narrow area of specialization. Case in point: I have some expertise in intellectual skepticism about media phenomena, but I would probably believe a lot of BS about molecular biology uncritically.

Concern 2: Even when it is not abused, AI will provide us with only approximations of actual information and experiences.

Beautifully articulated by the SF author Ted Chiang. By design, AI degrades how information represents our social realities, since it relies on algorithmic approximation. When AI learns back from that approximated data (rinse and repeat), the results drift farther and farther from the reality they are supposed to reflect.

BUT: we have always been deconstructing and reformulating information as we spread it. In that process, sometimes deeper insights are infused; in most cases, we just come up with crappy approximations. The only places that demand strict citation of original sources are academia and Wikipedia. Otherwise, we are not exactly inclined to check whether information reflects its source fairly, unless we have a direct stake. We love summary newsletters. We love the empathetic social-media messages of those we already like. We grant authority to items that resonate emotionally and feel actionable, while being more forgiving about strict realism.

REALITY CHECK: it’s a matter of how we want information and experiences to be made accessible to us. We want them convenient, emotionally close, and pragmatic for the narrow scope of our lives. There is so much complexity in just about anything that goes on in this world, and we prefer a smooth approximation over the cold, inaccessible nuance of the “original” reality.

Concern 3: AI will make us trust others less by raising doubts about (human) authenticity.

This is a concern raised by the resident worriers of human civilization, such as Harari, Haidt, and many others. If enough people become aware of concern #1, we will lose trust in whatever is out there and start to distrust everything, since any of it could be fake. Without trust in other human beings and their rationality, the public sphere is moot. Heck, society itself is moot.

BUT: even when we believe there is a human behind others’ words, we distrust them based on perceived and imagined antagonism. Look at the people who already dismiss confrontational messages as “bots,” regardless of whether they believe a human is behind them. Chances are those messages are written by people with vastly different political ideas and points of view that may be socially problematic, but that stem from human experiences nonetheless.

On the flip side, we put trust in whatever we choose to champion: sometimes not even individuals, but whole institutions. Just look at Elon Musk, the current owner and destroyer of Twitter, a platform once hoped to be an “online town square”: he denounces legacy media wholesale as junk and promotes his distorted idea of “citizen journalism.”

REALITY CHECK: it’s less about being authentically human and more about the perceived ill will of the communicator, who may have written the message themselves or used AI to create it.

So, the gist is:

  1. It’s not the message that misinforms us; it’s the network of other humans we like and rely on to tell us how to interpret the world.
  2. It’s not the distorted meanings; it’s our drive to feel knowledgeable about things beyond what we can actually handle.
  3. It’s not about being human; it’s about the loss of the sense that all other humans agree on pursuing some common good.

Those three elements precede the advent of any next-gen AI. They had already broken through via earlier communication technology: the abundance of hyperactive social media. Which stands on the foundation of hyperactive, openly partisan cable news. Which stands on the foundation of a commercial incentive system favoring such inventions. Which stands on the foundation of the magical idea that things will go well if we simply supply a greater quantity of communication, without strengthening the quality of our discussion practices.

Whatever we fear AI will do to destroy our civilization, we have already been doing to ourselves. Powered by… networked humans.

If the problem lies with humans, it cannot be solved by calling for a pause in AI advancement. Instead, we need to foster more basic education in democratic norms, empathy, social complexity, and other human conditions.

Too bad that the one discipline that was always supposed to do exactly this has been pushed to the margins over the last couple of decades: the humanities.

