How ChatGPT Might Ruin Reality

Max Haarich
3 min read · Jan 14, 2023


Video Still from “Smallest Sunrise in the Universe”. The image is the result of a single transparent pixel repeatedly filmed with a digital camera. (Max Haarich, 2021)

I was lucky to join a very impressive discussion about AI at the DLD Conference, which centered on the latest achievements of ChatGPT. Since its release, we have seen crazy things like schools banning it because students use it to write better essays than ever. People have even built virtual computers inside ChatGPT, which “delivered” results faster than any existing machine, simply by making them up. ChatGPT seems to be the perfect tool for any task that can be reduced to an input-output transformation of data, whether it’s text, numbers, images, or whatever. What could be dangerous about that?

During the pandemic, when people increasingly worked online, many of our jobs consisted of little more than streams of data: text messages, documents, video calls, and so on. And now that our jobs have been reduced to data streams, why wouldn’t we simulate those, too? Today we can use ChatGPT to simulate the work; tomorrow, we might simulate the worker. And the day after tomorrow, we might even simulate the company, the industry, the nation, everything.

Maybe this is not even bad!? Maybe this is the holy day when technology finally fulfills its 60,000-year-old promise to help us work less. We could finally sit back and watch a virtual world hustle to provide all the things we need. But reality would start to feel strange at some point. And it would feel stranger every day, until it became completely random. The reason is dehumanization.

There is a fundamental difference between human content and AI-generated content: humans create for a reason. They have intentions arising from the need to sustain a living body. Any human-generated content can be understood as a contribution to survival, whether direct or indirect. Survival is the Archimedean anchor that allows the members of a society to bring their cognitive maps together and to communicate about the world and their shared future. AI, in contrast, has no embodiment. AI has no problems to solve, only equations. Therefore, AI has no source of meaning: AI “thinks” only logically, while humans think biologically. But why would that matter so much?

Considering the quality of recent ChatGPT results, it is easy to imagine that such content might soon populate the internet, whether as images, as essays, or as speculations about the future of democracy. Search engine providers are already considering a move from searching for content to creating the content you need. AI systems like ChatGPT might thus increasingly be trained on the very data they produced themselves. This could push the whole internet into an ever more self-referential feedback loop, with each output drifting further away from human intentionality. At some point, the internet would be filled mainly with nth-generation derivatives of human content, nothing but fascinating non-sense.
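To make that loop tangible, here is a deliberately crude sketch (my own illustration, not something presented at DLD): a “model” that is nothing more than a Gaussian fitted to its training data and then retrained, generation after generation, on its own samples. Real language models are incomparably more complex, but the drift is the same in spirit.

import numpy as np

rng = np.random.default_rng(seed=1)

# Generation 0: "human" data with genuine variety.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 21):
    # "Train" the model: estimate the distribution from the current data.
    mu, sigma = data.mean(), data.std()
    # "Publish": the next training set is sampled from the model itself.
    data = rng.normal(loc=mu, scale=sigma, size=100)
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean = {mu:+.3f}, spread = {sigma:.3f}")

Run it a few times: typically the spread shrinks and the mean wanders away from its original value, because every generation preserves a little less of the original variety. So how do we deal with this?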

One panelist replied that this would not be a real problem: we would just build an AI to take care of that.

End of story*

*Don’t get me wrong: I am a big fan of new technologies and happily use ChatGPT myself. I am only concerned about potential societal consequences, which might be hard to notice. I mean, we are used to jobs getting rationalized away and to the internet getting dumber every day. But we as a society have hardly any experience with a daily life based on hyper-individualized and hyper-ephemeral AI content. Prompt Engine Optimization will be a thing!



Written by Max Haarich

Max Haarich is a conceptual artist, artistic researcher and founder of the Embassy of the Republic of Užupis. His work focuses on ethics in tech and AI.
