New things tend to bring out extreme opinions, and AI is no different. Some liken it to the second coming, while others damn it as the antichrist. It's early days yet, but to me AI feels more like Web 2.0 than Web 3.0. Both were maximally hyped by press and marketing departments, but Web 3.0 always felt like what you'd get if a Ponzi scheme and vaporware had a baby. Web 2.0 was different. There was a there there. Google Maps, Flickr, and Facebook were all real things. Web 2.0 marked the very real and immensely tangible beginning of the web as a viable platform. While there has undoubtedly been an unrelenting torrent of heinous marketing around AI, there is also very clearly a there there. Even without the time to truly delve into the plethora of tools and techniques currently available, the likes of ChatGPT and Cursor have already been helpful in my work. My admittedly limited experience with LLMs and the like gives me optimism that AI will bring a new generation of computerized tools that help people build, create, and think. What worries me, though, is when I see people use AI not as a tool to help them do those things, but to do those things for them. The best example of this is how LLMs are already being used to write.
I have been fortunate enough to have a now decades-long career as a software engineer. As one might expect, my early success came from solving problems mostly through coding. What has really helped me thrive in my more senior roles of late, however, is writing. Writing regularly for this blog and elsewhere for over a decade has greatly improved my ability to distill vague ideas into cogent points. For me, practicing writing has been like practicing a strict form of thinking. John Gruber recently talked about this connection between writing and thinking while guesting on Cortex:
But it’s that writing is thinking. And to truly have thought about something, I need to write it. I need to write it in full sentences, in a narrative sense, and only then have I truly known that I’ve thought about it.
Like John, I find that writing makes me truly think about a subject by leading me to consider its various aspects and then forcing me to organize all of those ideas into coherent prose. This process also forces me to organize those same ideas in my brain. While I agree with John that speaking extemporaneously can't compare to the thorough consideration involved in writing, I would argue that by making me a better thinker, the practice of writing has made me a better speaker.
The idea that writing improves thinking isn't unique to me. That's why I suspect the liberal arts are filled with writing. It's not so much about finding the next great academic as about creating a whole class of better thinkers. That's ostensibly why a college degree is required for the jobs that ultimately pay people to think.
It's this connection between writing and thinking that makes me worried about people using LLMs to write. Now, not all writing is the same, and I would argue that most of the writing people do, even professionally, is functionally basic communication. I'm also not all that concerned with AI tools that assist writing. An LLM that autocompletes or even rewords sentences doesn't eliminate the process of writing. Where I see problems is when LLMs are used to do the actual writing in a way that precludes users from having to think.
Let's consider two scenarios in which someone is asked to provide requirements for a given project. In scenario one, the person writes the requirements in five bullets but is worried about the optics of such a short response. In scenario two, the person doesn't yet know the requirements but still wants to provide a response just to avoid being empty handed. In both scenarios, each person uses an LLM to generate a 1,000-word specification document that they send to their colleagues. Both not only wasted their colleagues' time by having them read 1,000 words of AI slop, they also created an illusion of thought that, in the case of person one, may not have been needed and, in the case of person two, never happened at all.
And then there's a third scenario: the person who has no intention of ever really thinking about any project and uses AI solely to keep up appearances. You might think that's cynical or absurd, but I'll bet you dimes to dollars this is already happening. There are many, many situations in jobs that pay people to think where avoiding thinking can be a successful strategy. That's because thinking through ideas is time consuming, indeterminate, hard to measure, and even harder to justify, even when it's absolutely necessary. Being the one who takes the time to think through something can easily become a "heads I win/tails you lose" proposition. Ideas that can't be worked out can end up with a stink of failure, while the best and most thought-through ones can seem like common sense in hindsight. Add to the risk/reward equation that the actual act of thinking is largely invisible. It's the resulting documents that are seen at the end of the day, and how many bosses pay that close attention to their contents? Of those who do, how many could discern which were produced by an LLM? Many never had the time to really think about the subject, and why should they? That's what they paid the person who wrote the document to do, a person, by the way, they already believe is an ace for being able to produce such documents on short notice.
I am still optimistic about AI the same way I have been optimistic about other major developments in computing, but those other developments never gave anyone the impression that computers could actually think. Before AI, no one looked at an image or document and questioned whether a human was involved. No one looked at Photoshop or Google Docs as an alternative to thinking. LLMs today can already give the illusion of human thought. The idea of our attention being flooded with AI-generated slop alone is worrisome, but what makes me way more worried is how often individuals will have the computer create an illusion of thought in lieu of actually thinking.