Shorter and shorter thoughts

Seattle

Today, Hieu Pham posted a realization that resonated with me:

I noticed an alarming change within myself because of LLM usage: I have become lazy about reading.

These days, if I ask Grok a question and its answer is long, I get annoyed and tell it to give me a TL;DR. Sometimes I even get impatient enough to tell it in advance to answer concisely, though I’ve stopped doing that because it usually leads to worse answers.

I am afraid that as Grok becomes infinitely better, and its short answers are all informative and correct, I will become addicted to consuming short chunks of text. Pretty much like the way some people are addicted to short videos on TikTok.

That is scary. Losing the enthusiasm to read is the key to falling behind. And if it is happening to me, I suspect it is happening to many who are growing up in this age of AI.

I noticed this in myself too, and posted about it last year in the combatting helplessness post. It’s a pretty interesting phenomenon that’s leaking into almost everything in daily life, which I called “outsourcing our brain.” It’s getting bad! Just take a look at all the “@grok is this true??” comments on X.

With “vibe-coding” becoming normalized, literally everything being summarized, and even a paragraph being too long to consume, it feels like we’re approaching a new norm for information. I imagine it’s gonna be an era of half-truths, where everything’s summarized so densely that it loses most of the nuance that makes it “true.”

I don’t really see how any LLM will become “infinitely better” and give objective short-form summaries without missing loads of nuance. It’s not even an LLM problem; an easy-to-digest summary just can’t convey the full picture of a complicated idea without either leaving out key details or implicitly over-representing certain topics.

This doesn’t exactly worry me on its own, but last week’s sycophantic GPT-4o outbreak, plus heavy summarization and a lot of “I’m right, right?”, makes me think that a lot of online spaces are going to get even more annoying as time goes on.

What’s the correct balance? We need information density, but also complete truths. It seems like we need much more than a tacked-on note at the end of a summary that says, “but remember, there’s more to this story!” Honestly, I think it’s the same as it’s always been. There’s not really a solution; as people who realize we’re outsourcing our thinking, we have to strive to be better and go down organic rabbit holes of primary sources with curiosity.

To me, the LLM has become a way to find better rabbit holes to go down. But we have to stay cautious and not let it do all of the deep diving too, or else we end up in the overly summarized territory of half-truths.
