LLM reliance


Something I’ve noticed recently is how over time, a lot of my Google queries have gotten much more complex. Beyond just complex, they’ve become abstract and sometimes have unverifiable answers.

In high school and college, the queries slowly shifted from basic knowledge questions to more of a “how to” style. This meant that a lot of the time, videos on YouTube were helpful for learning. I used to think that the creators of these videos were geniuses. Like how in the world does this guy know how to solve all of these integrals?

As my personal knowledge grows, I am continually let down by my search results on almost every platform. Nobody has really answered some of my questions, and not many people have the exact same questions as me. Sometimes it even makes me wonder if I have the problem itself wrong and should be aiming for easier solutions. (A lot of the time, that is the actual problem.)

Over the past few months I have found myself quickly giving up on search engines and directing my questions to a Large Language Model (LLM). For example, just a few minutes ago, I was looking for the word colophon, but couldn’t remember it. I searched a few different things on Google trying to see if any synonyms showed up, but nothing.

I went to chat.openai.com and asked for the word in a few different ways. The AI got it wrong about five times, but it let me into its “mind”, helping me correct where my querying went wrong. I gave more context by saying, “Looking for one single word that describes a page that is made to describe the process of making the work” and got the word I was looking for immediately.

The breadth of “knowledge” in these LLMs is giving me the same ecstatic sensations I got as a kid when someone on YouTube could solve my most complex questions. But now, the problems are loosely defined and still solvable.

My current use case

The problems LLMs are solving for me right now are the ones where I know something exists or is solvable, but I can’t quite put my finger on it. I ask around the question, saying something like “it’s similar to X,” and usually get a good result. They’ve been helpful for answering all kinds of technical questions that have many correct answers, and they have reliably been accelerating my workflow.

These abstract solutions are what make me think my reliance on these tools will grow substantially over the next few years. My questions usually aren’t just factual lookups like “when was the Civil War?” or “kinematic equations” anymore. Recently they have been geared more toward optimizing very specific problems that are hard to search for on a normal engine.

Searching Google with the same query I had asked the chatbot while trying to find the word “colophon” gives me the following top five results:

  • What is a Flowchart?
  • Word: Page Layout
  • Synonyms and Antonyms of Design
  • Create a basic Flowchart
  • Basic tasks in Word

None of these are close. But the bad part is that it’s hard to correct; I have no idea why Google decides to show these results, and it’s not always simple to tweak the wording and query structure to emphasize the issue at hand.

In this case, if I tweak the query to also say “part of a book”, one of the top five results mentions the colophon. It’s not always this simple though, especially with technical questions, and as the question gets more complex, it feels like luck whether anything useful shows up.

Future use cases

I think that in our technology-filled world, we will come to rely on LLMs for all kinds of things. One of the most interesting cases I can foresee right now is the way that we “bookmark” and manage our online lives. The digital space is continually growing, which seems to be collectively destroying our attention span and ability to recall exact sources or details.

Online, I frequently lose track of things I eventually want to come back to: tweets, people, photos, stories, and words like colophon. Even in bookmarks they get lost, and if we bookmark everything, the bookmarks become worthless. But why search the whole internet for content we know exists and have already seen, when the real issue is putting our finger on the specific source within the subset of content we’ve seen?

I’m sure there must be a few AI startups by now that are tackling this issue, but I envision something like Linus Lee’s Monocle for everyone. It’s ambitious, but I could imagine some type of personal search engine with an LLM on top to help its owner query loads of things they’ve seen.

While writing this, I noticed it may come across as something out of a Black Mirror episode. But I’m envisioning something more like bookmarks on steroids, kind of like how some people set up a Notion or Obsidian page as a place to store stuff they like. The LLM would build on top of these pages to help the owner search through them once they become massive. It would also help the user save things so the process isn’t tedious.
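To make the “bookmarks on steroids” idea a bit more concrete, here is a minimal sketch of just the retrieval side, assuming saved items are plain text snippets. The `save`/`search` names and the whole structure are hypothetical (not how Monocle or any existing product works), and the bag-of-words `embed` function is a deliberately toy stand-in for a real embedding model.

```python
# Hypothetical sketch of a personal "bookmarks on steroids" index.
# Assumptions: saved items are short text snippets, and embed() is a
# toy stand-in for a real embedding model.

import math
import re
from collections import Counter

saved_items: list[dict] = []  # each entry: {"text": ..., "vector": Counter}

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def save(text: str) -> None:
    """Store a snippet the moment it's seen, so saving isn't a tedious process."""
    saved_items.append({"text": text, "vector": embed(text)})

def search(query: str, top_k: int = 3) -> list[str]:
    """Rank saved snippets against a fuzzy, tip-of-the-tongue style query."""
    query_vec = embed(query)
    scored = sorted(saved_items, key=lambda item: cosine(query_vec, item["vector"]), reverse=True)
    return [item["text"] for item in scored[:top_k]]

if __name__ == "__main__":
    save("A colophon is a page describing how a book or website was made.")
    save("Thread with a kinematic equations cheat sheet.")
    save("Photo essay on local hiking trails.")
    # An LLM layered on top could rephrase the vague query or summarize the hits.
    print(search("single word for the page about the process of making the work"))
```

The interesting part would be the layer above this sketch: the LLM turning a vague, tip-of-the-tongue question like the colophon one into queries the index can actually answer, and summarizing what it finds.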

As a consumer, this idea excites me from a productivity standpoint and terrifies me from a privacy perspective. Still, as a security person, this privacy risk is kind of like a new industry, and exciting in its own way. The growing use of sensitive personal data is going to be an interesting privacy problem to solve over time. It’s not just our name and address that need securing anymore; it’s our thoughts, questions, and possibly everything that we are.
