+++
title = "On LLMs"
publishDate = 2025-04-04T13:21:00+02:00
lastmod = 2025-04-21T14:36:44+02:00
tags = ["ethics", "communication", "avoidance", "work"]
categories = ["llm", "tech", "mind"]
draft = false
meta = true
type = "list"
[menu]
  [menu.posts]
    weight = 3002
    identifier = "on-llms"
+++

This topic [came up on Fedi](https://infosec.exchange/@david_chisnall/114278564411985648), and I thought it would be worth expanding on here, so my thinking doesn't get lost in the sea.

To address David's take, I'll go a step further.

I believe we should put more effort into teaching people to **reason**. Thinking isn't quite enough. It's a vague word that essentially means _mentate_, which can ultimately be any mental action.

Reasoning, on the other hand, is an actual understanding and application of logic{{< sidenote >}}In the formal sense, see: https://tellerprimer.sf.ucdavis.edu/logic-primer-files {{</sidenote>}} to the systems{{< sidenote >}}People, relationships, tools, assets, materials, networks, logistics, including their components.{{</sidenote>}} involved.

In this context, an LLM is not quite a solution, but a crutch towards it.

So then...

> If you could ask any silly, stupid, embarrassing question without fearing consequences, what would you ask?

You have an unfeeling, unjudging, (mostly) unbiased machine in front of you. It knows nothing about you, and it can (sometimes) generate answers to your questions that will be useful. It's certainly fallible, and you know that, but you'll still use it, for three reasons:

1.  It doesn't judge you: there's no anxiety about asking the silliest, most basic questions, because it will never make you feel worse for not knowing things.
2.  It is instant, convenient and quick, even if it's incapable of reasoning{{< sidenote >}}LLMs don't "hallucinate." Spouting random garbage is their function and purpose. Engineers can only do so much to /bias/ them toward being truthful, but even then that /'truthfulness'/ is accidental, rather than by design.{{</sidenote>}}.
3.  To get answers that are relevant and more accurate (as far as they can be), an LLM requires _context_, since it isn't an omniscient mind-reader. This often forces the user to sit down and type up their notes and knowledge, in an effort to get the LLM to comply with their wishes.

And that last point is what I consider important. **A user trying to solve a problem with an LLM often ends up typing up all the information they need to solve the problem themselves.** That's where the true value is. Having a _place_ or an _environment_ in which a person can _think_ about their problem in a way they themselves understand.

This effectively prompts the user to _slow down._ There's this tendency, this attitude, that everything has to happen now, immediately, ASAP. That's a big detriment to humans, because while it's true that emergencies happen, most events aren't even close to being emergencies.

But the LLM requires that the user slow down and think about their problem - otherwise there's no value to be gained{{< sidenote >}}No, I don't consider a short-term mitigation of a problem to be a true solution. Solutions are stable, tested, long-term resolutions and mitigations which prevent the problem from appearing again without causing side-effects. And while that may appear detached to some, because of the cost involved in developing real solutions, I stand by this. Solutions /solve/ problems, they don't create new ones.{{</sidenote>}}.

From where I'm sitting, LLMs are useful as a prompting device for the user. They spout confabulated garbage left and right, and sometimes it accidentally reflects the state of reality - something an LLM has no access to in the first place.

I genuinely believe using LLMs for anything more than boilerplate code or small, self-contained scripting is a crutch. Certainly it can help, but the real value is in _writing the prompt_ and _analyzing and understanding the underlying problems_, not in the tokens generated.

**So here's an alternative to using LLMs:** open a blank text file and start describing your problem. Write down all the details, the possible resolutions, and any side-effects they may have.
You already have a better understanding of the context and systems involved than you could _ever_ communicate to _anyone_, and leveraging _your own mind_ to solve problems is infinitely faster and better than relying on a crutch that's wrong most of the time{{< sidenote >}}In the way of assumptions, lack of context, lying, lack of comprehension, being 'confidently wrong', forgetting stated facts, changing problem parameters or task requirements, and so on.{{</sidenote>}}.
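
Purely as an illustration - the structure and the details below are made up, not a prescribed format - such a note might look something like this:

```text
PROBLEM
  Nightly backups of the file share have been failing since Tuesday.

CONTEXT
  - systems involved: file server, backup target, scheduler
  - recent changes: OS upgrade on the backup target over the weekend

EFFECTS / SIDE-EFFECTS
  - no restore points for the share since Tuesday
  - the failure alerts never fired (a separate problem?)

POSSIBLE RESOLUTIONS
  1. roll back the upgrade (side-effect: reintroduces the patched vulnerabilities)
  2. update the backup client to match (side-effect: needs a maintenance window)

OPEN QUESTIONS
  - why didn't the alerts fire?
```

More often than not, by the time a note like this exists, the answer is already staring back at you.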

So there. And if you keep your notes diligently, you can speed up this process to the point where finding answers is _quicker_ than typing the questions you have into an LLM.
If you wish to learn more about information management{{< sidenote >}}Yes, it says 'personal' information, but I've successfully applied these ideas and tools in multiple professional roles.{{</sidenote>}}, I highly recommend [Karl Voit's blog](https://karl-voit.at/tags/pim/), where you will find in-depth articles on the topic and the associated workflows.

A last bit to note is that all of the above assumes that the user isn't using LLMs to avoid effort. Information management and problem solving _intrinsically_ require focus, attention and effort. From the very beginning, you have to actually put in work to:

1.  understand the problem,
2.  understand its context,
3.  understand its effects and side-effects,
4.  identify the tools you can use to resolve it,
5.  work out the possible resolutions,
6.  weigh their associated effects and side-effects, which may well reach outside the scope of the problem.

These are not complicated steps, and an LLM can (in some situations){{< sidenote >}}Especially when the problem in question is localized and small, with no side-effects.{{</sidenote>}} help address them. **But leaning on an LLM for them can promote and habituate the avoidance of effort, instead of promoting deeper thinking and closer analysis of the problem's components.** Which is a _very bad thing._

Is there a way to address such a situation?

I don't know. But if _you_ do, send me an e-mail - I'd be very keen on learning how to help folks use their own intellect more.