I like the sentiment of the article; however, this quote really rubs me the wrong way:
I’m not suggesting we abandon AI tools—that ship has sailed.
Why would that ship have sailed? No one is forcing you to use an LLM. If, as the article supposes, using an LLM is detrimental, and it’s possible to start having days where you don’t use an LLM, then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?
I personally don’t interact with any LLMs, neither at work nor at home, and I don’t have any issue getting work done. Yeah, there was a decently long ramp-up period — maybe about 6 months — when I started on my current project at work, where it was more learning than doing; but now I feel like I know the codebase well enough to approach any problem I come up against. I’ve even debugged USB driver stuff, and, while it took a lot of research and reading USB specs, I was able to figure it out without any input from an LLM.
Maybe it’s just because I’ve never bought into the hype; I just don’t see how people have such high respect for LLMs. I’m of the opinion that using an LLM has potential only as a true last resort — and even then it will likely not be useful.
Because the tools are here and not going anywhere.
The actually useful shit LLMs can do. Their point is that relying mainly on an LLM hurts you; that doesn’t make it an invalid tool in moderation.
You seem to think of an LLM only as something you can ask questions to; that’s one of their worst capabilities and far from the only thing they do.
Swiss army knives have had awls for ages. I’ve never used one. The fact that the tool exists doesn’t mean that anybody has to use it.
Which is?
I agree with this on a global scale; I was thinking about it on a personal scale. In the context of the entire world, I do think the tools will be around for a long time before they ever fall out of use.
I’ll be the first to admit I don’t know many use cases of LLMs. I don’t use them, so I haven’t explored what they can do. As my experience is simply my own, I’m certain there are uses of LLMs that I hadn’t considered. I’m personally of the opinion that I won’t gain anything out of LLMs that I can’t get elsewhere; however, if a tool helps you more than any other method, then that tool could absolutely be useful.
My 2 cents on this.
I never used LLMs until recently; not for moral or ideological reasons, but because I had never felt much need to. I also remember that when ChatGPT originally came out it asked for my phone number, and that’s a hard no from me.
But a few months ago I decided to give it another go (no phone number now), and found it quite useful sometimes. However, before I explain how I use it and why I find it useful, I have to point out that this is only the case because of how crap search engines are nowadays, with pages and pages of trash results and articles.
Basically, I use it as a rudimentary search engine to help me solve technical problems sometimes, or to clear something up that I’m having a hard time finding good results for. In this way, it’s also useful to get a rudimentary understanding of something, especially when you don’t even know what terms to use to begin searching for something in the first place. However, this has the obvious limitation that you can’t get info for things that are more recent than the training data.
Another thing I can think of is that it might be quite useful if you want to learn and practice another language, since language is what it does best, and it can work as a sort of pen pal, fixing your mistakes if you ask it to.
In addition to all that, I’ve seen people make what are essentially text-based adventure games that allow much more freedom than traditional ones, since you don’t have to plan everything yourself: you can just give it a setting and a set of rules to follow, and it will mould the story as the player progresses. Basically DnD.
Basically, I use it as a rudimentary search engine
The other day I had a very obscure query where the web page results were very few and completely useless. Reluctantly, I looked at the Google LLM-generated “AI Overview” or whatever it’s called. What it came up with was completely plausible, but utter bullshit. After a quick look I could see that it had taken text that answered a similar question and just weaved some words I was looking for into the answer in a plausible way. Utterly useless, and it just ended up wasting my time checking that it was useless.
Another thing I can think of is that it might be quite useful if you want to learn and practice another language
No, it’s terrible at that. Google’s translation tool uses an LLM-based design. It’s terrible because it doesn’t understand the context of a word or phrase.
For instance, a guy might say to his mate: “Hey, you mad cunt!”. Plug that into an LLM translation and you don’t know what it might come up with. In some languages it actually translates to something that will translate back to “Hey, you mad cunt”. In Spanish it goes for “Oye, maldita sea”, which is basically “Hey, dammit”. That is not the sense it was used in at all. Shorten that to “Hey, you mad?” and you get the problem that “mad” could be crazy or it could be angry, depending on the context and the dialect. If you were talking with a human, they might ask you for context cues before translating, but the LLMs just pick the most probable translation and go with that.
If you use a long conversational interface, it will get more context, but then you run into the problem that there’s no intelligence there. You’re basically conversing with the equivalent of a zombie. Something’s animating the body, but the spark of life is gone. It is also designed never to be angry, never to be sad, never to be jealous; it’s always perky and pleasant. So, it might help you learn a language a bit, but you’re learning the zombified version of the language.
Basically DnD.
D&D by the world’s worst DM. The key thing a DM brings to a game is that they’re telling a story. They’ve thought about a plot. They have interesting characters that advance that plot. They get to know the players so they know how to subvert their expectations. The hardest thing for a DM to deal with is a player doing something unexpected. When that happens, they need to adjust everything so that what happens still fits in with the world they’re imagining, and try to nudge the players back to the story they’ve built. An LLM will just happily continue generating text that meets the heuristics of a story. But that basically means that the players have no real agency. Nothing they do has real consequences, because you can’t affect the plot of a story when there’s no plot to begin with.
And what if you just use an LLM for dialogue in a game where the story/plot was written by a human? That’s fine until the LLM generates plausible dialogue that’s “wrong”. Like, say the player is investigating a murder and talks to a guard. In a proper game, the guard might not know the answer, or might know the answer and lie, or might know the answer but not be willing or able to tell the player. But if you put an LLM in there, it can generate a plausible response from a guard, and that response might match one of those scenarios, but it doesn’t have a concept that this guard is “an honest but dumb guard” or “a manipulative guard who was part of the plot”. If the player comes and talks to the guard again, will they still be that same character, or will the LLM generate more plausible guard dialogue that goes against the previous “personality” of that guard?