

Ahh. I’d seen a bunch of people pointedly avoiding things he’d worked on and was working with, but no one actually said why so I was assuming it was llm related. No such luck, I guess… the old missing stair strikes again.


Ahh, i knew there was a recent catastrophe involving people handing credentials and confidential information to third parties without a single thought or qualm, but couldn’t for the life of me remember what it was. Thanks!


So, there’s a kind of security investigation called “dorking”, where you use handy public search tools to find particularly careless software misconfigurations that get indexed by e.g. google. One tool for that sort of searching is github code search.
Turns out that a) claude chat logs get automatically saved to a file under .claude/logs and b) quite a lot of people don’t actually check what they’re adding to source control, and you can actually search github for that sort of thing with a path: code search query (though you probably need to be signed in to github first, it isn’t completely open).
I didn’t find anything even remotely interesting (and watching people’s private project manager fantasy roleplay isn’t something I enjoy), but viss says they’ve found credentials, which is fun.
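For the curious, the kind of query involved can be sketched like this. It’s a hypothetical illustration only: the `path:` qualifier is github’s code-search syntax, and the directory is just the one mentioned above.

```python
from urllib.parse import urlencode

# A "dork" in this style is just a code-search query with a path: qualifier.
# The search UI lives at github.com/search (and, as noted above, you
# generally need to be signed in for code search to work).
query = "path:.claude/logs"
url = "https://github.com/search?" + urlencode({"q": query, "type": "code"})
print(url)
```

The same idea works with any other path that tooling writes automatically and people forget to gitignore.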


Given that openai is now a precedent for removing the pb figleaf from a pbc, I’m assuming everyone will be doing it now and it’ll just become another part of the regular grift.


That’s an excellent summary of the product.


Armin Ronacher, who is an experienced software dev with a fair amount of open and less open source projects under his belt, was up until fairly recently a keen user of llm coding tools. (he’s also the founder of “earendil”, a pro-ai software pbc, and any company with a name from tolkien’s legendarium deserves suspicion these days)
His faith in ai seems to have taken a bit of a knock lately: https://lucumr.pocoo.org/2026/1/18/agent-psychosis/
He’s not using psychosis in the sense of people who have actually developed serious mental health issues as a result of chatbot use, but software developers who seem to have lost touch with what they were originally trying to do and just kind of roll around in the slop, mistaking it for productivity.
When Peter first got me hooked on Claude, I did not sleep. I spent two months excessively prompting the thing and wasting tokens. I ended up building and building and creating a ton of tools I did not end up using much. “You can just do things” was what was on my mind all the time but it took quite a bit longer to realize that just because you can, you might not want to. It became so easy to build something and in comparison it became much harder to actually use it or polish it. Quite a few of the tools I built I felt really great about, just to realize that I did not actually use them or they did not end up working as I thought they would.
You feel productive, you feel like everything is amazing, and if you hang out just with people that are into that stuff too, without any checks, you go deeper and deeper into the belief that this all makes perfect sense. You can build entire projects without any real reality check. But it’s decoupled from any external validation. For as long as nobody looks under the hood, you’re good. But when an outsider first pokes at it, it looks pretty crazy.
He’s still pro-ai, and seems to be vaguely hoping that improvements in tooling and dev culture will help stem the tide of worthless slop prs that are drowning every large open source project out there, but he has no actual idea if any of that can or will happen (which it won’t, of course, but faith takes a while to fade).
As always though, the first step is to realise you have a problem.


I’ve thought about jolla, but I’m not particularly interested right now. Their security is unlikely to be anything like as good as ios or graphene, software availability is poor, the hardware quality appears to be ok at best, and so on.
I’m considering various alternative devices, but if it’s effectively a “vanilla smartphone only slightly worse” it doesn’t really appeal to me. If they’d built a modern n900, on the other hand…


This is fun: a zero-click android exploit that allows arbitrary code execution and privilege escalation. Y’know, the worst kind. How did we get here?
Over the past few years, several AI-powered features have been added to mobile phones that allow users to better search and understand their messages. One effect of this change is increased 0-click attack surface, as efficient analysis often requires message media to be decoded before the message is opened by the user. One such feature is audio transcription. Incoming SMS and RCS audio attachments received by Google Messages are now automatically decoded with no user interaction. As a result, audio decoders are now in the 0-click attack surface of most Android phones.
AI, making everything worse, even before it runs!
https://projectzero.google/2026/01/pixel-0-click-part-1.html
Every now and then, I think about going back to android, and then I read stuff like this. FWIW, iOS had a closely related bug, but compiled the offending code with bounds checks, so it wasn’t usefully exploitable (and required some user interaction, too).
Anyway, if you do android, maybe check if automatic transcription is enabled.


Blacksky has delivered on bluesky’s promise of federation by setting up their own app view, creating a complete and independent third party implementation.
https://blacksky.community/profile/did:plc:w4xbfzo7kqfes5zb7r6qv3rw/post/3mcozwdhjos2b
Mcc has an interesting thread on mastodon (https://mastodon.social/@mcc/115918042095581428) which asks a bunch of questions about what the actual consequences of this might be, and no-one really seems to know, but no-one has much faith in the engineering or moderation chops of the bluesky team.
It looks like bluesky is somewhat vulnerable to rich trolls, because the main barrier to entry is cost… blacksky has a budget of maybe 80000 usd/year (https://opencollective.com/blacksky) which is well within the reach of a whole bunch of people prepared to spend money to be egregious assholes, especially if they already have access to suitable talent and equipment. It’ll be bleakly interesting to see who tries this first.


A fun little software exercise with no real world uses at all: https://drewmayo.com/1000-words/about.html
Turns out that if you stuff the right shaped bytes into png image tEXt chunks (which don’t get compressed), the base64 encoded form of that image has sections that look like human readable text.
What are the implications?
Nothing! This was just for fun after a discussion with a colleague whether it might be even possible to make base64 blobs look readable. There’s certainly no poorly coded systems out there which might be hooked up to read emails or webpages and interpret any text they see as information.
No siree I’m sure everyone is keeping the attachments and the content well and truly isolated from each other and this couldn’t possibly do anything other than be a fun proof of concept and excuse for me to play with wasm.
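The trick can be sketched in a few lines; this is an assumed reconstruction of the idea rather than the linked author’s actual code. Base64 turns every 3 payload bytes into 4 output characters, so to make the encoded output spell a chosen string you base64-*decode* that string and stuff the resulting bytes into a tEXt chunk:

```python
import base64
import struct
import zlib

# The target must use only base64-alphabet characters (A-Z, a-z, 0-9, +, /)
# and have a length that's a multiple of 4, so no '=' padding is needed.
target = "NiceReadableText"
payload = base64.b64decode(target)

def png_text_chunk(keyword: bytes, text: bytes) -> bytes:
    # PNG chunk layout: 4-byte big-endian data length, 4-byte chunk type,
    # data (keyword + NUL separator + text), then CRC-32 over type + data.
    data = keyword + b"\x00" + text
    return (struct.pack(">I", len(data)) + b"tEXt" + data
            + struct.pack(">I", zlib.crc32(b"tEXt" + data)))

chunk = png_text_chunk(b"Comment", payload)

# Round-tripping shows the chosen text survives base64 encoding intact.
assert base64.b64encode(payload).decode() == target
```

For the text to actually show up in the base64 of a whole file, the payload also has to start at a 3-byte-aligned offset so the base64 framing lines up, which is glossed over here (and presumably handled by the linked project).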


As an update, the original author received sufficient negative feedback and abuse that they’ve closed the repo and decided to give open source and social media a bit of a rest for the foreseeable future. Hopefully the projects they’ve created and been working on will live on, but it is quite a loss to the community either way.


I feel like one day that “no guarantee of merchantability or fitness for any particular purpose” thing will have to give.


Python at least has the advantage that there are a bunch of ides and fancy text editors out there (with varying degrees of llm integration), so if one doesn’t work for you there are alternatives. Fwiw, there are several different language servers as well, though I’m not sure how easy they are to swap in codium.
I’ve been using zed of late. It has an ai killswitch, which is good enough for my needs, which are a bit less hardline than those of the authors of the slopware list. I’ve also been using helix, which is nice but a very different sort of tool.


There’s room for some nuance there. They make some reasonable predictions, like chatbot use seems likely to enter the dsm as a contributing factor for psychosis, and they’re all experienced systems programmers who immediately shot down Willison when he said that an llm-generated device driver would be fine, because device drivers either obviously work or obviously don’t, but then fall foul of the old gell-mann amnesia problem.
Certainly, their past episodes have been good, and the back catalogue stretches back quite some time, but I’m not particularly interested in that sort of discussion here.


My gloomy prediction is that (b) is the way things will go, at least in part because there are fewer meaningful consequences for producing awful software, and if you started from something that was basically ok it’ll take longer for you to fail.
Startups will be slopcoded and fail quick, or be human coded but will struggle to distinguish themselves well enough to get customers and investment, especially after the ai bubble pops and we get a global recession.
The problems will eventually work themselves out of the system one way or another, because people would like things that aren’t complete garbage and will eventually discover how to make and/or buy them, but it could take years for the current damage to go away.
I don’t like being a doomer, but it is hard to be optimistic about the sector right now.


Eh, it works okay for me. What’s “the language server” in this case? The third party c# language server isn’t great, but I’ve had no particular issues with python, julia, lean4, rust, dafny and tla+. I can’t recommend (vs)codium because it is a monstrous memory hog, and even in its de-microsofted form it’ll be full of vibe-coding, but where there aren’t good alternatives it gets a solid “it’ll do” from me.


If that won’t sell it to governments around the world, I don’t know what will. Elon’s on to a winner with that strategy.


Ugh, I carried on listening to the episode in the hopes it might get better, but it didn’t deliver.
I don’t understand how people can say, with a straight face, that ai isn’t coming for your job and it is just going to make everyone more productive. Even if you ignore all the externalities of providing llm services (which is a pretty serious thing to ignore), have they not noticed the vast sweeping layoffs in the tech industry alone, let alone the damage to other sectors? They seem to be aware that the promise of the bubble is that agi will replace human labour, but seem not to think any harder about that.
Also, Willison thinks that a world without work would be awful, and that people need work to give their lives meaning and purpose and bruh. I cannot even.


Been listening to the latest oxide and friends podcast (predictions 2026), and ugh, so much incoherent ai boosting.
They’re an interesting company doing interesting things with a lot of very capable and clever engineers, but every year the ai enthusiasm ramps up, to the point where it seems like they’re not even listening to what they’re saying, or noticing how contradictory it is… “everyone will be a 10x vibe coder” and “everything will be made with some level of llm assistance in the near future” vs “no-one should be letting llms access anything where they could be doing permanent damage” and “there’s so much worthless slop in crates.io”. There’s enthusing over llm law firms, without any awareness of the recent robin ai collapse. Talk of llms generating their own programming language that isn’t readily human readable but is somehow more convenient for llms to extrude, but also talking about the need for more human review of vibe code. Simon Willison is there.
I feel like there’s a certain kind of very smart and capable vibe coder who really cannot imagine how people can and are using these tools to avoid having to think or do anything, and aren’t considering what an absolute disaster this is for everything and everyone.
Anyway, I can recommend skipping this episode and only bothering with the technical or more business oriented ones, which are often pretty good.


A few months back, @ggtdbz@lemmy.dbzer0.com cross-posted a thread here: “Feeling increasingly nihilistic about the state of tech, privacy, and the strangling of the miracle that is online anonymity. And some thoughts on arousing suspicion by using too many privacy tools”, and I suggested maybe contacting some local amateur radio folk to see whether they’d had any trouble with the government, as a means to do some playing with lora/meshtastic/whatever.
I was of the opinion that worrying about getting a radio license because it would get your name on a government list was a bit pointless… amateur radio is largely last century technology, and there are so many better ways to communicate with spies these days, and actual spies with radios wouldn’t be advertising them, and that governments and militaries would have better things to do than care about your retro hobby.
Anyway, today I read MAYDAY from the airwaves: Belarus begins a death penalty purge of radio amateurs.
I’ve not been able to verify this yet, but once again I find myself grossly underestimating just how petty and stupid a state can be.