

You can still do –


My favorite part is the people that are behind the window but in front of the dew


What an interesting observation—highly relevant to the modern world—it’s a shame that cultural touchstones are being devastated by these statistical abnormalities—we must delve deeper into the root causes of this—I firmly believe that we need to train a bigger AI to help determine what is AI and prevent this from—becoming an issue long—term


She’s fine, but she definitely was part of that Whedon era of media. The voice acting she’s been doing since then is great


Cockshott is a TERF, but Towards a New Socialism is basically a techbro pipeline. It shows how you can use what are essentially financial algorithms (based on Gosplan production matrices) to implement a planned economy.
So basically the only good use for ““AI”” models. I think this guy specifically is a good example of the beginning of that pipeline. This is one of the first times he’s explicitly brought up Graeber, but his video on money was definitely influenced by him. Seems like he’s not too far away from making it more explicit that his concept of automation is based on socialist theory.
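For anyone curious, the core of those Gosplan-style production matrices is basically Leontief input-output math: given how much of each good it takes to make each other good, solve for the gross output that leaves the desired final demand. A minimal sketch in Python (all numbers are made up):

```python
import numpy as np

# Toy Leontief input-output model: A[i][j] is how much of good i
# gets consumed to produce one unit of good j (invented numbers).
A = np.array([
    [0.1, 0.3],   # steel used per unit of steel, per unit of grain
    [0.0, 0.2],   # grain used per unit of steel, per unit of grain
])
final_demand = np.array([100.0, 200.0])  # what should be left for consumption

# Gross output x must satisfy x = A @ x + d, i.e. (I - A) x = d.
gross_output = np.linalg.solve(np.eye(2) - A, final_demand)
print(gross_output)
```

The real proposals scale this to thousands of products with sparse solvers, but the algebra is the same.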


Yeah, but it also makes “forking” a more ambiguous term overall. A GitHub fork can be a clone with a working branch or an actual fork, and it’s not immediately obvious which until you look at the code.
That’s why I use the README test: almost no one modifies the README when their GitHub fork is really just a work branch, so a changed README usually means an actual fork.
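A toy version of that README test, assuming you’ve already fetched both README files (the GitHub API fetching is left out; the function name and normalization are my own):

```python
import hashlib

def looks_like_real_fork(upstream_readme: str, fork_readme: str) -> bool:
    """Heuristic: if the fork's README differs from upstream's, someone
    probably intends it as a real fork rather than a PR work branch."""
    def digest(text: str) -> str:
        # Ignore trailing whitespace so formatting noise doesn't count as a change.
        normalized = "\n".join(line.rstrip() for line in text.strip().splitlines())
        return hashlib.sha256(normalized.encode()).hexdigest()
    return digest(upstream_readme) != digest(fork_readme)

print(looks_like_real_fork("# proj\nhello", "# proj\nhello   "))  # whitespace only
print(looks_like_real_fork("# proj", "# proj (de-AI'd fork)"))    # real edit
```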


It seems like they’re trying to build a thesis on AI and its role in society. Their perspective seems to be in the camp of Graeber and Cockshott.
They honestly might be making the switch because it worked out for Second Thought.


Not always; most PRs start as forks unless the person is part of the project and can do their work on a branch.


Yeah, there’s not even a working branch yet. And she’ll need to set up a CI pipeline to keep it synced with the upstream while making sure the AI stuff doesn’t get back in. It’s already 9 commits behind and nothing has even been done.
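That kind of sync pipeline could look something like this as a scheduled GitHub Actions workflow. This is just a sketch; the upstream URL, branch, and `src/ai_features/` path are all placeholders, and a real setup would need conflict handling:

```yaml
# Hypothetical daily upstream-sync workflow; names and paths are placeholders.
name: sync-upstream
on:
  schedule:
    - cron: "0 6 * * *"   # once a day
  workflow_dispatch: {}
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Merge upstream, then strip the unwanted paths
        run: |
          git config user.name "sync-bot"
          git config user.email "sync-bot@example.com"
          git remote add upstream https://github.com/EXAMPLE/upstream.git
          git fetch upstream
          git merge --no-edit upstream/main
          # Keep the AI integration out of the fork (path is a placeholder).
          git rm -r --ignore-unmatch src/ai_features/
          git diff --cached --quiet || git commit -m "Strip upstream AI features"
          git push
```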


Art of the Problem did a good video on this recently. Stick with it, since he buries the lede and it only comes back when he shows like 3 minutes of uncut David Graeber. There is a bit of liberal idealism in there, but he’s not wrong about how democracy is meant to make direct control through financial markets more difficult.


She’s just a German history enthusiast!


That new one they’re working on looks interesting. Doesn’t seem to be a waifu collector at least


Oh I know, it’s just the syntax part that would be nice. Lisp syntax is great for highly functional stuff whereas it feels kinda forced in JS.
Like I said, I mostly use Python, so “functional” to me is a comprehension expression (which I think is great syntax), but that type of thing just flows better with syntax specifically designed for it.
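To show what I mean, here’s the same idea in map/filter style (the shape Lisp-y code naturally takes) versus a comprehension, with toy data:

```python
# The same "functional" transformation two ways; the comprehension
# was designed for exactly this shape, so it reads most directly.
words = ["spam", "eggs", "SPAM", "ham"]

# map/filter style: works, but needs a lambda and an explicit list()
loud = list(map(str.upper, filter(lambda w: len(w) > 3, words)))

# comprehension style: one expression, no lambda noise
loud_comp = [w.upper() for w in words if len(w) > 3]

print(loud == loud_comp)  # same result either way
```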


As someone who’s never really had to use JS for anything, man is it a messy language. I use Python mostly, which has its issues, but it at least has the capability of being pretty robust if you care.
I do wish that Java hadn’t been the zeitgeist and the Scheme-style language had been used instead…
There’s also only so much you can do to make Anya Taylor-Joy look bad, she’s just kinda got that energy.
She’s fallen so low that she’s turned into a cigarette ad!


NLTK mostly does Chomsky-style parse trees, tokenization, and part-of-speech tagging. Under the hood it’s a bunch of hash tables and optimized algorithms, with a simple pre-trained model (VADER) that can do rudimentary sentiment analysis.
I can see how just jamming text into a pipeline is the simpler solution though, since with NLTK you have to build the extraction model by hand.
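The “hash tables” point is easy to illustrate: VADER is essentially a big lexicon lookup plus adjustment rules. Here’s a toy reduction of the lookup part only (the word scores are invented; real VADER also handles negation, intensifiers, punctuation, etc.):

```python
# Toy lexicon-based sentiment scorer in the spirit of VADER's
# dictionary lookups; valence numbers here are invented.
LEXICON = {"great": 3.0, "good": 1.9, "meh": -0.5, "awful": -2.9}

def sentiment(text: str) -> float:
    """Average valence of known words; 0.0 means neutral/unknown."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment("the movie was great"))   # positive
print(sentiment("awful pacing meh plot")) # negative
```

Dictionary lookups like this are why it runs fast on old hardware: no matrix multiplies, just hashing.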


That’s a fair use case. I just didn’t have the patience for it lol. I’ve always been better at just failing repeatedly until I succeed, which is basically what GPTs do, but instead of me getting the benefit of that process, they get it and then immediately forget.
Might try the rubber-ducking thing at some point, but most of my code is in a field where there aren’t really many good examples, and the ones that do exist tend to be awful, so it’s pure hallucination. I’ve seen some stuff colleagues have vibe coded and it gives me the ick/bad code smells.


I used to use NLTK back in high school and college for sentiment analysis, and it was usually decently accurate at scale (like checking the average sentiment of a tag) and ran surprisingly fast even on an old laptop. Are the open models as performant as NLTK?
It’s also more human since it’s 2 tokens for an LLM, but way easier to actually type for a person.