certainly, more weights mean more general information, which is pretty useful if you’re using a model as a sort of secondary search engine, but models can be very performant on certain benchmarks while containing little general data
this isn’t really by design; up until now (and it’s still the case) we simply haven’t known how to create an LLM that can generate coherent text without absorbing a huge portion of the training material
i’ve tried several models based on facebook’s llama LLMs, and i can say that the 13B and definitely the 30B versions are comparable to chatGPT in terms of quality (maybe not in terms of the amount of information they have access to, but definitely in other regards)
recent advancements in LLMs that are small enough to be accessible to regular ppl (alpaca/llama), but also performant enough to sometimes even outperform chatGPT, are more interesting to me personally
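for reference, running one of these quantized llama models locally can look roughly like the sketch below. i’m using the llama-cpp-python bindings just as one possible route, and the model path is only a placeholder for whatever quantized weights you actually have:

```python
# minimal local-inference sketch using the llama-cpp-python bindings
# (the model path is a placeholder, not a real file)
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-13b-q4_0.bin", n_ctx=2048)

prompt = "Explain what a transformer is in one paragraph."
result = llm(prompt, max_tokens=128)

# the bindings return an OpenAI-style completion dict
print(result["choices"][0]["text"])
```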
while the size of this model is certainly super impressive, even if the weights were released, it would require like half a terabyte of VRAM at int4 MINIMUM, so you’d need like ~100k usd just to run inference on this thing at decent accuracy :(
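to spell out the napkin math (the parameter count and GPU price here are my own rough assumptions, not official figures):

```python
# back-of-the-envelope estimate; parameter count and GPU price are assumptions
params = 1.0e12              # assume roughly a trillion parameters
bytes_per_param = 0.5        # int4 = 4 bits = 0.5 bytes per weight

weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")            # ~500 GB

gpu_vram_gb = 80             # e.g. an 80 GB datacenter card
gpus_needed = weights_gb / gpu_vram_gb                   # before activations / KV cache
price_per_gpu_usd = 15_000   # very rough street price

print(f"cards: ~{gpus_needed:.1f}, cost: ~${gpus_needed * price_per_gpu_usd:,.0f}")
```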
i’ve seen a lot of footage like that, where conductors spend like 30 seconds forcefully packing people into an already full train carriage just so the doors can close, and i’ve always found it so weird
like, if your metro is that busy, i’m assuming you have trains running every 60-90 seconds, which is roughly the minimum safe headway between trains, so they’ve literally just spent about half the time it would take to wait for the next train trying to shove in one more passenger, delaying not only everyone on that train but also everyone on subsequent trains
where is the logic?
that’s not very meaningful i’m afraid
say russia had turned out to be like the scandinavian countries after the fall of the ussr; russians would then have been happy about the fall of the ussr, even though scandinavian countries rely on worker exploitation in third-world countries. this only indicates that for a lot of people the quality of life took a plunge after the fall of the ussr
living in russia sucks, welcome to the real world. people are so tired and exhausted that they can’t give a shit; they just want to live a decent life without caring about the consequences of their lifestyle, which is very characteristic of capitalism now that i think about it
well, the general goal is to have a community without spam, wildly off-topic content, or a toxic environment, and it’s the moderator’s role to steer the community toward that state
mods usually look through recent posts and comments and remove those that count as spam or violate the rules of the website (mostly the former), but since reporting functionality was recently introduced to lemmy, a lot of it is just reviewing reported posts/comments and deciding whether or not to remove them. in the beginning one also gets to shape the community by creating rules, setting the general theme, etc.
the effort required varies depending on your commitment and the size/activity level of the community and website: in the beginning this instance was like 90% spam and users were overwhelmed by it, but now there are only a couple of spam posts a day, rarely more
the most infuriating aspect of electron is that multi-billion (and even multi-trillion) dollar corporations think it’s okay to release desktop clients built on electron. like, wtf, each one of them could hire ten teams to build ten completely new desktop toolkits, let alone just design their applications to use native system tools 🤷‍♀️🤦‍♀️
not from a super hot place, but we do get up to like 35-40 degrees in the summer at times
the best way to cool down is to open the windows at night, close them and the blinds during the day, and submerge yourself in water for additional cooling :)