Let’s grow
Well, they have to pay for their crimes, and unlike the rest of us, they can afford to. But it’s always worth doing something to set a precedent and go after them.
I only read the name of the post and was convinced for too long that they ate stray cats…
There’s a lot you’re saying that I agree with, but it’s undeniable that sending weapons to Israel is not solving this problem; it’s directly causing the problem. Biden is incredibly ineffective at solving this and is not holding any real red line. He needs to hold Israel accountable for its actions. We have sent billions and billions of dollars of weapons to Israel, and we likely aren’t stopping anytime soon even if Kamala is elected. We need to hold their feet to the fire and show them this is unacceptable.
What is this from?
20 years? Jamie pull that up.
Wtf is wrong with you?
Quite unnerving
I just watched Tim Pool say that traitors should get the death penalty yesterday…
More would be great. What sort of arguments did you make? Were you discussing the science?
Forgive me for being suspicious of your comment. There is a huge anti-vegan bias in society, and many people argue against veganism in bad faith. Can you provide any examples of the mods doing this?
Have you looked at any science on this issue? Or are you just using cOmMoN SeNsE to decide who should have their brain removed?
Doh! Didn’t mean to link to a specific time in the video.
Is this edited? Gowron is so clear looking.
Definitely. The thing you might want to consider as well is what you are using it for. Is it professional? Not reliable enough. Is it to try to understand things a bit better? Well, it’s hard to say if it’s reliable enough, but it’s heavily biased just as any source might be, so you have to take that into account.
I don’t have the experience to tell you how to suss out its biases. Sometimes you can push it in one direction or another with your wording, or with follow-up questions. Hallucinations are a thing, but they’re not the only concern: cherrypicking, lack of expertise, the bias of the company behind the llm, what data the llm was trained on, etc.
I have a hard time understanding what a good way to double-check your llm is. I think this is a skill we are currently learning, the same way we’ve been learning how to suss out the bias in a headline or an article based on its author, publication, platform, etc. But for llms, it feels fuzzier right now. It may also be less reliable on certain issues than on others. Anyways, that’s my ramble on the issue. Wish I had a better answer; if only I could ask someone smarter than me.
Oh, here’s gpt4o’s take.
When considering the accuracy and biases of large language models (LLMs) like GPT, there are several key factors to keep in mind:
In summary, while LLMs can provide valuable assistance in generating text and answering queries, their accuracy is not guaranteed, and their outputs may reflect biases present in their training data. Users should use them as tools to aid in tasks, but not as infallible sources of truth. It is essential to apply critical thinking and, when necessary, consult additional reliable sources to verify information.
I think they were joking, but that’s a good clarification.