booty [he/him]

  • 3 Posts
  • 470 Comments
Joined 4 years ago
Cake day: August 11th, 2020

  • It’s a very easy game when you know what you’re doing, but yeah, if you have no items and don’t know how to get started making them, the monsters that spawn in dark areas will probably fuck you up lol

    They also made the game significantly harder in 2016, pretty much requiring you not only to figure out how to craft items, but also how to get a steady supply of food. Again, still an easy game, just one you need a bit of knowledge to get started in.

    Ask your kid to teach you how to make a sword (from the beginning; he might just show you the crafting recipe, and that wouldn’t be very helpful, since you’ll want to know how to gather the materials and make the crafting table too) and how to farm and cook potatoes or bread. That’s pretty much the baseline required to play the game; at that point it really opens up and you can start exploring and teaching yourself.

    (If your kid is really young and doesn’t know how to do all that either, then let me know and I’ll give you the minimal-spoiler starter tutorial :D)


  • My first instinct was A, at the base of the neck. But now that I think about it, I agree with this more. It could be argued that the joint is where the neck really begins, and that the narrow part beneath it is still part of the body. And I think our weevil friend would look better (and more professional!) wearing his tie there.


  • Have you ever used an LLM?

    Here’s a screenshot I took after spending literally 10 minutes with ChatGPT as it very confidently stated incorrect answers to a simple question over and over (from this thread). Not only is it completely incapable of coming up with the correct answer to a very simple question, it is completely incapable of responding coherently to the fact that none of its answers are correct. Humans don’t behave this way. Nothing that understands what is being said would respond this way. It responds this way because it has no understanding of the meaning of anything that is being said. It is responding based on the statistical likelihood of words and phrases following one another, like a Markov chain but slightly more advanced.
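
    To make the Markov chain comparison concrete, here’s a toy word-level Markov chain in Python. This is purely an illustrative sketch (real LLMs use learned neural networks over tokens, not lookup tables), but the core idea of “pick the next word from a distribution of what tends to follow” is the same:

    ```python
    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words observed to follow it."""
        chain = defaultdict(list)
        words = text.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=10):
        """Walk the chain, picking each next word by observed frequency."""
        word = start
        output = [word]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)  # duplicates make this frequency-weighted
            output.append(word)
        return " ".join(output)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    print(generate(build_chain(corpus), "the"))
    ```

    The output can look locally plausible while meaning nothing, which is the point of the comparison.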



  • I don’t see how it could be measured except from looking at inputs & outputs.

    Okay, then consider that when you input something into an LLM and regenerate the response a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning. That proves it does not have any functional understanding of anything; it simply outputs random noise that sometimes looks similar to what one would output if they did understand the content in question.
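
    To sketch why regenerating gives different (even contradictory) outputs: decoding samples from a next-token probability distribution at every step, so each regeneration is an independent random draw. The distribution below is completely made up, just to illustrate the mechanism:

    ```python
    import random

    # Hypothetical next-token distribution for some prompt; the numbers
    # are invented for illustration, not taken from any real model.
    next_token_probs = {"yes": 0.40, "no": 0.35, "maybe": 0.25}

    def sample_response(probs):
        """Draw one token from the distribution, as LLM decoding does
        at each step whenever the sampling temperature is above zero."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]

    # The exact same input can yield opposite answers across regenerations,
    # because each run is an independent sample.
    for i in range(5):
        print(f"regeneration {i + 1}: {sample_response(next_token_probs)}")
    ```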