• footfaults@lemmygrad.ml
    · 6 hours ago

I think it’s far more telling how you conflate automation with Large Language Models (colloquially called AI, even though they are not).

Many of the technologies you cite as examples and call AI (OCR, computer vision) existed long before LLMs, so I don’t understand why you label them that way.

I find the protein folding example especially perplexing, since protein folding simulation existed far, far before LLMs and machine learning; it is ahistorical to claim those as AI innovations.

    I don’t agree with your AI boosterism, but I think what is more perplexing is how misinformed it is.

    • pcalau12i@lemmygrad.ml
      · edited · 3 hours ago

      They are all artificial neural networks, which is what “AI” typically means… bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.

      The “intelligence” part in artificial intelligence comes from the fact that these algorithms are very loosely based on what makes biological organisms intelligent: their brains. Artificial neural networks (as they are more accurately called) use large numbers of virtual neurons connected with different strengths, sometimes called their “weights”, and the total number of these connections is referred to as the “parameter” count of the model.
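      As a quick illustration of what “parameter count” means, here is a rough sketch (the layer sizes are arbitrary, picked only for the example):

      ```python
      # Parameter count of a tiny fully-connected network with layer sizes 784 -> 128 -> 10.
      layers = [784, 128, 10]
      params = sum(n_in * n_out + n_out          # weights plus one bias per output neuron
                   for n_in, n_out in zip(layers, layers[1:]))
      print(params)  # 101,770 connections ("parameters")
      ```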

      You do a bit of calculus and you can figure out how to use training data to adjust the sometimes billions of parameters in an ANN so that it spits out more accurate answers on that data. You repeat this process many times with a lot of data, and eventually the ANN will fine-tune itself to find patterns in the dataset and start spitting out better and better answers.
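      To make that concrete, here is a minimal sketch of the weight-adjustment loop, with made-up toy data and a single linear “neuron” instead of billions of parameters:

      ```python
      import numpy as np

      # Toy data: inputs x and target outputs y (values made up for illustration).
      x = np.array([0.0, 1.0, 2.0, 3.0])
      y = np.array([1.0, 3.0, 5.0, 7.0])  # underlying pattern: y = 2x + 1

      w, b = 0.0, 0.0   # the "weights" (parameters), starting from scratch
      lr = 0.01         # learning rate: how big each adjustment step is

      for step in range(10000):
          pred = w * x + b                        # the model's current guesses
          grad_w = 2 * np.mean((pred - y) * x)    # derivative of mean squared error w.r.t. w
          grad_b = 2 * np.mean(pred - y)          # derivative w.r.t. b
          w -= lr * grad_w                        # nudge each parameter downhill
          b -= lr * grad_b

      print(w, b)  # converges toward w ≈ 2, b ≈ 1
      ```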

      The benefit of ANNs is precisely that they effectively train themselves. Imagine writing a bunch of if/else statements to convert the text in an image into written text. It would be impossible, because there are quadrillions of ways an image can look while containing the same text: taken at a different distance, in a different writing style, under different lighting conditions, and so on. You would be coding forever and would never solve it. But feed an ANN millions of images of written text under all those different conditions, paired with the text itself, do a bit of calculus with a lot of computational power, and what you get out is a set of fine-tuned weights for an ANN that can identify the text in a new image you pass in.

      Technology is fascinating but sadly you seem to have no interest in it and I doubt you will even read this. I only write this for others who may care.

      Also, yes, computer vision is also based on ANNs. I have my own AI server with a couple of GPUs, and one of the tasks I use it for is optical character recognition, which requires loading the AI model onto the GPU for it to run quickly; otherwise it is rather slow (I am using paddleocr). If the image I am doing OCR on is in a different language, I can also pass the result through Qwen to translate it.

      If you ever set up a security system in your home, it will often use AI for object recognition. It’s very inefficient to record footage all the time, but you can tell many modern security systems to record only when they see a moving person or a moving car. Yes, this is done with AI; you can even buy an “AI hat” for the Raspberry Pi that was developed specifically for computer vision and object identification.
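      For a sense of what that OCR step looks like, here is a minimal paddleocr call. The exact API differs between paddleocr releases (this follows the long-documented 2.x interface), “photo.png” is just a placeholder filename, and whether it actually runs on the GPU depends on having the GPU build of PaddlePaddle installed:

      ```python
      from paddleocr import PaddleOCR

      # Load detection + recognition models once (weights download on first run).
      ocr = PaddleOCR(use_angle_cls=True, lang="en")

      # Run OCR on a placeholder image path.
      result = ocr.ocr("photo.png", cls=True)
      for box, (text, confidence) in result[0]:
          print(text, confidence)
      ```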

      Literally, if you ever take a course in AI, one of the first things you learn is OCR, because it’s one of the earliest examples of AI being useful. There is a famous dataset, with its own Wikipedia page, called MNIST, because so many people learning how AI works start by building a simple network that does OCR on handwritten digits, trained on the MNIST dataset.
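      A minimal version of that classic exercise, sketched with PyTorch (one pass over the data; the network shape and hyperparameters are arbitrary choices for the example):

      ```python
      import torch
      from torch import nn
      from torchvision import datasets, transforms

      # MNIST: 60k images of handwritten digits 0-9, each 28x28 pixels.
      train = datasets.MNIST(root="data", train=True, download=True,
                             transform=transforms.ToTensor())
      loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=True)

      # A small fully-connected network: 784 pixels in, 10 digit scores out.
      model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128),
                            nn.ReLU(), nn.Linear(128, 10))
      opt = torch.optim.SGD(model.parameters(), lr=0.1)
      loss_fn = nn.CrossEntropyLoss()

      for images, labels in loader:      # one pass over the training data
          opt.zero_grad()
          loss = loss_fn(model(images), labels)
          loss.backward()                # backpropagation: the "bit of calculus"
          opt.step()                     # adjust the weights
      ```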

      I’m also surprised your hatred is directed at large language models specifically, when people who hate AI usually despise text-to-image models. You do know that “AI art” generators are not LLMs, yes? I find it odd that someone would despise LLMs, which actually have a lot of utility, like language translation and summarization, over TTIMs, which don’t have much utility at all besides spitting out (sometimes…) pretty pictures. Although I assume you don’t even know the difference, since you seem to not know much about this subject, and I doubt you will read this far anyway.

      • footfaults@lemmygrad.ml
        · edited · 2 hours ago

        bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.

        You are toxic, as well as incredibly arrogant. A true example of the Dunning-Kruger effect. If you want to have a tantrum then by all means do so, but don’t pretend you are on some sort of high ground when you make your pronouncements.

        In every conversation you have had with me, you project opinions onto me that I do not hold (Marxism vs. anarchism, calling me a Luddite, etc.) and construct strawman arguments that I did not make.

        Do some self-crit.