• hark@lemmy.world (+37/−1) · 19 hours ago

    Something to keep in mind when people are suggesting AI be used to replace teachers.

    • Petter1@lemm.ee (+11/−4) · 17 hours ago

      To be fair, some human teachers are way worse with abusive behaviour…

      I still agree that you shouldn't replace teachers with LLMs, but teachers should teach students how to use them, and what they can and can't do, in schools.

      Imagine if internet was still banned from schools…

  • AFK BRB Chocolate@lemmy.world (+54/−2) · 24 hours ago

    Isn’t this one of the LLMs that was partially trained on Reddit data? LLMs are inherently a model of a conversation or question/response based on their training data. That response looks very much like what I saw regularly on Reddit when I was there. This seems unsurprising.

  • Vibi@lemmy.blahaj.zone (+135/−1) · 1 day ago

    It could be that Gemini was unsettled by the user’s research about elder abuse, or simply tired of doing its homework.

    That’s… not how these work. Even if they were capable of feeling unsettled, that’s kind of a huge leap from a true or false question.

    • ayyy@sh.itjust.works (+39/−1) · 24 hours ago

      Wow whoever wrote that is weapons-grade stupid. I have no more hope for humanity.

      • Petter1@lemm.ee (+2/−7) · 17 hours ago

        Well, that is mean… How should they know without learning first? Not knowing ≠ stupid.

        • ayyy@sh.itjust.works (+20) · 17 hours ago

          No, projecting emotions onto a machine is inherently stupid. It’s in the same category as people reading feelings from crystals (because it’s literally the same thing).

          • Petter1@lemm.ee (+2/−4) · 17 hours ago

            It's still something you have to learn. Your parents (or whoever) teaching you stupid stuff doesn't make you stupid; believing BS and thinking it's true does.

            For me, stupid means you need a lot of information and a lot of time to understand something, whereas smart means you understand things quickly from little information.

            Maybe we have just different definitions of stupid…

            • ayyy@sh.itjust.works (+10) · 17 hours ago

              Well, in this case, if you have access to a computer for long enough to become a journalist who writes about LLMs, you have enough time to read a three-to-five-paragraph description of how they work. Hell, you could even ask an LLM and get a reasonable answer.

              • Petter1@lemm.ee (+5/−2) · 16 hours ago

                Ohh that was from the article, not a person who commented? Well that makes all the difference 😂

  • zephorah@lemm.ee (+66/−1) · 1 day ago

    Fits a predictable pattern once you realize AI absorbed Reddit.

  • werefreeatlast@lemmy.world (+14/−6) · 18 hours ago

    2 years later… The all-new MCU Superman meets the Wolverine and Deadpool all-AI animated feature!..

    Why, hello Mr. Wolverine 😁, my name is Man and I am Super according to 98% of the other human population. Oh, hello Mister Super, last name Man! Yes, we are Wolverine and Deceased Pool. We are from America and belong to a non-profit called the X-People, a group where both men and women who have been affected by DNA mutations of an extraordinary kind gather to console one another and to defend human beings by taking advantage of the special mutations of its members. Yes, it's quite interesting. And you? Oh, I am actually called Kal-El and I am a migrant from an expired planet that goes by the name you assigned the heavy noble gas: krypton. Anyway, because the sun is bright and yellow, I can fly, I'm very strong, and I can burn things with my eyes. I think I am similar to those of you in the X-People club! Good to meet you! Likewise!

  • cy_narrator@discuss.tchncs.de (+9) · 22 hours ago

    I remember asking Copilot about a gore video and getting a link to it. But I wouldn't expect it to give answers like this unsolicited.

  • Gointhefridge@lemm.ee (+51/−15) · 1 day ago

    I’m still really struggling to see an actual formidable use case for AI outside of computation and aiding in scientific research. Stop being lazy and write stuff. Why are we trying to give up everything that makes us human by offloading it to a machine?

    • candybrie@lemmy.world (+12) · 21 hours ago

      Why are we trying to give up everything that makes us human by offloading it to a machine

      Because we don't enjoy actually doing it. No one who likes writing is asking ChatGPT to write for them. It's people who don't want to write but are required to for whatever reason. Humans will always try to come up with a way to not do the work they don't want to do but still get it done, even if the result isn't as good. Using tools like this is very human.

      • Gointhefridge@lemm.ee (+6/−1) · 10 hours ago

        I really don’t see any value in AI art. AI pictures look like slop, AI music sounds soulless, AI writing I guess can be fine but usually sounds weird.

        I just don’t see the value in AI because to me, every use case scenario for anything artistic is justified with a capitalist excuse.

        I’ll give you the organizational ones, that’s understandable and not a bad reason. I suppose I have trouble getting behind taking the soul out of creating something just to slap it on an ad or product to sell something.

        • treefrog@lemm.ee (+2) · edited · 9 hours ago

          Commodification of art is soulless. It doesn't matter if a person makes the commodity or a machine. It's meant to be aesthetically pleasing, or to elicit an emotion, in order to sell something. It's not really art any more than what I'm writing here is art.

          Art is about playful self-expression and often sharing that expression with those who appreciate it.

          And AI creative writing is garbage too. I had Gemini write some poetry for me yesterday out of curiosity, and, as someone who writes poetry, I'll just say it was formulaic and predictable. It has no understanding of the medium, its history, or why things are done in certain ways, and no ability to play with the many forms poetry can take. It's a good-enough replica for people who want to write a shitty rhyming poem, like we all learned to do as children, and it has a huge vocabulary to make rhymes with. But it was still uninspired drivel.

          For creative writing, it’s a tool. Not a writer. And for technical writing, well, it’s often wrong about things so… still a tool.

    • GreyBeard@lemmy.one (+11) · 23 hours ago

      Its uses are way more subtle than the hype, but even LLMs can have uses, occasionally. Specifically, I use one to categorize support tickets. It just has to pick from a list of probable categories. Nice and simple for it. Something humans can do just as easily, but when you have a history of 2 million tickets that need to be categorized, suddenly the LLM can do it when it would drive a human insane. I’m sure there are lots of little tasks like that. Nothing revolutionary, but still valuable.
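The pick-from-a-fixed-list pattern described above can be sketched in a few lines. The category names and the `call_llm` placeholder below are invented for illustration; swap in whatever chat-completion client you actually use.

```python
# Sketch of constrained LLM ticket categorization: the model is asked to
# pick from a fixed list, and anything off-list falls back to "other".
# `call_llm` stands in for a real chat-completion API call.

CATEGORIES = ["billing", "login", "bug report", "feature request", "other"]

def build_prompt(ticket_text: str) -> str:
    options = ", ".join(CATEGORIES)
    return (
        f"Classify this support ticket into exactly one of: {options}.\n"
        f"Reply with the category name only.\n\nTicket: {ticket_text}"
    )

def categorize(ticket_text: str, call_llm) -> str:
    # Models often add stray casing or punctuation; normalize before matching.
    raw = call_llm(build_prompt(ticket_text)).strip().lower().rstrip(".!")
    return raw if raw in CATEGORIES else "other"

# Demo with a fake model so the sketch runs offline:
print(categorize("I was charged twice this month", lambda p: "Billing.\n"))  # billing
```

The off-list fallback matters at the scale described: over millions of tickets the model will occasionally answer with something not on the list, and the pipeline should degrade gracefully rather than crash.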

    • deegeese@sopuli.xyz (+36/−2) · 1 day ago

      AI summaries of larger bodies of text work pretty well so long as the source text itself is not slop.

      Predictive text entry is a handy time saver so long as a human stays in the driver’s seat.

      Neither of these justify current levels of hype.

      • kitnaht@lemmy.world (+8/−10) · edited · 1 day ago

        Go look at the models available on huggingface.

        There are applications in visual question answering, video-to-text, depth estimation, 3D reconstruction from a photo, object detection, visual classification, language-to-language translation, realistic text-to-speech, robotics reinforcement learning, and weather forecasting, and those are just the surface-level models.

        It absolutely justifies the current levels of hype, because the research being done now will put millions out of jobs and will be much cheaper than paying people to do the work.

        The people saying it’s hype are the same people who said the internet was a fad. Did we have a bubble of bullshit? Absolutely. But there is valid reason for the hype, and we will filter out the useless stuff eventually. It’s already changed entire industries practically overnight.

        • chrash0@lemmy.world (+12/−1) · 1 day ago

          the reactionary opinions are almost hilarious. they’re like “ha this AI is so dumb it can’t even do complex systems analysis! what a waste of time” when 5 years ago text generation was laughably unusable and AI generated images were all dog noses and birds.

        • Mbourgon everywhere@lemmy.world (+5) · 1 day ago

          I think he's talking about the LLMs, which… yeah. AI and LLMs are lumped together (which makes sense, but the classification makes a huge difference here).

          • kitnaht@lemmy.world (+6/−1) · edited · 1 day ago

            Even with LLMs in the context of coding: I'm no programmer, and I have memory issues, which means I can't keep the web of information in my head long enough to debug the stuff I attempt to write.

            With AI assistants, I've been able to create multiple microcontroller projects that I wouldn't have even started otherwise. They are amazing assistive technologies. Many times they're even better than the language documentation itself, because they can give an example of something that almost works. So yes, even LLMs deserve the amount of hype they've been given. I've made a whole game-server management back-end for ARK servers with the help of an LLM (qwen-coder 14b).

            I couldn’t have done it otherwise; or I would have had to pay someone $60k; which I don’t have, and which means the software never would have existed.

            I’ve even moved onto modifying some open source Android apps for a specialized camera application. Compared to a normal programmer, sure - maybe it’s not as good. But having it next to me as an inexperienced nobody allows me to write programs I wouldn’t have otherwise been able to, or that would have been too daunting of a task.

            • rumba@lemmy.zip (+7) · 1 day ago

              Hell, even if you are a programmer and have no memory issues, it’s a hell of a lot faster to have it boilerplate something for you for a given engine with certain features than to sit down and write it from scratch or try to find a boilerplate. Stack exchange usage has been going down regularly as LLMs are filling the gap.

              It doesn’t get you to third base or anything. But it does get you started and well-structured within the first couple minutes of code for any reasonably simple task.

              Last year I worked on a synchronized Halloween projector project. I had the first week of work saved into my repo, but as Halloween approached, I wrote a lot of it on the server. After Halloween, I failed to commit it back and inadvertently wiped the box.

              This year, after realizing my code was gone, I decided to have Copilot give me a head start. I started over from scratch, asked it in detail for exactly what I had last year, and it was all fully functional again in about four hours. It was clean, functional, well-documented code. I had no problem extending it with my own work, and I picked up like I hadn't lost anything.

              • Noxy@yiffit.net (+1) · 20 hours ago

                to be fair, you had already done the thing and learned from that process. you should give yourself more credit!

    • superkret@feddit.org (+16) · 1 day ago

      It’s good for speech to text, translation and a starting point for a “tip-of-my-tongue” search where the search term is what you’re actually missing.

      • Terrasque@infosec.pub (+6) · 1 day ago

        With ChatGPT's new web search, it's pretty good for more specialized searches too. And it links to its sources, so you can check for yourself.

        It's been able to answer some very specific niche questions accurately and give links to relevant information.

    • CubitOom@infosec.pub (+7) · 1 day ago

      It can be really good for text-to-speech and speech-to-text applications for disabled people or people with learning disabilities.

      However it gets really funny and weird when it tries to read advanced mathematics formulas.

      I have also heard decent arguments for translation although in most cases it would still be better to learn the language or use a professional translator.

    • bloup@lemmy.sdf.org (+5) · edited · 1 day ago

      I don't use it for writing directly, but I do like to use it for worldbuilding. Because I can think of a general concept that could be explored in so many different ways, it's nice to be able to just give it to an LLM and ask it to consider all the ways such an idea could play out. It also kind of doubles as a test, because I usually have some sort of idea of what I'd like, and if it comes up with something similar on its own, that makes me feel the idea would easily resonate with people. Additionally, a lot of the time it will come up with things I hadn't considered that are totally worth exploring. But I do agree that the only, as you say, "formidable" use case for this stuff at the moment is as a research assistant for serious intellectual pursuits.

    • five82@lemmy.world (+9/−5) · 1 day ago

      The relentless pursuit of capitalism and reduced labor costs. I still don’t think anyone knows how effective it’s going to be at this point. But companies are investing billions to find out.

    • chakan2@lemmy.world (+4/−3) · 1 day ago

      I’m still really struggling to see an actual formidable use case

      It’s an excellent replacement for middle management blather. Content that has no backing in data or science but needs to sound important.

  • Noxy@yiffit.net (+6/−1) · 20 hours ago

    on the other hand, this user is writing or preparing something about elder abuse. I really hope this isn’t a lawyer or social worker…

  • meyotch@slrpnk.net (+23/−2) · 1 day ago

    I suspect it may be due to a similar habit I have when chatting with a corporate AI. I will intentionally salt my inputs with random profanity or non-sequitur info, partly for lulz, but also to poison those pieces of shit's training data.

    • catloaf@lemm.ee (+20/−2) · 1 day ago

      I don’t think they add user input to their training data like that.

      • kitnaht@lemmy.world (+19) · 1 day ago

        They don’t. The models are trained on sanitized data, and don’t permanently “learn”. They have a large context window to pull from (reaching 200k ‘tokens’ in some instances) but lots of people misunderstand how this stuff works on a fundamental level.
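The "large context window, no permanent learning" point above can be sketched: chat clients typically just resend recent history, trimmed to a token budget, on every request. The whitespace word count below is a crude stand-in for a real tokenizer, and the budget number is arbitrary.

```python
# Sketch of why chat models appear to "remember": the client resends recent
# history trimmed to a fixed context window. Nothing is learned permanently;
# once a message falls out of the window, the model has no trace of it.

def count_tokens(message: str) -> int:
    return len(message.split())  # real tokenizers differ; this is illustrative

def trim_history(messages: list[str], max_tokens: int) -> list[str]:
    kept, total = [], 0
    for msg in reversed(messages):  # keep the most recent messages first
        total += count_tokens(msg)
        if total > max_tokens:
            break
        kept.append(msg)
    return list(reversed(kept))

history = ["hello there", "hi how can I help", "tell me about context windows"]
print(trim_history(history, 10))  # oldest message dropped to fit the budget
```

A 200k-token window just means the budget in this loop is very large; the mechanism, and the forgetting once it overflows, is the same.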

  • fmstrat@lemmy.nowsci.com (+9/−1) · 24 hours ago

    Will this happen with AI models? And what safeguards do we have against AI that goes rogue like this?

    Yes. And none that are good enough, apparently.

  • Ceedoestrees@lemmy.world (+14/−2) · 1 day ago

    The war with AI didn't start with a gunshot, a bomb, or a blow; it started with a Reddit comment.

      • reksas@sopuli.xyz (+1/−1) · 23 hours ago

        It doesn't think and it doesn't use logic. All it does is output data based on its training data. It isn't artificial intelligence.
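The "outputs data based on its training data" idea is literally how a toy Markov chain works: it can only ever emit word transitions that appeared in its training text, with no reasoning involved. This sketch (training text invented) makes that concrete; LLMs are vastly more sophisticated, but whether that statistical core amounts to "intelligence" is exactly the dispute here.

```python
# Toy Markov chain text generator: every emitted word pair was seen in the
# training text, so the output is purely a recombination of training data.
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain: dict, start: str, length: int, seed: int = 0) -> str:
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        options = chain.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

chain = build_chain("the model predicts the next word the model repeats")
print(generate(chain, "the", 5))
```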

  • asbestos@lemmy.world (+4/−4) · 22 hours ago

    If this happened to me I’d probably post it everywhere and proceed to kill myself just to cause a PR hell

    • Darth_Mew@lemmy.world (+2/−1) · edited · 15 hours ago

      the hero we deserve, but not the one we need

      This is actually fucking hilarious, I can't stop cackling.