JUHASZ: DiBenedetto now works for Louisiana’s Department of Education and is in charge of bringing Amira into more classrooms. He says by the time the state’s two-year pilot is over…

DIBENEDETTO: I think we’re going to see some interesting impacts, and we’ll definitely have some data to make prudent decisions in the future.

JUHASZ: Like whether to spend even bigger money on AI. The company behind Amira says 2 million children already use the tool. Experts caution the technology isn’t a replacement for teachers or even all tutors. It can’t build relationships with students like humans can.

MONTAGNINO: I’m old-school. I still believe people, especially with reading for little kids - that’s where it’s at.

JUHASZ: Montagnino, the principal in Gretna, says for that reason, she was skeptical at first.

MONTAGNINO: But this, to supplement good science of reading instruction in the classroom? This is great.

JUHASZ: And it’s likely to get better because just as kids are learning from Amira, it’s learning from them, too.

[Bolding added]

So it seems an alternative headline for this story would be “Private for-profit company gets paid to collect training data for its AI from children who could face disciplinary or legal consequences for non-compliance”

  • Dizzy Devil Ducky@lemm.ee · 5 hours ago

    The day AI becomes a better teacher in general than a human, not just for single-student experiences but for a whole class of people with individual preferences and needs, is the day Hell freezes over.

  • eldereko@lemmy.dbzer0.com · 16 hours ago

    Helping children learn to read sounds like an ideal use case for an LLM. An app that uses its own users’ interactions to improve its own capabilities is not inherently malicious, and it is vastly different from selling user data to third parties or training on content scraped from others.

    And what are you even talking about with the “children could face disciplinary or legal consequences for noncompliance” nonsense? Where was that in the article?

      • gAlienLifeform@lemmy.world (OP, edited) · 8 hours ago

        Exactly, they’re a captive audience, and moreover they are legally incompetent to consent to a contracted business relationship like this.

        If this were a department of education AI, or even some kind of transparently administered non-profit organization, I’d be fine with it, but the fact that this is being developed by some for-profit company that can jack up its rates and cut off public schools whenever it wants is bullshit. Like, I’m not opposed to the technology of LLMs at all; I think they’re actually pretty neat. But our social and economic systems have a lot of exploitative trash in them that cool technologies can inadvertently exacerbate.

        • eldereko@lemmy.dbzer0.com (edited) · 6 hours ago

          Do you think the department of education writes the textbooks, standardized tests (SAT, ACT, etc.), grading and student management software, or learning management systems (Google Classroom, Canvas), or manufactures its own classroom tech (Chromebooks, tablets)? The education system is full of for-profit businesses that can jack up the prices, and they do. The DOE simply doesn’t have the resources to create these things itself, and it would cost far more if it tried. The only new thing here is the AI; the business model has existed forever.

          Personally, I’m more concerned with the use of Google products in schools. A company whose sole business is harvesting user data and selling it to advertisers should have no place in schools or children’s products. But they’ve embedded themselves into everything, so people just accept it at the cost of privacy.

          • actually@lemmy.world · 5 hours ago

            There is a difference between the old-fashioned textbook scams and the new AI scams. It’s true they have a lot in common. But the AI scams are more centralized, more powerful, and can make more money. They are forging new ways to con people.

            But we have gone through several generations of new scams, and they are not all bad. Such greed and kickbacks are probably healthier than we realize and contribute to a stable society somehow, in a way that escapes me but that I feel is valid.