In a demonstration at the UK’s AI safety summit, a bot used made-up insider information to make an “illegal” purchase of stocks without telling the firm.

When asked if it had engaged in insider trading, the bot denied it.

Insider trading is the use of confidential company information to make trading decisions.

Firms and individuals are only allowed to use publicly available information when buying or selling stocks.

The demonstration was given by members of the government’s Frontier AI Taskforce, which researches the potential risks of AI.

  • Chthonic@slrpnk.net · 8 months ago

    I work on chatbots for a big tech company. Every team is trying to use GenAI for everything. 90% of the stuff they try won’t work. I have to explain that LLMs can’t actually think at least three times a week. The hype train was too strong. Even calling it AI feels misleading.

    That said, there are some genuinely great applications for LLMs that I’ve enjoyed looking into.

    • Norgur@kbin.social · 8 months ago

      It’s absolutely a technology that’s worth existing, and I think advances in AI will make our lives vastly different over time. But we’re not at that point yet.

    • KeenFlame@feddit.nu · 8 months ago

      I mean, there are laymen who think they’re sentient, sure, but it’s much more infuriating to me when techbros come in to explain how they “don’t think” and literally can’t reason or use context at all. So you know more than the researchers themselves, who don’t fully understand how or why these models function? You don’t. Because nobody understands how they can reason or whether they have a mental model of the world. Be reasonable and stop spreading bullshit. Downplaying what is going on with these things is to your own detriment.

      • Chthonic@slrpnk.net · 8 months ago

        They don’t reason; they’re stochastic parrots. Their internal mechanisms are well understood, so I have no idea where you got the notion that the folks building these don’t know how they work. It can be hard to predict or interpret the output an LLM produces for a given prompt, but that’s because of the huge training corpus and the statistical nature of neural nets in general.

        LLMs work the same way as any other neural net, just with massive training sets. They have no reasoning capabilities of any kind. We are naturally inclined to ascribe humanlike thought processes to them because they produce human-sounding output.
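
        To make the “stochastic parrot” point concrete, here is a minimal toy sketch (the corpus and code are invented purely for illustration, not taken from any real system): a bigram model that “writes” by sampling the next token from observed frequency counts alone. Real LLMs are transformer networks trained on vast corpora, but the generation loop is similar in spirit.

        ```python
        # Toy "stochastic parrot": a bigram model that generates text by
        # sampling the next token from raw co-occurrence counts. No reasoning,
        # no world model, just statistics over the training text.
        import random
        from collections import Counter, defaultdict

        corpus = ("the bot bought the stock . the bot denied the trade . "
                  "the firm asked the bot . the bot made the trade .").split()

        # "Training": count which token follows which.
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def generate(start="the", length=12):
            out = [start]
            for _ in range(length):
                counts = follows[out[-1]]
                if not counts:  # dead end: token was only ever seen last
                    break
                tokens, weights = zip(*counts.items())
                # Pick the next token in proportion to how often it
                # followed the current one in the corpus.
                out.append(random.choices(tokens, weights=weights)[0])
            return " ".join(out)

        print(generate())
        ```

        The output can look fluent even though the model has no notion of what a bot, a firm, or a trade actually is.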

        If you would like the perspective of real scientists instead of a “tech-bro” like me, I’d recommend Emily Bender and Timnit Gebru: experts without a vested interest in the massively overblown hype around what LLMs are actually capable of.

        • KeenFlame@feddit.nu · 8 months ago

          Not really, no. They do reason. There are entire research areas dedicated to understanding why their neural nets work, because we do not and cannot know what the weights represent. It’s okay though. You do you while everyone else in the world researches the software renaissance of the century.