Lawyers for a man charged with murder in a triple homicide had sought to introduce cellphone video enhanced by machine-learning software.

A Washington state judge overseeing a triple murder case barred the use of video enhanced by artificial intelligence as evidence, in a ruling that experts said may be the first of its kind in a United States criminal court.

The ruling, signed Friday by King County Superior Court Judge Leroy McCullough and first reported by NBC News, described the technology as novel and said it relies on “opaque methods to represent what the AI model ‘thinks’ should be shown.”

“This Court finds that admission of this AI-enhanced evidence would lead to a confusion of the issues and a muddling of eyewitness testimony, and could lead to a time-consuming trial within a trial about the non-peer-reviewable process used by the AI model,” the judge wrote in the ruling, which was posted to the docket Monday.

The ruling comes as artificial intelligence and its uses — including the proliferation of deepfakes on social media and in political campaigns — quickly evolve, and as state and federal lawmakers grapple with the potential dangers posed by the technology.

  • paddirn@lemmy.world · 7 months ago

    Given AI models’ penchant for hallucinating and the black-box nature of it all, it seems like it shouldn’t be admissible. AI is fine for creative endeavors, but in arenas where facts matter, AI can’t be trusted.

      • paddirn@lemmy.world · 7 months ago

        Oh no, we’re still plowing ahead with this self-induced AI nightmare, this is just a speed bump…

        Friend Computer always knows what’s best for us. All praise the Computer and woe to the Mutant, Commie, Scum who would try to bring ruin upon our beneficent Computer overlord!

  • AnAustralianPhotographer@lemmy.world · 7 months ago

    Edit: my comment isn’t about exactly the same thing, but …

    Some new camera tech might be opening a can of worms about whether what’s pictured can be taken literally.

    There was a story late last year of a woman trying on a wedding dress in front of two mirrors and someone snapped a photo.

    When they looked at it, the reflection in the left mirror had a different pose from the reflection in the right mirror.

    And this cast doubt on what exactly was happening at the moment the shutter was pressed.

    It looks like the camera had one of those features that stitches together the best photo of the people pictured (e.g. not showing frames where people are blinking), and it treated the mirror images as different people.

    • TowardsTheFuture@lemmy.zip · 7 months ago

      I mean, yeah, but in that case everything that happened was real, and happened within probably a second of each other at most. Still definitely admissible. AI is a very different story.

      • Crozekiel@lemmy.zip · 7 months ago

        My info may be out of date, but last I knew you could not use any edited photographic evidence in court in the US, whether edited by AI or not.

  • Lanusensei87@lemmy.world · 7 months ago

    “Your Honor, as you can see from the footage, my client sprouted 7 fingers out of his hand; with such a condition, he couldn’t possibly operate a firearm…”

  • sylver_dragon@lemmy.world · 7 months ago

    This seems like one of those technologies which may be useful as an investigatory tool, but should ultimately not be admissible in court. For example, if law enforcement has a grainy video of a crime and uses AI enhancement to generate leads, that could be OK. It will still have issues with bias and false leads, though, so such usage should be tracked and data kept on it to show its usefulness and bias. But anything done to a video by AI should almost universally be considered suspect. AI is really good at making up plausible results which are complete bullshit.

    • Daxtron2@startrek.website · 7 months ago

      It was still AI back then too, it just hadn’t entered the zeitgeist so no one would’ve understood what it meant.

  • scoutFDT@lemm.ee · 7 months ago

    Does this ruling apply to all AI processed images or only ones for generative AI? What about stuff like DLSS that utilizes deep learning?

    • RubberDuck@lemmy.world · 7 months ago

      I would imagine that using an AI to create a video and voice of a defendant to “say” something from a transcript would be much more impressive than someone reading it.

  • Daxtron2@startrek.website · 7 months ago

    I’m generally against the whole anti-AI stuff these days but this makes perfect sense. There’s no way of verifying whether or not the content of an upscaled image is accurate.
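    The unverifiability comes from upscaling being an underdetermined inverse problem: many distinct high-resolution images reduce to the exact same low-resolution input, so any “enhancer” has to guess which one to show. A minimal sketch of this (using NumPy and a toy average-pooling downsampler for illustration, not any real camera or enhancement pipeline):

    ```python
    import numpy as np

    # Toy downsampler: average-pool a 2N x 2N image to N x N, with each
    # output pixel being the mean of a 2x2 block of input pixels.
    def downsample(img):
        h, w = img.shape
        return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    # Two clearly different "high-res" images: a checkerboard...
    a = np.array([[0, 2, 0, 2],
                  [2, 0, 2, 0],
                  [0, 2, 0, 2],
                  [2, 0, 2, 0]], dtype=float)
    # ...and a uniform gray image.
    b = np.ones((4, 4))

    # They collapse to the identical low-res image, so no upscaler could
    # tell from the low-res version alone which original was real.
    assert not np.array_equal(a, b)
    assert np.array_equal(downsample(a), downsample(b))
    ```

    Any model that maps the shared low-res image back to one particular high-res image is inventing detail, which is exactly what makes the result unverifiable as evidence.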