Context: I’m a second year medical student and currently residing in the deepest pit in the valley of the Dunning-Kruger graph, but am still constantly frustrated and infuriated with the push for introducing AI for quasi-self-diagnosis and loosening restrictions on inadequately educated providers like NP’s from the for-profit “schools”.

So, anyone else in a similar spot where you think you’re kinda dumb, but you know you’re still smarter than robots and people at the peak of the Dunning-Kruger graph in your field?

  • Snot Flickerman@lemmy.blahaj.zone
    10 months ago

    infuriated with the push for introducing AI for quasi-self-diagnosis and loosening restrictions on inadequately educated providers like NP’s from the for-profit “schools”.

    That’s because these decisions are not being made based on data, patient-safety considerations, and so on.

    It has everything to do with bloated hospital administrations who eat up all the money and spend pennies on actually helping patients. It’s usually not fucking doctors who are like “you know what would be cool, if I could replace half my nurses with an AI in my phone. I would save so much money!”

    It’s just one more pot to piss money away into while saying it improves something.

    CEOs and other business leaders regularly ignore data and evidence they don’t like. Look at the Return to Office fight: they don’t care about the data, they don’t care about employee satisfaction or retention, or savings in real estate. They are miserable and they want you to be miserable, too. They have more than enough money to weather all the bad decisions they make, because the worst parts of their bad decisions fall on the weakest and poorest in society, as usual. So they couldn’t give a damn what anyone thinks of their shitty ideas: their shitty ideas are going to happen, because they’ve got the purse-strings.

    These ideas don’t come from regular people. They come from an entire class of people who are completely disconnected from what any of these decisions do. They are only making decisions for a number on a spreadsheet, and sometimes they don’t actually give a shit if the number goes down as long as they get to continue feeling in control of other people. They literally don’t care that their decisions are dumb and will hurt people; they’re going to do them anyway.

    IMHO, this has fuck-nothing to do with Dunning-Kruger and everything to do with decisions made by rich, out-of-touch twats.

    • z00s@lemmy.world
      10 months ago

      If inaction were deemed socially acceptable as a management strategy, the world would be an incredibly better place.

      So many of the bullshit ideas come from managers who think they always have to be seen to be doing something, whereas sufficiently complex systems are often self-balancing and require little to no direct action.

      But to act on this they’d have to admit that managerial jobs are largely bullshit and unnecessary.

    • medgremlin@lemmy.sdf.orgOP
      10 months ago

      At least I can rest assured of the fact that AI will be next to useless in my intended field. Emergency medicine is an environment where you get a random constellation of symptoms and complaints with very little direction on which are related to the current illness, and which ones are not currently relevant. Also, in the time it would take to get all the info into the AI for a trauma/cardiac/code situation, the patient might be dead or rapidly heading in that direction.

      • Tremble@sh.itjust.works
        10 months ago

        Can’t AI aggregate the data on triage outcomes to prioritize larger-scale emergencies… it has to be useful somehow, I would think.

        • montar@lemmy.ml
          10 months ago

          It wouldn’t need to be AI, just some statistics and “Cardiac arrest? Priority 9. Broken arm? Priority 1” decision-making.

          I’m a tech wizard, not a healer; there are probably factors that make one cardiac arrest more critical than another, I just don’t know them.
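          The no-AI version described above could be sketched as a plain lookup table. To be clear, this is a hypothetical illustration: the complaint names and priority numbers are made up for the example and are not any real triage standard.

```python
# Hypothetical sketch of the rule-based idea above: no AI, just a fixed
# complaint-to-priority lookup. Complaints and numbers are illustrative only.
TRIAGE_PRIORITY = {
    "cardiac arrest": 9,
    "major trauma": 8,
    "chest pain": 6,
    "broken arm": 1,
}

def priority(complaint: str) -> int:
    """Return a triage priority for a chief complaint.

    Unknown complaints default to a mid-level priority so they get
    escalated to a human rather than silently deprioritized.
    """
    return TRIAGE_PRIORITY.get(complaint.strip().lower(), 5)
```

          The interesting design choice is the default: a lookup table can only rank what it already knows about, so anything it has never seen should fail toward human review, not toward the bottom of the queue.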

          • medgremlin@lemmy.sdf.orgOP
            10 months ago

            There are some things you look for that are difficult to describe to someone who hasn’t seen them before. That’s part of why experience is so valuable in Emergency Medicine, and it’s not uncommon to put your best nurses out in triage. People will do this kinda twitchy/wilting/loss-of-focus/change-in-pallor/change-in-posture thing right before they go down. I don’t have a good way to describe it; it might even be easier to draw, because it really is a body-language thing, and the general appearance of the patient informs your decision-making.

        • medgremlin@lemmy.sdf.orgOP
          10 months ago

          I have thought about trying to plan out a learning algorithm that could spit out suggestions for triage level and preliminary tests based on input data like vital signs, symptoms, and complaints… but I would never implement something like that as anything beyond a tool for the nurses at triage to use. There would always have to be an option to override the algorithm, because there are some aspects of patient presentation that are not easily quantifiable. I’d never be able to explain it in a way that one could input into a computer, but even with my limited experience, I can tell which patients are going to crump on me.
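          The suggestion-plus-override structure described above could be sketched roughly like this. Everything here is a hypothetical illustration: the vital-sign thresholds are made up for the example, not clinical cutoffs, and the point is the shape of the tool, where the algorithm only suggests and the nurse's judgment always wins.

```python
# Hypothetical sketch of a triage-suggestion tool: score a few vital signs
# into a suggested level, but let a human override take precedence.
# Thresholds are illustrative, not real clinical criteria.
from typing import Optional

def suggest_triage_level(heart_rate: int, systolic_bp: int, spo2: int) -> int:
    """Suggest a triage level from 1 (immediate) to 5 (non-urgent)."""
    level = 5
    if heart_rate > 120 or heart_rate < 45:
        level = min(level, 2)   # severe tachy/bradycardia
    if systolic_bp < 90:
        level = min(level, 1)   # hypotension: treat as immediate
    if spo2 < 92:
        level = min(level, 2)   # hypoxia
    return level

def final_triage_level(suggested: int, nurse_override: Optional[int]) -> int:
    """The nurse's override, when present, always takes precedence."""
    return nurse_override if nurse_override is not None else suggested
```

          The override lives in a separate function on purpose: the suggestion is just one input to the triage nurse, never the decision itself, which matches the "tool for the nurses, not a replacement" constraint above.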