For those who are reporting this, it’s a satire piece and is the correct sub
I’m gay
Could you be a little bit more specific? Do you have an example or two of people/situations you struggled to navigate? Bad intentions can mean a lot of things, and understanding how you respond and how you wish you were responding could both be really helpful in figuring out where the process is breaking down and what skills might be most useful.
Cheers for this, found two games that seem interesting that I'd never heard of before!
This isn’t just about GPT. Of note, one example from the article:
The AI assistant conducted a Breast Imaging Reporting and Data System (BI-RADS) assessment on each scan. Researchers knew beforehand which mammograms had cancer but set up the AI to provide an incorrect answer for a subset of the scans. When the AI provided an incorrect result, researchers found inexperienced and moderately experienced radiologists dropped their cancer-detecting accuracy from around 80% to about 22%. Very experienced radiologists’ accuracy dropped from nearly 80% to 45%.
In this case, researchers manually spoiled the results of a non-generative AI designed to highlight areas of interest. Being presented with incorrect information reduced the radiologists’ accuracy. This kind of bias is important to highlight and is of critical importance when we talk about when and how to ethically introduce any form of computerized assistance into healthcare.
ah yes, i forgot that this article was written specifically to address you and only you
I appreciate your warning, and would like to echo it, from a safety perspective.
I would also like to point out that we should be approaching this, as with every risk, from a harm reduction standpoint. A drug with impurities that could save your life or prevent serious harm is better than no drug and death. People need to be empowered to make the best decisions they can, given the available resources and education.
Great read. Really loved the way it was paced - while it jumped around a lot, it never felt too out of place and tied together nicely.
Venus rhymes with a piece of anatomy often found on men. Obviously they got it backwards
Alt text: the words “white text with black outline can be read on any color” are superimposed on a rainbow gradient, demonstrating the point
Been thinking about picking this one up
It’s FUCKING OBVIOUS
What is obvious to you is not always obvious to others. There are already countless examples of AI being used to sort through job applicants, decide who gets audited by child protective services, and determine who can get a visa for a country.
But it’s also more insidious than that, because the far-reaching implications of this bias often cannot be predicted. For example, excluding all gender data from training ended up making sexism worse in this real-world example of AI-assisted financial lending, and the same was true for Apple’s credit card. We even have full-blown articles showing how the removal of data can actually reinforce bias, indicating that it’s not just about what material is used to train the model, but also about what data is not used or is explicitly removed.
This is so much more complicated than “this is obvious,” and there are a lot of signs pointing towards the need for regulation around AI and ML models being used in places where it really matters, such as decision making, until we understand them a lot better.
Yeah, I think that’s a reasonable analogy. It’s important to note that medicine and taking care of your hair are not quite analogous, and the potential negative outcomes from bad health care can be orders of magnitude worse than a negative outcome at a salon… But yeah
big weird flex but okay vibes except actually not okay
Like most science press releases, I’m not holding my breath
Game changer for smart watches if this turns out to work and scale well
Gender affirming clinics are not doing any kind of surgery, period. A gender affirming care provider is providing care in the context of the unique health needs of a trans individual. Often this has to do with how hormones affect the body, but in many cases it’s just about being able to provide care in an affirming way. A provider with no training might suggest the patient do things which are not in alignment with their gender (such as advising them to stop or adjust hormones in response to lab values which can be managed in other ways) or use language which is offensive or harmful to the patient (misgendering and deadnaming, for example).

Ancillary services such as speech therapy might be offered at these locations, although generally speaking they tend to be pretty primary care focused. Knowing how often to test for various things, like pap smear frequency for trans men, or knowing how to treat pelvic pain in trans men and women, are the kinds of care that gender affirming clinics can offer. Understanding to keep a closer watch on hemoglobin levels for trans men, and advising that they donate blood or take medication if those levels get too high, is also something that a normal clinic might miss.

Honestly, there’s far too much to even mention in a single comment, which is why these places exist and why there is so much demand.
It’s quite rare for a medical device to have a warranty.
Thanks for writing and sharing this. This is a valuable perspective and brings up some excellent points around accessibility, how some environments and actions can be really draining, and how other people are often unaware of that drain due to their privilege on some axis. As an aside, I had never bothered to look into where the language around spoons came from, thank you for sharing the link! That’s super cool that it came from a disability writer, I’ll have to check out the essay it comes from.
I’ve personally found it’s best to just directly ask questions when people say things that are cruel, come from a place of contempt, or are otherwise trying to start conflict. “Are you saying x?” but in much clearer words is a great way to get people to reveal their true nature. There is no need to be charitable if you’ve asked and they don’t back off, or if they agree with whatever terrible sentiment you just asked about. Generally speaking, people who aren’t malicious will not only back off on what they’re saying but will put in extra work to clear up any confusion - if someone doesn’t bother to clear up perceived hate or negativity, it can be a more subtle signal that they aren’t acting in good faith.
If they do back off but only as a means to bait you (such as by refusing to elaborate or by distracting), they’ll invariably continue to push boundaries or make other masked statements. If you stick to that same strategy and find yourself asking for clarification three times while they keep pushing in the same direction, I’d say it’s safe to move on at that point.
As an aside - it’s usually much more effective to feel sad for them than it is to be angry or direct. But honestly, it’s better to simply not engage. Most of these folks are hurting in some way, and they’re looking to offload the emotional labor onto others, or to quickly feel good about themselves by putting others down. Engaging just reinforces the behavior and frankly wastes your time, because it’s not about the subject they’re talking about… it’s about managing their emotions.