I’m gay

  • 72 Posts
Joined one year ago
Cake day: Jan 28, 2022


Fantastic, thank you for sharing this. None of this surprises me as I keep up to date on AI and ethical concerns, but I’m glad it’s receiving more attention.

Apologies - upon re-reading my comment I see that I didn’t really spend enough time wording this appropriately. Vapes are not harmless. There is absolutely harm in inhaling any vaporized liquid, or just inhaling hot gas in general. We don’t know the full extent of the harm it causes, but we have a decent amount of knowledge from similar hot inhaled substances to draw some high-level conclusions.

I was mostly speaking to the harm reduction angle and how many places around the world have taken a hard stance on this - banning the substance because it is problematic. It always struck me as rather shortsighted, especially when the alternative - a much more adulterated and toxic substance - remains legal. In general, drug prohibitions do not work, and I appreciate an article talking about the intricacies of that.

Thanks for this article - it starts out with a strong scientific background. What I personally found interesting when I started to investigate nicotine vapes a few years back was the lack of solid evidence showing any real harm from nicotine vapes, and how shoddy nearly all the science on nicotine’s addictiveness is (almost all of it was conducted on cigarettes, failing to control for other chemicals, or used outdated animal models of addiction which exaggerate its addictive quality).

As we’ve seen throughout the entirety of human history, making substances illegal does not stop people from using them. I’m glad someone has taken the time to investigate this, and I hope we can learn in the future that banning substances doesn’t work. In fact, the evidence points towards declining usage and increasing safety when drugs are legalized and controlled: substances become less adulterated, and tax revenue can be put towards purposes such as fighting addiction.

Thank you for sharing this. You’re absolutely right that it’s not up to you to educate others. In fact, the concept of educational burden is often brought up when we talk about minorities. If someone unknowingly does something racist or sexist, they often push back and ask for an explanation from the affected party. This is a burden they are placing on others, because they have not educated themselves. But it is also misplaced, because they are the one causing harm, and they are usually the person in the position of power or privilege.

People mixing their own pre-workout often make this mistake and drop in a tablespoon or more of caffeine, which can and does kill people. For scale, a typical dose is around 200 mg, while a tablespoon of pure caffeine powder is on the order of 10 g - roughly fifty doses at once, and within the range reported as lethal for an adult. Caffeine is a risky substance when you use it in a purified form - a risk shared by many drugs whose active dose is that small.

I think a focus on the source of the misinformation is misplaced.

It’s the power of that source to generate misinfo at a faster speed and for close to no cost that’s a more pressing issue here.

I don’t think this is particularly likely to happen, but imagine I use an LLM to create legal documents to spin up non-profit companies for very little cost, and I hire a single lawyer to just file these documents without looking at them, only reviewing if they get rejected. I could create an entire network of fake reporting companies fairly easily. I can then have an LLM write up a bunch of fake news, post it to websites for these fake reporting companies, and embed an additional layer of reporting on top of the reporting to make it seem legit. Perhaps some of the reports are actually Twitter bots, Instagram bots, etc. spinning up images with false info on them, with paid bot farms surfacing these posts enough for them to catch on and spread naturally on outrage or political content alone. This kind of reporting might seem above-board enough to actually make it to some reporting websites, which in turn could cause it to show up in major media. This could end with real people creating Wikipedia pages or updating existing information on the internet, sourcing these entirely manufactured stories.

While there are some outlets out there who do their research, and there are places which fact check or might question these sources, imagine I’m able to absolutely flood the internet with this. At what share of all reporting/sharing/news/tweeting/youtubing/tiktoking/etc. does this become something our systems can no longer realistically investigate?

I also think it’s important to consider the human element - imagine I am an actor interested in spreading misinformation and I have access to an LLM. I can outsource the bulk of my writing to it - I can simply tell it to write a piece about something I wish to spread, then review it as a human and make minor tweaks to the phrasing, combine multiple responses, or otherwise use it as a fast synthesis engine. I now have more time to spread this misinformation online, meaning I can reach more venues and create misinformation much quicker than I could previously. In fact, I’m positive this vector is already being used by many.

However, none of that touches on what I think is the most pressing issue of all: the use of AI outside its scope, and a fundamental misunderstanding of the inherent bias in systemic structure. I’ve seen cases where AI was used to determine when people should or shouldn’t receive governmental assistance. I’ve seen AI used to flag when people should be audited. I’ve seen AI used by police to determine who’s likely to commit a crime. Language models aren’t yet regularly used at policy scale, but they carry the same deeply problematic biases. I think we need to rethink when AI is appropriate and what its limitations are, and to consider the ethical implications during the very design of the model itself, or we’re going to have far-reaching consequences which simply amplify existing systemic biases by reinforcing them in their application.

Imagine that we trained a model on IRS audits and used it to determine whether someone deserved an audit. We’d end up with an even more racist system than we currently have. We need to stop the over-application of AI, because we often have a fundamental misunderstanding of scope, reach, and the very systems we are training on.

Why do you think that I perceive chatgpt in this way? I voiced an opinion about the biases that chatgpt and most AI have due to their large training sets which reflect systemic biases.

Why do you ask this question?

Can you help me understand what you mean by propaganda device?

Unfortunately, AI’s typical problem with biases - in particular those against certain minorities who are discriminated against online - apparently did not warrant much attention in this release. It only gets a tiny mention under limitations:

GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts.

This is an incredibly anecdotal story. It’s one that highlights the experience of one older doctor and how they don’t like the expansion of a technology they don’t understand and don’t wish to adapt to. There are countless studies and even meta-studies out there about how incredibly useful and important telehealth is. Hell, there are even reviews of meta-studies which highlight how useful this technology is and how abundant the data proving its efficacy has become. The article doesn’t spend any time touching on the other side of the argument. It’s hyperfocused on this one doctor’s opinion of healthcare and their perception of it. The one patient he focuses on is exactly the kind of patient for whom the kind of telehealth he was practicing (Zoom-style, narrative-only telehealth) is not particularly well suited. There’s a reason that telehealth devices exist to allow the use of a sphygmomanometer, stethoscope, otoscope, and other important checkup tools, or that hybrid telehealth environments exist where a nurse can do these exams and report findings to a doctor who’s present virtually.

As an aside, I’m not sure what to think of the publication OpenMind Magazine. They’re relatively new, and they claim to have a focus on unbiased reporting, but they also claim to be here to address and debunk conspiracies, deceptions, and controversies. If this is meant to be a think piece, the failure to address the obvious gap between this anecdotally based thinking and a very well-established scientific field makes me think twice about whether this is truly here to be based on fact, or whether it is actually just a conservative mouthpiece trying to pass itself off as focused on facts.

With all of that being said, I do think there’s an important consideration to be made in healthcare, and one that’s been discussed in extreme depth in the literature - what kinds of care are better for telehealth and which are best done in person (or at least, what tech we would need for the two to be comparable). There are absolutely important considerations about which specialties and workflows do well in the telehealth field and which ones are not well suited to it. Emergency and trauma care, for example, are unlikely to have any telehealth components for a long time. Dermatology and mental health, on the other hand, are extremely successful in the telehealth space and were early adopters. There’s also a specific set of skills, and a way of approaching diagnosis, that are fundamentally different for the patients you see in person and those you see via telehealth, and if you are not adequately trained on these considerations it makes a lot of sense that you might not work well between the two mediums.

This will never end up happening, because big business has its hands in every government, but tracking of any sort really needs to be opt-in rather than opt-out. In California, for example, this is how it works for companies which like to send out those “we want to share information with our business partners” emails, documents, etc. If you are a resident of California and do not reply, by law the company must assume that you opted out.

Every day we stray further from the light… I’m so sick and tired of the middle of this god forsaken country

I first was introduced to this concept through a TED talk on behavioral economics as it relates to language - as mentioned in this article, speakers of languages which grammatically associate the future and the present also happen to save more money for retirement, practice safer sex, prioritize their physical health, etc. It’s made me think a lot about all the other ways language likely interacts with how we think, what values we place on society (and society places on us), and other far-reaching effects of language on cognition. Thank you for this article, as it talks through, in detail, many of these differences based on language structure and has provided me with a plethora of papers to read through!

A few months ago I read Hoffman’s book The Case Against Reality. It was an interesting read, one in which I ended up learning more about quantum mechanics than I ever thought I would when I picked the book up. Frankly, I think the book could be distilled down to a much shorter version, as the central concept is not a particularly complicated one - just one which challenges conventional ways of thinking. I think this talk does a better job. If you’re curious to learn more about the science that supports this particular way of thinking, or want a more in-depth exploration of what it means - particularly with relevance to the concept of spacetime - I’d suggest giving the book a read.

New version bugs - Language undetermined error, Subscribed/local/all not defaulting
If you haven't set a language in your profile and you try to post, the default option is "undetermined", and anything you try to reply/post will give you the unhelpful error language_not_allowed. To an end user this doesn't provide any guidance on what happened or how to fix it. Similarly, if you haven't set a new default since updating, going to the main page of an instance will display whichever of Subscribed/Local/All you previously saved, but it will always actually show All (since that is what it defaults to on your profile).

JKR uses a lot of her time and money to further the TERF agenda. She even proudly considers herself a TERF. The new Harry Potter game is going to generate a decent amount of revenue for her, which means it’s directly funding a hateful ideology.

Some queer people and allies have decided to fight against this however they could - hurling insults at people who talked about the game, posting memes which spoiled the main plot of the game, and really anything else they had control over. It’s been a bit of a nightmare for moderation if you didn’t take a side in the matter.

Okay so I need to be sure I have something that can make sense of s3 calls to storage, I feel like we’re getting closer, just still way out of my own technological depth.

Is there any way to do this and avoid having to use S3? I don’t want a surprise bill from Amazon because we exceeded some thresholds they have on the free tier (nor do I want to have to make new free tiers every 12 months).
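One option I’ve been looking at is self-hosting an S3-compatible store like MinIO, so there’s no Amazon account (or free tier) involved at all. A rough, untested sketch of what I mean - the service name, credentials, and paths here are placeholders:

```yaml
# docker-compose.yml - hypothetical MinIO setup as a self-hosted,
# S3-compatible object store (no AWS account or free-tier limits)
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: lemmy-images      # placeholder credential
      MINIO_ROOT_PASSWORD: change-me     # placeholder credential
    ports:
      - "9000:9000"   # S3-compatible API endpoint
      - "9001:9001"   # web admin console
    volumes:
      - ./minio-data:/data   # images persist on local disk
```

Anything that speaks the S3 API could then, in principle, be pointed at port 9000 on this box instead of Amazon - assuming the image service in question supports a custom S3-compatible endpoint.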

Great article, thanks! Completely unsurprising, but I’m glad that issues like this are being surfaced through mediums in which they will receive attention, because these companies certainly aren’t proactively trying to identify and fix these kinds of issues.

I am willing to contribute storage (I have several TB), but I am somewhat bandwidth limited, so I need to be a bit careful with hosting too many images to not impact the other services that I run on the same connection.

How would you accomplish this? I have plenty of bandwidth and plenty of storage I can subsection as a possible solution (hell, even buying a Raspberry Pi and an old hard drive wouldn’t be all that expensive, and potentially a fun project), but I really don’t have any idea of how to connect this to the lemmy instance.

If it’s only used for images I’m not all that concerned… images not loading when the rest of the page loads really only matters when the focus of the post is a meme, and I’m not too concerned about those not loading.

Thank you for adding the additional context, hopefully it can help people calibrate how much they should believe this writing.

Honestly it’s kinda fascinating in some extremely weird way…

Of note, there are no sources for this. To an extent, that is to be expected. Hersh does have a history of breaking a few important stories, but his previous stories were backed up by a lot more paperwork than this particular one.

What is up with the way that article is written? Is this meant to be targeting incels? There’s a weird level of hand-holding tutorial interspersed with sexist ideology about owning a girlfriend. There’s also a weird shift from NFT art of women into trying to find a date in VR, with no mention that you’re trying to interact with a human. If this was written by a human (and not AI), I am very concerned.

I find it rather interesting that some of the places most keen on adopting AI are some of the places most plagued by racism. Experts in the field pretty unanimously agree that nearly all AI is racist, so choosing a target system that’s already really racist is just not a good idea.

Unfortunately at the end of the day, capitalism is likely to win. This will likely be sold to police departments in the coming months and years, despite this article or any attention it’s going to receive.

Never heard the term ‘feudal security’ before. Interesting read, thanks!

This is exactly the kind of AI application that is almost assured to happen in financially strained systems - especially chronically underfunded government systems - and it’s the kind most at risk of causing serious harm, because nearly all algorithms are biased and, in particular, racist.

This is the use of AI that scares me the most, and I’m glad it’s facing scrutiny. I just hope we put in extremely strong protections ASAP. Sadly, most people in politics do not see how dangerous using AI for these applications can be, so we most likely will see a lot more of this before we see any regulation.

If you’re curious as to why these kinds of applications are nearly all biased, the following quote from the article helps to explain:

The Allegheny Family Screening Tool was specifically designed to predict the risk that a child will be placed in foster care in the two years after the family is investigated.

They are comparing variables to an outcome, and the outcome is one which is influenced by existing social structures and biases. This is like correlating the risk of ending up in jail with factors which loosely correlate with race. What will end up happening is that you’ll find the strongest indicators of race - in particular, of being Black - and these will also be the strongest indicators of ending up in jail, because our system has these biases and jails Black individuals at a much higher rate than individuals of other races.

The same is happening here. The chances of a child being placed in foster care depend heavily on the parents’ race. We are not assessing how well the child is being treated or whether they might need support; we are assessing the risk that the child will be moved to foster care (which can alternatively be read as assessing the likelihood that the child is of a non-white race). This distinction is critical to understanding when AI is reinforcing existing biases.
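To make the mechanism concrete, here’s a minimal, entirely hypothetical simulation (made-up numbers, no real data): the historical label depends on a protected attribute through systemic bias, so anything trained to predict that label learns a different base rate for each group - even though the underlying need is identically distributed.

```python
import random

random.seed(0)

# Hypothetical data generator: the historical outcome ("placed in
# foster care") depends on a protected attribute via a bias term,
# not on any difference in actual need between the groups.
def make_example():
    group = random.choice(["A", "B"])   # protected attribute
    need = random.random()              # true need, same distribution for both groups
    bias = 0.4 if group == "B" else 0.0 # systemic bias against group B
    placed = random.random() < min(1.0, need * 0.3 + bias)
    return group, need, placed

data = [make_example() for _ in range(10_000)]

# A model fit to this label just reproduces the bias: the base rate
# it learns for each group diverges despite identical need.
rate = {
    g: sum(p for gg, _, p in data if gg == g)
       / sum(1 for gg, _, _ in data if gg == g)
    for g in ("A", "B")
}
print(rate)  # group B's learned "risk" is far higher despite equal need
```

The model never sees race directly here, yet predicting the biased outcome is enough to encode it - which is the core problem with training on outcomes shaped by existing social structures.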

Horseshoe theory was never meant to describe political attitudes. It makes the classic mistake of confusing economic policy with social policy in an attempt to oversimplify and classify individuals. Perhaps most importantly, there’s exceedingly little scientific study of horseshoe theory, and what little exists fails to support the hypothesis.

While I’m not going to tell someone how they should enjoy the internet, there are very real storage costs to host images or even create thumbnails of them. Are those the only pictures that you disapprove of? What about vids?

Thank you for the sentiment. Could you explain more what you mean by “mute noise - especially with pictures and vids”?

A few issues I’ve seen with adoption in the federated/open source world:

There is a technical barrier to entry. The fact that you’re on a website that’s connected to other, different websites in the same interface is one that people aren’t particularly familiar with. For a social website, questions around moderation and who you’re interacting with are hard to address if you’re unwilling or unable to learn the terminology needed to understand how this works.

Each entry point into this system is slightly different as well - how it presents itself, its design, who participates there, what kinds of discussions exist. You might stumble across a lemmy instance as your first introduction to lemmy, find that it doesn’t appeal to you, and not recognize that it isn’t everything that’s available on lemmy - and discovering the rest can be difficult. The same is true of other federated websites.

As you mentioned, there are also issues with the algorithmic feed. This is what leads a lot of people to stick with a particular platform. They want content to come to them, rather than searching for it, and they aren’t always aware of what content they want. Federated content is much more pull-oriented than push-oriented, and until someone codes an algorithm to push, I think there will be a lot of resistance from the subset of individuals who are interested in pushed content rather than pulled content.

While this could arguably be placed in science, it made me think about the implications of an entire generation where the brains of children ‘aged’ at an increased rate as compared to peers prior to COVID-19 and what the implications of this might be for society. Mental health as a whole declined over the pandemic, and it’s had me wondering whether it has helped to normalize going to therapy and treating mental health seriously and not as a taboo. Has this affected how children interact with each other and their values and priorities going forward? I don’t think we can answer it at this point in time, but I am curious to follow the research and learn more from others.

A few years ago I listened to a TED talk by Keith Chen, which was focused on the research highlighted in this article. It made a lot of sense to me, that the language constructs which you have and which you use would affect your behavior and how you think about things. Thank you for this article, as it highlights a bunch more research in a subject I haven’t seen much about in some time. I find small quirks in thinking like this quite fascinating and I’m happy to have a new book to read 😄

Their hobbies likely aren’t causing them negative feelings, whereas their work more likely is. Humans are somewhat biased towards needing to vent and talk about the things they have to do that cause them negative feelings.

People also talk about work for a variety of social reasons. Most importantly, perhaps, is that people often measure social standing by their work. Where they work, what jobs they have, how much money they make, and other characteristics of work are important for many human social evaluations. Because this is important, it becomes socialized as something that you should discuss, and thus becomes a common topic of conversation. People then internalize it as something they should talk about, or is interesting to talk about. It’s a self sustaining model built upon the foundations of social worth and evaluation, supported by the emotional needs of humans.

Interestingly, in certain circles where social worth is not derived from your work (minorities for whom upward mobility or potential jobs are limited often talk less about work) but from other aspects of your life (talking about children is a favorite for those who have them, and artists love to talk about their creative pursuits), you’ll find conversation drifting towards different topics instead.

I think the best thing you can do, if you find this boring, is to attempt to redirect conversation away from work and towards something you’d rather talk about. People will naturally drift back towards conversation that they find useful or interesting, or have been socialized to have, and ultimately you may need to tolerate this or find a group of friends less interested in talking about their careers. I’ve generally found that quips highlighting that it’s silly to be talking about work away from work (such as on work offsite trips), or that work is just a means to make money and I’d rather get to know the person and what they find interesting, tend to work well to divert conversation away from chatting about work.