If an organization runs a survey in 2024 on whether it should get into AI, then they’ve already bodged an LLM into the system and they’re seeing if they can get away with it. Proton Mail is a priva…
we appear to be the first to write up the outrage coherently too. much thanks to the illustrious @self
The trouble is that Proton has announced and implemented Scribe in a manner that sends up huge red flags for their privacy-focused techie base.
Proton Mail’s privacy-focused users are worried about the Scribe announcement because they’ve never seen Proton be so vague and nonspecific about security and threat models.
Up to now, Proton has been serious about privacy
It’s not about AI. It’s about privacy and communication.
fucking incredible, you managed to cherry pick some of the few sentences in the article that don’t use the words “AI” or “LLM”! good for you, you exhausting motherfucker
they really came out of the woodwork today, huh
I’m taking it as a positive sign that the Proton story’s gaining traction, as it should. this thing is a massive fucking security risk and a bad sign of things to come for Proton, and more people should be talking about it.
but between the dishonesty on Proton’s part about the survey and the types of accounts that’ve come out of the woodwork to unabashedly support this trainwreck of a feature (the pattern’s especially clear on mastodon), boy, there’s a lot of stank on this one
you can see they are actively monitoring the masto discourse and responding whenever they think their justification list has any merit https://hci.social/@protonprivacy@mastodon.social/with_replies
but they’re already saying stuff out of sync with their promotional material, so damage control does appear to be in action
e.g.
https://mastodon.social/@protonprivacy/112814751983760603
but their site says
https://web.archive.org/web/20240719203115/https://proton.me/support/proton-scribe-writing-assistant
Who can use Proton Scribe?
We are currently rolling out Scribe to eligible users. If you’re on a Proton Business plan, including Mail Essentials, Mail Professional, and Proton Business Suite, you can try Proton Scribe for free for 14 days. If you’re on our Visionary plan, it’s included with your plan.
fuck, the pure PR fluff they’re posting in response to “hey fucknuts, this thing breaks your fucking security model”. I’ve dropped other companies for doing this “uhh no it doesn’t, trust us” shit before. if they had proof this thing’s secure they would’ve posted it by now, but they don’t (because it isn’t, it’s broken by design) so instead they have to post this horseshit
I highlighted another nice dig by weizenbaum this afternoon which your “broken by design” reminded me of:
“These gigantic computer systems have usually been put together (one cannot always use the word designed) by teams of programmers, whose work is often spread over many years. By the time these systems come into use, most of the original programmers have left or turned their attention to other pursuits. It is precisely when such systems begin to be used that their inner workings can no longer be understood by any single person or by a small team of individuals.”
I think that sequence of events happens sometimes, but not all the time. the generational departed-programmer thing happens more in bigger orgs or teams with a bit more of an established presence/footprint, and I don’t really get the impression proton is that big yet
this one smells more like the other kind of ratfuckery I’ve seen in shartups: some particular bugbear/feature-idea “driven” by a C-level/owner/teamlead (where “driven”, n.: “someone said go do it”), enabled by complicit PM/POs doing some goalwashing, with devs either just keeping their head down, or actively participating in creation
also I believe you’ll greatly enjoy reading https://lemming-articlestash.blogspot.com/2011/12/institutional-memory-and-reverse.html
bit of a whoopsie walkback after being caught with their pants down
totes normal. everyone has this all the time, amirite?!
let’s see how many steps they take back
also I keep meaning to push on this and getting distracted:
only for business users, who have asked for it
fuck no, this breaks the security model for every proton user. one of the key assumptions of Proton’s end-to-end encrypted model is that the plaintext of a message never touches Proton’s servers, on either end of the conversation. now if a proton business/visionary user (and they keep fucking forgetting they forced their visionary accounts into having this horseshit) sends me a message or a reply, there’s a chance the plaintext on their end was exposed to Proton’s servers, and as the receiver I can’t control or even detect the conditions that cause the plaintext leak (is the sender a proton business/visionary account? did they use the cloud version of the LLM? what text did it operate on?)
fucking unworkable. I’m not even a cryptographer, but this is the most obvious plaintext leak I’ve ever seen in a cryptography product.
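to make the leak concrete, here’s a rough sketch of the two send paths as I understand them, with made-up endpoints and a made-up webmail client (not Proton’s actual code, just the shape of the problem). the point is that the message landing in my inbox looks identical either way:

```typescript
// Rough sketch only: a hypothetical webmail client and hypothetical endpoints.
// It exists to show where the draft plaintext goes in each flow.

interface Draft {
  body: string;
  recipient: string;
}

// The end-to-end promise: plaintext is encrypted on the sender's device and
// only ciphertext ever reaches the provider's servers.
async function sendE2E(draft: Draft, recipientPublicKey: CryptoKey): Promise<void> {
  const ciphertext = await encryptForRecipient(draft.body, recipientPublicKey);
  // the server only ever sees ciphertext
  await fetch("https://mail.example/api/send", {
    method: "POST",
    body: JSON.stringify({ to: draft.recipient, ciphertext: Array.from(new Uint8Array(ciphertext)) }),
  });
}

// The cloud-assistant flow: the draft plaintext is POSTed to the provider's
// LLM endpoint *before* any encryption happens.
async function sendWithCloudAssistant(draft: Draft, recipientPublicKey: CryptoKey): Promise<void> {
  // the server sees the PLAINTEXT of the draft right here
  const res = await fetch("https://mail.example/api/assistant", {
    method: "POST",
    body: JSON.stringify({ prompt: draft.body }),
  });
  const rewritten: string = (await res.json()).text;
  const ciphertext = await encryptForRecipient(rewritten, recipientPublicKey);
  await fetch("https://mail.example/api/send", {
    method: "POST",
    body: JSON.stringify({ to: draft.recipient, ciphertext: Array.from(new Uint8Array(ciphertext)) }),
  });
}

// Stand-in for the real PGP encryption; WebCrypto RSA-OAEP keeps the sketch runnable.
// Nothing in the resulting ciphertext tells the recipient which send function ran.
async function encryptForRecipient(plaintext: string, key: CryptoKey): Promise<ArrayBuffer> {
  return crypto.subtle.encrypt({ name: "RSA-OAEP" }, key, new TextEncoder().encode(plaintext));
}
```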
and now, my swing at a secure version of this feature:
if I receive a message whose content was sourced from the cloud LLM (ie the user activated the feature at any point while writing), instead of pulling the content of the message, protonmail displays a warning that the content of the message was exposed to their servers, and I’m given buttons to either display the message, or delete it and block the sender. if I delete the message and block the sender, protonmail itself sends a message back to the message’s author proving that I deleted the message unopened.
I’m not kidding, that’s the only secure version of this. that’s the version a privacy-oriented company would have implemented, if they really had to do any of this at all (they didn’t)
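here’s the recipient side of that, sketched out. the exposedToCloudLLM flag, the client calls, and the deletion receipt are all hypothetical, nothing like them exists in any client today; this is just the shape of the idea:

```typescript
// Hypothetical recipient-side handling of the proposed warning flow.
// None of these names or APIs exist; they only illustrate the idea above.

interface IncomingMessage {
  id: string;
  sender: string;
  // hypothetical flag: set if the sender invoked the cloud LLM at any point while writing
  exposedToCloudLLM: boolean;
  decryptBody(): Promise<string>;
}

type RecipientChoice = "display" | "deleteAndBlock";

interface MailClient {
  blockSender(address: string): Promise<void>;
  deleteUnopened(messageId: string): Promise<void>;
  // provider-generated, signed proof that the body was deleted without being decrypted or rendered
  sendDeletionReceipt(messageId: string, to: string): Promise<void>;
}

async function handleIncoming(
  msg: IncomingMessage,
  askRecipient: (warning: string) => Promise<RecipientChoice>,
  client: MailClient,
): Promise<string | null> {
  if (!msg.exposedToCloudLLM) {
    // normal end-to-end message, nothing to warn about
    return msg.decryptBody();
  }

  const choice = await askRecipient(
    "The sender used the cloud writing assistant: the plaintext of this message " +
      "was exposed to the provider's servers before encryption.",
  );

  if (choice === "display") {
    // recipient accepts the exposure knowingly
    return msg.decryptBody();
  }

  // delete unopened, block the sender, and send them proof it was never read
  await client.deleteUnopened(msg.id);
  await client.blockSender(msg.sender);
  await client.sendDeletionReceipt(msg.id, msg.sender);
  return null;
}
```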
also there’s the other scenario, where this feature gets lacklustre uptake but not enough to kill it, and then it just gets sorta shoved into a side panel, and then every so often it’s turned on by default again because someone updated the config/prefs code or some other banal-but-instantly-effective reason (presuming it isn’t intentionally turned back on by adding new default-on settings for “different” features that build on it)
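for anyone who hasn’t watched a prefs “cleanup” silently undo an opt-out, here’s a toy example of the banal version (hypothetical setting names, nobody’s actual config):

```typescript
// Toy example: a settings rename ships with a new default-on key, and the
// lookup never consults the user's old opt-out, so the feature comes back on.

type Prefs = Record<string, boolean | undefined>;

const DEFAULTS: Prefs = {
  // v2 renamed the setting and shipped it enabled by default
  "assistant.smartCompose.enabled": true,
};

function isEnabled(userPrefs: Prefs, key: string): boolean {
  // only the new key is checked; the old opt-out is silently ignored
  return userPrefs[key] ?? DEFAULTS[key] ?? false;
}

// the user opted out under the old key name...
const userPrefs: Prefs = { "writingAssistant.enabled": false };

// ...and the feature is back on anyway
console.log(isEnabled(userPrefs, "assistant.smartCompose.enabled")); // true
```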
oh. perhaps you could explain this to the authors of the article?
brb making popcorn