From the team:
Hi everyone,
In Proton’s 2024 user survey, AI usage among the Proton community has now exceeded 50% (54% to be exact). It’s 72% if we also count people who are interested in using AI.
Rather than have people use tools like ChatGPT, which are horrible for privacy, we’re bridging the gap with Proton Scribe, a privacy-first writing assistant built into Proton Mail.
Proton Scribe allows you to generate email drafts based on a prompt and refine them with options like shorten, proofread, and formalize.
A privacy-first writing assistant
Proton Scribe is a privacy-first take on AI, meaning that it:
- Can be run locally, so your data never leaves your device.
- Does not log or save any of the prompts you input.
- Does not use any of your data for training purposes.
- Is open source, so anyone can inspect and trust the code.
Basically, it’s the privacy-first AI tool we wished existed, so we built it ourselves. Scribe is not a partnership with a third-party AI firm; it’s developed, run, and operated directly by us, based on open-source technologies.
Available now for Visionary, Lifetime, and Business plans
Proton Scribe is available as a paid add-on for business plans, and teams can try it for free. It’s also included for free with all of our legacy Proton Visionary and Lifetime plans. Learn more about Proton Scribe on our blog: https://proton.me/blog/proton-scribe-writing-assistant
As always, if you have thoughts and comments, let us know.
Proton Team
Fucking hell, it’s exhausting trying to keep one step ahead of having this AI bullshit shoved into every service I use.
I’m not against AI. I’m just against it being embedded in literally everything.
If I want to “consult” an AI to have it look at my code for syntax errors or something like that, I’ll go to its website and use it from there, accepting that yes… That particular bit of code or text is going to be scraped.
But the step from there to “always be reading everything I do” is fucking massive.
Pretty sad all the people getting mad about an optional opt-in feature. I think this is pretty cool. If it’s free to use with my existing unlimited plan I’ll probably use it regularly, otherwise if I have to pay more I’ll probably just keep using ChatGPT since I already pay for that.
What are the Visionary, Lifetime, and Business plans?
The Lifetime plan is a plan they auction off as a fundraiser.
No thank you.
VPN Linux client is still barely functional years later…
Keep releasing new products tho
How is it barely functional? I use it and haven’t had any issues.
Not the same teams.
And who’s responsible for keeping the different teams properly staffed and funded?
So, just for clarity: even if this rolls out to all paid plans, this is opt-in specifically, right? Or at minimum it can be entirely disabled?
Yes
Amazing thing; unfortunately I won’t be able to use it. I’m on the ultimate plan, which already costs quite a bit, and if Scribe is going to be a paid add-on I will stick to local Ollama models. Not the most convenient thing, but it works.
Hey look it’s one of the reasons I started my switch from Google manifesting in my new system, how wonderful! Can the AI winter fully arrive already?
This is opt-in and you have to pay for it. Plus it runs locally. Nothing like Google.
It’s enabled by default and can send your email drafts to their server. The first time you try to use it (by clicking the Scribe button), it asks whether you want to use the local version or the cloud version. It’s easy to disable it completely in Settings.
It does not, and cannot, train on your inbox, due to end-to-end encryption.
More info: https://proton.me/support/proton-scribe-writing-assistant
I would prefer if the initial prompt included an option to disable Scribe completely, and a warning about the privacy implications of enabling it, but overall I think their approach is good enough for my privacy needs.
End-to-end encryption doesn’t mean it can’t be trained on your inbox, especially locally. It’s not encrypted at rest on your side, else you wouldn’t be able to read it.
That’s why Facebook’s whole “WhatsApp is e2e encrypted, we can’t see anything” is and was a farce. They wouldn’t even make the claim in court. People even proved that they could exfiltrate data from WhatsApp, after it made it to a user’s phone, over to the Facebook app, and boom, e2e didn’t matter at all.
It sounds like your main concern is that once your inbox is decrypted by your local device it could be used by Proton to train Scribe or for some other (perhaps nefarious) purpose.
For the first point, I think the technical challenge of creating a distributed machine learning algorithm, which runs locally on each user’s device and then somehow aggregates the results, is much more difficult than downloading and using an existing model like Scribe does currently, but I agree that it is theoretically possible. If Proton ever overcomes that challenge and offers that feature, I hope they handle it as I suggested above for Scribe: an option to disable it the first time you use it. As long as I could disable it, I would consider the risk minimal. As it stands today, I consider the risk negligible.
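The distributed scheme being described resembles federated learning: each device trains on its own data and only shares model updates, which a server averages. Here is a minimal sketch of federated averaging with toy scalar weights and a made-up local update rule; this is purely illustrative and not anything Proton has announced:

```python
# Minimal federated-averaging sketch (hypothetical; not Proton's implementation).
# Each "device" updates a copy of the weights using only its local data, and
# the server averages the updates, so raw data never leaves the device.

def local_update(weights, data):
    # Stand-in for on-device training: nudge each weight toward the mean
    # of the local data (takes the place of a real gradient step).
    target = sum(data) / len(data)
    return [w + 0.1 * (target - w) for w in weights]

def federated_average(updates):
    # Server-side aggregation: element-wise mean of the device updates.
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

# Three devices, each holding private data the server never sees.
global_weights = [0.0, 0.0]
device_data = [[1.0, 2.0], [3.0], [5.0, 7.0]]
updates = [local_update(global_weights, d) for d in device_data]
global_weights = federated_average(updates)
```

The point of the design is that only the `updates` lists cross the network; `device_data` stays local, which is why the aggregation step is the hard part of such a system.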
For the second point, it’s true Proton could program their app (or their website) to send your decrypted inbox elsewhere. (That’s true of every email provider, unless sender and receiver have exchanged PGP keys, since email is a plaintext protocol.) I trust that they don’t, based on my assessment of the available info, including discussions like this. I certainly consider them much more trustworthy than Facebook/Meta.
As a general point, I think a lot of security/privacy for services like Proton comes down to trust. It’s important to keep Proton honest and to keep ourselves informed. I’m glad we have communities like this to help us do that.
So that’s fair, and I completely understand that. My problems are: 1. how the training data is obtained, 2. how this could change in the future, and 3. I just started my Proton switch about two months ago, and all of the Google AI integration is what broke the camel’s back for me.
I wanted a platform where I didn’t have to constantly check how the AI is getting trained and handles privacy, which is now gone.
Mistral LLM, which I recall is open source
Probably not much, since they’re not doing a cloud service. You still have to set this up to work on your local server/computer if you want to use it.
The local AI integration is, so far, the only AI feature they seem to be planning. And as they covered, it’s primarily designed for businesses. Among the non-AI things they’re planning to look into is a web browser, possibly built from scratch; the AI stuff uses an existing model, it’s not from scratch.