This post takes place in a fantasy dream world. Any resemblance to real apps is entirely coincidental.
Consider a world with just two Electron apps (that's how you know this is a fantasy). Let's call them App A and App B.
To the user, both apps feel quite similar. One might even be 90% based on the other. Despite the external similarities, the apps tackle their Electron security configuration in vastly different ways.
App A enables Node.js integration. It makes a lot of things easier, after all. If you need to display a list of files, you can just read them straight from disk. If you're on a tight deadline and need to ship by a committed due date, it's understandable. And App A only works with local resources anyway, so it's not like any remote content could run here.
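A minimal sketch of what App A's setup might look like (the window options are real Electron settings, but the file paths and usage are invented for illustration):

```javascript
// main.js -- hypothetical App A setup (illustrative, not from any real app)
const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: true,   // full Node.js APIs available in the renderer
      contextIsolation: false, // page scripts share one JS world with everything else
    },
  });
  win.loadFile('index.html');
});

// In the renderer, reading files "just works" -- no IPC needed:
//   const fs = require('fs');
//   const files = fs.readdirSync(someDirectory); // hypothetical path
```

Convenient, and that convenience is exactly the problem we'll get to.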
App B took a different approach. From early on, App B sought to enable Electron's security features: partly out of necessity, to accommodate certain new features, but also because software security is foundational and needs to be dealt with upfront. It can't be an afterthought.
App B's approach had many tradeoffs. Using IPC for everything added non-trivial overhead just copying bytes around. It introduced brand-new failure cases, made the code harder to reason about, and meant that adding a new feature requires a lot of boilerplate. App B's build framework assumed Node.js integration, so App B wrote nonsensical workarounds, forked the build tools, and eventually just wrote custom tooling so that sandbox: true could simply work. To this day, App B sees all sorts of errors related to the sandbox. Maybe it's some NVIDIA driver issue. Maybe it's some Snap or Flatpak compatibility issue. It's painful for developers to test. It's painful for users, who hit errors they'll never bother to report. But the security benefits make it worth it.
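App B's style of setup could be sketched roughly like this: a sandboxed renderer, and every privileged operation funneled through IPC. The channel name and handler are made up for illustration; the webPreferences flags are the real Electron settings involved:

```javascript
// main.js -- hypothetical App B setup (illustrative, not from any real app)
const { app, BrowserWindow, ipcMain } = require('electron');
const fs = require('fs/promises');
const path = require('path');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      sandbox: true,          // renderer runs inside the OS-level sandbox
      contextIsolation: true, // page scripts can't reach into the preload's world
      nodeIntegration: false, // no Node.js APIs in the renderer
      preload: path.join(__dirname, 'preload.js'),
    },
  });
  win.loadFile('index.html');
});

// The privileged side of the boundary lives in the main process:
ipcMain.handle('list-files', async (_event, dir) => {
  // Real code would validate `dir` against an allowlist before touching disk.
  return fs.readdir(dir);
});

// preload.js -- exposes exactly one narrow capability to the page:
//   const { contextBridge, ipcRenderer } = require('electron');
//   contextBridge.exposeInMainWorld('api', {
//     listFiles: (dir) => ipcRenderer.invoke('list-files', dir),
//   });
```

Every one of those hops (page, preload, main process, and back) is the "copying bytes around" overhead and boilerplate described above.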
One day, a discovery is made. It turns out that the code shared by App A and App B isn't as secure as once thought.
It also turns out that while App A doesn't handle remote content, it certainly does handle untrusted content. That file the user just opened: it might be on the user's computer, but where did it come from? It might be (and probably is) something totally benign made by the user. But it might be made by a student trying to break into their teacher's laptop to tamper with grades. It might be a worm that spreads by stealing session tokens from a popular chat app. It might be part of a spear-phishing campaign. Who knows.
So. Back to that discovery. It turns out that if the user opens a malicious file in either App A or App B, some JavaScript code within that file will be automatically executed. That's called XSS (cross-site scripting). It's pretty bad. But what does that actually mean for the user?
App A. The only way this situation could be worse is if it were a zero-click exploit. Because Node.js integration is enabled, this is not really XSS. This is just arbitrary code execution. The JavaScript can plop WannaCry onto the user's desktop and automatically execute it in under a second, and it would only take 80 characters. Maybe your antivirus will block WannaCry, but will it block some bespoke custom malware? Will it block that bespoke malware when it's seemingly being executed as part of a widely used, code-signed app made by an organization that seems very trustworthy? The possibilities are limitless for the attacker.
App B. It's sandboxed. There's no node integration. The impact here really is just XSS. Of course, there's still plenty of bad things that can happen. Arbitrary interfaces can be shown, allowing highly convincing phishing attacks. It can still send out requests and burn the battery. But the attacker is trapped inside the app.
Someone could use a Chromium zero-day to escape the sandbox and do whatever else. Of course. But those exploits take tremendous effort to develop and involve chaining several bugs together. App A and App B's users are not heads of state. No one is developing one of those chains to then use it here. For all intents and purposes, the sandbox is perfectly secure as long as IPC services do not offer escapes.
The impact of the bug itself is bad enough, but the response to that bug matters just as much. App A and App B both release updates quickly that fix the bug. That's good, but how do users know they need to install the update?
App A. It turns out that if you installed App A directly from the app's website instead of from an app store, there isn't even an update checker. So there are possibly millions of people with this software installed. It has a known critical security flaw that has been publicly disclosed, with proofs of concept. But users have no way to know this. There wasn't even an announcement on the website.
App B. At least there was an update-checking mechanism, although automatic installation of updates was still missing. Perhaps App B's developers have concerns about doing that securely, or something along those lines. See the Notepad++ situation for a real-world example.
It's pretty clear that App A could've handled this a lot better. The app should be more secure. The app should have a way to distribute security fixes. App B proved that not only is it possible, it's possible with a very small group of passionate developers. In fact, App A's scope is substantially smaller than App B's, so App A could become much more secure with far less effort. So, did App A take any action here to improve security for its users?
Nothing.
They did nothing.
Years later. Today. Right now. People are downloading App A with all the same security flaws. No way to even inform users about a security issue.
For comparison, it turned out that App B's IPC services did have a security issue that allowed an escape from the sandboxed web content. Compared to arbitrary code execution, this was minor: "just" the ability to read arbitrary files without going through the file picker.
So, what did App B do? They spent the next month rewriting the app from the ground up. Security at every step. Deny by default for everything: every permission check, every web request, every new window and redirect. If there isn't a rule saying to allow it, it must be denied. App B even did a large-scale migration of every user's data, possibly gigabytes. That was slow and hard. It took many attempts.
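Deny-by-default in an Electron app looks roughly like this in practice. The handlers below are Electron's real hooks for permissions, new windows, and navigation; the allowlist contents are invented for illustration:

```javascript
// Deny-by-default wiring, roughly in the spirit of App B's rewrite.
// (Illustrative sketch; the allowlisted origin is made up.)
const { app, BrowserWindow, session } = require('electron');

const ALLOWED_ORIGINS = new Set(['https://updates.example.com']);
const allowed = (url) => ALLOWED_ORIGINS.has(new URL(url).origin);

app.whenReady().then(() => {
  // Permission requests (camera, notifications, ...): no rule allows
  // any of them, so every one is denied.
  session.defaultSession.setPermissionRequestHandler(
    (_contents, _permission, callback) => callback(false)
  );

  const win = new BrowserWindow({ webPreferences: { sandbox: true } });

  // New windows: denied unless the target origin is allowlisted.
  win.webContents.setWindowOpenHandler(({ url }) =>
    allowed(url) ? { action: 'allow' } : { action: 'deny' }
  );

  // Navigations and redirects: same rule.
  win.webContents.on('will-navigate', (event, url) => {
    if (!allowed(url)) event.preventDefault();
  });

  win.loadFile('index.html');
});
```

The point of the pattern is that forgetting a rule fails closed: a missing allowlist entry breaks a feature, not the user's security.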
Suppose there's another vulnerability. That tends to be how these things go.
Every App A user is still vulnerable. And they won't have any way to know that there is a fix available.
Meanwhile, App B is already prepared. The blast radius of any vulnerability is tiny by comparison, and users have a way to know if there's a security update.
If App A can't keep their Electron app secure, which they demonstrably can't, my opinion is quite simple: stop offering it for download. If you don't have the engineering resources to support it, that's fine. Just acknowledge that and don't lie to your users. No one benefits from kicking the security can down the road.
There's no real conclusion to this. Just food for thought. Again, resemblance to real software is entirely coincidental.