
Mozilla Announces Project Fission, a Project To Add True Multi-Process Support To Firefox (zdnet.com) 67

An anonymous reader quotes a report from ZDNet: After a year of secret preparations, Mozilla has publicly announced plans today to implement a "site isolation" feature, which works by splitting Firefox code into isolated OS processes on a per-domain (site) basis. The concept behind this feature isn't new, as it's already present in Chrome, which has shipped it since May 2018. Currently, Firefox runs one process for the browser's user interface and a handful (two to ten) of processes for the code that renders websites. With Project Fission (as the effort has been named), that split will change: a separate process will be created for each website a user visits. The separation will be so fine-grained that, just as in Chrome, an iframe on a page will receive its own process, helping protect users from threat actors who hide malicious code inside iframes (HTML elements that load other websites inside the current page). This is the same approach Chrome has taken with its "Site Isolation" feature.
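In rough code terms, the per-site bookkeeping described above might look like the following minimal sketch (spawnContentProcess and the Pid type are hypothetical stand-ins, not Fission's actual internals; real browsers also determine a "site" via the Public Suffix List rather than the last two host labels):

```typescript
// Minimal sketch of per-site process assignment. spawnContentProcess and
// the Pid type are hypothetical stand-ins, not Fission's real internals.
type Pid = number;

declare function spawnContentProcess(): Pid;

const siteProcesses = new Map<string, Pid>();

// Reduce a URL to its "site" (scheme plus registrable domain), the unit of
// isolation described above. Taking the last two host labels is a
// simplification of what the Public Suffix List provides.
function siteKey(url: URL): string {
  const labels = url.hostname.split(".");
  return `${url.protocol}//${labels.slice(-2).join(".")}`;
}

// Every document -- including each iframe -- is routed to the process that
// owns its site, spawning a new one on first use.
function processFor(documentUrl: string): Pid {
  const key = siteKey(new URL(documentUrl));
  let pid = siteProcesses.get(key);
  if (pid === undefined) {
    pid = spawnContentProcess();
    siteProcesses.set(key, pid);
  }
  return pid;
}
```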
  • by Anonymous Coward on Wednesday February 06, 2019 @09:27PM (#58081742)

    Firefox: Hold my beer.

  • by Anonymous Coward

    In other words, more like Chrome which means even more CPU and memory usage for little gain.

    If people think they're not going to go down the same road and eventually gimp extensions as well, then they're naive.

    • The concept behind this feature isn't new, as it's already present in Chrome

      I think this should actually be the generic template for any news about Chromefox:

Mozilla announces plans to add $X to Firefox. The concept behind this feature isn't new, as it's already present in Chrome.

      for any given value of X.

  • Yippy. Another fucking update
    • Re: (Score:2, Funny)

      by Anonymous Coward

Wait until Intel puts it on-die as part of their new IME. "The fastest, most hyperthreaded browser-on-a-chip is now always on, even when your machine is off! Swear to god, it's a feature! Sure, someone asked for it!"

    • by Anonymous Coward

      That should be it.

This will do nothing to properly isolate cross-site scripting attacks; it will increase the memory footprint (less so if you are on Linux and have KSM (kernel samepage merging) running with the relevant memory regions flagged as KSM-compatible), increase the attack surface, and further complicate the already messy debugging Firefox requires.

      If we were to go back and fork from FF-ESR 38, 45, or 52, implement this process isolation on a per-window or per-tab basis, and have plugins tied to per-window or per-tab session

      • by roca ( 43122 ) on Thursday February 07, 2019 @12:11AM (#58082140) Homepage

Mozilla didn't see site isolation as a high priority until Spectre happened. Unfortunately, it is now obvious that, given a high-resolution timer, JS can probably read the contents of almost everything in the address space of the process it belongs to via side channels. That means site isolation has to be a priority.

As a temporary fix, various timing channels have had their precision reduced, but that's only a partial workaround at best. Mozilla also wants to enable parallelism primitives for JavaScript that can be (mis)used to gather high-precision timing data.
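To see why such primitives amount to a high-resolution timer, here is a rough sketch of the well-known SharedArrayBuffer counting-thread technique (illustrative only; the counter.js worker is invented for the example):

```typescript
// A worker spins, incrementing a shared counter; the main thread reads the
// counter as a high-resolution clock, sidestepping coarsened timer APIs.
const sab = new SharedArrayBuffer(4);
const ticks = new Int32Array(sab);

// counter.js (the worker, shown here as a comment for brevity):
//   onmessage = (e) => {
//     const t = new Int32Array(e.data);
//     while (true) Atomics.add(t, 0, 1); // spin, advancing the "clock"
//   };
const worker = new Worker("counter.js");
worker.postMessage(sab);

// Attacker-style measurement: time an operation whose duration depends on
// secret state (e.g. whether a cache line is hot) in counter "ticks".
function measure(op: () => void): number {
  const start = Atomics.load(ticks, 0);
  op();
  return Atomics.load(ticks, 0) - start;
}
```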

Fine-grained multiprocess has some downsides, but Mozilla can't afford to lag behind in security and privacy.

    • Yeah damn that completely voluntary update process that I could disable at any time. Damn it to hell!

  • by Anonymous Coward

    I don't understand why processes are being used to provide security. Can someone explain it in more detail? If there aren't any bugs in the code, then it shouldn't matter where anything is running because it won't be able to do anything it's not supposed to do. If there are bugs in the code, why wouldn't they be able to exploit them to communicate with the other processes and cause just as many issues? I would think spending time implementing a simpler thread pool with everything being task based would

    • by AHuxley ( 892839 )
Each malware-filled web page and tab gets its own part of the CPU and memory to stay in.
Faster, too, and the OS can still keep up in the background as it has its own part of the CPU.
Everyone gets a part of the CPU.
      • by rtb61 ( 674572 )

I don't know, I struggle to believe that. Hell, Mozilla is incapable of shifting the tab bar back below the address bar where it belongs, so as for fancy stuff, I am not so sure any more.

    • by Immerman ( 2627577 ) on Wednesday February 06, 2019 @11:16PM (#58082034)

      > If there aren't any bugs in the code,
      Ha! Good one!

      > If there are bugs in the code, why wouldn't they be able to exploit them to communicate with the other processes and cause just as many issues?
      You might be able to, but you might not - it depends entirely on the nature of the bugs.

Basically, security programming amounts to putting multiple layers of armor around something, knowing full well that none of the layers is perfect. However, each layer makes it more difficult (read: expensive) to get to the chewy center, at least early on, before the vulnerabilities are well known.

And when someone inevitably does find a way through, and the developers learn of it? Then that "one" vulnerability is actually a list of the vulnerabilities that were exploited in each layer of armor - fix any one of those holes and you're safe again, at least until they find a new way through that layer of armor. Fix most or all of them, and you send them back to the drawing board.

    • by Tailhook ( 98486 ) on Thursday February 07, 2019 @12:25AM (#58082184)

      I don't understand why processes are being used to provide security.

      Processes leverage MMU hardware to achieve memory isolation such that each process has a private address space that can't be violated by another process without either compromising the OS or overcoming the MMU (rowhammer/spectre/etc.) You will now argue that the processes in a multi-process browser already communicate, pretending that this communication is unfettered by any limits. It is not. The browser designers control this communication with the intention of defending against compromised processes by dropping unnecessary privileges and minimizing the IPC attack surface.
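A toy illustration of that privilege-dropping, surface-minimizing pattern, using Node's child_process as a stand-in for a browser's parent/content split (renderer.js and the message shapes are invented; Firefox's actual IPC layer is different, IPDL-generated C++):

```typescript
// Parent-side sketch: spawn an unprivileged renderer and honor only a
// small, explicitly whitelisted set of request types over IPC.
import { fork } from "node:child_process";

const renderer = fork("./renderer.js"); // hypothetical content process

type RendererRequest =
  | { kind: "fetch"; url: string }
  | { kind: "paint"; frame: number };

renderer.on("message", (msg) => {
  const req = msg as RendererRequest;
  switch (req.kind) {
    case "fetch":
      // Validate before acting: a compromised renderer can send anything,
      // so the parent re-checks every request against its own policy.
      if (new URL(req.url).protocol !== "https:") return;
      // ...perform the network request on the renderer's behalf...
      break;
    case "paint":
      // ...composite the frame...
      break;
    default:
      renderer.kill(); // unknown message type: assume compromise
  }
});
```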

      why wouldn't they be able to exploit them to communicate with the other processes and cause just as many issues?

Because the OS and the MMU are specifically designed to prevent unprivileged processes from communicating with other processes. You will now argue that OSes aren't perfect and chips have flaws, and so such designs are pointless. You will do this despite the fact that your proposal relies on hypothetical bug-free systems as well, as we see here:

      If there aren't any bugs in the code...

You're free to fantasize about bug-free systems, but the purveyors of real software must contend with bugs. Bugs in extensions, third-party dependencies, compilers and their runtimes, drivers, and every other conceivable thing. Any exploited flaw delivers the entire address space of your thread-pooled browser and everything it's doing with no further effort. Process isolation at least offers an impediment to further compromise beyond the exploited process.

      Google was right to design Chrome as they have, and Mozilla has been remiss in taking this long to copy it.

      • by Kjella ( 173770 ) on Thursday February 07, 2019 @02:01AM (#58082382) Homepage

You're free to fantasize about bug-free systems, but the purveyors of real software must contend with bugs. Bugs in extensions, third-party dependencies, compilers and their runtimes, drivers, and every other conceivable thing. Any exploited flaw delivers the entire address space of your thread-pooled browser and everything it's doing with no further effort. Process isolation at least offers an impediment to further compromise beyond the exploited process.

And even if it's not malicious or exploitable, it'll crash everything. That was my main annoyance: if you get one misbehaving tab in Chrome, you can sort by CPU/memory use, then find and kill it if it doesn't die on its own. In Firefox it was a "which tab is killing it now" guessing game.
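The per-process model makes that guessing game largely unnecessary by construction; here is a toy sketch of the idea, again with Node's child_process standing in (the renderer module and the tab bookkeeping are invented):

```typescript
// One renderer process per tab: a runaway tab can be killed (or crash)
// without taking the rest of the browser down with it.
import { fork, ChildProcess } from "node:child_process";

const tabs = new Map<number, ChildProcess>();

function openTab(id: number): void {
  const proc = fork("./renderer.js"); // hypothetical per-tab renderer
  proc.on("exit", () => tabs.delete(id)); // one tab dying leaves the rest alive
  tabs.set(id, proc);
}

function killMisbehavingTab(id: number): void {
  tabs.get(id)?.kill("SIGKILL"); // only this tab's process is affected
}
```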

  • Local Render Server? (Score:4, Interesting)

    by 0100010001010011 ( 652467 ) on Wednesday February 06, 2019 @10:48PM (#58081940)

    How long until we have some HTML5/CSS/JS hardware accelerated chip to do the actual rendering and just pass the display information to a 'thin client'?

At some point it's going to be faster to use X11 forwarding/VNC to a bigger machine somewhere else to handle the latest JS framework.

    • I think you just suggested that we either switch to multi-core CPUs (welcome to 2010), or we switch to cloud rendering (over my dead body).

    • How long until we have some HTML5/CSS/JS hardware accelerated chip to do the actual rendering and just pass the display information to a 'thin client'?

At some point it's going to be faster to use X11 forwarding/VNC to a bigger machine somewhere else to handle the latest JS framework.

I think it was either Opera Mini or Opera Mobile that actually did this to run their browser at reasonable speeds on early feature phones: it would render the web pages on their servers and send the output, compressed, to the phone, where it would decompress it and show the rendering.

    • by afidel ( 530433 )

Well, Windows is nearly there; they actually allow you to run the browser engine in a VM if you have Enterprise and configure it to do so. Unfortunately, with the recent announcement that they are moving Edge's rendering engine from EdgeHTML to Chromium, I'm not sure if the feature will be supported going forward. I hope it is, as it's one of the better end-user security features now that Outlook isn't such a steaming pile.

  • >> After a year of secret preparations,

    Can someone help me square the "open" part of OSS with "a year of secret preparations" please?
    • by roca ( 43122 ) on Thursday February 07, 2019 @12:06AM (#58082130) Homepage

      Translation: "After a year of open discussion we didn't notice until now,"

Here, for example, is an overview of memory-usage reductions related to Fission, from July 2018: https://mail.mozilla.org/piper... [mozilla.org]

    • Re: (Score:2, Informative)

      by Anonymous Coward

We prepared this in 2014 already. We had to move stuff around until we could actually multi-process at all. And sandbox. And move things around more.
Firefox OS helped a lot in making this happen, because Mozilla management didn't care one bit, but it was required for Firefox OS to work well enough (it's now KaiOS, the 3rd most widespread mobile OS). Chrome was already ahead at the time, of course, and they were already dealing with this as well. For them, maybe it was secret, but their employees live one floor over and keep ta

  • by Anonymous Coward

    So now I’m going to have like 500-600 additional processes running on my box every day. Hrm.

So instead of taking 100% of my CPU, Firefox will be able to take 500% or 1000% of my CPU (100% for every tab I have open that Firefox is spinning on for some reason).

    And so instead of nearly crashing my machine by hogging resources, it most certainly will.

Please, Firefox devs, get the CPU and memory leaks, JavaScript wedging, etc., under control before splitting things into more processes (which will just further hide such performance/memory leaks for now).

    At least one might (supposedly?) be able to kill
