
Firefox Changes Its User Agent - Because of Internet Explorer 11 (ghacks.net) 68

2022 was the year that Microsoft retired its Internet Explorer web browser (to concentrate on its Chromium-based Microsoft Edge browser).

Yet Ghacks reports that Internet Explorer "is still haunting some from its grave." Some websites and apps use code to determine the user agent. The user agent informs the site about several parameters, including the used web browser (engine) and operating system. When done correctly, it may reveal the used browser and that may then lead to a custom user experience.

When done incorrectly, it may lead to false identification, and that is exactly what is happening on some sites currently: inaccurate user agent sniffing causes them to identify Firefox as Internet Explorer.

Internet Explorer 11's user agent ends by identifying its release version as rv:11.0, the article points out. So when a Firefox user visits a website using Firefox 110 (or any other version up to Firefox 119), "The site in question checks for rv:11 in the user agent [and] Firefox's rv:110 value is identified wrongly as Internet Explorer."
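The bug is easy to reproduce. A minimal sketch in Python, assuming a site uses the bare substring check the article describes (illustrative only, not any specific site's code):

```python
import re

def is_ie11_naive(ua: str) -> bool:
    # Buggy: a bare substring check for "rv:11" also matches rv:110 .. rv:119.
    return "rv:11" in ua

def is_ie11_fixed(ua: str) -> bool:
    # Correct: anchor the version so rv:11.0 matches but rv:110.0 does not,
    # and require IE's Trident engine token as well.
    return re.search(r"rv:11\.0\b", ua) is not None and "Trident" in ua

firefox_110 = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:110.0) "
               "Gecko/20100101 Firefox/110.0")
ie_11 = "Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko"

print(is_ie11_naive(ie_11))        # True, as intended
print(is_ie11_naive(firefox_110))  # True -- Firefox 110 misidentified as IE 11
print(is_ie11_fixed(firefox_110))  # False
```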

Instead of risking problems with functionality, compatibility, or other display issues for Firefox versions 110 through 119, Mozilla has "decided to freeze part of Firefox's version." Instead of echoing rv:110, rv:111 and so on up to rv:119, Firefox returns rv:109 instead. The end of the user agent string still displays the actual version of Firefox. Mozilla plans to restore the original user agent of Firefox with the release of Firefox 120. The organization plans to release Firefox 120 on November 21, 2023.
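What the freeze looks like in practice can be sketched as follows, using Firefox 115 as an example of a frozen release: the rv: token stays at 109, while the trailing Firefox/ token carries the real version.

```python
import re

# User agent as frozen by Mozilla for Firefox 110-119 (here: Firefox 115).
frozen_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) "
             "Gecko/20100101 Firefox/115.0")

rv = re.search(r"rv:(\d+)", frozen_ua).group(1)
real = re.search(r"Firefox/(\d+)", frozen_ua).group(1)

print(rv)    # "109" -- frozen to dodge broken rv:11 sniffing
print(real)  # "115" -- the actual version, still at the end of the string
```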
This discussion has been archived. No new comments can be posted.

  • by shm ( 235766 ) on Sunday January 01, 2023 @06:20PM (#63172910)
    • At least Windows has had something in the neighborhood of 9 actual versions.

      Firefox doesn't have anything close to 110 versions, maybe 5 or 6 before they went to "rolling suck".

      • by AmiMoJo ( 196126 )

        This is an example of why text-formatted data can be worse than binary. People complain about systemd binary logs, but here you have a great example of how text values don't enforce format or make parsing unambiguous.

        • Care to elaborate?
          • by Anonymous Coward on Monday January 02, 2023 @04:33AM (#63173598)

            At a guess, GP's thinking is probably that "text" allows partial matching that then turns around and breaks things. That's an error in implementation in the face of changing (though foreseeable) data specification. The logical step that leads to GP's conclusion is that if only it were binary it would have had to be specified much more tightly, and therefore "binary can be better". Thing is, these things are orthogonal in principle even if they seem to correlate (somewhat) in practice.

            The systemd example is a good example of "why binary can be worse" for different reasons, off the top of my head at least two of them: First, loglines are meant for humans, even if they have a format that is often run through log analysers. When you need to consult the log is usually when something has gone awry and you're under time pressure to fix it, now! Well, the sysadmins among us are; Poettering's attitude, and possibly GP's, shows they're not sysadmins. So a binary format that requires programs to consult, programs that are then at high risk of not functioning for this or that reason and are not part of the usual "lingering" toolset that is much more likely to keep trucking, shuts you out of the loglines when you need them most. Second, it's even more fun if your loglines are entirely inaccessible beyond some point before the part you actually need, because some program shat itself and wrote a broken one. Depending on the details of the format, "binary" is much more prone to this. Dealing with that thus requires detailed knowledge, which eats up headroom better reserved for dealing with the actual problem rather than for re-enabling your ability to find out what the problem is.

            Notice that loglines are actually more predictable than http user-agent strings, for all that the latter are more repetitive. It's a rare one that doesn't have "mozilla" in it somewhere, but then you get a whole zoo of possible values that change over time; there's no set format to those, and lazy webmonkeys like to just do partial matching to get a "good enough for today" answer that goes stale quickly. Parsing loglines correctly from twenty years ago is actually easier than parsing user-agent strings from even ten years ago, while keeping up with today. The problem isn't parsing old strings, but parsing new strings. Extrapolating forward, loglines still come out ahead.

            Coming back to http user-agent strings, they never were intended to discriminate against user-agents, so doing that is already abuse. And doing it shoddily now sees mozilla bend over backwards, indeed just like skipping "windows 9", to work around specific failures in that abuse. But these are data-based errors, and that can happen "in binary" just as much, possibly even worse, depending on the details of the format. Remember "heartbleed", for one? Go look up how parsing binary went hopelessly awry in that one.

            So, I'd say that our lovely and talented AmiMoJo missed the mark on this one due to having just a little, but not quite enough, understanding of the problem spaces involved.

            • The main problem with journald's binary logging is how space inefficient it is. The json export is smaller than the on-disk format, even though every message contains the same redundant field labels. They paid the costs of binary-format log storage and the only thing they got in exchange is tamper-evidence which is not as good as forwarding to write-only remote storage. Journald is slightly better for a rare use case (enemy action), but much worse for the typical case (garden variety software bugs and har
        • You can parse and interpret binary data any way you want, so I'm not sure how that would be any better for someone not interested in closely following the specs.

          • by AmiMoJo ( 196126 )

            A field specified as an IEEE float or a 16-bit major/minor version pair is unambiguous. You wouldn't have an issue with version numbers starting with 11.
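The point about a fixed binary layout can be sketched like this (a toy encoding for illustration, not any real protocol):

```python
import struct

def encode_version(major: int, minor: int) -> bytes:
    # Two unsigned 16-bit integers, big-endian: fixed width, no
    # substring matching possible.
    return struct.pack(">HH", major, minor)

def decode_version(blob: bytes) -> tuple:
    return struct.unpack(">HH", blob)

# Versions 11.0 and 110.0 occupy the same four bytes but can never
# be confused with each other:
assert decode_version(encode_version(11, 0)) == (11, 0)
assert decode_version(encode_version(110, 0)) == (110, 0)
assert encode_version(11, 0) != encode_version(110, 0)
```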

  • by account_deleted ( 4530225 ) on Sunday January 01, 2023 @06:34PM (#63172936)
    Comment removed based on user account deletion
    • by znrt ( 2424692 )

      if anything, it helps to identify crappy web developers you should stay away from. checking the useragent string is the utterly wrong way to address browser capability. it shouldn't be used for anything except maybe usage statistics (and it isn't even really reliable for that).

    • It only matters for web framework developers (the fraction of those who care about Firefox) who will not be able to test for presence of Firefox supported features that are released between versions 110 and 119, or at least not using UserAgent. Probably does not matter much, otherwise Mozilla would have changed UserAgent in another unique way to solve the ambiguity, like using 1I0 to 1I9.

    • I see that it's time to give up the user agent string completely since it's used to fingerprint the user.

    • Whereas Edge claims to be "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.54"
      So it is pretending to be Mozilla, Konqueror, Chrome, and Safari.
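This is why UA parsers that do bother have to test the most specific token first; a sketch with a hypothetical helper, assuming only a handful of browsers matter:

```python
def guess_browser(ua: str) -> str:
    # Order matters: Edge's UA also contains "Chrome" and "Safari",
    # and Chrome's UA also contains "Safari", so the most specific
    # token must be tested first.
    for token, name in (("Edg/", "Edge"), ("Firefox/", "Firefox"),
                        ("Chrome/", "Chrome"), ("Safari/", "Safari")):
        if token in ua:
            return name
    return "unknown"

edge_ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
           "(KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 "
           "Edg/108.0.1462.54")
print(guess_browser(edge_ua))  # "Edge", despite the other engine names present
```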

  • They need to just silently break. When people come with bug reports, they can send those same people to admin of the broken websites with "the website is bugged, it's not firefox's problem."

    Either one of two things happen: the broken websites lose visitors, or they fix their brokenness.

    • "silently break" why silent? Just wondering.
    • by ttfkam ( 37064 ) on Sunday January 01, 2023 @07:24PM (#63173016) Homepage Journal

      That might work if Firefox still had 30% market share. Truth be damned, folks will simply say the site works in Chrome, so Firefox will be blamed. And even if they recognize that Firefox isn't at fault, it's easier to tell people to use Chrome than to get all those broken web sites to patch.

      Rock. Hard place. Pragmatism.

      • Both with personal and work shit. I refuse to give up FF, I'm pretty much alone. I get told by 3rd parties whose shit we pay for to just use Chrome, same by other departments when their shit doesn't work, etc.

      • Many of us already do this. Consider any Windows user, they cannot run Firefox without having an alternate browser installed on their system, and some even have Chrome as well. If something doesn't work, it's a simple click or two and a different browser is pulled up. Honestly internet users give few (if any) shits about this these days.

    • They need to just silently break.

      The problem is that what will really happen in that case, is a lot of users will say "Oh this browser isn't working for some sites, guess I'll just switch to some other browser..."

      Especially for more fringe browsers. They can't afford to have some websites not work for such a simple reason, as much as it would technically be the right thing to forge ahead with plain version updates.

    • by Sigma 7 ( 266129 )

      Overtly break those websites. If a website doesn't support browsers because it can't parse User-Agent, then browsers shouldn't support those websites. When the broken website checks User-Agent, kill JavaScript so that it no longer works, dumping a message in the console. Also, give no User-Agent info, let them go blind on who they're handling.

      Works best if all the browsers do it all at once.

      • Works best if all the browsers do it all at once.

        That's what Karl Marx said about Communism. All people working together, placing the interests of others over their self-interest. Yeah, sure.

      • Good luck with that idea.

        First of all, the browser doesn't know whether the web site is "broken." It just renders what it is given. The web site developers are the ones who have to know which browser is being used, and often need to make changes to their code to support those specific browsers. So there's no way for Mozilla to "overtly" break web sites that don't properly parse the user agent string.

        Then, there are many reasons we *want* the user agent string to accurately identify the browser. If two brows

        • by tepples ( 727027 )

          First of all, the browser doesn't know whether the web site is "broken." It just renders what it is given.

          If enough users of Firefox have chosen the "Report broken website to Mozilla and attempt to fix it" option from the menu, the browser can know to avoid sending anything likely to cause the server to give it nonfunctional markup.

        • by Sigma 7 ( 266129 )

          If two browsers render certain tags or styles differently, as a developer we need to know that so we can get the site to render properly on that browser.

          This was more of an issue in the past, rather than an issue of a user running a legacy or esoteric web browser. There's almost no reason to support legacy browsers anymore, unless it's for a website that allows downloading another web browser. Esoteric browsers should fix themselves instead of needing web admins to correct things.

          In modern web development,

          • The subject at hand is Firefox, which does not use the same rendering engine as Chrome / Edge. I never brought up legacy browsers; I agree, these need to go away, particularly Internet Explorer. Toolkits like Bootstrap have made cross-browser development much easier, but even they still have issues from time to time. https://getbootstrap.com/docs/... [getbootstrap.com]

            Yes, developers *should* use a library that identifies the browser. But in many cases they don't. It might be legacy code that nobody has touched in years, befor

    • by Tony Isaac ( 1301187 ) on Sunday January 01, 2023 @07:56PM (#63173066) Homepage

      Most users aren't smart enough to understand whether a problem is caused by the web site or the browser. But if they are using Firefox, they will be smart enough to try the "broken" site in, say, Chrome. When they find that it works in Chrome, they'll see it as a Firefox issue, regardless of whether it's really a problem caused by the site. This will lead to more people migrating away from Firefox, something Mozilla doesn't want.

      It might seem nice and easy to say "just let it break." But there are real consequences to Mozilla, that add up to users and dollars lost.

    • Comment removed based on user account deletion
    • by Megane ( 129182 )

      Right now I have no way to log out of Slashdot because in the past few weeks, someone fucked with the top bar code. All I see in the upper right corner is the effectively useless Submit button, and the bottom few pixels of the search field peeking out from under the dark green bar.

      For years, posting a non-anonymous message has not properly converted the message input to a regular message in the web page. It just leaves the buttons there, but if I post an anonymous message, it works properly. I'm only guess

  • by oldgraybeard ( 2939809 ) on Sunday January 01, 2023 @07:00PM (#63172990)
    Seems to me that would be server side. And not being able to tell 11 from 110-119 looks like sloppy programming. Besides, can't the user agent be spoofed anyway?
    • Super easy to change them. Chrome and Firefox both have built-in ways to change them, and for non-technical users there are plugins that will do it.
    • by thegarbz ( 1787294 ) on Monday January 02, 2023 @08:46AM (#63173776)

      It is all the things you say, but what this really is is Mozilla trying their best to make it not look like a Firefox issue, which is what 100% of users would default to thinking if they open a webpage and it is broken.

      The internet has a long history of working around stupid web-designers, and much of this history involves messing with the UA string.

  • by williamyf ( 227051 ) on Sunday January 01, 2023 @07:33PM (#63173026)

    Instead of the current:

    Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:108.0) Gecko/20100101 Firefox/108.0

    or somesuch, slowly evolve it to be something along the lines of:
    Std Compliant Browser Supported (Windows NT 11; Arch[x32 x64 AArch 64])

    And a few days before the browser goes out of support, send an out of band patch and change it to

    Std Compliant Browser Unsupported (OS [Win/mac/linux/bsd]; Arch[x32/x64/AArch 64])

    In this day and age, just telling the web developer that the browser complies with standards, and whether it is still supported with security patches or unsupported, should be enough. All that extra crap is fluff.

    If Apple and Google also did this, all this "best viewed with [insert browser name here]" would be a thing of the past.

    I am allowed to dream, am I not? After all, dreaming is free.

    • by AmiMoJo ( 196126 )

      User agents are going away anyway. Chrome is phasing them out.

      • Re: (Score:3, Informative)

        by Anonymous Coward

        Yeah, right. That was supposed to happen almost a year ago. Never happened. https://developer.chrome.com/e... [chrome.com]

        The current Chrome User Agent is Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36, well past the target version of 101.

    • > the browser complies with standards

      None of the browsers are standards-compliant. They each implement a subset.

      That's how we got here.

  • by jcdr ( 178250 ) on Sunday January 01, 2023 @07:37PM (#63173038)

    For example: 2023-01-01
    It brings a better idea to humans.
    It can be automated very easily from the commit date.
    I usually use a COMMIT_DATE constant in ISO 8601 down to the second UTC
    For example: 2023-01-01T00:35:12Z
    Very cool for continuous integration build and testing.
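The property this scheme relies on is that ISO 8601 timestamps sort lexicographically in chronological order (fixed width, most-significant field first), so plain string comparison works with no version parsing at all:

```python
# Hypothetical COMMIT_DATE values in ISO 8601, down to the second, UTC.
v_old = "2022-12-26T09:14:03Z"
v_new = "2023-01-01T00:35:12Z"

# Lexicographic order == chronological order for this format.
assert v_old < v_new
assert max(v_old, v_new) == v_new
```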

    • That will be a problem with ESR releases.

      Firefox 102.6.0 ESR might have been built on 2023-01-02.

      Firefox 108.0.1 might have been built on 2022-12-26.

      You can't just look at the build date to determine which is more up-to-date or supports the latest features. Neither should developers be looking at the user agent string.

      • by jcdr ( 178250 )

        I don't see any difference: ESR and non-ESR are not the same product branch. You have to choose one branch, then you only look at the version on that branch.
        I did not say not to use the branch name. What I said is that the version on each branch would be based on ISO 8601.

    • Just list supported highest HTML version, Javascript version and CSS version in the request. Nothing more.
      The web sites never need more.

      • by jcdr ( 178250 )

        You are right obviously.
        Using the build version might still be useful to trace the observations brought back by users before there is an understanding of the cause.

        • by Z00L00K ( 682162 )

          That still doesn't have to go into the user agent string. It could be in the "About" dialog of the browser, because that's something a user reporting an issue should be able to find.

  • Wrong Move (Score:1, Interesting)

    by Bahbus ( 1180627 )

    You do not build a browser (or update one) around broken websites and crappy coders. If a website doesn't display properly, it's the website's owner's problem. I don't care if it's the most important website in the world. Stop coddling the lazy, shitty coders building shitty sites. Stop coddling the lazy, shitty users who seem unable to switch to a browser that works. Any website checking for useragents nowadays to actually *do* anything is already garbage and not worth visiting. Build your website using th

    • Re:Wrong Move (Score:5, Insightful)

      by Dutch Gun ( 899105 ) on Sunday January 01, 2023 @10:50PM (#63173290)

      That's mighty easy for you to say, as you lose absolutely nothing by talking a tough game. To the end user, Chrome works, and Firefox doesn't. What are they to conclude from that? In the scenario you describe, Firefox is pretty much guaranteed to be the only loser. Why the hell would Mozilla want to encourage its users to switch to another browser to view a badly coded web site?

      Like it or not, this is probably the only pragmatic move they can make here.

      • Also, to the manager or executive, the feature they want to or have been asked to implement is supported by Chrome, but not by Firefox. And they give zero shits about standards compliance. Google might pagerank down things which don't work in any browser, but definitely not sites that break in Firefox but not in Chrome.

        • by Bahbus ( 1180627 )

          Something implemented by a single browser only? I don't give a shit. Don't use it. The manager or executive asking for it? They can go fuck off. If they don't give a shit about standards compliance, I don't give a shit about them, and I'm not doing it.

      • by Bahbus ( 1180627 )

        I really don't give two shits what the end-users conclude. I don't care if Mozilla disabled the websites in question with a message saying "This website is coded like shit! We've automatically let the owner know!"

    • You do not build a browser (or update one) around broken websites and crappy coders.

      You literally build everything around the stupidity of others. Not just browsers by the way, but operating systems which by default block network activity of programs, or programming languages that are memory safe, or APIs that are purposefully restrictive as well.

      There is no aspect of the modern computer that doesn't have some consideration of the fact that crappy coders exist baked into it.

      • by Bahbus ( 1180627 )

        No. They are designed around the security against attackers, with considerations for the fact that mistakes can and do happen - which is not at all like the example here. These websites don't have little mistakes or typos that are making them a problem, nor are they maliciously bad trying to harm your computer. They were just deliberately designed in a stupid and shitty way.

        Your metaphor was as bad as Firefox's market share is low.

  • by gweihir ( 88907 ) on Sunday January 01, 2023 @08:13PM (#63173104)

    People, if you do not program for W3C compatibility and instead code for specific browsers, then you are incompetent hacks, no excuses. Learn to do stuff right and crap like this does not happen anymore.

    • by dgatwood ( 11270 )

      People, if you do not program for W3C compatibility and instead code for specific browsers, then you are incompetent hacks, no excuses. Learn to do stuff right and crap like this does not happen anymore.

      If only the real world didn't require you to put in boatloads of workarounds for bugs in major popular browsers. Woe be unto anyone trying to support HTML editing, for example.

      • by gweihir ( 88907 )

        Just be pragmatic and use restraint with the feature set you are using. You know, like a real engineer, not some hack.

        • by dgatwood ( 11270 )

          Just be pragmatic and use restraint with the feature set you are using. You know, like a real engineer, not some hack.

          Contenteditable has only been around since what, 2007?

    • then you are incompetent hacks, no excuses. Learn to do stuff right

      They don't care. "Right" by definition for a professional programmer is: "Works for the use case requested, and I got paid, now on to the next contract."

      The customer doesn't pay extra for nice code. The user doesn't give a fuck as long as it works, and when it doesn't they blame the browser.

      You are providing zero incentive.

      • by gweihir ( 88907 )

        It is not that simple. A crappy market is always the fault of both buyer and seller. Well, I guess we will need regulation to fix this mess. That means no more self-taught "coders", no more coders with the wrong type of education, and liability for what you produce. The tragedy of the commons all over. Always the same crap with the human race.

  • ... to identify the browser just by the version, and just till it matches on top of this?

    It would be funny if both the hell and the heaven for programmers was technical support of the code they wrote - for eternity ;-)

    • It's worse. Internet Explorer is disguising itself because reading the user agent was a necessity when dealing with IE. Microsoft was trying to circumvent it by changing the User Agent header to Firefox or other browser's UA. This made it impossible to detect IE, but the necessity was still there. In the meantime, people have found out what spoof-headers IE really sends, and started to use those to detect the broken browser. This is what is biting Mozilla now.

      So yes, it is disastrous programming, but not by

  • by Anonymous Coward

    Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 10.0; Win64; x64; Trident/7.0; Microsoft Outlook 16.0.15726; Microsoft Outlook 16.0.15726)

    Put in a bad URL in Outlook ...
    Make sure the web address //ieframe.dll/dnserrordiagoff.htm# is correct

  • ShonenWare ftw!

    https://en.wikipedia.org/wiki/... [wikipedia.org]

    Shonen Knife Top of the World MJ090116

    https://www.youtube.com/watch?... [youtube.com]

"The medium is the massage." -- Crazy Nigel

Working...