
Microsoft Removes 260-Character Path Length Limit In Windows 10 Redstone (softpedia.com)

An anonymous reader quotes a report from Softpedia: Windows 10 build 14352, a preview version of the upcoming Anniversary Update (also known as Redstone), comes with an eagerly awaited change that Microsoft hasn't yet announced publicly. The 260-character path length limit in Windows can be removed with the help of a new policy, allowing you to run operations on files regardless of their path or file name. The new rule is not enabled by default, but admins can turn it on by following these instructions. Launch the Registry Editor by clicking the Start menu and typing "regedit.exe," then navigate to the following path: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Group Policy Objects\{48981759-12F2-42A6-A048-028B3973495F}Machine\System\CurrentControlSet\Policies. Look for an entry called "LongPathsEnabled"; if it does not exist, right-click Policies, select New > DWORD (32-bit) Value, name it "LongPathsEnabled" (without the quotes), set its value to 1, and you're good to go. The description of the policy reads, "Enabling NTFS long paths will allow manifested win32 applications and Windows Store applications to access paths beyond the normal 260 char limit per node. Enabling this setting will cause the long paths to be accessible within the process." While Windows 10 preview build 14352 was made available last week, according to Windows Central, a Microsoft team member says that the company could release Windows 10 Mobile build 14352 for Insiders on Tuesday, May 31.
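For admins who would rather script the change than click through regedit, here is a minimal sketch using the Win32 registry API. The key path and value name are the ones quoted above; treat it as an illustration rather than an official tool, run it under the account whose HKCU hive you want to change, and back up the registry first.

```c
/* Sketch: set LongPathsEnabled = 1 under the Group Policy Objects key
 * quoted in the summary. Link with advapi32.lib. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *subkey =
        "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Group Policy Objects\\"
        "{48981759-12F2-42A6-A048-028B3973495F}Machine\\System\\CurrentControlSet\\Policies";
    HKEY key;
    DWORD enabled = 1;   /* 1 = allow long paths, 0 = keep the 260-char limit */

    LONG rc = RegCreateKeyExA(HKEY_CURRENT_USER, subkey, 0, NULL, 0,
                              KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegCreateKeyExA failed: %ld\n", rc);
        return 1;
    }
    rc = RegSetValueExA(key, "LongPathsEnabled", 0, REG_DWORD,
                        (const BYTE *)&enabled, sizeof(enabled));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```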
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday May 30, 2016 @11:32PM (#52214655)

    There's nothing simple about fucking around in the registry. Why can't Microsoft just do things correctly the first time?

    • by Gadget_Guy ( 627405 ) on Tuesday May 31, 2016 @12:15AM (#52214807)

      There's nothing simple about fucking around in the registry.

Really? If you have a problem with the registry, how do you cope with the file system with all its folders? Or even the nested comments of Slashdot? I think that you are making this out to be a much bigger problem than it really is.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

> Really? If you have a problem with the registry, how do you cope with the file system with all its folders?

        I don't give them names like {48981759-12F2-42A6-A048-028B3973495F}, for starters.

      • by meerling ( 1487879 ) on Tuesday May 31, 2016 @01:32AM (#52215043)
The registry is actually fairly basic; it's just also huge and pretty poorly documented. Yes, you can screw up your computer pretty badly if you do the wrong thing, just like messing with stuff in the system directory (or, in ancient history, the DOS directory), but that's why you should be careful, have a backup, and not play around or randomly experiment.
Anyone who feels comfortable changing a simple registry entry is almost guaranteed to be able to do this without issue. Anyone who isn't probably doesn't even know what this change does in the first place.
If you have a problem with the registry, how do you cope with the file system with all its folders?

The problem with the registry isn't that it is hard to navigate a hierarchical database, but the way it is being used, apparently to deliberately obfuscate the way applications are configured. As a result, it has become an obscenely hideous structure. Compare this to the traditional UNIX style of configuration: simple text files requiring just a text editor and a manual telling you what to do (another thing that is very often absent in Windows). And even if the manual doesn't exist, you can often make re

        • The registry is supposed to enforce a common method of storing settings across all Windows apps. Of course, that laudable idea has then been tested by twenty odd years of pretty much everyone writing software for Windows in their own special way, and maybe about 10% bothering to follow Microsoft's standards. And of course, even different departments of Microsoft have their own ideas of how things should be done, so Office does things differently to SQL Server and so on.

          tl/dr the idea behind the registry wa

        • by EvilSS ( 557649 )
          For 99% of applications it's pretty simple actually. One HKLM\Software (or 64 bit equiv) key, and one HKCU\Software key per user. As long as you are not dealing with something that includes drivers or something crazy like that.
      • There's nothing simple about fucking around in the registry.

Really? If you have a problem with the registry, how do you cope with the file system with all its folders? Or even the nested comments of Slashdot? I think that you are making this out to be a much bigger problem than it really is.

        Yes, of course it's easy! That's why I added a desktop shortcut for every user that takes them right into the registry.

        After all, it's as easy to use as Windows Explorer, and what could possibly go wrong....

  • Finally (Score:5, Funny)

    by fustakrakich ( 1673220 ) on Monday May 30, 2016 @11:32PM (#52214659) Journal

    I can replace my Linux machine!

• I'm guessing this innovation was partly in response to supporting WSL, so one can install Ubuntu on an NTFS filesystem.

      • Re:Finally (Score:5, Informative)

        by bloodhawk ( 813939 ) on Tuesday May 31, 2016 @12:17AM (#52214809)
NTFS doesn't have a 260-character limit; its limit is 32K. The limitation is in Windows itself, and it seems to be another one of those limits kept purely for backward compatibility, where a shitton of apps can't handle long paths.
        • by AmiMoJo ( 196126 )

          What sort of thing would you use >260 character paths for? I'm not questioning the need, I'm just looking for people with practical applications because it's interesting.

          • You mean you don't use your favorite Twitter comments as each branch of the file tree?
• At work it happens all the bloody time. We have a very large file share, around 10 TB, of files generated when we do projects for our clients. Frequently our account execs will try to organize one of our larger client folders and end up nesting files and folders so deeply that the data becomes inaccessible. It's pretty easy to do when many documents are generated by Mac users, who give zero fucks about file and folder name length.

            Also, I will bet that if you fire up powershell and do a "get-childitem *

          • by Zocalo ( 252965 )
It's quite a common issue on very large projects where there might be a network file system based file repository instead of a document management system. You'll quite often see insanely deep directory structures to keep things organized and try to let people find exactly what they are looking for (which seldom works, because there's *always* a bunch of files that refuse to be pigeonholed like that). Generally not a problem on network servers, but when you try and copy a chunk of the directory tree over
          • Re:Finally (Score:4, Informative)

            by cdrudge ( 68377 ) on Tuesday May 31, 2016 @08:36AM (#52216185) Homepage

            I've seen paths that tried to be longer than 260 characters with archives off of Usenet. Some idiot will use the filename as a text message. When it gets extracted the path becomes some_stupidly_long_200_character_filename/some_stupidly_long_200_character_filename.ext. Since the path then becomes too long, extraction fails.

            The above isn't really a legitimate filename. It's being abused. But for a legitimate example, a common way to organize a HTPC movie collection is the format \Movies\[first_letter]\Title\Title.mkv. So Finding Nemo for instance would be \Movies\F\Finding Nemo\Finding Nemo.mkv. If you have a very long movie title (for example [imdb.com]) then you legitimately would have a path too long if you used the full movie name.

            • by AmiMoJo ( 196126 )

I remember people trolling the network admins with very long path names. If you created a directory with a long name, and then pasted another directory with a long name inside it, back on XP it couldn't be deleted via Explorer. You had to use a command prompt and the 8.3 name.

That was in the 2000s though; back in my day it involved putting "spaces" in DOS file names by typing Alt-2-5-5. Or better still, put the space at the end of the file name.

          • Comment removed based on user account deletion
• The limitation is also built into .NET for backwards compatibility. As a result, PowerShell can't work with long file paths either. My understanding is that there are .NET libraries you can use to add the capability to your applications.

However, cmd.exe can access long paths. You can address them by using the extended-length prefix "\\?\[drive letter]:\[path to directory or file]". Most commands work. Rename is a notable exception because it interprets the '?' as a wild card.

      • by l3v1 ( 787564 )
        Innovation? Really?
  • by __aaclcg7560 ( 824291 ) on Monday May 30, 2016 @11:40PM (#52214689)
    I no longer have to maintain a relatively flat file directory structure? My directories can finally go to... plaid?!
  • Comment removed based on user account deletion
    • by hcs_$reboot ( 1536101 ) on Tuesday May 31, 2016 @12:39AM (#52214873)
      Because as stated in another thread, MS/W devs do `char path[MAX_PATH];` So if MS removes the limit most programs will stack overflow.
      • by twdorris ( 29395 )

        Because as stated in another thread, MS/W devs do `char path[MAX_PATH];` So if MS removes the limit most programs will stack overflow.

        You may mean buffer overflow here.

The declaration you've quoted would be resolved at compile time, not run time. So if MAX_PATH was 260 at compile time and the program is then run on a system where the runtime behavior allowed for longer paths, I could certainly see a buffer overflow condition happening. But the program will only ever allocate 260 bytes off the stack in that case, so stack usage would remain the same.

        And I assume if MAX_PATH were _UI64_MAX (or whatever) at compile time, the compiler would complain.

• The idea is that the guy assumes no path will be longer than MAX_PATH. So in an already compiled program (built before the Win10 option existed), he may `strcpy` into `path` any path given by the system without much care. That would indeed be a buffer overflow. But MAX_PATH may be given a much larger value by default in the new Win10, in which case you'd get a stack overflow... (and likely a warning from the compiler).
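A minimal sketch of the failure mode under discussion follows; MAX_PATH is redefined locally so the snippet is self-contained, and the function names are made up for illustration.

```c
#include <stdio.h>
#include <string.h>

#define MAX_PATH 260   /* baked in at compile time, like the Windows constant */

/* Hypothetical legacy handler: copies whatever path the OS hands it into a
 * fixed MAX_PATH buffer. If the OS can now return longer paths, the unchecked
 * strcpy overruns the buffer (a buffer overflow, not a stack overflow,
 * because the allocation is still only MAX_PATH bytes). */
static void legacy_open(const char *path_from_os)
{
    char path[MAX_PATH];
    strcpy(path, path_from_os);        /* unsafe if the input is 260+ chars */
    printf("opening %s\n", path);
}

/* Safer variant: check the length before copying and refuse paths that
 * will not fit in the fixed buffer. */
static int safer_open(const char *path_from_os)
{
    char path[MAX_PATH];
    if (strlen(path_from_os) >= sizeof(path))
        return -1;                     /* path too long for this legacy buffer */
    strcpy(path, path_from_os);
    printf("opening %s\n", path);
    return 0;
}

int main(void)
{
    (void)legacy_open;                 /* shown only to illustrate the bug */
    return safer_open("C:\\short\\path.txt") == 0 ? 0 : 1;
}
```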
      • by DarkOx ( 621550 )

They might stack overflow; depending on how they are written they also might do other things. I bet there is a fair amount of strncpy(foo, bar, MAX_PATH); out there as well, which could lead to strings that are not null terminated but also don't overwrite anything; the result is probably some kind of crash when foo is read. The other possibility is truncation: only the first 260 bytes of the path get copied, and the result is some later action is taken on the partial path. Maybe that fails, maybe that results in a n

      • I misread that as MSJW devs, and now I'm really hoping that doesn't become a thing.
        • by DoofusOfDeath ( 636671 ) on Tuesday May 31, 2016 @09:11AM (#52216353)

          I misread that as MSJW devs, and now I'm really hoping that doesn't become a thing.

          Their code would never run successfully:

          - They'd consider it every program's right to call abort(); without being criticized.

          - They could only use peer-to-peer architectures, as master / slave is oppressive.

- All branching logic would initially be floating-point based, as Boolean logic is exclusionary and thus a micro-aggression against LGBTQ culture. However, it would later be decided that forcing branching logic was inherently judgmental, and thus all program instructions must have an equal chance to execute every processor clock tick.

          The one upside is that their code would be meticulously designed to avoid race conditions. Sadly, it would also be subject to nearly constant deadlock.

    • Perhaps in a weird situation you might want to protect a badly coded application from the longer path length?

      Due to the design of the Windows API, almost all applications are 'badly coded' in this respect.

  • This isn't just something you can switch on without thought.

    Windows' native programming has long had a "MAX_PATH" constant, which devs would use to create a char[MAX_PATH] to accept user input (i.e. from a save file dialog). If you suddenly start creating paths larger than this, you risk buffer overflows.

    Even if your app is carefully written to avoid buffer overflows in this situation, it may simply refuse to read the file with a path too large. Devs have been able to break beyond MAX_PATH for a while by using UNC paths, but almost nobody uses them because you'll find random apps that won't know how to use a longer path.

    I find it a bit weird that they haven't taken an approach similar to high DPI, where you can embed a manifest resource into your app that'll tell the OS it supports high DPI. While this would not solve random apps refusing to work with larger paths, this would at least prevent buffer overflows.

    • I find it a bit weird that they haven't taken an approach similar to high DPI, where you can embed a manifest resource into your app that'll tell the OS it supports high DPI. While this would not solve random apps refusing to work with larger paths, this would at least prevent buffer overflows.

      And in true old-school Slashdot fashion, I've apparently skipped over a paragraph in TFA. Using manifests is exactly what they've done.

• Notably, Windows File Explorer never used to allow file operations on paths longer than 260 characters.

    • by johannesg ( 664142 ) on Tuesday May 31, 2016 @02:53AM (#52215205)

      The summary mentions that it will only be available to manifested applications, i.e. ones for which the developer has already indicated it can deal with longer paths. Given that protection, there is absolutely no need for additional protection via a registry key.

      • by DarkOx ( 621550 )

Sure there is a need. Lots of workflows require multiple applications. You don't want someone creating a file with application A that they won't be able to read with application B. That is the kind of thing users generally can't understand, and it tends to result in lots of helpdesk calls.

        I can see why enterprises might want the option to turn this off.

    • by wwalker ( 159341 )

      Windows' native programming has long had a "MAX_PATH" constant, which devs would use to create a char[MAX_PATH] to accept user input (i.e. from a save file dialog). If you suddenly start creating paths larger than this, you risk buffer overflows.

      It appears Microsoft assumes that only shitty programmers write code for Windows. MAX_PATH is a compile-time constant (#define in windef.h). Even if you declare char my_path[MAX_PATH] variable (surely, you mean char my_path[MAX_PATH+1], right?), you wouldn't just pass it in into some other function expecting exactly MAX_PATH characters to be written into it, right? Surely, you'll also pass in MAX_PATH as the number of chars you are expecting to get, right? Something like strncpy( my_path, other_path, MAX_PA

      • by DarkOx ( 621550 )

        Which could still leave you without a terminating null unless you were first careful to memset(my_path, NULL, MAX_PATH +1); right?

It's also not as if truncating a path couldn't lead to undesirable side effects.
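A minimal sketch of the hazard being discussed and the explicit-termination fix; MAX_PATH is redefined locally so the snippet compiles outside Windows, and it is illustrative rather than production code.

```c
#include <stdio.h>
#include <string.h>

#define MAX_PATH 260   /* illustrative stand-in for the Windows constant */

int main(void)
{
    /* Build a 511-character "path" that is longer than MAX_PATH. */
    char long_path[512];
    memset(long_path, 'a', sizeof(long_path) - 1);
    long_path[sizeof(long_path) - 1] = '\0';

    /* The pattern under discussion: strncpy copies at most MAX_PATH chars,
     * but when the source is longer it does NOT append a terminating null. */
    char my_path[MAX_PATH + 1];
    strncpy(my_path, long_path, MAX_PATH);
    my_path[MAX_PATH] = '\0';          /* the fix: always terminate explicitly */

    printf("truncated length: %zu\n", strlen(my_path));   /* prints 260 */
    return 0;
}
```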

  • Thanks (Score:2, Interesting)

But no thanks; not using OSes that have registries.
  • by kriston ( 7886 ) on Tuesday May 31, 2016 @12:15AM (#52214805) Homepage Journal

    This is not an NTFS problem but an old API problem that programs should have stopped using years ago (decades, actually).

Programs like npm, the Node.js package manager, have had, until recently, horrifically long pathnames for no good reason. This fixes that for them.

    Nearly any other program doesn't have this problem.

    Good job, NPM developers, for forcing MSFT to update a very old API that you still insist on using.

    • True. Node is literally the only time that I've run into this problem in my entire career. Maybe I've led a sheltered life, but still....
  • I don't imagine there are many people who are just dying to have filenames longer than 260 characters.

    • by quenda ( 644621 )

Path names, not file names. It's quite easy when a file is buried a few levels deep. Especially with symbolic links.

    • by Ramze ( 640788 ) on Tuesday May 31, 2016 @12:42AM (#52214881)

      Not file names -- file PATHs longer than 260 characters.

      As in:

      "C:\Users\Fubar\Pictures\Vacation\2013\Hawaii\Dole Plantation\Silly Photo with Sister 023.jpg"

      Obviously, that's under 260 characters, but if you try copying an entire user profile to another computer's desktop folder "C:\Users\Foo\Desktop\old profile", you get an even longer character path... and some people have very elaborate Documents folders for work and school projects that are many nested folders deep and lots of characters for descriptions.

      I've hit the character limit more than once myself -- especially with MP3 files with full band and song titles in the name and a few project files, but I've hit it multiple times copying entire profiles to servers as backups before swapping out a machine.

    • by jeremyp ( 130771 )

      Our company is involved in a project where the 260 char limit is a big problem. The git repository is essentially a copy of a portion of a Java content repository with quite a deep structure. If you tried to clone it into your documents directory on Windows, the 260 character limit was guaranteed to be hit unless your login name was something like "Bob".

    • by Malc ( 1751 )

      It's driven me nuts for years. Things will randomly fail and I have to move folders up to a new root or shorten the name of folders along the path. The fact that some programmes haven't had a problem whilst others do has just compounded the problem.

  • I was at a company which developed a large CRM application and I was the person who tarred up software updates to send to sites. A small part of the application was in Java, and the Java programmers were enamoured with class names which emphasized descriptiveness over brevity. We ended up with some files where path+filename exceeded 255 characters, and tar broke. My fix was to tell the programmers to shorten their damn file and directory names. (This was about 15 years ago, and it would have been Gnu tar. )

  • \\?\ Solved this... (Score:5, Informative)

    by Macfox ( 50100 ) on Tuesday May 31, 2016 @04:43AM (#52215475)
    Prepend "\\?\" to the path to tell the standard API to ignore MAX_PATH. This is how Robocopy and many other tools work around the issue.
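A bare-bones illustration of the prefix trick described above. The path is made up, and the wide-character API is used because the "\\?\" extended-length form is only honored by the Unicode versions of the file functions.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The "\\?\" prefix tells the Unicode file APIs to skip normal path
     * normalization and the MAX_PATH check, so paths up to roughly 32K
     * characters can be opened. Example path only. */
    const wchar_t *path = L"\\\\?\\C:\\some\\very\\deep\\tree\\file.txt";

    HANDLE h = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileW failed: error %lu\n", GetLastError());
        return 1;
    }
    CloseHandle(h);
    return 0;
}
```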
  • And allow us to use files called CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9.

Or use of the <, >, :, ", |, ?, * characters.

    Or the other strange arbitrary rules, e.g. spaces allowed in names but a filename must have something other than spaces in it. There are many others.

    But Linux also should probably not limit file paths to 4096, I'd have thought that might start to be an issue for people using a lot of Unicode.
• Linux and others have been able to do that for decades. Nice to see that Microsoft is catching up and removing the remaining DOS limitations. I guess it is asking too much to have them backport this to Win7/8. In which build do they do away with drive letters? And when is NTFS replaced with a modern and performant file system?
• NTFS isn't the problem. NTFS supports up to 32K-character file paths, as well as a number of characters that Windows deems illegal. This is a problem with the Windows APIs and .NET. I frequently work with long paths in Windows (as dictated by necessity, not by choice) by addressing UNC paths at the command line, or by mapping drive letters or creating symbolic links deep into the directory tree.

  • It should be:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Policies -or-
    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem

    Credit goes to user foobar in the article's comment section.
  • Come on, if 640K is enough for the whole damned program, 260 bytes should be enough for a filename. In fact one digit is enough for Windows version numbers.

    oh!

    wait.

• Back in the day, when most engineering applications were on Unix machines and were being migrated to Windows workstations, the path name limitation was a big issue. PTC (Parametric Technology Corporation, a vendor of CAD/CAM software) would typically use a MAXPATHLEN of about 10*BUFSIZ, but users could configure it to be bigger if they needed to. And its parts library used very long hashes for file names. Everything from the part name, author name, version number, and creation date got munged into the file names: bearing_housing_djt_284985473754653746544v.prt or something like that.

They found the 8.3 file name format very confining, so they did a simple hack. They would construct the file/path name just as they would in Unix, then send it through a string processor that would insert a "\" after every 8th character, creating subdirectories as needed to get the file name they wanted! Users would see humongous file names and path names. (A rough sketch of the transformation appears below.)

Our company has been supporting 4K path names for a while now; I remember setting MAXPATHLEN to 1025 (remember to allocate space for the trailing null) back when I joined the company decades ago.
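A rough sketch of the kind of transformation described above; it is purely illustrative and not PTC's actual code.

```c
#include <stdio.h>
#include <string.h>

/* Split a long "virtual" file name into nested 8-character path components
 * so it fits an 8.3-style world, e.g. "bearing_housing_djt_284985.prt"
 * becomes "bearing_\housing_\djt_2849\85.prt". */
static void split_into_8char_dirs(const char *name, char *out, size_t outsz)
{
    size_t o = 0, run = 0;
    for (size_t i = 0; name[i] != '\0' && o + 2 < outsz; i++) {
        if (run == 8) {                /* insert a separator after every 8 chars */
            out[o++] = '\\';
            run = 0;
        }
        out[o++] = name[i];
        run++;
    }
    out[o] = '\0';
}

int main(void)
{
    char buf[256];
    split_into_8char_dirs("bearing_housing_djt_284985.prt", buf, sizeof(buf));
    printf("%s\n", buf);               /* bearing_\housing_\djt_2849\85.prt */
    return 0;
}
```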

  • We've had to use robocopy et al to delete too-long-named directory trees for how many years now?
