Git for Windows accidentally creates NTFS alternate data streams (latkin.org)
349 points by latkin on July 20, 2016 | hide | past | favorite | 176 comments


The root cause of all this is a relatively obscure NTFS feature called alternate data streams.

Obscure indeed, I've never seen them used for anything other than hiding malicious content. Curious, I read about them on Wikipedia[1] and it turns out they were originally created to support resource forks in Services for Macintosh. Browsers also use them to flag files downloaded from the internet.

[1] https://en.wikipedia.org/wiki/NTFS#Alternate_data_streams_.2...


Hardly obscure, every modern OS has an equivalent feature, but only OSX and Windows unify it with the regular filesystem API.

Streams and resource forks are a play on a now-standard UNIX feature that almost nobody uses because it has a shitty non-file based API that also breaks most tools unless they are specifically aware of them: extended attributes. Resource forks and extended attributes are almost equivalent in every single way, except that extended attributes can only be read/written atomically (limiting their size to strings that will fit in RAM), whereas a fork or stream can be opened like a regular file. Stick that in your pipe and smoke it, UNIX sycophants, another case where Windows is more UNIX than UNIX ;)

The file-or-directory vagueness created by the hierarchy of resources buried within a file also more closely maps how the most popular path naming scheme on the planet (URLs) works: a URL can always represent both a file and a collection simultaneously, so I see this as closer to an ideal than the alternative where files can have no children at all. Sadly nobody actually uses these APIs like that, because all our tooling sucks so badly at coping with it. I sometimes wonder what the world would look like if directories on popular operating systems had simply been made 0-byte files.


> feature that almost nobody uses because it has a shitty non-file based API that also breaks most tools unless they are specifically aware of them: extended attributes

Mind you, OS X makes extensive use of extended attributes in addition to resource forks (and it's largely deprecated resource forks in favor of app folders). Spend some time poking around Siracusa's reviews (since Tiger); he loves to go into detail about every new way Apple makes use of extended attributes.

Also, it's not fair to say that almost nobody uses them. Chrome makes use of extended attributes, as does KDE's metadata system and a few other things.

> (limiting their size to strings that will fit in RAM)

That's an understatement. The Linux kernel API limits the size of all extended attributes to 64KB, and the most popular filesystems limit them further to 4KB. That's not really comparable to a true fork.

ZFS is the exception: its extended attributes are implemented as forks, and the maximum size of an extended attribute is the same as that of a file. Unfortunately, those aren't accessible on ZoL (ZFS on Linux) because the kernel won't support it, so you can really only take advantage of it on Solaris/Illumos (and maybe FreeBSD?).


Does OSX use extended attributes to track things like the "color" attribute on a file (that shows up in Finder)? Or is this tracked via the .DS_Store hidden file?


It uses XAs, check with:

  mdls some-file-tagged-green
  (...)
  kMDItemUserTags                = (
      Green
  )
or with xattr:

  xattr -l some-file-tagged-green
  (...)
  com.apple.metadata:_kMDItemUserTags:
  00000000  62 70 6C 69 73 74 30 30 A1 01 57 47 72 65 65 6E  |bplist00..WGreen|


I believe the .DS_Store files are just a fallback so if you are accessing files on a network share or on a file system that doesn't support extended attributes like FAT32, those features can still work. The native implementation on HFS+ uses the extended attributes.


.DS_Store is where Finder saves the folder UI configuration (icon positions, list mode, etc.), and these files are present even on native HFS+ (although "hidden" by default like any other Unix file name starting with ".").

There were some reports online that the future APFS in the 10.12 betas didn't leave .DS_Store files around.


You missed:

- Unix xattrs have a terrible API and awful command line tools: listxattr(2) returning \0-separated character arrays with lists of attributes that are next to impossible to decipher in C? - check! Hiding certain xattrs by default based only on their names? - check!

- xattrs have magical qualities based on their names, the kernel version, the kernel configuration, and the filesystem mount options (eg. "security.selinux", "trusted.*")

- Some xattrs are \0-terminated (and the APIs set and return the \0, making them very awkward to use from shell scripts), some aren't, and some are indeterminate. They can also be binary blobs.
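The \0-separated buffer mentioned above is easy to mishandle; here is a sketch (in Python, with made-up attribute names) of the split that a listxattr(2) wrapper has to perform:

```python
def parse_xattr_list(buf: bytes) -> list:
    """Split the NUL-separated name list returned by listxattr(2).

    The kernel terminates (not separates) each name with a NUL byte,
    so a naive split leaves a trailing empty entry that must be dropped.
    """
    return [name.decode("utf-8") for name in buf.split(b"\0") if name]

# Hypothetical buffer, as the kernel would fill it in:
raw = b"user.comment\0security.selinux\0trusted.overlay.opaque\0"
print(parse_xattr_list(raw))
# → ['user.comment', 'security.selinux', 'trusted.overlay.opaque']
```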


Also, add too many xattrs and you can no longer get a list of them:

       As noted in xattr(7), the VFS imposes a limit of 64 kB on the size of
       the extended attribute name list returned by listxattr(2).  If the
       total size of attribute names attached to a file exceeds this limit,
       it is no longer possible to retrieve the list of attribute names.
http://man7.org/linux/man-pages/man2/listxattr.2.html


Is \0 ASCII null?



I think if "almost nobody uses it" it's fair to call it obscure.

> another case where Windows is more UNIX than UNIX

Windows has extended attributes too. Having both features makes it more like a kitchen sink.


It's used in lots of places. Internet Explorer uses it to record whether a file was downloaded via IE. They are just only useful on NTFS, and often not even then, because file-hating utilities like Dropbox don't store them. So if you upload a file with an ADS to Dropbox, then copy it back again, you'll have lost that data.

One thing I'm not sure about is whether it appears in the file size when using dir. And if you apply a file hashing algorithm to generate a hash and you only use the file attributes, base file name, and $DEFAULT data stream, then you can append data to the file via another stream without changing the hash. So hash utilities need to be ADS-aware to be truly useful on Windows.

Unless you are calling data an "attribute", though, it's really a bit of a silly comparison. Literally it's a separate namespace in which you store data. The standard tools and utilities provided by Windows generally only look at $DEFAULT. The article is correct: git is pretty much doing something very similar, only the data is stored in .git (or specified somewhere else) and you use a tool like git to get access to that data, but you can also dive into the directory directly with any other tool. In Windows you use streams.exe, and it's a. generalised, b. non-portable as it's an intrinsic part of NTFS, and c. denoted as part of the NTFS filename by the delimiter ":", which is a reserved character and documented as such.

https://blogs.technet.microsoft.com/askcore/2013/03/24/alter...


Using dir normally does not display anything related to the alternate data streams, but if you use the /r option, their names and sizes will be shown.


Speaking of Dropbox, it adds its own attributes, called com.dropbox.attributes, with 83 bytes of binary data, according to 'dir /r'.


> Internet Explorer uses it to save whether a file was downloaded via IE.

Wait, but why? Chromium (at least on Linux) uses extended attributes too, but to record the origin and referrer of downloaded files (which can be really useful, once you know about it).


Chrome does it too, as do Outlook, Firefox, and possibly a bunch of other things. I think you'll find that the stream is Zone.Identifier. It can contain a value of 1 to 4, where each corresponds to one of Windows' security zones (from 4 down to 1: restricted sites, internet, trusted sites, and local intranet). There's a fifth option, zone 0, which is "local computer", but it's unused.

This is the source of the prompts in Windows that say "this file came from the Internet, are you sure you wish to run it?".


To be more specific, IE (and most browsers on Windows, actually) use alternate streams to record that the file originates from the network, in a certain standardized way. When such a file is an executable file, and the user attempts to launch it (via Explorer; I don't think this happens for command line), they will get a confirmation dialog from the OS telling them that it's unsafe.

Other applications can perform similar checks on file formats that they handle, if the payload can be dangerous when untrusted. E.g. Visual Studio will give you a warning if you're trying to open a project file with this bit set.
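The "bit" in question is just plain text in the Zone.Identifier stream, so checking it is easy. A sketch with a typical stream body (this example content is illustrative, not taken from a real file):

```python
import configparser

# A typical Zone.Identifier stream body; ZoneId=3 marks the Internet
# zone. Newer Windows versions may also record ReferrerUrl/HostUrl.
STREAM_CONTENT = "[ZoneTransfer]\nZoneId=3\n"

ZONE_NAMES = {0: "Local machine", 1: "Local intranet", 2: "Trusted sites",
              3: "Internet", 4: "Restricted sites"}

cfg = configparser.ConfigParser()
cfg.read_string(STREAM_CONTENT)
zone_id = int(cfg["ZoneTransfer"]["ZoneId"])
print(zone_id, ZONE_NAMES[zone_id])  # → 3 Internet
```

On Windows itself the stream can be read with a normal open() on "file.exe:Zone.Identifier", since streams share the regular file API.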


Solaris unifies it too. You can even use the runat command to open a shell where extended attributes are exposed as, and can be manipulated as, regular files.

http://docs.oracle.com/cd/E23824_01/html/821-1474/fsattr-5.h...


NTFS has extended attributes.


How....do you know this kind of stuff? Great read, thanks.


It's used by all the browsers on Windows these days. They all create a 'Zone.Identifier' stream when a file is downloaded, to mark it as downloaded. Its contents are what trigger the "You downloaded this file! It's Evil!" warning in Windows.

To be fair, it's not used by a ton of things, since it requires NTFS, disappears when files are moved to different filesystems, and various things that read and write files destroy them if they're not careful, not to mention actually enumerating the streams is tricky, last I checked.


Some history: this was introduced with XP SP2 as part of the Windows security push. It was a clever way to track the information without touching the binary data directly, and supporting it in IE meant the majority of customers saw the benefit right away. And most people (on Windows) don't move files across file systems.


People on Windows often move files across file systems: from the internal hard drive (generally NTFS) to external USB drives (often FAT32 or exFAT).


When this happens and the file has alternate streams you get a warning that some of the file's attributes cannot be copied.


I've not seen that warning before. I just tested on a file I downloaded, which had the 'Zone.Identifier' stream. Using Explorer, I copied it to a FAT32 volume, then back to my NTFS drive. Sure enough, it lost the 'Zone.Identifier' stream, and there was no warning when I opened it.

This is on a fairly normal Windows 10 installation. YMMV on different versions, of course.


I saw this warning on files copied from Mac OS X (when I transferred them further to a FAT32 filesystem). Maybe it depends on which stream it is. (Windows 8 Pro.)


iTunes for Windows uses them to store how much of a streaming file it has already downloaded. I wrote it (but I won't take credit for most things in iTunes for Windows)

It's a nifty feature but I'll admit NTFS is really obscure at times.


Great place to store metadata about a file; never thought about that before. I guess if the download stream is interrupted, it reads that to know where to pick up again if resumed?

Another obscure feature of NTFS is Transactional NTFS which I'd never heard of until recently.

https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...


Windows even includes mechanisms to perform transactions over different things like file system, registry, and even multiple machines.

Back when SVN was horribly slow and implemented transactions by actually touching thousands of small files in the .svn directories, I actually wanted to implement its file system layer on Windows with NTFS transactions, figuring that a native solution would probably be better. But by now they completely changed their working copy format so I don't think it's necessary anymore.


Unfortunately, transactional NTFS is being deprecated. MSDN says:

"Microsoft strongly recommends developers utilize alternative means to achieve your application’s needs. Many scenarios that TxF was developed for can be achieved through simpler and more readily available techniques. Furthermore, TxF may not be available in future versions of Microsoft Windows."

Which is a shame, because, conceptually speaking, a true transactional filesystem with snapshot semantics makes some things so much easier.


Do you mean DTC or something else?


The original idea on the Macintosh was to have some place to put non-code assets - icons, images, etc - that came with an application. So MacOS files had a "data fork" and a "resource fork". The "resource fork" was a tree structure managed by the Resource Manager.

The problem was that the original Macintosh had limited memory and only a floppy disk, and the implementation of writing to the resource fork wasn't very good. Many programs wrote to their own resource fork for preferences and such. The tree structure wasn't updated fully until the program was closed, because writing to the floppy was so slow. If the program exited abnormally, the resource fork's links were broken. This gave the resource fork approach a bad reputation.

Since Windows programs had to run on DOS, which didn't have resource forks, Windows never used this much. Windows put non-code assets in the executable as read-only objects.

NT, which was supposed to do everything (originally it had POSIX and OS/2 compatibility, and ran on MIPS, Alpha, and x86) added generalized support for resource forks, just in case. But since most applications were written for Windows 3.1/95/ME, they didn't use those facilities.

So that's how we got here.


> The tree structure wasn't updated fully until the program was closed, because writing to the floppy was so slow

Not to mention in many cases on the original Macs, you probably didn't even have the program floppy in the drive when you were working, because with only 400K on a disk you had to swap to the disk with your document on it.

I recall Inside Macintosh had a big disclaimer at the top that warned "The Resource Manager IS NOT A DATABASE". It was originally just meant to handle localizable resources, but since it was already there it was handy for developers (including Apple themselves) to use to load any kind of structured data. And who didn't love going messing around in system and application files with ResEdit?


>It was originally just meant to handle localizable resources

Not quite. An application's executable code was also stored in the resource fork, as CODE resources (one or several, so parts of the code could be loaded and unloaded as needed; initially there was also a size limit of 64k per CODE resource).

When Apple switched to PPC, the PPC code was stored in the data fork and the 68k code in CODE resources.


I meant originally as in when the Mac was still in development - I remembered it from here http://www.folklore.org/StoryView.py?story=The_Grand_Unified...

I may have unconsciously filled in some blanks in my memory that weren't actually there - the story mentions Andy Hertzfeld used the Resource Manager to manage the swapping in and out of code segments and I think I read it as a hack to use the Resource Manager in a way it wasn't intended, but it may very well have been intended that way to begin with.


In the early 90's I worked at a company that made server software that allowed Mac AppleTalk (AFP) clients to connect to a PC network. Eventually IBM had us write a custom version for OS/2 called LAN Server for Macintosh. We were really excited about using the streams/resource forks feature but had to give up eventually. We used a separate database to store what's in the resource forks instead.


SQL Server used it from version 2005 until 2012 to create database snapshots in order to run DBCC CHECKDB (consistency checks). So it was actually a critical feature of MSSQL. I suppose this was the reason why ReFS was not supported for SQL data disks.

It seems they are not used anymore since SQL Server 2014.

See for example http://www.sqlskills.com/blogs/paul/issues-around-dbcc-check...


I have used them.

We had a system that generated millions of images and needed to be sure that from one version to the next the images produced by a given request were the same, and also have some diagnostic data in case of problematic images. The images could be either JPG or PNG and we needed a unified way to associate arbitrary metadata with them.

We had a special mode that would store an equivalent of the request in an alternate data stream of the image. When a problem was detected we would open the alternate data stream and test the request manually.


Very cool niche case. Thanks for sharing.


They should market this as a feature! Alternate streams, for people who think it is "an obscure feature". I mean, that many people are using alternate streams would be interesting for anyone doing malware forensics on systems, or as protection from...


This is used in specific sectors like data loss prevention. For example, you can tag files based on the security sensitiveness and if the file is copied it retains the tags.


> if the file is copied it retains the tags.

...if your miscreant is technically illiterate and only uses NTFS.


I've worked on Windows-only software that used resource forks. It stored mail messages, one per file, with the message metadata in a resource fork so we didn't have to modify the file containing the actual mail when the metadata changed.


There were once plans to store the individual streams which make up Microsoft Office files (OLE2) as alternate data streams, which would have been... interesting.


> Browsers also use them to flag files downloaded from the internet.

Is that where that annoying shit comes from? Good to know. When firefox kills off DownThemAll I will then use a FAT partition to store downloaded files (and see if I can force the temporary files to go there too).


I laugh these days when OSX warns me, "This application was downloaded from the internet." when I first access an app.

Every application on my machine was downloaded from the internet. Even the OS, after the first upgrade. That's not what is dangerous.


stick this in a reg file

  REGEDIT4
  
  ;https://support.microsoft.com/en-us/kb/889815
  [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Environment]
  "SEE_MASK_NOZONECHECKS"="1"
  
  ;https://technet.microsoft.com/en-us/library/cc783259
  [HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Download]
  "CheckExeSignatures"="no"
  "RunInvalidSignatures"=dword:00000001
  
  ;https://support.microsoft.com/kb/883260
  [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Associations]
  "LowRiskFileTypes"=".zip;.rar;.nfo;.txt;.exe;.bat;.com;.cmd;.reg;.msi;.htm;.html;.gif;.bmp;.jpg;.avi;.mpg;.mpeg;.mov;.mp3;.m3u;.wav;"
  "DefaultFileTypeRisk"=dword:00001808
  
  [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Attachments]
  "SaveZoneInformation"=dword:00000001


Unless it has changed in newer Windows versions, you can simply disable that warning in the Internet Settings, no need to keep files in an outdated filesystem.


>use a FAT partition to store downloaded files

Do you never download anything bigger than or equal to 4 GiB?


Not from a browser and, AFAIK, only a browser idiotically marks files as coming from the internet.


I used them for a VCS thought experiment I was playing with a while ago.


Just pretend they're "resource forks".


It's metadata. It's as obscure as file permissions bits.


Except that millions of developers routinely make use of file permissions; as evidenced by this discussion, many - perhaps even a majority - haven't heard of alternate data streams.


Actually, I'm not going to shy away from it. I'd be willing to bet that a clear majority haven't heard of alternate data streams.

There are too many developers who care not for NTFS at all, never mind some little-used feature, for that not to be true.


I really like NTFS as a file system... it seems to offer a lot more than many other file systems, which is pretty impressive for as old as it is now. That said, hopefully broader adoption can happen when the patents expire (ugh, in 7 years). Maybe the "new" MS could be convinced to create a royalty-free spec release/promise.

Would love for NTFS to become default for external storage, I already use it, but getting it on macOS and Linux isn't always as straight forward as it could be. NTFS-3G ftw.


But millions of developers also go "I don't get how it works, just make it world-writable". Not sure they understand either.


The colon has been special since the dawn of DOS. For instance, you cannot use "con:" as a file name. (In fact, in a fit of extreme stupidity, DOS also claimed some devices with no colon suffix, like "con" and "prn", effectively making these into globally reserved names in any directory.)

Stock Cygwin does something special with the colon character, so the Cygwin git shouldn't have this problem. A path like "C:foo.txt" is not understood by stock Cygwin as a relative reference in the current directory of drive C; the colon is mapped to some other character and then this is just a regular one-component pathname.

In the Cygnal project (Cygwin Native Application Library), paths passed to the library are considered native. So that certain useful virtual filesystem areas remain available, I remapped Cygwin's "/dev" and "/proc" to "dev:/" and "proc:/", taking advantage of the special status of the colon to take this liberty. You can list these directories (opendir, readdir, ...) and of course open the entries inside them; but chdir is not allowed into these locations. (Unlike under stock Cygwin, where you can chdir to /dev). chdir is not allowed because then that would render the library's current working directory out of sync with the Win32 process current working directory, which would not be "native" behavior.
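The colon remapping Cygwin applies can be sketched roughly like this (Cygwin documents shifting characters that are reserved in Win32 names into the Unicode private-use area at 0xF000 + codepoint; the helper names here are made up):

```python
# Characters reserved in Win32 filenames (":" among them, because of
# drive letters and ADS syntax). Shifting them by 0xF000 lands in a
# private-use range that NTFS stores without complaint.
RESERVED = ':*?"<>|'

def to_native(name: str) -> str:
    """Map reserved characters into the private-use area for storage."""
    return "".join(chr(0xF000 + ord(c)) if c in RESERVED else c for c in name)

def from_native(name: str) -> str:
    """Undo the mapping when presenting the POSIX view of the name."""
    return "".join(chr(ord(c) - 0xF000) if 0xF000 < ord(c) <= 0xF0FF else c
                   for c in name)

posix_name = "article: a subtitle.pdf"
stored = to_native(posix_name)
assert ":" not in stored                  # no ADS ambiguity on disk
assert from_native(stored) == posix_name  # lossless round trip
```

This is why "C:foo.txt" under stock Cygwin becomes an ordinary one-component filename rather than a drive-relative path.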


I remember when any attempted access to c:\con\con would bluescreen any windows machine. Hours of teenage fun sending people to a website I'd set up with <img src="file://c:\con\con">


You could do it over Windows File Sharing too! \\someone-elses-machine\c\con\con would blue-screen their machine!


This annoyed me a few times 7 years ago, and I never searched for the reason :)

I remember I was maintaining a few VB6 applications, and I often tried to create a "con.udl" file just to trigger the wizard, and Windows just complained with an error that didn't make any sense. So I started to use conn.udl.

A bit late, but it's good to know.

Here is a screenshot on Windows 7: https://dl.dropbox.com/s/qg5fxx01mnktw79/ss-2016-07-20T17-19...

Edit: add screenshot


I'm surprised it's a problem even when the suffix .udl is present.


That's a useful feature when you actually want to use the special names and the program in question insists on slapping an extension onto any filename it uses (I remember printing output from various DOS-based EDA tools by using export to PostScript or HPGL and saving the result as "prn.ps"/"prn.lst").


The colon pre-dates DOS by a long time. I seem to recall it in RSX-11 pip. Definitely it was present in CP/M : https://en.wikipedia.org/wiki/Peripheral_Interchange_Program


I used CP/M with a Z80 coprocessor card on an Apple II. I didn't know RSX-11 also had pip, though.


Peripheral Interchange Program.

Gary Kildall recreated PIP for CP/M because he had come from using Digital/DEC systems. It wasn't just RSX-11, it was in a bunch of PDP stuff going back.

It was a pretty "revolutionary" feature of Unix that device I/O was just in the filesystem along with everything else so all software could access devices. (Not claiming revolutionary as in invented, revolutionary as in one of the things that helped unix achieve ubiquity and would be the first place most people saw it. Maybe it came from Multics, I don't recall.) Without filesystem mapped I/O, you need to create peripheral interchange programs to do ordinary things like copy files and print. Once you get used to PIP style file specification on command lines, it's a next step to push it into the OS API, so CON: will always mean the console, rather than only to software like PIP. This is the origin of MS-DOS having those special names too.

And colon as a special character in a file specification (I didn't say filename) is not just Windows, it's also in Unix (that's where it came from in http:), that's why I'm astonished to hear that people are naming files with colons in them. It used to be, there were more experienced people you worked with who would teach you very quickly that you don't put colons in filenames. Those days are gone, it's emojis all the way down, including some very sad emojis.


Where are there colons in Unix, other than as separators in /etc/passwd, PATH, termcap entries and such?

Berners-Lee may have gotten http: from volume naming in the classic MacOS or Amiga, or other systems.

In Unix, devices were always actual bindings in the filesystem space.


I don't recall when/where it came from, but for example the unix mount command uses "path ::= name colon path" delimiting for things like NFS and other "protocol based rather than physical disk" filesystems. And along the lines of what you say, yes it could have been borrowed, but probably from PIP even if it was indirectly via CP/M, Amiga or whatever.


It's not alone. In MS SQL Server, you can name a database "foo:bar". If you give a database such a name when you restore it from disk, you'll find that the database takes zero bytes on disk (at least, that's what Explorer claims). Your disk space is gone, though.


What? You are saying Windows Explorer doesn't handle this feature properly? That's insane.


I think that is the correct behaviour, though. The default stream is left empty in this case, so it should indeed be zero bytes.

Keep in mind that each file can have multiple data streams. Suppose the system reported the total of all the streams for foo combined... You would be surprised if you read the reported number of bytes from foo and saw a crash, because there are in reality no bytes in the default stream.

However, there are other tools to report the presence of alternative streams. This is not a feature intended for casual end-users.


The user should not have to use a third-party tool to interact with a feature which is always present in the core OS.

The principle of least surprise applies here: it’s surprising for a user to find a seemingly-empty file, especially if they expect the file to contain valuable data.

Clearly, Explorer should make the presence of multiple streams obvious to the user.


If I dump a SQL database to a file and see that the file size is 0 KB in Explorer, I will assume something went wrong during the export, not that the data is hiding somewhere else that requires special tools to inspect. How am I supposed to know those tools exist in the first place, anyway? Explorer is clearly doing the wrong thing here.

This sounds like a bug in SQL Server also: what if you try to transfer the data to another computer using a FAT32 USB stick? Then none of the actual data will be copied.


There are a lot of NTFS features that Explorer doesn't handle properly, like file paths longer than MAX_PATH (260 characters).


Yea among the first things I replace in a new install

* Internet Explorer (because it can't explore the modern internet)

* File Explorer (because it can't explore files on my system)


Related to this bug: there used to be a vulnerability in IIS back in the late 90s where you could append ::$DATA to a file name (e.g. Foo.asp::$DATA) and download a server-side script's source code.


Related - meaning the ::$DATA was interpreted as a request for an alternate data stream from the file, and then read the default stream?


More info https://technet.microsoft.com/en-us/library/security/ms98-00... - seems to imply that $DATA is the default stream.


I had a related problem with Dropbox. Some files uploaded from my Linux machine were not synced to my Windows machine. Later I narrowed down this problem to images being saved from Twitter, which have URLs ending with ":orig". On Linux, Firefox happily saves such images as "blahblah:orig.jpg", whereas on Windows it uses space instead of a colon. And of course Dropbox on Windows would completely ignore filenames that contain colons and tell that the directories are synced, when they obviously aren't.
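A sync tool that wanted to flag this up front would need a portability check along these lines (the character and device-name lists are the documented Win32 restrictions; the function name is made up):

```python
# Win32 rejects these characters in file names, and DOS reserved
# certain device names globally, even with an extension appended.
WIN_RESERVED_CHARS = set('<>:"/\\|?*')
WIN_RESERVED_NAMES = {"con", "prn", "aux", "nul",
                      *(f"com{i}" for i in range(1, 10)),
                      *(f"lpt{i}" for i in range(1, 10))}

def windows_safe(name: str) -> bool:
    """Return True if this base name is usable on a Windows filesystem."""
    if any(c in WIN_RESERVED_CHARS or ord(c) < 32 for c in name):
        return False
    return name.split(".")[0].lower() not in WIN_RESERVED_NAMES

print(windows_safe("blahblah:orig.jpg"))  # → False (the Twitter filename)
print(windows_safe("blahblah orig.jpg"))  # → True (what Firefox saves on Windows)
print(windows_safe("con.udl"))            # → False (globally reserved device name)
```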



I get hit with a login page. Can anyone describe what is linked to?


It's just a link to https://www.dropbox.com/help/145

(No login needed there.)


As far as I can tell it's a list of files in my Dropbox with ":"s in the name, e.g. "Invoice 09:17:13.pdf"


That is quite the obscure and interesting issue to run into! Who puts colons in their filenames though? I haven't ever seen that used...


I do: "find ~ -name '*:*' | wc -l" yields 49 entries. Some of those are Xorg-related (which identifies displays with a syntax like ":0.0", which ends up in some log file names) or gvfs stuff. But most are PDFs, usually research papers which I tend to save as "title: subtitle - authors.pdf".


High score! Across my whole filesystem I get about 150,000 files with a colon in their name. Many of them are part of the filesystems of containers, which seem to use colons in the names of their libraries and dpkg packages. Many Arch Linux packages have colons in their names too, and I also have some media files and documents with colons in their names.

It's actually really common, for someone who's used *nix for ages and expects every character to be a valid filename.
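On a Unix filesystem only "/" and NUL are off-limits in a name, so creating and finding colon-named files is unremarkable; a quick demonstration in a throwaway directory:

```shell
dir=$(mktemp -d)
touch "$dir/2016-07-20T16:27:00Z.log" "$dir/title: subtitle - authors.pdf"
find "$dir" -name '*:*' | wc -l   # prints 2
rm -r "$dir"
```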


It is certainly interesting to see the different assumptions people make. Thanks for the details.


I use colons when indicating the ISO 8601[1] timestamp for stuff; it's much more readable with ':' than not, e.g. 2016-07-20T17:04:30Z vice 20160720T170430Z.

I also use them when naming stuff with hashes or UUIDs. Not having colons in filenames seems just weird to me.

Heck, even on Unix I'm annoyed that I can't simply escape slashes! It'd be nice to name files with the URLs they are taken from.

[1] https://en.wikipedia.org/wiki/ISO_8601


Lots of people, that's who.

Besides the other responses, the colon is a standard path separator in URIs. If you need more than one kind of separator (the obvious one being the slash), the colon is often the most reasonable option. And if you decide to save data on disk with parts of the URI as the file name (which is also very reasonable)...

Probably the main reason this problem does not pop up everywhere is that people hacking on completely new tools rarely do that on Windows. And when they port, it gets hidden together with a hundred other little incompatibilities.


The Maildir format uses colons in filenames, which has created problems with running certain email software on Windows.


Why wouldn't you put colons in filenames? Unless of course you use Windows. Colons, spaces, backslashes, whatever.


Useful if only for timestamps. 2016-07-20T16:27:00Z!


Artificially restricting filenames is an antipattern that makes things harder to read.

I use software that sometimes (but mostly not) needs files in DOS 8.3 format. Because of this, people seem to think it a good idea to use really short, acronymed file names as a matter of course. If it makes sense to use a special character, then people should be able to.


This would be the way to go if all program invocations consistently used some kind of common, higher-level data structure a la powershell. As long as programs rely on parsing their command line according to some syntax the developer just made up, I'm very glad there are commonly agreed sets of "safe" and "unsafe" characters. Dealing with shell escapes is horrible enough today (e.g. quoting rules under windows, filenames that start with a dash under linux...) and this would make matters even worse.


Quoting is a much bigger problem than differentiating flags from paths, and on Unix that's a solved problem: the shell always handles quoting, and unix programs only expect a list of words which can contain arbitrary characters (except NUL, of course). If you invoke a program directly using the exec-family of syscalls, you don't need to quote anything.

Whereas AFAIU Windows programs receive the whole command line as a single string and must parse the quoted words themselves. The only benefit is that you can disambiguate a filename with a dash (or slash) based on whether it was escaped, but that's a quite rare necessity, and of course it still relies on the caller quoting them. (Does cmd.exe quote pathname expansions?)

The dash problem is also solved as long as programs use getopt() or getopt_long(). First, getopt() knows which flags take arguments and which don't. Knowing this, if a flag takes an argument it doesn't matter whether the argument begins with a dash or not. One consequence is that there's no such thing as an "optional" argument to a flag when using getopt and friends, as that ambiguity cannot be handled cleanly. People who roll their own argument processing code just so they can get "optional" arguments to flags invariably don't appreciate the security problem.

Second, a double-dash (--) terminates the argument list. getopt stops consuming command-line arguments at that point, and optind will index the first non-flag argument. So if passing a list of filenames to a command, the correct idiom in Unix is something like, `foo -- /path/to/*`. Of course, that presumes that the foo is using getopt or getopt_long, or a compatible argument processing implementation. Fortunately the vast majority do.
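Both behaviors described above can be sketched with Python's getopt module, which mirrors the C semantics (flag arguments may start with a dash; "--" ends option parsing):

```python
import getopt

# Hypothetical flag set: -o takes an argument, -v does not.
args = ["-v", "-o", "-weird", "--", "-not-a-flag", "file.txt"]
opts, rest = getopt.getopt(args, "vo:")

# -o consumes "-weird" because getopt knows -o takes an argument,
# dash or no dash.
assert opts == [("-v", ""), ("-o", "-weird")]
# Everything after "--" is positional, even if it starts with a dash.
assert rest == ["-not-a-flag", "file.txt"]
```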

Smart programmers should rarely if ever roll their own argument processing code. Any headaches (real or imagined) related to a mismatch between the semantics offered by getopt and what the application might want is usually dwarfed by the usability and security benefits of adhering to the system facilities.

On a related note, I've always disliked the way GNU's getopt and getopt_long permuted (reordered) argument lists. I have an inkling it could introduce needless security issues, though I haven't thought it through carefully.


I agree if you are designing for a single operating system. The restriction isn't artificial if the software under design is multi-platform, since Windows for example doesn't work with colons in file names.


I assume any files I make should be usable by any major operating system, so to be sure I avoid any special characters in file names.


Mac OS 9 and earlier used colons as path separators and some support for that made it into OS X, although it might be gone by now. Apple's guidelines https://support.apple.com/en-us/HT202808 recommend not using colons.


It's not really gone: the colon is still the path separator in the "UI layer" (Cocoa et al.), but the POSIX layer uses `/`. This gets automatically mapped back and forth.

Try to create a file or folder containing `/` in the UI (Textedit, Finder) and look at its name via the terminal. Now, `touch foo:bar` and look at it in the Finder.


Also from the Apple lineage, GS/OS allowed either slashes or colons as valid path separators.

See the last paragraph on page 109 of the GS/OS Internals manual. http://www.brutaldeluxe.fr/documentation/gsos/Apple_IIgs_GSO...


Could say the same about spaces and a plethora of systems fail catastrophically on those.


Not any well-written ones. Software should be able to handle all valid names on the platform. For Unix, any name that excludes forward slashes and null bytes.


And what about file bundles that are deployed cross-platform?


They can only target the common denominator of file names. It would be safe to avoid all special characters.


Do you also use slashes? what about backticks and quotes?


The colon should be reserved in Unix to separate path names a la PATH.


The PATH should be a list of strings not a string where : means something special.


And then how do you escape it?


In a sane system, PATH=/home/me/bin:/usr/local/bin:/usr/bin would be (setf bin-dirs '(#P"/home/me/bin/" #P"/usr/local/bin/" #P"/usr/bin/")) and if I wanted to have a quote in a file I could just write #P"/really-weird-name\"-isn't-it台北/bin/".


By list I mean an abstract data structure, you don't "escape" it. You could encode it in a specific format like JSON, if you want.


Transmit it as a JSON list or whatever. We're already assuming we're breaking UNIX userspace compatibility, so any option for reliably transmitting lists of strings is fine, and we have lots of those.
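For illustration, any structured encoding works; a Python sketch with JSON (a hypothetical convention, since no existing shell reads PATH this way):

```python
import json

# Hypothetical structured PATH: directory names containing colons
# survive intact, since ':' carries no special meaning in the encoding.
dirs = ["/home/me/bin", "/opt/odd:name/bin", "/usr/bin"]
encoded = json.dumps(dirs)
assert json.loads(encoded) == dirs
assert ":" in json.loads(encoded)[1]
```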


NUL


It isn't, though.


As a lazy user, I often copy paste the document title as the filename. Windows generously trims any disallowed character, but if it didn't, lots of my files would contain colons.

I actually wish colons were supported, since they're so prevalent in document titles. Question marks, too, while we're at it.


I think I read somewhere that Microsoft were investigating ways to get rid of the legacy Win32 path limitations on NTFS, like the 260-character MAX_PATH limit and the character restrictions in filenames. You can already manage such files using the NTFS way of addressing them, "\\?\C:\Example\file:with?illegalstuff"


Is there an official name for those kinds of long paths in a single namespace? I found UNC, but it seems to apply only to the network variety.

https://en.wikipedia.org/wiki/Path_(computing)#Uniform_Namin...


I'm not sure Microsoft gives an official name for "\\?\".

The closest I can find is "To specify an extended-length path, use the "\\?\" prefix." (from https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...).

So maybe it is the "extended-length path prefix".


You can also use that prefix to access volumes that have no mount point, as in \\?\Volume{<UUID>}, among other things like UNC paths (\\?\UNC\...). So that's not a good name.


Who puts colons into their filenames? Oh, short-sighted people like the designers of OBS (OpenSUSE Build System) who introduced a colon convention into project names.

http://lists.opensuse.org/opensuse-buildservice/2008-12/msg0...

This causes a problem even in the POSIX environment, because the colon is used in PATH and PATH-like environment variables. Usually there is no escape mechanism.


Well, it isn't uncommon to create a log file with a timestamp and a name like foobar.$datetime.log with $datetime expanding to something like 2016-07-20T22:08:38


> Who puts colons in their filenames though?

Twitter does

https://pbs.twimg.com/media/Cn1tGFoXgAApMt3.jpg:large


I try to save papers sometimes and give the filename the same name as the paper, which often has a colon in it.


This is interesting. I was just recently working on an app where I wanted to ensure the UI wouldn't accept problematic characters in filenames. Obviously, Unix has problems with '/'. I'll add ':' to the list. That's unfortunate. What else should portable apps avoid?


Microsoft seems to have a fairly comprehensive list:

https://msdn.microsoft.com/en-us/library/aa365247(VS.85).asp...

They suggest avoiding <>:"/\|?* as well as all ASCII characters 0-31.

ASCII 0 can be really fun. Lots of filesystem APIs deal with NUL-terminated strings (like, all of POSIX) so a zero byte in the middle of your string just truncates it at that point. If you use something that tolerates zero bytes for your UI strings (like NSString on the Mac, maybe C++ UI frameworks dealing with std::string) then the full string may show in the UI and you just mysteriously get a filename that's shorter on disk than what you see on screen.
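Higher-level runtimes often refuse embedded NULs outright rather than risk the silent truncation; Python, for instance:

```python
# Many C APIs would silently truncate "report\x00.txt" to "report";
# Python instead rejects the path before it reaches the C layer.
name = "report\x00.txt"
try:
    open(name, "w")
    raised = False
except ValueError:  # "embedded null byte"
    raised = True
assert raised
```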


ASCII 255 used to be fun in the Windows 3.1 days. DOS would handle it just fine (displaying it as whitespace). Windows Explorer (or whatever it was called back then) would not let you select a directory named like that. Basically this made the directory inaccessible, unless you were dealing with very tech-savvy people.


If you want a serious rabbit hole, think about Unicode characters in filenames. Windows filenames are supposedly UTF-16, but they do not enforce the requirement that surrogate pairs (which represent characters over 0xFFFF, like emoji) must actually be paired, so you're not guaranteed that a UTF-16 decoder will actually read a filename successfully. UNIX filenames are just untyped byte strings that don't contain NUL or /, which by convention are ASCII or UTF-8 these days, but nothing enforces that; if you run ls, it will just print whatever bytes are in the filename to the terminal, and make it the terminal's problem. So round-tripping an arbitrarily weird UNIX filename to Windows, or vice versa, is challenging.

If you're in a position to enforce well-formed Unicode on all platforms, you're much better off. But many things (e.g. backup systems) don't get the option to just refuse files they don't like.
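Python exposes this mismatch directly; a sketch using the "surrogatepass" error handler, which gives a WTF-8-style round-trip:

```python
# A lone high surrogate is legal in a Windows filename,
# but not valid UTF-16, and strict UTF-8 refuses to encode it.
bad = "\ud800" + "file"
try:
    bad.encode("utf-8")
    strict_ok = True
except UnicodeEncodeError:
    strict_ok = False
assert not strict_ok

# "surrogatepass" round-trips the ill-formed string anyway.
raw = bad.encode("utf-8", "surrogatepass")
assert raw.decode("utf-8", "surrogatepass") == bad
```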


This points to a fascinating underlying difference between the operating systems. In general, UNIX attempts to be completely non-character-set aware and is built with the philosophy that how to render characters is strictly the terminal's problem. Windows, on the other hand, has a notion of a system character encoding and will try to keep everything compliant with it (with mixed success).

There is a very important takeaway of this: case-sensitivity. UNIX cannot be case-insensitive for file names because the mapping of lowercase to uppercase characters is dependent on the character encoding used, which it doesn't know. Windows can (and does) coalesce case for file names because it knows the character set in use and can consult the relevant mapping.

This difference in behavior produces all sorts of frustrating behavior when interacting between the two platforms, e.g. the classic case of Windows SMB-mounting a share from a *nix server that contains two files differentiated only by case. It'll show both entries but think they both point to the same thing. On the other hand, it's easy to create file names on a Windows device that are near impossible to name on *nix. These are important things to be aware of if you ever implement a cross-platform network user environment.


> Windows can (and does) coalesce case for file names because it knows the character set in use and can consult the relevant mapping.

Actually, it's not the character set in use. Windows uses a case mapping table which is part of the NTFS filesystem metadata. See for instance https://web.archive.org/web/20110308034840/http://blogs.msdn...

(Yes, this means that the mapping of lowercase to uppercase characters can change if the file is copied to another drive in the same machine!)


There's WTF-8 to convert broken UTF-16 into something UTF-8ish. https://simonsapin.github.io/wtf-8/

I kind of wonder if paths not being allowed to contain NUL or '/' was one reason why, for codepoints represented by more than one byte in UTF-8 (i.e. all non-ASCII codepoints), every byte has the most significant bit set to 1 (https://en.wikipedia.org/wiki/UTF-8#Description). This makes it impossible for a multi-byte sequence to contain valid ASCII chars like `/`.

Note that macOS actually does decomposing Unicode normalisation on file names, I guess because it makes handling case-insensitivity easier. (ASCII-only case folding then handles "o" + combining diaeresis, but not the precomposed ö codepoint.) https://developer.apple.com/library/mac/qa/qa1235/_index.htm...
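The decomposition described here can be observed with Python's unicodedata module (a sketch; macOS's actual normalisation is a custom NFD variant, not plain NFD):

```python
import unicodedata

composed = "\u00f6"                                  # "ö" as one codepoint
decomposed = unicodedata.normalize("NFD", composed)  # "o" + U+0308

# One codepoint becomes base letter + combining diaeresis.
assert len(composed) == 1 and len(decomposed) == 2
# With decomposed names, ASCII-only case folding handles the base letter.
assert decomposed.upper().startswith("O")
# NFC recomposes it back to the single codepoint.
assert unicodedata.normalize("NFC", decomposed) == composed
```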


This is cribbing from source of a filename sanitizer in one of my company's internal libraries. The function is a little... paranoid... so I'm not positive all of these are actually forbidden.

/ and 0x00 for unix

:?"<>/|\* and control chars 0x00–0x1F (i.e. 0–31 decimal) for windows

'~!#$&%^; if there's a chance of filename being passed to shell w/o proper escaping.

Windows also forbids a bunch of filenames matching regex "CON|AUX|PRN|NUL|COM[1-9]|LPT[1-9]"

Also, ending filenames with space or period really messes up windows. File explorer can see it, but can't delete or rename it.

edit: fixed markup
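As a sketch only (the names and exact rule set here are my own, not the internal library's), the rules above map to something like:

```python
import re

# Characters forbidden by Windows, plus '/' for Unix and control chars.
FORBIDDEN = re.compile(r'[<>:"/\\|?*\x00-\x1f]')
# Reserved device names, bare or with any extension.
RESERVED = re.compile(r'^(CON|PRN|AUX|NUL|COM[1-9]|LPT[1-9])(\.|$)', re.I)

def sanitize(name: str, repl: str = "_") -> str:
    name = FORBIDDEN.sub(repl, name)
    if RESERVED.match(name):
        name = repl + name
    # Trailing spaces/periods confuse Windows Explorer.
    return name.rstrip(" .") or repl

assert sanitize("foo:bar") == "foo_bar"
assert sanitize("CON.txt") == "_CON.txt"
assert sanitize("report. ") == "report"
```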


> Also, ending filenames with space or period really messes up windows. File explorer can see it, but can't delete or rename it.

As a related tip, if you need to name a file something like .foo in explorer, it rejects it as "not having a file name". But if you type .foo. then it accepts the name and silently strips the trailing period.


That should be NUL (one L). Interestingly, when I tried it in Powershell, 'type NUL' reports that the file does not exist, but in CMD, 'type NUL' outputs nothing (it's the DOS equivalent of UNIX's /dev/null). So apparently some APIs will allow you to use those as filenames while others will choke on them.


It is likely that the .NET base class libraries block NUL and the other special file names that Win32 supports. This would explain why PowerShell (which is written in C#) behaves differently from cmd (which is written in C).


Thanks, corrected that. Shows what I get for transcribing wrong :)

Yeah, windows is kinda crazy inconsistent for some of these. I had a file (created under Linux) which ended in a space... drove windows nuts. Could list it, open it in some programs, but couldn't even open/rename by shortname under DOS or python.


Yes the reserved file names based on device names can be a tricky issue for portability. We ran into a problem where a source code file was named Con.java and it was impossible to use that repository on Windows. Had to rename it as Con_.java to make it work.


  `echo missed one`


IIRC, windows has a dialog that shows their full list of disallowed characters if you try to use one of them ... so try to make a file with (eg) "\" in the name and see what the dialog says.

disclaimer: i'm remembering something from the Windows 2003 era, so YMMV.


It's still there, as a balloon popup that says a file name can't contain the following characters:

\/:*?"<>|


As of Windows 7, if you create the file, or folder via raw API calls (Say, from the text editor built into FAR Manager), it's possible to work around this restriction. You can also create folders called "", or " ".

Surprisingly enough, FAR will deal with this 'somewhat' gracefully, but unsurprisingly, Windows Explorer will completely break.


Mac OS (i.e., OS 9 and before) had special meaning for colons, too. I wonder what would happen with git on those platforms.

Edit: Apparently colon is _still_ a special character on Mac! http://stackoverflow.com/questions/13298434/colon-appears-as...


And this is how we enter the new era: it goes MacOs, OS X, then macOS. Unfortunately the 10.xx has been kept, which messes with what is (capitalisation aside?) a nice tidy-up. Maybe dropping the name part, Sierra, would have made it better. Relying on readers to spot your capitalisation isn't ideal at all, and if you start a sentence with macOS, how do you capitalise it?


To be super pedantic, it went

1. "Macintosh System Software"

2. "Mac OS" (starting with 7.5/7.6)

3. "Mac OS X"

4. "OS X" (starting with Mountain Lion)

5. "macOS" (starting with Sierra)


I fixed my MacOs in my post.

(For me it went MacOs, OS X, then Windows 10).


Isn't the colon the directory separator character in HFS, akin to the unix '/' and windows '\'?


The flip-side of this:

I was running a fuzz test on a backup tool, which verified that file data and metadata (including timestamps) as reflected by Windows were exactly as produced by the fuzz test.

I noticed that for some ".eml" files this was not the case. The mtime of these files was being modified by something else after the initial create by the application. In the end, it came down to a Windows process that was automatically indexing ".eml" files and creating an ADS for each of them, thereby touching the mtime.

This was intentional on the part of Windows, but I never saw it coming.


The problem should be addressed, but the proposed workaround seems strange. So git should refuse to write the file to disk? How am I supposed to use a git repo that contains such problematic files on Windows then?


How would you propose to "use" a git repo that contains files with unrepresentable file names in the first place? It's the repo that's not portable, not git. You'd have the same problem if someone handed you a zip file or tarball.


What is the alternative? Renaming the file?

This was actually an issue with early versions of Servo on Windows: cloning the repository would fail because it contained a file with a # in the name.

https://github.com/servo/servo/commit/43c999905c01627133240c...


MSYS2 (a Cygwin-based platform) does renaming, mapping colons to U+F03A from the Private Use Area (which renders in Explorer like a bullet point). Its git package cloned the repository from the article with no problem, "ls" shows "foo:bar", and "cat foo:bar" works. Opening the file in non-MSYS tools also has no problems with the exotic character.
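Roughly, the mapping works like this Python sketch (the reverse direction here naively un-shifts any character in the relevant PUA range; the real Cygwin implementation is more careful):

```python
# Cygwin/MSYS2-style trick: shift each Win32-illegal character into the
# Private Use Area at U+F000 + ord(c), so ':' becomes U+F03A.
ILLEGAL = set(':*?"<>|')

def to_windows(name: str) -> str:
    return "".join(chr(0xF000 + ord(c)) if c in ILLEGAL else c for c in name)

def from_windows(name: str) -> str:
    return "".join(
        chr(ord(c) - 0xF000) if 0xF021 <= ord(c) <= 0xF07E else c
        for c in name
    )

assert to_windows("foo:bar") == "foo\uf03abar"
assert from_windows(to_windows("a:b?c")) == "a:b?c"
```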


'#' is allowed. It appears the issue was the wildcard '?' character, which could be argued isn't the best idea to use on *nix either.


> So git should refuse to write the file to disk?

Yes, since it's /impossible/ for the file to have the same name on Windows as on Linux (or whatever OS was originally used to add it to the repository). And yes, git definitely ought to complain loudly in such a case.

I suppose git could be modified to be aware of alternate data streams, but there would probably still be a discrepancy with the way other tools would present the file (think about how "dir foo*" or "dir foo:bar" would behave for such a file on windows vs. linux).

> How am I supposed to use a git repo that contains such problematic files on Windows then?

Unless the repository is usable without those files, you can't. Unsurprisingly, that's the price of being able to use the same set of files in environments that have different file naming rules.


Complaining loudly and making a repo inaccessible are two different things.

I didn't mean that I should be able to build/run/etc. things in the repo that rely on the special filename and magically expect them to work. But this behavior would make me unable to even look at the repo and perform normal git operations, regardless of what it actually contains:

If you interpret "refuse to write the file" as a fatal error, I wouldn't even be able to clone the repo because the clone process would fail.

If you interpret it as non-fatal, I could browse the repo, but would always have a non-clean working set with a "deletion" I cannot undo. This means I cannot pull, rebase or checkout anything. (Unless I actually commit the deletion and remind myself not to push it. On every single branch.)

In no scenario can I access the contents of the file, even if I don't care about the filename at all. Even if I would like to fix the filename issue, I couldn't do so from a Windows pc.

That's why I think solutions using escaping (plus highly visible warnings in git status) are better. Yes, your scripts will still break, but you at least have a chance to fix the mess.


I suppose the real question is: if git is happy to create stream "bar" on file "foo," which is arguably correct behavior under Windows, why doesn't "git status" (and everything else) just work correctly with the "foo:bar" file/stream that git created?

Would you be able to see the contents of such files with "git show"?

http://stackoverflow.com/questions/2071288/equivalent-in-git...


This is how Tortoise SVN handles SVN paths that are invalid on Windows - it doesn't write the offending file.

You should probably not check out such code on Windows in the first place, but if you accidentally do, then you really need to get loud warnings splashed everywhere.


He says in the article (I haven't independently confirmed) that there are already other paths Git will refuse because Windows would misinterpret them.

This would just be adding one more to the list.

As for how you're supposed to use the Git repo on Windows: I guess you aren't?


Putting colons in your filenames is almost as weird as alternate data streams.


It's not a forward slash or a NUL byte. And it is a printable character. Doesn't seem so wrong to me.


It's used as separator in various places on *nix (eg PATH).


So are semicolons on Windows, yet still legal in file names. For PATH you can just quote the ones containing the separator (at least on Windows).


Traditionally, there's no way to quote in PATH on *nix. I do not believe that's changed, so if you cannot just change the name, you'd need to use a workaround like creating a colon-free symlink.


Some tools assume particular characters mean things. For example, GNU tar will assume that a ":" in the archive filename marks a hostname.


Use --force-local to bypass this issue. I got hit by this recently. :)


A colon is a forward slash on MacOS


How do you feel about file extensions longer than three characters? How about filenames with multiple dots in them?


Not sure what the question is here, can you clarify?

Long extensions and multiple dots are perfectly valid in Windows filenames. I use them all the time.

They're not like colon which has a special meaning referencing alternate streams.


The funny thing is that in most Windows tools, it doesn't even mean alternate data streams. It is simply flat-out illegal (except when used as a drive-letter separator); most tools will reject any attempt to save a file with a name of the form "filename:ADS".

Even in managed code tools, it's this way. For instance, in C#, a statement like this:

File.WriteAllText(@"c:\test.txt:teststream", "ADS test");

Will error with "path format is not supported". There's no way to access ADS natively in .NET. The only way to access ADS is to invoke native Win32 methods.


PowerShell can access ADS easily, so presumably there's something in .NET for that.

e.g. Get-Content -Path foo -Stream bar


I can't look at PowerShell's assemblies, but they could just call into native code for that.


These are things that seem normal to me that a Windows user might also describe as "weird", just like the GP post described colons in filenames.


I have no problem with it if they actually have a purpose, like .docx or .tar.gz


So I can't name my file "Theory of everything: 42.pdf"?


"McAfee Web Gateway" thinks this is porn, great.


Why would that be I wonder? I don't see any keywords that might trigger it.

That reminds me of web filtering software that blocked my search for "java proxy", but allowed "java procy", which google understood!


So does BlueCoat. I've submitted it for review, but I think McAffee is maintaining its own list.


Wonder why this site is blocked in UAE! :|



