Windows 8 and Server 2012 come with a new file system (alongside NTFS) called ReFS – Resilient File System.
I am in the process of setting up Server 2012 as a file server for my organisation and came across a bit of a dilemma. Of the two great features offered by Server 2012 (both of which, by the way, would be awesome on a file server!), I had to choose one.
Resiliency of the file system to maintain availability and integrity or data deduplication to save on disk capacity and $$$?
Resiliency = ReFS is basically built upon this concept
Data deduplication = Only supported on NTFS
Why choose ReFS over NTFS?
The below is my very brief summary and I definitely advise you to read the TechNet blog about ReFS here.
- Integrity offered by automatic correction of data corruption
- Designed to stay ONLINE as long as possible – if data corruption occurs, only that sector is ‘corrected’ or taken offline. With NTFS volumes, corruption typically means running CHKDSK, which can take hours or even days.
- Salvage – “a feature that removes the corrupt data from the namespace on a live volume”. What this means is that even if there is corruption on the volume which cannot be repaired, the file system will salvage those sectors so that the volume remains online.
So you will probably want to choose ReFS over NTFS if you know you will have a very large amount of data on a given drive and want to offer the best user experience possible by keeping the volume online and taking advantage of automatic repairs.
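If you do go down the ReFS route, creating the volume is straightforward from PowerShell. This is only a sketch – the drive letter, label and share path below are hypothetical for your own environment (Format-Volume and the FileIntegrity cmdlets ship with Server 2012):

```powershell
# Format a new volume as ReFS (hypothetical drive letter E: and label "Data")
Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "Data"

# Integrity streams (checksummed user data) can be inspected and toggled
# per file or folder (hypothetical path)
Get-FileIntegrity -FileName "E:\Shares"
Set-FileIntegrity -FileName "E:\Shares" -Enable $True
```

Worth knowing: integrity streams are what drive the automatic correction behaviour described above, so check whether they are enabled on the data you care about.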
Why choose NTFS over ReFS?
The below is my very brief summary and I definitely advise you to do your own research as every system and application is unique to one’s own environment.
- Been around for almost 20 years (July 2013 will mark its 20th anniversary), so it is not a so-called ‘v1’ release like ReFS
- If you need or use any of the following, as they are not available on ReFS: named streams, object IDs, short names, compression, file-level encryption (EFS), user data transactions, sparse files, hard links, extended attributes, quotas and DFS Replication
- Data deduplication!!! Server 2012 now supports data dedupe, which can save you a lot of storage capacity (and $$$) depending on the type of data your users store.
You can see (without actually enabling data dedupe on the volume) how much disk capacity you could save by running a test tool provided with the Server 2012 installation:
When you install the Data Deduplication role service on a server running Windows Server 2012, DDPEVAL.EXE is also installed in the C:\Windows\System32 folder as an additional command-line tool. DDPEVAL.EXE can be run against any local NTFS volumes or NTFS network shares to estimate the amount of disk space that can potentially be reclaimed by moving that data to a Windows Server 2012 NTFS volume with Data Deduplication enabled.
C:\> DDPEVAL \\server\folder /V /O:logfile.txt
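If the DDPEVAL numbers look good, enabling dedupe on a volume is only a few cmdlets. This is a sketch, assuming the Data Deduplication role service is installed and a hypothetical E: data volume:

```powershell
# Enable deduplication on the volume (hypothetical drive E:)
Enable-DedupVolume -Volume "E:"

# By default only files older than a set number of days are processed;
# lower the age so a test run has something to chew on
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 0

# Kick off an optimization job now rather than waiting for the schedule
Start-DedupJob -Volume "E:" -Type Optimization

# Check progress and space savings once the job has run
Get-DedupStatus -Volume "E:"
```

Note the file-age setting above is just for testing – in production you would normally leave the default so in-use files are not churned unnecessarily.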
I’m no storage expert so there may be some other points I haven’t factored into the above comparisons – if I have missed something critical please comment below!
What I recommend is to at least run the DDPEVAL tool just to see how much storage capacity you could save. Like I said before, every environment and requirement is unique so you need to see which file system will better suit yours.
For my task of building a file server, my choice so far is ReFS as resiliency is more important to us than data deduplication. However if I get time, I will update this post with an output of the DDPEVAL utility.
If you have something to add, feel free to do so below 🙂
Update May 2015: Now that I have a lot more sysadmin experience, I would not recommend deploying ReFS in a production environment unless perhaps for a specific requirement or you have a small number of users. The main reasons for not deploying ReFS are as described above – ReFS is still very immature compared to NTFS and still does not support enterprise features such as DFS-R.