Using the multiple NICs of your File Server running Windows Server



IMPORTANT NOTE: This blog post was created before the release of Windows Server 2012, which introduced SMB 3.0 and the new SMB Multichannel feature and significantly improved SMB's ability to use multiple network interfaces.
1 – Overview

When you set up a File Server, there are advantages to configuring multiple Network Interface Cards (NICs). However, there are many options to consider depending on how your network and services are laid out. Since networking (along with storage) is one of the most common bottlenecks in a file server deployment, this is a topic worth investigating.
Throughout this blog post, we will look into different configurations for Windows Server 2008 (and 2008 R2) where a file server uses multiple NICs. Next, we’ll describe how the behavior of the SMB client can help distribute the load for a file server with multiple NICs. We will also discuss SMB2 Durability and how it can recover from certain network failures in configurations where multiple network paths between clients and servers are available. Finally, we will look closely into the configuration of a Clustered File Server with multiple client-facing NICs.

2 – Configurations

We'll start by examining 8 distinct configurations where a file server has multiple NICs. These are by no means the only possible configurations, but each one has a unique characteristic that is used to introduce a concept on this subject.

2.1 – Standalone File Server, 2 NICs on server, one disabled

This first configuration shows the sad state of many File Servers out there. There are multiple network interfaces available, but only one is actively being used. The other is not connected and possibly disabled. Most server hardware these days does include at least two 1GbE interfaces, but sometimes the deployment planning did not include the additional cabling and configuration needed to use the second one. Ironically, a single 1GbE interface (which provides roughly 100 megabytes per second of throughput) is a common bottleneck for your file server, especially when reading data from cache or from many disk spindles (physical disk throughput is the other most common bottleneck).

Having a single NIC has an additional performance downside if that NIC does not support Receive-side Scaling (RSS). When RSS is not available, a single CPU services all the interrupts from a network adapter. For instance, if you have an 8-core file server using a single non-RSS NIC, that NIC will affinitize to one of the 8 cores, making it even more likely to become a bottleneck. To learn more, refer to the Windows Server documentation on deploying RSS.

2.2 – Standalone File Server, 2 NICs on server, teamed

One simple and effective solution for enabling the multiple NICs on a File Server is NIC Teaming, also known as “Link Aggregation” or “Load Balancing and Failover (LBFO)”. These solutions, provided by vendors like Intel, Broadcom and HP, effectively combine multiple physical NICs into one logical or virtual NIC. The details vary based on the specific solution, but most will provide an increase in throughput and also tolerance to the failure of a NIC or to a network cable being accidentally unplugged.
The NIC team typically behaves as a single NIC, requiring only a single IP address. Once you configure the team itself, the Windows Server and File Server configuration proceeds as if you had only one NIC. However, NIC teaming is not something included with Windows Server 2008 or Windows Server 2008 R2. Support for these solutions (the hardware, the drivers and the configuration tools) is provided by the hardware manufacturer.
You can find Microsoft’s support policy for these types of solutions in the Microsoft Knowledge Base.
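To picture how a team spreads traffic without reordering packets within a conversation, here is a minimal sketch of the flow-hashing idea most teaming modes use. The function and NIC names are hypothetical, not part of any vendor tool:

```python
import zlib

def team_member_for_flow(src_mac, dst_mac, members):
    """Pick the physical NIC that will carry a flow.

    Hashing the flow's endpoints keeps every frame of one conversation on
    the same member NIC (avoiding out-of-order delivery), while distinct
    flows spread across the team.
    """
    digest = zlib.crc32(f"{src_mac}->{dst_mac}".encode())
    return members[digest % len(members)]

team = ["NIC1", "NIC2"]
flow_a = team_member_for_flow("00:11:22:33:44:55", "66:77:88:99:AA:BB", team)
flow_b = team_member_for_flow("AA:BB:CC:DD:EE:FF", "66:77:88:99:AA:BB", team)
print(flow_a, flow_b)  # each flow always lands on the same member
```

The exact hash inputs (MAC addresses, IP addresses, TCP ports) vary by teaming mode, but the sticky-per-flow behavior is the common thread.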

2.3 – Standalone File Server, 2 NICs on server, single subnet

If you don’t have a NIC teaming solution available but you are configuring a File Server, there are still ways to make it work. You can simply enable and configure both NICs, each with its own IP address. If everything is configured properly, both IP addresses will be published to DNS under the file server’s name. The SMB client will then be able to query DNS for the file server name, find that it has multiple IP addresses and choose one of them. Due to DNS round robin, chances are the clients will be spread across the NICs on the file server.
There are several Windows Server components that contribute to make this work. First, there’s the fact that the File Server will listen on all configured network interfaces. Second, there’s dynamic DNS, which automatically registers all the IP addresses under the server’s name (if configured properly). Third, there’s the fact that DNS will naturally round robin through the different addresses registered under the same name. Last but not least, there is the File Client, which will use one of the available IP addresses, giving priority to the first address on the list (to honor the DNS round robin) but using one of the others if the first does not respond quickly. The SMB client will use only one of the IP addresses at a time. More on this later.
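You can see the address list a client has to work with using a short script. This sketch uses Python's standard socket library; the example resolves "localhost" so it runs anywhere, but against a multi-homed file server the list would contain one entry per registered A record:

```python
import socket

def resolve_all(hostname, port=445):
    """Return the IP addresses DNS offers for a name, in the order received.

    The SMB client picks just one of these (favoring the first, which DNS
    round robin rotates between queries); it never uses them all at once.
    """
    infos = socket.getaddrinfo(hostname, port, type=socket.SOCK_STREAM)
    addresses = []
    for _family, _type, _proto, _canonname, sockaddr in infos:
        ip = sockaddr[0]
        if ip not in addresses:
            addresses.append(ip)
    return addresses

# Point this at your file server's name to see its full address list.
print(resolve_all("localhost"))
```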
What’s more, due to a feature called SMB2 durability, it’s possible that the SMB client will recover from the failure of a NIC or network path even if it’s right in the middle of reading or writing a file. More on this later as well.

It's important to note that applications other than the file server and file client might not behave properly with this configuration (they might not listen on all interfaces, for instance). You might also run into issues updating routing tables, especially in the case of a failure of a NIC or removal of a cable. These issues are documented in KB article 175767. For these reasons, many will not recommend this specific setup with a single subnet.

2.4 – Standalone File Server, 2 NICs on server, multiple subnets

Another possible configuration is for each of the File Server NICs to connect to a different set of clients. This is useful to give you additional overall throughput, since you get traffic coming into both NICs. However, in this case, you are using different subnets. A typical case would be a small company where you have the first floor clients using one NIC and the second floor using the other.
While both of the IP addresses get published to DNS (assuming everything is configured correctly) and each of the SMB clients will learn of both, only one of them will be routable from a specific client. From the SMB client’s perspective, that is fine. If one of them works, you will get connected. However, keep in mind that this configuration won’t give your clients a dual path to the File Server. Each set of clients has only one way to get to the server. If a File Server NIC goes bad or if someone unplugs one of the cables from the File Server, some of your clients will lose access while others will continue to work fine.

2.5 – Standalone File Server, 2 NICs on server, multiple subnets, router

In larger networks, you will likely end up with various networks/subnets for both clients and servers, connected via a router. At this point you probably did a whole lot of planning, your server subnets can easily be distinguished from your client subnets and there’s a fair amount of redundancy, especially on the server side. A typical configuration would include dual top-of-rack switches on the server side, aggregated into a central switching/routing infrastructure.
If everything is configured properly, the File Servers will have two IP addresses each, both published to the dynamic DNS. From a client perspective, you have a File Server name with multiple IP addresses. The clients here see something similar to what clients see in configuration 2.3, except for the fact that the IP addresses for the client and servers are on different subnets.
It is worth noting that, in this configuration and all the following ones, you could choose to leverage NIC teaming, as described in configuration 2.2, if that is an option available from your hardware vendor. This configuration might bring additional requirements, since each of the NICs in the team goes into a different switch. The configuration of Windows Server itself would be simplified due to the single IP address, although additional teaming configuration with the vendor tool would be required.

2.6 – Standalone File Server, 2 NICs on “clients” and servers, multiple subnets

This last standalone File Server configuration shows both clients and servers with 2 NICs each, using 2 distinct subnets. While this configuration is unusual for regular Windows clients, servers are commonly configured this way for added network fault tolerance. Here are a few examples of such server workloads:

  • IIS servers that store their HTM and JPG files on a file share
  • SQL Servers that regularly send their database backups to a UNC path
  • Virtualization Servers that use the file server as a library server for ISO files
  • Remote Desktop Servers (Terminal Servers) where many users use the file server to store their home folders
  • SharePoint servers configured to crawl file shares and index them
You can imagine the computers below as part of configuration 2.5 above, only with more servers to the right of the router this time.

2.7 – Clustered File Servers, 2 NICs on servers, multiple subnets, router

If you are introducing dual network interfaces for fault tolerance, you are also likely interested in clustering. This configuration takes config 2.6 and adds an extra file server to create a failover cluster for your file services. If you are familiar with failover clustering you know that, in addition to the IP addresses required for the cluster nodes themselves, you would need IP addresses for each cluster service (like File Service A and File Service B). More on this later.
Although we’re talking about Clustered File Services with Cluster IP addresses, the SMB clients will essentially see a File Server name with multiple IP addresses for each clustered file service. In fact, the clients here see something similar to configurations 2.3, 2.5 and 2.6. It’s worth noting that, if File Server 1 fails, Failover Clustering will move File Service A to File Server 2, keeping the same IP addresses.

2.8 – Clustered File Server, 2 NICs on “clients” and servers, multiple subnets

This last configuration focuses on file clients that are servers themselves, as described in configuration 2.6. This time, however, the File Servers are clustered. If you are interested in high availability for the file servers, it’s likely you would also be clustering the other servers, if the workload allows for it.

3 – Standalone File Server

3.1 – SMB Server and DNS

A Windows Server file server with multiple NICs enabled and configured for dynamic DNS will publish multiple IP addresses to DNS under its name. In the example below, a server with 3 NICs, each in a different subnet, is shown. You can see how this got published to DNS: FS1 shows up with 3 DNS A records, one per subnet. It’s also important to know that the SMB server will listen on all interfaces by default.

3.2 – SMB Client and DNS

From an SMB client perspective, the computers will query DNS for the name of the File Server. In the case of the example above, you will get an ordered list of IP addresses. You can query this information using the NSLOOKUP tool, as shown in the screenshot below.

The SMB client will attempt to connect to all routable IP addresses offered. If more than one routable IP address is available, the SMB client will connect to the first IP address for which a response is received. The first IP address in the list is given a time advantage, to favor the DNS round robin sequence.
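That connection race can be sketched as follows. This is an illustrative approximation in Python, not the actual Windows implementation; the candidate list stands in for whatever DNS returned:

```python
import socket
import threading

def pick_address(candidates, head_start=0.3, timeout=5.0):
    """Race TCP connections to every candidate (ip, port) address and
    return the first one that answers.

    The first address in the DNS-ordered list gets a head start before the
    others are tried, mirroring how the SMB client favors the DNS round
    robin sequence.
    """
    winner = []
    lock = threading.Lock()
    done = threading.Event()

    def attempt(addr):
        try:
            with socket.create_connection(addr, timeout=timeout):
                with lock:
                    if not winner:
                        winner.append(addr)
                        done.set()
        except OSError:
            pass  # unreachable or refused; another candidate may still win

    for i, addr in enumerate(candidates):
        threading.Thread(target=attempt, args=(addr,), daemon=True).start()
        if i == 0:
            done.wait(head_start)  # the time advantage described above
        if done.is_set():
            break
    done.wait(timeout)
    return winner[0] if winner else None
```

Given a list such as `[("192.168.1.21", 445), ("192.168.2.21", 445)]`, the function returns whichever address responds first, with the first entry winning any near tie.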
To show this in action, I created a configuration where the file server has 3 IP addresses published to DNS. I then ran a script to copy a file to a share in the file server, then flush the DNS cache, wait a few seconds and start over. Below are the sample script and screenshot, showing how the SMB client cycles through the 3 different network interfaces as an effect of DNS round robin.
IPCONFIG /FLUSHDNS
PING FS1 -N 1 | FIND "Reply"
CHOICE /T 30 /C NY /D Y /M "Waiting a few seconds... Continue"

3.3 – SMB2 Durability

When the SMB2 client detects a network disconnect or failure, it will try to reconnect. If multiple network paths are available and the original path is now down, the reconnection will use a different one. Durable handles allow the application using SMB2 to continue to operate without seeing any errors. The SMB2 server will keep the handles for a while, so that the client will be able to reconnect to them.
Durable handles are opportunistic in nature and offer no guarantee of reconnection. For durability to occur, the following conditions must be met:

  • Clients and servers must support SMB2
  • Durable handles must be used (this is the default for SMB2)
  • Opportunistic locks (oplocks) must be granted when the files are opened (this is the default for SMB2)
For Windows operating systems, SMB2 is found on Windows Vista, Windows 7 (for client OSes), Windows Server 2008 and Windows Server 2008 R2 (for server OSes). Older versions of Windows will have only SMB1, which does not have the concept of durability.
To showcase SMB2 durability, I used the same configuration shown in the previous screenshot and copied a large number of files to a share. While the copy was going, I disabled Network1, the client network interface that was being used by the SMB client. SMB2 durability kicked in and the SMB client moved to use Network3. I then disabled Network3 and the client started using Network2. You can see in the screenshot below that, with only one of the three interfaces available, the copy is still going.
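Stripped to its essence, the failover behavior in that experiment is a retry across alternate paths. Everything below (the function name, the Network1/Network2 labels, using ConnectionError as the failure signal) is hypothetical shorthand for what SMB2 durability does under the covers:

```python
def run_with_durability(paths, operation):
    """Run `operation` over the first path that works, falling back to the
    remaining paths if the current one fails.

    Loosely mirrors SMB2 durability: the server keeps the handle alive for
    a while, so the client can reconnect over another path and resume
    without the application seeing an error.
    """
    last_error = None
    for path in paths:
        try:
            return operation(path)
        except ConnectionError as exc:
            last_error = exc  # this path failed; try the next one
    raise last_error

# A flaky operation that fails over Network1 but succeeds over Network2:
def copy_files(path):
    if path == "Network1":
        raise ConnectionError("NIC disabled mid-copy")
    return f"copy completed over {path}"

print(run_with_durability(["Network1", "Network2"], copy_files))
```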

4 – Clustered File Server

4.1 – Cluster Networks, Cluster Names and Cluster IP Addresses

In a cluster, in addition to the regular node name and IP addresses, you get additional names for the cluster itself and every service (Cluster Group) you create. Each name can have one or more IP addresses. You can add an IP address per public (client-facing) Cluster Network for every Name resource. This includes the Cluster name itself, as shown below:

For each File Server resource, you have a name resource, at least one IP address resource, at least one disk resource and the file server resource itself. See the example below for the File Service FSA, which uses 3 different IP addresses, one per public Cluster Network.

Below is a screenshot of the name resource properties, showing the tab where you can add the IP addresses:

4.2 – How Cluster IP addresses are published to DNS

Note that, in the cluster shown in the screenshots, we have 5 distinct names, each of them using 3 IP addresses, since we are using 3 distinct public Cluster Networks. You have the names of the two nodes (FS1 and FS2), the name of the cluster (FS) and the names of the two clustered File services (FSA and FSB). Here’s how this shows up in DNS, after everything is properly configured:
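The resulting record set is simply the cross product of names and public Cluster Networks, one A record per pair, as this small sketch (using the example names above) confirms:

```python
# Names from the example cluster: nodes, cluster, clustered file services
names = ["FS1", "FS2", "FS", "FSA", "FSB"]
# One client-facing IP address per public Cluster Network
networks = ["Network1", "Network2", "Network3"]

# One A record per (name, network) pair
a_records = [(name, network) for name in names for network in networks]
print(len(a_records))  # 5 names x 3 networks = 15 A records
```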

4.3 – Cluster Name and Cluster IP Address dependencies

When your clients are using a File Server resource with multiple routable IP addresses, you should make sure the IP addresses are defined as OR dependencies, not AND dependencies, in your dependency definitions. This means that, even if you lose all but one IP address, the file service is still operational on that node. The default is AND, and this will cause the file service to fail over upon the failure of any one of the IP addresses, which is typically not desired.
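The difference between the two dependency types is plain boolean logic, which a two-line sketch makes obvious:

```python
def online_with_or(ip_states):
    """OR dependency: the name stays online while ANY IP address is up."""
    return any(ip_states)

def online_with_and(ip_states):
    """AND dependency: a single failed IP address takes the name down."""
    return all(ip_states)

states = [True, False, False]    # one of three IP addresses still healthy
print(online_with_or(states))    # True: the file service stays online
print(online_with_and(states))   # False: the file service would fail over
```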

Below you can see the File Service resource with only one of the three IP addresses failed. There is an alert, but the file service is still online and will not fail over unless all IP addresses fail.

5 – Conclusion

Network planning and configuration play a major role in your File Server deployment. I hope this blog post has allowed you to consider increasing the throughput and the availability of your File Server by enabling and configuring multiple NICs. I encourage you to experiment with these configurations and features in a lab environment.

What is MPIO and MC/S?


This article will discuss the differences between Multipath I/O (MPIO) and Multiple Connections per Session (MC/S).

What is it?

Using either of these protocols with the iSCSI Target provides redundant connections between the iSCSI Initiator and the iSCSI Target, so that access continues if one connection fails. This is typically known as fail-over.
Another benefit of using MPIO or MC/S is load balancing: both connections can be used at the same time to increase the performance of accessing the iSCSI Target, compared to a single connection. If a network connection fails while load balancing, the path to the iSCSI Target reverts to a single connection.
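A rough sketch of that combined load-balancing-and-failover behavior follows. The session object and path labels are hypothetical, not a real initiator API:

```python
class MultipathSession:
    """Round-robin I/O requests across active paths; when a path fails,
    keep going over the remaining path(s), which is the behavior both
    MPIO and MC/S provide.
    """
    def __init__(self, paths):
        self.paths = list(paths)
        self._next = 0

    def pick_path(self):
        """Return the path the next I/O request should use."""
        path = self.paths[self._next % len(self.paths)]
        self._next += 1
        return path

    def path_failed(self, path):
        """Drop a failed path; traffic reverts to the survivors."""
        self.paths.remove(path)

session = MultipathSession(["eth0->target", "eth1->target"])
print([session.pick_path() for _ in range(4)])  # alternates between paths
session.path_failed("eth0->target")
print([session.pick_path() for _ in range(2)])  # single surviving path
```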

Basic Requirements

Using either MPIO or MC/S requires at least two network ports on both the Initiator and the Storage Array; it is also strongly recommended to place the network cards on separate networks (or different subnets). Because of the network complexity, it's best that MPIO or MC/S is deployed by experienced network administrators.

  • Note: It is not recommended to use MPIO or MC/S together with Link Aggregation. Select one method (MPIO or MC/S) or use Link Aggregation; using both, while technically possible, is not practical and involves a complicated network setup.
  • To use MPIO or MC/S, proceed to the iSCSI Target Advanced Options of the Synology DiskStation and enable "Allow multiple sessions from one or more iSCSI Initiators"

Which method to use?

It is recommended to use whichever method is available within your network, as both achieve the same goals.

  • MPIO has wider support, as it is supported by various technologies, including disk controllers, the iSCSI protocol and Fibre Channel. It also has wider software support, including Linux, VMware, and Microsoft platforms.
  • MC/S is easier to deploy, as it involves fewer steps; note that, at the time this article was written, MC/S was not supported by known Unix/Linux or Apple initiators.
