Diffstat (limited to 'Documentation/networking/netvsc.txt')
-rw-r--r-- | Documentation/networking/netvsc.txt | 75
1 file changed, 0 insertions, 75 deletions
diff --git a/Documentation/networking/netvsc.txt b/Documentation/networking/netvsc.txt
deleted file mode 100644
index 92f5b31392fa..000000000000
--- a/Documentation/networking/netvsc.txt
+++ /dev/null
@@ -1,75 +0,0 @@
-Hyper-V network driver
-======================
-
-Compatibility
-=============
-
-This driver is compatible with Windows Server 2012 R2, 2016 and
-Windows 10.
-
-Features
-========
-
-  Checksum offload
-  ----------------
-  The netvsc driver supports checksum offload as long as the
-  Hyper-V host version does. Windows Server 2016 and Azure
-  support checksum offload for TCP and UDP for both IPv4 and
-  IPv6. Windows Server 2012 only supports checksum offload for TCP.
-
-  Receive Side Scaling
-  --------------------
-  Hyper-V supports receive side scaling. For TCP & UDP, packets can
-  be distributed among available queues based on IP address and port
-  number.
-
-  For TCP & UDP, the hash level can be switched between L3 and L4
-  with the ethtool command. TCP/UDP over IPv4 and IPv6 can be set
-  differently. The default hash level is L4. We currently only allow
-  switching the TX hash level from within the guest.
-
-  On Azure, fragmented UDP packets have a high loss rate with L4
-  hashing. Using L3 hashing is recommended in this case.
-
-  For example, for UDP over IPv4 on eth0:
-  To include UDP port numbers in hashing:
-      ethtool -N eth0 rx-flow-hash udp4 sdfn
-  To exclude UDP port numbers from hashing:
-      ethtool -N eth0 rx-flow-hash udp4 sd
-  To show the UDP hash level:
-      ethtool -n eth0 rx-flow-hash udp4
-
-  Generic Receive Offload, aka GRO
-  --------------------------------
-  The driver supports GRO and it is enabled by default. GRO coalesces
-  like packets and significantly reduces CPU usage under heavy Rx
-  load.
-
-  SR-IOV support
-  --------------
-  Hyper-V supports SR-IOV as a hardware acceleration option. If SR-IOV
-  is enabled in both the vSwitch and the guest configuration, then the
-  Virtual Function (VF) device is passed to the guest as a PCI
-  device. In this case, both a synthetic (netvsc) and a VF device are
-  visible in the guest OS, and both NICs have the same MAC address.
-
-  The VF is enslaved by the netvsc device. The netvsc driver will
-  transparently switch the data path to the VF when it is available
-  and up. Network state (addresses, firewall, etc.) should be applied
-  only to the netvsc device; the slave device should not be accessed
-  directly in most cases. The exceptions are when a special queue
-  discipline or flow direction is desired; these should be applied
-  directly to the VF slave device.
-
-  Receive Buffer
-  --------------
-  Packets are received into a receive area which is created when the
-  device is probed. The receive area is broken into MTU-sized chunks,
-  each of which may contain one or more packets. The number of receive
-  sections may be changed via ethtool Rx ring parameters.
-
-  There is a similar send buffer which is used to aggregate packets
-  for sending. The send area is broken into chunks of 6144 bytes, each
-  of which may contain one or more packets. The send buffer is an
-  optimization; the driver will fall back to a slower method for very
-  large packets or if the send buffer area is exhausted.
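The rx-flow-hash options shown in the removed text apply per protocol
family, so UDP over IPv4 and UDP over IPv6 can be tuned independently.
A minimal sketch, assuming an interface named eth0:

      ethtool -N eth0 rx-flow-hash udp4 sd     # L3 hashing (addresses only) for UDP/IPv4
      ethtool -N eth0 rx-flow-hash udp6 sdfn   # L4 hashing (addresses + ports) for UDP/IPv6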
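Since the removed text notes that GRO is enabled by default, its state
can be checked or toggled through ethtool's generic offload interface;
eth0 is again an assumed interface name:

      ethtool -k eth0 | grep generic-receive-offload   # query current GRO state
      ethtool -K eth0 gro off                          # disable GRO
      ethtool -K eth0 gro on                           # re-enable GRO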
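The SR-IOV section says that network state belongs on the netvsc
device, with the VF slave touched directly only for queue disciplines
or flow steering. A sketch of that split, where eth0 is the synthetic
netvsc device and enP1p0s2 is a hypothetical VF slave name:

      ip addr add 10.0.0.5/24 dev eth0      # addresses are configured on the netvsc device
      tc qdisc add dev enP1p0s2 root fq     # a queue discipline, applied directly to the VF slave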
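The receive-section count mentioned under "Receive Buffer" is exposed
through ethtool's ring-parameter interface, per the removed text; eth0
is assumed:

      ethtool -g eth0           # show current and maximum Rx/Tx ring sizes
      ethtool -G eth0 rx 1024   # request 1024 receive sections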