I have been researching iSCSI implementations on the server to try to understand the differences between them and to get to grips with how they work. This article compares the various methods of connecting to an iSCSI network.
It seems that many people do not know or understand that the generation and transmission of IP packets is a CPU-intensive process. In some operating systems it can also add significant latency, since the data crosses the memory bus and the PCI bus several times before it is actually transmitted.
When managing IP, the server OS must manage the following:
- session state for every TCP connection
- buffer for each session
- memory management and control for every IP session
- data transfer on and off the PCI bus
And when using TCP, the following items must be handled by the driver:
- TCP Window Size
- TCP Window Scale
- TCP Timestamps
- TCP Delayed ACKs
- TCP Selective ACK
- 802.3x Flow Control / Ethernet Pause
- Maximum Segment Size (MSS) and Jumbo Frames
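Some of these per-connection parameters are visible from user space. A minimal sketch of inspecting two of them with Python's standard socket API (the option names are Linux-specific, and the values shown are stack defaults, not tuning advice):

```python
import socket

# Inspect a few of the TCP parameters the stack manages per connection.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_RCVBUF bounds the TCP window the kernel can advertise
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

# TCP_MAXSEG reports the maximum segment size (relates to MTU / jumbo frames)
mss = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)

print(f"receive buffer: {rcvbuf} bytes, MSS: {mss} bytes")
s.close()
```

Every one of these settings has to be tracked for every open connection, which is exactly the bookkeeping the list above describes.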
So you can implement an iSCSI connection on the computer using three basic modes:
- Software initiators
- TCP Offload Engines (TOE)
- Host Bus Adapters
Software Initiators
This is the most common implementation. You can download the Microsoft iSCSI Initiator and get started immediately. Some other vendors also offer initiators, e.g. Dell.
There are several performance issues with this approach:
- you are using a general-purpose CPU to perform the data transformation
- you are performing multiple data copies across the internal bus of your computer
- it is not optimised for performance
For a desktop or low intensity server this might work OK. But for VM platforms and high intensity servers, spending a lot of CPU cycles generating iSCSI packets will impact performance.
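As an illustration of the software path, on Linux the open-iscsi software initiator is driven entirely from user space with iscsiadm; a typical discovery-and-login session looks roughly like this (the portal address and IQN are placeholders for this example):

```shell
# Discover targets advertised by a portal (address is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to one of the discovered targets (IQN is a placeholder)
iscsiadm -m node -T iqn.2001-04.com.example:storage.disk1 \
    -p 192.168.1.10:3260 --login
```

Everything behind these commands, TCP session handling and iSCSI PDU generation alike, runs on the host CPU, which is the cost discussed above.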
TCP Offload Engines (TOE)
For a while I thought TOE cards and HBAs were the same thing, but this is not true. TOE cards improve the TCP performance of a server. Since iSCSI uses TCP for data transfer, this also improves storage performance and reduces latency.
Many servers now ship with TOE-capable cards as standard, but most drivers have the TOE feature disabled. Device driver quality also seems to matter, so make sure you get the latest versions and use quality vendors.
For servers that exchange a lot of data, enabling TOE will improve performance and reduce the server CPU utilisation.
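On Linux you can check which offload features a NIC driver exposes, and toggle them, with ethtool (the interface name is a placeholder; which features exist depends on the hardware and driver):

```shell
# List the offload features the driver supports and their current state
ethtool -k eth0

# Enable TCP segmentation offload, if the hardware supports it
ethtool -K eth0 tso on
```

This is also a quick way to discover the cards that shipped with offload silently disabled.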
Host Bus Adapters
Host Bus Adapter is a generic term for connecting the I/O bus of your server to an external system such as Ethernet or Fibre Channel; thus an Ethernet adapter is also an HBA. Note that both FC and iSCSI people use the term HBA.
Header and Data Digests were added by the iSCSI Working Group as a more robust mechanism for ensuring data integrity compared to TCP checksums. However, iSCSI Header and Data Digest calculations are very CPU intensive.
Only a full iSCSI offload HBA has the logic built into the ASIC to accelerate these calculations. General-purpose NICs and TOEs do not have this innate capability; therefore, the calculations must be performed by the host CPU (if desired). If these calculations are performed by the host CPU, both throughput and IOPS will further degrade, potentially slowing application performance to an unacceptable level.
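To see why the digests are costly without offload: iSCSI header and data digests are CRC32C (Castagnoli) checksums computed over every PDU. A pure-Python, table-driven sketch of the calculation, just to show the per-byte work involved:

```python
# CRC-32C (Castagnoli), the checksum used by iSCSI header/data digests.
# Pure-Python table-driven version; every byte of every PDU passes through
# a loop like this when no HBA offload is available.

POLY = 0x82F63B78  # reflected CRC-32C polynomial

def _make_table():
    table = []
    for i in range(256):
        crc = i
        for _ in range(8):
            crc = (crc >> 1) ^ POLY if crc & 1 else crc >> 1
        table.append(crc)
    return table

_TABLE = _make_table()

def crc32c(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

print(hex(crc32c(b"123456789")))  # -> 0xe3069283, the standard check value
```

Real implementations use wider tables or dedicated CRC instructions, but the per-byte table lookup above is the work an offload ASIC removes from the host CPU.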
Full iSCSI offload HBAs offer SAN Administrators what they need in a storage adapter, including:
- consistently low CPU utilisation
- iSCSI digest reliability at line speeds without impacting the performance of host applications
- tools that show the capacity and performance of your iSCSI connection
It is the management tools that are particularly interesting. Extended reporting on the iSCSI data flows is a real benefit when locating performance or network problems: statistics on packet drops and connection failures both tell you that a problem exists and give you the tools to resolve it.
After researching HBAs, it seems clear that you are more likely to be successful if you purchase iSCSI HBAs for your servers. Many of the features that make Fibre Channel popular are actually derived from the FC HBA, not from an inherent superiority at some other layer.
When implementing an iSCSI backbone you should ensure that you get iSCSI HBAs for your servers. You will improve your server performance, and get better visibility into your iSCSI service and network overlay.