Our customer has experienced quite low performance when clients access NAS volumes through 4x 1 Gbps NICs in a bond.
Open-E DSS V6 b5845.
LSI 2108 RAID controller with 2x RAID6 volumes consisting of 12x 2 TB SATA HDDs in total; read and write cache enabled.
The additional 4x 1 Gbps card is based on two Intel 82576 dual-port Gigabit Ethernet controllers.
The bond is created on the 4x 1 Gbps NICs (with the default type_of_bonding="balance-rr").
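For context, on a stock Linux system the bonding driver mode is typically set through module options, for example in a modprobe config file. This is only an illustration of what "balance-rr" vs. "balance-tlb" means at the driver level; the file name and the miimon value are assumptions, and DSS V6 actually manages this setting through its own interface:

```shell
# /etc/modprobe.d/bonding.conf  (illustrative; DSS V6 sets this via its GUI)
# mode=balance-rr   stripes packets round-robin across all slave NICs
# mode=balance-tlb  balances outgoing flows per slave instead
# miimon=100        link-check interval in ms (assumed value)
options bonding mode=balance-rr miimon=100
```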
So, I did some experiments with bonding 3x 1 Gbps NICs in my lab. The client and the DSS are attached to the same 1 Gbps switch.
It is true: sequential write performance from the client to the DSS V6 bond ("balance-rr") is as low as 30-45 MB/s
(sequential write performance from the client to a single 1 Gbps NIC on the DSS, without bonding, is as expected: about 100-105 MB/s).
Sequential read performance is OK.
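For reference, this is roughly how the sequential-write numbers can be reproduced from a Linux client. The share path and file size are placeholders, not details from the original post; writing to a local file is shown only so the command is self-contained:

```shell
# Sketch of the sequential-write test (paths are placeholders).
# On a real client the NAS share would be mounted first, e.g.:
#   mount -t nfs dss:/share /mnt/dss && TARGET=/mnt/dss/bondtest.bin
TARGET=${TARGET:-/tmp/bondtest.bin}

# Write 256 MiB in 1 MiB blocks; conv=fsync makes dd flush the data
# before reporting the throughput figure on stderr.
dd if=/dev/zero of="$TARGET" bs=1M count=256 conv=fsync
```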
I have checked the other types_of_bonding, "balance-tlb" and "balance-alb", and
sequential write performance is better and as expected: about 100 MB/s.
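When switching modes it is worth confirming which mode the bond is actually running. A minimal check, assuming a standard Linux bonding driver and an interface named bond0 (both assumptions on a DSS appliance):

```shell
# Print the active bonding mode, e.g.
# "Bonding Mode: transmit load balancing (balance-tlb)".
# bond0 is an assumed interface name.
if [ -r /proc/net/bonding/bond0 ]; then
    grep "Bonding Mode" /proc/net/bonding/bond0
else
    echo "no bond0 on this host"
fi
```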
Why is performance low with the default type of bonding, "balance-rr"?
Why is performance much better with the "balance-tlb" or "balance-alb" bonding types?