
Thread: xenserver 6.1 open e dss v7 iscsi best practice questions

  1. #1
    Join Date
    Dec 2012
    Location
    Italy
    Posts
    14

    Default xenserver 6.1 open e dss v7 iscsi best practice questions

    Hi folks, I'm currently evaluating DSS V7 and planning a purchase within a couple of weeks if everything works as I hope.
    The idea is to have 2 XenServer 6.1 hosts + 1 Open-E DSS V7 SAN.
    Currently the 2 Xen hosts have 8 NIC ports each: 1 for management, 3 bonded together (LACP) for client access, and 4 NICs dedicated to the iSCSI SAN (currently bonded).
    The DSS V7 server has 6 NICs: 2 for management and 4 for iSCSI (currently bonded).
    I'm planning to run 4-6 Windows 2008 R2 virtual machines on the two hosts.
    The VMs will be placed in a storage repository created on the Open-E SAN.
    Reading around and googling, I was not able to find any guidance on the best performance and reliability settings for the iSCSI connection.
    All I know is that with bonding I'm not getting the best performance right now, just redundancy.

    Would it be better to disable the bond (and the LACP) on both the Xen hosts and the Open-E and set up 4 separate networks, e.g. 192.168.(1-4).10 for the Open-E and 192.168.(1-4).(1-2) for the hosts, and enable multipath?
    Since I will set up only 1 storage repository with 1 VDI per VM, how many iSCSI targets would you configure: 1 per NIC port (1 per network), or just 1?
    All in round-robin mode?
    What about connection reliability? Will I still have redundancy if one of the networks fails?
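
    To make the layout I have in mind concrete, here is the addressing plan (the /24 subnets are just my working assumption):

        # storage subnet plan: one subnet per NIC port, no bond
        # network          open-e         host1          host2
        192.168.1.0/24     192.168.1.10   192.168.1.1    192.168.1.2
        192.168.2.0/24     192.168.2.10   192.168.2.1    192.168.2.2
        192.168.3.0/24     192.168.3.10   192.168.3.1    192.168.3.2
        192.168.4.0/24     192.168.4.10   192.168.4.1    192.168.4.2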

    Can someone give me some suggestions about the best config?

    Thanks

    Andrea

  2. #2
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    This will show you how to configure MPIO with XenServer. While the video is for DSS V6, it works the same way in DSS V7:
    http://www.open-e.com/service-and-su...ts-and-videos/
    --How to setup DSS V6 iSCSI Failover with XenServer using Multipath

  3. #3
    Join Date
    Dec 2012
    Location
    Italy
    Posts
    14

    Default

    Thanks Admin,

    I watched the video, but it doesn't really answer all my questions.
    I can only assume that my configuration idea is reasonably good.
    What about connection reliability? What happens if one port on the Open-E DSS server fails? I'll lose 1 iSCSI target, but what about the data?

    In case I have already created the storage repository using the bonded NICs (so only 1 iSCSI target) and I want to activate multipath, I have to change all the network settings on the Open-E and the servers and create further iSCSI targets. Do I lose all my data?

    Thanks

    Andrea

  4. #4
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935

    Default

    Quote Originally Posted by aluisell View Post (see post #3 above)
    ** MPIO gives you link aggregation plus the benefit of redundancy, whether you connect to a single target or to multiple targets. The number of targets does not dictate the number of paths, and the number of paths does not dictate the number of targets.
    ** If you change from a bonded interface to single paths and configure MPIO, the data is not affected.
    ** Whether it's a bonded interface or MPIO, if a path fails, targets are not lost and data is not affected.

    The bottom line here is that the bond type or the number of connections has nothing to do with the number of targets or the ability to access the data. For data to become inaccessible, all paths need to fail.
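
    One way to see this for yourself on a XenServer host is to watch the paths while pulling a single cable; a minimal sketch (the exact output depends on your setup):

        # one iSCSI session per path/portal
        iscsiadm -m session
        # multipath topology: after unplugging one cable the affected path is
        # marked as failed while I/O continues over the remaining paths
        multipath -ll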

  5. #5
    Join Date
    Dec 2012
    Location
    Italy
    Posts
    14

    Default

    If the number of targets does not dictate the number of paths, and the number of paths does not dictate the number of targets, what does drive the number of iSCSI targets? Is there a rule of thumb for choosing the best settings? I know that the number of spindles, the RAID level, etc. are also relevant for performance. I chose hardware RAID plus SSD cache, with 6 spindles in RAID 10; that should give me enough performance.
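
    As a rough back-of-the-envelope estimate (the ~120 MB/s sequential per spindle figure is only my assumption):

        reads:  RAID 10 can stripe across all 6 spindles -> ~6 x 120 = ~720 MB/s
        writes: only the 3 mirror pairs count            -> ~3 x 120 = ~360 MB/s

    so on paper the array should keep up with four gigabit links, at least for reads, even before the SSD cache helps.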

    Thanks

    Andrea

  6. #6
    Join Date
    Dec 2012
    Location
    Italy
    Posts
    14

    Default

    Hi

    I did some further testing.
    Multipath is enabled on both XenServers, and I tested 4 different network configurations on XenServer and 4 different network configurations on the Open-E.
    I also ran performance tests with the 4 networks bonded in balance-rr on the Open-E and 4 bonded in active-active on Xen.
    All tests were run with MPIO activated. The best performance was about 200 MB/s, not 400 as expected.
    I can't understand the reason for that.
    I noticed that even though I have 4 different networks, Xen says I have 5 MPIO paths/sessions. Why 5? It seems to also use the management network of the Open-E. Is that correct?

    Can you point me in the right direction, please?

    Thanks

    Andrea

    I noticed the same behavior after bonding... 2 MPIO paths/sessions and not 1...
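
    In case it helps anyone checking the same thing, listing the session portals shows where the extra path comes from (the management address is from my own setup):

        # one line per session; if one portal is the Open-E management address
        # (192.168.0.230 in my case), that would explain the extra 5th path
        iscsiadm -m session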

  7. #7
    Join Date
    Aug 2010
    Posts
    404

    Default

    Are you using Xen Tools?
    And what is the current DSS build that you are using?

    We recommend you open a support ticket and provide us with your system log files so we can check this in more detail. Thank you.

  8. #8
    Join Date
    Dec 2012
    Location
    Italy
    Posts
    14

    Default

    Hi ai-s,

    I'm using DSS V7 build 6806, the evaluation version. As I wrote, I'm planning to purchase it within 2 weeks if everything works as expected. I doubt I can open a ticket for the evaluation version.


    Andrea

  9. #9
    Join Date
    Dec 2012
    Location
    Italy
    Posts
    14

    Default

    Hi,

    I think I was able to get it working properly.

    What I did was:

    1) In XenServer, for each host, create 4 networks dedicated to the storage, e.g. 172.10.1.1, 172.10.2.1, 172.10.3.1, 172.10.4.1 for the first host, and 172.10.1.2 through 172.10.4.2 for the second host.
    2) Do the same on the Open-E (172.10.1.10 through 172.10.4.10), with no bond at all on either the XenServers or the Open-E.
    3) Put the XenServer in maintenance mode and activate support for multipath.
    4) Edit the file /etc/multipath.conf on both XenServers and add the following, as per your PDF/video suggestion on page 59:
    http://www.vimeo.com/moogaloop.swf?clip_id=19285099
    device {
        vendor "SCST_FIO|SCST_BIO"
        product "*"
        path_selector "round-robin 0"
        path_grouping_policy multibus
        rr_min_io 100
    }
    5) It is also important to follow the instructions on page 61 and add the iptables rule to the file /etc/rc.local on both XenServers.
    In my case it was: iptables -I INPUT -s 192.168.0.230 -j DROP (to exclude the management interface from multipath); otherwise multipath will also try to establish a connection on the management interface of your Open-E.
    6) Go back to XenCenter and add your iSCSI storage (a quick way to verify the paths is sketched below).
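
    To double-check the result on each XenServer host (the IQN and SCSIid below are placeholders, not my real values):

        # each of the four portals should have exactly one session
        iscsiadm -m session
        # the multipath topology should show 4 active paths in round-robin
        multipath -ll

        # optional: create the SR from the CLI instead of XenCenter
        xe sr-create name-label="DSS-iSCSI" shared=true content-type=user type=lvmoiscsi \
            device-config:target=172.10.1.10 \
            device-config:targetIQN=iqn.2012-12.com.example:dss \
            device-config:SCSIid=<scsi-id>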

    Thanks for your suggestion

    Andrea

  10. #10
    Join Date
    Dec 2012
    Location
    Italy
    Posts
    14

    Default

    Hi, I forgot to post the link to the PDF document...

    http://www.google.it/url?sa=t&rct=j&...55534169,d.Yms



