
Thread: Can't find units with an Adaptec 2200S

  1. #11


    It's definitely something with the card. I unplugged one of the SCSI cables from the Adaptec 2200S and plugged it into the onboard SCSI controller on the motherboard, and Open-E identified and recognized all of the drives properly. I also tried turning off the card's BIOS and BBS (BIOS Boot Specification); without both of those enabled, the machine won't boot properly at all.

  2. #12


    It's definitely the Adaptec. I had a spare (older) Dell PERC3 dual-channel (an old LSI MegaRAID card). I swapped that in place of the Adaptec 2200S, and things appear to be working properly now... very, very strange.

  3. #13


    I've now gotten everything working just fine with the MegaRAID card (AMI QLA12160 = Dell PowerEdge RAID Controller 3/DC). I've been doing a bit of performance testing with HD Tune (I realize this doesn't necessarily give good absolute numbers, but it should at least give a fair relative measure for drives in the same system).

    My main internal RAID on the workstation is a RAID5 of four 143 GB Seagate Savvio 10K.2 drives, which gives an average read transfer rate of 203 MB/s.

    For comparison, I also benchmarked my little 120 GB Lacie portable hard drive (5400 rpm, 2.5" drive). Over firewire, it was about 29 MB/s; over USB, about 28 MB/s.

    I also have a Thecus N5200 Pro, an iSCSI box with five SATA Seagate Barracuda ES.2 drives in a RAID 5; it had an average transfer rate of about 36.5 MB/s.

    Finally, there are the two RAIDs set up within the Open-E DSS Lite box, which is a Dell Precision Workstation 690 with 3 GB of RAM and the aforementioned two-channel MegaRAID card with 64 MB of RAM. The first array is a RAID 0 of two 10,000 RPM 300 GB Maxtor Atlas 10K.V U320 SCSI drives; it gave a transfer rate of 29.5 MB/s. The second is a RAID 5 of four 73 GB Seagate Cheetah 15K.3 drives; its transfer rate was about 26.2 MB/s. In all cases, read and write caching were both enabled.

    What I don't understand is why the RAID arrays in the Open-E box are so slow. Internally, they are good for hundreds of megabytes per second. All of the machines have GbE, and 30-40 MB/s is nowhere near the maximum bandwidth, even for a single adapter.
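
    For scale, the back-of-the-envelope arithmetic (plain shell and bc; 125 MB/s is the raw 1 Gbit/s line rate before any protocol overhead):

        # 1 Gbit/s expressed in MB/s, before TCP/iSCSI overhead
        echo "scale=1; 1000 / 8" | bc          # -> 125.0
        # Share of the link a 36.5 MB/s transfer actually uses
        echo "scale=1; 36.5 * 100 / 125" | bc  # -> 29.2 (percent)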

    But the performance I'm getting from dedicated SCSI hardware is poorer than that from a plug-in USB drive. What am I doing wrong? What can I do to fix this?

    Thanks again,

    Peter

  4. #14


    Not sure why. Try checking the SCSI card's speed settings; I have seen some cards act strangely with too high a speed setting. Also update all firmware, though I believe you already have this in place. I know you have set this up with the cache enabled, but please confirm that with the manufacturer.

    In test.log, look at the "hdparm -t /dev/sdb" section to check the "Timing buffered disk reads" figure.

    Check the memory info in the "cat /proc/meminfo" section to see whether the system is running low on memory.
    Check the NICs for any rx or tx errors in the "ifconfig -a" output.
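
    Roughly, those are the outputs of standard Linux commands; on an ordinary Linux box the same checks look like the sketch below (the device name /dev/sdb is only an example and will vary):

        # Buffered sequential read speed of one disk (device name is an example)
        hdparm -t /dev/sdb

        # Memory details -- look at MemFree and Cached
        cat /proc/meminfo

        # Interface counters -- look for non-zero rx/tx error or dropped counts
        ifconfig -a
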
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  5. #15


    So the cache is enabled, as it should be. As for memory, the system has 3 GB of RAM, and from the console tools, it looks like there's about 86% free.

    How do I access ifconfig or hdparm? That is, how do I get to the command line interface? Thanks!

  6. #16


    My mistake, I left out an important detail: to view the information that might be of help, download the logs from the GUI at Status > Hardware > Logs and look in test.log. We do not provide shell or telnet access to the OS.
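
    Once the downloaded bundle is unpacked on your desktop, something along these lines will pull out the relevant sections of test.log (a rough sketch; the exact layout of the download may differ):

        # Assumes test.log has been extracted into the current directory
        grep -A 5 "hdparm -t" test.log        # Timing buffered disk reads
        grep -A 10 "meminfo" test.log         # memory section, if present
        grep -i -B 2 -A 8 "errors" test.log   # NIC rx/tx error counters, if present
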
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  7. #17


    Can you remove the drives that make up the RAID 5 array from the system (physically)?
    Maybe DSS Lite is adding the raw drives together and coming up with a total of over 1 TB.

  8. #18


    Actually, the total amount of space is well under 1 TB. There are 2 x 300 GB in a RAID 0, plus 4 x 73 GB in a RAID 5, so the usable total is 600 + about 220 = roughly 820 GB (the RAID 5 loses one drive's worth of capacity to parity). After switching to a completely different card, things are just fine.
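
    Spelled out in shell arithmetic (usable capacity in GB; a RAID 5 of n drives gives n-1 drives' worth of space):

        echo $(( 2 * 300 ))          # RAID 0 of two 300 GB drives  -> 600
        echo $(( (4 - 1) * 73 ))     # RAID 5 of four 73 GB drives  -> 219
        echo $(( 2*300 + 3*73 ))     # combined usable space        -> 819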

    As for performance, I'm not sure HD Tune is such a good benchmark for this stuff (its main advantages, though, are that it's free, relatively fast, and reproducible). In addition to those numbers, I spent a bit more time copying my own data files.

    I had a set of digital video files, each about 2 GB, for a total of about 14 GB. Copying to the RAID 0 gave an average of about 53 MB/sec; to the RAID 5, about 45 MB/sec. This is a lot more reasonable. Over the long run, with a big mix of files (160 GB total) of all sizes, I get about 19 MB/sec with the RAID 5, which is reasonable since there are a bunch of smaller files.
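
    For reference, those MB/sec figures are just total size divided by elapsed copy time; a minimal sketch with placeholder paths, and an elapsed time back-calculated from the ~53 MB/sec number:

        # Time a copy to the volume as mounted on the client (paths are placeholders)
        time cp /data/video/*.dv /mnt/dss-raid0/

        # Throughput in MB/s = size in MB / elapsed seconds, e.g. ~14 GB in ~270 s
        echo $(( 14 * 1024 / 270 ))   # -> 53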

    This seems a bit better overall, and suggests that there isn't anything seriously wrong with the setup. I'm still interested in others' real-world experiences, though. Is there much more performance to be had? Right now, I'm using the integrated Ethernet controllers on the workstation motherboards, connected through a dedicated switch. Should I expect a significant increase by going to dedicated, plug-in server Ethernet cards? Will jumbo frames make a big difference? It doesn't look like I'm very close to saturating the Gig-E connection, but should I be considering dual-port cards? Thanks!
