Thread to note and discuss JovianDSS limitations, both numeric and descriptive. It is better to know them in advance than to run into them eventually. A couple of questions from me (to start the conversation):
- How many drives is JDSS able to logically support? 256 (a Linux limitation, as far as I know) or fewer? Today it is not hard to build a 2-node cluster, putting 4 - 6 SAS controllers into each node, each of which supports up to 256 target devices. It is also possible to daisy-chain SAS enclosures with up to 90 HDDs inside. So 1000+ physical HDDs in a single system is not a dream. Will JDSS be able to recognize and access all these drives, at least?
On the other hand, look at the document. The major number blocks 8, 65 - 71 and 128 - 135 are allocated for SCSI disk mapping, so we have 16 allocated majors. Minor numbers range from 0 to 255, and each disk gets 16 consecutive minors (the first represents the entire disk, the remaining 15 its partitions).
The number of possible drives is therefore 16 * (256 / 16) = 256.
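Spelled out as a tiny Python check (assuming the classic static sd allocation from the kernel's devices.txt):

    # Static SCSI-disk device-number arithmetic (classic sd allocation).
    sd_majors        = 16     # major numbers 8, 65-71 and 128-135
    minors_per_major = 256    # minor numbers 0..255
    minors_per_disk  = 16     # whole disk + up to 15 partitions

    max_static_disks = sd_majors * (minors_per_major // minors_per_disk)
    print(max_static_disks)   # -> 256

(I am not sure whether the extended/dynamic sd device numbers in newer kernels lift this static limit, hence the question.)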
- Does the pool or a volume have its own limitations in size or number of components? The ZFS limits are: 2^78 bytes per volume, 2^64 bytes per file, 2^48 files per directory, 255 characters per filename. Does JDSS simply inherit these, or does it add restrictions of its own?
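Just to put those quoted figures into more familiar units, a small helper (the numbers are only the ones I quoted above, nothing JDSS-specific):

    # Convert the quoted ZFS limits into human-readable binary units.
    def human(n):
        units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB", "YiB"]
        i = 0
        while n >= 1024 and i < len(units) - 1:
            n /= 1024
            i += 1
        return f"{n:g} {units[i]}"

    print(human(2**78))    # 256 ZiB  -- quoted volume limit
    print(human(2**64))    # 16 EiB   -- maximum file size
    print(f"{2**48:.2e}")  # ~2.81e+14 files per directory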
Thank you for posting this comment.
The information you mentioned is theoretically correct.
However, based on practical tests carried out in our QA lab, we can say that such a limit does not exist for JovianDSS.
The Open-E QA team has tested systems with 500 physical hard drives, and such scenarios worked seamlessly.
Moreover, a scenario with 1000 virtual disks attached over an iSCSI initiator has also been tested and completed without any problems.
Please note that we manage disks using the disk-by-id naming convention, which is why the limit can be much higher.
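For readers who want to see what that naming convention looks like on a plain Linux system, here is a minimal, generic illustration (standard udev symlinks; this is not JovianDSS code):

    # List stable /dev/disk/by-id names and the kernel devices they resolve to.
    import os

    by_id = "/dev/disk/by-id"
    for name in sorted(os.listdir(by_id)):
        if "-part" in name:                      # skip partition symlinks
            continue
        target = os.path.realpath(os.path.join(by_id, name))
        print(f"{name} -> {target}")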
As an example, we have a client with a 2 PB (petabyte) system in production, where the actual formatted available space is 1.52 PB. This solution was built using 280 disks of 8 TB each.
Below are the configuration summary and the log dump (zpool_log.txt) showing the actual number of disks used in this system:
1.55 PB formatted
8            un-formatted RAW disk capacity (TB)
14           # of data groups
20           # of disks in group
3            # of parity disks
0.91         formatted capacity factor (TB)
0.9          pool max used capacity              155.09
===========                                      ===========
1558.51 TB   net formatted capacity
1713.60 TB   net un-formatted capacity
280          # of disks total
2240         RAW DISK CAPACITY (TB)
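The arithmetic behind those figures appears to work out roughly as follows (a sketch only; it assumes the 14 groups are 20-disk RAID-Z3 vdevs, that the 0.91 "formatted capacity factor" is the TB-to-TiB conversion shown rounded, and that 0.9 is the maximum recommended pool fill level):

    # Back-of-envelope reconstruction of the capacity summary above.
    TB_TO_TIB = 1e12 / 2**40        # ~0.9095, shown rounded as 0.91 above

    groups          = 14            # data groups (assumed RAID-Z3 vdevs)
    disks_per_group = 20
    parity_disks    = 3             # per group
    disk_tb         = 8             # raw capacity per disk, TB
    max_fill        = 0.9           # pool max used capacity

    total_disks     = groups * disks_per_group                   # 280
    raw_tb          = total_disks * disk_tb                      # 2240 TB
    data_disks      = groups * (disks_per_group - parity_disks)  # 238

    net_unformatted = data_disks * disk_tb * max_fill            # 1713.60 TB
    net_formatted   = net_unformatted * TB_TO_TIB                # ~1558.51

    print(f"raw:             {raw_tb} TB")
    print(f"net unformatted: {net_unformatted:.2f} TB")
    print(f"net formatted:   {net_formatted:.2f}")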
Could you estimate the theoretical performance limit of a JDSS node equipped with current hardware? For example, what 4K-block random read IOPS might a JDSS server deliver with dual top-end CPUs (if that is even necessary), maximum RAM, and many enterprise-class SSDs attached as storage via multiple HBAs?
As I can see, the systems at the first and second links are limited to ~500K IOPS per node. As far as I know, the typical performance of a current HBA is 700K - 1.2M IOPS, and PCIe / RAM latency and the software architecture may add further limitations. Are these systems bottlenecked by the SSDs, or is roughly ~1M IOPS per node the overall ceiling for such systems?
First of all, we are sorry for the delay in answering your post.
Considering both systems, the bottleneck can be as you described in your post. Additionally, there may be other limitations, such as the network protocol, network topology or even the performance of the client machine.
There are many factors that can affect performance.
So, to answer your question: JovianDSS is not limited to the mentioned performance or IOPS figures. It all depends on the hardware, configuration and intended use.
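To make that concrete, the kind of back-of-envelope reasoning involved looks like the sketch below; every number in it is an illustrative assumption, not a measured JovianDSS figure:

    # Illustrative only: the achievable 4K random-read IOPS of a node is
    # roughly bounded by the lowest per-component ceiling in the data path.
    component_iops = {
        "ssd_pool":  24 * 90_000,     # e.g. 24 enterprise SSDs, ~90k read IOPS each
        "hbas":       3 * 1_000_000,  # e.g. 3 HBAs, ~1M IOPS each
        "network":    2 * 1_250_000,  # e.g. 2x 100GbE, conservative 4K budget per port
        "cpu_stack":  1_000_000,      # software / CPU path budget (a guess)
    }

    bottleneck = min(component_iops, key=component_iops.get)
    print(f"estimated ceiling: ~{component_iops[bottleneck]:,} IOPS "
          f"(limited by {bottleneck})")

Whichever of those ceilings is lowest in a given build is the one a benchmark will show, which is why the question cannot be answered with a single number.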
If you are interested in JovianDSS, I suggest downloading our 60-day JovianDSS trial version for testing purposes. Our pre-sales team will be happy to help you at every stage of the setup.
T-Ku, I'm already using a 4x JDSS setup in a virtual lab: a metro-cluster pair and a pair for async replication. The trial will be invalidated soon, but the 1-hour uptime will be enough to check things quickly. Many thanks!