Thread to note and discuss JovianDSS limitations, both numeric and descriptive. It would be better to know them in advance than to face them eventually. A couple of questions from me (to start the conversation):

- How many drives is JDSS able to logically support? 256 (a Linux limitation, as far as I know) or fewer? Today it is not hard to build a 2-node cluster with 4-6 SAS controllers per node, each supporting up to 256 target devices. It is also possible to daisy-chain SAS enclosures with up to 90 HDDs each. So 1000+ physical HDDs in a single system is not a dream. Will JDSS at least be able to recognize and access all these drives?
On the other hand, look at the Linux device-number allocation (devices.txt): block major numbers 8, 65 - 71, and 128 - 135 are assigned to SCSI disks, so we have 16 allocated majors. Minor numbers range from 0 to 255, and each disk gets 16 consecutive minors (the first represents the entire disk, the remaining 15 its partitions).
Number of possible drives = 16 * ( 256 / 16 ) = 256.
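As a quick sanity check of that arithmetic, here is a small sketch using the classic sd-driver major numbers from the kernel's devices.txt allocation (8, 65 - 71, 128 - 135):

```python
# Classic sd-driver block majors from the Linux devices.txt allocation:
# 8, 65-71 and 128-135 -- 16 majors in total.
sd_majors = [8] + list(range(65, 72)) + list(range(128, 136))

MINORS_PER_MAJOR = 256   # minor numbers range 0..255
MINORS_PER_DISK = 16     # 1 whole-disk node + 15 partition nodes

disks = len(sd_majors) * (MINORS_PER_MAJOR // MINORS_PER_DISK)
print(len(sd_majors), "majors ->", disks, "addressable disks")  # 16 majors -> 256 disks
```

Of course this only describes the static major/minor scheme; it says nothing about what JDSS itself does on top of it.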

- Do the pool or the volumes have their own limitations in size or number of components? The ZFS limits are: 2^78 bytes per volume, 2^64 bytes per file, 2^48 files per directory, 255 characters per filename. Did JDSS inherit them as-is, or add something extra?
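For a sense of scale, the quoted ZFS limits can be converted into power-of-two units with a small helper (the `human` function below is just illustrative):

```python
def human(n):
    # Express a byte count in power-of-two units (assumes n is an exact power of two).
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB", "ZiB"]
    i = 0
    while n >= 1024 and i < len(units) - 1:
        n //= 1024
        i += 1
    return f"{n} {units[i]}"

print(human(2**78))                 # volume limit -> 256 ZiB
print(human(2**64))                 # file limit   -> 16 EiB
print(f"{2**48:,} files per directory")
```

In other words, the ZFS ceilings are far beyond any hardware you could assemble today; the practical question is only whether JDSS imposes lower ones.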