We are attempting to implement a clustered LVM system using GFS on a mix of Fedora Core 4, 5, and 6 systems (using open-iscsi-1.0-485 on the FC4 system). Could they have made this a little more complicated?

[FYI the iSCSI target has 24*750GB physical disks, using RAID-5.]
I've run into a potential problem with open-iscsi-1.0-485 having a 2TB device-size limit, but I think we worked around it by creating eight 2TB devices on the target, then using clvm to create a single VG comprising all eight devices (/dev/sd[a-h]) and a single LV on top. gfs_mkfs was used to create the filesystem (with 16 journals for now -- is that what allows up to 16 nodes?), and we can see and use the mounted filesystem on the FC4 system.
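For reference, this is roughly the sequence we used (the VG/LV names and the cluster:filesystem pair are from our setup; adjust to taste):

```shell
# Initialize the eight 2TB iSCSI devices as physical volumes
pvcreate /dev/sd[a-h]

# One clustered VG spanning all eight PVs, then a single LV using all of it
# (older LVM2 may want an explicit extent count from vgdisplay instead of %FREE)
vgcreate gfs_vg /dev/sd[a-h]
lvcreate -l 100%FREE -n gfs_lv gfs_vg

# GFS filesystem with DLM locking and 16 journals; "mycluster:gfs1" is the
# cluster-name:fs-name pair, which has to match what's in cluster.conf
gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 16 /dev/gfs_vg/gfs_lv
```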

First, I currently have this running on only one system. I <i>think</i> we also need to create the PVs, VG, and LV on each machine in the cluster? Or do we somehow use ccs_tool to export the LVM information to the other machines (nodes) in the cluster? I've looked at quite a few partial examples, but so far haven't seen a complete example of how to accomplish this across multiple nodes in a clustered environment.
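My current guess is that, since clvm keeps the LVM metadata on the shared devices themselves, the other nodes only need to activate (not re-create) the VG. Something like this on each additional node, assuming the same names as above:

```shell
# Cluster infrastructure has to be up first (FC4-era cluster suite daemons)
service ccsd start
service cman start
service clvmd start

# The VG metadata lives on the shared storage, so (I believe) we just
# rescan and activate here rather than re-running pvcreate/vgcreate
vgscan
vgchange -ay gfs_vg

mkdir -p /mnt/gfs
mount -t gfs /dev/gfs_vg/gfs_lv /mnt/gfs
```

Is that the right model, or is there more to it?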

Second, since our app will require considerably more storage than 13.68TB, is the workaround (multiple 2TB block devices) going to create headaches at some point in the future, especially when we add more storage via additional iSCSI targets? That is, are we going to run into limits on filesystem size, block-device size, number of SCSI devices, or something else? Also, will there be any problem using vgextend, lvextend, and gfs_grow to increase the capacity of the filesystem?
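If growing works the way I hope it does, the steps would be roughly the following (the new device name /dev/sdi is hypothetical):

```shell
# New 2TB iSCSI device shows up as, say, /dev/sdi; fold it into the VG
pvcreate /dev/sdi
vgextend gfs_vg /dev/sdi

# Grow the LV into the new space, then grow the GFS filesystem;
# gfs_grow is run against the mount point while the fs stays mounted
lvextend -l +100%FREE /dev/gfs_vg/gfs_lv
gfs_grow /mnt/gfs
```

Is there anything cluster-specific (locking, other nodes) to watch out for while doing this?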

Finally, will we be able to expand the number of nodes that may mount the filesystem beyond 16 at some future time, or must all the journals be created with the initial filesystem?
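I did find gfs_jadd, which suggests journals can be added after the fact. My guess at the procedure (sizes here are guesses; I gather gfs_jadd needs free space in the LV beyond what the filesystem currently occupies):

```shell
# Make room past the end of the filesystem first -- gfs_jadd uses LV space
# that the filesystem does not yet occupy, so don't gfs_grow into it
lvextend -L +4G /dev/gfs_vg/gfs_lv

# Add 8 more journals to the mounted filesystem, for 24 nodes total
gfs_jadd -j 8 /mnt/gfs
```

Can anyone confirm this is how it's done?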

Thanks for any help you can provide!

-fjb