
Thread: Problem of configuration : "No logical volume"

  1. #1

    Problem of configuration: "No logical volume"

    Hi,

    After completely reconfiguring our NAS, I am stuck in the configuration process because the system displays "No logical volume".

    However, as mentioned in the manual:

    In "Setup / Disk Manager", my units show a volume group named "vg0" (size 26.076 TB).
    In the Unit Manager, I can see the status is "in use, vg0".

    Next I need to create my shares and users in the Resources tab, but I cannot, because apparently there is no logical volume!

    Maybe it is still initializing and takes a while because the storage is large?
    Or perhaps I missed a step ...

    Thanks in advance for any help guys,

    Lucien

  2. #2


    It could be that the RAID is still initializing; you might want to check on the controller to see what percentage is completed. If it has completed, try rebooting the system. You should not have to, but we should see if that helps.
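    On systems with 3ware controllers, the initialization progress mentioned above can also be checked from a shell with the `tw_cli` utility, where one is available. This is only a sketch: the controller ID `/c0` is an assumption, and on an appliance like DSS the 3DM2 web manager may be the only supported interface.

```shell
#!/bin/sh
# Sketch: query a 3ware controller for unit status.
# Controller ID /c0 is a guess - adjust for your hardware.
if command -v tw_cli >/dev/null 2>&1; then
    # Each unit's status reads OK, INITIALIZING (with a percent-complete
    # figure), VERIFYING, or DEGRADED.
    status="$(tw_cli /c0 show 2>&1)"
else
    status="tw_cli not found - use the 3DM2 web manager instead"
fi
printf '%s\n' "$status"
```

    An INITIALIZING unit shows how far along it is; logical volumes should not be created until it reports OK.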
    All the best,

    Todd Maxwell


    Follow the red "E"
    Facebook | Twitter | YouTube

  3. #3

    Controller OK

    Thanks To-M for your quick answer.

    In the 3ware tool, the RAID is OK.

    I restarted the system without success.

    I can't find any reason or solution ...

    Thanks again,

  4. #4


    For information, in "Status / Logical volume" I see lv00 with this information:

    Usage : % 0.00 / 0.00 GB (free 0.00 GB)
    Total snapshots: 0. In use: 0.

    Any help ?
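    A usage of 0.00 / 0.00 GB suggests lv00 has no space allocated at all. On a generic Linux system with the lvm2 tools, the volume group and its logical volumes can be inspected as sketched below; the names `vg0` and `lv00` are taken from this thread, and shell access is an assumption (the DSS web GUI may be the only supported interface).

```shell
#!/bin/sh
# Sketch: inspect volume group vg0 and its logical volumes with lvm2 tools.
# vg0/lv00 are the names from the thread; shell access is assumed.
if command -v vgs >/dev/null 2>&1; then
    vg_report="$(vgs vg0 2>&1)"   # VG size and free extents
    lv_report="$(lvs vg0 2>&1)"   # each LV and its size - lv00 should appear here
else
    vg_report="lvm2 tools not found - inspect via the DSS console instead"
    lv_report="$vg_report"
fi
printf '%s\n%s\n' "$vg_report" "$lv_report"
```

    If `lvs` shows no logical volume in vg0, the "No logical volume" message in the GUI is accurate and the volume needs to be (re)created before shares can be defined.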

  5. #5
    Join Date
    Aug 2010
    Posts
    404


    Which DSS build is your system running now?

    In DSS V6 b5626 it is already forbidden to add volume replication to a NAS logical volume without first deleting the backup tasks and unassigning the snapshot from that logical volume. This should prevent the occurrence of the mentioned failures.

    Also please try the steps in this KB, it might help you:
    http://kb.open-e.com/Default-Volume-Group_136.html

  6. #6


    The running version is: 3.11.XE00000001.2522
    Release date: 2007-01-17

    Before we created the RAID volume via the 3ware manager, we had deleted and removed all existing shares and tasks.

    I am not physically present, so I cannot test via the console today.
    I'll do it tomorrow or Monday morning, and if it doesn't work I may request an intervention from your hotline service to fix it, because it's urgent and sensitive (currently our data is on another NAS, without any backup or redundancy).


    Thanks

  7. #7
    Join Date
    Oct 2010
    Location
    GA
    Posts
    935


    The running version is: 3.11.XE00000001.2522
    Release date: 2007-01-17
    This is no longer supported outside of this forum, and the forum is not an official support channel.
    Can you try loading the DSS V6 trial to see if it works for you?

  8. #8

    Unfortunately I had exactly the same problem

    Unfortunately I have exactly the same problem, except that the NAS worked until yesterday and a large amount of data is stored on it...
    Is it possible to do something other than updating the system?

    =============================
    We have a problem with access to the shared resources served by the Open-E NAS.
    (
    Open-e NAS-XSR ENTERPRISE
    Ver. 3.11.XE00000000.2522
    2007-01-17
    s/n: 0263675127
    )

    In the GUI (Maintenance -> Resources misc.) it says:
    "Warning: no logical Volume!! (lv00)"
    and I can't do anything on this interface page...

    The system displays the following error.
    (
    Dec 21 18:15:08 nas kernel: <c017d6f7> sys_unlink+0x17/0x20 <c0102e67> syscall_call+0x7/0xb
    Dec 21 18:15:08 nas kernel: <c02e55a2> copy_to_user+0x42/0x60 <c017f369> sys_fcntl64+0x79/0xc0
    Dec 21 18:15:08 nas kernel: <c0188193> iput+0x63/0x90 <c017d65d> do_unlinkat+0xcd/0x110
    Dec 21 18:15:08 nas kernel: <c0186ff6> clear_inode+0xe6/0x150 <c0187fa2> generic_delete_inode+0x142/0x150
    Dec 21 18:15:08 nas kernel: <c019907a> inotify_inode_is_dead+0x1a/0x90 <c02c370f> xfs_fs_clear_inode+0xaf/0xc0
    Dec 21 18:15:08 nas kernel: <c0291853> xfs_itruncate_finish+0x233/0x460 <c02b2e4f> xfs_inactive+0x54f/0x5c0
    Dec 21 18:15:08 nas kernel: <c02ac1ff> xfs_trans_log_efd_extent+0x2f/0x70 <c02677c5> xfs_bmap_finish+0x145/0x1f0
    Dec 21 18:15:08 nas kernel: <c0254065> xfs_free_extent+0xe5/0x110 <c02b878c> kmem_zone_alloc+0x5c/0xd0
    Dec 21 18:15:08 nas kernel: <c0252d51> xfs_free_ag_extent+0x471/0x7a0 <c0254065> xfs_free_extent+0xe5/0x110
    )

  9. #9


    Check the RAID health from the controller; if all is OK, then run the Repair Filesystem from the Extended tools in the Console Screen.
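    The kernel trace in post #8 is full of `xfs_*` calls, so the NAS volume is presumably XFS. As a rough idea of what a filesystem repair involves on a generic Linux system, one could dry-run `xfs_repair` as sketched below. The device path `/dev/vg0/lv00` is a guess based on the names in this thread, and the filesystem must be unmounted before any real repair.

```shell
#!/bin/sh
# Sketch: dry-run an XFS repair. /dev/vg0/lv00 is a guessed device path;
# unmount the filesystem before running any real repair.
DEV=/dev/vg0/lv00
if command -v xfs_repair >/dev/null 2>&1; then
    # -n = no-modify mode: report inconsistencies without writing anything.
    result="$(xfs_repair -n "$DEV" 2>&1 || true)"
else
    result="xfs_repair not found (provided by the xfsprogs package)"
fi
printf '%s\n' "$result"
```

    Only after the no-modify pass reports fixable problems would one run the repair for real, which is presumably what the console's Repair Filesystem tool automates.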
    All the best,

    Todd Maxwell



  10. #10

    I did so, and everything works.

    Quote Originally Posted by To-M
    Check the RAID health from the controller, then if all is ok then run the Repair Filesystem from the Extended tools in the Console Screen.
    Thank you!
