
Thread: vSphere 4.1 RDM unable to boot VM

  1. #1

    vSphere 4.1 RDM unable to boot VM

    First, we are evaluating Open-E in our lab.
    We've created three logical volumes as shown here:

    The 247GB volume is file-based and is used as the datastore for the vSphere host; we are not having any problems using it to store and run virtual machines and ISOs.

    We wanted to benchmark and test RDM (raw device mapping) disks in VMware as both file- and block-based devices, so the two 40GB logical volumes were created for this purpose, with lv0001 being block-based and lv0002 file-based.

    I've successfully added these as RDM disks to guests in vSphere, but the guests will not boot; they hang at the VMware BIOS screen.
    If I replace the RDM disk with a VMDK on the datastore, I no longer experience this failure.
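
    For reference, this is roughly the console equivalent of what the vSphere Client does when the RDM is attached (a sketch only; the vml device ID and datastore paths below are placeholders, not our actual values):

        # List the iSCSI LUNs the host sees (the Open-E LUNs show up under /vmfs/devices/disks)
        ls -l /vmfs/devices/disks/

        # Create a virtual-compatibility RDM mapping file for the block LUN (lv0001)
        vmkfstools -r /vmfs/devices/disks/vml.<lun-id> /vmfs/volumes/datastore1/testvm/testvm-rdm.vmdk

        # Or a physical-compatibility (pass-through) RDM for the same LUN
        vmkfstools -z /vmfs/devices/disks/vml.<lun-id> /vmfs/volumes/datastore1/testvm/testvm-rdmp.vmdk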

    The hang occurs even if I select "Force BIOS Setup" in the virtual machine's options screen.
    It also occurs regardless of whether I am using the file-based or block-based device, or a physical or virtual RDM.

    Further, the guest can't be easily powered off once this has occurred, and I have to reboot the host to continue.

    Any ideas?

    Here is what I see when I power on the guest:

  2. #2


    Hi

    I know the original post was over six months ago, but we're having exactly the same issue here. For us the issue was intermittent, too: occasionally the VM would start fine, but more often than not it would lock at the BIOS screen and refuse to shut down, and we would have to power down the ESX host it was sitting on.

    VMware is dead set against people running the Microsoft iSCSI Initiator inside VMs instead of using RDMs, so not being able to use RDMs is causing us a support issue.

    Does anyone have any ideas?

    Alex

  3. #3

    Problems with RDMs in VMware VMs

    We have exactly the same issue as the users above.

    Environment:

    Dell 2950 v3 with Open-E v6 latest build
    Dell 2950 v3 VMware Hosts with VMware vSphere 4.1i Enterprise


    Using a Raw Device Mapping hangs the VM that has the RDM attached (virtual and physical RDMs show the same issue).

    Rebooting the VMware host releases the lock, but powering on the VM with the RDM again just makes it hang again.

    When we use the same volume and format it as a VMFS-3 datastore instead, there are no problems at all.

    Some help from Open-E would be appreciated.

    Thanks in advance

  4. #4


    Can you send in a support ticket on this issue with the logs from the DSS V6, and provide all of the setup information for the VM as well?
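
    On the VMware side, a diagnostic bundle from the affected host helps too; roughly, from the service console or Tech Support Mode:

        # Generate a VMware support bundle (writes a compressed archive to the working directory)
        vm-support
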
    All the best,

    Todd Maxwell



  5. #5


    Also, are there any errors on the ESX server in the /var/log/vmkwarning.log file or the /var/log/messages.log file, something like the line below?

    WARNING: SCSI: CheckPathReady:2941: CheckUnitReady on vmhba2:3:13 returned I/O error 0x0/0x2 sk 0x2 asc 0x0 ascq 0x0
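
    A quick way to check from the console, as a rough example (adjust the paths if your build logs elsewhere):

        # Look for path / unit-ready warnings logged around the time the VM hangs
        grep -i "CheckUnitReady" /var/log/vmkwarning.log | tail -n 20
        grep -i "SCSI" /var/log/messages.log | tail -n 50
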
    All the best,

    Todd Maxwell



  6. #6


    Having this same issue. All my VMs in vSphere 4.1 (ESXi 4.1) that have iSCSI RDMs mapped lock up on boot, and I have to reboot the ESX server. If I remove the RDM, the VM will start up, but as soon as I attach the RDM LUN it locks up again. Any ideas? This all of a sudden started happening last night; both my Open-E servers had been running for 540+ days without any issues.
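
    In theory the hung VM can be killed from Tech Support Mode instead of rebooting the whole host (a rough sketch from memory; the world ID below is a placeholder taken from the list output), but so far I've still had to reboot:

        # List running VMs and their world IDs on ESXi 4.1
        esxcli vms vm list
        # Force-kill the hung VM by world ID (placeholder value)
        esxcli vms vm kill --type force --world-id 1234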
