[one-users] OpenNebula 3.8.3 and VMWare ESXi 5.0: internal error HTTP response code 403 for upload

Tino Vazquez tinova79 at gmail.com
Thu Apr 25 02:01:52 PDT 2013


Hi,


On Wed, Apr 24, 2013 at 5:20 PM,  <chenxiang at aquala-tech.com> wrote:
> Second try. I looked into disk.vmdk, and found the following:
>
> # Disk DescriptorFile
> version=1
> encoding="UTF-8"
> CID=e451587b
> parentCID=ffffffff
> isNativeSnapshot="no"
> createType="vmfs"
>
> # Extent description
> RW 16777216 VMFS "CentOS-6.2.vmdk"
>
> Obviously I did not have CentOS-6.2.vmdk, but I did have
> CentOS-6.2-flat.vmdk. So I manually edited disk.vmdk and changed
> CentOS-6.2.vmdk to CentOS-6.2-flat.vmdk. Then I instantiated a VM, and it
> worked!
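>
> For reference, the same fix as a minimal shell one-liner, using the file
> names above (run in the image directory; back up the descriptor first):
>
> --
> sed -i 's/CentOS-6\.2\.vmdk/CentOS-6.2-flat.vmdk/' disk.vmdk
> --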

Sure, the image did indeed have this issue. We have fixed the image and
uploaded it to the marketplace. Thanks a lot for the feedback!

Regards,

-Tino

>
> Now I have CentOS 6.2 running......
>
> Chen Xiang
>
>
>
>> A lot of new things to learn. After getting ttylinux to work with ESXi,
>> I moved on to CentOS 6.2 (also imported from the Marketplace), and it
>> did not work as expected.
>>
>> This time I already had a datastore with DS_MAD="vmware". I imported
>> CentOS 6.2 from the marketplace, and had the following in that datastore
>> directory, which looked alright:
>>
>> oneadmin at opennebula:~/var/datastores/100/c5610951183fa21ac0900ce147a677b7$
>> ls -l
>> total 8388616
>> -rw------- 1 oneadmin cloud 8589934592 Jul  2  2012 CentOS-6.2-flat.vmdk
>> -rw------- 1 oneadmin cloud        515 Jul  3  2012 disk.vmdk
>>
>> Then I instantiated a VM, but it failed to boot (prolog seemed to be OK).
>> Below is the VM log:
>>
>> Wed Apr 24 22:40:41 2013 [DiM][I]: New VM state is ACTIVE.
>> Wed Apr 24 22:40:41 2013 [LCM][I]: New VM state is PROLOG.
>> Wed Apr 24 22:40:41 2013 [VM][I]: Virtual Machine has no context
>> Wed Apr 24 22:47:36 2013 [TM][I]: clone: Cloning
>> opennebula:/srv/cloud/one/var/datastores/100/c5610951183fa21ac0900ce147a677b7
>> in /vmfs/volumes/101/135/disk.0
>> Wed Apr 24 22:47:36 2013 [TM][I]: ExitCode: 0
>> Wed Apr 24 22:47:37 2013 [LCM][I]: New VM state is BOOT
>> Wed Apr 24 22:47:37 2013 [VMM][I]: Generating deployment file:
>> /srv/cloud/one/var/vms/135/deployment.1
>> Wed Apr 24 22:47:37 2013 [VMM][I]: ExitCode: 0
>> Wed Apr 24 22:47:37 2013 [VMM][I]: Successfully execute network driver
>> operation: pre.
>> Wed Apr 24 22:47:47 2013 [VMM][I]: Command execution fail:
>> /srv/cloud/one/var/remotes/vmm/vmware/deploy
>> /srv/cloud/one/var/vms/135/deployment.1 vmware01 135 vmware01
>> Wed Apr 24 22:47:47 2013 [VMM][D]: deploy: Successfully defined domain
>> one-135.
>> Wed Apr 24 22:47:47 2013 [VMM][E]: deploy: Error executing: virsh -c
>> 'esx://vmware01/?no_verify=1&auto_answer=1' start one-135 err: ExitCode: 1
>> Wed Apr 24 22:47:47 2013 [VMM][I]: out:
>> Wed Apr 24 22:47:47 2013 [VMM][I]: error: Failed to start domain one-135
>> Wed Apr 24 22:47:47 2013 [VMM][I]: error: internal error Could not start
>> domain: FileNotFound - File [101] 135/disk.0/disk.vmdk was not found
>> Wed Apr 24 22:47:47 2013 [VMM][I]:
>> Wed Apr 24 22:47:47 2013 [VMM][I]: ExitCode: 1
>> Wed Apr 24 22:47:47 2013 [VMM][I]: Failed to execute virtualization driver
>> operation: deploy.
>> Wed Apr 24 22:47:47 2013 [VMM][E]: Error deploying virtual machine
>> Wed Apr 24 22:47:47 2013 [DiM][I]: New VM state is FAILED
>>
>> I double-checked on the ESXi box; the file [101] 135/disk.0/disk.vmdk
>> was actually there.
>>
>> /vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0 # ls -l
>> -rw-------    1 oneadmin root         8589934592 Apr 24 14:47 CentOS-6.2-flat.vmdk
>> -rw-------    1 oneadmin root                515 Apr 24 14:47 disk.vmdk
>> -rw-r--r--    1 root     root                  0 Apr 24 14:47 one-135.vmsd
>> -rw-r--r--    1 root     root                875 Apr 24 14:47 one-135.vmx
>> -rw-r--r--    1 root     root                262 Apr 24 14:47 one-135.vmxf
>> -rw-r--r--    1 root     root              38580 Apr 24 14:47 vmware.log
>>
>> So I looked into vmware.log, and found that it was looking for
>> CentOS-6.2.vmdk. I did not have this file anywhere.
>>
>> 2013-04-24T14:47:35.839Z| vmx| DISK: OPEN scsi0:0
>> '/vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0/disk.vmdk'
>> persistent R[]
>> 2013-04-24T14:47:35.842Z| vmx| AIOGNRC: Failed to open
>> '/vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0/CentOS-6.2.vmdk'
>> : Could not find the file (600000003) (0x2013).
>> 2013-04-24T14:47:35.842Z| vmx| DISKLIB-VMFS  :
>> "/vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0/CentOS-6.2.vmdk"
>> : failed to open (The system cannot find the file specified): AIOMgr_Open
>> failed. Type 3
>> 2013-04-24T14:47:35.842Z| vmx| DISKLIB-LINK  :
>> "/vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0/disk.vmdk" :
>> failed to open (The system cannot find the file specified).
>> 2013-04-24T14:47:35.842Z| vmx| DISKLIB-CHAIN :
>> "/vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0/disk.vmdk" :
>> failed to open (The system cannot find the file specified).
>> 2013-04-24T14:47:35.842Z| vmx| DISKLIB-LIB   : Failed to open
>> '/vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0/disk.vmdk'
>> with flags 0xa The system cannot find the file specified (25).
>> 2013-04-24T14:47:35.842Z| vmx| Msg_Post: Error
>> 2013-04-24T14:47:35.842Z| vmx| [msg.disk.fileNotFound] VMware ESX cannot
>> find the virtual disk
>> "/vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0/disk.vmdk".
>> Verify the path is valid and try again.
>> 2013-04-24T14:47:35.842Z| vmx| [msg.disk.noBackEnd] Cannot open the disk
>> '/vmfs/volumes/5170ef35-aa800722-6bfa-80ee733ae308/135/disk.0/disk.vmdk'
>> or one of the snapshot disks it depends on.
>> 2013-04-24T14:47:35.842Z| vmx| [msg.disk.configureDiskError] Reason: The
>> system cannot find the file specified.
>>
>> So, the behavior of the CentOS 6.2 image differs from that of the
>> ttylinux image. What should I do?
>>
>>
>> Chen Xiang
>>
>>
>>> Ok, I see what happened then. Datastores to be used with VMware ESX
>>> hosts must have either DS_MAD="vmware" or DS_MAD="vmfs"; otherwise the
>>> drivers won't know how to handle VMware vmdk disks.
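>>>
>>> For instance, a VMware-backed datastore can be created from a template
>>> roughly like this (a sketch; the name is illustrative):
>>>
>>> --
>>> $ cat vmware_ds.conf
>>> NAME   = vmware_datastore
>>> DS_MAD = vmware
>>> TM_MAD = vmware
>>> $ onedatastore create vmware_ds.conf
>>> --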
>>>
>>> Glad it is working now.
>>>
>>> -T
>>> --
>>> Constantino Vázquez Blanco, PhD, MSc
>>> Project Engineer
>>> OpenNebula - The Open-Source Solution for Data Center Virtualization
>>> www.OpenNebula.org | @tinova79 | @OpenNebula
>>>
>>>
>>> On Wed, Apr 24, 2013 at 12:28 PM,  <chenxiang at aquala-tech.com> wrote:
>>>> The disk.vmdk problem seems to be associated with the DS_MAD setting. I
>>>> created two datastores to test this, the first one with DS_MAD="vmware"
>>>> and the second one with DS_MAD="fs". When I imported the images from
>>>> the marketplace, in the first datastore I got disk.vmdk, and in the
>>>> other one I got ttylinux.vmdk.
>>>>
>>>> When I encountered the problem in this thread, I was importing the
>>>> ttylinux image into the default datastore, which had DS_MAD="fs", and
>>>> that caused the problem.
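>>>>
>>>> A quick way to check which drivers an existing datastore uses, e.g.
>>>> for datastore 1:
>>>>
>>>> --
>>>> onedatastore show 1 | grep _MAD
>>>> --
>>>>
>>>> which should print the DS_MAD and TM_MAD attributes.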
>>>>
>>>> A lot of things to learn. Thank you so much for your assistance; you
>>>> have been very helpful.
>>>>
>>>> Chen Xiang
>>>>
>>>>
>>>>> On Wed, Apr 24, 2013 at 7:39 AM,  <chenxiang at aquala-tech.com> wrote:
>>>>>> I double checked the VNC issue. I did set up VNC according to the
>>>>>> instructions, but that setting got lost across reboots, so I was not
>>>>>> able to connect to VNC.
>>>>>>
>>>>>> I searched the web and found a solution: copy the modified version of
>>>>>> service.xml to the local disk of the ESXi box as a backup, and then
>>>>>> edit /etc/rc.local to restore that file from the backup and refresh
>>>>>> the firewall settings. (I need to do the same for the oneadmin SSH
>>>>>> key; otherwise the front end won't be able to connect to the ESXi box
>>>>>> after it reboots.) Then VNC worked.
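>>>>>>
>>>>>> In case it helps, my /etc/rc.local additions look roughly like this
>>>>>> (the backup location on the local datastore is specific to my setup):
>>>>>>
>>>>>> --
>>>>>> # restore the VNC firewall rule and the oneadmin SSH key lost on reboot
>>>>>> cp /vmfs/volumes/datastore1/service.xml /etc/vmware/firewall/service.xml
>>>>>> cp /vmfs/volumes/datastore1/authorized_keys \
>>>>>>    /etc/ssh/keys-oneadmin/authorized_keys
>>>>>> esxcli network firewall refresh
>>>>>> --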
>>>>>
>>>>> Glad you got it working. A similar approach is described here [1]
>>>>> (search for "persistency").
>>>>>
>>>>> [1] http://opennebula.org/documentation:rel3.8:evmwareg
>>>>>
>>>>>>
>>>>>> Regarding disk.vmdk: I checked oned.log, and no, I did not find the
>>>>>> error message you mentioned.
>>>>>
>>>>> Can you send us the relevant lines from oned.log at the time of
>>>>> registering a VMware disk folder _without_ the manual renaming to
>>>>> disk.vmdk?
>>>>>
>>>>> Thanks a lot for your feedback,
>>>>>
>>>>> -T
>>>>>
>>>>>>
>>>>>> Chen Xiang
>>>>>>
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> comments inline,
>>>>>>>
>>>>>>> On Tue, Apr 23, 2013 at 8:54 AM,  <chenxiang at aquala-tech.com> wrote:
>>>>>>>> Now I am able to get VM's running with ESXi nodes. I would like to
>>>>>>>> share
>>>>>>>> what I did to make things work, and would like you guys to help me
>>>>>>>> further.
>>>>>>>>
>>>>>>>> From the Marketplace I imported the ttylinux-vmware image into the
>>>>>>>> local infrastructure. When the image becomes READY, log in to the
>>>>>>>> front-end server and browse to var/datastores/1, then to the
>>>>>>>> directory holding that particular image. Below is what I had when
>>>>>>>> things did not work:
>>>>>>>>
>>>>>>>> oneadmin at opennebula:~/var/datastores/1/10fc21f21a452add3838d76d63052457$
>>>>>>>> ls -l
>>>>>>>> total 104864
>>>>>>>> -rw------- 1 oneadmin cloud 107374080 Jul  3  2012 ttylinux-flat.vmdk
>>>>>>>> -rw------- 1 oneadmin cloud       509 Jul  3  2012 ttylinux.vmdk
>>>>>>>>
>>>>>>>> Based on the error message I got from the VM log, I decided to copy
>>>>>>>> ttylinux.vmdk to disk.vmdk. After doing so I had the following:
>>>>>>>>
>>>>>>>> oneadmin at opennebula:~/var/datastores/1/10fc21f21a452add3838d76d63052457$
>>>>>>>> cp ttylinux.vmdk disk.vmdk
>>>>>>>> oneadmin at opennebula:~/var/datastores/1/10fc21f21a452add3838d76d63052457$
>>>>>>>> ls -l
>>>>>>>> total 104868
>>>>>>>> -rw------- 1 oneadmin cloud       509 Apr 23 14:33 disk.vmdk
>>>>>>>> -rw------- 1 oneadmin cloud 107374080 Jul  3  2012 ttylinux-flat.vmdk
>>>>>>>> -rw------- 1 oneadmin cloud       509 Jul  3  2012 ttylinux.vmdk
>>>>>>>>
>>>>>>>> Then I went back to the same template and instantiated a VM; this
>>>>>>>> time it worked.
>>>>>>>
>>>>>>> The vmware/cp script should automatically rename the file. Can you
>>>>>>> see
>>>>>>> any line in /var/log/one/oned.log similar to:
>>>>>>>
>>>>>>> --
>>>>>>> Error renaming disk file $BASE_DISK_FILE to disk.vmdk
>>>>>>> --
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Now I have a new problem. This front end had been tested with KVM
>>>>>>>> and hundreds of VM instances before I tried ESXi, so I am now on
>>>>>>>> VM IDs above 100. The VM instances are running, but I am not able
>>>>>>>> to connect to the VM console via VNC from SunStone. What should I
>>>>>>>> do? (The VM console in vSphere Client still works.)
>>>>>>>
>>>>>>>
>>>>>>> Have you configured the ESX host to allow VNC connections? See
>>>>>>> http://opennebula.org/documentation:rel3.8:evmwareg#vnc
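>>>>>>>
>>>>>>> Roughly, that means adding a VNC rule to
>>>>>>> /etc/vmware/firewall/service.xml on the ESX host, something along
>>>>>>> these lines (a sketch; pick a service id that does not clash with
>>>>>>> existing ones, and refresh the firewall afterwards):
>>>>>>>
>>>>>>> --
>>>>>>> <service id="0033">
>>>>>>>   <id>VNC</id>
>>>>>>>   <rule id="0000">
>>>>>>>     <direction>inbound</direction>
>>>>>>>     <protocol>tcp</protocol>
>>>>>>>     <porttype>dst</porttype>
>>>>>>>     <port>
>>>>>>>       <begin>5900</begin>
>>>>>>>       <end>6100</end>
>>>>>>>     </port>
>>>>>>>   </rule>
>>>>>>>   <enabled>true</enabled>
>>>>>>>   <required>false</required>
>>>>>>> </service>
>>>>>>> --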
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> -Tino
>>>>>>>
>>>>>>>>
>>>>>>>> Thanks a lot for your assistance.
>>>>>>>>
>>>>>>>> Chen Xiang
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> With the proper NFS settings, I was able to define the domain, but
>>>>>>>>> failed to start the VM. The error message says "File [0]
>>>>>>>>> 120/disk.0/disk.vmdk was not found".
>>>>>>>>>
>>>>>>>>> I logged in to the ESXi box and verified that /vmfs/volumes/0
>>>>>>>>> points to the right NFS mount, and that inside /vmfs/volumes/0 I
>>>>>>>>> do have 120/disk.0/disk.vmdk.
>>>>>>>>>
>>>>>>>>> Any ideas?
>>>>>>>>>
>>>>>>>>> Below is a copy of the VM log.
>>>>>>>>>
>>>>>>>>> Tue Apr 23 13:12:58 2013 [DiM][I]: New VM state is ACTIVE.
>>>>>>>>> Tue Apr 23 13:12:58 2013 [LCM][I]: New VM state is PROLOG.
>>>>>>>>> Tue Apr 23 13:12:58 2013 [VM][I]: Virtual Machine has no context
>>>>>>>>> Tue Apr 23 13:13:12 2013 [TM][I]: clone: Cloning
>>>>>>>>> /vmfs/volumes/1/10fc21f21a452add3838d76d63052457 in
>>>>>>>>> vmware02:/vmfs/volumes/0/120/disk.0
>>>>>>>>> Tue Apr 23 13:13:12 2013 [TM][I]: ExitCode: 0
>>>>>>>>> Tue Apr 23 13:13:12 2013 [LCM][I]: New VM state is BOOT
>>>>>>>>> Tue Apr 23 13:13:12 2013 [VMM][I]: Generating deployment file:
>>>>>>>>> /srv/cloud/one/var/vms/120/deployment.0
>>>>>>>>> Tue Apr 23 13:13:12 2013 [VMM][I]: ExitCode: 0
>>>>>>>>> Tue Apr 23 13:13:12 2013 [VMM][I]: Successfully execute network
>>>>>>>>> driver
>>>>>>>>> operation: pre.
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][I]: Command execution fail:
>>>>>>>>> /srv/cloud/one/var/remotes/vmm/vmware/deploy
>>>>>>>>> /srv/cloud/one/var/vms/120/deployment.0 vmware02 120 vmware02
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][D]: deploy: Successfully defined
>>>>>>>>> domain
>>>>>>>>> one-120.
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][E]: deploy: Error executing: virsh
>>>>>>>>> -c
>>>>>>>>> 'esx://vmware02/?no_verify=1&auto_answer=1' start one-120 err:
>>>>>>>>> ExitCode: 1
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][I]: out:
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][I]: error: Failed to start domain
>>>>>>>>> one-120
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][I]: error: internal error Could not
>>>>>>>>> start
>>>>>>>>> domain: FileNotFound - File [0] 120/disk.0/disk.vmdk was not found
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][I]:
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][I]: ExitCode: 1
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][I]: Failed to execute
>>>>>>>>> virtualization
>>>>>>>>> driver
>>>>>>>>> operation: deploy.
>>>>>>>>> Tue Apr 23 13:13:24 2013 [VMM][E]: Error deploying virtual machine
>>>>>>>>> Tue Apr 23 13:13:24 2013 [DiM][I]: New VM state is FAILED
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Chen Xiang
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> Please use root_squash instead of no_root_squash
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> /srv/cloud/one/var/datastores/0
>>>>>>>>>> *(rw,sync,no_subtree_check,no_root_squash,anonuid=10000,anongid=10000)
>>>>>>>>>> /srv/cloud/one/var/datastores/1
>>>>>>>>>> *(rw,sync,no_subtree_check,no_root_squash,anonuid=10000,anongid=10000)
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> You will need to force the nfs server to re-read the conf file.
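>>>>>>>>>>
>>>>>>>>>> For example, after changing no_root_squash to root_squash in
>>>>>>>>>> /etc/exports, run (as root on the front end):
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>> exportfs -ra
>>>>>>>>>> --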
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> -Tino
>>>>>>>>>> --
>>>>>>>>>> Constantino Vázquez Blanco, PhD, MSc
>>>>>>>>>> Project Engineer
>>>>>>>>>> OpenNebula - The Open-Source Solution for Data Center
>>>>>>>>>> Virtualization
>>>>>>>>>> www.OpenNebula.org | @tinova79 | @OpenNebula
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Mon, Apr 22, 2013 at 3:46 PM,  <chenxiang at aquala-tech.com>
>>>>>>>>>> wrote:
>>>>>>>>>>> Here is what I got when trying to define the VM via virsh:
>>>>>>>>>>>
>>>>>>>>>>> oneadmin at opennebula:~/images$ virsh -c
>>>>>>>>>>> 'esx://vmware02/?no_verify=1&auto_answer=1'
>>>>>>>>>>> Enter username for vmware02 [root]:
>>>>>>>>>>> Enter root's password for vmware02:
>>>>>>>>>>> Welcome to virsh, the virtualization interactive terminal.
>>>>>>>>>>>
>>>>>>>>>>> Type:  'help' for help with commands
>>>>>>>>>>>        'quit' to quit
>>>>>>>>>>>
>>>>>>>>>>> virsh # define /srv/cloud/one/var/vms/111/deployment.0
>>>>>>>>>>> 2013-04-22 13:24:49.391+0000: 17332: info : libvirt version:
>>>>>>>>>>> 0.9.10
>>>>>>>>>>> 2013-04-22 13:24:49.391+0000: 17332: warning :
>>>>>>>>>>> virVMXFormatVNC:3224
>>>>>>>>>>> :
>>>>>>>>>>> VNC
>>>>>>>>>>> port 6011 it out of [5900..5964] range
>>>>>>>>>>> error: Failed to define domain from
>>>>>>>>>>> /srv/cloud/one/var/vms/111/deployment.0
>>>>>>>>>>> error: internal error HTTP response code 403 for upload to
>>>>>>>>>>> 'https://vmware02:443/folder/111%2fdisk%2e0/one%2d111.vmx?dcPath=ha%2ddatacenter&dsName=0'
>>>>>>>>>>>
>>>>>>>>>>> virsh # exit
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I am running OpenNebula 3.8.3 on Ubuntu 12.04 (front end) with
>>>>>>>>>>>> VMWare ESXi 5.0 (node). I am able to create a VMWare node in
>>>>>>>>>>>> SunStone and register the ttylinux image (ttylinux.vmdk.tar,
>>>>>>>>>>>> downloaded from C12G.com), but failed to instantiate a VM.
>>>>>>>>>>>>
>>>>>>>>>>>> Here is what I have as the NFS exports:
>>>>>>>>>>>>
>>>>>>>>>>>> /srv/cloud/one/var/datastores/0
>>>>>>>>>>>> *(rw,sync,no_subtree_check,no_root_squash,anonuid=10000,anongid=10000)
>>>>>>>>>>>> /srv/cloud/one/var/datastores/1
>>>>>>>>>>>> *(rw,sync,no_subtree_check,no_root_squash,anonuid=10000,anongid=10000)
>>>>>>>>>>>>
>>>>>>>>>>>> On the ESXi node I mounted the NFS exports at /vmfs/volumes/0
>>>>>>>>>>>> and /vmfs/volumes/1 respectively.
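>>>>>>>>>>>>
>>>>>>>>>>>> For the record, I mounted them from the ESXi shell with
>>>>>>>>>>>> something like the following, where opennebula is my front-end
>>>>>>>>>>>> hostname:
>>>>>>>>>>>>
>>>>>>>>>>>> --
>>>>>>>>>>>> esxcfg-nas -a -o opennebula -s /srv/cloud/one/var/datastores/0 0
>>>>>>>>>>>> esxcfg-nas -a -o opennebula -s /srv/cloud/one/var/datastores/1 1
>>>>>>>>>>>> --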
>>>>>>>>>>>>
>>>>>>>>>>>> Yes, I configured libvirt-0.9.10 and made it work with VMWare,
>>>>>>>>>>>> verified with commands such as the following (where vmware02 is
>>>>>>>>>>>> my ESXi hostname):
>>>>>>>>>>>>
>>>>>>>>>>>> virsh -c 'esx://vmware02/?no_verify=1&auto_answer=1'
>>>>>>>>>>>>
>>>>>>>>>>>> Below is my VM template:
>>>>>>>>>>>>
>>>>>>>>>>>> CPU="1"
>>>>>>>>>>>> DISK=[
>>>>>>>>>>>>   IMAGE="tty_vmdk",
>>>>>>>>>>>>   IMAGE_UNAME="oneadmin" ]
>>>>>>>>>>>> GRAPHICS=[
>>>>>>>>>>>>   LISTEN="0.0.0.0",
>>>>>>>>>>>>   TYPE="vnc" ]
>>>>>>>>>>>> MEMORY="512"
>>>>>>>>>>>> NAME="ttylinux"
>>>>>>>>>>>>
>>>>>>>>>>>> Below is what I got when trying to instantiate a VM:
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Mon Apr 22 21:27:11 2013 [DiM][I]: New VM state is ACTIVE.
>>>>>>>>>>>> Mon Apr 22 21:27:11 2013 [LCM][I]: New VM state is PROLOG.
>>>>>>>>>>>> Mon Apr 22 21:27:11 2013 [VM][I]: Virtual Machine has no
>>>>>>>>>>>> context
>>>>>>>>>>>> Mon Apr 22 21:27:17 2013 [TM][I]: clone: Cloning
>>>>>>>>>>>> /vmfs/volumes/1/43352fb75cee9bbc1da3c1e7ff474e26 in
>>>>>>>>>>>> vmware02:/vmfs/volumes/0/112/disk.0
>>>>>>>>>>>> Mon Apr 22 21:27:17 2013 [TM][I]: ExitCode: 0
>>>>>>>>>>>> Mon Apr 22 21:27:17 2013 [LCM][I]: New VM state is BOOT
>>>>>>>>>>>> Mon Apr 22 21:27:17 2013 [VMM][I]: Generating deployment file:
>>>>>>>>>>>> /srv/cloud/one/var/vms/112/deployment.0
>>>>>>>>>>>> Mon Apr 22 21:27:17 2013 [VMM][I]: ExitCode: 0
>>>>>>>>>>>> Mon Apr 22 21:27:17 2013 [VMM][I]: Successfully execute network
>>>>>>>>>>>> driver
>>>>>>>>>>>> operation: pre.
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]: Command execution fail:
>>>>>>>>>>>> /srv/cloud/one/var/remotes/vmm/vmware/deploy
>>>>>>>>>>>> /srv/cloud/one/var/vms/112/deployment.0 vmware02 112 vmware02
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][E]: deploy: Error executing:
>>>>>>>>>>>> virsh
>>>>>>>>>>>> -c
>>>>>>>>>>>> 'esx://vmware02/?no_verify=1&auto_answer=1' define
>>>>>>>>>>>> /srv/cloud/one/var/vms/112/deployment.0 err: ExitCode: 1
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]: out:
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]: 2013-04-22
>>>>>>>>>>>> 13:27:21.858+0000:
>>>>>>>>>>>> 17586:
>>>>>>>>>>>> info : libvirt version: 0.9.10
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]: 2013-04-22
>>>>>>>>>>>> 13:27:21.858+0000:
>>>>>>>>>>>> 17586:
>>>>>>>>>>>> warning : virVMXFormatVNC:3224 : VNC port 6012 it out of
>>>>>>>>>>>> [5900..5964]
>>>>>>>>>>>> range
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]: error: Failed to define
>>>>>>>>>>>> domain
>>>>>>>>>>>> from
>>>>>>>>>>>> /srv/cloud/one/var/vms/112/deployment.0
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]: error: internal error HTTP
>>>>>>>>>>>> response
>>>>>>>>>>>> code 403 for upload to
>>>>>>>>>>>> 'https://vmware02:443/folder/112%2fdisk%2e0/one%2d112.vmx?dcPath=ha%2ddatacenter&dsName=0'
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]:
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]: ExitCode: 255
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][I]: Failed to execute
>>>>>>>>>>>> virtualization
>>>>>>>>>>>> driver
>>>>>>>>>>>> operation: deploy.
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [VMM][E]: Error deploying virtual
>>>>>>>>>>>> machine
>>>>>>>>>>>> Mon Apr 22 21:27:22 2013 [DiM][I]: New VM state is FAILED
>>>>>>>>>>>>
>>>>>>>>>>>> What might be wrong? It looks like I did not have write access
>>>>>>>>>>>> somewhere, so I tried both the oneadmin and root accounts in
>>>>>>>>>>>> etc/vmwarerc, without much luck.
>>>>>>>>>>>>
>>>>>>>>>>>> On the front end the oneadmin user belongs to the following
>>>>>>>>>>>> groups: cloud, adm, sudo, libvirtd.
>>>>>>>>>>>>
>>>>>>>>>>>> Best regards,
>>>>>>>>>>>>
>>>>>>>>>>>> Chen Xiang
>>>>>>>>>>>>