[one-users] Problem Creating VM on fresh install

Jon three18ti at gmail.com
Sat Oct 27 20:18:43 PDT 2012


Hello,

Ok, one error I'm getting is:

>> 2012-10-28 02:40:02.263+0000: 1058: error : virExecWithHook:328 : Cannot
find 'pm-is-supported' in path: No such file or directory
>> 2012-10-28 02:40:02.263+0000: 1058: warning : umlCapsInit:87 : Failed to
get host power management capabilities

`apt-get install pm-utils` seems to have resolved this error.
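
(For reference, a quick sanity check after installing the package is something
like:

    which pm-is-supported
    service libvirt-bin restart

so that libvirtd picks the binary up on startup.)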

However, it seems that there is still an issue deploying the VM.

The only error I get in the libvirtd log is:

>> 2012-10-28 03:01:52.839+0000: 1039: error : qemuMonitorIORead:513 :
Unable to read from monitor: Connection reset by peer

And the qemu VM log shows:

2012-10-28 03:01:52.180+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin
QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 512 -smp
4,sockets=4,cores=1,threads=1 -name one-17 -uuid
a78f682b-7e01-3a54-b191-8e8081d478c3 -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-17.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-drive
file=/var/lib/one//datastores/0/17/disk.0,if=none,id=drive-ide0-0-0,format=qcow2
-device
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-usb -vnc 0.0.0.0:17 -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
kvm: -drive
file=/var/lib/one//datastores/0/17/disk.0,if=none,id=drive-ide0-0-0,format=qcow2:
could not open disk image /var/lib/one//datastores/0/17/disk.0: Invalid
argument
2012-10-28 03:01:52.839+0000: shutting down

So it appears the "Invalid argument" error has returned.
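
Since kvm is rejecting the image with "Invalid argument" while the deployment
forces format=qcow2, my next step is to check what the image actually is,
something like:

    qemu-img info /var/lib/one//datastores/0/17/disk.0
    file /var/lib/one//datastores/0/17/disk.0

If those report a raw image rather than qcow2, then the DRIVER=qcow2 setting
on the disk doesn't match the image that was actually created.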

The oned.log reports the same error as the libvirt log:

>> Sat Oct 27 21:01:56 2012 [VMM][D]: Message received: LOG I 17 error:
Unable to read from monitor: Connection reset by peer


I did find a bug report with the same error, but that issue appears to have
been fixed a couple of months ago, and I would think the 3.8 release would
include those changes:
http://dev.opennebula.org/issues/1367

I have not upgraded to 3.8.1 yet.  Perhaps I should?

Any ideas are welcome.

Thanks,
Jon A

On Fri, Oct 26, 2012 at 3:39 AM, Ruben S. Montero
<rsmontero at opennebula.org> wrote:

> Hi,
>
> Can you check the libvirt/qemu-specific logs?  Sometimes there is a clue
> there (although those logs are not very verbose).  It should be something
> like
>
> /var/log/libvirt/qemu/one-<VMID>.log
>
> There are several reasons for this specific error.
>
> Cheers
>
> Ruben
>
>
>
> On Fri, Oct 26, 2012 at 2:34 AM, Jon <three18ti at gmail.com> wrote:
>
>> Hello Javier,
>>
>> Thanks for that.  I was able to access the disk.
>>
>> I am now getting the error,
>>
>> >> Thu Oct 25 18:17:59 2012 [VMM][I]: error: Unable to read from monitor:
>> Connection reset by peer
>>
>> I tried all of the suggestions in this thread:
>>
>> >> http://www.digipedia.pl/usenet/thread/11667/7067/
>>
>> Adding the oneadmin user to the disk group and changing the order of the
>> apparmor lines.
>>
>> Also, something the docs don't point out: when you enable TCP you also have
>> to disable TLS in libvirtd.conf.  This caused a specific error, though I
>> don't remember it off the top of my head (that was a couple of days ago).
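>>
>> For reference, the relevant bits of my /etc/libvirt/libvirtd.conf now look
>> roughly like this (plus the -l listen flag in /etc/default/libvirt-bin so
>> libvirtd actually listens; exact auth settings may differ per setup):
>>
>>     listen_tcp = 1
>>     listen_tls = 0
>>     auth_tcp = "none"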
>>
>> Any ideas how I can resolve this?
>>
>> Thanks,
>> Jon A
>>
>> On Thu, Oct 25, 2012 at 9:10 AM, Javier Fontan <jfontan at opennebula.org> wrote:
>>
>>> Make sure you can access /var/lib/one//datastores/0/6/disk.0 on the
>>> node if you are using a shared system datastore.
>>>
>>> On Tue, Oct 23, 2012 at 7:47 AM, Jon <three18ti at gmail.com> wrote:
>>> > Hello All,
>>> >
>>> > I'm back again.  Not sure if I should continue this thread or if I should
>>> > start a new one, as I'm still having issues deploying VMs.  However, I'm
>>> > now working with a fresh install of 3.8 on a fresh install of Ubuntu 12.04.
>>> >
>>> > The error I'm getting now is:
>>> >
>>> >>> Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: LOG I 6 error:
>>> >>> internal error process exited while connecting to monitor: kvm: -drive
>>> >>> file=/var/lib/one//datastores/0/6/disk.0,if=none,id=drive-ide0-0-0,format=qcow2:
>>> >>> could not open disk image /var/lib/one//datastores/0/6/disk.0: Invalid argument
>>> >
>>> > I think this has something to do with the "id" parameter "id=drive-ide0-0-0",
>>> > but I'm not sure where that comes from or how to remedy it.
>>> >
>>> > I've created my template via Sunstone; is there an easy way to export the
>>> > template from the database?
>>> >
>>> > I've copied the full error below in case I've misidentified the source of
>>> > the error.
>>> >
>>> > Thanks,
>>> > Jon A
>>> >
>>> > Full Error:
>>> >
>>> > Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: LOG I 6 Command
>>> > execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy
>>> > /var/lib/one//datastores/0/6/deployment.0 kitt 6 kitt
>>> > Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: LOG I 6 error:
>>> Failed
>>> > to create domain from /var/lib/one//datastores/0/6/deployment.0
>>> > Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: LOG I 6 error:
>>> internal
>>> > error process exited while connecting to monitor: kvm: -drive
>>> >
>>> file=/var/lib/one//datastores/0/6/disk.0,if=none,id=drive-ide0-0-0,format=qcow2:
>>> > could not open disk image /var/lib/one//datastores/0/6/disk.0: Invalid
>>> > argument
>>> > Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: LOG I 6
>>> > Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: LOG E 6 Could not
>>> > create domain from /var/lib/one//datastores/0/6/deployment.0
>>> > Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: LOG I 6 ExitCode:
>>> 255
>>> > Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: LOG I 6 Failed to
>>> > execute virtualization driver operation: deploy.
>>> > Mon Oct 22 23:39:42 2012 [VMM][D]: Message received: DEPLOY FAILURE 6
>>> Could
>>> > not create domain from /var/lib/one//datastores/0/6/deployment.0
>>> >
>>> >
>>> >
>>> > On Mon, Oct 22, 2012 at 12:10 AM, Jon <three18ti at gmail.com> wrote:
>>> >>
>>> >> Hello Giovanni,
>>> >>
>>> >> My mistake, I thought I had.
>>> >>
>>> >> I swear I had already configured qemu as suggested with "oneadmin:oneadmin"
>>> >> (oneadmin is not a valid group) and, as suggested on the mailing list, with
>>> >> "oneadmin:cloud"; however, I just checked again and it appears that the user
>>> >> and group had not been set.
>>> >>
>>> >> I did what was suggested: I created the file /etc/apparmor.d/libvirt-qemu
>>> >> with the text suggested in the docs; I even tore down apparmor.
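>>> >>
>>> >> For reference, what I've now set in /etc/libvirt/qemu.conf on the node is
>>> >> roughly this (the exact group may differ depending on the packaging):
>>> >>
>>> >>     user  = "oneadmin"
>>> >>     group = "cloud"
>>> >>     dynamic_ownership = 0
>>> >>
>>> >> followed by restarting libvirt-bin.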
>>> >>
>>> >> However, I still get the error:
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG I 7 error:
>>> Domain
>>> >> not found: no domain with matching name
>>> >> '/var/lib/one//datastores/0/7/deployment.0'
>>> >>
>>> >> (not getting the permission denied error any more at least)
>>> >>
>>> >> Mon Oct 22 00:06:47 2012 [ReM][D]: [373] [0 oneadmin] [AclInfo] method
>>> >> invoked
>>> >> Mon Oct 22 00:06:47 2012 [ReM][D]: [373] [0 oneadmin] [AclInfo]
>>> SUCCESS,
>>> >> "<ACL_POOL><ACL><ID>0..."
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG I 7 Command
>>> >> execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy
>>> >> /var/lib/one//datastores/0/7/deployment.0 10.42.0.68 7 10.42.0.68
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG I 7 error:
>>> failed
>>> >> to get domain '/var/lib/one//datastores/0/7/deployment.0'
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG I 7 error:
>>> Domain
>>> >> not found: no domain with matching name
>>> >> '/var/lib/one//datastores/0/7/deployment.0'
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG I 7 error:
>>> Failed
>>> >> to create domain from /var/lib/one//datastores/0/7/deployment.0
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG I 7 error:
>>> Unable
>>> >> to read from monitor: Connection reset by peer
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG E 7 Could not
>>> >> create domain from /var/lib/one//datastores/0/7/deployment.0
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG I 7
>>> ExitCode: 255
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: LOG I 7 Failed to
>>> >> execute virtualization driver operation: deploy.
>>> >>
>>> >> Mon Oct 22 00:06:48 2012 [VMM][D]: Message received: DEPLOY FAILURE 7
>>> >> Could not create domain from /var/lib/one//datastores/0/7/deployment.0
>>> >>
>>> >> Thanks again for your help.
>>> >>
>>> >> Best Regards,
>>> >> Jon A
>>> >>
>>> >>
>>> >> On Sun, Oct 21, 2012 at 11:52 PM, Giovanni Toraldo <me at gionn.net> wrote:
>>> >>>
>>> >>> Hello Jon,
>>> >>>
>>> >>> please always use the Reply-to-all function of your email client when
>>> >>> using public mailing lists.
>>> >>>
>>> >>> 2012/10/22 Jon <three18ti at gmail.com>:
>>> >>> > Hello Giovanni,
>>> >>> >
>>> >>> > Thanks for your quick reply.
>>> >>> >
>>> >>> > Actually, the only error I see is,
>>> >>> >
>>> >>> >>> Sun Oct 21 22:13:04 2012 [AuM][E]: Auth Error: Could not find
>>> >>> >>> Authorization driver
>>> >>> >
>>> >>> > So I googled the error which brought me to:
>>> >>> >
>>> >>> > http://lists.opennebula.org/pipermail/users-opennebula.org/2011-August/006282.html
>>> >>> >
>>> >>> > Where they ask if AUTH_MAD is uncommented; in my case it was, but the
>>> >>> > path was not specified:
>>> >>> >
>>> >>> >>  AUTH_MAD = [
>>> >>> >>      executable = "one_auth_mad",
>>> >>> >>      authn = "ssh,x509,ldap,server_cipher,server_x509"
>>> >>> >>  ]
>>> >>> >
>>> >>> > So I set it to the full path:
>>> >>> >
>>> >>> >>  AUTH_MAD = [
>>> >>> >>      executable = "/usr/lib/one/mads/one_auth_mad",
>>> >>> >>      authn = "ssh,x509,ldap,server_cipher,server_x509"
>>> >>> >>  ]
>>> >>> >
>>> >>> > Now authentication is successful; however, I get the following in my
>>> >>> > oned.log:
>>> >>> >
>>> >>> > Sun Oct 21 22:47:22 2012 [TM][D]: Message received: LOG I 2
>>> mkimage:
>>> >>> > Making
>>> >>> > filesystem of 10240M and type ext4 at
>>> >>> > 10.42.0.68:/var/lib/one//datastores/0/2/disk.0
>>> >>> > Sun Oct 21 22:47:22 2012 [TM][D]: Message received: LOG I 2
>>> ExitCode: 0
>>> >>> > Sun Oct 21 22:47:23 2012 [TM][D]: Message received: LOG I 2
>>> mkimage:
>>> >>> > Making
>>> >>> > filesystem of 1024M and type swap at
>>> >>> > 10.42.0.68:/var/lib/one//datastores/0/2/disk.1
>>> >>> > Sun Oct 21 22:47:23 2012 [TM][D]: Message received: LOG I 2
>>> ExitCode: 0
>>> >>> > Sun Oct 21 22:47:23 2012 [TM][D]: Message received: TRANSFER
>>> SUCCESS 2
>>> >>> > -
>>> >>> > Sun Oct 21 22:47:24 2012 [VMM][D]: Message received: LOG I 2
>>> ExitCode:
>>> >>> > 0
>>> >>> > Sun Oct 21 22:47:24 2012 [VMM][D]: Message received: LOG I 2
>>> >>> > Successfully
>>> >>> > execute network driver operation: pre.
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG I 2
>>> Command
>>> >>> > execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy
>>> >>> > /var/lib/one//datastores/0/2/deployment.0 10.42.0.68 2 10.42.0.68
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG I 2 error:
>>> >>> > failed
>>> >>> > to get domain '/var/lib/one//datastores/0/2/deployment.0'
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG I 2 error:
>>> >>> > Domain
>>> >>> > not found: no domain with matching name
>>> >>> > '/var/lib/one//datastores/0/2/deployment.0'
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG I 2 error:
>>> >>> > Failed
>>> >>> > to create domain from /var/lib/one//datastores/0/2/deployment.0
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG I 2 error:
>>> >>> > internal
>>> >>> > error process exited while connecting to monitor: kvm: -drive
>>> >>> >
>>> >>> >
>>> file=/var/lib/one//datastores/0/2/disk.0,if=none,id=drive-ide0-0-0,format=qcow2:
>>> >>> > could not open disk image /var/lib/one//datastores/0/2/disk.0:
>>> >>> > Permission
>>> >>> > denied
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG I 2
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG E 2 Could
>>> not
>>> >>> > create domain from /var/lib/one//datastores/0/2/deployment.0
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG I 2
>>> ExitCode:
>>> >>> > 255
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: LOG I 2
>>> Failed to
>>> >>> > execute virtualization driver operation: deploy.
>>> >>> > Sun Oct 21 22:47:26 2012 [VMM][D]: Message received: DEPLOY
>>> FAILURE 2
>>> >>> > Could
>>> >>> > not create domain from /var/lib/one//datastores/0/2/deployment.0
>>> >>> > Sun Oct 21 22:47:39 2012 [AuM][D]: Message received: LOG I 3
>>> ExitCode:
>>> >>> > 0
>>> >>> >
>>> >>> > What it looks like is that it's still failing to get a domain named
>>> >>> > "/var/lib/one//datastores/0/2/deployment.0".  This file does exist.
>>> >>> >
>>> >>> > The two disks, however, are owned by user root and group root.
>>> >>> >
>>> >>> > I also see IM_MAD set without a path for the executable.
>>> >>> >
>>> >>> > IM_MAD = [
>>> >>> >       name       = "im_kvm",
>>> >>> >       executable = "one_im_ssh",
>>> >>> >       arguments  = "-r 0 -t 15 kvm" ]
>>> >>> >
>>> >>> > I set the full path here (is there a better way to resolve the lack of
>>> >>> > a path?)
>>> >>> >
>>> >>> > But I'm still getting the same error:
>>> >>> >
>>> >>> >>> Sun Oct 21 23:02:28 2012 [VMM][D]: Message received: LOG I 4
>>> error:
>>> >>> >>> internal error process exited while connecting to monitor: kvm:
>>> >>> >>> -drive
>>> >>> >>>
>>> >>> >>>
>>> file=/var/lib/one//datastores/0/4/disk.0,if=none,id=drive-ide0-0-0,format=qcow2:
>>> >>> >>> could not open disk image /var/lib/one//datastores/0/4/disk.0:
>>> >>> >>> Permission
>>> >>> >>> denied
>>> >>> >
>>> >>> > Googling that error brought me back here:
>>> >>> >
>>> >>> > http://lists.opennebula.org/pipermail/users-opennebula.org/2010-September/002848.html
>>> >>> >
>>> >>> > Which indicates that I should set the user and group for libvirt to
>>> >>> > oneadmin:cloud.  The problem is, the disk images are owned by root:root.
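>>> >>> >
>>> >>> > (As a stopgap I could chown the images by hand on the node, e.g. something
>>> >>> > like "chown oneadmin:cloud /var/lib/one/datastores/0/2/disk.*", but I'd
>>> >>> > rather fix whatever is creating them as root in the first place.)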
>>> >>>
>>> >>>
>>> >>> There are some configuration requirements for KVM hosts that you
>>> >>> should read on the documentation page and apply:
>>> >>>
>>> >>> http://opennebula.org/documentation:rel3.6:kvmg#kvm_configuration
>>> >>>
>>> >>>
>>> >>> > I created the template via Sunstone, so there really isn't a template
>>> >>> > per se...  I did a little digging, and it looks like Sunstone does store
>>> >>> > the template in the sqlite database.  The following mess of XML looks to
>>> >>> > be the template; I don't see anything that sticks out as Xen specific.
>>> >>> > I did use the KVM template in Sunstone to generate it.
>>> >>> >
>>> >>> > sqlite> select body from template_pool;
>>> >>> > <VMTEMPLATE>
>>> >>> >     <ID>1</ID>
>>> >>> >     <UID>0</UID>
>>> >>> >     <GID>0</GID>
>>> >>> >     <UNAME>oneadmin</UNAME>
>>> >>> >     <GNAME>oneadmin</GNAME>
>>> >>> >     <NAME>Ubuntu Test</NAME>
>>> >>> >     <PERMISSIONS>
>>> >>> >         <OWNER_U>1</OWNER_U>
>>> >>> >         <OWNER_M>1</OWNER_M>
>>> >>> >         <OWNER_A>0</OWNER_A>
>>> >>> >         <GROUP_U>0</GROUP_U>
>>> >>> >         <GROUP_M>0</GROUP_M>
>>> >>> >         <GROUP_A>0</GROUP_A>
>>> >>> >         <OTHER_U>0</OTHER_U>
>>> >>> >         <OTHER_M>0</OTHER_M>
>>> >>> >         <OTHER_A>0</OTHER_A>
>>> >>> >     </PERMISSIONS>
>>> >>> >     <REGTIME>1350791903</REGTIME>
>>> >>> >     <TEMPLATE>
>>> >>> >         <CPU><![CDATA[1]]></CPU>
>>> >>> >         <DISK>
>>> >>> >             <DRIVER><![CDATA[qcow2]]></DRIVER>
>>> >>> >             <FORMAT><![CDATA[ext4]]></FORMAT>
>>> >>> >             <SIZE><![CDATA[10240]]></SIZE>
>>> >>> >             <TYPE><![CDATA[fs]]></TYPE>
>>> >>> >         </DISK>
>>> >>> >         <DISK>
>>> >>> >             <DRIVER><![CDATA[raw]]>
>>> >>> >             </DRIVER>
>>> >>> >             <SIZE><![CDATA[1024]]></SIZE>
>>> >>> >             <TYPE><![CDATA[swap]]></TYPE>
>>> >>> >         </DISK>
>>> >>> >         <GRAPHICS>
>>> >>> >             <LISTEN><![CDATA[0.0.0.0]]></LISTEN>
>>> >>> >             <TYPE><![CDATA[vnc]]></TYPE>
>>> >>> >         </GRAPHICS>
>>> >>> >         <MEMORY><![CDATA[512]]></MEMORY>
>>> >>> >         <NAME><![CDATA[Ubuntu Test]]></NAME>
>>> >>> >         <OS>
>>> >>> >             <ARCH><![CDATA[x86_64]]></ARCH>
>>> >>> >             <BOOT><![CDATA[hd]]></BOOT>
>>> >>> >         </OS>
>>> >>> >         <RAW>
>>> >>> >             <TYPE><![CDATA[kvm]]></TYPE>
>>> >>> >         </RAW>
>>> >>> >         <TEMPLATE_ID><![CDATA[1]]></TEMPLATE_ID>
>>> >>> >         <VCPU><![CDATA[4]]></VCPU>
>>> >>> >     </TEMPLATE>
>>> >>> > </VMTEMPLATE>
>>> >>> >
>>> >>> >
>>> >>> > At this point I'm a little stumped, so any help is greatly appreciated.
>>> >>> >
>>> >>> > Thanks Again,
>>> >>> > Jon A
>>> >>> >
>>> >>>
>>> >>> --
>>> >>> Giovanni Toraldo
>>> >>> http://gionn.net
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Best Regards,
>>> > Jonathan David
>>> >
>>> > Please excuse any brevity or typos as this e-mail is most likely sent from
>>> > a mobile device.
>>> >
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Javier Fontán Muiños
>>> Project Engineer
>>> OpenNebula - The Open Source Toolkit for Data Center Virtualization
>>> www.OpenNebula.org | jfontan at opennebula.org | @OpenNebula
>>>
>>
>>
>>
>> --
>> Best Regards,
>> Jonathan David
>>
>> Please excuse any brevity or typos as this e-mail is most likely sent
>> from a mobile device.
>>
>>
>>
>> --
>> Ruben S. Montero, PhD
>> Project co-Lead and Chief Architect
>> OpenNebula - The Open Source Solution for Data Center Virtualization
>> www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula
>>
>>

