[one-users] Problem creating disk images

Ruben S. Montero rsmontero at opennebula.org
Wed May 15 11:06:00 PDT 2013


Hi

Images have to be of type qcow2 to work with the qcow2 driver. So:

1.- Images that are to be registered in a datastore need to be of type
qcow2; if you are creating a datablock, use something like:

NAME = "My Disk"
TYPE = DATABLOCK
SIZE = 10240
FSTYPE = qcow2
DRIVER = qcow2
DESCRIPTION = "An empty qcow2 disk"
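
For example, if you save the template above as (say) datablock.tmpl, it can
be registered with something along these lines (just a sketch; use the name
of your own datastore):

$ oneimage create datablock.tmpl -d default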

2.- For volatile disks, as in your example, add them as:
DISK=[
  DRIVER="qcow2",
  FORMAT="qcow2",
  SIZE="5120",
  TYPE="fs" ]

NOTE: You need to format the device within the guest in this case!
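
For example, from inside the guest something along these lines (a rough
sketch; the actual device name depends on how the disk shows up in the VM,
e.g. /dev/vdb or /dev/sdb):

# mkfs.ext4 /dev/vdb
# mount /dev/vdb /mnt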

Cheers

Ruben


On Mon, May 13, 2013 at 7:33 PM, Jon <three18ti at gmail.com> wrote:

> Hello All,
>
> I've got an answer of sorts... It seems that OpenNebula doesn't like the
> "qcow2" type of driver.
>
> If I use the following machine definition to define the vm and start it, I
> get the same "Invalid argument" error:
>
> >> oneadmin at loki:~$ virsh define /var/lib/one/vms/99/deployment.0
> >> oneadmin at loki:~$ virsh start one-99
> >> error: Failed to start domain one-99
> >> error: internal error process exited while connecting to monitor: kvm: -drive file=/var/lib/one//datastores/0/99/disk.1,if=none,id=drive-ide0-1-0,format=qcow2: could not open disk image /var/lib/one//datastores/0/99/disk.1: Invalid argument
>
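> (To double-check what format a file on disk actually has, qemu-img can
> report it, e.g. "qemu-img info /var/lib/one//datastores/0/99/disk.1"; the
> "file format:" line shows the real format.)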
>
> However, changing "qcow2" to "raw" in the deployment below allows me to
> define the VM with virsh and start it (or to edit it directly in libvirt).
>
> This would indicate to me that OpenNebula is actually creating the disk
> images as raw, even though the deployment file declares them as qcow2.
>
> Deployment file in question:
> oneadmin at loki:~$ cat /var/lib/one/vms/99/deployment.0
> <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
>         <name>one-99</name>
>         <cputune>
>                 <shares>1024</shares>
>         </cputune>
>         <memory>1048576</memory>
>         <os>
>                 <type arch='x86_64'>hvm</type>
>                 <boot dev='hd'/>
>         </os>
>         <devices>
>                 <emulator>/usr/bin/kvm</emulator>
>                 <disk type='file' device='cdrom'>
>                         <source file='/var/lib/one//datastores/0/99/disk.0'/>
>                         <target dev='hdb'/>
>                         <readonly/>
>                         <driver name='qemu' type='raw' cache='none'/>
>                 </disk>
>                 <disk type='file' device='disk'>
>                         <source file='/var/lib/one//datastores/0/99/disk.1'/>
>                         <target dev='hdc'/>
>                         <driver name='qemu' type='qcow2' cache='default'/>
>                 </disk>
>                 <disk type='file' device='disk'>
>                         <source file='/var/lib/one//datastores/0/99/disk.2'/>
>                         <target dev='hdd'/>
>                         <driver name='qemu' type='qcow2' cache='default'/>
>                 </disk>
>                 <disk type='file' device='cdrom'>
>                         <source file='/var/lib/one//datastores/0/99/disk.3'/>
>                         <target dev='hda'/>
>                         <readonly/>
>                         <driver name='qemu' type='raw'/>
>                 </disk>
>                 <interface type='bridge'>
>                         <source bridge='ovsbr0'/>
>                         <virtualport type='openvswitch'/>
>                         <mac address='02:00:44:47:83:43'/>
>                 </interface>
>         </devices>
>         <features>
>                 <acpi/>
>         </features>
> </domain>
>
> Template used to create deployment file in question:
>
> TEMPLATE 15 INFORMATION
>
> ID             : 15
> NAME           : ubuntu-13.04-x86_64-with_storage
> USER           : oneadmin
> GROUP          : oneadmin
> REGISTER TIME  : 04/27 17:47:17
>
> PERMISSIONS
>
> OWNER          : um-
> GROUP          : ---
> OTHER          : ---
>
> TEMPLATE CONTENTS
>
> CONTEXT=[
>   ETH0_DNS="$NETWORK[DNS,NETWORK_ID=\"3\"]",
>   ETH0_GATEWAY="$NETWORK[GATEWAY,NETWORK_ID=\"3\"]",
>   ETH0_IP="$NIC[IP,NETWORK_ID=\"3\"]",
>   ETH0_MASK="$NETWORK[NETWORK_MASK,NETWORK_ID=\"3\"]",
>   ETH0_NETWORK="$NETWORK[NETWORK_ADDRESS,NETWORK_ID=\"3\"]",
>   SSH_PUBLIC_KEY="ssh-rsa abcd foo at bar.com ]
> CPU="1"
> DISK=[
>   IMAGE_ID="11" ]
> DISK=[
>   CACHE="default",
>   DRIVER="qcow2",
>   FORMAT="ext4",
>   SIZE="5120",
>   TYPE="fs" ]
> DISK=[
>   CACHE="default",
>   DRIVER="qcow2",
>   SIZE="2048",
>   TYPE="swap" ]
> MEMORY="1024"
> NIC=[
>   NETWORK_ID="0" ]
> OS=[
>   ARCH="x86_64" ]
>
> Here is the template of the VM with RAW disks that starts successfully:
>
>  oneadmin at loki:~$ onetemplate show 25
> TEMPLATE 25 INFORMATION
>
> ID             : 25
> NAME           : ubuntu-13.04-test_raw_storage
> USER           : oneadmin
> GROUP          : oneadmin
> REGISTER TIME  : 05/13 11:11:24
>
> PERMISSIONS
>
> OWNER          : um-
> GROUP          : ---
> OTHER          : ---
>
> TEMPLATE CONTENTS
>
> CONTEXT=[
>   ETH0_CONTEXT_FORCE_IPV4="$NETWORK[CONTEXT_FORCE_IPV4,NETWORK_ID=\"0\"]",
>   ETH0_DNS="$NETWORK[DNS,NETWORK_ID=\"0\"]",
>   ETH0_GATEWAY="$NETWORK[GATEWAY,NETWORK_ID=\"0\"]",
>   ETH0_GATEWAY6="$NETWORK[GATEWAY6,NETWORK_ID=\"0\"]",
>   ETH0_IP="$NIC[IP,NETWORK_ID=\"0\"]",
>   ETH0_IPV6="$NIC[IP6_GLOBAL,NETWORK_ID=\"0\"]",
>   ETH0_MASK="$NETWORK[NETWORK_MASK,NETWORK_ID=\"0\"]",
>   ETH0_NETWORK="$NETWORK[NETWORK_ADDRESS,NETWORK_ID=\"0\"]",
>   SSH_PUBLIC_KEY="ssh-rsa  ]
>  CPU="2"
> DISK=[
>   DRIVER="raw",
>   FORMAT="ext4",
>   SIZE="5120",
>   TYPE="fs" ]
> DISK=[
>   DRIVER="raw",
>   SIZE="2048",
>   TYPE="swap" ]
> DISK=[
>   IMAGE_ID="11" ]
> GRAPHICS=[
>   LISTEN="0.0.0.0",
>   TYPE="VNC" ]
> MEMORY="2042"
> NIC=[
>   NETWORK_ID="0" ]
> OS=[
>   ARCH="x86_64" ]
> VCPU="2"
>
> (actually, updating template #15 to reflect "raw" drivers in place of
> "qcow2" drivers also allows the VM to boot)
>
> So, I guess the question then becomes, how do I tell OpenNebula to create
> qcow2 images instead of raw disk images?
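>
> (For what it's worth, the registered image can be inspected with something
> like "oneimage show 11", 11 being the IMAGE_ID from the template above, to
> see whether it has a DRIVER attribute set.)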
>
> Thanks,
> Jon A
>
>
>
> On Sun, May 12, 2013 at 1:45 PM, Jon <three18ti at gmail.com> wrote:
>
>> Hello All,
>>
>> After much aggravation, I finally came up with the idea to disable
>> apparmor for libvirtd.  So I copied the profile
>> from /etc/apparmor.d/usr.sbin.libvirtd
>> to /etc/apparmor.d/disabled/usr.sbin.libvirtd
>>
>> And now I am not receiving the error message:
>>
>> >> error: internal error cannot load AppArmor profile
>> 'libvirt-8a633864-4564-a814-8dc8-73bca05476a1'
>>
>> While this doesn't seem ideal, OpenNebula doesn't seem to work with the
>> libvirt profile active after changing the owner/group that libvirt runs as.
>> Has anyone been able to get libvirt to run with an active apparmor profile
>> under the oneadmin / cloud user / group on an Ubuntu system?
>>
>> What I find even more puzzling is that, even after I cleared
>> out /etc/apparmor.d/libvirt/:
>>
>> >> root at loki:~# ls -lah /etc/apparmor.d/libvirt/
>> >> total 20K
>> >> drwxr-xr-x  2 root root  12K May 12 13:18 .
>> >> drwxr-xr-x 10 root root 4.0K May 12 13:22 ..
>> >> -rw-r--r--  1 root root  164 May 12 13:17 TEMPLATE
>>
>> after tearing down apparmor the profile
>> for libvirt-39122dda-bedb-4808-fd25-5e68446224ff is still active:
>>
>> >> root at loki:~# /etc/init.d/apparmor teardown
>> >>  * Unloading AppArmor profiles        [ OK ]
>> >> root at loki:~# apparmor_status
>> >> apparmor module is loaded.
>> >> 1 profiles are loaded.
>> >> 1 profiles are in enforce mode.
>> >>    libvirt-39122dda-bedb-4808-fd25-5e68446224ff
>> >> 0 profiles are in complain mode.
>> >> 1 processes have profiles defined.
>> >> 1 processes are in enforce mode.
>> >>    libvirt-39122dda-bedb-4808-fd25-5e68446224ff (12050)
>> >> 0 processes are in complain mode.
>> >> 0 processes are unconfined but have a profile defined
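>>
>> (The kernel's list of currently loaded profiles can also be read directly,
>> e.g. "cat /sys/kernel/security/apparmor/profiles", which should show the
>> same libvirt-39122dda-bedb-4808-fd25-5e68446224ff profile that
>> apparmor_status reports.)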
>>
>> All that said, I'm still having trouble attaching a disk. The good news
>> is that I get the same "invalid argument" error whether I attach a volatile
>> disk or a persistent disk; however, the error prevents the VM from booting:
>>
>> >> Sun May 12 13:29:14 2013 [DiM][I]: New VM state is ACTIVE.
>> >> Sun May 12 13:29:15 2013 [LCM][I]: New VM state is PROLOG.
>> >> Sun May 12 13:29:29 2013 [LCM][E]: monitor_done_action, VM in a wrong
>> state
>> >> Sun May 12 13:30:28 2013 [TM][I]: clone: Cloning
>> loki:/var/lib/one/datastores/103/4a657983e37df7e79b59110748f58ce8 in
>> /var/lib/one/datastores/0/98/disk.0
>> >> Sun May 12 13:30:28 2013 [TM][I]: ExitCode: 0
>> >> Sun May 12 13:31:00 2013 [TM][I]: mkimage: Making filesystem of 5120M
>> and type ext4 at loki:/var/lib/one//datastores/0/98/disk.1
>> >> Sun May 12 13:31:00 2013 [TM][I]: ExitCode: 0
>> >> Sun May 12 13:31:02 2013 [TM][I]: mkimage: Making filesystem of 2048M
>> and type swap at loki:/var/lib/one//datastores/0/98/disk.2
>> >> Sun May 12 13:31:02 2013 [TM][I]: ExitCode: 0
>> >> Sun May 12 13:31:03 2013 [TM][I]: context: Generating context block
>> device at loki:/var/lib/one//datastores/0/98/disk.3
>> >> Sun May 12 13:31:03 2013 [TM][I]: ExitCode: 0
>> >> Sun May 12 13:31:03 2013 [LCM][I]: New VM state is BOOT
>> >> Sun May 12 13:31:03 2013 [VMM][I]: Generating deployment file:
>> /var/lib/one/vms/98/deployment.0
>> >> Sun May 12 13:31:03 2013 [VMM][I]: ExitCode: 0
>> >> Sun May 12 13:31:03 2013 [VMM][I]: Successfully execute network driver
>> operation: pre.
>> >> Sun May 12 13:31:07 2013 [VMM][I]: Command execution fail: cat << EOT
>> | /var/tmp/one/vmm/kvm/deploy /var/lib/one//datastores/0/98/deployment.0
>> loki 98 loki
>> >> Sun May 12 13:31:07 2013 [VMM][I]: error: Failed to create domain from
>> /var/lib/one//datastores/0/98/deployment.0
>> >> Sun May 12 13:31:07 2013 [VMM][I]: error: internal error process
>> exited while connecting to monitor: kvm: -drive
>> file=/var/lib/one//datastores/0/98/disk.1,if=none,id=drive-ide0-1-0,format=qcow2,cache=none:
>> could not open disk image /var/lib/one//datastores/0/98/disk.1: Invalid
>> argument
>> >> Sun May 12 13:31:07 2013 [VMM][I]:
>> >> Sun May 12 13:31:07 2013 [VMM][E]: Could not create domain from
>> /var/lib/one//datastores/0/98/deployment.0
>> >> Sun May 12 13:31:07 2013 [VMM][I]: ExitCode: 255
>> >> Sun May 12 13:31:07 2013 [VMM][I]: Failed to execute virtualization
>> driver operation: deploy.
>> >> Sun May 12 13:31:07 2013 [VMM][E]: Error deploying virtual machine:
>> Could not create domain from /var/lib/one//datastores/0/98/deployment.0
>> >> Sun May 12 13:31:08 2013 [DiM][I]: New VM state is FAILED
>>
>> What is the invalid argument here?
>>
>> Thanks for any suggestions.
>>
>> Best Regards,
>> Jon A
>>
>>
>>
>> On Wed, May 8, 2013 at 11:40 AM, Jon <three18ti at gmail.com> wrote:
>>
>>> Hey All,
>>>
>>> So I've just upgraded to 4.0 Final; however, I am still experiencing the
>>> previous errors: "Cannot load AppArmor profile" when attempting to use a
>>> volatile disk, and "Invalid argument" when attempting to use a datablock as
>>> a disk image.
>>>
>>> Given that this is the same error I've been plagued with since
>>> OpenNebula 3.8, I'm fairly certain there is something wrong with my
>>> configuration, but I'm at a loss as to what it could be.
>>>
>>> Any ideas are greatly appreciated.
>>>
>>> thanks,
>>> Jon A
>>>
>>>
>>> On Mon, May 6, 2013 at 11:51 AM, Jon <three18ti at gmail.com> wrote:
>>>
>>>> Hello Ruben,
>>>>
>>>> Thanks for this.  I actually remember performing this step several
>>>> months ago when I installed 3.8; however, I did not perform it when
>>>> I installed 4.0.
>>>>
>>>> However, it seems that something is still not quite right.  I am now
>>>> unable to deploy any VM with a failure message:
>>>>
>>>> >> error: internal error cannot load AppArmor profile
>>>> 'libvirt-76e12b0b-7d3c-c71a-46f0-bfe44917aedb'
>>>>
>>>> And a relevant entry in /var/log/libvirt/libvirtd.log (followed by a
>>>> cascading series of errors)
>>>>
>>>> >> 2013-05-06 17:06:39.665+0000: 1577: error : virCommandWait:2314 :
>>>> internal error Child process (/usr/lib/libvirt/virt-aa-helper -p 0 -c -u
>>>> libvirt-76e12b0b-7d3c-c71a-46f0-bfe44917aedb) status unexpected: exit
>>>> status 1
>>>> >> 2013-05-06 17:06:39.665+0000: 1577: error :
>>>> AppArmorGenSecurityLabel:446 : internal error cannot load AppArmor profile
>>>> 'libvirt-76e12b0b-7d3c-c71a-46f0-bfe44917aedb'
>>>>
>>>>
>>>> The docs say I need to add the following line to
>>>> /etc/apparmor.d/libvirt-qemu; I think this means I need to add it to
>>>> /etc/apparmor.d/abstractions/libvirt-qemu instead, since that is
>>>> the file that is loaded by the libvirt/TEMPLATE:
>>>>
>>>> >> owner /var/lib/one/** rw,
>>>>
>>>> After adding this entry to /etc/apparmor.d/abstractions/libvirt-qemu
>>>> and restarting apparmor my vm fails to start with the same error message.
>>>>
>>>> So I found an old mailing list article about the same issue:
>>>>
>>>> >>
>>>> http://lists.opennebula.org/pipermail/users-opennebula.org/2012-June/019427.html
>>>>
>>>> This thread also indicates that the following line is required in
>>>> /etc/apparmor.d/usr.sbin.libvirtd
>>>>
>>>> >> /var/lib/one/** lrwk,
>>>>
>>>> After doing this, I am still getting the error.
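>>>>
>>>> (In case it matters: rather than restarting the whole apparmor service,
>>>> the edited profile can also be reloaded directly with something like
>>>> "apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd".)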
>>>>
>>>> The last reply in this thread says the final solution was killing the
>>>> apparmor process and restarting the apparmor service.
>>>>
>>>> I performed a teardown, which didn't help, so I performed a full system
>>>> reboot.
>>>>
>>>> After the reboot, I still receive the error "internal error cannot load
>>>> AppArmor profile"
>>>>
>>>> It appears that this issue only occurs when I am attempting to use
>>>> volatile disks.
>>>>
>>>> However, when I attempt to attach a blank, persistent datablock to a
>>>> virtual machine, I get the error:
>>>>
>>>> >> error: internal error process exited while connecting to monitor:
>>>> kvm: -drive
>>>> file=/var/lib/one//datastores/0/95/disk.0,if=none,id=drive-ide0-1-0,format=qcow2,cache=none:
>>>> could not open disk image /var/lib/one//datastores/0/95/disk.0: Invalid
>>>> argument
>>>>
>>>> I'm not sure if this is related to disabling dynamic ownership and
>>>> setting the owner and group to oneadmin / cloud, or if it is an entirely
>>>> different error.
>>>>
>>>> I will say that I am still able to clone an imported disk image from
>>>> the marketplace, and it appears to launch and run successfully.
>>>>
>>>> Thanks for all your help.
>>>> Jon A
>>>>
>>>> Logs are attached below
>>>>
>>>> Failed VM with volatile disk image:
>>>>
>>>> Mon May 6 11:04:42 2013 [DiM][I]: New VM state is ACTIVE.
>>>> Mon May 6 11:04:42 2013 [LCM][I]: New VM state is PROLOG.
>>>> Mon May 6 11:06:05 2013 [TM][I]: clone: Cloning
>>>> loki:/var/lib/one/datastores/103/4a657983e37df7e79b59110748f58ce8 in
>>>> /var/lib/one/datastores/0/91/disk.0
>>>> Mon May 6 11:06:05 2013 [TM][I]: ExitCode: 0
>>>> Mon May 6 11:06:33 2013 [TM][I]: mkimage: Making filesystem of 5M and
>>>> type ext4 at loki:/var/lib/one//datastores/0/91/disk.1
>>>> Mon May 6 11:06:33 2013 [TM][I]: ExitCode: 0
>>>> Mon May 6 11:06:35 2013 [TM][I]: mkimage: Making filesystem of 2048M
>>>> and type swap at loki:/var/lib/one//datastores/0/91/disk.2
>>>> Mon May 6 11:06:35 2013 [TM][I]: ExitCode: 0
>>>> Mon May 6 11:06:36 2013 [TM][I]: context: Generating context block
>>>> device at loki:/var/lib/one//datastores/0/91/disk.3
>>>> Mon May 6 11:06:36 2013 [TM][I]: ExitCode: 0
>>>> Mon May 6 11:06:37 2013 [LCM][I]: New VM state is BOOT
>>>> Mon May 6 11:06:37 2013 [VMM][I]: Generating deployment file:
>>>> /var/lib/one/vms/91/deployment.6
>>>> Mon May 6 11:06:37 2013 [VMM][I]: ExitCode: 0
>>>> Mon May 6 11:06:37 2013 [VMM][I]: Successfully execute network driver
>>>> operation: pre.
>>>> Mon May 6 11:06:39 2013 [VMM][I]: Command execution fail: cat << EOT |
>>>> /var/tmp/one/vmm/kvm/deploy /var/lib/one//datastores/0/91/deployment.6 loki
>>>> 91 loki
>>>> Mon May 6 11:06:39 2013 [VMM][I]: error: Failed to create domain from
>>>> /var/lib/one//datastores/0/91/deployment.6
>>>> Mon May 6 11:06:39 2013 [VMM][I]: error: internal error cannot load
>>>> AppArmor profile 'libvirt-76e12b0b-7d3c-c71a-46f0-bfe44917aedb'
>>>> Mon May 6 11:06:39 2013 [VMM][E]: Could not create domain from
>>>> /var/lib/one//datastores/0/91/deployment.6
>>>> Mon May 6 11:06:39 2013 [VMM][I]: ExitCode: 255
>>>> Mon May 6 11:06:39 2013 [VMM][I]: Failed to execute virtualization
>>>> driver operation: deploy.
>>>> Mon May 6 11:06:39 2013 [VMM][E]: Error deploying virtual machine:
>>>> Could not create domain from /var/lib/one//datastores/0/91/deployment.6
>>>> Mon May 6 11:06:40 2013 [DiM][I]: New VM state is FAILED
>>>>
>>>> Error log from VM with attached datablock:
>>>>
>>>> Mon May 6 11:40:18 2013 [DiM][I]: New VM state is ACTIVE.
>>>> Mon May 6 11:40:19 2013 [LCM][I]: New VM state is PROLOG.
>>>> Mon May 6 11:40:20 2013 [TM][I]: ln: Linking
>>>> /var/lib/one/datastores/1/27ce5578d5cc9748cc44c8fedb75758b in
>>>> loki:/var/lib/one//datastores/0/95/disk.0
>>>> Mon May 6 11:40:20 2013 [TM][I]: ExitCode: 0
>>>> Mon May 6 11:41:35 2013 [TM][I]: clone: Cloning
>>>> loki:/var/lib/one/datastores/103/4a657983e37df7e79b59110748f58ce8 in
>>>> /var/lib/one/datastores/0/95/disk.1
>>>> Mon May 6 11:41:35 2013 [TM][I]: ExitCode: 0
>>>> Mon May 6 11:41:37 2013 [TM][I]: context: Generating context block
>>>> device at loki:/var/lib/one//datastores/0/95/disk.2
>>>> Mon May 6 11:41:37 2013 [TM][I]: ExitCode: 0
>>>> Mon May 6 11:41:37 2013 [LCM][I]: New VM state is BOOT
>>>> Mon May 6 11:41:37 2013 [VMM][I]: Generating deployment file:
>>>> /var/lib/one/vms/95/deployment.0
>>>> Mon May 6 11:41:38 2013 [VMM][I]: ExitCode: 0
>>>> Mon May 6 11:41:38 2013 [VMM][I]: Successfully execute network driver
>>>> operation: pre.
>>>> Mon May 6 11:41:40 2013 [VMM][I]: Command execution fail: cat << EOT |
>>>> /var/tmp/one/vmm/kvm/deploy /var/lib/one//datastores/0/95/deployment.0 loki
>>>> 95 loki
>>>> Mon May 6 11:41:40 2013 [VMM][I]: error: Failed to create domain from
>>>> /var/lib/one//datastores/0/95/deployment.0
>>>> Mon May 6 11:41:40 2013 [VMM][I]: error: internal error process exited
>>>> while connecting to monitor: kvm: -drive
>>>> file=/var/lib/one//datastores/0/95/disk.0,if=none,id=drive-ide0-1-0,format=qcow2,cache=none:
>>>> could not open disk image /var/lib/one//datastores/0/95/disk.0: Invalid
>>>> argument
>>>> Mon May 6 11:41:40 2013 [VMM][I]:
>>>> Mon May 6 11:41:40 2013 [VMM][E]: Could not create domain from
>>>> /var/lib/one//datastores/0/95/deployment.0
>>>> Mon May 6 11:41:40 2013 [VMM][I]: ExitCode: 255
>>>> Mon May 6 11:41:40 2013 [VMM][I]: Failed to execute virtualization
>>>> driver operation: deploy.
>>>> Mon May 6 11:41:40 2013 [VMM][E]: Error deploying virtual machine:
>>>> Could not create domain from /var/lib/one//datastores/0/95/deployment.0
>>>> Mon May 6 11:41:41 2013 [DiM][I]: New VM state is FAILED
>>>>
>>>>
>>>> On Mon, May 6, 2013 at 5:23 AM, Ruben S. Montero <
>>>> rsmontero at opennebula.org> wrote:
>>>>
>>>>> Have you tried setting the KVM configuration in /etc/libvirt/qemu.conf
>>>>> as follows?
>>>>>
>>>>> $ grep -vE '^($|#)' /etc/libvirt/qemu.conf
>>>>> user = "oneadmin"
>>>>> group = "oneadmin"
>>>>> dynamic_ownership = 0
>>>>>
>>>>>
>>>>> more details here:
>>>>>
>>>>> http://opennebula.org/documentation:rel3.8:kvmg#configuration
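>>>>>
>>>>> (After changing qemu.conf, remember to restart libvirt so the new user,
>>>>> group and dynamic_ownership settings take effect; on Ubuntu that is
>>>>> something like "service libvirt-bin restart".)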
>>>>>
>>>>>
>>>>> On Sun, May 5, 2013 at 9:40 PM, Jon <three18ti at gmail.com> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I am trying to create a virtual machine by installing the OS from an
>>>>>> ISO to a qemu image.
>>>>>>
>>>>>> The disk image is created with a user and group of oneadmin:cloud;
>>>>>> however, once the VM attempts to start, the ownership is changed to
>>>>>> libvirt-qemu:kvm and then to root:root, and I get an entry in the
>>>>>> log:
>>>>>>
>>>>>> >> error: internal error process exited while connecting to monitor:
>>>>>> kvm: -drive
>>>>>> file=/var/lib/one//datastores/0/87/disk.1,if=none,id=drive-ide0-1-0,format=qcow2,cache=none:
>>>>>> could not open disk image
>>>>>>
>>>>>> However, this disk image -does- exist, although with incorrect
>>>>>> permissions:
>>>>>>
>>>>>> >> root at loki:~# ls -lah /var/lib/one//datastores/0/87
>>>>>> >> -rw-r--r-- 1 root         root  5.1G Apr 27 19:24 disk.1
>>>>>>
>>>>>> What is changing the ownership of my disk images?
>>>>>>
>>>>>> Thanks,
>>>>>> Jon A
>>>>>>
>>>>>>
>>>>>> The full VM log is below:
>>>>>>
>>>>>> Sat Apr 27 19:23:31 2013 [DiM][I]: New VM state is ACTIVE.
>>>>>> Sat Apr 27 19:23:31 2013 [LCM][I]: New VM state is PROLOG.
>>>>>> Sat Apr 27 19:24:29 2013 [TM][I]: clone: Cloning
>>>>>> loki:/var/lib/one/datastores/103/4a657983e37df7e79b59110748f58ce8 in
>>>>>> /var/lib/one/datastores/0/87/disk.0
>>>>>> Sat Apr 27 19:24:29 2013 [TM][I]: ExitCode: 0
>>>>>> Sat Apr 27 19:24:51 2013 [TM][I]: mkimage: Making filesystem of 5120M
>>>>>> and type ext4 at loki:/var/lib/one//datastores/0/87/disk.1
>>>>>> Sat Apr 27 19:24:51 2013 [TM][I]: ExitCode: 0
>>>>>> Sat Apr 27 19:24:52 2013 [TM][I]: context: Generating context block
>>>>>> device at loki:/var/lib/one//datastores/0/87/disk.2
>>>>>> Sat Apr 27 19:24:52 2013 [TM][I]: ExitCode: 0
>>>>>> Sat Apr 27 19:24:53 2013 [LCM][I]: New VM state is BOOT
>>>>>> Sat Apr 27 19:24:53 2013 [VMM][I]: Generating deployment file:
>>>>>> /var/lib/one/vms/87/deployment.0
>>>>>> Sat Apr 27 19:24:53 2013 [VMM][I]: ExitCode: 0
>>>>>> Sat Apr 27 19:24:53 2013 [VMM][I]: Successfully execute network
>>>>>> driver operation: pre.
>>>>>> Sat Apr 27 19:24:56 2013 [VMM][I]: Command execution fail: cat << EOT
>>>>>> | /var/tmp/one/vmm/kvm/deploy /var/lib/one//datastores/0/87/deployment.0
>>>>>> loki 87 loki
>>>>>> Sat Apr 27 19:24:56 2013 [VMM][I]: error: Failed to create domain
>>>>>> from /var/lib/one//datastores/0/87/deployment.0
>>>>>> Sat Apr 27 19:24:56 2013 [VMM][I]: error: internal error process
>>>>>> exited while connecting to monitor: kvm: -drive
>>>>>> file=/var/lib/one//datastores/0/87/disk.1,if=none,id=drive-ide0-1-0,format=qcow2,cache=none:
>>>>>> could not open disk image /var/lib/one//datastores/0/87/disk.1: Invalid
>>>>>> argument
>>>>>> Sat Apr 27 19:24:56 2013 [VMM][I]:
>>>>>> Sat Apr 27 19:24:56 2013 [VMM][E]: Could not create domain from
>>>>>> /var/lib/one//datastores/0/87/deployment.0
>>>>>> Sat Apr 27 19:24:56 2013 [VMM][I]: ExitCode: 255
>>>>>> Sat Apr 27 19:24:56 2013 [VMM][I]: Failed to execute virtualization
>>>>>> driver operation: deploy.
>>>>>> Sat Apr 27 19:24:56 2013 [VMM][E]: Error deploying virtual machine:
>>>>>> Could not create domain from /var/lib/one//datastores/0/87/deployment.0
>>>>>> Sat Apr 27 19:24:57 2013 [DiM][I]: New VM state is FAILED
>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list
>>>>>> Users at lists.opennebula.org
>>>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Ruben S. Montero, PhD
>>>>> Project co-Lead and Chief Architect
>>>>> OpenNebula - The Open Source Solution for Data Center Virtualization
>>>>> www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula
>>>>>
>>>>
>>>>
>>>
>>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>


-- 
-- 
Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013
-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - The Open Source Solution for Data Center Virtualization
www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula