[one-users] (gang) live migrate with kvm fails

Shankhadeep Shome shank15217 at gmail.com
Mon Apr 30 10:47:37 PDT 2012


"however if I migrate one vm at a time with a 2-4 sec gap the migration is
successful sequentially"

Let me clarify this statement: I mean that if each migration task is
started with a 2-4 second gap, I don't actually have to wait for a
migration to finish before starting another.
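For what it's worth, the staggering described above could be scripted roughly like this. This is only a sketch: the delay value is the one that happened to work for me, and the actual OpenNebula CLI invocation (e.g. `onevm livemigrate VMID HOSTID`, or `onevm migrate --live` in later releases) is left as a placeholder.

```python
import subprocess
import time

def stagger(cmds, gap=3.0):
    """Start each command without waiting for the previous one to finish,
    leaving a fixed gap between starts; return the exit codes."""
    procs = []
    for cmd in cmds:
        procs.append(subprocess.Popen(cmd))  # fire and move on
        time.sleep(gap)                      # 2-4 s worked in practice
    return [p.wait() for p in procs]

# Hypothetical usage -- each entry would be the live-migration CLI call
# appropriate for your OpenNebula version, e.g.:
# stagger([["onevm", "livemigrate", str(vm), "target-host"]
#          for vm in (101, 102, 103)])
```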


On Mon, Apr 30, 2012 at 1:44 PM, Shankhadeep Shome <shank15217 at gmail.com> wrote:

> Also, AppArmor wasn't totally disabled, just its use in the libvirt
> configuration.
>
>
> On Mon, Apr 30, 2012 at 1:42 PM, Shankhadeep Shome <shank15217 at gmail.com> wrote:
>
>> No, after I explicitly configured libvirt not to use a security driver,
>> it worked fine. The AppArmor "DENIED" dmesg logs also disappeared.
>>
>> Shank
>>
>>
>> On Mon, Apr 30, 2012 at 4:47 AM, Tino Vazquez <tinova at opennebula.org> wrote:
>>
>>> Hi,
>>>
>>> Is this issue still showing after disabling apparmor?
>>>
>>> Regards,
>>>
>>> -Tino
>>>
>>> --
>>> Constantino Vázquez Blanco, MSc
>>> Project Engineer
>>> OpenNebula - The Open-Source Solution for Data Center Virtualization
>>> www.OpenNebula.org | @tinova79 | @OpenNebula
>>>
>>>
>>> On Sun, Apr 29, 2012 at 1:05 AM, Shankhadeep Shome <shank15217 at gmail.com>
>>> wrote:
>>> > Hi, I noticed that the following in qemu.conf had to be set to
>>> > support simultaneous mass migrations from one host to another with
>>> > KVM on Ubuntu 12.04. I'm not sure why; can anybody clarify what's
>>> > really going on?
>>> >
>>> > security_driver = "none"
>>> >
>>> > I get the following in the dmesg logs when VMs are starting up.
>>> > When I choose multiple VMs to live migrate, all of them fail to
>>> > migrate to the destination node; however, if I migrate one VM at a
>>> > time with a 2-4 sec gap, the migrations succeed sequentially. Any
>>> > ideas?
>>> >
>>> > The dmesg logs are full of AppArmor denials; however, VMs work fine
>>> > if started up and migrated one at a time. It seems to be an issue
>>> > with creating the NIC interface for the VM on the host. This error
>>> > is easy to recreate, and I can post additional info if anybody needs
>>> > it, as this isn't a production configuration.
>>> >
>>> > [584213.761452] virbr1: topology change detected, propagating
>>> > [584213.761467] virbr1: port 2(vnet0) entering forwarding state
>>> > [584213.761494] virbr1: port 2(vnet0) entering forwarding state
>>> > [584213.995055] type=1400 audit(1335631280.945:78): apparmor="DENIED"
>>> > operation="open" parent=1
>>> > profile="libvirt-987e7f7c-cb53-7093-4b4e-e87892109432"
>>> > name="/proc/19538/auxv" pid=19538 comm="kvm" requested_mask="r"
>>> > denied_mask="r" fsuid=1001 ouid=1001
>>> > [584223.872064] vnet0: no IPv6 routers present
>>> > [584224.038415] kvm: 19538: cpu0 unhandled rdmsr: 0xc0010001
>>> > [584485.067707] type=1400 audit(1335631552.015:79): apparmor="DENIED"
>>> > operation="open" parent=65334 profile="/usr/lib/libvirt/virt-aa-helper"
>>> > name="/var/lib/one/datastores/101/b79b6b3f36d4ba5f42f42af394b1a450"
>>> > pid=20285 comm="virt-aa-helper" requested_mask="r" denied_mask="r"
>>> > fsuid=0 ouid=1001
>>> > [584485.613221] type=1400 audit(1335631552.563:80): apparmor="STATUS"
>>> > operation="profile_replace"
>>> > name="libvirt-987e7f7c-cb53-7093-4b4e-e87892109432" pid=20286
>>> > comm="apparmor_parser"
>>> > [584489.907793] virbr1: port 2(vnet0) entering forwarding state
>>> > [584489.910679] device vnet0 left promiscuous mode
>>> > [584489.910693] virbr1: port 2(vnet0) entering disabled state
>>> > [584491.054608] type=1400 audit(1335631558.003:81): apparmor="STATUS"
>>> > operation="profile_remove"
>>> > name="libvirt-987e7f7c-cb53-7093-4b4e-e87892109432" pid=20300
>>> > comm="apparmor_parser"
>>> > [603789.617042] type=1400 audit(1335650856.569:82): apparmor="DENIED"
>>> > operation="open" parent=65334 profile="/usr/lib/libvirt/virt-aa-helper"
>>> > name="/var/lib/one/datastores/101/b79b6b3f36d4ba5f42f42af394b1a450"
>>> > pid=56380 comm="virt-aa-helper" requested_mask="r" denied_mask="r"
>>> > fsuid=0 ouid=1001
>>> > [603790.163553] type=1400 audit(1335650857.113:83): apparmor="STATUS"
>>> > operation="profile_load" name="libvirt-5deab1dd-a5eb-8780-5468-e0456feda51e"
>>> > pid=56381 comm="apparmor_parser"
>>> > [603790.488174] device vnet0 entered promiscuous mode
>>> > [603790.569330] virbr1: topology change detected, propagating
>>> > [603790.569344] virbr1: port 2(vnet0) entering forwarding state
>>> > [603790.569370] virbr1: port 2(vnet0) entering forwarding state
>>> > [603790.806858] type=1400 audit(1335650857.757:84): apparmor="DENIED"
>>> > operation="open" parent=1
>>> > profile="libvirt-5deab1dd-a5eb-8780-5468-e0456feda51e"
>>> > name="/proc/56410/auxv" pid=56410 comm="kvm" requested_mask="r"
>>> > denied_mask="r" fsuid=1001 ouid=1001
>>> > requested_mask="r" denied_mask="r" fsuid=0 ouid=1001
>>> >
>>> > This issue leads me to another question: would it be prudent to
>>> > configure mass live migrations as a serialized operation?
>>> >
>>> > _______________________________________________
>>> > Users mailing list
>>> > Users at lists.opennebula.org
>>> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>> >
>>>
>>
>>
>
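[Editor's note: for reference, the workaround discussed in this thread is the qemu.conf setting below, followed by a libvirt restart. This is a sketch of the change only; the service name is the one used on Ubuntu 12.04, and disabling the security driver removes AppArmor confinement of guests, so it is a trade-off rather than a fix.]

```shell
# /etc/libvirt/qemu.conf -- tell libvirt not to apply a security driver
# (on Ubuntu this is the AppArmor confinement of qemu/kvm guests):
#
#   security_driver = "none"
#
# Then restart libvirt so the setting takes effect:
sudo service libvirt-bin restart
```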

