<div dir="ltr">Hi Carlos,<div><br></div><div>Did you figure out what was wrong, or is it still not working? If it still doesn't work, could you please open a bug report [1] so we can look into it?</div><div><br></div><div style>
[1] <a href="http://dev.opennebula.org">dev.opennebula.org</a></div><div style><br></div><div style>cheers,<br>Jaime</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Feb 5, 2013 at 12:41 PM, Carlos Jiménez <span dir="ltr"><<a href="mailto:cjimenez@eneotecnologia.com" target="_blank">cjimenez@eneotecnologia.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
Hi Jaime,<br>
<br>
It is happening for all the interfaces, and we are using the bridging
compatibility layer; however, the network mode defined in the
template is openvswitch, not the default (bridge).<br>
This might be a use case where the physical machines have just
powered on.<br>
<br>
<br>
Regards,<br>
<br>
Carlos.<div><div class="h5"><br>
<br>
<br>
<div>On 02/05/2013 12:12 PM, Jaime Melis
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi,
<div><br>
</div>
<div>The expected behaviour is for the vnet interface to go away
after the VM shuts down (the hypervisor should run brctl delif
...). Is this happening only for a few interfaces or for all
of them? Are you using the bridging compatibility layer?</div>
<div><br>
</div>
<div>regards,<br>
Jaime</div>
</div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Tue, Feb 5, 2013 at 11:46 AM, Carlos
Jiménez <span dir="ltr"><<a href="mailto:cjimenez@eneotecnologia.com" target="_blank">cjimenez@eneotecnologia.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000"> Hi all,<br>
<br>
We're running OpenNebula 3.8.3 with Open vSwitch and we've
run into an issue. Once the frontend and the host are
started, the VMs appear in the Pending state, move to
Prolog, and then fall back to Pending again.<br>
<br>
This is the output of oned.log:<br>
<br>
<i><small>Tue Feb 5 11:30:04 2013 [DiM][D]: Deploying VM
0<br>
Tue Feb 5 11:30:04 2013 [ReM][D]: Req:5360 UID:0
VirtualMachineDeploy result SUCCESS, 0<br>
Tue Feb 5 11:30:07 2013 [TM][D]: Message received:
LOG I 0 clone: Cloning
/var/lib/one/datastores/1/d76d1fd89f175e1027f8506978165c03
in host1:/var/lib/one//datastores/0/0/disk.0<br>
Tue Feb 5 11:30:07 2013 [TM][D]: Message received:
LOG I 0 ExitCode: 0<br>
Tue Feb 5 11:30:07 2013 [TM][D]: Message received:
LOG I 0 ln: Linking
/var/lib/one/datastores/1/923331c1aeb5a587dd428d0b8607ff29
in host1:/var/lib/one//datastores/0/0/disk.1<br>
Tue Feb 5 11:30:07 2013 [TM][D]: Message received:
LOG I 0 ExitCode: 0<br>
Tue Feb 5 11:30:07 2013 [TM][D]: Message received:
TRANSFER SUCCESS 0 -<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
LOG I 0 ExitCode: 0<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
LOG I 0 Successfully execute network driver operation:
pre.<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
LOG I 0 Command execution fail: cat << EOT |
/var/tmp/one/vmm/kvm/deploy
/var/lib/one//datastores/0/0/deployment.24 host1 0
host1<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
LOG I 0 error: Failed to create domain from
/var/lib/one//datastores/0/0/deployment.24<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
LOG I 0 error: Unable to add bridge vbr1 port vnet0:
Invalid argument<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
LOG E 0 Could not create domain from
/var/lib/one//datastores/0/0/deployment.24<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
LOG I 0 ExitCode: 255<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
LOG I 0 Failed to execute virtualization driver
operation: deploy.<br>
Tue Feb 5 11:30:08 2013 [VMM][D]: Message received:
DEPLOY FAILURE 0 Could not create domain from
/var/lib/one//datastores/0/0/deployment.24</small></i><br>
<br>
We've realised that OpenNebula tries to create a vnetN
interface, but that port is already present in the Open vSwitch
database, so it cannot add the port to the bridge and therefore
cannot create the VM. This is the output of ovs-vsctl:<br>
<i><small>#ovs-vsctl show<br>
6725e67a-3af1-4fdf-9dfe-f606d09918a8<br>
Bridge "vbr1"<br>
Port "bond0"<br>
Interface "bond0"<br>
Port "vbr1"<br>
Interface "vbr1"<br>
type: internal<br>
ovs_version: "1.4.3"</small></i><br>
<br>
We've managed to solve it by manually deleting those
interfaces from the Open vSwitch database, and immediately
afterwards OpenNebula was able to create the VMs.<br>
This is the resulting output:<br>
<br>
<i><small>Tue Feb 5 11:31:37 2013 [TM][D]: Message
received: LOG I 0 clone: Cloning
/var/lib/one/datastores/1/d76d1fd89f175e1027f8506978165c03
in host1:/var/lib/one//datastores/0/0/disk.0<br>
Tue Feb 5 11:31:37 2013 [TM][D]: Message received:
LOG I 0 ExitCode: 0<br>
Tue Feb 5 11:31:37 2013 [TM][D]: Message received:
LOG I 0 ln: Linking
/var/lib/one/datastores/1/923331c1aeb5a587dd428d0b8607ff29
in host1:/var/lib/one//datastores/0/0/disk.1<br>
Tue Feb 5 11:31:37 2013 [TM][D]: Message received:
LOG I 0 ExitCode: 0<br>
Tue Feb 5 11:31:37 2013 [TM][D]: Message received:
TRANSFER SUCCESS 0 -<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
LOG I 0 ExitCode: 0<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
LOG I 0 Successfully execute network driver operation:
pre.<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
LOG I 0 ExitCode: 0<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
LOG I 0 Successfully execute virtualization driver
operation: deploy.<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
LOG I 0 post: Executed "sudo /usr/bin/ovs-ofctl
add-flow vbr1
in_port=2,dl_src=02:00:c0:a8:0f:64,priority=40000,actions=normal".<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
LOG I 0 post: Executed "sudo /usr/bin/ovs-ofctl
add-flow vbr1 in_port=2,priority=39000,actions=drop".<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
LOG I 0 ExitCode: 0<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
LOG I 0 Successfully execute network driver operation:
post.<br>
Tue Feb 5 11:31:38 2013 [VMM][D]: Message received:
DEPLOY SUCCESS 0 one-0</small></i><br>
<br>
Is there any way to manage this? We've thought about a script
to automatically check for stale interfaces every time we
restart the servers, but perhaps there is already a better way
we don't know about.<br>
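For reference, a boot-time cleanup along these lines might look like the sketch below. It is purely hypothetical, not an official OpenNebula tool: it assumes the vbr1 bridge shown in the ovs-vsctl output above, and that every vnetN port still attached to it at boot is a stale leftover from before the reboot.<br>

```shell
#!/bin/sh
# Hypothetical boot-time cleanup sketch (assumption: every vnetN port
# present at boot is stale). Remove leftover vnetN ports from the Open
# vSwitch database before oned starts deploying VMs.
BRIDGE="vbr1"   # assumed bridge name, as in the ovs-vsctl output above

# Filter stdin down to VM-facing vnetN port names; other ports such as
# bond0 and the internal vbr1 port are dropped by the filter.
stale_ports() {
    grep -E '^vnet[0-9]+$'
}

# Only act when ovs-vsctl is actually available on this host.
if command -v ovs-vsctl >/dev/null 2>&1; then
    for port in $(ovs-vsctl list-ports "$BRIDGE" | stale_ports); do
        ovs-vsctl del-port "$BRIDGE" "$port"
    done
fi
```

Running something like this from an init script before OpenNebula starts would avoid the manual deletion step, though it is only a starting point; the drivers may manage ports differently in other setups.<br>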
<br>
<br>
Thanks in advance,<br>
<br>
Carlos.<br>
<br>
</div>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br>
</blockquote>
</div>
<br>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
Jaime Melis<br>
Project Engineer<br>
OpenNebula - The Open Source Toolkit for Cloud Computing<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:jmelis@opennebula.org" target="_blank">jmelis@opennebula.org</a>
</blockquote>
<br>
</div></div></div>
<br>
<br></blockquote></div><br></div><br clear="all"><div><br></div>-- <br>Jaime Melis<br>Project Engineer<br>OpenNebula - The Open Source Toolkit for Cloud Computing<br><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:jmelis@opennebula.org" target="_blank">jmelis@opennebula.org</a>