[one-users] ping stops working after a normal migration

Adnan Pasic pqado at yahoo.de
Wed Jul 27 04:04:00 PDT 2011


I'm still using the predefined ttylinux image from the documentation, so I assume ACPI support should be installed and up to date, right?
It is strange, though: before the live migration I was able to run commands in the VM (via vncviewer), but after live-migrating, the window seems frozen. My input is no longer recognized, and the window just scrolls down when I press Enter.
Yet I can still SSH into the VM normally.

A normal migration, however, changes the situation. After migrating the VM normally I can't even ping its IP anymore (as described in my last mail). I can still log in via vncviewer, but as described above, the window just keeps scrolling down without accepting any input.
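
In case it helps, this is roughly how I check the domain on the destination
node after migrating (assuming libvirt's virsh is available there; "one-5" is
just an example name, since OpenNebula deploys domains as "one-<vmid>"):

  # is the domain actually running on the destination node?
  virsh list --all
  virsh domstate one-5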

Any ideas?



________________________________
From: Javier Fontan <jfontan at gmail.com>
To: Adnan Pasic <pqado at yahoo.de>
Cc: "users at lists.opennebula.org" <users at lists.opennebula.org>
Sent: Wednesday, 27 July 2011, 12:48
Subject: Re: [one-users] ping stops working after a normal migration

The problem could be that your machine is frozen after resume. This can
happen if it did not receive, or did not act on, the ACPI message telling it
that it is awake again. Can you check via VNC that the VM is running after
the migration? Also check that ACPI support is installed in that VM.
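
For example, something like this inside the guest should show whether acpid
is present and running (treat it as a sketch; paths and tools differ between
distros, and ttylinux is quite minimal):

  # is the acpid binary installed?
  which acpid

  # is the daemon running? ([a]cpid keeps grep from matching itself)
  ps aux | grep [a]cpid

  # did the kernel log ACPI events around the resume?
  dmesg | grep -i acpi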

Cheers


On Tue, Jul 26, 2011 at 11:04 AM, Adnan Pasic <pqado at yahoo.de> wrote:
> Hello guys,
> maybe you can help me with the following issue. I have created a little cloud
> with a host and two worker nodes. The setup has gone well so far: I am able
> to create VMs and move them via both normal and live migration.
> Another (possibly) important piece of information: I configured my virtual
> bridge on both worker nodes like this:
>
>  auto br0
>  iface br0 inet static
>          address 192.168.0.[2|3]
>          netmask 255.255.255.0
>          network 192.168.0.0
>          broadcast 192.168.0.255
>          #gateway 192.168.0.1
>          bridge_ports eth0
>          bridge_stp on
>          bridge_maxwait 0
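>
> After editing /etc/network/interfaces I apply the bridge settings again with
> something like this on each worker node (a sketch; it briefly drops
> connectivity on br0):
>
>  # restart the bridge interface so the new settings take effect
>  ifdown br0 && ifup br0
>
>  # show the per-port STP state, since bridge_stp is on
>  brctl showstp br0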
>
> The command "brctl show" returns the following:
>
> bridge name    bridge id            STP enabled    interfaces
> br0            8000.003005c34278    yes            eth0
>                                                    vnet0  (<- only appears on the node with the running VM)
>
> virbr0         8000.000000000000    yes
>
> According to the libvirt wiki this setup is fine as it is. However, the issue
> I'm having is that when I create a VM and assign it a static IP, e.g.
> 192.168.0.5, I am initially able to ping this VM from both worker nodes. When
> I perform a live migration, the ping stops for a few seconds (until the nodes
> learn the new route to the VM) and then resumes normally.
>
> However, when I perform a normal migration the ping never recovers; it just
> answers repeatedly with: Destination Host Unreachable
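>
> In case it's relevant: "Destination Host Unreachable" typically means ARP
> resolution is failing, so the ARP side can be checked from a worker node
> roughly like this (assuming arping from the iputils package is installed;
> 192.168.0.5 is the VM's IP from above):
>
>  # current ARP/neighbour entry for the VM, if any
>  ip neigh show 192.168.0.5
>
>  # probe the VM's MAC directly over the bridge
>  arping -I br0 -c 3 192.168.0.5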
>
> Do you know what the problem could be? What is the difference between a
> normal and a live migration, and why does the ping still work after a live
> migration but not after a normal one?
>
> Thanks a lot!
> Regards, Adnan
>



-- 
Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org