[one-users] ping stops working after a normal migration

Héctor Sanjuán hsanjuan at opennebula.org
Wed Jul 27 06:31:26 PDT 2011


Just in case, have you put

FEATURES=[ acpi="yes" ]

in your VM template?
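For context, a minimal template carrying that flag might look like the sketch below (the name, image, and network values are placeholders, not taken from this thread):

```
NAME     = "debian-test"                 # placeholder name
CPU      = 1
MEMORY   = 512
DISK     = [ IMAGE = "debian-6.0.2.1" ]  # placeholder image
NIC      = [ NETWORK = "private" ]       # placeholder network
FEATURES = [ acpi = "yes" ]              # lets the hypervisor deliver ACPI events to the guest
```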

Hector

On 27/07/11 14:25, Adnan Pasic wrote:
> Ok, at least I am now a little smarter regarding this whole thing.
> I have now set up a Debian 6.0.2.1 virtual machine, contextualized and
> everything.
> It also tells me that ACPI v1.5 is installed. However, when I just type
> "acpi" on the command line, I get this message: "No support for device
> type: power_supply". Could this be a hint?
> 
> And do I also need to have ACPI installed on both of my worker nodes?
> How can I properly set up ACPI in the VM?
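Not an authoritative answer, but inside a Debian guest the ACPI event daemon usually just needs to be installed and running; a quick check might look like this sketch (the script is ours, not from the thread):

```shell
#!/bin/sh
# Sketch: check ACPI support inside the guest (Debian).
# Assumes only a POSIX shell; "acpid" is the Debian package name.

# 1. Does the kernel expose an ACPI interface?
if [ -d /proc/acpi ] || [ -d /sys/firmware/acpi ]; then
    echo "kernel: ACPI interface present"
else
    echo "kernel: no ACPI interface found"
fi

# 2. Is the ACPI event daemon installed?
if command -v acpid >/dev/null 2>&1; then
    echo "acpid: installed"
else
    echo "acpid: not installed (try: apt-get install acpid)"
fi
```

The "No support for device type: power_supply" message from the `acpi` tool is expected on a VM that has no battery; by itself it does not indicate that ACPI event handling is broken.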
> 
> ------------------------------------------------------------------------
> *From:* Javier Fontan <jfontan at gmail.com>
> *To:* Adnan Pasic <pqado at yahoo.de>
> *Cc:* "users at lists.opennebula.org" <users at lists.opennebula.org>
> *Sent:* 13:15 Wednesday, 27 July 2011
> *Subject:* Re: [one-users] ping stops working after a normal migration
> 
> The ttylinux we provide does not have ACPI drivers, so that is the
> problem you are facing. We chose that distro for testing deployments
> and contextualization because of its size, not its completeness. Can
> you try the migrations using another distro with ACPI properly set up?
> 
> On Wed, Jul 27, 2011 at 1:04 PM, Adnan Pasic <pqado at yahoo.de
> <mailto:pqado at yahoo.de>> wrote:
>> I'm still using the predefined ttylinux from the documentation, so I
>> assume the ACPI support should be installed and up to date, right?
>> It is strange, however, because until the live migration I was able to
>> run commands in the VM (via vncviewer), but after live-migrating it
>> seems as if the window is frozen: my commands are no longer recognized,
>> and the window just scrolls down when I press Enter.
>> Yet I can still SSH into the VM normally.
>> Using a normal migration, however, changes the situation. After
>> migrating the VM normally I can't even ping its IP anymore (as
>> described in the last mail). I can still log in via vncviewer, but as
>> described above, the window just keeps scrolling down without
>> accepting any input.
>> Any ideas?
>>
>> ________________________________
>> From: Javier Fontan <jfontan at gmail.com <mailto:jfontan at gmail.com>>
>> To: Adnan Pasic <pqado at yahoo.de <mailto:pqado at yahoo.de>>
>> Cc: "users at lists.opennebula.org <mailto:users at lists.opennebula.org>"
>> <users at lists.opennebula.org <mailto:users at lists.opennebula.org>>
>> Sent: 12:48 Wednesday, 27 July 2011
>> Subject: Re: [one-users] ping stops working after a normal migration
>>
>> The problem could be that your machine is frozen after resume. This can
>> happen if it did not receive, or did not act on, the ACPI message
>> telling it that it is awake again. Can you check via VNC that the VM is
>> running after the migration? Also check that ACPI support is installed
>> in that VM.
>>
>> Cheers
>>
>>
>> On Tue, Jul 26, 2011 at 11:04 AM, Adnan Pasic <pqado at yahoo.de
> <mailto:pqado at yahoo.de>> wrote:
>>> Hello guys,
>>> maybe you can help me with the following issue. I have created a small
>>> cloud with one host and two worker nodes. The setup has been successful
>>> so far; I am able to create VMs and move them via both normal and live
>>> migration.
>>> Another (possibly) important piece of information is that I configured
>>> the virtual bridge on both worker nodes like this:
>>>
>>>  auto br0
>>>  iface br0 inet static
>>>          address 192.168.0.[2|3]
>>>          netmask 255.255.255.0
>>>          network 192.168.0.0
>>>          broadcast 192.168.0.255
>>>          #gateway 192.168.0.1
>>>          bridge_ports eth0
>>>          bridge_stp on
>>>          bridge_maxwait 0
>>>
>>> The command "brctl show" returns the following:
>>>
>>> bridge name    bridge id            STP enabled    interfaces
>>> br0            8000.003005c34278    yes            eth0
>>>                                                    vnet0 (<- only appears on the node with the running VM)
>>>
>>> virbr0         8000.000000000000    yes
>>>
>>> According to the libvirt wiki this setting is fine as it is. However,
>>> the issue I'm having is this: when I create a VM and assign it a static
>>> IP, e.g. 192.168.0.5, I am at first able to ping the VM from both
>>> worker nodes. When I perform a live migration, the ping stops for a few
>>> seconds (until the nodes learn the new route to the VM) and then
>>> resumes normally.
>>>
>>> However, when I perform a normal migration the ping never recovers;
>>> it answers repeatedly with: Destination Host Unreachable
>>>
>>> Do you know what the problem could be? What is the difference between
>>> a normal and a live migration, and why does the ping still work after
>>> a live migration but not after a normal one?
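One hedged guess at the "Destination Host Unreachable" symptom: after a cold migration the neighbours' ARP caches (and the switch's MAC table) may still point at the old host, whereas a live migration typically announces the move itself. Sending a gratuitous ARP from inside the VM after it resumes would refresh them; a sketch using iputils `arping` (the IP and interface are the examples from this thread):

```shell
#!/bin/sh
# Sketch: refresh neighbours' ARP caches after a migration.
# -U sends unsolicited (gratuitous) ARP; actually running it requires root.
VM_IP=192.168.0.5   # example VM address from this thread
IFACE=eth0          # guest-side interface
CMD="arping -c 3 -U -I $IFACE $VM_IP"
echo "would run: $CMD"   # drop the echo to actually send the ARP replies
```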
>>>
>>> Thanks a lot!
>>> Regards, Adnan
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at lists.opennebula.org <mailto:Users at lists.opennebula.org>
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>>
>>
>>
>>
>> --
>> Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
>> DSA Research Group: http://dsa-research.org
>> Globus GridWay Metascheduler: http://www.GridWay.org
>> OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org
>>
>>
>>
> 
> 
> 
> 
> 
> 
> 


-- 
Héctor Sanjuán
OpenNebula Sunstone Developer


