[one-users] opennebula remove all vm after reboot cisco switch :(

Carlos Martín Sánchez cmartin at opennebula.org
Tue Sep 3 06:22:47 PDT 2013


Hi,

The normal way to handle a network failure is to move the VMs to the
unknown state. OpenNebula will keep monitoring these VMs until they
reappear, at which point they are moved back to running.

It's hard to know what happened without the log files, but you probably set
up the fault tolerance hook, which deleted and recreated the VMs without a
grace period. If that is the case, you might want to use the -p flag:

           -p <n> avoid resubmission if host comes
                  back after n monitoring cycles
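For reference, the fault tolerance hook is enabled as a HOST_HOOK in oned.conf. A minimal sketch of such a configuration (the exact script path and supported flags may differ between OpenNebula versions; check the host_error.rb shipped with your installation) that recreates VMs but waits a few monitoring cycles before resubmitting could look like:

```
# Fault tolerance hook in oned.conf (sketch; path and flag support may vary
# between OpenNebula versions -- verify against your installed host_error.rb)
HOST_HOOK = [
    name      = "error",
    on        = "ERROR",
    command   = "ft/host_error.rb",
    # -r   : recreate the VMs that were running on the failed host
    # -p 5 : skip resubmission if the host comes back within 5 monitoring cycles
    arguments = "$ID -r -p 5",
    remote    = "no" ]
```

With -p set, a transient outage such as a switch reboot gives the hosts time to be monitored again before any VM is deleted and recreated.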


Regards


--
Join us at OpenNebulaConf2013 <http://opennebulaconf.com> in Berlin, 24-26
September, 2013
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmartin at opennebula.org |
@OpenNebula <http://twitter.com/opennebula>


On Sat, Aug 31, 2013 at 11:03 AM, 12navidb2 at gmail.com
<12navidb2 at gmail.com>wrote:

> Hello,
>
> All the VMs inside the hypervisor, and the OpenNebula management node, are
> connected to a Cisco switch.
> After the Cisco switch was rebooted, the VM status went from running to
> pending, and after a few minutes to failed; then all the VMs were removed
> from disk :|
> Now inside the VM folder (with ID 22) there is only one file:
> .nfs000000000198002000000001
>
> Is there any way to stop OpenNebula from automatically removing a VM when
> it goes to the failed state?
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>

