Hi Jhon,<div><br></div><div>Your description is very accurate, and I have just one comment:</div><div><br></div><div>Instead of manually executing the 'virsh create' command, you can execute 'onevm restart' [1]. This has the same effect: the hypervisor deployment command is executed without going through the prolog state, so the files already on the host are used instead of being copied again from the image repository, preserving the disk changes.</div>
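<div><br></div><div>For example, the recovery then reduces to a single CLI call (a sketch only; the VM ID 42 below is a placeholder, and the 'onevm' syntax is the one from the 3.2 guide linked in [1]):</div><div><br></div>

```shell
# Re-run the hypervisor deployment step for VM 42. The prolog state is
# skipped, so the disk files already on the host are reused as-is:
onevm restart 42

# Check the VM state afterwards; it should leave UNKNOWN once
# monitoring resumes:
onevm show 42
```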
<div><br></div><div>Cheers</div>
<div><br></div><div>[1] <a href="http://opennebula.org/documentation:rel3.2:vm_guide_2#onevm_command">http://opennebula.org/documentation:rel3.2:vm_guide_2#onevm_command</a><br></div><div><br clear="all">--<br>Carlos Martín, MSc<br>
Project Engineer<br>OpenNebula - The Open-source Solution for Data Center Virtualization<div><span style="border-collapse:collapse;color:rgb(136,136,136);font-family:arial,sans-serif;font-size:13px"><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:cmartin@opennebula.org" target="_blank">cmartin@opennebula.org</a> | <a href="http://twitter.com/opennebula" target="_blank">@OpenNebula</a></span></div>
<br>
<br><br><div class="gmail_quote">2012/3/12 Jhon Masschelein <span dir="ltr"><<a href="mailto:Jhon.Masschelein@sara.nl" target="_blank">Jhon.Masschelein@sara.nl</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi list,<br>
<br>
In your mail, you mix up FAILED and UNKNOWN.<br>
<br>
When a VM goes to FAILED, it pretty much always means that it was not able to deploy due to some error. The log file will give more information; look for things like inaccessible disks or networks, bad template variables, etc.<br>
<br>
As far as I know, a FAILED VM should never go back to a READY state without resubmission. Anybody, please correct me if I am wrong.<br>
<br>
UNKNOWN state is different; this happens when oned does not get any monitoring info from the VM for a while. This could be a result of the system and/or libvirt being very busy, or maybe network problems.<br>
Once monitoring resumes, this usually results in the UNKNOWN state going back to READY. Of course, if for some reason the KVM or XEN domain process died, monitoring will never resume.<br>
<br>
(Not sure if you are using KVM or XEN, the following is based on KVM but I think XEN is relatively similar.)<br>
For example, if you have a node crash, the KVM process will of course have died, the monitoring will stop and the VM will end up in UNKNOWN state.<br>
<br>
When the crashed node is rebooted, you can "recover" the VM by booting it again. In the /var/lib/one/$VMID/images directory for the VM, you will find a deployment.X file and the image files. You can simply use "virsh create deployment.X" (replace X with the highest number you find in the directory). This will restart the VM.<br>
<br>
After a little while, OpenNebula will start receiving monitoring info from the restarted VM again, and the VM will turn READY.<br>
<br>
For a FAILED VM, this is mostly not possible: the VM is FAILED because either the deployment file could not be created or is faulty, or the disk images could not be copied.<br>
<br>
All this is based on my experience with OpenNebula. Please correct me if I am wrong.<br>
<br>
Wkr,<br>
<br>
Jhon<div><div><br>
<br>
<br>
On 03/11/2012 10:08 PM, Łukasz Oleś wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Thursday 08 March 2012 06:45:54 Siva Prasad wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi All,<br>
<br>
I have a peculiar issue. For some reason, if a VM is heavily loaded it<br>
goes to UNKNOWN state. To recover from the UNKNOWN state I use "restart".<br>
Sometimes the VM recovers, and sometimes it goes to FAILED state (in<br>
both cases all the VM files exist on the disk). Below are my queries.<br>
<br>
1) How can I debug why the VM sometimes goes to FAILED state and why it<br>
sometimes recovers?<br>
</blockquote>
Check the /var/log/one/{vm_id}.log file.<br>
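<br>
A quick sketch of what that check could look like (the VM ID and the error patterns below are only examples, not an exhaustive list):<br>
<br>

```shell
VMID=42                               # placeholder VM ID
LOG=/var/log/one/$VMID.log

# Show the most recent lines, written around the last state change:
tail -n 50 "$LOG"

# Typical causes of FAILED show up as error lines, e.g. unreachable
# disks or networks, or bad template variables:
grep -iE 'error|fail' "$LOG"
```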
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
2) Is there a way to recover FAILED VMs?<br>
</blockquote>
<br>
I'm also interested in this question. Anyone?<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
Thanks,<br>
Siva<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
</blockquote>
</blockquote>
</div></div></blockquote></div><br></div>