Hi,<div><br></div><div>Your scenario is handled much better in the new 3.8 version.</div><div>If a resume operation fails, the VM is no longer set to FAILED; it goes back to STOPPED so you can retry the resume. This was fixed in #1210 [1].</div>
<div><br></div><div>We also added a new onedb command to fix corrupted databases. I advise you to upgrade [2] to 3.8.1 and run the onedb fsck command [3].</div><div><br></div><div>Regards</div><div><br></div><div>[1] <a href="http://dev.opennebula.org/issues/1210">http://dev.opennebula.org/issues/1210</a></div>
<div>[2] <a href="http://opennebula.org/documentation:rel3.8:upgrade">http://opennebula.org/documentation:rel3.8:upgrade</a></div><div>[3] <a href="http://opennebula.org/documentation:rel3.8:onedb">http://opennebula.org/documentation:rel3.8:onedb</a></div>
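<div><br></div><div>In case it is useful, the sequence would look roughly like this. It is only a sketch assuming the default SQLite backend and DB path; with a MySQL backend you would pass the connection options instead (see the onedb guide [3]). Make sure oned is stopped before running it:</div><div><br></div>
<pre>
# Sketch only: stop OpenNebula first, then back up and check the DB.
# The SQLite path below is the default one; adjust it to your installation.
onedb backup -v --sqlite /var/lib/one/one.db
onedb fsck -v --sqlite /var/lib/one/one.db

# With a MySQL backend the invocation would be along these lines
# (hostname, user, and DB name here are assumptions; use your own values):
# onedb fsck -v -S localhost -u oneadmin -p &lt;password&gt; -d opennebula
</pre>
<div>fsck will report and repair inconsistencies such as pool counters that no longer match the actual VM states, which is exactly the kind of mismatch you describe.</div>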
<div><br clear="all">--<br>Carlos Martín, MSc<br>Project Engineer<br>OpenNebula - The Open-source Solution for Data Center Virtualization<div><span style="border-collapse:collapse;color:rgb(136,136,136);font-family:arial,sans-serif;font-size:13px"><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:cmartin@opennebula.org" target="_blank">cmartin@opennebula.org</a> | <a href="http://twitter.com/opennebula" target="_blank">@OpenNebula</a></span></div>
<br>
<br><br><div class="gmail_quote">On Thu, Oct 11, 2012 at 12:36 AM, Lawrence Chiong <span dir="ltr"><<a href="mailto:junix88@gmail.com" target="_blank">junix88@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hello,<br><br>I need your help with a problem I encountered -<br><br>SCENARIO:<br><br>The VM failed when I performed the "Resume" action from the "Stopped" state.<br><br>Therefore, to restore the VM I issued the command below from my node2 -<br>
<pre><span>virsh --connect qemu:///system restore /var/lib/one//datastores/0/3/checkpoint</span></pre>The VM was back online; however, when I checked my Sunstone Virtual Machines list, the status was still "FAILED". So I updated the vm_pool table and set the state to "RUNNING", and the Sunstone VM status now shows "RUNNING".<br>
<br>PROBLEM:<br><br>When I checked the host both in Sunstone and with the "onehost list" command, my host/node reported incorrect information compared with the output of "virsh list" on node2: "virsh list" showed 3 running VMs, but "onehost list" and the Sunstone Hosts monitoring showed only 2.<br>
<br>-------------<br><br>Can anyone give me a step-by-step solution to fix this issue, so that the host reports the right information after a failed VM has been restored via the virsh command?<br><br>Your help is very much appreciated.<br>
<br>Thank you.<br><br>Junix<br>
<br>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br></blockquote></div><br></div>