[one-users] Resume paused VM due to full system datastore

Ruben S. Montero rsmontero at opennebula.org
Mon Mar 31 03:40:09 PDT 2014


Hi

The recommended way is to recover the VMs manually and let the
monitoring process update the state in oned, as you describe in your
email. The UNKNOWN state exists to accommodate these failure situations
(which may require different recovery procedures).
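
For example, after resuming a guest with virsh on the node you can simply
wait for the next monitoring cycle and check that oned picked the change
up (a rough sketch; 42 is just a placeholder VM ID, and one-42 the usual
libvirt domain name for it):

    # Resume the guest on the node, outside of OpenNebula
    virsh -c qemu:///system resume one-42

    # After the next monitoring cycle the VM should be RUNNING again
    onevm show 42 | grep -i state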

Note also that for a VM in UNKNOWN we can re-send the deploy action from
OpenNebula (a 'virsh create', in your case) with onevm boot (or through
the play icon in Sunstone); however, there is no way to send a 'virsh
resume' command from OpenNebula.
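
For instance, something along these lines should re-deploy every VM left
in UNKNOWN (an untested sketch; check the short state string printed by
onevm list on your installation):

    # Re-send the deploy action for every VM reported as UNKNOWN ("unkn")
    for id in $(onevm list | awk '/unkn/ {print $1}')
    do
        onevm boot ${id}
    done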

BTW, OpenNebula 4.4 should monitor the system datastore usage (even when
it is not shared) and stop scheduling VMs to it before the space runs
out; you can even limit the size used by OpenNebula manually. Is this not
working for you?
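
For example, something like this should cap the space OpenNebula considers
usable on the system datastore (a sketch from memory; double check the
LIMIT_MB attribute name and use your own datastore ID instead of 0):

    # Open the system datastore template in an editor...
    onedatastore update 0

    # ...and add a hard limit, e.g. 50 GB:
    #   LIMIT_MB = "51200"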

Cheers

Ruben


On Thu, Mar 27, 2014 at 4:51 PM, Daniel Dehennin <
daniel.dehennin at baby-gnu.org> wrote:

> Hello,
>
> I just encountered an issue with KVM-based VMs when the non-shared
> system datastore became full.
>
> Libvirt/KVM paused the VMs that were trying to write to their disks, and
> I had to run:
>
>     for vm in $(virsh -c qemu:///system list | awk '/paused/ {print $1}')
>     do
>         virsh -c qemu:///system resume ${vm}
>     done
>
> In ONE they were in the UNKNOWN state.
>
> Shouldn't it be handled by ONE directly?
>
> Regards.
> --
> Daniel Dehennin
> Récupérer ma clef GPG: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
> Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF
>


-- 
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula