[one-users] FW: VM reboot

Javier Fontan jfontan at fdi.ucm.es
Thu Jul 16 04:22:48 PDT 2009


On Jul 4, 2009, at 3:56 AM, Harsha Buggi wrote:

> FYI...
> I have also found that the restores are not consistent. Sometimes it  
> goes through if the VM is in a pre-boot state when it is deployed. I  
> have tried this with another image, only 20 MB, with a Unix-like OS.  
> This image did not have any problem with restoring. So now I feel my  
> setup is ok and the problem might be with qemu or libvirt. Has  
> anybody ever faced this issue? I am a newbie, so I would appreciate  
> it if the responses are more descriptive.

I don't really understand your problem. A restore after a stop copies  
files from the frontend machine to the node that will run the VM, if  
you are using the ssh drivers. These files are the VM images and the  
VM state. With a 20 MB image this copy should be much faster than with  
bigger images, so you probably can't see the VM in the prolog state.
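As an illustration, the stop/resume cycle can be driven from the frontend with the onevm CLI; the VM id 5 below is just an assumption for the example:

```shell
# Stop the VM: a checkpoint file is created and, with the ssh transfer
# drivers, the images are copied back to the frontend.
onevm stop 5

# Watch the state; after the epilog finishes it should read "stop".
onevm list

# Resume: images and checkpoint are copied back to a node and the VM
# is restored. With a small image the prolog may be too quick to see.
onevm resume 5
```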

>     I am facing another issue in that I am not able to restore  
> images which have been 'stopped' or 'suspended'. I have configured  
> NFS as the transfer manager and am running the entire setup on  
> fc8 (client and server).
>     I first create an image using virt-manager by connecting to the  
> KVM hypervisor. Then I use this image in my configuration file along  
> with the 'Clone' option to create VMs. This is an isolated setup, so  
> I am running all machines as the 'root' user. I do not have an  
> 'oneadmin' user or a NIS server running in my setup. The VMs boot up  
> without any problems. They also generate the 'checkpoint' file and go  
> to the 'susp' or 'stop' state. But when I try to 'resume' them, they  
> fail. The logs say there is a qemu error and libvirt could not  
> restore from the file (I do not have the exact log file right now,  
> but I can send it if you need it). I am trying to deploy Windows XP  
> and fc8 image files.
>    But I am not facing this issue if I deploy my VMs using ssh as  
> the transfer method.

>    Is there any work around for this problem?

Please check the permissions of the files in  
$ONE_LOCATION/var/<vmid>/images. Using NFS as root without specifying  
no_root_squash in the NFS export options makes root credentials on the  
remote nodes get mapped to "nobody". That could make the hypervisor  
unable to read those files. Using ssh you will not see this, as there  
is no such owner id transformation.

Using a dedicated user for the OpenNebula installation is still a good  
way to avoid these problems.
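As a sketch of the check and the fix (the export path, node name, and VM id below are assumptions for illustration):

```shell
# On the node, see who owns the copied images (VM id 5 assumed):
ls -l $ONE_LOCATION/var/5/images
# Files owned by "nobody" mean root was squashed by the NFS server.
#
# Fix: add no_root_squash to the export on the NFS server, e.g. in
# /etc/exports:
#   /srv/one  node01(rw,sync,no_root_squash)
#
# Then re-export on the server:
exportfs -ra
```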

>    I have also found another issue in that I wasn't able to use the  
> KVM hypervisor on fc8, as the 'emulator' parameter generated in the  
> VM config file was pointing to '/usr/bin/kvm'. But on fc8 this file  
> is named 'qemu-kvm' and not 'kvm'. I made this modification in the  
> 'libvirtdriver.cc' file and recompiled, and things started working  
> thereafter.
>   The 'one' server version I am running is 1.2.1.

That problem can also be easily solved by creating a symbolic link to  
that file. We have noticed that not all distributions name the kvm  
executable the same way or put it in the same path, and that is how we  
solved it. You can also see some explanation here for libvirt problems with  
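The workaround can be sketched as a one-line symlink on each Fedora 8 node (paths assumed; adjust to where your distribution installs the binary):

```shell
# Make the path OpenNebula generates (/usr/bin/kvm) point at the
# binary Fedora actually ships (/usr/bin/qemu-kvm).
ln -s /usr/bin/qemu-kvm /usr/bin/kvm
```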



Javier Fontan, Grid & Virtualization Technology Engineer/Researcher
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org

More information about the Users mailing list