<div dir="ltr">Done! [1]<div><br></div><div style>Thanks Bill!</div><div style><br></div><div style>[1] <a href="http://opennebula.org/documentation:rel4.0:ceph_ds">http://opennebula.org/documentation:rel4.0:ceph_ds</a><br>
</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Apr 17, 2013 at 6:58 PM, Ruben S. Montero <span dir="ltr"><<a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">This should go straight into the Ceph datastore documentation. Thanks Bill!<br>
<div class="HOEnZb"><div class="h5"><br>
On Wed, Apr 17, 2013 at 6:35 PM, Campbell, Bill<br>
<<a href="mailto:bcampbell@axcess-financial.com">bcampbell@axcess-financial.com</a>> wrote:<br>
> In our experience, that's because the target hypervisor cannot see the checkpoint file.<br>
><br>
> For migration, the Ceph driver uses the transfer manager of the 'system' datastore, so we have to do one of two things:<br>
><br>
> 1. The /var/lib/one/datastores/0 (system datastore) directory needs to be shared between opennebula and the hypervisor nodes<br>
> 2. Change the system datastore to use the ssh transfer manager, then modify its pre- and postmigrate scripts to copy the deployment/checkpoint files over from the source node<br>
><br>
> We went with option 2 to avoid relying on a shared storage volume (NFS, iSCSI, etc.) for holding the deployment and configuration files.<br>
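A minimal sketch of the premigrate hook described in option 2, for illustration only: the argument order ($1 source host, $2 destination host, $3 remote VM directory) is an assumption modeled on the stock ssh TM driver, and REMOTE_SHELL/COPY_CMD are hypothetical knobs, not part of any real driver. Check /var/lib/one/remotes/tm/ssh/premigrate on your front-end for the actual interface.

```shell
#!/bin/bash
# Sketch of a premigrate hook for the ssh transfer manager that copies the
# VM's deployment/checkpoint files to the destination hypervisor before a
# live migration, so the restore on the target can find them.
# ASSUMPTION: argument order mirrors the stock ssh TM driver; verify locally.
# REMOTE_SHELL/COPY_CMD are hypothetical overrides so the logic can be
# exercised without real hosts; the real driver would call ssh/scp directly.
REMOTE_SHELL="${REMOTE_SHELL:-ssh}"
COPY_CMD="${COPY_CMD:-scp -rp}"

premigrate() {
    local src_host="$1" dst_host="$2" vm_path="$3"
    # Make sure the VM directory exists on the destination hypervisor...
    $REMOTE_SHELL "$dst_host" mkdir -p "$vm_path"
    # ...then copy the checkpoint and deployment files across from the source.
    $COPY_CMD "$src_host:$vm_path/." "$dst_host:$vm_path"
}

# OpenNebula invokes the script with the hosts and path as arguments.
if [ "$#" -ge 3 ]; then
    premigrate "$1" "$2" "$3"
fi
```

The postmigrate counterpart would typically clean the directory up on the source host once the migration has succeeded.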
><br>
> ----- Original Message -----<br>
> From: "Ruben S. Montero" <<a href="mailto:rsmontero@opennebula.org">rsmontero@opennebula.org</a>><br>
> To: <a href="mailto:users@lists.opennebula.org">users@lists.opennebula.org</a><br>
> Sent: Wednesday, April 17, 2013 11:48:54 AM<br>
> Subject: Re: [one-users] Opennebula 4.0, Ceph cluster migrate vm problem<br>
><br>
> Hi<br>
><br>
> Can you check that the checkpoint file is created and that oneadmin has<br>
> access permissions on it? Could it be an issue with the KVM<br>
> configuration (dynamic_ownership or user/group settings)? From the logs it<br>
> seems that the checkpoint file is not created or cannot be read... You<br>
> can also try onevm stop/resume to test the checkpoint functionality.<br>
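A quick sanity check along those lines might look like this; it is only a sketch, with the checkpoint path taken from the failing log line and the `check_checkpoint` helper name invented for illustration.

```shell
#!/bin/bash
# Sketch: verify the checkpoint file exists and is readable before a restore
# is attempted. The default path comes from the failing log line; override
# CHECKPOINT to point the check elsewhere. In practice you would run this as
# oneadmin (e.g. via sudo -u oneadmin) on the destination host.
CHECKPOINT="${CHECKPOINT:-/var/lib/one/datastores/0/4/checkpoint}"

check_checkpoint() {
    local f="$1"
    if [ ! -e "$f" ]; then
        echo "missing: $f"
        return 1
    elif [ ! -r "$f" ]; then
        echo "not readable: $f"
        return 1
    fi
    echo "ok: $f"
}

check_checkpoint "$CHECKPOINT" || true
```

Running `onevm stop` followed by `onevm resume` exercises the same save/restore path on a single host, without involving a migration.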
><br>
> Cheers<br>
><br>
> Ruben<br>
><br>
> On Tue, Apr 16, 2013 at 4:50 PM, Stefan Ivanov <<a href="mailto:s.ivanov@maxtelecom.bg">s.ivanov@maxtelecom.bg</a>> wrote:<br>
>> Hello all.<br>
>><br>
>> I have a problem with the migration of virtual machines. When I try to migrate<br>
>> from host to host I get this error:<br>
>> VMM][I]: Command execution fail: /var/tmp/one/vmm/kvm/restore<br>
>> /var/lib/one//datastores/0/4/checkpoint gamma 4 gamma<br>
>> Tue Apr 16 17:46:30 2013 [VMM][E]: restore: Command "virsh --connect<br>
>> qemu:///system restore /var/lib/one//datastores/0/4/checkpoint" failed:<br>
>> error: Failed to restore domain from /var/lib/one//datastores/0/4/checkpoint<br>
>><br>
>> Here is my datastore info:<br>
>> DATASTORE 107 INFORMATION<br>
>> ID : 107<br>
>> NAME : ceph_data<br>
>> USER : oneadmin<br>
>> GROUP : oneadmin<br>
>> CLUSTER : -<br>
>> TYPE : IMAGE<br>
>> DS_MAD : ceph<br>
>> TM_MAD : ceph<br>
>> BASE PATH : /var/lib/one/datastores/107<br>
>> DISK_TYPE : RBD<br>
>><br>
>> PERMISSIONS<br>
>> OWNER : um-<br>
>> GROUP : u--<br>
>> OTHER : ---<br>
>><br>
>> DATASTORE TEMPLATE<br>
>> DISK_TYPE="RBD"<br>
>> DS_MAD="ceph"<br>
>> HOST="alpha"<br>
>> POOL_NAME="data"<br>
>> TM_MAD="ceph"<br>
>> TYPE="IMAGE_DS"<br>
>><br>
>><br>
>> Here is my error log:<br>
>> Tue Apr 16 17:46:25 2013 [LCM][I]: New VM state is SAVE_MIGRATE<br>
>> Tue Apr 16 17:46:27 2013 [VMM][I]: ExitCode: 0<br>
>> Tue Apr 16 17:46:27 2013 [VMM][I]: Successfully execute virtualization<br>
>> driver operation: save.<br>
>> Tue Apr 16 17:46:27 2013 [VMM][I]: ExitCode: 0<br>
>> Tue Apr 16 17:46:27 2013 [VMM][I]: Successfully execute network driver<br>
>> operation: clean.<br>
>> Tue Apr 16 17:46:29 2013 [LCM][I]: New VM state is PROLOG_MIGRATE<br>
>> Tue Apr 16 17:46:29 2013 [TM][I]: ExitCode: 0<br>
>> Tue Apr 16 17:46:29 2013 [TM][I]: ExitCode: 0<br>
>> Tue Apr 16 17:46:30 2013 [LCM][I]: New VM state is BOOT<br>
>> Tue Apr 16 17:46:30 2013 [VMM][I]: ExitCode: 0<br>
>> Tue Apr 16 17:46:30 2013 [VMM][I]: Successfully execute network driver<br>
>> operation: pre.<br>
>> Tue Apr 16 17:46:30 2013 [VMM][I]: Command execution fail:<br>
>> /var/tmp/one/vmm/kvm/restore /var/lib/one//datastores/0/4/checkpoint gamma 4<br>
>> gamma<br>
>> Tue Apr 16 17:46:30 2013 [VMM][E]: restore: Command "virsh --connect<br>
>> qemu:///system restore /var/lib/one//datastores/0/4/checkpoint" failed:<br>
>> error: Failed to restore domain from /var/lib/one//datastores/0/4/checkpoint<br>
>> Tue Apr 16 17:46:30 2013 [VMM][I]: error: Failed to create file<br>
>> '/var/lib/one//datastores/0/4/checkpoint': No such file or directory<br>
>> Tue Apr 16 17:46:30 2013 [VMM][E]: Could not restore from<br>
>> /var/lib/one//datastores/0/4/checkpoint<br>
>> Tue Apr 16 17:46:30 2013 [VMM][I]: ExitCode: 1<br>
>> Tue Apr 16 17:46:30 2013 [VMM][I]: Failed to execute virtualization driver<br>
>> operation: restore.<br>
>> Tue Apr 16 17:46:30 2013 [VMM][E]: Error restoring VM: Could not restore<br>
>> from /var/lib/one//datastores/0/4/checkpoint<br>
>> Tue Apr 16 17:46:32 2013 [DiM][I]: New VM state is FAILED<br>
>><br>
>> Any ideas what the problem is?<br>
>> _______________________________________________<br>
>> Users mailing list<br>
>> <a href="mailto:Users@lists.opennebula.org">Users@lists.opennebula.org</a><br>
>> <a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
><br>
><br>
><br>
> --<br>
> Ruben S. Montero, PhD<br>
> Project co-Lead and Chief Architect<br>
> OpenNebula - The Open Source Solution for Data Center Virtualization<br>
> <a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org">rsmontero@opennebula.org</a> | @OpenNebula<br>
<br>
<br>
<br>
--<br>
Ruben S. Montero, PhD<br>
Project co-Lead and Chief Architect<br>
OpenNebula - The Open Source Solution for Data Center Virtualization<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org">rsmontero@opennebula.org</a> | @OpenNebula<br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br>Jaime Melis<br>Project Engineer<br>OpenNebula - The Open Source Toolkit for Cloud Computing<br><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:jmelis@opennebula.org" target="_blank">jmelis@opennebula.org</a>
</div>