This is now fixed for the next release. The snapshot will be reused; the only potential problem is that reusing the same volume with old images from a previous installation may lead to inconsistent copies. This is an anomalous situation, and reusing the snapshot is probably faster and safer.
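To make it concrete, here is a rough sketch (not the actual driver code) of the kind of guard the clone step of the Ceph TM could apply, assuming RBD-backed images; the pool, image and snapshot names below are placeholders:

  POOL="one"
  SRC="one-42"          # base image registered in the datastore (placeholder)
  DST="one-42-123-0"    # per-VM clone a failed cleanup may have left behind (placeholder)

  # If the previous clone is still there, remove it (or reuse it) so the
  # new clone does not abort.
  if rbd --pool "$POOL" info "$DST" >/dev/null 2>&1; then
      rbd --pool "$POOL" rm "$DST"
  fi

  # Clone again from the protected base snapshot (the snapshot that is reused).
  rbd clone "$POOL/$SRC@snap" "$POOL/$DST"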

Cheers

Ruben

On Sat Dec 13 2014 at 8:58:41 AM Damon (Albino Geek) <albinogeek@gmail.com> wrote:
> On Fri, 12 Dec 2014 04:15:34 -0800, Fabian Zimmermann <dev.faz@gmail.com>
> wrote:
> > Hi,
> >
> > Ah! Thanks, I had just misread oned.conf.
> >
> > Nevertheless, I used "-r" and assumed it would re-create the VM, but
> > this failed with (Ceph) shared storage, because the clone aborts if the
> > previous cleanup failed. So in my opinion there is a bug: the TM should
> > handle this by removing or reusing the old snap/disk, shouldn't it?
> >
> > Fabian
>
> I do think that it should be better handled by the CephFS TM.