<div dir="ltr"><div>I can't migrate to NFS because the disk io performance won't handle my VM's.<br></div>There isn't any sunstone modification to use rbd v2 images snapshots ? I can create them using rbd snap create, but don't have the possibility to use it via sunstone.<div>
2014-05-14 16:31 GMT+02:00 Vladislav Gorbunov <vadikgo@gmail.com>:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">>Can you write me what i need to do to migrate to qcow2 format? I've got running vm's that cannot be "deleted". I tried migrating them form rbd v1 to v2 but this doesn't solved my problem.<br>
</div>You need NFS datastore. Only NFS or SSH datastore support qcow2 file<br>
format. But Opennebula can't move image between datastores.<br>
1. Shut down the VM.
2. Convert the image from Ceph to qcow2 format:
export image="vm-disk-1"; qemu-img convert -O qcow2 rbd:`oneimage show $image | awk '/^SOURCE/ {print $3}'` /var/lib/one/datastores/123/$image.qcow2
3. Rename the Ceph image:
oneimage rename $image $image.bak
4. Import the qcow2 image into the NFS datastore (id 123 in this example):
oneimage create -d nfsdatastorename --name $image --type os --persistent --source /var/lib/one/datastores/123/$image.qcow2 --driver qcow2 --size `rbd info rados/$image | awk '/size/ {print $5*4}'`
5. Start the VM.
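A quick sanity check before booting the VM again might look like this (still using the example image name and datastore id 123 from above):

# verify the converted file really is qcow2 and has the expected virtual size
qemu-img info /var/lib/one/datastores/123/$image.qcow2
# confirm the new image is registered in OpenNebula
oneimage show $image
# note: the --size above assumes the default 4 MB RBD object size, so object count * 4 gives the size in MB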
2014-05-14 22:08 GMT+12:00 Leszek Master <keksior@gmail.com>:
<div class="HOEnZb"><div class="h5">> Can you write me what i need to do to migrate to qcow2 format? I've got<br>
> running vm's that cannot be "deleted". I tried migrating them form rbd v1 to<br>
> v2 but this doesn't solved my problem.<br>
><br>
><br>
> 2014-04-30 12:41 GMT+02:00 Javier Fontan <<a href="mailto:jfontan@opennebula.org">jfontan@opennebula.org</a>>:<br>
><br>
>> I'm not sure I follow you there. Just changing the TM from ceph to
>> qcow2 is something that probably won't work. One thing that can be
>> failing is that free space is obtained in completely different ways
>> for Ceph and qcow2, so the datastore may be reporting 0 bytes free.
>> In that case the scheduler will never deploy a VM that uses that
>> datastore.
>>
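For what it's worth, a quick way to see what the datastore is actually reporting, assuming the stock OpenNebula CLI and the default scheduler log location (datastore id 123 is a placeholder):

# what capacity does the datastore report? 0 MB free keeps VMs in PENDING
onedatastore show 123
# the scheduler log normally says why a VM is not being dispatched
tail /var/log/one/sched.log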
>> On Wed, Apr 23, 2014 at 9:18 AM, Leszek Master <keksior@gmail.com> wrote:
>> > I tried to change the RBD image version from 1 to 2, but it didn't work.
>> > And when I changed TM_MAD to qcow2 in my Ceph datastore, a new VM
>> > instantiated from an image in that datastore got stuck in "PENDING".
>> > Is there any way to get this working with "existing" images?
>> >
>> >
>> > 2014-04-21 14:53 GMT+02:00 Stuart Longland <stuartl@vrt.com.au>:
>> >
>> >> On 17/04/14 01:18, Leszek Master wrote:
>> >> > 1) I'm using OpenNebula 4.4 with a Ceph datastore. When I try to make
>> >> > a snapshot I get an error:
>> >> > 3) Is there any way to use copy-on-write in OpenNebula 4.4?
>> >>
>> >> I did some experimental work on this with our OpenNebula 4.4 instance.
>> >> It relies on modern kernels' ability to map version 2 RBD images using
>> >> the kernel rbd driver, so it needs a kernel >3.10 (not sure of the exact
>> >> version; we're using 3.13).
>> >>
>> >> This newer format allows copy-on-write cloning. "Hot" snapshots work;
>> >> however, I haven't managed to trigger a deferred snapshot, so that's
>> >> untested. The driver also makes use of a hypervisor-side cache via
>> >> FlashCache, allocating slices of a nominated LVM volume to boost
>> >> performance.
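Very roughly, the Ceph-side flow this builds on looks like the following; a sketch only, with made-up pool and image names, and it needs a kernel recent enough to map format 2 images:

# create a format 2 image, snapshot it, protect the snapshot, then clone it copy-on-write
rbd create --image-format 2 --size 10240 rados/base-image
rbd snap create rados/base-image@golden
rbd snap protect rados/base-image@golden
rbd clone rados/base-image@golden rados/vm-disk-0
# map the clone through the kernel rbd driver so the hypervisor sees a plain block device
rbd map rados/vm-disk-0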
>> >>
>> >> I've thrown my work into a git repository for now; consider this very
>> >> much pre-alpha. If you're brave, feel free to give it a go:
>> >>
>> >> git clone git://git.longlandclan.yi.org/opennebula-ceph-flashcache.git
>> >>
>> >> Regards,
>> >> --
>> >> Stuart Longland
>> >> Systems Engineer
>> >> VRT Systems, 38b Douglas Street, Milton QLD 4064
>> >> T: +61 7 3535 9619   F: +61 7 3535 9699
>> >> http://www.vrt.com.au
>> >>
>> >>
>> >
>>
>> --
>> Javier Fontán Muiños
>> Developer
>> OpenNebula - The Open Source Toolkit for Data Center Virtualization
>> www.OpenNebula.org | @OpenNebula | github.com/jfontan