[one-users] Ceph and thin provision

Kenneth kenneth at apolloglobal.net
Thu Dec 12 06:29:49 PST 2013


 

This is all good news. I think it will solve my problem of slow VM deployment (a few minutes per VM), since cloning is really time consuming.

Although I really like this RBD format 2, I'm not quite sure yet how to implement it in OpenNebula. My Ceph version is Dumpling 0.67; does it support RBD format 2?

If you have any docs, I'd greatly appreciate them. Or rather, I'm willing to wait a little longer, maybe for the next release of OpenNebula(?), for RBD format 2 to become the default format.
---

Thanks,
Kenneth
Apollo Global Corp.

On 12/12/2013 09:48 PM, Campbell, Bill wrote:

> Ceph's RBD format 2 images support copy-on-write clones/snapshots for quick provisioning, where essentially the following happens:
> 
> Snapshot of image created --> snapshot protected from deletion --> clone image created from snapshot
> 
> The protected snapshot acts as a base image for the clone, where only the additional data is stored in the clone. See more here: http://ceph.com/docs/master/rbd/rbd-snapshot/#layering
> 
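To make that concrete, here is a minimal sketch of the snapshot -> protect -> clone flow using the python-rbd bindings that ship with Ceph. The pool, image, and snapshot names below are placeholders, not anything from this thread:

import rados
import rbd

# Connect to the cluster and open the pool that holds the images.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('one')                    # placeholder pool name

try:
    base = rbd.Image(ioctx, 'base-image')            # an existing format 2 image
    try:
        base.create_snap('base-snap')                # snapshot of image created
        base.protect_snap('base-snap')               # snapshot protected from deletion
    finally:
        base.close()

    # Clone image created from the protected snapshot (copy-on-write).
    rbd.RBD().clone(ioctx, 'base-image', 'base-snap',
                    ioctx, 'vm-disk-clone',          # placeholder clone name
                    features=1)                      # 1 == RBD_FEATURE_LAYERING
finally:
    ioctx.close()
    cluster.shutdown()
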
> For our environment here I have modified the included datastore/TM drivers for Ceph to take advantage of these format 2 images/layering for non-persistent images. It works rather well, and all image functions work appropriately for non-persistent images (save as, etc.). One note/requirement is to be using a newer Ceph release (recommend Dumpling or newer) and newer versions of QEMU/libvirt (there were some bugs in older releases, but the versions from the Ubuntu Cloud Archive for 12.04 work fine). I did submit them for improvement prior to the 4.0 release, but the simple format 1 images are currently the default for OpenNebula.
> 
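Since format 1 is still the default, it can be worth confirming that a given image really is format 2 with layering before relying on cloning. A small check along these lines is possible with the python-rbd bindings (pool and image names are again placeholders):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('one')              # placeholder pool name
img = rbd.Image(ioctx, 'base-image')           # placeholder image name
try:
    if img.old_format():
        print('format 1 image: layering/cloning not available')
    elif img.features() & 1:                   # bit 1 == layering
        print('format 2 image with layering enabled')
    else:
        print('format 2 image, but the layering feature is not set')
finally:
    img.close()
    ioctx.close()
    cluster.shutdown()
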
> I think this would be a good question for the developers. Would creating the option for format 2 images (either in the image template as a parameter or on the datastore as a configuration attribute), and then developing the DS/TM drivers further to accommodate this option, be worth the effort? I can see use cases for both (separate images vs. cloned images having to rely on the base image), but cloned images are WAY faster to deploy.
> 
> I have the basic code for format 2 images; I think the logic for looking up the parameter/attribute and then applying the appropriate action should be rather simple. I could collaborate/share if you'd like.
> 
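For what it's worth, that decision logic could look roughly like the sketch below: read the requested format (here a made-up RBD_FORMAT value, not an actual OpenNebula attribute) and pick either a full copy or a copy-on-write clone. This only illustrates the idea and is not the driver code referred to above:

import rbd

def provision_disk(ioctx, src_name, dst_name, rbd_format):
    """Create dst_name from src_name, cloning when format 2 is requested."""
    if rbd_format == '2':
        # Fast path: snapshot + protect + clone (copy-on-write).
        src = rbd.Image(ioctx, src_name)
        try:
            snaps = [s['name'] for s in src.list_snaps()]
            if 'deploy-snap' not in snaps:            # made-up snapshot name
                src.create_snap('deploy-snap')
                src.protect_snap('deploy-snap')
        finally:
            src.close()
        rbd.RBD().clone(ioctx, src_name, 'deploy-snap',
                        ioctx, dst_name, features=1)  # 1 == layering
    else:
        # Slow path: full copy, as with format 1 images today.
        src = rbd.Image(ioctx, src_name)
        try:
            src.copy(ioctx, dst_name)
        finally:
            src.close()
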
> -------------------------
> 
> FROM: "Kenneth" <kenneth at apolloglobal.net>
> TO: users at lists.opennebula.org
> SENT: Thursday, December 12, 2013 6:11:15 AM
> SUBJECT: Re: [one-users] Ceph and thin provision
> 
> Yes, that is possible. But as I said, all my images were preallocated, as I haven't created any image from Sunstone.
> 
> ---
> 
> Thanks,
> Kenneth
> Apollo Global Corp.
> 
> On 12/12/2013 06:25 PM, Michael wrote:
> 
>> This doesn't appear to be the case; I have 2TB of images on Ceph and 380GB of data reported by Ceph (760GB after replication). All of these Ceph images were created through the OpenNebula Sunstone template GUI.
>> 
>> -Michael
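One way to see what Michael describes for a given image, i.e. how much data is actually allocated versus the provisioned size, is to walk the allocated extents with python-rbd's diff_iterate (pool and image names are placeholders):

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('one')                  # placeholder pool name
img = rbd.Image(ioctx, 'vm-disk-0')                # placeholder image name
try:
    used = [0]
    def add_extent(offset, length, exists):
        # Called for each extent; count only those that hold data.
        if exists:
            used[0] += length
    img.diff_iterate(0, img.size(), None, add_extent)
    print('provisioned: %d bytes, allocated: %d bytes' % (img.size(), used[0]))
finally:
    img.close()
    ioctx.close()
    cluster.shutdown()
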
>> 
>> On 12/12/2013 09:11, Kenneth wrote:
>> 
>>> I haven't tried creating a thin- or thick-provisioned image in Ceph RBD from scratch. So basically, I can say that a 100GB disk will consume 100GB of RBD in Ceph (of course it will be 200GB in Ceph storage, since Ceph duplicates the disks by default).
>>



