[one-users] How to use ceph filesystem

Jon three18ti at gmail.com
Tue Dec 10 01:17:09 PST 2013


Note: you will have to mount a different rbd for each host's
/var/lib/oneadmin/datastores/0; you can't mount the same rbd on multiple
hosts, because rbds are not "cluster aware" per se. The exception is if
you put a clustered filesystem on top of your rbds (google "ceph iscsi"),
but at that point, why not just use cephfs? The only arguments against it
are that it isn't "ready for primetime" and that it adds infrastructure
complexity, which layering iscsi on top of ceph does anyway, while also
introducing a single point of failure and defeating the purpose of ceph.
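
For example, something along these lines on each hypervisor (a rough
sketch; the pool and image names are just placeholders):

    # on hypervisor "node1"; repeat with a different image on each host
    rbd create one-sysds-node1 --pool one --size 20480    # size in MB
    rbd map one-sysds-node1 --pool one
    mkfs.xfs /dev/rbd/one/one-sysds-node1
    mount /dev/rbd/one/one-sysds-node1 /var/lib/oneadmin/datastores/0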

I rewrote my ssh transfer drivers to rsync the files before live
migrating the vm. There are obvious pitfalls with doing that, though, as
there is no guarantee of consistency.
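
Roughly, the modified driver does something like this before the
migration (a simplified sketch; the paths and variables are illustrative
only):

    # copy the VM's system datastore directory to the destination host
    rsync -a /var/lib/one/datastores/0/$VMID/ \
          $DEST_HOST:/var/lib/one/datastores/0/$VMID/
    # then let libvirt move the running domain (OpenNebula names them one-<vmid>)
    virsh migrate --live one-$VMID qemu+ssh://$DEST_HOST/system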

Ultimately, my solution was to use a separate rbd for swap, or not to
configure any swap at all (libvirt does support memory ballooning, though
I'm not sure that OpenNebula exposes any controls for it). That way only
non-vm files are ever stored in datastores/0 (i.e. only OpenNebula files
are stored there).
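
For the separate-rbd-for-swap option, the image can live in the ceph
image datastore as a persistent datablock, roughly like this (the name,
size and datastore name are only examples):

    # swap.tmpl
    NAME       = vm42-swap
    TYPE       = DATABLOCK
    PERSISTENT = YES
    SIZE       = 2048          # MB

    oneimage create swap.tmpl -d cephds
    # then reference it from the VM template: DISK = [ IMAGE = "vm42-swap" ]
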
On Dec 4, 2013 3:13 AM, "Kenneth" <kenneth at apolloglobal.net> wrote:

>  Going directly to the RBD layer is what already happens once you use
> ceph as an IMAGE datastore. OpenNebula interfaces directly with ceph and
> your images are stored in RBD format. That is already the "direct way";
> it is the same thing as using KVM with ceph RBD.
>
> The only case where you may want to use cephfs (which is not RBD) is
> when you use it for the SYSTEM datastore with the shared TM. This is
> what I use. Besides, the system datastore doesn't contain a lot of
> files, so even if this method is inefficient, I won't notice it at all.
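>
> For reference, my system datastore is simply defined with the shared TM,
> roughly like this (a sketch from memory):
>
>     NAME   = system
>     TYPE   = SYSTEM_DS
>     TM_MAD = shared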
>
> But if you also want to use RBD for the system datastore, you can still
> do so: just mount an RBD image on /var/lib/one/datastores/0/ of your
> nodes.
> ---
>
> Thanks,
> Kenneth
> Apollo Global Corp.
>
>  On 12/04/2013 04:08 PM, Mario Giammarco wrote:
>
>    I have read all the posts in this interesting thread.
> You suggest using ceph as a shared filesystem, and I agree it is a good
> idea.
>
> But I supposed that, because kvm supports ceph rbd and because opennebula
> supports ceph, there is a "direct way" to use it.
> I mean not using the ceph DFS layer and going directly to the rbd layer
> (also for the system datastore).
> I do not understand what advantages OpenNebula's current ceph support
> offers; can you explain them to me?
>
> Thanks,
> Mario
>
>
> 2013/12/3 Jaime Melis <jmelis at c12g.com>
>
>> Hi Mario,
>>
>> Cephfs CAN be used as a shared filesystem datastore. I don't completely
>> agree with Kenneth's recommendation of using 'ssh' as the TM for the
>> system datastore. I think you can go for 'shared' as long as you have
>> /var/lib/one/datastores/... shared via Cephfs. OpenNebula doesn't care
>> about what DFS solution you're using; it will simply assume the files
>> are already there.
>>
>> Another thing worth mentioning: from 4.4 onwards the HOST attribute of
>> the datastore should be renamed to BRIDGE_LIST.
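>>
>> So a 4.4 ceph datastore template would look roughly like this (the pool
>> name and hostnames are just examples):
>>
>>     NAME        = cephds
>>     DS_MAD      = ceph
>>     TM_MAD      = ceph
>>     DISK_TYPE   = RBD
>>     POOL_NAME   = one
>>     BRIDGE_LIST = "cephnode1 cephnode2"    # was HOST before 4.4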
>>
>> cheers,
>> Jaime
>>
>>
>>  On Tue, Dec 3, 2013 at 11:28 AM, Kenneth <kenneth at apolloglobal.net>wrote:
>>
>>>   Actually, I'm using ceph as the system datastore. I used cephfs
>>> (CEPH FUSE) and mounted it on all nodes at /var/lib/one/datastores/0/
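>>>
>>> The mount itself is just a ceph-fuse call on every node, more or less
>>> like this (the monitor address is an example):
>>>
>>>     sudo ceph-fuse -m mon1:6789 /var/lib/one/datastores/0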
>>>
>>> Regarding ssh as the transfer driver, I haven't really used it since
>>> I'm all on ceph, both system and image datastore. I may be wrong, but
>>> that is how I understand it from the docs.
>>>  ---
>>>
>>> Thanks,
>>> Kenneth
>>> Apollo Global Corp.
>>>
>>>   On 12/03/2013 06:11 PM, Mario Giammarco wrote:
>>>
>>>   My problem was that, because ceph is a distributed filesystem (and
>>> so can be used as an alternative to nfs), I supposed I could use it as
>>> a shared system datastore.
>>> Reading your reply I can see that is not true. The official
>>> documentation should probably clarify this.
>>>
>>> In fact, I hoped to use ceph as the system datastore because ceph is
>>> fault tolerant and nfs is not.
>>>
>>> Thanks for help,
>>> Mario
>>>
>>>
>>> 2013/12/3 Kenneth <kenneth at apolloglobal.net>
>>>
>>>>  Ceph won't be the default image datastore, but you can always choose
>>>> it whenever you create an image.
>>>>
>>>> You said you don't have an NFS disk and you just use plain disks for
>>>> your system datastore, so you *should* use ssh in order to have live
>>>> migrations.
>>>>
>>>> Mine uses shared for the system datastore since I mounted a shared
>>>> folder on each nebula node.
>>>>  ---
>>>>
>>>> Thanks,
>>>> Kenneth
>>>> Apollo Global Corp.
>>>>
>>>>   On 12/03/2013 03:01 PM, Mario Giammarco wrote:
>>>>
>>>> First, thank you for your very detailed reply!
>>>>
>>>>
>>>> 2013/12/3 Kenneth <kenneth at apolloglobal.net>
>>>>
>>>>>  You don't need to replace existing datastores; the important thing
>>>>> is to set the system datastore to "ssh", because you still need to
>>>>> transfer files to each node when you deploy a VM.
>>>>>
>>>>
>>>> So I lose live migration, right?
>>>> If I understand correctly, ceph cannot be the default datastore either.
>>>>
>>>>>  Next, you should make sure that all your nodes are able to
>>>>> communicate with the ceph cluster. Issue the command "ceph -s" on all
>>>>> nodes, including the front end, to be sure that they are connected to
>>>>> ceph.
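>>>>>
>>>>> A quick way to check them all from the front end is a loop like this
>>>>> (the hostnames are just examples):
>>>>>
>>>>>     for h in frontend node1 node2; do ssh $h "ceph -s"; done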
>>>>>
>>>>>
>>>>>
>>>>
>>>> ... will check...
>>>>
>>>>
>>>>>
>>>>> oneadmin at cloud-node1:~$ onedatastore list
>>>>>
>>>>>   ID NAME                SIZE AVAIL CLUSTER      IMAGES TYPE DS    TM
>>>>>    0 system                 - -     -                 0 sys  -     shared
>>>>>    1 default             7.3G 71%   -                 1 img  fs    shared
>>>>>    2 files               7.3G 71%   -                 0 fil  fs    ssh
>>>>>  100 cephds              5.5T 59%   -                 3 img  ceph  ceph
>>>>>
>>>>> Once you have verified that the ceph datastore is active, you can
>>>>> upload images through the Sunstone GUI. Be aware that converting
>>>>> images to ceph's RBD format may take quite some time.
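>>>>>
>>>>> From the command line it is the same idea, for example (the image
>>>>> name and path are just placeholders):
>>>>>
>>>>>     oneimage create --name ubuntu-base --path /tmp/ubuntu.qcow2 \
>>>>>                     --datastore cephds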
>>>>>
>>>>
>>>> I see in your configuration that the system datastore is shared!
>>>>
>>>> Thanks again,
>>>> Mario
>>>>
>>>>       _______________________________________________
>>> Users mailing list
>>> Users at lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>>
>>
>>
>> --
>> Jaime Melis
>> C12G Labs - Flexible Enterprise Cloud Made Simple
>> http://www.c12g.com | jmelis at c12g.com
>>
>