[one-users] How to use ceph filesystem

Mario Giammarco mgiammarco at gmail.com
Wed Dec 4 00:08:58 PST 2013


I have read all posts of this interesting thread.
You suggest using Ceph as a shared filesystem, and I agree it is a good
idea.

But I supposed that, because KVM supports Ceph RBD and because OpenNebula
supports Ceph, there is a "direct way" to use it.
I mean not going through the CephFS layer but directly to the RBD layer
(also for the system datastore).
I do not understand what advantages OpenNebula's current Ceph support
offers; can you explain them to me?

Thanks,
Mario


2013/12/3 Jaime Melis <jmelis at c12g.com>

> Hi Mario,
>
> CephFS CAN be used as a shared filesystem datastore. I don't completely
> agree with Kenneth's recommendation of using 'ssh' as the TM for the
> system datastore. I think you can go for 'shared' as long as you have
> /var/lib/one/datastores/... shared via CephFS. OpenNebula doesn't care
> which DFS solution you're using; it will simply assume the files are
> already there.
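>
> For example, a minimal sketch of that setup (the monitor address and
> secret file are illustrative): mount CephFS over the system datastore
> path on every host and leave the system datastore's TM as 'shared':
>
>   # on each host (front-end and hypervisors), mount CephFS
>   # over the system datastore directory
>   mount -t ceph mon1:6789:/ /var/lib/one/datastores/0 \
>         -o name=admin,secretfile=/etc/ceph/admin.secret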
>
> Another thing worth mentioning: from 4.4 onwards, the HOST attribute of
> the datastore should be renamed to BRIDGE_LIST.
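>
> For instance, a 4.4 Ceph datastore template would look roughly like
> this (the pool and host names are illustrative):
>
>   NAME        = cephds
>   DS_MAD      = ceph
>   TM_MAD      = ceph
>   DISK_TYPE   = RBD
>   POOL_NAME   = one
>   BRIDGE_LIST = "node1 node2"   # this attribute was HOST before 4.4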
>
> cheers,
> Jaime
>
>
> On Tue, Dec 3, 2013 at 11:28 AM, Kenneth <kenneth at apolloglobal.net> wrote:
>
>>  Actually, I'm using ceph as the system datastore. I used CephFS
>> (Ceph FUSE) and mounted it on all nodes at /var/lib/one/datastores/0/
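>>
>> For reference, that mount is roughly the following (the monitor
>> address is illustrative):
>>
>>   # mount CephFS via FUSE over the system datastore on each node
>>   ceph-fuse -m mon1:6789 /var/lib/one/datastores/0/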
>>
>> Regarding ssh as the transfer driver, I haven't really used it, since I'm
>> all on ceph for both the system and image datastores. I may be wrong, but
>> that is how I understand it from the docs.
>> ---
>>
>> Thanks,
>> Kenneth
>> Apollo Global Corp.
>>
>>  On 12/03/2013 06:11 PM, Mario Giammarco wrote:
>>
>>  My problem was that, because Ceph is a distributed filesystem (and so
>> it can be used as an alternative to NFS), I supposed I could use it as a
>> shared system datastore.
>> Reading your reply I can see that is not true. Probably the official
>> documentation should clarify this.
>>
>> In fact I hoped to use Ceph as the system datastore because Ceph is
>> fault tolerant and NFS is not.
>>
>> Thanks for help,
>> Mario
>>
>>
>> 2013/12/3 Kenneth <kenneth at apolloglobal.net>
>>
>>>  Ceph won't be the default image datastore, but you can always choose
>>> it whenever you create an image.
>>>
>>> You said you don't have an NFS share and just use plain disks for your
>>> system datastore, so you *should* use ssh as the transfer driver (note
>>> that the ssh TM does not give you live migration).
>>>
>>> Mine uses shared as the system datastore TM since I mounted a shared
>>> folder on each OpenNebula node.
>>>  ---
>>>
>>> Thanks,
>>> Kenneth
>>> Apollo Global Corp.
>>>
>>>   On 12/03/2013 03:01 PM, Mario Giammarco wrote:
>>>
>>> First, thank you for your very detailed reply!
>>>
>>>
>>> 2013/12/3 Kenneth <kenneth at apolloglobal.net>
>>>
>>>>  You don't need to replace the existing datastores; the important thing
>>>> is to set the system datastore's TM to "ssh", because you still need to
>>>> transfer files to each node when you deploy a VM.
>>>>
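>>>> A sketch of that change (datastore 0 is the default system
>>>> datastore):
>>>>
>>>>   # open the system datastore template in an editor
>>>>   onedatastore update 0
>>>>   # and set the transfer manager driver:
>>>>   TM_MAD = "ssh"
>>>>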
>>>
>>> So I lose live migration, right?
>>> If I understand correctly, Ceph cannot be the default datastore either.
>>>
>>>>  Next, you should make sure that all your nodes are able to communicate
>>>> with the Ceph cluster. Issue the command "ceph -s" on all nodes, including
>>>> the front-end, to be sure that they are connected to Ceph.
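>>>>
>>>> For example, from the front-end (host names are illustrative):
>>>>
>>>>   # verify that every node can reach the Ceph cluster
>>>>   for h in front-end node1 node2; do
>>>>       ssh "$h" ceph -s || echo "$h cannot reach ceph"
>>>>   done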
>>>>
>>>>
>>>
>>> ... will check...
>>>
>>>
>>>>
>>>> oneadmin@cloud-node1:~$ onedatastore list
>>>>
>>>>   ID NAME                SIZE AVAIL CLUSTER      IMAGES TYPE DS    TM
>>>>    0 system                 - -     -                 0 sys  -     shared
>>>>    1 default             7.3G 71%   -                 1 img  fs    shared
>>>>    2 files               7.3G 71%   -                 0 fil  fs    ssh
>>>>  100 cephds              5.5T 59%   -                 3 img  ceph  ceph
>>>>
>>>> Once you have verified that the ceph datastore is active, you can
>>>> upload images in the Sunstone GUI. Be aware that converting images to
>>>> Ceph's RBD format may take quite some time.
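>>>>
>>>> From the CLI, registering an image would be something like this (the
>>>> image name and path are illustrative):
>>>>
>>>>   # register an image in the ceph datastore (ID 100 above); the
>>>>   # driver converts it to an RBD volume, which can be slow
>>>>   oneimage create --name ubuntu --path /tmp/ubuntu.qcow2 \
>>>>                   --datastore cephds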
>>>>
>>>
>>> I see that in your configuration the system datastore is shared!
>>>
>>> Thanks again,
>>> Mario
>>>
>>>
>
>
> --
> Jaime Melis
> C12G Labs - Flexible Enterprise Cloud Made Simple
> http://www.c12g.com | jmelis at c12g.com
>