[one-users] How to use ceph filesystem

Kenneth kenneth at apolloglobal.net
Wed Dec 4 02:13:41 PST 2013


 

Going directly to the RBD layer is what you get once you use Ceph as an IMAGE datastore. OpenNebula will interface directly with Ceph and your images will be stored in RBD format. This is already the "direct way"; it is the same thing as using KVM with Ceph RBD.
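
For anyone who wants to see what that looks like in practice, here is a minimal sketch of a Ceph image datastore template; the pool name and the bridge host are placeholders, not values from this thread:

    $ cat ceph_images.ds
    NAME        = ceph_images
    DS_MAD      = ceph              # datastore driver talks to Ceph directly
    TM_MAD      = ceph              # transfer driver keeps VM disks as RBD images
    DISK_TYPE   = RBD
    POOL_NAME   = one               # Ceph pool that will hold the images
    BRIDGE_LIST = "ceph-frontend"   # host(s) with rbd access (HOST before 4.4)
    $ onedatastore create ceph_images.ds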

The only case where you may want to use CephFS (which is not RBD) is when you use it for the SYSTEM datastore with "shared" as the TM driver. This is what I use. Besides, the system datastore doesn't contain a lot of files, so even if this method is inefficient, I won't notice it at all.
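
A rough sketch of that setup (the monitor address and secret file are placeholders): mount CephFS over the system datastore directory on every host, then switch datastore 0 to the "shared" transfer driver:

    # on the front-end and on every node
    $ sudo mount -t ceph mon1:6789:/ /var/lib/one/datastores/0 \
          -o name=admin,secretfile=/etc/ceph/admin.secret

    # point the system datastore at the shared TM driver
    $ onedatastore update 0         # opens an editor; set TM_MAD = "shared"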

But if you also want to use RBD for the system datastore, you can still do that. You just mount an RBD image at /var/lib/one/datastores/0/ on your nodes.
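
Sketched out, assuming a pool called "one" and an image called "one-system" (both placeholders), that would look roughly like this; keep in mind that a plain filesystem such as XFS must only be mounted read-write on one node at a time unless you put a cluster filesystem on the RBD image:

    # once: create the RBD image and put a filesystem on it
    $ rbd create one-system --pool one --size 102400    # size in MB
    $ sudo rbd map one-system --pool one                # shows up as e.g. /dev/rbd0
    $ sudo mkfs.xfs /dev/rbd0

    # on the node: map the image and mount it over the system datastore
    $ sudo mount /dev/rbd0 /var/lib/one/datastores/0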

---

Thanks,
Kenneth
Apollo Global Corp.

On 12/04/2013 04:08 PM, Mario Giammarco wrote:

> I have read all posts of this interesting thread. You suggest using Ceph as a shared filesystem, and I agree it is a good idea.
> 
> But I supposed that, because KVM supports Ceph RBD and because OpenNebula supports Ceph, there is a "direct way" to use it. I mean not using the Ceph DFS layer and going directly to the RBD layer (also for the system datastore). I do not understand what advantages OpenNebula's current Ceph support has; can you explain it to me?
> 
> Thanks, Mario
> 
> 2013/12/3 Jaime Melis <jmelis at c12g.com>
> 
>> Hi Mario,
>> 
>> CephFS CAN be used as a shared filesystem datastore. I don't completely agree with Kenneth's recommendation of using 'ssh' as the TM for the system datastore. I think you can go for 'shared' as long as you have /var/lib/one/datastores/... shared via CephFS. OpenNebula doesn't care about what DFS solution you're using; it will simply assume the files are already there.
>> 
>> Another thing worth mentioning: from 4.4 onwards, the HOST attribute of the datastore should be renamed to BRIDGE_LIST.
>> 
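
Concretely, that is just a one-attribute change in the datastore template; a minimal sketch with a placeholder host name:

    $ onedatastore update <datastore_id>    # opens an editor
    # before 4.4 the template carried:   HOST        = "ceph-frontend"
    # from 4.4 onwards it should carry:  BRIDGE_LIST = "ceph-frontend"
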
>> cheers,
>> Jaime 
>> 
>> On Tue, Dec 3, 2013 at 11:28 AM, Kenneth <kenneth at apolloglobal.net> wrote:
>> 
>>> Actually, I'm using Ceph as the system datastore. I used CephFS (CEPH FUSE) and mounted it on all nodes at /var/lib/one/datastores/0/
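
That mount boils down to a single ceph-fuse call per node (the monitor address below is only an example):

    # run on every node and the front-end; add to /etc/fstab to survive reboots
    $ sudo ceph-fuse -m mon1:6789 /var/lib/one/datastores/0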
>>> 
>>> Regarding ssh for the transfer driver, I haven't really used it since I'm all on Ceph, both system and image datastore. I may be wrong, but that is how I understand it from the docs.
>>> 
>>> ---
>>> 
>>> Thanks,
>>> Kenneth
>>> Apollo Global Corp.
>>> 
>>> On 12/03/2013 06:11 PM, Mario Giammarco wrote:

>>> 
>>>> My problem was that, because Ceph is a distributed filesystem (and so it can be used as an alternative to NFS), I supposed I could use it as a shared system datastore. Reading your reply I can see that is not true. Probably the official documentation should clarify this.
>>>> 
>>>> In fact I hoped to use Ceph as the system datastore because Ceph is fault tolerant and NFS is not.
>>>> 
>>>> Thanks for the help, Mario
>>>> 
>>>> 2013/12/3 Kenneth <kenneth at apolloglobal.net>
>>>> 
>>>>> Ceph won't be the default image datastore, but you can always choose it whenever you create an image.
>>>>> 
>>>>> You said you don't have an NFS disk and you just use a plain disk for your system datastore, so you SHOULD use ssh in order to have live migrations.
>>>>> 
>>>>> Mine uses shared as the datastore since I mounted a shared folder on each nebula node.
>>>>> 
>>>>> ---
>>>>> 
>>>>> Thanks,
>>>>> Kenneth
>>>>> Apollo Global Corp.
>>>>>

>>>>> On 12/03/2013 03:01 PM, Mario Giammarco wrote: 
>>>>> 
>>>>>> First, thank you for your very detailed reply!
>>>>>> 
>>>>>> 2013/12/3 Kenneth <kenneth at apolloglobal.net>
>>>>>> 
>>>>>>> You don't need to replace the existing datastores; the important thing is that you set the system datastore to "ssh", because you still need to transfer files to each node when you deploy a VM.
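
Changing that is a single attribute in the system datastore; a quick sketch using the stock CLI:

    $ onedatastore update 0               # opens an editor; set TM_MAD = "ssh"
    $ onedatastore show 0 | grep TM_MAD   # confirm the change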
>>>>>> 
>>>>>> So I lose live migration, right?
>>>>>> 
>>>>>> If I understand correctly, Ceph cannot be the default datastore either.
>>>>>> 
>>>>>>> Next, you should make sure that all your nodes are able to communicate with the Ceph cluster. Issue the command "ceph -s" on all nodes, including the front-end, to be sure that they are connected to Ceph.
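
A quick way to run that check from the front-end (host names are only examples):

    $ for h in cloud-node1 cloud-node2 cloud-node3; do ssh $h ceph -s | grep health; done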
>>>>>> 
>>>>>> ... will check... 
>>>>>> 
>>>>>> oneadmin at cloud-node1:~$ onedatastore list
>> 
>> -- 
>> Jaime Melis
>> C12G Labs - Flexible Enterprise Cloud Made Simple
>> http://www.c12g.com | jmelis at c12g.com
>> 
