[one-users] How to use ceph filesystem
Kenneth
kenneth at apolloglobal.net
Tue Dec 3 02:28:20 PST 2013
Actually, I'm using Ceph as the system datastore. I used CephFS (via
ceph-fuse) and mounted it on all nodes at /var/lib/one/datastores/0/.
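For reference, the mount on each node looked roughly like this (a minimal
sketch; the monitor address mon1:6789 is a placeholder, and I assume
/etc/ceph/ceph.conf and the admin keyring are already in place):

    # mount CephFS at the system datastore path via FUSE
    sudo mkdir -p /var/lib/one/datastores/0
    sudo ceph-fuse -m mon1:6789 /var/lib/one/datastores/0

    # oneadmin must own the datastore directory
    sudo chown oneadmin:oneadmin /var/lib/one/datastores/0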
Regarding the ssh transfer driver, I haven't really used it since I'm all
on Ceph, for both the system and the image datastore. I may be wrong, but
that is how I understand it from the docs.
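If you want to double-check which transfer driver a datastore is actually
using, something like this works from the front end (0 is assumed to be
your system datastore ID):

    # print the TM_MAD (transfer manager driver) of datastore 0
    onedatastore show 0 | grep TM_MAD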
---
Thanks,
Kenneth
Apollo Global Corp.
On 12/03/2013 06:11 PM, Mario Giammarco wrote:
> My problem was that, because Ceph is a distributed filesystem (and so
> can be used as an alternative to NFS), I supposed I could use it as a
> shared system datastore. Reading your reply I can see that is not true.
> The official documentation should probably clarify this.
>
> In fact, I had hoped to use Ceph as the system datastore because Ceph
> is fault tolerant and NFS is not.
>
> Thanks for the help, Mario
>
> 2013/12/3 Kenneth <kenneth at apolloglobal.net>
>
>> Ceph won't be the default image datastore, but you can always choose
>> it whenever you create an image.
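>> For example, registering an image into a Ceph datastore could look
>> like this (a sketch; the image name, path, and the datastore name
>> "cephds" are placeholders):
>>
>>     oneimage create --name ubuntu-base \
>>         --path /tmp/ubuntu.qcow2 \
>>         --datastore cephds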
>>
>> You said you don't have an NFS disk and just use plain disks for your
>> system datastore, so you SHOULD use ssh as the transfer driver in
>> order to deploy and migrate VMs.
>>
>> Mine uses the shared driver, since I mounted a shared folder on each
>> nebula node.
>>
>> ---
>> Thanks,
>> Kenneth
>> Apollo Global Corp.
>>
>> On 12/03/2013 03:01 PM, Mario Giammarco wrote:
>>
>>> First, thank you for your very detailed reply!
>>>
>>> 2013/12/3 Kenneth <kenneth at apolloglobal.net>
>>>
>>>> You don't need to replace the existing datastores; the important
>>>> thing is to set the system datastore's transfer driver to "ssh",
>>>> because you still need to transfer files to each node when you
>>>> deploy a VM.
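>>>>
>>>> Changing that looks roughly like this (a sketch; run as oneadmin on
>>>> the front end, assuming 0 is your system datastore ID):
>>>>
>>>>     # open the system datastore template in $EDITOR
>>>>     onedatastore update 0
>>>>     # ...and set, among the existing attributes:
>>>>     #   TM_MAD = "ssh"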
>>>
>>> So I lose live migration, right?
>>> If I understand correctly, Ceph cannot be the default datastore
>>> either.
>>>
>>>> Next, you should make sure that all your nodes are able to
>>>> communicate with the Ceph cluster. Issue the command "ceph -s" on
>>>> all nodes, including the front end, to be sure that they are
>>>> connected to Ceph.
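>>>>
>>>> From the front end you can do this in one shot (a sketch; the node
>>>> hostnames are placeholders):
>>>>
>>>>     # check Ceph cluster health from every node
>>>>     for h in cloud-node1 cloud-node2 cloud-node3; do
>>>>         echo "== $h =="; ssh "$h" ceph -s
>>>>     done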
>>>
>>> ... will check...
>>>
>>> oneadmin at cloud-node1:~$ onedatastore list