[one-users] Do we need to mount /var/lib/one with ceph
Jaime Melis
jmelis at opennebula.org
Wed Nov 19 08:05:36 PST 2014
Hi,
Are you planning on using CephFS? If so, you might find it easier to
set it up like this:
- /var/lib/one -> not shared, only for the front-end
- /var/lib/one/datastores -> lives in a CephFS share available to the
front-end and all the nodes
Would this suit you?
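For illustration, the datastores share could be mounted the same way on the
front-end and every node with an /etc/fstab entry along these lines (a sketch
only; the monitor addresses, the cephx user and the secret file path are
hypothetical values you would adapt):

```
# Kernel-client CephFS mount for the OpenNebula datastores directory.
# mon1/mon2, the "oneadmin" cephx user and the secretfile path are placeholders.
mon1:6789,mon2:6789:/ /var/lib/one/datastores ceph name=oneadmin,secretfile=/etc/ceph/oneadmin.secret,noatime,_netdev 0 0
```

Using the same entry on every host keeps the paths identical, which is what
the OpenNebula transfer drivers expect.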
Cheers,
Jaime
On Wed, Nov 12, 2014 at 6:09 AM, Huynh Dac Nguyen <ndhuynh at spsvietnam.vn>
wrote:
> Dear Ruben,
>
> You mean:
>
> - /var/lib/one is mounted from the NAS server just for the front-end
>
> - The datastores are mounted from Ceph for the front-end and nodes
>
> So
>
> Front-end servers must mount both the NAS (/var/lib/one) and Ceph
> (/var/lib/one/datastores/[number]): 2 mount points.
>
> Node servers must mount only Ceph (/var/lib/one/datastores/[number]):
> 1 mount point.
>
> and the passwordless SSH configuration must be updated manually
>
> Is it correct?
>
> So why don't we use Ceph for both shared locations, or just
> /var/lib/one? I really don't want to manage more devices.
>
> Could you show me the best solution? I'm not sure how to continue because
> I'm stuck integrating Ceph and OpenNebula.
>
>
> Regards,
>
> Ndhuynh
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 11 Nov 2014 14:30:01 +0000
> From: "Ruben S. Montero" <rsmontero at opennebula.org>
> To: Huynh Dac Nguyen <ndhuynh at spsvietnam.vn>, users at lists.opennebula.org
> Subject: Re: [one-users] Do we need to mount /var/lib/one with ceph
> Message-ID: <CAGi56tetpMozcEk+dDzLULj7y_YC-PY40av0rbGoPUDB7mcUCg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> The system datastore is accessed from the front-end to generate the
> context ISO. Note that it doesn't need to be exported from the front-end;
> the nodes and the front-end itself can mount it from a different NAS
> server.
>
> The /var/lib/one contents are needed by the front-end but not by the
> nodes, just the system datastore directory.
>
> If the VM tries to access a disk from a device that is not mounted, or
> the NAS server is down, you'll be in trouble. However, note that the
> context ISO is only accessed during the boot state and the main disks of
> the VM are in the Ceph pool, so you'll probably be fine (but I have not
> tested it).
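To make that concrete, the pieces under discussion live in a layout roughly
like this (illustrative; the datastore ID, VM ID and disk numbering will vary
per deployment):

```
/var/lib/one/                      # front-end only: oned state, config, etc.
/var/lib/one/datastores/0/         # system datastore: shared by front-end and nodes
/var/lib/one/datastores/0/<vm_id>/ # per-VM directory
    disk.1                         # context ISO, generated by the front-end
    checkpoint                     # written on suspend/migration
```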
>
> Cheers
>
>
> On Mon Nov 10 2014 at 3:38:14 AM Huynh Dac Nguyen <ndhuynh at spsvietnam.vn>
>
> wrote:
>
> > Dear Ruben,
>
> >
>
> > Thank you for replying
>
> >
>
> > So what happens if the OpenNebula front-end is down?
>
> > /var/lib/one isn't mounted; only the System Datastore is mounted,
>
> > so the VM can't work, right? (The VM requires the image and additional
>
> > files.)
>
> >
>
> > Can you explain why we don't need to export the whole /var/lib/one?
>
> >
>
> > Regards,
>
> > Ndhuynh
>
> >
>
> > >>> "Ruben S. Montero" <rsmontero at opennebula.org> 11/7/2014 4:52 PM
>
> > >>>
>
> > Hi Ndhuynh
>
> >
>
> > Ceph storage in OpenNebula is handled as follows:
>
> >
>
> >
>
> > 1.- Image Datastores hold the disk image repository, as well as the
>
> > images for running VMs, in a Ceph volume
>
> > 2.- The System Datastore holds additional VM files: checkpoints, context
>
> > disks and the like.
>
> >
>
> >
>
> > If you need live migration, the easiest way is to have a shared
>
> > filesystem for the System Datastore. You don't need to export the whole
>
> > /var/lib/one, though, just the datastore directory.
>
> >
>
> >
>
> > If you do not need to live-migrate VMs, you should be OK with an
>
> > ssh-based system datastore.
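As a reference, the choice shows up as the TM_MAD of the system datastore
definition. Two alternative template sketches, which you would pass to
`onedatastore create` (the names are placeholders):

```
# Shared-filesystem system datastore (needed for live migration)
NAME   = system_shared
TYPE   = SYSTEM_DS
TM_MAD = shared
```

```
# ssh-based system datastore (no shared FS, no live migration)
NAME   = system_ssh
TYPE   = SYSTEM_DS
TM_MAD = ssh
```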
>
> >
>
> >
>
> > Cheers
>
> > On Wed Nov 05 2014 at 12:09:47 PM Huynh Dac Nguyen
>
> > <ndhuynh at spsvietnam.vn> wrote:
>
> >
>
> > Hi All,
>
> >
>
> >
>
> > I'm researching OpenNebula with Ceph. I saw that most of the guides
>
> > focus on using Ceph as a datastore (block device), right?
>
> >
>
> >
>
> > Do we need to mount Ceph on /var/lib/one as a file system to guard
>
> > against the OpenNebula front-end going down unexpectedly?
>
> >
>
> >
>
> > My plan is:
>
> >
>
> >
>
> > 1) Make a CephFS file system named "one" and mount it at /var/lib/one
>
> > on all OpenNebula hosts (front-end and nodes)
>
> > 2) Create a Ceph block device and add it to OpenNebula as a datastore
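For step 2, registering a Ceph pool as an image datastore is done with a
template along these lines (a sketch; the pool name, monitor hosts, cephx
user and bridge host are placeholder values):

```
NAME        = ceph_images
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one
CEPH_HOST   = "mon1:6789 mon2:6789"
CEPH_USER   = libvirt
BRIDGE_LIST = "ceph-frontend.example.com"
```

Save it to a file and register it with `onedatastore create <file>`.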
>
> >
>
> >
>
> > Is this the right way?
>
> >
>
> >
>
> > Regards,
>
> > Ndhuynh
>
> > ndhuynh at spsvietnam.vn
>
> >
>
> > This e-mail message including any attachments is for the sole use of
>
> > the intended(s) and may contain privileged or confidential information.
>
> > Any unauthorized review, use, disclosure or distribution is prohibited.
>
> > If you are not intended recipient, please immediately contact the sender
>
> > by reply e-mail and delete the original message and destroy all copies
>
> > thereof.
>
> >
>
> > _______________________________________________
>
> > Users mailing list
>
> > Users at lists.opennebula.org
>
> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jmelis at opennebula.org