[one-users] OpenNebula head node as a virtual machine?

Shankhadeep Shome shank15217 at gmail.com
Mon Apr 16 11:40:43 PDT 2012


GlusterFS clients work just fine inside virtual machines. Make sure you use
vhost_net to get maximum network performance in the VM. I was able to push
nearly 8 Gbps to the VMs using virtio-net with the in-kernel vhost backend.

http://lwn.net/Articles/346267/
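
For reference, here is a minimal sketch (mine, not from the article) that
checks whether a running domain's virtio NICs actually use the in-kernel
vhost backend, via the libvirt Python bindings. The domain name "head-node"
is just a placeholder:

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("head-node")  # placeholder domain name

root = ET.fromstring(dom.XMLDesc(0))
for iface in root.findall("./devices/interface"):
    model = iface.find("model")
    driver = iface.find("driver")
    if model is not None and model.get("type") == "virtio":
        backend = driver.get("name") if driver is not None else "default"
        print("virtio NIC backend:", backend)  # 'vhost' means vhost_net

conn.close()

If no <driver> element is present, QEMU picks its default backend; to force
vhost_net, add <driver name='vhost'/> to the interface definition (and make
sure the vhost_net kernel module is loaded on the hypervisor).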

Shank

On Tue, Apr 10, 2012 at 10:43 PM, Steven C Timm <timm at fnal.gov> wrote:

>  I am also thinking about running the head node as a pure KVM VM through
> virsh, i.e. outside of OpenNebula.
>
> Do GlusterFS clients work OK inside of virtual machines?  How is the
> performance?  I have heard bits and pieces about GlusterFS but have not
> seen the full package in operation.
>
> Steve Timm
>
> From: Shankhadeep Shome [mailto:shank15217 at gmail.com]
> Sent: Tuesday, April 10, 2012 9:35 PM
> To: Steven C Timm
> Cc: users at lists.opennebula.org
> Subject: Re: [one-users] OpenNebula head node as a virtual machine?
>
> I run the head node as a VM, but purely as a KVM VM controlled through
> virsh. The back-end storage is GlusterFS, presented from the hypervisor
> nodes themselves.
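>
> Concretely, something like this minimal libvirt sketch (my paths and names
> are made up; it assumes the GlusterFS volume is mounted at /gluster/one on
> every hypervisor, so the same image path resolves wherever the VM runs):
>
> import libvirt
>
> DOMAIN_XML = """
> <domain type='kvm'>
>   <name>one-head</name>
>   <memory unit='GiB'>4</memory>
>   <vcpu>2</vcpu>
>   <os><type arch='x86_64'>hvm</type></os>
>   <devices>
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='qcow2' cache='none'/>
>       <source file='/gluster/one/one-head.qcow2'/>
>       <target dev='vda' bus='virtio'/>
>     </disk>
>     <interface type='bridge'>
>       <source bridge='br0'/>
>       <model type='virtio'/>
>     </interface>
>   </devices>
> </domain>
> """
>
> conn = libvirt.open("qemu:///system")
> dom = conn.defineXML(DOMAIN_XML)  # same effect as: virsh define
> dom.create()                      # same effect as: virsh start
> conn.close()
>
> Because the disk image lives on the shared mount, the same definition can
> be brought up on another hypervisor if the first one fails.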
>
> On Tue, Apr 10, 2012 at 9:26 PM, Steven C Timm <timm at fnal.gov> wrote:
>
> Has anyone managed to successfully run the OpenNebula head node in
> OpenNebula 3.x as a virtual machine in production?
>
> I am interested in doing this for ease of migration and/or failover with
> heartbeat/DRBD.
>
> If so, have you done it with a shared file system such as GFS, and let GFS
> be seen by the head node VM?
>
> Steve Timm
>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>