[one-users] Shared File System HA

Ranga Chakravarthula rbabu at hexagrid.com
Wed Mar 14 08:56:40 PDT 2012


If you are looking at HA at the storage level, it would be better to run
Heartbeat/failover on the NFS resource itself than to fail over to the
secondary front-end server. Your NFS export is mounted on the compute nodes
anyway, so if one storage node goes down, Heartbeat fails the NFS service
over to the other storage node. Your front-end doesn't have to be part of
this.
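
For reference, an NFS failover pair of that kind is usually expressed in
Pacemaker roughly as below (crm shell syntax; the DRBD resource name "nfs",
the device, mount point and export paths, and the 192.168.1.100 floating IP
are placeholders for illustration, not a tested configuration):

  # DRBD resource replicating the NFS export between the two storage nodes
  primitive p_drbd_nfs ocf:linbit:drbd \
      params drbd_resource="nfs" \
      op monitor interval="15s"
  ms ms_drbd_nfs p_drbd_nfs \
      meta master-max="1" clone-max="2" notify="true"
  # Filesystem, NFS server and floating IP, started together on the DRBD master
  primitive p_fs_nfs ocf:heartbeat:Filesystem \
      params device="/dev/drbd0" directory="/srv/nfs" fstype="ext4"
  primitive p_nfsserver ocf:heartbeat:nfsserver \
      params nfs_shared_infodir="/srv/nfs/nfsinfo"
  primitive p_ip_nfs ocf:heartbeat:IPaddr2 \
      params ip="192.168.1.100" cidr_netmask="24"
  group g_nfs p_fs_nfs p_nfsserver p_ip_nfs
  colocation c_nfs_on_drbd inf: g_nfs ms_drbd_nfs:Master
  order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start

The compute nodes (and the front-end) then mount the export from the floating
IP, so they never need to know which storage node is currently active.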

On Wed, Mar 14, 2012 at 10:26 AM, Marshall Grillos <mgrillos at optimalpath.com> wrote:

>  I am debating the differences between shared and non-shared file systems
> for an OpenNebula deployment.
>
> One concern with the shared file system is high availability.  I am
> setting up the OpenNebula front-end with connectivity to a storage device.
> To guard against a storage device failure (RAID controller, power, etc.), I
> am looking into setting up a secondary front-end server with attached
> storage.  I would use NFS to share the storage to each VM host and set up
> DRBD for block-level replication between the cluster nodes.  In the event
> of a storage failure, heartbeat/pacemaker would fail over to the secondary
> front-end server.
>
> If anyone has tested a similar setup, how do the VMs handle the brief
> outage while the failover occurs (the several seconds needed to fail over
> to the secondary front-end)?  Wouldn't the NFS mount be unavailable for
> some time while the failover mechanism runs?
>
> Thanks,
>
> Marshall
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
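
On the question above about how the VMs ride out the failover window: as long
as the clients use "hard" NFS mounts and the floating IP and NFS server state
move together, a failover shows up on the compute nodes as a pause in I/O
rather than as errors, because hard-mounted NFS clients keep retrying until
the server answers again (a "soft" mount would instead start returning I/O
errors to the guests once its retries expire). A sketch of such a mount, with
placeholder IP, export path and datastore path:

  # "hard" makes the clients block and retry during the failover window
  # instead of surfacing I/O errors to the guests
  mount -t nfs -o hard,intr,vers=3 192.168.1.100:/srv/nfs/datastores /var/lib/one/datastores

Whether the guests tolerate the pause then depends mainly on how long the
failover takes and on the disk timeouts inside the VMs, which is worth testing
against your actual workload.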