[one-users] Shared File System HA

Hans-Joachim Ehlers HansJoachim.Ehlers at eumetsat.int
Wed Mar 14 08:45:08 PDT 2012


If we deploy OpenNebula, we will use GPFS as our clustered FS ... 

This does not directly answer your question, but if NFS is used you must make sure that:

• The NFS server exports the FS with the “sync” option. Otherwise data corruption can (and eventually will) occur if the server crashes.
• The NFS client uses at least the “hard” mount option (example entries for both are sketched below).
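
For example, the export and the mount might look like this (paths, network range, and hostname are placeholders; adjust to your environment):

/etc/exports on the NFS server:

    # "sync" forces the server to commit writes to stable storage before
    # replying, so a crash cannot acknowledge and then lose data
    /srv/one_datastores  192.168.0.0/24(rw,sync,no_subtree_check)

Mount on each VM host (e.g. in /etc/fstab):

    # "hard" makes the client block and retry on a server outage
    # instead of returning I/O errors to the VMs
    nfsserver:/srv/one_datastores  /var/lib/one/datastores  nfs  hard,proto=tcp  0 0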

Hth
Hajo




From: users-bounces at lists.opennebula.org On Behalf Of Marshall Grillos
Sent: Wednesday, March 14, 2012 4:27 PM
To: users at lists.opennebula.org
Subject: [one-users] Shared File System HA

I am weighing the tradeoffs between shared and non-shared file systems for an OpenNebula deployment.

One concern with a shared file system is high availability.  I am setting up the OpenNebula front-end with connectivity to a storage device.  To guard against a storage device failure (RAID controller, power supply, etc.), I am looking into setting up a secondary front-end server with attached storage.  I would use NFS to share the storage to each VM host and set up DRBD for block-level replication between the cluster nodes.  In the event of a storage failure, heartbeat/pacemaker would fail the services over to the secondary front-end server.
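
A minimal sketch of the DRBD resource for such a setup (the resource name, hostnames, backing devices, and addresses are placeholders, not tested values) might look like:

    resource r_one_datastores {
        protocol C;                 # synchronous replication: writes complete
                                    # only once both nodes have them on disk
        device    /dev/drbd0;
        disk      /dev/sdb1;        # placeholder backing partition
        meta-disk internal;

        on frontend1 {
            address 10.0.0.1:7789;
        }
        on frontend2 {
            address 10.0.0.2:7789;
        }
    }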

If anyone has tested a similar setup, how do the VMs handle the brief outage during failover (the several seconds needed to fail over to the secondary front-end)?  Wouldn’t the NFS mount be unavailable for that period because of the failover mechanism?
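
For illustration, assuming a Pacemaker cluster with a floating service IP (all names, devices, and addresses below are placeholders), the resource layout might be sketched in crm shell syntax as:

    primitive p_drbd ocf:linbit:drbd \
        params drbd_resource=r_one_datastores op monitor interval=15s
    ms ms_drbd p_drbd meta master-max=1 clone-max=2 notify=true
    primitive p_fs ocf:heartbeat:Filesystem \
        params device=/dev/drbd0 directory=/srv/one_datastores fstype=ext4
    primitive p_nfs lsb:nfs-kernel-server
    primitive p_ip ocf:heartbeat:IPaddr2 params ip=10.0.0.100 cidr_netmask=24
    group g_nfs p_fs p_nfs p_ip
    colocation c_nfs_on_master inf: g_nfs ms_drbd:Master
    order o_promote_first inf: ms_drbd:promote g_nfs:start

With the clients mounting through the floating IP and the “hard” option, they would typically block and retry during the failover window rather than see I/O errors, so the VMs should stall briefly and then resume.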

Thanks,
Marshall


