[one-users] ceph+flashcache datastore driver

Stuart Longland stuartl at vrt.com.au
Wed Mar 26 05:39:47 PDT 2014


Hi Shankhadeep,
On 26/03/14 12:35, Shankhadeep Shome wrote:
> Try bcache as a flash backend, I feel its more flexible as a caching
> tier and its well integrated into the kernel. The kernel 3.10.X version
> is now quite mature so an epel6 long term kernel would work great. We
> are using it in a linux based production SAN as a cache tier with pci-e
> SSDs, a very flexible subsystem and rock solid. 

Cheers for the heads-up; I will have a look.  What are you using to
implement the SAN, and what sort of VMs are you using with it?

One thing I ran into when trying this: I had a stack of RBD images
created by OpenNebula that were in RBD v1 format.  I converted them to
v2 format with a simple script: basically renaming the old images, then
piping 'rbd export' into 'rbd import'.
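
For what it's worth, the conversion boiled down to something like this
(a rough sketch from memory; the pool and image names are placeholders,
and the real script looped over 'rbd ls'):

  #!/bin/sh
  POOL=one        # placeholder: the Ceph pool OpenNebula uses here
  IMG=one-42      # placeholder: one of the OpenNebula image RBDs

  # Move the old format-1 image out of the way...
  rbd rename ${POOL}/${IMG} ${POOL}/${IMG}.v1

  # ...then stream it into a fresh format-2 image under the old name.
  rbd export ${POOL}/${IMG}.v1 - | \
      rbd import --image-format 2 - ${POOL}/${IMG}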

I had a few images in there, most originally for other hypervisors:
- Windows 2000 Pro image
- Windows XP Pro image (VMware ESXi image)
- Windows 2012 Standard Evaluation image (CloudBase OpenStack image)
- Windows 2008 R2 Enterprise Evaluation (Hyper-V image)
- Windows 2012 R2 Data Centre Evaluation (Hyper-V image)

The latter two were downloaded from Microsoft's site and are actually
supposed to run on Hyper-V; however, they ran fine with IDE storage
under KVM using the out-of-the-box Ceph support in OpenNebula 4.4.

I'm finding that after converting the RBDs to the v2 format and
re-creating the images in OpenNebula to clear out the DISK_TYPE
attribute (DISK_TYPE=RBD kept creeping back in), the images would
deploy, but the guest OS would then crash.
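
For reference, re-creating an image was just a matter of registering a
new image over the already-converted RBD, roughly along these lines
(the name, SOURCE, SIZE and datastore ID below are placeholders rather
than the real values):

  # Minimal image template pointing at the freshly imported format-2
  # RBD; SOURCE is the RBD name within the Ceph pool.
  cat > win2012r2.tmpl <<'EOF'
  NAME       = "win2012r2-eval"
  TYPE       = OS
  PERSISTENT = YES
  DRIVER     = raw
  SOURCE     = "one/one-42"
  SIZE       = 20480
  EOF

  # Register it against the Ceph image datastore (ID 101 here is a
  # placeholder).
  oneimage create win2012r2.tmpl -d 101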

Win2008r2 would hang with a black screen after changing the
Administrator password, and Win2012r2 would crash with a
CRITICAL_PROCESS_DIED blue screen of death when attempting to set the
Administrator password.

The other images run fine.  The only two that were actually intended for
KVM are the Windows 2012 evaluation image produced by CloudBase (for
OpenStack), and the Windows 2000 image that I personally created.  The
others were all built on other hypervisors, then converted.

I'm not sure whether it's something funny with the conversion of the
RBDs or an oddity with FlashCache+RBD that's causing this.  These
images were fine before I got FlashCache involved (if a little slow).
Either there's a bug in my script, a bug in FlashCache, or I buggered
up the RBD conversion.
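
One thing I still mean to do is rule the conversion in or out by
comparing checksums of the old and new images; something like this
(untested, and the names are placeholders again):

  # Hash the exported contents of the original format-1 image and the
  # converted format-2 copy; differing sums would point at the
  # conversion script rather than FlashCache.
  rbd export one/one-42.v1 - | md5sum
  rbd export one/one-42    - | md5sum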

But I will have a look at bcache and see how it performs in comparison.
One thing we are looking for is the ability to throttle or control
cache write-backs for non-production workloads; that is, we wish to
prioritise Ceph traffic for production VMs during work hours.
FlashCache doesn't offer this feature at this time.
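
To illustrate the sort of control we're after, something like this is
what we'd like to be able to cron up (the sysfs knob below is a guess
at bcache's interface and completely unverified on my part):

  # During work hours: raise the dirty-data target so background
  # write-back to the Ceph RBDs backs off and production traffic gets
  # priority.
  echo 40 > /sys/block/bcache0/bcache/writeback_percent

  # After hours: drop the target so the accumulated dirty data gets
  # flushed back out to Ceph.
  echo 5 > /sys/block/bcache0/bcache/writeback_percent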

Do you know if bcache offers any such controls?
-- 
Stuart Longland
Contractor
     _ ___
\  /|_) |                           T: +61 7 3535 9619
 \/ | \ |     38b Douglas Street    F: +61 7 3535 9699
   SYSTEMS    Milton QLD 4064       http://www.vrt.com.au




