[one-users] ceph+flashcache datastore driver

Shankhadeep Shome shank15217 at gmail.com
Thu Mar 27 20:03:19 PDT 2014


We have been running KVM very successfully, and I find Ceph very
interesting. I think a combination of Ceph and the Linux SCSI target with
a scale-out architecture is the future of enterprise storage.


On Thu, Mar 27, 2014 at 10:57 PM, Shankhadeep Shome <shank15217 at gmail.com> wrote:

> Yes, bcache allows real-time configuration of cache policies; there are
> a lot of tunables. It also allows a single cache device to map to
> multiple backing devices. We are using bcache with LVM and the Linux
> SCSI target to implement the storage target devices. We use several of
> these devices to export block devices to servers over Fibre Channel,
> then use those block devices as building blocks for local or
> distributed file systems. The OpenNebula implementation was using
> GlusterFS over FUSE, but we hope to transition to native GlusterFS
> support in QEMU.
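>
> As a rough sketch of what that tuning looks like (device names here are
> hypothetical, and the exact sysfs paths can vary slightly between
> kernel versions), the knobs live in sysfs once a backing device is
> attached to a cache set:
>
>     # label a backing device and a cache (flash) device with bcache-tools
>     make-bcache -B /dev/sdb
>     make-bcache -C /dev/nvme0n1
>     echo /dev/sdb     > /sys/fs/bcache/register
>     echo /dev/nvme0n1 > /sys/fs/bcache/register
>
>     # attach the backing device to the cache set (UUID from /sys/fs/bcache/)
>     echo <cset-uuid> > /sys/block/bcache0/bcache/attach
>
>     # change the cache policy and write-back behaviour at runtime
>     echo writeback > /sys/block/bcache0/bcache/cache_mode
>     echo 10        > /sys/block/bcache0/bcache/writeback_percent
>     echo 0         > /sys/block/bcache0/bcache/sequential_cutoff
>
>     # the resulting /dev/bcache0 device can then be used as an LVM PV
>     pvcreate /dev/bcache0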
>
>
> On Wed, Mar 26, 2014 at 8:39 AM, Stuart Longland <stuartl at vrt.com.au> wrote:
>
>> Hi Shankhadeep,
>> On 26/03/14 12:35, Shankhadeep Shome wrote:
>> > Try bcache as a flash backend; I feel it's more flexible as a caching
>> > tier and it's well integrated into the kernel. The 3.10.x kernel
>> > series is now quite mature, so an EPEL 6 long-term kernel would work
>> > great. We are using it in a Linux-based production SAN as a cache
>> > tier with PCIe SSDs; it's a very flexible subsystem and rock solid.
>>
>> Cheers for the heads-up; I will have a look.  What are you using to
>> implement the SAN, and what sort of VMs are you using with it?
>>
>> One thing I'm finding: when I tried using this, I had a stack of RBD
>> images created by OpenNebula that were in RBD v1 format.  I converted
>> them to v2 format by means of a simple script: basically renaming the
>> old images then doing a pipe from 'rbd export' to 'rbd import'.
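>>
>> For reference, the script amounted to something along these lines (the
>> pool and image names are placeholders, and this is a sketch of the
>> approach rather than the exact script):
>>
>>     #!/bin/bash
>>     # Move the v1 image aside, then stream it into a freshly created
>>     # format-2 image under the original name.
>>     POOL=one
>>     IMAGE="$1"
>>     rbd -p "$POOL" rename "$IMAGE" "${IMAGE}.v1"
>>     rbd -p "$POOL" export "${IMAGE}.v1" - \
>>         | rbd -p "$POOL" import --image-format 2 - "$IMAGE"
>>     # Once the VM boots happily, the old copy can be removed:
>>     # rbd -p "$POOL" rm "${IMAGE}.v1"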
>>
>> I had a few images in there, most of them originally built for other
>> hypervisors:
>> - Windows 2000 Pro image
>> - Windows XP Pro image (VMware ESXi image)
>> - Windows 2012 Standard Evaluation image (CloudBase OpenStack image)
>> - Windows 2008 R2 Enterprise Evaluation (Hyper-V image)
>> - Windows 2012 R2 Data Centre Evaluation (Hyper-V image)
>>
>> The latter two were downloaded from Microsoft's site and are actually
>> intended to run on Hyper-V; however, they ran fine with IDE storage
>> under KVM using the out-of-the-box Ceph support in OpenNebula 4.4.
>>
>> I'm finding that after converting the RBDs to the v2 format and
>> re-creating the images in OpenNebula to clear out the DISK_TYPE
>> attribute (DISK_TYPE=RBD kept creeping in), the image would deploy but
>> the OS would then crash.
>>
>> Win2008r2 would crash after changing the Administrator password (a hang
>> with a black screen), and Win2012r2 would crash with a
>> CRITICAL_PROCESS_DIED blue-screen-of-death when attempting to set the
>> Administrator password.
>>
>> The other images run fine.  The only two that were actually intended for
>> KVM are the Windows 2012 evaluation image produced by CloudBase (for
>> OpenStack), and the Windows 2000 image that I personally created.  The
>> others were all built on other hypervisors, then converted.
>>
>> I'm not sure if it's something funny with the conversion of the RBDs or
>> whether it's an oddity with FlashCache+RBD that's causing this.  These
>> images were fine before I got FlashCache involved (if a little slow).
>> Either there's a bug in my script or in FlashCache, or I buggered up
>> the RBD conversion.
>>
>> But I will have a look at bcache and see how it performs in comparison.
>> One thing we are looking for is the ability to throttle or control
>> cache write-backs for non-production workloads; that is, we wish to
>> prioritise Ceph traffic for production VMs during work hours.
>> FlashCache doesn't offer this feature at this time.
>>
>> Do you know if bcache offers any such controls?
>> --
>> Stuart Longland
>> Contractor
>>      _ ___
>> \  /|_) |                           T: +61 7 3535 9619
>>  \/ | \ |     38b Douglas Street    F: +61 7 3535 9699
>>    SYSTEMS    Milton QLD 4064       http://www.vrt.com.au
>>
>>
>>
>

