<div dir="ltr"><br><div>We have been running KVM very successfully and I find ceph very interesting, I think a combination of ceph and linux scsi target with a scale out architecture is the future of storage in enterprise.</div>
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Mar 27, 2014 at 10:57 PM, Shankhadeep Shome <span dir="ltr"><<a href="mailto:shank15217@gmail.com" target="_blank">shank15217@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Yes bcache will allow real time configuration of cache policies, there are a lot of tunables. It allows a single cache device to map to multiple backing devices. We are using bcache with LVM and linux scsi target to implement the storage target devices. We use several of these devices to export block devices to servers over fiber channel then use those block devices as building blocks for local or distributed file systems.The opennebula implementation was using glusterfs-fuse but we hope to transition to native glusterfs with qemu. </div>
<div class="HOEnZb"><div class="h5">
<div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Mar 26, 2014 at 8:39 AM, Stuart Longland <span dir="ltr"><<a href="mailto:stuartl@vrt.com.au" target="_blank">stuartl@vrt.com.au</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Shankhadeep,<br>
<div>On 26/03/14 12:35, Shankhadeep Shome wrote:<br>
> Try bcache as a flash backend; I feel it's more flexible as a caching
> tier and it's well integrated into the kernel. The 3.10.x kernel series
> is now quite mature, so an epel6 long-term kernel would work great. We
> are using it in a Linux-based production SAN as a cache tier with PCIe
> SSDs; a very flexible subsystem and rock solid.

Cheers for the heads-up; I will have a look. What are you using to
implement the SAN, and what sort of VMs are you using with it?

One thing I'm finding: when I tried using this, I had a stack of RBD
images created by OpenNebula that were in RBD v1 format. I converted
them to v2 format by means of a simple script: basically renaming the
old images, then piping 'rbd export' into 'rbd import'.
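
The script was essentially along these lines (the pool name and the exact
flags here are illustrative, not the verbatim script):

    POOL=one                    # OpenNebula's Ceph pool; adjust to suit
    rbd -p "$POOL" ls | while read IMG; do
        # move the original v1 image aside so it's kept as a backup
        rbd -p "$POOL" mv "$IMG" "$IMG.old"
        # stream it back in as a format-2 image under the original name
        rbd -p "$POOL" export "$IMG.old" - \
            | rbd -p "$POOL" import --image-format 2 - "$IMG"
    done
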
I had a few images in there, most originally for other hypervisors:
- Windows 2000 Pro image
- Windows XP Pro image (VMware ESXi image)
- Windows 2012 Standard Evaluation image (CloudBase OpenStack image)
- Windows 2008 R2 Enterprise Evaluation (Hyper-V image)
- Windows 2012 R2 Data Centre Evaluation (Hyper-V image)

The latter two were downloaded from Microsoft's site and are actually
supposed to run on Hyper-V; however, they ran fine with IDE storage under
KVM with the out-of-the-box Ceph support in OpenNebula 4.4.

I'm finding that after conversion of the RBDs to RBD v2 format, and
re-creating the image in OpenNebula to clear out the DISK_TYPE attribute
(DISK_TYPE=RBD kept creeping in), the image would deploy but then the OS
would crash.
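
By "re-creating" I mean registering a fresh image from a template that
carries no DISK_TYPE attribute, something like the following (the names,
path and datastore are placeholders rather than my actual setup):

    # write an image template with no DISK_TYPE attribute (placeholder values)
    printf '%s\n' \
        'NAME   = "win2012r2-eval"' \
        'TYPE   = OS' \
        'DRIVER = raw' \
        'PATH   = /var/tmp/win2012r2.raw' > win2012r2.tmpl
    # register it in the target datastore (name is a placeholder)
    oneimage create win2012r2.tmpl --datastore default
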
Win2008R2 would crash after changing the Administrator password (a hang
with a black screen); Win2012R2 would crash with a CRITICAL_PROCESS_DIED
blue screen of death when attempting to set the Administrator password.

The other images run fine. The only two that were actually intended for
KVM are the Windows 2012 evaluation image produced by CloudBase (for
OpenStack) and the Windows 2000 image that I personally created. The
others were all built on other hypervisors, then converted.

I'm not sure whether it's something funny with the conversion of the RBDs
or an oddity with FlashCache+RBD that's causing this. These images were
fine before I got FlashCache involved (if a little slow). Either there's
a bug in my script, there's a bug in FlashCache, or I buggered up the RBD
conversion.

But I will have a look at bcache and see how it performs in comparison.
One thing we are looking for is the ability to throttle or control cache
write-backs for non-production workloads; that is, we wish to prioritise
Ceph traffic for production VMs during work hours. FlashCache doesn't
offer this feature at this time.

Do you know if bcache offers any such controls?
--
Stuart Longland
Contractor
VRT Systems           38b Douglas Street, Milton QLD 4064
T: +61 7 3535 9619    F: +61 7 3535 9699    http://www.vrt.com.au