<div dir="ltr">Also there has been some work on vhost-blk as well as vhost-scsi integrated with LIO so this work seems like more of an offshoot that the general direction of KVM performance. </div><div class="gmail_extra">
<br><br><div class="gmail_quote">On Sat, Sep 7, 2013 at 1:41 AM, Erico Augusto Cavalcanti Guedes <span dir="ltr"><<a href="mailto:eacg@cin.ufpe.br" target="_blank">eacg@cin.ufpe.br</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div>Hi,<br><br></div><br><div class="gmail_extra"><div class="gmail_quote">2013/9/5 Ruben S. Montero <span dir="ltr"><<a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>></span><div class="im">
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hi<div><br></div><div>I haven't tried this myself either, but if there is such an improvement we could add a hint at the KVM driver documentation.</div>
<div><br></div><div>I've a concern about the first sentence of the post:</div>
<div><br></div><div>"Data plane is suitable for LVM or raw image file configurations where live migration and advanced block features are not needed."</div><div><br></div><div>So, are we missing then live-migration? have you tried it?</div>
</div></blockquote><div><br></div></div><div>No, I haven't tried it. Nevertheless, this information refers to qemu-1.4. I'm using qemu-1.6, compiled by myself and linked to the kvm command (I don't install the qemu-kvm package on my Debian 7.1 node). Maybe looking through the QEMU version history for this information can give us some news.<br>
</div><div class="im"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">
</div><div class="gmail_extra"><div><div><br><br><div class="gmail_quote">On Thu, Sep 5, 2013 at 12:02 PM, Vladislav Gorbunov <span dir="ltr"><<a href="mailto:vadikgo@gmail.com" target="_blank">vadikgo@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">It works with:<br>
DISK = [ driver = "raw" , cache = "none", io = "native"]<br>
and on RedHat/CentOS kvm only.<br></blockquote></div></div></div></div></blockquote></div><div><br>Vladislav, can you please send the kvm process line from your node?<br> </div><div class="im"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
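<div><br>For reference, with driver = "raw", cache = "none" and io = "native" those settings should show up on the generated -drive option, roughly as in the fragment below (a sketch only; the file path and device ids are placeholders filled in from the datastore):<br><br>-drive file=<path to disk.0>,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native <br>-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0<br><br>That matches the process line quoted further down in this thread.<br></div>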
<div class="gmail_extra"><div><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
2013/9/1 Valentin Bud <<a href="mailto:valentin.bud@gmail.com" target="_blank">valentin.bud@gmail.com</a>>:<br>
<div><div>> Hi Erico,<br>
><br>
> This is the first time I hear about virtio-blk-data-plane. Thank you for the<br>
> info, looks like this feature brings notable IO improvements.<br>
><br>
> You can try to use the RAW Section [1] to pass special attributes to the<br>
> underlying hypervisor.<br>
> I have found a blog post [2] in which there is a method to enable<br>
> virtio-blk-data-plane using the libvirt XML. The RAW section DATA gets<br>
> passed to libvirt in XML format.<br>
><br>
> I think the following could work:<br>
><br>
> RAW = [<br>
> TYPE="kvm",<br>
> DATA="<qemu:commandline><qemu:arg value='-set'/><qemu:arg<br>
> value='device.virtio-disk0.scsi=off'/></qemu:commandline><!-- config-wce=off<br>
> is not needed in RHEL 6.4 --><qemu:commandline><qemu:arg<br>
> value='-set'/><qemu:arg<br>
> value='device.virtio-disk0.config-wce=off'/></qemu:commandline><qemu:commandline><qemu:arg<br>
> value='-set'/><qemu:arg<br>
> value='device.virtio-disk0.x-data-plane=on'/></qemu:commandline>"<br>
> ]<br>
><br>
> I don't have a test machine around and I would like to hear back from you whether it<br>
> works or not.<br></div></div></blockquote></div></div></div></div></blockquote><div><br></div></div><div>It results in the following kvm process (observe the last line):<br><br>kvm -S -M pc-i440fx-1.6 -cpu qemu32 -enable-kvm -m 256 -smp 1,sockets=1,cores=1,threads=1 -name one-34 -uuid 632931d1-6195-4dfc-c01e-9fed9b19dd84 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-34.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot c <br>
-drive file=/srv/cloud/one/var//datastores/0/34/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native <br>-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/srv/cloud/one/var//datastores/0/34/disk.1,if=none,media=cdrom,id=drive-ide0-0-0,readonly=on,format=raw <br>
<div class="im">
-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=22,id=hostnet0 <br>-device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:c0:a8:0f:6e,bus=pci.0,addr=0x3 <br></div>-usb -vnc <a href="http://0.0.0.0:34" target="_blank">0.0.0.0:34</a> -vga cirrus <br>
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 <br>-set device.virtio-disk0.scsi=off -set device.virtio-disk0.config-wce=off -set device.virtio-disk0.x-data-plane=on<br></div><div><br></div><div>The virtual machine reaches the runn state, but invariably there is a <br>
</div><div>"Boot failed: could not read the boot disk" error.<br><br></div><div>Some possibilities:<br></div><div>- A mismatch between the vd device prefix and the GRUB configuration?<br></div><div>- Is it mandatory to use an IDE disk with my raw virtual machine image?<br>
<br></div><div>I'm looking for a solution to the "Boot failed: could not read the boot disk" error.<br></div><div>Any ideas?<br><br></div><div>Thanks,<br><br>Erico.<br></div><div><div class="h5"><div><br><br></div>
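<div><br>One way to check the vd prefix possibility raised above, assuming the image was originally installed against an IDE or SCSI disk: with DEV_PREFIX="vd" the disk appears inside the guest as /dev/vda, so GRUB's root= parameter and /etc/fstab both need to reference the vd names. For example, from inside the guest (or from the image mounted on the host, adjusting the paths):<br><br>grep -n "root=" /boot/grub/grub.cfg<br>grep -v "^#" /etc/fstab<br><br>If these still point at /dev/sda1 or /dev/hda1, switching them to /dev/vda1 (or to UUID= references) would rule this cause in or out.<br></div>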
<div><br> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="gmail_extra"><div><div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div>
><br>
> [1]: <a href="http://opennebula.org/documentation:rel4.2:template#raw_section" target="_blank">http://opennebula.org/documentation:rel4.2:template#raw_section</a><br>
> [2]:<br>
> <a href="http://blog.vmsplice.net/2013/03/new-in-qemu-14-high-performance-virtio.html" target="_blank">http://blog.vmsplice.net/2013/03/new-in-qemu-14-high-performance-virtio.html</a><br>
><br>
> Health and Goodwill,<br>
><br>
><br>
> On Sat, Aug 31, 2013 at 11:01 PM, Erico Augusto Cavalcanti Guedes<br>
> <<a href="mailto:eacg@cin.ufpe.br" target="_blank">eacg@cin.ufpe.br</a>> wrote:<br>
>><br>
>> Hello,<br>
>><br>
>> on [1], page 10, section 2.3 - KVM Configuration: "To achieve the best<br>
>> possible I/O rates for the KVM guest, the virtio-blk-data-plane feature was<br>
>> enabled for each LUN (a disk or partition) that was passed from the host to<br>
>> the guest. To enable virtio-blk-data-plane for a LUN being passed to the<br>
>> guest, the x-data-plane=on option was added for that LUN in the qemu-kvm<br>
>> command line used to set up the guest. For example:<br>
>> /usr/libexec/qemu-kvm -drive<br>
>> if=none,id=drive0,cache=none,aio=native,format=raw,file=<disk or partition><br>
>> -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on<br>
>> "<br>
>> I'd be grateful if you could help me with the following question:<br>
>> How can I customize the -device virtio-blk-pci parameter during OpenNebula VM<br>
>> initialization to insert x-data-plane=on into it?<br>
>><br>
>> My VM Template:<br>
>> CONTEXT=[NETWORK="YES",SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]"]<br>
>> CPU="1"<br>
>><br>
>> DISK=[AIO="native",BUS="virtio",CACHE="none",DEV_PREFIX="vd",FORMAT="raw",IMAGE_ID="1"]<br>
>> GRAPHICS=[LISTEN="0.0.0.0",TYPE="VNC"]<br>
>> MEMORY="256"<br>
>> NIC=[NETWORK_ID="0"]<br>
>> OS=[ARCH="i686",BOOT="hd"]<br>
>><br>
>> KVM process on node:<br>
>> /usr/bin/kvm -S -M pc-i440fx-1.6 -cpu qemu32 -enable-kvm -m 256 -smp<br>
>> 1,sockets=1,cores=1,threads=1 -name one-27<br>
>> -uuid c014337c-5255-e983-862e-b744f889aa49 -no-user-config -nodefaults<br>
>> -chardev<br>
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-27.monitor,server,nowait<br>
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc<br>
>> -no-shutdown -boot c<br>
>> -drive<br>
>> file=/srv/cloud/one/var//datastores/0/27/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none<br>
>> -device<br>
>> virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0<br>
>> -drive<br>
>> file=/srv/cloud/one/var//datastores/0/27/disk.1,if=none,media=cdrom,id=drive-ide0-0-0,readonly=on,format=raw<br>
>> -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0<br>
>> -netdev tap,fd=22,id=hostnet0<br>
>> -device<br>
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:c0:a8:0f:6e,bus=pci.0,addr=0x3<br>
>> -usb -vnc <a href="http://0.0.0.0:27" target="_blank">0.0.0.0:27</a> -vga cirrus -device<br>
>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5<br>
>><br>
>> I'm running ONE 4.2 on Debian 7.1 x86_64, kernel 3.2.0-4-amd64, with a<br>
>> customized qemu-1.6 (compiled by myself to enable virtio-blk-data-plane),<br>
>> with Debian 7.1 i386 VMs.<br>
>><br>
>> Thanks in advance,<br>
>><br>
>> Erico.<br>
>><br>
>> [1]<br>
>> <a href="ftp://public.dhe.ibm.com/linux/pdfs/KVM_Virtualized_IO_Performance_Paper_v2.pdf" target="_blank">ftp://public.dhe.ibm.com/linux/pdfs/KVM_Virtualized_IO_Performance_Paper_v2.pdf</a><br>
>><br>
><br>
><br>
><br>
> --<br>
> Valentin Bud<br>
> <a href="http://databus.pro" target="_blank">http://databus.pro</a> | <a href="mailto:valentin@databus.pro" target="_blank">valentin@databus.pro</a><br>
><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br></div></div><span><font color="#888888"><div dir="ltr"><div><div><div>-- </div><div>Join us at OpenNebulaConf2013 in Berlin, 24-26 September, 2013</div>
</div><div>-- </div></div>Ruben S. Montero, PhD<br>
Project co-Lead and Chief Architect<br>OpenNebula - The Open Source Solution for Data Center Virtualization<br><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula</div>
</font></span></div>
<br></blockquote></div></div></div><br></div></div>
<br>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br></blockquote></div><br></div>