On Thu, May 10, 2012 at 1:01 AM, Shankhadeep Shome <shank15217@gmail.com> wrote:

What we did was expose the IPoIB network directly to the VMs using 1-to-1 NAT. The VMs can then connect to an iSCSI or NFS source and log in directly. I am not sure which would be faster: iSER to the host, with the volume exposed to the VMs as a raw device, or direct-attached network storage from inside the VM; either way you lose some performance. If you want to take the iSER option, the best approach is to present an iSER volume to the hosts and use LVM to carve out storage for the VMs via the virtio-blk mechanism. There is also a currently proprietary route using Mellanox's SR-IOV drivers to present an InfiniBand virtual function directly to the VMs, but those drivers have not been released to OFED yet and have their own limitations.
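As a rough sketch of that flow on a host (the portal address, target IQN, disk device, and volume names below are made-up placeholders; this assumes open-iscsi with the iSER transport and LVM2 are installed):

    # Bind the node record to the iSER transport, then log in (host side)
    iscsiadm -m discovery -t sendtargets -p 192.168.100.10
    iscsiadm -m node -T iqn.2012-05.com.example:storage.vols -p 192.168.100.10 \
        -o update -n iface.transport_name -v iser
    iscsiadm -m node -T iqn.2012-05.com.example:storage.vols -p 192.168.100.10 --login

    # Carve the iSER-backed disk into per-VM logical volumes
    pvcreate /dev/sdb
    vgcreate vg_vms /dev/sdb
    lvcreate -L 20G -n vm01-root vg_vms

Each logical volume is then handed to its VM as a virtio-blk disk, so the guests never touch the IB stack themselves.
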
With netperf we can hit around 9 Gbps over the IPoIB link to the VMs via NAT, using 64K frames with connected-mode IPoIB, which is enough to max out our local NAS appliance's throughput. Our cards are ConnectX-2 at 40 Gbps; even on these cards, IPoIB tops out around 12-14 Gbps with multiple streams. Keep in mind that IPoIB performance depends heavily on kernel and driver versions; with an older kernel and drivers the numbers are much lower. We are using the latest MLNX OFED drivers on Linux kernel 3.0, with large send/receive offload enabled, to reach this level of performance.
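For reference, the sort of commands involved in a test like that look roughly as follows (ib0, the peer address, and the test length are assumptions; connected mode and the ~64K MTU need driver support, and netserver has to be running on the far end):

    # Put the IPoIB interface into connected mode and raise the MTU to ~64K
    echo connected > /sys/class/net/ib0/mode
    ip link set ib0 mtu 65520

    # Single-stream TCP throughput test against the NAS/peer
    netperf -H 10.10.10.2 -t TCP_STREAM -l 30
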
On Wed, May 9, 2012 at 3:58 PM, Christopher Barry <cbarry@rjmetrics.com> wrote:

On Wed, 2012-05-09 at 00:16 -0400, Shankhadeep Shome wrote:
> Hi Jamie
>
> Thanks for the info, I am creating a readme and cleaning up the driver
> scripts and will be uploading them shortly.
>
> Shank

It's not clear to me whether you are using iSER or iSCSI over IPoIB. Can you clarify that? I'm envisioning iSER being used where the LVM volumes are logged into from the assigned guest's host, and then exposed to the guest as a local SCSI block device and connected to via virtio.
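(For illustration only, the setup envisioned here would boil down to something like the following libvirt disk stanza on the host; the LV path and target device are placeholders:)

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/vg_vms/vm01-root'/>
      <target dev='vda' bus='virtio'/>
    </disk>
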
If this is how it works, does the host itself become a storage device within 'one'? Or is the InfiniBand-enabled storage device seen as the storage device in 'one'?

Thanks,
-C
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org