Just added the vmm driver for the IPoIB NAT stuff.

On Thu, May 10, 2012 at 2:07 AM, Shankhadeep Shome <shank15217@gmail.com> wrote:
It's not clear where I would upload the drivers to. I created a wiki account and a wiki page for the 1-to-1 NAT configuration for IPoIB. I can just send you the tar file with the updated driver.

Shankhadeep
<br><div class="gmail_quote">On Tue, May 8, 2012 at 9:47 AM, Jaime Melis <span dir="ltr"><<a href="mailto:jmelis@opennebula.org" target="_blank">jmelis@opennebula.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Shankhadeep,

I think the community wiki site is the best place to upload these drivers to:
http://wiki.opennebula.org/

It's open to registration; let me know if you run into any issues.

About the blog post, our community manager will send you your login info in a PM.

Thanks!

Cheers,
Jaime

--
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jmelis@opennebula.org

On Mon, May 7, 2012 at 7:42 PM, Shankhadeep Shome <shank15217@gmail.com> wrote:
Sure, where and how do I do it? I noticed that you have a community wiki site. Do I upload the driver and make an entry there?

Shankhadeep

On Mon, May 7, 2012 at 11:54 AM, Jaime Melis <jmelis@opennebula.org> wrote:
Hello Shankhadeep,

That sounds really nice. Would you be interested in contributing your code to OpenNebula's ecosystem and/or publishing an entry on OpenNebula's blog?

Regards,
Jaime

On Sun, May 6, 2012 at 5:39 AM, Shankhadeep Shome <shank15217@gmail.com> wrote:
On Sat, May 5, 2012 at 11:38 PM, Shankhadeep Shome <shank15217@gmail.com> wrote:
Hi Chris,

We have a solution we are using on Oracle Exalogic hardware (the bare-metal boxes and gateway switches). I think I understand the requirement: IB-accessible storage from VMs is possible, though it's a bit convoluted. Our solution was to create a one-to-one NAT from the VMs to the IPoIB network, which lets the VMs mount storage natively over the IB network. The performance is pretty good, about 9 Gbps per node with a 64 KB MTU. We created an OpenNebula driver for this and I'm happy to share it with the community. The driver handles VM migrations by enabling/disabling IP aliases on the host, and it can also be used to manipulate iptables rules on the source and destination hosts when OpenNebula moves VMs around.
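To give an idea of the plumbing, the per-VM setup looks something like the following sketch. The addresses, interface name, and alias label are illustrative examples only, not values from the actual driver:

# Illustrative values: the VM has private address 192.168.100.5 on the
# host-side bridge; 10.10.10.5 is the alias assigned to this VM on the
# IPoIB network (ib0).

# Bring up the per-VM alias on the IPoIB interface
ip addr add 10.10.10.5/24 dev ib0 label ib0:vm5

# One-to-one NAT: inbound traffic for the alias is sent to the VM...
iptables -t nat -A PREROUTING -d 10.10.10.5 -j DNAT --to-destination 192.168.100.5

# ...and the VM's outbound traffic is rewritten to the alias
iptables -t nat -A POSTROUTING -s 192.168.100.5 -j SNAT --to-source 10.10.10.5

# On migration, the same steps are undone on the source host ("ip addr
# del", iptables -D) and repeated on the destination host.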
Shankhadeep

On Sat, May 5, 2012 at 5:42 PM, Chris Barry <cbarry@rjmetrics.com> wrote:
Reading more, I see that the available method for block storage is iSCSI, and that the LUNs are attached to the host. From there, a symlink tree exposes the target to the guest in a predictable way on every host.
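If I'm reading it right, the sequence on a host is roughly the following. The portal address, target name, and paths are my guesses for illustration, not taken from the docs:

# Attach the LUN on the host (portal and IQN are invented examples)
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m node -T iqn.2012-05.org.example:vm-42-disk-0 -p 10.0.0.1 --login

# Expose the attached device to the guest at a predictable path
ln -s /dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.2012-05.org.example:vm-42-disk-0-lun-0 \
    /var/lib/one/42/images/disk.0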
So, to modify my question a bit: are all LUNs attached to all hosts simultaneously, or does the attachment only happen when a migration is about to occur? Also, is the LUN put into a read-only mode or something on the original host during migration to protect the data, or must a clustering filesystem be employed?

Guess I have a lot to read :)

Thanks,
-C

On Sat, May 5, 2012 at 5:23 PM, Chris Barry <cbarry@rjmetrics.com> wrote:
Hi Guba,

Thank you for replying. My goal was to boot from a single shared ISO image, run an in-memory minimal Linux on each node with no 'disk' at all, and then mount logical data volume(s) from a centralized storage system. Perhaps that is outside the scope of OpenNebula's design goals and may not be possible - I'm just now investigating it.

I see you are using NFS, but my desire is to use block storage instead, ideally LVM, and not incur the performance penalties of IPoIB. It does sound simple, though, and that's always good. Do you have any performance data on that setup, in terms of IOPS and/or MB/s write speeds? It does sound interesting.

Thanks again,
-C

On Sat, May 5, 2012 at 4:01 PM, Guba Sándor <gubasanyi@gmail.com> wrote:
Hi,

I'm using InfiniBand for shared storage. My setup is simple: the OpenNebula install directory is shared over NFS with the worker nodes.

- I don't understand what you mean by a shared image. There will be a copy (or a symlink, if the image is persistent) on the NFS host, and that is what the hypervisor will use over the network. Live migration is available because you don't move the image; another host simply starts using it from the same spot. With my Linux images I see about a 30-second delay on live migration. You can use the qcow2 driver for shared images.

- I don't understand exactly what you mean by "guest unaware". If you mean the storage-to-host connection, that has nothing to do with OpenNebula. You can use any shared filesystem; NFS runs over the IPoIB connection.
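For reference, the whole setup amounts to an export on the frontend and a mount on each worker node. The hostname and options below are examples, not my exact configuration:

# /etc/exports on the frontend (the NFS server)
/var/lib/one *(rw,sync,no_subtree_check,no_root_squash)

# On each worker node, mount it at the same path; the frontend's IPoIB
# address ("frontend-ib") keeps the NFS traffic on the InfiniBand fabric
mount -t nfs frontend-ib:/var/lib/one /var/lib/one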
On 2012-05-05 19:01, Chris Barry wrote:
Greetings,

I'm interested in hearing user accounts of using InfiniBand as the storage interconnect with OpenNebula, if anyone has any thoughts to share. Specifically about:

* using a shared image and live migrating it (i.e. no copying of images)
* is the guest unaware of the InfiniBand, or is it running IB drivers?
* does the host expose the volumes to the guest, or does the guest connect directly?
* I'd like to avoid iSCSI over IPoIB if possible
* clustering filesystem/LVM requirements
* file-based vdisk or logical volume usage?

Anything, any experiences at all, will be helpful.

Thanks,
Christopher