[one-users] infiniband

Jaime Melis jmelis at opennebula.org
Mon May 14 05:56:41 PDT 2012


Hi Shankhadeep,

they look really nice. I've adapted your README to wiki style and added it
to wiki.opennebula.org:

http://wiki.opennebula.org/infiniband

Feel free to modify it.

Thanks a lot for contributing it!

Cheers,
Jaime

On Thu, May 10, 2012 at 8:46 AM, Shankhadeep Shome <shank15217 at gmail.com> wrote:

> Just added the VMM driver for the IPoIB NAT stuff.
>
>
> On Thu, May 10, 2012 at 2:07 AM, Shankhadeep Shome <shank15217 at gmail.com> wrote:
>
>> It's not clear where I should upload the drivers to. I created a wiki
>> account and a wiki page for the 1-to-1 NAT configuration for IPoIB. I can
>> just send you the tar file with the updated driver.
>>
>> Shankhadeep
>>
>>
>> On Tue, May 8, 2012 at 9:47 AM, Jaime Melis <jmelis at opennebula.org> wrote:
>>
>>> Hi Shankhadeep,
>>>
>>> I think the community wiki site is the best place to upload these
>>> drivers to:
>>> http://wiki.opennebula.org/
>>>
>>> It's open to registration; let me know if you run into any issues.
>>>
>>> About the blog post, our community manager will send you your login info
>>> in a PM.
>>>
>>> Thanks!
>>>
>>> Cheers,
>>> Jaime
>>>
>>>
>>> On Mon, May 7, 2012 at 7:42 PM, Shankhadeep Shome <shank15217 at gmail.com> wrote:
>>>
>>>> Sure, where and how do I do it? I noticed that you have a community
>>>> wiki site. Do I upload the driver and make an entry there?
>>>>
>>>> Shankhadeep
>>>>
>>>>
>>>> On Mon, May 7, 2012 at 11:54 AM, Jaime Melis <jmelis at opennebula.org> wrote:
>>>>
>>>>> Hello Shankhadeep,
>>>>>
>>>>> that sounds really nice. Would you be interested in contributing your
>>>>> code to OpenNebula's ecosystem and/or publishing an entry on OpenNebula's
>>>>> blog?
>>>>>
>>>>> Regards,
>>>>> Jaime
>>>>>
>>>>>
>>>>> On Sun, May 6, 2012 at 5:39 AM, Shankhadeep Shome <
>>>>> shank15217 at gmail.com> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Sat, May 5, 2012 at 11:38 PM, Shankhadeep Shome <
>>>>>> shank15217 at gmail.com> wrote:
>>>>>>
>>>>>>> Hi Chris,
>>>>>>>
>>>>>>> We have a solution we are using on Oracle Exalogic hardware (we are
>>>>>>> using the bare-metal boxes and gateway switches). I think I understand
>>>>>>> the requirement: IB-accessible storage from VMs is possible, but it's a
>>>>>>> bit convoluted. Our solution was to create a one-to-one NAT from the VMs
>>>>>>> to the IPoIB network, which allows the VMs to mount storage natively
>>>>>>> over the IB network. Performance is pretty good, about 9 Gbps per node
>>>>>>> with a 64k MTU. We created an OpenNebula driver for this and I'm happy to
>>>>>>> share it with the community. The driver handles VM migrations by
>>>>>>> enabling/disabling IP aliases on the host, and it can also manipulate
>>>>>>> iptables rules on the source and destination hosts when OpenNebula moves
>>>>>>> VMs around.
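>>>>>>>
>>>>>>> Roughly, the idea is along these lines (this is only a minimal sketch,
>>>>>>> not the actual driver; the interface name and addresses are made up):
>>>>>>>
>>>>>>>     import subprocess
>>>>>>>
>>>>>>>     IB_IF = "ib0"                 # host IPoIB interface (assumption)
>>>>>>>     ALIAS_IP = "192.168.100.42"   # per-VM alias on the IPoIB net (made up)
>>>>>>>     VM_IP = "10.0.0.42"           # VM's private address (made up)
>>>>>>>
>>>>>>>     def run(cmd):
>>>>>>>         subprocess.check_call(cmd)
>>>>>>>
>>>>>>>     def nat_up():
>>>>>>>         # Bring up the per-VM alias on the host's IPoIB interface.
>>>>>>>         run(["ip", "addr", "add", ALIAS_IP + "/24", "dev", IB_IF])
>>>>>>>         # 1:1 NAT: traffic hitting the alias is forwarded to the VM...
>>>>>>>         run(["iptables", "-t", "nat", "-A", "PREROUTING",
>>>>>>>              "-d", ALIAS_IP, "-j", "DNAT", "--to-destination", VM_IP])
>>>>>>>         # ...and traffic from the VM leaves with the alias as its source.
>>>>>>>         run(["iptables", "-t", "nat", "-A", "POSTROUTING",
>>>>>>>              "-s", VM_IP, "-j", "SNAT", "--to-source", ALIAS_IP])
>>>>>>>
>>>>>>>     def nat_down():
>>>>>>>         # Mirror of nat_up(), run on the source host after a migration.
>>>>>>>         run(["iptables", "-t", "nat", "-D", "POSTROUTING",
>>>>>>>              "-s", VM_IP, "-j", "SNAT", "--to-source", ALIAS_IP])
>>>>>>>         run(["iptables", "-t", "nat", "-D", "PREROUTING",
>>>>>>>              "-d", ALIAS_IP, "-j", "DNAT", "--to-destination", VM_IP])
>>>>>>>         run(["ip", "addr", "del", ALIAS_IP + "/24", "dev", IB_IF])
>>>>>>>
>>>>>>> On migration it would run nat_down() on the source host and nat_up() on
>>>>>>> the destination, so the alias and its NAT mapping follow the VM.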
>>>>>>>
>>>>>>> Shankhadeep
>>>>>>>
>>>>>>>
>>>>>>> On Sat, May 5, 2012 at 5:42 PM, Chris Barry <cbarry at rjmetrics.com> wrote:
>>>>>>>
>>>>>>>> Reading more, I see that the available methods for block storage are
>>>>>>>> iSCSI, and that the LUNs are attached to the host. From there, a symlink
>>>>>>>> tree exposes the target to the guest in a predictable way on every host.
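>>>>>>>>
>>>>>>>> If I understand it correctly, the attach step on a host would look
>>>>>>>> roughly like the sketch below (the target name, portal, and paths are
>>>>>>>> just placeholders):
>>>>>>>>
>>>>>>>>     import os
>>>>>>>>     import subprocess
>>>>>>>>
>>>>>>>>     TARGET = "iqn.2012-05.org.example:vm-42-disk-0"  # placeholder IQN
>>>>>>>>     PORTAL = "10.0.0.10:3260"                        # placeholder portal
>>>>>>>>     VM_DISK = "/var/lib/one/datastores/0/42/disk.0"  # placeholder path
>>>>>>>>
>>>>>>>>     # Log the host in to the iSCSI target so the LUN shows up locally.
>>>>>>>>     subprocess.check_call(["iscsiadm", "-m", "node",
>>>>>>>>                            "-T", TARGET, "-p", PORTAL, "--login"])
>>>>>>>>
>>>>>>>>     # udev exposes the LUN under a predictable by-path name...
>>>>>>>>     dev = "/dev/disk/by-path/ip-%s-iscsi-%s-lun-0" % (PORTAL, TARGET)
>>>>>>>>
>>>>>>>>     # ...and a symlink gives the guest definition a stable disk path.
>>>>>>>>     if os.path.lexists(VM_DISK):
>>>>>>>>         os.remove(VM_DISK)
>>>>>>>>     os.symlink(dev, VM_DISK)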
>>>>>>>>
>>>>>>>> So then to modify my question a bit, are all LUNs attached to all
>>>>>>>> hosts simultaneously? Or does the attachment only happen when a migration
>>>>>>>> is to occur? Also, is the LUN put into a read-only mode or something during
>>>>>>>> migration on the original host to protect the data? Or must a clustered
>>>>>>>> filesystem be employed?
>>>>>>>>
>>>>>>>> Guess I have a lot to read :)
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>> -C
>>>>>>>>
>>>>>>>>
>>>>>>>> On Sat, May 5, 2012 at 5:23 PM, Chris Barry <cbarry at rjmetrics.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Guba,
>>>>>>>>>
>>>>>>>>> Thank you for replying. My goal was to use a single shared ISO image
>>>>>>>>> to boot from, run an in-memory minimal Linux on each node with no
>>>>>>>>> 'disk' at all, and then mount logical data volume(s) from a centralized
>>>>>>>>> storage system. Perhaps that is outside the scope of OpenNebula's
>>>>>>>>> design goals, and it may not be possible; I'm just now investigating it.
>>>>>>>>>
>>>>>>>>> I see you are using NFS, but my desire is to use block storage
>>>>>>>>> instead, ideally LVM, and not incur the performance penalties of IPoIB. It
>>>>>>>>> does sound simple though, and that's always good. Do you have any
>>>>>>>>> performance data on that setup in terms of IOPS and/or MB/s write speeds?
>>>>>>>>> It does sound interesting.
>>>>>>>>>
>>>>>>>>> Thanks again,
>>>>>>>>> -C
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Sat, May 5, 2012 at 4:01 PM, Guba Sándor <gubasanyi at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi,
>>>>>>>>>>
>>>>>>>>>> I'm using InfiniBand to provide shared storage. My setup is simple:
>>>>>>>>>> the OpenNebula install directory is shared over NFS with the worker nodes.
>>>>>>>>>>
>>>>>>>>>> - I'm not sure what you mean by a shared image. There will be a copy
>>>>>>>>>> (or a symlink, if the image is persistent) on the NFS host, and that is
>>>>>>>>>> what the hypervisor uses over the network. Live migration works because
>>>>>>>>>> the image is never moved; another host simply starts using it from the
>>>>>>>>>> same spot. With my Linux images I see about a 30-second delay when live
>>>>>>>>>> migrating. You can use the qcow2 driver for shared images.
>>>>>>>>>>
>>>>>>>>>> - I don't understand exactly what you mean by "guest unaware". If you
>>>>>>>>>> mean the storage-to-host connection, it has nothing to do with
>>>>>>>>>> OpenNebula; you can use any shared filesystem. NFS runs over the IPoIB
>>>>>>>>>> connection.
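>>>>>>>>>>
>>>>>>>>>> For example, a non-persistent image can be a thin qcow2 file backed by
>>>>>>>>>> a base image that lives on the shared NFS datastore, something like the
>>>>>>>>>> sketch below (the paths are just examples):
>>>>>>>>>>
>>>>>>>>>>     import subprocess
>>>>>>>>>>
>>>>>>>>>>     # Both paths are on the NFS export, so every host sees the same files.
>>>>>>>>>>     BASE = "/var/lib/one/datastores/1/base.qcow2"  # example path
>>>>>>>>>>     DISK = "/var/lib/one/datastores/0/7/disk.0"    # example path
>>>>>>>>>>
>>>>>>>>>>     # Thin clone: only the VM's writes go to DISK, reads fall through to
>>>>>>>>>>     # BASE, and live migration just reopens the same file on another host.
>>>>>>>>>>     subprocess.check_call(["qemu-img", "create", "-f", "qcow2",
>>>>>>>>>>                            "-b", BASE, "-F", "qcow2", DISK])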
>>>>>>>>>>
>>>>>>>>>> On 2012-05-05 19:01, Chris Barry wrote:
>>>>>>>>>>
>>>>>>>>>> Greetings,
>>>>>>>>>>
>>>>>>>>>> I'm interested in hearing user accounts of using InfiniBand as the
>>>>>>>>>> storage interconnect with OpenNebula, if anyone has any thoughts to
>>>>>>>>>> share. Specifically about:
>>>>>>>>>> * using a shared image and live migrating it (i.e. no copying of
>>>>>>>>>> images)
>>>>>>>>>> * is the guest unaware of the InfiniBand, or is it running IB drivers?
>>>>>>>>>> * does the host expose the volumes to the guest, or does the guest
>>>>>>>>>> connect directly?
>>>>>>>>>> * I'd like to avoid iSCSI over IPoIB if possible
>>>>>>>>>> * clustered filesystem/LVM requirements
>>>>>>>>>> * file-based vdisk or logical volume usage?
>>>>>>>>>>
>>>>>>>>>> Any experiences at all would be helpful.
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>> Christopher
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>


-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jmelis at opennebula.org