[one-users] infiniband

Jaime Melis jmelis at opennebula.org
Mon May 7 08:52:46 PDT 2012


Hello Chris,

> So then to modify my question a bit, are all LUNs attached to all hosts
> simultaneously? Or does the attachment only happen when a migration is to
> occur? Also, is the LUN put into a read-only mode or something during
> migration on the original host to protect the data? Or, must a clustering
> filesystem be employed?
>

Our stock drivers do expose all the iSCSI targets (which are LVM volumes)
to all the hosts, but only one host logs into a given iSCSI target at a
time, so there won't be any collisions. In other words: we log into the
iSCSI target when starting the VM and log out when stopping it. We also
handle migrations: we log out on the source host and log in on the target
host.
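
To give a rough idea of the sequence, here is a minimal Python sketch (it
is not the actual driver code, which consists of shell scripts that run
their commands on the hosts, for instance over ssh; the target, portal and
host names below are placeholders):

# Rough sketch (not the stock OpenNebula driver) of the iSCSI
# login/logout sequence described above. Target, portal and host
# names are placeholders.
import subprocess

def run_on(host, command):
    # The real drivers execute such commands on the hypervisor hosts;
    # here we simply shell out to ssh.
    subprocess.run(["ssh", host, command], check=True)

def iscsi(target, portal, action):
    # iscsiadm node-mode login/logout for a single target.
    return "iscsiadm -m node -T %s -p %s --%s" % (target, portal, action)

def vm_start(host, target, portal):
    run_on(host, iscsi(target, portal, "login"))    # log in on VM start

def vm_stop(host, target, portal):
    run_on(host, iscsi(target, portal, "logout"))   # log out on VM stop

def vm_migrate(src, dst, target, portal):
    # Only one host is logged into the target at a time:
    # log out on the source host, log in on the destination host.
    vm_stop(src, target, portal)
    vm_start(dst, target, portal)

vm_migrate("node01", "node02",
           "iqn.2012-05.org.example:vm-42", "10.0.0.1:3260")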

However, these drivers are meant to be hacked to fit each datacenter's
requirements.

Regards,
Jaime


> Guess I have a lot to read :)
>
> Thanks
> -C
>
>
> On Sat, May 5, 2012 at 5:23 PM, Chris Barry <cbarry at rjmetrics.com> wrote:
>
>> Hi Guba,
>>
>> Thank you for replying. My goal was to use a single shared ISO image to
>> boot from, run an in-memory minimal Linux on each node with no 'disk' at
>> all, and then mount logical data volume(s) from a centralized storage
>> system. Perhaps that is outside the scope of OpenNebula's design goals,
>> and may not be possible; I'm just now investigating it.
>>
>> I see you are using NFS, but my desire is to use block storage instead,
>> ideally LVM, and not incur the performance penalties of IPoIB. It does
>> sound simple though, and that's always good. Do you have any performance
>> data on that setup in terms of IOPS and/or MB/s write speeds? It does
>> sound interesting.
>>
>> Thanks again,
>> -C
>>
>>
>> On Sat, May 5, 2012 at 4:01 PM, Guba Sándor <gubasanyi at gmail.com> wrote:
>>
>>>  Hi
>>>
>>> I'm using InfiniBand to provide shared storage. My setup is simple: the
>>> OpenNebula install directory is shared over NFS with the worker nodes.
>>>
>>> - I don't understand what you mean by a shared image. There will be a copy
>>> (or a symlink, if the image is persistent) on the NFS host, and that is
>>> what the hypervisor will use over the network. Live migration works
>>> because you don't move the image; another host simply uses it from the
>>> same spot. With my Linux images I see about a 30-second delay when live
>>> migrating. You can use the qcow2 driver for shared images.
>>>
>>> - I don't understand exactly what you mean by "guest unaware". If you mean
>>> the storage-host connection, it has nothing to do with OpenNebula. You can
>>> use any shared filesystem. NFS uses an IPoIB connection.
>>>
>>> On 2012-05-05 19:01, Chris Barry wrote:
>>>
>>> Greetings,
>>>
>>> I'm interested in hearing user accounts of using InfiniBand as the
>>> storage interconnect with OpenNebula, if anyone has any thoughts to share.
>>> Specifically about:
>>> * using a shared image and live migrating it (e.g. no copying of images).
>>> * is the guest unaware of the InfiniBand, or is it running IB drivers?
>>> * does the host expose the volumes to the guest, or does the guest
>>> connect directly?
>>> * I'd like to avoid iSCSI over IPoIB if possible.
>>> * clustering filesystem/LVM requirements.
>>> * file-based vdisk or logical volume usage?
>>>
>>> Anything, any experiences at all, will be helpful.
>>>
>>> Thanks
>>> Christopher
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>>
>>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>


-- 
Jaime Melis
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | jmelis at opennebula.org

