[Interoperability] Storage Management using CDMI

florian.feldhaus at tu-dortmund.de
Fri Dec 2 05:46:01 PST 2011


Hi,

Gerhard and I have been working on the CDMI part for quite a while and we
have a first working version ready. I created the following feature in the
OpenNebula issue tracker for it:
http://dev.opennebula.org/issues/1018

We will provide further information and a patch in the next few days, and
I would be glad to discuss the details and start testing and improving the
code.

Cheers,
Florian

On 01.09.11 at 17:53, "Daniel Molina" <dmolina at opennebula.org> wrote:

>Hi Florian,
>
>Sorry for the late response, but we are working hard on the OpenNebula
>3.0 release. Please find my comments inline.
>
>On 24 August 2011 15:55,  <florian.feldhaus at tu-dortmund.de> wrote:
>> Hi,
>>
>> as announced during the last IRC session and on the interoperability
>> mailing list, I'm currently working together with a student of mine
>> (Gerhard Sikora) on an implementation of CDMI for OpenNebula. We managed
>> to import images from a CDMI server into the OpenNebula image
>> repository, but we would like to go one step further and use the images
>> directly on the CDMI server without copying them to a local disk. CDMI
>> supports an export feature for storage objects which makes it possible
>> to mount an NFSv4 share or connect to an iSCSI or FibreChannel target
>> [1]. Thus it is possible to use an image file / block device directly
>> without copying it to the image repository.
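>>
>> To give an idea, a rough sketch of how the export information might
>> be read with curl (the server name, object ID and the exact field
>> names are made up for illustration; see [1] for the normative
>> details):
>>
>> # ask the CDMI server for the exports field of an object,
>> # assuming the server supports selecting single fields via the query
>> curl -s \
>>     -H "X-CDMI-Specification-Version: 1.0" \
>>     -H "Accept: application/cdmi-object" \
>>     "http://my.cdmi-server.tld/cdmi_objectid/45234?exports"
>> # a server supporting exported protocols could answer with e.g.
>> # "exports": { "Network/NFSv4": { ... }, "iSCSI": { ... } }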
>>
>> We had a look at the Hybrid storage feature (Feature #652) to see how
>> you implement it for S3. From the code we got the impression that
>> you're still copying the image files locally instead of using S3
>> directly. Is this correct? Do you have plans to use S3 directly in the
>> future?
>
>In order to use images from external providers we have two options:
>
>1. Import the image into the OpenNebula image repository and deal with
>it like any other OpenNebula image (Catalog component).
>2. Adapt the TM driver to interact with the external cloud provider,
>as we do with http URLs. The image is copied to the host using wget
>instead of being copied from the repo.
>
>In the case of S3, we decided to copy the images locally and use the
>image repository instead of the provider URL. This way we only have
>to transfer the image once, and we can also benefit from the
>OpenNebula authorization. I think it's faster to use one of the TM
>drivers such as ssh or nfs instead of downloading the image on each
>host; it saves a lot of time.
>
>One advantage of the second option (adapting the TM driver) would be
>that if the image is modified in S3, the hosts will use the latest
>version of the image. However, downloading the image from S3 will take
>longer than copying it from the repository, and the image may not be in
>a stable state.
>
>In the case of CDMI, we could support both options:
>1. A CDMI driver could be implemented for the OpenNebula catalog
>component in order to import those images into the repo, in case the
>cloud storage and the host networks are different.
>2. The TM drivers could be adapted to deal with CDMI URLs, in case
>the cloud storage and the host networks are the same.
>
>>
>> With the current storage model in OpenNebula we would implement
>> direct support for CDMI as follows:
>> - use the SOURCE parameter during image creation to specify the CDMI
>> URI of the CDMI object (e.g.
>> SOURCE=http://my.cdmi-server.tld/cdmi_objectid/45234)
>> - modify one_image.rb to recognize CDMI objects and not copy the
>> image locally
>> - create a transfer manager for CDMI. The transfer manager needs to
>> use a CDMI client to issue CDMI commands. It can then use CDMI syntax
>> to copy (or clone) images directly on the CDMI server, so that the
>> CDMI server may use deduplication and thin provisioning. When the
>> image is ready, the transfer manager should extract an NFSv4
>> mountpoint or iSCSI/FC target from the metadata of the CDMI object
>> and either mount the NFSv4 share or directly attach the block device
>> to the VM. When deleting VMs, the mounts must be removed and the
>> object must be deleted or modified using CDMI syntax (see the sketch
>> below).
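>>
>> A very rough sketch of the clone and mount steps (the copy request
>> body, the header values and the exports field are our assumptions and
>> would have to be checked against the CDMI spec [1]):
>>
>> SRC="http://my.cdmi-server.tld/cdmi_objectid/45234"
>> DST="http://my.cdmi-server.tld/vmimages/vm42.img"
>> # 1. server-side copy, so the CDMI server can deduplicate and
>> #    thin-provision instead of moving bytes through the front-end
>> curl -s -X PUT \
>>     -H "X-CDMI-Specification-Version: 1.0" \
>>     -H "Content-Type: application/cdmi-object" \
>>     -d "{\"copy\": \"$SRC\"}" "$DST"
>> # 2. extract the NFSv4 mountpoint (or iSCSI/FC target) from the
>> #    exports metadata, as in the curl sketch above, and mount it:
>> # mount -t nfs4 <server>:<export-path> <vm-datastore-dir>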
>>
>> As far as we could test it, the above implementation should work, but
>> before we change OpenNebula code, we would like to discuss with you
>> how you would do it and what your views are regarding direct usage of
>> external storage.
>
>I don't think you have to create a new TM driver. You could implement
>it as we do with the http sources, so you can still use, for example,
>the predefined ssh driver with the CDMI option:
>1. Specify the CDMI URL in the SOURCE parameter, using a tag to
>identify the CDMI URL:
>SOURCE=cdmi://my.cdmi-server.tld/cdmi_objectid/45234
>The image is not copied if a SOURCE is specified; it is only copied
>if there is a PATH parameter.
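>
>For example, a minimal image template (the cdmi:// scheme is just the
>suggestion above):
>
>NAME   = "debian-cdmi"
>TYPE   = "OS"
># used in place by the TM driver, not copied into the repository:
>SOURCE = "cdmi://my.cdmi-server.tld/cdmi_objectid/45234"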
>
>2. Adapt the nfs/ssh/... TM drivers to deal with the CDMI URLs. For
>example (tm_mad/nfs/tm_clone.sh):
>case $SRC in
>cdmi://*)
>    TODO
>    ;;
>http://*)
>    log "Downloading $SRC"
>    exec_and_log "$WGET -O $DST_PATH $SRC" \
>        "Error downloading $SRC"
>    ;;
>
>*)
>    log "Cloning $SRC_PATH"
>    exec_and_log "cp -r $SRC_PATH $DST_PATH" \
>        "Error copying $SRC to $DST"
>    ;;
>esac
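>
>The cdmi://* branch could then look roughly like this (just a sketch:
>rewriting the cdmi:// scheme to plain http:// and assuming the server
>returns the raw object value on a plain HTTP GET; mounting the
>exported share instead of downloading would be the direct-usage
>variant):
>
>cdmi://*)
>    # turn the cdmi:// pseudo-scheme back into a plain http URL
>    HTTP_SRC="http://${SRC#cdmi://}"
>    log "Downloading CDMI object $HTTP_SRC"
>    exec_and_log "curl -s -o $DST_PATH $HTTP_SRC" \
>        "Error downloading $HTTP_SRC"
>    ;;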
>
>Hope this helps, and thanks for the efforts and the continued support
>from you and your team.
>
>BTW, if you want we can use the interoperability list for this kind of
>discussion. Maybe there are people interested in this kind of
>information or in implementing the same use case.
>
>Kind regards.
>
>>
>> Cheers,
>> Florian
>>
>> [1]
>> http://cdmi.sniacloud.com/CDMI_Spec/13-Exported_Protocols/13-Exported_Protocols.htm
>>
>>
>
>
>
>-- 
>Daniel Molina, Cloud Technology Engineer/Researcher
>Major Contributor
>OpenNebula - The Open Source Toolkit for Cloud Computing
>www.OpenNebula.org | dmolina at opennebula.org


