[one-users] Add storage backend management

Paul Grandperrin paul.grandperrin at alterway.fr
Mon Oct 24 07:38:47 PDT 2011


PS (pre-scriptum :) : please view this mail with a fixed-width font and
without the 80-column limit.

Hi all,

My name is Paul (paulg) Grandperrin and I'm working as an engineering intern
at a French company called Alterway Hosting, specialized in open source
services.

My mission is to implement in OpenNebula a way to manage a pool of storage
backends (like iSCSI targets).
A storage backend (SB) will be similar to a host, except its role is to
manage images instead of VMs.
The objective is to manage a pool of heterogeneous storage backends to store
the images and the VMs' block devices.

A typical use case could be:
 20 hypervisors
 5 high performance iSCSI targets with SSD
 10 low performance iSCSI targets
 1 ceph cluster

We want OpenNebula to automatically allocate and attach the VMs' storage on
the different storage backends.


SB attributes are: ID(int), NAME(string), STATE(enum), TM(string),
CAPACITY(int), FREE(int), TM_RAW(string), COST(int), ROLE(enum)
 TM is the name of the Transfer Manager used to communicate with the SB.
 CAPACITY and FREE give information about the total and remaining space on
the SB.
 TM_RAW is an optional argument passed directly to the TM.
 COST is an integer chosen by the administrator to prioritize some SBs over
others.
 ROLE represents the usage; it could be SAN or REPOSITORY.

We might also add these attributes: TAGS (a list of strings to do some
filtering), NBIMAGE(int), NBVM(int), USER, GROUP...
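To make the proposed attribute set concrete, it could be modeled as a simple record. This is a hypothetical Python sketch: the field names follow the proposal above, but nothing here is existing OpenNebula code, and the unit for CAPACITY/FREE is an assumption.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class SBState(Enum):
    ENABLED = "ENABLED"
    DISABLED = "DISABLED"

class SBRole(Enum):
    SAN = "SAN"
    REPOSITORY = "REPOSITORY"

@dataclass
class StorageBackend:
    id: int
    name: str
    state: SBState
    tm: str                 # name of the Transfer Manager driver
    capacity: int           # total space (MB, assumed unit)
    free: int               # remaining space (MB, assumed unit)
    cost: int               # admin-chosen priority weight
    role: SBRole
    tm_raw: str = ""        # optional raw argument passed to the TM
    tags: List[str] = field(default_factory=list)  # optional filtering tags

    @property
    def used(self) -> int:
        """Derived used space, from the declared CAPACITY and FREE."""
        return self.capacity - self.free
```

The derived `used` property shows why only CAPACITY and FREE need to be stored: the used space follows from them.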


We would create a new Transfer Manager (TM2), supporting multiple drivers at
once. The TM2 (and its drivers) will provide the following actions:

 COMMAND    ARG1                        ARG2                     DESCRIPTION
 get        [HOSTNAME]:[IMAGENAME]      /path                    stores an image on a path
 push       /path                       [HOSTNAME]:[IMAGENAME]   stores an image
 clone      [HOSTNAME]:[IMAGENAMESRC]   [IMAGENAMEDST]           clones an image on the same storage backend
 delete     [HOSTNAME]:[IMAGENAME]                               simply deletes an image
 get-size   [HOSTNAME]:[IMAGENAME]                               returns the size of the image
 allocate   [HOSTNAME]:[IMAGENAME]      size                     creates a data block on the HOSTNAME storage
 resize     [HOSTNAME]:[IMAGENAME]      size                     resizes the image
 read       [HOSTNAME]:[IMAGENAME]                               reads an image, outputs to stdout
 write      [HOSTNAME]:[IMAGENAME]                               writes the content of stdin to an image
 attach     [HOSTNAME]:[IMAGENAME]      /path                    attaches the image IMAGENAME stored on HOSTNAME to the path "/path"
 detach     [HOSTNAME]:[IMAGENAME]      /path                    disconnects the image/removes the symlink, etc.
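To illustrate the argument convention, here is a hypothetical sketch of how a TM2 front-end might validate an action and parse the [HOSTNAME]:[IMAGENAME] form before handing it to a driver. The function names are illustrative only, not part of any existing OpenNebula API.

```python
def parse_location(arg: str) -> tuple:
    """Split a [HOSTNAME]:[IMAGENAME] argument on the first colon."""
    host, sep, image = arg.partition(":")
    if not sep or not host or not image:
        raise ValueError(f"expected HOSTNAME:IMAGENAME, got {arg!r}")
    return host, image

TM2_ACTIONS = {"get", "push", "clone", "delete", "get-size",
               "allocate", "resize", "read", "write", "attach", "detach"}

def dispatch(action: str, *args):
    """Normalize a TM2 request; a real TM2 would then call the driver."""
    if action not in TM2_ACTIONS:
        raise ValueError(f"unknown TM2 action: {action}")
    if action == "push":
        # push is the one action whose first argument is a local path
        return action, args
    host, image = parse_location(args[0])
    return action, (host, image) + args[1:]
```

Splitting on the first colon keeps image names containing colons unambiguous, since the hostname part cannot contain one.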


Some other things to do:
 Add a command called onestoragebackend with the following actions: create,
delete, list, top, show (enable, disable, update).
 Add a new table in the database named storagebackend_pool.
 Add the needed classes to the source code:
StorageBackend, StorageBackendPool, RequestManagerStorageBackend...


For backward compatibility, it should be possible to write a TM2 driver
(named LEGACYTM) which will use the old TM drivers to complete the requested
actions.
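One way to picture LEGACYTM is as a thin adapter that translates TM2 actions into invocations of the old per-script TM drivers. The sketch below is hypothetical: the action-to-script mapping shown (clone -> tm_clone.sh, delete -> tm_delete.sh) is only an assumption about the legacy driver layout, and the class is not part of any existing codebase.

```python
class LegacyTM:
    """Hypothetical TM2 driver delegating to old-style TM scripts."""

    # Assumed mapping from TM2 actions to legacy TM driver scripts.
    ACTION_MAP = {"clone": "tm_clone.sh", "delete": "tm_delete.sh"}

    def __init__(self, runner=None):
        # Injectable runner so tests can capture commands instead of
        # actually executing legacy shell scripts.
        self.runner = runner or (lambda cmd: print(" ".join(cmd)))

    def handle(self, action, *args):
        script = self.ACTION_MAP.get(action)
        if script is None:
            raise NotImplementedError(
                f"no legacy equivalent for TM2 action {action!r}")
        self.runner([script, *args])
```

Actions with no legacy equivalent fail loudly, which makes it clear which parts of TM2 the old drivers cannot serve.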


In the future (depending on whether we have enough time), we might add:
 Cluster awareness
 Permission management
 Tag filtering: we want to be able to tell OpenNebula that a VM needs
some special storage properties (e.g. HDD7K, HDD15K, SSD, DISTRIBUTED,
whatever).
                 The scheduler will then filter the storage backends
providing all these properties and choose one of them.
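The filtering step could be as simple as a set-inclusion test over TAGS. This is a minimal hypothetical sketch, with ranking by lowest COST as an assumed tie-breaker (the proposal does not fix the COST ordering); the backend names are made up for the example.

```python
def filter_backends(backends, required_tags):
    """Keep backends providing every required tag, lowest COST first."""
    matches = [sb for sb in backends
               if set(required_tags) <= set(sb["tags"])]
    return sorted(matches, key=lambda sb: sb["cost"])

# Example pool, loosely following the use case above (illustrative data).
pool = [
    {"name": "ssd-target-01",  "tags": ["SSD", "HDD15K"], "cost": 20},
    {"name": "slow-target-01", "tags": ["HDD7K"],         "cost": 5},
    {"name": "ceph-cluster",   "tags": ["DISTRIBUTED"],   "cost": 10},
]
```

A VM asking for the SSD property would match only the SSD target, while an empty requirement list would match the whole pool, cheapest first.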


Please feel free to comment and make suggestions.

Regards,

Paul Grandperrin,
AlterWay Hosting

