[one-users] Use qcow and ssh tm driver for the same host

Carlos Martín Sánchez cmartin at opennebula.org
Wed Jul 18 07:32:46 PDT 2012


Hi,

If I understood correctly, you want to share the qcow datastore, but use
local storage for the system datastore.
To do this, simply set the ssh TM driver for the system DS [1], and make
sure you are exporting only /var/lib/one/datastores/100.
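As a sketch of what that change could look like (assuming the system datastore has the default id 0, and the standard `onedatastore` CLI; the filename `system-ds.conf` and the NFS export options are illustrative, not prescribed), something like:

```shell
# Hedged sketch: switch the system datastore (id 0 by default) to the
# ssh TM driver, so VM directories are copied to local disk on each host
# instead of living on the shared FS. Verify attribute names against [1].
cat > system-ds.conf <<'EOF'
TM_MAD = "ssh"
EOF
onedatastore update 0 system-ds.conf

# Then export only the qcow datastore over NFS, e.g. a line like this
# in /etc/exports on the NFS server (options are an example):
#   /var/lib/one/datastores/100  *(rw,sync,no_subtree_check)
```

With this setup the qcow images stay shared, while running-VM files in the system datastore use local storage and are transferred over ssh at deployment time.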

Regards

[1] http://opennebula.org/documentation:rel3.6:system_ds
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open-source Solution for Data Center Virtualization
www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula <http://twitter.com/opennebula>



On Wed, Jul 18, 2012 at 1:30 AM, Paulo A L Rego <pauloalr.alu at gmail.com> wrote:

> Hello,
>
> Is there any way to use qcow and ssh transfer driver for the same host?
> We have a shared FS with NFS using qcow, but when a lot of VMs are
> launched, the storage I/O becomes a bottleneck.
> We want to use both shared and local images for all hosts.
>
> We created two datastores - qcow(id 100) and ssh (id 101) - and the same
> image was created for both.
> The problem is that the running VMs are always placed in
> /var/lib/one/datastores/0.
> Since the shared FS is mounted in that directory, when we deploy a VM
> using the image on the ssh datastore, the image is transferred over ssh
> but still ends up on the shared FS.
>
> We would like to have two directories for running VMs. This way we could
> deploy on the shared FS on /var/lib/one/datastores/0 and use another
> directory (maybe /var/lib/one/datastores/101) for local images.
>
> Any solution?
>
> Thanks a lot.
> Paulo Antonio Leal Rego
> Federal University of Ceara - Brazil
>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
