[one-users] Using shared fs and ssh TM

Javier Fontan jfontan at opennebula.org
Mon Jul 30 05:35:20 PDT 2012


Then, if the system datastore is going to be local, you have to change the
TM driver from shared to ssh. Use "onedatastore update" to do so.

http://opennebula.org/documentation:rel3.6:system_ds#using_the_ssh_transfer_driver
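
For reference, the change would look roughly like this (a sketch, assuming
the system datastore has ID 0; TM_MAD is the template attribute that
selects the TM driver):

  $ onedatastore update 0               # opens the template in your editor
  TM_MAD = "ssh"

  $ onedatastore show 0 | grep TM_MAD   # verify the change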

On Mon, Jul 30, 2012 at 2:13 PM, Andreas Calvo <andreas.calvo at scytl.com> wrote:
> No, that directory is linked so that it exists only on the node's local
> filesystem, as it is the one that holds the QCOW2 deltas.
> Only the image datastore (/var/lib/one//datastores/1/) is shared.
>
> The main problem is R/W access to the shared filesystem, which is why we
> tried to keep the deltas on the node's local filesystem by linking the
> system datastore to a local path on the node.
> So far so good, except that it cannot do any migration or save.
>
> On 30/07/12 13:50, Javier Fontan wrote:
>
>> The checkpoint should be located at /var/lib/one//datastores/0/5366 on
>> both the node and the frontend. That is the system datastore. As the
>> driver is shared, the TM does not copy the file, since it is supposed to
>> be shared via NFS or a similar shared filesystem. Is
>> /var/lib/one//datastores/0 shared between the frontend and the nodes?
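>>
>> A quick way to check is to compare which filesystem that path lives on
>> from the frontend and from a node:
>>
>>   $ df -h /var/lib/one/datastores/0              # on the frontend
>>   $ ssh <node> df -h /var/lib/one/datastores/0   # on any node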
>>
>> On Mon, Jul 30, 2012 at 12:35 PM, Andreas Calvo <andreas.calvo at scytl.com>
>> wrote:
>>>
>>> Javier,
>>> Where should the checkpoint be located?
>>>
>>> I've checked in /var/lib/one/$VMID, but no checkpoint was found.
>>> Only 4 files are located under this path: context.sh, deployment.0,
>>> transfer.0.prolog, transfer.0.stop
>>>
>>> transfer.0.stop shows:
>>> MV qcow2 cloud02:/var/lib/one//datastores/0/5366/disk.0
>>> opennebula:/var/lib/one/datastores/0/5366/disk.0 5366 1
>>> MV shared cloud02:/var/lib/one//datastores/0/5366
>>> opennebula:/var/lib/one/datastores/0/5366 5366 0
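>>>
>>> (My reading of the format, inferred from those two lines and not
>>> verified:
>>>
>>>   MV <tm driver> <src host:path> <dst host:path> <vm id> <datastore id>
>>>
>>> which would match disk.0 belonging to the image datastore (1) and the
>>> VM directory to the system datastore (0).)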
>>>
>>> On the node which ran the VM, there is a directory
>>> /var/lib/one//datastores/0/$VMID with a checkpoint file in it, but I
>>> can't find it on the frontend.
>>> On 27/07/12 18:02, Javier Fontan wrote:
>>>
>>>> Somehow the checkpoint did not get copied. Can you try to stop a VM
>>>> and check if the checkpoint is transferred back to the frontend? Just
>>>> to see if the TM is working correctly.
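>>>>
>>>> Something along these lines should do (a sketch; 5366 is just the VM
>>>> ID from your earlier transfer file):
>>>>
>>>>   $ onevm stop 5366
>>>>   $ ls -l /var/lib/one/datastores/0/5366/checkpoint   # on the frontend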
>>>>
>>>> On Fri, Jul 27, 2012 at 11:49 AM, Andreas Calvo
>>>> <andreas.calvo at scytl.com>
>>>> wrote:
>>>>>
>>>>> No,
>>>>> the system datastore (where all the QCOW2 deltas are stored) is not
>>>>> shared; it relies on the node's local filesystem.
>>>>> However, datastore 1 (where all images are stored) is stored and
>>>>> shared on all nodes under the same path.
>>>>>
>>>>> To keep the same directory layout, the paths have been symlinked.
>>>>>
>>>>> In this case:
>>>>> /var/lib/one is shared on all nodes
>>>>> /var/lib/one/datastores/0 is linked to /one/datastores/0, which is
>>>>> local
>>>>> /one/datastores/1 is linked to /var/lib/one/datastores/1 (which is
>>>>> shared)
>>>>>
>>>>> If I'm not wrong, the system datastore holds the incremental changes,
>>>>> whereas the other datastores hold the images.
>>>>> On 27/07/12 11:22, Javier Fontan wrote:
>>>>>
>>>>>> Is your system datastore (0) shared and mounted on all your nodes?
>>>>>>
>>>>>> On Wed, Jul 25, 2012 at 3:56 PM, Andreas Calvo
>>>>>> <andreas.calvo at scytl.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> Hello again,
>>>>>>> I've tried to reuse the SSH MV script, but it fails.
>>>>>>> Output is:
>>>>>>>
>>>>>>> Wed Jul 25 15:51:59 2012 [VMM][I]: Command execution fail:
>>>>>>> /var/tmp/one/vmm/kvm/restore
>>>>>>> /var/lib/one//datastores/0/5220/checkpoint
>>>>>>> cloud13 5220 cloud13
>>>>>>> Wed Jul 25 15:51:59 2012 [VMM][E]: restore: Command "virsh --connect
>>>>>>> qemu:///system restore /var/lib/one//datastores/0/5220/checkpoint"
>>>>>>> failed:
>>>>>>> error: Failed to restore domain from
>>>>>>> /var/lib/one//datastores/0/5220/checkpoint
>>>>>>> Wed Jul 25 15:51:59 2012 [VMM][I]: error: Failed to create file
>>>>>>> '/var/lib/one//datastores/0/5220/checkpoint': No such file or
>>>>>>> directory
>>>>>>> Wed Jul 25 15:51:59 2012 [VMM][E]: Could not restore from
>>>>>>> /var/lib/one//datastores/0/5220/checkpoint
>>>>>>> Wed Jul 25 15:51:59 2012 [VMM][I]: ExitCode: 1
>>>>>>> Wed Jul 25 15:51:59 2012 [VMM][I]: Failed to execute virtualization
>>>>>>> driver
>>>>>>> operation: restore.
>>>>>>> Wed Jul 25 15:51:59 2012 [VMM][E]: Error restoring VM: Could not
>>>>>>> restore
>>>>>>> from /var/lib/one//datastores/0/5220/checkpoint
>>>>>>> Wed Jul 25 15:51:59 2012 [DiM][I]: New VM state is FAILED
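>>>>>>>
>>>>>>> The error suggests the checkpoint (or its directory) is simply not
>>>>>>> present on the host doing the restore; a quick check (cloud13 is the
>>>>>>> host from the log above):
>>>>>>>
>>>>>>>   $ ssh cloud13 ls -l /var/lib/one/datastores/0/5220/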
>>>>>>>
>>>>>>>
>>>>>>> Any thoughts?
>>>>>>> I have to check, but I expect the same problem when
>>>>>>> stopping/resuming VMs.
>>>>>>>
>>>>>>> On 11/07/12 18:38, Javier Fontan wrote:
>>>>>>>
>>>>>>> You are right. I had overlooked the driver. In qcow the mv driver is
>>>>>>> a dummy, as it expects the qcow image to be shared. You can just copy
>>>>>>> the mv script from the ssh TM to the qcow remotes directory. I have
>>>>>>> not tested that, but it should work. The qcow image will be moved to
>>>>>>> the frontend on stop and back to a node on resume. The backing
>>>>>>> storage path should be the same in your setup.
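>>>>>>>
>>>>>>> The copy itself is a one-liner (paths assume the usual
>>>>>>> self-contained layout under /var/lib/one; adjust to your install):
>>>>>>>
>>>>>>>   $ cp /var/lib/one/remotes/tm/ssh/mv /var/lib/one/remotes/tm/qcow2/mv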
>>>>>>>
>>>>>>> On Wed, Jul 11, 2012 at 6:07 PM, Andreas Calvo
>>>>>>> <andreas.calvo at scytl.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> The shared storage is mounted in the same place on all the nodes.
>>>>>>>
>>>>>>> The directory structure is as follows:
>>>>>>> /var/lib/one/datastores (shared storage)
>>>>>>> -- 0 -> link to /one/datastores/0
>>>>>>> -- 1
>>>>>>> -- 100
>>>>>>>
>>>>>>> /one/datastores/ (local storage)
>>>>>>> -- 0
>>>>>>> -- 1 -> link to /var/lib/one/datastores/1
>>>>>>> -- 100 -> link to /var/lib/one/datastores/100
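>>>>>>>
>>>>>>> For clarity, the links were created roughly like this (the local
>>>>>>> /one/datastores tree exists on every node, while the link inside the
>>>>>>> shared /var/lib/one/datastores only needs to be made once):
>>>>>>>
>>>>>>>   $ mkdir -p /one/datastores/0
>>>>>>>   $ ln -s /one/datastores/0 /var/lib/one/datastores/0
>>>>>>>   $ ln -s /var/lib/one/datastores/1 /one/datastores/1
>>>>>>>   $ ln -s /var/lib/one/datastores/100 /one/datastores/100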
>>>>>>>
>>>>>>> When a virtual machine is stopped, its delta is stored in
>>>>>>> /one/datastores/0/$VMID.
>>>>>>> When another node tries to resume the virtual machine, it expects to
>>>>>>> find the delta (and some other files, like the checkpoint) in the
>>>>>>> same place, but, as they are not on its local disk, it fails.
>>>>>>>
>>>>>>> The same thing happens when a virtual machine is migrated.
>>>>>>> I think the TM should be tweaked to be part qcow and part SSH.
>>>>>>>
>>>>>>> On 11/07/12 17:51, Javier Fontan wrote:
>>>>>>>
>>>>>>> If the shared datastore is mounted in the same place on both nodes,
>>>>>>> there won't be any problem. The base image will be accessible from
>>>>>>> both nodes (shared), and the qcow delta is moved to the host.
>>>>>>>
>>>>>>> On Wed, Jul 11, 2012 at 12:59 PM, Andreas Calvo
>>>>>>> <andreas.calvo at scytl.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Javier,
>>>>>>> Thanks for your quick reply!
>>>>>>> The only problem is that when a stopped virtual machine is resumed
>>>>>>> from another host in the cloud (different from the one where it was
>>>>>>> stopped), it fails, as it cannot find the deployed (in this case,
>>>>>>> linked) image and the deltas.
>>>>>>>
>>>>>>> On Tue, 10 Jul 2012 11:46:55 CEST, Javier Fontan wrote:
>>>>>>>
>>>>>>> Sure, it is possible. You need to have the datastore that holds the
>>>>>>> images shared and mounted on every node. Then you'll make the system
>>>>>>> datastore (0) local and configure it to use the qcow TM drivers. Make
>>>>>>> sure that /var/lib/one/datastores/0 is not mounted from the shared
>>>>>>> storage on the nodes, as this is where the deltas will be written.
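>>>>>>>
>>>>>>> As a sketch, the system datastore template then ends up with
>>>>>>>
>>>>>>>   TM_MAD = "qcow2"
>>>>>>>
>>>>>>> (set via "onedatastore update 0"), and on each node
>>>>>>> "df /var/lib/one/datastores/0" should report a local filesystem
>>>>>>> rather than the shared mount.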
>>>>>>>
>>>>>>> On Mon, Jul 9, 2012 at 11:59 AM, Andreas Calvo
>>>>>>> <andreas.calvo at scytl.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Hello,
>>>>>>> Is there any way to mix TM drivers in an environment?
>>>>>>> We currently have a shared FS with GFS2 using qcow, but when a lot of
>>>>>>> VMs are launched, writing changes becomes an I/O bottleneck.
>>>>>>> We were thinking of a mixture where the image is shared and the
>>>>>>> incremental changes (qcow) are written locally.
>>>>>>>
>>>>>>> Is it possible?
>>>>>>>
>>>>>>> Thanks

-- 
Javier Fontán Muiños
Project Engineer
OpenNebula - The Open Source Toolkit for Data Center Virtualization
www.OpenNebula.org | jfontan at opennebula.org | @OpenNebula


