[one-users] Re; Snapshots ...
Gareth Bult
gareth at linux.co.uk
Wed Nov 20 05:12:27 PST 2013
Ok, I see that now .. interesting.
I'm wondering how to handle this .. interesting. vdc-store snapshots are essentially catalogue entries rather
than files, so although I can call CPDS to make the snapshot, there doesn't seem to be any facility
to use a snapshot that isn't actually a stand-alone VM store.
Specifically, snapshot delete and snapshot rollback would be the operations I'd most need.
Also, my idea of snapshots (given there is no overhead) would be to snap every hour, in which case
the "Images" screen in Sunstone would fill up very quickly if, for example, I were to create a DB entry
for each snapshot.
Any ideas on how best to approach this?
Gareth.
--
Gareth Bult
“The odds of hitting your target go up dramatically when you aim at it.”
----- Original Message -----
From: "Ruben S. Montero" <rsmontero at opennebula.org>
To: "Gareth Bult" <gareth at linux.co.uk>
Cc: "users" <users at lists.opennebula.org>
Sent: Wednesday, 20 November, 2013 12:36:57 PM
Subject: Re: [one-users] Re; Snapshots ...
Oops, my mistake: the Storage tab is what I meant... When you save a disk you can do it live or deferred. Deferred copies the disk once the VM is shut down; live calls the cpds script to copy the disk to the datastore immediately.
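The live/deferred split described above can be sketched as follows; the function name and messages are illustrative, not OpenNebula code:

```shell
# Sketch of the save-disk flow described above; function name and
# messages are illustrative, not part of OpenNebula itself.
save_disk() {
    vm_state="$1"
    if [ "$vm_state" = "RUNNING" ]; then
        # live: the TM cpds script copies the disk to the datastore now
        echo "live save via cpds"
    else
        # deferred: the copy happens once the VM is shut down
        echo "deferred save on shutdown"
    fi
}
save_disk RUNNING
```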
On Wed, Nov 20, 2013 at 1:41 PM, Gareth Bult < gareth at linux.co.uk > wrote:
Ok, I'd completely missed the capacity / cpds bit; indeed I can't see any other snapshot option in the GUI,
so I'm going to have to go back and read some more .. however
Re: the bug, it looks like rendering; I've gone back to it now and it looks fine ...
# Host Action Reason Chg time Total time Prolog time
0 node3 live-migrate USER 19:08:46 19/11/2013 0d 00:01 0d 00:00
1 node2 live-migrate USER 19:09:44 19/11/2013 0d 00:03 0d 00:00
2 node1 live-migrate USER 19:12:56 19/11/2013 0d 00:01 0d 00:00
3 node3 live-migrate USER 19:13:59 19/11/2013 0d 14:47 0d 00:00
4 node1 none NONE 10:00:15 20/11/2013 0d 02:17 0d 00:00
I must admit to being a little confused: the "Capacity" tab on my VMs has only a "Resize" button, which is inactive (?)
(cpds sounds ideal if I can find out how to use it .. :) )
From: "Ruben S. Montero" < rsmontero at opennebula.org >
To: "Gareth Bult" < gareth at linux.co.uk >
Cc: "users" < users at lists.opennebula.org >
Sent: Wednesday, 20 November, 2013 11:20:09 AM
Subject: Re: [one-users] Re; Snapshots ...
Hi
As you know, there are two types of snapshots: system and disk. Disk-only snapshots are handled through the capacity tab in Sunstone and eventually by the CPDS script in the TM. System snapshots are handled by the snapshot script of the VMM.
There are no plans to redesign this: disk snapshotting can use a custom storage facility through "cpds", but system snapshots will be handled through the hypervisor...
Note that system snapshots also require checkpointing the memory state of the system; that's the reason for requiring qcow2 in KVM. So I am not really sure how these two processes, memory and disk snapshot, can be orchestrated outside of libvirt; maybe libvirt hooks?
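For reference, libvirt can already pair the memory checkpoint with external disk snapshots in a single call. A sketch, where the domain name and snapshot paths are assumptions, and the command is only echoed rather than run:

```shell
# Illustrative only: the domain name and snapshot paths are assumptions.
DOMAIN=one-81
MEMFILE=/var/lib/one/snaps/$DOMAIN.mem
# External system snapshot: memory state goes to MEMFILE and the disk
# delta to a new external overlay file, rather than inside the image.
CMD="virsh snapshot-create-as $DOMAIN hourly --memspec $MEMFILE,snapshot=external --diskspec vda,snapshot=external"
echo "$CMD"
```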
About the bug: I am wondering if this is a rendering issue or something deeper. Could you send the output of:
onevm show <VM_ID> -x
Cheers
Ruben
On Wed, Nov 20, 2013 at 11:19 AM, Gareth Bult < gareth at linux.co.uk > wrote:
Hi,
It seems that the snapshot facility relies on the "libvirt" snapshot facility, which at the moment
relies entirely upon the QEMU snapshot facility, which means you can really only snapshot QCOW
images. Before I start to modify remotes/vmm/kvm/snapshot*, is there a way, or are there any plans,
to move snapshot functionality to the drivers, such that we can use a custom snapshot facility on
a per-storage-facility basis?
Case in point: at the moment there seems to be a script which does this:
virsh --connect $LIBVIRT_URI snapshot-create-as $DOMAIN (which is QCOW2 only?)
I would like it to be able to handle this:
vdc-tool -n ON_IM_81 --mksnap "First Snapshot"
> :: Snapshot created [First Snapshot]
vdc-tool -n ON_IM_81 --lssnap
+----------+--------------------------------------+----------+----------+----------------------+
| UniqueID | Snapshot Name | Size | Blocks | Created@ |
+----------+--------------------------------------+----------+----------+----------------------+
| 1 | First Snapshot | 662.53M | 370404 | 20 Oct 2013 09:55:08 |
| | 6c0ca1c0-9d62-474a-83fc-369fa01d4068 | | | Root: 1196295401 |
+----------+--------------------------------------+----------+----------+----------------------+
Number of snapshot blocks used (370404) taking ( 662.53M)
Current committed blocks = 652079
Current blocks (0 ) and current size ( 0.00b)
Obviously the vdc-tool output can be tweaked as necessary. I would like to be able to integrate
this into libvirt, but the snapshot API in libvirt appears still to be on the drawing board, whereas
I already have a working / usable snapshot facility ... any thoughts?
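As a starting point, the remotes/vmm/kvm/snapshot* modification could dispatch per storage backend. A sketch, in which vdc_managed() is a hypothetical placeholder check and the commands are echoed rather than executed; the ON_IM_<id> naming follows the vdc-tool example above:

```shell
#!/bin/sh
# Sketch of a per-storage dispatch for remotes/vmm/kvm/snapshot_create.
# vdc_managed() is a hypothetical placeholder for "is this disk backed
# by a vdc-store?"; replace with a real check for your setup.

vdc_managed() {
    # Placeholder logic: treat "one-<id>" domains as vdc-store backed
    case "$1" in one-*) return 0 ;; *) return 1 ;; esac
}

snapshot_create() {
    domain="$1"
    if vdc_managed "$domain"; then
        # custom path: snapshot via vdc-tool instead of QEMU/libvirt
        echo "vdc-tool -n ON_IM_${domain#one-} --mksnap hourly"
    else
        # stock behaviour: internal qcow2 snapshot via libvirt
        echo "virsh --connect qemu:///system snapshot-create-as $domain"
    fi
}

snapshot_create one-81
```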
Incidentally, I think I just spotted a bug on the Placement log;
# Host Action Reason Chg time Total time Prolog time
0 node3 live-migrate USER 19:08:46 19/11/2013 0d 00:01 0d 00:00
1 node2 live-migrate USER 19:09:44 19/11/2013 0d 00:03 0d 00:00
2 node1 live-migrate USER 19:12:56 19/11/2013 0d 00:01 0d 00:00
3 node3 live-migrate USER 19:13:59 19/11/2013 0d 14:47 0d 00:00
4 node1 none NONE 10:00:15 20/11/2013 30d 23:55 0d 00:00
We've accumulated 30d of total time overnight?!
(yes, the clocks are in sync ...)
_______________________________________________
Users mailing list
Users at lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula