<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<br>
<div class="moz-cite-prefix">On 07/02/13 11:49, Jürgen Weber wrote:<br>
</div>
<blockquote cite="mid:5112FA1D.20706@theiconic.com.au" type="cite">I
can now get new VMs working on my new HOST. Now that I know this
functionality works, I am back to attempting to migrate. To test
this I am attempting to start a new VM on the MASTER server, which
looks impossible. Why? Because if I have both hosts in the same
cluster it will automatically deploy on the HOST, and if I remove
them from the cluster it will not deploy at all; it just sits
there on PENDING.<br>
<br>
If I attempt to manipulate this using REQUIREMENTS="NAME =
\"MASTER*\"", it sits there doing nothing with a state of
PENDING when both are in the same cluster (ID_100). What I am
noticing is that OpenNebula/SunStone is inserting a default
requirement, "CLUSTER_ID = 100", into the template. How can I
turn that off? How should I handle this? I do not require
clustering; I would just like the MASTER and NODE to host
VMs.</blockquote>
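<br>
For reference, this is roughly the scheduling part of the template I
am working with. The REQUIREMENTS line is the one I added; the
CLUSTER_ID line is a sketch of what SunStone inserts by itself:<br>
<pre>
# VM template fragment (sketch) -- pin the VM to hosts named MASTER*
REQUIREMENTS = "NAME = \"MASTER*\""

# ...and a sketch of the default requirement SunStone adds on its own:
# REQUIREMENTS = "CLUSTER_ID = 100"
</pre>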
<br>
Ok, I had no CPU resources left; that is why I could not deploy
another VM to the MASTER. I have identified and resolved all of
these issues. One question about the concept of clusters: if I
remove the created cluster under Infrastructure --> Clusters in
SunStone, will this affect anything else? Is it just a tag/grouping
mechanism? I ask because when I hit 'delete' in SunStone it says
"This will delete the selected VMs from the database". That of
course scares me, but I think it's wrong.<br>
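<br>
From the CLI side, this is what I was planning to try (a sketch,
assuming onecluster behaves like the other one* commands; the host
names below are just examples from my setup):<br>
<pre>
# move the hosts out of cluster 100 first, then drop the empty cluster
onecluster delhost 100 HOST.matrix
onecluster delhost 100 MASTER.matrix   # placeholder name for my master host
onecluster delete 100
</pre>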
<br>
Now I am back to where I started, I am unable to migrate.<br>
<br>
Thu Feb 7 13:47:46 2013 [VMM][I]: Successfully execute network
driver operation: pre.<br>
Thu Feb 7 13:47:46 2013 [VMM][I]: Command execution fail:
/var/tmp/one/vmm/kvm/restore
/var/lib/one//datastores/0/191/checkpoint HOST.matrix 191
HOST.matrix<br>
Thu Feb 7 13:47:46 2013 [VMM][E]: restore: Command "virsh --connect
qemu:///system restore /var/lib/one//datastores/0/191/checkpoint"
failed: error: Failed to restore domain from
/var/lib/one//datastores/0/191/checkpoint<br>
Thu Feb 7 13:47:46 2013 [VMM][I]: error: Failed to create file
'/var/lib/one//datastores/0/191/checkpoint': Operation not permitted<br>
Thu Feb 7 13:47:46 2013 [VMM][E]: Could not restore from
/var/lib/one//datastores/0/191/checkpoint<br>
Thu Feb 7 13:47:46 2013 [VMM][I]: ExitCode: 1<br>
Thu Feb 7 13:47:46 2013 [VMM][I]: Failed to execute virtualization
driver operation: restore.<br>
Thu Feb 7 13:47:46 2013 [VMM][E]: Error restoring VM: Could not
restore from /var/lib/one//datastores/0/191/checkpoint<br>
Thu Feb 7 13:47:46 2013 [DiM][I]: New VM state is FAILED<br>
<br>
Some questions: this restore command, what machine does it run on?
The machine the VM is on, or the machine it is migrating to? (So in
my case MASTER being the machine the VM is on, and HOST being the
machine we are migrating to.)<br>
<br>
As explained in my other emails, I can start up a new VM without
issues, so it's just this checkpoint/restore process that breaks. <br>
<br>
I can run the command manually as root and as the oneadmin user on
the HOST.<br>
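<br>
Concretely, the manual check I ran on the HOST (as root, and again
as oneadmin) was just the command from the log:<br>
<pre>
# run on the HOST, as root and again as oneadmin
virsh --connect qemu:///system restore /var/lib/one//datastores/0/191/checkpoint
</pre>
That works by hand, which is why the "Operation not permitted" error
when OpenNebula drives it confuses me.<br>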
<br>
So where do I look? What is my problem?<br>
<br>
Thanks<br>
<br>
Jurgen<br>
<pre class="moz-signature" cols="72">--
Jürgen Weber
Systems Engineer
IT Infrastructure Team Leader
THE ICONIC | E <a class="moz-txt-link-abbreviated" href="mailto:jurgen.weber@theiconic.com.au">jurgen.weber@theiconic.com.au</a> | <a class="moz-txt-link-abbreviated" href="http://www.theiconic.com.au">www.theiconic.com.au</a></pre>
</body>
</html>