<div style="line-height:1.7;color:#000000;font-size:14px;font-family:arial">Hi,<div>i use opennebula4.2 install on centos6.4 and has two esxi hypervisor.(10.24.101.72 and 10.24.101.73)</div><div>i have change my front-end's oned.conf like</div><div style="color: rgb(255, 0, 0);"><div style="color: rgb(255, 0, 0);">HOST_HOOK = [</div><div style="color: rgb(255, 0, 0);"> name = "error",</div><div style="color: rgb(255, 0, 0);"> on = "ERROR",</div><div style="color: rgb(255, 0, 0);"> command = "ft/host_error.rb",</div><div style="color: rgb(255, 0, 0);"> arguments = "$ID -r",</div><div style="color: rgb(255, 0, 0);"> remote = "no" ]</div><div><br></div><div style="color: rgb(255, 0, 0);">VM_HOOK = [</div><div style="color: rgb(255, 0, 0);"> name = "on_failure_recreate",</div><div style="color: rgb(255, 0, 0);"> on = "UNKNOWN",</div><div style="color: rgb(255, 0, 0);"> command = "/usr/bin/env onevm delete --recreate",</div><div style="color: rgb(255, 0, 0);"> arguments = "$ID" ]</div></div><div><br></div><div>then i create one vm on 10.24.101.73 , after the vm running, i power off the 10.24.101.73,but the vm dose not migrate to 10.24.101.72</div><div>i also create one cluser one the sunstone which include the two esxis.</div><div>the vm log </div><div><div><br></div><div>Sun Jan 5 13:54:53 2014 [DiM][I]: New VM state is ACTIVE.</div><div>Sun Jan 5 13:54:53 2014 [LCM][I]: New VM state is PROLOG.</div><div>Sun Jan 5 13:55:05 2014 [LCM][I]: New VM state is BOOT</div><div>Sun Jan 5 13:55:05 2014 [VMM][I]: Generating deployment file: /var/lib/one/vms/9/deployment.0</div><div>Sun Jan 5 13:55:05 2014 [VMM][I]: Successfully execute network driver operation: pre.</div><div>Sun Jan 5 13:55:21 2014 [VMM][I]: Successfully execute virtualization driver operation: deploy.</div><div>Sun Jan 5 13:55:21 2014 [VMM][I]: Successfully execute network driver operation: post.</div><div>Sun Jan 5 13:55:21 2014 [LCM][I]: New VM state is RUNNING</div><div>Sun Jan 5 13:58:03 2014 [LCM][I]: New VM state is UNKNOWN</div><div>Sun Jan 5 13:58:03 2014 [LCM][I]: New VM state is CLEANUP.</div><div>Sun Jan 5 13:58:03 2014 [VMM][I]: Driver command for 9 cancelled</div><div>Sun Jan 5 13:58:03 2014 [DiM][I]: New VM state is PENDING</div><div>Sun Jan 5 13:58:03 2014 [HKM][I]: Success executing Hook: on_failure_recreate: . 
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG I 9 Command execution fail: /var/lib/one/remotes/vmm/vmware/cancel 'one-9' '10.24.101.73' 9 10.24.101.73
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG E 9 cancel: Error executing: virsh -c 'esx://10.24.101.73/?no_verify=1&auto_answer=1' destroy one-9 err: ExitCode: 1
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG I 9 out:
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG I 9 error: internal error HTTP response code 503 for call to 'RetrieveServiceContent'
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG I 9 error: failed to connect to the hypervisor
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG I 9 ExitCode: 1
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG I 9 Failed to execute virtualization driver operation: cancel.
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG I 9 Successfully execute network driver operation: clean.
Sun Jan 5 13:58:06 2014 [VMM][W]: Ignored: LOG I 9 Successfully execute transfer manager driver operation: tm_delete.
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 Command execution fail: /var/lib/one/remotes/tm/vmfs/delete 10.24.101.73:/vmfs/volumes/100/9 9 100
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG E 9 delete: Command "rm -rf /vmfs/volumes/100/9" failed: rm: can't remove '/vmfs/volumes/100/9/disk.0/disk-flat.vmdk': Device or resource busy
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 rm: can't remove '/vmfs/volumes/100/9/disk.0/vmx-one-9-2219911413-1.vswp': Device or resource busy
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 rm: can't remove '/vmfs/volumes/100/9/disk.0/one-9.vmx.lck': Device or resource busy
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 rm: can't remove '/vmfs/volumes/100/9/disk.0/one-9-845128f5.vswp': Device or resource busy
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 rm: can't remove '/vmfs/volumes/100/9/disk.0': Directory not empty
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 rm: can't remove '/vmfs/volumes/100/9/disk.1': Device or resource busy
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 rm: can't remove '/vmfs/volumes/100/9': Directory not empty
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG E 9 Error deleting /vmfs/volumes/100/9
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 ExitCode: 1
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: LOG I 9 Failed to execute transfer manager driver operation: tm_delete.
Sun Jan 5 13:58:07 2014 [VMM][W]: Ignored: CLEANUP SUCCESS 9
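In case it helps, these are the checks I was planning to run next to see whether the "error" HOST_HOOK actually fired and why the recreated VM stays in PENDING. I am assuming the default OpenNebula 4.x paths on the front-end here, so please correct me if these are the wrong places to look:

    # After powering off 10.24.101.73 the host should eventually move to
    # the ERROR state; the HOST_HOOK with on = "ERROR" should only trigger then
    onehost list

    # Look for hook executions in the oned log (default location)
    grep -i "hook" /var/log/one/oned.log

    # The "ft/host_error.rb" path in oned.conf is relative to the hooks
    # directory, so the script should exist here
    ls /var/lib/one/remotes/hooks/ft/host_error.rb

    # Check why the scheduler does not place VM 9 on 10.24.101.72
    onevm show 9
    tail /var/log/one/sched.log

Is this the right way to confirm the fault-tolerance hook is wired up, or is something else needed for the VM to be redeployed on the other ESXi host?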