There was a bug in the driver that caused errors when deploying several VMs at the same time: while a VM is still being created it can show up in the vmFolder listing with its config not yet populated, so calling config.uuid on it raises the "undefined method `uuid' for nil:NilClass" error seen in your deploy log (a short standalone sketch at the end of this message illustrates this). To fix it, edit /var/lib/one/remotes/vmm/vcenter/vcenter_driver.rb at line 120 and change this code:

    def find_vm_template(uuid)
        vms = @dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine)

        return vms.find{ |v| v.config.uuid == uuid }
    end

to this one:

    def find_vm_template(uuid)
        vms = @dc.vmFolder.childEntity.grep(RbVmomi::VIM::VirtualMachine)

        return vms.find{ |v| v.config && v.config.uuid == uuid }
    end

We are still looking into the problem when deleting several VMs.

Thanks for telling us.

On Thu Nov 13 2014 at 12:59:55 PM Javier Fontan <jfontan@opennebula.org> wrote:

Hi,

We have opened an issue to track this problem:

http://dev.opennebula.org/issues/3334

Meanwhile, you can reduce the number of actions sent at once by lowering the -t parameter (number of driver threads) of the VM driver in /etc/one/oned.conf. For example:

    VM_MAD = [
        name       = "vcenter",
        executable = "one_vmm_sh",
        arguments  = "-p -t 2 -r 0 vcenter -s sh",
        type       = "xml" ]

Cheers

On Wed Nov 12 2014 at 5:40:00 PM Sebastiaan Smit <bas@echelon.nl> wrote:

Hi list,

We're testing the vCenter functionality in version 4.10 and are seeing some strange behaviour when doing bulk actions.

Deleting VMs sometimes leaves stray VMs on our cluster. We see the following in the VM log:

Sun Nov 9 15:51:34 2014 [Z0][LCM][I]: New VM state is RUNNING
Wed Nov 12 17:30:36 2014 [Z0][LCM][I]: New VM state is CLEANUP.
Wed Nov 12 17:30:36 2014 [Z0][VMM][I]: Driver command for 60 cancelled
Wed Nov 12 17:30:36 2014 [Z0][DiM][I]: New VM state is DONE
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Command execution fail: /var/lib/one/remotes/vmm/vcenter/cancel '423cdcae-b6b3-07c1-def6-96b9f3f4b7b3' 'demo-01' 60 demo-01
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Cancel of VM 423cdcae-b6b3-07c1-def6-96b9f3f4b7b3 on host demo-01 failed due to "ManagedObjectNotFound: The object has already been deleted or has not been completely created"
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 ExitCode: 255
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Failed to execute virtualization driver operation: cancel.
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: LOG I 60 Successfully execute network driver operation: clean.
Wed Nov 12 17:30:41 2014 [Z0][VMM][W]: Ignored: CLEANUP SUCCESS 60

We see a different failure when bulk-creating VMs (20+ at a time):

Sun Nov 9 16:01:34 2014 [Z0][DiM][I]: New VM state is ACTIVE.
Sun Nov 9 16:01:34 2014 [Z0][LCM][I]: New VM state is PROLOG.
Sun Nov 9 16:01:34 2014 [Z0][LCM][I]: New VM state is BOOT
Sun Nov 9 16:01:34 2014 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/81/deployment.0
Sun Nov 9 16:01:34 2014 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Sun Nov 9 16:01:36 2014 [Z0][VMM][I]: Command execution fail: /var/lib/one/remotes/vmm/vcenter/deploy '/var/lib/one/vms/81/deployment.0' 'demo-01' 81 demo-01
Sun Nov 9 16:01:36 2014 [Z0][VMM][I]: Deploy of VM 81 on host demo-01 with /var/lib/one/vms/81/deployment.0 failed due to "undefined method `uuid' for nil:NilClass"
Sun Nov 9 16:01:36 2014 [Z0][VMM][I]: ExitCode: 255
Sun Nov 9 16:01:36 2014 [Z0][VMM][I]: Failed to execute virtualization driver operation: deploy.
Sun Nov 9 16:01:36 2014 [Z0][VMM][E]: Error deploying virtual machine
Sun Nov 9 16:01:36 2014 [Z0][DiM][I]: New VM state is FAILED
Wed Nov 12 17:30:19 2014 [Z0][DiM][I]: New VM state is DONE.
Wed Nov 12 17:30:19 2014 [Z0][LCM][E]: epilog_success_action, VM in a wrong state

I think these have two different root causes. The cluster is not under load.

Has anyone else seen this behaviour?

Best regards,
--
Sebastiaan Smit
Echelon BV

E: bas@echelon.nl
W: www.echelon.nl
T: (088) 3243566 (changed number)
T: (088) 3243505 (service desk)
F: (053) 4336222

KVK: 06055381

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
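PS: here is the minimal standalone Ruby sketch mentioned above. It is only an illustration, not driver code: the VM and Config structs stand in for RbVmomi::VIM::VirtualMachine objects, and the UUID strings are placeholders. It shows why v.config.uuid blows up on a half-created VM and how the v.config && guard skips it:

    # Illustration only (not the driver): VM and Config are stand-ins for
    # RbVmomi::VIM::VirtualMachine objects and their config property.
    VM     = Struct.new(:config)
    Config = Struct.new(:uuid)

    vms = [
      VM.new(Config.new('11111111-aaaa-bbbb-cccc-222222222222')), # fully created VM
      VM.new(nil)                                                 # VM still being created: config not set yet
    ]

    # Old lookup: raises NoMethodError on the half-created VM
    # (the "undefined method `uuid' for nil:NilClass" seen in the log).
    begin
      vms.find { |v| v.config.uuid == 'some-other-uuid' }
    rescue NoMethodError => e
      puts e.message
    end

    # Patched lookup: `v.config &&` short-circuits on nil configs,
    # so the half-created VM is simply skipped.
    p vms.find { |v| v.config && v.config.uuid == 'some-other-uuid' }   # => nil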