Thanks Tino,<div>That is probably more a problem with libvirt, since the VMware IM Driver uses it to access information about the hosts.<div>To get information about the hosts, OpenNebula launches a virsh command and parses the output.</div>
<div>The script that does this work is located in $ONE_LOCATION/lib/remotes/im and the output of the virsh command is:</div><div><div><font class="Apple-style-span" face="'courier new', monospace">oneadmin@custom2:~/lib/remotes/im$ virsh -c esx://<a href="http://custom6.sns.it/?no_verify=1">custom6.sns.it/?no_verify=1</a> nodeinfo</font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">Enter username for <a href="http://custom6.sns.it">custom6.sns.it</a> [root]: </font></div><div><font class="Apple-style-span" face="'courier new', monospace">Enter root's password for <a href="http://custom6.sns.it">custom6.sns.it</a>: </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">CPU model: AMD Opteron(tm) Processor 246</font></div><div><font class="Apple-style-span" face="'courier new', monospace">CPU(s): 2</font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">CPU frequency: 1992 MHz</font></div><div><font class="Apple-style-span" face="'courier new', monospace">CPU socket(s): 2</font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">Core(s) per socket: 1</font></div><div><font class="Apple-style-span" face="'courier new', monospace">Thread(s) per core: 1</font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">NUMA cell(s): 2</font></div><div><font class="Apple-style-span" face="'courier new', monospace">Memory size: 2096460 kB</font></div>
</div><div><font class="Apple-style-span" face="'courier new', monospace"><br></font></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif">I always get the same output, no matter how many VMs are running on the cluster node.</font></div>
<div>That is why OpenNebula returns output like this:</div><div><br></div><div><div><font class="Apple-style-span" face="'courier new', monospace">oneadmin@custom2:~/var/96$ onehost show 1</font></div><div>
<font class="Apple-style-span" face="'courier new', monospace">HOST 1 INFORMATION </font></div><div><font class="Apple-style-span" face="'courier new', monospace">ID : 1 </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">NAME : <a href="http://custom6.sns.it">custom6.sns.it</a> </font></div><div><font class="Apple-style-span" face="'courier new', monospace">CLUSTER : default </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">STATE : MONITORING </font></div><div><font class="Apple-style-span" face="'courier new', monospace">IM_MAD : im_vmware </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">VM_MAD : vmm_vmware </font></div><div><font class="Apple-style-span" face="'courier new', monospace">TM_MAD : tm_vmware </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace"><br></font></div><div><font class="Apple-style-span" face="'courier new', monospace">HOST SHARES </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">MAX MEM : 2096460 </font></div><div><font class="Apple-style-span" face="'courier new', monospace">USED MEM (REAL) : 0 </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">USED MEM (ALLOCATED) : 0 </font></div><div><font class="Apple-style-span" face="'courier new', monospace">MAX CPU : 200 </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">USED CPU (REAL) : 0 </font></div><div><font class="Apple-style-span" face="'courier new', monospace">USED CPU (ALLOCATED) : 0 </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">RUNNING VMS : 1 </font></div><div><font class="Apple-style-span" face="'courier new', monospace"><br></font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">MONITORING INFORMATION </font></div><div><font class="Apple-style-span" face="'courier new', monospace">CPUSPEED=1992</font></div>
<div><font class="Apple-style-span" face="'courier new', monospace">HYPERVISOR=vmware</font></div><div><font class="Apple-style-span" face="'courier new', monospace">TOTALCPU=200</font></div><div><font class="Apple-style-span" face="'courier new', monospace">TOTALMEMORY=2096460</font></div>
</div><div><br></div><div>OpenNebula polls the cluster nodes periodically but obtains only the hypervisor type, CPU frequency, total CPU, and total memory size.</div><div>The limitation here is caused by libvirt (virsh), which is unable to return more information about actual resource usage.</div>
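<div>As an illustration of how such a script can turn the virsh output above into the monitoring attributes, here is a minimal Ruby sketch (an assumption of the approach, not the actual VMware IM driver code; the attribute names follow the monitoring output shown above):</div>

```ruby
# Sketch of parsing `virsh nodeinfo` output into the monitoring
# attributes shown above. Illustration only -- the real VMware IM
# script may differ in details.
def parse_nodeinfo(text)
  info = {}
  text.each_line do |line|
    key, value = line.split(':', 2)
    info[key.strip] = value.strip if value
  end

  {
    'HYPERVISOR'  => 'vmware',
    'CPUSPEED'    => info['CPU frequency'].to_i,  # "1992 MHz" -> 1992
    'TOTALCPU'    => info['CPU(s)'].to_i * 100,   # 100% per CPU, 2 -> 200
    'TOTALMEMORY' => info['Memory size'].to_i     # "2096460 kB" -> 2096460
  }
end

sample = <<~OUT
  CPU model:           AMD Opteron(tm) Processor 246
  CPU(s):              2
  CPU frequency:       1992 MHz
  Memory size:         2096460 kB
OUT

attrs = parse_nodeinfo(sample)
puts attrs.map { |k, v| "#{k}=#{v}" }
```

<div>Note that every value here comes from static hardware data, which is why the reported figures never change no matter how many VMs are running.</div>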
<div><br></div><div>The integration of OpenNebula with Xen can rely on SSH access to the cluster nodes.</div><div>The IM Driver for Xen hypervisors launches xentop on every cluster node to get information about the VMs and then parses the output.</div>
<div>As an example, here is the output of the xentop and xm commands (some information omitted):</div><div><div><font class="Apple-style-span" face="'courier new', monospace" size="1">custom9:/ # xentop -bi2</font></div><div>
<font class="Apple-style-span" face="'courier new', monospace" size="1"> NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k)</font></div><div><font class="Apple-style-span" face="'courier new', monospace" size="1"> Domain-0 -----r 102 0.0 1930260 93.7 no limit n/a 2 0 0 0 </font></div>
<div><font class="Apple-style-span" face="'courier new', monospace" size="1"> NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k)</font></div><div><font class="Apple-style-span" face="'courier new', monospace" size="1"> Domain-0 -----r 102 0.3 1930260 93.7 no limit n/a 2 0 0 0</font></div>
</div><div><font class="Apple-style-span" face="'courier new', monospace" size="1"><br></font></div><div><div><div><div style="font-family: 'courier new', monospace; ">custom9:/ # xm info</div><div style="font-family: 'courier new', monospace; ">
host : custom9</div><div style="font-family: 'courier new', monospace; ">release : 2.6.34.7-0.5-xen</div><div style="font-family: 'courier new', monospace; ">version : #1 SMP 2010-10-25 08:40:12 +0200</div>
<div style="font-family: 'courier new', monospace; ">machine : x86_64</div><div style="font-family: 'courier new', monospace; ">nr_cpus : 2</div><div style="font-family: 'courier new', monospace; ">
nr_nodes : 2</div><div style="font-family: 'courier new', monospace; ">cores_per_socket : 1</div><div style="font-family: 'courier new', monospace; ">threads_per_core : 1</div><div style="font-family: 'courier new', monospace; ">
cpu_mhz : 1991</div><div style="font-family: 'courier new', monospace; ">[...]</div><div style="font-family: 'courier new', monospace; ">total_memory : 2011</div><div style="font-family: 'courier new', monospace; ">
free_memory : 135</div><div style="font-family: 'courier new', monospace; ">free_cpus : 0</div><div style="font-family: 'courier new', monospace; ">max_free_memory : 1508</div>
<div style="font-family: 'courier new', monospace; ">max_para_memory : 1504</div><div style="font-family: 'courier new', monospace; ">max_hvm_memory : 1492</div><div style="font-family: 'courier new', monospace; ">
[...]</div><div style="font-family: 'courier new', monospace; "><br></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif">The script $ONE_LOCATION/lib/remotes/im/xen.d/xen.rb parses those two outputs and retrieves data about memory, CPU, and network usage.</font></div>
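<div>For comparison, here is a minimal Ruby sketch of turning one xentop data row into per-VM usage figures (in the spirit of xen.rb, not its actual code; the column names are my own labels for the layout shown above):</div>

```ruby
# Sketch: parse one xentop data row into per-VM usage figures.
# Column layout as in the xentop output above; this mirrors the idea
# of xen.rb, not its actual implementation.
XENTOP_COLS = %w[name state cpu_sec cpu_pct mem_k mem_pct
                 maxmem_k maxmem_pct vcpus nets nettx_k netrx_k]

def parse_xentop_row(line)
  # "no limit" is a single MAXMEM(k) field; join it so split stays aligned
  fields = line.gsub('no limit', 'no-limit').split
  Hash[XENTOP_COLS.zip(fields)]
end

row = parse_xentop_row(
  '  Domain-0 -----r 102 0.3 1930260 93.7 no limit n/a 2 0 0 0')
puts "USEDMEMORY=#{row['mem_k'].to_i} USEDCPU=#{row['cpu_pct'].to_f}"
```

<div>Per-VM figures like these are exactly what the VMware IM driver cannot obtain through virsh, which only exposes static node information.</div>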
</div></div></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif"><br></font></div><div><font class="Apple-style-span" face="arial, helvetica, sans-serif">I think the VMware drivers are of limited use if they cannot provide the level of information achievable with Xen hypervisors and OpenNebula; I have verified the effects of this issue in my own tests.</font></div>
<div><font class="Apple-style-span" face="arial, helvetica, sans-serif"><br></font></div><div>
<div class="gmail_quote">On Tue, Feb 8, 2011 at 6:34 PM, Tino Vazquez <span dir="ltr"><<a href="mailto:tinova@opennebula.org" target="_blank">tinova@opennebula.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Luigi,<br>
<br>
There is a bug in the IM driver for VMware: it is not reporting the free<br>
memory at all. I've opened a ticket to keep track of the issue [1]; it<br>
will be solved in the next release.<br>
<br>
Regards,<br>
<br>
-Tino<br>
<br>
[1] <a href="http://dev.opennebula.org/issues/481" target="_blank">http://dev.opennebula.org/issues/481</a><br>
<font color="#888888"><br>
--<br>
Constantino Vázquez Blanco, MSc<br>
OpenNebula Major Contributor / Cloud Researcher<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | @tinova79<br>
<br>
<br>
<br>
On Tue, Feb 8, 2011 at 12:56 PM, Luigi Fortunati<br>
</font><div><div></div><div><<a href="mailto:luigi.fortunati@gmail.com" target="_blank">luigi.fortunati@gmail.com</a>> wrote:<br>
> OK, I tried some tests today.<br>
> The hardware/software environment consists of 2 cluster nodes (ESXi 4.1), 2 GB<br>
> of RAM, 2 AMD Opteron 246 processors (2 GHz), and trial-version licenses. The<br>
> OpenNebula installation is self-contained.<br>
> 800 MB of memory is used by the hypervisor itself (that info comes from<br>
> vSphere Client), so only 1.2 GB is free, but OpenNebula seems unaware of<br>
> that :-(<br>
> oneadmin@custom2:/srv/cloud/templates/vm$ onehost list<br>
> ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM<br>
> STAT<br>
> 2 <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> default 0 200 200 200 2G 0K<br>
> on<br>
> 1 <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> default 0 200 200 200 2G 0K<br>
> on<br>
> oneadmin@custom2:/srv/cloud/templates/vm$ onehost show 1<br>
> HOST 1 INFORMATION<br>
><br>
> ID : 1<br>
> NAME : <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a><br>
> CLUSTER : default<br>
> STATE : MONITORED<br>
> IM_MAD : im_vmware<br>
> VM_MAD : vmm_vmware<br>
> TM_MAD : tm_vmware<br>
> HOST SHARES<br>
><br>
> MAX MEM : 2096460<br>
> USED MEM (REAL) : 0<br>
> USED MEM (ALLOCATED) : 0<br>
> MAX CPU : 200<br>
> USED CPU (REAL) : 0<br>
> USED CPU (ALLOCATED) : 0<br>
><br>
> In each test I tried to start 3 VMs using a nonpersistent image. The<br>
> requirements of all three VMs cannot be satisfied by a single cluster<br>
> node.<br>
> FIRST TEST:<br>
> The VM template for the first test is:<br>
> NAME = "Debian Server"<br>
> CPU = 1<br>
> MEMORY = 1024<br>
> OS = [ ARCH = "i686" ]<br>
> DISK = [IMAGE="Debian Server"]<br>
> Only CPU and Memory info.<br>
> Here is the result:<br>
> oneadmin@custom2:/srv/cloud/templates/vm$ onevm list<br>
> ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
> 66 oneadmin Debian S pend 0 0K 00 00:07:47<br>
> 67 oneadmin Debian S pend 0 0K 00 00:07:45<br>
> 68 oneadmin Debian S pend 0 0K 00 00:07:18<br>
> Forever in "pending" state... The VMs don't get scheduled.<br>
> oned.log doesn't report anything but resource polling informational<br>
> messages.<br>
> sched.log repeats this sequence:<br>
> Tue Feb 8 10:02:06 2011 [HOST][D]: Discovered Hosts (enabled): 1 2<br>
> Tue Feb 8 10:02:06 2011 [VM][D]: Pending virtual machines : 66 67 68<br>
> Tue Feb 8 10:02:06 2011 [RANK][W]: No rank defined for VM<br>
> Tue Feb 8 10:02:06 2011 [RANK][W]: No rank defined for VM<br>
> Tue Feb 8 10:02:06 2011 [RANK][W]: No rank defined for VM<br>
> Tue Feb 8 10:02:06 2011 [SCHED][I]: Select hosts<br>
> PRI HID<br>
> -------------------<br>
> Virtual Machine: 66<br>
> Virtual Machine: 67<br>
> Virtual Machine: 68<br>
> SECOND TEST:<br>
> VM template:<br>
> NAME = "Debian Server"<br>
> VCPU = 1<br>
> MEMORY = 1024<br>
> OS = [ ARCH = "i686" ]<br>
> DISK = [IMAGE="Debian Server"]<br>
> Only VCPU and MEMORY info.<br>
> Results:<br>
> oneadmin@custom2:/srv/cloud/templates/vm$ onevm list<br>
> ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
> 76 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:07:40<br>
> 77 oneadmin Debian S runn 0 0K <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> 00 00:07:38<br>
> 78 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:05:58<br>
> Everything seems fine, but it's not: as I said previously, each host<br>
> has only 1.2 GB of memory free, so there should be no room for two VMs on<br>
> the same host.<br>
> oneadmin@custom2:/srv/cloud/templates/vm$ onehost list<br>
> ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM<br>
> STAT<br>
> 2 <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> default 2 200 200 200 2G 0K<br>
> on<br>
> 1 <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> default 1 200 200 200 2G 0K<br>
> on<br>
> Neither the hosts nor the VMs report any useful info on resource usage.<br>
> Logging in to the console of each VM and executing the "free -m" command, I checked<br>
> that every VM has 1 GB of total memory allocated. So I decided to test that GB<br>
> of memory on both VMs at the same time, using the "memtester" utility,<br>
> which allocates a given amount of free memory using malloc and tests it. The<br>
> results reported memory access problems.<br>
> I then went on to check whether OpenNebula and VMware ESXi fail to<br>
> allocate VMs exceeding the resource capacity of the hosts, by starting two<br>
> more VMs (requiring 1 VCPU and 1 GB of memory each).<br>
> Results:<br>
> oneadmin@custom2:~/var/79$ onevm list<br>
> ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
> 76 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:54:47<br>
> 77 oneadmin Debian S runn 0 0K <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> 00 00:54:45<br>
> 78 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:53:05<br>
> 79 oneadmin Debian S boot 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:10:22<br>
> 80 oneadmin Debian S boot 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:09:47<br>
> The new VMs are allocated on the custom7 machine (why???) but remain frozen in<br>
> the "boot" state.<br>
> That is a problem, because those two new VMs should not be allocated to any<br>
> cluster node.<br>
> THIRD TEST:<br>
> Here I followed Ruben's suggestion...<br>
> The VM template:<br>
> oneadmin@custom2:/srv/cloud/templates/vm$ cat debian.vm<br>
> NAME = "Debian Server"<br>
> CPU = 1<br>
> VCPU = 1<br>
> MEMORY = 1024<br>
> OS = [ ARCH = "i686" ]<br>
> DISK = [IMAGE="Debian Server"]<br>
> Both CPU/VCPU and MEMORY info.<br>
> Output with 3 VM:<br>
> oneadmin@custom2:~/var$ onevm list<br>
> ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
> 81 oneadmin Debian S pend 0 0K 00 00:02:32<br>
> 82 oneadmin Debian S pend 0 0K 00 00:02:30<br>
> 83 oneadmin Debian S pend 0 0K 00 00:02:29<br>
> As in FIRST TEST the VMs don't get scheduled and remain in "pending" state.<br>
> sched.log repeats this message:<br>
> Tue Feb 8 12:00:05 2011 [HOST][D]: Discovered Hosts (enabled): 1 2<br>
> Tue Feb 8 12:00:05 2011 [VM][D]: Pending virtual machines : 81 82 83<br>
> Tue Feb 8 12:00:05 2011 [RANK][W]: No rank defined for VM<br>
> Tue Feb 8 12:00:05 2011 [RANK][W]: No rank defined for VM<br>
> Tue Feb 8 12:00:05 2011 [RANK][W]: No rank defined for VM<br>
> Tue Feb 8 12:00:05 2011 [SCHED][I]: Select hosts<br>
> PRI HID<br>
> -------------------<br>
> Virtual Machine: 81<br>
> Virtual Machine: 82<br>
> Virtual Machine: 83<br>
> Here I concluded that I should probably not declare the number of physical CPUs<br>
> in the VM template.<br>
> Another last test...<br>
> FOURTH TEST:<br>
> Here I disabled an host, custom6, and started 3 VMs.<br>
> The VM template is the one that worked before:<br>
> oneadmin@custom2:/srv/cloud/templates/vm$ cat debian.vm<br>
> NAME = "Debian Server"<br>
> VCPU = 1<br>
> MEMORY = 1024<br>
> OS = [ ARCH = "i686" ]<br>
> DISK = [IMAGE="Debian Server"]<br>
> Output:<br>
> oneadmin@custom2:~$ onehost list<br>
> ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM<br>
> STAT<br>
> 2 <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> default 3 200 200 200 2G 0K<br>
> on<br>
> 1 <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> default 0 200 200 200 2G 0K<br>
> off<br>
> oneadmin@custom2:~$ onevm list<br>
> ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
> 92 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:12:53<br>
> 93 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:12:46<br>
> 94 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:12:46<br>
> I verified that the VMs were up and running by logging in to the console of each<br>
> of them through vSphere Client; they were all running, each reporting<br>
> 1 GB of total memory. Since there is less<br>
> than 1.2 GB of memory effectively free on a cluster node before the VMs are<br>
> instantiated, how can those VMs run consistently? And why does OpenNebula schedule<br>
> those VMs on the same machine, exceeding the host's resource capacity?<br>
> On Fri, Feb 4, 2011 at 11:04 PM, Ruben S. Montero <<a href="mailto:rubensm@dacya.ucm.es" target="_blank">rubensm@dacya.ucm.es</a>><br>
> wrote:<br>
>><br>
>> Hi,<br>
>> You also have to add the CPU capacity for the VM (apart from the number of<br>
>> virtual CPUs, VCPU). The CPU value is used in the allocation phase. However,<br>
>> you are specifying MEMORY, which should be included in the allocated memory<br>
>> (USED MEMORY in onehost show), so I guess there must be some other problem with<br>
>> your template.<br>
>> Cheers<br>
>> Ruben<br>
>><br>
>> On Fri, Feb 4, 2011 at 10:50 AM, Luigi Fortunati<br>
>> <<a href="mailto:luigi.fortunati@gmail.com" target="_blank">luigi.fortunati@gmail.com</a>> wrote:<br>
>>><br>
>>> I can post the VM template content on Monday. However, as far as I<br>
>>> remember, the VM template was really simple:<br>
>>> NAME="Debian"<br>
>>> VCPU= 2<br>
>>> MEMORY=1024<br>
>>> DISK=[IMAGE="Debian5-i386"]<br>
>>> OS=[ARCH=i686]<br>
>>> The VMs can boot and run; I can log on to the console of<br>
>>> the newly created VMs through vSphere Client.<br>
>>> I noticed that if you don't declare the number of VCPUs, the VM doesn't get<br>
>>> scheduled on a cluster node. This option seems mandatory, but I didn't find<br>
>>> any mention of it in the documentation.<br>
>>> Another thing that seems mandatory is declaring the CPU architecture as<br>
>>> i686; otherwise OpenNebula returns an error when writing the deployment.0<br>
>>> file.<br>
>>><br>
>>> On Thu, Feb 3, 2011 at 5:42 PM, Ruben S. Montero <<a href="mailto:rubensm@dacya.ucm.es" target="_blank">rubensm@dacya.ucm.es</a>><br>
>>> wrote:<br>
>>>><br>
>>>> Hi,<br>
>>>> I am not sure this is related to the VMware monitoring... Can you send<br>
>>>> the VM Templates?<br>
>>>> Thanks<br>
>>>> Ruben<br>
>>>><br>
>>>> On Thu, Feb 3, 2011 at 5:10 PM, Luigi Fortunati<br>
>>>> <<a href="mailto:luigi.fortunati@gmail.com" target="_blank">luigi.fortunati@gmail.com</a>> wrote:<br>
>>>>><br>
>>>>> Hi,<br>
>>>>> I noticed a serious problem with the combination of VMware ESXi 4.1 and<br>
>>>>> OpenNebula 2.0.1.<br>
>>>>> I'm using the VMware driver addon that can be found on the<br>
>>>>> OpenNebula website (ver. 1.0) and libvirt (ver. 0.8.7).<br>
>>>>> OpenNebula cannot get information about resource usage<br>
>>>>> on the cluster nodes.<br>
>>>>> Running 2 VMs (each requiring 2 VCPUs and 1 GB of memory) and<br>
>>>>> executing some commands, I get this output.<br>
>>>>> oneadmin@custom2:~/src$ onehost list<br>
>>>>> ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM<br>
>>>>> FMEM STAT<br>
>>>>> 2 <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> default 0 200 200 200 2G<br>
>>>>> 0K off<br>
>>>>> 1 <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> default 2 200 200 200 2G<br>
>>>>> 0K on<br>
>>>>> oneadmin@custom2:~/src$ onehost show 1<br>
>>>>> HOST 1 INFORMATION<br>
>>>>><br>
>>>>> ID : 1<br>
>>>>> NAME : <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a><br>
>>>>> CLUSTER : default<br>
>>>>> STATE : MONITORED<br>
>>>>> IM_MAD : im_vmware<br>
>>>>> VM_MAD : vmm_vmware<br>
>>>>> TM_MAD : tm_vmware<br>
>>>>> HOST SHARES<br>
>>>>><br>
>>>>> MAX MEM : 2096460<br>
>>>>> USED MEM (REAL) : 0<br>
>>>>> USED MEM (ALLOCATED) : 0<br>
>>>>> MAX CPU : 200<br>
>>>>> USED CPU (REAL) : 0<br>
>>>>> USED CPU (ALLOCATED) : 0<br>
>>>>> RUNNING VMS : 2<br>
>>>>> MONITORING INFORMATION<br>
>>>>><br>
>>>>> CPUSPEED=1992<br>
>>>>> HYPERVISOR=vmware<br>
>>>>> TOTALCPU=200<br>
>>>>> TOTALMEMORY=2096460<br>
>>>>> As you can see, OpenNebula is unable to get correct information about<br>
>>>>> resource usage on the cluster nodes.<br>
>>>>> As this information is used by the VM scheduler, OpenNebula is<br>
>>>>> unable to schedule the VMs correctly.<br>
>>>>> I tried to create several VMs and all of them were placed on the same<br>
>>>>> host, even when that host was unable to satisfy the resource requirements of<br>
>>>>> all the VMs.<br>
>>>>> I think this problem is strongly related to libvirt, as OpenNebula<br>
>>>>> uses it to retrieve information about hosts and VMs.<br>
>>>>> Do you get the same behavior? Do you know of a way to solve<br>
>>>>> this big issue?<br>
>>>>> --<br>
>>>>> Luigi Fortunati<br>
>>>>><br>
>>>>> _______________________________________________<br>
>>>>> Users mailing list<br>
>>>>> <a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
>>>>> <a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
>>>>><br>
>>>><br>
>>>><br>
>>>><br>
>>>> --<br>
>>>> Dr. Ruben Santiago Montero<br>
>>>> Associate Professor (Profesor Titular), Complutense University of Madrid<br>
>>>><br>
>>>> URL: <a href="http://dsa-research.org/doku.php?id=people:ruben" target="_blank">http://dsa-research.org/doku.php?id=people:ruben</a><br>
>>>> Weblog: <a href="http://blog.dsa-research.org/?author=7" target="_blank">http://blog.dsa-research.org/?author=7</a><br>
>>><br>
>>><br>
>>><br>
>>> --<br>
>>> Luigi Fortunati<br>
>><br>
>><br>
>><br>
>> --<br>
>> Dr. Ruben Santiago Montero<br>
>> Associate Professor (Profesor Titular), Complutense University of Madrid<br>
>><br>
>> URL: <a href="http://dsa-research.org/doku.php?id=people:ruben" target="_blank">http://dsa-research.org/doku.php?id=people:ruben</a><br>
>> Weblog: <a href="http://blog.dsa-research.org/?author=7" target="_blank">http://blog.dsa-research.org/?author=7</a><br>
><br>
><br>
><br>
> --<br>
> Luigi Fortunati<br>
><br>
><br>
><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Luigi Fortunati<br>
</div></div>