Thanks Tino!<div>In contrast with what is written in the documentation, <u>it is possible</u> to connect to the ESXi hypervisor machines via ssh and launch commands (but only as the root user). I noticed that the ESXi 4.1 machines we have installed ship with a useful program called esxtop, which can also be executed in batch mode. That command can output more of the information OpenNebula needs to work and to schedule VMs correctly. I believe it would be a good idea to rethink the IM driver so that it gathers resource-usage information with esxtop instead of virsh commands; with the latter I could not find a command capable of retrieving the memory used by the hypervisor itself, which is 800 MB in my case.</div>
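A minimal sketch of how such an IM probe could work, assuming only the general shape of `esxtop -b` output (a CSV header row of quoted counter names followed by sample rows); the ssh invocation and the counter name in the comment are hypothetical, not verified against a real ESXi build:

```ruby
require 'csv'

# Parse the output of `esxtop -b -n 1` (batch mode, one iteration): the
# first line is a CSV header of quoted counter names, the second a row of
# sampled values. Returns a counter-name => value hash.
def parse_esxtop_batch(output)
  header, values = output.lines.first(2).map { |line| CSV.parse_line(line) }
  header.zip(values).to_h
end

# Hypothetical use inside an IM probe (requires root ssh access to the
# hypervisor, as noted above; the counter name is an assumption and must
# be checked against the header your ESXi build actually emits):
#
#   output  = `ssh root@custom6.sns.it esxtop -b -n 1`
#   sample  = parse_esxtop_batch(output)
#   used_mb = sample['\\\\custom6\\Memory\\Machine MBytes'].to_i
```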
<div><br></div><div><br><div class="gmail_quote">On Fri, Feb 11, 2011 at 3:10 PM, Tino Vazquez <span dir="ltr"><<a href="mailto:tinova@opennebula.org">tinova@opennebula.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hi Luigi,<br>
<br>
I've updated the ticket, I will be implementing this for the next release.<br>
<br>
Regards,<br>
<br>
-Tino<br>
<div class="im"><br>
--<br>
Constantino Vázquez Blanco, MSc<br>
OpenNebula Major Contributor / Cloud Researcher<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | @tinova79<br>
<br>
<br>
<br>
</div>On Wed, Feb 9, 2011 at 3:14 PM, Luigi Fortunati<br>
<div><div></div><div class="h5"><<a href="mailto:luigi.fortunati@gmail.com">luigi.fortunati@gmail.com</a>> wrote:<br>
> Thanks Tino,<br>
> That is probably more a problem of libvirt, since the VMware IM driver<br>
> uses it to access information about the hosts.<br>
> In order to get information about the hosts OpenNebula launches a virsh<br>
> command and parses the output.<br>
> The script that does this work is located in $ONE_LOCATION/lib/remotes/im<br>
> and the output of the virsh command is:<br>
> oneadmin@custom2:~/lib/remotes/im$ virsh -c<br>
> esx://<a href="http://custom6.sns.it/?no_verify=1" target="_blank">custom6.sns.it/?no_verify=1</a> nodeinfo<br>
> Enter username for <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> [root]:<br>
> Enter root's password for <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a>:<br>
> CPU model: AMD Opteron(tm) Processor 246<br>
> CPU(s): 2<br>
> CPU frequency: 1992 MHz<br>
> CPU socket(s): 2<br>
> Core(s) per socket: 1<br>
> Thread(s) per core: 1<br>
> NUMA cell(s): 2<br>
> Memory size: 2096460 kB<br>
> I always get the same output, no matter how many VMs are running on the<br>
> cluster node.<br>
> That is why OpenNebula returns output like this:<br>
> oneadmin@custom2:~/var/96$ onehost show 1<br>
> HOST 1 INFORMATION<br>
><br>
> ID : 1<br>
> NAME : <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a><br>
> CLUSTER : default<br>
> STATE : MONITORING<br>
> IM_MAD : im_vmware<br>
> VM_MAD : vmm_vmware<br>
> TM_MAD : tm_vmware<br>
> HOST SHARES<br>
><br>
> MAX MEM : 2096460<br>
> USED MEM (REAL) : 0<br>
> USED MEM (ALLOCATED) : 0<br>
> MAX CPU : 200<br>
> USED CPU (REAL) : 0<br>
> USED CPU (ALLOCATED) : 0<br>
> RUNNING VMS : 1<br>
> MONITORING INFORMATION<br>
><br>
> CPUSPEED=1992<br>
> HYPERVISOR=vmware<br>
> TOTALCPU=200<br>
> TOTALMEMORY=2096460<br>
> OpenNebula polls the cluster nodes periodically and gets only information<br>
> about the hypervisor type, CPU frequency, total CPU, and total memory size.<br>
> The limitation here is caused by libvirt (virsh) which is unable to return<br>
> more information about the actual usage of resources.<br>
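For reference, deriving those few static attributes from the nodeinfo output is straightforward. The sketch below is illustrative only (not the actual VMware IM driver code); it maps the nodeinfo fields to the attributes OpenNebula reports, computing TOTALCPU as 100 per core:

```ruby
# Map `virsh nodeinfo` output to the static monitoring attributes shown
# above. Illustrative sketch, not the actual VMware IM driver code.
def parse_nodeinfo(output)
  info = output.lines.each_with_object({}) do |line, h|
    key, value = line.split(':', 2)
    h[key.strip] = value.strip if value
  end
  {
    'TOTALCPU'    => info['CPU(s)'].to_i * 100, # 100 per core, per convention
    'TOTALMEMORY' => info['Memory size'].to_i,  # kB; "2096460 kB" -> 2096460
    'CPUSPEED'    => info['CPU frequency'].to_i # MHz
  }
end
```

This reproduces exactly the TOTALCPU=200, TOTALMEMORY=2096460, CPUSPEED=1992 values seen in the onehost output above; nothing about actual usage can be derived from nodeinfo.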
> The integration of OpenNebula with Xen can rely on ssh access to the cluster<br>
> nodes.<br>
> The IM driver for Xen hypervisors launches xentop on every cluster node in<br>
> order to get information about the VMs and then parses the output.<br>
> As an example here is the output of commands xm and xentop (some info is<br>
> purged):<br>
> custom9:/ # xentop -bi2<br>
> NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%)<br>
> VCPUS NETS NETTX(k) NETRX(k)<br>
> Domain-0 -----r 102 0.0 1930260 93.7 no limit n/a<br>
> 2 0 0 0<br>
> NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%)<br>
> VCPUS NETS NETTX(k) NETRX(k)<br>
> Domain-0 -----r 102 0.3 1930260 93.7 no limit n/a<br>
> 2 0 0 0<br>
> custom9:/ # xm info<br>
> host : custom9<br>
> release : 2.6.34.7-0.5-xen<br>
> version : #1 SMP 2010-10-25 08:40:12 +0200<br>
> machine : x86_64<br>
> nr_cpus : 2<br>
> nr_nodes : 2<br>
> cores_per_socket : 1<br>
> threads_per_core : 1<br>
> cpu_mhz : 1991<br>
> [...]<br>
> total_memory : 2011<br>
> free_memory : 135<br>
> free_cpus : 0<br>
> max_free_memory : 1508<br>
> max_para_memory : 1504<br>
> max_hvm_memory : 1492<br>
> [...]<br>
> The script $ONE_LOCATION/lib/remotes/im/xen.d/xen.rb parses those two<br>
> outputs and retrieves data about memory, cpu, and network usage.<br>
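A simplified sketch of that parsing step (illustrative only; the real xen.rb also collects network usage). It uses the last xentop iteration, since the first reports CPU(%) as 0.0, and assumes the column order of the header shown above:

```ruby
# Simplified sketch of the parsing xen.rb performs on `xentop -bi2` output.
# Column positions follow the header above: NAME STATE CPU(sec) CPU(%) MEM(k)
def parse_xentop(output)
  rows = output.lines.map(&:split)
  # Take the last header row (= last iteration; the first has CPU(%) 0.0)
  header_idx = rows.each_index.select { |i| rows[i][0] == 'NAME' }.last
  return {} unless header_idx
  usedcpu = 0.0
  usedmem = 0
  rows[(header_idx + 1)..-1].each do |cols|
    next if cols.empty?
    usedcpu += cols[3].to_f   # CPU(%)
    usedmem += cols[4].to_i   # MEM(k)
  end
  { usedcpu: usedcpu, usedmem_kb: usedmem }
end
```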
> I think the VMware drivers are of limited use if they can't provide the<br>
> degree of information that can be achieved with Xen hypervisors and<br>
> OpenNebula; I've observed the effects of this issue in my tests.<br>
> On Tue, Feb 8, 2011 at 6:34 PM, Tino Vazquez <<a href="mailto:tinova@opennebula.org">tinova@opennebula.org</a>> wrote:<br>
>><br>
>> Hi Luigi,<br>
>><br>
>> There is a bug in the IM driver for VMware: it is not reporting the free<br>
>> memory at all. I've opened a ticket to keep track of the issue [1]; it<br>
>> will be solved in the next release.<br>
>><br>
>> Regards,<br>
>><br>
>> -Tino<br>
>><br>
>> [1] <a href="http://dev.opennebula.org/issues/481" target="_blank">http://dev.opennebula.org/issues/481</a><br>
>><br>
>> --<br>
>> Constantino Vázquez Blanco, MSc<br>
>> OpenNebula Major Contributor / Cloud Researcher<br>
>> <a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | @tinova79<br>
>><br>
>><br>
>><br>
>> On Tue, Feb 8, 2011 at 12:56 PM, Luigi Fortunati<br>
>> <<a href="mailto:luigi.fortunati@gmail.com">luigi.fortunati@gmail.com</a>> wrote:<br>
>> > OK, I ran some tests today.<br>
>> > The hardware/software environment includes 2 cluster nodes (ESXi 4.1),<br>
>> > each with 2 GB of RAM and 2 AMD Opteron 246 processors (2 GHz), on trial<br>
>> > licenses. The OpenNebula installation is self-contained.<br>
>> > 800 MB of memory is used by the hypervisor itself (that info comes from<br>
>> > vSphere Client), so only 1.2 GB is free, but OpenNebula seems unaware of<br>
>> > that :-(<br>
>> > oneadmin@custom2:/srv/cloud/templates/vm$ onehost list<br>
>> > ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM<br>
>> > STAT<br>
>> > 2 <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> default 0 200 200 200 2G 0K<br>
>> > on<br>
>> > 1 <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> default 0 200 200 200 2G 0K<br>
>> > on<br>
>> > oneadmin@custom2:/srv/cloud/templates/vm$ onehost show 1<br>
>> > HOST 1 INFORMATION<br>
>> ><br>
>> > ID : 1<br>
>> > NAME : <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a><br>
>> > CLUSTER : default<br>
>> > STATE : MONITORED<br>
>> > IM_MAD : im_vmware<br>
>> > VM_MAD : vmm_vmware<br>
>> > TM_MAD : tm_vmware<br>
>> > HOST SHARES<br>
>> ><br>
>> > MAX MEM : 2096460<br>
>> > USED MEM (REAL) : 0<br>
>> > USED MEM (ALLOCATED) : 0<br>
>> > MAX CPU : 200<br>
>> > USED CPU (REAL) : 0<br>
>> > USED CPU (ALLOCATED) : 0<br>
>> ><br>
>> > In each test I tried to start 3 VMs using a non-persistent image. The<br>
>> > combined requirements of the three VMs cannot be satisfied by a single<br>
>> > cluster node.<br>
>> > FIRST TEST:<br>
>> > The VM template for the first test is:<br>
>> > NAME = "Debian Server"<br>
>> > CPU = 1<br>
>> > MEMORY = 1024<br>
>> > OS = [ ARCH = "i686" ]<br>
>> > DISK = [IMAGE="Debian Server"]<br>
>> > Only CPU and Memory info.<br>
>> > Here is the result:<br>
>> > oneadmin@custom2:/srv/cloud/templates/vm$ onevm list<br>
>> > ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
>> > 66 oneadmin Debian S pend 0 0K 00 00:07:47<br>
>> > 67 oneadmin Debian S pend 0 0K 00 00:07:45<br>
>> > 68 oneadmin Debian S pend 0 0K 00 00:07:18<br>
>> > Forever in the "pending" state... the VMs don't get scheduled.<br>
>> > oned.log doesn't report anything but resource polling informational<br>
>> > messages.<br>
>> > sched.log repeats this sequence:<br>
>> > Tue Feb 8 10:02:06 2011 [HOST][D]: Discovered Hosts (enabled): 1 2<br>
>> > Tue Feb 8 10:02:06 2011 [VM][D]: Pending virtual machines : 66 67 68<br>
>> > Tue Feb 8 10:02:06 2011 [RANK][W]: No rank defined for VM<br>
>> > Tue Feb 8 10:02:06 2011 [RANK][W]: No rank defined for VM<br>
>> > Tue Feb 8 10:02:06 2011 [RANK][W]: No rank defined for VM<br>
>> > Tue Feb 8 10:02:06 2011 [SCHED][I]: Select hosts<br>
>> > PRI HID<br>
>> > -------------------<br>
>> > Virtual Machine: 66<br>
>> > Virtual Machine: 67<br>
>> > Virtual Machine: 68<br>
>> > SECOND TEST:<br>
>> > VM template:<br>
>> > NAME = "Debian Server"<br>
>> > VCPU = 1<br>
>> > MEMORY = 1024<br>
>> > OS = [ ARCH = "i686" ]<br>
>> > DISK = [IMAGE="Debian Server"]<br>
>> > Only VCPU and MEMORY info.<br>
>> > Results:<br>
>> > oneadmin@custom2:/srv/cloud/templates/vm$ onevm list<br>
>> > ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
>> > 76 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:07:40<br>
>> > 77 oneadmin Debian S runn 0 0K <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> 00 00:07:38<br>
>> > 78 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:05:58<br>
>> > Everything seems fine, but it's not: as I said previously, each host has<br>
>> > only 1.2 GB of memory free, so there should be no space for two VMs on<br>
>> > the same host.<br>
>> > oneadmin@custom2:/srv/cloud/templates/vm$ onehost list<br>
>> > ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM<br>
>> > STAT<br>
>> > 2 <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> default 2 200 200 200 2G 0K<br>
>> > on<br>
>> > 1 <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> default 1 200 200 200 2G 0K<br>
>> > on<br>
>> > Both the hosts and the VMs report no useful info on the resource usage.<br>
>> > Logging in to the console of each VM and executing the "free -m" command,<br>
>> > I checked that every VM has 1 GB of total memory allocated. So I decided<br>
>> > to test that GB of memory on both VMs at the same time using the utility<br>
>> > "memtester", which allocates a given amount of free memory using malloc<br>
>> > and tests it. The results reported memory access problems.<br>
>> > I then went on to check whether OpenNebula and VMware ESXi fail to<br>
>> > allocate VMs exceeding the resource capacity of the hosts, by starting<br>
>> > two more VMs (requiring 1 VCPU and 1 GB of memory each).<br>
>> > Results:<br>
>> > oneadmin@custom2:~/var/79$ onevm list<br>
>> > ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
>> > 76 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:54:47<br>
>> > 77 oneadmin Debian S runn 0 0K <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> 00 00:54:45<br>
>> > 78 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:53:05<br>
>> > 79 oneadmin Debian S boot 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:10:22<br>
>> > 80 oneadmin Debian S boot 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:09:47<br>
>> > The new VMs are allocated on the custom7 machine (why???) but remain<br>
>> > frozen in the "boot" state. That is a problem, because those two new VMs<br>
>> > should not have been allocated to any cluster node at all.<br>
>> > THIRD TEST:<br>
>> > Here I followed Ruben's suggestion...<br>
>> > The VM template:<br>
>> > oneadmin@custom2:/srv/cloud/templates/vm$ cat debian.vm<br>
>> > NAME = "Debian Server"<br>
>> > CPU = 1<br>
>> > VCPU = 1<br>
>> > MEMORY = 1024<br>
>> > OS = [ ARCH = "i686" ]<br>
>> > DISK = [IMAGE="Debian Server"]<br>
>> > Both CPU/VCPU and MEMORY info.<br>
>> > Output with 3 VM:<br>
>> > oneadmin@custom2:~/var$ onevm list<br>
>> > ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
>> > 81 oneadmin Debian S pend 0 0K 00 00:02:32<br>
>> > 82 oneadmin Debian S pend 0 0K 00 00:02:30<br>
>> > 83 oneadmin Debian S pend 0 0K 00 00:02:29<br>
>> > As in FIRST TEST the VMs don't get scheduled and remain in "pending"<br>
>> > state.<br>
>> > sched.log repeats this message:<br>
>> > Tue Feb 8 12:00:05 2011 [HOST][D]: Discovered Hosts (enabled): 1 2<br>
>> > Tue Feb 8 12:00:05 2011 [VM][D]: Pending virtual machines : 81 82 83<br>
>> > Tue Feb 8 12:00:05 2011 [RANK][W]: No rank defined for VM<br>
>> > Tue Feb 8 12:00:05 2011 [RANK][W]: No rank defined for VM<br>
>> > Tue Feb 8 12:00:05 2011 [RANK][W]: No rank defined for VM<br>
>> > Tue Feb 8 12:00:05 2011 [SCHED][I]: Select hosts<br>
>> > PRI HID<br>
>> > -------------------<br>
>> > Virtual Machine: 81<br>
>> > Virtual Machine: 82<br>
>> > Virtual Machine: 83<br>
>> > From this I assumed that I should probably not declare the number of<br>
>> > physical CPUs in the VM template.<br>
>> > Another last test...<br>
>> > FOURTH TEST:<br>
>> > Here I disabled a host, custom6, and started 3 VMs.<br>
>> > The VM template is the one that worked before:<br>
>> > oneadmin@custom2:/srv/cloud/templates/vm$ cat debian.vm<br>
>> > NAME = "Debian Server"<br>
>> > VCPU = 1<br>
>> > MEMORY = 1024<br>
>> > OS = [ ARCH = "i686" ]<br>
>> > DISK = [IMAGE="Debian Server"]<br>
>> > Output:<br>
>> > oneadmin@custom2:~$ onehost list<br>
>> > ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM FMEM<br>
>> > STAT<br>
>> > 2 <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> default 3 200 200 200 2G 0K<br>
>> > on<br>
>> > 1 <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> default 0 200 200 200 2G 0K<br>
>> > off<br>
>> > oneadmin@custom2:~$ onevm list<br>
>> > ID USER NAME STAT CPU MEM HOSTNAME TIME<br>
>> > 92 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:12:53<br>
>> > 93 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:12:46<br>
>> > 94 oneadmin Debian S runn 0 0K <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> 00 00:12:46<br>
>> > I verified that the VMs were up and running by logging in to the console<br>
>> > of each of them through vSphere Client; they were all running, each<br>
>> > reporting 1 GB of total memory. Since less than 1.2 GB of memory is<br>
>> > effectively free on a cluster node before the VMs' instantiation, how can<br>
>> > those VMs run consistently? And why does OpenNebula schedule those VMs on<br>
>> > the same machine, exceeding even the host's resource capacity?<br>
>> > On Fri, Feb 4, 2011 at 11:04 PM, Ruben S. Montero <<a href="mailto:rubensm@dacya.ucm.es">rubensm@dacya.ucm.es</a>><br>
>> > wrote:<br>
>> >><br>
>> >> Hi,<br>
>> >> You also have to add the CPU capacity for the VM (apart from the number<br>
>> >> of virtual CPUs, VCPU). The CPU value is used at the allocation phase.<br>
>> >> However, you are specifying MEMORY, which should be included in the<br>
>> >> allocated memory (USED MEMORY in onehost show), so I guess there is some<br>
>> >> other problem with your template.<br>
>> >> Cheers<br>
>> >> Ruben<br>
>> >><br>
>> >> On Fri, Feb 4, 2011 at 10:50 AM, Luigi Fortunati<br>
>> >> <<a href="mailto:luigi.fortunati@gmail.com">luigi.fortunati@gmail.com</a>> wrote:<br>
>> >>><br>
>> >>> I can post the VM template content on Monday. However, as far as I<br>
>> >>> remember, the VM template was really simple:<br>
>> >>> NAME="Debian"<br>
>> >>> VCPU= 2<br>
>> >>> MEMORY=1024<br>
>> >>> DISK=[IMAGE="Debian5-i386"]<br>
>> >>> OS=[ARCH=i686]<br>
>> >>> The VMs can boot and run; I can log in to the console of the newly<br>
>> >>> created VMs through vSphere Client.<br>
>> >>> I noticed that if you don't declare the number of VCPUs, the VM doesn't<br>
>> >>> get scheduled on a cluster node. This option seems mandatory, but I<br>
>> >>> didn't find any mention of it in the documentation.<br>
>> >>> Another thing that seems mandatory is declaring the CPU architecture as<br>
>> >>> i686; otherwise OpenNebula will return an error when writing the<br>
>> >>> deployment.0 file.<br>
>> >>><br>
>> >>> On Thu, Feb 3, 2011 at 5:42 PM, Ruben S. Montero<br>
>> >>> <<a href="mailto:rubensm@dacya.ucm.es">rubensm@dacya.ucm.es</a>><br>
>> >>> wrote:<br>
>> >>>><br>
>> >>>> Hi,<br>
>> >>>> I am not sure this is related to the VMware monitoring... Can you<br>
>> >>>> send<br>
>> >>>> the VM Templates?<br>
>> >>>> Thanks<br>
>> >>>> Ruben<br>
>> >>>><br>
>> >>>> On Thu, Feb 3, 2011 at 5:10 PM, Luigi Fortunati<br>
>> >>>> <<a href="mailto:luigi.fortunati@gmail.com">luigi.fortunati@gmail.com</a>> wrote:<br>
>> >>>>><br>
>> >>>>> Hi,<br>
>> >>>>> I noticed a serious problem with the use of VMware ESXi 4.1 and<br>
>> >>>>> OpenNebula 2.0.1.<br>
>> >>>>> I'm currently using the VMware driver addon that can be found on the<br>
>> >>>>> OpenNebula website (ver. 1.0) and libvirt (ver. 0.8.7).<br>
>> >>>>> It turns out that OpenNebula can't get information about the usage of<br>
>> >>>>> resources on the cluster nodes.<br>
>> >>>>> With 2 VMs running (each requiring 2 VCPUs and 1 GB of memory),<br>
>> >>>>> executing some commands gives this output.<br>
>> >>>>> oneadmin@custom2:~/src$ onehost list<br>
>> >>>>> ID NAME CLUSTER RVM TCPU FCPU ACPU TMEM<br>
>> >>>>> FMEM STAT<br>
>> >>>>> 2 <a href="http://custom7.sns.it" target="_blank">custom7.sns.it</a> default 0 200 200 200 2G<br>
>> >>>>> 0K off<br>
>> >>>>> 1 <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a> default 2 200 200 200 2G<br>
>> >>>>> 0K on<br>
>> >>>>> oneadmin@custom2:~/src$ onehost show 1<br>
>> >>>>> HOST 1 INFORMATION<br>
>> >>>>><br>
>> >>>>> ID : 1<br>
>> >>>>> NAME : <a href="http://custom6.sns.it" target="_blank">custom6.sns.it</a><br>
>> >>>>> CLUSTER : default<br>
>> >>>>> STATE : MONITORED<br>
>> >>>>> IM_MAD : im_vmware<br>
>> >>>>> VM_MAD : vmm_vmware<br>
>> >>>>> TM_MAD : tm_vmware<br>
>> >>>>> HOST SHARES<br>
>> >>>>><br>
>> >>>>> MAX MEM : 2096460<br>
>> >>>>> USED MEM (REAL) : 0<br>
>> >>>>> USED MEM (ALLOCATED) : 0<br>
>> >>>>> MAX CPU : 200<br>
>> >>>>> USED CPU (REAL) : 0<br>
>> >>>>> USED CPU (ALLOCATED) : 0<br>
>> >>>>> RUNNING VMS : 2<br>
>> >>>>> MONITORING INFORMATION<br>
>> >>>>><br>
>> >>>>> CPUSPEED=1992<br>
>> >>>>> HYPERVISOR=vmware<br>
>> >>>>> TOTALCPU=200<br>
>> >>>>> TOTALMEMORY=2096460<br>
>> >>>>> As you can see, OpenNebula is unable to get correct information about<br>
>> >>>>> the usage of resources on the cluster nodes.<br>
>> >>>>> Since this information is used by the VM scheduler, OpenNebula is<br>
>> >>>>> unable to schedule the VMs correctly.<br>
>> >>>>> I tried to create several VMs, and all of them were placed on the same<br>
>> >>>>> host, even though that host was unable to satisfy the resource<br>
>> >>>>> requirements of all the VMs.<br>
>> >>>>> I think this problem is strongly related to libvirt, as OpenNebula<br>
>> >>>>> uses it to retrieve information about hosts and VMs.<br>
>> >>>>> Do you get the same behavior? Do you know if there is a way to solve<br>
>> >>>>> this big issue?<br>
>> >>>>> --<br>
>> >>>>> Luigi Fortunati<br>
>> >>>>><br>
>> >>>>> _______________________________________________<br>
>> >>>>> Users mailing list<br>
>> >>>>> <a href="mailto:Users@lists.opennebula.org">Users@lists.opennebula.org</a><br>
>> >>>>> <a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
>> >>>>><br>
>> >>>><br>
>> >>>><br>
>> >>>><br>
>> >>>> --<br>
>> >>>> Dr. Ruben Santiago Montero<br>
>> >>>> Associate Professor (Profesor Titular), Complutense University of<br>
>> >>>> Madrid<br>
>> >>>><br>
>> >>>> URL: <a href="http://dsa-research.org/doku.php?id=people:ruben" target="_blank">http://dsa-research.org/doku.php?id=people:ruben</a><br>
>> >>>> Weblog: <a href="http://blog.dsa-research.org/?author=7" target="_blank">http://blog.dsa-research.org/?author=7</a><br>
>> >>><br>
>> >>><br>
>> >>><br>
>> >>> --<br>
>> >>> Luigi Fortunati<br>
>> >><br>
>> >><br>
>> >><br>
>> >> --<br>
>> >> Dr. Ruben Santiago Montero<br>
>> >> Associate Professor (Profesor Titular), Complutense University of<br>
>> >> Madrid<br>
>> >><br>
>> >> URL: <a href="http://dsa-research.org/doku.php?id=people:ruben" target="_blank">http://dsa-research.org/doku.php?id=people:ruben</a><br>
>> >> Weblog: <a href="http://blog.dsa-research.org/?author=7" target="_blank">http://blog.dsa-research.org/?author=7</a><br>
>> ><br>
>> ><br>
>> ><br>
>> > --<br>
>> > Luigi Fortunati<br>
>> ><br>
>> > _______________________________________________<br>
>> > Users mailing list<br>
>> > <a href="mailto:Users@lists.opennebula.org">Users@lists.opennebula.org</a><br>
>> > <a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
>> ><br>
>> ><br>
><br>
><br>
><br>
> --<br>
> Luigi Fortunati<br>
><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br>Luigi Fortunati<br>
</div>