This seems to be a problem introduced when upgrading the DB. See the inconsistency in fgtest14:

<RUNNING_VMS>5</RUNNING_VMS> ... <VMS></VMS>

That is the reason no action is being taken on VM 26: it is not registered in the host (the <VMS> set is empty).

I suggest stopping oned and executing onedb fsck.
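
Something along these lines should do it (a sketch, assuming a MySQL backend named "opennebula" and the stock init script; substitute your own DB credentials):

    # onedb fsck checks and repairs pool consistency -- e.g. RUNNING_VMS
    # versus the actual <VMS> set. oned must be stopped while it runs, and
    # although fsck takes its own backup, a manual dump is cheap insurance.
    service opennebula stop
    mysqldump -u oneadmin -p opennebula > one-backup.sql
    onedb fsck -v -S localhost -u oneadmin -p <password> -d opennebula
    service opennebula start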

Cheers

On Wed, Jul 30, 2014 at 4:44 PM, Steven Timm <timm@fnal.gov> wrote:

OK -- I have now installed the opennebula-node-kvm rpm on all of the VM
hosts (SURPRISE), made sure that the collectd that is running is the
current one from OpenNebula 4.6, and verified that run_probes kvm-probes
can be run interactively as oneadmin on all of the nodes. The one on
fgtest14 correctly reports that there are no running VMs, and the two
machines that do have running VMs correctly report them.

The only problem is that the five virtual machines OpenNebula still thinks
are running on fgtest14 still report back as running, even though
OpenNebula hasn't made any attempt to monitor them.

How do we get things back into sync and tell OpenNebula that VM #26 isn't
really running anymore? Is there a way to force this VM into the "unknown"
state so we can do a onevm boot on it -- database hackery included? Even
better, has someone come up with an XML hacker to do the XML substitution
of one field in the huge mysql body field?
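
For the record, that single-field substitution could presumably be done with a plain SQL REPLACE() rather than a real XML parser. An untested sketch, assuming a MySQL backend and the 4.x state codes (STATE 3 = ACTIVE, LCM_STATE 16 = UNKNOWN -- verify both against the VirtualMachine.h of your build), with oned stopped and a backup taken first:

    # Force VM 26 from RUNNING (lcm_state 3) to UNKNOWN (lcm_state 16), in
    # both the indexed column and the XML body, so "onevm boot 26" is legal.
    service opennebula stop
    mysqldump -u oneadmin -p opennebula vm_pool > vm_pool-backup.sql
    mysql -u oneadmin -p opennebula -e "
        UPDATE vm_pool
           SET lcm_state = 16,
               body = REPLACE(body, '<LCM_STATE>3</LCM_STATE>',
                                    '<LCM_STATE>16</LCM_STATE>')
         WHERE oid = 26;"
    service opennebula start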

Even more important: it's clear that the monitoring had been failing, and
failing for a long time, because we didn't have the sudoers file that
opennebula-node-kvm provides. But there was absolutely no warning of that;
as far as the head node was concerned, we were happy as a clam.
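
A quick way to see whether a probe failure even surfaces is to run the probes over ssh exactly as oned would and print the exit status (the /var/tmp/one path is the stock remotes location -- an assumption; check SCRIPTS_REMOTE_DIR in oned.conf):

    # If the wrapper swallows errors, PROBE_EXIT stays 0 even when a probe
    # fails -- which would explain the head node staying happy.
    su - oneadmin -c \
      'ssh fgtest14 "/var/tmp/one/im/run_probes kvm-probes; echo PROBE_EXIT=\$?"'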

----

The important pieces of output from run_probes kvm-probes:

fgtest19
ARCH=x86_64
MODELNAME="Intel(R) Xeon(R) CPU E5450 @ 3.00GHz"
HYPERVISOR=kvm
TOTALCPU=800
CPUSPEED=2992
TOTALMEMORY=33010680
USEDMEMORY=1586216
FREEMEMORY=31424464
FREECPU=800.0
USEDCPU=0.0
NETRX=5958104400
NETTX=2323329968
DS_LOCATION_USED_MB=1924
DS_LOCATION_TOTAL_MB=280380
DS_LOCATION_FREE_MB=264129
DS = [
ID = 102,
USED_MB = 1924,
TOTAL_MB = 280380,
FREE_MB = 264129
]
HOSTNAME=fgtest19.fnal.gov
VM_POLL=YES
VM=[
ID=55,
DEPLOY_ID=one-55,
POLL="NETRX=25289118 USEDCPU=0.0 NETTX=214808 USEDMEMORY=4194304 STATE=a" ]
VERSION="4.6.0"
fgtest20
ARCH=x86_64
MODELNAME="Intel(R) Xeon(R) CPU E5450 @ 3.00GHz"
HYPERVISOR=kvm
TOTALCPU=800
CPUSPEED=2992
TOTALMEMORY=32875804
USEDMEMORY=8801100
FREEMEMORY=24074704
FREECPU=793.6
USEDCPU=6.39999999999998
NETRX=184155823062
NETTX=58685116817
DS_LOCATION_USED_MB=50049
DS_LOCATION_TOTAL_MB=281012
DS_LOCATION_FREE_MB=216499
DS = [
ID = 102,
USED_MB = 50049,
TOTAL_MB = 281012,
FREE_MB = 216499
]
HOSTNAME=fgtest20.fnal.gov
VM_POLL=YES
VM=[
ID=31,
DEPLOY_ID=one-31,
POLL="NETRX=71728978887 USEDCPU=0.5 NETTX=54281255903 USEDMEMORY=4270812 STATE=a" ]
VM=[
ID=24,
DEPLOY_ID=one-24,
POLL="NETRX=2383960153 USEDCPU=0.0 NETTX=17345416 USEDMEMORY=4194304 STATE=a" ]
VM=[
ID=48,
DEPLOY_ID=one-48,
POLL="NETRX=2546074171 USEDCPU=0.0 NETTX=145782495 USEDMEMORY=4194304 STATE=a" ]
VERSION="4.6.0"

fgtest14
ARCH=x86_64
MODELNAME="Intel(R) Xeon(R) CPU E5450 @ 3.00GHz"
HYPERVISOR=kvm
TOTALCPU=800
CPUSPEED=2992
TOTALMEMORY=24736796
USEDMEMORY=937004
FREEMEMORY=23799792
FREECPU=800.0
USEDCPU=0.0
NETRX=285471609
NETTX=25467521
DS_LOCATION_USED_MB=179498
DS_LOCATION_TOTAL_MB=561999
DS_LOCATION_FREE_MB=353864
DS = [
ID = 102,
USED_MB = 179498,
TOTAL_MB = 561999,
FREE_MB = 353864
]

-------------------------
And the appropriate excerpts from oned.log:

/var/log/one/oned.log.20140728111811:Fri Jul 25 15:22:05 2014 [DiM][D]: Restarting VM 26
/var/log/one/oned.log.20140728111811:Fri Jul 25 15:22:05 2014 [DiM][E]: Could not restart VM 26, wrong state.
/var/log/one/oned.log.20140728111811:Fri Jul 25 15:37:48 2014 [DiM][D]: Stopping VM 26
/var/log/one/oned.log.20140728111811:Fri Jul 25 15:37:48 2014 [VMM][D]: VM 26 successfully monitored: STATE=-
-----------------------------------

This is the mysql row in host_pool for host fgtest14:

mysql> select * from host_pool where oid=8 \G
*************************** 1. row ***************************
oid: 8
name: fgtest14
body: <HOST><ID>8</ID><NAME>fgtest14</NAME><STATE>2</STATE><IM_MAD>kvm</IM_MAD><VM_MAD>kvm</VM_MAD><VN_MAD>dummy</VN_MAD><LAST_MON_TIME>1406731190</LAST_MON_TIME><CLUSTER_ID>101</CLUSTER_ID><CLUSTER>ipv6</CLUSTER><HOST_SHARE><DISK_USAGE>0</DISK_USAGE><MEM_USAGE>0</MEM_USAGE><CPU_USAGE>0</CPU_USAGE><MAX_DISK>561999</MAX_DISK><MAX_MEM>24736796</MAX_MEM><MAX_CPU>800</MAX_CPU><FREE_DISK>353864</FREE_DISK><FREE_MEM>23802216</FREE_MEM><FREE_CPU>800</FREE_CPU><USED_DISK>179498</USED_DISK><USED_MEM>934580</USED_MEM><USED_CPU>0</USED_CPU><RUNNING_VMS>5</RUNNING_VMS><DATASTORES><DS><FREE_MB><![CDATA[353864]]></FREE_MB><ID><![CDATA[102]]></ID><TOTAL_MB><![CDATA[561999]]></TOTAL_MB><USED_MB><![CDATA[179498]]></USED_MB></DS></DATASTORES></HOST_SHARE><VMS></VMS><TEMPLATE><ARCH><![CDATA[x86_64]]></ARCH><CPUSPEED><![CDATA[2992]]></CPUSPEED><HOSTNAME><![CDATA[fgtest14.fnal.gov]]></HOSTNAME><HYPERVISOR><![CDATA[kvm]]></HYPERVISOR><MODELNAME><![CDATA[Intel(R) Xeon(R) CPU E5450 @ 3.00GHz]]></MODELNAME><NETRX><![CDATA[285677608]]></NETRX><NETTX><![CDATA[25489275]]></NETTX><RESERVED_CPU><![CDATA[]]></RESERVED_CPU><RESERVED_MEM><![CDATA[]]></RESERVED_MEM><VERSION><![CDATA[4.6.0]]></VERSION></TEMPLATE></HOST>
state: 2
last_mon_time: 1406731190
uid: 0
gid: 0
owner_u: 1
group_u: 0
other_u: 0
cid: 101
1 row in set (0.00 sec)

And this is the row in vm_pool for VM id 26:

*************************** 1. row ***************************
oid: 26
name: fgt6x4-26
body: <VM><ID>26</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>fgt6x4-26</NAME><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>0</OWNER_A><GROUP_U>0</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>0</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><LAST_POLL>1406320668</LAST_POLL><STATE>3</STATE><LCM_STATE>3</LCM_STATE><RESCHED>0</RESCHED><STIME>1396463735</STIME><ETIME>0</ETIME><DEPLOY_ID>one-26</DEPLOY_ID><MEMORY>4194304</MEMORY><CPU>6</CPU><NET_TX>748982286</NET_TX><NET_RX>1588690678</NET_RX><TEMPLATE><AUTOMATIC_REQUIREMENTS><![CDATA[CLUSTER_ID = 101 & !(PUBLIC_CLOUD = YES)]]></AUTOMATIC_REQUIREMENTS><CONTEXT><CTX_USER><![CDATA[PFVTRVI+PElEPjA8L0lEPjxHSUQ+MDwvR0lEPjxHUk9VUFM+PElEPjA8L0lEPjwvR1JPVVBTPjxHTkFNRT5vbmVhZG1pbjwvR05BTUU+PE5BTUU+b25lYWRtaW48L05BTUU+PFBBU1NXT1JEPjFmNjQxYzdlMzZkZWU5MmUzNDQ0Mjk2NmI1OTYwMGJkMGE3ZmU5ZDQ8L1BBU1NXT1JEPjxBVVRIX0RSSVZFUj5jb3JlPC9BVVRIX0RSSVZFUj48RU5BQkxFRD4xPC9FTkFCTEVEPjxURU1QTEFURT48VE9LRU5fUEFTU1dPUkQ+PCFbQ0RBVEFbNzFhYzU0OWM5MzhmNjA0NmY3NDEzMDI4Y2ZhOGNjODU2YzI2ZGNhNV1dPjwvVE9LRU5fUEFTU1dPUkQ+PC9URU1QTEFURT48REFUQVNUT1JFX1FVT1RBPjwvREFUQVNUT1JFX1FVT1RBPjxORVRXT1JLX1FVT1RBPjwvTkVUV09SS19RVU9UQT48Vk1fUVVPVEE+PC9WTV9RVU9UQT48SU1BR0VfUVVPVEE+PC9JTUFHRV9RVU9UQT48L1VTRVI+]]></CTX_USER><DISK_ID><![CDATA[2]]></DISK_ID><ETH0_DNS><![CDATA[131.225.0.254]]></ETH0_DNS><ETH0_GATEWAY><![CDATA[131.225.41.200]]></ETH0_GATEWAY><ETH0_IP><![CDATA[131.225.41.169]]></ETH0_IP><ETH0_IPV6><![CDATA[2001:400:2410:29::169]]></ETH0_IPV6><ETH0_MAC><![CDATA[00:16:3e:06:06:04]]></ETH0_MAC><ETH0_MASK><![CDATA[255.255.255.128]]></ETH0_MASK><FILES><![CDATA[/cloud/images/OpenNebula/scripts/one3.2/contextualization/init.sh /cloud/images/OpenNebula/scripts/one3.2/contextualization/credentials.sh /cloud/images/OpenNebula/scripts/one3.2/contextualization/kerberos.sh]]></FILES><GATEWAY><![CDATA[131.225.41.200]]></GATEWAY><INIT_SCRIPTS><![CDATA[init.sh credentials.sh kerberos.sh]]></INIT_SCRIPTS><IP_PUBLIC><![CDATA[131.225.41.169]]></IP_PUBLIC><NETMASK><![CDATA[255.255.255.128]]></NETMASK><NETWORK><![CDATA[YES]]></NETWORK><ROOT_PUBKEY><![CDATA[id_dsa.pub]]></ROOT_PUBKEY><TARGET><![CDATA[hdc]]></TARGET><USERNAME><![CDATA[opennebula]]></USERNAME><USER_PUBKEY><![CDATA[id_dsa.pub]]></USER_PUBKEY></CONTEXT><CPU><![CDATA[1]]></CPU><DISK><CLONE><![CDATA[NO]]></CLONE><CLONE_TARGET><![CDATA[SYSTEM]]></CLONE_TARGET><CLUSTER_ID><![CDATA[101]]></CLUSTER_ID><DATASTORE><![CDATA[ip6_img_ds]]></DATASTORE><DATASTORE_ID><![CDATA[101]]></DATASTORE_ID><DEV_PREFIX><![CDATA[hd]]></DEV_PREFIX><DISK_ID><![CDATA[0]]></DISK_ID><IMAGE><![CDATA[fgt6x4_os]]></IMAGE><IMAGE_ID><![CDATA[5]]></IMAGE_ID><IMAGE_UNAME><![CDATA[oneadmin]]></IMAGE_UNAME><LN_TARGET><![CDATA[SYSTEM]]></LN_TARGET><PERSISTENT><![CDATA[YES]]></PERSISTENT><READONLY><![CDATA[NO]]></READONLY><SAVE><![CDATA[YES]]></SAVE><SIZE><![CDATA[46080]]></SIZE><SOURCE><![CDATA[/var/lib/one//datastores/101/3078b4235100008fbdbf9dff7eea95b1]]></SOURCE><TARGET><![CDATA[vda]]></TARGET><TM_MAD><![CDATA[ssh]]></TM_MAD><TYPE><![CDATA[FILE]]></TYPE></DISK><DISK><DEV_PREFIX><![CDATA[hd]]></DEV_PREFIX><DISK_ID><![CDATA[1]]></DISK_ID><SIZE><![CDATA[5120]]></SIZE><TARGET><![CDATA[vdb]]></TARGET><TYPE><![CDATA[swap]]></TYPE></DISK><FEATURES><ACPI><![CDATA[yes]]></ACPI></FEATURES><GRAPHICS><AUTOPORT><![CDATA[yes]]></AUTOPORT><KEYMAP><![CDATA[en-us]]></KEYMAP><LISTEN><![CDATA[127.0.0.1]]></LISTEN><PORT><![CDATA[5926]]></PORT><TYPE><![CDATA[vnc]]></TYPE></GRAPHICS><MEMORY><![CDATA[4096]]></MEMORY><NIC><BRIDGE><![CDATA[br0]]></BRIDGE><CLUSTER_ID><![CDATA[101]]></CLUSTER_ID><IP><![CDATA[131.225.41.169]]></IP><IP6_LINK><![CDATA[fe80::216:3eff:fe06:604]]></IP6_LINK><MAC><![CDATA[00:16:3e:06:06:04]]></MAC><MODEL><![CDATA[virtio]]></MODEL><NETWORK><![CDATA[Static_IPV6_Public]]></NETWORK><NETWORK_ID><![CDATA[1]]></NETWORK_ID><NETWORK_UNAME><![CDATA[oneadmin]]></NETWORK_UNAME><NIC_ID><![CDATA[0]]></NIC_ID><VLAN><![CDATA[NO]]></VLAN></NIC><OS><ARCH><![CDATA[x86_64]]></ARCH></OS><RAW><DATA><![CDATA[
<devices>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>

</devices>]]></DATA><TYPE><![CDATA[kvm]]></TYPE></RAW><TEMPLATE_ID><![CDATA[6]]></TEMPLATE_ID><VCPU><![CDATA[2]]></VCPU><VMID><![CDATA[26]]></VMID></TEMPLATE><USER_TEMPLATE><ERROR><![CDATA[Fri Jul 25 15:37:48 2014 : Error saving VM state: Could not save one-26 to /var/lib/one/datastores/102/26/checkpoint]]></ERROR><NPTYPE><![CDATA[NPERNLM]]></NPTYPE><RANK><![CDATA[FREEMEMORY]]></RANK><USERVO><![CDATA[test181818]]></USERVO></USER_TEMPLATE><HISTORY_RECORDS><HISTORY><OID>26</OID><SEQ>0</SEQ><HOSTNAME>fgtest14</HOSTNAME><HID>10</HID><CID>101</CID><STIME>1396463752</STIME><ETIME>0</ETIME><VMMMAD>kvm</VMMMAD><VNMMAD>dummy</VNMMAD><TMMAD>ssh</TMMAD><DS_LOCATION>/var/lib/one/datastores</DS_LOCATION><DS_ID>102</DS_ID><PSTIME>1396463752</PSTIME><PETIME>1396465032</PETIME><RSTIME>1396465032</RSTIME><RETIME>0</RETIME><ESTIME>0</ESTIME><EETIME>0</EETIME><REASON>0</REASON><ACTION>0</ACTION></HISTORY></HISTORY_RECORDS></VM>
uid: 0
gid: 0
last_poll: 1406320668
state: 3
lcm_state: 3
owner_u: 1
group_u: 0
other_u: 0
1 row in set (0.00 sec)

-------------------------------

On Wed, 30 Jul 2014, Steven Timm wrote:

On Wed, 30 Jul 2014, Ruben S. Montero wrote:

Not really sure what is going on... The monitor scripts return the
information for all VMs running on the node. In 4.6 the monitoring system
uses a push approach over UDP, so the information may be coming from a
misbehaving monitoring daemon. This can sometimes happen in dev
environments if you are resetting the DB...
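
(A quick check for a stale pusher is to look at which collectd client each node is actually running and where it is pointed -- a sketch; the process name follows the stock 4.x kvm IM remotes and is an assumption:)

    # The collectd client's command line shows the frontend address/port it
    # pushes UDP monitoring data to; a wrong or stale one means bogus reports.
    for h in fgtest14 fgtest19 fgtest20; do
        echo "== $h =="
        ssh oneadmin@"$h" 'ps -ef | grep "[c]ollectd"'
    done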

When we ran the update to take this database from ONE 4.4 to ONE 4.6, one
host (the aforementioned fgtest14) and one datastore (image store 101) got
wiped out of the database. I re-inserted them both and restarted
OpenNebula.

Steve Timm

On Jul 28, 2014 6:32 PM, "Steven Timm" <timm@fnal.gov> wrote:

I am currently dealing with an unexplained monitoring question in
OpenNebula 4.6 on my development cloud.

I frequently see OpenNebula report the status of a ONe host as "ON" even
in the case of a system misconfiguration where, given the credentials, it
is impossible for OpenNebula to even ssh into the node as oneadmin.

I've fixed all those instances and restarted OpenNebula, but OpenNebula
still reports a number of VMs in state "running", even though the node
they are running on was rebooted three days ago and is running no virtual
machines whatsoever.

I think I could be dealing with database corruption of some type
(introduced in the one4.4 -> one4.6 update), or there could be some
problem with the remote scripts on the nodes. I saw, and I think I fixed,
the database corruption (namely, one of the hosts and one of the
datastores got knocked out of the database for reasons unknown, and I
re-inserted them). But in any case, there is some error handling in the
monitoring that is not working, and something is exiting with status 0
that shouldn't be.

Ideas? Has anyone else seen something like this?

Steve Timm

------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@fnal.gov  http://home.fnal.gov/~timm/
Fermilab Scientific Computing Division, Scientific Computing Services Quad.
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing

--
Ruben S. Montero, PhD
Project co-Lead and Chief Architect
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | rsmontero@opennebula.org | @OpenNebula