<p dir="ltr">You could try executing the monitoring probes directly on the node:</p>
<p dir="ltr">1. SSH into the node<br>
2. Go to /var/tmp/one/im<br>
3. Execute run_probes kvm-probes</p>
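As a rough sketch, the steps above would look like this on the command line (the node name and user follow the thread; the exact run_probes arguments can vary between ONE 4.x versions, so check the script's usage line first):

```shell
# SSH to the node as the oneadmin user (the user ONE monitors with)
ssh oneadmin@fgtest14

# The monitoring probes are staged here by OpenNebula
cd /var/tmp/one/im

# Run the KVM probe set by hand and inspect the output;
# an error or a stale value here points at the failing probe
./run_probes kvm-probes
```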
<p dir="ltr">Also make sure no other host is using the same hostname fgtest14 while running a collectd process; its reports could be mistaken for this host's.</p>
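A quick way to check for that duplicate-hostname situation (hypothetical commands; the collectd here is OpenNebula's push-monitoring client shipped with the probes, not the general-purpose collectd daemon):

```shell
# On each hypervisor node: list any running collectd clients
pgrep -af collectd

# Confirm the hostname this node reports under -- two nodes
# reporting as fgtest14 would overwrite each other's monitoring data
hostname -f
```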
<div class="gmail_quote">On Jul 29, 2014 4:35 PM, "Steven Timm" <<a href="mailto:timm@fnal.gov">timm@fnal.gov</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
I am still trying to debug a nasty monitoring inconsistency.<br>
<br>
-bash-4.1$ onevm list | grep fgtest14<br>
26 oneadmin oneadmin fgt6x4-26 runn 6 4G fgtest14 117d 19h50<br>
27 oneadmin oneadmin fgt5x4-27 runn 10 4G fgtest14 117d 17h57<br>
28 oneadmin oneadmin fgt1x1-28 runn 10 4.1G fgtest14 117d 16h59<br>
30 oneadmin oneadmin fgt5x1-30 runn 0 4G fgtest14 116d 23h50<br>
33 oneadmin oneadmin ip6sl5vda-33 runn 6 4G fgtest14 116d 19h57<br>
-bash-4.1$ onehost list<br>
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT<br>
3 fgtest11 ipv6 0 0 / 400 (0%) 0K / 15.7G (0%) on<br>
4 fgtest12 ipv6 0 0 / 400 (0%) 0K / 15.7G (0%) on<br>
7 fgtest13 ipv6 0 0 / 800 (0%) 0K / 23.6G (0%) on<br>
8 fgtest14 ipv6 5 0 / 800 (0%) 0K / 23.6G (0%) on<br>
9 fgtest20 ipv6 3 300 / 800 (37%) 12G / 31.4G (38%) on<br>
11 fgtest19 ipv6 0 0 / 800 (0%) 0K / 31.5G (0%) on<br>
-bash-4.1$ onehost show 8<br>
HOST 8 INFORMATION<br>
ID : 8<br>
NAME : fgtest14<br>
CLUSTER : ipv6<br>
STATE : MONITORED<br>
IM_MAD : kvm<br>
VM_MAD : kvm<br>
VN_MAD : dummy<br>
LAST MONITORING TIME : 07/29 09:25:45<br>
<br>
HOST SHARES<br>
TOTAL MEM : 23.6G<br>
USED MEM (REAL) : 876.4M<br>
USED MEM (ALLOCATED) : 0K<br>
TOTAL CPU : 800<br>
USED CPU (REAL) : 0<br>
USED CPU (ALLOCATED) : 0<br>
RUNNING VMS : 5<br>
<br>
LOCAL SYSTEM DATASTORE #102 CAPACITY<br>
TOTAL: : 548.8G<br>
USED: : 175.3G<br>
FREE: : 345.6G<br>
<br>
MONITORING INFORMATION<br>
ARCH="x86_64"<br>
CPUSPEED="2992"<br>
HOSTNAME="<a href="http://fgtest14.fnal.gov" target="_blank">fgtest14.fnal.gov</a>"<br>
HYPERVISOR="kvm"<br>
MODELNAME="Intel(R) Xeon(R) CPU E5450 @ 3.00GHz"<br>
NETRX="234844577"<br>
NETTX="21553126"<br>
RESERVED_CPU=""<br>
RESERVED_MEM=""<br>
VERSION="4.6.0"<br>
<br>
VIRTUAL MACHINES<br>
<br>
ID USER GROUP NAME STAT UCPU UMEM HOST TIME<br>
26 oneadmin oneadmin fgt6x4-26 runn 6 4G fgtest14 117d 19h50<br>
27 oneadmin oneadmin fgt5x4-27 runn 10 4G fgtest14 117d 17h57<br>
28 oneadmin oneadmin fgt1x1-28 runn 10 4.1G fgtest14 117d 17h00<br>
30 oneadmin oneadmin fgt5x1-30 runn 0 4G fgtest14 116d 23h50<br>
33 oneadmin oneadmin ip6sl5vda-33 runn 6 4G fgtest14 116d 19h57<br>
-----------------------------------------------------------------------------------<br>
<br>
All of this looks great, right?<br>
Just one problem: there are no VMs running on fgtest14, and<br>
there haven't been for 4 days.<br>
<br>
[root@fgtest14 ~]# virsh list<br>
Id Name State<br>
----------------------------------------------------<br>
<br>
[root@fgtest14 ~]#<br>
<br>
-------------------------------------------------------------------------<br>
Yet the monitoring reports no errors.<br>
<br>
Tue Jul 29 09:28:10 2014 [InM][D]: Host fgtest14 (8) successfully monitored.<br>
<br>
-----------------------------------------------------------------------------<br>
At the same time, there is no evidence that ONE is actually monitoring,<br>
or even trying to monitor, these five VMs, yet they are still stuck in "runn",<br>
which means I can't do a onevm restart to restart them.<br>
(The disk images of these 5 VMs are still out there on the VM host, and<br>
I would like to save and restart them if I can.)<br>
<br>
What is the remotes command that ONE 4.6 would use to monitor this host?<br>
Can I do it manually and see what output I get?<br>
<br>
Are we dealing with some kind of bug, or just a very confused system?<br>
Any help is appreciated. I have to get this sorted out before<br>
I dare deploy ONE 4.x in production.<br>
<br>
Steve Timm<br>
<br>
<br>
------------------------------------------------------------------<br>
Steven C. Timm, Ph.D <a href="tel:%28630%29%20840-8525" value="+16308408525" target="_blank">(630) 840-8525</a><br>
<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a> <a href="http://home.fnal.gov/~timm/" target="_blank">http://home.fnal.gov/~timm/</a><br>
Fermilab Scientific Computing Division, Scientific Computing Services Quad.<br>
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
</blockquote></div>