<div dir="ltr">Sorry for the noise, just saw it in the other thread...</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 30, 2014 at 5:01 PM, Ruben S. Montero <span dir="ltr"><<a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">BTW, could you paste the output of the run_probes command once it finishes?</div><div class="HOEnZb"><div class="h5">
<div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 30, 2014 at 4:58 PM, Ruben S. Montero <span dir="ltr"><<a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">This seems to be a bug: when collectd does not respond (because it is waiting for a sudo password), OpenNebula does not move the host to ERROR. The probes are designed not to start another collectd process, but we should probably check whether a running one is actually working and, if not, send the ERROR message to OpenNebula.<div>
<br></div><div>Pointer to the issue:</div><div><a href="http://dev.opennebula.org/issues/3118" target="_blank">http://dev.opennebula.org/issues/3118</a><br></div><div><br></div><div>Cheers</div></div><div>
<div><div class="gmail_extra"><br><br><div class="gmail_quote">
On Wed, Jul 30, 2014 at 4:53 PM, Steven Timm <span dir="ltr"><<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>On Wed, 30 Jul 2014, Ruben S. Montero wrote:<br>
<br>
</div><div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
1.- monitor_ds.sh may use LVM commands (vgdisplay) that need sudo access. It should be set up automatically by the opennebula node<br>
packages.<br>
<br>
2.- It is not a real daemon: the first time a host is monitored, a process is left behind to periodically send information. OpenNebula<br>
restarts it if no information is received within 3 monitor steps. Nothing needs to be set up...<br>
<br>
Cheers<br>
<br>
</blockquote>
<br></div>
On further inspection I found that this collectd was running on my nodes, and had obviously been failing up until now because sudoers was not set up correctly. But there was nothing to warn us about it. Nothing on<br>
the opennebula head node to even tell us that the information was stale.<br>
No log file on the node to show the errors we were getting. In short,<br>
it was just quietly dying and we had no idea. How can we make sure this<br>
doesn't happen again in the future?<br>
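For reference, a sudoers fragment along these lines is what unblocks the vgdisplay call (a sketch only; the file path and command list here are assumptions, and the file shipped by the opennebula node packages is the authoritative version):<br>

```
# /etc/sudoers.d/opennebula -- hypothetical sketch, not the packaged file.
# Let oneadmin run the LVM query used by monitor_ds.sh without a password,
# so the probe does not hang waiting for "[sudo] password for oneadmin:".
oneadmin ALL=(ALL) NOPASSWD: /sbin/vgdisplay
```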
<br>
Steve Timm<div><div><br>
<br>
<br>
<br>
<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
On Wed, Jul 30, 2014 at 3:50 PM, Steven Timm <<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a>> wrote:<br>
On Wed, 30 Jul 2014, Ruben S. Montero wrote:<br>
<br>
<br>
Maybe you could try to execute the monitor probes in the node, <br>
<br>
1. ssh the node<br>
2. Go to /var/tmp/one/im<br>
3. Execute run_probes kvm-probes<br>
<br>
<br>
When I do that, (using sh -x ) I get the following:<br>
<br>
-bash-4.1$ sh -x ./run_probes kvm-probes<br>
++ dirname ./run_probes<br>
+ source ./../scripts_common.sh<br>
++ export LANG=C<br>
++ LANG=C<br>
++ export<br>
PATH=/bin:/sbin:/usr/bin:/usr/krb5/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin<br>
++<br>
PATH=/bin:/sbin:/usr/bin:/usr/krb5/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin<br>
++ AWK=awk<br>
++ BASH=bash<br>
++ CUT=cut<br>
++ DATE=date<br>
++ DD=dd<br>
++ DF=df<br>
++ DU=du<br>
++ GREP=grep<br>
++ ISCSIADM=iscsiadm<br>
++ LVCREATE=lvcreate<br>
++ LVREMOVE=lvremove<br>
++ LVRENAME=lvrename<br>
++ LVS=lvs<br>
++ LN=ln<br>
++ MD5SUM=md5sum<br>
++ MKFS=mkfs<br>
++ MKISOFS=genisoimage<br>
++ MKSWAP=mkswap<br>
++ QEMU_IMG=qemu-img<br>
++ RADOS=rados<br>
++ RBD=rbd<br>
++ READLINK=readlink<br>
++ RM=rm<br>
++ SCP=scp<br>
++ SED=sed<br>
++ SSH=ssh<br>
++ SUDO=sudo<br>
++ SYNC=sync<br>
++ TAR=tar<br>
++ TGTADM=tgtadm<br>
++ TGTADMIN=tgt-admin<br>
++ TGTSETUPLUN=tgt-setup-lun-one<br>
++ TR=tr<br>
++ VGDISPLAY=vgdisplay<br>
++ VMKFSTOOLS=vmkfstools<br>
++ WGET=wget<br>
+++ uname -s<br>
++ '[' xLinux = xLinux ']'<br>
++ SED='sed -r'<br>
+++ basename ./run_probes<br>
++ SCRIPT_NAME=run_probes<br>
+ export LANG=C<br>
+ LANG=C<br>
+ HYPERVISOR_DIR=kvm-probes.d<br>
+ ARGUMENTS=kvm-probes<br>
++ dirname ./run_probes<br>
+ SCRIPTS_DIR=.<br>
+ cd .<br>
++ '[' -d kvm-probes.d ']'<br>
++ run_dir kvm-probes.d<br>
++ cd kvm-probes.d<br>
+++ ls architecture.sh collectd-client-shepherd.sh cpu.sh kvm.rb monitor_ds.sh name.sh poll.sh version.sh<br>
++ for i in '`ls *`'<br>
++ '[' -x architecture.sh ']'<br>
++ ./architecture.sh kvm-probes<br>
++ EXIT_CODE=0<br>
++ '[' x0 '!=' x0 ']'<br>
++ for i in '`ls *`'<br>
++ '[' -x collectd-client-shepherd.sh ']'<br>
++ ./collectd-client-shepherd.sh kvm-probes<br>
++ EXIT_CODE=0<br>
++ '[' x0 '!=' x0 ']'<br>
++ for i in '`ls *`'<br>
++ '[' -x cpu.sh ']'<br>
++ ./cpu.sh kvm-probes<br>
++ EXIT_CODE=0<br>
++ '[' x0 '!=' x0 ']'<br>
++ for i in '`ls *`'<br>
++ '[' -x kvm.rb ']'<br>
++ ./kvm.rb kvm-probes<br>
++ EXIT_CODE=0<br>
++ '[' x0 '!=' x0 ']'<br>
++ for i in '`ls *`'<br>
++ '[' -x monitor_ds.sh ']'<br>
++ ./monitor_ds.sh kvm-probes<br>
[sudo] password for oneadmin:<br>
<br>
and it stays hung on the password for oneadmin.<br>
<br>
What's going on?<br>
<br>
Also, you mentioned collectd: are you saying that OpenNebula 4.6 now needs to run a daemon on every single VM host?<br>
Where is it documented<br>
how to set it up?<br>
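In the meantime, one quick way to see whether that per-host collectd client is alive on a node is to look for its process (a sketch; the pattern "collectd-client" is an assumption taken from the probe file name collectd-client-shepherd.sh visible in the run_probes output):<br>

```shell
# Look for the monitoring process the probes leave behind on the node;
# print a clear message either way so the check can be scripted.
pgrep -f collectd-client && echo "collectd client running" \
                         || echo "collectd client NOT running"
```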
<br>
Steve<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
Make sure you do not have another host using the same hostname fgtest14 with a running collectd process.<br>
<br>
On Jul 29, 2014 4:35 PM, "Steven Timm" <<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a>> wrote:<br>
<br>
I am still trying to debug a nasty monitoring inconsistency.<br>
<br>
-bash-4.1$ onevm list | grep fgtest14<br>
26 oneadmin oneadmin fgt6x4-26 runn 6 4G fgtest14 117d 19h50<br>
27 oneadmin oneadmin fgt5x4-27 runn 10 4G fgtest14 117d 17h57<br>
28 oneadmin oneadmin fgt1x1-28 runn 10 4.1G fgtest14 117d 16h59<br>
30 oneadmin oneadmin fgt5x1-30 runn 0 4G fgtest14 116d 23h50<br>
33 oneadmin oneadmin ip6sl5vda-33 runn 6 4G fgtest14 116d 19h57<br>
-bash-4.1$ onehost list<br>
ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT<br>
3 fgtest11 ipv6 0 0 / 400 (0%) 0K / 15.7G (0%) on<br>
4 fgtest12 ipv6 0 0 / 400 (0%) 0K / 15.7G (0%) on<br>
7 fgtest13 ipv6 0 0 / 800 (0%) 0K / 23.6G (0%) on<br>
8 fgtest14 ipv6 5 0 / 800 (0%) 0K / 23.6G (0%) on<br>
9 fgtest20 ipv6 3 300 / 800 (37%) 12G / 31.4G (38%) on<br>
11 fgtest19 ipv6 0 0 / 800 (0%) 0K / 31.5G (0%) on<br>
-bash-4.1$ onehost show 8<br>
HOST 8 INFORMATION<br>
ID : 8<br>
NAME : fgtest14<br>
CLUSTER : ipv6<br>
STATE : MONITORED<br>
IM_MAD : kvm<br>
VM_MAD : kvm<br>
VN_MAD : dummy<br>
LAST MONITORING TIME : 07/29 09:25:45<br>
<br>
HOST SHARES<br>
TOTAL MEM : 23.6G<br>
USED MEM (REAL) : 876.4M<br>
USED MEM (ALLOCATED) : 0K<br>
TOTAL CPU : 800<br>
USED CPU (REAL) : 0<br>
USED CPU (ALLOCATED) : 0<br>
RUNNING VMS : 5<br>
<br>
LOCAL SYSTEM DATASTORE #102 CAPACITY<br>
TOTAL: : 548.8G<br>
USED: : 175.3G<br>
FREE: : 345.6G<br>
<br>
MONITORING INFORMATION<br>
ARCH="x86_64"<br>
CPUSPEED="2992"<br>
HOSTNAME="<a href="http://fgtest14.fnal.gov" target="_blank">fgtest14.fnal.gov</a>"<br>
HYPERVISOR="kvm"<br>
MODELNAME="Intel(R) Xeon(R) CPU E5450 @ 3.00GHz"<br>
NETRX="234844577"<br>
NETTX="21553126"<br>
RESERVED_CPU=""<br>
RESERVED_MEM=""<br>
VERSION="4.6.0"<br>
<br>
VIRTUAL MACHINES<br>
<br>
ID USER GROUP NAME STAT UCPU UMEM HOST TIME<br>
26 oneadmin oneadmin fgt6x4-26 runn 6 4G fgtest14 117d 19h50<br>
27 oneadmin oneadmin fgt5x4-27 runn 10 4G fgtest14 117d 17h57<br>
28 oneadmin oneadmin fgt1x1-28 runn 10 4.1G fgtest14 117d 17h00<br>
30 oneadmin oneadmin fgt5x1-30 runn 0 4G fgtest14 116d 23h50<br>
33 oneadmin oneadmin ip6sl5vda-33 runn 6 4G fgtest14 116d 19h57<br>
-----------------------------------------------------------------------------------<br>
<br>
All of this looks great, right?<br>
Just one problem: there are no VMs running on fgtest14, and there<br>
haven't been for 4 days.<br>
<br>
[root@fgtest14 ~]# virsh list<br>
Id Name State<br>
----------------------------------------------------<br>
<br>
[root@fgtest14 ~]#<br>
<br>
-------------------------------------------------------------------------<br>
Yet the monitoring reports no errors.<br>
<br>
Tue Jul 29 09:28:10 2014 [InM][D]: Host fgtest14 (8) successfully monitored.<br>
<br>
-----------------------------------------------------------------------------<br>
At the same time, there is no evidence that ONE is actually trying, or succeeding, to monitor these five VMs, yet they are still stuck in "runn",<br>
which means I can't do a onevm restart to restart them.<br>
(The VM images of these 5 VMs are still out there on the VM host, and<br>
I would like to save and restart them if I can.)<br>
<br>
What is the remotes command that ONE 4.6 would use to monitor this host?<br>
Can I do it manually and see what output I get?<br>
<br>
Are we dealing with some kind of a bug, or just a very confused system?<br>
Any help is appreciated. I have to get this sorted out before<br>
I dare deploy one4.x in production.<br>
<br>
Steve Timm<br>
<br>
<br>
------------------------------------------------------------------<br>
Steven C. Timm, Ph.D <a href="tel:%28630%29%20840-8525" value="+16308408525" target="_blank">(630) 840-8525</a><br>
<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a> <a href="http://home.fnal.gov/~timm/" target="_blank">http://home.fnal.gov/~timm/</a><br>
Fermilab Scientific Computing Division, Scientific Computing Services Quad.<br>
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br>
<br>
<br>
<br>
------------------------------------------------------------------<br>
Steven C. Timm, Ph.D <a href="tel:%28630%29%20840-8525" value="+16308408525" target="_blank">(630) 840-8525</a><br>
<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a> <a href="http://home.fnal.gov/~timm/" target="_blank">http://home.fnal.gov/~timm/</a><br>
Fermilab Scientific Computing Division, Scientific Computing Services Quad.<br>
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing<br>
<br>
<br>
<br>
<br>
--<br>
-- <br>
Ruben S. Montero, PhD<br>
Project co-Lead and Chief Architect OpenNebula - Flexible Enterprise Cloud Made Simple<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula<br>
<br>
<br>
</blockquote>
<br>
------------------------------------------------------------------<br>
Steven C. Timm, Ph.D <a href="tel:%28630%29%20840-8525" value="+16308408525" target="_blank">(630) 840-8525</a><br>
<a href="mailto:timm@fnal.gov" target="_blank">timm@fnal.gov</a> <a href="http://home.fnal.gov/~timm/" target="_blank">http://home.fnal.gov/~timm/</a><br>
Fermilab Scientific Computing Division, Scientific Computing Services Quad.<br>
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr"><div><div>-- <br></div></div>Ruben S. Montero, PhD<br>Project co-Lead and Chief Architect<div>
OpenNebula - Flexible Enterprise Cloud Made Simple<br><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula</div>
</div>
</div>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr"><div><div>-- <br></div></div>Ruben S. Montero, PhD<br>Project co-Lead and Chief Architect<div>OpenNebula - Flexible Enterprise Cloud Made Simple<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula</div></div>
</div>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr"><div><div>-- <br></div></div>Ruben S. Montero, PhD<br>Project co-Lead and Chief Architect<div>OpenNebula - Flexible Enterprise Cloud Made Simple<br>
<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula</div></div>
</div>