<div dir="ltr">Hi Rhesa,<div><br></div><div>Maybe we are just trying to submit to many VMs at the same time for your system, so cLVM get stuck. Are you experience this when deploying multiple VMs? If so we can either reduce the number of threads of the transfer driver to serialize the operations or tweak the scheduler to be less aggressive.</div>

Cheers

Ruben

On Thu, Feb 27, 2014 at 11:05 AM, Rhesa Mahendra <rhesa@lintasmediadanawa.com> wrote:

<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
Ruben,

Thanks for your answer. Once again: why does the monitor command (../lvm/monitor, i.e. vgdisplay) take so long to return the LVM monitoring info? Our frontend ends up with many of these processes, and everything gets stuck. How can we fix this? Thanks,

Rhesa.
On 02/27/2014 05:02 AM, Ruben S. Montero wrote:
      <div dir="ltr">Hi, 
        <div><br>
        </div>
        <div>Yes, given the use of clvm in OpenNebula I think we are
          safe without fencing. I cannot think of a  split-brain
          condition where fencing would be needed in our case.</div>
        <div><br>
        </div>
        <div>Cheers</div>
        <div><br>
        </div>
        <div>Ruben</div>
      </div>
      <div class="gmail_extra"><br>
        <br>
        <div class="gmail_quote">On Thu, Feb 27, 2014 at 1:23 AM, Rhesa
          Mahendra <span dir="ltr"><<a href="mailto:rhesa@lintasmediadanawa.com" target="_blank">rhesa@lintasmediadanawa.com</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div dir="auto">
              <div>Ruben,</div>
              <div><br>
              </div>
I get an error in fencing: the fencing agent is not working properly, so if one node cannot reach its fencing device, the whole cluster gets stuck. I read on a forum that this fence agent can connect over IPMI. I think OpenNebula just needs cLVM, so I decided to run the cluster without fencing; I hope everything will be fine. Thanks.
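For reference, a fencedevice entry with its IPMI parameters would look roughly like this (the address and credentials below are placeholders, not our real values):

    <!-- sketch: fence_ipmilan needs the BMC address and credentials of each node -->
    <fencedevice name="idc-conode001" agent="fence_ipmilan"
                 ipaddr="10.0.0.101" login="admin" passwd="secret" lanplus="1"/>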
Regards,
Rhesa Mahendra.
On 26 Feb 2014, at 23:09, "Ruben S. Montero" <rsmontero@opennebula.org> wrote:
                      <div dir="ltr">
                        <div>Hi Rhesa</div>
                        <div><br>
                        </div>
I agree that the problem is related to LVM; probably clvmd cannot acquire its locks through DLM. Since the cluster has been running for 3-4 days, I assume it is not misconfigured. I have seen this before in connection with networking problems (usually filtered multicast traffic). Can you double-check that iptables is allowing all the required cluster traffic?
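On every node, that means something along these lines (stock RHEL 6 cman/corosync port numbers; restricting the rules to the cluster interface and subnet is up to your network layout):

    # corosync/cman cluster traffic, UDP 5404-5405 (includes the multicast heartbeat)
    iptables -A INPUT -p udp -m multiport --dports 5404,5405 -j ACCEPT
    # DLM inter-node locking traffic, TCP 21064
    iptables -A INPUT -p tcp --dport 21064 -j ACCEPT
    # IGMP, needed for multicast group membership
    iptables -A INPUT -p igmp -j ACCEPT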
Also, what is the output of clustat during the failure?

Cheers

Ruben
                      <div class="gmail_extra"><br>
                        <br>
                        <div class="gmail_quote">
                          On Wed, Feb 26, 2014 at 3:50 AM, Rhesa
                          Mahendra <span dir="ltr"><<a href="mailto:rhesa@lintasmediadanawa.com" target="_blank">rhesa@lintasmediadanawa.com</a>></span>
                          wrote:<br>
                          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Guys,

I am building a production setup on SAN storage, so I think OpenNebula needs LVM/cLVM for it; I have been working on this for 3 months. But after I created 50 VMs from one template across 3 nodes, LVM/cLVM stopped working properly: the VMs are still in PROLOG state after two days. Please see:

    0:00 bash -c if [ -x "/var/tmp/one/im/run_probes" ]; then /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 idc-conode001; else
14447 ?        S      0:00 /bin/bash /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 idc-conode001
14454 ?        S      0:00 /bin/bash /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 idc-conode001
14455 ?        S      0:00 /bin/bash /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 idc-conode001
14460 ?        S      0:00 /bin/bash ./collectd-client_control.sh kvm /var/lib/one//datastores 4124 20 0 idc-conode001
14467 ?        S      0:00 /bin/bash /var/tmp/one/im/kvm.d/../run_probes kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
14474 ?        S      0:00 /bin/bash /var/tmp/one/im/kvm.d/../run_probes kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
14475 ?        S      0:00 /bin/bash /var/tmp/one/im/kvm.d/../run_probes kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
14498 ?        S      0:00 /bin/bash ./monitor_ds.sh kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
14525 ?        S      0:00 /bin/bash ./monitor_ds.sh kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
14526 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-0
14527 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-0
15417 ?        S      0:00 [kdmflush]
15452 ?        Ss     0:00 sshd: oneadmin [priv]
15454 ?        S      0:00 sshd: oneadmin@notty
15455 ?        Ss     0:00 bash -s
15510 ?        Ss     0:00 sshd: oneadmin [priv]
15512 ?        S      0:00 sshd: oneadmin@notty
15513 ?        Ss     0:00 sh -s
15527 ?        S      0:00 sudo lvremove -f /dev/vg-one/lv-one-179-596-0
15528 ?        S      0:00 lvremove -f /dev/vg-one/lv-one-179-596-0

I use locking type 3 (see the lvm.conf sketch after cluster.conf below). I have 3 nodes and 1 frontend, I use cman, and this is the cluster.conf configuration:
                            <?xml version="1.0"?><br>
                            <cluster name="idccluster"
                            config_version="9"><br>
                            <br>
                              <clusternodes><br>
                              <clusternode name="idc-vcoz01"
                            votes="1"
                            nodeid="1"><fence><method
                            name="single"><device
                            name="idc-vcoz01"/></method></fence></clusternode><clusternode
                            name="idc-conode001" votes="1"
                            nodeid="2"><fence><method
                            name="single"><device
                            name="idc-conode001"/></method></fence></clusternode><clusternode
                            name="idc-conode002" votes="1"
                            nodeid="3"><fence><method
                            name="single"><device
                            name="idc-conode002"/></method></fence></clusternode><clusternode
                            name="idc-conode003" votes="1"
                            nodeid="4"><fence><method
                            name="single"><device
                            name="idc-conode003"/></method></fence></clusternode></clusternodes><br>
                            <br>
                              <fencedevices><br>
                              <fencedevice name="idc-vcoz01"
                            agent="fence_ipmilan"/><fencedevice
                            name="idc-conode001"
                            agent="fence_ipmilan"/><fencedevice
                            name="idc-conode002"
                            agent="fence_ipmilan"/><fencedevice
                            name="idc-conode003"
                            agent="fence_ipmilan"/></fencedevices><br>
                            <br>
                              <rm><br>
                                <failoverdomains/><br>
                                <resources/><br>
                              </rm><br>
                            </cluster><br>
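(For reference, locking type 3 corresponds to these lvm.conf lines on every node; a sketch of the relevant settings only:)

    # /etc/lvm/lvm.conf -- cluster-aware locking through clvmd
    locking_type = 3
    fallback_to_local_locking = 0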

I share /etc/cluster/cluster.conf over NFS. This is the output of cman_tool nodes:
Node  Sts   Inc   Joined               Name
   1   M    304   2014-02-20 16:08:37  idc-vcoz01
   2   M    288   2014-02-20 16:08:37  idc-conode001
   3   M    304   2014-02-20 16:08:37  idc-conode002
   4   M    312   2014-02-26 09:44:04  idc-conode003

I think these VMs cannot run because they wait too long on lvcreate or vgdisplay; see this:
30818 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30819 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30820 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30821 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30824 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30825 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30827 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30842 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30843 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30844 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30845 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30846 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30847 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30852 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30853 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
30857 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1

or:

30859 ?        S      0:00 sudo lvcreate -L20480.00M -n lv-one-179-610-0 vg-one
30860 ?        S      0:00 lvcreate -L20480.00M -n lv-one-179-610-0 vg-one
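When it hangs like this, the cluster locking state can be checked with, for example:

    # is the cluster quorate, and are all nodes members?
    cman_tool status
    # which DLM lockspaces exist (clvmd should hold one)
    dlm_tool ls
    # is clvmd itself still responding?
    service clvmd status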
If I restart all the servers and all the services, everything is fine, but after 3 or 4 days the problem comes back. This infrastructure is meant for production, and I must find out how to fix it first; I am not ready to take this configuration into production as it is. Please help me, and thanks.

Rhesa.
_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

</blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr"><div><div>-- <br></div></div>Ruben S. Montero, PhD<br>Project co-Lead and Chief Architect<div>OpenNebula - Flexible Enterprise Cloud Made Simple<br>

<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula</div></div>
</div>