<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div>Ruben,</div><div><br></div><div>I am on 4.4; I will try that. Many thanks for your answer.<br><br>Regards,<div>Rhesa Mahendra.</div></div><div><br>On 28 Feb 2014, at 16:55, "Ruben S. Montero" <<a href="mailto:rsmontero@opennebula.org">rsmontero@opennebula.org</a>> wrote:<br><br></div><blockquote type="cite"><div><div dir="ltr">OK, I see: it is about the monitoring frequency of the datastores. Maybe you could increase the value of MONITOR_INTERVAL in oned.conf; if you are on 4.4 this should not interfere with the monitoring intervals for hosts and VMs.<div>
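For example (a sketch; the exact attribute name and its default in your oned.conf may differ, so please double-check the file before editing):<br>
<br>
# /etc/one/oned.conf -- seconds between datastore monitoring runs (illustrative value)<br>
MONITOR_INTERVAL = 600<br>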

<br></div><div><br></div><div>Cheers</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Feb 28, 2014 at 4:16 AM, Rhesa Mahendra <span dir="ltr"><<a href="mailto:rhesa@lintasmediadanawa.com" target="_blank">rhesa@lintasmediadanawa.com</a>></span> wrote:<br>

<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
  
    
  
  <div bgcolor="#FFFFFF" text="#000000">
    Ruben,<br>
    <br>
    The VMs are running normally. I just created 7 VMs, but during the
    process I see many /lvm/monitor processes, see this:<br>
    <br>
     8645 ?        SN     0:00 sh -c
    /var/lib/one/remotes/datastore/lvm/monitor
    PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMjU8L0lEPjxVSUQ+MzwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5yaGVzYTwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5MVk0tU1RPUkU8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD5sdm08L0RTX01BRD48VE1fTUFEPmx2bTwvVE1fTUFEPjxCQVNFX1BBVEg+L3Zhci9saWIvb25lLy9kYXRhc3RvcmVzLzEyNTwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4yPC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+LTE8L0NMVVNURVJfSUQ+PENMVVNURVI+PC9DTFVTVEVSPjxUT1RBTF9NQj4xMDk5NTEwODwvVE9UQUxfTUI+PEZSRUVfTUI+MTA3MDUxOTM8L0ZSRUVfTUI+PFVTRURfTUI+Mjg5OTE0PC9VU0VEX01CPjxJTUFHRVM+PElEPjE4MTwvSUQ+PElEPjE4MjwvSUQ+PElEPjE4NDwvSUQ+PElEPjE4NTwvSUQ+PElEPjE4NjwvSUQ+PElEPjE4NzwvSUQ+PElEPjE4ODwvSUQ+PElEPjE4OTwvSUQ+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDRE
 FUQVtsb2N
hbGhvc3RdXT48L0JSSURHRV9MSVNUPjxDTE9ORV9UQVJHRVQ+PCFbQ0RBVEFbU0VMRl1dPjwvQ0xPTkVfVEFSR0VUPjxEQVRBU1RPUkVfQ0FQQUNJVFlfQ0hFQ0s+PCFbQ0RBVEFbbm9dXT48L0RBVEFTVE9SRV9DQVBBQ0lUWV9DSEVDSz48RElTS19UWVBFPjwhW0NEQVRBW0JMT0NLXV0+PC9ESVNLX1RZUEU+PERTX01BRD48IVtDREFUQVtsdm1dXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW05PTkVdXT48L0xOX1RBUkdFVD48VE1fTUFEPjwhW0NEQVRBW2x2bV1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjxWR19OQU1FPjwhW0NEQVRBW3ZnLW9uZV1dPjwvVkdfTkFNRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVRBPg==
    125 ; echo ExitCode: $? 1>&2<br>
     8647 ?        SN     0:00 /bin/bash
    /var/lib/one/remotes/datastore/lvm/monitor
    PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMjU8L0lEPjxVSUQ+MzwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5yaGVzYTwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5MVk0tU1RPUkU8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD5sdm08L0RTX01BRD48VE1fTUFEPmx2bTwvVE1fTUFEPjxCQVNFX1BBVEg+L3Zhci9saWIvb25lLy9kYXRhc3RvcmVzLzEyNTwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4yPC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+LTE8L0NMVVNURVJfSUQ+PENMVVNURVI+PC9DTFVTVEVSPjxUT1RBTF9NQj4xMDk5NTEwODwvVE9UQUxfTUI+PEZSRUVfTUI+MTA3MDUxOTM8L0ZSRUVfTUI+PFVTRURfTUI+Mjg5OTE0PC9VU0VEX01CPjxJTUFHRVM+PElEPjE4MTwvSUQ+PElEPjE4MjwvSUQ+PElEPjE4NDwvSUQ+PElEPjE4NTwvSUQ+PElEPjE4NjwvSUQ+PElEPjE4NzwvSUQ+PElEPjE4ODwvSUQ+PElEPjE4OTwvSUQ+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDRE
 FUQVtsb2N
hbGhvc3RdXT48L0JSSURHRV9MSVNUPjxDTE9ORV9UQVJHRVQ+PCFbQ0RBVEFbU0VMRl1dPjwvQ0xPTkVfVEFSR0VUPjxEQVRBU1RPUkVfQ0FQQUNJVFlfQ0hFQ0s+PCFbQ0RBVEFbbm9dXT48L0RBVEFTVE9SRV9DQVBBQ0lUWV9DSEVDSz48RElTS19UWVBFPjwhW0NEQVRBW0JMT0NLXV0+PC9ESVNLX1RZUEU+PERTX01BRD48IVtDREFUQVtsdm1dXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW05PTkVdXT48L0xOX1RBUkdFVD48VE1fTUFEPjwhW0NEQVRBW2x2bV1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjxWR19OQU1FPjwhW0NEQVRBW3ZnLW9uZV1dPjwvVkdfTkFNRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVRBPg==
    125<br>
     8699 ?        SN     0:00 /bin/bash
    /var/lib/one/remotes/datastore/lvm/monitor
    PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMjU8L0lEPjxVSUQ+MzwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5yaGVzYTwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5MVk0tU1RPUkU8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD5sdm08L0RTX01BRD48VE1fTUFEPmx2bTwvVE1fTUFEPjxCQVNFX1BBVEg+L3Zhci9saWIvb25lLy9kYXRhc3RvcmVzLzEyNTwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4yPC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+LTE8L0NMVVNURVJfSUQ+PENMVVNURVI+PC9DTFVTVEVSPjxUT1RBTF9NQj4xMDk5NTEwODwvVE9UQUxfTUI+PEZSRUVfTUI+MTA3MDUxOTM8L0ZSRUVfTUI+PFVTRURfTUI+Mjg5OTE0PC9VU0VEX01CPjxJTUFHRVM+PElEPjE4MTwvSUQ+PElEPjE4MjwvSUQ+PElEPjE4NDwvSUQ+PElEPjE4NTwvSUQ+PElEPjE4NjwvSUQ+PElEPjE4NzwvSUQ+PElEPjE4ODwvSUQ+PElEPjE4OTwvSUQ+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDRE
 FUQVtsb2N
hbGhvc3RdXT48L0JSSURHRV9MSVNUPjxDTE9ORV9UQVJHRVQ+PCFbQ0RBVEFbU0VMRl1dPjwvQ0xPTkVfVEFSR0VUPjxEQVRBU1RPUkVfQ0FQQUNJVFlfQ0hFQ0s+PCFbQ0RBVEFbbm9dXT48L0RBVEFTVE9SRV9DQVBBQ0lUWV9DSEVDSz48RElTS19UWVBFPjwhW0NEQVRBW0JMT0NLXV0+PC9ESVNLX1RZUEU+PERTX01BRD48IVtDREFUQVtsdm1dXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW05PTkVdXT48L0xOX1RBUkdFVD48VE1fTUFEPjwhW0NEQVRBW2x2bV1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjxWR19OQU1FPjwhW0NEQVRBW3ZnLW9uZV1dPjwvVkdfTkFNRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVRBPg==
    125<br>
    <br>
    And see this for vgdisplay:<br>
    <br>
     8711 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     8713 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     8882 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     8884 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9014 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9016 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9179 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9181 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9351 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9353 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9532 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9534 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9667 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9669 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9833 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
     9835 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10021 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10023 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10176 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10178 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10329 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10331 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10492 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10494 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10668 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10670 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10846 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10848 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10997 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    10999 ?        S      0:00 vgdisplay -o vg_size --units M -C
    --noheadings --nosuffix vg-one<br>
    <br>
    <br>
    The VMs run normally, but sometimes the lvm/monitor check takes so
    long to return the monitoring information that these processes
    queue up, and every other operation (create VM, create image) gets
    stuck, sometimes for a whole night. How can I fix this? Thanks.<span class="HOEnZb"><font color="#888888"><br>
    <br>
    Rhesa.  <br></font></span><div><div class="h5">
    <div>On 02/27/2014 05:10 AM, Ruben S.
      Montero wrote:<br>
    </div>
    <blockquote type="cite">
      <div dir="ltr">Hi Rhesa,
        <div><br>
        </div>
        <div>Maybe we are just submitting too many VMs at the same time
          for your system, so cLVM gets stuck. Are you experiencing
          this when deploying multiple VMs? If so, we can either reduce
          the number of transfer driver threads to serialize the
          operations, or tweak the scheduler to be less aggressive.</div>
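        <div><br>
        </div>
        <div>For instance (illustrative values only; keep the rest of
          each line as it is in your files):<br>
          <br>
          # /etc/one/oned.conf -- fewer transfer driver threads, e.g.<br>
          #   TM_MAD = [ executable = "one_tm", arguments = "-t 5 -d ..." ]<br>
          # instead of the stock value (typically -t 15)<br>
          <br>
          # /etc/one/sched.conf -- dispatch fewer VMs per scheduling cycle<br>
          MAX_DISPATCH = 5<br>
          MAX_HOST = 1<br>
        </div>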
        <div><br>
        </div>
        <div>Cheers</div>
        <div><br>
        </div>
        <div>Ruben </div>
      </div>
      <div class="gmail_extra"><br>
        <br>
        <div class="gmail_quote">On Thu, Feb 27, 2014 at 11:05 AM, Rhesa
          Mahendra <span dir="ltr"><<a href="mailto:rhesa@lintasmediadanawa.com" target="_blank">rhesa@lintasmediadanawa.com</a>></span>
          wrote:<br>
          <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
            <div bgcolor="#FFFFFF" text="#000000"> Ruben,<br>
              <br>
              Thanks for your answer. Once again: why does the
              ../lvm/monitor command (vgdisplay) take so long to get the
              LVM monitoring information? Our front-end ends up with
              many of these processes and everything gets stuck. How can
              I fix this? Thanks,<br>
              <br>
              Rhesa.
              <div>
                <div><br>
                   <br>
                  <div>On 02/27/2014 05:02 AM, Ruben S. Montero wrote:<br>
                  </div>
                  <blockquote type="cite">
                    <div dir="ltr">Hi, 
                      <div><br>
                      </div>
                      <div>Yes, given the use of clvm in OpenNebula I
                        think we are safe without fencing. I cannot
                        think of a  split-brain condition where fencing
                        would be needed in our case.</div>
                      <div><br>
                      </div>
                      <div>Cheers</div>
                      <div><br>
                      </div>
                      <div>Ruben</div>
                    </div>
                    <div class="gmail_extra"><br>
                      <br>
                      <div class="gmail_quote">On Thu, Feb 27, 2014 at
                        1:23 AM, Rhesa Mahendra <span dir="ltr"><<a href="mailto:rhesa@lintasmediadanawa.com" target="_blank">rhesa@lintasmediadanawa.com</a>></span>
                        wrote:<br>
                        <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
                          <div dir="auto">
                            <div>Ruben,</div>
                            <div><br>
                            </div>
                            <div>I get an error with fencing: the
                              fencing agent is not working properly, so
                              if one node cannot reach the fencing
                              device the whole cluster gets stuck. I
                              read on a forum that this fence agent can
                              connect to IPMI. I think OpenNebula only
                              needs clvm, so I decided to run the
                              cluster without fencing. I hope everything
                              is fine, thanks.<br>
                              <br>
                              Regards,
                              <div>Rhesa Mahendra.</div>
                            </div>
                            <div>
                              <div>
                                <div><br>
                                  On 26 Feb 2014, at 23:09, "Ruben S.
                                  Montero" <<a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>>

                                  wrote:<br>
                                  <br>
                                </div>
                                <blockquote type="cite">
                                  <div>
                                    <div dir="ltr">
                                      <div>Hi Rhesa</div>
                                      <div><br>
                                      </div>
                                      <div>I agree that the problem is
                                        related to LVM; probably clvmd
                                        cannot acquire locks through
                                        DLM. Since the cluster runs fine
                                        for 3-4 days, I assume it is not
                                        misconfigured. I have seen this
                                        before with networking problems
                                        (usually multicast traffic being
                                        filtered). Can you double-check
                                        that iptables is allowing all
                                        the required cluster traffic?</div>
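                                      <div><br>
                                      </div>
                                      <div>Something along these lines
                                        on each node (ports are the
                                        usual RHEL cluster defaults,
                                        please check them against your
                                        distribution before applying):<br>
                                        <br>
                                        # corosync/cman membership (multicast, UDP)<br>
                                        iptables -A INPUT -p udp --dport 5404:5405 -j ACCEPT<br>
                                        # DLM (used by clvmd)<br>
                                        iptables -A INPUT -p tcp --dport 21064 -j ACCEPT<br>
                                        # and do not filter multicast itself<br>
                                        iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT<br>
                                      </div>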
                                      <div><br>
                                      </div>
                                      <div>Also, what is the output of
                                        clustat during the failure?</div>
                                      <div><br>
                                      </div>
                                      <div><br>
                                      </div>
                                      <div>Cheers</div>
                                      <div><br>
                                      </div>
                                      <div>Ruben</div>
                                    </div>
                                    <div class="gmail_extra"><br>
                                      <br>
                                      <div class="gmail_quote"> On Wed,
                                        Feb 26, 2014 at 3:50 AM, Rhesa
                                        Mahendra <span dir="ltr"><<a href="mailto:rhesa@lintasmediadanawa.com" target="_blank">rhesa@lintasmediadanawa.com</a>></span>
                                        wrote:<br>
                                        <blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"> Guys,<br>
                                          <br>
                                          I am building a production
                                          setup on SAN storage, so I
                                          think OpenNebula needs
                                          LVM/CLVM for this. I have been
                                          working on it for 3 months,
                                          but after I create 50 VMs from
                                          one template across 3 nodes,
                                          LVM/CLVM stops working
                                          properly and the VMs are still
                                          in PROLOG after two days.
                                          Please see:<br>
                                          <br>
                                          <br>
                                          0:00 bash -c if [ -x
                                          "/var/tmp/one/im/run_probes"
                                          ]; then
                                          /var/tmp/one/im/run_probes kvm
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001; else<br>
                                          14447 ?        S      0:00
                                          /bin/bash
                                          /var/tmp/one/im/run_probes kvm
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001<br>
                                          14454 ?        S      0:00
                                          /bin/bash
                                          /var/tmp/one/im/run_probes kvm
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001<br>
                                          14455 ?        S      0:00
                                          /bin/bash
                                          /var/tmp/one/im/run_probes kvm
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001<br>
                                          14460 ?        S      0:00
                                          /bin/bash
                                          ./collectd-client_control.sh
                                          kvm /var/lib/one//datastores
                                          4124 20 0 idc-conode001<br>
                                          14467 ?        S      0:00
                                          /bin/bash
                                          /var/tmp/one/im/kvm.d/../run_probes
                                          kvm-probes
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001<br>
                                          14474 ?        S      0:00
                                          /bin/bash
                                          /var/tmp/one/im/kvm.d/../run_probes
                                          kvm-probes
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001<br>
                                          14475 ?        S      0:00
                                          /bin/bash
                                          /var/tmp/one/im/kvm.d/../run_probes
                                          kvm-probes
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001<br>
                                          14498 ?        S      0:00
                                          /bin/bash ./monitor_ds.sh
                                          kvm-probes
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001<br>
                                          14525 ?        S      0:00
                                          /bin/bash ./monitor_ds.sh
                                          kvm-probes
                                          /var/lib/one//datastores 4124
                                          20 0 idc-conode001<br>
                                          14526 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-0<br>
                                          14527 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-0<br>
                                          15417 ?        S      0:00
                                          [kdmflush]<br>
                                          15452 ?        Ss     0:00
                                          sshd: oneadmin [priv]<br>
                                          15454 ?        S      0:00
                                          sshd: oneadmin@notty<br>
                                          15455 ?        Ss     0:00
                                          bash -s<br>
                                          15510 ?        Ss     0:00
                                          sshd: oneadmin [priv]<br>
                                          15512 ?        S      0:00
                                          sshd: oneadmin@notty<br>
                                          15513 ?        Ss     0:00 sh
                                          -s<br>
                                          15527 ?        S      0:00
                                          sudo lvremove -f
                                          /dev/vg-one/lv-one-179-596-0<br>
                                          15528 ?        S      0:00
                                          lvremove -f
                                          /dev/vg-one/lv-one-179-596-0<br>
                                          <br>
                                          <br>
                                          I use locking type 3, I have 3
                                          nodes and 1 front-end, and I
                                          use cman. This is the
                                          cluster.conf configuration:<br>
                                          <br>
                                          <?xml version="1.0"?><br>
                                          <cluster name="idccluster"
                                          config_version="9"><br>
                                          <br>
                                            <clusternodes><br>
                                              <clusternode name="idc-vcoz01" votes="1" nodeid="1"><br>
                                                <fence><method name="single"><device name="idc-vcoz01"/></method></fence><br>
                                              </clusternode><br>
                                              <clusternode name="idc-conode001" votes="1" nodeid="2"><br>
                                                <fence><method name="single"><device name="idc-conode001"/></method></fence><br>
                                              </clusternode><br>
                                              <clusternode name="idc-conode002" votes="1" nodeid="3"><br>
                                                <fence><method name="single"><device name="idc-conode002"/></method></fence><br>
                                              </clusternode><br>
                                              <clusternode name="idc-conode003" votes="1" nodeid="4"><br>
                                                <fence><method name="single"><device name="idc-conode003"/></method></fence><br>
                                              </clusternode><br>
                                            </clusternodes><br>
                                            <br>
                                            <fencedevices><br>
                                              <fencedevice name="idc-vcoz01" agent="fence_ipmilan"/><br>
                                              <fencedevice name="idc-conode001" agent="fence_ipmilan"/><br>
                                              <fencedevice name="idc-conode002" agent="fence_ipmilan"/><br>
                                              <fencedevice name="idc-conode003" agent="fence_ipmilan"/><br>
                                            </fencedevices><br>
                                          <br>
                                            <rm><br>
                                              <failoverdomains/><br>
                                              <resources/><br>
                                            </rm><br>
                                          </cluster><br>
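                                          <br>
                                          For reference, locking type 3
                                          is the clustered locking set
                                          in /etc/lvm/lvm.conf on every
                                          node, roughly like this (a
                                          sketch, only the relevant
                                          lines):<br>
                                          <br>
                                          global {<br>
                                              locking_type = 3    # cluster-wide locking via clvmd/DLM<br>
                                              fallback_to_local_locking = 0    # often disabled so lock failures are not silently masked<br>
                                          }<br>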
                                          <br>
                                          I share
                                          /etc/cluster/cluster.conf over
                                          NFS. This is the cman_tool
                                          output:<br>
                                          <br>
                                          Node  Sts   Inc   Joined               Name<br>
                                             1   M    304   2014-02-20 16:08:37  idc-vcoz01<br>
                                             2   M    288   2014-02-20 16:08:37  idc-conode001<br>
                                             3   M    304   2014-02-20 16:08:37  idc-conode002<br>
                                             4   M    312   2014-02-26 09:44:04  idc-conode003<br>
                                          <br>
                                          I think these VMs cannot start
                                          because they wait so long for
                                          lvcreate or vgdisplay, see
                                          this:<br>
                                          <br>
                                          30818 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30819 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30820 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30821 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30824 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30825 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30827 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30842 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30843 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30844 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30845 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30846 ?        S      0:00
                                          sudo vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30847 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30852 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30853 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          30857 ?        S      0:00
                                          vgdisplay --separator :
                                          --units m -o vg_size,vg_free
                                          --nosuffix --noheadings -C
                                          vg-one-1<br>
                                          <br>
                                          <br>
                                          or :<br>
                                          <br>
                                          <br>
                                          30859 ?        S      0:00
                                          sudo lvcreate -L20480.00M -n
                                          lv-one-179-610-0 vg-one<br>
                                          30860 ?        S      0:00
                                          lvcreate -L20480.00M -n
                                          lv-one-179-610-0 vg-one<br>
                                          <br>
                                          If I restart all the servers
                                          and services, everything is
                                          fine, but after 3 or 4 days
                                          the problem comes back.<br>
                                          This infrastructure is meant
                                          for production, and I must
                                          find out how to fix this; I am
                                          not ready to put this
                                          configuration into production
                                          as it is. Please help me, and
                                          thanks.<br>
                                          <br>
                                          Rhesa.<br>
_______________________________________________<br>
                                          Users mailing list<br>
                                          <a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
                                          <a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
                                        </blockquote>
                                      </div>
                                      <br>
                                      <br clear="all">
                                      <div><br>
                                      </div>
                                      -- <br>
                                      <div dir="ltr">
                                        <div>
                                          <div>-- <br>
                                          </div>
                                        </div>
                                        Ruben S. Montero, PhD<br>
                                        Project co-Lead and Chief
                                        Architect
                                        <div>OpenNebula - Flexible
                                          Enterprise Cloud Made Simple<br>
                                          <a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a>
                                          | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>
                                          | @OpenNebula</div>
                                      </div>
                                    </div>
                                  </div>
                                </blockquote>
                              </div>
                            </div>
                          </div>
                        </blockquote>
                      </div>
                      <br>
                      <br clear="all">
                      <div><br>
                      </div>
                      -- <br>
                      <div dir="ltr">
                        <div>
                          <div>-- <br>
                          </div>
                        </div>
                        Ruben S. Montero, PhD<br>
                        Project co-Lead and Chief Architect
                        <div>OpenNebula - Flexible Enterprise Cloud Made
                          Simple<br>
                          <a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>
                          | @OpenNebula</div>
                      </div>
                    </div>
                  </blockquote>
                  <br>
                </div>
              </div>
            </div>
          </blockquote>
        </div>
        <br>
        <br clear="all">
        <div><br>
        </div>
        -- <br>
        <div dir="ltr">
          <div>
            <div>-- <br>
            </div>
          </div>
          Ruben S. Montero, PhD<br>
          Project co-Lead and Chief Architect
          <div>OpenNebula - Flexible Enterprise Cloud Made Simple<br>
            <a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a>
            | @OpenNebula</div>
        </div>
      </div>
    </blockquote>
    <br>
  </div></div></div>

</blockquote></div><br><br clear="all"><div><br></div>-- <br><div dir="ltr"><div><div>-- <br></div></div>Ruben S. Montero, PhD<br>Project co-Lead and Chief Architect<div>OpenNebula - Flexible Enterprise Cloud Made Simple<br>

<a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:rsmontero@opennebula.org" target="_blank">rsmontero@opennebula.org</a> | @OpenNebula</div></div>
</div>
</div></blockquote></body></html>