[one-users] Opennebula with LVM

Rhesa Mahendra rhesa at lintasmediadanawa.com
Fri Feb 28 02:38:01 PST 2014


Ruben,

I use 4.4. I will try it; many thanks for your answer.

Regards,
Rhesa Mahendra.

On 28 Feb 2014, at 16:55, "Ruben S. Montero" <rsmontero at opennebula.org> wrote:

> Ok, I see, it is about the monitoring frequency of the datastores. Maybe you could increase the value of MONITOR_INTERVAL in oned.conf; if you are on 4.4 this should not interfere with the intervals for hosts and VMs.
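> 
> For reference, the corresponding oned.conf fragment would look roughly like this (parameter name as above; the exact name varies a bit between releases, so check the comments in your installed oned.conf, and the value is only an example):
> 
>     # oned.conf -- seconds between datastore monitoring cycles (example value)
>     MONITOR_INTERVAL = 600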
> 
> 
> Cheers
> 
> 
> On Fri, Feb 28, 2014 at 4:16 AM, Rhesa Mahendra <rhesa at lintasmediadanawa.com> wrote:
>> Ruben,
>> 
>> The VMs are running normally. I just created 7 VMs, but in the process list I see many /lvm/monitor processes, see this:
>> 
>>  8645 ?        SN     0:00 sh -c /var/lib/one/remotes/datastore/lvm/monitor PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMjU8L0lEPjxVSUQ+MzwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5yaGVzYTwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5MVk0tU1RPUkU8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD5sdm08L0RTX01BRD48VE1fTUFEPmx2bTwvVE1fTUFEPjxCQVNFX1BBVEg+L3Zhci9saWIvb25lLy9kYXRhc3RvcmVzLzEyNTwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4yPC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+LTE8L0NMVVNURVJfSUQ+PENMVVNURVI+PC9DTFVTVEVSPjxUT1RBTF9NQj4xMDk5NTEwODwvVE9UQUxfTUI+PEZSRUVfTUI+MTA3MDUxOTM8L0ZSRUVfTUI+PFVTRURfTUI+Mjg5OTE0PC9VU0VEX01CPjxJTUFHRVM+PElEPjE4MTwvSUQ+PElEPjE4MjwvSUQ+PElEPjE4NDwvSUQ+PElEPjE4NTwvSUQ+PElEPjE4NjwvSUQ+PElEPjE4NzwvSUQ+PElEPjE4ODwvSUQ+PElEPjE4OTwvSUQ+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDRE  FUQVtsb2N hbGhvc3RdXT48L0JSSURHRV9MSVNUPjxDTE9ORV9UQVJHRVQ+PCFbQ0RBVEFbU0VMRl1dPjwvQ0xPTkVfVEFSR0VUPjxEQVRBU1RPUkVfQ0FQQUNJVFlfQ0hFQ0s+PCFbQ0RBVEFbbm9dXT48L0RBVEFTVE9SRV9DQVBBQ0lUWV9DSEVDSz48RElTS19UWVBFPjwhW0NEQVRBW0JMT0NLXV0+PC9ESVNLX1RZUEU+PERTX01BRD48IVtDREFUQVtsdm1dXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW05PTkVdXT48L0xOX1RBUkdFVD48VE1fTUFEPjwhW0NEQVRBW2x2bV1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjxWR19OQU1FPjwhW0NEQVRBW3ZnLW9uZV1dPjwvVkdfTkFNRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVRBPg== 125 ; echo ExitCode: $? 1>&2
>>  8647 ?        SN     0:00 /bin/bash /var/lib/one/remotes/datastore/lvm/monitor PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMjU8L0lEPjxVSUQ+MzwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5yaGVzYTwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5MVk0tU1RPUkU8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD5sdm08L0RTX01BRD48VE1fTUFEPmx2bTwvVE1fTUFEPjxCQVNFX1BBVEg+L3Zhci9saWIvb25lLy9kYXRhc3RvcmVzLzEyNTwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4yPC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+LTE8L0NMVVNURVJfSUQ+PENMVVNURVI+PC9DTFVTVEVSPjxUT1RBTF9NQj4xMDk5NTEwODwvVE9UQUxfTUI+PEZSRUVfTUI+MTA3MDUxOTM8L0ZSRUVfTUI+PFVTRURfTUI+Mjg5OTE0PC9VU0VEX01CPjxJTUFHRVM+PElEPjE4MTwvSUQ+PElEPjE4MjwvSUQ+PElEPjE4NDwvSUQ+PElEPjE4NTwvSUQ+PElEPjE4NjwvSUQ+PElEPjE4NzwvSUQ+PElEPjE4ODwvSUQ+PElEPjE4OTwvSUQ+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDRE  FUQVtsb2N hbGhvc3RdXT48L0JSSURHRV9MSVNUPjxDTE9ORV9UQVJHRVQ+PCFbQ0RBVEFbU0VMRl1dPjwvQ0xPTkVfVEFSR0VUPjxEQVRBU1RPUkVfQ0FQQUNJVFlfQ0hFQ0s+PCFbQ0RBVEFbbm9dXT48L0RBVEFTVE9SRV9DQVBBQ0lUWV9DSEVDSz48RElTS19UWVBFPjwhW0NEQVRBW0JMT0NLXV0+PC9ESVNLX1RZUEU+PERTX01BRD48IVtDREFUQVtsdm1dXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW05PTkVdXT48L0xOX1RBUkdFVD48VE1fTUFEPjwhW0NEQVRBW2x2bV1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjxWR19OQU1FPjwhW0NEQVRBW3ZnLW9uZV1dPjwvVkdfTkFNRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVRBPg== 125
>>  8699 ?        SN     0:00 /bin/bash /var/lib/one/remotes/datastore/lvm/monitor PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMjU8L0lEPjxVSUQ+MzwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5yaGVzYTwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5MVk0tU1RPUkU8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD5sdm08L0RTX01BRD48VE1fTUFEPmx2bTwvVE1fTUFEPjxCQVNFX1BBVEg+L3Zhci9saWIvb25lLy9kYXRhc3RvcmVzLzEyNTwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4yPC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+LTE8L0NMVVNURVJfSUQ+PENMVVNURVI+PC9DTFVTVEVSPjxUT1RBTF9NQj4xMDk5NTEwODwvVE9UQUxfTUI+PEZSRUVfTUI+MTA3MDUxOTM8L0ZSRUVfTUI+PFVTRURfTUI+Mjg5OTE0PC9VU0VEX01CPjxJTUFHRVM+PElEPjE4MTwvSUQ+PElEPjE4MjwvSUQ+PElEPjE4NDwvSUQ+PElEPjE4NTwvSUQ+PElEPjE4NjwvSUQ+PElEPjE4NzwvSUQ+PElEPjE4ODwvSUQ+PElEPjE4OTwvSUQ+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDRE  FUQVtsb2N hbGhvc3RdXT48L0JSSURHRV9MSVNUPjxDTE9ORV9UQVJHRVQ+PCFbQ0RBVEFbU0VMRl1dPjwvQ0xPTkVfVEFSR0VUPjxEQVRBU1RPUkVfQ0FQQUNJVFlfQ0hFQ0s+PCFbQ0RBVEFbbm9dXT48L0RBVEFTVE9SRV9DQVBBQ0lUWV9DSEVDSz48RElTS19UWVBFPjwhW0NEQVRBW0JMT0NLXV0+PC9ESVNLX1RZUEU+PERTX01BRD48IVtDREFUQVtsdm1dXT48L0RTX01BRD48TE5fVEFSR0VUPjwhW0NEQVRBW05PTkVdXT48L0xOX1RBUkdFVD48VE1fTUFEPjwhW0NEQVRBW2x2bV1dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjxWR19OQU1FPjwhW0NEQVRBW3ZnLW9uZV1dPjwvVkdfTkFNRT48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVRBPg== 125
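>> 
>> The long argument is just the datastore description as base64-encoded XML; it can be inspected with something like this (paste the argument from the ps output in place of the placeholder):
>> 
>>     echo '<base64-argument-from-ps>' | base64 -d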
>> 
>> And see this for vgdisplay:
>> 
>>  8711 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  8713 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  8882 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  8884 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9014 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9016 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9179 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9181 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9351 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9353 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9532 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9534 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9667 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9669 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9833 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>>  9835 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10021 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10023 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10176 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10178 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10329 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10331 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10492 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10494 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10668 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10670 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10846 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10848 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10997 ?        S      0:00 sudo vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 10999 ?        S      0:00 vgdisplay -o vg_size --units M -C --noheadings --nosuffix vg-one
>> 
>> 
>> The VMs run normally, but sometimes the lvm/monitor check runs and takes a very long time to return the monitoring information. These processes queue up and everything else (create VM, create image) gets stuck, sometimes for as long as a night. How can I fix this? Thanks.
>> 
>> Rhesa.  
>> On 02/27/2014 05:10 AM, Ruben S. Montero wrote:
>>> Hi Rhesa,
>>> 
>>> Maybe we are just submitting too many VMs at the same time for your system, so cLVM gets stuck. Are you experiencing this when deploying multiple VMs? If so, we can either reduce the number of threads of the transfer driver to serialize the operations, or tweak the scheduler to be less aggressive.
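>>> 
>>> As a rough sketch of both knobs (names taken from a stock 4.x install, values only illustrative, adjust to your setup):
>>> 
>>>     # oned.conf -- lower -t (number of driver threads) to serialize TM operations;
>>>     # keep whatever -d driver list you already have
>>>     TM_MAD = [
>>>         executable = "one_tm",
>>>         arguments  = "-t 5 -d dummy,lvm,shared,ssh" ]
>>> 
>>>     # sched.conf -- dispatch fewer VMs per scheduling cycle
>>>     MAX_DISPATCH = 5
>>>     MAX_HOST     = 1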
>>> 
>>> Cheers
>>> 
>>> Ruben 
>>> 
>>> 
>>> On Thu, Feb 27, 2014 at 11:05 AM, Rhesa Mahendra <rhesa at lintasmediadanawa.com> wrote:
>>>> Ruben,
>>>> 
>>>> Thanks for your answer. Once again, why does the ../lvm/monitor command (vgdisplay) take so long to get the LVM monitoring info? Our frontend ends up with many of these processes and everything gets stuck. How do we fix this? Thanks.
>>>> 
>>>> Rhesa.
>>>> 
>>>>  
>>>> On 02/27/2014 05:02 AM, Ruben S. Montero wrote:
>>>>> Hi, 
>>>>> 
>>>>> Yes, given the use of clvm in OpenNebula I think we are safe without fencing. I cannot think of a split-brain condition where fencing would be needed in our case.
>>>>> 
>>>>> Cheers
>>>>> 
>>>>> Ruben
>>>>> 
>>>>> 
>>>>> On Thu, Feb 27, 2014 at 1:23 AM, Rhesa Mahendra <rhesa at lintasmediadanawa.com> wrote:
>>>>>> Ruben,
>>>>>> 
>>>>>> I get an error with fencing: the fencing agent is not working fine, so if one node cannot reach the fencing device the cluster gets stuck. I read on the forum that this fence agent can connect over IPMI. I think OpenNebula just needs clvm, so I decided to run the cluster without fencing. I hope everything is fine, thanks.
>>>>>> 
>>>>>> Regards,
>>>>>> Rhesa Mahendra.
>>>>>> 
>>>>>> On 26 Feb 2014, at 23:09, "Ruben S. Montero" <rsmontero at opennebula.org> wrote:
>>>>>> 
>>>>>>> Hi Rhesa
>>>>>>> 
>>>>>>> I agree that the problem is related to LVM; probably clvmd cannot acquire locking through DLM. Since you have been running the cluster for 3-4 days, I assume it is not mis-configured. I have seen this before in relation to networking problems (usually filtered multicast traffic); can you double-check that iptables is allowing all the required cluster traffic?
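>>>>>>> 
>>>>>>> On a stock RHEL/CentOS cman setup, the rules that usually have to be open between all cluster nodes look roughly like this (default corosync/DLM ports; adjust to whatever your cluster.conf uses):
>>>>>>> 
>>>>>>>     # corosync/cman membership traffic (multicast, UDP 5404-5405 by default)
>>>>>>>     iptables -A INPUT -m pkttype --pkt-type multicast -j ACCEPT
>>>>>>>     iptables -A INPUT -p udp --dport 5404:5405 -j ACCEPT
>>>>>>>     # DLM lock traffic used by clvmd (TCP 21064 by default)
>>>>>>>     iptables -A INPUT -p tcp --dport 21064 -j ACCEPT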
>>>>>>> 
>>>>>>> Also, what is the output of clustat during the failure?
>>>>>>> 
>>>>>>> 
>>>>>>> Cheers
>>>>>>> 
>>>>>>> Ruben
>>>>>>> 
>>>>>>> 
>>>>>>> On Wed, Feb 26, 2014 at 3:50 AM, Rhesa Mahendra <rhesa at lintasmediadanawa.com> wrote:
>>>>>>>> Guys,
>>>>>>>> 
>>>>>>>> I am building a production setup on SAN storage, so I think OpenNebula needs LVM/CLVM for it. I have been working on this for 3 months, but after I created 50 VMs from one template across 3 nodes, lvm/clvm stopped working fine; the VM status is still PROLOG after two days, please see:
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 0:00 bash -c if [ -x "/var/tmp/one/im/run_probes" ]; then /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 idc-conode001; else
>>>>>>>> 14447 ?        S      0:00 /bin/bash /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14454 ?        S      0:00 /bin/bash /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14455 ?        S      0:00 /bin/bash /var/tmp/one/im/run_probes kvm /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14460 ?        S      0:00 /bin/bash ./collectd-client_control.sh kvm /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14467 ?        S      0:00 /bin/bash /var/tmp/one/im/kvm.d/../run_probes kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14474 ?        S      0:00 /bin/bash /var/tmp/one/im/kvm.d/../run_probes kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14475 ?        S      0:00 /bin/bash /var/tmp/one/im/kvm.d/../run_probes kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14498 ?        S      0:00 /bin/bash ./monitor_ds.sh kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14525 ?        S      0:00 /bin/bash ./monitor_ds.sh kvm-probes /var/lib/one//datastores 4124 20 0 idc-conode001
>>>>>>>> 14526 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-0
>>>>>>>> 14527 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-0
>>>>>>>> 15417 ?        S      0:00 [kdmflush]
>>>>>>>> 15452 ?        Ss     0:00 sshd: oneadmin [priv]
>>>>>>>> 15454 ?        S      0:00 sshd: oneadmin at notty
>>>>>>>> 15455 ?        Ss     0:00 bash -s
>>>>>>>> 15510 ?        Ss     0:00 sshd: oneadmin [priv]
>>>>>>>> 15512 ?        S      0:00 sshd: oneadmin at notty
>>>>>>>> 15513 ?        Ss     0:00 sh -s
>>>>>>>> 15527 ?        S      0:00 sudo lvremove -f /dev/vg-one/lv-one-179-596-0
>>>>>>>> 15528 ?        S      0:00 lvremove -f /dev/vg-one/lv-one-179-596-0
>>>>>>>> 
>>>>>>>> 
>>>>>>>> I use locking type 3. I have 3 nodes and 1 frontend, I use cman, and this is the cluster.conf configuration:
>>>>>>>> 
>>>>>>>> <?xml version="1.0"?>
>>>>>>>> <cluster name="idccluster" config_version="9">
>>>>>>>> 
>>>>>>>>   <clusternodes>
>>>>>>>>     <clusternode name="idc-vcoz01" votes="1" nodeid="1">
>>>>>>>>       <fence><method name="single"><device name="idc-vcoz01"/></method></fence>
>>>>>>>>     </clusternode>
>>>>>>>>     <clusternode name="idc-conode001" votes="1" nodeid="2">
>>>>>>>>       <fence><method name="single"><device name="idc-conode001"/></method></fence>
>>>>>>>>     </clusternode>
>>>>>>>>     <clusternode name="idc-conode002" votes="1" nodeid="3">
>>>>>>>>       <fence><method name="single"><device name="idc-conode002"/></method></fence>
>>>>>>>>     </clusternode>
>>>>>>>>     <clusternode name="idc-conode003" votes="1" nodeid="4">
>>>>>>>>       <fence><method name="single"><device name="idc-conode003"/></method></fence>
>>>>>>>>     </clusternode>
>>>>>>>>   </clusternodes>
>>>>>>>> 
>>>>>>>>   <fencedevices>
>>>>>>>>     <fencedevice name="idc-vcoz01" agent="fence_ipmilan"/>
>>>>>>>>     <fencedevice name="idc-conode001" agent="fence_ipmilan"/>
>>>>>>>>     <fencedevice name="idc-conode002" agent="fence_ipmilan"/>
>>>>>>>>     <fencedevice name="idc-conode003" agent="fence_ipmilan"/>
>>>>>>>>   </fencedevices>
>>>>>>>> 
>>>>>>>>   <rm>
>>>>>>>>     <failoverdomains/>
>>>>>>>>     <resources/>
>>>>>>>>   </rm>
>>>>>>>> </cluster>
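>>>>>>>> 
>>>>>>>> For reference, locking type 3 means clustered locking through clvmd, so every node also carries roughly this in /etc/lvm/lvm.conf, and clvmd has to be running on all of them:
>>>>>>>> 
>>>>>>>>     # /etc/lvm/lvm.conf on every node -- clustered locking via clvmd
>>>>>>>>     locking_type = 3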
>>>>>>>> 
>>>>>>>> I shared /etc/cluster/cluster.conf between the nodes via NFS.
>>>>>>>> This is the output of cman_tool nodes:
>>>>>>>> 
>>>>>>>> Node  Sts   Inc   Joined               Name
>>>>>>>>    1   M    304   2014-02-20 16:08:37  idc-vcoz01
>>>>>>>>    2   M    288   2014-02-20 16:08:37  idc-conode001
>>>>>>>>    3   M    304   2014-02-20 16:08:37  idc-conode002
>>>>>>>>    4   M    312   2014-02-26 09:44:04  idc-conode003
>>>>>>>> 
>>>>>>>> I think these VMs cannot run because they wait so long for lvcreate or vgdisplay, see this:
>>>>>>>> 
>>>>>>>> 30818 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30819 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30820 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30821 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30824 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30825 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30827 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30842 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30843 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30844 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30845 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30846 ?        S      0:00 sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30847 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30852 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30853 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 30857 ?        S      0:00 vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>> 
>>>>>>>> 
>>>>>>>> or :
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 30859 ?        S      0:00 sudo lvcreate -L20480.00M -n lv-one-179-610-0 vg-one
>>>>>>>> 30860 ?        S      0:00 lvcreate -L20480.00M -n lv-one-179-610-0 vg-one
>>>>>>>> 
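>>>>>>>> To check whether it is the cluster locking (and not OpenNebula itself) that hangs, the same commands can be run by hand on the affected node to see whether they return, e.g. (assuming the stock cman/dlm tools are installed):
>>>>>>>> 
>>>>>>>>     sudo vgdisplay --separator : --units m -o vg_size,vg_free --nosuffix --noheadings -C vg-one-1
>>>>>>>>     clustat
>>>>>>>>     dlm_tool ls
>>>>>>>> 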
>>>>>>>> If I restart all the servers and all the services, everything is fine, but after 3 or 4 days the problem comes back.
>>>>>>>> This infrastructure is going to production, and I have to find out how to fix this first; I am not ready to put this configuration into production as it is, so please help me. Thanks.
>>>>>>>> 
>>>>>>>> Rhesa.
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list
>>>>>>>> Users at lists.opennebula.org
>>>>>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> -- 
>>>>>>> -- 
>>>>>>> Ruben S. Montero, PhD
>>>>>>> Project co-Lead and Chief Architect
>>>>>>> OpenNebula - Flexible Enterprise Cloud Made Simple
>>>>>>> www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula
>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> -- 
>>>>> Ruben S. Montero, PhD
>>>>> Project co-Lead and Chief Architect
>>>>> OpenNebula - Flexible Enterprise Cloud Made Simple
>>>>> www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula
>>> 
>>> 
>>> 
>>> -- 
>>> -- 
>>> Ruben S. Montero, PhD
>>> Project co-Lead and Chief Architect
>>> OpenNebula - Flexible Enterprise Cloud Made Simple
>>> www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula
> 
> 
> 
> -- 
> -- 
> Ruben S. Montero, PhD
> Project co-Lead and Chief Architect
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org | rsmontero at opennebula.org | @OpenNebula