[one-users] VMware VMFS vs NFS

Dmitri Chebotarov dchebota at gmu.edu
Tue Oct 15 08:19:24 PDT 2013


Tino,

Thank you - it works now.
I've added BRIDGE_LIST and mounted the DS on the VMware ESXi hosts.
Space monitoring works OK and I'm able to create new images (VMDK disks
on the DS).
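For anyone who hits the same monitoring error, the working datastore template looked roughly like this; the host names below are illustrative placeholders, not the actual hosts in this setup:

```
# Sketch of a VMware VMFS/NFS datastore template with BRIDGE_LIST set.
# Host names are placeholders for illustration.
NAME        = vmware
DS_MAD      = vmfs
TM_MAD      = vmfs
BRIDGE_LIST = "esxi-host1 esxi-host2"   # space-separated list of ESX hosts
```

The attribute can also be added to an existing datastore with `onedatastore update 139`.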

--
Thank you,

Dmitri Chebotarov
VCL Sys Eng, Engineering & Architectural Support, TSD - Ent Servers &
Messaging
223 Aquia Building, Ffx, MSN: 1B5
Phone: (703) 993-6175 | Fax: (703) 993-3404







On 10/15/13 11:05 , "Tino Vazquez" <cvazquez at c12g.com> wrote:

>Hi Dmitri,
>
>Did you add the BRIDGE_LIST parameter to the datastore template? It is
>needed for the drivers to know which ESX hosts to use to register
>images in the datastores. It takes a space-separated list.
>
>Regards,
>
>-Tino
>--
>OpenNebula - Flexible Enterprise Cloud Made Simple
>
>--
>Constantino Vázquez Blanco, PhD, MSc
>Senior Infrastructure Architect at C12G Labs
>www.c12g.com | @C12G | es.linkedin.com/in/tinova
>
>--
>Confidentiality Warning: The information contained in this e-mail and
>any accompanying documents, unless otherwise expressly indicated, is
>confidential and privileged, and is intended solely for the person
>and/or entity to whom it is addressed (i.e. those identified in the
>"To" and "cc" box). They are the property of C12G Labs S.L..
>Unauthorized distribution, review, use, disclosure, or copying of this
>communication, or any part thereof, is strictly prohibited and may be
>unlawful. If you have received this e-mail in error, please notify us
>immediately by e-mail at abuse at c12g.com and delete the e-mail and
>attachments and any copy from your system. C12G thanks you for your
>cooperation.
>
>
>On Tue, Oct 15, 2013 at 4:46 PM, Dmitri Chebotarov <dchebota at gmu.edu>
>wrote:
>> Tino
>>
>> The reason for NFS is that the hardware only provides NFS exports (a
>> NetApp filer with no iSCSI license).
>>
>> I may need a bit more help.
>> I've added a datastore (ID 139) as VMware VMFS, checked that both DS_MAD
>> and TM_MAD are set to 'vmfs', and also set the Base Path to
>> '/vmfs/volumes' (before adding the DS).
>>
>> When oned tries to monitor the new VMware VMFS datastore I get this error:
>>
>> Tue Oct 15 10:34:42 2013 [ImM][I]: Command execution fail:
>> /var/lib/one/remotes/datastore/vmfs/monitor
>> PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMzk8L0lEPjxVSUQ+NDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5kY2hlYm90YTwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT52bXdhcmU8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD52bWZzPC9EU19NQUQ+PFRNX01BRD52bWZzPC9UTV9NQUQ+PEJBU0VfUEFUSD4vdm1mcy92b2x1bWVzLzEzOTwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4wPC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+MTA3PC9DTFVTVEVSX0lEPjxDTFVTVEVSPnZtd2FyZTwvQ0xVU1RFUj48VE9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxEU19NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvRFNfTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjwvVEVNUExBVEU+PC9EQVRBU1RPUkU+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+ 139
>> Tue Oct 15 10:34:42 2013 [ImM][I]: expr: division by zero
>> Tue Oct 15 10:34:42 2013 [ImM][I]: ExitCode: 255
>> Tue Oct 15 10:34:42 2013 [ImM][E]: Error monitoring datastore 139: -
>>
>>
>>
>> Running the command manually:
>>
>> [root at ONE ~]# /var/lib/one/remotes/datastore/vmfs/monitor
>> [same base64-encoded argument as above] 139
>> expr: division by zero
>> ERROR: monitor: Command "" failed:
>> ERROR MESSAGE --8<------
>> Cannot monitor USED_MB=$(du -sLm /vmfs/volumes/139 2>/dev/null | cut -f1)
>>
>> DF_STR=$(df -m | grep /vmfs/volumes/139 | sed 's/ \+/:/g')
>>
>> TOTAL_MB=$(echo $DF_STR | cut -d':' -f 2)
>> FREE_MB=$(echo $DF_STR | cut -d':' -f 4)
>>
>> echo "USED_MB=$USED_MB"
>> echo "TOTAL_MB=$TOTAL_MB"
>> echo "FREE_MB=$FREE_MB"
>> ERROR MESSAGE ------>8--
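The "division by zero" from expr is a symptom rather than the root cause: when the monitor script runs on a machine where /vmfs/volumes/139 is not mounted (as happens without BRIDGE_LIST, when it runs on the front-end instead of an ESX host), the df/grep pipeline matches nothing, so TOTAL_MB and FREE_MB come back empty and the driver's later arithmetic fails. A minimal sketch of that failure mode, with an illustrative guard that is not part of the actual driver:

```shell
# Reproduce the monitor script's failure mode: on a host where the
# datastore path is not mounted, grep matches nothing and the size
# variables come back empty.
DS_PATH=/vmfs/volumes/139

# Pipeline ends in sed, so it exits 0 even when grep finds no match.
DF_STR=$(df -m | grep "$DS_PATH" | sed 's/ \+/:/g')

TOTAL_MB=$(echo "$DF_STR" | cut -d':' -f 2)
FREE_MB=$(echo "$DF_STR" | cut -d':' -f 4)

# Illustrative guard (not the actual driver code): default empty values
# to 0 so later `expr` arithmetic does not abort with "division by zero".
TOTAL_MB=${TOTAL_MB:-0}
FREE_MB=${FREE_MB:-0}

echo "TOTAL_MB=$TOTAL_MB"
echo "FREE_MB=$FREE_MB"
```

With BRIDGE_LIST set, the script runs on an ESX host where the volume is actually mounted, so the variables are populated and the arithmetic succeeds.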
>>
>>
>> Not sure where to go from here...
>>
>>
>>
>>
>>
>>
>>
>>
>> On 10/15/13 10:25 , "Tino Vazquez" <cvazquez at c12g.com> wrote:
>>
>>>Hi,
>>>
>>>Indeed, the process is exactly the same; what changes is the method
>>>you use to mount the datastore on the ESX host, but for OpenNebula it
>>>is completely transparent (both will be handled by the 'vmfs'
>>>drivers).
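As an illustration of the NFS case: on each ESXi 5.x host the export can be mounted with esxcli so that it appears under /vmfs/volumes. The server and export names below are made-up placeholders; naming the volume after the OpenNebula datastore ID makes it match the BASE_PATH (/vmfs/volumes/139) seen in the driver output:

```
# Run on each ESXi host; server and share names are placeholders.
esxcli storage nfs add --host=filer.example.com --share=/vol/one_ds --volume-name=139
esxcli storage nfs list   # verify the datastore is mounted
```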
>>>
>>>Any reason in particular for using NFS instead of VMFS? Is it due to
>>>the available hardware, or is the reason something else?
>>>
>>>Regards,
>>>
>>>-Tino
>>>--
>>>OpenNebula - Flexible Enterprise Cloud Made Simple
>>>
>>>--
>>>Constantino Vázquez Blanco, PhD, MSc
>>>Senior Infrastructure Architect at C12G Labs
>>>www.c12g.com | @C12G | es.linkedin.com/in/tinova
>>>
>>>
>>>
>>>On Tue, Oct 15, 2013 at 4:22 PM, Dmitri Chebotarov <dchebota at gmu.edu>
>>>wrote:
>>>> Hi
>>>>
>>>> I'm trying to add a few ESXi hosts to ONE.
>>>> Is the approach different when using NFS shared storage instead of
>>>> VMware VMFS?
>>>> The documentation I found on the web site suggests using VMware VMFS,
>>>> but I'm not sure whether the same procedure applies to NFS storage...
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at lists.opennebula.org
>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>



