[one-users] ESXi image datastore (default) not working?

Tino Vazquez cvazquez at c12g.com
Thu Jan 23 02:54:37 PST 2014


Hi Davide,

I think I got confused by this sentence:

>> OpenNebula tries to write to /vmfs/volumes/1 instead of /vmfs/volumes/datastore1

If you have a datastore named "1" mounted on the ESXi host, then
/vmfs/volumes/1 should already exist there, and it is fine for OpenNebula
to write to it.
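
For reference, datastore names on an ESXi host show up under /vmfs/volumes
as symlinks to the VMFS UUID directories, so a quick sanity check (just a
sketch, assuming shell access to the host) is:

# on the ESXi shell
$ ls -l /vmfs/volumes/
# "1" should appear as a symlink pointing to the VMFS UUID directory,
# e.g. 1 -> <vmfs-uuid>

If "1" is not listed there, the datastore is still named "datastore1" on
that host.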

Regards,

-Tino
--
OpenNebula - Flexible Enterprise Cloud Made Simple

--
Constantino Vázquez Blanco, PhD, MSc
Senior Infrastructure Architect at C12G Labs
www.c12g.com | @C12G | es.linkedin.com/in/tinova

--
Confidentiality Warning: The information contained in this e-mail and
any accompanying documents, unless otherwise expressly indicated, is
confidential and privileged, and is intended solely for the person
and/or entity to whom it is addressed (i.e. those identified in the
"To" and "cc" box). They are the property of C12G Labs S.L..
Unauthorized distribution, review, use, disclosure, or copying of this
communication, or any part thereof, is strictly prohibited and may be
unlawful. If you have received this e-mail in error, please notify us
immediately by e-mail at abuse at c12g.com and delete the e-mail and
attachments and any copy from your system. C12G thanks you for your
cooperation.


On Thu, Jan 23, 2014 at 11:45 AM, Davide Papini <dadopap at gmail.com> wrote:
> Hi Tino, thanks for the fast answer.
>
> Do I also need to keep the NFS front-end resources mounted on the ESXi
> host? Specifically:
>
> /var/lib/one/datastores/0
> /var/lib/one/datastores/1
>
> I guess I have to delete the last one, otherwise it is going to conflict.
> Should I also delete the first one?
>
> Thanks
>
>
>
>
> On Thu, Jan 23, 2014 at 10:03 AM, Tino Vazquez <cvazquez at c12g.com> wrote:
>>
>> Hi Davide,
>>
>> You need to rename datastore1 to just "1" on the ESXi host.
>>
>> Also, please update the BASE_PATH of datastores "0" and "1" in OpenNebula
>> to "/vmfs/volumes":
>>
>> $ onedatastore update 0
>> SHARED="YES"
>> TM_MAD="vmfs"
>> TYPE="SYSTEM_DS"
>> BASE_PATH="/vmfs/volumes"
>>
>> $ onedatastore update 1
>> TM_MAD="vmfs"
>> DS_MAD="vmfs"
>> BASE_PATH="/vmfs/volumes"
>> CLONE_TARGET="SYSTEM"
>> DISK_TYPE="FILE"
>> LN_TARGET="NONE"
>> TYPE="IMAGE_DS"
>> BRIDGE_LIST="esx-ip"
>>
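>> After updating, a quick check that the new BASE_PATH took effect (just a
>> sketch; datastore IDs 0 and 1 are the ones from your setup) would be:
>>
>> $ onedatastore show 0 -x | grep BASE_PATH
>> $ onedatastore show 1 -x | grep BASE_PATH
>>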
>> Hope it helps,
>>
>> -Tino
>>
>> --
>> OpenNebula - Flexible Enterprise Cloud Made Simple
>>
>> --
>> Constantino Vázquez Blanco, PhD, MSc
>> Senior Infrastructure Architect at C12G Labs
>> www.c12g.com | @C12G | es.linkedin.com/in/tinova
>>
>>
>> On Wed, Jan 22, 2014 at 6:53 PM, Davide Papini <dadopap at gmail.com> wrote:
>> > I sent the same email earlier without a subject; I apologize for that.
>> > I am resending it now with a subject.
>> >
>> > I have a problem downloading images from the marketplace, and I think
>> > it is because OpenNebula cannot interface with ESXi.
>> >
>> > I am using OpenNebula 4.4 and ESXi 5.1 (fresh install).
>> >
>> > When I try to import an image from the marketplace I get a space error.
>> > I tried to tell OpenNebula to skip the free space check, and then I get
>> > another error:
>> >
>> > ERROR Wed Jan 22 16:01:54 2014 : Error copying image in the datastore:
>> > Error
>> > copying /var/lib/one/tmp/eba28569ed15dbf0bbe425d431683657 to
>> > /var/lib/one//datastores/1/eba28569ed15dbf0bbe425d431683657 through SCP
>> >
>> > Could someone please help me with that?
>> >
>> > I think the problem is in the vmfs datastore configuration.
>> >
>> > To configure OpenNebula I used the guide at:
>> >
>> >
>> > http://docs.opennebula.org/stable/design_and_installation/quick_starts/qs_centos_vmware.html#qs-centos-vmware
>> >
>> > What troubles me is that I think OpenNebula tries to write to
>> > /vmfs/volumes/1 instead of /vmfs/volumes/datastore1 (on the ESXi
>> > machine).
>> >
>> >
>> >
>> > The output of onedatastore show 1 -x is:
>> >
>> >
>> > <DATASTORE>
>> >   <ID>1</ID>
>> >   <UID>0</UID>
>> >   <GID>0</GID>
>> >   <UNAME>oneadmin</UNAME>
>> >   <GNAME>oneadmin</GNAME>
>> >   <NAME>default</NAME>
>> >   <PERMISSIONS>
>> >     <OWNER_U>1</OWNER_U>
>> >     <OWNER_M>1</OWNER_M>
>> >     <OWNER_A>0</OWNER_A>
>> >     <GROUP_U>1</GROUP_U>
>> >     <GROUP_M>0</GROUP_M>
>> >     <GROUP_A>0</GROUP_A>
>> >     <OTHER_U>1</OTHER_U>
>> >     <OTHER_M>0</OTHER_M>
>> >     <OTHER_A>0</OTHER_A>
>> >   </PERMISSIONS>
>> >   <DS_MAD>vmfs</DS_MAD>
>> >   <TM_MAD>vmfs</TM_MAD>
>> >   <BASE_PATH>/var/lib/one//datastores/1</BASE_PATH>
>> >   <TYPE>0</TYPE>
>> >   <DISK_TYPE>0</DISK_TYPE>
>> >   <CLUSTER_ID>-1</CLUSTER_ID>
>> >   <CLUSTER/>
>> >   <TOTAL_MB>0</TOTAL_MB>
>> >   <FREE_MB>0</FREE_MB>
>> >   <USED_MB>31</USED_MB>
>> >   <IMAGES>
>> >     <ID>0</ID>
>> >   </IMAGES>
>> >   <TEMPLATE>
>> >     <BRIDGE_LIST><![CDATA[10.202.20.13]]></BRIDGE_LIST>
>> >     <CLONE_TARGET><![CDATA[SYSTEM]]></CLONE_TARGET>
>> >     <DATASTORE_CAPACITY_CHECK><![CDATA[NO]]></DATASTORE_CAPACITY_CHECK>
>> >     <DISK_TYPE><![CDATA[FILE]]></DISK_TYPE>
>> >     <DS_MAD><![CDATA[vmfs]]></DS_MAD>
>> >     <LN_TARGET><![CDATA[NONE]]></LN_TARGET>
>> >     <TM_MAD><![CDATA[vmfs]]></TM_MAD>
>> >     <TYPE><![CDATA[IMAGE_DS]]></TYPE>
>> >   </TEMPLATE>
>> > </DATASTORE>
>> >
>
>


