[one-users] datastore and san disks

Michael Curran michael.curran at connectsolutions.com
Tue Jul 2 09:47:32 PDT 2013


In my test env. I can do that; in my production env. I cannot -- but for the sake of testing, I think I know what I need to do for now.

Thanks for the assistance and patiently answering my questions!

Michael Curran | connectsolutions | Lead Network Architect
Phone 614.568.2285 | Mobile 614.403.6320 | www.connectsolutions.com

-----Original Message-----
From: Tino Vazquez [mailto:cvazquez at c12g.com] 
Sent: Tuesday, July 02, 2013 12:05 PM
To: Michael Curran
Cc: users at lists.opennebula.org
Subject: Re: [one-users] datastore and san disks

Hi,

Exactly. If I understood correctly, you have 10 LUNs that you can aggregate to present bigger datastores to the ESX hosts. If that is the case, I would suggest creating only two datastores (one image, one system), aggregating LUNs for the image datastore so it has space for the golden images, and leaving the rest for the system datastore.
Please note that you can use different configurations for the LUNs (RAID-wise, for instance), since they are going to have different uses.
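
As a rough sketch (the datastore name and the BRIDGE_LIST hosts below are placeholders, not values from your setup), the image datastore could look like:

    $ cat image_ds.conf
    NAME        = "vmfs_image_ds"
    DS_MAD      = "vmfs"
    TM_MAD      = "vmfs"
    BRIDGE_LIST = "esx-host-1 esx-host-2"
    $ onedatastore create image_ds.conf

and the system datastore (ID 0) can be switched to the VMFS transfer driver with "onedatastore update 0", setting TM_MAD = "vmfs".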

Regards,

-Tino
--
Constantino Vázquez Blanco, PhD, MSc
C12G Labs - OpenNebula for the Enterprise
www.c12g.com | cvazquez at c12g.com | @C12G



On Tue, Jul 2, 2013 at 4:41 PM, Michael Curran <michael.curran at connectsolutions.com> wrote:
> This is the learning curve going from a standard installation to a
> cloud installation, and how resources are managed.
>
> All my LUNs are presented to 10 ESXi hosts (single cluster). The
> OpenNebula master node has the hosts in its config as a single
> cluster. So I should identify all my LUNs as the system datastore,
> since they could all be running a VM on an ESX host?
>
> The images datastore is where I keep my images for deploying a VM
> (Windows / Linux / etc.). The images datastore could be multiple LUNs
> too -- I would imagine best practice says not to mix the images
> datastore and the system datastore LUNs, though?
>
> Sorry for so many questions; I'm just making sure I get the
> transition right in my head for the documentation I need to present
> this solution over OpenStack.
>
> Michael Curran | connectsolutions | Lead Network Architect
> Phone 614.568.2285 | Mobile 614.403.6320 | www.connectsolutions.com
>
> -----Original Message-----
> From: Tino Vazquez [mailto:cvazquez at c12g.com]
> Sent: Tuesday, July 02, 2013 9:54 AM
> To: Michael Curran
> Cc: users at lists.opennebula.org
> Subject: Re: [one-users] datastore and san disks
>
> Hi Michael,
>
> The system datastore stores the images of all the VMs that are currently running. Taking this into account, you should dimension it so that all the VMs to be run in a cluster fit in that datastore. I'm saying this because you can configure a different system datastore per cluster.
>
> The image datastores hold the images available to build a VM. They can be multiple LUNs, indeed.
>
> Depending on the storage configuration (VMFS or NFS drivers, SSH enabled or not) and the VM images (persistent or non-persistent), when a VM is launched the images are either copied from the image datastores to the system datastore, or merely linked, which makes VM provisioning much faster.
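>
> As an illustration of the per-cluster system datastore (the cluster
> name and datastore ID here are made up), you could do something like:
>
>     $ onedatastore create system_ds.conf   # TYPE = "SYSTEM_DS", TM_MAD = "vmfs"
>     $ onecluster adddatastore production 100
>
> so that VMs scheduled to the "production" cluster keep their running
> images on datastore 100.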
>
> Regards,
>
> -Tino
> --
> Constantino Vázquez Blanco, PhD, MSc
> C12G Labs - OpenNebula for the Enterprise
> www.c12g.com | cvazquez at c12g.com | @C12G
>
>
>
> On Tue, Jul 2, 2013 at 1:30 PM, Michael Curran <michael.curran at connectsolutions.com> wrote:
>> I think this is the portion that is confusing me the most
>>
>>
>> *    The OpenNebula front-end doesn't need to mount any datastore.
>>
>> *    The ESX servers need to present or mount as iSCSI both the system datastore and the image datastore (naming them <datastore-id>, for instance 0 -system datastore- and 1 -image datastore-).
>>
>> The system datastore -- keeps images of all the VMs? If I have 600 VMs across hundreds of LUNs, how will they all end up on one system datastore if it isn't the same size as my total storage? How do I identify it separately? Is it just within the one datastore, and so long as it's presented across all the ESX hosts in the cluster, that is sufficient?
>>
>> The images datastore -- that can be multiple LUNs, correct? Just identify each one by name within the one datastore command?
>>
>> Michael Curran | connectsolutions | Lead Network Architect
>> Phone 614.568.2285 | Mobile 614.403.6320 | www.connectsolutions.com
>>
>> -----Original Message-----
>> From: Tino Vazquez [mailto:cvazquez at c12g.com]
>> Sent: Monday, July 01, 2013 4:36 PM
>> To: Michael Curran
>> Cc: users at lists.opennebula.org
>> Subject: Re: [one-users] datastore and san disks
>>
>> Hi,
>>
>> comments inline,
>>
>> On Mon, Jul 1, 2013 at 9:28 PM, Michael Curran <michael.curran at connectsolutions.com> wrote:
>>> Even though each host already sees them as SAN disks, do they need
>>> to be mounted like NFS volumes on each host because of the method
>>> OpenNebula uses?
>>
>> No, for datastores mounted as iSCSI on the ESX hosts you can use the OpenNebula VMFS drivers (http://opennebula.org/documentation:rel4.0:vmware_ds#using_vmfs_datastores).
>> For datastores mounted through NFS on the ESX hosts you can use the OpenNebula shared drivers (http://opennebula.org/documentation:rel4.0:vmware_ds#using_nfs_datastores).
>> But you don't need to mount each disk twice.
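>>
>> The choice only shows up in the datastore attributes. A minimal
>> sketch (double-check the exact values against the guides above):
>>
>>     # iSCSI LUN formatted as VMFS on the ESX hosts
>>     DS_MAD = "vmfs"
>>     TM_MAD = "vmfs"
>>
>>     # NFS export mounted on the ESX hosts
>>     DS_MAD = "fs"
>>     TM_MAD = "shared"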
>>
>>>
>>> And each time we add new SAN disks, we just add another link to each 
>>> ESX host?
>>
>> You need to add the disk to each ESX host in the same OpenNebula cluster, or just to the ESX hosts you choose. In the latter case, you need to be careful with the REQUIREMENTS section of the VM templates, so VMs don't end up on a host that doesn't present the datastores holding the images that make up the VM. The idea of the former -- grouping the ESX hosts with the same datastores in the same cluster -- is to avoid having to deal with this extra configuration. You can have different clusters with different datastores mounted; when you decide to add another disk, you will need to add it to each ESX host in that cluster, but not to the ESX hosts of other clusters.
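>>
>> For instance, to pin a VM to the cluster whose hosts mount the right
>> datastores, its template can carry something like (the cluster ID is
>> hypothetical):
>>
>>     REQUIREMENTS = "CLUSTER_ID = 100"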
>>
>> Hope it helps,
>>
>> -Tino
>>
>>>
>>>
>>>
>>>
>>>
>>> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>>>
>>>
>>> -----Original Message-----
>>> From: Tino Vazquez [cvazquez at c12g.com]
>>> Received: Monday, 01 Jul 2013, 12:27pm
>>> To: Michael Curran [michael.curran at connectsolutions.com]
>>> CC: users at lists.opennebula.org [users at lists.opennebula.org]
>>> Subject: Re: [one-users] datastore and san disks
>>>
>>> Hi Michael,
>>>
>>>> Treating the env. as SAN disks, I don't need a datastore on the
>>>> OpenNebula VM? I just need to share a volume of some sort on the
>>>> ESXi hosts? I'm not getting this clearly.
>>>
That is correct: for the configuration you are describing you will
need to use the OpenNebula VMFS datastore and transfer drivers.
>>>
  * The datastore drivers are in charge of staging images from
different sources into the image datastores.
  * The transfer drivers deal with moving the images from the
image datastores to the system datastore (where the running VM
images are stored).
>>>
In particular, the VMFS datastore drivers have one attribute
(BRIDGE_LIST) that lists the ESX hosts that act as gateways for
staging images into the image datastores (they will be picked in a
round-robin fashion). For this reason, the OpenNebula front-end
doesn't need to mount any datastores (plus, it would be tricky to
manage a VMFS volume from a Linux distribution).
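
For instance (host names invented), the image datastore template
would simply include:

    BRIDGE_LIST = "esx1 esx2 esx3"

and the front-end will stage images through those hosts in turn,
never mounting the VMFS volume itself.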
>>>
And it also reads like my SAN disks should be specifically assigned
to each ESXi host -- our production env. has hundreds of SAN disks
and they are shared to all ESXi hosts -- that seems a bit strange.
>>>
You need to mount in each host all the datastores where you store
images that you want to run VMs from on that particular host. That
is, if you have 100 different disks you can have 100 different
image datastores. You can choose to mount all 100 in all your ESX
hosts, or you can group the ESX hosts that are going to share
certain image datastores in the same cluster. OpenNebula will
deduce from the VM definition in which cluster that VM should be
placed, so the images that make up the VM can be pulled from the
image datastores that hold them.
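
A sketch of that grouping with the CLI (cluster, host and datastore
names are placeholders):

    $ onecluster create esx-cluster-a
    $ onecluster addhost esx-cluster-a esx1
    $ onecluster adddatastore esx-cluster-a vmfs_image_ds

The scheduler will then only place a VM on hosts of the cluster
that holds the datastores with its images.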
>>>
I hope the above clarifies your questions. If not, please come
back with your doubts and we will do our best to resolve them.
>>>
>>> Regards,
>>>
>>> -Tino
>>> --
Constantino Vázquez Blanco, PhD, MSc
C12G Labs - OpenNebula for the Enterprise
www.c12g.com | cvazquez at c12g.com | @C12G
>>>
>>> --
>>> Confidentiality Warning: The information contained in this e-mail 
>>> and any accompanying documents, unless otherwise expressly 
>>> indicated, is confidential and privileged, and is intended solely 
>>> for the person and/or entity to whom it is addressed (i.e. those 
>>> identified in the "To" and "cc" box). They are the property of C12G Labs S.L..
>>> Unauthorized distribution, review, use, disclosure, or copying of 
>>> this communication, or any part thereof, is strictly prohibited and 
>>> may be unlawful. If you have received this e-mail in error, please 
>>> notify us immediately by e-mail at abuse at c12g.com and delete the 
>>> e-mail and attachments and any copy from your system. C12G thanks 
>>> you for your cooperation.
>>>
>>>
>>> On Mon, Jul 1, 2013 at 2:42 PM, Michael Curran 
>>> <michael.curran at connectsolutions.com> wrote:
>>>> Hello ---
>>>>
>>>>
>>>>
I am working through setting up a test environment for OpenNebula,
and most of it has been exceptionally straightforward and simple.
However, working through the datastore assignment has been a bit
tricky, because the documentation reads as if all the SAN storage
has to be assigned to specific nodes within the configuration,
whereas ours is shared to all nodes.
>>>>
>>>>
>>>>
I am using VMware ESXi 5.1 for testing. We have a rather robust
VMware environment already, and are looking to leverage OpenNebula
to improve our ability to stand up new VMs with less user
interaction, and to speed up the process with the tasks that
OpenNebula can easily help automate and improve.
>>>>
>>>>
>>>>
My test environment has the following:
>>>>
>>>>
>>>>
>>>> 1)      OpenNebula VM for management
>>>>
>>>> 2)      2 ESXi 5.1 hosts
>>>>
3)      3 datastores shared to both nodes as iSCSI-attached devices
>>>>
>>>>
>>>>
Treating the env. as SAN disks, I don't need a datastore on the
OpenNebula VM? I just need to share a volume of some sort on the
ESXi hosts? I'm not getting this clearly.
>>>>
>>>>
>>>>
And it also reads like my SAN disks should be specifically assigned
to each ESXi host -- our production env. has hundreds of SAN disks
and they are shared to all ESXi hosts -- that seems a bit strange.
>>>>
>>>>
>>>>
>>>> Could someone help me understand what I am missing here so I can 
>>>> complete the build of my test env.?
>>>>
>>>>
>>>>
>>>> Michael Curran | connectsolutions | Lead Network Architect
>>>>
>>>> Phone 614.568.2285 | Mobile 614.403.6320 | www.connectsolutions.com
>>>>
>>>>
>>>>
>>>>
>>>>


