[one-users] My new setup for private cloud
Shankhadeep Shome
shank15217 at gmail.com
Wed Mar 20 05:38:18 PDT 2013
Alberto, take a close look at LIO or SCST as storage target servers. They
offer a long list of options and are very robust, and more often than not
an iSCSI solution is a big part of the equation. Both LIO and SCST are
kernel-based and extremely fast. Also take a close look at
OpenIndiana-based solutions such as NexentaStor.
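For example, a basic LIO export with targetcli takes just a few commands.
This is only a sketch -- the backing device and the IQNs below are
placeholders, not from any real setup:

  # create a block backstore from an existing volume
  targetcli /backstores/block create name=vm_disk dev=/dev/vg0/vm_disk
  # create the iSCSI target (pick your own IQN)
  targetcli /iscsi create iqn.2013-03.local.storage:vm-disk
  # export the backstore as a LUN on the default portal group
  targetcli /iscsi/iqn.2013-03.local.storage:vm-disk/tpg1/luns create /backstores/block/vm_disk
  # allow one initiator to log in
  targetcli /iscsi/iqn.2013-03.local.storage:vm-disk/tpg1/acls create iqn.2013-03.local.hv:node1
  # persist the configuration
  targetcli saveconfig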
On Wed, Mar 20, 2013 at 6:54 AM, Carlos Martín Sánchez <
cmartin at opennebula.org> wrote:
> Thanks a lot Alberto, we really appreciate your taking the time to write
> this up. It's great to see the list becoming a knowledge base and not just
> a support list.
> --
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - The Open-source Solution for Data Center Virtualization
> www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula <http://twitter.com/opennebula>
>
>
> On Tue, Mar 19, 2013 at 1:58 PM, Alberto Zuin - Liste <
> liste at albertozuin.eu> wrote:
>
>> Just to share my experience.
>> Since the beginning of my experience with OpenNebula I have made some
>> changes in storage technology, and I have now found a solution that works
>> pretty well for my needs.
>> First: there is no "always ok" solution; every solution works in a
>> particular situation. Mine is simply a budget situation, where the purpose
>> is to save money while still having a solution that can handle my I/O
>> load.
>> In my private cloud I have 5 KVM servers and a total of 20-30 VMs: some
>> are compute-only VMs (mail scanner, DNS, etc.), some are I/O intensive
>> (MySQL server for LAMP, syslog server, etc.). The latter are the problem,
>> because my previous solutions did not meet their I/O needs.
>> My first solution was a Linux storage server exporting a (c)LVM disk via
>> AoE (simply because it has less overhead than iSCSI): the load on the
>> server was very high and the I/O throughput was not fast. The lesson I
>> learned: if you have to build shared storage, don't use a commodity
>> server; use an optimized one with high-speed disks, powerful controllers,
>> 10 Gb Ethernet... or simply buy a FC SAN ;-) A sketch of such an AoE
>> export is below.
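>> (For what it's worth, the AoE export itself was the simple part; roughly
>> like this, with placeholder device and interface names:)
>>
>>   # storage server: export the LV as AoE shelf 0, slot 1 on eth0
>>   vbladed 0 1 eth0 /dev/vg0/vm01
>>
>>   # each hypervisor: load the initiator and discover the device
>>   modprobe aoe
>>   aoe-discover
>>   aoe-stat    # the export shows up as /dev/etherd/e0.1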
>> My second test was clustered storage with MooseFS, and here I had
>> conflicting results. I have a customer with a well-working setup of 10
>> storage servers and 10 Xen hypervisors; in my private cloud, with only 3
>> storage servers (one master and two chunkservers), the I/O was as slow as
>> in the first solution: no benefit from using two servers to balance the
>> I/O load, and two 1 Gb Ethernet cards in bonding were another bottleneck.
>> The lesson I learned here: to have powerful cluster storage you need many
>> servers.
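>> (If anyone wants to try it, a chunkserver only needs two small config
>> files; a sketch with placeholder hostname and paths:)
>>
>>   # /etc/mfs/mfschunkserver.cfg -- point the chunkserver at the master
>>   MASTER_HOST = mfsmaster.local
>>
>>   # /etc/mfs/mfshdd.cfg -- one data mount point per line
>>   /mnt/mfschunks1
>>   /mnt/mfschunks2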
>> My third (and current) setup, which works pretty well, is... not to use a
>> single shared storage at all, but to distribute the storage across the
>> hypervisors.
>> The idea: my old hypervisors are Dell R410s with 2 quad-core CPUs and 32
>> GB RAM; if I can fit two Mini-ITX motherboards, each with a Core i7 (not
>> quite the Xeon of the R410, but not too far off) and 16 GB RAM, into a 1U
>> rack cabinet... it's the same. And if I can also fit 4 high-speed disks
>> such as WD VelociRaptors for data and two SSDs for the OS, I can use DRBD
>> in active/active mode with cLVM or GFS2 to get decent HA storage for the
>> VMs instantiated on this "double-server".
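>> (The DRBD resource for one double-server looks roughly like this;
>> hostnames, disks and addresses are placeholders:)
>>
>>   # /etc/drbd.d/vmstore.res -- dual-primary so both nodes stay active
>>   resource vmstore {
>>     protocol C;              # synchronous, required for dual-primary
>>     net {
>>       allow-two-primaries;   # both nodes primary, cLVM/GFS2 on top
>>     }
>>     startup {
>>       become-primary-on both;
>>     }
>>     on node1a {
>>       device    /dev/drbd0;
>>       disk      /dev/sdb1;       # the VelociRaptor data array
>>       address   10.0.0.1:7788;   # direct link between the two boards
>>       meta-disk internal;
>>     }
>>     on node1b {
>>       device    /dev/drbd0;
>>       disk      /dev/sdb1;
>>       address   10.0.0.2:7788;
>>       meta-disk internal;
>>     }
>>   }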
>> Now I have 4 Mini-ITX double-servers, each with this configuration; in
>> OpenNebula, each double-server is a separate "cluster" and the DRBD disk
>> is the datastore associated with that cluster.
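>> (On the OpenNebula side it is just a cluster plus a datastore per
>> double-server; names below are examples:)
>>
>>   onecluster create double-server-1
>>   onecluster addhost double-server-1 node1a
>>   onecluster addhost double-server-1 node1b
>>
>>   # datastore template for the GFS2 mount on the DRBD disk
>>   cat ds1.tmpl
>>   NAME   = ds-double-server-1
>>   DS_MAD = fs
>>   TM_MAD = shared
>>
>>   onedatastore create ds1.tmpl
>>   onecluster adddatastore double-server-1 ds-double-server-1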
>> Certainly I can't migrate a VM across two clusters, but at the moment this
>> solution meets my performance needs, each "double-server" costs less than
>> a real 1U server, and power consumption in the datacenter has gone down.
>> My 2 cents,
>> Alberto
>>
>> --
>> AZ Network Specialist
>> via Mare, 36A
>> 36030 Lugo di Vicenza (VI)
>> ITALY
>> P.I. IT04310790284
>> http://www.azns.it
>> Tel +39.3286268626
>> Fax +39.0492106654
>>
>
>
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>