[one-users] Tiny Local Business scenario for openNebula

Carlos Martín Sánchez cmartin at opennebula.org
Thu Oct 27 03:33:52 PDT 2011


Hi,

OpenNebula can be used for the scenario you describe, even if you are not
going to take advantage of its on-demand cloud features.
It will provide a centralized view and management of your Images and VMs,
which will surely help you administer and monitor your virtualized
workstations.

OpenNebula can use the same computer as both the front-end and the host; the
only thing to keep in mind is that you need to use the shared storage
transfer manager [1], since the front-end and the hosts "share" the same
storage.
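
For example, on a single box you would register the machine itself as a host
using the shared TM driver, something along these lines (please double-check
the exact driver names and argument order against the 3.0 host guide; "dummy"
here is just the default network driver):

    onehost create localhost im_kvm vmm_kvm tm_shared dummy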

Knowing that all the VMs will be Windows, you may want to configure remote
desktop (RDP) access to the guest OS instead of VNC.
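
As a rough sketch, a Windows VM template could look something like this (the
image and network names are only placeholders); once RDP is enabled inside
the guest you can keep the VNC section just as an emergency console:

    NAME     = "windows-ws01"
    CPU      = 1
    MEMORY   = 1024
    DISK     = [ IMAGE = "windows-base" ]
    NIC      = [ NETWORK = "office-lan" ]
    GRAPHICS = [ TYPE = "vnc", LISTEN = "0.0.0.0" ]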

Regards.

[1] http://opennebula.org/documentation:rel3.0:sfs
--
Carlos Martín, MSc
Project Engineer
OpenNebula - The Open Source Toolkit for Cloud Computing
www.OpenNebula.org | cmartin at opennebula.org


On Wed, Oct 26, 2011 at 3:55 PM, Diego Jacobi <jacobidiego at gmail.com> wrote:

> Hi Ben.
> I appreciate your answer.
>
> I was expecting to be able to install KVM, sshd, and OpenNebula on the
> same hardware, as I would not need to provide many different
> technologies.
> I think I would have maybe 4 VMs running at the same time, but the
> virtual processors would be idle most of the time.
>
> Would this cause some software-related conflict, or is your
> recommendation due to the load?
>
> It sounds like the method you describe involves the same procedures
> as installing OpenNebula.
>
> Kind regards,
> Diego
>
>
>
> 2011/10/26 Ben Tullis <bt at tiger-computing.co.uk>:
> > Hi Diego,
> >
> > I don't think that OpenNebula is likely to be the best tool for the job
> > in this case, as it is more geared towards on-demand cloud computing.
> >
> > However, it does sound like you could really benefit from virtualization
> > in the office. The way I would approach your situation is as follows.
> >
> > Make sure that the machine you're going to use as a server has hardware
> > virtualization support built in.
> > http://en.wikipedia.org/wiki/Intel_VT#Processor
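> >
> > A quick way to check from Linux is to look for the vmx (Intel) or svm
> > (AMD) CPU flags, for example:
> >
> >   egrep -c '(vmx|svm)' /proc/cpuinfo
> >
> > Anything greater than zero means the CPU supports it, although it may
> > still need to be enabled in the BIOS.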
> >
> > Use disks in pairs of equal sizes, then install Linux and configure
> > software RAID1 so that the system will be able to withstand a failure in
> > any disk.
> > http://en.wikipedia.org/wiki/Mdadm
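> >
> > For example, mirroring two disks (device names here are only an example,
> > adjust them to your hardware):
> >
> >   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1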
> >
> > Install a hypervisor to enable you to run many concurrent virtual
> > machines. You might like to consider KVM, Xen or VirtualBox.
> > http://www.linux-kvm.org
> > http://wiki.xensource.com/xenwiki/
> > http://virtualbox.org
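> >
> > If you go the KVM route on Debian/Ubuntu, the installation is roughly as
> > follows (package names vary between distributions and releases):
> >
> >   apt-get install qemu-kvm libvirt-bin virtinst
> >   virsh -c qemu:///system list
> >
> > The second command just confirms that libvirt can talk to the hypervisor.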
> >
> > You can then define virtual machines and install Windows onto them, in
> > order to make them available to your colleagues. You can use normal
> > Windows system management techniques (such as sysprep) to deploy
> > pre-configured Windows system images, thereby saving you time. You could
> > then use VNC to make these virtual machines available to your staff, in
> > the manner that you suggest.
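> >
> > As a sketch, creating a Windows VM with virt-install could look like this
> > (names, sizes and paths are only placeholders, and the graphics option
> > syntax depends on your virt-install version):
> >
> >   virt-install --name win-ws01 --ram 2048 --vcpus 1 \
> >     --cdrom /path/to/windows.iso \
> >     --disk path=/var/lib/libvirt/images/win-ws01.img,size=40 \
> >     --os-variant win7 --graphics vnc,listen=0.0.0.0
> >
> > You can then point a VNC client at the host (or use RDP once it is
> > enabled inside the guest).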
> >
> > I'm currently looking at building an OpenNebula cluster to support a
> > small-business requirement, but I can't really see that there is any way
> > of ensuring high-availability in any system with fewer than four
> > physical servers in it. I think you'd be making things unnecessarily
> > hard for yourself if you tried to do it all on one server.
> >
> > I hope that helps.
> >
> > Kind regards,
> > Ben
> >
> > --
> > |Ben Tullis
> > |Tiger Computing Ltd
> > |"Linux for Business"
> > |
> > |Tel: 033 0088 1511
> > |Web: http://www.tiger-computing.co.uk
> > |
> > |Registered in England. Company number: 3389961
> > |Registered address: Wyastone Business Park,
> > |Wyastone Leys, Monmouth, NP25 3SR
> >
> >
> >
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>