[one-users] Use cases for consideration
Ruben S. Montero
rubensm at dacya.ucm.es
Thu Mar 11 11:03:43 PST 2010
Hi Sergio,
You may also be interested in this post [1]; it covers some
technical details on using OpenNebula to implement the use cases
described in your email:
[1] http://blog.dsa-research.org/?p=98
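The local-cloud workflow in your first use case (pick a VM image for the job, boot it, register it as an LRMS execution node, run the job, then stop, pause, or keep the VM) can be sketched as a small state machine. This is only an illustration of the control flow, not OpenNebula's API: all names here (Vm, run_job, register_with_lrms) are made up, and a real integration would shell out to onevm and the LRMS commands (qsub, condor_submit) instead of the simulated calls below.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    image: str           # VM image matching the application's requirements

class Vm:
    """Simulated VM lifecycle; a real version would drive onevm."""
    def __init__(self, image):
        self.image = image
        self.state = "PENDING"

    def boot(self):
        self.state = "RUNNING"            # e.g. onevm create, wait for RUNNING

    def register_with_lrms(self):
        # e.g. the VM's startup scripts join the Condor pool / SGE cluster
        return f"{self.image}-exec-node"

    def shutdown(self):
        self.state = "DONE"               # e.g. onevm shutdown

def run_job(job, keep_alive=False):
    """Deploy a VM for the job, run the job on it, then apply a teardown policy."""
    vm = Vm(job.image)
    vm.boot()
    node = vm.register_with_lrms()
    result = f"{job.name} ran on {node}"  # the LRMS schedules the job to the new node
    if not keep_alive:                    # policy: stop now, or keep for the next queued job
        vm.shutdown()
    return result, vm.state

print(run_job(Job("blast-search", "sge-worker")))
```

The keep_alive flag stands in for the stop/pause/reuse policy decision you mention; in practice that policy would live in a scheduler hook rather than in the submission path.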
Cheers
Ruben
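For the hybrid case, OpenNebula's EC2 hybrid driver lets a single VM template describe both the local deployment and its EC2 counterpart, so the scheduler can place the same worker either on a local hypervisor or on Amazon. A rough sketch follows; the image path, network name, AMI id, and keypair are all placeholders, and the exact attribute set should be checked against the EC2 driver documentation for your OpenNebula version:

```
NAME   = "worker-node"
CPU    = 1
MEMORY = 1024

# Local deployment (KVM/Xen host)
DISK   = [ SOURCE = "/srv/images/sge-worker.img",  # placeholder image path
           TARGET = "hda" ]
NIC    = [ NETWORK = "cluster-net" ]               # placeholder network name

# EC2 counterpart, used when the VM is scheduled on the EC2 "host"
EC2    = [ AMI          = "ami-00000000",          # placeholder AMI id
           KEYPAIR      = "my-keypair",
           INSTANCETYPE = "m1.small" ]
```

Either way the booted instance would contact the LRMS head node and register itself as an execution host, so Condor or SGE sees it as just another compute node.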
On Thu, Mar 11, 2010 at 6:39 PM, Ignacio Martin Llorente
<llorente at dacya.ucm.es> wrote:
> Hi Sergio,
>
> Both scenarios can be implemented. You may be interested in the work developed by:
>
> - BIG Grid VM Working Group: http://lists.opennebula.org/pipermail/users-opennebula.org/2010-February/001459.html
> - Batch Virtualization Project at CERN: http://indico.cern.ch/sessionDisplay.py?sessionId=15&slotId=0&confId=55893#2009-09-21
>
> Regards
> --
> Ignacio M. Llorente, Full Professor (Catedratico): http://dsa-research.org/llorente
> DSA Research Group: web http://dsa-research.org and blog http://blog.dsa-research.org
> OpenNebula Open Source Toolkit for Cloud Computing: http://www.OpenNebula.org
> RESERVOIR European Project in Cloud Computing: http://www.reservoir-fp7.eu
>
>
> On 11/03/2010, at 18:19, Sergio Maffioletti, Grid Computing Competence Center wrote:
>
>> Dear All
>>
>> I just joined the mailing list as I would like to find out whether OpenNebula could serve the purposes of our use cases.
>>
>> We are going to start an exploratory project that tries to implement a solution for managing VMs so that users can run complex scientific applications.
>>
>> Our main requirement is the ability to integrate any VM management solution with our local resources (normally controlled by an LRMS: Condor, SGE, or PBS).
>>
>> The first use case is a sort of local cloud:
>>
>> We would like to integrate our own local resources (typically a Condor pool or an SGE cluster) and allow users to run applications in VMs (basically one VM per application).
>> The overall solution should provide a batch-like user interface where users can create a job description (much as in SGE, PBS, or any kind of grid).
>> The solution should identify which VM corresponds to the user's request, deploy that VM on the local infrastructure, start it up, and integrate it as an additional execution node of the LRMS (be it Condor or SGE).
>> Finally, the LRMS job is submitted (and hopefully executed within the started VM).
>>
>> The system should also monitor the status of the LRMS job and of the VM and, once the job is completed, stop the VM.
>> The system should also be able to decide whether the VM should be stopped, paused, or just left running for the next queued job.
>>
>> Accounting should also be harmonized with the LRMS.
>>
>>
>> The second use case is a sort of hybrid cloud:
>>
>> As an extension of the case above, it should be possible to integrate external clouds (such as EC2) seamlessly.
>> VM deployment and monitoring, as well as accounting reports, should be adaptable to cope with this use case.
>> Ideally the cloud would host VMs that the LRMS sees as just additional compute nodes.
>>
>> I would really appreciate some input and/or feedback on whether and how OpenNebula could provide the functionality we are looking for,
>> or on learning that I'm completely off track :)
>>
>> thanks in advance
>>
>> Cheers
>> Sergio :)
>>
>> sergio.maffioletti at gc3.uzh.ch
>> University of Zurich
>> Winterthurerstrasse 190
>> CH-8057 Zurich Switzerland
>> Tel: +41 44 635 4222
>> Fax: +41 44 635 6888
>> _______________________________________________
>> Users mailing list
>> Users at lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>
>
--
Dr. Ruben Santiago Montero
Associate Professor (Profesor Titular), Complutense University of Madrid
URL: http://dsa-research.org/doku.php?id=people:ruben
Weblog: http://blog.dsa-research.org/?author=7