[one-users] OpenNebula Hosted-VLAN support
Alberto Picón Couselo
alpicon1 at gmail.com
Thu Oct 27 16:37:16 PDT 2011
As you mention, we have implemented a static VLAN configuration for the
moment.
We are glad to know that these issues could be addressed in OpenNebula
3.2.
Please count on our help for testing whatever you may need.
Thank you very much for your help and for such a great product,
Best Regards,
Alberto Picón
On 27/10/2011 11:32, Jaime Melis wrote:
> I apologize, the message was intended for both Patrice and Alberto Picón,
> who raised the initial questions.
>
> On Thu, Oct 27, 2011 at 11:30 AM, Jaime Melis <jmelis at opennebula.org> wrote:
>
> Hello Patrice,
>
> you make very valid points and we're aware of those limitations.
> They have actually been reflected in the documentation:
> http://opennebula.org/documentation:rel3.0:nm#considerations_limitations
>
> The problem is that the network management is based on the hook
> subsystem, but we have realized there should be a specific driver
> for networking. We have created a feature ticket to develop this
> functionality:
> http://dev.opennebula.org/issues/863
> The target release for this functionality is OpenNebula 3.2.
>
> For the moment, in OpenNebula 3.0, the only solution to the
> aforementioned problems is to create a static network configuration.
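>
> As an illustration, such a static configuration can be prepared on every
> hypervisor with standard Linux tools before deploying any VMs. This is
> only a sketch: the interface name, VLAN ID and bridge name below are
> examples and must match the BRIDGE attribute of your virtual network:
>
>     ip link add link eth0 name eth0.50 type vlan id 50   # 802.1Q VLAN 50 on eth0
>     ip link set eth0.50 up
>     brctl addbr br50                                      # bridge the VMs will attach to
>     brctl addif br50 eth0.50
>     ip link set br50 up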
>
> Regards,
> Jaime
>
> On Mon, Oct 17, 2011 at 5:59 PM, Patrice LACHANCE <patlachance at gmail.com> wrote:
>
> Hello
>
> In the previous version there was a cluster feature that was
> replaced in OpenNebula 3.0 by ozones.
> Shouldn't OpenNebula make sure that all the nodes in a zone
> are able to run a VM, and thus handle network creation on all
> nodes before starting a new VM?
>
> Patrice
>
> 2011/9/27 Alberto Picón Couselo <alpicon1 at gmail.com>
>
> Hello,
>
> We are testing hosted VLAN support in OpenNebula to
> implement network isolation. This feature seems to work
> correctly when a new instance is deployed: as stated
> in oned.conf, the hm-vlan hook is executed in the PROLOG state.
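>
> For reference, the hook is registered in oned.conf along these lines
> (the command path and attribute values here are only an illustrative
> sketch, not necessarily the literal configuration):
>
>     VM_HOOK = [
>         name    = "hm-vlan",
>         on      = "PROLOG",
>         command = "hm-vlan",   # hook script that creates the VLAN interface and bridge on the host
>         remote  = "yes" ]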
>
> However, there are other states where VLANs and bridges
> should be created (or their existence checked) before
> executing a given operation:
>
> * Migration/live migration of an instance to a hypervisor
> where the instance's VLAN and bridge have never been created.
> VLAN and bridge existence should be checked, and both created if
> necessary, before the migration is executed. OpenNebula 3.0 RC1
> performs the migration without these checks and fails to
> migrate/live-migrate the instance, leaving it in a FAILED
> state (see the sketch after this list).
>
> * A failed instance cannot be redeployed to a hypervisor
> where the instance's VLAN and bridge have never been created.
> VLAN and bridge existence should be checked, and both created if
> necessary, to redeploy the image to the selected hypervisor.
>
> * A stopped instance cannot be resumed if the instance's VLAN
> and bridge do not exist.
> If we stop all instances on a given hypervisor and
> reboot the hypervisor for maintenance purposes, all
> bridges and VLANs will be deleted. Stopped instances won't
> resume because the VLAN and bridge requirements are not
> satisfied and will enter a FAILED state (which also
> deletes non-persistent disks; BTW, we have removed the
> deletion lines from the tm_delete script for the moment, :D).
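>
> In all three cases the same idempotent check on the target host would be
> enough before the operation. A minimal sketch (the physical device, VLAN
> ID and bridge name are examples and must match the virtual network
> definition):
>
>     #!/bin/bash
>     # Create the VLAN interface and bridge only if they do not exist yet.
>     PHYDEV=eth0
>     VLAN_ID=50
>     BRIDGE=br50
>
>     if ! ip link show "${PHYDEV}.${VLAN_ID}" >/dev/null 2>&1; then
>         ip link add link "$PHYDEV" name "${PHYDEV}.${VLAN_ID}" type vlan id "$VLAN_ID"
>         ip link set "${PHYDEV}.${VLAN_ID}" up
>     fi
>
>     if ! ip link show "$BRIDGE" >/dev/null 2>&1; then
>         brctl addbr "$BRIDGE"
>         brctl addif "$BRIDGE" "${PHYDEV}.${VLAN_ID}"
>         ip link set "$BRIDGE" up
>     fi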
>
> So, VLAN and bridge existence should be checked, and both
> created if necessary, to
> resume/migrate/live-migrate/recover-from-failed-state the
> instance on the selected hypervisor. As stated in
> oned.conf, the hm-vlan hook can be executed on:
>
> # Virtual Machine Hooks (VM_HOOK) defined by:
> #   name      : for the hook, useful to track the hook (OPTIONAL)
> #   on        : when the hook should be executed,
> #               - CREATE, when the VM is created (onevm create)
> #               - PROLOG, when the VM is in the prolog state
> #               - RUNNING, after the VM is successfully booted
> #               - SHUTDOWN, after the VM is shutdown
> #               - STOP, after the VM is stopped (including VM image transfers)
> #               - DONE, after the VM is deleted or shutdown
> #               - FAILED, when the VM enters the failed state
> But I'm not able to find a way to implement this
> behaviour in oned.conf for the states I mentioned.
>
> Could you please give me any clues?
>
> Best Regards,
> Alberto Picón
>
>
>
>
>
>
>
> --
> Patrice LACHANCE
> Manager IT Consulting, Logica : http://www.logica.com
>
> Viaduc network
> View my profile:
> http://www.viaduc.com/public/profil/?memberId=00226pj42r07h9f3
> Join the network:
> http://www.viaduc.com/invitation/00226pj42r07h9f3
>
> LinkedIn Network:
> See my profile: http://www.linkedin.com/in/plachance
> Join the network: http://www.linkedin.com
>
>
>
>
>
>
>
>
>
> --
> Jaime Melis
> Project Engineer
> OpenNebula - The Open Source Toolkit for Cloud Computing
> www.OpenNebula.org | jmelis at opennebula.org