[one-users] Globus Workspaces - OpenNEbula connection problems...

Alvaro Canales accleo at gmail.com
Wed Sep 3 02:06:21 PDT 2008


Hello,

On Wed, Sep 3, 2008 at 3:12 AM, William Voorsluys <williamvoor at gmail.com> wrote:

> Hello Alvaro,
>
> I'm forwarding this thread to ONE users. We are discussing network
> customization in the context of the WS + ONE integration.
>
> More comments inline.
>
> >> Hello Alvaro,
> >>
> >> It's really good to hear you are interested in the integration between
> >> Globus WS and OpenNebula. My work regarding this integration was done
> >> as part of Google Summer of Code, which just finished a week ago.
> >> Since there are still some things to be improved, it would be nice if
> >> we joined our efforts.
> >> I'm cc'ing this reply to Borja, who mentored my project on Google SOC.
> >> Also, do you mind carrying on this discussion in the OpenNebula
> >> mailing list?
> >
> > I don't mind at all! :) I simply thought it might not be that
> > interesting... hehe
> >
> >>
> >> > For example, a problem I noticed, for which you already suggested a
> >> > solution, was knowing the IP of a submitted VM... With your reply, the
> >> > way to find it out became clear... I was just wondering if there was a
> >> > "method" which returned the IP to the user, just in case she is not
> >> > managing the server, for example. Or how to know the VM's IP if the
> >> > user didn't input any MAC at all...
> >> > I've implemented a "Forward" IpConfig acquisition method, which simply
> >> > lets OpenNebula configure the network (I'm totally replacing the
> >> > worksp-control backend!) and it doesn't do much (nothing?), of course.
> >> > But I just imagined an implementation where the user, even without
> >> > introducing a MAC address, got an IP (previously unknown to her) after
> >> > submitting her template, to immediately start using the VM. I guess
> >> > it's not possible and then I would just make a mapping function
> >> > between MAC and IP addresses...
> >>
> >> On the Workspace Service, networking functionality (associations) is
> >> managed by a Network Adapter (NA), which is responsible for assigning
> >> IPs from a pool of available addresses. In fact, users don't input a MAC
> >> address; instead, the NA generates a MAC address and binds it to an IP
> >> address, then assigns this pair to a workspace configuration.
> >
> > This depends on one of the three acquisition methods implemented, right?
> > If so, the thing is that I want to implement a new "Forward" acquisition
> > method...
>
> Exactly. The configuration steps I described are related to the
> AllocateAndConfigure method.
>
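Just to check that I follow the AllocateAndConfigure idea, here is a toy
sketch in Python of the MAC generation + IP binding you describe. It is not
the actual Network Adapter code; the names and the pool format are made up
by me, only to illustrate the pairing.

# Toy sketch only (not the real Network Adapter code): pair a freshly
# generated, locally administered MAC with the next free IP from an
# association's pool, as the AllocateAndConfigure method is described above.
import random

def random_mac():
    # 0x02 in the first octet marks a locally administered unicast address
    octets = [0x02] + [random.randint(0x00, 0xff) for _ in range(5)]
    return ':'.join('%02x' % o for o in octets)

def allocate_pair(pool):
    # take the next free IP from the pool and bind it to a new MAC
    ip = pool.pop(0)
    return random_mac(), ip

pool = ['192.168.64.%d' % i for i in range(100, 110)]   # example pool
print(allocate_pair(pool))   # e.g. ('02:7f:a0:33:c1:08', '192.168.64.100')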
> >> Later on, the DHCP server is automatically configured by the backend
> >> script
> >> to assign that IP to the VM that requests an IP lease with that MAC
> >> address.
> >
> > To which backend script are you referring?
>
> There are three scripts that deal with network customization:
> dhcp-conf-alter.py, dhcp-config.sh and ebtables-config.sh. They are
> part of the Workspace backend. Since I have replaced the current
> backend, these scripts are not used. However, as I mentioned, it would
> not be hard to port them to allow automatic network configuration in
> this integration. The Workspace front-end itself, via a network
> adapter, could invoke these scripts to make the necessary changes to
> the DHCP server config file. Thus, provided that the submitted image
> is configured to use DHCP and its primary NIC is bound to the correct
> bridge, the VM would automatically acquire one of the IP addresses
> available on the chosen association.
> I have not looked deeply into the current network adapter
> implementation, so I'm not sure how this would be done.
> In summary, my suggestion would be to try to improve network
> customization on the WS side instead of adding this "Forward"
> acquisition method.


Yes, it would be nice to improve the current networking settings, but in my
approach I must use this "Forward" method. In any case, those improvements
can be made in the near future, but I'm not planning to do them right now...
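That said, if the dhcp-conf-alter.py behaviour is ever ported to the service
side, I picture something roughly like the sketch below. The helper name, the
config file path and the restart command are assumptions on my part, not the
real backend logic; it only shows the idea of binding the generated MAC to a
fixed IP in the DHCP server configuration.

# Rough sketch only: append a host block to the DHCP server config binding a
# MAC to a fixed IP. Path, helper name and restart command are assumptions.
DHCPD_CONF = '/etc/dhcpd.conf'   # adjust to wherever the config really lives

def add_host_entry(name, mac, ip, conf_path=DHCPD_CONF):
    entry = 'host %s {\n  hardware ethernet %s;\n  fixed-address %s;\n}\n' % (name, mac, ip)
    conf = open(conf_path, 'a')
    conf.write(entry)
    conf.close()
    # the DHCP server then has to be reloaded, e.g.:
    # subprocess.call(['/etc/init.d/dhcpd', 'restart'])

# add_host_entry('workspace-42', '02:7f:a0:33:c1:08', '192.168.64.100')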


>
>
> Cheers,
>
> William.
>
> >> IMO this functionality should be preserved (possibly ported from the
> >> backend scripts to the service side) so that even if we drop the
> >> original backend scripts we would still have automatic DHCP
> >> configuration.
> >
> > This would be perfect, but this should really be automagic network
> > configuration. What I want is the case in which you want an
> > implementation where the Workspace frontend doesn't know anything about
> > the backend (ONE, in this case) except the IP of the server hosting the
> > backend...
> >
> >>
> >> That also means that ONE does not need to manage networking. Porting
> >> these scripts was one of the things I wanted to do but didn't have time
> >> for, and you might be interested in doing it. Such an effort would
> >> allow ONE to have automatic IP assignment using the "acquisition
> >> methods" already implemented in the WS.
> >
> > My approach was different: I wanted to implement a new "Forward"
> > acquisition method. It would just let the backend (in this case onevm)
> > configure the network settings. In other words, Workspace would not be
> > responsible for assigning IPs; the backend would be.
> > The idea was that the resourcepool file contained only one line, pointing
> > to the (ONE) backend server, which would be in charge of configuring
> > everything... In your approach, where do you get the backend IP from? Is
> > it, say, the first line of that file?
> > Because I think that someone may provide a ONE service (I mean, a cluster
> > virtualization service) and the user should not need to know any IP
> > other than the backend's. That way, the user would only ask for a VM to
> > be executed and would get an IP in return, to start using it right
> > away... Do you think this is feasible? Any help? Because I might be
> > overlooking something...
> >
> >
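If it really is just the first entry of that file, a trivial helper like the
one below would be enough. Note that both the file name and the
one-host-per-line format are assumptions of mine, which is exactly the open
question above.

# Sketch: return the first field of the first non-comment line of the
# resourcepool file, assuming that is where the ONE backend host lives.
def backend_host(path='worksp.resourcepool'):
    for line in open(path):
        line = line.strip()
        if line and not line.startswith('#'):
            return line.split()[0]
    return None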
> >>
> >> > Another question I came up with was a correct mapping between
> >> > Workspace's "reboot", "pause" and "unpause" flags and OpenNebula's,
> >> > with no proper candidates for the moment (onevm's "save" and "restore"
> >> > are not exactly the same!)...
> >> > As of now, I've mapped the following flags passed to the programs:
> >> >
> >> > Workspace | worksp-control | onevm | xm
> >> >
> >> > deploy | create | onevm submit + onevm deploy (to which machine? maybe using the getHosts API you commented on??) | scp... + xm create...
> >> > destroy | remove | onevm shutdown | xm shutdown...
> >> > destroy --deleteall | remove --deleteall | onevm cancel | xm destroy...
> >> > shutdown-save | remove + unpropagate | onevm shutdown + onevm delete | xm shutdown...
> >> >
> >> > But what about reboot, pause, unpause? Do you have a different
> >> > solution? No direct mapping for those cases, I guess...
> >>
> >> First of all, please note I haven't used the CLI to invoke ONE
> >> commands. Instead, I'm using ONE's XML-RPC API.
> >
> > I think your approach might be better and, indeed, I was thinking of
> > porting my existing solution... Because I faced problems like getting
> > the ID of the newly submitted template, which in your approach is easily
> > handled :D
> >
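For reference, this is roughly how I picture submitting a template and
getting the new ID back over XML-RPC from Python. The method name, the
session format and the layout of the reply are what I gathered from the ONE
documentation, so they are assumptions to double-check against the release
you run; the template path is just a placeholder.

# Sketch: submit a VM template through ONE's XML-RPC interface and read the
# new VM id from the reply. Method name, session format and reply layout are
# assumptions -- verify them against your ONE version.
import xmlrpclib

server = xmlrpclib.ServerProxy('http://localhost:2633/RPC2')
session = 'oneadmin:oneadmin'                      # credentials format may differ
template = open('/path/to/vm.template').read()     # placeholder template path

result = server.one.vm.allocate(session, template)
if result[0]:
    print('submitted VM with id %s' % result[1])
else:
    print('submission failed: %s' % result[1])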
> >>
> >> Anyway, the mapping I used was the following:
> >>
> >> 1. deploy -> onevm submit
> >> I don't call 'onevm deploy' since I assume that scheduling will be done
> >> by ONE.
> >
> > Yes, I agree. Anyway, do you leave worksp-control --propagate
> > unimplemented (just like shutdown-save) or make it call onevm submit?
> >
> >>
> >> 2. For destroy-related commands I simply call 'onevm shutdown'. I'm
> >> not using onevm delete. Maybe I should call delete at some point, but
> >> this is not currently done. Also, there is no such command 'onevm
> >> cancel', is there?
> >
> > You're right. onevm shutdown should work... But what about using onevm
> > delete for the unpropagate worksp-control flag? Or leaving it
> > unimplemented? I think it may fit... By the way, there is no such thing
> > as onevm cancel, you're right :p It turns out that I saw a function
> > (action_cancel) in ./src/vmm_mad/xen/one_vmm_xen.rb which calls, indeed,
> > the xm destroy command, which is, in turn, what worksp-control --remove
> > --deleteall calls...
> >
> >>
> >> 3. Shutdown save
> >> The notion of 'propagation' is not really necessary when using ONE,
> >> since it relies on NFS to propagate VM images to the nodes. So, once
> >> the VM is shut down, its image is already unpropagated, which in the WS
> >> context means copying the VM image back from the cluster nodes to the
> >> node where the WS is installed.
> >
> > You're absolutely right. So I guess Workspace --shutdown-save is like
> > Workspace --shutdown, which in turn calls onevm shutdown too, right? Or
> > is it a different case from Workspace --destroy? In the end, we might
> > have three Workspace flags executing onevm shutdown... :p
> >
> >>
> >> 4. Reboot
> >> onevm stop + resume.
> >
> > But onevm stop saves the state of the machine, just like suspend does
> > (http://www.opennebula.org/doku.php?id=documentation:rel1.0:ug) and does
> > not shut down the machine, right? I mean, when you resume it again, the
> > VM goes back to its previous state (with its previous configuration and
> > errors :p), doesn't it?
> >
> >>
> >> 5. Pause/Unpause
> >> onevm suspend  + resume
> >
> > I think this is the closest mapping, though IMO onevm suspend is more
> > like "hibernation" and worksp-control --pause is more like "suspend" ;)
> >
> >>
> >> I'm not sure if this mapping is perfect or not. But we can discuss
> >> this further with the OpenNebula developers.
> >>
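To keep track of where we stand, here is the mapping as I understand it from
this thread, written down as a little Python dict. Nothing here is
implemented yet; it is only my summary, with the still-open points noted in
the comments.

# My summary of the Workspace -> ONE mapping discussed in this thread.
WS_TO_ONE = {
    'deploy':              'onevm submit',               # scheduling left to ONE
    'destroy':             'onevm shutdown',
    'destroy --deleteall': 'onevm shutdown',             # no 'onevm cancel'; xm destroy happens via action_cancel in the VMM driver
    'shutdown-save':       'onevm shutdown',             # unpropagation unnecessary with NFS
    'reboot':              'onevm stop + onevm resume',  # open: stop saves state rather than rebooting
    'pause':               'onevm suspend',              # closer to "hibernation"
    'unpause':             'onevm resume',
}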
> >> > Though I have some more annoyances in my head, those are the important
> >> > ones... What do you think, shall we share information about this? ;) I
> >> > really hope so... :)
> >> > In any case, I guess you're very busy, so I'd really like to thank you
> >> > for reading this far and giving me some attention!
> >>
> >> Please e-mail me anytime!
> >
> > Thank you very very much for all your time and help provided :)
> >
> >>
> >> I will also forward you the pointers to the code I have produced
> >> during this summer.
> >>
> >> Cheers,
> >>
> >> William
> >
> > Best regards,
> >
> > Alvaro
> >
> >
> > --
> > Álvaro
> >
>
>
>
> --
> William Voorsluys
>
> williamvoor.googlepages.com
>



-- 
Álvaro