[one-users] Using Xen phy devices in VM templates
Stefan Freitag
stefan.freitag at udo.edu
Fri Feb 19 05:19:33 PST 2010
Hello Ruben,
I already thought about this idea, but then it would not be sufficient to
pass only one of the disks via the RAW section. All three of them would be
needed. This is because, when I use the OpenNebula DISK section for two of
my disks and the RAW section for the third disk, the resulting Xen
configuration file will contain two lines with the disk keyword, right?
E.g.:
disk = [ 'file:/vm/udo-wn099_root.img,xvda,w',
'file:/vm/udo-wn099_swap.img,xvdb,w']
disk = [ 'phy:/dev/cciss/c0d0p4,xvdc,w']
From what I know about Xen, this is an invalid configuration: Xen will only
use one of the two lines and ignore the other :-(
Cheers
Stefan
Ruben S. Montero schrieb:
> Sorry, I meant:
>
>
> RAW = [
> type = "xen",
> data = "disk=['phy:/dev/cciss/c0d0p4,xvdc,w']"
> ]
>
> On Fri, Feb 19, 2010 at 12:23 PM, Ruben S. Montero <rubensm at dacya.ucm.es>
> wrote:
>> Hi,
>>
>> I think you can try two approaches:
>>
>> * Modify the TM script so it SSHes to the node to make the link. You can
>> use tm_clone.sh from the ssh suite as a starting point.
>>
>> * As you do not really require anything from the TM, you can use the
>> RAW attribute; add:
>> RAW = [
>> type = "xen",
>> data = " 'phy:/dev/cciss/c0d0p4,xvdc,w' ]"
>> ]
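The first approach (a TM script that makes the link over ssh) could look roughly like this. This is only a sketch: the script name tm_ln_phy.sh is hypothetical, and it assumes the same SRC/DST "host:path" argument convention the TM scripts (e.g. tm_ln.sh) use.

```shell
#!/bin/sh
# Hypothetical tm_ln_phy.sh (sketch): link a host-local block device
# into the VM directory on the *target* host, instead of linking on
# the front-end as the stock ssh tm_ln.sh does.
#
# Arguments follow the TM convention: SRC and DST as "host:path".

if [ $# -eq 2 ]; then
    SRC=$1    # e.g. one:/dev/cciss/c0d0p4
    DST=$2    # e.g. udo-bl6107:/vm/72/images/disk.2

    DST_HOST=${DST%%:*}    # host part before the first ':'
    DST_PATH=${DST#*:}     # path part after the first ':'
    DEVICE=${SRC#*:}       # device path; the front-end host prefix is
                           # dropped, since the device is local to each
                           # cluster node anyway

    # Create the link on the target host, where the device really exists.
    ssh "$DST_HOST" "mkdir -p \$(dirname '$DST_PATH') && ln -sf '$DEVICE' '$DST_PATH'"
fi
```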
>>
>> Cheers
>>
>> Ruben
>>
>> On Fri, Feb 19, 2010 at 11:14 AM, Stefan Freitag
>> <stefan.freitag at udo.edu> wrote:
>>> Hi Ruben,
>>>
>>> thanks for the reply.
>>> I added the block and clone information to the template and
>>> tried to deploy an appliance.
>>>
>>> From what I see by browsing the log files, OpenNebula is now trying to
>>> create a link from <OpenNebula Server>:/dev/cciss/c0d0p4 to
>>> <TargetHost>:/vm/$VMID/images/disk.2.
>>>
>>> == transfer prolog log snippet==
>>> LN one:/dev/cciss/c0d0p4 udo-bl6107:/vm/72/images/disk.2
>>> == transfer prolog log snippet==
>>>
>>> == vm log snippet ==
>>> Command execution fail: /opt/one/lib/tm_commands/ssh/tm_ln.sh
>>> one:/dev/cciss/c0d0p4 udo-bl6107:/vm/72/images/disk.2
>>> == vm log snippet ==
>>>
>>> That's not ok for my use case :-(
>>>
>>> The device /dev/cciss/c0d0p4 is not a shared one and therefore does not
>>> exist at the server running OpenNebula. Each cluster node has its own
>>> device /dev/cciss/c0d0p4, as it is created as a partition on the
>>> internal blade server hard disk.
>>>
>>> Is it possible to pass a disk type to OpenNebula that is just passed
>>> through and added to the Xen configuration file? Like
>>>
>>> DISK = [
>>> type ="local"
>>> source="/dev/cciss/c0d0p4",
>>> target = "xvdc",
>>> readonly = "no" ]
>>>
>>> resulting in the needed phy entry in the configuration file?
>>>
>>>
>>> Kind regards
>>> Stefan
>>>
>>>
>>> Ruben S. Montero schrieb:
>>>> Hi
>>>>
>>>> That should be :
>>>> DISK = [
>>>> type="block"
>>>> clone="no"
>>>> source="/dev/cciss/c0d0p4",
>>>> target = "xvdc",
>>>> readonly = "no" ]
>>>>
>>>> This should generate something like
>>>>
>>>> 'phy:$VM_DIR/var/$VMID/disk.3,xvdc,w'
>>>>
>>>> Note also the clone="no" part. The Transfer Manager will try to setup
>>>> the disk in the VM home directory; if each cluster node has the same
>>>> scratch partition we can just link the device there. The script that
>>>> actually makes the link is tm_ln.sh in
>>>> $ONE_LOCATION/lib/tm_commands/nfs (or ssh if you are not using
>>>> NFS...), just in case you need to tune something...
>>>>
>>>> Cheers
>>>>
>>>> Ruben
>>>>
>>>>
>>>> On Thu, Feb 18, 2010 at 8:54 PM, Stefan Freitag
>>>> <stefan.freitag at udo.edu>
>>>> wrote:
>>>>> Dear all,
>>>>>
>>>>> at present I am using OpenNebula 1.4 to deploy virtual appliances
>>>>> to a Xen-based compute cluster. On each of the cluster nodes there
>>>>> exists a hard disk partition that should be used as a scratch
>>>>> directory inside the virtual appliance (only one appliance is
>>>>> assigned to one server at a time, so there is no conflict).
>>>>>
>>>>>
>>>>> I created a template to describe the appliances and got stuck. Here
>>>>> is
>>>>> what I did so far concerning the hard disks used in the virtual
>>>>> appliance:
>>>>>
>>>>> 1) This is the image containing the OS, boot, and root directories:
>>>>> DISK = [
>>>>> source = "/mnt/gridconfig/images/workernode/wn_sl54_x86_64.img",
>>>>> target = "xvda",
>>>>> readonly = "no" ]
>>>>>
>>>>> 2) a swap partition that is created on the fly by OpenNebula
>>>>> DISK = [
>>>>> type = swap,
>>>>> size = 1024,
>>>>> target = "xvdb",
>>>>> readonly = "no" ]
>>>>>
>>>>>
>>>>> 3) I thought that this could work
>>>>> DISK = [
>>>>> source="/dev/cciss/c0d0p4",
>>>>> target = "xvdc",
>>>>> readonly = "no" ]
>>>>>
>>>>> but in the OpenNebula documentation one can read that without
>>>>> specifying a type, "disk" is assumed, and I need a Xen phy: device.
>>>>>
>>>>> What I need to express with OpenNebula has to be translated into
>>>>> something like
>>>>>
>>>>> disk = [ 'file:/vm/udo-wn099_root.img,xvda,w',
>>>>> 'file:/vm/udo-wn099_swap.img,xvdb,w', 'phy:/dev/cciss/c0d0p4,xvdc,w'
>>>>> ]
>>>>>
>>>>> in Xen-speak.
>>>>>
>>>>>
>>>>> What do I need to specify to make use of the phy: partition located
>>>>> at
>>>>> each of the cluster nodes?
>>>>>
>>>>>
>>>>> Kind regards,
>>>>> Stefan
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at lists.opennebula.org
>>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Dr. Ruben Santiago Montero
>>>> Associate Professor, Complutense University of Madrid
>>>>
>>>> URL: http://dsa-research.org/doku.php?id=people:ruben
>>>> Weblog: http://blog.dsa-research.org/?author=7
>>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Dr. Ruben Santiago Montero
>> Associate Professor, Complutense University of Madrid
>>
>> URL: http://dsa-research.org/doku.php?id=people:ruben
>> Weblog: http://blog.dsa-research.org/?author=7
>>
>
>
>
> --
> Dr. Ruben Santiago Montero
> Associate Professor, Complutense University of Madrid
>
> URL: http://dsa-research.org/doku.php?id=people:ruben
> Weblog: http://blog.dsa-research.org/?author=7
>