[one-users] SSH error

Zeeshan Ali Shah zashah at pdc.kth.se
Wed Feb 9 04:32:56 PST 2011


What about extending the TM driver for SSH so that it copies the needed 
kernel along with the images to the node?
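A minimal sketch of what such an extension could look like, following the mkdir/scp pattern that tm_clone.sh already uses for disk images in the log below. The copy_boot_files helper is a hypothetical name, and the commands are echoed rather than executed so the sketch is side-effect free:

```shell
#!/bin/sh
# Hypothetical addition to the ssh TM driver's tm_clone.sh: after cloning
# the disk image, also copy the kernel and initrd referenced by the VM
# template to the destination node. In the real driver, the host and paths
# would come from the driver's arguments; here they are plain parameters.

copy_boot_files() {
    dst_host=$1
    kernel=$2
    initrd=$3
    dst_dir=$(dirname "$kernel")

    # Same sequence tm_clone.sh uses for disk.0: create the target
    # directory on the node, then scp the files into place.
    echo "/usr/bin/ssh $dst_host mkdir -p $dst_dir"
    echo "/usr/bin/scp $kernel $dst_host:$kernel"
    echo "/usr/bin/scp $initrd $dst_host:$initrd"
}

copy_boot_files nebula1 \
    /srv/cloud/one/ttylinux-xen/vmlinuz-xen \
    /srv/cloud/one/ttylinux-xen/initrd.gz
```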

Zeeshan

On 02/09/2011 01:30 PM, Ruben Diez wrote:
> Hi Zeeshan
>
> You MUST put the kernels on all the nodes. You can use the 
> parallel-scp tool or export them to all the nodes by NFS... It is 
> your decision... The concrete mechanism does not matter... The 
> important thing is to make the kernels accessible to all the physical 
> nodes....
>
> Regards
>
> Zeeshan Ali Shah wrote:
>> Hi Ruben  ,
>> thanks for answer ,
>>
>> But if I have 10 kernels and 10 nodes, is there an easy way to 
>> transfer the kernels besides NFS?
>>
>> In Eucalyptus we can upload the kernel and initrd along with the image 
>> to the repository; is there any way to do the same in OpenNebula?
>>
>>
>> BR
>>
>> Zeeshan
>>
>> On 02/08/2011 06:31 PM, Ruben Diez wrote:
>>> You must copy the kernel to the correct path on every physical 
>>> node....
>>>
>>> You can use a tool like parallel-scp ( 
>>> http://code.google.com/p/parallel-ssh/ ) to do this... (perhaps 
>>> your distro has a package for parallel-ssh)
>>>
>>> Another possibility is to use full virtualization instead of 
>>> paravirtualization....
>>>
>>> Regards
>>>
>>> Zeeshan Ali Shah wrote:
>>>> Hi, when creating a VM I am getting this:
>>>> Tue Feb  8 11:43:07 2011 [TM][I]: tm_clone.sh: 
>>>> frontnebula:/srv/cloud/one/var//images/8625d68b699fd30e64360471eb2c38fed47fcfb6 
>>>> nebula1:/srv/cloud/one/var//0/images/disk.0
>>>> Tue Feb  8 11:43:07 2011 [TM][I]: tm_clone.sh: DST: 
>>>> /srv/cloud/one/var//0/images/disk.0
>>>> Tue Feb  8 11:43:07 2011 [TM][I]: tm_clone.sh: Creating directory 
>>>> /srv/cloud/one/var//0/images
>>>> Tue Feb  8 11:43:07 2011 [TM][I]: tm_clone.sh: Executed 
>>>> "/usr/bin/ssh nebula1 mkdir -p /srv/cloud/one/var//0/images".
>>>> Tue Feb  8 11:43:07 2011 [TM][I]: tm_clone.sh: Cloning 
>>>> frontnebula:/srv/cloud/one/var//images/8625d68b699fd30e64360471eb2c38fed47fcfb6 
>>>>
>>>> Tue Feb  8 11:43:07 2011 [TM][I]: tm_clone.sh: Executed 
>>>> "/usr/bin/scp 
>>>> frontnebula:/srv/cloud/one/var//images/8625d68b699fd30e64360471eb2c38fed47fcfb6 
>>>> nebula1:/srv/cloud/one/var//0/images/disk.0".
>>>> Tue Feb  8 11:43:07 2011 [TM][I]: tm_clone.sh: Executed 
>>>> "/usr/bin/ssh nebula1 chmod a+rw /srv/cloud/one/var//0/images/disk.0".
>>>> Tue Feb  8 11:43:07 2011 [LCM][I]: New VM state is BOOT
>>>> Tue Feb  8 11:43:07 2011 [VMM][I]: Generating deployment file: 
>>>> /srv/cloud/one/var/0/deployment.0
>>>> Tue Feb  8 11:43:08 2011 [VMM][I]: Command execution fail: 'if [ -x 
>>>> "/var/tmp/one/vmm/xen/deploy" ]; then /var/tmp/one/vmm/xen/deploy 
>>>> /srv/cloud/one/var//0/images/deployment.0; 
>>>> else                              exit 42; fi'
>>>> Tue Feb  8 11:43:08 2011 [VMM][I]: STDERR follows.
>>>> Tue Feb  8 11:43:08 2011 [VMM][I]: Error: Kernel image does not 
>>>> exist: /srv/cloud/one/ttylinux-xen/vmlinuz-xen
>>>> Tue Feb  8 11:43:08 2011 [VMM][I]: ExitCode: 1
>>>> Tue Feb  8 11:43:08 2011 [VMM][E]: Error deploying virtual machine: 
>>>> Error: Kernel image does not exist: 
>>>> /srv/cloud/one/ttylinux-xen/vmlinuz-xen
>>>> Tue Feb  8 11:43:08 2011 [DiM][I]: New VM state is FAILED
>>>>
>>>>
>>>> My VM template has:
>>>>   kernel = "/srv/cloud/one/ttylinux-xen/vmlinuz-xen",
>>>>   initrd = "/srv/cloud/one/ttylinux-xen/initrd.gz",
>>>>
>>>>
>>>> Since I am using SSH (without NFS) on the nodes, the location 
>>>> /srv/cloud/one/ttylinux-xen does not exist there; I only untarred it 
>>>> on the front end.
>>>>
>>>> Am I missing something ?
>>>>
>>>> Thanks in advance.
>>>>
>>>> Zeeshan
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at lists.opennebula.org
>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>>
>>
>>
>
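For reference, the parallel-scp approach Ruben suggests could look roughly like this. The node names nebula1..nebula10 are hypothetical, and the final command is echoed rather than executed so the sketch has no side effects:

```shell
#!/bin/sh
# Build a hosts file listing the 10 hypothetical physical nodes, then
# show the parallel-scp invocation that would push the kernel directory
# to the same path on every node. Assumes the parallel-ssh package
# (http://code.google.com/p/parallel-ssh/) is installed.
hosts_file=$(mktemp)
for i in $(seq 1 10); do
    echo "nebula$i" >> "$hosts_file"
done

# -h: hosts file, -r: recursive copy of the whole kernel directory.
# Echoed here so the sketch runs without the nodes being reachable.
echo parallel-scp -h "$hosts_file" -r \
    /srv/cloud/one/ttylinux-xen /srv/cloud/one/
```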


-- 
Regards

Zeeshan Ali Shah
System Administrator
PDC-Center for High Performance Computing
KTH Royal Institute of Technology, Sweden
+46 8 790 9115



