[one-users] Kernel panic - not syncing: Attempted to kill init!

anoop Lekshmanan anpl1980 at gmail.com
Tue Feb 8 07:12:34 PST 2011


Hi Steve,

Sorry to disturb you again. I am stuck on a pygrub error that I could not
fix myself. The STDERR output follows:
Tue Feb  8 20:23:14 2011 [VMM][I]: Traceback (most recent call last):
Tue Feb  8 20:23:14 2011 [VMM][I]:   File "/usr/bin/pygrub", line 682, in ?
Tue Feb  8 20:23:14 2011 [VMM][I]:     chosencfg = run_grub(file, entry, fs)
Tue Feb  8 20:23:14 2011 [VMM][I]:   File "/usr/bin/pygrub", line 536, in run_grub
Tue Feb  8 20:23:14 2011 [VMM][I]:     g = Grub(file, fs)
Tue Feb  8 20:23:14 2011 [VMM][I]:   File "/usr/bin/pygrub", line 203, in __init__
Tue Feb  8 20:23:14 2011 [VMM][I]:     self.read_config(file, fs)
Tue Feb  8 20:23:14 2011 [VMM][I]:   File "/usr/bin/pygrub", line 397, in read_config
Tue Feb  8 20:23:14 2011 [VMM][I]:     raise RuntimeError, "couldn't find bootloader config file in the image provided."
Tue Feb  8 20:23:14 2011 [VMM][I]: RuntimeError: couldn't find bootloader config file in the image provided.
Tue Feb  8 20:23:14 2011 [VMM][I]: No handlers could be found for logger "xend"
Tue Feb  8 20:23:14 2011 [VMM][I]: Error: Boot loader didn't return any data!


My VM template:

NAME   = ttylinux
CPU    = 0.1
MEMORY = 64
OS      =   [ bootloader = "/usr/bin/pygrub" ]

DISK  = [  image = "tty"]

DISK   = [
 type     = swap,
 size     = 5120,
 target   = sdb ]

DISK   = [
 type     = fs,
 size     = 4096,
 format   = ext3,
 save     = yes,
 target   = sdc
 ]

NIC = [ BRIDGE = "xenbr0", MAC = "00:16:3E:02:03:05" ]
FEATURES=[ acpi="no" ]

GRAPHICS = [
 type    = "vnc",
 listen  = "127.0.0.1",
 port    = "5916" ]

CONTEXT = [
   hostname    = "$NAME",
   ip_public   = "192.168.0.16",
   netmask     = "255.255.252.0",
   gateway     = "192.168.0.245",
   ns          = "192.168.1.9",
   files      = "/opt/cloud/one/images/init.sh",
   #target      = "hdc",
   root_pubkey = "id_rsa.pub",
   username    = "oneadmin",
   user_pubkey = "id_rsa.pub"
]

REQUIREMENTS = "HYPERVISOR=\"xen\""

I had some errors with target      = "hdc", so I commented it out. Have
you faced any errors like this? I could not find anything useful in
xend.log.

Thanks,
Anoop


On Sat, Feb 5, 2011 at 1:56 AM, anoop Lekshmanan <anpl1980 at gmail.com> wrote:

> Steve, Thank you so much!
>
> I was really stuck here and you saved me.
>
> :)
>
> Thanks,
> Anoop
>
>
> On Sat, Feb 5, 2011 at 1:38 AM, Steven Timm <timm at fnal.gov> wrote:
>
>>
>> This is my template for running a Xen VM out of the image repository.
>> [timm at fcl002 ~/OpenNebula]$ cat cloudlvs_xen.one
>> NAME   = cloudlvs.fnal.gov
>> CPU    = 2
>> VCPU   = 2
>> MEMORY = 2048
>>
>> #OS     = [
>> #  kernel     = /vmlinuz,
>> #  initrd     = /initrd.img,
>> #  root       = sda1,
>> #  kernel_cmd = "ro xencons=tty console=tty1"]
>>
>> OS      =   [ bootloader = "/usr/bin/pygrub" ]
>>
>> DISK  = [  image = "cloudlvs-persist-xen.img" ]
>>
>> DISK   = [
>>  type     = swap,
>>  size     = 5120,
>>  target   = sdb ]
>>
>> DISK   = [
>>  type     = fs,
>>  size     = 4096,
>>  format   = ext3,
>>  save     = yes,
>>  target   = sdc,
>>  bus      = scsi ]
>>
>> #NIC    = [ NETWORK = "FermiCloud" ]
>> NIC = [ BRIDGE = "xenbr0", MAC = "00:16:3E:02:03:05" ]
>>
>> FEATURES=[ acpi="no" ]
>>
>> GRAPHICS = [
>>  type    = "vnc",
>>  listen  = "127.0.0.1",
>>  port    = "5916" ]
>>
>> CONTEXT = [
>>    hostname    = "$NAME",
>>    ip_public   = "131.225.154.207",
>>    netmask     = "255.255.254.0",
>>    gateway     = "131.225.154.1",
>>    ns          = "131.225.8.120",
>>    files      = "/cloud/images/OpenNebula/templates/init.sh
>> /home/timm/OpenNebula/k5login",
>>    target      = "hdc",
>>    root_pubkey = "id_dsa.pub",
>>    username    = "opennebula",
>>    user_pubkey = "id_dsa.pub"
>> ]
>>
>> REQUIREMENTS = "HYPERVISOR=\"xen\""
>>
>> --------------
>>
>> and here is the declaration of my image in the image repo.
>>
>> [timm at fcl002 ~/OpenNebula]$ oneimage show 56
>> IMAGE  INFORMATION
>> ID             : 56
>> NAME           : cloudlvs-persist-xen.img
>> TYPE           : OS
>> REGISTER TIME  : 02/03 17:02:52
>> PUBLIC         : No
>> PERSISTENT     : Yes
>> SOURCE         :
>> /var/lib/one/image-repo/d75ce946cc408f9db71bdf14ba6eecd5d20750a5
>> STATE          : used
>> RUNNING_VMS    : 1
>>
>> IMAGE TEMPLATE
>> BUS=scsi
>> DESCRIPTION=cloudlvs xen
>> DEV_PREFIX=sd
>> NAME=cloudlvs-persist-xen.img
>> PATH=/cloud/images/OpenNebula/images/cloudlvs-persist-xen.img
>> TYPE=OS
>>
>>
>> For Xen you are best to try to mount / as sda and that's what
>> this os template will do.
>>
>> Steve
>>
>>
>>
>>
>> On Fri, 4 Feb 2011, Steven Timm wrote:
>>
>>
>>> The error you are getting is probably due to a malformed ramdisk
>>> for your Xen kernel. Likely what is happening is that the ramdisk
>>> is trying to load the real scsi device as sda1  rather than the xenblk
>>> block device.  Try to replace the ramdisk and see if you do any better.
>>> I had this same error a while ago and building a ramdisk on a xen
>>> VM that was installed statically is what it took for me to make it work.
>>>
>>> Also, just so you know, when you say  KERNEL = /boot/vmlinuz....
>>> etc, then the kernel and ramdisk have to be in that location
>>> on your VM host, not inside the LVM you are trying to boot.
>>> If you want to use the kernel/ramdisk inside the xen virtual machine
>>> then you should use pygrub, which is what I use.
>>>
>>> Steve
>>>
>>>
>>> On Sat, 5 Feb 2011, anoop Lekshmanan wrote:
>>>
>>>> I get this error when loading ttylinux or any other image into a Xen
>>>> node with LVM. I have tried the native kernel as well, but that did
>>>> not work.
>>>>
>>>> device-mapper: uevent: version 1.0.3
>>>> device-mapper: ioctl: 4.11.5-ioctl (2007-12-12) initialised:
>>>> dm-devel at redhat.com
>>>> device-mapper: dm-raid45: initialized v0.2594l
>>>> Kernel panic - not syncing: Attempted to kill init!
>>>>
>>>> My VM template:
>>>>
>>>> NAME   = test
>>>> CPU    = 1
>>>> MEMORY = 256
>>>> OS = [
>>>>      KERNEL     = /boot/vmlinuz-2.6.18-194.32.1.el5xen,
>>>>      INITRD     = /boot/initrd-2.6.18-194.32.1.el5xen.img,
>>>>      ROOT       = /dev/vg00/lv-one--0
>>>>      #BOOTLOADER  = /usr/bin/pygrub,
>>>>      #KERNEL_CMD = "ro"
>>>>    ]
>>>>
>>>> DISK   = [
>>>>  IMAGE    = "ttylin",
>>>>  #source   = "/dev/vg00/xenvm01",
>>>>  target   = "hdb",
>>>>  readonly = "no" ]
>>>>
>>>> NIC    = [ NETWORK = "Small network" ]
>>>>
>>>> FEATURES=[ acpi="no" ]
>>>>
>>>> GRAPHICS=[
>>>>  AUTOPORT=yes,
>>>>  KEYMAP=en-us,
>>>>  LISTEN=127.0.0.1,
>>>>  PORT=5901,
>>>>  TYPE=vnc ]
>>>>
>>>> #CONTEXT = [
>>>> #    hostname    = "$NAME",
>>>> #     ip_public   = "192.168.0.16",
>>>> #    files      = "/opt/cloud/one/images/init.sh
>>>> /opt/cloud/one/.ssh/id_rsa.pub",
>>>> #    target      = "hdc",
>>>> #    root_pubkey = "id_rsa.pub",
>>>> #    username    = "oneadmin",
>>>> #    user_pubkey = "id_rsa.pub"
>>>>
>>>> VM deployment file generated:
>>>>
>>>> name = 'one-65'
>>>> #O CPU_CREDITS = 256
>>>> memory  = '256'
>>>> kernel = '/boot/vmlinuz-2.6.18-194.32.1.el5xen'
>>>> ramdisk = '/boot/initrd-2.6.18-194.32.1.el5xen.img'
>>>> root = '/dev//dev/vg00/lv-one--0'
>>>> disk = [
>>>>   'tap:aio:/opt/cloud/one/var//65/images/disk.0,hdb,w',
>>>> ]
>>>> vif = [
>>>>   ' mac=02:00:c0:a8:1e:06,ip=192.168.30.6,bridge=xenbr0',
>>>> ]
>>>> vfb = ['type=vnc,vnclisten=127.0.0.1,vncdisplay=1,keymap=en-us']
>>>>
>>>>
>>>>
>>>> I am trying to load this into LVM; ONE creates the LV successfully
>>>> and the VM state is "RUNN".
>>>>
>>>> Any help would be appreciated.
>>>>
>>>> Thanks,
>>>> Anoop
>>>>
>>>>
>>>
>>>
>> --
>> ------------------------------------------------------------------
>> Steven C. Timm, Ph.D  (630) 840-8525
>> timm at fnal.gov  http://home.fnal.gov/~timm/
>> Fermilab Computing Division, Scientific Computing Facilities,
>> Grid Facilities Department, FermiGrid Services Group, Group Leader.
>> Lead of FermiCloud project.
>>
>
>
