[one-users] Migration problem

Madhurranjan Mohaan mohanma at thoughtworks.com
Fri Mar 25 04:56:22 PDT 2011


Hello,

I faced this scenario a few days back; it happens because you have two
versions of qemu installed. The machine='pc' attribute will not appear if
you use the qemu installed at /usr/libexec/qemu-kvm instead of the other one.

You can create a soft link; on my host it looks like this:

ls -l /usr/bin/kvm
lrwxrwxrwx 1 root root 21 Mar 22 00:44 /usr/bin/kvm -> /usr/libexec/qemu-kvm
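If /usr/bin/kvm does not exist yet, the link itself is a one-liner; here is a sketch that demonstrates it in a temporary directory so nothing system-wide is touched (on a real host you would run the ln -s against /usr/libexec/qemu-kvm as root):

```shell
# On a real host (as root):  ln -s /usr/libexec/qemu-kvm /usr/bin/kvm
# Demonstrated below in a temp dir as a stand-in.
tmp=$(mktemp -d)
touch "$tmp/qemu-kvm"             # stand-in for /usr/libexec/qemu-kvm
ln -s "$tmp/qemu-kvm" "$tmp/kvm"  # the soft link
readlink "$tmp/kvm"               # prints the link target
rm -r "$tmp"
```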

Hope that helps.

Copy-pasting from another thread:

By default, libvirt will use the rhel5.4.0 machine type, which is supported
by kvm-83-164.el5_5.12 but not by upstream qemu (you can see the supported
machine types in the log above).

So you have two options:
- either use /usr/libexec/qemu-kvm as the emulator
- or explicitly change the machine type to something that is supported by
  /usr/bin/qemu-system-x86_64 (e.g., pc-0.12); this option is what covers
  your machine='pc' part.


cheers

Ranjan

---------- Forwarded message ----------
From: <users-request at lists.opennebula.org>
Date: Fri, Mar 25, 2011 at 4:40 PM
Subject: Users Digest, Vol 37, Issue 74
To: users at lists.opennebula.org


Send Users mailing list submissions to
       users at lists.opennebula.org

To subscribe or unsubscribe via the World Wide Web, visit
       http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
or, via email, send a message with subject or body 'help' to
       users-request at lists.opennebula.org

You can reach the person managing the list at
       users-owner at lists.opennebula.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."


Today's Topics:

  1. Re: Migration problem (George L. Emigh)
  2. libvirt unstable with OpenNebula (christophe bonnaud)
  3. Re: force to update cluster nodes info (knawnd at gmail.com)
  4. VM config template (knawnd at gmail.com)
  5. Re: Permission libvirt/qemu driver (Guillaume GENS)


----------------------------------------------------------------------

Message: 1
Date: Thu, 24 Mar 2011 16:08:03 -0400
From: "George L. Emigh" <george at podglobal.com>
To: users at lists.opennebula.org
Subject: Re: [one-users] Migration problem
Message-ID: <201103241608.03152.george at podglobal.com>
Content-Type: Text/Plain;  charset="iso-8859-1"

How can I change the machine type from the default of rhel5.4.0 to pc from
within the VM template?

If I add the following to the VM template:
OS = [ ARCH = "x86_64", MACHINE = "pc" ]

then even though onevm show reveals:
OS=[
 ARCH=x86_64,
 MACHINE=pc ]

deployment.0 still only shows:
<type arch='x86_64'>hvm</type>

as opposed to the expected:
<type arch='x86_64' machine='pc'>hvm</type>

On Thursday March 17 2011, George L. Emigh wrote:
 > Seems I have missed some key detail, when I attempt to migrate a vm it
 > fails.
 >
 > I can run a vm on the other host with libvirt / virt manager fine.
 >
 > Any ideas come to mind or a suggestion on where to look?
 >
 > Before attempted migration the image was owned by oneadmin:cloud
 > The image file becomes owned by root
 > ls -l /var/lib/one/images/229ec3e88658934ce75dac63633b83a60ac48cf2
 > -rw-rw---- 1 root root 8589934592 Mar 17 15:08
 > /var/lib/one/images/229ec3e88658934ce75dac63633b83a60ac48cf2
 >
 >
 > The VM log file shows
 >
 > Thu Mar 17 15:08:31 2011 [LCM][I]: New VM state is BOOT
 > Thu Mar 17 15:08:31 2011 [VMM][I]: Generating deployment file: /var/lib/one/49/deployment.0
 > Thu Mar 17 15:08:32 2011 [LCM][I]: New VM state is RUNNING
 > Thu Mar 17 15:09:01 2011 [LCM][I]: New VM state is SAVE_MIGRATE
 > Thu Mar 17 15:09:09 2011 [LCM][I]: New VM state is PROLOG_MIGRATE
 > Thu Mar 17 15:09:09 2011 [TM][I]: tm_mv.sh: Will not move, source and destination are equal
 > Thu Mar 17 15:09:09 2011 [LCM][I]: New VM state is BOOT
 > Thu Mar 17 15:09:12 2011 [VMM][I]: Command execution fail: 'if [ -x "/var/tmp/one/vmm/kvm/restore" ]; then /var/tmp/one/vmm/kvm/restore /var/lib/one//49/images/checkpoint; else exit 42; fi'
 > Thu Mar 17 15:09:12 2011 [VMM][I]: STDERR follows.
 > Thu Mar 17 15:09:12 2011 [VMM][I]: error: Failed to restore domain from /var/lib/one//49/images/checkpoint
 > Thu Mar 17 15:09:12 2011 [VMM][I]: error: cannot close file: Bad file descriptor
 > Thu Mar 17 15:09:12 2011 [VMM][I]: ExitCode: 1
 > Thu Mar 17 15:09:12 2011 [VMM][E]: Error restoring VM, error: Failed to restore domain from /var/lib/one//49/images/checkpoint
 > Thu Mar 17 15:09:13 2011 [DiM][I]: New VM state is FAILED
 > Thu Mar 17 15:09:13 2011 [TM][W]: Ignored: LOG - 49 tm_delete.sh: Deleting /var/lib/one//49/images
 >
 > Thu Mar 17 15:09:13 2011 [TM][W]: Ignored: LOG - 49 tm_delete.sh: Executed "rm -rf /var/lib/one//49/images".
 >
 > Thu Mar 17 15:09:13 2011 [TM][W]: Ignored: TRANSFER SUCCESS 49 -
 >
 >
 > The other host seems ok
 > onehost list
 >   ID NAME              CLUSTER  RVM   TCPU   FCPU   ACPU    TMEM    FMEM STAT
 >    0 shag              default    0    400    400    400    7.6G    6.8G   on
 >    1 klingon           default    1    400    400    300    7.6G      7G   on
 >
 >
 > From the libvirt log on the other host
 > cat one-49.log
 > 2011-03-17 15:09:09.857: starting up
 > LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin HOME=/ USER=root QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M rhel5.4.0 -enable-kvm -m 2048 -smp 2,sockets=2,cores=1,threads=1 -name one-49 -uuid 7b18a07e-bcc4-ceed-6ace-c31ecf90378a -nodefconfig -nodefaults -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/one-49.monitor,server,nowait -mon chardev=monitor,mode=control -rtc base=utc -no-acpi -boot c -drive file=/var/lib/one//49/images/disk.0,if=none,id=drive-virtio-disk0,boot=on,format=raw -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/var/lib/one//49/images/disk.1,if=none,id=drive-virtio-disk1,format=raw -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=/var/lib/one//49/images/disk.2,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=44,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:c0:a8:45:03,bus=pci.0,addr=0x3 -usb -vnc 127.0.0.1:49 -vga cirrus -incoming exec:cat -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
 > Supported machines are:
 > pc         Standard PC (alias of pc-0.13)
 > pc-0.13    Standard PC (default)
 > pc-0.12    Standard PC
 > pc-0.11    Standard PC, qemu 0.11
 > pc-0.10    Standard PC, qemu 0.10
 > isapc      ISA-only PC
 > 2011-03-17 15:09:12.859: shutting down
 >
 >
 >
 > From top of oned.log
 >
 > DB=BACKEND=sqlite
 > DEBUG_LEVEL=3
 > DEFAULT_DEVICE_PREFIX=hd
 > DEFAULT_IMAGE_TYPE=OS
 > HM_MAD=EXECUTABLE=one_hm
 > HOST_MONITORING_INTERVAL=60
 > IMAGE_REPOSITORY_PATH=/var/lib/one//images
 > IM_MAD=ARGUMENTS=kvm,EXECUTABLE=one_im_ssh,NAME=im_kvm
 > MAC_PREFIX=02:00
 > MANAGER_TIMER=15
 > NETWORK_SIZE=254
 > PORT=2633
 > SCRIPTS_REMOTE_DIR=/var/tmp/one
 > TM_MAD=ARGUMENTS=tm_nfs/tm_nfs.conf,EXECUTABLE=one_tm,NAME=tm_nfs
 > VM_DIR=/var/lib/one/
 > VM_HOOK=ARGUMENTS=$VMID,COMMAND=image.rb,NAME=image,ON=DONE
 > VM_MAD=ARGUMENTS=kvm,DEFAULT=vmm_ssh/vmm_ssh_kvm.conf,EXECUTABLE=one_vmm_ssh,NAME=vmm_kvm,TYPE=kvm
 > VM_POLLING_INTERVAL=60
 > VNC_BASE_PORT=5900
 >
 >
 > Thanks in advance.

--
George L. Emigh
CIO
Pod Global, LLC
10 Glenwood Ave
Osprey, FL 34229
(941)806-0276
http://podglobal.com/



------------------------------

Message: 2
Date: Fri, 25 Mar 2011 15:07:12 +0900
From: christophe bonnaud <takyon77 at gmail.com>
To: users at lists.opennebula.org
Subject: [one-users] libvirt unstable with OpenNebula
Message-ID:
       <AANLkTi=F7a0EDaPxxxMa70ibcshwo4EOjYEqY=ta0O_V at mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

Hi,

I have installed OpenNebula 2.0.1-1 with libvirt 0.7.7.

When I create a virtual machine it works fine, but a few minutes later
libvirt crashes.

If I don't run any virtual machine, libvirt has no problem.

If I start a machine manually using virsh:
  - if OpenNebula is running, libvirt has the same problem
  - if OpenNebula is stopped, libvirt has no problem

It seems that every 10 minutes, when OpenNebula tries to monitor the virtual
machine, libvirt crashes at that exact moment.

In oned.log I have:

Fri Mar 25 14:59:02 2011 [ReM][D]: HostPoolInfo method invoked
Fri Mar 25 14:59:02 2011 [ReM][D]: VirtualMachinePoolInfo method invoked
Fri Mar 25 14:59:16 2011 [VMM][I]: Monitoring VM 29.
Fri Mar 25 14:59:32 2011 [ReM][D]: HostPoolInfo method invoked
Fri Mar 25 14:59:32 2011 [ReM][D]: VirtualMachinePoolInfo method invoked

And if I run libvirt in gdb + debug:

14:59:16.135: debug : virEventRunOnce:592 : Poll on 8 handles 0x2aaaac00b390
timeout -1
14:59:16.135: debug : virEventRunOnce:594 : Poll got 1 event
14:59:16.135: debug : virEventDispatchTimeouts:404 : Dispatch 2
14:59:16.135: debug : virEventDispatchHandles:449 : Dispatch 8
14:59:16.135: debug : virEventDispatchHandles:463 : i=0 w=1
14:59:16.135: debug : virEventDispatchHandles:463 : i=1 w=2
14:59:16.135: debug : virEventDispatchHandles:463 : i=2 w=3
14:59:16.135: debug : virEventDispatchHandles:463 : i=3 w=4
14:59:16.135: debug : virEventDispatchHandles:463 : i=4 w=5
14:59:16.135: debug : virEventDispatchHandles:463 : i=5 w=6
14:59:16.135: debug : virEventDispatchHandles:463 : i=6 w=7
14:59:16.135: debug : virEventDispatchHandles:463 : i=7 w=8
14:59:16.135: debug : virEventDispatchHandles:476 : Dispatch n=7 f=15 w=8
e=4 0x8aa160
14:59:16.135: debug : virEventUpdateHandleImpl:146 : Update handle w=8 e=1
14:59:16.135: debug : virEventInterruptLocked:663 : Skip interrupt, 1
1084229952
14:59:16.135: debug : virEventCleanupTimeouts:494 : Cleanup 2
14:59:16.135: debug : virEventCleanupHandles:535 : Cleanupo 8
14:59:16.135: debug : virEventCleanupTimeouts:494 : Cleanup 2
14:59:16.135: debug : virEventCleanupHandles:535 : Cleanupo 8
14:59:16.135: debug : virEventMakePollFDs:372 : Prepare n=0 w=1, f=5 e=1
14:59:16.135: debug : virEventMakePollFDs:372 : Prepare n=1 w=2, f=7 e=1
14:59:16.135: debug : virEventMakePollFDs:372 : Prepare n=2 w=3, f=12 e=25
14:59:16.135: debug : virEventMakePollFDs:372 : Prepare n=3 w=4, f=13 e=25
14:59:16.135: debug : virEventMakePollFDs:372 : Prepare n=4 w=5, f=14 e=1
14:59:16.135: debug : virEventMakePollFDs:372 : Prepare n=5 w=6, f=10 e=25
14:59:16.135: debug : virEventMakePollFDs:372 : Prepare n=6 w=7, f=9 e=25
14:59:16.135: debug : virEventMakePollFDs:372 : Prepare n=7 w=8, f=15 e=1
14:59:16.135: debug : virEventCalculateTimeout:313 : Calculate expiry of 2
timers
14:59:16.135: debug : virEventCalculateTimeout:343 : Timeout at 0 due in -1
ms
14:59:16.135: debug : virEventRunOnce:592 : Poll on 8 handles 0x2aaaac0078f0
timeout -1
14:59:16.135: debug : virEventRunOnce:594 : Poll got 1 event
14:59:16.135: debug : virEventDispatchTimeouts:404 : Dispatch 2
14:59:16.135: debug : virEventDispatchHandles:449 : Dispatch 8
14:59:16.135: debug : virEventDispatchHandles:463 : i=0 w=1
14:59:16.135: debug : virEventDispatchHandles:463 : i=1 w=2
14:59:16.135: debug : virEventDispatchHandles:463 : i=2 w=3
14:59:16.135: debug : virEventDispatchHandles:463 : i=3 w=4
14:59:16.135: debug : virEventDispatchHandles:463 : i=4 w=5
14:59:16.135: debug : virEventDispatchHandles:463 : i=5 w=6
14:59:16.135: debug : virEventDispatchHandles:463 : i=6 w=7
14:59:16.135: debug : virEventDispatchHandles:463 : i=7 w=8
14:59:16.135: debug : virEventDispatchHandles:476 : Dispatch n=7 f=15 w=8
e=1 0x8aa160
14:59:16.135: debug : virEventUpdateHandleImpl:146 : Update handle w=8 e=1
14:59:16.135: debug : virEventInterruptLocked:663 : Skip interrupt, 1
1084229952
14:59:16.135: debug : virEventUpdateHandleImpl:146 : Update handle w=8 e=1
14:59:16.135: debug : virEventInterruptLocked:663 : Skip interrupt, 1
1084229952
14:59:16.136: debug : virEventCleanupTimeouts:494 : Cleanup 2
14:59:16.136: debug : virEventCleanupHandles:535 : Cleanupo 8
14:59:16.136: debug : remoteDispatchClientRequest:368 : prog=536903814 ver=1
type=0 satus=0 serial=6 proc=122
14:59:16.136: debug : virEventCleanupTimeouts:494 : Cleanup 2
14:59:16.136: debug : virEventCleanupHandles:535 : Cleanupo 8
14:59:16.136: debug : virEventMakePollFDs:372 : Prepare n=0 w=1, f=5 e=1
14:59:16.136: debug : virEventMakePollFDs:372 : Prepare n=1 w=2, f=7 e=1
14:59:16.136: debug : virEventMakePollFDs:372 : Prepare n=2 w=3, f=12 e=25
14:59:16.136: debug : virEventMakePollFDs:372 : Prepare n=3 w=4, f=13 e=25

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x43204940 (LWP 32689)]
0x00000033c0a79a30 in strlen () from /lib64/libc.so.6
(gdb) backtrace
#0  0x00000033c0a79a30 in strlen () from /lib64/libc.so.6
#1  0x000000000043d861 in qemudNodeGetSecurityModel (conn=<value optimized
out>, secmodel=0x43203c20) at qemu/qemu_driver.c:4910
#2  0x0000003972458589 in virNodeGetSecurityModel (conn=0x0, secmodel=0x0)
at libvirt.c:5118
#3  0x000000000041ef1b in remoteDispatchNodeGetSecurityModel (server=<value
optimized out>, client=<value optimized out>, conn=0x2aaaac007740,
hdr=<value optimized out>, rerr=0x43203f30,
   args=<value optimized out>, ret=0x43203e80) at remote.c:1306
#4  0x000000000041fbc1 in remoteDispatchClientCall (server=0x8aa160,
client=0x8ad920, msg=0x2aaaac03e700) at dispatch.c:506
#5  0x000000000041ff62 in remoteDispatchClientRequest (server=0x8aa160,
client=0x8ad920, msg=0x2aaaac03e700) at dispatch.c:388
#6  0x0000000000416ad7 in qemudWorker (data=<value optimized out>) at
libvirtd.c:1528
#7  0x00000033c160673d in start_thread () from /lib64/libpthread.so.0
#8  0x00000033c0ad3f6d in clone () from /lib64/libc.so.6

( I can provide more logs if necessary )

Has anyone met this situation, or does anyone have any clues about the
origin of this problem?

Cheers,

Chris.


--
------------------------------------------------------
Bonnaud Christophe
GSDC
Korea Institute of Science and Technology Information
Fax. +82-42-869-0789
Tel. +82-42-869-0660
Mobile +82-10-4664-3193

------------------------------

Message: 3
Date: Fri, 25 Mar 2011 10:54:32 +0300
From: knawnd at gmail.com
To: users at lists.opennebula.org
Subject: Re: [one-users] force to update cluster nodes info
Message-ID: <4D8C4A38.7080707 at gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

Sorry for the noise! It seems the problem is in xentop: for some reason it
reports a wrong free-memory figure that doesn't correspond to the
'free -m' output.

Nikolay.

knawnd at gmail.com wrote on 24/03/11 17:28:
> Dear all,
>
> Is there any way to force to update cluster nodes info (I mean that
> one which is shown when 'onehost list' is executed).
>
> Thanks.
> Nikolay.


------------------------------

Message: 4
Date: Fri, 25 Mar 2011 12:56:38 +0300
From: knawnd at gmail.com
To: users at lists.opennebula.org
Subject: [one-users] VM config template
Message-ID: <4D8C66D6.8000703 at gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed

Dear all,

is it possible to change the ONE VM template format in the case of the
OpenVZ hypervisor, and would it be enough to take those changes into
account in OvzDriver.cc only? Or is the ONE VM template format fixed, so
that changes in other files are required as well (if yes, then in which files)?
For example, I'd like to write MEMORY parameter for OpenVZ VM as
MEMORY  = [ KMEMSIZE="14372700:14790164",
            LOCKEDPAGES="2048:2048",
            PRIVVMPAGES="65536:69632",
            SHMPAGES="21504:21504",
            PHYSPAGES="0:unlimited",
            VMGUARPAGES="33792:unlimited",
            OOMGUARPAGES="26112:unlimited" ]

whereas for Xen it currently looks like:

MEMORY = 256
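As an aside, the vector form above is mechanically easy to handle; a toy Python sketch (not OpenNebula code; parse_vector is a hypothetical helper) that splits such a KEY="value" body into a dict:

```python
import re

def parse_vector(body: str) -> dict:
    """Parse a ONE-style vector attribute body such as
    'KMEMSIZE="a:b", LOCKEDPAGES="c:d"' into a dict."""
    return dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', body))

body = '''KMEMSIZE="14372700:14790164",
          LOCKEDPAGES="2048:2048",
          PHYSPAGES="0:unlimited"'''
print(parse_vector(body)["PHYSPAGES"])  # 0:unlimited
```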

Nikolay.


------------------------------

Message: 5
Date: Fri, 25 Mar 2011 12:08:52 +0100
From: Guillaume GENS <guillaume.gens at cnam.fr>
To: users at lists.opennebula.org
Subject: Re: [one-users] Permission libvirt/qemu driver
Message-ID:
       <AANLkTinh-GPrncxLcDKGj3TUvNsTjv-LGVznQtw9NtyE at mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1

A small bump.

Can nobody help me? Has nobody had an installation problem with KVM on
Ubuntu 10.04?

---
Guillaume GENS


On Wed, Mar 23, 2011 at 11:33 AM, Guillaume GENS <guillaume.gens at cnam.fr>
wrote:
> hi
>
> I've got a deployment problem with the ttylinux image in my OpenNebula
> infrastructure (self-contained installation). I think it's a permission
> problem between libvirt and its qemu driver; I have tried many solutions
> but I can't resolve it.
>
> in /srv/cloud/one/var/14/vm.log:
> Wed Mar 23 11:08:10 2011 [VMM][I]: Generating deployment file:
> /srv/cloud/one/var/14/deployment.0
> Wed Mar 23 11:08:40 2011 [VMM][I]: Command execution fail: 'if [ -x "/srv/cloud/one/var/remotes/vmm/kvm/deploy" ]; then /srv/cloud/one/var/remotes/vmm/kvm/deploy /srv/cloud/one/var/images/14/images/deployment.0; else exit 42; fi'
> Wed Mar 23 11:08:40 2011 [VMM][I]: STDERR follows.
> Wed Mar 23 11:08:40 2011 [VMM][I]: error: Failed to create domain from
> /srv/cloud/one/var/images/14/images/deployment.0
> Wed Mar 23 11:08:40 2011 [VMM][I]: error: cannot set ownership on
> /srv/cloud/one/var/images/14/images/disk.0: No such file or directory
> Wed Mar 23 11:08:40 2011 [VMM][I]: ExitCode: 255
> Wed Mar 23 11:08:40 2011 [VMM][E]: Error deploying virtual machine:
> error: Failed to create domain from
> /srv/cloud/one/var/images/14/images/deployment.0
> Wed Mar 23 11:08:41 2011 [DiM][I]: New VM state is FAILED
>
>
> in /var/log/libvirt/qemu/one-14.log:
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin /usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 64 -smp 1 -name one-14 -uuid 279a251a-e841-f910-68d3-0ebe89d6bff9 -nographic -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/one-14.monitor,server,nowait -monitor chardev:monitor -no-acpi -boot c -drive file=/srv/cloud/one/var/images/14/images/disk.0,if=ide,index=0,boot=on,format=raw -drive file=/srv/cloud/one/var/images/14/images/disk.1,if=ide,media=cdrom,index=2,format=raw -net nic,macaddr=54:52:a3:ad:e7:15,vlan=0,name=nic.0 -net tap,fd=33,vlan=0,name=tap.0 -serial none -parallel none -usb
> libvir: QEMU error : cannot set ownership on /srv/cloud/one/var/images/14/images/disk.0: No such file or directory
>
> in /etc/group :
> kvm:x:115:oneadmin
> libvirtd:x:116:guigui,oneadmin
> cloud:x:9000:
>
> oneadmin at frontend:~$ id
> uid=9000(oneadmin) gid=9000(cloud) groups=115(kvm),116(libvirtd),9000(cloud)
>
>
> in /etc/libvirt/libvirtd.conf :
> unix_sock_group = "libvirtd"
> unix_sock_ro_perms = "0777"
> unix_sock_rw_perms = "0777"
>
>
> in /etc/libvirt/qemu.conf :
> user = "oneadmin"
> group = "libvirtd"
>
>
> thanks ahead
>
> PS: the download wget
> http://dev.opennebula.org/attachments/download/170/ttylinux.tar.gz
> always restarts past 97%; you may have an end-of-file problem in your
> repository.
>
> ---
> Guillaume GENS
>


------------------------------

_______________________________________________
Users mailing list
Users at lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org


End of Users Digest, Vol 37, Issue 74
*************************************

