[one-users] Race condition--onevm shutdown vs. onevm delete

Steven Timm timm at fnal.gov
Fri Jan 4 07:06:07 PST 2013


I am seeing this happen in
OpenNebula 2.0 with the KVM hypervisor.

The user invokes "onevm shutdown 3823" followed about 30
seconds later by "onevm delete 3823".  The VM is still in EPILOG
state at that time.
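
For the record, the triggering sequence is nothing more than the
following (3823 happens to be the instance I watched; presumably any
VM ID would do):

   # Shut the VM down, then delete it while the epilog cleanup
   # from the shutdown is still in flight on the host.
   onevm shutdown 3823
   sleep 30             # the VM has reached EPILOG but is not yet DONE
   onevm delete 3823

Here is the relevant VM log: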

Fri Jan  4 07:17:07 2013 [VMM][D]: Monitor Information:
         CPU   : -1
         Memory: 2097152
         Net_TX: -1
         Net_RX: -1
Fri Jan  4 07:20:06 2013 [LCM][I]: New VM state is SHUTDOWN
Fri Jan  4 07:20:30 2013 [LCM][I]: New VM state is EPILOG
Fri Jan  4 07:20:44 2013 [TM][I]: tm_delete.sh: Deleting /var/lib/one/3823/images
Fri Jan  4 07:20:44 2013 [TM][I]: tm_delete.sh: Executed "/usr/bin/ssh fcl012 rm -rf /var/lib/one/3823/images".
Fri Jan  4 07:20:44 2013 [DiM][I]: New VM state is DONE
Fri Jan  4 07:20:44 2013 [HKM][I]: Hook image successfully executed.

What happens on the VM host is the following:

Jan  4 07:20:26 fcl012 kernel: br0: port 15(vnet13) entering disabled state
Jan  4 07:20:26 fcl012 kernel: br0: port 15(vnet13) entering disabled state
Jan  4 07:20:26 fcl012 kernel: device vnet13 left promiscuous mode
Jan  4 07:20:26 fcl012 kernel: br0: port 15(vnet13) entering disabled state
Jan  4 07:20:26 fcl012 libvirtd: 07:20:26.855: error : qemuMonitorCommandWithHandler:362 : cannot send monitor command 'info balloon': Connection reset by peer
Jan  4 07:20:26 fcl012 libvirtd: 07:20:26.855: error : qemuMonitorTextGetBalloonInfo:682 : operation failed: could not query memory balloon allocation
Jan  4 07:20:26 fcl012 kernel: libvirtd[5461]: segfault at 0000000000000000 rip 00000039c4e4a0b1 rsp 0000000043f2bcb0 error 4

At that point libvirtd itself dies with a segfault.
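
Until there is a real fix, the only host-side defense I can think of
is a crude watchdog that restarts libvirtd when it dies.  A sketch
only (the init script path is what our SLF5 hosts use):

   #!/bin/bash
   # Crude watchdog: restart libvirtd if the daemon has died
   # (e.g. after the segfault above).
   while true; do
       if ! pgrep -x libvirtd > /dev/null; then
           logger "libvirtd is down, restarting it"
           /etc/init.d/libvirtd start
       fi
       sleep 30
   done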

The strange thing is that, according to this template, there is
nothing to save at all (the image disk has SAVE=NO).  Why would the
VM enter EPILOG state at all?

I am presuming that this condition can also exist in the OpenNebula
3.x versions.  Is there any way to prevent it?  As it stands, a
determined user can crash my whole set of VM hosts simply by issuing
onevm shutdown followed by onevm delete.  A similar race condition
exists between onevm stop and onevm delete.
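
The best mitigation I can see on our side is to wrap onevm delete in
a script that waits for the VM to leave EPILOG before firing.  A
rough sketch, assuming the "onevm show" output format quoted below
(the 60 x 5s cap is arbitrary):

   #!/bin/bash
   # safe-delete.sh VMID -- hold the delete until the life-cycle
   # manager is idle again (LCM_STATE back to LCM_INIT).
   VMID=$1
   for i in $(seq 1 60); do
       STATE=$(onevm show "$VMID" | awk '/^LCM_STATE/ {print $3}')
       [ "$STATE" = "LCM_INIT" ] && break
       sleep 5
   done
   onevm delete "$VMID"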

Steve Timm




[oneadmin at fcl002 one]$ onevm show 3823
VIRTUAL MACHINE 3823 INFORMATION
ID             : 3823
NAME           : gums-5
STATE          : DONE
LCM_STATE      : LCM_INIT
START TIME     : 01/03 14:12:57
END TIME       : 01/04 07:20:44
DEPLOY ID      : one-3823

VIRTUAL MACHINE MONITORING
NET_TX         : 0
USED CPU       : 0
USED MEMORY    : 2097152
NET_RX         : 0

VIRTUAL MACHINE TEMPLATE
CONTEXT=[
   FILES=/cloud/images/OpenNebula/templates/init.sh 
/cloud/login/weigand/OpenNebula/k5login,
   GATEWAY=131.225.154.1,
   IP_PUBLIC=131.225.154.44,
   NETMASK=255.255.254.0,
   NS=131.225.8.120,
   ROOT_PUBKEY=id_dsa.pub,
   TARGET=hdc,
   USERNAME=opennebula,
   USER_PUBKEY=id_dsa.pub ]
DISK=[
   BUS=virtio,
   CLONE=YES,
   DISK_ID=0,
   IMAGE=SLF 5 Base,
   IMAGE_ID=159,
   READONLY=NO,
   SAVE=NO,
   SOURCE=/var/lib/one/image-repo/e0db5bdb2592065514ddda06ef52caf6fc7971f2,
   TARGET=vda,
   TYPE=DISK ]
DISK=[
   DISK_ID=1,
   SIZE=4096,
   TARGET=vdb,
   TYPE=swap ]
FEATURES=[
   ACPI=yes ]
GRAPHICS=[
   AUTOPORT=yes,
   KEYMAP=en-us,
   LISTEN=127.0.0.1,
   PORT=-1,
   TYPE=vnc ]
MEMORY=2048
NAME=gums-5
NIC=[
   BRIDGE=br0,
   IP=131.225.154.44,
   MAC=54:52:00:02:13:00,
   MODEL=virtio,
   NETWORK=FermiCloud,
   NETWORK_ID=2 ]
PUBLIC=YES
RANK=FREEMEMORY
REQUIREMENTS=HYPERVISOR="kvm"
VCPU=1
VMID=3823


------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm at fnal.gov  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.

