[one-users] blktap xen 4.2 and opennebula 4.2

Carlos Martín Sánchez cmartin at opennebula.org
Tue Apr 8 02:30:44 PDT 2014


This is a long shot, but do you have a scheduled shutdown action in your
VMs?

Can you paste the output of 'onevm show 1687 --all'?

Regards
--
Carlos Martín, MSc
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | cmartin at opennebula.org |
@OpenNebula <http://twitter.com/opennebula>


On Mon, Apr 7, 2014 at 2:47 PM, <kenny.kenny at bol.com.br> wrote:

> If I try to use tap2:aio, yes.
>
> I changed Xen and increased the loop devices to 64, and now I'm using
> file as the driver.
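>
> (For reference: on kernels where the loop driver is built as a module,
> this can be done with something like the following in a modprobe config
> file, e.g. /etc/modprobe.d/loop.conf:
>
> options loop max_loop=64
>
> or with max_loop=64 on the kernel command line if loop is built in.)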
>
> ------------------------------
>
> *From:* cmartin at opennebula.org
> *Sent:* Monday, April 7, 2014 10:52
>
> *To:* kenny.kenny at bol.com.br
> *Subject:* [one-users] blktap xen 4.2 and opennebula 4.2
>
> Hi,
>
> Take a look at these lines:
>
> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:7520 UID:0 VirtualMachineDeploy
> invoked, 1687, 3, false
> Sun Mar 30 23:59:13 2014 [ReM][D]: Req:5120 UID:0 VirtualMachineAction
> invoked, "delete", 1687
>
> onevm delete was called. Is this happening for all your VMs?
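>
> A quick way to check this across all VMs, for example (assuming the
> default log location), is:
>
> grep 'VirtualMachineAction' /var/log/one/oned.log | grep '"delete"'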
>
> Regards
>  --
> Carlos Martín, MSc
> Project Engineer
> OpenNebula - Flexible Enterprise Cloud Made Simple
> www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula
>
>
> On Sat, Apr 5, 2014 at 8:00 AM, <kenny.kenny at bol.com.br> wrote:
>
>> I didn't find anything wrong, just something about the driver.
>>
>>
>> Thanks.
>>
>> Sun Mar 30 23:58:55 2014 [ReM][D]: Req:1280 UID:0 VirtualMachineAllocate
>> invoked, "CPU="0.1"...", false
>> Sun Mar 30 23:58:56 2014 [ReM][D]: Req:1280 UID:0 VirtualMachineAllocate
>> result SUCCESS, 1687
>> Sun Mar 30 23:59:01 2014 [ReM][D]: Req:5632 UID:2 HostPoolInfo invoked
>> Sun Mar 30 23:59:01 2014 [ReM][D]: Req:5632 UID:2 HostPoolInfo result
>> SUCCESS, "<HOST_POOL><ID..."
>>
>> Sun Mar 30 23:59:02 2014 [ReM][D]: Req:8016 UID:0 VirtualMachinePoolInfo
>> invoked, -2, -1, -1, -1
>> Sun Mar 30 23:59:02 2014 [ReM][D]: Req:8016 UID:0 VirtualMachinePoolInfo
>> result SUCCESS, "<VM_POOL>162..."
>>
>> Sun Mar 30 23:59:02 2014 [ReM][D]: Req:9776 UID:0 VirtualMachineInfo
>> invoked, 1624
>> Sun Mar 30 23:59:02 2014 [ReM][D]: Req:9776 UID:0 VirtualMachineInfo
>> result SUCCESS, "1624<UI..."
>>
>> Sun Mar 30 23:59:03 2014 [ReM][D]: Req:496 UID:0 VirtualMachinePoolInfo
>> invoked, -2, -1, -1, -1
>> Sun Mar 30 23:59:03 2014 [ReM][D]: Req:496 UID:0 VirtualMachinePoolInfo
>> result SUCCESS, "<VM_POOL>162..."
>>
>> Sun Mar 30 23:59:03 2014 [ReM][D]: Req:496 UID:0 VirtualMachinePoolInfo
>> invoked, -2, -1, -1, -1
>> Sun Mar 30 23:59:03 2014 [ReM][D]: Req:496 UID:0 VirtualMachinePoolInfo
>> result SUCCESS, "<VM_POOL>162..."
>>
>> Sun Mar 30 23:59:03 2014 [ReM][D]: Req:5056 UID:0 VirtualMachineInfo
>> invoked, 1624
>> Sun Mar 30 23:59:03 2014 [ReM][D]: Req:5056 UID:0 VirtualMachineInfo
>> result SUCCESS, "1624<UI..."
>>
>> Sun Mar 30 23:59:03 2014 [ReM][D]: Req:496 UID:0 VirtualMachinePoolInfo
>> invoked, -2, -1, -1, -1
>> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:496 UID:0 VirtualMachinePoolInfo
>> result SUCCESS, "<VM_POOL>162..."
>>
>> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:2640 UID:0 VirtualMachinePoolInfo
>> invoked, -2, -1, -1, -1
>> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:2640 UID:0 VirtualMachinePoolInfo
>> result SUCCESS, "<VM_POOL>162..."
>>
>> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:9120 UID:0 HostPoolInfo invoked
>> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:9120 UID:0 HostPoolInfo result
>> SUCCESS, "<HOST_POOL><ID..."
>>
>> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:6560 UID:0 ClusterPoolInfo invoked
>> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:7520 UID:0 VirtualMachineDeploy
>> invoked, 1687, 3, false
>> Sun Mar 30 23:59:04 2014 [DiM][D]: Deploying VM 1687
>> Sun Mar 30 23:59:04 2014 [ReM][D]: Req:7520 UID:0 VirtualMachineDeploy
>> result SUCCESS, 1687
>> Sun Mar 30 23:59:05 2014 [ReM][D]: Req:5696 UID:2 UserPoolInfo invoked
>> Sun Mar 30 23:59:05 2014 [ReM][D]: Req:5696 UID:2 UserPoolInfo result
>> SUCCESS, "<USER_POOL><ID..."
>>
>> Sun Mar 30 23:59:06 2014 [VMM][I]: --Mark--
>> Sun Mar 30 23:59:08 2014 [ReM][D]: Req:272 UID:2 AclInfo invoked
>> Sun Mar 30 23:59:08 2014 [ReM][D]: Req:272 UID:2 AclInfo result SUCCESS,
>> "<ACL_POOL>0..."
>>
>> Sun Mar 30 23:59:13 2014 [ReM][D]: Req:5120 UID:0 VirtualMachineAction
>> invoked, "delete", 1687
>> Sun Mar 30 23:59:13 2014 [DiM][D]: Finalizing VM 1687
>> Sun Mar 30 23:59:13 2014 [ReM][D]: Req:5120 UID:0 VirtualMachineAction
>> result SUCCESS, 1687
>> Sun Mar 30 23:59:14 2014 [TM][D]: Message received: LOG I 1687 Driver
>> command for 1687 cancelled
>> Sun Mar 30 23:59:15 2014 [ReM][D]: Req:2192 UID:2 VirtualMachinePoolInfo
>> invoked, -2, -1, -1, -1
>> Sun Mar 30 23:59:15 2014 [ReM][D]: Req:2192 UID:2 VirtualMachinePoolInfo
>> result SUCCESS, "<VM_POOL>162..."
>>
>> Sun Mar 30 23:59:19 2014 [ReM][D]: Req:3520 UID:2 ImagePoolInfo invoked,
>> -2, -1, -1
>> Sun Mar 30 23:59:19 2014 [ReM][D]: Req:3520 UID:2 ImagePoolInfo result
>> SUCCESS, "<IMAGE_POOL><..."
>> ------------------------------
>>
>> *From:* cmartin at opennebula.org
>> *Sent:* Thursday, April 3, 2014 10:48
>>
>> *To:* kenny.kenny at bol.com.br
>> *Subject:* [one-users] blktap xen 4.2 and opennebula 4.2
>>
>> Hi,
>>
>> On Tue, Apr 1, 2014 at 5:07 PM, <kenny.kenny at bol.com.br> wrote:
>>
>>> No, I didn't do that.
>>>
>>> After a few seconds, the VM name disappears from the onevm list. It
>>> doesn't stay in PROLOG, BOOT, etc.
>>>
>>
>> Can you check in oned.log if there is any call to onevm delete? Maybe
>> there is a script somewhere causing this...
>> --
>> Carlos Martín, MSc
>> Project Engineer
>> OpenNebula - Flexible Enterprise Cloud Made Simple
>> www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula
>>
>>
>>
>>> ------------------------------
>>>
>>> *From:* jfontan at opennebula.org
>>> *Sent:* Tuesday, April 1, 2014 09:40
>>> *To:* kenny.kenny at bol.com.br, users at lists.opennebula.org
>>>
>>> *Subject:* [one-users] blktap xen 4.2 and opennebula 4.2
>>>
>>> That message is only seen when you cancel actions, for example when
>>> you execute onevm delete. Have you done that? Maybe you didn't wait
>>> until the image transfer was done.
>>>
>>> On Mon, Mar 31, 2014 at 5:08 AM, <kenny.kenny at bol.com.br> wrote:
>>> > Hello, I tried to force the driver in the image but I got this error in
>>> > /var/log/one/XXXX.log
>>> >
>>> >
>>> >
>>> > Sun Mar 30 23:59:04 2014 [DiM][I]: New VM state is ACTIVE.
>>> > Sun Mar 30 23:59:04 2014 [LCM][I]: New VM state is PROLOG.
>>> > Sun Mar 30 23:59:04 2014 [VM][I]: Virtual Machine has no context
>>> > Sun Mar 30 23:59:13 2014 [LCM][I]: New VM state is CLEANUP.
>>> > Sun Mar 30 23:59:14 2014 [DiM][I]: New VM state is DONE
>>> > Sun Mar 30 23:59:14 2014 [TM][W]: Ignored: LOG I 1687 Driver command for
>>> > 1687 cancelled
>>> >
>>> > Sun Mar 30 23:59:42 2014 [TM][W]: Ignored: TRANSFER SUCCESS 1687 -
>>> >
>>> > And the VM didn't start.
>>> > Do you know what this is?
>>> > Thanks.
>>> > ________________________________
>>> >
>>> > From: jfontan at opennebula.org
>>> > Sent: Friday, March 28, 2014 16:11
>>> >
>>> > To: kenny.kenny at bol.com.br
>>> > Subject: [one-users] blktap xen 4.2 and opennebula 4.2
>>> >
>>> > On Fri, Mar 28, 2014 at 3:37 PM, <kenny.kenny at bol.com.br> wrote:
>>> >> Thanks for the reply, I forgot to say I'm using NFS.
>>> >> Is that a problem?
>>> >
>>> > It should not be a problem.
>>> >
>>> >> I will check the files on the remote host.
>>> >> Are they in the same folder?
>>> >
>>> > No, on the nodes they reside in /var/tmp/one.
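>>> >
>>> > For example, on a node you can verify the copy with something like:
>>> >
>>> > ls /var/tmp/one/vmm/xen4/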
>>> >
>>> >> I will see if the image has a driver.
>>> >> Can I change the image driver, or do I need to create a new one?
>>> >
>>> > You can use the "oneimage update" command or the Sunstone web interface.
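>>> >
>>> > For example (assuming your image has ID 0):
>>> >
>>> > oneimage update 0
>>> >
>>> > which should open an editor where you can change or remove the DRIVER
>>> > attribute.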
>>> >
>>> >
>>> >>
>>> >>
>>> >> ________________________________
>>> >>
>>> >> From: jfontan at opennebula.org
>>> >> Sent: Friday, March 28, 2014 12:36
>>> >>
>>> >> To: kenny.kenny at bol.com.br
>>> >> Subject: [one-users] blktap xen 4.2 and opennebula 4.2
>>> >>
>>> >> Is it possible that you have a driver defined in the image you are
>>> >> attaching? An explicit driver takes precedence over the configured
>>> >> default.
>>> >>
>>> >> Still, I've run the test with a vanilla OpenNebula 4.2 installation
>>> >> and the tips I've sent you work nicely. To test this I've changed the
>>> >> prefix to a wrong one so we can see the command:
>>> >>
>>> >> * Changed /var/lib/one/remotes/vmm/xen4/xenrc and set this line:
>>> >>
>>> >> export DEFAULT_FILE_PREFIX="this:should:not:work"
>>> >>
>>> >> * Executed "onehost sync" as the oneadmin user. This part is very
>>> >> important, as the nodes keep a copy of the driver files (remotes) and
>>> >> we need to update those files on the nodes whenever we modify
>>> >> something in the frontend.
>>> >>
>>> >> * Waited for the next monitoring cycle of all nodes. In OpenNebula 4.2
>>> >> the copy of remotes is done in the monitoring phase.
>>> >>
>>> >> * Attach disk:
>>> >>
>>> >> $ onevm disk-attach 0 --image data
>>> >>
>>> >> * Got the error:
>>> >>
>>> >> ERROR="Fri Mar 28 12:41:33 2014 : Error attaching new VM Disk: Could
>>> >> not attach
>>> >> this:should:not:work:/home/one/one/install-4.2/var//datastores/0/0/disk.1
>>> >> (sda) to one-0"
>>> >>
>>> >> On Thu, Mar 27, 2014 at 8:18 PM, <kenny.kenny at bol.com.br> wrote:
>>> >>> See attached files.
>>> >>>
>>> >>>
>>> >>> Thanks.
>>> >>> ________________________________
>>> >>>
>>> >>> From: jfontan at gmail.com
>>> >>> Sent: Thursday, March 27, 2014 19:18
>>> >>>
>>> >>> To: kenny.kenny at bol.com.br
>>> >>> Subject: [one-users] blktap xen 4.2 and opennebula 4.2
>>> >>>
>>> >>> Disregard the --force. I've misread the problem. The parameter
>>> >>> --force does not work in ONE 4.2. Just execute:
>>> >>>
>>> >>> onehost sync
>>> >>>
>>> >>> On Thu, Mar 27, 2014 at 7:07 PM, Javier Fontan <jfontan at gmail.com>
>>> wrote:
>>> >>>> Are you sure that the uncommented drivers are xen4 and not xen3?
>>> >>>>
>>> >>>> Also, can you send me the xenrc file you've changed? That "invalid
>>> >>>> option: --force" error is strange.
>>> >>>>
>>> >>>> On Thu, Mar 27, 2014 at 7:03 PM, <kenny.kenny at bol.com.br> wrote:
>>> >>>>> It didn't work.
>>> >>>>>
>>> >>>>> I received this message:
>>> >>>>>
>>> >>>>> invalid option: --force
>>> >>>>>
>>> >>>>> And it always uses file instead of tap2:aio.
>>> >>>>> I don't know what to do.
>>> >>>>> ________________________________
>>> >>>>>
>>> >>>>> From: jfontan at gmail.com
>>> >>>>> Sent: Thursday, March 27, 2014 18:08
>>> >>>>> To: kenny.kenny at bol.com.br
>>> >>>>> Subject: [one-users] blktap xen 4.2 and opennebula 4.2
>>> >>>>>
>>> >>>>>
>>> >>>>> You can change it in "/var/lib/one/remotes/vmm/xen4/xenrc"; the
>>> >>>>> parameter is DEFAULT_FILE_PREFIX.
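>>> >>>>>
>>> >>>>> For example, to get the tap2:aio prefix you would presumably set:
>>> >>>>>
>>> >>>>> export DEFAULT_FILE_PREFIX="tap2:aio"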
>>> >>>>>
>>> >>>>> Remember to do a onehost sync --force so these files are copied
>>> >>>>> to the remote hosts.
>>> >>>>>
>>> >>>>> On Thu, Mar 27, 2014 at 3:54 AM, <kenny.kenny at bol.com.br> wrote:
>>> >>>>>> Hello, I need to use blktap instead of the default disk driver.
>>> >>>>>>
>>> >>>>>> I changed /var/lib/one/remotes/vmm/xen4/attach_disk and
>>> >>>>>> /etc/one/vmm_exec/vmm_exec_xen4.conf, but when I take a look at
>>> >>>>>> deployment.0, it is always "file:".
>>> >>>>>> What do I need to do to change that?
>>> >>>>>>
>>> >>>>>> I want to change it because with file I can run just 8 VMs per node.
>>> >>>>>>
>>> >>>>>> Thanks
>>> >>>>>>
>>> >>>>>> _______________________________________________
>>> >>>>>> Users mailing list
>>> >>>>>> Users at lists.opennebula.org
>>> >>>>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>> >>>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>> --
>>> >>>>> Javier Fontán Muiños
>>> >>>>> OpenNebula Developer
>>> >>>>> OpenNebula - The Open Source Toolkit for Data Center Virtualization
>>> >>>>> www.OpenNebula.org | @OpenNebula | github.com/jfontan
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> --
>>> >>>> Javier Fontán Muiños
>>> >>>> OpenNebula Developer
>>> >>>> OpenNebula - The Open Source Toolkit for Data Center Virtualization
>>> >>>> www.OpenNebula.org | @OpenNebula | github.com/jfontan
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> Javier Fontán Muiños
>>> >>> OpenNebula Developer
>>> >>> OpenNebula - The Open Source Toolkit for Data Center Virtualization
>>> >>> www.OpenNebula.org | @OpenNebula | github.com/jfontan
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Javier Fontán Muiños
>>> >> Developer
>>> >> OpenNebula - The Open Source Toolkit for Data Center Virtualization
>>> >> www.OpenNebula.org | @OpenNebula | github.com/jfontan
>>> >
>>> >
>>> >
>>> > --
>>> > Javier Fontán Muiños
>>> > Developer
>>> > OpenNebula - The Open Source Toolkit for Data Center Virtualization
>>> > www.OpenNebula.org | @OpenNebula | github.com/jfontan
>>>
>>>
>>>
>>> --
>>> Javier Fontán Muiños
>>> Developer
>>> OpenNebula - The Open Source Toolkit for Data Center Virtualization
>>> www.OpenNebula.org | @OpenNebula | github.com/jfontan
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users at lists.opennebula.org
>>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>>
>>
>
>