From himbeere at meine-oma.de Mon Mar 2 06:16:57 2015 From: himbeere at meine-oma.de (Thomas Stein) Date: Mon, 02 Mar 2015 15:16:57 +0100 Subject: [one-users] 4.12 beta and cost_cpu function Message-ID: <9081304.up3HTb5Wik@rather> Hello. Just tried out the new feature, but whenever I try to start a VM with this feature activated as a regular user I get: "[TemplateInstantiate] User [2] : VM Template includes a restricted attribute CPU_COST." Where and what should be changed to get this working? cheers t. From cmartin at opennebula.org Mon Mar 2 06:30:25 2015 From: cmartin at opennebula.org (=?UTF-8?Q?Carlos_Mart=C3=ADn_S=C3=A1nchez?=) Date: Mon, 2 Mar 2015 15:30:25 +0100 Subject: [one-users] 4.12 beta and cost_cpu function In-Reply-To: <9081304.up3HTb5Wik@rather> References: <9081304.up3HTb5Wik@rather> Message-ID: Hi, This list will be closed shortly. Please forward your question to the new forum: https://forum.opennebula.org/ Thank you. -- Carlos Martín, MSc Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula On Mon, Mar 2, 2015 at 3:16 PM, Thomas Stein wrote: > Hello. > > Just tried out the new feature, but whenever I try to start a VM with this > feature activated as a regular user I get: > > "[TemplateInstantiate] User [2] : VM Template includes a restricted > attribute > CPU_COST." > > Where and what should be changed to get this working? > > cheers > t. > _______________________________________________ > Users mailing list > Users at lists.opennebula.org > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org > -------------- next part -------------- An HTML attachment was scrubbed...
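[Archive note] For the CPU_COST question above: in 4.x installs, the template attributes that regular users may not set are declared in /etc/one/oned.conf as VM_RESTRICTED_ATTR entries (directive name and path assumed from the stock config layout; check your install). Either have oneadmin set the cost in the shared template, or remove CPU_COST from the restricted list and restart oned, bearing in mind users can then price their own VMs. A sketch of the edit, run here against a throwaway copy rather than a real config:

```shell
# Sketch only: the VM_RESTRICTED_ATTR lines below mirror the stock
# /etc/one/oned.conf entries (assumed, not copied from a live install).
conf=$(mktemp)
cat > "$conf" <<'EOF'
VM_RESTRICTED_ATTR = "CONTEXT/FILES"
VM_RESTRICTED_ATTR = "CPU_COST"
VM_RESTRICTED_ATTR = "MEMORY_COST"
EOF
# drop only the CPU_COST restriction, leaving the others intact
sed -i '/VM_RESTRICTED_ATTR = "CPU_COST"/d' "$conf"
remaining=$(grep -c 'CPU_COST' "$conf")
echo "CPU_COST entries left: $remaining"
rm -f "$conf"
```

On a real frontend the change only takes effect after restarting the opennebula service.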
URL: From cmartin at opennebula.org Mon Mar 2 06:34:52 2015 From: cmartin at opennebula.org (=?UTF-8?Q?Carlos_Mart=C3=ADn_S=C3=A1nchez?=) Date: Mon, 2 Mar 2015 15:34:52 +0100 Subject: [one-users] This list is now closed Message-ID: Dear community, This is a reminder that these mailing lists will be closed. From now on, please use the new community forum: https://forum.opennebula.org Thank you. -- Carlos Martín, MSc Project Engineer OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | cmartin at opennebula.org | @OpenNebula -------------- next part -------------- An HTML attachment was scrubbed... URL: From opennebula at sylconia.nl Fri Mar 6 01:23:37 2015 From: opennebula at sylconia.nl (opennebula at sylconia.nl) Date: Fri, 06 Mar 2015 10:23:37 +0100 Subject: [one-users] Ruby problem Centos 7 pcsd and Opennebula Message-ID: <54F97219.5090700@sylconia.nl> Good morning, I am building a block LVM KVM cluster with 4 machines, but I am having a strange problem which has kept me busy for days now. I am using a standard CentOS 7 server as frontend and CentOS 7 nodes with opennebula and opennebula-server installed. software installed opennebula-ruby-4.8.0-1.x86_64 opennebula-common-4.8.0-1.x86_64 opennebula-sunstone-4.8.0-1.x86_64 opennebula-4.8.0-1.x86_64 opennebula-server-4.8.0-1.x86_64 I also installed pcs-0.9.115-32.el7_0.1.x86_64 to use clvmd and dlm locks on the opennebula nodes and frontend. Now the problem I am facing: I have installed pcsd (ruby) and opennebula (ruby) on the same machine. When I install rack and sinatra via gems as instructed by the installation documentation, the pcsd daemon binds only to the localhost interface. If I uninstall sinatra and rack, then the pcsd daemon listens on all interfaces as ordered via the config file, but opennebula won't start!
After installing sinatra via gems there are 2 versions of sinatra on the system /usr/lib/pcsd/vendor/bundle/ruby/gems/sinatra-1.4.4/lib/sinatra /usr/local/share/gems/gems/sinatra-1.4.5/lib/sinatra So I assume something is "overruling" the BindAddress directive in the pcsd config on CentOS 7 when I install the software needed to run opennebula. I have tried setting the BindAddress directive in ssl.rb to *, nil and :: but nothing fixes the problem; also, my knowledge of ruby is too low to investigate further. See below how to reproduce this problem [root at cloudmanager rack-1.6.0]# !lsof lsof -i | grep ruby ruby 1368 root 9u IPv6 1474379 0t0 TCP localhost:efi-mg (LISTEN) ruby 1368 root 10u IPv4 1474380 0t0 TCP localhost:efi-mg (LISTEN) ruby 13803 oneadmin 10u IPv4 913310 0t0 TCP *:9869 (LISTEN) [root at cloudmanager rack-1.6.0]# gem uninstall sinatra rack Successfully uninstalled sinatra-1.4.5 Successfully uninstalled rack-1.6.0 [root at cloudmanager rack-1.6.0]# systemctl restart pcsd.service [root at cloudmanager rack-1.6.0]# !lsof lsof -i | grep ruby ruby 4377 root 9u IPv4 1739444 0t0 TCP *:efi-mg (LISTEN) ruby 13803 oneadmin 10u IPv4 913310 0t0 TCP *:9869 (LISTEN) [root at cloudmanager tmp]# gem install sinatra Successfully installed rack-1.6.0 Successfully installed sinatra-1.4.5 [root at cloudmanager tmp]# systemctl restart pcsd.service [root at cloudmanager tmp]# !lsof lsof -i | grep ruby ruby 4551 root 9u IPv6 1740682 0t0 TCP localhost:efi-mg (LISTEN) ruby 4551 root 10u IPv4 1740683 0t0 TCP localhost:efi-mg (LISTEN) ruby 13803 oneadmin 10u IPv4 913310 0t0 TCP *:9869 (LISTEN) Any tips on how I can solve this problem? Regards Constan From jfontan at opennebula.org Fri Mar 6 09:36:28 2015 From: jfontan at opennebula.org (Javier Fontan) Date: Fri, 06 Mar 2015 17:36:28 +0000 Subject: [one-users] CentOS 7 image from marketplace. References: Message-ID: I've updated the image in the marketplace. Now NetworkManager is disabled. I hope it solves the problem.
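[Archive note] The two-sinatra situation reported above is a load-path shadowing problem: pcsd ships a vendored sinatra 1.4.4 while `gem install sinatra` adds 1.4.5 to the system gems, and whichever copy Ruby finds first wins, which is presumably why installing the gems flips pcsd's bind behaviour. The fix direction would be keeping pcsd pinned to its vendored bundle (e.g. via Bundler/GEM_PATH isolation) so system gems never leak in; those details are assumptions, not tested on CentOS 7. The shadowing mechanism itself can be shown with a plain-shell analogy using $PATH in place of Ruby's load path:

```shell
# Two same-named artifacts, precedence decided purely by search order --
# the same failure mode as pcsd's vendored sinatra vs. the system gem.
workdir=$(mktemp -d)
mkdir -p "$workdir/vendored" "$workdir/system"
printf '#!/bin/sh\necho vendored-1.4.4\n' > "$workdir/vendored/sinatra"
printf '#!/bin/sh\necho system-1.4.5\n' > "$workdir/system/sinatra"
chmod +x "$workdir/vendored/sinatra" "$workdir/system/sinatra"
# pcsd's bundle first: the pinned copy shadows the system one
first=$(env PATH="$workdir/vendored:$workdir/system:$PATH" sinatra)
# system gems first: the newer, incompatible copy wins instead
second=$(env PATH="$workdir/system:$workdir/vendored:$PATH" sinatra)
echo "$first"
echo "$second"
rm -rf "$workdir"
```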
On Thu, Feb 26, 2015 at 2:08 PM Madko wrote: > Similar problem here. It works better if you disable NetworkManager. The > vmcontext rpm used in this image is still using the basic network service. You > are free to adapt it. > > On Thu Feb 26 2015 at 13:19:09, Leszek Master > wrote: > > I've downloaded the CentOS 7 image from the marketplace and I noticed that there >> is a problem with contextualizing it. After I start a VM with this image it >> doesn't get contextualized at first boot. After I manually run the >> init scripts everything works: it gives my network interfaces IP addresses and >> sets up my hostname, and even restarts my VM with the cloud-init config. But it's >> annoying that after I create my VM I need to log in to it using VNC and >> then contextualize it manually. Anyone had similar problems? >> _______________________________________________ >> Users mailing list >> Users at lists.opennebula.org >> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org >> > _______________________________________________ > Users mailing list > Users at lists.opennebula.org > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zashah at pdc.kth.se Tue Mar 10 06:28:15 2015 From: zashah at pdc.kth.se (Zeeshan Ali Shah) Date: Tue, 10 Mar 2015 14:28:15 +0100 Subject: [one-users] Access VM from different hosts like openstack Message-ID: Hi, If I have a VM on host 1 and another on host 2, and both VMs have private IPs, how do I connect the two VMs? A GRE tunnel? Or any other option? I am using the dummy network driver but can switch to openvswitch if needed. -- Regards Zeeshan Ali Shah System Administrator - PDC HPC -------------- next part -------------- An HTML attachment was scrubbed...
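[Archive note] For the cross-host question above: yes, a GRE tunnel between the two hosts is the usual answer, and since switching to openvswitch is an option, an OVS bridge on each host with a GRE port pointing at the other host's physical IP gives the two private networks a shared L2 segment. The command shape below is standard ovs-vsctl usage, but the bridge name (ovsbr0) and peer addresses are made up, and the commands are printed rather than executed, since they need root and an Open vSwitch install:

```shell
# Dry-run sketch: swap the run() body for "$@" on a real host.
run() { echo "+ $*"; }
# on host 1, tunnel endpoint pointing at host 2's physical address:
run ovs-vsctl add-port ovsbr0 gre0 -- set interface gre0 type=gre options:remote_ip=10.0.0.2
# on host 2, the mirror image pointing back at host 1:
run ovs-vsctl add-port ovsbr0 gre0 -- set interface gre0 type=gre options:remote_ip=10.0.0.1
```

VMs attached to ovsbr0 on either host (e.g. through OpenNebula's ovswitch network driver) would then reach each other's private IPs as if on one switch.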
URL: From zashah at pdc.kth.se Tue Mar 10 07:58:29 2015 From: zashah at pdc.kth.se (Zeeshan Ali Shah) Date: Tue, 10 Mar 2015 15:58:29 +0100 Subject: [one-users] Multiple vnm on hosts , broken Message-ID: Hi, Now I have a bit of a weird setup. I have three sorts of networks: 1) Public IP 2) Private IP and 3) GRE tunnel OVS based For the first two, the VNM driver (dummy) works; for the third we need the VNM driver (ovswitch). Now, how can I create hosts with different VNM drivers and, based on the network selected, launch the appropriate script? It seems that currently the VNM driver is bound to the host; I think it should be bound to the network. Any idea? -- Regards Zeeshan Ali Shah System Administrator - PDC HPC PhD researcher (IT security) Kungliga Tekniska Hogskolan +46 8 790 9115 http://www.pdc.kth.se/members/zashah -------------- next part -------------- An HTML attachment was scrubbed... URL: From imllorente at opennebula.org Tue Mar 10 08:23:17 2015 From: imllorente at opennebula.org (Ignacio M. Llorente) Date: Tue, 10 Mar 2015 16:23:17 +0100 Subject: [one-users] Multiple vnm on hosts , broken In-Reply-To: References: Message-ID: Hello Please use the forum at forum.opennebula.org This list is no longer active. Thanks On Tuesday, March 10, 2015, Zeeshan Ali Shah wrote: > Hi, Now I have a bit of a weird setup. I have three sorts of networks: 1) Public > IP 2) Private IP and 3) GRE tunnel OVS based > > For the first two, the VNM driver (dummy) works; > for the third we need the VNM driver (ovswitch). > > Now, how can I create hosts with different VNM drivers and, based on the network > selected, launch the appropriate script? > > It seems that currently the VNM driver is bound to the host; I think it should be > bound to the network. > > Any idea? > > -- > > Regards > > Zeeshan Ali Shah > System Administrator - PDC HPC > PhD researcher (IT security) > Kungliga Tekniska Hogskolan > +46 8 790 9115 > http://www.pdc.kth.se/members/zashah > -- Ignacio M.
Llorente, PhD, MBA Project Director OpenNebula - Flexible Enterprise Cloud Made Simple www.OpenNebula.org | imllorente at opennebula.org | @OpenNebula -------------- next part -------------- An HTML attachment was scrubbed... URL: From Max.Petit at neurones.net Tue Mar 10 10:31:36 2015 From: Max.Petit at neurones.net (Max.Petit at neurones.net) Date: Tue, 10 Mar 2015 18:31:36 +0100 Subject: [one-users] import existing aws VM Message-ID: <31B4871B91D721458C6801E92192A4D60119A80A6232@nitmail01.neuronesit.priv> Hello, Can we manage virtual machines already created on the AWS web site through OpenNebula? Cheers Max PETIT -------------- next part -------------- An HTML attachment was scrubbed... URL: From zashah at pdc.kth.se Tue Mar 10 13:21:30 2015 From: zashah at pdc.kth.se (Zeeshan Ali Shah) Date: Tue, 10 Mar 2015 21:21:30 +0100 Subject: [one-users] Multiple vnm on hosts , broken In-Reply-To: References: Message-ID: Thanks. Which open-source software are you using for forum.opennebula.org? It looks nice. On Tue, Mar 10, 2015 at 4:23 PM, Ignacio M. Llorente < imllorente at opennebula.org> wrote: > > Hello > > Please use the forum at forum.opennebula.org > > This list is no longer active. > > Thanks > > > > > On Tuesday, March 10, 2015, Zeeshan Ali Shah wrote: > >> Hi, Now I have a bit of a weird setup. I have three sorts of networks: 1) >> Public IP 2) Private IP and 3) GRE tunnel OVS based >> >> For the first two, the VNM driver (dummy) works; >> for the third we need the VNM driver (ovswitch). >> >> Now, how can I create hosts with different VNM drivers and, based on the network >> selected, launch the appropriate script? >> >> It seems that currently the VNM driver is bound to the host; I think it should be >> bound to the network. >> >> Any idea? >> >> -- >> >> Regards >> >> Zeeshan Ali Shah >> System Administrator - PDC HPC >> PhD researcher (IT security) >> Kungliga Tekniska Hogskolan >> +46 8 790 9115 >> http://www.pdc.kth.se/members/zashah >> > > > -- > Ignacio M.
Llorente, PhD, MBA > Project Director > OpenNebula - Flexible Enterprise Cloud Made Simple > www.OpenNebula.org | imllorente at opennebula.org | @OpenNebula > > > -- Regards Zeeshan Ali Shah System Administrator - PDC HPC PhD researcher (IT security) Kungliga Tekniska Hogskolan +46 8 790 9115 http://www.pdc.kth.se/members/zashah -------------- next part -------------- An HTML attachment was scrubbed... URL: From vhpc.dist at gmail.com Tue Mar 10 17:57:14 2015 From: vhpc.dist at gmail.com (VHPC 15) Date: Wed, 11 Mar 2015 01:57:14 +0100 Subject: [one-users] CfP 10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '15) Message-ID: ================================================================= CALL FOR PAPERS 10th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '15) held in conjunction with Euro-Par 2015, August 24-28, Vienna, Austria (Springer LNCS) ================================================================= Date: August 25, 2015 Workshop URL: http://vhpc.org Paper Submission Deadline: May 22, 2015 CALL FOR PAPERS Virtualization technologies constitute a key enabling factor for flexible resource management in modern data centers, cloud environments, and increasingly in HPC as well. Providers need to dynamically manage complex infrastructures in a seamless fashion for varying workloads and hosted applications, independently of the customers deploying software or users submitting highly dynamic and heterogeneous workloads. Thanks to virtualization, we have the ability to manage vast computing and networking resources dynamically and close to the marginal cost of providing the services, which is unprecedented in the history of scientific and commercial computing. 
Various virtualization technologies contribute to the overall picture in different ways: machine virtualization, with its capability to enable consolidation of multiple under-utilized servers with heterogeneous software and operating systems (OSes), and its capability to live-migrate a fully operating virtual machine (VM) with a very short downtime, enables novel and dynamic ways to manage physical servers; OS-level virtualization, with its capability to isolate multiple user-space environments and to allow for their co-existence within the same OS kernel, promises to provide many of the advantages of machine virtualization with high levels of responsiveness and performance; I/O virtualization allows physical network adapters to take traffic from multiple VMs; network virtualization, with its capability to create logical network overlays that are independent of the underlying physical topology and IP addressing, provides the fundamental ground on top of which evolved network services can be realized with an unprecedented level of dynamicity and flexibility. These technologies have to be inter-mixed and integrated in an intelligent way, to support workloads that are increasingly demanding in terms of absolute performance, responsiveness and interactivity, and have to respect well-specified Service-Level Agreements (SLAs), as needed for industrial-grade provided services. Indeed, among emerging and increasingly interesting application domains for virtualization, we can find big-data application workloads in cloud infrastructures and interactive and real-time multimedia services in the cloud, including real-time big-data streaming platforms such as those used in real-time analytics, which nowadays support a plethora of application domains.
Distributed cloud infrastructures promise to offer unprecedented responsiveness levels for hosted applications, but that is only possible if the underlying virtualization technologies can overcome most of the latency impairments typical of current virtualized infrastructures (e.g., far worse tail-latency). The Workshop on Virtualization in High-Performance Cloud Computing (VHPC) aims to bring together researchers and industrial practitioners facing the challenges posed by virtualization in order to foster discussion, collaboration, mutual exchange of knowledge and experience, enabling research to ultimately provide novel solutions for virtualized computing systems of tomorrow. The workshop will be one day in length, composed of 20 min paper presentations, each followed by 10 min discussion sections, and lightning talks, limited to 5 minutes. Presentations may be accompanied by interactive demonstrations. TOPICS Topics of interest include, but are not limited to: - Virtualization in supercomputing environments, HPC clusters, cloud HPC and grids - Optimizations of virtual machine monitor platforms, hypervisors and OS-level virtualization - Hypervisor and network virtualization QoS and SLAs - Cloud based network and system management for SDN and NFV - Management, deployment and monitoring of virtualized environments - Performance measurement, modelling and monitoring of virtualized/cloud workloads - Programming models for virtualized environments - Cloud reliability, fault-tolerance, high-availability and security - Heterogeneous virtualized environments, virtualized accelerators, GPUs and co-processors - Optimized communication libraries/protocols in the cloud and for HPC in the cloud - Topology management and optimization for distributed virtualized applications - Cluster provisioning in the cloud and cloud bursting - Adaptation of emerging HPC technologies (high performance networks, RDMA, etc..) 
- I/O and storage virtualization, virtualization-aware file systems - Job scheduling/control/policy in virtualized environments - Checkpointing and migration of VM-based large compute jobs - Cloud frameworks and APIs - Energy-efficient / power-aware virtualization Important Dates April 29, 2015 - Abstract registration May 22, 2015 - Full paper submission June 19, 2015 - Acceptance notification October 2, 2015 - Camera-ready version due August 25, 2015 - Workshop Date TPC CHAIR Michael Alexander (chair), TU Wien, Austria Anastassios Nanos (co-chair), NTUA, Greece Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational Science, Japan PROGRAM COMMITTEE Stergios Anastasiadis, University of Ioannina, Greece Costas Bekas, IBM Zurich Research Laboratory, Switzerland Jakob Blomer, CERN Ron Brightwell, Sandia National Laboratories, USA Roberto Canonico, University of Napoli Federico II, Italy Julian Chesterfield, OnApp, UK Patrick Dreher, MIT, USA William Gardner, University of Guelph, Canada Kyle Hale, Northwestern University, USA Marcus Hardt, Karlsruhe Institute of Technology, Germany Iftekhar Hussain, Infinera, USA Krishna Kant, Temple University, USA Eiji Kawai, National Institute of Information and Communications Technology, Japan Romeo Kinzler, IBM, Switzerland Kornilios Kourtis, ETH, Switzerland Nectarios Koziris, National Technical University of Athens, Greece Massimo Lamanna, CERN Che-Rung Roger Lee, National Tsing Hua University, Taiwan Helge Meinhard, CERN Jean-Marc Menaud, Ecole des Mines de Nantes, France Christine Morin, INRIA, France Amer Qouneh, University of Florida, USA Seetharami Seelam, IBM Watson Research Center, USA Josh Simons, VMware, USA Borja Sotomayor, University of Chicago, USA Kurt Tutschku, Blekinge Institute of Technology, Sweden Yasuhiro Watashiba, Osaka University, Japan Chao-Tung Yang, Tunghai University, Taiwan PAPER SUBMISSION-PUBLICATION Papers submitted to the workshop will be reviewed by at least two members of the program
committee and external reviewers. Submissions should include abstract, key words, the e-mail address of the corresponding author, and must not exceed 10 pages, including tables and figures at a main font size no smaller than 11 point. Submission of a paper should be regarded as a commitment that, should the paper be accepted, at least one of the authors will register and attend the conference to present the work. Accepted papers will be published in the Springer LNCS series - the format must be according to the Springer LNCS Style. Initial submissions are in PDF; authors of accepted papers will be requested to provide source files. Format Guidelines: http://www.springer.de/comp/lncs/authors.html Submission Link: https://easychair.org/conferences/?conf=europar2015ws GENERAL INFORMATION The workshop is one day in length and will be held in conjunction with Euro-Par 2015, 24-28 August, Vienna, Austria -------------- next part -------------- An HTML attachment was scrubbed... URL: From bart at pleh.info Thu Mar 12 03:12:32 2015 From: bart at pleh.info (Bart) Date: Thu, 12 Mar 2015 11:12:32 +0100 Subject: [one-users] Sunstone - Time out errors Message-ID: Hi Everyone, We're currently experiencing a weird issue on our OpenNebula management node. The node is set up with the following configuration: - CentOS 6.6 - Two interfaces in a balance-alb bond. - MySQL backend - /var/lib/one mounted on glusterfs and shared with all nodes. - OpenNebula 4.12 - We're behind a proxy, so all proxy variables are set in the environment for all users. When working on the command line everything works as a charm, you can list all nodes/datastores, view the details, etc. All working smooth and fast! The problem we face, however, is working inside the sunstone interface. It seems that listing resources (nodes and datastores) works normally, but when we click a datastore/node/whatever it takes ages (minutes) before it shows the contents.
The sunstone.error logs only give the following info (in debug mode): Errno::ETIMEDOUT - Connection timed out - connect(2): /usr/lib/ruby/1.8/net/http.rb:560:in `initialize' /usr/lib/ruby/1.8/net/http.rb:560:in `open' /usr/lib/ruby/1.8/net/http.rb:560:in `connect' /usr/lib/ruby/1.8/timeout.rb:53:in `timeout' /usr/lib/ruby/1.8/timeout.rb:101:in `timeout' /usr/lib/ruby/1.8/net/http.rb:560:in `connect' /usr/lib/ruby/1.8/net/http.rb:553:in `do_start' /usr/lib/ruby/1.8/net/http.rb:542:in `start' /usr/lib/ruby/1.8/net/http.rb:1035:in `request' /usr/lib/ruby/1.8/net/http.rb:772:in `get' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/adapter/net_http.rb:80:in `perform_request' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/adapter/net_http.rb:40:in `call' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/adapter/net_http.rb:87:in `with_net_http_connection' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/adapter/net_http.rb:32:in `call' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/middleware/request/retry.rb:20:in `call' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/middleware/request/encode_json.rb:21:in `call' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/request/multipart.rb:14:in `call' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/middleware/request/upload.rb:16:in `call' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/middleware/request/etag_cache.rb:31:in `call' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/request/authorization.rb:38:in `call' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/response.rb:8:in `call' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/middleware/response/parse_iso_dates.rb:11:in `call' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/middleware/response/logger.rb:20:in `call' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/middleware/response/callback.rb:14:in `call' 
/usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/response.rb:8:in `call' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/middleware/response/raise_error.rb:9:in `call' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/rack_builder.rb:139:in `build_response' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/connection.rb:377:in `run_request' /usr/lib/ruby/gems/1.8/gems/faraday-0.9.1/lib/faraday/connection.rb:140:in `get' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/actions.rb:104:in `find!' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/actions.rb:119:in `find' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/collection.rb:62:in `send' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/collection.rb:62:in `find' /usr/lib/ruby/gems/1.8/gems/zendesk_api-1.4.6/lib/zendesk_api/client.rb:56:in `current_user' /usr/lib/one/sunstone/routes/support.rb:66:in `zendesk_client' /usr/lib/one/sunstone/routes/support.rb:121:in `GET /support/request' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:863:in `call' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:863:in `route' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:521:in `instance_eval' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:521:in `route_eval' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:500:in `route!' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:497:in `catch' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:497:in `route!' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:476:in `each' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:476:in `route!' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:601:in `dispatch!' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:411:in `call!' 
/usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `instance_eval' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `invoke' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `catch' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:566:in `invoke' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:411:in `call!' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:399:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/commonlogger.rb:18:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/deflater.rb:13:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/session/abstract/id.rb:63:in `context' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/session/abstract/id.rb:58:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/showexceptions.rb:24:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/methodoverride.rb:24:in `call' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:979:in `call' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:1005:in `synchronize' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:979:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/content_length.rb:13:in `call' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/chunked.rb:15:in `call' /usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/connection.rb:84:in `pre_process' /usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/connection.rb:82:in `catch' /usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/connection.rb:82:in `pre_process' /usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/connection.rb:57:in `process' /usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/connection.rb:42:in `receive_data' /usr/lib/ruby/gems/1.8/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run_machine' /usr/lib/ruby/gems/1.8/gems/eventmachine-0.12.10/lib/eventmachine.rb:256:in `run' /usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/backends/base.rb:61:in `start' 
/usr/lib/ruby/gems/1.8/gems/thin-1.2.8/lib/thin/server.rb:159:in `start' /usr/lib/ruby/gems/1.8/gems/rack-1.1.0/lib/rack/handler/thin.rb:14:in `run' /usr/lib/ruby/gems/1.8/gems/sinatra-1.0/lib/sinatra/base.rb:946:in `run!' /usr/lib/one/sunstone/sunstone-server.rb:627 When I click a datastore in the interface, nothing will happen in the logs (only access logs give some info). Then, when this error appears, only then will we get information about the datastore. So everything works eventually, but there's a massive timeout causing this problem. I've also noticed that I'm getting these connection time out errors without using the interface, so it might be something in the background doing some action?! If so, this action seems to block all other activities until it's finally done. At first I thought this was IPv6 related, so we've disabled that; then I doubted the bonding interface, but that wasn't it either. Then DNS, but even that didn't budge. We even did traces on the OS but didn't find anything suspicious. Before, we ran version 4.10, so after upgrading we hoped it would solve something, but that also didn't happen. So right now we're kinda stuck... Does anyone have an idea on how to troubleshoot and solve this issue? What is causing this connection time out? -- Bart G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From keksior at gmail.com Mon Mar 16 04:16:59 2015 From: keksior at gmail.com (Leszek Master) Date: Mon, 16 Mar 2015 12:16:59 +0100 Subject: [one-users] Update from 4.10 to 4.12 ceph storage is now full.
Message-ID: 

After upgrading from 4.10 to 4.12 my Ceph datastore is now 100% full, with 5.2/5.2 TB used. I don't know where OpenNebula gets this figure from, when my Ceph reports this:

6485 GB data, 13704 GB used, 8022 GB / 21726 GB avail

Can anyone help me sort this out? I can't create new VMs, and that's a big problem for me. Please help.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gerry at scss.tcd.ie Tue Mar 24 08:51:14 2015
From: gerry at scss.tcd.ie (Gerry O'Brien)
Date: Tue, 24 Mar 2015 15:51:14 +0000
Subject: [one-users] VMs in 'UNKNOWN' state after migration
Message-ID: <551187F2.1040207@scss.tcd.ie>

Hi,

I have 2 VMs in 'UNKNOWN' state after migration. They are running fine and can be accessed through VNC. Also, other VMs on the same host are in 'RUNNING' state. Any ideas what is happening here and how it can be rectified?

Regards,
Gerry

--
Gerry O'Brien

Systems Manager
School of Computer Science and Statistics
Trinity College Dublin
Dublin 2
IRELAND

00 353 1 896 1341

From dchebota at gmu.edu Fri Mar 27 08:04:21 2015
From: dchebota at gmu.edu (Dmitri Chebotarov)
Date: Fri, 27 Mar 2015 15:04:21 +0000
Subject: [one-users] Sunstone - Time out errors
In-Reply-To: 
References: 
Message-ID: <74048FB7-D410-4A06-A427-4CDD0E140FD3@gmu.edu>

Bart,

Try running 'onedb fsck' to check for any errors in the DB [1].

[1] http://docs.opennebula.org/4.12/administration/references/onedb.html

--
Thank you,

Dmitri Chebotarov
VCL Sys Eng, Engineering & Architectural Support, TSD - Ent Servers & Messaging
223 Aquia Building, Ffx, MSN: 1B5
Phone: (703) 993-6175 | Fax: (703) 993-3404

> On Mar 12, 2015, at 6:50 , Bart wrote:
>
> Hi Everyone,
>
> We're currently experiencing a weird issue on our OpenNebula management node.
>
> The node is set up with the following configuration:
>
> - CentOS 6.6
> - Two interfaces in an balance-alb bond.
> - MySQL backend
> - /var/lib/one mounted on glusterfs and shared with all nodes.
> - OpenNebula 4.12
> - We're behind a proxy, so all proxy variables are set in the environment for all users.
>
> When working on the commandline everything works as a charm, you can list all nodes/datastores, view the details, etc. All working smooth and fast! [...]
>
> [remainder of quoted message snipped -- it repeats Bart's original post, including the full Errno::ETIMEDOUT stack trace, verbatim]
>
> --
> Bart G.
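An aside on the trace quoted above: it is not OpenNebula code that times out but the Sunstone support tab's Zendesk client doing a plain TCP connect. The following is a minimal Python sketch (illustrative only, not part of Sunstone; 10.255.255.1 is a hypothetical blackhole address) of why such a connect appears to hang "for ages": with no timeout set, an unanswered SYN blocks for the kernel's retry window, which can be minutes, while an explicit short timeout surfaces the same failure almost immediately.

```python
# Illustrative sketch: an unreachable endpoint with an explicit timeout
# fails fast, whereas the default (no timeout) waits out the kernel's
# TCP SYN-retry window -- the "minutes before it shows the contents"
# behaviour described in the thread.
import socket
import time

BLACKHOLE = ("10.255.255.1", 443)  # hypothetical unroutable address

def probe(addr, timeout):
    """Attempt a TCP connect; return (connected, elapsed_seconds)."""
    start = time.monotonic()
    try:
        socket.create_connection(addr, timeout=timeout).close()
        return True, time.monotonic() - start
    except OSError:  # covers socket.timeout and "no route to host"
        return False, time.monotonic() - start

ok, elapsed = probe(BLACKHOLE, timeout=0.5)
print(ok, round(elapsed, 1))  # fails within roughly the 0.5 s budget
```

With a proxy-only network like Bart's, every component that ignores the proxy variables (as a raw TCP connect does) pays this full connect delay, which is consistent with the stalls happening even when nobody is using the interface.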
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org

From maria.jular at fcsc.es Tue Mar 31 04:16:31 2015
From: maria.jular at fcsc.es (Maria Jular)
Date: Tue, 31 Mar 2015 13:16:31 +0200
Subject: [one-users] Error to restore a virtual machine
Message-ID: <6E63DA784FC69A4AB89D50F1ABE4F4B7030480889E9A@ExServer.lo.fcsc.es>

Hello,

I have OpenNebula 4.6.2 with VMware. When I suspend a virtual machine and then try to boot it from the SUSPENDED state, it always fails and the VM doesn't change state. In VMware the VM appears powered off, and when I boot it from there it boots fine, but it isn't synchronized with Sunstone: there it still appears suspended.

The problem is:

Tue Mar 31 12:17:49 2015 [VMM][I]: Command execution fail: /var/lib/one/remotes/vmm/vmware/restore '/vmfs/volumes/0/128/checkpoint' 'X.X.X.X' 'one-128' 128 X.X.X.X
Tue Mar 31 12:17:49 2015 [VMM][I]: /var/lib/one/remotes/vmm/vmware/vmware_driver.rb:212: warning: Object#id will be deprecated; use Object#object_id
Tue Mar 31 12:17:49 2015 [VMM][E]: restore: Error executing: virsh -c 'esx://X.X.X.X/?no_verify=1&auto_answer=1' snapshot-revert one-128 checkpoint err: ExitCode: 1
Tue Mar 31 12:17:49 2015 [VMM][I]: out:
Tue Mar 31 12:17:49 2015 [VMM][I]: error: internal error Could not revert to snapshot 'checkpoint': FileLocked - Unable to access file since it is locked
Tue Mar 31 12:17:49 2015 [VMM][I]:
Tue Mar 31 12:17:49 2015 [VMM][I]: ExitCode: 1
Tue Mar 31 12:17:49 2015 [VMM][I]: Failed to execute virtualization driver operation: restore.
Tue Mar 31 12:17:49 2015 [VMM][E]: Error restoring VM
Tue Mar 31 12:17:49 2015 [LCM][I]: Fail to boot VM. New VM state is SUSPENDED

How can I solve it?

Thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
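When digging through a vm.log excerpt like the one in Maria's message, the lines that matter are the [E]-level ones buried among the [I] noise. The following is a small illustrative Python helper (not part of OpenNebula; the regex is inferred from the log format shown in this thread) that extracts just the error messages:

```python
# Illustrative helper: parse OpenNebula vm.log lines of the form
#   Tue Mar 31 12:17:49 2015 [VMM][E]: Error restoring VM
# and return only the [E]-level messages.
import re

LINE = re.compile(
    r"^(?P<ts>\w{3} \w{3} [ \d]\d \d\d:\d\d:\d\d \d{4}) "
    r"\[(?P<subsys>\w+)\]\[(?P<lvl>[IEWD])\]:? ?(?P<msg>.*)$"
)

def errors(log_text):
    """Return the message part of every [E]-level line in log_text."""
    out = []
    for line in log_text.splitlines():
        m = LINE.match(line.strip())
        if m and m.group("lvl") == "E":
            out.append(m.group("msg"))
    return out

sample = """\
Tue Mar 31 12:17:49 2015 [VMM][I]: ExitCode: 1
Tue Mar 31 12:17:49 2015 [VMM][E]: Error restoring VM
Tue Mar 31 12:17:49 2015 [LCM][I]: Fail to boot VM. New VM state is SUSPENDED
"""
print(errors(sample))  # -> ['Error restoring VM']
```

In this case the extracted errors point straight at the underlying cause reported by the ESX side: the FileLocked failure on snapshot-revert, meaning something (typically the still-registered VM process) is holding a lock on the checkpoint files.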