Hello Hector,

Unfortunately, still the same problem. And I'm using these packages from that repository :|

(I have put the exact checks I plan to run next at the very end of this message, below the quoted thread.)

> To: licks0re@hotmail.com
> Subject: Re: [one-users] sunstone says : "OpenNebula is not running"
> Date: Wed, 2 May 2012 11:44:50 +0200
> CC: users@lists.opennebula.org
> From: hsanjuan@opennebula.org
>
> This is probably because you changed the contents of sunstone_auth. For
> that to work you need to update the password of the serveradmin user
> with the sha1 password (oneuser passwd serveradmin susenebula --sha1).
>
> I'm not sure your original problem was a packaging thing, but you could
> also try the openSUSE packages from http://opennebula.org/software:software
>
> Hector
>
> On Wed, 02 May 2012 11:31:48 +0200, Guigui 6675636b206f6666
> <licks0re@hotmail.com> wrote:
>
> > Good morning all,
> >
> > So. I re-did everything from scratch.
> >
> > 1) re-installed the machine with openSUSE 12.1
> > 2) re-installed the opennebula & sunstone software
> > 3) added the ONE_AUTH environment variable to oneadmin's .bashrc profile
> > to avoid confusion with the one_auth file
> >
> > Unfortunately, still the same problem. But now I see an auth problem
> > in the oned.log (at the end of this message):
> >
> > Wed May 2 11:27:39 2012 [ReM][D]: UserPoolInfo method invoked
> > Wed May 2 11:27:39 2012 [ReM][E]: [UserPoolInfo] User couldn't be
> > authenticated, aborting call.
> >
> > Am I confusing something here? The Sunstone credentials are the ones
> > given in the sunstone_auth file, am I correct?
> > (In fact, I set all the passwords to "susenebula"... the oneadmin system
> > account, the one_auth file and the sunstone_auth file.)
> >
> > To start the services, I do:
> >
> > "sudo - oneadmin"
> > "one start"
> > "sunstone-server start"
> >
> > It gives me no errors. 
One & sunstone logs at the end of this message.<br>> ><br>> > So here are all the revelant config files:<br>> ><br>> > one_auth file:<br>> > oneadmin:susenebula<br>> ><br>> > sunstone_auth file:<br>> > oneadmin:susenebula<br>> ><br>> > file rights :<br>> > -rw-r--r-- 1 oneadmin cloud 20 Jan 27 14:34 one_auth<br>> > -rw------- 1 oneadmin cloud 20 Jan 27 15:15 sunstone_auth<br>> ><br>> > /etc/one/oned.conf<br>> ><br>> ><br>> > #*******************************************************************************<br>> > # OpenNebula Configuration file<br>> > #*******************************************************************************<br>> ><br>> > #*******************************************************************************<br>> > # Daemon configuration attributes<br>> > #-------------------------------------------------------------------------------<br>> > # MANAGER_TIMER: Time in seconds the core uses to evaluate periodical <br>> > functions.<br>> > # HOST_MONITORING_INTERVAL and VM_POLLING_INTERVAL can not have smaller <br>> > values<br>> > # than MANAGER_TIMER.<br>> > #<br>> > # HOST_MONITORING_INTERVAL: Time in seconds between host monitorization.<br>> > # HOST_PER_INTERVAL: Number of hosts monitored in each interval.<br>> > #<br>> > # VM_POLLING_INTERVAL: Time in seconds between virtual machine <br>> > monitorization.<br>> > # (use 0 to disable VM monitoring).<br>> > # VM_PER_INTERVAL: Number of VMs monitored in each interval.<br>> > #<br>> > # VM_DIR: Remote path to store the VM images, it should be shared <br>> > between all<br>> > # the cluster nodes to perform live migrations. This variable is the <br>> > default<br>> > # for all the hosts in the cluster. VM_DIR IS ONLY FOR THE NODES AND <br>> > *NOT* THE<br>> > # FRONT-END<br>> > #<br>> > # SCRIPTS_REMOTE_DIR: Remote path to store the monitoring and VM <br>> > management<br>> > # scripts.<br>> > #<br>> > # PORT: Port where oned will listen for xmlrpc calls.<br>> > #<br>> > # DB: Configuration attributes for the database backend<br>> > # backend : can be sqlite or mysql (default is sqlite)<br>> > # server : (mysql) host name or an IP address for the MySQL server<br>> > # port : (mysql) port for the connection to the server.<br>> > # If set to 0, the default port is used.<br>> > # user : (mysql) user's MySQL login ID<br>> > # passwd : (mysql) the password for user<br>> > # db_name : (mysql) the database name<br>> > #<br>> > # VNC_BASE_PORT: VNC ports for VMs can be automatically set to <br>> > VNC_BASE_PORT +<br>> > # VMID<br>> > #<br>> > # DEBUG_LEVEL: 0 = ERROR, 1 = WARNING, 2 = INFO, 3 = DEBUG<br>> > #*******************************************************************************<br>> ><br>> > #MANAGER_TIMER = 30<br>> ><br>> > HOST_MONITORING_INTERVAL = 600<br>> > #HOST_PER_INTERVAL = 15<br>> ><br>> > VM_POLLING_INTERVAL = 600<br>> > #VM_PER_INTERVAL = 5<br>> ><br>> > #VM_DIR=/srv/cloud/one/var<br>> ><br>> > SCRIPTS_REMOTE_DIR=/var/tmp/one<br>> ><br>> > PORT = 2633<br>> ><br>> > DB = [ backend = "sqlite" ]<br>> ><br>> > # Sample configuration for MySQL<br>> > # DB = [ backend = "mysql",<br>> > # server = "localhost",<br>> > # port = 0,<br>> > # user = "oneadmin",<br>> > # passwd = "oneadmin",<br>> > # db_name = "opennebula" ]<br>> ><br>> > VNC_BASE_PORT = 5900<br>> ><br>> > DEBUG_LEVEL = 3<br>> ><br>> > #*******************************************************************************<br>> > # Physical Networks configuration<br>> > #*******************************************************************************<br>> > # 
NETWORK_SIZE: Here you can define the default size for the virtual <br>> > networks<br>> > #<br>> > # MAC_PREFIX: Default MAC prefix to be used to create the <br>> > auto-generated MAC<br>> > # addresses is defined here (this can be overrided by the Virtual <br>> > Network<br>> > # template)<br>> > #*******************************************************************************<br>> ><br>> > NETWORK_SIZE = 254<br>> ><br>> > MAC_PREFIX = "02:00"<br>> ><br>> > #*******************************************************************************<br>> > # Image Repository Configuration<br>> > #*******************************************************************************<br>> > # DEFAULT_IMAGE_TYPE: This can take values<br>> > # OS Image file holding an operating system<br>> > # CDROM Image file holding a CDROM<br>> > # DATABLOCK Image file holding a datablock,<br>> > # always created as an empty block<br>> > # DEFAULT_DEVICE_PREFIX: This can be set to<br>> > # hd IDE prefix<br>> > # sd SCSI<br>> > # xvd XEN Virtual Disk<br>> > # vd KVM virtual disk<br>> > #*******************************************************************************<br>> > DEFAULT_IMAGE_TYPE = "OS"<br>> > DEFAULT_DEVICE_PREFIX = "hd"<br>> ><br>> > #*******************************************************************************<br>> > # Information Driver Configuration<br>> > #*******************************************************************************<br>> > # You can add more information managers with different configurations <br>> > but make<br>> > # sure it has different names.<br>> > #<br>> > # name : name for this information manager<br>> > #<br>> > # executable: path of the information driver executable, can be an<br>> > # absolute path or relative to $ONE_LOCATION/lib/mads (or<br>> > # /usr/lib/one/mads/ if OpenNebula was installed in /)<br>> > #<br>> > # arguments : for the driver executable, usually a probe configuration <br>> > file,<br>> > # can be an absolute path or relative to $ONE_LOCATION/etc <br>> > (or<br>> > # /etc/one/ if OpenNebula was installed in /)<br>> > #*******************************************************************************<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # KVM Information Driver Manager Configuration<br>> > # -r number of retries when monitoring a host<br>> > # -t number of threads, i.e. number of hosts monitored at the same <br>> > time<br>> > #-------------------------------------------------------------------------------<br>> > IM_MAD = [<br>> > name = "im_kvm",<br>> > executable = "one_im_ssh",<br>> > arguments = "-r 0 -t 15 kvm" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # XEN Information Driver Manager Configuration<br>> > # -r number of retries when monitoring a host<br>> > # -t number of threads, i.e. 
number of hosts monitored at the same <br>> > time<br>> > #-------------------------------------------------------------------------------<br>> > #IM_MAD = [<br>> > # name = "im_xen",<br>> > # executable = "one_im_ssh",<br>> > # arguments = "xen" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # VMware Information Driver Manager Configuration<br>> > # -r number of retries when monitoring a host<br>> > # -t number of threads, i.e. number of hosts monitored at the same <br>> > time<br>> > #-------------------------------------------------------------------------------<br>> > #IM_MAD = [<br>> > # name = "im_vmware",<br>> > # executable = "one_im_sh",<br>> > # arguments = "-t 15 -r 0 vmware" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # EC2 Information Driver Manager Configuration<br>> > #-------------------------------------------------------------------------------<br>> > #IM_MAD = [<br>> > # name = "im_ec2",<br>> > # executable = "one_im_ec2",<br>> > # arguments = "im_ec2/im_ec2.conf" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-----------------------------------------------------------------------------<br>> > # Ganglia Information Driver Manager Configuration<br>> > #-----------------------------------------------------------------------------<br>> > #IM_MAD = [<br>> > # name = "im_ganglia",<br>> > # executable = "one_im_sh",<br>> > # arguments = "ganglia" ]<br>> > #-----------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # Dummy Information Driver Manager Configuration<br>> > #-------------------------------------------------------------------------------<br>> > #IM_MAD = [ name="im_dummy", executable="one_im_dummy"]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #*******************************************************************************<br>> > # Virtualization Driver Configuration<br>> > #*******************************************************************************<br>> > # You can add more virtualization managers with different configurations <br>> > but<br>> > # make sure it has different names.<br>> > #<br>> > # name : name of the virtual machine manager driver<br>> > #<br>> > # executable: path of the virtualization driver executable, can be an<br>> > # absolute path or relative to $ONE_LOCATION/lib/mads (or<br>> > # /usr/lib/one/mads/ if OpenNebula was installed in /)<br>> > #<br>> > # arguments : for the driver executable<br>> > #<br>> > # default : default values and configuration parameters for the <br>> > driver, can<br>> > # be an absolute path or relative to $ONE_LOCATION/etc (or<br>> > # /etc/one/ if OpenNebula was installed in /)<br>> > #<br>> > # type : driver type, supported drivers: xen, kvm, xml<br>> > #*******************************************************************************<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # KVM Virtualization Driver Manager Configuration<br>> > # -r number of retries when monitoring a host<br>> > # -t number of threads, i.e. 
number of hosts monitored at the same <br>> > time<br>> > # -l <actions[=command_name]> actions executed locally, command can be<br>> > # overridden for each action.<br>> > # Valid actions: deploy, shutdown, cancel, save, restore, <br>> > migrate, poll<br>> > # An example: "-l migrate,poll=poll_ganglia,save"<br>> > #-------------------------------------------------------------------------------<br>> > VM_MAD = [<br>> > name = "vmm_kvm",<br>> > executable = "one_vmm_exec",<br>> > arguments = "-t 15 -r 0 kvm",<br>> > default = "vmm_exec/vmm_exec_kvm.conf",<br>> > type = "kvm" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # XEN Virtualization Driver Manager Configuration<br>> > # -r number of retries when monitoring a host<br>> > # -t number of threads, i.e. number of hosts monitored at the same <br>> > time<br>> > # -l <actions[=command_name]> actions executed locally, command can be<br>> > # overridden for each action.<br>> > # Valid actions: deploy, shutdown, cancel, save, restore, <br>> > migrate, poll<br>> > # An example: "-l migrate,poll=poll_ganglia,save"<br>> > #-------------------------------------------------------------------------------<br>> > #VM_MAD = [<br>> > # name = "vmm_xen",<br>> > # executable = "one_vmm_exec",<br>> > # arguments = "-t 15 -r 0 xen",<br>> > # default = "vmm_exec/vmm_exec_xen.conf",<br>> > # type = "xen" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # VMware Virtualization Driver Manager Configuration<br>> > # -r number of retries when monitoring a host<br>> > # -t number of threads, i.e. 
number of hosts monitored at the same <br>> > time<br>> > #-------------------------------------------------------------------------------<br>> > #VM_MAD = [<br>> > # name = "vmm_vmware",<br>> > # executable = "one_vmm_sh",<br>> > # arguments = "-t 15 -r 0 vmware",<br>> > # default = "vmm_exec/vmm_exec_vmware.conf",<br>> > # type = "vmware" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # EC2 Virtualization Driver Manager Configuration<br>> > # arguments: default values for the EC2 driver, can be an absolute <br>> > path or<br>> > # relative to $ONE_LOCATION/etc (or /etc/one/ if <br>> > OpenNebula was<br>> > # installed in /).<br>> > #-------------------------------------------------------------------------------<br>> > #VM_MAD = [<br>> > # name = "vmm_ec2",<br>> > # executable = "one_vmm_ec2",<br>> > # arguments = "vmm_ec2/vmm_ec2.conf",<br>> > # type = "xml" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # Dummy Virtualization Driver Configuration<br>> > #-------------------------------------------------------------------------------<br>> > #VM_MAD = [ name="vmm_dummy", executable="one_vmm_dummy", type="xml" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #*******************************************************************************<br>> > # Transfer Manager Driver Configuration<br>> > #*******************************************************************************<br>> > # You can add more transfer managers with different configurations but <br>> > make<br>> > # sure it has different names.<br>> > # name : name for this transfer driver<br>> > #<br>> > # executable: path of the transfer driver executable, can be an<br>> > # absolute path or relative to $ONE_LOCATION/lib/mads (or<br>> > # /usr/lib/one/mads/ if OpenNebula was installed in /)<br>> > #<br>> > # arguments : for the driver executable, usually a commands <br>> > configuration file<br>> > # , can be an absolute path or relative to <br>> > $ONE_LOCATION/etc (or<br>> > # /etc/one/ if OpenNebula was installed in /)<br>> > #*******************************************************************************<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # SHARED Transfer Manager Driver Configuration<br>> > #-------------------------------------------------------------------------------<br>> > TM_MAD = [<br>> > name = "tm_shared",<br>> > executable = "one_tm",<br>> > arguments = "tm_shared/tm_shared.conf" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # SSH Transfer Manager Driver Configuration<br>> > #-------------------------------------------------------------------------------<br>> > #TM_MAD = [<br>> > # name = "tm_ssh",<br>> > # executable = "one_tm",<br>> > # arguments = "tm_ssh/tm_ssh.conf" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # Dummy Transfer Manager Driver Configuration<br>> > 
#-------------------------------------------------------------------------------<br>> > #TM_MAD = [<br>> > # name = "tm_dummy",<br>> > # executable = "one_tm",<br>> > # arguments = "tm_dummy/tm_dummy.conf" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # LVM Transfer Manager Driver Configuration<br>> > #-------------------------------------------------------------------------------<br>> > #TM_MAD = [<br>> > # name = "tm_lvm",<br>> > # executable = "one_tm",<br>> > # arguments = "tm_lvm/tm_lvm.conf" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #-------------------------------------------------------------------------------<br>> > # VMware DataStore Transfer Manager Driver Configuration<br>> > #-------------------------------------------------------------------------------<br>> > #TM_MAD = [<br>> > # name = "tm_vmware",<br>> > # executable = "one_tm",<br>> > # arguments = "tm_vmware/tm_vmware.conf" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #*******************************************************************************<br>> > # Image Manager Driver Configuration<br>> > #*******************************************************************************<br>> > # Drivers to manage the image repository, specialized for the storage <br>> > backend<br>> > # executable: path of the transfer driver executable, can be an<br>> > # absolute path or relative to $ONE_LOCATION/lib/mads (or<br>> > # /usr/lib/one/mads/ if OpenNebula was installed in /)<br>> > #<br>> > # arguments : for the driver executable<br>> > #*******************************************************************************<br>> > #-------------------------------------------------------------------------------<br>> > # FS based Image Manager Driver Configuration<br>> > # -t number of threads, i.e. 
number of repo operations at the same <br>> > time<br>> > #-------------------------------------------------------------------------------<br>> > IMAGE_MAD = [<br>> > executable = "one_image",<br>> > arguments = "fs -t 15" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #*******************************************************************************<br>> > # Hook Manager Configuration<br>> > #*******************************************************************************<br>> > # The Driver (HM_MAD), used to execute the Hooks<br>> > # executable: path of the hook driver executable, can be an<br>> > # absolute path or relative to $ONE_LOCATION/lib/mads (or<br>> > # /usr/lib/one/mads/ if OpenNebula was installed in /)<br>> > #<br>> > # arguments : for the driver executable, can be an absolute path or <br>> > relative<br>> > # to $ONE_LOCATION/etc (or /etc/one/ if OpenNebula was <br>> > installed<br>> > # in /)<br>> > #<br>> > # Virtual Machine Hooks (VM_HOOK) defined by:<br>> > # name : for the hook, useful to track the hook (OPTIONAL)<br>> > # on : when the hook should be executed,<br>> > # - CREATE, when the VM is created (onevm create)<br>> > # - PROLOG, when the VM is in the prolog state<br>> > # - RUNNING, after the VM is successfully booted<br>> > # - SHUTDOWN, after the VM is shutdown<br>> > # - STOP, after the VM is stopped (including VM image <br>> > transfers)<br>> > # - DONE, after the VM is deleted or shutdown<br>> > # - FAILED, when the VM enters the failed state<br>> > # command : path is relative to $ONE_LOCATION/var/remotes/hook<br>> > # (self-contained) or to /var/lib/one/remotes/hook <br>> > (system-wide).<br>> > # That directory will be copied on the hosts under<br>> > # SCRIPTS_REMOTE_DIR. It can be an absolute path that must <br>> > exist<br>> > # on the target host<br>> > # arguments : for the hook. You can access to VM information with $<br>> > # - $VMID, the ID of the virtual machine<br>> > # - $TEMPLATE, the VM template in xml and base64 encoded<br>> > # remote : values,<br>> > # - YES, The hook is executed in the host where the VM was<br>> > # allocated<br>> > # - NO, The hook is executed in the OpenNebula server <br>> > (default)<br>> > #<br>> > #<br>> > # Host Hooks (HOST_HOOK) defined by:<br>> > # name : for the hook, useful to track the hook (OPTIONAL)<br>> > # on : when the hook should be executed,<br>> > # - CREATE, when the Host is created (onehost create)<br>> > # - ERROR, when the Host enters the error state<br>> > # - DISABLE, when the Host is disabled<br>> > # command : path is relative to $ONE_LOCATION/var/remotes/hook<br>> > # (self-contained) or to /var/lib/one/remotes/hook <br>> > (system-wide).<br>> > # That directory will be copied on the hosts under<br>> > # SCRIPTS_REMOTE_DIR. It can be an absolute path that must <br>> > exist<br>> > # on the target host.<br>> > # arguments : for the hook. 
You can use the following Host information:<br>> > # - $HID, the ID of the host<br>> > # - $TEMPLATE, the Host template in xml and base64 encoded<br>> > # remote : values,<br>> > # - YES, The hook is executed in the host<br>> > # - NO, The hook is executed in the OpenNebula server <br>> > (default)<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > HM_MAD = [<br>> > executable = "one_hm" ]<br>> ><br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #*******************************************************************************<br>> > # Fault Tolerance Hooks<br>> > #*******************************************************************************<br>> > # This hook is used to perform recovery actions when a host fails. The <br>> > VMs<br>> > # running in the host can be deleted (use -d option) or resubmitted (-r) <br>> > in<br>> > # other host<br>> > # Last argument (force) can be "y", so suspended VMs in the host will be<br>> > # resubmitted/deleted, or "n", so suspended VMs in the host will be <br>> > ignored<br>> > #<br>> > #HOST_HOOK = [<br>> > # name = "error",<br>> > # on = "ERROR",<br>> > # command = "ft/host_error.rb",<br>> > # arguments = "$HID -r n",<br>> > # remote = "no" ]<br>> > #-------------------------------------------------------------------------------<br>> > # These two hooks can be used to automatically delete or resubmit VMs <br>> > that reach<br>> > # the "failed" state. This way, the administrator doesn't have to <br>> > interact<br>> > # manually to release its resources or retry the deployment.<br>> > #<br>> > #<br>> > # Only one of them should be uncommented.<br>> > #-------------------------------------------------------------------------------<br>> > #<br>> > #VM_HOOK = [<br>> > # name = "on_failure_delete",<br>> > # on = "FAILED",<br>> > # command = "/usr/bin/env onevm delete",<br>> > # arguments = "$VMID" ]<br>> > #<br>> > #VM_HOOK = [<br>> > # name = "on_failure_resubmit",<br>> > # on = "FAILED",<br>> > # command = "/usr/bin/env onevm resubmit",<br>> > # arguments = "$VMID" ]<br>> > #-------------------------------------------------------------------------------<br>> ><br>> > #*******************************************************************************<br>> > # Auth Manager Configuration<br>> > #*******************************************************************************<br>> > # AUTH_MAD: The Driver that will be used to authenticate (authn) and<br>> > # authorize (authz) OpenNebula requests. If defined OpenNebula will use <br>> > the<br>> > # built-in auth policies.<br>> > #<br>> > # executable: path of the auth driver executable, can be an<br>> > # absolute path or relative to $ONE_LOCATION/lib/mads (or<br>> > # /usr/lib/one/mads/ if OpenNebula was installed in /)<br>> > #<br>> > # arguments :<br>> > # --authn: list of authentication modules separated by commas, if <br>> > not<br>> > # defined all the modules available will be enabled<br>> > # --authz: authorization module<br>> > #<br>> > # SESSION_EXPIRATION_TIME: Time in seconds to keep an authenticated <br>> > token as<br>> > # valid. During this time, the driver is not used. Use 0 to disable <br>> > session<br>> > # caching<br>> > #<br>> > # ENABLE_OTHER_PERMISSIONS: Whether or not to enable the permissions for<br>> > # 'other'. Users in the oneadmin group will still be able to change<br>> > # these permissions. 
Values: YES or NO<br>> > #*******************************************************************************<br>> ><br>> > AUTH_MAD = [<br>> > executable = "one_auth_mad",<br>> > arguments = "--authn ssh,x509,ldap,server_cipher,server_x509"<br>> > # arguments = "--authz quota --authn <br>> > ssh,x509,ldap,server_cipher,server_x509"<br>> > ]<br>> ><br>> > SESSION_EXPIRATION_TIME = 900<br>> ><br>> > #ENABLE_OTHER_PERMISSIONS = "YES"<br>> ><br>> ><br>> > /etc/one/sunstone-server.conf<br>> ><br>> > # OpenNebula sever contact information<br>> > :one_xmlrpc: http://localhost:2633/RPC2<br>> ><br>> > # Server Configuration<br>> > :host: localhost<br>> > :port: 8080<br>> ><br>> > # Authentication driver for incomming requests<br>> > # sunstone, for OpenNebula's user-password scheme<br>> > # x509, for x509 certificates based authentication<br>> > :auth: sunstone<br>> ><br>> > # Authentication driver to communicate with OpenNebula core<br>> > # cipher, for symmetric cipher encryption of tokens<br>> > # x509, for x509 certificate encryption of tokens<br>> > :core_auth: cipher<br>> ><br>> > # VNC Configuration<br>> > :vnc_proxy_base_port: 29876<br>> > :novnc_path:<br>> ><br>> > # Default language setting<br>> > :lang: en_US<br>> ><br>> ><br>> > /var/log/one/oned.log<br>> ><br>> > Wed May 2 11:24:38 2012 [ONE][I]: Starting OpenNebula 3.2.1<br>> > ----------------------------------------<br>> > OpenNebula Configuration File<br>> > ----------------------------------------<br>> > AUTH_MAD=ARGUMENTS=--authn <br>> > ssh,x509,ldap,server_cipher,server_x509,EXECUTABLE=one_auth_mad<br>> > DB=BACKEND=sqlite<br>> > DEBUG_LEVEL=3<br>> > DEFAULT_DEVICE_PREFIX=hd<br>> > DEFAULT_IMAGE_TYPE=OS<br>> > ENABLE_OTHER_PERMISSIONS=YES<br>> > HM_MAD=EXECUTABLE=one_hm<br>> > HOST_MONITORING_INTERVAL=600<br>> > HOST_PER_INTERVAL=15<br>> > IMAGE_MAD=ARGUMENTS=fs -t 15,EXECUTABLE=one_image<br>> > IM_MAD=ARGUMENTS=-r 0 -t 15 kvm,EXECUTABLE=one_im_ssh,NAME=im_kvm<br>> > MAC_PREFIX=02:00<br>> > MANAGER_TIMER=15<br>> > NETWORK_SIZE=254<br>> > PORT=2633<br>> > SCRIPTS_REMOTE_DIR=/var/tmp/one<br>> > SESSION_EXPIRATION_TIME=900<br>> > TM_MAD=ARGUMENTS=tm_shared/tm_shared.conf,EXECUTABLE=one_tm,NAME=tm_shared<br>> > VM_DIR=/var/lib/one/<br>> > VM_MAD=ARGUMENTS=-t 15 -r 0 <br>> > kvm,DEFAULT=vmm_exec/vmm_exec_kvm.conf,EXECUTABLE=one_vmm_exec,NAME=vmm_kvm,TYPE=kvm<br>> > VM_PER_INTERVAL=5<br>> > VM_POLLING_INTERVAL=600<br>> > VNC_BASE_PORT=5900<br>> > ----------------------------------------<br>> > Wed May 2 11:24:38 2012 [ONE][I]: Log level:3 <br>> > [0=ERROR,1=WARNING,2=INFO,3=DEBUG]<br>> > Wed May 2 11:24:38 2012 [ONE][I]: Checking database version.<br>> > Wed May 2 11:24:38 2012 [VMM][I]: Starting Virtual Machine Manager...<br>> > Wed May 2 11:24:38 2012 [LCM][I]: Starting Life-cycle Manager...<br>> > Wed May 2 11:24:38 2012 [VMM][I]: Virtual Machine Manager started.<br>> > Wed May 2 11:24:38 2012 [LCM][I]: Life-cycle Manager started.<br>> > Wed May 2 11:24:38 2012 [InM][I]: Starting Information Manager...<br>> > Wed May 2 11:24:38 2012 [InM][I]: Information Manager started.<br>> > Wed May 2 11:24:38 2012 [TrM][I]: Starting Transfer Manager...<br>> > Wed May 2 11:24:38 2012 [DiM][I]: Starting Dispatch Manager...<br>> > Wed May 2 11:24:38 2012 [TrM][I]: Transfer Manager started.<br>> > Wed May 2 11:24:38 2012 [HKM][I]: Starting Hook Manager...<br>> > Wed May 2 11:24:38 2012 [DiM][I]: Dispatch Manager started.<br>> > Wed May 2 11:24:38 2012 [HKM][I]: Hook Manager started.<br>> > Wed May 2 11:24:38 2012 [AuM][I]: Starting Auth 
Manager...<br>> > Wed May 2 11:24:38 2012 [AuM][I]: Authorization Manager started.<br>> > Wed May 2 11:24:38 2012 [ImM][I]: Starting Image Manager...<br>> > Wed May 2 11:24:38 2012 [ImM][I]: Image Manager started.<br>> > Wed May 2 11:24:38 2012 [ReM][I]: Starting Request Manager...<br>> > Wed May 2 11:24:38 2012 [ReM][I]: Starting XML-RPC server, port 2633 ...<br>> > Wed May 2 11:24:38 2012 [ReM][I]: Request Manager started.<br>> > Wed May 2 11:24:40 2012 [VMM][I]: Loading Virtual Machine Manager <br>> > drivers.<br>> > Wed May 2 11:24:40 2012 [VMM][I]: Loading driver: vmm_kvm (KVM)<br>> > Wed May 2 11:24:40 2012 [VMM][I]: Driver vmm_kvm loaded.<br>> > Wed May 2 11:24:40 2012 [InM][I]: Loading Information Manager drivers.<br>> > Wed May 2 11:24:40 2012 [InM][I]: Loading driver: im_kvm<br>> > Wed May 2 11:24:40 2012 [InM][I]: Driver im_kvm loaded<br>> > Wed May 2 11:24:40 2012 [TM][I]: Loading Transfer Manager drivers.<br>> > Wed May 2 11:24:40 2012 [VMM][I]: Loading driver: tm_shared<br>> > Wed May 2 11:24:40 2012 [TM][I]: Driver tm_shared loaded.<br>> > Wed May 2 11:24:40 2012 [HKM][I]: Loading Hook Manager driver.<br>> > Wed May 2 11:24:40 2012 [HKM][I]: Hook Manager loaded<br>> > Wed May 2 11:24:40 2012 [ImM][I]: Loading Image Manager driver.<br>> > Wed May 2 11:24:40 2012 [ImM][I]: Image Manager loaded<br>> > Wed May 2 11:24:40 2012 [AuM][I]: Loading Auth. Manager driver.<br>> > Wed May 2 11:24:40 2012 [AuM][I]: Auth Manager loaded<br>> > Wed May 2 11:24:57 2012 [ReM][D]: HostPoolInfo method invoked<br>> > Wed May 2 11:24:57 2012 [ReM][D]: VirtualMachinePoolInfo method invoked<br>> > Wed May 2 11:24:57 2012 [ReM][D]: AclInfo method invoked<br>> > Wed May 2 11:25:27 2012 [ReM][D]: HostPoolInfo method invoked<br>> > Wed May 2 11:25:27 2012 [ReM][D]: VirtualMachinePoolInfo method invoked<br>> > Wed May 2 11:25:27 2012 [ReM][D]: AclInfo method invoked<br>> > Wed May 2 11:26:56 2012 [ReM][D]: HostPoolInfo method invoked<br>> > Wed May 2 11:26:57 2012 [ReM][D]: VirtualMachinePoolInfo method invoked<br>> > Wed May 2 11:26:57 2012 [ReM][D]: AclInfo method invoked<br>> > Wed May 2 11:27:26 2012 [ReM][D]: HostPoolInfo method invoked<br>> > Wed May 2 11:27:27 2012 [ReM][D]: VirtualMachinePoolInfo method invoked<br>> > Wed May 2 11:27:27 2012 [ReM][D]: AclInfo method invoked<br>> > Wed May 2 11:27:39 2012 [ReM][D]: UserPoolInfo method invoked<br>> > Wed May 2 11:27:39 2012 [ReM][E]: [UserPoolInfo] User couldn't be <br>> > authenticated, aborting call.<br>> ><br>> ><br>> ><br>> > /var/log/one/sunstone.log<br>> ><br>> > == Sinatra/1.3.2 has taken the stage on 8080 for development with backup <br>> > from Thin<br>> ><br>> > (and that's all until I stop the sunstone-server.)<br>> ><br>> ><br>> > Again, thanks all for your answers, Olivier, Lehel & Hector :-)<br>> ><br>> > Guillaume<br>> ><br>> ><br>> > Date: Wed, 2 May 2012 08:18:55 +0200<br>> > From: olivier.sallou@codeless.fr<br>> > To: licks0re@hotmail.com<br>> > Subject: Re: [one-users] sunstone says : "OpenNebula is not running"<br>> ><br>> ><br>> ><br>> ><br>> > Le 5/1/12 10:10 AM, Guigui 6675636b206f6666 a écrit :<br>> ><br>> > Well, it's on the same machine so I guess it tries to connect<br>> > localhost, am I wrong?<br>> ><br>> ><br>> > By the way, happy day-off 1st of may ;-)<br>> ><br>> ><br>> > All depends on configuration.<br>> ><br>> > If it listens on 127.0.0.1, it will accept connections only on this<br>> > interface. 
So if Sunstone is configured to call oned on the current IP
> > instead of 127.0.0.1, it will fail, for example.
> >
> > The listen address and the connection address should be exactly the
> > same (or one should listen on 0.0.0.0).
> >
> > Olivier
> >
> >> To: users@lists.opennebula.org
> >> Date: Tue, 1 May 2012 12:28:06 +0200
> >> From: hsanjuan@opennebula.org
> >> Subject: Re: [one-users] sunstone says : "OpenNebula is not running"
> >>
> >> Hi, can you send /var/log/one/sunstone.log? It may have useful info. Also
> >> check oned.log in case it is an auth problem.
> >>
> >> Hector
> >>
> >> On Tue, 01 May 2012 08:50:58 +0200, Guigui 6675636b206f6666
> >> <licks0re@hotmail.com> wrote:
> >>
> >> > Good morning,
> >> >
> >> > I'm trying to set up OpenNebula on openSUSE 12.1.
> >> >
> >> > I followed this document (which, if you follow the steps exactly, does
> >> > NOT work at the end):
> >> >
> >> > SDB:Cloud OpenNebula - openSUSE
> >> >
> >> > The problem is that I can't get the Sunstone server working. I
> >> > double-checked the settings, etc.; no way I'll get it to work.
> >> >
> >> > Sunstone server systematically answers:
> >> >
> >> > "OpenNebula is not running"
> >> >
> >> > But I confirm that OpenNebula IS running; here's a ps -ef | grep one:
> >> >
> >> > oneadmin 3046 1 0 Apr30 pts/0 00:00:05 /usr/bin/oned -f
> >> > oneadmin 3064 1 0 Apr30 pts/0 00:00:03 /usr/bin/mm_sched
> >> > oneadmin 3067 3046 0 Apr30 pts/0 00:00:00 ruby
> >> >   /usr/lib/one/mads/one_vmm_exec.rb -t 15 -r 0 kvm
> >> > oneadmin 3077 3046 0 Apr30 pts/0 00:00:00 ruby
> >> >   /usr/lib/one/mads/one_im_exec.rb -r 0 -t 15 kvm
> >> > oneadmin 3086 3046 0 Apr30 pts/0 00:00:00 ruby
> >> >   /usr/lib/one/mads/one_tm.rb tm_shared/tm_shared.conf
> >> > oneadmin 3099 3046 0 Apr30 pts/0 00:00:00 ruby
> >> >   /usr/lib/one/mads/one_hm.rb
> >> > oneadmin 3109 3046 0 Apr30 pts/0 00:00:00 ruby
> >> >   /usr/lib/one/mads/one_image.rb fs -t 15
> >> > oneadmin 3118 3046 0 Apr30 pts/0 00:00:00 ruby
> >> >   /usr/lib/one/mads/one_auth_mad.rb --authn
> >> >   ssh,x509,ldap,server_cipher,server_x509
> >> > root 3162 1 0 Apr30 pts/0 00:00:17 ruby
> >> >   /usr/lib/one/sunstone/sunstone-server.rb
> >> > root 6697 2989 0 08:18 pts/0 00:00:00 grep --color=auto one
> >> >
> >> > Any clues?
> >> >
> >> > Thanks in advance!
> >>
> >> --
> >> Hector Sanjuan
> >> OpenNebula Developer
> >> _______________________________________________
> >> Users mailing list
> >> Users@lists.opennebula.org
> >> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
> >
>
> --
> Hector Sanjuan
> OpenNebula Developer
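
PS - Here is, step by step, what I plan to try next, based on your answers.
First, check that oned really answers XML-RPC calls with the oneadmin
credentials from one_auth. The paths below assume the packaged, system-wide
layout (oneadmin's home being /var/lib/one); I will adjust them if that
assumption is wrong:

  # run as the oneadmin user on the front-end
  export ONE_AUTH=/var/lib/one/.one/one_auth      # assumed location of the one_auth file listed above
  export ONE_XMLRPC=http://localhost:2633/RPC2    # same endpoint Sunstone is configured to call

  oneuser list                 # should print the user table, not an authentication error
  netstat -ltn | grep 2633     # oned should be listening here (PORT = 2633 in oned.conf)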
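
Second, Hector's hint about serveradmin: as far as I understand, with
":core_auth: cipher" Sunstone logs into oned as the serveradmin user, so the
password stored for serveradmin has to match the one written in sunstone_auth.
This is my reading of what to run (the --sha1 form is taken from Hector's
mail; the sunstone_auth path is again an assumption on my side):

  # update the serveradmin password stored by oned, as suggested
  oneuser passwd serveradmin susenebula --sha1

  # sunstone_auth should then hold the matching serveradmin credentials,
  # not the oneadmin ones (path assumed to be oneadmin's ~/.one directory)
  echo 'serveradmin:susenebula' > /var/lib/one/.one/sunstone_auth
  chmod 600 /var/lib/one/.one/sunstone_auth

  # restart Sunstone so it re-reads the file
  sunstone-server stop
  sunstone-server start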
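
And finally, Olivier's point about the addresses: the :one_xmlrpc: endpoint in
sunstone-server.conf has to use an address oned actually listens on (here both
run on the same machine, so localhost should be fine). A quick way to compare
the two sides:

  grep -E '^PORT' /etc/one/oned.conf                                    # where oned listens (2633)
  grep -E ':one_xmlrpc:|:host:|:port:' /etc/one/sunstone-server.conf    # what Sunstone calls and serves

Guillaume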