[one-users] Users Digest, Vol 69, Issue 53
普通人
1360848475 at qq.com
Wed Nov 13 00:43:27 PST 2013
Yeah, you're right. NFS shared storage is very easy and comfortable to work with. Have you tried Ceph?
Thank you very much!
Let's be friends! O(∩_∩)O
My name is Jayway!
------------------ Original ------------------
From: "users-request";<users-request at lists.opennebula.org>;
Date: Wed, Nov 13, 2013 04:37 PM
To: "users"<users at lists.opennebula.org>;
Subject: Users Digest, Vol 69, Issue 53
Send Users mailing list submissions to
users at lists.opennebula.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
or, via email, send a message with subject or body 'help' to
users-request at lists.opennebula.org
You can reach the person managing the list at
users-owner at lists.opennebula.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: ??? Users Digest, Vol 69, Issue 49 (Sharuzzaman Ahmat Raslan)
----------------------------------------------------------------------
Message: 1
Date: Wed, 13 Nov 2013 16:36:52 +0800
From: Sharuzzaman Ahmat Raslan <sharuzzaman at gmail.com>
To: ??? <1360848475 at qq.com>, users <users at lists.opennebula.org>
Subject: Re: [one-users] ??? Users Digest, Vol 69, Issue 49
Message-ID:
<CAK+zucmCQsnd9s99e3r-Xx8-rL=9ktp7LmfL3ueUuuzpOycG+A at mail.gmail.com>
Content-Type: text/plain; charset="gb2312"
Well, that bug report is beyond my knowledge.
But I still believe that setting up an NFS server and moving your VM images to
NFS is easier to implement now and will remain supported in the future.
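As a rough sketch of what that could look like (the export path and client
network below are placeholders, not taken from this thread), you would export
a directory from the NFS server and mount it on the front-end and on every host:

--8<------
# On the NFS server (placeholder export path and client network):
echo "/var/lib/one/datastores 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)" >> /etc/exports
exportfs -ra

# On the OpenNebula front-end and on each hypervisor host:
mount -t nfs nfs-server:/var/lib/one/datastores /var/lib/one/datastores
------>8--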
Thanks.
On Wed, Nov 13, 2013 at 4:31 PM, ??? <1360848475 at qq.com> wrote:
> I use local filesystem storage; all VMs are stored locally on their host. I
> use ssh for TM_MAD.
>
> http://dev.opennebula.org/issues/660
>
> Look at this: there is a way to do local caching in OpenNebula 3.8, but now,
> in OpenNebula 4.2, there is not!
>
> ------------------ Original ------------------
> *From:* "Sharuzzaman Ahmat Raslan" <sharuzzaman at gmail.com>
> *Date:* Wed, Nov 13, 2013 (Wednesday) 4:20 PM
> *To:* "???" <1360848475 at qq.com>
> *Cc:* "users" <users at lists.opennebula.org>
> *Subject:* Re: [one-users] Users Digest, Vol 69, Issue 49
>
> If you use shared storage, and have datastore 0 and datastore 100 (your
> first datastore) on a shared filesystem, e.g. NFS, your VMs can be
> deployed in a short time, as the VM image will be linked to the original
> image, or cloned within the same filesystem, which should be faster.
>
> See http://opennebula.org/documentation:archives:rel3.4:system_ds for
> more information
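> As an illustrative sketch (the datastore name and the mount path below are
> assumptions, not taken from this thread), the key point is that the system
> datastore uses the shared TM driver on an NFS-backed path visible to all hosts:
>
> --8<------
> $ cat nfs-system-ds.conf        # hypothetical template file
> NAME   = nfs_system
> TYPE   = SYSTEM_DS
> TM_MAD = shared
> $ onedatastore create nfs-system-ds.conf
> $ df -h /var/lib/one/datastores # the same NFS mount must be visible on all hosts
> ------>8--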
>
>
> On Wed, Nov 13, 2013 at 4:05 PM, ??? <1360848475 at qq.com> wrote:
>
>> Hi everyone,
>> when I create many VMs (more than 100), it still scps the images from the
>> image datastore, which is very slow. Is there any way to speed up VM
>> creation, such as locally caching the images?
>>
>>
>> ------------------ Original ------------------
>> *From:* "users-request" <users-request at lists.opennebula.org>
>> *Date:* Wed, Nov 13, 2013 12:43 PM
>> *To:* "users" <users at lists.opennebula.org>
>> *Subject:* Users Digest, Vol 69, Issue 49
>>
>>
>>
>> Today's Topics:
>>
>> 1. Re: How to manage vm created by opennebula by virsh tools? (caohf)
>> 2. Re: how to reset VM ID??? (???)
>> 3. Re: VM in Opennebula for Esxi 5.1 failed (Catalina Quinde)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Wed, 13 Nov 2013 09:02:47 +0800
>> From: caohf <caohf at wedogame.com>
>> To: vishnu e divakaran <vishnued at gmail.com>
>> Cc: "users at lists.opennebula.org" <Users at lists.opennebula.org>
>> Subject: Re: [one-users] How to manage vm created by opennebula by
>> virsh tools?
>> Message-ID: <201311130902475444642 at wedogame.com>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Hi:
>> Here is a document for virt-install; this command can help you create an
>> image file.
>>
>> http://www.techotopia.com/index.php/Installing_a_CentOS_KVM_Guest_OS_from_the_Command-line_(virt-install)
>>
>> After you finish the installation, you can import the image file into
>> OpenNebula via Sunstone.
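>> As a rough, untested sketch (the paths, sizes and datastore name are
>> placeholders, not taken from this thread), the flow could look like:
>>
>> --8<------
>> $ virt-install --name centos-build --ram 1024 --vcpus 1 \
>>     --disk path=/var/tmp/centos.img,size=10,format=raw \
>>     --cdrom /var/tmp/CentOS-minimal.iso --graphics vnc
>> # once the installation finishes, register the disk as an OpenNebula image:
>> $ oneimage create --name centos-base --path /var/tmp/centos.img -d default
>> ------>8--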
>>
>>
>>
>> Best Wishes!
>> Dennis
>>
>> From: vishnu e divakaran
>> Date: 2013-11-12 21:40
>> To: caohf
>> Subject: Re: [one-users] How to manage vm created by opennebula by virsh
>> tools?
>> Dear friend,
>>
>> Can you tell me how I can create an image file using virsh? Any sample
>> code or anything would help.
>>
>>
>>
>>
>> On 12 November 2013 15:03, caohf <caohf at wedogame.com> wrote:
>>
>> Thanks for your help.
>>
>>
>>
>>
>>
>> Best wishes
>> Dennis
>>
>> From: Sharuzzaman Ahmat Raslan
>> Date: 2013-11-12 14:48
>> To: caohf
>> CC: users at lists.opennebula.org
>> Subject: Re: Re: [one-users] How to manage vm created by opennebula by
>> virsh tools?
>> Hi Dennis,
>>
>>
>> When OpenNebula shuts down a VM, it destroys the XML definition on the host
>> and unregisters the VM from KVM.
>>
>>
>> If you just want it to stop so that you can do something with the image,
>> e.g. make a backup, you can use the command
>>
>>
>> onevm suspend <vmid>
>>
>>
>> then the VM will still be defined on the host and can be queried with the
>> virsh list command.
>>
>>
>> Read more at:
>>
>> http://opennebula.org/documentation:rel4.2:vm_guide_2
>> http://opennebula.org/doc/4.2/cli/onevm.1.html
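>> For example (VM ID 104 and the host name are only illustrative, not taken
>> from this thread), a minimal check could be:
>>
>> --8<------
>> $ onevm suspend 104                               # on the front-end, as oneadmin
>> $ ssh kvm-host "virsh list --all | grep one-104"  # should still show up, per the advice above
>> $ onevm resume 104                                # bring it back when done
>> ------>8--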
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Nov 12, 2013 at 12:33 PM, caohf <caohf at wedogame.com> wrote:
>>
>>
>> Thanks
>>
>> But when i use virsh list --all command
>>
>>
>> -----------------------------------------------------------------
>> [root at opennebula-node1 ~]# virsh list --all
>> Id Name State
>> ----------------------------------------------------
>> 43 one-111 running
>> 44 one-104 running
>> 48 one-123 running
>> - centos5.6.kvm shut off
>> ----------------------------------------------------------------
>>
>>
>> If the VM managed by ONE is running, it can be listed by the virsh list --all
>> command.
>>
>> When I destroy a VM:
>>
>>
>> -------------------------------------------------------------------------
>>
>> [root at opennebula-node1 ~]# virsh destroy 44
>> Domain 44 destroyed
>> ------------------------------------------------------------------------
>>
>>
>> It can no longer be displayed by the virsh command:
>>
>>
>> --------------------------------------------------------
>> [root at opennebula-node1 ~]# virsh list --all
>> Id Name State
>> ----------------------------------------------------
>> 43 one-111 running
>> 48 one-123 running
>> - centos5.6.kvm shut off
>> -----------------------------------------------------------
>>
>> How do I fix this?
>>
>>
>>
>> Best Wishes!
>> Dennis
>>
>> From: Sharuzzaman Ahmat Raslan
>> Date: 2013-11-12 11:48
>> To: caohf
>> CC: users at lists.opennebula.org
>> Subject: Re: Re: [one-users] How to manage vm created by opennebula by
>> virsh tools?
>> Hi Dennis,
>>
>>
>> OpenNebula still uses libvirt for KVM-based nodes.
>>
>>
>> You can still stop/start the VM using the normal virsh commands, such as
>> virsh shutdown, virsh start, virsh attach-device, etc.
>>
>>
>> But acting on the VM directly like that can cause it not to shut down
>> properly when you later run onevm shutdown from OpenNebula. That is what I
>> have experienced before.
>>
>>
>> Thanks.
>>
>>
>>
>>
>>
>>
>>
>> On Tue, Nov 12, 2013 at 11:34 AM, caohf <caohf at wedogame.com> wrote:
>>
>> Hi :
>> I have another system that uses virsh (libvirt) to manage the KVM VMs; while
>> using OpenNebula, I also want to keep using my old system.
>> OpenNebula provided libvirt drivers in older versions.
>>
>>
>>
>>
>> Best Wishes!
>> Dennis
>>
>> From: Sharuzzaman Ahmat Raslan
>> Date: 2013-11-12 11:13
>> To: caohf
>> CC: users at lists.opennebula.org
>> Subject: Re: [one-users] How to manage vm created by opennebula by virsh
>> tools?
>> Hi Dennis,
>>
>>
>> What exactly do you want to achieve?
>>
>>
>> Thanks.
>>
>>
>>
>>
>>
>>
>> On Tue, Nov 12, 2013 at 9:58 AM, caohf <caohf at wedogame.com> wrote:
>>
>> Dear All:
>> How can I manage a VM created by OpenNebula with the virsh tool?
>>
>>
>>
>>
>> Best Wishes!
>> Dennis
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>>
>>
>>
>>
>> --
>> Sharuzzaman Ahmat Raslan
>>
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at lists.opennebula.org
>> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Wed, 13 Nov 2013 10:22:40 +0900
>> From: ??? <jaekeun0208 at gmail.com>
>> To: Javier Fontan <jfontan at opennebula.org>
>> Cc: Users OpenNebula <users at lists.opennebula.org>
>> Subject: Re: [one-users] how to reset VM ID???
>> Message-ID:
>> <CAPvw6THpk-mvyvcacUN_O9PO=pGA_z1h5yTya1xaYz-1dG8RYw at mail.gmail.com>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Should I open all ports from 5900 to 65535?
>>
>>
>>
>>
>> 2013/11/12 Javier Fontan <jfontan at opennebula.org>
>>
>> > There's no way to reset the ID, and we discourage it. The code to
>> > generate the VNC port is
>> >
>> > --8<------
>> > int limit = 65535;
>> > oss << ( base_port + ( oid % (limit - base_port) ));
>> > ------>8--
>> >
>> > base_port is 5900 by default. You should not have problems with the
>> > port, as it wraps back to 5900 after reaching 65535.
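>> > For instance, the same arithmetic can be checked from a shell; the VM ID
>> > 10400 below is just the figure you mention, not a verified value:
>> >
>> > --8<------
>> > $ oid=10400; base_port=5900; limit=65535
>> > $ echo $(( base_port + ( oid % (limit - base_port) ) ))
>> > 16300
>> > ------>8--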
>> >
>> >
>> >
>> > On Tue, Nov 12, 2013 at 9:35 AM, ??? <jaekeun0208 at gmail.com> wrote:
>> > > Dear All
>> > >
>> > > How do I reset the VM ID?
>> > >
>> > > I have already exceeded 10400...
>> > >
>> > > Actually, I want to control the VNC port.
>> > > I saw the VM log file in qemu. It shows me that the VNC port is the same
>> > > as the VM ID.
>> > >
>> > > If I specify the VNC port in the OpenNebula template, I can't create two
>> > > or more VMs with the same template because of VNC port duplication.
>> > >
>> > > thanks :D
>> > >
>> > > _______________________________________________
>> > > Users mailing list
>> > > Users at lists.opennebula.org
>> > > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>> > >
>> >
>> >
>> >
>> > --
>> > Javier Fontán Muiños
>> > Developer
>> > OpenNebula - The Open Source Toolkit for Data Center Virtualization
>> > www.OpenNebula.org | @OpenNebula | github.com/jfontan
>> >
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Tue, 12 Nov 2013 23:43:45 -0500
>> From: Catalina Quinde <catalinaquinde at gmail.com>
>> To: users at lists.opennebula.org
>> Subject: Re: [one-users] VM in Opennebula for Esxi 5.1 failed
>> Message-ID:
>> <CAPgz++zva8tb3hBfzJqazCSjC+QF7SqgdOZrJBQrGgbn=z9UJQ at mail.gmail.com>
>> Content-Type: text/plain; charset="iso-8859-1"
>>
>> Tino,
>>
>> In my case ESXi is installed on a flash drive, and the computer's hard disk
>> has partitions with several other hypervisors.
>>
>> I used the vSphere client to create datastores 102 and 103 with the options
>> Add Datastore, Disk/LUN..., but the screen where you select the hard disk on
>> which to create the datastores shows the computer's original hard drive and
>> not the flash drive where the datastore should really be added; then there
>> is an error message when creating the datastore.
>>
>> Can not change the host configuration
>> Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" for object
>> "ha-datastoresystem" on ESXi "192 168 147 131" failed....
>>
>> Esxi is installed in flash memory.
>>
>> Regards, Caty.
>>
>>
>> 2013/11/12 < users-request at lists.opennebula.org>
>>
>> >
>> >
>> > Today's Topics:
>> >
>> > 1. Re: VM in Opennebula for Esxi 5.1 failed (Tino Vazquez)
>> >
>> >
>> > ----------------------------------------------------------------------
>> >
>> > Message: 1
>> > Date: Tue, 12 Nov 2013 18:36:43 +0100
>> > From: Tino Vazquez <tinova79 at gmail.com>
>> > To: Catalina Quinde <catalinaquinde at gmail.com>
>> > Cc: users <users at lists.opennebula.org>
>> > Subject: Re: [one-users] VM in Opennebula for Esxi 5.1 failed
>> > Message-ID:
>> > <
>> > CAHfKwc2LoKDt97kPZAjB4tCX5RqxddmbvBDRZqM6mauT1DNUvA at mail.gmail.com>
>> > Content-Type: text/plain; charset=ISO-8859-1
>> >
>> > Hi Catalina,
>> >
>> > You need to mount both datastores (102 and 103) on the ESX host so it can
>> > access them. So, the following paths should be present on the
>> > ESX host:
>> >
>> > * /vmfs/volumes/102
>> > * /vmfs/volumes/103
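>> > As a rough sketch of how to check this (and, if the datastores are exported
>> > over NFS, how to mount them), where the NFS server name and export paths are
>> > placeholders, not taken from this thread:
>> >
>> > --8<------
>> > $ ssh root@192.168.147.131 "ls -ld /vmfs/volumes/102 /vmfs/volumes/103"
>> > $ ssh root@192.168.147.131 "esxcli storage nfs add --host=nfs-server --share=/export/102 --volume-name=102"
>> > $ ssh root@192.168.147.131 "esxcli storage nfs add --host=nfs-server --share=/export/103 --volume-name=103"
>> > ------>8--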
>> >
>> > Regards,
>> >
>> > -Tino
>> > --
>> > Constantino Vázquez Blanco, PhD, MSc
>> > Senior Infrastructure Architect at C12G Labs
>> > www.OpenNebula.org | @tinova79 | es.linkedin.com/in/tinova
>> >
>> >
>> > On Tue, Nov 12, 2013 at 4:52 PM, Catalina Quinde
>> > <catalinaquinde at gmail.com> wrote:
>> > > Tino, as I explained, there are no directories 102 and 103 created in
>> > > /vmfs/volumes on the ESXi node.
>> > >
>> > > Thanks Tino.
>> > >
>> > >
>> > > 2013/11/12 <users-request at lists.opennebula.org>
>> > >>
>> > >>
>> > >>
>> > >> Today's Topics:
>> > >>
>> > >> 1. Re: VM in Opennebula for Esxi 5.1 failed (Tino Vazquez)
>> > >>
>> > >>
>> > >>
>> ----------------------------------------------------------------------
>> > >>
>> > >> Message: 1
>> > >> Date: Tue, 12 Nov 2013 15:16:23 +0100
>> > >> From: Tino Vazquez <cvazquez at c12g.com>
>> > >> To: Catalina Quinde <catalinaquinde at gmail.com>
>> > >> Cc: users <users at lists.opennebula.org>
>> > >> Subject: Re: [one-users] VM in Opennebula for Esxi 5.1 failed
>> > >> Message-ID:
>> > >>
>> > >> <CAHfKwc1w930-f9oDhbLRG4GEzK9Au9HrC0D12nO8bjKqqF23dg at mail.gmail.com>
>> > >> Content-Type: text/plain; charset=ISO-8859-1
>> > >>
>> > >> Hi Catalina,
>> > >>
>> > >> Could you make sure that datastores 102 and 103 are mounted on the ESX
>> > >> hosts?
>> > >>
>> > >> Regards,
>> > >>
>> > >> -Tino
>> > >>
>> > >> --
>> > >> OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >>
>> > >> --
>> > >> Constantino Vázquez Blanco, PhD, MSc
>> > >> Senior Infrastructure Architect at C12G Labs
>> > >> www.c12g.com | @C12G | es.linkedin.com/in/tinova
>> > >>
>> > >> --
>> > >> Confidentiality Warning: The information contained in this e-mail and
>> > >> any accompanying documents, unless otherwise expressly indicated, is
>> > >> confidential and privileged, and is intended solely for the person
>> > >> and/or entity to whom it is addressed (i.e. those identified in the
>> > >> "To" and "cc" box). They are the property of C12G Labs S.L..
>> > >> Unauthorized distribution, review, use, disclosure, or copying of
>> this
>> > >> communication, or any part thereof, is strictly prohibited and may be
>> > >> unlawful. If you have received this e-mail in error, please notify us
>> > >> immediately by e-mail at abuse at c12g.com and delete the e-mail and
>> > >> attachments and any copy from your system. C12G thanks you for your
>> > >> cooperation.
>> > >>
>> > >>
>> > >> On Tue, Nov 12, 2013 at 10:55 AM, Tino Vazquez <cvazquez at c12g.com>
>> > wrote:
>> > >> > Hi Catalina,
>> > >> >
>> > >> > So, the connectivity to the host seems to be OK, but still the
>> monitor
>> > >> > data doesn't look valid. Let's try to figure out why.
>> > >> >
>> > >> > What's the output of the following (as oneadmin, in the front-end)
>> > >> >
>> > >> > $ ssh 192.168.147.131 "df -m | grep /vmfs/volumes/103"
>> > >> >
>> > >> > Regards,
>> > >> >
>> > >> > -Tino
>> > >> > --
>> > >> > OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >> >
>> > >> > --
>> > >> > Constantino Vázquez Blanco, PhD, MSc
>> > >> > Senior Infrastructure Architect at C12G Labs
>> > >> > www.c12g.com | @C12G | es.linkedin.com/in/tinova
>> > >> >
>> > >> >
>> > >> >
>> > >> > On Mon, Nov 11, 2013 at 8:20 PM, Catalina Quinde
>> > >> > <catalinaquinde at gmail.com> wrote:
>> > >> >> Hi Tino, this displays
>> > >> >>
>> > >> >> oneadmin at ubuntuOpNeb:~$ bash -x
>> > >> >> /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >>
>> > >> >>
>> >
>> PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5zc2hfZGlFc3hpPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxEU19NQUQ+dm1mczwvRFNfTUFEPjxUTV9NQUQ+dm1mczwvVE1fTUFEPjxCQVNFX1BBVEg+L3ZtZnMvdm9sdW1lcy8xMDM8L0JBU0VfUEFUSD48VFlQRT4wPC9UWVBFPjxESVNLX1RZUEU+MDwvRElTS19UWVBFPjxDTFVTVEVSX0lEPjEwMDwvQ0xVU1RFUl9JRD48Q0xVU1RFUj5Fc3hpY2x1czwvQ0xVU1RFUj48VE9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDREFUQVsxOTIuMTY4LjE0Ny4xMzFdXT48L0JSSURHRV9MSVNUPjxEU19NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvRFNfTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvVE1fTUFEPjxUWVBFPjwh
>> > W0N
>> > >>
>> > >>
>> >
>> EQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjwvVEVNUExBVEU+PC9EQVRBU1RPUkU+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+
>> > >> >> 103
>> > >> >> + '[' -z '' ']'
>> > >> >> + LIB_LOCATION=/usr/lib/one
>> > >> >> + . /usr/lib/one/sh/scripts_common.sh
>> > >> >> ++ export LANG=C
>> > >> >> ++ LANG=C
>> > >> >> ++ export
>> > >> >>
>> > >> >>
>> >
>> PATH=/bin:/sbin:/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
>> > >> >> ++
>> > >> >>
>> > >> >>
>> >
>> PATH=/bin:/sbin:/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
>> > >> >> ++ AWK=awk
>> > >> >> ++ BASH=bash
>> > >> >> ++ CUT=cut
>> > >> >> ++ DATE=date
>> > >> >> ++ DD=dd
>> > >> >> ++ DU=du
>> > >> >> ++ GREP=grep
>> > >> >> ++ ISCSIADM=iscsiadm
>> > >> >> ++ LVCREATE=lvcreate
>> > >> >> ++ LVREMOVE=lvremove
>> > >> >> ++ LVRENAME=lvrename
>> > >> >> ++ LVS=lvs
>> > >> >> ++ LN=ln
>> > >> >> ++ MD5SUM=md5sum
>> > >> >> ++ MKFS=mkfs
>> > >> >> ++ MKISOFS=genisoimage
>> > >> >> ++ MKSWAP=mkswap
>> > >> >> ++ QEMU_IMG=qemu-img
>> > >> >> ++ RADOS=rados
>> > >> >> ++ RBD=rbd
>> > >> >> ++ READLINK=readlink
>> > >> >> ++ RM=rm
>> > >> >> ++ SCP=scp
>> > >> >> ++ SED=sed
>> > >> >> ++ SSH=ssh
>> > >> >> ++ SUDO=sudo
>> > >> >> ++ SYNC=sync
>> > >> >> ++ TAR=tar
>> > >> >> ++ TGTADM=tgtadm
>> > >> >> ++ TGTADMIN=tgt-admin
>> > >> >> ++ TGTSETUPLUN=tgt-setup-lun-one
>> > >> >> ++ VMKFSTOOLS=vmkfstools
>> > >> >> ++ WGET=wget
>> > >> >> +++ uname -s
>> > >> >> ++ '[' xLinux = xLinux ']'
>> > >> >> ++ SED='sed -r'
>> > >> >> +++ basename /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >> ++ SCRIPT_NAME=monitor
>> > >> >> ++ dirname /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >> + DRIVER_PATH=/var/lib/one/remotes/datastore/vmfs
>> > >> >> + source /var/lib/one/remotes/datastore/vmfs/../libfs.sh
>> > >> >> +
>> > >> >>
>> > >> >>
>> >
>> DRV_ACTION=PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5zc2hfZGlFc3hpPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxEU19NQUQ+dm1mczwvRFNfTUFEPjxUTV9NQUQ+dm1mczwvVE1fTUFEPjxCQVNFX1BBVEg+L3ZtZnMvdm9sdW1lcy8xMDM8L0JBU0VfUEFUSD48VFlQRT4wPC9UWVBFPjxESVNLX1RZUEU+MDwvRElTS19UWVBFPjxDTFVTVEVSX0lEPjEwMDwvQ0xVU1RFUl9JRD48Q0xVU1RFUj5Fc3hpY2x1czwvQ0xVU1RFUj48VE9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDREFUQVsxOTIuMTY4LjE0Ny4xMzFdXT48L0JSSURHRV9MSVNUPjxEU19NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvRFNfTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvVE1fTUFEP
>> > jxU
>> > >>
>> > >>
>> >
>> WVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjwvVEVNUExBVEU+PC9EQVRBU1RPUkU+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+
>> > >> >> + ID=103
>> > >> >> + XPATH='/var/lib/one/remotes/datastore/vmfs/../xpath.rb -b
>> > >> >>
>> > >> >>
>> >
>> PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5zc2hfZGlFc3hpPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxEU19NQUQ+dm1mczwvRFNfTUFEPjxUTV9NQUQ+dm1mczwvVE1fTUFEPjxCQVNFX1BBVEg+L3ZtZnMvdm9sdW1lcy8xMDM8L0JBU0VfUEFUSD48VFlQRT4wPC9UWVBFPjxESVNLX1RZUEU+MDwvRElTS19UWVBFPjxDTFVTVEVSX0lEPjEwMDwvQ0xVU1RFUl9JRD48Q0xVU1RFUj5Fc3hpY2x1czwvQ0xVU1RFUj48VE9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDREFUQVsxOTIuMTY4LjE0Ny4xMzFdXT48L0JSSURHRV9MSVNUPjxEU19NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvRFNfTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvVE1fTUFEPjxUWVBFPjwh
>> > W0N
>> > >>
>> > >>
>> >
>> EQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjwvVEVNUExBVEU+PC9EQVRBU1RPUkU+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+'
>> > >> >> + unset i XPATH_ELEMENTS
>> > >> >> + IFS=
>> > >> >> + read -r -d '' element
>> > >> >> ++ /var/lib/one/remotes/datastore/vmfs/../xpath.rb -b
>> > >> >>
>> > >> >>
>> >
>> PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5zc2hfZGlFc3hpPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxEU19NQUQ+dm1mczwvRFNfTUFEPjxUTV9NQUQ+dm1mczwvVE1fTUFEPjxCQVNFX1BBVEg+L3ZtZnMvdm9sdW1lcy8xMDM8L0JBU0VfUEFUSD48VFlQRT4wPC9UWVBFPjxESVNLX1RZUEU+MDwvRElTS19UWVBFPjxDTFVTVEVSX0lEPjEwMDwvQ0xVU1RFUl9JRD48Q0xVU1RFUj5Fc3hpY2x1czwvQ0xVU1RFUj48VE9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDREFUQVsxOTIuMTY4LjE0Ny4xMzFdXT48L0JSSURHRV9MSVNUPjxEU19NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvRFNfTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvVE1fTUFEPjxUWVBFPjwh
>> > W0N
>> > >>
>> > >>
>> >
>> EQVRBW0lNQUdFX0RTXV0+PC9UWVBFPjwvVEVNUExBVEU+PC9EQVRBU1RPUkU+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+
>> > >> >> /DS_DRIVER_ACTION_DATA/DATASTORE/BASE_PATH
>> > >> >> /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST
>> > >> >> + XPATH_ELEMENTS[i++]=/vmfs/volumes/103
>> > >> >> + IFS=
>> > >> >> + read -r -d '' element
>> > >> >> + XPATH_ELEMENTS[i++]=192.168.147.131
>> > >> >> + IFS=
>> > >> >> + read -r -d '' element
>> > >> >> + BASE_PATH=/vmfs/volumes/103
>> > >> >> + BRIDGE_LIST=192.168.147.131
>> > >> >> ++ get_destination_host 103
>> > >> >> ++ HOSTS_ARRAY=($BRIDGE_LIST)
>> > >> >> +++ expr 103 % 1
>> > >> >> ++ ARRAY_INDEX=0
>> > >> >> ++ echo 192.168.147.131
>> > >> >> + HOST=192.168.147.131
>> > >> >> ++ cat
>> > >> >> + MONITOR_SCRIPT='USED_MB=$(du -sLm /vmfs/volumes/103 2>/dev/null
>> |
>> > cut
>> > >> >> -f1)
>> > >> >>
>> > >> >> DF_STR=$(df -m | grep /vmfs/volumes/103 | sed '\''s/ \+/:/g'\'')
>> > >> >>
>> > >> >> TOTAL_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 2)
>> > >> >> FREE_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 4)
>> > >> >>
>> > >> >> echo "USED_MB=$USED_MB"
>> > >> >> echo "TOTAL_MB=$TOTAL_MB"
>> > >> >> echo "FREE_MB=$FREE_MB"'
>> > >> >> ++ ssh_monitor_and_log 192.168.147.131 'USED_MB=$(du -sLm
>> > >> >> /vmfs/volumes/103
>> > >> >> 2>/dev/null | cut -f1)
>> > >> >>
>> > >> >> DF_STR=$(df -m | grep /vmfs/volumes/103 | sed '\''s/ \+/:/g'\'')
>> > >> >>
>> > >> >> TOTAL_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 2)
>> > >> >> FREE_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 4)
>> > >> >>
>> > >> >> echo "USED_MB=$USED_MB"
>> > >> >> echo "TOTAL_MB=$TOTAL_MB"
>> > >> >> echo "FREE_MB=$FREE_MB"'
>> > >> >> + MONITOR_DATA='+++ ssh 192.168.147.131 sh -s
>> > >> >> ++ SSH_EXEC_OUT='\''USED_MB=
>> > >> >> TOTAL_MB=
>> > >> >> FREE_MB='\''
>> > >> >> ++ SSH_EXEC_RC=0
>> > >> >> ++ '\''['\'' 0 -ne 0 '\'']'\''
>> > >> >> ++ echo USED_MB= TOTAL_MB= FREE_MB=
>> > >> >> USED_MB= TOTAL_MB= FREE_MB='
>> > >> >> + MONITOR_STATUS=0
>> > >> >> + '[' 0 = 0 ']'
>> > >> >> + tr ' ' '\n'
>> > >> >> + echo '+++ ssh 192.168.147.131 sh -s
>> > >> >> ++ SSH_EXEC_OUT='\''USED_MB=
>> > >> >> TOTAL_MB=
>> > >> >> FREE_MB='\''
>> > >> >> ++ SSH_EXEC_RC=0
>> > >> >> ++ '\''['\'' 0 -ne 0 '\'']'\''
>> > >> >> ++ echo USED_MB= TOTAL_MB= FREE_MB=
>> > >> >> USED_MB= TOTAL_MB= FREE_MB='
>> > >> >> +++
>> > >> >> ssh
>> > >> >> 192.168.147.131
>> > >> >> sh
>> > >> >> -s
>> > >> >> ++
>> > >> >> SSH_EXEC_OUT='USED_MB=
>> > >> >> TOTAL_MB=
>> > >> >> FREE_MB='
>> > >> >> ++
>> > >> >> SSH_EXEC_RC=0
>> > >> >> ++
>> > >> >> '['
>> > >> >> 0
>> > >> >> -ne
>> > >> >> 0
>> > >> >> ']'
>> > >> >> ++
>> > >> >> echo
>> > >> >> USED_MB=
>> > >> >> TOTAL_MB=
>> > >> >> FREE_MB=
>> > >> >> USED_MB=
>> > >> >> TOTAL_MB=
>> > >> >> FREE_MB=
>> > >> >> oneadmin at ubuntuOpNeb:~$
>> > >> >>
>> > >> >>
>> > >> >>
>> > >> >> 2013/11/11 <users-request at lists.opennebula.org>
>> > >> >>>
>> > >> >>> Send Users mailing list submissions to
>> > >> >>> users at lists.opennebula.org
>> > >> >>>
>> > >> >>> To subscribe or unsubscribe via the World Wide Web, visit
>> > >> >>>
>> > http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>> > >> >>> or, via email, send a message with subject or body 'help' to
>> > >> >>> users-request at lists.opennebula.org
>> > >> >>>
>> > >> >>> You can reach the person managing the list at
>> > >> >>> users-owner at lists.opennebula.org
>> > >> >>>
>> > >> >>> When replying, please edit your Subject line so it is more
>> specific
>> > >> >>> than "Re: Contents of Users digest..."
>> > >> >>>
>> > >> >>>
>> > >> >>> Today's Topics:
>> > >> >>>
>> > >> >>> 1. Re: VM in Opennebula for Esxi 5.1 failed (Tino Vazquez)
>> > >> >>>
>> > >> >>>
>> > >> >>>
>> > ----------------------------------------------------------------------
>> > >> >>>
>> > >> >>> Message: 1
>> > >> >>> Date: Mon, 11 Nov 2013 18:20:21 +0100
>> > >> >>> From: Tino Vazquez <cvazquez at c12g.com>
>> > >> >>> To: Catalina Quinde <catalinaquinde at gmail.com>
>> > >> >>> Cc: users <users at lists.opennebula.org>
>> > >> >>> Subject: Re: [one-users] VM in Opennebula for Esxi 5.1 failed
>> > >> >>> Message-ID:
>> > >> >>>
>> > >> >>> <CAHfKwc3Q1_pp_Xg6p9TujOyELvrSZuzzp=
>> E+FzCKovmEEUaS8A at mail.gmail.com
>> > >
>> > >> >>> Content-Type: text/plain; charset=ISO-8859-1
>> > >> >>>
>> > >> >>> Hi Catalina,
>> > >> >>>
>> > >> >>> You need to execute it as a single line. You can copy it from this
>> > >> >>> gist:
>> > >> >>>
>> > >> >>> https://gist.github.com/jfontan/aea29c477d597f3c97f8
>> > >> >>>
>> > >> >>> Regards,
>> > >> >>>
>> > >> >>> -Tino
>> > >> >>>
>> > >> >>> --
>> > >> >>> OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >> >>>
>> > >> >>> --
>> > >> >>> Constantino Vázquez Blanco, PhD, MSc
>> > >> >>> Senior Infrastructure Architect at C12G Labs
>> > >> >>> www.c12g.com | @C12G | es.linkedin.com/in/tinova
>> > >> >>>
>> > >> >>>
>> > >> >>>
>> > >> >>> On Mon, Nov 11, 2013 at 6:14 PM, Catalina Quinde
>> > >> >>> <catalinaquinde at gmail.com> wrote:
>> > >> >>> > Done; this is what the command displays:
>> > >> >>> >
>> > >> >>> > oneadmin at ubuntuOpNeb:~$ bash -x
>> > >> >>> > /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >>> > + '[' -z '' ']'
>> > >> >>> > + LIB_LOCATION=/usr/lib/one
>> > >> >>> > + . /usr/lib/one/sh/scripts_common.sh
>> > >> >>> > ++ export LANG=C
>> > >> >>> > ++ LANG=C
>> > >> >>> > ++ export
>> > >> >>> >
>> > >> >>> >
>> > >> >>> >
>> >
>> PATH=/bin:/sbin:/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
>> > >> >>> > ++
>> > >> >>> >
>> > >> >>> >
>> > >> >>> >
>> >
>> PATH=/bin:/sbin:/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
>> > >> >>> > ++ AWK=awk
>> > >> >>> > ++ BASH=bash
>> > >> >>> > ++ CUT=cut
>> > >> >>> > ++ DATE=date
>> > >> >>> > ++ DD=dd
>> > >> >>> > ++ DU=du
>> > >> >>> > ++ GREP=grep
>> > >> >>> > ++ ISCSIADM=iscsiadm
>> > >> >>> > ++ LVCREATE=lvcreate
>> > >> >>> > ++ LVREMOVE=lvremove
>> > >> >>> > ++ LVRENAME=lvrename
>> > >> >>> > ++ LVS=lvs
>> > >> >>> > ++ LN=ln
>> > >> >>> > ++ MD5SUM=md5sum
>> > >> >>> > ++ MKFS=mkfs
>> > >> >>> > ++ MKISOFS=genisoimage
>> > >> >>> > ++ MKSWAP=mkswap
>> > >> >>> > ++ QEMU_IMG=qemu-img
>> > >> >>> > ++ RADOS=rados
>> > >> >>> > ++ RBD=rbd
>> > >> >>> > ++ READLINK=readlink
>> > >> >>> > ++ RM=rm
>> > >> >>> > ++ SCP=scp
>> > >> >>> > ++ SED=sed
>> > >> >>> > ++ SSH=ssh
>> > >> >>> > ++ SUDO=sudo
>> > >> >>> > ++ SYNC=sync
>> > >> >>> > ++ TAR=tar
>> > >> >>> > ++ TGTADM=tgtadm
>> > >> >>> > ++ TGTADMIN=tgt-admin
>> > >> >>> > ++ TGTSETUPLUN=tgt-setup-lun-one
>> > >> >>> > ++ VMKFSTOOLS=vmkfstools
>> > >> >>> > ++ WGET=wget
>> > >> >>> > +++ uname -s
>> > >> >>> > ++ '[' xLinux = xLinux ']'
>> > >> >>> > ++ SED='sed -r'
>> > >> >>> > +++ basename /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >>> > ++ SCRIPT_NAME=monitor
>> > >> >>> > ++ dirname /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >>> > + DRIVER_PATH=/var/lib/one/remotes/datastore/vmfs
>> > >> >>> > + source /var/lib/one/remotes/datastore/vmfs/../libfs.sh
>> > >> >>> > + DRV_ACTION=
>> > >> >>> > + ID=
>> > >> >>> > + XPATH='/var/lib/one/remotes/datastore/vmfs/../xpath.rb -b '
>> > >> >>> > + unset i XPATH_ELEMENTS
>> > >> >>> > + IFS=
>> > >> >>> > + read -r -d '' element
>> > >> >>> > ++ /var/lib/one/remotes/datastore/vmfs/../xpath.rb -b
>> > >> >>> > /DS_DRIVER_ACTION_DATA/DATASTORE/BASE_PATH
>> > >> >>> > /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST
>> > >> >>> > /var/lib/one/remotes/datastore/vmfs/../xpath.rb:61: undefined
>> > method
>> > >> >>> > `elements' for nil:NilClass (NoMethodError)
>> > >> >>> > from /var/lib/one/remotes/datastore/vmfs/../xpath.rb:60:in
>> > >> >>> > `each'
>> > >> >>> > from /var/lib/one/remotes/datastore/vmfs/../xpath.rb:60
>> > >> >>> > + BASE_PATH=
>> > >> >>> > + BRIDGE_LIST=
>> > >> >>> > ++ get_destination_host
>> > >> >>> > ++ HOSTS_ARRAY=($BRIDGE_LIST)
>> > >> >>> > +++ expr % 0
>> > >> >>> > expr: syntax error
>> > >> >>> > ++ ARRAY_INDEX=
>> > >> >>> > ++ echo
>> > >> >>> > + HOST=
>> > >> >>> > ++ cat
>> > >> >>> > + MONITOR_SCRIPT='USED_MB=$(du -sLm 2>/dev/null | cut -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''s/ \+/:/g'\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"'
>> > >> >>> > ++ ssh_monitor_and_log 'USED_MB=$(du -sLm 2>/dev/null | cut
>> -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''s/ \+/:/g'\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"'
>> > >> >>> > + MONITOR_DATA='+++ ssh '\''USED_MB=$(du'\'' -sLm
>> > >> >>> > '\''2>/dev/null'\''
>> > >> >>> > '\''|'\'' cut '\''-f1)'\'' '\''DF_STR=$(df'\'' -m '\''|'\''
>> grep
>> > >> >>> > '\''|'\''
>> > >> >>> > sed '\'''\''\'\'''\''s/'\'' '\''\+/:/g'\''\'\'''\'')'\''
>> > >> >>> > '\''TOTAL_MB=$(echo'\'' '\''$DF_STR'\'' '\''|'\'' cut
>> > >> >>> > '\''-d'\''\'\'''\'':'\''\'\'''\'''\'' -f '\''2)'\''
>> > >> >>> > '\''FREE_MB=$(echo'\''
>> > >> >>> > '\''$DF_STR'\'' '\''|'\'' cut
>> > '\''-d'\''\'\'''\'':'\''\'\'''\'''\''
>> > >> >>> > -f
>> > >> >>> > '\''4)'\'' echo '\''"USED_MB=$USED_MB"'\'' echo
>> > >> >>> > '\''"TOTAL_MB=$TOTAL_MB"'\''
>> > >> >>> > echo '\''"FREE_MB=$FREE_MB"'\'' sh -s
>> > >> >>> > ++ SSH_EXEC_OUT=
>> > >> >>> > ++ SSH_EXEC_RC=255
>> > >> >>> > ++ '\''['\'' 255 -ne 0 '\'']'\''
>> > >> >>> > ++ log_error '\''Command "" failed: '\''
>> > >> >>> > ++ log_function ERROR '\''Command "" failed: '\''
>> > >> >>> > ++ echo '\''ERROR: monitor: Command "" failed: '\''
>> > >> >>> > ERROR: monitor: Command "" failed:
>> > >> >>> > ++ error_message '\''Cannot monitor USED_MB=$(du -sLm
>> > 2>/dev/null |
>> > >> >>> > cut
>> > >> >>> > -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''\'\'''\''s/
>> \+/:/g'\''\'\'''\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\''\'\'''\'':'\''\'\'''\'' -f
>> 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\''\'\'''\'':'\''\'\'''\'' -f
>> 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"'\''
>> > >> >>> > ++ echo '\''ERROR MESSAGE --8<------'\''
>> > >> >>> > ERROR MESSAGE --8<------
>> > >> >>> > ++ echo '\''Cannot monitor USED_MB=$(du -sLm 2>/dev/null | cut
>> > -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''\'\'''\''s/
>> \+/:/g'\''\'\'''\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\''\'\'''\'':'\''\'\'''\'' -f
>> 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\''\'\'''\'':'\''\'\'''\'' -f
>> 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"'\''
>> > >> >>> > Cannot monitor USED_MB=$(du -sLm 2>/dev/null | cut -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''s/ \+/:/g'\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"
>> > >> >>> > ++ echo '\''ERROR MESSAGE ------>8--'\''
>> > >> >>> > ERROR MESSAGE ------>8--
>> > >> >>> > ++ exit 255'
>> > >> >>> > + MONITOR_STATUS=255
>> > >> >>> > + '[' 255 = 0 ']'
>> > >> >>> > + echo '+++ ssh '\''USED_MB=$(du'\'' -sLm '\''2>/dev/null'\''
>> > >> >>> > '\''|'\''
>> > >> >>> > cut
>> > >> >>> > '\''-f1)'\'' '\''DF_STR=$(df'\'' -m '\''|'\'' grep '\''|'\''
>> sed
>> > >> >>> > '\'''\''\'\'''\''s/'\'' '\''\+/:/g'\''\'\'''\'')'\''
>> > >> >>> > '\''TOTAL_MB=$(echo'\''
>> > >> >>> > '\''$DF_STR'\'' '\''|'\'' cut
>> > '\''-d'\''\'\'''\'':'\''\'\'''\'''\''
>> > >> >>> > -f
>> > >> >>> > '\''2)'\'' '\''FREE_MB=$(echo'\'' '\''$DF_STR'\'' '\''|'\'' cut
>> > >> >>> > '\''-d'\''\'\'''\'':'\''\'\'''\'''\'' -f '\''4)'\'' echo
>> > >> >>> > '\''"USED_MB=$USED_MB"'\'' echo '\''"TOTAL_MB=$TOTAL_MB"'\''
>> echo
>> > >> >>> > '\''"FREE_MB=$FREE_MB"'\'' sh -s
>> > >> >>> > ++ SSH_EXEC_OUT=
>> > >> >>> > ++ SSH_EXEC_RC=255
>> > >> >>> > ++ '\''['\'' 255 -ne 0 '\'']'\''
>> > >> >>> > ++ log_error '\''Command "" failed: '\''
>> > >> >>> > ++ log_function ERROR '\''Command "" failed: '\''
>> > >> >>> > ++ echo '\''ERROR: monitor: Command "" failed: '\''
>> > >> >>> > ERROR: monitor: Command "" failed:
>> > >> >>> > ++ error_message '\''Cannot monitor USED_MB=$(du -sLm
>> > 2>/dev/null |
>> > >> >>> > cut
>> > >> >>> > -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''\'\'''\''s/
>> \+/:/g'\''\'\'''\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\''\'\'''\'':'\''\'\'''\'' -f
>> 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\''\'\'''\'':'\''\'\'''\'' -f
>> 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"'\''
>> > >> >>> > ++ echo '\''ERROR MESSAGE --8<------'\''
>> > >> >>> > ERROR MESSAGE --8<------
>> > >> >>> > ++ echo '\''Cannot monitor USED_MB=$(du -sLm 2>/dev/null | cut
>> > -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''\'\'''\''s/
>> \+/:/g'\''\'\'''\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\''\'\'''\'':'\''\'\'''\'' -f
>> 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\''\'\'''\'':'\''\'\'''\'' -f
>> 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"'\''
>> > >> >>> > Cannot monitor USED_MB=$(du -sLm 2>/dev/null | cut -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''s/ \+/:/g'\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"
>> > >> >>> > ++ echo '\''ERROR MESSAGE ------>8--'\''
>> > >> >>> > ERROR MESSAGE ------>8--
>> > >> >>> > ++ exit 255'
>> > >> >>> > +++ ssh 'USED_MB=$(du' -sLm '2>/dev/null' '|' cut '-f1)'
>> > >> >>> > 'DF_STR=$(df'
>> > >> >>> > -m
>> > >> >>> > '|' grep '|' sed ''\''s/' '\+/:/g'\'')' 'TOTAL_MB=$(echo'
>> > '$DF_STR'
>> > >> >>> > '|'
>> > >> >>> > cut
>> > >> >>> > '-d'\'':'\''' -f '2)' 'FREE_MB=$(echo' '$DF_STR' '|' cut
>> > >> >>> > '-d'\'':'\'''
>> > >> >>> > -f
>> > >> >>> > '4)' echo '"USED_MB=$USED_MB"' echo '"TOTAL_MB=$TOTAL_MB"' echo
>> > >> >>> > '"FREE_MB=$FREE_MB"' sh -s
>> > >> >>> > ++ SSH_EXEC_OUT=
>> > >> >>> > ++ SSH_EXEC_RC=255
>> > >> >>> > ++ '[' 255 -ne 0 ']'
>> > >> >>> > ++ log_error 'Command "" failed: '
>> > >> >>> > ++ log_function ERROR 'Command "" failed: '
>> > >> >>> > ++ echo 'ERROR: monitor: Command "" failed: '
>> > >> >>> > ERROR: monitor: Command "" failed:
>> > >> >>> > ++ error_message 'Cannot monitor USED_MB=$(du -sLm
>> 2>/dev/null |
>> > >> >>> > cut
>> > >> >>> > -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''s/ \+/:/g'\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"'
>> > >> >>> > ++ echo 'ERROR MESSAGE --8<------'
>> > >> >>> > ERROR MESSAGE --8<------
>> > >> >>> > ++ echo 'Cannot monitor USED_MB=$(du -sLm 2>/dev/null | cut
>> -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed '\''s/ \+/:/g'\'')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d'\'':'\'' -f 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"'
>> > >> >>> > Cannot monitor USED_MB=$(du -sLm 2>/dev/null | cut -f1)
>> > >> >>> >
>> > >> >>> > DF_STR=$(df -m | grep | sed 's/ \+/:/g')
>> > >> >>> >
>> > >> >>> > TOTAL_MB=$(echo $DF_STR | cut -d':' -f 2)
>> > >> >>> > FREE_MB=$(echo $DF_STR | cut -d':' -f 4)
>> > >> >>> >
>> > >> >>> > echo "USED_MB=$USED_MB"
>> > >> >>> > echo "TOTAL_MB=$TOTAL_MB"
>> > >> >>> > echo "FREE_MB=$FREE_MB"
>> > >> >>> > ++ echo 'ERROR MESSAGE ------>8--'
>> > >> >>> > ERROR MESSAGE ------>8--
>> > >> >>> > ++ exit 255
>> > >> >>> > + exit 255
>> > >> >>> >
>> > >> >>> >
>> > >> >>> >
>> > >> >>> > 2013/11/11 <users-request at lists.opennebula.org>
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >> Today's Topics:
>> > >> >>> >>
>> > >> >>> >> 1. Re: VM in Opennebula for Esxi 5.1 failed (Hans-Joachim
>> > >> >>> >> Ehlers)
>> > >> >>> >>
>> > >> >>> >> 2. Re: VM in Opennebula for Esxi 5.1 failed (Tino Vazquez)
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> > ----------------------------------------------------------------------
>> > >> >>> >>
>> > >> >>> >> Message: 1
>> > >> >>> >> Date: Mon, 11 Nov 2013 17:55:24 +0100
>> > >> >>> >> From: Hans-Joachim Ehlers <HansJoachim.Ehlers at eumetsat.int>
>> > >> >>> >> To: 'Catalina Quinde' <catalinaquinde at gmail.com>,
>> > >> >>> >> "Users at lists.opennebula.org"
>> > >> >>> >> <Users at lists.opennebula.org>
>> > >> >>> >>
>> > >> >>> >> Subject: Re: [one-users] VM in Opennebula for Esxi 5.1 failed
>> > >> >>> >> Message-ID:
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >> <
>> >
>> A3ADF8C56ADA0F45BC34F904A6514E280151E8A95E66 at EXW10.eum.root.eumetsat.int>
>> > >> >>> >>
>> > >> >>> >> Content-Type: text/plain; charset="windows-1252"
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >> Use the following ONELINER ....
>> > >> >>> >>
>> > >> >>> >> $ bash -x /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >>> >>
>> > >> >>> >> From: users-bounces at lists.opennebula.org
>> > >> >>> >> [mailto:users-bounces at lists.opennebula.org] On Behalf Of
>> > Catalina
>> > >> >>> >> Quinde
>> > >> >>> >> Sent: Monday, November 11, 2013 5:50 PM
>> > >> >>> >> To: Users at lists.opennebula.org
>> > >> >>> >> Subject: Re: [one-users] VM in Opennebula for Esxi 5.1 failed
>> > >> >>> >>
>> > >> >>> >> Hi Tino,
>> > >> >>> >> The command bash -x /var/lib/one/remotes can't be executed because
>> > >> >>> >> it points to a directory:
>> > >> >>> >>
>> > >> >>> >> oneadmin at ubuntuOpNeb:~$ bash -x /var/lib/one/remotes/
>> > >> >>> >> /var/lib/one/remotes/: /var/lib/one/remotes/: es un directorio
>> > >> >>> >> This directory contains:
>> > >> >>> >> oneadmin at ubuntuOpNeb:~$ ls /var/lib/one/remotes/
>> > >> >>> >> auth datastore hooks im scripts_common.rb
>> scripts_common.sh
>> > >> >>> >> tm
>> > >> >>> >> vmm
>> > >> >>> >> vnm
>> > >> >>> >>
>> > >> >>> >> ------------------------------
>> > >> >>> >>
>> > >> >>> >> Message: 2
>> > >> >>> >> Date: Mon, 11 Nov 2013 17:58:35 +0100
>> > >> >>> >>
>> > >> >>> >> From: Tino Vazquez <cvazquez at c12g.com>
>> > >> >>> >> To: Catalina Quinde <catalinaquinde at gmail.com>
>> > >> >>> >> Cc: users <Users at lists.opennebula.org>
>> > >> >>> >> Subject: Re: [one-users] VM in Opennebula for Esxi 5.1 failed
>> > >> >>> >> Message-ID:
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >> <
>> > CAHfKwc21-kkPabWgtNwQs07EbbAiC9b8zzFJwMsRt3EXb0umSg at mail.gmail.com>
>> > >> >>> >> Content-Type: text/plain; charset=ISO-8859-1
>> > >> >>> >>
>> > >> >>> >> Hi,
>> > >> >>> >>
>> > >> >>> >> The following should be executed as a single line:
>> > >> >>> >>
>> > >> >>> >> ----8<-----
>> > >> >>> >> $ bash -x /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> >
>> PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5zc2hfZGlFc3hpPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxEU19NQUQ+dm1mczwvRFNfTUFEPjxUTV9NQUQ+dm1mczwvVE1fTUFEPjxCQVNFX1BBVEg+L3ZtZnMvdm9sdW1lcy8xMDM8L0JBU0VfUEFUSD48VFlQRT4wPC9UWVBFPjxESVNLX1RZUEU+MDwvRElTS19UWVBFPjxDTFVTVEVSX0lEPjEwMDwvQ0xVU1RFUl9JRD48Q0xVU1RFUj5Fc3hpY2x1czwvQ0xVU1RFUj48VE9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDREFUQVsxOTIuMTY4LjE0Ny4xMzFdXT48L0JSSURHRV9MSVNUPjxEU19NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvRFNfTUFEPjxUTV9NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvVE1fTUFEPjxUWVBF
>> > Pjw
>> > >> hW0N
>> > >> >>> EQV
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> >
>> RBW0lNQUdFX0RTXV0+PC9UWVBFPjwvVEVNUExBVEU+PC9EQVRBU1RPUkU+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+
>> > >> >>> >> 103
>> > >> >>> >> --->8-----
>> > >> >>> >>
>> > >> >>> >> Regards,
>> > >> >>> >>
>> > >> >>> >> -T
>> > >> >>> >> --
>> > >> >>> >> OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >> >>> >>
>> > >> >>> >> --
>> > >> >>> >> Constantino Vázquez Blanco, PhD, MSc
>> > >> >>> >> Senior Infrastructure Architect at C12G Labs
>> > >> >>> >> www.c12g.com | @C12G | es.linkedin.com/in/tinova
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >>
>> > >> >>> >> On Mon, Nov 11, 2013 at 5:50 PM, Catalina Quinde
>> > >> >>> >> <catalinaquinde at gmail.com> wrote:
>> > >> >>> >> > Hi Tino,
>> > >> >>> >> >
>> > >> >>> >> > The command bash -x /var/lib/one/remotes can't be executed
>> > >> >>> >> > because it points to a directory:
>> > >> >>> >> >
>> > >> >>> >> > oneadmin at ubuntuOpNeb:~$ bash -x /var/lib/one/remotes/
>> > >> >>> >> > /var/lib/one/remotes/: /var/lib/one/remotes/: es un
>> directorio
>> > >> >>> >> >
>> > >> >>> >> > This directory contains:
>> > >> >>> >> >
>> > >> >>> >> > oneadmin at ubuntuOpNeb:~$ ls /var/lib/one/remotes/
>> > >> >>> >> > auth datastore hooks im scripts_common.rb
>> > scripts_common.sh
>> > >> >>> >> > tm
>> > >> >>> >> > vmm
>> > >> >>> >> > vnm
>> > >> >>> >> >
>> > >> >>> >> > Regards, Caty.
>> > >> >>> >> >
>> > >> >>> >> > 2013/11/11 Tino Vazquez <cvazquez at c12g.com>
>> > >> >>> >> >>
>> > >> >>> >> >> Hi,
>> > >> >>> >> >>
>> > >> >>> >> >> The setup looks correct; both DS 102 and 103 have the correct
>> > >> >>> >> >> BASE_PATH. About the monitoring error, could you send the output
>> > >> >>> >> >> of the execution of the following as oneadmin on the front-end:
>> > >> >>> >> >>
>> > >> >>> >> >> $ bash -x /var/lib/one/remotes/
>> > >> >>> >> >> datastore/vmfs/monitor
>> > >> >>> >> >> PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> 0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZ
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> WFkbWluPC9HTkFNRT48TkFNRT5zc2hfZGlFc3hpPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xP
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> C9PV05FUl9VPjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xP
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> C9HUk9VUF9VPjxHUk9VUF9NPjA8L0dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wP
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> C9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVSX0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> 05TPjxEU19NQUQ+dm1mczwvRFNfTUFEPjxUTV9NQUQ+dm1mczwvVE1fTUFEPjxCQVNFX1BBVEg+L3ZtZ
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> nMvdm9sdW1lcy8xMDM8L0JBU0VfUEFUSD48VFlQRT4wPC9UWVBFPjxESVNLX1RZUEU+MDwvRElTS19UW
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> VBFPjxDTFVTVEVSX0lEPjEwMDwvQ0xVU1RFUl9JRD48Q0xVU1RFUj5Fc3hpY2x1czwvQ0xVU1RFUj48V
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> E9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CP
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> jxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxCUklER0VfTElTVD48IVtDREFUQVsxOTIuMTY4LjE0N
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> y4xMzFdXT48L0JSSURHRV9MSVNUPjxEU19NQUQ+PCFbQ0RBVEFbdm1mc11dPjwvRFNfTUFEPjxUTV9NQ
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> >
>> UQ+PCFbQ0RBVEFbdm1mc11dPjwvVE1fTUFEPjxUWVBFPjwhW0NEQVRBW0lNQUdFX0RTXV0+PC9UWVBFP
>> > >> >>> >> >>
>> > jwvVEVNUExBVEU+PC9EQVRBU1RPUkU+PC9EU19EUklWRVJfQUNUSU9OX0RBVEE+
>> > >> >>> >> >> 103
>> > >> >>> >> >>
>> > >> >>> >> >> Regards,
>> > >> >>> >> >>
>> > >> >>> >> >> -Tino
>> > >> >>> >> >> --
>> > >> >>> >> >> OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >> >>> >> >>
>> > >> >>> >> >> --
>> > >> >>> >> >> Constantino Vázquez Blanco, PhD, MSc
>> > >> >>> >> >> Senior Infrastructure Architect at C12G Labs
>> > >> >>> >> >> www.c12g.com | @C12G | es.linkedin.com/in/tinova
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >>
>> > >> >>> >> >> On Mon, Nov 11, 2013 at 4:25 PM, Catalina Quinde
>> > >> >>> >> >> <catalinaquinde at gmail.com> wrote:
>> > >> >>> >> >> > Hi friends,
>> > >> >>> >> >> >
>> > >> >>> >> >> > The results of the command "onedatastore show" are:
>> > >> >>> >> >> >
>> > >> >>> >> >> > 1. For datastore system
>> > >> >>> >> >> >
>> > >> >>> >> >> > oneadmin at ubuntuOpNeb:~$ onedatastore show 102
>> > >> >>> >> >> > DATASTORE 102 INFORMATION
>> > >> >>> >> >> > ID : 102
>> > >> >>> >> >> > NAME : ssh_dsEsxi
>> > >> >>> >> >> > USER : oneadmin
>> > >> >>> >> >> > GROUP : oneadmin
>> > >> >>> >> >> > CLUSTER : Esxiclus
>> > >> >>> >> >> > TYPE : SYSTEM
>> > >> >>> >> >> > DS_MAD : -
>> > >> >>> >> >> > TM_MAD : vmfs
>> > >> >>> >> >> > BASE PATH : /vmfs/volumes/102
>> > >> >>> >> >> > DISK_TYPE : FILE
>> > >> >>> >> >> >
>> > >> >>> >> >> > DATASTORE CAPACITY
>> > >> >>> >> >> > TOTAL: : -
>> > >> >>> >> >> > USED: : -
>> > >> >>> >> >> > FREE: : -
>> > >> >>> >> >> >
>> > >> >>> >> >> > PERMISSIONS
>> > >> >>> >> >> > OWNER : um-
>> > >> >>> >> >> > GROUP : u--
>> > >> >>> >> >> > OTHER : ---
>> > >> >>> >> >> >
>> > >> >>> >> >> > DATASTORE TEMPLATE
>> > >> >>> >> >> > BRIDGE_LIST="192.168.147.131"
>> > >> >>> >> >> > TM_MAD="vmfs"
>> > >> >>> >> >> > TYPE="SYSTEM_DS"
>> > >> >>> >> >> >
>> > >> >>> >> >> > IMAGES
>> > >> >>> >> >> >
>> > >> >>> >> >> > 2. For datastore image
>> > >> >>> >> >> >
>> > >> >>> >> >> > oneadmin at ubuntuOpNeb:~$ onedatastore show 103
>> > >> >>> >> >> > DATASTORE 103 INFORMATION
>> > >> >>> >> >> > ID : 103
>> > >> >>> >> >> > NAME : ssh_diEsxi
>> > >> >>> >> >> > USER : oneadmin
>> > >> >>> >> >> > GROUP : oneadmin
>> > >> >>> >> >> > CLUSTER : Esxiclus
>> > >> >>> >> >> > TYPE : IMAGE
>> > >> >>> >> >> > DS_MAD : vmfs
>> > >> >>> >> >> > TM_MAD : vmfs
>> > >> >>> >> >> > BASE PATH : /vmfs/volumes/103
>> > >> >>> >> >> > DISK_TYPE : FILE
>> > >> >>> >> >> >
>> > >> >>> >> >> > DATASTORE CAPACITY
>> > >> >>> >> >> > TOTAL: : 0M
>> > >> >>> >> >> > USED: : 0M
>> > >> >>> >> >> > FREE: : 0M
>> > >> >>> >> >> >
>> > >> >>> >> >> > PERMISSIONS
>> > >> >>> >> >> > OWNER : um-
>> > >> >>> >> >> > GROUP : u--
>> > >> >>> >> >> > OTHER : ---
>> > >> >>> >> >> >
>> > >> >>> >> >> > DATASTORE TEMPLATE
>> > >> >>> >> >> > BRIDGE_LIST="192.168.147.131"
>> > >> >>> >> >> > DS_MAD="vmfs"
>> > >> >>> >> >> > TM_MAD="vmfs"
>> > >> >>> >> >> > TYPE="IMAGE_DS"
>> > >> >>> >> >> >
>> > >> >>> >> >> > IMAGES
>> > >> >>> >> >> >
>> > >> >>> >> >> > 3. The oned.log file shows:
>> > >> >>> >> >> >
>> > >> >>> >> >> > Mon Nov 11 09:46:38 2013 [ImM][I]: Command execution fail: /var/lib/one/remotes/datastore/vmfs/monitor
>> > >> >>> >> >> > PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L[base64-encoded DS_DRIVER_ACTION_DATA argument truncated]
>> > >> >>> >> >> > 103
>> > >> >>> >> >> > Mon Nov 11 09:46:38 2013 [ImM][I]: ExitCode: 255
>> > >> >>> >> >> > Mon Nov 11 09:46:38 2013 [ImM][E]: Error monitoring datastore 103: -
>> > >> >>> >> >> >
>> > >> >>> >> >> > 4. I am sending the oned.log file as an attachment. The node monitoring
>> > >> >>> >> >> > errors you will see in it are there because only the Esxi node is running
>> > >> >>> >> >> > at this time; the other nodes are turned off.
>> > >> >>> >> >> >
>> > >> >>> >> >> > Regards, Caty.
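One way to see the real error behind that "ExitCode: 255" is to run the monitor driver by hand as oneadmin, the same way oned invokes it: the only arguments are the base64-encoded DS_DRIVER_ACTION_DATA string printed in oned.log and the datastore ID. A minimal sketch (the base64 value is shortened here; paste the full string from the log):

  # decode the driver action to double-check BRIDGE_LIST, BASE_PATH, DS/TM_MAD
  echo 'PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMDM8L...' | base64 -d

  # re-run the monitor action exactly as oned does; any ssh prompt or error
  # printed here is what oned only reports as "ExitCode: 255"
  /var/lib/one/remotes/datastore/vmfs/monitor 'PERTX0RSSVZFUl9BQ1RJT05fREFUQT48...' 103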
>> > >> >>> >> >> >
>> > >> >>> >> >> > 2013/11/11 <users-request at lists.opennebula.org>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Today's Topics:
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> 1. Re: VM in Opennebula for Esxi 5.1 failed (Tino
>> > Vazquez)
>> > >> >>> >> >> >> 2. Re: VM in Opennebula for Esxi 5.1 failed (Tino
>> > Vazquez)
>> > >> >>> >> >> >> 3. Re: Problem with oZones. Error 401 while trying to
>> > >> >>> >> >> >> access a
>> > >> >>> >> >> >> zone (Tino Vazquez)
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > ----------------------------------------------------------------------
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Message: 1
>> > >> >>> >> >> >> Date: Mon, 11 Nov 2013 11:23:35 +0100
>> > >> >>> >> >> >> From: Tino Vazquez <cvazquez at c12g.com>
>> > >> >>> >> >> >> To: Catalina Quinde <catalinaquinde at gmail.com>
>> > >> >>> >> >> >> Cc: users <users at lists.opennebula.org>
>> > >> >>> >> >> >> Subject: Re: [one-users] VM in Opennebula for Esxi 5.1
>> > failed
>> > >> >>> >> >> >> Message-ID:
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> <
>> > CAHfKwc0uhtMvDkkd0bDvVoDMAeXhC8x+7MMd0apE9Ow+47t4Zg at mail.gmail.com>
>> > >> >>> >> >> >> Content-Type: text/plain; charset=windows-1252
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Hi Catalina,
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Let's focus first on the 'vmfs' datastore not being correctly
>> > >> >>> >> >> >> monitored. Could you send us the output of "onedatastore show <ds_id>"
>> > >> >>> >> >> >> for both the system and the image datastores?
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Also, are there any monitoring errors in /var/log/one/oned.log? If you
>> > >> >>> >> >> >> cannot find the relevant section in the log file, please send it
>> > >> >>> >> >> >> through for analysis.
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Best regards,
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> -Tino
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> --
>> > >> >>> >> >> >> OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> --
>> > >> >>> >> >> >> Constantino Vázquez Blanco, PhD, MSc
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Senior Infrastructure Architect at C12G Labs
>> > >> >>> >> >> >> www.c12g.com | @C12G | es.linkedin.com/in/tinova
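For reference, a minimal way to gather what is being asked for above, assuming the datastore IDs shown elsewhere in this thread (adjust them to your setup):

  onedatastore show 102    # system datastore
  onedatastore show 103    # image datastore
  # pull the datastore monitoring errors out of the front-end log
  grep -E 'Error monitoring datastore|ExitCode' /var/log/one/oned.log | tail -n 20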
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> On Mon, Nov 11, 2013 at 2:17 AM, Catalina Quinde
>> > >> >>> >> >> >> <catalinaquinde at gmail.com> wrote:
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > Hi Daniel,
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > Thanks for the reply.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 1. I'm using OpenNebula 4.2 and the ESXi node is version 5.1.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 2. I have used the following links:
>> > >> >>> >> >> >> > http://opennebula.org/documentation:rel4.2:cluster_guide
>> > >> >>> >> >> >> > http://opennebula.org/documentation:rel4.2:vmware_ds
>> > >> >>> >> >> >> > http://opennebula.org/documentation:rel4.2:system_ds#the_system_datastore_for_multi-cluster_setups
>> > >> >>> >> >> >> > and responses on the mailing lists.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 3. I have enabled passwordless ssh between OpenNebula and the ESXi
>> > >> >>> >> >> >> > node, and it works correctly.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 4. According to the documentation I should put the ESXi node inside a
>> > >> >>> >> >> >> > cluster; as OpenNebula is working with several hypervisors, the cluster
>> > >> >>> >> >> >> > was created with the command: "onecluster create [namecluster]".
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 5. I added the ESXi node to this cluster with the command:
>> > >> >>> >> >> >> > "onecluster addhost [namecluster] [namenode]".
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 6. I added to the cluster the virtual network created for ESXi, which
>> > >> >>> >> >> >> > is also created on the ESXi node through the vSphere Client, with the
>> > >> >>> >> >> >> > command: "onecluster addvnet [namecluster] [namevirtualnet]".
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 7. I updated the cluster using the command "onecluster update
>> > >> >>> >> >> >> > [namecluster]" to add DATASTORE_LOCATION, which will affect only that
>> > >> >>> >> >> >> > node, by placing the following:
>> > >> >>> >> >> >> > DATASTORE_LOCATION=/vmfs/volumes
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 8. I have created two template files used to create the datastores:
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > a. The template file for the system datastore contains:
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > NAME=dsEsxi
>> > >> >>> >> >> >> > TM_MAD=vmfs
>> > >> >>> >> >> >> > TYPE=SYSTEM_DS
>> > >> >>> >> >> >> > BRIDGE_LIST="192.168.147.131"
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > b. This datastore is linked to the cluster at creation time with the
>> > >> >>> >> >> >> > command: "onedatastore create -c [namecluster] /var/lib/images/system_esxi.ds".
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > c. The template file for the image datastore contains:
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > NAME=diEsxi
>> > >> >>> >> >> >> > DS_MAD=vmfs
>> > >> >>> >> >> >> > TM_MAD=vmfs
>> > >> >>> >> >> >> > TYPE=IMAGE_DS
>> > >> >>> >> >> >> > BRIDGE_LIST="192.168.147.131"
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > d. This datastore is linked to the cluster at creation time with the
>> > >> >>> >> >> >> > command: "onedatastore create -c [namecluster] /var/lib/images/image_esxi.ds".
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 9. When I list the datastores, you can see that the newly created
>> > >> >>> >> >> >> > image datastore has 0M capacity.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > I've followed this whole procedure because, according to the
>> > >> >>> >> >> >> > OpenNebula 4.2 documentation, the way an ESXi node is used has changed.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > Before all of the above, deploying a VM produced this error:
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > oneadmin at ubuntuOpNeb:~$ cat /var/log/one/36.log
>> > >> >>> >> >> >> > Thu Oct 31 11:18:02 2013 [DiM][I]: New VM state is ACTIVE.
>> > >> >>> >> >> >> > Thu Oct 31 11:18:02 2013 [LCM][I]: New VM state is PROLOG.
>> > >> >>> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][I]: Command execution fail: /var/lib/one/remotes/tm/shared/clone ubuntuOpNeb:/var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 demos:/var/lib/one/datastores/0/36/disk.0 36 1
>> > >> >>> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][I]: clone: Cloning /var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 in demos:/var/lib/one/datastores/0/36/disk.0
>> > >> >>> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][E]: clone: Command "cd /var/lib/one/datastores/0/36; cp /var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 /var/lib/one/datastores/0/36/disk.0" failed: cp: can't stat '/var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557': No such file or directory
>> > >> >>> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][E]: Error copying ubuntuOpNeb:/var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 to demos:/var/lib/one/datastores/0/36/disk.0
>> > >> >>> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][I]: ExitCode: 1
>> > >> >>> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][E]: Error executing image transfer script: Error copying ubuntuOpNeb:/var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 to demos:/var/lib/one/datastores/0/36/disk.0
>> > >> >>> >> >> >> > Thu Oct 31 11:18:03 2013 [DiM][I]: New VM state is FAILED
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > Now, I suppose I have to create the image for OpenNebula in the new
>> > >> >>> >> >> >> > image datastore, but I can't because it has no capacity.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > Please help me.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > Thanks, Caty
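A side note on the older clone error quoted above: the shared/clone TM script only runs a cp on the front-end, so the failure can be checked by hand. A small sketch, with the paths taken from the log and the ssh check assuming the ESXi host from BRIDGE_LIST is reachable as oneadmin:

  # the path the shared/clone script tried to copy from on the front-end
  ls -l /var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557

  # with the vmfs drivers the images live on the ESXi side instead, e.g.
  ssh oneadmin@192.168.147.131 ls /vmfs/volumes/103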
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 2013/11/10 <users-request at lists.opennebula.org>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Today's Topics:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> 1. VM in Opennebula for Esxi 5.1 failed (Catalina
>> > >> >>> >> >> >> >> Quinde)
>> > >> >>> >> >> >> >> 2. Re: VM in Opennebula for Esxi 5.1 failed
>> (Daniel
>> > >> >>> >> >> >> >> Dehennin)
>> > >> >>> >> >> >> >> 3. ask to password when add ceph datastore
>> > >> >>> >> >> >> >> (12navidb2 at gmail.com)
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > ----------------------------------------------------------------------
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Message: 1
>> > >> >>> >> >> >> >> Date: Sat, 9 Nov 2013 19:07:44 -0500
>> > >> >>> >> >> >> >> From: Catalina Quinde <catalinaquinde at gmail.com>
>> > >> >>> >> >> >> >> To: users at lists.opennebula.org
>> > >> >>> >> >> >> >> Subject: [one-users] VM in Opennebula for Esxi 5.1
>> > failed
>> > >> >>> >> >> >> >> Message-ID:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> <CAPgz++yOE=CmekTkFjD=
>> > yCFo4uwN6oMYb+z+aYY7VtZ4rOoubw at mail.gmail.com>
>> > >> >>> >> >> >> >> Content-Type: text/plain; charset="iso-8859-1"
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Please help me. I created an OpenNebula cluster to which I have added
>> > >> >>> >> >> >> >> the ESXi host, and I've also created two datastores, one for system
>> > >> >>> >> >> >> >> and one for images, with these features:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> ID 100 system TM_MAD = vmfs
>> > >> >>> >> >> >> >> ID 101 image  DS_MAD = vmfs
>> > >> >>> >> >> >> >>               TM_MAD = vmfs
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> I have added these two datastores to the cluster I created; the command
>> > >> >>> >> >> >> >> "onedatastore list" shows CAPACITY = 0M for the datastore.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> And under the path "/var/lib/one/datastores" these two datastores do
>> > >> >>> >> >> >> >> not appear as directories.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> I'm using OpenNebula Esxi 4.2 and 5.1.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Greetings.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Caty.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> ------------------------------
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Message: 2
>> > >> >>> >> >> >> >> Date: Sun, 10 Nov 2013 01:47:35 +0100
>> > >> >>> >> >> >> >> From: Daniel Dehennin <daniel.dehennin at baby-gnu.org>
>> > >> >>> >> >> >> >> To: users at lists.opennebula.org
>> > >> >>> >> >> >> >> Subject: Re: [one-users] VM in Opennebula for Esxi
>> 5.1
>> > >> >>> >> >> >> >> failed
>> > >> >>> >> >> >> >> Message-ID: <878uwx9j6g.fsf at hati.baby-gnu.org>
>> > >> >>> >> >> >> >> Content-Type: text/plain; charset="utf-8"
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Catalina Quinde <catalinaquinde at gmail.com> writes:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> First of all hello,
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> I do not have the proprietary system, so I'll only give you the
>> > >> >>> >> >> >> >> information I have.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> > Please help me. I created an OpenNebula cluster to which I have added
>> > >> >>> >> >> >> >> > the ESXi host, and I've also created two datastores, one for system
>> > >> >>> >> >> >> >> > and one for images, with these features:
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> > ID 100 system TM_MAD = vmfs
>> > >> >>> >> >> >> >> > ID 101 image  DS_MAD = vmfs
>> > >> >>> >> >> >> >> >               TM_MAD = vmfs
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> > I have added these two datastores to the cluster I created; the command
>> > >> >>> >> >> >> >> > "onedatastore list" shows CAPACITY = 0M for the datastore.
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> > And under the path "/var/lib/one/datastores" these two datastores do
>> > >> >>> >> >> >> >> > not appear as directories.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Reading the documentation[1], I understand that:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> 1. oneadmin must be able to SSH to the ESX servers without a
>> > >> >>> >> >> >> >> password[2] (see the quick check after this list)
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> 2. nothing is under /var/lib/one/datastores on the OpenNebula
>> > >> >>> >> >> >> >> frontend because it is managed by the ESX servers; ONE accesses the
>> > >> >>> >> >> >> >> VMFS datastores through the API
>> > >> >>> >> >> >> >>
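A quick check of point 1, as a sketch (the IP is the ESXi host used elsewhere in this thread; BatchMode makes ssh fail instead of prompting):

  # should print the ESXi hostname without asking for a password
  ssh -o BatchMode=yes oneadmin@192.168.147.131 hostname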
>> > >> >>> >> >> >> >> Maybe if you describe your setup a little more, it will help people
>> > >> >>> >> >> >> >> help you ;-)
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> > I'm using OpenNebula Esxi 4.2 and 5.1.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> I suppose you mean that you use:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> - OpenNebula 4.2
>> > >> >>> >> >> >> >> - ESXi 5.1
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Right?
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Regards.
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Footnotes:
>> > >> >>> >> >> >> >> [1]
>> > http://opennebula.org/documentation:rel4.2:vmware_ds
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> [2]
>> > >> >>> >> >> >> >>
>> > http://opennebula.org/documentation:rel4.2:hostsubsystem
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> --
>> > >> >>> >> >> >> >> Daniel Dehennin
>> > >> >>> >> >> >> >> Retrieve my GPG key:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> gpg --keyserver pgp.mit.edu --recv-keys 0x7A6FE2DF
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> ------------------------------
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Message: 3
>> > >> >>> >> >> >> >> Date: Sun, 10 Nov 2013 13:26:27 +0330
>> > >> >>> >> >> >> >> From: "12navidb2 at gmail.com" <12navidb2 at gmail.com>
>> > >> >>> >> >> >> >> To: users at lists.opennebula.org
>> > >> >>> >> >> >> >> Subject: [one-users] ask to password when add ceph
>> > >> >>> >> >> >> >> datastore
>> > >> >>> >> >> >> >> Message-ID:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> <CAK9QDg9znnHjkH=mLc19F=
>> > CTSd+Yo16arttCEg_erv5-x4WAtg at mail.gmail.com>
>> > >> >>> >> >> >> >> Content-Type: text/plain; charset="iso-8859-1"
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> hi all
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> When I add a new Ceph datastore, OpenNebula runs a command like this
>> > >> >>> >> >> >> >> as the oneadmin user, and it asks for a password:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> oneadmin at one-server2:~$
>> > >> >>> >> >> >> >> /var/lib/one/remotes/datastore/ceph/monitor
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> >
>> PERTX0RSSVZFUl9BQ1RJT05fREFUQT48REFUQVNUT1JFPjxJRD4xMTM8L0lEPjxVSUQ+MDwvVUlEPjxHSUQ+MDwvR0lEPjxVTkFNRT5vbmVhZG1pbjwvVU5BTUU+PEdOQU1FPm9uZWFkbWluPC9HTkFNRT48TkFNRT5jZXBoZHM8L05BTUU+PFBFUk1JU1NJT05TPjxPV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjE8L0dST1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00+MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PERTX01BRD5jZXBoPC9EU19NQUQ+PFRNX01BRD5jZXBoPC9UTV9NQUQ+PEJBU0VfUEFUSD4vdmFyL2xpYi9vbmUvZGF0YXN0b3Jlcy8xMTM8L0JBU0VfUEFUSD48VFlQRT4wPC9UWVBFPjxESVNLX1RZUEU+MzwvRElTS19UWVBFPjxDTFVTVEVSX0lEPi0xPC9DTFVTVEVSX0lEPjxDTFVTVEVSPjwvQ0xVU1RFUj48VE9UQUxfTUI+MDwvVE9UQUxfTUI+PEZSRUVfTUI+MDwvRlJFRV9NQj48VVNFRF9NQj4wPC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRVM+PFRFTVBMQVRFPjxEU19NQUQ+PCFbQ0RBVEFbY2VwaF1dPjwvRFNfTUFEPjxIT1NUPjwhW0NEQVRBW29uZS1zZXJ2ZXIyXV0+PC9IT1NUPjxQT09MX05BTUU+PCFbQ0RBVEFbcmJkXV0+PC9QT09MX05BTUU+PFNBRkVfRElSUz48IVt
>> [remainder of the base64-encoded DS_DRIVER_ACTION_DATA argument truncated]
>> > >> >>> >> >> >> >> 113
>> > >> >>> >> >> >> >> oneadmin at one-server2's password:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Because of this, OpenNebula can't read the datastore space and shows
>> > >> >>> >> >> >> >> (0MB).
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> The output of this command is:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> USED_MB=3015
>> > >> >>> >> >> >> >> FREE_MB=1184747
>> > >> >>> >> >> >> >> TOTAL_MB=1193984
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> The point here is that I can't run this command as the oneadmin user!
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> sudo rados df
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> or even rados df (without sudo)
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> I added these lines to /etc/sudoers:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> oneadmin ALL=(ALL) NOPASSWD: /usr/bin/rbd*,/usr/bin/rados*
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> best regards
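Two quick checks that may help narrow this down, as a sketch: the prompt in the log above ("oneadmin at one-server2's password:") looks like an ssh password prompt rather than a sudo one, so the ssh hop to the datastore host and the sudo rule can be tested separately (host name taken from the log):

  # from the front-end, as oneadmin: should log in without any prompt
  ssh -o BatchMode=yes oneadmin@one-server2 true && echo "ssh ok"

  # on one-server2, as oneadmin: should print pool stats without asking for a password
  sudo -n rados df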
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> ------------------------------
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> ------------------------------
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Message: 2
>> > >> >>> >> >> >> Date: Mon, 11 Nov 2013 14:30:20 +0100
>> > >> >>> >> >> >> From: Tino Vazquez <cvazquez at c12g.com>
>> > >> >>> >> >> >> To: Catalina Quinde <catalinaquinde at gmail.com>
>> > >> >>> >> >> >> Cc: users <users at lists.opennebula.org>
>> > >> >>> >> >> >> Subject: Re: [one-users] VM in Opennebula for Esxi 5.1
>> > failed
>> > >> >>> >> >> >> Message-ID:
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> <CAHfKwc05HTwsGJSxkW4fSoMu5O10=
>> > HuAUcS70pVCzh2d3zgxtA at mail.gmail.com>
>> > >> >>> >> >> >> Content-Type: text/plain; charset=ISO-8859-1
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> On Thu, Nov 7, 2013 at 5:55 AM, Catalina Quinde
>> > >> >>> >> >> >> <catalinaquinde at gmail.com> wrote:
>> > >> >>> >> >> >> > Thanks Tino,
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > Excuse me for not responding earlier. I have read the documentation
>> > >> >>> >> >> >> > provided, but there is a section that is not clear to me, which states
>> > >> >>> >> >> >> > the following:
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > "... In heterogeneous clouds (mix of ESX and other
>> > >> >>> >> >> >> > hypervisor
>> > >> >>> >> >> >> > hosts)
>> > >> >>> >> >> >> > put
>> > >> >>> >> >> >> > all
>> > >> >>> >> >> >> > the ESX hosts in clusters With The Following In Their
>> > >> >>> >> >> >> > template
>> > >> >>> >> >> >> > attribute
>> > >> >>> >> >> >> > (eg
>> > >> >>> >> >> >> > onecluster update) ...":
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > DATASTORE_LOCATION = / vmfs / volumes
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > In my case, OpenNebula works with 4 different hypervisors and I have
>> > >> >>> >> >> >> > not created clusters; the nodes are registered directly in OpenNebula:
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 1. Should I make a cluster for each hypervisor?
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> You will need to create at least one for VMware in order to set the
>> > >> >>> >> >> >> DATASTORE_LOCATION in the cluster so the datastores can inherit it. In
>> > >> >>> >> >> >> any case, it would be recommended to have one cluster per hypervisor.
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> > 2. Must the template file used to create the cluster specify the
>> > >> >>> >> >> >> > DATASTORE_LOCATION?
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Yes, the template used to create the cluster must
>> include
>> > the
>> > >> >>> >> >> >> DATASTORE_LOCATION, although this can be updated later.
>> In
>> > >> >>> >> >> >> any
>> > >> >>> >> >> >> case,
>> > >> >>> >> >> >> the DATASTORE_LOCATION _must_ be present before the
>> > datastore
>> > >> >>> >> >> >> is
>> > >> >>> >> >> >> created.
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > In my case I have created the datastores in ESXi with the frontend in
>> > >> >>> >> >> >> > "/var/lib/one/datastores/0" and "1", created with the VMware Client,
>> > >> >>> >> >> >> > and in these datastores a virtual machine for ESXi is created. Must I
>> > >> >>> >> >> >> > add the "/vmfs/volumes..." datastores as additional datastores?
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> For ESX, only the system and one image datastore need to be registered
>> > >> >>> >> >> >> (and potentially more than one image datastore). In OpenNebula 4.2 they
>> > >> >>> >> >> >> cannot be 0 and 1, since those are created on OpenNebula's first boot
>> > >> >>> >> >> >> and they won't have the proper DATASTORE_LOCATION inherited.
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> To check that all is good, the BASE_PATH of both the system and image
>> > >> >>> >> >> >> datastores for VMware should contain "/vmfs/volumes".
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Regards,
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> -Tino
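Putting the advice in this reply together, the ordering looks roughly like the sketch below (cluster and file names are the ones used earlier in this thread, so treat them as illustrative):

  onecluster create Esxiclus
  onecluster update Esxiclus                 # add: DATASTORE_LOCATION=/vmfs/volumes
  onecluster addhost Esxiclus <esxi-host>
  onedatastore create -c Esxiclus /var/lib/images/system_esxi.ds   # TM_MAD=vmfs, TYPE=SYSTEM_DS
  onedatastore create -c Esxiclus /var/lib/images/image_esxi.ds    # DS_MAD=vmfs, TM_MAD=vmfs
  # sanity check: BASE_PATH of both new datastores should be under /vmfs/volumes
  onedatastore show <new-image-ds-id> | grep BASE_PATH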
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > Thanks for responding
>> > >> >>> >> >> >> > Caty
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > 2013/11/4 Tino Vazquez <cvazquez at c12g.com>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Hi,
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Ok, so then you need to create a VMFS datastore in order to register
>> > >> >>> >> >> >> >> VMDK images that can be used with VMware ESX. More info here:
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> http://opennebula.org/documentation:rel4.2:vmware_ds
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> Regards,
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> -Tino
>> > >> >>> >> >> >> >> --
>> > >> >>> >> >> >> >> OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> --
>> > >> >>> >> >> >> >> Constantino Vázquez Blanco, PhD, MSc
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> >> Senior Infrastructure Architect at C12G Labs
>> > >> >>> >> >> >> >> www.c12g.com | @C12G | es.linkedin.com/in/tinova
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >>
>> > >> >>> >> >> >> >> On Mon, Nov 4, 2013 at 4:04 PM, Catalina Quinde
>> > >> >>> >> >> >> >> <catalinaquinde at gmail.com> wrote:
>> > >> >>> >> >> >> >> > Hi Tino, thanks for the reply.
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> > At the bottom of OpenNebula Sunstone it says "Opennebula 4.2.0 by C12G
>> > >> >>> >> >> >> >> > Labs"; the commands "one -v" or "one -version" are not working. Is this
>> > >> >>> >> >> >> >> > the OpenNebula version? I did the installation with the command
>> > >> >>> >> >> >> >> > "apt-get install opennebula".
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> > Please guide me more clearly, because I have the other hypervisors
>> > >> >>> >> >> >> >> > working with OpenNebula too.
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> > Thanks Tino,
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> > Caty.
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> > 2013/11/4 Tino Vazquez <cvazquez at c12g.com>
>> > >> >>> >> >> >> >> >>
>> > >> >>> >> >> >> >> >> Hi Catalina,
>> > >> >>> >> >> >> >> >>
>> > >> >>> >> >> >> >> >> Which OpenNebula version are you using? If you are on 4.2, be aware
>> > >> >>> >> >> >> >> >> that the only datastore drivers for VMware are the 'vmfs' ones, not
>> > >> >>> >> >> >> >> >> 'shared'.
>> > >> >>> >> >> >> >> >>
>> > >> >>> >> >> >> >> >> Regards,
>> > >> >>> >> >> >> >> >>
>> > >> >>> >> >> >> >> >> -Tino
>> > >> >>> >> >> >> >> >> --
>> > >> >>> >> >> >> >> >> OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >> >>> >> >> >> >> >>
>> > >> >>> >> >> >> >> >> --
>> > >> >>> >> >> >> >> >> Constantino Vázquez Blanco, PhD, MSc
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> >> >> Senior Infrastructure Architect at C12G Labs
>> > >> >>> >> >> >> >> >> www.c12g.com | @C12G | es.linkedin.com/in/tinova
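A quick way to confirm which drivers a given datastore ended up with, as a small sketch (IDs as in this thread):

  onedatastore list
  onedatastore show 1 | grep -E 'DS_MAD|TM_MAD'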
>> > >> >>> >> >> >> >> >>
>> > >> >>> >> >> >> >> >>
>> > >> >>> >> >> >> >> >>
>> > >> >>> >> >> >> >> >> On Mon, Nov 4, 2013 at 5:40 AM, Catalina Quinde
>> > >> >>> >> >> >> >> >> <catalinaquinde at gmail.com> wrote:
>> > >> >>> >> >> >> >> >> > Hi friends, please help me with this problem.
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> > I have OpenNebula working with KVM, Xen, ESXi 5.1 and OpenVZ nodes. I
>> > >> >>> >> >> >> >> >> > created VMs in OpenNebula for KVM and Xen and they reach the RUNN
>> > >> >>> >> >> >> >> >> > state, but when I made a VM in OpenNebula for ESXi 5.1, I got this error:
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> > ------------------------------------------------
>> > >> >>> >> >> >> >> >> > oneadmin at ubuntuOpNeb:~$ cat /var/log/one/36.log
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:02 2013 [DiM][I]: New VM state is ACTIVE.
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:02 2013 [LCM][I]: New VM state is PROLOG.
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][I]: Command execution fail: /var/lib/one/remotes/tm/shared/clone ubuntuOpNeb:/var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 demos:/var/lib/one/datastores/0/36/disk.0 36 1
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][I]: clone: Cloning /var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 in demos:/var/lib/one/datastores/0/36/disk.0
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][E]: clone: Command "cd /var/lib/one/datastores/0/36; cp /var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 /var/lib/one/datastores/0/36/disk.0" failed: cp: can't stat '/var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557': No such file or directory
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][E]: Error copying ubuntuOpNeb:/var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 to demos:/var/lib/one/datastores/0/36/disk.0
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][I]: ExitCode: 1
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:03 2013 [TM][E]: Error executing image transfer script: Error copying ubuntuOpNeb:/var/lib/one/datastores/1/02e8c67d4f59826e9fb12ccb48f77557 to demos:/var/lib/one/datastores/0/36/disk.0
>> > >> >>> >> >> >> >> >> > Thu Oct 31 11:18:03 2013 [DiM][I]: New VM state is FAILED
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> >
>> > ----------------------------------------------------------
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> > Here are some features of my OpenNebula settings:
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> > The datastores are 0, 1 and 2; datastores 0 and 1 are shared and 2 is
>> > >> >>> >> >> >> >> >> > ssh; ESXi works with datastores 0 and 1.
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> > The oned.conf file has a general TM configuration for all hypervisors.
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> > Please help me solve this problem.
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> > Thanks, Caty.
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >> >
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> ------------------------------
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Message: 3
>> > >> >>> >> >> >> Date: Mon, 11 Nov 2013 15:24:52 +0100
>> > >> >>> >> >> >> From: Tino Vazquez <cvazquez at c12g.com>
>> > >> >>> >> >> >> To: Andranik Hayrapetyan <andranik.h89 at gmail.com>
>> > >> >>> >> >> >> Cc: users <users at lists.opennebula.org>
>> > >> >>> >> >> >> Subject: Re: [one-users] Problem with oZones. Error 401
>> > while
>> > >> >>> >> >> >> trying
>> > >> >>> >> >> >> to access a zone
>> > >> >>> >> >> >> Message-ID:
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> <CAHfKwc0Q=
>> > FxRAkYtqQKJLLnSTi-mJZSFBKNz1vr1nQfou8oh6A at mail.gmail.com>
>> > >> >>> >> >> >> Content-Type: text/plain; charset=ISO-8859-1
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Hi,
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> The HTTP code (401) tells us that the credentials are not being
>> > >> >>> >> >> >> recognized. Can you check that the oneadmin credentials used to create
>> > >> >>> >> >> >> the 'not working' zone are indeed the right ones?
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> The best way to find out would be to check the $ONE_AUTH file of the
>> > >> >>> >> >> >> zone, and to recreate the zone in oZones using those credentials.
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Best regards,
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> -Tino
>> > >> >>> >> >> >> --
>> > >> >>> >> >> >> OpenNebula - Flexible Enterprise Cloud Made Simple
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> --
>> > >> >>> >> >> >> Constantino Vázquez Blanco, PhD, MSc
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> Senior Infrastructure Architect at C12G Labs
>> > >> >>> >> >> >> www.c12g.com | @C12G | es.linkedin.com/in/tinova
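A small sketch of that check, assuming the default ONE_AUTH location on the zone's front-end and the usual --user/--password/--endpoint CLI options:

  # on the zone's front-end: these are the credentials oZones has to be given
  cat ${ONE_AUTH:-$HOME/.one/one_auth}

  # from the oZones host: verify they are accepted by that zone's XML-RPC endpoint
  oneuser list --user oneadmin --password <password-from-one_auth> \
               --endpoint http://IP_OF_ZONE:2633/RPC2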
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> On Sat, Nov 9, 2013 at 11:06 AM, Andranik Hayrapetyan
>> > >> >>> >> >> >> <andranik.h89 at gmail.com> wrote:
>> > >> >>> >> >> >> > Good day.
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > I have set up OpenNebula 4.2 with the oZones server. I have added 3
>> > >> >>> >> >> >> > zones to it and created a VDC for each one. For 2 of them everything
>> > >> >>> >> >> >> > works just fine: I can access the Sunstone web UI through my oZones
>> > >> >>> >> >> >> > server. But when I try to access the third one I see just a blank
>> > >> >>> >> >> >> > browser page. In the access.log of my httpd I see the following:
>> > >> >>> >> >> >> > "GET /sunstone_MY_ZONE_3/ HTTP/1.1" 401 - "-"
>> > "Mozilla/5.0
>> > >> >>> >> >> >> > (X11;
>> > >> >>> >> >> >> > Linux
>> > >> >>> >> >> >> > x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu
>> > >> >>> >> >> >> > Chromium/MY_IP
>> > >> >>> >> >> >> > Chrome/MY_IP Safari/537.36"
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > The Rewrite rules in my .htaccess are made the same way for each VDC.
>> > >> >>> >> >> >> > Here is one of them:
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > RewriteRule ^MY_ZONE_3 http://IP_OF_ZONE:2633/RPC2 [P]
>> > >> >>> >> >> >> > RewriteRule ^sunstone_MY_ZONE_3/(.+) http://IP_OF_ZONE:9869//$1 [P]
>> > >> >>> >> >> >> > RewriteRule ^sunstone_MY_ZONE_3 http://IP_OF_ZONE:9869// [P]
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > What can I do to solve this problem? Can anyone help
>> me,
>> > >> >>> >> >> >> > please?
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> > P.S. Sorry, my last email was sent by mistake. It was
>> > >> >>> >> >> >> > incomplete.
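One way to narrow down where the 401 comes from, as a sketch under the placeholder names used above (OZONES_HOST stands for the host running the oZones/httpd proxy): compare a request through the proxy with one sent directly to that zone's Sunstone.

  # through the oZones/httpd proxy (should reproduce the 401 from access.log)
  curl -i http://OZONES_HOST/sunstone_MY_ZONE_3/

  # directly against the zone's Sunstone, bypassing the proxy
  curl -i http://IP_OF_ZONE:9869/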
>> > >> >>> >> >> >>
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >> >
>> > >> >>> >> >> >>
>> > >> >>> >> >> >>
>
> --
> Sharuzzaman Ahmat Raslan
>
--
Sharuzzaman Ahmat Raslan
------------------------------
_______________________________________________
Users mailing list
Users at lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
End of Users Digest, Vol 69, Issue 53
*************************************