<div class="moz-cite-prefix">On 11/08/14 10:25, Jaime Melis wrote:<br>
</div>
<blockquote
cite="mid:CA+HrgRq4Wc2cFUF5eBtywrKpZGYzew4+FEa1CFhkH9xWqW9Ztg@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Alvaro,
<div><br>
</div>
<div>Could you also provide a patch for the documentation?</div>
</div>
<div class="gmail_extra"><br>
</div>
</blockquote>
Sure, I will include the documentation patch during today/tomorrow..<br>
<br>
Cheers and thanks<br>
Alvaro

On Mon, Aug 11, 2014 at 10:22 AM, Alvaro Simon Garcia
<Alvaro.SimonGarcia@ugent.be> wrote:

Hi Jaime, Javier

Do you think it would be possible to include these Ceph patches in the
next 4.8 release? At the moment we are applying these patches by hand in
our configuration to use different Ceph pools:

https://github.com/OpenNebula/one/pull/27
https://github.com/OpenNebula/one/pull/28
https://github.com/OpenNebula/one/pull/29
https://github.com/OpenNebula/one/pull/30
https://github.com/OpenNebula/one/pull/31

Cheers
Alvaro

On 17/07/14 14:11, Alvaro Simon Garcia wrote:
<blockquote type="cite"> Hi Jaime<br>
<br>
Sorry for the late reply, I didn't see your mail
before because I was included in cc. With this patch
you don't need to include a new user into your ceph
keyring. If you want to use a ceph datastore you
only need to include the user and pool into the
datastore template that's all. In our case we use
livbirt datastore and we have created two ceph
datastore for testing porposes (each one use a
different pool as well):<br>
<br>
<br>

$ onedatastore show ceph
DATASTORE 103 INFORMATION
ID             : 103
NAME           : ceph
USER           : oneadmin
GROUP          : oneadmin
CLUSTER        : -
TYPE           : IMAGE
DS_MAD         : ceph
TM_MAD         : ceph
BASE PATH      : /var/lib/one//datastores/103
DISK_TYPE      : RBD

DATASTORE CAPACITY
TOTAL:         : 87.6T
FREE:          : 59.2T
USED:          : 28.4T
LIMIT:         : -

PERMISSIONS
OWNER          : um-
GROUP          : u--
OTHER          : ---

DATASTORE TEMPLATE
BASE_PATH="/var/lib/one//datastores/"
BRIDGE_LIST="hyp004.cubone.os"
CEPH_HOST="ceph001.cubone.os ceph002.cubone.os ceph003.cubone.os"
CEPH_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
CEPH_USER="libvirt"
CLONE_TARGET="SELF"
DATASTORE_CAPACITY_CHECK="yes"
DISK_TYPE="RBD"
DS_MAD="ceph"
LN_TARGET="NONE"
POOL_NAME="one"
TM_MAD="ceph"

IMAGES
7
8
31
32
34
35

and the second one:

$ onedatastore show "ceph two"
DATASTORE 104 INFORMATION
ID             : 104
NAME           : ceph two
USER           : oneadmin
GROUP          : oneadmin
CLUSTER        : -
TYPE           : IMAGE
DS_MAD         : ceph
TM_MAD         : ceph
BASE PATH      : /var/lib/one//datastores/104
DISK_TYPE      : RBD

DATASTORE CAPACITY
TOTAL:         : 87.6T
FREE:          : 59.2T
USED:          : 28.4T
LIMIT:         : -

PERMISSIONS
OWNER          : um-
GROUP          : u--
OTHER          : ---

DATASTORE TEMPLATE
BASE_PATH="/var/lib/one//datastores/"
BRIDGE_LIST="hyp004.cubone.os"
CEPH_HOST="ceph001.cubone.os ceph002.cubone.os ceph003.cubone.os"
CEPH_SECRET="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
CEPH_USER="libvirt"
CLONE_TARGET="SELF"
DATASTORE_CAPACITY_CHECK="yes"
DISK_TYPE="RBD"
DS_MAD="ceph"
LN_TARGET="NONE"
POOL_NAME="two"
TM_MAD="ceph"

As you can see we are using different pools in each one (so we don't
need to include the pool name in ceph.conf either, and we are able to
use several Ceph clusters), and this change simplifies both the ONE
configuration and Ceph cluster administration.
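
For reference, a datastore like the second one can be created from a
template file; a minimal sketch (attribute values are site-specific, and
CEPH_SECRET is elided here):

  $ cat ceph_two.ds
  NAME        = "ceph two"
  DS_MAD      = ceph
  TM_MAD      = ceph
  DISK_TYPE   = RBD
  POOL_NAME   = "two"        # the Ceph pool backing this datastore
  CEPH_USER   = "libvirt"    # cephx id used by the rbd/rados commands
  CEPH_HOST   = "ceph001.cubone.os ceph002.cubone.os ceph003.cubone.os"
  CEPH_SECRET = "..."        # libvirt secret UUID
  BRIDGE_LIST = "hyp004.cubone.os"

  $ onedatastore create ceph_two.ds
  ID: 104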

Cheers
Alvaro

On 03/07/14 17:08, Jaime Melis wrote:
<blockquote type="cite">
<div dir="ltr">Hi Alvaro,
<div><br>
</div>
<div>thanks a lot for the contribution. I
haven't tested it yet, but the code seems to
be perfect.</div>
<div><br>
</div>
<div>My main issue here is that I'm not entirely
sure if we need this. Doesn't it make sense
for the oneadmin user to be in the keyring and
be able to run rados? You mentioned this, but
I'm having some trouble understanding what do
you mean:</div>
<div><br>
</div>
<div><span
style="font-family:arial,sans-serif;font-size:13px">>
</span><span
style="font-family:arial,sans-serif;font-size:13px">This
feature could be useful as well if you want
to monitor several datastores that are using
different </span><span
style="font-family:arial,sans-serif;font-size:13px">ceph</span><span
style="font-family:arial,sans-serif;font-size:13px"> pools and users
ids.</span></div>
<div><span
style="font-family:arial,sans-serif;font-size:13px">>
You only have to include the id and pool
info into the ONE </span><span
style="font-family:arial,sans-serif;font-size:13px">datastore</span><span
style="font-family:arial,sans-serif;font-size:13px"> template and the </span><span
style="font-family:arial,sans-serif;font-size:13px">monitoring</span><span
style="font-family:arial,sans-serif;font-size:13px"> script will use one
or another depending on the DS conf.</span><br
style="font-family:arial,sans-serif;font-size:13px">
</div>
<div><span
style="font-family:arial,sans-serif;font-size:13px"><br>
</span></div>
<div><span
style="font-family:arial,sans-serif;font-size:13px">With
the current system you can monitor multiple
ceph pools as long as the oneadmin user has
right, isn't that so?</span></div>
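
For context, giving oneadmin those rights would be a cephx keyring entry
along these lines (a sketch only; the capability string is an assumption
based on a typical RBD setup):

  $ ceph auth get-or-create client.oneadmin \
        mon 'allow r' \
        osd 'allow rwx pool=one'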

Joel, would you like to weigh in? Would you find this useful?

cheers,
Jaime
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Fri, Jun 20, 2014 at
3:23 PM, Javier Fontan <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:jfontan@opennebula.org"
target="_blank">jfontan@opennebula.org</a>></span>
wrote:<br>
<blockquote class="gmail_quote"
style="margin:0 0 0 .8ex;border-left:1px
#ccc solid;padding-left:1ex">Great, thanks!
Our ceph dev is traveling right now, he will
review the<br>
patches when he arrives.<br>
<br>
Cheers<br>
<br>

On Thu, Jun 19, 2014 at 3:21 PM, Alvaro Simon Garcia
<Alvaro.SimonGarcia@ugent.be> wrote:
> Hi Javier
>
> We have modified the Ceph monitor to take into account the datastore
> Ceph pool and also the Ceph user. This is a generic solution that
> could be useful for other datacenters as well; we have created a pull
> request on GitHub in case you agree with this change and want to
> include it in the next release.
>
> https://github.com/OpenNebula/one/pull/27
>
> We have only modified these lines in
> /var/lib/one/remotes/datastore/ceph/monitor:
>
>> --- monitor.orig.190614 2014-06-19 14:35:24.022755989 +0200
>> +++ monitor             2014-06-19 14:49:34.043187892 +0200
>> @@ -46,10 +46,12 @@
>>  while IFS= read -r -d '' element; do
>>      XPATH_ELEMENTS[i++]="$element"
>>  done < <($XPATH     /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/BRIDGE_LIST \
>> -                    /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME)
>> +                    /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/POOL_NAME \
>> +                    /DS_DRIVER_ACTION_DATA/DATASTORE/TEMPLATE/CEPH_USER)
>>
>>  BRIDGE_LIST="${XPATH_ELEMENTS[j++]}"
>>  POOL_NAME="${XPATH_ELEMENTS[j++]:-$POOL_NAME}"
>> +CEPH_USER="${XPATH_ELEMENTS[j++]}"
>>
>>  HOST=`get_destination_host`
>>
>> @@ -61,7 +63,7 @@
>>  # ------------ Compute datastore usage -------------
>>
>>  MONITOR_SCRIPT=$(cat <<EOF
>> -$RADOS df | $AWK '{
>> +$RADOS df -p ${POOL_NAME} --id ${CEPH_USER} | $AWK '{
>>      if (\$1 == "total") {
>>
>>          space = int(\$3/1024)
>
> CEPH_USER and POOL_NAME should be mandatory when creating the Ceph
> datastore.
>
> Cheers
> Alvaro
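
Making them mandatory could be as simple as using bash's ${var:?}
expansion where the monitor script reads the template; a rough sketch,
not part of the submitted patch:

  # Abort with a clear error if the datastore template omits either one
  POOL_NAME="${XPATH_ELEMENTS[j++]:?POOL_NAME is mandatory}"
  CEPH_USER="${XPATH_ELEMENTS[j++]:?CEPH_USER is mandatory}"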

>> Hola Javi
>>
>> Thanks a lot for your feedback. Yes, we will modify the current
>> monitoring scripts to take this into account. This feature could be
>> useful as well if you want to monitor several datastores that are
>> using different ceph pools and user ids. You only have to include the
>> id and pool info in the ONE datastore template and the monitoring
>> script will use one or the other depending on the DS configuration.
>>
>> Cheers and thanks!
>> Alvaro
>>
>> On 2014-06-17 14:55, Javier Fontan wrote:
>>>
>>> CEPH_USER is used when generating the libvirt/KVM deployment file
>>> but not for DS monitoring:
>>>
>>> * Deployment file generation:
>>> https://github.com/OpenNebula/one/blob/one-4.6/src/vmm/LibVirtDriverKVM.cc#L461
>>> * Monitoring:
>>> https://github.com/OpenNebula/one/blob/one-4.6/src/datastore_mad/remotes/ceph/monitor#L64
>>>
>>> Ceph is not my area of expertise, but you may need to add those
>>> parameters to the monitor script and maybe to other scripts that use
>>> the "rados" command. It may also be possible to modify the RADOS
>>> command definition to have those parameters instead of modifying all
>>> the scripts:
>>>
>>> https://github.com/OpenNebula/one/blob/one-4.6/src/mad/sh/scripts_common.sh#L40
>>>
>>> As I said, I don't know much about Ceph, and it may be that those
>>> credentials could be set in a config file or so.
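
Following that last suggestion, the extra arguments could be built into
the RADOS command variable once instead of editing every call site; a
rough sketch, assuming POOL_NAME and CEPH_USER have already been read
from the datastore template:

  # Append -p / --id only when the template actually defines them
  RADOS="rados ${POOL_NAME:+-p $POOL_NAME} ${CEPH_USER:+--id $CEPH_USER}"
  $RADOS df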
>>>
>>> On Tue, Jun 17, 2014 at 11:19 AM, Alvaro Simon Garcia
>>> <Alvaro.SimonGarcia@ugent.be> wrote:
>>>>
>>>> Hi
>>>>
>>>> We have included the admin keyring instead of the libvirt user and
>>>> it works... that means we can run rbd or qemu-img without the
>>>> libvirt id, but it is not the best solution. We have included the
>>>> user in the datastore configuration:
>>>>
>>>> CEPH_USER="libvirt"
>>>>
>>>> but it seems it is not used by OpenNebula in the end.
>>>>
>>>> Cheers
>>>> Alvaro
>>>>
>>>> On 2014-06-17 10:09, Alvaro Simon Garcia wrote:
>>>>>
>>>>> Hi all
>>>>>
>>>>> We have included our ONE nodes in the Ceph cluster and cephx auth
>>>>> is working, but OpenNebula is not able to detect the free space:
>>>>>
>>>>>> $ onedatastore show 103
>>>>>> DATASTORE 103 INFORMATION
>>>>>> ID             : 103
>>>>>> NAME           : ceph
>>>>>> USER           : oneadmin
>>>>>> GROUP          : oneadmin
>>>>>> CLUSTER        : -
>>>>>> TYPE           : IMAGE
>>>>>> DS_MAD         : ceph
>>>>>> TM_MAD         : ceph
>>>>>> BASE PATH      : /var/lib/one//datastores/103
>>>>>> DISK_TYPE      : RBD
>>>>>>
>>>>>> DATASTORE CAPACITY
>>>>>> TOTAL:         : 0M
>>>>>> FREE:          : 0M
>>>>>> USED:          : 0M
>>>>>> LIMIT:         : -
>>>>>>
>>>>>> PERMISSIONS
>>>>>> OWNER          : um-
>>>>>> GROUP          : u--
>>>>>> OTHER          : ---
>>>>>
>>>>>> $ onedatastore list
>>>>>>   ID NAME        SIZE    AVAIL  CLUSTER  IMAGES  TYPE  DS    TM
>>>>>>    0 system      114.8G  85%    -        0       sys   -     shared
>>>>>>    1 default     114.9G  84%    -        2       img   fs    ssh
>>>>>>    2 files       114.9G  84%    -        0       fil   fs    ssh
>>>>>>  103 ceph        0M      -      -        0       img   ceph  ceph
>>>>>
>>>>> but if we run rados as the oneadmin user:
>>>>>
>>>>>> $ rados df -p one --id libvirt
>>>>>> pool name  category  KB  objects  clones  degraded  unfound  rd  rd KB  wr  wr KB
>>>>>> one        -          0        0       0         0        0   0      0   0      0
>>>>>>   total used        1581852  37
>>>>>>   total avail  140846865180
>>>>>>   total space  140848447032
>>>>>
>>>>> it's working correctly (we are using the "one" pool and the
>>>>> libvirt Ceph id).
>>>>>
>>>>> The oned.log only shows this info:
>>>>>
>>>>> Tue Jun 17 10:06:37 2014 [InM][D]: Monitoring datastore default (1)
>>>>> Tue Jun 17 10:06:37 2014 [InM][D]: Monitoring datastore files (2)
>>>>> Tue Jun 17 10:06:37 2014 [InM][D]: Monitoring datastore ceph (103)
>>>>> Tue Jun 17 10:06:37 2014 [ImM][D]: Datastore default (1) successfully monitored.
>>>>> Tue Jun 17 10:06:37 2014 [ImM][D]: Datastore files (2) successfully monitored.
>>>>> Tue Jun 17 10:06:37 2014 [ImM][D]: Datastore ceph (103) successfully monitored.
>>>>>
>>>>> Any clue about how to debug this issue?
>>>>>
>>>>> Thanks in advance!
>>>>> Alvaro
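
One way to reproduce what the driver sees is to run, as oneadmin on one
of the BRIDGE_LIST hosts, the same command the monitor script runs (the
hostname here is the one from the datastore template above):

  $ ssh oneadmin@hyp004.cubone.os
  $ rados df                        # what the unpatched monitor runs
  $ rados df -p one --id libvirt    # the form that works with this cephx setup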

--
Javier Fontán Muiños
Developer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | @OpenNebula | github.com/jfontan

--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jmelis@opennebula.org

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org