[one-users] Getting Ceph VM to boot--Re: Configuring Ceph datastore 4.6

Steven Timm timm at fnal.gov
Wed Nov 19 07:59:41 PST 2014


Hi Jaime, yes, we did figure it out eventually.  It turned out the image I had
stored in Ceph was not a good image. Once I loaded another image into Ceph
and booted that, I was fine. The version of libvirt I was running only
supported raw images over Ceph, not qcow2, and my first raw image had been
truncated in some bad way.
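
For anyone who hits the same thing, a couple of sanity checks would have
caught this up front (the paths are the ones from the logs below; for a raw
image a truncated file just looks like a smaller disk, so compare the
reported size against what you expect):

    qemu-img info /cloud/images/timm/40gb.qcow2    # format and virtual size
    qemu-img check /cloud/images/timm/40gb.qcow2   # consistency check (qcow2 only)

    # and since this libvirt only boots raw images over Ceph, convert first:
    qemu-img convert -O raw /cloud/images/timm/40gb.qcow2 /cloud/images/timm/40gb.raw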

Steve Timm


On Wed, 19 Nov 2014, Jaime Melis wrote:

> Hi Steven,
> Sorry, but this email fell through the cracks.
> 
> Did you ever manage to launch a Ceph VM? Or are you still stumped by this issue?
> 
> If it's still not working for you, can you send us the output of "onedatastore show -x <id>", where
> <id> is the Ceph datastore's ID?
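>
> For example, if the Ceph datastore is the one with ID 103 that shows up in
> your log below, that would be:
>
>     onedatastore show -x 103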
> 
> Regards,
> Jaime
> 
> On Wed, Sep 24, 2014 at 4:49 PM, Steven Timm <timm at fnal.gov> wrote:
>       We have now upgraded to OpenNebula 4.8.0 and are still struggling
>       to launch a Ceph VM successfully.  We have worked out all the
>       permissions issues on the client host and in the Ceph datastore,
>       and have gotten to the point where OpenNebula can deploy the VM
>       from RBD and the virtual machine starts, but we get "Geom Error"
>       on the console of the VM and that is all.  Has anyone seen this
>       error before, and does anyone have any idea how to deal with it?
>       I presume it means that the virtual machine cannot even find the
>       boot sector of the disk that it sees as /dev/vda, but I can't find
>       any information on this error anywhere.  Any help is appreciated.
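>
>       For what it's worth, "Geom Error" at that stage looks like a
>       boot-loader message (GRUB stage 1 prints it when it cannot read the
>       disk geometry), so one check is to pull the image back out of RBD
>       and make sure it still looks like a bootable disk.  The pool/image
>       names here are just the ones from our logs below:
>
>           rbd export one/one-18 /tmp/one-18.img
>           file /tmp/one-18.img            # should report an x86 boot sector
>           qemu-img info /tmp/one-18.img   # size/format should match the original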
>
>       We are trying to replace an old SAN-GFS file store with Ceph,
>       but we will need better results than this if that is going to work.
>
>       Steve Timm
> 
>
>       On Wed, 10 Sep 2014, Steven Timm wrote:
>
>             The first and most obvious problem below was that we were running the
>             old versions of qemu-img and qemu-kvm that ship with RHEL6/CentOS6/SL6,
>             which don't support the "rbd" format.  We were able to find
>             a modified version that the Ceph people had back-ported, and now
>             we can import an image into the datastore and have gotten
>             as far as getting a deployment.0 written on the hypervisor.
>             It can't contact Ceph yet; it is getting connection refused, but
>             we think that is an authentication issue.
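>
>             For reference, a quick way to tell whether a given qemu-img
>             build has rbd support at all (the stock EL6 build does not)
>             is something like:
>
>                 qemu-img --help | grep rbd   # the "Supported formats:" line mentions rbd only if compiled in
>                 ceph -s                      # separately, confirm basic cluster connectivity/auth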
>
>             http://ceph.com/packages/qemu-kvm/redhat/x86_64/
>             is where these packages for 6.2 live.
>
>             Hopefully this all gets easier pretty soon now that Red Hat has bought
>             Inktank (the company behind Ceph) and all the right packages will be in
>             RHEL7.  Or will they only be available with Red Hat Enterprise
>             Virtualization?  Has anyone tried yet?
>
>             Steve
> 
> 
>
>             On Wed, 10 Sep 2014, Steven Timm wrote:
> 
> 
>
>                    I have configured a Ceph datastore on OpenNebula 4.6 and have
>                    gotten as far as getting OpenNebula to accept the datastore.
>                    But when we try to do the first oneimage create into the
>                    datastore, we get the following error in oned.log:
>
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: Command execution fail:
>                    /var/lib/one/remotes/datastore/ceph/cp
> PERTX0RSSVZFUl9BQ1RJT05fREFUQT48SU1BR0U+PElEPjE4PC9JRD48VUlEPjA8L1VJRD48R0lEPjA8L0dJRD48VU5BTUU+b
> 25lYWRtaW48L1VOQU1FPjxHTkFNRT5vbmVhZG1pbjwvR05BTUU+PE5BTUU+Y2VwaHRlc3Q8L05BTUU+PFBFUk1JU1NJT05TPjx
> PV05FUl9VPjE8L09XTkVSX1U+PE9XTkVSX00+MTwvT1dORVJfTT48T1dORVJfQT4wPC9PV05FUl9BPjxHUk9VUF9VPjA8L0dST
> 1VQX1U+PEdST1VQX00+MDwvR1JPVVBfTT48R1JPVVBfQT4wPC9HUk9VUF9BPjxPVEhFUl9VPjA8L09USEVSX1U+PE9USEVSX00
> +MDwvT1RIRVJfTT48T1RIRVJfQT4wPC9PVEhFUl9BPjwvUEVSTUlTU0lPTlM+PFRZUEU+MDwvVFlQRT48RElTS19UWVBFPjM8L
> 0RJU0tfVFlQRT48UEVSU0lTVEVOVD4wPC9QRVJTSVNURU5UPjxSRUdUSU1FPjE0MTAzNzI1NTU8L1JFR1RJTUU+PFNPVVJDRT4
> 8L1NPVVJDRT48UEFUSD4vY2xvdWQvaW1hZ2VzL3RpbW0vNDBnYi5xY293MjwvUEFUSD48RlNUWVBFPjwvRlNUWVBFPjxTSVpFP
> jQwOTYwPC9TSVpFPjxTVEFURT40PC9TVEFURT48UlVOTklOR19WTVM+MDwvUlVOTklOR19WTVM+PENMT05JTkdfT1BTPjA8L0N
> MT05JTkdfT1BTPjxDTE9OSU5HX0lEPi0xPC9DTE9OSU5HX0lEPjxEQVRBU1RPUkVfSUQ+MTAzPC9EQVRBU1RPUkVfSUQ+PERBV
> EFTVE9SRT5jZXBoX2RhdGFzdG9yZTwvREFUQVNUT1JFPjxWTVM+PC9WTVM+PENMT05FUz48L0NMT05FUz48VEVNUExBVEU+PERFU0NSSVBUS
> U9OPjwhW0NEQVRBW3Rlc3QgY2VwaCBnb2xkZW4gaW1nXV0+PC9ERVNDUklQVElPTj48REVWX1BSRUZJWD48IVtDREFUQVtoZF
> 1dPjwvREVWX1BSRUZJWD48RFJJVkVSPjwhW0NEQVRBW3Fjb3cyXV0+PC9EUklWRVI+PC9URU1QTEFURT48L0lNQUdFPjxEQVRB
> U1RPUkU+PElEPjEwMzwvSUQ+PFVJRD4wPC9VSUQ+PEdJRD4wPC9HSUQ+PFVOQU1FPm9uZWFkbWluPC9VTkFNRT48R05BTUU+b2
> 5lYWRtaW48L0dOQU1FPjxOQU1FPmNlcGhfZGF0YXN0b3JlPC9OQU1FPjxQRVJNSVNTSU9OUz48T1dORVJfVT4xPC9PV05FUl9V
> PjxPV05FUl9NPjE8L09XTkVSX00+PE9XTkVSX0E+MDwvT1dORVJfQT48R1JPVVBfVT4xPC9HUk9VUF9VPjxHUk9VUF9NPjA8L0
> dST1VQX00+PEdST1VQX0E+MDwvR1JPVVBfQT48T1RIRVJfVT4wPC9PVEhFUl9VPjxPVEhFUl9NPjA8L09USEVSX00+PE9USEVS
> X0E+MDwvT1RIRVJfQT48L1BFUk1JU1NJT05TPjxEU19NQUQ+Y2VwaDwvRFNfTUFEPjxUTV9NQUQ+Y2VwaDwvVE1fTUFEPjxCQV
> NFX1BBVEg+L3Zhci9saWIvb25lLy9kYXRhc3RvcmVzLzEwMzwvQkFTRV9QQVRIPjxUWVBFPjA8L1RZUEU+PERJU0tfVFlQRT4z
> PC9ESVNLX1RZUEU+PENMVVNURVJfSUQ+LTE8L0NMVVNURVJfSUQ+PENMVVNURVI+PC9DTFVTVEVSPjxUT1RBTF9NQj42MTAyNz
> MyODwvVE9UQUxfTUI+PEZSRUVfTUI+NjA4NTE1NTI8L0ZSRUVfTUI+PFVTRURfTUI+MTc1Nzc2PC9VU0VEX01CPjxJTUFHRVM+PC9JTUFHRV
> M+PFRFTVBMQVRFPjxCQVNFX1BBVEg+PCFbQ0RBVEFbL3Zhci9saWIvb25lLy9kYXRhc3RvcmVzL11dPjwvQkFTRV9QQVRIPjx
> CUklER0VfTElTVD48IVtDREFUQVtvbmU0ZGV2XV0+PC9CUklER0VfTElTVD48Q0VQSF9IT1NUPjwhW0NEQVRBW3N0a2VuZGNhM
> DFhIHN0a2VuZGNhMDRhIHN0a2VuZGNhMDJhXV0+PC9DRVBIX0hPU1Q+PENFUEhfU0VDUkVUPjwhW0NEQVRBWy9ldGMvY2VwaC9
> jZXBoLmNsaWVudC5hZG1pbi5rZXlyaW5nXV0+PC9DRVBIX1NFQ1JFVD48Q0xPTkVfVEFSR0VUPjwhW0NEQVRBW1NFTEZdXT48L
> 0NMT05FX1RBUkdFVD48REFUQVNUT1JFX0NBUEFDSVRZX0NIRUNLPjwhW0NEQVRBW3llc11dPjwvREFUQVNUT1JFX0NBUEFDSVR
> ZX0NIRUNLPjxESVNLX1RZUEU+PCFbQ0RBVEFbUkJEXV0+PC9ESVNLX1RZUEU+PERTX01BRD48IVtDREFUQVtjZXBoXV0+PC9EU
> 19NQUQ+PExOX1RBUkdFVD48IVtDREFUQVtOT05FXV0+PC9MTl9UQVJHRVQ+PFBPT0xfTkFNRT48IVtDREFUQVtvbmVdXT48L1B
> PT0xfTkFNRT48U1RBR0lOR19ESVI+PCFbQ0RBVEFbL3Zhci9saWIvb25lL2NlcGgtdG1wXV0+PC9TVEFHSU5HX0RJUj48VE1fT
> UFEPjwhW0NEQVRBW2NlcGhdXT48L1RNX01BRD48L1RFTVBMQVRFPjwvREFUQVNUT1JFPjwvRFNfRFJJVkVSX0FDVElPTl9EQVR
> BPg==
>                    18
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: cp: Copying local image /cloud/images/timm/40gb.qcow2 to the image repository
>                    Wed Sep 10 13:10:44 2014 [ImM][E]: cp: Command "    set -e
>                    Wed Sep 10 13:10:44 2014 [ImM][I]:
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: if [ "" = "2" ]; then
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: FORMAT=$(qemu-img info /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c | grep "^file format:" | awk '{print $2}')
>                    Wed Sep 10 13:10:44 2014 [ImM][I]:
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: if [ "$FORMAT" != "raw" ]; then
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: qemu-img convert -O raw /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c.raw
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: mv /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c.raw /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: fi
>                    Wed Sep 10 13:10:44 2014 [ImM][I]:
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: rbd import --format 2 /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c one/one-18
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: else
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: qemu-img convert /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c rbd:one/one-18
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: fi
>                    Wed Sep 10 13:10:44 2014 [ImM][I]:
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: # remove original
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: rm -f /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c" failed: Unknown protocol 'rbd:one/one-18'
>                    Wed Sep 10 13:10:44 2014 [ImM][E]: Error registering one/one-18 in one4dev
>                    Wed Sep 10 13:10:44 2014 [ImM][I]: ExitCode: 1
>                    Wed Sep 10 13:10:44 2014 [ImM][E]: Error copying image in the datastore: Error registering one/one-18 in one4dev
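>
>                    (For the archives: "Unknown protocol 'rbd:one/one-18'" is
>                    qemu-img reporting that it has no rbd block driver compiled
>                    in, which is what the back-ported qemu-kvm packages above
>                    fix.  The failing step can be reproduced in isolation, as
>                    oneadmin, with the same command from the log:
>
>                        qemu-img convert /var/lib/one/ceph-tmp/68bec7e25cb73c98a31a48117022d72c rbd:one/one-18
>                    )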
> 
> 
> 
>
>                    ---
>
>                    Clearly we failed to register the RBD image, but it's not
>                    clear why.  Any hints or clues on why we failed would be helpful.
>                    Several places in the docs refer to libvirt 1.x.  Has anyone
>                    made this work on RHEL6/CentOS 6?  (We are running the newer
>                    kernel, so we do have the rbd kernel module available, and rbd
>                    import/export works from the command line; see the example below.)
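>
>                    By "works from the command line" I mean a manual round-trip
>                    along these lines succeeds (the pool name "one" is from our
>                    datastore template; the test image name is made up):
>
>                        rbd import /cloud/images/timm/40gb.qcow2 one/import-test
>                        rbd export one/import-test /tmp/import-test.out
>                        rbd rm one/import-test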
>
>                    Steve Timm
> 
>
> 
>
> 
>
> 
> 
> 
> 
> --
> OpenNebula - Flexible Enterprise Cloud Made Simple
> --
> Jaime Melis
> Senior Infrastructure Architect at OpenNebula Systems (formerly C12G Labs)
> jmelis at opennebula.systems | @OpenNebula
> 
>

------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm at fnal.gov  http://home.fnal.gov/~timm/
Office:  Wilson Hall room 804
Fermilab Scientific Computing Division,
Currently transitioning from:
Scientific Computing Services Quadrant
Grid and Cloud Services Dept., Associate Dept. Head for Cloud Computing

To:
Scientific Computing Facilities Quadrant.,
Experimental Computing Facilities Dept.,
Project Lead for Virtual Facility Project.


