[one-users] deploy Non-persistent vm on 4.6.2 base glusterfs no running
Jaime Melis
jmelis at opennebula.org
Wed Aug 13 01:55:01 PDT 2014
Hi,
did this work for you?
cheers,
Jaime
On Mon, Jul 14, 2014 at 2:38 AM, clm_tuan at hotmail.com <clm_tuan at hotmail.com>
wrote:
> Thank you for your reply.
>
> Regards.
>
>
> *From:* vincent <vincent at vanderkussen.org>
> *Date:* 2014-07-12 18:30
> *To:* clm_tuan <clm_tuan at hotmail.com>
> *CC:* users <users at lists.opennebula.org>
> *Subject:* Re: [one-users] deploy Non-persistent vm on 4.6.2 base
> glusterfs no running
> On 2014-07-11 10:17, clm_tuan at hotmail.com wrote:
> > Dear All,
> >
> > I am trying to run OpenNebula 4.6.2 on top of GlusterFS 3.5.1.
> >
> > There are three servers:
> > 1. the OpenNebula front-end, IP 192.168.0.198;
> > 2. node0, IP 192.168.0.199;
> > 3. node1, IP 192.168.0.200.
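> >
> > For context, the two Gluster volumes were created roughly like this (the
> > replica count and brick paths here are just placeholders for illustration,
> > not the exact commands I ran):
> >
> >   gluster volume create fc-img replica 2 \
> >     192.168.0.199:/export/fc-img 192.168.0.200:/export/fc-img
> >   gluster volume create fc-sys replica 2 \
> >     192.168.0.199:/export/fc-sys 192.168.0.200:/export/fc-sys
> >   gluster volume start fc-img
> >   gluster volume start fc-sys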
> >
> > Datastores configuration:
> >
> > [oneadmin at fc-server ]$ onedatastore show 102
> > DATASTORE 102 INFORMATION
> > ID : 102
> > NAME : sys
> > USER : oneadmin
> > GROUP : oneadmin
> > CLUSTER : c1
> > TYPE : SYSTEM
> > DS_MAD : -
> > TM_MAD : shared
> > BASE PATH : /var/lib/one//datastores/102
> > DISK_TYPE : FILE
> >
> > DATASTORE CAPACITY
> > TOTAL: : 131.3G
> > FREE: : 123.3G
> > USED: : 41M
> > LIMIT: : -
> >
> > PERMISSIONS
> > OWNER : um-
> > GROUP : u--
> > OTHER : ---
> >
> > DATASTORE TEMPLATE
> > BASE_PATH="/var/lib/one//datastores/"
> > GLUSTER_HOST="192.168.0.199:24007"
> > GLUSTER_VOLUME="fc-sys"
> > SHARED="YES"
> > TM_MAD="shared"
> > TYPE="SYSTEM_DS"
> >
> > [oneadmin at fc-server ]$ onedatastore show 103
> > DATASTORE 103 INFORMATION
> > ID : 103
> > NAME : img
> > USER : oneadmin
> > GROUP : oneadmin
> > CLUSTER : c1
> > TYPE : IMAGE
> > DS_MAD : fs
> > TM_MAD : shared
> > BASE PATH : /var/lib/one//datastores/103
> > DISK_TYPE :
> >
> > DATASTORE CAPACITY
> > TOTAL: : 131.3G
> > FREE: : 123.3G
> > USED: : 41M
> > LIMIT: : -
> >
> > PERMISSIONS
> > OWNER : um-
> > GROUP : u--
> > OTHER : ---
> >
> > DATASTORE TEMPLATE
> > BASE_PATH="/var/lib/one//datastores/"
> > CLONE_TARGET="SYSTEM"
> > DISK_TYPE="GLUSTER"
> > DS_MAD="fs"
> > GLUSTER_HOST="192.168.0.199:24007"
> > GLUSTER_VOLUME="fc-img"
> > LN_TARGET="NONE"
> > TM_MAD="shared"
> > TYPE="IMAGE_DS"
> >
> > 192.168.0.199:/fc-sys on /var/lib/one/datastores/102 type
> > fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
> > 192.168.0.199:/fc-img on /var/lib/one/datastores/103 type
> > fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
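> >
> > For reference, the datastores were registered from template files and the
> > Gluster volumes are FUSE-mounted on the front-end and on both nodes,
> > roughly like this (the file name below is just an example):
> >
> >   $ cat img.ds
> >   NAME           = "img"
> >   DS_MAD         = "fs"
> >   TM_MAD         = "shared"
> >   DISK_TYPE      = "GLUSTER"
> >   GLUSTER_HOST   = "192.168.0.199:24007"
> >   GLUSTER_VOLUME = "fc-img"
> >   CLONE_TARGET   = "SYSTEM"
> >   LN_TARGET      = "NONE"
> >   $ onedatastore create img.ds
> >   # (then added to cluster c1 with onecluster adddatastore)
> >
> >   # on the front-end and on every node:
> >   mount -t glusterfs 192.168.0.199:/fc-img /var/lib/one/datastores/103
> >   mount -t glusterfs 192.168.0.199:/fc-sys /var/lib/one/datastores/102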
> >
> > Deploying the image as PERSISTENT works fine:
> >
> > [oneadmin at fc-server ]$ oneimage show 1
> > IMAGE 1 INFORMATION
> > ID : 1
> > NAME : ttylinux
> > USER : oneadmin
> > GROUP : oneadmin
> > DATASTORE : img
> > TYPE : OS
> > REGISTER TIME : 07/11 21:35:11
> > PERSISTENT : Yes
> > SOURCE : /var/lib/one//datastores/103/4c40e4068ad2ff41c8767c40f85ae786
> >
> > PATH : http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000003/download/0
> > SIZE : 40M
> > STATE : used
> > RUNNING_VMS : 1
> >
> > PERMISSIONS
> > OWNER : um-
> > GROUP : ---
> > OTHER : ---
> >
> > IMAGE TEMPLATE
> > DEV_PREFIX="hd"
> > FROM_APP="4fc76a938fb81d3517000003"
> > FROM_APP_FILE="0"
> > FROM_APP_NAME="ttylinux - kvm"
> > MD5="04c7d00e88fa66d9aaa34d9cf8ad6aaa"
> >
> > VIRTUAL MACHINES
> >
> > ID USER GROUP NAME STAT UCPU UMEM HOST TIME
> > 8 oneadmin oneadmin tt runn 0 512M 192.168.0. -1d 16h32
> > [oneadmin at fc-server tm]$
> >
> > [oneadmin at fc-server 8]$ ll
> > total 366
> > -rw-rw-r-- 1 oneadmin oneadmin 992 Jul 11 13:44 deployment.0
> > lrwxrwxrwx 1 oneadmin oneadmin 60 Jul 11 13:44 disk.0 ->
> > /var/lib/one/datastores/103/4c40e4068ad2ff41c8767c40f85ae786
> > -rw-r--r-- 1 oneadmin oneadmin 372736 Jul 11 13:44 disk.1
> > lrwxrwxrwx 1 oneadmin oneadmin 36 Jul 11 13:44 disk.1.iso ->
> > /var/lib/one/datastores/102/8/disk.1
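> >
> > As far as I understand the shared TM driver, a persistent image is simply
> > symlinked into the system datastore, which matches the listing above:
> >
> >   ln -s /var/lib/one/datastores/103/4c40e4068ad2ff41c8767c40f85ae786 \
> >         /var/lib/one/datastores/102/8/disk.0
> >
> > A non-persistent image is copied into the system datastore instead, which
> > is what happens in the failing case below.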
> >
> > But when I deploy the image as non-persistent, the VM does not run:
> >
> > [oneadmin at fc-server 102]$ oneimage show 1
> > IMAGE 1 INFORMATION
> > ID : 1
> > NAME : ttylinux
> > USER : oneadmin
> > GROUP : oneadmin
> > DATASTORE : img
> > TYPE : OS
> > REGISTER TIME : 07/11 21:35:11
> > PERSISTENT : No
> > SOURCE : /var/lib/one//datastores/103/4c40e4068ad2ff41c8767c40f85ae786
> >
> > PATH : http://marketplace.c12g.com/appliance/4fc76a938fb81d3517000003/download/0
> > SIZE : 40M
> > STATE : used
> > RUNNING_VMS : 1
> >
> > PERMISSIONS
> > OWNER : um-
> > GROUP : ---
> > OTHER : ---
> >
> > IMAGE TEMPLATE
> > DEV_PREFIX="hd"
> > FROM_APP="4fc76a938fb81d3517000003"
> > FROM_APP_FILE="0"
> > FROM_APP_NAME="ttylinux - kvm"
> > MD5="04c7d00e88fa66d9aaa34d9cf8ad6aaa"
> >
> > VIRTUAL MACHINES
> >
> > ID USER GROUP NAME STAT UCPU UMEM HOST TIME
> > 10 oneadmin oneadmin tt fail 0 0K 0d 00h00
> >
> > [oneadmin at fc-server 102]$ ls
> > 10
> > [oneadmin at fc-server 102]$ cd 10
> > [oneadmin at fc-server 10]$ ll
> > total 41325
> > -rw-rw-r-- 1 oneadmin oneadmin 971 Jul 11 14:35 deployment.0
> > -rw-r--r-- 1 oneadmin oneadmin 41943040 Jul 11 14:35 disk.0
> > -rw-r--r-- 1 oneadmin oneadmin 372736 Jul 11 14:35 disk.1
> > lrwxrwxrwx 1 oneadmin oneadmin 37 Jul 11 14:35 disk.1.iso ->
> > /var/lib/one/datastores/102/10/disk.1
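> >
> > So disk.0 is a plain copy sitting on the fc-sys volume (system datastore
> > 102), while the deploy step in the log below points qemu at fc-img. One way
> > to double-check this from the node, assuming qemu-img was built with
> > Gluster support (these are diagnostic commands, not output I captured):
> >
> >   # the copy should be visible on the fc-sys volume...
> >   qemu-img info gluster://192.168.0.199:24007/fc-sys/10/disk.0
> >   # ...while 10/disk.0 does not exist on fc-img, which is what qemu complains about
> >   qemu-img info gluster://192.168.0.199:24007/fc-img/10/disk.0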
> >
> > The oned log shows:
> >
> > Fri Jul 11 14:35:26 2014 [DiM][D]: Deploying VM 10
> > Fri Jul 11 14:35:26 2014 [ReM][D]: Req:8400 UID:0 VirtualMachineDeploy
> > result SUCCESS, 10
> > Fri Jul 11 14:35:31 2014 [TM][D]: Message received: TRANSFER SUCCESS
> > 10 -
> >
> > Fri Jul 11 14:35:32 2014 [VMM][D]: Message received: LOG I 10
> > ExitCode: 0
> >
> > Fri Jul 11 14:35:32 2014 [VMM][D]: Message received: LOG I 10
> > Successfully execute network driver operation: pre.
> >
> > Fri Jul 11 14:35:33 2014 [VMM][D]: Message received: LOG I 10 Command
> > execution fail: cat << EOT | /var/tmp/one/vmm/kvm/deploy
> > '/var/lib/one//datastores/102/10/deployment.0' '192.168.0.199' 10
> > 192.168.0.199
> >
> > Fri Jul 11 14:35:33 2014 [VMM][D]: Message received: LOG I 10 error:
> > Failed to create domain from
> > /var/lib/one//datastores/102/10/deployment.0
> >
> > Fri Jul 11 14:35:33 2014 [VMM][D]: Message received: LOG I 10 error:
> > internal error process exited while connecting to monitor: qemu-kvm:
> > -drive file=gluster+tcp://192.168.0.199:24007/fc-img/10/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
> > could not open disk image gluster+tcp://192.168.0.199:24007/fc-img/10/disk.0:
> > No such file or directory
> >
> > Fri Jul 11 14:35:33 2014 [VMM][D]: Message received: LOG I 10
> > [2014-07-11 06:35:36.105637] E [afr-common.c:4157:afr_notify]
> > 0-fc-img-replicate-0: All subvolumes are down.
> > Going offline until atleast one of them comes back up.
> >
> > Fri Jul 11 14:35:33 2014 [VMM][D]: Message received: LOG I 10
> >
> > Fri Jul 11 14:35:33 2014 [VMM][D]: Message received: LOG E 10 Could
> > not create domain from /var/lib/one//datastores/102/10/deployment.0
> >
> > Fri Jul 11 14:35:33 2014 [VMM][D]: Message received: LOG I 10
> > ExitCode: 255
> >
> > Fri Jul 11 14:35:33 2014 [VMM][D]: Message received: LOG I 10 Failed
> > to execute virtualization driver operation: deploy.
> >
> > [oneadmin at fc-server 102]$ more /var/log/one/10.log
> > Fri Jul 11 14:35:26 2014 [DiM][I]: New VM state is ACTIVE.
> > Fri Jul 11 14:35:27 2014 [LCM][I]: New VM state is PROLOG.
> > Fri Jul 11 14:35:32 2014 [LCM][I]: New VM state is BOOT
> > Fri Jul 11 14:35:32 2014 [VMM][I]: Generating deployment file:
> > /var/lib/one/vms/10/deployment.0
> > Fri Jul 11 14:35:32 2014 [VMM][I]: ExitCode: 0
> > Fri Jul 11 14:35:32 2014 [VMM][I]: Successfully execute network driver
> > operation: pre.
> > Fri Jul 11 14:35:33 2014 [VMM][I]: Command execution fail: cat << EOT
> > | /var/tmp/one/vmm/kvm/deploy
> > '/var/lib/one//datastores/102/10/deployment.0' '192.168.0.199' 10 192.168.0.199
> > Fri Jul 11 14:35:33 2014 [VMM][I]: error: Failed to create domain from
> > /var/lib/one//datastores/102/10/deployment.0
> > Fri Jul 11 14:35:33 2014 [VMM][I]: error: internal error process
> > exited while connecting to monitor: qemu-kvm: -drive
> > file=gluster+tcp://192.168.0.199:24007/fc-img/10/disk.0,if=none,id=drive-ide0-0-0,format=raw,cache=none:
> > could not open disk image gluster+tcp://192.168.0.199:24007/fc-img/10/disk.0:
> > No such file or directory
> > Fri Jul 11 14:35:33 2014 [VMM][I]: [2014-07-11 06:35:36.105637] E
> > [afr-common.c:4157:afr_notify] 0-fc-img-replicate-0: All subvolumes
> > are down. Going offline until atleast one of them comes back up.
> > Fri Jul 11 14:35:33 2014 [VMM][I]:
> > Fri Jul 11 14:35:33 2014 [VMM][E]: Could not create domain from
> > /var/lib/one//datastores/102/10/deployment.0
> > Fri Jul 11 14:35:33 2014 [VMM][I]: ExitCode: 255
> > Fri Jul 11 14:35:33 2014 [VMM][I]: Failed to execute virtualization
> > driver operation: deploy.
> > Fri Jul 11 14:35:33 2014 [VMM][E]: Error deploying virtual machine:
> > Could not create domain from
> > /var/lib/one//datastores/102/10/deployment.0
> > Fri Jul 11 14:35:35 2014 [DiM][I]: New VM state is FAILED
> >
> > So qemu-kvm is being pointed at:
> > gluster+tcp://192.168.0.199:24007/fc-img/10/disk.0
> > but I think it should be:
> > gluster+tcp://192.168.0.199:24007/fc-sys/10/disk.0
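> >
> > For what it's worth, I expect the disk section of deployment.0 to look
> > something like this (a sketch based on libvirt's gluster disk syntax, not
> > the exact generated file):
> >
> >   <disk type='network' device='disk'>
> >     <driver name='qemu' type='raw' cache='none'/>
> >     <source protocol='gluster' name='fc-img/10/disk.0'>
> >       <host name='192.168.0.199' port='24007'/>
> >     </source>
> >     <target dev='hda' bus='ide'/>
> >   </disk>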
> >
> > What advice do you have? Thanks.
> >
>
> Hi,
>
> I've been setting up our OpenNebula private cloud over the last few
> weeks, and I'm also using Gluster for the storage backend. Not all of the
> information needed to make it work is available in the OpenNebula docs,
> and I was planning to submit the missing details.
>
> After reading your post I decided not to postpone it any longer.
>
> Have a look at this pull request:
>
> https://github.com/vincentvdk/docs/commit/98e70a83b40b58d84e00d3180023671263e2b787
>
> Adding the 'virt' group should fix your problem.
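>
> In short, the Gluster side needs roughly the following settings (see the
> pull request for the exact steps; the uid/gid values below are
> placeholders for your oneadmin user):
>
>   gluster volume set fc-img group virt
>   gluster volume set fc-img server.allow-insecure on
>   gluster volume set fc-img storage.owner-uid <oneadmin-uid>
>   gluster volume set fc-img storage.owner-gid <oneadmin-gid>
>   # plus, in /etc/glusterfs/glusterd.vol on every Gluster node:
>   #   option rpc-auth-allow-insecure on
>   # and restart glusterd afterwards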
>
> Regards,
> Vincent
>
--
Jaime Melis
Project Engineer
OpenNebula - Flexible Enterprise Cloud Made Simple
www.OpenNebula.org | jmelis at opennebula.org