[one-users] max number of VMs that can be started at once ?

Tino Vazquez tinova at fdi.ucm.es
Tue Jul 21 03:08:08 PDT 2009


Hi Sebastien,

There is indeed a limit in the number of threads that the transfer
manager uses. It is currently 10, and it can be changed in line 39 of
$ONE_LOCATION/lib/mads/one_tm.rb.
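To illustrate the mechanism (this is a sketch only, not the actual contents of one_tm.rb — the class and constant names here are hypothetical), the transfer manager effectively runs prolog/epilog scripts through a fixed-size worker pool, so transfers beyond the pool size wait in a queue:

```ruby
require 'thread'

# Stand-in for the value on line 39 of one_tm.rb; raising it
# allows more transfer scripts to run in parallel.
MAX_TRANSFER_THREADS = 10

class TransferPool
  def initialize(size = MAX_TRANSFER_THREADS)
    @queue   = Queue.new
    # Fixed number of workers: at most `size` transfers run at once;
    # the rest sit in the queue (your VMs stuck in PROLOG).
    @workers = Array.new(size) do
      Thread.new do
        while (job = @queue.pop)   # nil is the shutdown signal
          job.call
        end
      end
    end
  end

  # Enqueue one transfer (e.g. a tm_clone.sh invocation).
  def submit(&block)
    @queue << block
  end

  def shutdown
    @workers.size.times { @queue << nil }  # poison pills
    @workers.each(&:join)
  end
end
```

With a pool of 10, submitting 12 clones means the last 2 only start once earlier transfers finish — and a worker stuck on a crashed prolog never frees its slot, which would explain seeing fewer than 10.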

I am assuming that machines 67-72 are eventually transferred. It is
odd, though, that you experience a limit of 6 transfers instead of the
expected 10. This may be due to previously active epilogs or maybe even
crashed prologs. OpenNebula 1.4 will offer the ability to cancel these
crashed threads.

If there is no obvious explanation for the missing 4 transfers, you can
send through the tm_ssh.log for us to analyze.

Best regards,

-Tino

--
Constantino Vázquez, Grid Technology Engineer/Researcher:
http://www.dsa-research.org/tinova
DSA Research Group: http://dsa-research.org
Globus GridWay Metascheduler: http://www.GridWay.org
OpenNebula Virtual Infrastructure Engine: http://www.OpenNebula.org



On Mon, Jul 20, 2009 at 3:13 PM, sebastien goasguen<sebgoa at clemson.edu> wrote:
> Hi,
>
> Is there a maximum number of VMs that can be started at once?
>
> I am trying to deploy 12 VMs at once on three hosts.
> I have 6 VMs in the process of starting.
> But the other six have not even started to execute the tm_clone.sh
> script, even though they have been created in /one/var/ and each has
> a transfer file. The log shows them as being in prolog state.
>
> There are no errors in var/log/oned.log and var/log/tm_ssh.log, here
> is all I see:
>
> Mon Jul 20 14:45:45 2009: TRANSFER 67 /home/oneadmin/one/var/67/transfer.0
> Mon Jul 20 14:45:46 2009: TRANSFER 68 /home/oneadmin/one/var/68/transfer.0
> Mon Jul 20 14:45:46 2009: LOG - 64 tm_clone.sh: Executed "ssh
> lxbrl2315 sudo /usr/sbin/lvcreate -L25G -n 64-0 xen_vg".
> Mon Jul 20 14:45:46 2009: LOG - 64 tm_clone.sh: Creating a link from
> /opt/opennebula/64/images/disk.0 to the 64-0 volume
> Mon Jul 20 14:45:46 2009: LOG - 64 tm_clone.sh: Executed "ssh
> lxbrl2315 ln -s /dev/xen_vg/64-0 /opt/opennebula/64/images/disk.0".
> Mon Jul 20 14:45:46 2009: LOG - 65 tm_clone.sh: Executed "ssh
> lxbrl2316 sudo /usr/sbin/lvcreate -L25G -n 65-0 xen_vg".
> Mon Jul 20 14:45:46 2009: LOG - 65 tm_clone.sh: Creating a link from
> /opt/opennebula/65/images/disk.0 to the 65-0 volume
> Mon Jul 20 14:45:46 2009: LOG - 66 tm_clone.sh: Executed "ssh
> lxbrl2316 sudo /usr/sbin/lvcreate -L25G -n 66-0 xen_vg".
> Mon Jul 20 14:45:46 2009: LOG - 66 tm_clone.sh: Creating a link from
> /opt/opennebula/66/images/disk.0 to the 66-0 volume
> Mon Jul 20 14:45:47 2009: LOG - 65 tm_clone.sh: Executed "ssh
> lxbrl2316 ln -s /dev/xen_vg/65-0 /opt/opennebula/65/images/disk.0".
> Mon Jul 20 14:45:47 2009: LOG - 66 tm_clone.sh: Executed "ssh
> lxbrl2316 ln -s /dev/xen_vg/66-0 /opt/opennebula/66/images/disk.0".
> Mon Jul 20 14:46:12 2009: TRANSFER 69 /home/oneadmin/one/var/69/transfer.0
> Mon Jul 20 14:46:12 2009: TRANSFER 70 /home/oneadmin/one/var/70/transfer.0
> Mon Jul 20 14:46:12 2009: TRANSFER 71 /home/oneadmin/one/var/71/transfer.0
> Mon Jul 20 14:46:12 2009: TRANSFER 72 /home/oneadmin/one/var/72/transfer.0
>
> 67 through 72 are not starting but 61-66 are in the process ....
>
> any ideas ?
>
> Cheers,
>
> -sebastien
>
> --
> ---
> Sebastien Goasguen
> School of Computing
> Clemson University
> 864-656-6753
> http://runseb.googlepages.com
> _______________________________________________
> Users mailing list
> Users at lists.opennebula.org
> http://lists.opennebula.org/listinfo.cgi/users-opennebula.org
>

