<div dir="ltr">I forgot to add that you can get the xsd files from the documentation [1] or the repository [2]<div><br></div><div>Regards</div><div><br></div><div>[1] <a href="http://opennebula.org/documentation:rel4.2:api#xsd_reference">http://opennebula.org/documentation:rel4.2:api#xsd_reference</a><br>
</div><div>[2] <a href="http://dev.opennebula.org/projects/opennebula/repository/revisions/release-4.2/show/share/doc/xsd">http://dev.opennebula.org/projects/opennebula/repository/revisions/release-4.2/show/share/doc/xsd</a><br>
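
For example, once you have downloaded the individual schemas (host.xsd, vm.xsd, ...), something along these lines should let you check a dumped body column; the file names below are just placeholders:

    xmllint --noout --schema host.xsd host-30-body.xml
    xmllint --noout --schema vm.xsd vm-312-body.xml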
</div></div><div class="gmail_extra"><br clear="all"><div><div dir="ltr">--<br>Join us at <a href="http://opennebulaconf.com" target="_blank">OpenNebulaConf2013</a> in Berlin, 24-26 September, 2013<br>--<div>Carlos Martín, MSc<br>
Project Engineer<br>OpenNebula - The Open-source Solution for Data Center Virtualization<div><span style="border-collapse:collapse;color:rgb(136,136,136);font-family:arial,sans-serif;font-size:13px"><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:cmartin@opennebula.org" target="_blank">cmartin@opennebula.org</a> | <a href="http://twitter.com/opennebula" target="_blank">@OpenNebula</a></span><span style="border-collapse:collapse;color:rgb(136,136,136);font-family:arial,sans-serif;font-size:13px"><a href="mailto:cmartin@opennebula.org" style="color:rgb(42,93,176)" target="_blank"></a></span></div>
</div></div></div>
<br><br><div class="gmail_quote">On Tue, Aug 27, 2013 at 6:19 PM, Carlos Martín Sánchez <span dir="ltr"><<a href="mailto:cmartin@opennebula.org" target="_blank">cmartin@opennebula.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hi,<div><br></div><div>Something did not work in the migration process...</div><div>The HOST/VMS element you mention should have been added by this file [1], and your VM xml is missing the USER_TEMPLATE element, which is added here [2].</div>
<div><br></div><div>Can you compare the contents of your migrator files in /usr/lib/one/ruby/onedb/ to the repo [3]?</div><div><br></div><div>Regards</div><div><br></div><div>[1] <a href="http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/onedb/3.8.0_to_3.8.1.rb#L92" target="_blank">http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/onedb/3.8.0_to_3.8.1.rb#L92</a><br>
</div><div>[2] <a href="http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/onedb/3.8.4_to_3.9.80.rb#L410" target="_blank">http://dev.opennebula.org/projects/opennebula/repository/revisions/master/entry/src/onedb/3.8.4_to_3.9.80.rb#L410</a></div>
<div>[3] <a href="http://dev.opennebula.org/projects/opennebula/repository/revisions/master/show/src/onedb" target="_blank">http://dev.opennebula.org/projects/opennebula/repository/revisions/master/show/src/onedb</a><br>
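
If it helps, a quick way to spot local modifications is to diff the installed migrators against a checkout of the source; roughly like this (the checkout path is just an example):

    cd ~/one-source    # a local checkout of the OpenNebula repo
    diff -u src/onedb/3.8.0_to_3.8.1.rb /usr/lib/one/ruby/onedb/3.8.0_to_3.8.1.rb
    diff -u src/onedb/3.8.4_to_3.9.80.rb /usr/lib/one/ruby/onedb/3.8.4_to_3.9.80.rb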
</div><div class="gmail_extra"><div class="im">
<br clear="all"><div><div dir="ltr">--<br>Join us at <a href="http://opennebulaconf.com" target="_blank">OpenNebulaConf2013</a> in Berlin, 24-26 September, 2013<br>--<div>Carlos Martín, MSc<br>Project Engineer<br>OpenNebula - The Open-source Solution for Data Center Virtualization<div>
<span style="border-collapse:collapse;color:rgb(136,136,136);font-family:arial,sans-serif;font-size:13px"><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a> | <a href="mailto:cmartin@opennebula.org" target="_blank">cmartin@opennebula.org</a> | <a href="http://twitter.com/opennebula" target="_blank">@OpenNebula</a></span><span style="border-collapse:collapse;color:rgb(136,136,136);font-family:arial,sans-serif;font-size:13px"><a href="mailto:cmartin@opennebula.org" style="color:rgb(42,93,176)" target="_blank"></a></span></div>
</div></div></div>
<br><br></div><div><div class="h5"><div class="gmail_quote">On Mon, Aug 26, 2013 at 2:53 PM, Federico Zani <span dir="ltr"><<a href="mailto:federico.zani@roma2.infn.it" target="_blank">federico.zani@roma2.infn.it</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF">
Hi Carlos,
the problem is that I can't even get the XML of the VMs.
It seems to be related to how the XML in the "body" column of the database (for both hosts and VMs) is structured.

Looking more deeply into the migration scripts, I solved the host problem by adding the <VMS> node (even without children) under the <HOST> tag in the body column of the "host_pool" table, but for the VMs I still have to find a solution.
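
(For the record, the host fix was a one-off string edit on the body column, roughly along these lines; I ran it on a copy of the DB first, and the match on </HOST_SHARE> simply reflects where the element sits in my host bodies:)

    sqlite3 /var/lib/one/one.db "UPDATE host_pool
      SET body = replace(body, '</HOST_SHARE>', '</HOST_SHARE><VMS></VMS>')
      WHERE body NOT LIKE '%<VMS>%';"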

With host access restored I am now able to submit and control new VM instances, but I have dozens of running VMs that I cannot even destroy (not even with the force switch turned on).

This is the XML of one of my hosts, as returned by onehost show -x (sensitive names are redacted with "[...]"):

<HOST>
  <ID>15</ID>
  <NAME>[...]</NAME>
  <STATE>2</STATE>
  <IM_MAD>im_kvm</IM_MAD>
  <VM_MAD>vmm_kvm</VM_MAD>
  <VN_MAD>dummy</VN_MAD>
  <LAST_MON_TIME>1377520947</LAST_MON_TIME>
  <CLUSTER_ID>101</CLUSTER_ID>
  <CLUSTER>[...]</CLUSTER>
  <HOST_SHARE>
    <DISK_USAGE>0</DISK_USAGE>
    <MEM_USAGE>20971520</MEM_USAGE>
    <CPU_USAGE>1800</CPU_USAGE>
    <MAX_DISK>0</MAX_DISK>
    <MAX_MEM>24596936</MAX_MEM>
    <MAX_CPU>2400</MAX_CPU>
    <FREE_DISK>0</FREE_DISK>
    <FREE_MEM>5558100</FREE_MEM>
    <FREE_CPU>2323</FREE_CPU>
    <USED_DISK>0</USED_DISK>
    <USED_MEM>19038836</USED_MEM>
    <USED_CPU>76</USED_CPU>
    <RUNNING_VMS>6</RUNNING_VMS>
  </HOST_SHARE>
  <VMS>
    <ID>326</ID>
  </VMS>
  <TEMPLATE>
    <ARCH><![CDATA[x86_64]]></ARCH>
    <CPUSPEED><![CDATA[1600]]></CPUSPEED>
    <FREECPU><![CDATA[2323.2]]></FREECPU>
    <FREEMEMORY><![CDATA[5558100]]></FREEMEMORY>
    <HOSTNAME><![CDATA[[...]]]></HOSTNAME>
    <HYPERVISOR><![CDATA[kvm]]></HYPERVISOR>
    <MODELNAME><![CDATA[Intel(R) Xeon(R) CPU E5645 @ 2.40GHz]]></MODELNAME>
    <NETRX><![CDATA[16007208117863]]></NETRX>
    <NETTX><![CDATA[1185926401588]]></NETTX>
    <TOTALCPU><![CDATA[2400]]></TOTALCPU>
    <TOTALMEMORY><![CDATA[24596936]]></TOTALMEMORY>
    <TOTAL_ZOMBIES><![CDATA[5]]></TOTAL_ZOMBIES>
    <USEDCPU><![CDATA[76.8000000000002]]></USEDCPU>
    <USEDMEMORY><![CDATA[19038836]]></USEDMEMORY>
    <ZOMBIES><![CDATA[one-324, one-283, one-314, one-317, one-304]]></ZOMBIES>
  </TEMPLATE>
</HOST>

As you can see, every host now reports its connected VMs as "zombies", probably because it cannot query them.

I am also sending you the XML contained in the "body" column of the vm_pool table for one of the VMs I cannot query with onevm show:

<VM>
  <ID>324</ID>
  <UID>0</UID>
  <GID>0</GID>
  <UNAME>oneadmin</UNAME>
  <GNAME>oneadmin</GNAME>
  <NAME>[...]</NAME>
  <PERMISSIONS>
    <OWNER_U>1</OWNER_U>
    <OWNER_M>1</OWNER_M>
    <OWNER_A>0</OWNER_A>
    <GROUP_U>0</GROUP_U>
    <GROUP_M>0</GROUP_M>
    <GROUP_A>0</GROUP_A>
    <OTHER_U>0</OTHER_U>
    <OTHER_M>0</OTHER_M>
    <OTHER_A>0</OTHER_A>
  </PERMISSIONS>
  <LAST_POLL>1375778872</LAST_POLL>
  <STATE>3</STATE>
  <LCM_STATE>3</LCM_STATE>
  <RESCHED>0</RESCHED>
  <STIME>1375457045</STIME>
  <ETIME>0</ETIME>
  <DEPLOY_ID>one-324</DEPLOY_ID>
  <MEMORY>4194304</MEMORY>
  <CPU>9</CPU>
  <NET_TX>432290511</NET_TX>
  <NET_RX>2072231827</NET_RX>
  <TEMPLATE>
    <CONTEXT>
      <ETH0_DNS><![CDATA[[...]]]></ETH0_DNS>
      <ETH0_GATEWAY><![CDATA[[...]]]></ETH0_GATEWAY>
      <ETH0_IP><![CDATA[[...]]]></ETH0_IP>
      <ETH0_MASK><![CDATA[[...]]]></ETH0_MASK>
      <FILES><![CDATA[[...]]]></FILES>
      <HOSTNAME><![CDATA[[...]]]></HOSTNAME>
      <TARGET><![CDATA[hdb]]></TARGET>
    </CONTEXT>
    <CPU><![CDATA[4]]></CPU>
    <DISK>
      <CLONE><![CDATA[YES]]></CLONE>
      <CLUSTER_ID><![CDATA[101]]></CLUSTER_ID>
      <DATASTORE><![CDATA[nonshared_ds]]></DATASTORE>
      <DATASTORE_ID><![CDATA[101]]></DATASTORE_ID>
      <DEV_PREFIX><![CDATA[hd]]></DEV_PREFIX>
      <DISK_ID><![CDATA[0]]></DISK_ID>
      <IMAGE><![CDATA[[...]]]></IMAGE>
      <IMAGE_ID><![CDATA[119]]></IMAGE_ID>
      <IMAGE_UNAME><![CDATA[oneadmin]]></IMAGE_UNAME>
      <READONLY><![CDATA[NO]]></READONLY>
      <SAVE><![CDATA[NO]]></SAVE>
      <SOURCE><![CDATA[/var/lib/one/datastores/101/3860dfcd1bec39ce672ba855564b44ca]]></SOURCE>
      <TARGET><![CDATA[hda]]></TARGET>
      <TM_MAD><![CDATA[ssh]]></TM_MAD>
      <TYPE><![CDATA[FILE]]></TYPE>
    </DISK>
    <DISK>
      <DEV_PREFIX><![CDATA[hd]]></DEV_PREFIX>
      <DISK_ID><![CDATA[1]]></DISK_ID>
      <FORMAT><![CDATA[ext3]]></FORMAT>
      <SIZE><![CDATA[26000]]></SIZE>
      <TARGET><![CDATA[hdc]]></TARGET>
      <TYPE><![CDATA[fs]]></TYPE>
    </DISK>
    <DISK>
      <DEV_PREFIX><![CDATA[hd]]></DEV_PREFIX>
      <DISK_ID><![CDATA[2]]></DISK_ID>
      <SIZE><![CDATA[8192]]></SIZE>
      <TARGET><![CDATA[hdd]]></TARGET>
      <TYPE><![CDATA[swap]]></TYPE>
    </DISK>
    <FEATURES>
      <ACPI><![CDATA[yes]]></ACPI>
    </FEATURES>
    <GRAPHICS>
      <KEYMAP><![CDATA[it]]></KEYMAP>
      <LISTEN><![CDATA[0.0.0.0]]></LISTEN>
      <PORT><![CDATA[6224]]></PORT>
      <TYPE><![CDATA[vnc]]></TYPE>
    </GRAPHICS>
    <MEMORY><![CDATA[4096]]></MEMORY>
    <NAME><![CDATA[[...]]]></NAME>
    <NIC>
      <BRIDGE><![CDATA[br1]]></BRIDGE>
      <CLUSTER_ID><![CDATA[101]]></CLUSTER_ID>
      <IP><![CDATA[[...]]]></IP>
      <MAC><![CDATA[02:00:c0:a8:1e:02]]></MAC>
      <MODEL><![CDATA[virtio]]></MODEL>
      <NETWORK><![CDATA[[...]]]></NETWORK>
      <NETWORK_ID><![CDATA[9]]></NETWORK_ID>
      <NETWORK_UNAME><![CDATA[oneadmin]]></NETWORK_UNAME>
      <VLAN><![CDATA[NO]]></VLAN>
    </NIC>
    <OS>
      <ARCH><![CDATA[x86_64]]></ARCH>
      <BOOT><![CDATA[hd]]></BOOT>
    </OS>
    <RAW>
      <TYPE><![CDATA[kvm]]></TYPE>
    </RAW>
    <REQUIREMENTS><![CDATA[CLUSTER_ID = 101]]></REQUIREMENTS>
    <TEMPLATE_ID><![CDATA[38]]></TEMPLATE_ID>
    <VCPU><![CDATA[4]]></VCPU>
    <VMID><![CDATA[324]]></VMID>
  </TEMPLATE>
  <HISTORY_RECORDS>
    <HISTORY>
      <OID>324</OID>
      <SEQ>0</SEQ>
      <HOSTNAME>[...]</HOSTNAME>
      <HID>15</HID>
      <STIME>1375457063</STIME>
      <ETIME>0</ETIME>
      <VMMMAD>vmm_kvm</VMMMAD>
      <VNMMAD>dummy</VNMMAD>
      <TMMAD>ssh</TMMAD>
      <DS_LOCATION>/var/datastore</DS_LOCATION>
      <DS_ID>102</DS_ID>
      <PSTIME>1375457063</PSTIME>
      <PETIME>1375457263</PETIME>
      <RSTIME>1375457263</RSTIME>
      <RETIME>0</RETIME>
      <ESTIME>0</ESTIME>
      <EETIME>0</EETIME>
      <REASON>0</REASON>
    </HISTORY>
  </HISTORY_RECORDS>
</VM>

I think it would be of great help to have the updated XSD files for all the body columns in the database: I could then validate the XML structure of all the tables to highlight migration problems.

Thanks! :)

F.


On 21/08/2013 12:13, Carlos Martín Sánchez wrote:

Hi,

Could you send us the XML of some of the failing VMs and hosts? You can get it with the -x flag in onevm/onehost list.
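
For instance:

    onevm list -x > vms.xml
    onehost list -x > hosts.xml

or, for a single resource, onevm show -x 312 and onehost show -x 30 (the IDs from your log).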

Send them off-list if you prefer.

Regards
<div class="gmail_extra"><br clear="all">
<div>
<div dir="ltr">--<br>
Join us at <a href="http://opennebulaconf.com" target="_blank">OpenNebulaConf2013</a>
in Berlin, 24-26 September, 2013<br>
--
<div>
Carlos Martín, MSc<br>
Project Engineer<br>
OpenNebula - The Open-source Solution for Data Center
Virtualization
<div><span style="border-collapse:collapse;color:rgb(136,136,136);font-family:arial,sans-serif;font-size:13px"><a href="http://www.OpenNebula.org" target="_blank">www.OpenNebula.org</a>
| <a href="mailto:cmartin@opennebula.org" target="_blank">cmartin@opennebula.org</a> | <a href="http://twitter.com/opennebula" target="_blank">@OpenNebula</a></span><span style="border-collapse:collapse;color:rgb(136,136,136);font-family:arial,sans-serif;font-size:13px"></span></div>
</div>
</div>
</div>

On Thu, Aug 8, 2013 at 11:29 AM, Federico Zani <federico.zani@roma2.infn.it> wrote:

<div text="#000000" bgcolor="#FFFFFF"> Hi, <br>
I am experiencing some issues after the update from
3.7 to 4.2 (frontend on a CentOS 6.4 and hosts with KVM
virt manager), this is what I did : <br>
<br>
- Stopped one and sunstone and backed up /etc/one <br>
- yum localinstall opennebula-4.2.0-1.x86_64.rpm
opennebula-java-4.2.0-1.x86_64.rpm
opennebula-ruby-4.2.0-1.x86_64.rpm
opennebula-server-4.2.0-1.x86_64.rpm
opennebula-sunstone-4.2.0-1.x86_64.rpm <br>
- duplicated im and vmm for kvm mads as specified here
<a href="http://opennebula.org/documentation:archives:rel4.0:upgrade#driver_names" target="_blank">http://opennebula.org/documentation:archives:rel4.0:upgrade#driver_names</a>
<br>
- checked for other mismatch in one.conf but actually I
found nothing to be fixed <br>
- onedb upgrade -v --sqlite /var/lib/one/one.db (no
errors, just a few warning about manual fixes needed -
that I did) <br>
- moved vm description files from <i><span>/</span>var/lib/one<span>/</span></i>[0-9]*
to <i><span>/</span>var/lib/one/vms<span>/</span></i> <br>
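
In shell terms, the stop/backup/move steps were roughly the following (reconstructed from memory, the backup destination is arbitrary):

    # as the oneadmin user (or root where needed):
    one stop
    sunstone-server stop
    cp -a /etc/one /var/lib/one/etc-one-3.7-backup
    # ... rpm install, driver renaming and onedb upgrade as listed above ...
    mv /var/lib/one/[0-9]* /var/lib/one/vms/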

Then I tried to fsck the sqlite DB, but got the following error:
--------------
onedb fsck -f -v -s /var/lib/one/one.db
Version read:
4.2.0 : Database migrated from 3.7.80 to 4.2.0 (OpenNebula 4.2.0) by onedb command.

Sqlite database backup stored in /var/lib/one/one.db.bck
Use 'onedb restore' or copy the file back to restore the DB.
> Running fsck

Datastore 0 is missing fom Cluster 101 datastore id list
Image 127 is missing fom Datastore 101 image id list
undefined method `elements' for nil:NilClass
Error running fsck version 4.2.0
The database will be restored
Sqlite database backup restored in /var/lib/one/one.db
-----------

I also tried to reinstall the ruby gems with /usr/share/one/install_gems, but I still get the same issue.

After some searching, I tried to start one and sunstone-server anyway, and this is the result:
- I can run "onevm list" and "onehost list" correctly
- "onevm show" on a terminated VM shows the correct information
- "onevm show" on a running VM, or "onehost show", returns "[VirtualMachineInfo] Error getting virtual machine [312]." or "[HostInfo] Error getting host [30]."

In the log file (/var/log/oned.log) I can see the following errors when issuing those commands:
----------
Tue Aug 6 12:49:40 2013 [ONE][E]: SQL command was: SELECT body FROM host_pool WHERE oid = 30, error: callback requested query abort
Tue Aug 6 12:49:40 2013 [ONE][E]: SQL command was: SELECT body FROM vm_pool WHERE oid = 312, error: callback requested query abort
------------
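
(In case it is relevant, the offending rows can be dumped straight from the sqlite file for inspection, e.g.:)

    sqlite3 /var/lib/one/one.db "SELECT body FROM host_pool WHERE oid = 30;" > host-30-body.xml
    sqlite3 /var/lib/one/one.db "SELECT body FROM vm_pool WHERE oid = 312;" > vm-312-body.xml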

I am still able to see datastore information and the overall state of my private cloud through the Sunstone dashboard, but it seems I cannot access information about running VMs and hosts, which leaves the cloud unusable (I can't stop VMs, can't start new ones, etc.).

Any clues?

Federico.

_______________________________________________
Users mailing list
Users@lists.opennebula.org
http://lists.opennebula.org/listinfo.cgi/users-opennebula.org