<div dir="ltr"><div class="gmail_quote"><div dir="ltr"><div><div><div><div>Hi, <br><br></div>well, I think in general in a production environment you should use something you completely trust. If installing a clvm is not an issue in your setup then why not do it for extra security?<br>

<br></div>With OpenNebula we have been using this LVM setup for only a couple of months, so it is fair to say it is still in testing. Therefore, you should wait a bit before putting critical stuff on it. We will do the same: first moving the less important stuff onto these LVs, then proceeding with the more important services. The reason I'm not particularly worried is that we used a similar setup under a simple Xen cluster for 5 years. Although it was backed by Fibre Channel rather than iSCSI, and there was no OpenNebula involved, we never had any issues with LVM itself. Of course, instead of one frontend we managed everything with a bunch of simple bash scripts executed by a simple PHP web interface.  <br>

<br><br></div>Anyway, I will blog about our experiences with the shared_lvm driver, so you will be kept informed!<br><br></div>Cheers<br>Mihály<br>MTA SZTAKI ITAK<br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra">
<br><br><div class="gmail_quote">
On 30 January 2013 18:47, Miloš Kozák <span dir="ltr"><<a href="mailto:milos.kozak@lejmr.com" target="_blank">milos.kozak@lejmr.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">


  
    
  
  <div text="#000000" bgcolor="#FFFFFF">
    Hi, it sounds interesting; I think I am going to give it a try. I
    am still undecided whether or not to use CLVM. How long have you
    been running it like that? Have you ever had any serious issues
    related to LVM?<br>
    <br>
    Thank you, Milos <br>
    <br>
    <br>
    <div>On 30.1.2013 13:59, Marlok Tamás
      wrote:<br>
    </div>
    <blockquote type="cite"><div><div>Hi,<br>
      <br>
      We are running it without CLVM.<br>
      If you examine the ONE/lvm driver (the tm/clone script, for
      example), you can see that the lvcreate command runs on the
      destination host. In the shared LVM driver, all the LVM commands
      run on the frontend, so there is no possibility of
      parallel changes (assuming that you are using only one frontend),
      because local locking is in effect on the frontend.<br clear="all">
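      (For reference, the relevant locking settings live in /etc/lvm/lvm.conf; the
      values below are only a sketch of the idea described in the lvm.conf man page,
      and your distribution's defaults may differ.)<br>
      <pre># on the frontend: default local, file-based locking
locking_type = 1

# on the hosts: read-only locking, so lvm commands there
# cannot modify the shared metadata
locking_type = 4</pre>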
      <div><br>
        The other difference with the ONE/lvm driver is that it makes a
        snapshot in the clone script, while our driver makes a new clone
        LV. I tried to use the original LVM driver, and every time I
        deployed a new VM, I got this error message:<br>
        <pre>lv-one-50 must be active exclusively to create snapshot
</pre>
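        (Judging from the message, the LV would have to be activated exclusively
        before the snapshot is taken; something along these lines might be worth
        trying, although we have not tested it with the ONE/lvm driver, and the
        volume group name below is only an example:)<br>
        <pre># untested: activate the LV exclusively before snapshotting
lvchange -aey vg-one/lv-one-50</pre>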
        If you (or anyone else) know how to avoid this error, please
        let me know.<br>
        Besides that, snapshots are much slower for write operations (as
        far as I know).<br>
        <br>
        Hope this helps!<br>
        --<br>
        Cheers,<br>
        tmarlok<br>
      </div>
      <br>
      <br>
      </div></div><div class="gmail_quote"><div><div>On Wed, Jan 30, 2013 at 1:37 PM, Miloš
        Kozák <span dir="ltr"><<a href="mailto:milos.kozak@lejmr.com" target="_blank">milos.kozak@lejmr.com</a>></span>
        wrote:<br>
        </div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
          <div text="#000000" bgcolor="#FFFFFF"><div><div> Hi, thank you. I
            checked source codes and I found it is very similar to LVM
            TM/Datastore drivers which is facilitated in ONE already
            only you added lvchange -ay DEV. Do you run CLVM along that
            or not? <br>
            <br>
            I worry about parallel changes of LVM metadata which might
            destroy them. From sequential behaviour it is probably not
            an issues can you prove it to me? Or  is it highly dangerous
            to run lvm_shared without CLVM?<br>
            <br>
            Thanks, Milos<br>
            <br>
            <br>
            <div>On 30.1.2013 10:09, Marlok Tamás wrote:<br>
            </div>
            </div></div><div>
              <div>
                <blockquote type="cite"><div><div>Hi,<br>
                  <br>
                  We have a custom datastore and transfer manager
                  driver, which runs the lvchange command when it is
                  needed.<br>
                  In order for it to work, you have to enable it in oned.conf.<br>
                  <br>
                  for example:<br>
                  <span style="font-family:courier new,monospace"><br>
                    DATASTORE_MAD = [<br>
                        executable = "one_datastore",<br>
                        arguments  = "-t 10 -d
                    fs,vmware,iscsi,lvm,shared_lvm"]<br>
                    <br>
                    TM_MAD = [<br>
                        executable = "one_tm",<br>
                        arguments  = "-t 10 -d
                    dummy,lvm,shared,qcow2,ssh,vmware,iscsi,shared_lvm"
                    ]</span><br clear="all">
                  <div><br>
                    After that, you can create a datastore with the
                    shared_lvm TM and datastore driver.<br>
                    <br>
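                    (For illustration only: the exact attributes depend on your
                    OpenNebula version and setup, and the datastore name and the
                    VG_NAME value below are just assumptions.)<br>
                    <pre># shared_lvm.ds : a sketch of a datastore template
NAME    = shared_lvm_ds
DS_MAD  = shared_lvm
TM_MAD  = shared_lvm
VG_NAME = vg-one    # volume group on the shared iSCSI disk

# then, on the frontend:
#   onedatastore create shared_lvm.ds</pre>
                    <br>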
                    The only limitation is that you can't live-migrate
                    VMs. We have a working solution for that as well,
                    but it is still untested. I can send you that too, if
                    you want to help us test it.<br>
                    <br>
                    Anyway, here are the drivers; feel free to use or
                    modify them.<br>
                    <a href="https://dl.dropbox.com/u/140123/shared_lvm.tar.gz" target="_blank">https://dl.dropbox.com/u/140123/shared_lvm.tar.gz</a><br>
                    <br>
                    --<br>
                    Cheers,<br>
                    Marlok Tamas<br>
                    MTA Sztaki<br>
                    <br>
                  </div>
                  <br>
                  <br>
                  </div></div><div class="gmail_quote"><div><div>On Thu, Jan 24, 2013 at 11:32
                    PM, Mihály Héder <span dir="ltr"><<a href="mailto:mihaly.heder@sztaki.mta.hu" target="_blank">mihaly.heder@sztaki.mta.hu</a>></span>
                    wrote:<br>
                    </div></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><div>
                      Hi,<br>
                      <br>
                      Well, if you can run lvs or lvscan successfully on at least
                      one server, then the metadata is probably fine.<br>
                      We had similar issues before we learned how to
                      exclude unnecessary block devices in the LVM config.<br>
                      <br>
                      The thing is that lvscan and lvs will, by default, try to check
                      _every_ potential block device for LVM partitions. If you
                      are lucky, this is only annoying, because it will throw
                      'can't read /dev/sdX' or similar messages. However, if you
                      are using dm-multipath, you will have one device for each
                      path, like /dev/sdr, _plus_ the aggregated device with the
                      name you have configured in multipath.conf
                      (/dev/mapper/yourname), which is what you actually need.
                      LVM did not quite understand this situation and got stuck
                      on the individual path devices, so we configured it to look
                      for LVM only in the right place. In the lvm.conf man page,
                      look for the devices / scan and filter options; there are
                      also quite good examples in the comments there.<br>
                      <br>
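                      (As an illustration only; the exact pattern depends on your
                      device names, so treat this as a sketch rather than a
                      drop-in config:)<br>
                      <pre># /etc/lvm/lvm.conf, "devices" section:
# accept the aggregated multipath device, reject everything else
filter = [ "a|^/dev/mapper/yourname$|", "r|.*|" ]</pre>
                      <br>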
                      Also, there could be a much simpler explanation for
                      the issue:<br>
                      something with the iSCSI connection or multipath,
                      which are one layer<br>
                      below.<br>
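                      If you want to rule that out, checking those layers directly
                      is usually the quickest way, for example:<br>
                      <pre># list the active iSCSI sessions and the multipath topology
iscsiadm -m session
multipath -ll</pre>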
                      <br>
                      I hope this helps.<br>
                      <br>
                      Cheers<br>
                      Mihály<br>
                      <br>
                      On 24 January 2013 23:18, Miloš Kozák <<a href="mailto:milos.kozak@lejmr.com" target="_blank">milos.kozak@lejmr.com</a>>
                      wrote:<br>
                      > Hi, thank you. I tried to update the TM ln
                      script, which works, but it is not a<br>
                      > clean solution. So I will try to write the hook
                      code and then we can discuss it.<br>
                      ><br>
                      > I deployed a few VMs and now on the other
                      server the lvs command freezes. I have<br>
                      > not set up clvm; do you think it could be
                      caused by lvm metadata corruption?<br>
                      > The thing is I can no longer start a VM on
                      the other server.<br>
                      ><br>
                      > Miloš<br>
                      ><br>
                      > On 24.1.2013 23:10, Mihály Héder wrote:<br>
                      </div></div><div>
                        <div><div><div>><br>
                          >> Hi!<br>
                          >><br>
                          >> We solve this problem via hooks that
                          activate the LVs for us<br>
                          >> when we start/migrate a VM.
                          Unfortunately I will be out of office<br>
                          >> until early next week but then I will
                          consult with my colleague who<br>
                          >> did the actual coding of this part
                          and we will share the code.<br>
                          >><br>
                          >> Cheers<br>
                          >> Mihály<br>
                          >><br>
                          >> On 24 January 2013 20:15, Miloš
                          Kozák<<a href="mailto:milos.kozak@lejmr.com" target="_blank">milos.kozak@lejmr.com</a>>

                           wrote:<br>
                          >>><br>
                          >>> Hi, I have just set it up with
                          two hosts and a shared block device. On<br>
                          >>> top<br>
                          >>> of that, LVM, as discussed
                          earlier. Running lvs I can see all logical<br>
                          >>> volumes. When I create a new LV
                           on the other server, I can see the LV<br>
                          >>> being<br>
                          >>> inactive, so I have to run
                          lvchange -ay VG/LV to enable it; then this LV can<br>
                          >>> be<br>
                          >>> used for a VM.<br>
                          >>><br>
                          >>> Is there any trick to auto-enable
                          a newly created LV on every host?<br>
                          >>><br>
                          >>> Thanks Milos<br>
                          >>><br>
                          >>> On 22.1.2013 18:22, Mihály Héder wrote:<br>
                          >>><br>
                          >>>> Hi!<br>
                          >>>><br>
                          >>>> You need to look at
                          locking_type in the lvm.conf manual [1]. The<br>
                          >>>> default - locking in a local
                          directory - is ok for the frontend, and<br>
                          >>>> type 4 is read-only. However,
                          you should not forget that this only<br>
                          >>>> prevents damage done via the
                          lvm commands. If you start to write<br>
                          >>>> zeros to your disk with the
                          dd command, for example, that will kill<br>
                          >>>> your partition regardless of the
                          lvm setting. So this protects mainly against user or<br>
                          >>>> middleware errors, not
                          against malicious attacks.<br>
                          >>>><br>
                          >>>> Cheers<br>
                          >>>> Mihály Héder<br>
                          >>>> MTA SZTAKI<br>
                          >>>><br>
                          >>>> [1] <a href="http://linux.die.net/man/5/lvm.conf" target="_blank">http://linux.die.net/man/5/lvm.conf</a><br>
                          >>>><br>
                          >>>> On 21 January 2013 18:58,
                          Miloš Kozák<<a href="mailto:milos.kozak@lejmr.com" target="_blank">milos.kozak@lejmr.com</a>>

                            wrote:<br>
                          >>>>><br>
                          >>>>> Oh snap, that sounds
                          great, I didn't know about that.. it makes everything<br>
                          >>>>> easier.<br>
                          >>>>> In this scenario only the
                          frontend works with LVM, so no issues with<br>
                          >>>>> concurrent<br>
                          >>>>> changes. Only one last
                          thing to make it really safe against that. Is<br>
                          >>>>> there<br>
                          >>>>> any way to suppress LVM
                          changes from the hosts, make it read-only? And leave<br>
                          >>>>> it<br>
                          >>>>> RW<br>
                          >>>>> at the frontend?<br>
                          >>>>><br>
                          >>>>> Thanks<br>
                          >>>>><br>
                          >>>>><br>
                          >>>>> On 21.1.2013 18:50,
                          Mihály Héder wrote:<br>
                          >>>>><br>
                          >>>>>> Hi,<br>
                          >>>>>><br>
                          >>>>>> no, you don't have to
                          do any of that. Also, nebula doesn't have to<br>
                          >>>>>> care about LVM
                          metadata at all and therefore there is no
                          corresponding<br>
                          >>>>>> function in it. At
                          /etc/lvm there is no metadata, only
                          configuration<br>
                          >>>>>> files.<br>
                          >>>>>><br>
                          >>>>>> Lvm metadata simply
                          sits somewhere at the beginning of your<br>
                          >>>>>> iscsi-shared disk,
                          like a partition table. So it is on the
                          storage<br>
                          >>>>>> that is accessed by
                          all your hosts, and no distribution is
                          necessary.<br>
                          >>>>>> Nebula frontend
                          simply issues lvcreate, lvchange, etc, on this
                          shared<br>
                          >>>>>> disk and those
                          commands will manipulate the metadata.<br>
                          >>>>>><br>
                          >>>>>> It is really LVM's
                          internal business, many layers below
                          opennebula.<br>
                          >>>>>> All you have to make
                          sure is that you don't run these commands<br>
                          >>>>>> concurrently from
                          multiple hosts on the same iscsi-attached
                          disk,<br>
                          >>>>>> because then they
                          could interfere with each other. This setting
                          is<br>
                          >>>>>> what you have to
                          indicate in /etc/lvm on the server hosts.<br>
                          >>>>>><br>
                          >>>>>> Cheers<br>
                          >>>>>> Mihály<br>
                          >>>>>><br>
                          >>>>>> On 21 January 2013
                          18:37, Miloš Kozák<<a href="mailto:milos.kozak@lejmr.com" target="_blank">milos.kozak@lejmr.com</a>>

                            wrote:<br>
                          >>>>>>><br>
                          >>>>>>> Thank you. does
                          it mean, that I can distribute metadata files
                          located<br>
                          >>>>>>> in<br>
                          >>>>>>> /etc/lvm on
                          frontend onto other hosts and these hosts will
                          see my<br>
                          >>>>>>> logical<br>
                          >>>>>>> volumes? Is there
                          any code in nebula which would provide it? Or
                          I<br>
                          >>>>>>> need<br>
                          >>>>>>> to<br>
                          >>>>>>> update DS scripts
                          to update/distribute LVM metadata among
                          servers?<br>
                          >>>>>>><br>
                          >>>>>>> Thanks, Milos<br>
                          >>>>>>><br>
                          >>>>>>> On 21.1.2013
                          18:29, Mihály Héder wrote:<br>
                          >>>>>>><br>
                          >>>>>>>> Hi,<br>
                          >>>>>>>><br>
                          >>>>>>>> lvm
                          metadata[1] is simply stored on the disk. In
                          the setup we are<br>
                          >>>>>>>> discussing
                          this happens to be a  shared virtual disk on
                          the storage,<br>
                          >>>>>>>> so any other
                          hosts that are attaching the same virtual disk
                          should<br>
                          >>>>>>>> see<br></div></div><div>
                          >>>>>>>> the changes
                          as they happen, provided that they re-read the
                          disk.<br>
                          >>>>>>>> This<br>
                          >>>>>>>> re-reading
                          step is what you can trigger with lvscan, but
                          nowadays<br>
                          >>>>>>>> that<br>
                          >>>>>>>> seems to be
                          unnecessary. For us it works with CentOS 6.3,
                          so I guess<br>
                          >>>>>>>> Scientific<br>
                          >>>>>>>> Linux should
                          be fine as well.<br>
                          >>>>>>>><br>
                          >>>>>>>> Cheers<br>
                          >>>>>>>> Mihály<br>
                          >>>>>>>><br>
                          >>>>>>>><br>
                          >>>>>>>> [1]<br>
                          >>>>>>>><br>
                          >>>>>>>><br>
                          >>>>>>>><br>
                          >>>>>>>> <a href="http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_metadata.html" target="_blank">http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/lvm_metadata.html</a><br>


                          >>>>>>>><br></div>
                          >>>>>>>> On 21 January
                          2013 12:53, Miloš Kozák<<a href="mailto:milos.kozak@lejmr.com" target="_blank">milos.kozak@lejmr.com</a>><br>
                          >>>>>>>> wrote:<br>
                          >>>>>>>>><br>
                          >>>>>>>>> Hi,<div><div><br>
                          >>>>>>>>> thank you
                          for great answer. As I wrote my objective is
                          to avoid as<br>
                          >>>>>>>>> much<br>
                          >>>>>>>>> of<br>
                          >>>>>>>>>
                          clustering sw (pacemaker,..) as possible, so
                          clvm is one of these<br>
                          >>>>>>>>> things<br>
                          >>>>>>>>> I<br>
                          >>>>>>>>> feel bad
                          about in my configuration.. Therefore I
                          would rather<br>
                          >>>>>>>>> let<br>
                          >>>>>>>>> nebula
                          manage the LVM metadata in the first place, as
                          you wrote. Only<br>
                          >>>>>>>>> one<br>
                          >>>>>>>>> last<br>
                          >>>>>>>>> thing I
                          don't understand is how nebula distributes the
                          LVM metadata.<br>
                          >>>>>>>>><br>
                          >>>>>>>>> Is the kernel
                          in Scientific Linux 6.3 new enough for the LVM
                          issue you<br>
                          >>>>>>>>>
                          mentioned?<br>
                          >>>>>>>>><br>
                          >>>>>>>>> Thanks
                          Milos<br>
                          >>>>>>>>><br>
                          >>>>>>>>><br>
                          >>>>>>>>><br>
                          >>>>>>>>><br>
                          >>>>>>>>> On
                          21.1.2013 12:34, Mihály Héder wrote:<br>
                          >>>>>>>>><br>
                          >>>>>>>>>> Hi!<br>
                          >>>>>>>>>><br>
                          >>>>>>>>>> The last
                          time we could test an EqualLogic, it did not
                          have an option to<br>
                          >>>>>>>>>>
                          create/configure Virtual Disks inside it via
                          an API, so I think<br>
                          >>>>>>>>>> the<br>
                          >>>>>>>>>> iSCSI
                          driver is not an alternative, as it would
                          require a<br>
                          >>>>>>>>>>
                          configuration step per virtual machine on the
                          storage.<br>
                          >>>>>>>>>><br>
                          >>>>>>>>>>
                          However, you can use your storage just fine in
                          a shared LVM<br>
                          >>>>>>>>>>
                          scenario.<br>
                          >>>>>>>>>> You
                          need to consider two different things:<br>
                          >>>>>>>>>> -the
                          LVM metadata, and the actual VM data on the
                          partitions. It is<br>
                          >>>>>>>>>> true,
                          that the concurrent modification of the
                          metadata should be<br>
                          >>>>>>>>>>
                          avoided as in theory it can damage the whole
                          virtual group. You<br>
                          >>>>>>>>>> could<br>
                          >>>>>>>>>> use
                          clvm which avoids that by clustered locking,
                          and then every<br>
                          >>>>>>>>>>
                          participating machine can safely
                          create/modify/delete LV-s.<br>
                          >>>>>>>>>>
                          However,<br>
                          >>>>>>>>>> in a
                          nebula setup this is not necessary in every
                          case: you can<br>
                          >>>>>>>>>> make<br>
                          >>>>>>>>>> the
                          LVM metadata read only on your host servers,
                          and let only the<br>
                          >>>>>>>>>>
                          frontend modify it. Then it can use local
                          locking that does not<br>
                          >>>>>>>>>>
                          require clvm.<br>
                          >>>>>>>>>> -of
                          course the host servers can write the data
                          inside the<br>
                          >>>>>>>>>>
                          partitions<br>
                          >>>>>>>>>>
                          regardless that the metadata is read-only for
                          them. It should work<br>
                          >>>>>>>>>> just
                          fine as long as you don't start two VMs for
                          one partition.<br>
                          >>>>>>>>>><br>
                          >>>>>>>>>> We
                          are running this setup with a dual controller
                          Dell MD3600<br>
                          >>>>>>>>>>
                          storage<br>
                          >>>>>>>>>>
                          without issues so far. Before that, we used to
                          do the same with<br>
                          >>>>>>>>>> XEN<br>
                          >>>>>>>>>>
                          machines for years on an older EMC (that was
                          before nebula). Now<br>
                          >>>>>>>>>> with<br>
                          >>>>>>>>>>
                          nebula we have been using a home-grown module
                          for doing that,<br>
                          >>>>>>>>>> which<br>
                          >>>>>>>>>> I<br>
                          >>>>>>>>>> can
                          send you any time - we plan to submit that as
                          a feature<br>
                          >>>>>>>>>>
                          enhancement anyway. Also, there seems to be a
                          similar shared LVM<br>
                          >>>>>>>>>>
                          module in the nebula upstream which we could
                          not get to work yet,<br>
                          >>>>>>>>>> but<br>
                          >>>>>>>>>> did
                          not try much.<br>
                          >>>>>>>>>><br>
                          >>>>>>>>>> The
                          plus side of this setup is that you can make
                          live migration<br>
                          >>>>>>>>>> work<br>
                          >>>>>>>>>>
                          nicely. There are two points to consider
                          however: once you set the<br>
                          >>>>>>>>>> LVM<br>
                          >>>>>>>>>>
                          metadata read-only you won't be able to modify
                          the local LVMs in<br>
                          >>>>>>>>>> your<br>
                          >>>>>>>>>>
                          servers, if there are any. Also, in older
                          kernels, when you<br>
                          >>>>>>>>>>
                          modified<br>
                          >>>>>>>>>> the
                          LVM on one machine the others did not get
                          notified about the<br>
                          >>>>>>>>>>
                          changes, so you had to issue an lvs command.
                          However in new<br>
                          >>>>>>>>>>
                          kernels<br>
                          >>>>>>>>>> this
                          issue seems to be solved, the LVs get
                          instantly updated. I<br>
                          >>>>>>>>>> don't<br>
                          >>>>>>>>>> know
                          when and what exactly changed though.<br>
                          >>>>>>>>>><br></div></div>
                          >>>>>>>>>>
                          Cheers<br>
                          >>>>>>>>>>
                          Mihály Héder<br>
                          >>>>>>>>>> MTA
                          SZTAKI ITAK<br>
                          >>>>>>>>>><br>
                          >>>>>>>>>> On 18
                          January 2013 08:57, Miloš Kozák<<a href="mailto:milos.kozak@lejmr.com" target="_blank">milos.kozak@lejmr.com</a>><div><div><br>
                          >>>>>>>>>>
                          wrote:<br>
                          >>>>>>>>>>><br>
                          >>>>>>>>>>>
                          Hi, I am setting up a small installation of
                          opennebula with<br>
                          >>>>>>>>>>>
                          shared storage<br>
                          >>>>>>>>>>>
                          using iSCSI. The storage is Equilogic EMC with
                          two controllers.<br>
                          >>>>>>>>>>>
                          Nowadays<br>
                          >>>>>>>>>>>
                          we<br>
                          >>>>>>>>>>>
                          have only two host servers so we use backed
                          direct connection<br>
                          >>>>>>>>>>>
                          between<br>
                          >>>>>>>>>>>
                          storage and each server, see attachment. For
                          this purpose we set<br>
                          >>>>>>>>>>>
                          up<br>
                          >>>>>>>>>>>
                          dm-multipath. Cause in the future we want to
                          add other servers<br>
                          >>>>>>>>>>>
                          and<br>
                          >>>>>>>>>>>
                          some<br>
                          >>>>>>>>>>>
                          other technology will be necessary in the
                          network segment.<br>
                          >>>>>>>>>>>
                          Thesedays<br>
                          >>>>>>>>>>>
                          we<br>
                          >>>>>>>>>>>
                          try<br>
                          >>>>>>>>>>>
                          to make it as same as possible with future
                          topology from<br>
                          >>>>>>>>>>>
                          protocols<br>
                          >>>>>>>>>>>
                          point<br>
                          >>>>>>>>>>>
                          of<br>
                          >>>>>>>>>>>
                          view.<br>
                          >>>>>>>>>>><br>
                          >>>>>>>>>>>
                          My question is about how to
                          define the datastore: which<br>
                          >>>>>>>>>>>
                          driver<br>
                          >>>>>>>>>>>
                          and<br>
                          >>>>>>>>>>>
                          TM would be best?<br>
                          >>>>>>>>>>><br>
                          >>>>>>>>>>>
                          My primal objective is to avoid GFS2 or any
                          other cluster<br>
                          >>>>>>>>>>>
                          filesystem<br>
                          >>>>>>>>>>> I<br>
                          >>>>>>>>>>>
                          would<br>
                          >>>>>>>>>>>
                          prefer to keep datastore as block devices.
                          Only option I see is<br>
                          >>>>>>>>>>>
                          to<br>
                          >>>>>>>>>>>
                          use<br>
                          >>>>>>>>>>>
                          LVM<br>
                          >>>>>>>>>>>
                          but I worry about concurrent writes; isn't that a
                          problem? I was<br>
                          >>>>>>>>>>>
                          googling<br>
                          >>>>>>>>>>> a<br>
                          >>>>>>>>>>>
                          bit<br>
                          >>>>>>>>>>>
                          and I found I would need to set up clvm - is
                          it really necessary?<br>
                          >>>>>>>>>>><br>
                          >>>>>>>>>>>
                          Or is it better to use the iSCSI driver, drop the
                          dm-multipath and hope?<br>
                          >>>>>>>>>>><br>
                          >>>>>>>>>>>
                          Thanks, Milos<br>
                          >>>>>>>>>>><br>
                          >>>>><br>
                          >>>>><br>
                          ><br>
                        </div></div></div>
                      </div>
                    </blockquote>
                  </div>
                  <br>
                </blockquote>
                <br>
              </div>
            </div>
          </div><div><div>
          <br>
          <br>
        </div></div></blockquote>
      </div>
      <br>
    </blockquote>
    <br>
  </div>

<br>_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@lists.opennebula.org" target="_blank">Users@lists.opennebula.org</a><br>
<a href="http://lists.opennebula.org/listinfo.cgi/users-opennebula.org" target="_blank">http://lists.opennebula.org/listinfo.cgi/users-opennebula.org</a><br>
<br></blockquote></div><br></div>
</div></div></div><br></div>