The OpenNET Project / Index page

OpenAIS+pacemaker: migration not working, !*! Madnokia, 18-Aug-11, 09:25
Hello!

There is a cluster of two servers, with drbd, xen, and heartbeat configured.

When I try to migrate one of the virtual machines to the other cluster node, the following happens:
dom0a:~ # crm_resource -M -V -r dns2 -H dom0b
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Processing failed op xen_r5_start_0 on dom0b: unknown error
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Operation drbd_r5:1_monitor_0 found resource drbd_r5:1 active on dom0b
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Operation dns01_monitor_0 found resource dns01 active on dom0a
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Operation xen_r4_monitor_0 found resource xen_r4 active on dom0a
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Operation drbd_r4:1_monitor_0 found resource drbd_r4:1 active in master mode on dom0a
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Operation xen_r5_monitor_0 found resource xen_r5 active on dom0a
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Operation dns02_monitor_0 found resource dns02 active on dom0a
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Operation monitor01_monitor_0 found resource monitor01 active on dom0a
crm_resource[27133]: 2011/08/18_09:17:41 WARN: unpack_rsc_op: Operation drbd_r5:0_monitor_0 found resource drbd_r5:0 active in master mode on dom0a

Nothing migrates. Where should I look?
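Not part of the original post: before forcing anything, the recorded failures that block placement can be inspected with the Pacemaker 1.0-era tools already visible in the log above. A sketch (options assumed to match that tool generation):

```shell
# One-shot cluster status, including per-resource fail counts
# (the failed xen_r5_start_0 op on dom0b should show up here):
crm_mon -1 -f

# Validate the live CIB and print warnings about recorded failed ops:
crm_verify -L -V
```

These commands only read cluster state, so they are safe to run before any cleanup.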


Here is the crm configuration:

node dom0a \
        attributes standby="off"
node dom0b \
        attributes standby="off"
primitive xen_r4 ocf:heartbeat:Filesystem \
        params device="/dev/drbd4" directory="/xen/r4"
primitive dns01 ocf:heartbeat:Xen \
        params xmfile="/xen/r4/dns1.cfg" \
        op monitor interval="10s" \
        op start interval="0s" timeout="50s" \
        op stop interval="0s" timeout="300s"
primitive xen_r5 ocf:heartbeat:Filesystem \
        params device="/dev/drbd5" directory="/xen/r5"
primitive dns02 ocf:heartbeat:Xen \
        params xmfile="/xen/r5/dns2.cfg" \
        op monitor interval="10s" \
        op start interval="0s" timeout="50s" \
        op stop interval="0s" timeout="300s"
primitive drbd_r4 ocf:heartbeat:drbd \
        params drbd_resource="r4" \
        op monitor interval="15s"
primitive drbd_r7 ocf:heartbeat:drbd \
        params drbd_resource="r7" \
        op monitor interval="15s"
primitive xen_r7 ocf:heartbeat:Filesystem \
        params device="/dev/drbd7" directory="/xen/r7"
primitive monitor01 ocf:heartbeat:Xen \
        params xmfile="/xen/r7/monitoring.cfg" \
        op monitor interval="10s" \
        op start interval="0s" timeout="50s" \
        op stop interval="0s" timeout="300s" \
        meta target-role="Started"
primitive drbd_r5 ocf:heartbeat:drbd \
        params drbd_resource="r5" \
        op monitor interval="15s"
group dns1 xen_r4 dns01
group dns2 xen_r5 dns02 \
        meta target-role="Started"
group monitor1 xen_r7 monitor01 \
        meta target-role="Started"
ms ms_drbd_r4 drbd_r4 \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
ms ms_drbd_r7 drbd_r7 \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
ms ms_drbd_r5 drbd_r5 \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation dns1_on_drbd inf: dns1 ms_drbd_r4:Master
colocation monitor1_on_drbd inf: monitor1 ms_drbd_r7:Master
colocation dns2_on_drbd inf: dns2 ms_drbd_r5:Master
order dns1_after_drbd inf: ms_drbd_r4:promote dns1:start
order monitor1_after_drbd inf: ms_drbd_r7:promote monitor1:start
order dns2_after_drbd inf: ms_drbd_r5:promote dns2:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.2-ec6b0bbee1f3aa72c4c2559997e675db6ab39160" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        default-resource-stickiness="100000" \
        stonith-enabled="false" \
        last-lrm-refresh="1313553498"

  • OpenAIS+pacemaker: migration not working, !*! Madnokia, 12:56, 18-Aug-11 (1)
    I'll answer my own question; maybe it will be useful to someone.

    crm_resource -C -V --force -r xen_r5 -H dom0b

    resolved the situation.
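For reference (not from the thread): on systems that ship the crm shell, the same cleanup is usually available in shorter form. A sketch, using the resource and node names from this thread:

```shell
# Inspect the fail count that is blocking placement:
crm resource failcount xen_r5 show dom0b

# Clear the failed-start record for xen_r5 on dom0b
# (equivalent in effect to crm_resource -C -r xen_r5 -H dom0b):
crm resource cleanup xen_r5 dom0b
```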

    And here is the explanation:

    >>> "Taldevkar, Chetan" <chetan.taldevkar at patni.com> 18.07.2007 07:23 >>>
    >Hi all,
    >
    >When I start the cluster, linux-ha is able to invoke the start call on both
    >nodes. On the first node, start fails because the script echoes "stopped"
    >and then exits with 1, since this resource needs to be running on the second
    >node. After that it starts successfully on the second node. But if I
    >simulate error conditions on the second node, it does not invoke the script
    >on the first node, so there is no failover.
    >

    As far as I know: as soon as an RA fails to start on a node, that
    RA cannot be started there anymore. You have to use crm_resource -C
    (--force) to reset the record that this RA could not be started there.
    So, if you want the resource to run initially on node 2, add a
    location constraint to the resource and it will be started on the
    second node. After a failure of that resource, heartbeat will start
    it on the first one if and only if the values for stickiness and
    failure stickiness are set properly.

    Best regards
    Andreas Mock
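The location-constraint advice above could be sketched in crm configuration syntax roughly as follows. The constraint name, scores, and rsc_defaults values are illustrative assumptions; the resource and node names come from the configuration in this thread:

```
# Hypothetical: prefer the dns2 group on dom0b with a soft (non-infinite) score,
# so stickiness can still hold it elsewhere after a failover
location dns2_prefers_dom0b dns2 100: dom0b

# Hypothetical: move away after 3 failures and expire failure records,
# instead of a single failed start blocking the resource indefinitely
rsc_defaults migration-threshold="3" failure-timeout="120s"
```

With a soft score plus the high default-resource-stickiness already set in this configuration, the resource prefers dom0b at first placement but does not ping-pong back after a recovery.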



