From 6b422f35b2d4409e2872c48e50e6775101661b2a Mon Sep 17 00:00:00 2001
From: bigwad
Date: Thu, 23 Apr 2020 12:26:42 +0200
Subject: [PATCH 01/26] update sysreqs

---
 docs/README.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/README.md b/docs/README.md
index 9cf312b..efd2683 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -25,9 +25,8 @@ Please make sure to read our [End User License Agreement for Starcounter Softwar

## Requirements

-* [Ubuntu 18.04.02 x64](https://ubuntu.com/download/desktop) or [Windows 10 Pro x64 Build 1903](https://www.microsoft.com/en-us/software-download/windows10).
-  * [Windows Subsystem for Linux \(WSL\)](https://docs.microsoft.com/en-us/windows/wsl/install-win10) is also supported.
-  * [Ubuntu 19.10 x64](https://ubuntu.com/download/desktop) is also supported.
+* Supported OS: Windows 10 x64 build 1903+, Ubuntu 18.04, Ubuntu 19.10, CentOS 8.
+  - The supported Linux distributions can also be used under [Windows Subsystem for Linux \(WSL\)](https://docs.microsoft.com/en-us/windows/wsl/install-win10).
* [.NET Core 3.0.100](https://dotnet.microsoft.com/download/dotnet-core/3.0), SDK for development, runtime for production.
* Enough RAM to load a database of the targeted size.
* It's recommended to have at least two CPU cores.

From ffd361ea63a8bd4358924f2237db4c502f2c770e Mon Sep 17 00:00:00 2001
From: bigwad
Date: Thu, 23 Apr 2020 16:34:28 +0200
Subject: [PATCH 02/26] CentOS installation instructions

---
 docs/README.md | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/docs/README.md b/docs/README.md
index efd2683..da54128 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -80,6 +80,32 @@

```text
wget https://starcounter.io/Starcounter/Starcounter.3.0.0-rc-20191212.zip
unzip Starcounter.3.0.0-rc-20191212.zip
```

#### CentOS 8

**Install prerequisites.**

```text
yum install wget unzip libaio ncurses-compat-libs clang
```

Starcounter requires a specific version of SWI-Prolog, which is not available from the standard repositories but can be found in package archives:

```text
yum localinstall https://kojipkgs.fedoraproject.org//packages/compat-readline6/6.3/16.fc30/x86_64/compat-readline6-6.3-16.fc30.x86_64.rpm
yum localinstall https://kojipkgs.fedoraproject.org//vol/fedora_koji_archive05/packages/pl/7.2.0/1.fc23/x86_64/pl-7.2.0-1.fc23.x86_64.rpm
ln /usr/lib64/swipl-7.2.0/lib/x86_64-linux/libswipl.so.7.2.0 /usr/lib64/libswipl.so
```

**Download and unpack Starcounter binaries.**

```text
cd $HOME
mkdir Starcounter.3.0.0-rc-20191212
cd Starcounter.3.0.0-rc-20191212
wget https://starcounter.io/Starcounter/Starcounter.3.0.0-rc-20191212.zip
unzip Starcounter.3.0.0-rc-20191212.zip
```

### Application

**Create an application folder and initialize a .NET Core console application.**

From d7bb4eba9a8da2024a62d311efc9abea69821879 Mon Sep 17 00:00:00 2001
From: bigwad
Date: Fri, 24 Apr 2020 11:53:33 +0200
Subject: [PATCH 03/26] failover introduction

---
 docs/failover-cluster.md | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 488cea6..0bd4ae2 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -200,5 +200,10 @@

## Starcounter failover on Linux

Starcounter 3 Release Candidate does not yet support failover on Linux operating systems out of the box. If you have a Linux production environment which requires failover, please contact us.
### Introduction
The idea of the Starcounter failover cluster is to bundle a Starcounter database and a Starcounter-based application into an entity that can be health-monitored and automatically restarted or migrated to a standby cluster node should a disaster happen. Due to the in-memory nature of a Starcounter database, loading data from media on a cold standby node may take significant time when a failover happens, so it is beneficial to keep Starcounter running as a hot standby. Another requirement to the system concerns data integrity: our goal is to provide a consistent solution in terms of [CAP](https://en.wikipedia.org/wiki/CAP_theorem), i.e. no committed transactions can be lost during migration.

### Setup Explained

### Future directions

### Practical setup steps

From bc66973ceb8f649d36b9a810200740fc4ed91845 Mon Sep 17 00:00:00 2001
From: bigwad
Date: Fri, 24 Apr 2020 14:52:10 +0200
Subject: [PATCH 04/26] Setup explained 1

---
 docs/failover-cluster.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 0bd4ae2..d953bf0 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -203,6 +203,9 @@

### Setup Explained
The Starcounter failover cluster is built on top of a proven stack consisting of [pacemaker](https://clusterlabs.org/), [DRBD](https://www.linbit.com/drbd/) and [GFS2](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/global_file_system_2/index). Pacemaker manages the cluster nodes, controls the resources it owns, and performs the appropriate failover actions. DRBD synchronizes the Starcounter transaction log on the block level. And GFS2 gives Starcounter file-level access to the shared transaction log. Starcounter's own role in this is:
* supporting hot standby mode, so that the in-memory data on a standby node stays up-to-date with the active node
* providing pacemaker control scripts for the Starcounter database (a quick way to check that these agents are visible to pacemaker follows right after this list).
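Since everything in this stack ends up under pacemaker's control, a simple sanity check is to list the resource agents the cluster knows about. For example, something like the following should work (the `ocf:starcounter` provider only appears once the `resource-agents-starcounter` package introduced below is installed):

```text
#list the resource standards pacemaker supports
pcs resource standards
#list the agents shipped by a particular OCF provider
pcs resource agents ocf:heartbeat
pcs resource agents ocf:starcounter
```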
### Future directions

### Practical setup steps

From 1e7e9ec2218f93e8d2e1856389fa7a9b6a67b0dd Mon Sep 17 00:00:00 2001
From: bigwad
Date: Fri, 24 Apr 2020 17:46:16 +0200
Subject: [PATCH 05/26] add cluster diagram

---
 docs/images/Starcounter cluster.png | Bin 0 -> 43160 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 docs/images/Starcounter cluster.png

diff --git a/docs/images/Starcounter cluster.png b/docs/images/Starcounter cluster.png
new file mode 100644
index 0000000000000000000000000000000000000000..01935ade21843c3b9eadfece2785c33bc2832238
GIT binary patch
literal 43160
[binary image data omitted: docs/images/Starcounter cluster.png, 43160 bytes]
z-WHsA_zvG)<;W35EGNO`{MGLi`Pd%g#Nn4LK`tzme|De)JCapyuC6nud2jo9WH8}z zU_=s%cH6??1=j!bd&I8Hx1#7^2=Z|sDpwU(;n;sGY`w%WB0Fu^Z-Y?pjz#0h9s*ySCj0 zvu^e55n}WxydyV50xFE4G4ef(9959li5na9G?y@ggM(Eg0`5ODVkp~6cFJRNhVHd; z%B2rky>S4+Gn2E|Cg%MXIQ@_rV(wNY^N$pMTX0k^K0E3?l$@vI z@iBu?M~71m8v*r@O=p8D0WNCu4aZZZJ3n=bEY{rZ+)a1=#p!Hom$siY@OzUz;4M0| z4?VaPuS^crNZdfV6bJPq;htx$!jP54Fy)C29VqR^bhxVG{4I?H)|WR4-{=o1{eE7NP^LnxuBjCr!eDZb3u2 zW6!{79E>G!PMt92;hXl^M;bTp!T$#LU0Hv{P`-p?@=H=%*e{&SnT5 z@q>4l;H0(NzBLScg%(+%FQ!0}3U0h30RJ!{S_RH~jrnO?hRPLK%y37TtK|(rl^M2) zcrW^LE=g#~p8M(Y(Cn25jB^KruG8Os4v`(xNeIH7?2*T!a5FmTEZ;)w+GmVhf!3hw zXS0rF;;rg$eySyKq3m8ZcP7wKfJ?)(g}j3Q3XsbO867_>uzmz_zgNgD&uz#tKn!~J zGz8J#f5R+Ak{8XDE|a1N#y-l~EmRrXJ(duUJZnSDMIAHWfyU(JZzoozv|TqxD|Urjk@6S-_duU9a( z9`>W_ITlRMK{Ve1vHVY^KeLzv;-{oK`(yNZlkI^mJJ z?@kr}kC7+k_a+s$#%Jqtw_=}3l8-(Aaz1(pHnlPsOpr&GeJD%i5kDj!KYqd=;(f;( z*(D2QyC6bRvnf6H@aQ@1L;YgPHy*EiNqp4>p~B1zjP>h|A4d)_RhjwBm)G}zl}VL9 z`V>21$c%&1rWUY7tbkdm&5g-UO3DebSKOt<6Nf>!$sTQIcZl5z;luE=&l<||IDJ?0 z0CWktPThkDyLY-9kv(KMKXP~@^FzFSr9w}hfnIgX{efcZ@7^li zSAy@pX(81j1as|qD-+k#D(O1Qhw&%Z2bS^>MDh(jCUBh(vAkGSBrqJJF;rU!X6GX? zPEL`0>Y@m~`x`DbIxBJmWj9S6PsN8hLgB<^hFn3*yA8`6wm^5TwOY}2S>H+2q@Fw`mPz{80IMDjYIzjFWq7Q)4}8Q4*ifXziBKrpbg zCeyVfr#RS6M}DN%oA0v(@?M%g&Miz%6@g zpMEjgq3)h!0N4HP`T)`>tr|2q3xQjfc5vjau}p!g2N0CxeCMbqpFVw>=IDB=_|8iM znc>f$Z}0(3c}~DqyI{}r%PVy`CJ8`QgyAhU?@YPETr=USj7M&)Jka+@N*ms4?Ih}S zbre5ald&(02w~zPDV{e#LcGHCfSxZ48qP1$05$_Sw5-YlMw*HT<;nc;-*>i-N!)|~ zk2n#yLkyoj`kzn#UK0al=WmW*CGZPbI{W)$U3T$_9%>&eXZ&;&5ANdc_6|93B@R7d TJNV`R{85lok Date: Fri, 24 Apr 2020 19:14:59 +0200 Subject: [PATCH 06/26] setup explained 2 --- docs/failover-cluster.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index d953bf0..b9a5261 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -207,6 +207,18 @@ Starcounter failover cluster is build on top of proven stack consisting of [pace * supporting hot standby mode so that in-memory data on a standy node is up-to-date with an active node * providing pacemaker control scripts for starcounter database. +Here is a system diagram of a typical starcounter failover cluster: + +![cluster](images/Starcounter%20cluster.png) + +Let's explain components of the cluster top-down. + +* IP address +This is a resource of type `ocf:heartbeat:IPaddr2`, which we use as an ip address flowing in the cluster along with a starcounter application. It allows external clients to access the application by the ip address regardless of which node hosts it. Should be configured to start on the same node as the starcounter application. +* Starcounter application +Controls your starcounter application. A good fit for resource type would be `ocf:heartbeat:anything` which can control any long-running daemon like processes. +* Starcounter database +Controls starcounter database required for the starcounter application. This is the only resource in this setup which is managed by starcounter provided resource agent - `ocf:starcounter:database`. 
You must install the `resource-agents-starcounter` package to use it.

### Future directions

### Practical setup steps

From 59eed14660fb890544d3e25691484bd2b4697093 Mon Sep 17 00:00:00 2001
From: bigwad
Date: Sun, 26 Apr 2020 14:43:25 +0200
Subject: [PATCH 07/26] setup explained (starcounter database)

---
 docs/failover-cluster.md | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index b9a5261..0e765db 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -211,14 +211,18 @@

![cluster](images/Starcounter%20cluster.png)

Let's go through all the cluster resources we have under pacemaker control:

* IP address
This is a resource of type `ocf:heartbeat:IPaddr2`, which we use as a virtual public IP address that moves around the cluster together with the Starcounter application. It allows external clients to access the application by a single IP address regardless of which node hosts it. It should be configured to start on the same node as the Starcounter application.
* Starcounter application
Controls your Starcounter application. A good fit for the resource type is `ocf:heartbeat:anything`, which can control any long-running daemon-like process. It should be configured to start on the same node where an active instance of the Starcounter database is running.
* Starcounter database
Controls the Starcounter database required by the Starcounter application. It has the type `ocf:starcounter:database`, and this is the only resource in this setup which is authored by Starcounter. You must install the `resource-agents-starcounter` package to use it. Unlike the IP address and Starcounter application resources, which can have only one running instance per cluster, this resource runs on every cluster node, but in different states: only one node runs it as a "master" while the rest are "slaves". "Master" and "slave" are pacemaker terms that directly correspond to the Starcounter database modes named "active" and "standby": if the Starcounter database resource in pacemaker is master, the database it controls is in active mode, and the same connection holds between the "slave" pacemaker resource state and the "standby" mode of the Starcounter database. In active mode, Starcounter can accept client connections and perform database operations. In standby mode, Starcounter continuously pulls the latest transactions from the transaction log and applies them to its in-memory state, thus speeding up a possible failover.
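Once the whole cluster is assembled (see the practical setup steps below), the current master/slave layout can be inspected with standard pacemaker tooling; for example (assuming the database resource is named `db`, as in step 9 of the setup steps):

```text
#one-shot snapshot of all cluster resources and their current roles
crm_mon -1
pcs status resources
```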
* GFS2
* DRDB

### Future directions

### Practical setup steps

From 64dc8c27cce763b3bc539110d68649c1f1720873 Mon Sep 17 00:00:00 2001
From: bigwad
Date: Sun, 26 Apr 2020 14:48:53 +0200
Subject: [PATCH 08/26] formatting

---
 docs/failover-cluster.md | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 0e765db..152530c 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -214,11 +214,14 @@

Let's go through all the cluster resources we have under pacemaker control:

* IP address

This is a resource of type `ocf:heartbeat:IPaddr2`, which we use as a virtual public IP address that moves around the cluster together with the Starcounter application. It allows external clients to access the application by a single IP address regardless of which node hosts it. It should be configured to start on the same node as the Starcounter application.

* Starcounter application

Controls your Starcounter application. A good fit for the resource type is `ocf:heartbeat:anything`, which can control any long-running daemon-like process. It should be configured to start on the same node where an active instance of the Starcounter database is running.

* Starcounter database

Controls a running instance of the Starcounter database required by the Starcounter application. It has the type `ocf:starcounter:database`, and this is the only resource in this setup which is authored by Starcounter. You must install the `resource-agents-starcounter` package to use it. Unlike the IP address and Starcounter application resources, which can have only one running instance per cluster, this resource runs on every cluster node, but in different states: only one node runs it as a "master" while the rest are "slaves". "Master" and "slave" are pacemaker terms that directly correspond to the Starcounter database modes named "active" and "standby": if the Starcounter database resource in pacemaker is master, the database it controls is in active mode, and the same connection holds between the "slave" pacemaker resource state and the "standby" mode of the Starcounter database. In active mode, Starcounter can accept client connections and perform database operations.
In standby mode, Starcounter continuously pulls the latest transactions from the transaction log and applies them to its in-memory state, thus speeding up a possible failover.

* GFS2
* DRDB

From 7ef76aa5ff1dc212fb201d5d56d930bbf8b7dac8 Mon Sep 17 00:00:00 2001
From: bigwad
Date: Sun, 26 Apr 2020 15:59:35 +0200
Subject: [PATCH 09/26] Setup explained (GFS2)

---
 docs/failover-cluster.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 152530c..8979a59 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -223,6 +223,8 @@

* GFS2

A resource to build a GFS2 cluster file system on top of the shared DRBD volume. This resource is mostly technical: DRBD itself is just a raw synchronized block device, while Starcounter stores the transaction log in a conventional file and thus requires a file system. The need for a cluster file system (and not a more common local one like ext4) stems from the fact that we use DRBD in dual-primary mode. The necessity of dual-primary mode is covered in the section concerning the DRBD resource. More on the cluster file system requirement for dual-primary DRBD: https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-dual-primary-mode.

* DRDB

From 4edbff722851705d8790f82eb51e600de39748e2 Mon Sep 17 00:00:00 2001
From: bigwad
Date: Sun, 26 Apr 2020 16:47:36 +0200
Subject: [PATCH 10/26] Setup explained (DRBD)

---
 docs/failover-cluster.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 8979a59..a2ca816 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -225,7 +225,9 @@
* DRBD

The DRBD resource provides shared block storage so that standby instances of Starcounter have access to an up-to-date transaction log. Using DRBD has the benefits of ensuring high availability and consistency of the data, thanks to DRBD's synchronous replication. There is one caveat of using DRBD in the Starcounter scenario: we need to run DRBD in the not-so-common dual-primary mode. Only dual-primary mode allows mounting the DRBD volume on several nodes at the same time, which lets the Starcounter standby instance read the transaction log while the active instance writes to it. To avoid split-brain and keep the data consistent, it is strongly advised to use pacemaker fencing when DRBD runs in dual-primary mode. Without fencing, a cluster can end up in a split-brain situation (for instance, due to communication problems) where each instance saves write transactions to the shared transaction log, overwriting transactions saved by the other instance. As a result, all transactions from the moment of the split-brain might be lost. More on fencing: https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/ch05.html#_what_is_fencing.
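Since dual-primary is an unusual mode, it is worth verifying it explicitly once DRBD is configured (see step 4 of the practical setup steps); with the resource named `test` as in those steps, a quick check might be:

```text
#shows the local role and the peer's role; both should be Primary in dual-primary mode
drbdadm status test
```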
From 21949b9ad592f9760bbd503e303e1aab99dc5867 Mon Sep 17 00:00:00 2001
From: bigwad
Date: Sun, 26 Apr 2020 17:22:25 +0200
Subject: [PATCH 11/26] Alternative setups

---
 docs/failover-cluster.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index a2ca816..9dc10d7 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -229,6 +229,14 @@

### Alternative setups

As shown, the Starcounter failover cluster requires consistent shared data storage to maintain an up-to-date in-memory state on the standby node. This gives us the possibility to tweak the cluster setup in two dimensions:
1. If we give up on keeping the in-memory state and accept a longer Starcounter startup on failover, we can use DRBD in single-primary mode. Using DRBD in single-primary mode lets us avoid the strict fencing requirement, provided DRBD quorum is configured: https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-feature-quorum
2. We can use other storage alternatives, provided they offer two required features: they are accessible from both the active and standby Starcounter nodes (1), and they ensure data consistency in a split-brain situation (2). Possible solutions include:
   - using [OCFS2](https://oss.oracle.com/projects/ocfs2/) instead of GFS2
   - using an iSCSI shared volume with SCSI fencing instead of DRBD
   - using an NFS-based transaction log instead of GFS2+DRBD, provided the NFS server supports fencing

From 78ad047fcf5f93656f3a9f2f981e1c6c9058519c Mon Sep 17 00:00:00 2001
From: bigwad
Date: Sun, 26 Apr 2020 17:27:09 +0200
Subject: [PATCH 12/26] future directions

---
 docs/failover-cluster.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 9dc10d7..49c39ef 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -239,5 +239,8 @@

### Future directions

Given enough interest, it should be possible to develop a setup that allows running Starcounter in standby mode in a cluster with single-primary DRBD. Such a configuration has the advantage of not requiring pacemaker fencing while still being consistent and highly available.

### Practical setup steps

From 8cab0f16450b5fb7906b96c8efb3f80f0fcd4ec0 Mon Sep 17 00:00:00 2001
From: bigwad
Date: Sun, 26 Apr 2020 18:48:24 +0200
Subject: [PATCH 13/26] setup steps (common)

---
 docs/failover-cluster.md | 143 +++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 49c39ef..6e1aa36 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -244,3 +244,146 @@

The backbone of a Starcounter cluster is a fairly standard mix of pacemaker, DRBD, GFS2 and IP address resources. Please refer to [Clusters from Scratch](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/index.html) for detailed configuration steps. Here we briefly list the steps required to set it up:

1. SETUP PACEMAKER CLUSTER

\#install and run cluster software (on both nodes)

apt-get install pacemaker pcs psmisc corosync
systemctl start pcsd.service
systemctl enable pcsd.service

\#set cluster user password (on both nodes)

passwd hacluster

\#authenticate cluster nodes (on any node)

pcs host auth node1 node2

\#create cluster (on any node)

pcs cluster setup mycluster node1.mshome.net node2.mshome.net --force

2. ADD QUORUM NODE TO THE CLUSTER (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-quorumdev-haar)

\#on a quorum node (it would be a third machine)

apt install pcs corosync-qnetd
systemctl start pcsd.service
systemctl enable pcsd.service
passwd hacluster

\#install quorum device (on both nodes)

apt install corosync-qdevice

\#add quorum to the cluster (on any node)

pcs host auth node3.mshome.net
pcs quorum device add model net host=node3.mshome.net algorithm=lms
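With the quorum device added, it is worth a quick sanity check that the cluster actually sees it before configuring fencing; something along these lines should work (command names as per the pcs documentation; adjust to your pcs version):

```text
#verify cluster quorum state and the registered quorum device (on any node)
pcs quorum status
pcs quorum device status
```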
3. CONFIGURE FENCING

\#configure diskless sbd (https://documentation.suse.com/sle-ha/12-SP4/html/SLE-HA-all/cha-ha-storage-protect.html#pro-ha-storage-protect-confdiskless)

\#configure sbd (on both nodes)

apt install sbd
mkdir /etc/sysconfig
cat <<EOF >/etc/sysconfig/sbd
  SBD_PACEMAKER=yes
  SBD_STARTMODE=always
  SBD_DELAY_START=no
  SBD_WATCHDOG_DEV=/dev/watchdog
  SBD_WATCHDOG_TIMEOUT=5
EOF
systemctl enable sbd

\#enable stonith for the cluster

pcs property set stonith-enabled="true"
pcs property set stonith-watchdog-timeout=10

4. CONFIGURE DRBD PARTITIONS

Prerequisite: an empty partition /dev/sdb1 on both nodes

\#install and configure drbd (on both nodes)

apt-get install drbd-utils
cat <<END >/etc/drbd.d/test.res
resource test {
  protocol C;
  meta-disk internal;
  device /dev/drbd1;
  syncer {
    verify-alg sha1;
  }
  net {
    allow-two-primaries;
    fencing resource-only;
  }
  handlers {
    fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
    unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
  }
  on node1 {
    disk /dev/sdb1;
    address node1_ip_address:7789;
  }
  on node2 {
    disk /dev/sdb1;
    address node2_ip_address:7789;
  }
}
END
drbdadm create-md test
drbdadm up test

\#make one of the nodes primary (on any node)

drbdadm primary --force test

5. SETUP GFS

\#setup gfs packages (on both nodes)

For Ubuntu:
apt-get install gfs2-utils dlm-controld

For CentOS:
yum localinstall https://repo.cloudlinux.com/cloudlinux/8.1/BaseOS/x86_64/dlm-4.0.9-3.el8.x86_64.rpm

\#setup dlm resource (on one node) (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_dlm.html)

pcs cluster cib dlm_cfg
pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor interval=60s
pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1
pcs cluster cib-push dlm_cfg --config

\#create GFS2 filesystem (on both nodes)
mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:gfs_fs /dev/drbd1

6. CONFIGURE DRBD CLUSTER RESOURCE (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_drbd_device.html)

pcs cluster cib drbd_cfg
pcs -f drbd_cfg resource create drbd_drive ocf:linbit:drbd drbd_resource=test op monitor interval=60s
pcs -f drbd_cfg resource promotable drbd_drive promoted-max=2 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs cluster cib-push drbd_cfg --config

7. CONFIGURE GFS CLUSTER RESOURCE (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_filesystem.html)

pcs cluster cib fs_cfg
pcs -f fs_cfg resource create drbd_fs Filesystem device="/dev/drbd1" directory="/mnt/drbd" fstype="gfs2"
pcs -f fs_cfg constraint colocation add drbd_fs with drbd_drive-clone INFINITY with-rsc-role=Master
pcs -f fs_cfg constraint order promote drbd_drive-clone then start drbd_fs
pcs -f fs_cfg constraint colocation add drbd_fs with dlm-clone INFINITY
pcs -f fs_cfg constraint order dlm-clone then drbd_fs
pcs -f fs_cfg resource clone drbd_fs meta interleave=true
pcs cluster cib-push fs_cfg --config
8. CONFIGURE CLUSTER VIRTUAL IP (on any node)

pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.52.235 cidr_netmask=28 op monitor interval=30s

From 610797d442bd0b89e6b15ea28014de1743798fea Mon Sep 17 00:00:00 2001
From: bigwad
Date: Sun, 26 Apr 2020 18:53:38 +0200
Subject: [PATCH 14/26] Formatting

---
 docs/failover-cluster.md | 81 +++++++++++++++++++++------------------
 1 file changed, 42 insertions(+), 39 deletions(-)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 6e1aa36..42c213a 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -246,50 +246,47 @@

The backbone of a Starcounter cluster is a fairly standard mix of pacemaker, DRBD, GFS2 and IP address resources. Please refer to [Clusters from Scratch](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/index.html) for detailed configuration steps. Here we briefly list the steps required to set it up:

#### 1. SETUP PACEMAKER CLUSTER

```text
#install and run cluster software (on both nodes)
apt-get install pacemaker pcs psmisc corosync
systemctl start pcsd.service
systemctl enable pcsd.service

#set cluster user password (on both nodes)
passwd hacluster

#authenticate cluster nodes (on any node)
pcs host auth node1 node2

#create cluster (on any node)
pcs cluster setup mycluster node1.mshome.net node2.mshome.net --force
```

#### 2. ADD QUORUM NODE TO THE CLUSTER (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-quorumdev-haar)

```text
#on a quorum node (it would be a third machine)
apt install pcs corosync-qnetd
systemctl start pcsd.service
systemctl enable pcsd.service
passwd hacluster

#install quorum device (on both nodes)
apt install corosync-qdevice

#add quorum to the cluster (on any node)
pcs host auth node3.mshome.net
pcs quorum device add model net host=node3.mshome.net algorithm=lms
```

#### 3. CONFIGURE FENCING

```text
#configure diskless sbd (https://documentation.suse.com/sle-ha/12-SP4/html/SLE-HA-all/cha-ha-storage-protect.html#pro-ha-storage-protect-confdiskless)

#configure sbd (on both nodes)
apt install sbd
mkdir /etc/sysconfig
cat <<EOF >/etc/sysconfig/sbd
  SBD_PACEMAKER=yes
  SBD_STARTMODE=always
  SBD_DELAY_START=no
  SBD_WATCHDOG_DEV=/dev/watchdog
  SBD_WATCHDOG_TIMEOUT=5
EOF
systemctl enable sbd

#enable stonith for the cluster
pcs property set stonith-enabled="true"
pcs property set stonith-watchdog-timeout=10
```
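Before relying on diskless SBD, it is worth confirming that a watchdog device actually exists on each node, since SBD's self-fencing depends on it; a minimal check might look like this:

```text
#on both nodes: SBD_WATCHDOG_DEV must point to an existing watchdog device
ls -l /dev/watchdog
#after restarting cluster services, sbd should be active
systemctl status sbd
```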
CONFIGURE DRDB PARTITIONS
 
-\#install and configure drbd (on both nodes)
-
+Prerequisite: empty partition \dev\sdb1 on both nodes
+
+```text
+#install and configure drbd (on both nodes)
 apt-get install drbd-utils
 cat <<END >/etc/drbd.d/test.res
 resource test {
  protocol C;
  meta-disk internal;
  device /dev/drbd1;
  syncer {
  verify-alg sha1;
  }
  net {
  allow-two-primaries;
  fencing resource-only;
  }
  handlers {
  fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
  unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
  }
  on node1 {
  disk /dev/sdb1;
  address node1_ip_address:7789;
  }
  on node2 {
  disk /dev/sdb1;
  address node2_ip_address:7789;
  }
 }
 END
 drbdadm create-md test
 drbdadm up test
 
-\#make one of the nodes primary (on any node)
-
+#make one of the nodes primary (on any node)
 drbdadm primary --force test
+```
 
-5. SETUP GFS
-
-\#setup gfs packages (on both nodes)
+#### 5. SETUP GFS
 
-For Ubuntu:
+```text
+#setup gfs packages (on both nodes)
+#For Ubuntu:
 apt-get install gfs2-utils dlm-controld
 
-For CentOS:
+#For CentOS:
 yum localinstall https://repo.cloudlinux.com/cloudlinux/8.1/BaseOS/x86_64/dlm-4.0.9-3.el8.x86_64.rpm
 
-\#setup dlm resource (on one node) (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_dlm.html)
-
+#setup dlm resource (on one node) (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_dlm.html)
 pcs cluster cib dlm_cfg
 pcs -f dlm_cfg resource create dlm ocf:pacemaker:controld op monitor interval=60s
 pcs -f dlm_cfg resource clone dlm clone-max=2 clone-node-max=1
 pcs cluster cib-push dlm_cfg --config
 
-\#create GFS2 file system (on both nodes)
+#create GFS2 file system (on both nodes)
 mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:gfs_fs /dev/drbd1
+```
 
-6. CONFIGURE DRBD CLUSTER RESOURCE(https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_drbd_device.html)
+#### 6. CONFIGURE DRBD CLUSTER RESOURCE (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_drbd_device.html)
 
+```text
 pcs cluster cib drbd_cfg
 pcs -f drbd_cfg resource create drbd_drive ocf:linbit:drbd drbd_resource=test op monitor interval=60s
 pcs -f drbd_cfg resource promotable drbd_drive promoted-max=2 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true
 pcs cluster cib-push drbd_cfg --config
+```
 
-7. CONFIGURE GFS CLUSTER RESOURCE(https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_filesystem.html)
+#### 7. CONFIGURE GFS CLUSTER RESOURCE (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_filesystem.html)
 
+```text
 pcs cluster cib fs_cfg
 pcs -f fs_cfg resource create drbd_fs Filesystem device="/dev/drbd1" directory="/mnt/drbd" fstype="gfs2"
 pcs -f fs_cfg constraint colocation add drbd_fs with drbd_drive-clone INFINITY with-rsc-role=Master
 pcs -f fs_cfg constraint order promote drbd_drive-clone then start drbd_fs
 pcs -f fs_cfg constraint colocation add drbd_fs with dlm-clone INFINITY
 pcs -f fs_cfg constraint order dlm-clone then drbd_fs
 pcs -f fs_cfg resource clone drbd_fs meta interleave=true
 pcs cluster cib-push fs_cfg --config
+```
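 
+With the file system resource in place, a quick check that GFS2 actually mounted on both nodes can save debugging later (a sketch - the device and mount point are the ones used above):
+
+```text
+pcs status resources
+mount | grep /mnt/drbd
+```
 
-8. CONFIGURE CLUSTER VIRTUAL IP (on any node)
+#### 8. 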
CONFIGURE CLUSTER VIRTUAL IP (on any node) +```text pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.52.235 cidr_netmask=28 op monitor interval=30s +``` From 3da25ca04b3ad8bac01540614dd63f43c7260bcc Mon Sep 17 00:00:00 2001 From: bigwad Date: Mon, 27 Apr 2020 00:12:50 +0200 Subject: [PATCH 15/26] Setup steps (starcounter) --- docs/failover-cluster.md | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index 42c213a..dfd5354 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -390,3 +390,43 @@ pcs cluster cib-push fs_cfg --config ```text pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.52.235 cidr_netmask=28 op monitor interval=30s ``` + +#### 9. CONFIGURE STARCOUNTER AND STARCOUNTER BASED APPLICATION +Next steps confugre a starcounter database and a starcounter based application. + +Let's start with setting default resource strickiness to avoid resources meving back after failed node recovery (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_prevent_resources_from_moving_after_recovery.html): +```text +pcs resource defaults resource-stickiness=100 +``` + +Create a starcounter database resource - `db`. It requires two parameters: `dbpath` - a path to the database folder and `starpath` - a path to `star` utility: +```text +pcs resource create db database dbpath="/mnt/drbd/databases/db" starpath="/home/user/starcounter/star" +``` + +Configure `db` as promotable. After this, pacemaker will start `db` as a slave on all nodes: +```text +pcs resource promotable db meta interleave=true +``` + +Create a resource to control starcounter based application. We use `anything` resource type for it and the name of resource is `webapp`. +We configure connection string so that webapp connect to an existing and running database instance and doesn't start its own. This is to avoid possible interference with the databases that should be started by `db` resource: +```text +pcs resource create webapp anything binfile=/home/wad/WebApp/WebApp cmdline_options="ConnectionString='Database=/mnt/drbd/databases/db;OpenMode=Open;StartMode=RequireStarted'" +``` + +GFS2 fle system should be mounted before the database start: +```text +pcs constraint order start drbd_fs-clone then start db-clone +``` + +Webapp and ClusterIP should run on the same node: +```text +pcs constraint colocation add ClusterIP with webapp +``` + +Webapp requires a promoted instance of db to run on the same node as webapp itself. After this command, one instance of the database will be promoted to active state, while another one will keep running as standby. +```text +pcs constraint order promote db-clone then start webapp +pcs constraint colocation add db-clone with webapp rsc-role=Master +``` From 21058e21f664aadcc24c9d9da43895c46d253999 Mon Sep 17 00:00:00 2001 From: bigwad Date: Mon, 27 Apr 2020 00:13:45 +0200 Subject: [PATCH 16/26] minor --- docs/failover-cluster.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index dfd5354..5c0950f 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -392,7 +392,7 @@ pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.52.235 cidr_netma ``` #### 9. CONFIGURE STARCOUNTER AND STARCOUNTER BASED APPLICATION -Next steps confugre a starcounter database and a starcounter based application. 
+Now we move on to configuring a starcounter database and a starcounter based application.
 
 Let's start with setting default resource strickiness to avoid resources meving back after failed node recovery (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_prevent_resources_from_moving_after_recovery.html):
 ```text

From 39ffea61ea0744168c88048d950f34ff22c9 Mon Sep 17 00:00:00 2001
From: Konstantin
Date: Tue, 28 Apr 2020 13:29:56 +0200
Subject: [PATCH 17/26] Grammarlify failover-cluster.md

---
 docs/failover-cluster.md                      | 102 ++++++++++--------
 ...er cluster.png => starcounter-cluster.png} | Bin
 2 files changed, 57 insertions(+), 45 deletions(-)
 rename docs/images/{Starcounter cluster.png => starcounter-cluster.png} (100%)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 5c0950f..d3243ce 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -201,52 +201,57 @@ Start-ClusterGroup Starcounter
 ## Starcounter failover on Linux
 
 ### Introduction
-The idea of starounter failover cluster is to boundle a starcounter database and a starcounter-based application into an entity that can be health monitored and automatically restarted or migrated to a standby cluster node should a disaster happens. Due to in-memory nature of a starcounter database, when failover happens it may take significant time to load data from media on a cold standby node. Thus it would be beneficial to keep starcounter running as a hot standby. Another requirement to the system concers data integrity. Our goal is to provide consistent solution in terms of [CAP](https://en.wikipedia.org/wiki/CAP_theorem), i.e. no committed transactions can be lost during migration.
+
+The idea of the Starcounter failover cluster is to bundle a Starcounter database and a Starcounter-based application into an entity that can be health monitored and automatically restarted or migrated to a standby cluster node should a disaster happen. Due to the in-memory nature of the Starcounter database, when failover happens it may take significant time to load data from media on a cold standby node. Thus it would be beneficial to keep Starcounter running in the hot standby mode. Another requirement of the system concerns data integrity. Our goal is to provide a consistent solution in terms of [CAP](https://en.wikipedia.org/wiki/CAP_theorem), i.e. no committed transactions can be lost during migration.
+
 ### Setup Explained
-Starcounter failover cluster is build on top of proven stack consisting of [pacemaker](https://clusterlabs.org/), [DRBD](https://www.linbit.com/drbd/) and [GFS2](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/global_file_system_2/index). Pacemaker is responsible for managing cluster nodes, control resources it manages and perform appropriate failover actions. DRBD is responsible for synchronizing starcounter transaction log on a block level. And GFS2 provides Starcounter file level access to a shared transacton log. Starcounter role in this is:
-* supporting hot standby mode so that in-memory data on a standy node is up-to-date with an active node
-* providing pacemaker control scripts for starcounter database. 
-Here is a system diagram of a typical starcounter failover cluster: +Starcounter failover cluster is built on top of a proven stack consisting of [pacemaker](https://clusterlabs.org/), [DRBD](https://www.linbit.com/drbd/) and [GFS2](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/global_file_system_2/index). Pacemaker is responsible for managing cluster nodes, control resources it manages and performs appropriate failover actions. DRBD is responsible for synchronizing the Starcounter transaction log on a block level. And GFS2 provides Starcounter file-level access to a shared transaction log. Starcounter role in this is: + +* Supporting hot standby mode so that in-memory data on a standby node is up-to-date with an active node. +* Providing pacemaker control scripts for the Starcounter database. -![cluster](images/Starcounter%20cluster.png) +Here is a system diagram of a typical Starcounter failover cluster: + +![cluster](images/starcounter-cluster.png) Let's go through all cluster resource we have under pacemaker control: -* IP address +#### IP address -This is a resource of type `ocf:heartbeat:IPaddr2`, which we use as a virtual public ip address flowing in the cluster along with a starcounter application. It allows external clients to access the application by the signle ip address regardless of which node hosts it. Should be configured to start on the same node as the starcounter application. -* Starcounter application +This is a resource of type `ocf:heartbeat:IPaddr2`, which we use as a virtual public IP address flowing in the cluster along with a Starcounter application. It allows external clients to access the application by the single IP address regardless of which node hosts it. It should be configured to start on the same node as the Starcounter application. -Controls your starcounter application. A good fit for resource type would be `ocf:heartbeat:anything` which can control any long-running daemon like processes. Should be configured to start on the same node where an active instance of starcounter database is running. -* Starcounter database +#### Starcounter application -Controls running instance of starcounter database required for the starcounter application. It has a type of `ocf:starcounter:database` and this is the only resource in this setup which is authored by starcounter. You must install `resource-agents-starcounter` package to use it. Unlike IP address and Starcounter application resources that can have only one running instance per cluster, this resource runs on every cluster node, but in different states - only one node can run it as a "master" while the rest are "slaves". "Master" and "slave" are pacemaker terms that directly correspond to starcounter database modes named "active" and "standby", so that if starcounter database resource in pacemaker is master, then the database it controls is in active mode. The same connection exists between "slave" pacemaker resource state and "standby" mode of starcounter database. In the active mode starcounter is able to accept client connections and perform database operations. And in the standby mode starcounter constantly pulls latest transactions from a transaction log and applies it to in-memory state, thus accelerating possible failover. -* GFS2 +Controls your Starcounter application. A good fit for resource type would be `ocf:heartbeat:anything` which can control any long-running daemon like processes. 
It should be configured to start on the same node where an active instance of Starcounter database is running. -A resource to build a GFS2 cluster file system on top of a shared DRBD volume. This resource is mostly technical as DRBD itself is just a raw syncronized block device while Starounter stores transaction log in a conventional file thus requiring a file-system. The need of a cluster file system (and not a more common local one like ext4) stems form a fact that we use DRBD in dual primary mode. The necessity of dual-primary mode is covered in the section concerning DRBD resource. More on cluster file system requirement for dual-primary DRBD: https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-dual-primary-mode. -* DRBD +#### Starcounter database -DRBD resource provides us with a shared block storage so that standby instances of starcounter could have an access to up-to-date transaction log. Using DRBD has befits of ensuring data high-availability and data consistency due to DRBD's synchronous replication. There is one caveat of DRBD usage in starcounter scenario - we need to run DRBD in not so common dual-primary mode. Only dual-primary allows mounting of DRBD volume on several nodes at the same time, thus allowing starcounter standby instance to read the transaction log at the same time active instance writes to it. In order to avoid split-brain and keep data consistent, it's strongly advised to use pacemaker fencing when DRBD is running as dual-primary. Without fencing, a cluster can end up in split-brain situation (for instance due to communication problems) and each instance saves a write transaction in the shared transaction log overwriting a transaction saved by another instance. As a result all transactions from the moment of split-brain might be lost. More on fencing: https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/ch05.html#_what_is_fencing. +Controls running instance of the Starcounter database required for the Starcounter application. It has a type of `ocf:starcounter:database` and this is the only resource in this setup which is authored by Starcounter. You must install `resource-agents-starcounter` package to use it. Unlike IP address and Starcounter application resources that can have only one running instance per cluster, this resource runs on every cluster node, but in different states - only one node can run it as a "master" while the rest are "slaves". "Master" and "slave" are pacemaker terms that directly correspond to Starcounter database modes named "active" and "standby". If Starcounter database resource in Pacemaker is master, then the database it controls is in active mode. The same connection exists between the "slave" Pacemaker resource state and the "standby" mode of the Starcounter database. In the active mode, Starcounter can accept client connections and perform database operations. And in the standby mode, Starcounter constantly pulls the latest transactions from the transaction log and applies it to in-memory state, thus accelerating possible failover. -### Alternative setups +#### GFS2 -As shown, starcouner failover cluster requires conistent shared data storage to maintain up-to-date in-memory state on standby node. It gives us possiblity tweak cluster setup in two dimensions: -1. If we give up on keeping in-memory state and we're fine with longer starcounter startup on failover, then we can use DRBD in single-primary mode. 
Using DRBD in single primary let us avoid strict fencing requirement if DRBD quorum is configured: https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-feature-quorum -2. We can use another storage alternatives given they provide two required features. Namely, being accesible from active and standby starcounter nodes (1) and ensure data consistency in split-brain situation (2). Possible solutions include: - - using [OCFS2] (https://oss.oracle.com/projects/ocfs2/) instead of GFS2 - - using iSCSI shared volume with scsci fencing instead of DRBD - - using NFS-based transaction log given NFS server supports fencing instead of GFS2+DRBD +A resource to build a GFS2 cluster file system on top of a shared DRBD volume. This resource is mostly technical as DRBD itself is just a raw synchronized block device while Starounter stores transaction log in a conventional file thus requiring a file-system. The need for a cluster file system (and not a more common local one like `ext4`) stems form a fact that we use DRBD in dual primary mode. The necessity of dual-primary mode is covered in the section concerning DRBD resources. More on cluster file system requirement for dual-primary DRBD: https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-dual-primary-mode. -### Future directions +#### DRBD + +DRBD resource provides us with shared block storage so that standby instances of Starcounter could have access to the up-to-date transaction log. Using DRBD has benefits of ensuring data high-availability and data consistency due to DRBD's synchronous replication. There is one caveat of DRBD usage in the Starcounter scenario - we need to run DRBD in not so common dual-primary mode. Only dual-primary allows mounting of DRBD volume on several nodes at the same time, thus allowing Starcounter standby instance to read the transaction log at the same time as the active instance writes to it. In order to avoid split-brain and keep data consistent, it's strongly advised to use pacemaker fencing when DRBD is running as dual-primary. Without fencing, a cluster can end up in a split-brain situation (for instance due to communication problems) and each instance saves a write transaction in the shared transaction log overwriting a transaction saved by another instance. As a result, all transactions from the moment of split-brain might be lost. More on fencing: https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/ch05.html#_what_is_fencing. + +### Alternative setups + +As shown, the Starcouner failover cluster requires consistent shared data storage to maintain up-to-date in-memory state on the standby node. It gives us the possibility to tweak cluster setup in two dimensions: -Given enough interest, it should be possible to develop a setup that allows running starcounter in standby mode in a cluster with signle-primary DRBD. Such configuration has an advantage of not requiring pacemaker fencing while still be consistent and highly available. +1. If we give up on keeping in-memory state and we're fine with longer Starcounter startup on failover, then we can use DRBD in single-primary mode. Using DRBD in single primary lets us avoid strict fencing requirements if DRBD [quorum](https://www.linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-feature-quorum) is configured. +2. We can use other storage alternatives given they provide two required features. Namely, being accessible from active and standby Starcounter nodes (1) and ensure data consistency in the split-brain situation (2). 
Possible solutions include: + - Using [OCFS2] (https://oss.oracle.com/projects/ocfs2/) instead of GFS2. + - Using iSCSI shared volume with scsci fencing instead of DRBD. + - Using NFS-based transaction log given NFS server supports fencing instead of GFS2+DRBD. ### Practical setup steps -The backbone of a starcounter cluster is pretty standard mix of pacemaker, DRBD, GFS2 and IP Address resources. Please refer to [Cluster from scratch](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/index.html) for detailed configuring steps. Here we'll briefly list required steps to set it up: +The backbone of a Starcounter cluster is a pretty standard mix of Pacemaker, DRBD, GFS2, and IP Address resources. Please refer to [Cluster from scratch](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/index.html) for detailed configuring steps. Here we'll briefly list the required steps to set it up: -#### 1. SETUP PACEMAKER CLUSTER +#### 1. Setup Pacemaker cluster ```text #install and run cluster software (on both nodes) @@ -264,7 +269,7 @@ pcs host auth node1 node2 pcs cluster setup mycluster node1.mshome.net node2.mshome.net --force ``` -#### 2. ADD QUORUM NODE TO THE CLUSTER (https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-quorumdev-haar) +#### 2. [Add quorum node to the cluster](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-quorumdev-haar) ```text #on a quorum node (it would be a third machine) @@ -281,7 +286,7 @@ pcs host auth node3.mshome.net pcs quorum device add model net host=node3.mshome.net algorithm=lms ``` -#### 3. CONFIGURE FENCING +#### 3. Configure fencing ```text #configure diskless sbd (https://documentation.suse.com/sle-ha/12-SP4/html/SLE-HA-all/cha-ha-storage-protect.html#pro-ha-storage-protect-confdiskless) @@ -303,9 +308,9 @@ pcs property set stonith-enabled="true" pcs property set stonith-watchdog-timeout=10 ``` -#### 4. CONFIGURE DRDB PARTITIONS +#### 4. Configure DRDB partitions -Prerequisite: empty partition \dev\sdb1 on both nodes +Prerequisite: empty partition `\dev\sdb1` on both nodes ```text #intall and configure drbd (on both nodes) @@ -343,7 +348,7 @@ drbdadm up test drbdadm primary --force test ``` -#### 5. SETUP GFS +#### 5. Setup GFS ```text #setup gfs packages (on both nodes) @@ -363,7 +368,7 @@ pcs cluster cib-push dlm_cfg --config mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:gfs_fs /dev/drbd1 ``` -#### 6. CONFIGURE DRBD CLUSTER RESOURCE(https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_drbd_device.html) +#### 6. [Configure DRBD cluster resource](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_drbd_device.html) ```text pcs cluster cib drbd_cfg @@ -372,7 +377,7 @@ pcs -f drbd_cfg resource promotable drbd_drive promoted-max=2 promoted-node-max= pcs cluster cib-push drbd_cfg --config ``` -#### 7. CONFIGURE GFS CLUSTER RESOURCE(https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_filesystem.html) +#### 7. 
[Configure GFS cluster resource](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_filesystem.html) ```text pcs cluster cib fs_cfg @@ -385,48 +390,55 @@ pcs -f fs_cfg resource clone drbd_fs meta interleave=true pcs cluster cib-push fs_cfg --config ``` -#### 8. CONFIGURE CLUSTER VIRTUAL IP (on any node) +#### 8. Configure cluster virtual IP (on any node) ```text pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=192.168.52.235 cidr_netmask=28 op monitor interval=30s ``` -#### 9. CONFIGURE STARCOUNTER AND STARCOUNTER BASED APPLICATION -Now we move on to configuring a starcounter database and a starcounter based application. +#### 9. Configure Starcounter and Starcounter based application + +Now we move on to configuring a Starcounter database and a Starcounter based application. + +Let's start with setting default resource strictness to avoid resources [moving back after failed node recovery](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_prevent_resources_from_moving_after_recovery.html): -Let's start with setting default resource strickiness to avoid resources meving back after failed node recovery (https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_prevent_resources_from_moving_after_recovery.html): ```text pcs resource defaults resource-stickiness=100 ``` -Create a starcounter database resource - `db`. It requires two parameters: `dbpath` - a path to the database folder and `starpath` - a path to `star` utility: +Create a Starcounter database resource - `db`. It requires two parameters: `dbpath` - a path to the database folder and `starpath` - a path to the `star` utility: + ```text pcs resource create db database dbpath="/mnt/drbd/databases/db" starpath="/home/user/starcounter/star" ``` -Configure `db` as promotable. After this, pacemaker will start `db` as a slave on all nodes: +Configure `db` as promotable. After this, Pacemaker will start `db` as a slave on all nodes: + ```text pcs resource promotable db meta interleave=true ``` -Create a resource to control starcounter based application. We use `anything` resource type for it and the name of resource is `webapp`. -We configure connection string so that webapp connect to an existing and running database instance and doesn't start its own. This is to avoid possible interference with the databases that should be started by `db` resource: +Create a resource to control Starcounter based application. We use `anything` resource type for it and the name of the resource is `webapp`. We configure connection string so that the `webapp` connects to an existing and running database instance and doesn't start its own. This is to avoid possible interference with the databases that should be started by the `db` resource: + ```text pcs resource create webapp anything binfile=/home/wad/WebApp/WebApp cmdline_options="ConnectionString='Database=/mnt/drbd/databases/db;OpenMode=Open;StartMode=RequireStarted'" ``` -GFS2 fle system should be mounted before the database start: +GFS2 file system should be mounted before the database start: + ```text pcs constraint order start drbd_fs-clone then start db-clone ``` Webapp and ClusterIP should run on the same node: + ```text pcs constraint colocation add ClusterIP with webapp ``` -Webapp requires a promoted instance of db to run on the same node as webapp itself. 
After this command, one instance of the database will be promoted to active state, while another one will keep running as standby. +Webapp requires a promoted instance of `db` to run on the same node as the `webapp` itself. After this command, one instance of the database will be promoted to the active state, while another one will keep running in the standby state. + ```text pcs constraint order promote db-clone then start webapp pcs constraint colocation add db-clone with webapp rsc-role=Master -``` +``` \ No newline at end of file diff --git a/docs/images/Starcounter cluster.png b/docs/images/starcounter-cluster.png similarity index 100% rename from docs/images/Starcounter cluster.png rename to docs/images/starcounter-cluster.png From eb4de742129ab485c36a26e8e3f61b8796c28f8e Mon Sep 17 00:00:00 2001 From: Konstantin Date: Tue, 28 Apr 2020 15:37:34 +0200 Subject: [PATCH 18/26] Update README.md --- docs/README.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/README.md b/docs/README.md index da54128..1a6c004 100644 --- a/docs/README.md +++ b/docs/README.md @@ -85,15 +85,15 @@ unzip Starcounter.3.0.0-rc-20191212.zip **Install prerequisites.** ```text -yum install wget unzip libaio ncurses-compat-libs clang +sudo yum install wget unzip libaio ncurses-compat-libs clang ``` Starcounter requires a certain version of SWI-Prolog, which is not available from existing repositories, but can be found in package archives: ```text -yum localinstall https://kojipkgs.fedoraproject.org//packages/compat-readline6/6.3/16.fc30/x86_64/compat-readline6-6.3-16.fc30.x86_64.rpm -yum localinstall https://kojipkgs.fedoraproject.org//vol/fedora_koji_archive05/packages/pl/7.2.0/1.fc23/x86_64/pl-7.2.0-1.fc23.x86_64.rpm -ln /usr/lib64/swipl-7.2.0/lib/x86_64-linux/libswipl.so.7.2.0 /usr/lib64/libswipl.so +sudo yum localinstall https://kojipkgs.fedoraproject.org//packages/compat-readline6/6.3/16.fc30/x86_64/compat-readline6-6.3-16.fc30.x86_64.rpm +sudo yum localinstall https://kojipkgs.fedoraproject.org//vol/fedora_koji_archive05/packages/pl/7.2.0/1.fc23/x86_64/pl-7.2.0-1.fc23.x86_64.rpm +sudo ln /usr/lib64/swipl-7.2.0/lib/x86_64-linux/libswipl.so.7.2.0 /usr/lib64/libswipl.so ``` **Download and unpack Starcounter binaries.** From d86c965247d0672940fc3c580832d8fc7cedc211 Mon Sep 17 00:00:00 2001 From: Konstantin Date: Thu, 7 May 2020 13:26:17 +0200 Subject: [PATCH 19/26] Add DRBD firewall rules --- docs/failover-cluster.md | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index d3243ce..b1ba344 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -344,6 +344,16 @@ END drbdadm create-md test drbdadm up test +# Add the DRBD port 7789 in the firewall to allow synchronization of data between the two nodes. 
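
+# (the firewall-cmd rules below assume firewalld is running; node1_ip_address and
+# node2_ip_address are placeholders for the peers' real addresses. On a host without
+# firewalld, an equivalent iptables rule would be, for example:
+# iptables -A INPUT -p tcp -s <peer_ip> --dport 7789 -j ACCEPT)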
+ +# On the first node: +firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="node2_ip_address" port port="7789" protocol="tcp" accept' +firewall-cmd --reload + +# On the second node: +firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="node1_ip_address" port port="7789" protocol="tcp" accept' +firewall-cmd --reload + #make one of the nodes primary (on any node) drbdadm primary --force test ``` From b93d28e1e15a9f395ccca7a43f8e81783c4939d9 Mon Sep 17 00:00:00 2001 From: Konstantin Date: Thu, 7 May 2020 13:40:16 +0200 Subject: [PATCH 20/26] Add links to extra DRBD manuals --- docs/failover-cluster.md | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index b1ba344..661e49b 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -308,9 +308,14 @@ pcs property set stonith-enabled="true" pcs property set stonith-watchdog-timeout=10 ``` -#### 4. Configure DRDB partitions +#### 4. Configure DRBD partitions -Prerequisite: empty partition `\dev\sdb1` on both nodes +**Prerequisite**: empty partition `\dev\sdb1` on both nodes. + +Extra resources: + +- [How to Setup DRBD to Replicate Storage on Two CentOS 7 Servers](https://www.tecmint.com/setup-drbd-storage-replication-on-centos-7/). +- [How to Install DRBD on CentOS Linux](https://linuxhandbook.com/install-drbd-linux/). ```text #intall and configure drbd (on both nodes) From 4b3997aedc9d980314074f8f5a03347d9093c858 Mon Sep 17 00:00:00 2001 From: Konstantin Date: Fri, 8 May 2020 17:32:16 +0200 Subject: [PATCH 21/26] Fix minor typos in failover-cluster.md --- docs/failover-cluster.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index 661e49b..04a6a33 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -310,7 +310,7 @@ pcs property set stonith-watchdog-timeout=10 #### 4. Configure DRBD partitions -**Prerequisite**: empty partition `\dev\sdb1` on both nodes. +**Prerequisite**: empty partition `/dev/sdb1` on both nodes. Extra resources: @@ -436,7 +436,7 @@ pcs resource promotable db meta interleave=true Create a resource to control Starcounter based application. We use `anything` resource type for it and the name of the resource is `webapp`. We configure connection string so that the `webapp` connects to an existing and running database instance and doesn't start its own. 
This is to avoid possible interference with the databases that should be started by the `db` resource: ```text -pcs resource create webapp anything binfile=/home/wad/WebApp/WebApp cmdline_options="ConnectionString='Database=/mnt/drbd/databases/db;OpenMode=Open;StartMode=RequireStarted'" +pcs resource create webapp anything binfile=/home/user/WebApp/WebApp cmdline_options="ConnectionString='Database=/mnt/drbd/databases/db;OpenMode=Open;StartMode=RequireStarted'" ``` GFS2 file system should be mounted before the database start: From e3502b58e04db98a029197cee3741d97992205bd Mon Sep 17 00:00:00 2001 From: Konstantin Date: Fri, 8 May 2020 17:32:56 +0200 Subject: [PATCH 22/26] Add [How to Create a GFS2 Formatted Cluster File System] link --- docs/failover-cluster.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index 04a6a33..f9faa95 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -316,6 +316,7 @@ Extra resources: - [How to Setup DRBD to Replicate Storage on Two CentOS 7 Servers](https://www.tecmint.com/setup-drbd-storage-replication-on-centos-7/). - [How to Install DRBD on CentOS Linux](https://linuxhandbook.com/install-drbd-linux/). +- [How to Create a GFS2 Formatted Cluster File System](https://www.thegeekdiary.com/how-to-create-a-gfs2-formatted-cluster-file-system/). ```text #intall and configure drbd (on both nodes) From d2a54329daca6e38a764610ed9ffc9bf94efb823 Mon Sep 17 00:00:00 2001 From: Konstantin Date: Fri, 8 May 2020 17:33:53 +0200 Subject: [PATCH 23/26] Add info about DRBD under SELinux --- docs/failover-cluster.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index f9faa95..158c428 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -386,6 +386,12 @@ mkfs.gfs2 -p lock_dlm -j 2 -t mycluster:gfs_fs /dev/drbd1 #### 6. [Configure DRBD cluster resource](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Clusters_from_Scratch/_configure_the_cluster_for_the_drbd_device.html) +>[DRBD will not be able to run under the default SELinux security policies. If you are familiar with SELinux, you can modify the policies in a more fine-grained manner, but here we will simply exempt DRBD processes from SELinux control:](https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/ch07.html#_install_the_drbd_packages) + +```text +semanage permissive -a drbd_t +``` + ```text pcs cluster cib drbd_cfg pcs -f drbd_cfg resource create drbd_drive ocf:linbit:drbd drbd_resource=test op monitor interval=60s From 607cad8970281a55c1cad3e212a30f7ed4f306f7 Mon Sep 17 00:00:00 2001 From: Konstantin Date: Fri, 8 May 2020 17:34:07 +0200 Subject: [PATCH 24/26] Add resource-agents-4.2.0-2.fc30.x86_64.rpm package installation step --- docs/failover-cluster.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md index 158c428..9db7abf 100644 --- a/docs/failover-cluster.md +++ b/docs/failover-cluster.md @@ -442,6 +442,12 @@ pcs resource promotable db meta interleave=true Create a resource to control Starcounter based application. We use `anything` resource type for it and the name of the resource is `webapp`. We configure connection string so that the `webapp` connects to an existing and running database instance and doesn't start its own. 
This is to avoid possible interference with the databases that should be started by the `db` resource:
+
+For CentOS, an extra package has to be installed:
+
+```text
+yum install https://rpmfind.net/linux/fedora/linux/releases/30/Everything/x86_64/os/Packages/r/resource-agents-4.2.0-2.fc30.x86_64.rpm
+```
+
 ```text
 pcs resource create webapp anything binfile=/home/user/WebApp/WebApp cmdline_options="ConnectionString='Database=/mnt/drbd/databases/db;OpenMode=Open;StartMode=RequireStarted'"
 ```

From 00593151ab8b520a60f263c9f2e6a89797bb040a Mon Sep 17 00:00:00 2001
From: Konstantin
Date: Fri, 8 May 2020 19:14:49 +0200
Subject: [PATCH 25/26] Enhance [Configure Starcounter and Starcounter based
 application] section

---
 docs/failover-cluster.md | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 9db7abf..9e04fbc 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -428,7 +428,9 @@ Let's start with setting default resource strictness to avoid resources [moving
 pcs resource defaults resource-stickiness=100
 ```
 
-Create a Starcounter database resource - `db`. It requires two parameters: `dbpath` - a path to the database folder and `starpath` - a path to the `star` utility:
+Create a Starcounter database resource - `db`. It requires two parameters: `dbpath` - a path to the database folder and `starpath` - a path to the `star` utility executable file:
+
+***Note:** the `dbpath` value must point to a folder with an existing Starcounter database. A Starcounter database can be created with the following command: `star new path`.*
 
 ```text
 pcs resource create db database dbpath="/mnt/drbd/databases/db" starpath="/home/user/starcounter/star"
@@ -449,7 +451,7 @@ yum install https://rpmfind.net/linux/fedora/linux/releases/30/Everything/x86_64
 ```
 
 ```text
-pcs resource create webapp anything binfile=/home/user/WebApp/WebApp cmdline_options="ConnectionString='Database=/mnt/drbd/databases/db;OpenMode=Open;StartMode=RequireStarted'"
+pcs resource create webapp anything binfile=/home/user/WebApp/WebApp cmdline_options="--urls http://0.0.0.0:80 ConnectionString='Database=/mnt/drbd/databases/db;OpenMode=Open;StartMode=RequireStarted'"
 ```
 
 GFS2 file system should be mounted before the database starts:

From e4a90e39ef70b7e5ce819036455e389fb4cffb05 Mon Sep 17 00:00:00 2001
From: Konstantin
Date: Thu, 11 Jun 2020 14:48:00 +0200
Subject: [PATCH 26/26] Add `corosync` and `pacemaker` auto startup commands

---
 docs/failover-cluster.md | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/docs/failover-cluster.md b/docs/failover-cluster.md
index 9e04fbc..13af4ed 100644
--- a/docs/failover-cluster.md
+++ b/docs/failover-cluster.md
@@ -471,4 +471,11 @@ Webapp requires a promoted instance of `db` to run on the same node as the `weba
 ```text
 pcs constraint order promote db-clone then start webapp
 pcs constraint colocation add db-clone with webapp rsc-role=Master
+```
+
+#### Configure automatic `corosync` and `pacemaker` startup on restart
+
+```text
+systemctl enable corosync
+systemctl enable pacemaker
 ``` \ No newline at end of file
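
A quick way to validate the finished cluster is a controlled failover drill (a sketch only - the node name is the one assumed earlier in this guide, and older pcs versions use `pcs cluster standby` instead of `pcs node standby`):

```text
#check that all resources settled on the active node
pcs status

#put the active node into standby and watch the database being promoted
#and webapp/ClusterIP moving to the peer node
pcs node standby node1
pcs status

#bring the node back; it will rejoin as a hot standby
pcs node unstandby node1
```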