If you add a new node and then cancel the cluster plan, the new node does not shut down cleanly; its log shows the following folsom crash:
> 2014-07-09 12:44:40 =ERROR REPORT====
> ** Generic server <0.12924.0> terminating
> ** Last message in was timeout
> ** When Server state == {state,folsom_sample_slide_uniform,484444094,60}
> ** Reason for termination ==
> **
> {badarg,[{ets,select_delete,[484444094,[{{{'$1','_'},'_'},[{'<','$1',1404935020}],[true]}]],[]},{folsom_sample_slide_uniform,trim,2,[{file,"src/folsom_sample_slide_uniform.erl"},{line,70}]},{folsom_sample_slide_server,handle_info,2,[{file,"src/folsom_sample_slide_server.erl"},{line,61}]},{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,607}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
> 2014-07-09 12:44:40 =CRASH REPORT====
> crasher:
> initial call: folsom_sample_slide_server:init/1
> pid: <0.12924.0>
> registered_name: []
> exception exit:
> {{badarg,[{ets,select_delete,[484444094,[{{{'$1','_'},'_'},[{'<','$1',1404935020}],[true]}]],[]},{folsom_sample_slide_uniform,trim,2,[{file,"src/folsom_sample_slide_uniform.erl"},{line,70}]},{folsom_sample_slide_server,handle_info,2,[{file,"src/folsom_sample_slide_server.erl"},{line,61}]},{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,607}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]},[{gen_server,terminate,6,[{file,"gen_server.erl"},{line,747}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
> ancestors:
> [folsom_sample_slide_sup,folsom_sup,riak_core_stat_sup,riak_core_sup,<0.148.0>]
> messages: []
> links: [<0.173.0>]
> dictionary: []
> trap_exit: false
> status: running
> heap_size: 610
> stack_size: 24
> reductions: 780
> neighbours:
> 2014-07-09 12:44:40 =SUPERVISOR REPORT====
> Supervisor: {local,folsom_sample_slide_sup}
> Context: child_terminated
> Reason:
> {badarg,[{ets,select_delete,[484444094,[{{{'$1','_'},'_'},[{'<','$1',1404935020}],[true]}]],[]},{folsom_sample_slide_uniform,trim,2,[{file,"src/folsom_sample_slide_uniform.erl"},{line,70}]},{folsom_sample_slide_server,handle_info,2,[{file,"src/folsom_sample_slide_server.erl"},{line,61}]},{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,607}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
> Offender:
> [{pid,<0.12924.0>},{name,undefined},{mfargs,{folsom_sample_slide_server,start_link,[folsom_sample_slide_uniform,484444094,60]}},{restart_type,transient},{shutdown,brutal_kill},{child_type,worker}]
Steps to reproduce:

1. `riak-admin cluster add riak@existing-node`
2. `riak-admin cluster plan`
3. `riak-admin cluster clear`
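The stack trace points at `ets:select_delete/2` inside `folsom_sample_slide_uniform:trim/2`, raising `badarg` on table id `484444094`. A plausible reading (an assumption, not confirmed by the report) is that clearing the plan tears down the node's stats and deletes the slide sample's ETS table, while the `folsom_sample_slide_server` trim timer fires one more time against the now-missing table. A minimal escript sketch of that failure mode, using a hypothetical table name:

```erlang
#!/usr/bin/env escript
%% Sketch: calling ets:select_delete/2 on a table that has already been
%% deleted raises badarg, matching the trim/2 crash in the report.
%% The table name `slide_demo` and the match spec bound are illustrative.
main(_) ->
    T = ets:new(slide_demo, [ordered_set, public]),
    ets:delete(T),                              %% simulate stats teardown
    try
        %% same shape of match spec as in the crash report:
        %% delete entries whose first key element is below a cutoff
        ets:select_delete(T, [{{{'$1', '_'}, '_'},
                               [{'<', '$1', 1404935020}],
                               [true]}])
    catch
        error:badarg ->
            io:format("badarg: table gone before trim, as in the report~n")
    end.
```

If this reading is right, the fix would be for the slide server to stop its trim timer (or tolerate `badarg`) before the sample's ETS table is removed during teardown.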