
Add graceful shutdown timeout for Jetty#5429

Merged
drcrallen merged 16 commits into apache:master from drcrallen:shutdown/cleanShutdownNiketh on Mar 23, 2018

Conversation

@drcrallen
Contributor

@drcrallen drcrallen commented Feb 27, 2018

#4716 fixed against master

I'm hoping this can go in because it would be super handy.

Adds druid.server.http.gracefulShutdownTimeout and druid.server.http.unannouncePropogationDelay which are described in the docs.

The druid.server.http.unannouncePropogationDelay setting defaults to 0, both to avoid delaying unit tests that may start a Jetty server and so that the default behavior matches the legacy behavior of not waiting.
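The intended interaction of the two settings on shutdown can be sketched in plain Java as follows. This is a simplification, not Druid's actual implementation; every name besides the two config properties is hypothetical:

```java
import java.util.concurrent.TimeUnit;

// A minimal sketch (not Druid's actual implementation) of how the two settings
// interact on shutdown: first wait for the unannouncement to propagate, then
// stop the server with a bound on how long in-flight requests may drain.
public class GracefulStopSketch
{
  public interface StoppableServer
  {
    void setStopTimeout(long millis); // bound on graceful drain, like Jetty's Server#setStopTimeout

    void stop() throws Exception;
  }

  public static void stopGracefully(
      StoppableServer server,
      long unannouncePropogationDelayMillis, // druid.server.http.unannouncePropogationDelay
      long gracefulShutdownTimeoutMillis     // druid.server.http.gracefulShutdownTimeout
  ) throws Exception
  {
    // The node has already been unannounced by the lifecycle at this point;
    // give clients time to observe the unannouncement before refusing traffic.
    TimeUnit.MILLISECONDS.sleep(unannouncePropogationDelayMillis);
    server.setStopTimeout(gracefulShutdownTimeoutMillis);
    server.stop();
  }
}
```

Note that the worst-case shutdown time is additive: the propagation delay elapses in full before the graceful-shutdown timeout starts.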

@drcrallen drcrallen changed the title Add graceful shutdown timeout Add graceful shutdown timeout for Jetty Feb 27, 2018
@drcrallen
Contributor Author

Failed from #2373

@drcrallen
Contributor Author

Failing from:

[ERROR] Please refer to /home/travis/build/druid-io/druid/integration-tests/target/failsafe-reports for the individual test results.
[ERROR] Please refer to dump files (if any exist) [date]-jvmRun[N].dump, [date].dumpstream and [date]-jvmRun[N].dumpstream.
[ERROR] org.apache.maven.surefire.booter.SurefireBooterForkException: There was an error in the forked process
[ERROR] Max number of retries[10] exceeded for Task[Waiting for instance to be ready: [http://172.17.0.1:8082]]. Failing.
[ERROR] 	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:673)
[ERROR] 	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:535)
[ERROR] 	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:280)
[ERROR] 	at org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
[ERROR] 	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1124)
[ERROR] 	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:954)
[ERROR] 	at org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:832)
[ERROR] 	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
[ERROR] 	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
[ERROR] 	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154)
[ERROR] 	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146)
[ERROR] 	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
[ERROR] 	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
[ERROR] 	at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
[ERROR] 	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
[ERROR] 	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:309)
[ERROR] 	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:194)
[ERROR] 	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:107)
[ERROR] 	at org.apache.maven.cli.MavenCli.execute(MavenCli.java:993)
[ERROR] 	at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:345)
[ERROR] 	at org.apache.maven.cli.MavenCli.main(MavenCli.java:191)
[ERROR] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[ERROR] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
[ERROR] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[ERROR] 	at java.lang.reflect.Method.invoke(Method.java:498)
[ERROR] 	at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
[ERROR] 	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
[ERROR] 	at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
[ERROR] 	at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)

@niketh
Contributor

niketh commented Feb 27, 2018

@drcrallen With PR #4716, when I deployed it in production I still saw the channel disconnect issue. That was some time ago though, so I don't remember the details. Let me re-deploy #4716 in production and confirm whether I still see that issue.

server.setHandler(new StatisticsHandler());
server.setConnectors(connectors);
server.setAttribute(GRACEFUL_SHUTDOWN_TIMEOUT, config.getGracefulShutdownTimeout().toStandardDuration().getMillis());

Member


Why doesn't this call server.setStopTimeout and/or server.setStopAtShutdown, which use the StatisticsHandler internally to cause a graceful shutdown if set? As far as I can tell, using them and calling server.stop has the more desirable effect of also no longer accepting new connections, which this approach seems to lack unless I am missing something, and should reduce the number of connections that have to be forcibly closed after the grace period is up.

further reading:

Contributor Author


setStopTimeout sounds good, but setStopAtShutdown registers a shutdown hook, which should already be taken care of by the lifecycle. Shutdown hooks do not preserve any sort of order, and we want to unannounce before we shut down Jetty.
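The graceful-stop behavior under discussion (refuse new work once stop begins, give in-flight work a grace period to drain) can be sketched in plain Java. This is a hypothetical illustration, not Jetty's implementation:

```java
// A plain-Java sketch (hypothetical, not Jetty's implementation) of graceful
// stop: once stop begins, new work is refused while in-flight work is given
// up to a grace period to drain.
public class DrainSketch
{
  private boolean stopping = false;
  private int active = 0;

  // A connector would call this before handling a request.
  public synchronized boolean tryBegin()
  {
    if (stopping) {
      return false; // refuse new work once stop has started
    }
    active++;
    return true;
  }

  public synchronized void end()
  {
    active--;
    notifyAll();
  }

  // Returns true if all active work drained within the grace period.
  public synchronized boolean stopGracefully(long timeoutMillis) throws InterruptedException
  {
    stopping = true;
    final long deadline = System.currentTimeMillis() + timeoutMillis;
    while (active > 0) {
      final long remaining = deadline - System.currentTimeMillis();
      if (remaining <= 0) {
        return false; // grace period elapsed with requests still in flight
      }
      wait(remaining);
    }
    return true;
  }
}
```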

@drcrallen
Contributor Author

@clintropolis I put in the changes to do the native shutdown stuff. Now the logs look something like this:

2018-03-01T18:28:25,074 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle - Running shutdown hook
2018-03-01T18:28:25,085 INFO [Thread-50] io.druid.curator.discovery.CuratorDruidNodeAnnouncer - Unannouncing [DiscoveryDruidNode{druidNode=DruidNode{serviceName='druid/historical', host='127.0.0.1', port=-1, plaintextPort=8083, enablePlaintextPort=true, tlsPort=-1, enableTlsPort=false}, nodeType='historical', services={dataNodeService=DataNodeService{tier='_default_tier', maxSize=0, type=historical, priority=0}, lookupNodeService=LookupNodeService{lookupTier='__default'}}}].
2018-03-01T18:28:25,085 INFO [Thread-50] io.druid.curator.announcement.Announcer - unannouncing [/druid/internal-discovery/historical/127.0.0.1:8083]
2018-03-01T18:28:25,103 INFO [Thread-50] io.druid.curator.discovery.CuratorDruidNodeAnnouncer - Unannounced [DiscoveryDruidNode{druidNode=DruidNode{serviceName='druid/historical', host='127.0.0.1', port=-1, plaintextPort=8083, enablePlaintextPort=true, tlsPort=-1, enableTlsPort=false}, nodeType='historical', services={dataNodeService=DataNodeService{tier='_default_tier', maxSize=0, type=historical, priority=0}, lookupNodeService=LookupNodeService{lookupTier='__default'}}}].
2018-03-01T18:28:25,104 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.stop()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@6cbbb9c4].
2018-03-01T18:28:25,104 INFO [Thread-50] io.druid.curator.announcement.Announcer - unannouncing [/druid/listeners/lookups/__default/http:127.0.0.1:8083]
2018-03-01T18:28:25,105 INFO [Thread-50] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Unannouncing start time on [/druid/listeners/lookups/__default/http:127.0.0.1:8083]
2018-03-01T18:28:25,105 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.query.lookup.LookupReferencesManager.stop()] on object[io.druid.query.lookup.LookupReferencesManager@1cd43562].
2018-03-01T18:28:25,105 INFO [Thread-50] io.druid.query.lookup.LookupReferencesManager - LookupReferencesManager is stopping.
2018-03-01T18:28:25,105 INFO [LookupReferencesManager-MainThread] io.druid.query.lookup.LookupReferencesManager - Lookup Management loop exited, Lookup notices are not handled anymore.
2018-03-01T18:28:25,106 INFO [Thread-50] io.druid.query.lookup.LookupReferencesManager - LookupReferencesManager is stopped.
2018-03-01T18:28:25,106 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.discovery.DruidLeaderClient.stop()] on object[io.druid.discovery.DruidLeaderClient@310ed6b4].
2018-03-01T18:28:25,106 INFO [Thread-50] io.druid.discovery.DruidLeaderClient - Stopped.
2018-03-01T18:28:25,106 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@3440e9cd].
2018-03-01T18:28:25,109 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider.stop()] on object[io.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider@6088451e].
2018-03-01T18:28:25,109 INFO [Thread-50] io.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider - stopping
2018-03-01T18:28:25,109 INFO [Thread-50] io.druid.curator.discovery.CuratorDruidNodeDiscoveryProvider - stopped
2018-03-01T18:28:25,109 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.java.util.http.client.NettyHttpClient.stop()] on object[io.druid.java.util.http.client.NettyHttpClient@441fbe89].
2018-03-01T18:28:25,142 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.coordination.ZkCoordinator.stop()] on object[io.druid.server.coordination.ZkCoordinator@1665fa54].
2018-03-01T18:28:25,143 INFO [Thread-50] io.druid.server.coordination.ZkCoordinator - Stopping ZkCoordinator for [DruidServerMetadata{name='127.0.0.1:8083', hostAndPort='127.0.0.1:8083', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=historical, priority=0}]
2018-03-01T18:28:25,143 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.coordination.SegmentLoadDropHandler.stop()] on object[io.druid.server.coordination.SegmentLoadDropHandler@36b9cb99].
2018-03-01T18:28:25,143 INFO [Thread-50] io.druid.server.coordination.SegmentLoadDropHandler - Stopping...
2018-03-01T18:28:25,143 INFO [Thread-50] io.druid.server.coordination.CuratorDataSegmentServerAnnouncer - Unannouncing self[DruidServerMetadata{name='127.0.0.1:8083', hostAndPort='127.0.0.1:8083', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=historical, priority=0}] at [/druid/announcements/127.0.0.1:8083]
2018-03-01T18:28:25,143 INFO [Thread-50] io.druid.curator.announcement.Announcer - unannouncing [/druid/announcements/127.0.0.1:8083]
2018-03-01T18:28:25,145 INFO [Thread-50] io.druid.server.coordination.SegmentLoadDropHandler - Stopped.
2018-03-01T18:28:25,145 INFO [Thread-50] io.druid.server.initialization.jetty.JettyServerModule - Stopping jetty server...
2018-03-01T18:28:25,151 INFO [Thread-50] org.eclipse.jetty.server.AbstractConnector - Stopped ServerConnector@619c446a{HTTP/1.1,[http/1.1]}{0.0.0.0:8083}
2018-03-01T18:28:25,152 INFO [Thread-50] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@2ffb3aec{/,null,UNAVAILABLE}
2018-03-01T18:28:25,154 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.java.util.metrics.MonitorScheduler.stop()] on object[io.druid.java.util.metrics.MonitorScheduler@6342d610].
2018-03-01T18:28:25,154 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.java.util.emitter.service.ServiceEmitter.close() throws java.io.IOException] on object[ServiceEmitter{serviceDimensions={service=druid/historical, host=127.0.0.1:8083, version=}, emitter=io.druid.java.util.emitter.core.NoopEmitter@1a5f7e7c}].
2018-03-01T18:28:25,154 INFO [Thread-50] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner.stop()] on object[io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner@53f4c1e6].

on shutdown.

The Jetty shutdown happens after unannouncing, but it probably needs to wait a short time before rejecting connections outright, or else clients will see rejected-connection exceptions.

@clintropolis
Member

@drcrallen I think you are right, a delay before the graceful Jetty shutdown makes sense and would be more graceful than rejecting outright. That way we can handle incoming requests for a reasonable period while other nodes may still perceive us as announced (I'm not sure offhand how long that window is), and then finish as many of those requests as possible.

I guess long enough delay after unannouncing sort of does negate the need for jetty to gracefully shutdown... and we've gone full circle 😜

@drcrallen
Contributor Author

@clintropolis the timeframe between "unannounce" and "reject any more connections" should be on the order of the propagation time of the unannouncement.

I think I'll just add a config option for this extra delay, with a comment that it is additive to the graceful shutdown timeout and relates to how fast nodes can act on a ZooKeeper unannouncement.

@drcrallen
Contributor Author

@clintropolis / @niketh can you check and see if the latest changes make sense to you?

Member

@clintropolis clintropolis left a comment


Oops, sorry for the delay, LGTM 🤘

@drcrallen
Contributor Author

I'm a bit stuck here: I can't run the integration tests locally on docker-for-osx, and the failures are pretty much undiagnosable.

@drcrallen
Contributor Author

Ok, I'm now making the behavior completely opt-in to see if it passes integration tests.

@jihoonson
Contributor

Looks like it has a problem in integration tests.

@gianm
Contributor

gianm commented Mar 9, 2018

@drcrallen what issue do you have when running the tests locally? I think @jihoonson had run into issues too but solved them somehow; maybe you have run into the same issue and he could help?

@drcrallen
Contributor Author

@gianm / @jihoonson I'm on docker for mac and can't seem to find the magic DOCKER_IP that allows the nodes to talk to each other.

@drcrallen
Contributor Author

Here's the network info for the "druid-overlord" container:

root@b4c55b6b8c45:/var/lib/druid# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:05  
          inet addr:172.17.0.5  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18691 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6160 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:27539981 (27.5 MB)  TX bytes:338017 (338.0 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

@drcrallen
Contributor Author

After some more digging I found https://serverfault.com/questions/870568/fatal-error-cant-open-and-lock-privilege-tables-table-storage-engine-for-use, which explained why the mysql containers were failing. I added "storage-driver" : "aufs" to the Docker for Mac daemon config and am trying the integration tests again.

@drcrallen
Contributor Author

Yay! I'm getting the same errors as the integration tests now.

@jihoonson looks like using aufs storage is really important for Docker-on-Mac integration tests. I don't know a good way to record that.

@jihoonson
Contributor

@drcrallen hmm, that's odd. I'm using overlay2 for Docker on Mac and the integration tests work fine. Also, aufs no longer appears to be the default because of performance issues (https://docs.docker.com/storage/storagedriver/aufs-driver/). Are you using docker-machine?

@drcrallen
Contributor Author

[Screenshot: screen shot 2018-03-09 at 11 06 21 am]

@drcrallen
Contributor Author

The underlying problem was that StatisticsHandler is a wrapper, so you have to supply it with the underlying handler to call. I added it as the main handler on the Jetty server, with the chain of other handlers as its delegate.
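For illustration, the wrapper relationship looks roughly like this (hypothetical types, not Jetty's API): the wrapper can only count and forward requests for a delegate chain it has been given, so it is useless on its own.

```java
// Hypothetical illustration (not Jetty's API) of the handler-wrapper shape:
// the wrapper only counts and forwards requests for the delegate it wraps.
public class WrapperSketch
{
  public interface Handler
  {
    String handle(String request);
  }

  public static class StatsWrapper implements Handler
  {
    private final Handler delegate; // the "underlying handler to call"
    private int handled = 0;

    public StatsWrapper(Handler delegate)
    {
      this.delegate = delegate;
    }

    @Override
    public String handle(String request)
    {
      handled++; // track requests, analogous to StatisticsHandler's statistics
      return delegate.handle(request); // then pass through to the real chain
    }

    public int getHandled()
    {
      return handled;
    }
  }
}
```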

@jon-wei
Contributor

jon-wei commented Mar 9, 2018

@drcrallen Hm, I'm using the overlay2 storage driver on my mac as well, and haven't run into issues running the integration tests locally:

$ docker info
Containers: 36
 Running: 0
 Paused: 0
 Stopped: 36
Images: 167
Server Version: 17.12.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.60-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952GiB
Name: linuxkit-025000000001
ID: DMMW:LPG4:X355:2TTG:VIEY:6I3H:EYHS:K5NJ:EHK2:EDK6:ZPX3:U6LQ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

@jihoonson
Contributor

@drcrallen I'm using the same version of Docker. Do eval "$(docker-machine env integration)" and export DOCKER_IP=$(docker-machine ip integration) not help?

@drcrallen
Contributor Author

@jihoonson it works with aufs and DOCKER_IP=localhost

I didn't bother trying docker-machine commands

@jihoonson
Contributor

All right, it's good that it works for you. I was just wondering what steps you took before hitting the above error.

@drcrallen
Contributor Author

I didn't do anything special: I had overlayfs enabled, set the docker ip, and ran the integration tests. But they would not finish. I checked the containers and noticed that mysql was not running in its container; looking in the logs, I saw the error.

It might be a red herring, though. So I wouldn't spend too much time on it unless others report a similar problem with integration tests.

@drcrallen
Contributor Author

@jihoonson would you be able to review this?

@jihoonson
Contributor

@drcrallen sure. I'll review soon.

@Override
public void lifeCycleFailure(LifeCycle event, Throwable cause)
{
log.warn(cause, "Jetty lifecycle event failed [%s]", event.getClass());
Contributor


Probably log.error()?

Contributor Author


We can start at error and see what happens; I'm good with that.

}
catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RE(e, "Interrupted waiting for jetty shutdown.");
Contributor


So if an InterruptedException occurs, which means this thread wakes up before unannounceDelay elapses, the server won't be stopped. Is this desirable? Maybe it's better to stop the server anyway?

Contributor Author


I don't think that is desired here. The reason stop() would be called is that the lifecycle is shutting down, i.e. there is already an interrupt requested.

If there's ANOTHER interrupt coming in, I would take it to essentially mean "you are going to get a kill -9 if you don't exit right now", in which case continuing on with the lifecycle shutdowns in a best-effort fashion seems like the right thing to do.
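The best-effort behavior described here can be sketched as follows (a simplification, not the PR's actual code): swallow the InterruptedException but restore the interrupt flag, so later lifecycle stages can still observe it while shutdown continues.

```java
// Sketch (not the PR's actual code) of continuing shutdown best-effort after
// an interrupt: re-set the flag instead of aborting, and report whether the
// full delay elapsed.
public class InterruptibleDelay
{
  /** Returns true if the full delay elapsed, false if interrupted early. */
  public static boolean sleepQuietly(long millis)
  {
    try {
      Thread.sleep(millis);
      return true;
    }
    catch (InterruptedException e) {
      // Restore the flag so later lifecycle stages can observe it, then let
      // the caller continue with a best-effort shutdown.
      Thread.currentThread().interrupt();
      return false;
    }
  }
}
```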

@drcrallen
Contributor Author

@jihoonson any other comments?

@drcrallen
Contributor Author

A few days have passed and no other comments

@drcrallen drcrallen merged commit ef21ce5 into apache:master Mar 23, 2018
@drcrallen drcrallen deleted the shutdown/cleanShutdownNiketh branch March 23, 2018 16:38
@drcrallen drcrallen added this to the 0.13.0 milestone Apr 6, 2018
gianm added a commit to gianm/druid that referenced this pull request Jun 21, 2018
One of these is a configuration parameter (introduced in apache#5429),
but it's never been in a release, so I think it's ok to rename it.
gianm added a commit that referenced this pull request Jun 25, 2018
One of these is a configuration parameter (introduced in #5429),
but it's never been in a release, so I think it's ok to rename it.
jon-wei pushed a commit to implydata/druid-public that referenced this pull request Sep 20, 2018
* Add graceful shutdown timeout

* Handle interruptedException

* Incorporate code review comments

* Address code review comments

* Poll for activeConnections to be zero

* Use statistics handler to get active requests

* Use native jetty shutdown gracefully

* Move log line back to where it was

* Add unannounce wait time

* Make the default retain prior behavior

* Update docs with new config defaults

* Make duration handling on jetty shutdown more consistent

* StatisticsHandler is a wrapper

* Move jetty lifecycle error logging to error
