27 changes: 27 additions & 0 deletions manager/controlapi/node.go
@@ -248,6 +248,29 @@ func (s *Server) UpdateNode(ctx context.Context, request *api.UpdateNodeRequest)
}, nil
}

func removeNodeAttachments(tx store.Tx, nodeID string) error {
// orphan the node's attached containers. if we don't do this, the
// network these attachments are connected to will never be removable
tasks, err := store.FindTasks(tx, store.ByNodeID(nodeID))
if err != nil {
return err
}
for _, task := range tasks {
// if the task is an attachment, mark it orphaned rather than deleting it
// outright; the task reaper and allocator will do the heavy lifting.
// basically, GetAttachment will return the attachment spec if that's the
// kind of runtime, or nil if it's not.
if task.Spec.GetAttachment() != nil {

Contributor:

Spoke with @dperny offline but capturing here for everyone else: I guess it makes sense to enforce a model where the task reaper is the only place where tasks are deleted. We could mark these tasks as orphaned and let them be cleaned out by the reaper, with the caveat that the network attachments are removable for orphaned tasks.

Collaborator:

I agree with marking the tasks Orphaned when the associated node is deleted. I think that's exactly the right behavior.

This should be done in the orchestrators, which are what control the task lifecycle. For example, in the replicated orchestrator:

func (r *Orchestrator) handleTaskEvent(ctx context.Context, event events.Event) {
        switch v := event.(type) {
        case api.EventDeleteNode:
                r.restartTasksByNodeID(ctx, v.Node.ID)

We would want to modify this code to set the old task's state (not desired state) to Orphaned. The code here might be a little hard to follow, because the tasks get added to a map that's eventually processed as a batch, with those tasks getting passed to Restart. We could potentially have a separate map for tasks that need to become orphaned, and pass them to the restart manager in the same way, but then set the state to Orphaned after calling Restart.
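
Concretely, here is a minimal sketch of that idea, assuming the same store helpers this PR already uses (store.FindTasks, store.ByNodeID, store.UpdateTask); the helper name markNodeTasksOrphaned and the notion of calling it from the orchestrator's batch-processing step are illustrative, not existing swarmkit code:

// markNodeTasksOrphaned is a hypothetical helper an orchestrator could call
// while handling api.EventDeleteNode: it moves every task on the deleted node
// to the Orphaned actual state (desired state untouched), leaving cleanup to
// the task reaper and network allocator.
func markNodeTasksOrphaned(tx store.Tx, nodeID string) error {
        tasks, err := store.FindTasks(tx, store.ByNodeID(nodeID))
        if err != nil {
                return err
        }
        for _, t := range tasks {
                // only move tasks forward; skip any that already reached Orphaned
                if t.Status.State < api.TaskStateOrphaned {
                        t.Status.State = api.TaskStateOrphaned
                        if err := store.UpdateTask(tx, t); err != nil {
                                return err
                        }
                }
        }
        return nil
}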

The global orchestrator would need similar changes.

For network attachment tasks, I'm not entirely sure what the best way is. We could create a simple orchestrator for those that just watches for node deletion events. I think that's the cleanest way, but as a simpler kludge we could handle this in the network allocator.

I don't think any changes in the task reaper or network allocator are necessary, because once a task's state is set to Orphaned, its network resources are supposed to be freed. But it's definitely worth confirming that this works as expected after the node has been deleted.

Contributor (@anshulpundir), Oct 16, 2017:

I see that when we remove a node, we delete all tasks for that node from the store. What is the reason not to keep the history around in that case? Since the service may still be around, doesn't it make sense to keep the task history from that node around? @aaronlehmann

Collaborator:

It was done for global service tasks because otherwise these tasks would stay in the store forever. The task reaper keeps a certain number per node, so there's no provision for removing all the tasks from dead nodes. I'm not sure I see a better way than deleting them immediately, because with the node no longer in the system, we'll never know when the task has truly shut down and is finally safe to delete. Possibly we could use the orphaned state, as discussed above, though if we wanted to be really careful, we would set that state after a delay passes, like we do for unresponsive nodes.

For reference, here's the commit that made the change: 56463e4

// don't delete the task. instead, update it to `ORPHANED` so that
// the taskreaper will clean it up.
task.Status.State = api.TaskStateOrphaned
if err := store.UpdateTask(tx, task); err != nil {
return err
}
}
}
return nil
}

// RemoveNode removes a Node referenced by NodeID with the given NodeSpec.
// - Returns NotFound if the Node is not found.
// - Returns FailedPrecondition if the Node has manager role (and is part of the memberlist) or is not shut down.
@@ -313,6 +336,10 @@ func (s *Server) RemoveNode(ctx context.Context, request *api.RemoveNodeRequest)
return err
}

if err := removeNodeAttachments(tx, request.NodeID); err != nil {

Contributor:

It probably makes sense to add a comment here to say why we're doing this.

return err
}

return store.DeleteNode(tx, request.NodeID)
})
if err != nil {
168 changes: 168 additions & 0 deletions manager/controlapi/node_test.go
@@ -731,3 +731,171 @@ func TestUpdateNodeDemote(t *testing.T) {
t.Parallel()
testUpdateNodeDemote(t)
}

// TestRemoveNodeAttachments tests the unexported removeNodeAttachments
// function. This avoids us having to update the TestRemoveNodes function to
// test all of this logic
func TestRemoveNodeAttachments(t *testing.T) {
// first, set up a store and all that
ts := newTestServer(t)
defer ts.Stop()

ts.Store.Update(func(tx store.Tx) error {
store.CreateCluster(tx, &api.Cluster{
ID: identity.NewID(),
Spec: api.ClusterSpec{
Annotations: api.Annotations{
Name: store.DefaultClusterName,
},
},
})
return nil
})

// make sure before we start that our server is in a good (empty) state
r, err := ts.Client.ListNodes(context.Background(), &api.ListNodesRequest{})
assert.NoError(t, err)
assert.Empty(t, r.Nodes)

// create a manager
createNode(t, ts, "id1", api.NodeRoleManager, api.NodeMembershipAccepted, api.NodeStatus_READY)
r, err = ts.Client.ListNodes(context.Background(), &api.ListNodesRequest{})
assert.NoError(t, err)
assert.Len(t, r.Nodes, 1)

// create a worker. put it in the DOWN state, which is the state it needs to
// be in before it can be removed anyway
createNode(t, ts, "id2", api.NodeRoleWorker, api.NodeMembershipAccepted, api.NodeStatus_DOWN)
r, err = ts.Client.ListNodes(context.Background(), &api.ListNodesRequest{})
assert.NoError(t, err)
assert.Len(t, r.Nodes, 2)

// create a network we can "attach" to
err = ts.Store.Update(func(tx store.Tx) error {
n := &api.Network{
ID: "net1id",
Spec: api.NetworkSpec{
Annotations: api.Annotations{
Name: "net1name",
},
Attachable: true,
},
}
return store.CreateNetwork(tx, n)
})
require.NoError(t, err)

// create some tasks:
err = ts.Store.Update(func(tx store.Tx) error {
// 1.) A network attachment on the node we're going to remove
task1 := &api.Task{
ID: "task1",
NodeID: "id2",
DesiredState: api.TaskStateRunning,
Status: api.TaskStatus{
State: api.TaskStateRunning,
},
Spec: api.TaskSpec{
Runtime: &api.TaskSpec_Attachment{
Attachment: &api.NetworkAttachmentSpec{
ContainerID: "container1",
},
},
Networks: []*api.NetworkAttachmentConfig{
{
Target: "net1id",
Addresses: []string{}, // just leave this empty, we don't need it
},
},
},
// we probably don't care about the rest of the fields.
}
if err := store.CreateTask(tx, task1); err != nil {
return err
}

// 2.) A network attachment on the node we're not going to remove
task2 := &api.Task{
ID: "task2",
NodeID: "id1",
DesiredState: api.TaskStateRunning,
Status: api.TaskStatus{
State: api.TaskStateRunning,
},
Spec: api.TaskSpec{
Runtime: &api.TaskSpec_Attachment{
Attachment: &api.NetworkAttachmentSpec{
ContainerID: "container2",
},
},
Networks: []*api.NetworkAttachmentConfig{
{
Target: "net1id",
Addresses: []string{}, // just leave this empty, we don't need it
},
},
},
// we probably don't care about the rest of the fields.
}
if err := store.CreateTask(tx, task2); err != nil {
return err
}

// 3.) A regular task on the node we're going to remove
task3 := &api.Task{
ID: "task3",
NodeID: "id2",
DesiredState: api.TaskStateRunning,
Status: api.TaskStatus{
State: api.TaskStateRunning,
},
Spec: api.TaskSpec{
Runtime: &api.TaskSpec_Container{
Container: &api.ContainerSpec{},
},
},
}
if err := store.CreateTask(tx, task3); err != nil {
return err
}

// 4.) A regular task on the node we're not going to remove
task4 := &api.Task{
ID: "task4",
NodeID: "id1",
DesiredState: api.TaskStateRunning,
Status: api.TaskStatus{
State: api.TaskStateRunning,
},
Spec: api.TaskSpec{
Runtime: &api.TaskSpec_Container{
Container: &api.ContainerSpec{},
},
},
}
return store.CreateTask(tx, task4)
})
require.NoError(t, err)

// Now, call the function with our nodeID. make sure it returns no error
err = ts.Store.Update(func(tx store.Tx) error {
return removeNodeAttachments(tx, "id2")
})
require.NoError(t, err)

// Now, make sure only task1, the network-attached task on id2, was marked
// as orphaned
ts.Store.View(func(tx store.ReadTx) {
tasks, err := store.FindTasks(tx, store.All)
require.NoError(t, err)
// all 4 tasks should still be present; task1 is orphaned, not deleted
require.Len(t, tasks, 4)
// and task1 should now be in the ORPHANED state
for _, task := range tasks {
require.NotNil(t, task)
if task.ID == "task1" {
require.Equal(t, task.Status.State, api.TaskStateOrphaned)
}
}
})
}