This repository was archived by the owner on Aug 19, 2019. It is now read-only.

Conversation

@igorpeshansky
Contributor

Reviewers, please let me know if this needs to be squashed.

@igorpeshansky
Contributor Author

@supriyagarg, would be great if you could review as well.

std::unique_lock<std::mutex>&& completion, bool verbose)
: completion_(std::move(completion)), callback_(callback),
remaining_bytes_(0), verbose_(verbose) {}
~Watcher() {} // Unlocks the completion_ lock.
Contributor

Is unlocking the completion lock inherent, or do you still need to add that?

Contributor Author

I think you meant s/inherent/implicit/. Yes, unlocking is implicit, through the semantics of std::unique_lock.
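
(For reference, a minimal sketch of the RAII behavior being relied on here; the class and member names are illustrative, not the actual Watcher. A std::unique_lock member releases its mutex in its own destructor if it still owns the lock, so no explicit unlock is needed.)

    #include <mutex>

    // Illustrative only: the mutex handed in (already locked) stays locked for
    // the lifetime of this object and is released when the object is destroyed.
    class CompletionHolder {
     public:
      explicit CompletionHolder(std::unique_lock<std::mutex>&& completion)
          : completion_(std::move(completion)) {}
      ~CompletionHolder() = default;  // ~unique_lock() unlocks the mutex here.

     private:
      std::unique_lock<std::mutex> completion_;
    };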

Contributor

SGTM

LOG(INFO) << "Waiting for completion";
}
std::lock_guard<std::mutex> await_completion(completion_mutex);
if (config_.VerboseLogging()) {
Contributor

Nit: the varying conditionals for exporting logs beg for a level-based log filter.

Contributor Author

We already have a level-based log filter. I think you mean something like a semantic module-based log filter. I have an item on my list to rework logging and make it sane, but that won't happen any time soon.

Contributor

SGTM.

if (config_.VerboseLogging()) {
LOG(INFO) << "Waiting for completion";
}
std::lock_guard<std::mutex> await_completion(completion_mutex);
Contributor

We should sync up with the Kubernetes folks to see under what conditions a timeout would occur when watching resources. My concern is that we might face a deadlock here should the connection be left hanging.

Contributor Author

This mutex is unlocked when the watch thread sees an EOF from the server or encounters an error.

Contributor

Is an EOF enough to stop this thread? If so, we should definitely exit and trigger a restart.

Contributor Author

An EOF means that the master terminated the watch connection. I would treat this as an exceptional situation.

const std::string& ns = KubernetesNamespace();
// TODO: This is unreliable, see
// https://github.com/kubernetes/kubernetes/issues/52162.
const std::string pod_name = boost::asio::ip::host_name();
Contributor

Should we also add a conditional for an environment variable? That way we'd have at least one more option before going the unreliable route.

Contributor Author

Which environment variable? Do you mean $HOSTNAME? Or something set via the downward API?

Contributor

Something via the downward API. With enough documentation it would likely be more reliable than the hostname.

Contributor Author

I'm happy to entertain various options here, but let's open a bug and track this separately.
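
(For reference, a hedged sketch of the downward-API alternative discussed above. The POD_NAME variable and GetPodName helper are hypothetical; the pod spec would populate the variable from metadata.name via the downward API, with the hostname kept only as a fallback.)

    #include <cstdlib>
    #include <string>

    #include <boost/asio/ip/host_name.hpp>

    // Hypothetical: prefer a pod name injected via the Kubernetes downward API
    // and fall back to the unreliable hostname only when it is not set.
    std::string GetPodName() {
      const char* from_downward_api = std::getenv("POD_NAME");
      if (from_downward_api != nullptr && *from_downward_api != '\0') {
        return from_downward_api;
      }
      return boost::asio::ip::host_name();  // see kubernetes/kubernetes#52162
    }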

"/namespaces/" + ns + "/pods/" + pod_name);
const json::Object* pod = pod_response->As<json::Object>();
const json::Object* spec = pod->Get<json::Object>("spec");
current_node_ = spec->Get<json::String>("nodeName");
Contributor

In theory, if the hostname ended up pointing at an unscheduled pod (through user error), it's possible for nodeName not to be set. A crazy edge case, but possible.

Contributor Author

This code examines the pod that the metadata agent itself is running in. Since the binary is executing, I think it's safe to assume the pod is scheduled... :-)

Contributor

This concern is mostly because using the hostname is not reliable and could lead to some strange behavior in the first place. If you feel that's fine for now, this SGTM.

LOG(ERROR) << "Watch type: " << type << " object: " << *pod;
if (type == "MODIFIED" || type == "ADDED") {
std::vector<MetadataUpdater::ResourceMetadata> result_vector =
GetPodAndContainerMetadata(pod, collected_at);
Contributor

@bmoyles0117 bmoyles0117 Dec 19, 2017

Why is it this callback's responsibility to generate the collected_at time? Could we move it to within the function?

Contributor Author

It's not at all certain that the callback will be executed immediately after the watch notification. Threads get delayed, etc. I'd rather have the timestamp as close to the actual change time as possible.

Contributor

@bmoyles0117 bmoyles0117 Dec 19, 2017

I'm confused: at what point between computing collected_at and invoking GetPodAndContainerMetadata is a thread being spawned? It seems that all of this code IS the thread in question.

Contributor Author

The thread can get preempted between the time the watch notification is received and the time the response is parsed. Relegating the computation of collected_at to the callback would increase the chances of that happening.
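
(To illustrate the ordering concern: the timestamp is captured as soon as the notification arrives, before any parsing or dispatch that the scheduler might delay. OnWatchNotification and HandleEvent are hypothetical stand-ins, not the agent's actual functions.)

    #include <chrono>
    #include <string>

    using Timestamp = std::chrono::time_point<std::chrono::system_clock>;

    // Hypothetical downstream handler; in the real code this would parse the
    // event and attach collected_at to the resulting metadata.
    void HandleEvent(const std::string& raw_event, Timestamp collected_at) {
      (void)raw_event;
      (void)collected_at;
    }

    void OnWatchNotification(const std::string& raw_event) {
      // Capture the timestamp immediately, as close as possible to the time
      // of the actual change on the API server.
      const Timestamp collected_at = std::chrono::system_clock::now();
      // Parsing and the callback may be preempted or delayed, but they still
      // see the early timestamp.
      HandleEvent(raw_event, collected_at);
    }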

} catch (const KubernetesReader::QueryException& e) {
// Already logged.
}
LOG(INFO) << "Watch thread (pods) exiting";
Contributor

Where do we have a looping mechanism to ensure that we re-establish a connection should our connection to the master get cut?

Contributor Author

Nowhere yet. Let's open a bug to improve the robustness of this code (probably need to guard it by a config option as well).

Contributor

Will do, SGTM.

LOG(ERROR) << "Watch type: " << type << " object: " << *node;
if (type == "MODIFIED" || type == "ADDED") {
std::vector<MetadataUpdater::ResourceMetadata> result_vector;
result_vector.emplace_back(GetNodeMetadata(node->Clone(), collected_at));
Contributor

@bmoyles0117 bmoyles0117 Dec 19, 2017

Same concern here for collected_at.

Contributor Author

Same response.


try {
// TODO: There seems to be a Kubernetes API bug with watch=true.
WatchMaster(std::string(kKubernetesEndpointPath) + "/watch/nodes/"
Contributor

We should resolve this before merging, or use /watch/ for pods as well if possible.

Contributor Author

Hmm, this works just fine for pods. Frankly, I don't see it as a merge blocker. We should absolutely open a bug and investigate at leisure.

Contributor

Will do, SGTM.
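
(For context, a sketch of the two request shapes under discussion; kEndpointPath is a hypothetical stand-in for the agent's kKubernetesEndpointPath, with /api/v1 assumed for illustration. The deprecated /watch/ prefix and the watch=true query parameter on the list endpoint request the same watch stream.)

    #include <string>

    // Illustrative only: two equivalent ways to ask the API server for a watch.
    const std::string kEndpointPath = "/api/v1";  // assumed value

    std::string WatchNodesViaPrefix() {
      return kEndpointPath + "/watch/nodes/";      // legacy /watch/ form used above
    }

    std::string WatchNodesViaQueryParam() {
      return kEndpointPath + "/nodes?watch=true";  // query-parameter form
    }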

Contributor Author

@igorpeshansky igorpeshansky left a comment

Responded. PTAL.

Contributor Author

@igorpeshansky igorpeshansky left a comment

Responded. PTAL.

Contributor

@bmoyles0117 bmoyles0117 left a comment

LGTM, last thing I would request is to remove all of the disabled comments. 🥇

@igorpeshansky igorpeshansky force-pushed the igorp-kubernetes-refactoring branch from 86b5b28 to 9f60e88 on December 19, 2017 21:34
@igorpeshansky igorpeshansky force-pushed the igorp-kubernetes-refactoring branch from 9f60e88 to bad0e2d on December 19, 2017 21:36
@igorpeshansky igorpeshansky force-pushed the igorp-kubernetes-refactoring branch from bad0e2d to 02e756f on December 19, 2017 21:37
@igorpeshansky igorpeshansky changed the base branch from igorp-kubernetes-refactoring to igorp-kubernetes-memoize-owners December 19, 2017 21:40
@igorpeshansky igorpeshansky changed the base branch from igorp-kubernetes-memoize-owners to igorp-kubernetes-refactoring December 19, 2017 21:42
@igorpeshansky igorpeshansky changed the base branch from igorp-kubernetes-refactoring to igorp-kubernetes-memoize-owners December 19, 2017 21:42
@igorpeshansky igorpeshansky force-pushed the igorp-kubernetes-memoize-owners branch from bad0e2d to 94f1019 on December 19, 2017 23:18
const std::string crlf("\r\n");
auto begin = std::begin(range);
auto end = std::end(range);
const size_t available = std::distance(begin, end);
@qingling128 qingling128 Dec 19, 2017

Not sure if it's just me, but the naming of available and remaining_bytes threw me off a bit. I thought available is the available size in this current chunk while remaining_bytes is the total remaining bytes that we still need to read. After a closer look, I think it's in fact the other way around.

Contributor Author

available is the available bytes in the current notification, which may include the remainder of this chunk and the start of the next one. Suggestions welcome.

@qingling128

Maybe add a comment?

// The available bytes in the current notification, which may include the remainder of this chunk and the start of the next one.

Contributor Author

Done. Also renamed remaining_bytes_ to remaining_chunk_bytes_.

CompleteChunk();

// Process the next batch.
while (remaining_bytes_ == 0 && std::begin(pos) != std::end(pos)) {

@qingling128

Is this while loop required only because we might run into blank lines? It took me a while to digest why there is a loop here while ReadNextChunk and CompleteChunk don't have loops (because the whole thing will be triggered repeatedly).

Contributor Author

Yes, this is a callback that collects data. It may get partial data, or it may get data that's on the boundary between two chunks.
The while loop is because we may get dud (empty) chunks. It basically says: while there's still data available and we haven't started a new chunk, try to start one. Eventually it either runs out of data (std::begin(pos) == std::end(pos)) or starts a non-empty chunk (remaining_bytes_ != 0).
Now that you point it out, there's a way to split this out into a streaming callback that only handles the chunked encoding and then invokes another callback on each full chunk. The former could even be contributed upstream. Since this won't change the basic functionality, I would defer this refactoring until later. We should open a bug, though.

@qingling128

Makes sense. I created b/70889561 for the potential refactoring.
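
(For readers skimming the thread, a standalone sketch of the chunk-header handling described above; StartNextChunk is a hypothetical helper, not the agent's parser. It assumes HTTP/1.1 chunked encoding, where each chunk begins with a hexadecimal size line terminated by CRLF.)

    #include <cstddef>
    #include <sstream>
    #include <string>

    // Illustrative only: scan freshly received bytes for the next chunk-size
    // header and report how many payload bytes that chunk announces. Returns
    // the offset just past the parsed header(s).
    size_t StartNextChunk(const std::string& data, size_t pos,
                          size_t* remaining_chunk_bytes) {
      const std::string crlf = "\r\n";
      // Keep trying while there is data left and we have not started a
      // non-empty chunk, mirroring the "dud (empty) chunks" case above.
      while (*remaining_chunk_bytes == 0 && pos < data.size()) {
        const size_t line_end = data.find(crlf, pos);
        if (line_end == std::string::npos) {
          break;  // incomplete header; wait for more data
        }
        const std::string line = data.substr(pos, line_end - pos);
        pos = line_end + crlf.size();
        if (line.empty()) {
          continue;  // blank line between chunks; skip it
        }
        std::stringstream stream(line);
        stream >> std::hex >> *remaining_chunk_bytes;  // hexadecimal chunk size
      }
      return pos;
    }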

}
std::string line(begin, iter);
iter = std::next(iter, crlf.size());
if (line.empty()) {
@qingling128 qingling128 Dec 19, 2017

Why don't we return early when the line is empty? Something like:

    const std::string crlf("\r\n");
    auto begin = std::begin(range);
    auto end = std::end(range);
    auto iter = std::search(begin, end, crlf.begin(), crlf.end());
    if (iter == begin) {
      // Blank lines are fine, just skip them.
#ifdef VERBOSE
      LOG(DEBUG) << "Skipping blank line within chunked encoding;"
                 << " remaining data '" << std::string(iter, end) << "'";
#endif
      return boost::iterator_range<const char*>(begin, end);
    } else if (iter == end) {
      LOG(ERROR) << "Invalid chunked encoding: '"
                 << std::string(begin, end)
                 << "'";
      return boost::iterator_range<const char*>(begin, end);
    }

    std::string line(begin, iter);
    iter = std::next(iter, crlf.size());
//#ifdef VERBOSE
//      LOG(DEBUG) << "Line: '" << line << "'";
//#endif
    std::stringstream stream(line);
    stream >> std::hex >> remaining_bytes_;
    return boost::iterator_range<const char*>(iter, end);

Contributor Author

Your proposed code is not quite equivalent. You're correct that line.empty() is equivalent to iter == begin (before the iter reassignment). However, in the case of an empty line we return boost::iterator_range<const char*>(iter, end) after the reassignment, so we're skipping the crlf sequence.
I could do it the way you're proposing, but then I would have to replicate the code that skips the crlf. WDYT?

@qingling128

Oops, right. I was missing the iter = std::next(iter, crlf.size()); part. I'd still vote for keeping the edge cases (invalid chunk and blank line) together though.

Contributor Author

Yes, you're right, it does read better. Changed.

@igorpeshansky igorpeshansky changed the base branch from igorp-kubernetes-memoize-owners to igorp-kubernetes-refactoring December 19, 2017 23:31
Contributor Author

@igorpeshansky igorpeshansky left a comment

Responded. PTAL.

Contributor Author

@igorpeshansky igorpeshansky left a comment

Addressed feedback. PTAL.

@igorpeshansky igorpeshansky force-pushed the igorp-kubernetes-refactoring branch 2 times, most recently from f7c65d9 to 38d3db0 on December 20, 2017 19:56
@qingling128 qingling128 left a comment

LGTM.

@qingling128

Please address the conflicts though.

@igorpeshansky igorpeshansky force-pushed the igorp-kubernetes-refactoring branch 2 times, most recently from 02e756f to d623dcb on December 20, 2017 22:55
json::value node_raw_metadata = json::object({
{"blobs", json::object({
{"association", json::object({
{"version", json::string(kRawContentVersion)},
Contributor

@supriyagarg supriyagarg Dec 20, 2017

What version does kRawContentVersion refer to?

Contributor Author

It's an arbitrary constant we (jointly with the Resource Metadata API) picked to represent the version of the envelope object schema.

std::string(kKubernetesEndpointPath) + "/nodes/" + node_name);
Timestamp collected_at = std::chrono::system_clock::now();
// TODO: find is_deleted.
//const json::Object* status = pod->Get<json::Object>("status");
Contributor

Drop the commented line?

Also, what does the TODO mean? Should it be "compute" is_deleted?

Contributor Author

The commented line is a starting point for finding is_deleted (if it's anywhere, it's in the pod status). The point is that we don't really know how to compute is_deleted, so the TODO is a compromise between "define is_deleted" and "compute is_deleted". :-)
Also, this seems to be an unrelated change from the earlier refactoring that accidentally cropped up in this PR. Let me rebase the commits off master.

docker_prefix_end);
// TODO: find is_deleted.
//const json::Object* state = container->Get<json::Object>("state");
bool is_deleted = false;
Contributor

ditto.

Contributor Author

Ditto.

@igorpeshansky igorpeshansky force-pushed the igorp-kubernetes-refactoring branch 2 times, most recently from d20cef7 to d623dcb on December 20, 2017 23:04
@igorpeshansky igorpeshansky changed the base branch from igorp-kubernetes-refactoring to master December 20, 2017 23:07
Contributor Author

@igorpeshansky igorpeshansky left a comment

Responded to your comments. PTAL.

Contributor

@supriyagarg supriyagarg left a comment

Ah - this PR looks much shorter now :)

@igorpeshansky
Contributor Author

Yeah, rebasing does help. Sorry about the mess.

@igorpeshansky igorpeshansky merged commit d68c701 into master Dec 21, 2017
@igorpeshansky igorpeshansky deleted the igorp-kubernetes-watch branch December 21, 2017 00:09