Conversation
@supriyagarg, would be great if you could review as well.
    std::unique_lock<std::mutex>&& completion, bool verbose)
    : completion_(std::move(completion)), callback_(callback),
      remaining_bytes_(0), verbose_(verbose) {}
~Watcher() {}  // Unlocks the completion_ lock.
Is unlocking the completion lock inherent, or do you still need to add that?
I think you meant s/inherent/implicit/. Yes, unlocking is implicit, through the semantics of std::unique_lock.
SGTM
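For readers unfamiliar with the semantics under discussion, here is a minimal, self-contained sketch (not the actual Watcher class from this PR) of how a std::unique_lock moved into a member releases its mutex automatically on destruction:

```cpp
#include <mutex>
#include <utility>

// Minimal illustration (not the PR's actual class): the moved-in
// std::unique_lock owns the mutex, and its destructor -- run as part of
// ~Watcher -- releases it, so no explicit unlock call is needed.
class Watcher {
 public:
  explicit Watcher(std::unique_lock<std::mutex>&& completion)
      : completion_(std::move(completion)) {}
  ~Watcher() = default;  // completion_'s destructor unlocks the mutex.

 private:
  std::unique_lock<std::mutex> completion_;
};
```

Constructing a Watcher in a scope and letting it go out of scope leaves the mutex unlocked, which is the "implicit" behavior referred to above.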
  LOG(INFO) << "Waiting for completion";
}
std::lock_guard<std::mutex> await_completion(completion_mutex);
if (config_.VerboseLogging()) {
Nit: the varying conditionals for exporting logs beg for a level-based log filter.
We already have a level-based log filter. I think you mean something like a semantic module-based log filter. I have an item on my list to rework logging and make it sane, but that won't happen any time soon.
SGTM.
if (config_.VerboseLogging()) {
  LOG(INFO) << "Waiting for completion";
}
std::lock_guard<std::mutex> await_completion(completion_mutex);
We should sync up with the Kubernetes folks to see under what conditions a timeout would occur when watching resources. My concern is we might face a deadlock here should the connection be left hanging.
This mutex is unlocked when the watch thread sees an EOF from the server or encounters an error.
Is an EOF enough to stop this thread? If so, we should definitely exit and trigger a restart.
An EOF means that the master terminated the watch connection. I would treat this as an exceptional situation.
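To make the pattern under discussion concrete, here is a hedged sketch (the names are illustrative, not the PR's actual code): the watch thread holds the completion mutex for as long as it is streaming, so a lock_guard on the same mutex blocks the caller until the watch thread sees EOF or an error and exits:

```cpp
#include <atomic>
#include <functional>
#include <mutex>
#include <thread>

// Sketch: run `watch_body` (which would stream watch events until EOF or
// an error) on a thread that holds completion_mutex for its lifetime; the
// caller then blocks on the same mutex until the watch thread finishes.
inline void RunWatchAndAwaitCompletion(const std::function<void()>& watch_body) {
  std::mutex completion_mutex;
  std::atomic<bool> watch_started{false};
  std::thread watch([&]() {
    std::unique_lock<std::mutex> completion(completion_mutex);
    watch_started = true;
    watch_body();  // streams events; returns on EOF or error
  });              // `completion` is released when the thread body ends
  while (!watch_started) {}  // spin until the watch thread holds the lock
  std::lock_guard<std::mutex> await_completion(completion_mutex);  // blocks
  watch.join();
}
```

This also illustrates the deadlock concern above: if `watch_body` never returns (a hung connection with no timeout), the caller blocks forever.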
const std::string& ns = KubernetesNamespace();
// TODO: This is unreliable, see
// https://github.com/kubernetes/kubernetes/issues/52162.
const std::string pod_name = boost::asio::ip::host_name();
Should we also add a conditional for an environment variable, so that we have at least one more option before going the unreliable route?
Which environment variable? Do you mean $HOSTNAME? Or something set via the downward API?
Something via the downward API. With enough documentation it will likely be more reliable than the hostname.
I'm happy to entertain various options here, but let's open a bug and track this separately.
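One possible shape for that fallback, sketched with a hypothetical `POD_NAME` variable (a deployment would have to inject it via the downward API; nothing in this PR defines it):

```cpp
#include <cstdlib>
#include <string>

// Hypothetical sketch: "POD_NAME" is an assumed variable name that a
// deployment could populate via the Kubernetes downward API. The caller
// would pass boost::asio::ip::host_name() as the fallback, which issue
// kubernetes#52162 notes is unreliable.
std::string PodName(const std::string& fallback_host_name) {
  if (const char* env = std::getenv("POD_NAME")) {
    return std::string(env);
  }
  return fallback_host_name;
}
```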
    "/namespaces/" + ns + "/pods/" + pod_name);
const json::Object* pod = pod_response->As<json::Object>();
const json::Object* spec = pod->Get<json::Object>("spec");
current_node_ = spec->Get<json::String>("nodeName");
In theory, if hostname ended up pointing at an unscheduled pod (through user error), it's possible for nodeName to not be set. Crazy edge-case, but possible.
This code examines the pod that the metadata agent itself is running in. Since the binary is executing, I think it's safe to assume the pod is scheduled... :-)
I raised this concern mostly because using the hostname is not reliable in the first place and could lead to some strange behavior. If you feel that's fine for now, this SGTM.
LOG(ERROR) << "Watch type: " << type << " object: " << *pod;
if (type == "MODIFIED" || type == "ADDED") {
  std::vector<MetadataUpdater::ResourceMetadata> result_vector =
      GetPodAndContainerMetadata(pod, collected_at);
Why is it this callback's responsibility to generate the collected_at time? Could we move it to within the function?
It's not at all certain that the callback will be executed immediately after the watch notification. Threads get delayed, etc. I'd rather have the timestamp as close to the actual change time as possible.
I'm confused: at what point between collected_at being computed and GetPodAndContainerMetadata being invoked is a thread being spawned? It seems that all of this code IS the thread in question.
The thread can get preempted between the time the watch notification is received and the time the response is parsed. Relegating the computation of collected_at to the callback would increase the chances of that happening.
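The timing argument can be sketched as follows (illustrative names, not the PR's code): the timestamp is captured the moment the watch notification is seen and passed through, so a preempted or delayed handler cannot skew collected_at:

```cpp
#include <chrono>
#include <functional>

using Timestamp = std::chrono::time_point<std::chrono::system_clock>;

// Illustrative sketch: capture collected_at as early as possible -- at
// notification time -- and pass it along, rather than letting the
// (possibly delayed) handler compute its own timestamp.
inline void DispatchWatchEvent(const std::function<void(Timestamp)>& handler) {
  const Timestamp collected_at = std::chrono::system_clock::now();
  // ... response parsing and thread scheduling may be preempted here ...
  handler(collected_at);  // the handler reuses the early timestamp
}
```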
} catch (const KubernetesReader::QueryException& e) {
  // Already logged.
}
LOG(INFO) << "Watch thread (pods) exiting";
Where do we have a looping mechanism to ensure that we re-establish a connection should our connection to the master get cut?
Nowhere yet. Let's open a bug to improve the robustness of this code (probably need to guard it by a config option as well).
Will do, SGTM.
LOG(ERROR) << "Watch type: " << type << " object: " << *node;
if (type == "MODIFIED" || type == "ADDED") {
  std::vector<MetadataUpdater::ResourceMetadata> result_vector;
  result_vector.emplace_back(GetNodeMetadata(node->Clone(), collected_at));
Same concern here for collected_at.
Same response.
try {
  // TODO: There seems to be a Kubernetes API bug with watch=true.
  WatchMaster(std::string(kKubernetesEndpointPath) + "/watch/nodes/"
We should resolve this before merging, or use /watch/ for pods as well if possible.
Hmm, this works just fine for pods. Frankly, I don't see it as a merge blocker. We should absolutely open a bug and investigate at leisure.
Will do, SGTM.
igorpeshansky left a comment:
Responded. PTAL.
bmoyles0117 left a comment:
LGTM, last thing I would request is to remove all of the disabled comments. 🥇
const std::string crlf("\r\n");
auto begin = std::begin(range);
auto end = std::end(range);
const size_t available = std::distance(begin, end);
Not sure if it's just me, but the naming of available and remaining_bytes threw me off a bit. I thought available is the available size in this current chunk while remaining_bytes is the total remaining bytes that we still need to read. After a closer look, I think it's in fact the other way around.
available is the available bytes in the current notification, which may include the remainder of this chunk and the start of the next one. Suggestions welcome.
Maybe add a comment?
// The available bytes in the current notification, which may include the remainder of this chunk and the start of the next one.
Done. Also renamed remaining_bytes_ to remaining_chunk_bytes_.
src/kubernetes.cc (outdated)
CompleteChunk();

// Process the next batch.
while (remaining_bytes_ == 0 && std::begin(pos) != std::end(pos)) {
Is this while loop required only because we might run into blank lines? It took me a while to digest why there is a loop here while ReadNextChunk and CompleteChunk don't have loops (because the whole thing will be triggered repeatedly).
Yes, this is a callback that collects data. It may get partial data, or it may get data that's on the boundary between two chunks.
The while loop is because we may get dud (empty) chunks. It basically says: while there's still data available and we haven't started a new chunk, try to start one. Eventually it either runs out of data (std::begin(pos) == std::end(pos)) or starts a non-empty chunk (remaining_bytes_ != 0).
Now that you point it out, there's a way to split this out into a streaming callback that only handles the chunked encoding and then invokes another callback on each full chunk. The former could even be contributed upstream. Since this won't change the basic functionality, I would defer this refactoring until later. We should open a bug, though.
Makes sense. I created b/70889561 for the potential refactoring.
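As a sketch of the refactoring idea tracked in that bug (all names here are hypothetical, not code from this PR): a decoder that understands only the chunked framing, buffers partial input, skips dud and blank lines, and hands each complete chunk body to a separate callback.

```cpp
#include <cstddef>
#include <functional>
#include <sstream>
#include <string>

// Hypothetical ChunkedDecoder: handles only the HTTP chunked framing and
// invokes on_chunk_ once per complete chunk body, regardless of how the
// input is sliced across Feed() calls.
class ChunkedDecoder {
 public:
  explicit ChunkedDecoder(std::function<void(const std::string&)> on_chunk)
      : on_chunk_(std::move(on_chunk)) {}

  // Feed an arbitrary slice of the stream; chunk boundaries may fall
  // anywhere, and empty (dud) chunks are skipped, mirroring the while
  // loop in the original callback.
  void Feed(const std::string& data) {
    buffer_ += data;
    for (;;) {
      if (remaining_chunk_bytes_ == 0) {
        // Look for the next "<hex-size>\r\n" chunk header.
        const size_t eol = buffer_.find("\r\n");
        if (eol == std::string::npos) return;  // need more data
        if (eol == 0) {  // blank line between chunks: skip it
          buffer_.erase(0, 2);
          continue;
        }
        std::stringstream header(buffer_.substr(0, eol));
        header >> std::hex >> remaining_chunk_bytes_;
        buffer_.erase(0, eol + 2);
        if (remaining_chunk_bytes_ == 0) continue;  // dud (empty) chunk
      }
      if (buffer_.size() < remaining_chunk_bytes_) return;  // partial chunk
      on_chunk_(buffer_.substr(0, remaining_chunk_bytes_));
      buffer_.erase(0, remaining_chunk_bytes_);
      remaining_chunk_bytes_ = 0;
    }
  }

 private:
  std::function<void(const std::string&)> on_chunk_;
  std::string buffer_;
  size_t remaining_chunk_bytes_ = 0;
};
```

The watch-notification logic would then live entirely in the per-chunk callback, and this framing layer could, as noted above, potentially be contributed upstream.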
src/kubernetes.cc (outdated)
}
std::string line(begin, iter);
iter = std::next(iter, crlf.size());
if (line.empty()) {
Why don't we return early when the line is empty? Something like:
const std::string crlf("\r\n");
auto begin = std::begin(range);
auto end = std::end(range);
auto iter = std::search(begin, end, crlf.begin(), crlf.end());
if (iter == begin) {
  // Blank lines are fine, just skip them.
#ifdef VERBOSE
  LOG(DEBUG) << "Skipping blank line within chunked encoding;"
             << " remaining data '" << std::string(iter, end) << "'";
#endif
  return boost::iterator_range<const char*>(begin, end);
} else if (iter == end) {
  LOG(ERROR) << "Invalid chunked encoding: '"
             << std::string(begin, end)
             << "'";
  return boost::iterator_range<const char*>(begin, end);
}
std::string line(begin, iter);
iter = std::next(iter, crlf.size());
//#ifdef VERBOSE
//  LOG(DEBUG) << "Line: '" << line << "'";
//#endif
std::stringstream stream(line);
stream >> std::hex >> remaining_bytes_;
return boost::iterator_range<const char*>(iter, end);
Your proposed code is not quite equivalent. You're correct that line.empty() is equivalent to iter == begin (before the iter reassignment). However, in the case of empty line we return boost::iterator_range<const char*>(iter, end) after the reassignment, so we're skipping the crlf sequence.
I could do it the way you're proposing, but then I would have to replicate the code that skips crlf. WDYT?
Oops, right. I was missing the iter = std::next(iter, crlf.size()); part. I'd still vote for keeping the edge cases (invalid chunk and blank line) together though.
Yes, you're right, it does read better. Changed.
igorpeshansky left a comment:
Addressed feedback. PTAL.
qingling128 left a comment:
LGTM.
Please address the conflicts though.
json::value node_raw_metadata = json::object({
  {"blobs", json::object({
    {"association", json::object({
      {"version", json::string(kRawContentVersion)},
What version does kRawContentVersion refer to?
It's an arbitrary constant we (jointly with the Resource Metadata API) picked to represent the version of the envelope object schema.
    std::string(kKubernetesEndpointPath) + "/nodes/" + node_name);
Timestamp collected_at = std::chrono::system_clock::now();
// TODO: find is_deleted.
//const json::Object* status = pod->Get<json::Object>("status");
Drop the commented line?
Also, what does the TODO mean? Should it be "compute" is_deleted?
The commented line is a starting point for finding is_deleted (if it's anywhere, it's in the pod status). The point here is that we don't really know how to compute is_deleted, so this is a compromise between define is_deleted and compute is_deleted. :-)
Also, this seems to be an unrelated change from the earlier refactoring that accidentally cropped up in this PR. Let me rebase the commits off master.
    docker_prefix_end);
// TODO: find is_deleted.
//const json::Object* state = container->Get<json::Object>("state");
bool is_deleted = false;
ditto.
Ditto.
igorpeshansky left a comment:
Responded to your comments. PTAL.
supriyagarg left a comment:
Ah - this PR looks much shorter now :)
Yeah, rebasing does help. Sorry about the mess.
Reviewers, please let me know if this needs to be squashed.