This repository was archived by the owner on Aug 19, 2019. It is now read-only.

Conversation

@supriyagarg
Contributor

Will add unit tests in the same PR. Would like your feedback on the code first.

Contributor

@igorpeshansky igorpeshansky left a comment

The basic implementation structure looks good.

return std::move(result);
}


Contributor

No need for an extra blank line here.

Contributor Author

done


std::vector<json::value> KubernetesReader::GetServiceList(
const std::string cluster_name, const std::string location
) const throw(json::Exception) {
Contributor

It's better to format this as:

std::vector<json::value> KubernetesReader::GetServiceList(
    const std::string& cluster_name, const std::string& location) const
    throw(json::Exception) {

Contributor Author

done



std::vector<json::value> KubernetesReader::GetServiceList(
const std::string cluster_name, const std::string location
Contributor

const std::string& in both cases.

Contributor Author

done

) const throw(json::Exception) {
std::lock_guard<std::recursive_mutex> lock(service_mutex_);
std::vector<json::value> service_list;
for (auto const& service_it : service_to_metadata_) {
Contributor

const auto& is more conventional...

Contributor Author

done

src/kubernetes.h Outdated
// A memoized map from an encoded owner reference to the owner object.
mutable std::map<std::string, json::value> owners_;
// Mutex for the service related caches.
mutable std::recursive_mutex service_mutex_;
Contributor

Do you expect to ever call a function while holding this lock that would need to re-acquire the lock? If not, you can just make this an std::mutex instead.

Contributor Author

changed - it is not being re-acquired right now.
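A minimal sketch of the distinction behind this suggestion (the names here are illustrative, not the agent's actual code): a `std::recursive_mutex` may be re-locked by the thread that already holds it, whereas re-locking a plain `std::mutex` from the same thread is undefined behavior (typically a deadlock). Once no code path re-acquires the lock, `std::mutex` is the cheaper, more conventional choice.

```cpp
#include <mutex>

// Hypothetical cache mutex, guarded the way service_mutex_ is above.
std::recursive_mutex cache_mutex;

// Takes the lock twice on the same thread before returning n. This is legal
// only because the mutex is recursive; with a plain std::mutex the inner
// lock_guard would be undefined behavior (typically a deadlock).
int LockedTwice(int n) {
  std::lock_guard<std::recursive_mutex> outer(cache_mutex);
  std::lock_guard<std::recursive_mutex> inner(cache_mutex);
  return n;
}
```

Since nothing in the final code re-enters the locked section, the recursive capability is unused and the type can be narrowed to `std::mutex`.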


// TODO: using a temporary did not work here.
std::vector<MetadataUpdater::ResourceMetadata> result_vector;
result_vector.emplace_back(GetClusterMetadata(collected_at, is_deleted));
Contributor

Careful — this is_deleted refers to the service, right? You don't want to notify the API that the whole cluster has been deleted just because one service was. I would suggest just not having an is_deleted parameter in GetClusterMetadata.

Contributor Author

updated

WatchMaster(
"Service",
std::string(kKubernetesEndpointPath) + "/watch/services/",
[=](const json::Object* service, Timestamp collected_at, bool is_deleted) {
Contributor

Fit in 80 columns?

Contributor Author

fixed - it was too long


// TODO: using a temporary did not work here.
std::vector<MetadataUpdater::ResourceMetadata> result_vector;
result_vector.emplace_back(GetClusterMetadata(collected_at, is_deleted));
Contributor

Same here — this is_deleted refers to the endpoints object.

Contributor Author

updated

MetadataUpdater::UpdateCallback callback,
const json::Object* endpoints, Timestamp collected_at, bool is_deleted)
throw(json::Exception) {
UpdateServiceToPodsCache(endpoints, is_deleted);
Contributor

What does it mean for an endpoint to be deleted? Can one be deleted without the corresponding service being deleted, or are they always correlated?

Contributor Author

the two are correlated - endpoints is deleted when the service is deleted.

pod_watch_thread_ = std::thread([=]() {
reader_.WatchPods(watched_node, cb);
});
if (config().KubernetesClusterLevelMetadata()) {
Contributor

I wonder if it makes sense to define an extra option just for service metadata? Or will we always want to retrieve it along with unscheduled pods?

Contributor Author

@supriyagarg supriyagarg Mar 31, 2018

Added a separate option. I wonder how the two flags interact - i.e. if KubernetesClusterLevelMetadata is false, does it make sense for KubernetesServiceMetadata to be true?

Contributor

Well, KubernetesClusterLevelMetadata means it's running the separate instance at the cluster level, rather than the per-node one. There are many things we can watch at cluster level. KubernetesServiceMetadata controls specifically whether we watch services/endpoints at cluster level. So KubernetesServiceMetadata is ignored when KubernetesClusterLevelMetadata is false, but it can certainly be set to true. See my other comment about guarding the watch threads.

@supriyagarg supriyagarg force-pushed the service_metadata_streaming branch from e0ba7df to 202503c Compare March 31, 2018 16:01
auto endpoints_it = service_to_pods_.find(service_key);
const std::vector<std::string>& pod_names =
(endpoints_it != service_to_pods_.end()) ? endpoints_it->second
: kNoPods;
Contributor

Something's off with the indentation here...

Contributor Author

aligned the "?" and ":"

src/kubernetes.h Outdated
void UpdateServiceToPodsCache(
const json::Object* endpoints, bool is_deleted) throw(json::Exception);

// Const data.
Contributor

This isn't helpful. How about // An empty vector value for endpoints that have no pods.?

Contributor Author

done

k8s_cluster,
#ifdef ENABLE_KUBERNETES_METADATA
MetadataStore::Metadata(config_.MetadataIngestionRawContentVersion(),
/*is_deleted=*/ false, created_at, collected_at,
Contributor

@igorpeshansky igorpeshansky Mar 31, 2018

No space after comment (so it reads /*is_deleted=*/false).

Contributor Author

done

const json::Object* metadata = service->Get<json::Object>("metadata");
const std::string namespace_name = metadata->Get<json::String>("namespace");
const std::string service_name = metadata->Get<json::String>("name");
const std::string encoded_ref = boost::algorithm::join(
Contributor

D'oh. Even better! Yes, please!

} else {
auto it_inserted =
service_to_pods_.emplace(encoded_ref, std::vector<std::string>());
service_to_pods_.at(encoded_ref) = pod_names;
Contributor

Let's avoid the extra lookup, and use it_inserted.first->second instead.

Contributor Author

done
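The suggestion avoids a second tree lookup: `emplace` already returns a `{iterator, inserted}` pair, so the element can be written through the iterator whether or not the insertion happened. A self-contained sketch of the idiom (the cache and key names are stand-ins for the ones in the diff):

```cpp
#include <map>
#include <string>
#include <vector>

// Illustrative stand-in for the service_to_pods_ cache discussed above.
std::map<std::string, std::vector<std::string>> service_to_pods;

void StorePods(const std::string& encoded_ref,
               const std::vector<std::string>& pod_names) {
  // emplace returns {iterator, inserted}. Writing through the iterator
  // covers both the insert and the update case, and avoids the extra
  // lookup that service_to_pods.at(encoded_ref) would perform.
  auto it_inserted =
      service_to_pods.emplace(encoded_ref, std::vector<std::string>());
  it_inserted.first->second = pod_names;
}
```

When the key already exists, `emplace` leaves the map untouched and still hands back an iterator to the existing element, so the assignment overwrites the old pod list in place.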

src/kubernetes.h Outdated
// Return the cluster metadata based on the cached values for
// service_to_metadata_ and service_to_pods_.
MetadataUpdater::ResourceMetadata GetClusterMetadata(
Timestamp collected_at) const throw(json::Exception);
Contributor

This can be:

   MetadataUpdater::ResourceMetadata GetClusterMetadata(Timestamp collected_at)
       const throw(json::Exception);

Contributor Author

done

kubernetes_cluster_level_metadata_(
kKubernetesDefaultClusterLevelMetadata),
kubernetes_service_metadata_(
kKubernetesDefaultServiceMetadata),
Contributor

Fits on one line.

Contributor Author

done

pod_watch_thread_ = std::thread([=]() {
reader_.WatchPods(watched_node, cb);
});
if (config().KubernetesServiceMetadata()) {
Contributor

This actually needs both config().KubernetesClusterLevelMetadata() && config().KubernetesServiceMetadata().

Contributor Author

done

if (service_it == service_to_pods_.end()) {
service_to_pods_.emplace(encoded_ref, pod_names);
} else {
service_it->second = pod_names;
Contributor

Ah, you're right. I forgot that we have to re-construct the k8s_cluster resource each time. Oh, well.
I am wondering whether the cost of copying large vectors around would be comparable with the cost of constructing the vector in-place, but we can defer it to later (or, optionally, add a TODO to investigate).
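One possible shape for the deferred investigation mentioned here: when the caller no longer needs its vector, moving it into the cache replaces an element-by-element copy with a buffer transfer. This is only a hedged sketch under that assumption (the function and cache names are hypothetical, not the PR's code):

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

std::map<std::string, std::vector<std::string>> service_to_pods;

// Moving transfers the vector's heap buffer instead of copying each string.
// pod_names is left in a valid but unspecified state, so the caller must
// not rely on its contents afterwards.
void StorePodsByMove(const std::string& encoded_ref,
                     std::vector<std::string>&& pod_names) {
  service_to_pods[encoded_ref] = std::move(pod_names);
}
```

Whether this pays off depends on whether the call sites can actually give up ownership; if they still need the vector, the copy is unavoidable, which is presumably why the thread defers it to a TODO.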

pod_watch_thread_ = std::thread([=]() {
reader_.WatchPods(watched_node, cb);
});
if (config().KubernetesClusterLevelMetadata()) {
Contributor

Well, KubernetesClusterLevelMetadata means it's running the separate instance at the cluster level, rather than the per-node one. There are many things we can watch at cluster level. KubernetesServiceMetadata controls specifically whether we watch services/endpoints at cluster level. So KubernetesServiceMetadata is ignored when KubernetesClusterLevelMetadata is false, but it can certainly be set to true. See my other comment about guarding the watch threads.

src/kubernetes.h Outdated
mutable std::mutex service_mutex_;
// Map from service key to service metadata. This map is built based on the
// response from WatchServices.
mutable std::map<std::pair<std::string, std::string>,
Contributor

[Optional] You can define a private helper type:

  using ServiceKey = std::pair<std::string, std::string>;
  // Map from service key to service metadata. This map is built based on the
  // response from WatchServices.
  mutable std::map<ServiceKey, json::value> service_to_metadata_;
  // Map from service key to names of pods in the service. This map is built
  // based on the response from WatchEndpoints.
  mutable std::map<ServiceKey, std::vector<std::string>> service_to_pods_;

Contributor Author

done
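For reference, `std::pair` works as a map key out of the box because it compares lexicographically (namespace name first, then service name), which is what keeps same-named services in different namespaces distinct. A small sketch with made-up entries:

```cpp
#include <map>
#include <string>
#include <utility>

// ServiceKey as suggested above: {namespace name, service name}.
using ServiceKey = std::pair<std::string, std::string>;

std::map<ServiceKey, std::string> service_to_metadata;

// Two services may share a name across namespaces; the pair key keeps
// their cache entries separate.
void Register(const std::string& ns, const std::string& name,
              const std::string& meta) {
  service_to_metadata[ServiceKey(ns, name)] = meta;
}
```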

version_to_kind_to_name_;
// A memoized map from an encoded owner reference to the owner object.
mutable std::map<std::string, json::value> owners_;
// Mutex for the service related caches.
Contributor

Let's add a blank line before this group of variables.

Contributor Author

done

const json::Object* metadata = service->Get<json::Object>("metadata");
const std::string namespace_name = metadata->Get<json::String>("namespace");
const std::string service_name = metadata->Get<json::String>("name");
const std::pair<std::string, std::string> service_key (
Contributor

No space before the ('.

Contributor Author

done

const std::string namespace_name = metadata->Get<json::String>("namespace");
// Endpoints name is same as the matching service name.
const std::string service_name = metadata->Get<json::String>("name");
const std::pair<std::string, std::string> service_key (
Contributor

No space before the ('.

Contributor Author

done

constexpr const char kKubernetesDefaultNodeName[] = "";
constexpr const bool kKubernetesDefaultUseWatch = true;
constexpr const bool kKubernetesDefaultClusterLevelMetadata = false;
constexpr const bool kKubernetesDefaultServiceMetadata = false;
Contributor

[Optional] Now that you have an additional guard, this can default to true...

Contributor Author

since the cluster level flag is already false, prefer to have it be the same.

Contributor

This would mean that users would have to enable cluster-level watches and the service watches. I would have thought they'd want services by default if they're running at cluster level...
Note: I just turned KubernetesUseWatch off by default, so that's an additional guard, too. So, back to the original question: if a customer runs the agent and explicitly turns on both KubernetesUseWatch and KubernetesClusterLevelMetadata, should they expect to see service metadata? I would guess yes.

Contributor Author

done
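The flag interaction this exchange settles on amounts to a single guard; here is a hedged sketch (the free function is made up for illustration, only the config names come from the thread):

```cpp
// Watch services/endpoints only when watch mode is on AND the agent is the
// cluster-level instance AND service metadata is enabled. Per the exchange
// above, KubernetesServiceMetadata is ignored unless the other two hold.
bool ShouldWatchServices(bool use_watch, bool cluster_level_metadata,
                         bool service_metadata) {
  return use_watch && cluster_level_metadata && service_metadata;
}
```

With the default for the service flag flipped to true as agreed above, a user who explicitly enables KubernetesUseWatch and KubernetesClusterLevelMetadata gets service metadata without setting a third flag.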

constexpr const char kKubernetesDefaultNodeName[] = "";
constexpr const bool kKubernetesDefaultUseWatch = true;
constexpr const bool kKubernetesDefaultClusterLevelMetadata = false;
constexpr const bool kKubernetesDefaultServiceMetadata = false;
Contributor

This would mean that users would have to enable cluster-level watches and the service watches. I would have thought they'd want services by default if they're running at cluster level...
Note: I just turned KubernetesUseWatch off by default, so that's an additional guard, too. So, back to the original question: if a customer runs the agent and explicitly turns on both KubernetesUseWatch and KubernetesClusterLevelMetadata, should they expect to see service metadata? I would guess yes.

src/kubernetes.h Outdated
// Mutex for the service related caches.
mutable std::mutex service_mutex_;

using ServiceKey = std::pair<std::string, std::string>;
Contributor

Let's just put this type before the mutex and remove the blank line between them (i.e., have one blank line that delimits this type and the data from the previous data fields).

Contributor Author

done

Contributor

@igorpeshansky igorpeshansky left a comment

LGTM :shipit:

Contributor

@igorpeshansky igorpeshansky left a comment

Reviewing tests.


void UpdateServiceToMetadataCache(
KubernetesReader& reader, const json::Object* service,
bool is_deleted)
Contributor

Fits on the previous line.

Contributor Author

done


void UpdateServiceToPodsCache(
KubernetesReader& reader, const json::Object* endpoints,
bool is_deleted)
Contributor

Fits on the previous line.

Contributor Author

done

}

void UpdateServiceToMetadataCache(
KubernetesReader& reader, const json::Object* service,
Contributor

Let's prefer pointers to non-const references (style guide). Also in UpdateServiceToPodsCache.

Contributor Author

done

}

TEST_F(KubernetesTest, GetClusterMetadataEmpty) {
Configuration config(std::stringstream(
Contributor

std::istringstream everywhere.

Contributor Author

done

Environment environment(config);
KubernetesReader reader(config, nullptr); // Don't need HealthChecker.
const auto m = GetClusterMetadata(reader, Timestamp());
EXPECT_EQ(0, m.ids().size());
Contributor

EXPECT_TRUE(m.ids().empty()). Also below.

Contributor Author

done

{"location", "TestClusterLocation"},
}), m.resource());
EXPECT_EQ("TestVersion", m.metadata().version);
EXPECT_EQ(false, m.metadata().is_deleted);
Contributor

EXPECT_FALSE(m.metadata().is_deleted). Also below.

Contributor Author

done

{"metadata", json::object({
{"name", json::string("testname")},
{"namespace", json::string("testnamespace")},
})}
Contributor

Let's keep the trailing comma.

Contributor Author

done

{"metadata", json::object({
{"name", json::string("testname")},
{"namespace", json::string("testnamespace")},
})}
Contributor

Trailing comma.

Contributor Author

done

json::object({
{"api", json::object({
{"pods", json::array({
pod_mr.ToJSON(),
Contributor

2-space indent.

Contributor Author

done

});
KubernetesReader reader(config, nullptr); // Don't need HealthChecker.
UpdateServiceToMetadataCache(
reader, service->As<json::Object>(), /*is_deleted=*/false);
Contributor

Do we need to also test deleted services in a separate test?

Contributor Author

added a separate test.

@supriyagarg supriyagarg force-pushed the service_metadata_streaming branch from ce9b650 to 4865a82 Compare April 1, 2018 04:22
UpdateServiceToMetadataCache(
&reader, service->As<json::Object>(), /*is_deleted=*/false);
const auto m = GetClusterMetadata(reader, Timestamp());
EXPECT_EQ(0, m.ids().size());
Contributor

EXPECT_TRUE(m.ids().empty()).

Contributor Author

done

Contributor

@igorpeshansky igorpeshansky left a comment

LGTM :shipit:

std::vector<json::value> service_list;
for (const auto& service_it : service_to_metadata_) {
const std::string& service_key = service_it.first;
const std::string namespace_name =
Contributor

Adding a comment that explains what we're extracting would be super helpful.

Contributor Author

Done - both here, and a line in the header file.

EXPECT_EQ("", config.KubernetesClusterLocation());
EXPECT_EQ("", config.KubernetesNodeName());
EXPECT_EQ(true, config.KubernetesUseWatch());
EXPECT_EQ(false, config.KubernetesClusterLevelMetadata());
Contributor

Should we rename this to KubernetesClusterMetadata? I'm not sure what Level is implying.

Contributor Author

This flag is not new in this PR - it is basically set to True if users want cluster level watch for unscheduled pods + service metadata.

The one added here is below: KubernetesServiceMetadata(). I was just updating the unittest to take care of a missing default flag.

I agree that we need documentation around their usage.

Contributor

"Cluster-level" is the opposite of "node-level". It includes, e.g., unscheduled pods, which are not part of the cluster resource metadata.

}

void KubernetesReader::UpdateServiceToPodsCache(
const json::Object* endpoints, bool is_deleted) throw(json::Exception) {
Contributor

Endpoints represents a single endpoint, could we rename this to be singular throughout?

Contributor Author

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#endpoints-v1-core

The resource is called Endpoints, not Endpoint, since it lists all endpoints used by a single service.

Added comments to this method in the header file.

}
}

void KubernetesReader::UpdateServiceToPodsCache(
Contributor

@bmoyles0117 bmoyles0117 Apr 1, 2018

I'm a little concerned calling this UpdateServiceToPodsCache, a service itself doesn't have pods, the endpoint does. I may have missed a lot of the conversation, but I'm of the opinion that we should address endpoints as endpoints internally, even if we end up mapping it to a service at the end.

Contributor Author

It really is the service that has the pods - Endpoints is just an API that provides the mapping. According to the docs:
Kubernetes offers a simple Endpoints API that is updated whenever the set of Pods in a Service changes.

const json::value& service_metadata = service_it.second;
auto endpoints_it = service_to_pods_.find(service_key);
const std::vector<std::string>& pod_names =
(endpoints_it != service_to_pods_.end()) ? endpoints_it->second
Contributor

This is an extension of what caused my confusion, I feel like it would be clearer to retain "endpoints_to_pods_", with a shared service key, so that this line makes sense when we read it here.

Contributor Author

see the other comment - the pods really belong to the service, so I prefer to keep the name.

Hope the renaming of the local iterators helps - at this point the source matters less, we should just be referring to the caches with what they contain.

Contributor

@igorpeshansky igorpeshansky left a comment

Looks even better!

EXPECT_EQ("", config.KubernetesClusterLocation());
EXPECT_EQ("", config.KubernetesNodeName());
EXPECT_EQ(true, config.KubernetesUseWatch());
EXPECT_EQ(false, config.KubernetesClusterLevelMetadata());
Contributor

"Cluster-level" is the opposite of "node-level". It includes, e.g., unscheduled pods, which are not part of the cluster resource metadata.

src/kubernetes.h Outdated
// A memoized map from an encoded owner reference to the owner object.
mutable std::map<std::string, json::value> owners_;

// Unique identifier of a service in a cluster, based on the namespace name
Contributor

How about: // ServiceKey is a pair of the namespace name and the service name that uniquely identifies a service in a cluster.?

Contributor Author

done

std::lock_guard<std::mutex> lock(service_mutex_);
std::vector<json::value> service_list;
for (const auto& metadata_it : service_to_metadata_) {
// service_key is a std::pair containing (namespace_name, service_name).
Contributor

This seems really redundant... You could have said something like: // The namespace name is the first component of the service key., but even that basically replicates the code in line 464.
Maybe just // A service key consists of a namespace name and a service name.?

Contributor Author

done

Contributor

@igorpeshansky igorpeshansky left a comment

LGTM :shipit:

@supriyagarg supriyagarg changed the base branch from igorp-upstream-watch-chunk to master April 2, 2018 21:27
@supriyagarg supriyagarg force-pushed the service_metadata_streaming branch from 7240c18 to 1dab2f2 Compare April 2, 2018 22:05
Contributor

@bmoyles0117 bmoyles0117 left a comment

LGTM

@igorpeshansky
Contributor

Looks like you need to rebase again.

@supriyagarg supriyagarg force-pushed the service_metadata_streaming branch from 1dab2f2 to 2a7223c Compare April 3, 2018 22:30
@supriyagarg
Contributor Author

Rebased and tested locally.

@igorpeshansky igorpeshansky merged commit fe316cb into Stackdriver:master Apr 3, 2018
@supriyagarg supriyagarg deleted the service_metadata_streaming branch April 3, 2018 23:23