add logging stack #21
On another note, we could consider using Consul for service discovery and as a distributed KV store, since it is well suited to both.
On the same other note: the current solution, i.e. using Traefik and manual DNS config for service "discovery", was picked because Nomad's built-in service discovery doesn't support discovering services in other namespaces. When I tried Consul, I tried to use its service mesh, which I did not get working, so I threw out Consul entirely. Using it for service discovery but not for the service mesh might be nice, though.
This is all deployed and working. Still todo:
Poizon7 left a comment:
lgtm
Adds a logging/monitoring stack: Grafana, Loki, Prometheus, and Vector.
Some implementation details:
We run Vector as a Nomad job; it mounts the local Docker socket to read container logs and ships them to Loki.
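As a rough sketch of what that Vector config could look like (the Loki endpoint and label names here are assumptions, not the deployed values):

```yaml
# vector.yaml -- sketch only
sources:
  docker:
    type: docker_logs
    # Read container logs via the Docker socket mounted into the job
    docker_host: unix:///var/run/docker.sock

sinks:
  loki:
    type: loki
    inputs: [docker]
    # Hypothetical Loki address; substitute the real service endpoint
    endpoint: http://loki.dsekt.internal:3100
    encoding:
      codec: json
    labels:
      # Vector's docker_logs source sets container_name on each event
      container: "{{ container_name }}"
```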
Prometheus can use Nomad service discovery to find scrape targets. For example, you could add a new endpoint to your service that exposes metrics (on a different port than the public one, of course) and export your application's metrics from it.
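For illustration, a sketch of the Prometheus side using its built-in Nomad service discovery (`nomad_sd_configs`); the Nomad address and the `metrics` tag convention are assumptions:

```yaml
scrape_configs:
  - job_name: nomad-services
    nomad_sd_configs:
      # Hypothetical Nomad API address
      - server: http://nomad.dsekt.internal:4646
    relabel_configs:
      # Keep only services tagged "metrics" (tag name is an assumption);
      # tags are joined with "," in __meta_nomad_tags
      - source_labels: [__meta_nomad_tags]
        regex: (.*,)?metrics(,.*)?
        action: keep
      # Use the Nomad service name as the "service" label
      - source_labels: [__meta_nomad_service]
        target_label: service
```

Registering the metrics port as its own Nomad service with that tag would then be enough for Prometheus to pick it up.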
For service discovery for node metrics and similar, we can use DNS-based service discovery: Prometheus queries _node._tcp.monitoring.dsekt.internal and gets back SRV records with the IPs and ports to scrape (see the sketch after the note below).

Note: the Grafana, Loki, and Prometheus jobs are already deployed and working.
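A minimal sketch of that DNS-based setup in Prometheus (`dns_sd_configs` with an SRV query; the job name and refresh interval are arbitrary choices):

```yaml
scrape_configs:
  - job_name: node
    dns_sd_configs:
      # The SRV lookup returns the IP:port pairs Prometheus should scrape
      - names: ['_node._tcp.monitoring.dsekt.internal']
        type: SRV
        refresh_interval: 30s
```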
PS: I have no idea how we can effectively deploy/test this.