Refine bus monitor #88
```diff
@@ -39,14 +39,27 @@ type Bus struct {

 // BusSpec (what the user wants) for a bus
 type BusSpec struct {

-	// Parameters configuration params for the bus
-	Parameters *[]Parameter `json:"parameters,omitempty"`
+	// Parameters exposed by the bus for channels and subscriptions
+	Parameters *BusParameters `json:"parameters,omitempty"`

+	// Provisioner container definition to manage channels on the bus.
+	Provisioner *kapi.Container `json:"provisioner,omitempty"`

+	// Dispatcher container definition to use for the bus data plane.
+	Dispatcher kapi.Container `json:"dispatcher"`

+	// Volumes to be mounted inside the provisioner or dispatcher containers
+	Volumes *[]kapi.Volume `json:"volumes,omitempty"`
 }

+// BusParameters parameters exposed by the bus
+type BusParameters struct {

+	// Channel configuration params for channels on the bus
+	Channel *[]Parameter `json:"channel,omitempty"`

+	// Subscription configuration params for subscriptions on the bus
+	Subscription *[]Parameter `json:"subscription,omitempty"`
+}

 // BusStatus (computed) for a bus
```

**Member** (on the `Volumes` field):

Looking at this now, I'm wondering whether it makes sense to have a single bus controller that launches long-running Deployments for provisioning and dispatching, or to have multiple bus controllers that each reconcile a subset of the Buses in the system. In particular, if there is a bus that natively implements an HTTP event transport but uses (for example) a StatefulSet rather than a Deployment, requiring a container -> Deployment in the Bus spec would introduce an extra forwarding hop to no benefit. (Similarly, if we could stamp auth on the outgoing traffic to Google PubSub using Istio, it's possible we could avoid needing to run any Delivery pods in-cluster.) Likewise, if there is a Provisioner (e.g. the PubSub provisioner) that only needs to run a few small commands to do the provisioning, it seems more efficient to run a single controller shared across multiple buses (this is similar to OpenWhisk provisioning, IIRC). If we moved the Provision/Dispatch to controller-specific configuration, I think this would look like… WDYT?
|
**Member** (on `BusStatus`):

We should probably include at least some basic status, such as "provisioning completed", for the Bus.
**Contributor (Author):**

Yes. I opened #103 to track and assigned it to myself.
Perhaps it might be better to just use a PodSpec if you need volumes as well? Just worried that we'll end up there, but it won't look like a PodSpec.
Yeah, I considered using a PodSpec but didn't want to expose the service account, since the bus needs to act like a controller with access to the k8s API server.

Perhaps I'm just being overly paranoid in attempting to apply least-privilege access control. Instead we could document that, from an access-control perspective, applying a Bus is equivalent to applying a Pod/Deployment.

Another option would be to use a PodSpec and then overwrite the service account with the account we provide, but that may be surprising to users.
I guess I'm just curious where, in your model, it's controlled who the provisioner / dispatcher runs as. Same as the launcher? If so, do you foresee more flexibility in being able to specify a different service account to run as? If it runs with a different service account, you can then possibly control access more tightly with RBAC rules.
The bus controller (which is already merged) will create a service account for each bus and bind it to a preconfigured cluster role.