upgrade to k8s.io/api/autoscaling/v2beta2
#245
Conversation
Upgrade from `k8s.io/api/autoscaling/v1` to `k8s.io/api/autoscaling/v2beta2`.
@nlu90 @fantapsody @tuteng Please help review this PR when you have time, thanks.
It seems we'll need user input to customize the autoscaling metrics and behavior, but it might not be easy for users to determine a proper policy. Could we build more best practices into the controller, e.g. which metrics and behavior should be used in general or for certain kinds of workloads (CPU/IO/network-intensive), to make the operator easier to use?
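For reference, this is roughly what metrics and behavior look like on a raw `autoscaling/v2beta2` HorizontalPodAutoscaler. The field names come from the upstream Kubernetes API; the target object and thresholds are illustrative only:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: example-function-hpa        # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: example-function           # illustrative target
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80     # scale when avg CPU exceeds 80%
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # wait 5m before scaling down
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
```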
@fantapsody thanks for the comment. We can provide some general workload presets to make function-mesh more user-friendly. But if we want to support the custom metrics exposed by a specific Pulsar IO connector, the harder way is still needed as well. So what do you think if we keep the current design, but provide an extra config like
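A hedged sketch of how that per-function customization might look in a Function CR, assuming the `autoScalingMetrics` and `autoScalingBehavior` fields under the pod policy take the same shape as the corresponding `v2beta2` HPA fields (the metric name `pulsar_source_received_total` is purely illustrative, standing in for a connector-provided custom metric):

```yaml
apiVersion: compute.functionmesh.io/v1alpha1
kind: Function
metadata:
  name: example-function             # illustrative name
spec:
  maxReplicas: 8
  pod:
    autoScalingMetrics:
      - type: Pods
        pods:
          metric:
            name: pulsar_source_received_total   # hypothetical connector metric
          target:
            type: AverageValue
            averageValue: "1000"
    autoScalingBehavior:
      scaleUp:
        stabilizationWindowSeconds: 60
```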
Sounds good. |
Force-pushed from 6dcae21 to ecd3f0a.
@fantapsody just added builtin rules and tests, PTAL when you have time, thanks.
Summary of changes:
- Upgrade `autoscaling/v1` to `autoscaling/v2beta2`, keeping `maxReplicas` backward compatibility.
- Add `autoScalingMetrics` and `autoScalingBehavior` to `PodPolicy` to allow users to customize `HorizontalPodAutoscalerSpec`.
- Add `builtinAutoscaler` to support builtin HPA rules.
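As a usage sketch, the builtin rules would likely be selected by name instead of spelling out a full metrics/behavior spec. The rule name below is an illustrative assumption, not confirmed by this thread:

```yaml
apiVersion: compute.functionmesh.io/v1alpha1
kind: Function
metadata:
  name: example-function             # illustrative name
spec:
  maxReplicas: 5
  pod:
    builtinAutoscaler:
      - AverageUtilizationCPUPercent80   # hypothetical builtin rule name
```

The idea is that most users pick a preset like this, while advanced users fall back to `autoScalingMetrics` / `autoScalingBehavior` for full control.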