This repository was archived by the owner on Jul 20, 2022. It is now read-only.
configOverrides only works at role level #233
Open
Description
Setting configOverrides at the service level has no effect; overrides only take effect when set on the individual roles. In the example below, I set a spark-defaults.conf and a spark-env.sh override at both the service and the role level.
apiVersion: spark.stackable.tech/v1alpha1
kind: SparkCluster
metadata:
  name: spark-qs
spec:
  version: "3.0.1"
  config:
    logDir: "file:///stackable/spark/logs"
    enableMonitoring: true
  configOverrides:
    spark-defaults.conf:
      global.override: value2
    spark-env.sh:
      GLOBAL_ENV_VAR: test_value
  masters:
    roleGroups:
      default:
        selector:
          matchLabels:
            kubernetes.io/os: linux
        replicas: 1
        config:
          masterPort: 7078
          masterWebUiPort: 8081
        configOverrides:
          spark-defaults.conf:
            masters.override: value2
          spark-env.sh:
            MASTER_ENV_VAR: test_value
  workers:
When checking the Spark master configuration, I can see the properties added by the role-level configOverrides, but not the service-level ones.
bash-4.4$ cat spark-env.sh
MASTER_ENV_VAR=test_value
SPARK_MASTER_PORT=7078
SPARK_MASTER_WEBUI_PORT=8081
bash-4.4$ cat spark-defaults.conf
masters.override value2
spark.eventLog.dir file:///stackable/spark/logs
spark.eventLog.enabled true
spark.port.maxRetries 0
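
For contrast, if the service-level overrides were applied as well, the same files would be expected to contain both sets of entries (a sketch derived from the example above; ordering may differ):

bash-4.4$ cat spark-env.sh
GLOBAL_ENV_VAR=test_value
MASTER_ENV_VAR=test_value
SPARK_MASTER_PORT=7078
SPARK_MASTER_WEBUI_PORT=8081
bash-4.4$ cat spark-defaults.conf
global.override value2
masters.override value2
spark.eventLog.dir file:///stackable/spark/logs
spark.eventLog.enabled true
spark.port.maxRetries 0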
The operator should allow a user to set a property at the service level and have it appear in all of the roles' configuration files. When the same key is set at both levels, the role-level configOverrides should take precedence over the service-level ones.
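
A minimal sketch of that merge semantics in Rust (merge_overrides and its parameters are hypothetical names for illustration, not the operator's actual API): the service-level entries are applied first, then the role-level entries are layered on top so they win on conflicting keys.

use std::collections::HashMap;

// Illustrative merge of configOverrides for a single config file
// (e.g. spark-defaults.conf). Hypothetical, not the operator's real code.
fn merge_overrides(
    service: &HashMap<String, String>,
    role: &HashMap<String, String>,
) -> HashMap<String, String> {
    // Start from the service-level overrides...
    let mut merged = service.clone();
    // ...then let role-level entries overwrite any conflicting keys,
    // giving the role level precedence.
    merged.extend(role.iter().map(|(k, v)| (k.clone(), v.clone())));
    merged
}

fn main() {
    let service = HashMap::from([("global.override".to_string(), "value2".to_string())]);
    let role = HashMap::from([("masters.override".to_string(), "value2".to_string())]);
    let merged = merge_overrides(&service, &role);
    // Both the service- and role-level properties end up in the rendered file.
    assert_eq!(merged.len(), 2);
}

Note that extend() replaces existing keys, which is exactly what gives the role level its precedence on conflicts.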