Proxy and Redis issues #291

@JoaoPPCastelo

Description

Hi all!

I was trying to deploy Authentik on a K8s cluster made of Raspberry Pi 5s (home lab) and I'm running into some issues that I haven't been able to fix...

I'm using the following config:

authentik:
  secret_key: "<redacted>"
  # This sends anonymous usage-data, stack traces on errors and
  # performance data to sentry.io, and is fully opt-in
  error_reporting:
    enabled: false
  postgresql:
    password: "<redacted>"

server:
  ingress:
    # Specify kubernetes ingress controller class name
    ingressClassName: traefik
    enabled: true
    hosts:
      - <redacted>
    tls:
      - secretName: authentik-tls
        hosts:
          - <redacted>
    https: true

postgresql:
  enabled: true
  auth:
    password: "<redacted>"
redis:
  enabled: false

So far I've found the following errors:

  • When using redis.enabled: true, the Redis pod was constantly failing with the error <jemalloc>: Unsupported system page size. I found some related issues, like [bitnami/redis] container crashed when docker run on arm64 bitnami/containers#26062. It seems a fix was shipped in the latest images: I started a container with bitnami/redis:latest, didn't get the error, and the pod was up and running, so maybe authentik just needs to move to a newer container tag / chart version? (See the values sketch after the pod listing below.)
  • To work around the previous error and proceed with the deployment, I deployed the Helm chart with redis.enabled: false, and:
    -- (1) in the logs for both the server and worker pods there was a {"event": "Redis Connection failed, retrying... (Timeout connecting to server)", "level": "info", "logger": "authentik.lib.config", "timestamp": 1729369834.256011} error and the pods restarted. So either there's a config issue where redis.enabled: false isn't being propagated everywhere it needs to be, or Redis is simply required?
    -- (2) the server pod kept logging {"error":"authentik starting","event":"failed to proxy to backend","level":"warning","logger":"authentik.router","timestamp":"2024-10-19T20:31:39Z"}. The pods were terminated automatically and replaced by new ones, but always with the same errors:
NAME                                READY   STATUS    RESTARTS        AGE
authentik-postgresql-0              1/1     Running   0               34m
authentik-server-7d7699d4d5-bsn2f   0/1     Running   3 (4m27s ago)   34m
authentik-worker-67cf9cf89-dlmzs    0/1     Running   3 (3m26s ago)   34m
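
In case it's useful, this is roughly what I was planning to try next. Both the image tag and the external hostname below are placeholders I haven't verified, and I'm assuming the chart forwards redis.image.tag to the bundled Bitnami subchart and exposes authentik.redis.host for pointing at an external instance:

redis:
  enabled: true
  image:
    # Placeholder tag -- I haven't confirmed which tags actually carry the arm64 page-size fix
    tag: "7.4.1-debian-12-r0"

# Or, alternatively, keep the bundled Redis disabled and use an external one:
# redis:
#   enabled: false
# authentik:
#   redis:
#     host: "redis.redis.svc.cluster.local"  # placeholder hostname
#     password: ""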

Helm chart version 2024.8.3 from https://artifacthub.io/packages/helm/goauthentik/authentik
Running on a k3s cluster (v1.30.5+k3s1) on Raspberry Pi 5s

I'm adding the logs from the pods in the comments to avoid an even bigger description.

Any insight on how to fix these issues and get Authentik running?
Thank you for the support!
