Standalone pod not starting with no usable messages apparent #573

Closed
opened 2023-11-22 00:35:34 +00:00 by AntonOfTheWoods · 7 comments

I have the following overrides to try and set up a single-pod system.

```yaml
redis-cluster:
  enabled: false
postgresql:
  enabled: true
postgresql-ha:
  enabled: false

persistence:
  enabled: true

ingress:
  enabled: true
  className: nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging
  hosts:
    - host: gitea.adomain.org
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: gitea-tls
      hosts:
        - gitea.adomain.org

gitea:
  admin:
    username: "a_username"
    password: "a_password"
    email: "admin@anemail.com"
  config:
    database:
      DB_TYPE: postgres
    session:
      PROVIDER: db
    cache:
      ADAPTER: memory
    queue:
      TYPE: level
    indexer:
      ISSUE_INDEXER_TYPE: bleve
      REPO_INDEXER_ENABLED: true
```

Unfortunately, I'm just getting the following events:

```
Events:
  Type     Reason   Age                   From     Message
  ----     ------   ----                  ----     -------
  Normal   Pulled   15m (x165 over 13h)   kubelet  Container image "gitea/gitea:1.21.0-rootless" already present on machine
  Warning  BackOff  37s (x3825 over 13h)  kubelet  Back-off restarting failed container configure-gitea in pod gitea-f9894656c-jwjlr_default(eb2581b6-df08-485c-b10d-785b2e0db1e1)
```

And I'm not sure how to get any further debug information. Is there something I can do to dig deeper?

AntonOfTheWoods changed title from Standalone pod not starting with no usable messages anywhere to Standalone pod not starting with no usable messages apparent 2023-11-22 00:35:53 +00:00
Member

If you disable redis-cluster without providing another Redis instance, that is the issue. See #564 for reference.

I am working on a fix for that.
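
In the meantime, if you already run a Redis instance elsewhere, you could point Gitea at it instead. A minimal sketch, assuming an existing Redis reachable at a placeholder Service address (the cache/session/queue keys follow Gitea's app.ini sections):

```yaml
gitea:
  config:
    cache:
      ADAPTER: redis
      HOST: redis://my-redis.default.svc.cluster.local:6379/0   # placeholder address
    session:
      PROVIDER: redis
      PROVIDER_CONFIG: redis://my-redis.default.svc.cluster.local:6379/0
    queue:
      TYPE: redis
      CONN_STR: redis://my-redis.default.svc.cluster.local:6379/0
```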


Thanks @justusbunsi - at the very least the readme needs to be updated! I am just getting started so have no idea what to suggest though, sorry!

Member

@AntonOfTheWoods

I have to apologize for my initial response. I overlooked that you configured the cache for memory, as stated in the README. This should work when running for the first time.

There is a really nasty bug with updating the app.ini. If you have tried the default values first and then switched to the single-pod configuration, there is a high chance that some redis configuration is still in the app.ini.

Please check this and remove any redis related setting manually. Then try the sample from README again.


Actually, I simply can't get it to work at all. If I put nothing for redis, or

```yaml
redis-cluster:
  cluster:
    nodes: 1
```

then it creates (1 or 3) redis nodes, but I still get the same error with no messages anywhere. I did of course delete the chart and all resources, including manually deleting all PVCs. It just doesn't work.

Member

Thank you for your input. It shouldn't happen but obviously does. That's unfortunate. I'll take a closer look to check if I can reproduce that with your settings.

Meanwhile: redis-cluster requires at least 3 pods to start successfully. Setting it to 1 will cause it to get stuck and, consequently, Gitea will not be able to start. As you started without Redis, this is most likely not the root cause; just an FYI. There was an internal discussion about standalone Redis; more to come about this soon.
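
For illustration, a minimal sketch of a cluster that can actually form, assuming the bundled Bitnami redis-cluster sub-chart's `cluster.nodes`/`cluster.replicas` values:

```yaml
redis-cluster:
  enabled: true
  cluster:
    nodes: 3      # Redis Cluster needs at least 3 nodes to form
    replicas: 0   # no replicas per master in this minimal setup
```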

The pod status you posted shows a restart loop in the `configure-gitea` init container. Please have a closer look into the logs of that specific container. That will give more hints about what's wrong.


OK, I am at least mainly to blame here. The problem was that I don't use the default `clusterDomain`, and I should have noticed the init container. Setting that and making sure a redis-cluster is properly installed seems to work. I'm not sure open registration with no email validation is the best default, but that's another story!
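
For anyone else hitting this, a sketch of the override in question, assuming the chart's top-level `clusterDomain` value (the domain itself is a placeholder):

```yaml
# Must match the cluster's actual DNS domain; otherwise the init container
# cannot resolve the in-cluster PostgreSQL/Redis service FQDNs.
clusterDomain: my-cluster.local
```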

Thanks.

Member

I've just tried the single-pod setup referenced in the README again and it works (in a fresh namespace):

```yaml
redis-cluster:
  enabled: false
postgresql:
  enabled: true
postgresql-ha:
  enabled: false

persistence:
  enabled: true

gitea:
  config:
    database:
      DB_TYPE: postgres
    session:
      PROVIDER: db
    cache:
      ADAPTER: memory
    queue:
      TYPE: level
    indexer:
      ISSUE_INDEXER_TYPE: bleve
      REPO_INDEXER_ENABLED: true
```

> I'm not sure open registration with no email validation is the best default but that's another story!

We don't claim that dependency settings are "top notch" or "good defaults". We rely on the defaults of the respective dependencies. Users are encouraged to review them and possibly adapt them to their needs.
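
That said, registration behaviour can be tightened via Gitea's own settings. A sketch using the `[service]` section (the values shown are assumptions to adapt to your needs):

```yaml
gitea:
  config:
    service:
      DISABLE_REGISTRATION: true        # disable self-registration entirely
      # REGISTER_EMAIL_CONFIRM: true    # or keep registration but require email confirmation
```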

Regarding fallbacks and their handling: the topic has already been discussed recently. As of now, the chart works with the provided defaults, and custom setups are referenced in the README. In my opinion, users shouldn't expect a working chart when they turn off required default dependencies without putting a valid alternative configuration in place. Gitea can run with so many different config settings that we could define five (magic) fallbacks for cases where X, Y, or Z are turned off or not used. But I don't think that should be the goal; users should instead be encouraged to think about their config and setup before deploying.

What makes switching between setups hard is #356, as the config file is not properly reset. We are working on that.

Thanks for your contribution and question! If any information is missing or unclear, please let us know.

