I have a problem with an installation on Kubernetes with Helm. All the applications are working fine, but the repository pod has a strange error.
Pods:
kubectl get pods

NAME                                                       READY   STATUS             RESTARTS   AGE
fegortest-activemq-778fcd74d4-k4rsm                        1/1     Running            0          21m
fegortest-alfresco-cs-ce-imagemagick-8457cf77df-htczx      1/1     Running            0          21m
fegortest-alfresco-cs-ce-libreoffice-66dc48b6b8-qzvz2      1/1     Running            1          21m
fegortest-alfresco-cs-ce-pdfrenderer-68cfdb69b-vcwdz       1/1     Running            0          21m
fegortest-alfresco-cs-ce-repository-568c7d9696-p6v5x       0/1     CrashLoopBackOff   8          21m
fegortest-alfresco-cs-ce-share-cc9fb57f-n6r77              0/1     Running            6          21m
fegortest-alfresco-cs-ce-tika-54f45b599f-75n5j             1/1     Running            0          21m
fegortest-alfresco-cs-ce-transform-misc-5974955c59-ggf6v   1/1     Running            1          21m
fegortest-alfresco-search-solr-759bf5c6b-v2zph             1/1     Running            0          21m
fegortest-postgresql-acs-87fc78674-gq44s                   1/1     Running            0          21m
Description:
kubectl describe pod fegortest-alfresco-cs-ce-repository-568c7d9696-p6v5x

Name:         fegortest-alfresco-cs-ce-repository-568c7d9696-p6v5x
Namespace:    default
Priority:     0
Node:         gke-fegortest-default-pool-90275dc2-dqkl/10.132.0.15
Start Time:   Tue, 06 Oct 2020 11:10:29 +0200
Labels:       app=fegortest-alfresco-cs-ce-repository
              component=repository
              pod-template-hash=568c7d9696
              release=fegortest
Annotations:  kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for init container init-db; cpu request for init container init-fs
Status:       Running
IP:           10.20.0.10
IPs:          <none>
Controlled By:  ReplicaSet/fegortest-alfresco-cs-ce-repository-568c7d9696
Init Containers:
  init-db:
    Container ID:  docker://24c943a241d52222b05e470160e0c19ddddc816443134d564facb8db00c0bad6
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce7aaccd0b8823d11b0d5de956002
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until nc -w1 fegortest-postgresql-acs 5432; do echo "waiting for fegortest-postgresql-acs"; sleep 2; done;
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 06 Oct 2020 11:10:58 +0200
      Finished:     Tue, 06 Oct 2020 11:11:46 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l7njv (ro)
  init-fs:
    Container ID:  docker://d1586b8b8392b4bb52f10cec80bb50c84353d86f51f1d329687269cb51c97fe6
    Image:         busybox
    Image ID:      docker-pullable://busybox@sha256:2ca5e69e244d2da7368f7088ea3ad0653c3ce7aaccd0b8823d11b0d5de956002
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      chown -R 33000:1000 /usr/local/tomcat/alf_data
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 06 Oct 2020 11:11:47 +0200
      Finished:     Tue, 06 Oct 2020 11:11:47 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:
      /usr/local/tomcat/alf_data from data (rw,path="alfresco-content-services/repository-data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l7njv (ro)
Containers:
  alfresco-content-services-community:
    Container ID:   docker://3ca29516da792c5ae45caf70c1151b6562b3589f54e53c72937400ea6c0193f2
    Image:          alfresco/alfresco-content-repository-community:6.2.1-A8
    Image ID:       docker-pullable://alfresco/alfresco-content-repository-community@sha256:d77e597e32808390fa5e2574d667efdc4078b6af2b598cac7af3c55e4fc266f6
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 06 Oct 2020 11:28:38 +0200
      Finished:     Tue, 06 Oct 2020 11:28:38 +0200
    Ready:          False
    Restart Count:  8
    Limits:
      cpu:     500m
      memory:  3000Mi
    Requests:
      cpu:     500m
      memory:  3000Mi
    Liveness:   http-get http://:8080/alfresco/api/-default-/public/alfresco/versions/1/probes/-live- delay=130s timeout=10s period=20s #success=1 #failure=1
    Readiness:  http-get http://:8080/alfresco/api/-default-/public/alfresco/versions/1/probes/-ready- delay=60s timeout=10s period=20s #success=1 #failure=6
    Environment Variables from:
      fegortest-alfresco-cs-ce-dbsecret              Secret     Optional: false
      fegortest-alfresco-cs-ce-repository-configmap  ConfigMap  Optional: false
    Environment:  <none>
    Mounts:
      /usr/local/tomcat/alf_data from data (rw,path="alfresco-content-services/repository-data")
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-l7njv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  alfresco-volume-claim
    ReadOnly:   false
  default-token-l7njv:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-l7njv
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Warning  FailedScheduling        22m (x2 over 22m)     default-scheduler        pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Warning  FailedScheduling        22m                   default-scheduler        pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
  Normal   Scheduled               22m                   default-scheduler        Successfully assigned default/fegortest-alfresco-cs-ce-repository-568c7d9696-p6v5x to gke-fegortest-default-pool-90275dc2-dqkl
  Normal   SuccessfulAttachVolume  22m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-7863e0ee-1271-4e0f-833e-b267da7b8254"
  Normal   Pulling                 22m                   kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Pulling image "busybox"
  Normal   Created                 22m                   kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Created container init-db
  Normal   Pulled                  22m                   kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Successfully pulled image "busybox"
  Normal   Started                 22m                   kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Started container init-db
  Normal   Started                 21m                   kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Started container init-fs
  Normal   Pulled                  21m                   kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Successfully pulled image "busybox"
  Normal   Created                 21m                   kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Created container init-fs
  Normal   Pulling                 21m                   kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Pulling image "busybox"
  Normal   Pulling                 20m (x3 over 21m)     kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Pulling image "alfresco/alfresco-content-repository-community:6.2.1-A8"
  Normal   Pulled                  20m (x3 over 21m)     kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Successfully pulled image "alfresco/alfresco-content-repository-community:6.2.1-A8"
  Normal   Created                 20m (x3 over 21m)     kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Created container alfresco-content-services-community
  Normal   Started                 20m (x3 over 21m)     kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Started container alfresco-content-services-community
  Warning  BackOff                 2m19s (x90 over 20m)  kubelet, gke-fegortest-default-pool-90275dc2-dqkl  Back-off restarting failed container
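The FailedScheduling warnings about unbound PersistentVolumeClaims and the single shared claim seem relevant. To see how 'alfresco-volume-claim' is bound and which access modes it carries (claim and volume names taken from the output above):

kubectl get pvc alfresco-volume-claim
# Access modes granted by the bound volume (PV name from the events above)
kubectl get pv pvc-7863e0ee-1271-4e0f-833e-b267da7b8254 -o jsonpath='{.spec.accessModes}'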
Logs:
kubectl logs fegortest-alfresco-cs-ce-repository-568c7d9696-p6v5x

NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
Error: Could not find or load main class 209c6174da490caeb422f3fa5a7ae634
Caused by: java.lang.ClassNotFoundException: 209c6174da490caeb422f3fa5a7ae634
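The hash-like token in the error ('209c6174da490caeb422f3fa5a7ae634') looks like a stray value being handed to the JVM where a main class name was expected, for example from a malformed Java options entry. That is only a guess, but the environment the container actually receives can be inspected directly; the ConfigMap and Secret names come from the describe output above:

# Dump the environment sources the repository container loads via envFrom
kubectl get configmap fegortest-alfresco-cs-ce-repository-configmap -o yaml
kubectl get secret fegortest-alfresco-cs-ce-dbsecret -o yaml
# Secret values are base64-encoded; to decode one (this key name is hypothetical):
kubectl get secret fegortest-alfresco-cs-ce-dbsecret -o jsonpath='{.data.DATABASE_PASSWORD}' | base64 -d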
Nodes: 3
Machine/Image type: e2-standard-2 (2 vCPUs, 8 GB of memory)
Hello,
Can you please tell me how you deployed Alfresco in Kubernetes, and whether you managed to find a solution to your problem?
Thank you.
Hi @fegor
It would be good to get a few more details and exact steps / versions that you followed.
For example, are you following https://github.com/Alfresco/acs-deployment/tree/master/docs/helm ? Maybe check which commit id from master you used (the last tag of the ACS Deployment project itself was 5.0.0-M1 on 16 Oct) => https://github.com/Alfresco/acs-deployment/blob/v5.0.0-M1/docs/helm/README.md
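For reference, the chart and version your release was installed from can be read straight off Helm (release name 'fegortest' taken from your pod labels):

helm list --all-namespaces   # shows chart name and version for each release
helm get values fegortest    # the user-supplied values for the release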
Also, 8 GB of memory isn't really enough & is likely to cause unpredictable behaviour. Even running locally in Docker Desktop, 16 GB would be the absolute minimum, but 24 GB or 32 GB would make more sense. See =>
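A quick way to check what the cluster actually has available per node:

kubectl describe nodes | grep -A 5 Allocatable   # per-node allocatable CPU and memory
kubectl top nodes                                # live usage (needs metrics-server, which GKE provides)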
Hi, Eddie!
I had some problems, but I finally solved them.
For example, all the pods use the same PVC ('alfresco-volume-claim'), and that is a problem with three nodes in GKE when the 'accessModes' is 'ReadWriteOnce', since the volume can then only be mounted on a single node (see the sketch below).
The Alfresco Community image tag is now 6.2.2-RC1.
The total memory across the nodes is 24 GB (3 nodes * 8 GB).
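Roughly these overrides are what the fix amounts to. This is only a sketch: the values keys are my assumption, and <chart> stands for whatever chart reference was used at install time; check the chart's values.yaml for your version.

# Hypothetical key names; verify against the chart's values.yaml
helm upgrade fegortest <chart> --reuse-values \
  --set persistence.accessModes[0]=ReadWriteMany \
  --set repository.image.tag=6.2.2-RC1
# Note: ReadWriteMany needs a storage class that supports it (e.g. NFS or
# Filestore on GKE); the default GKE persistent-disk class is ReadWriteOnce only.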
Best regards,
Fegor
Hi @fegor
Glad you found a solution & thanks for reporting back. I've marked this as solved.
Best wishes,