Forgejo on Kubernetes
By chris
- 8 minutes read - 1623 words

I (and my sons) want to be able to host code and other text locally, so I decided to run it on my K3s lab.
The first decision to make was which platform to use. I have used Gitlab for years at work and love it, but it is quite a beast to administer and run. I have used Gogs and Gitea in the past and they are far more lightweight than Gitlab, so I decided to use Forgejo.
Info
Gogs vs Gitea vs Forgejo is a complicated story. At first there was Gogs, which was run by a single maintainer who did not accept much community input. Gitea was forked from Gogs by community members who wanted to be more active. Fast forward to October 2022: Gitea also upset the community, and Forgejo was born, initially as a straight fork. In early 2024 Forgejo made the decision to become a hard fork, and compatibility with Gitea is no longer guaranteed.
While not as complex as Gitlab, I still want Forgejo to be scalable (because I can, even though I probably will not), so there are a few moving parts:
- A database
- Redis for cache and sessions
- Storage
- Forgejo itself
- Forgejo Runner (for CI/CD aka Actions)
I took these one at a time and added them to my FluxCD repo.
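For orientation, here is roughly what the Forgejo portion of the Flux repo ends up containing once everything below is in place. The parent directory name is purely illustrative; the file names are the ones referenced throughout this post:

forgejo/
├── kustomization.yaml
├── kustomizeconfig.yaml
├── source.yaml
├── db.yaml
├── release.yaml
├── values.yaml
├── values-runner.yaml
├── forgejo-mariadb-sealed-secret.yaml
├── forgejo-admin-sealed-secret.yaml
└── forgejo-runner-sealed-secret.yaml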
Forgejo publishes its Helm chart as an OCI repository, so I started by creating a source.yaml for it:
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: forgejo
  namespace: forgejo
spec:
  interval: 10m0s
  provider: generic
  ref:
    tag: 8.2.0
  url: oci://code.forgejo.org/forgejo-helm/forgejo
Database
I could choose either MariaDB or PostgreSQL and have no real preference. I flipped a coin and went with MariaDB. I have the mariadb-operator running in my cluster, so I created db.yaml to define that:
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: forgejo-mariadb
  namespace: forgejo
spec:
  rootPasswordSecretKeyRef:
    name: mariadb-root
    key: password
    generate: true
  username: mariadb
  passwordSecretKeyRef:
    name: mariadb-password
    key: password
    generate: true
  database: mariadb
  port: 3306
  storage:
    size: 1Gi
    storageClassName: longhorn
  service:
    type: ClusterIP
  resources:
    requests:
      memory: 256Mi
    limits:
      memory: 256Mi
  metrics:
    enabled: true
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: Database
metadata:
  name: forgejo
spec:
  # If you want the database to be created with a different name than the resource name
  # name: my-logical-database
  mariaDbRef:
    name: forgejo-mariadb
  characterSet: utf8
  collate: utf8_general_ci
  requeueInterval: 30s
  retryInterval: 5s
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: User
metadata:
  name: forgejo
spec:
  mariaDbRef:
    name: forgejo-mariadb
  passwordSecretKeyRef:
    name: forgejo-mariadb
    key: password
  host: "%"
  # This field is immutable and defaults to 10
  # maxUserConnections: 20
  # retryInterval: 5s
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: Grant
metadata:
  name: forgejo
spec:
  mariaDbRef:
    name: forgejo-mariadb
  privileges:
    - "ALL PRIVILEGES"
  database: forgejo
  table: "*"
  username: forgejo
  grantOption: true
  host: "%"
  requeueInterval: 30s
  retryInterval: 5s
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: forgojo-db-dumpall
  namespace: forgejo
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 25Gi
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: Backup
metadata:
  name: forgejo-backup-scheduled
spec:
  mariaDbRef:
    name: forgejo-mariadb
  schedule:
    cron: "2 4 * * *"
    suspend: false
  storage:
    persistentVolumeClaim:
      storageClassName: nfs-client
      resources:
        requests:
          storage: 50Gi
      accessModes:
        - ReadWriteOnce
  args:
    - --single-transaction
    - --all-databases
    - --verbose
That does quite a lot:
- creates the database server and the database itself
- creates the forgejo user along with random credentials
- grants permissions on that DB to the user
- configures a daily backup
- creates secrets containing the generated credentials that I can reference later
I generate the passwords myself and put them into secrets. These are referenced by the Operator above, but can also be used by the HelmRelease
we will create later to install Forgejo itself.
Info
To avoid committing secret data in plain text, I use Bitnami’s Sealed Secrets controller and add only encrypted files to my repo. The actual Secret is created dynamically by the controller running in my cluster.
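For example, the forgejo-mariadb secret can be created locally and sealed before it ever touches the repo. This is just a sketch; the exact kubeseal invocation depends on how the controller is installed in your cluster:

# Build the secret locally without applying it to the cluster
kubectl create secret generic forgejo-mariadb \
  --namespace forgejo \
  --from-literal=password="$(openssl rand -base64 32)" \
  --dry-run=client -o yaml > forgejo-mariadb-secret.yaml

# Encrypt it against the controller's public key; only the sealed file is committed
kubeseal --format yaml < forgejo-mariadb-secret.yaml > forgejo-mariadb-sealed-secret.yaml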
Forgejo Itself
Before installing Forgejo, we need the values to pass to Helm. I like to keep them in a dedicated file in my Flux repo. Mine looks like this:
redis-cluster:
  enabled: false
redis:
  enabled: true
postgresql:
  enabled: false
postgresql-ha:
  enabled: false
persistence:
  enabled: true
  storageClass: longhorn
  accessModes:
    - ReadWriteMany
  size: 20Gi
gitea:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      additionalLabels:
        release: prometheus
  admin:
    existingSecret: forgejo-admin
  # oauth:
  #   - name: "Authentik"
  #     provider: "openidConnect"
  #     existingSecret: forgejo-oauth
  #     autoDiscoverUrl: "https://auth.lab.cowley.tech/application/o/forgejo/.well-known/openid-configuration"
  #     iconUrl: "https://auth.lab.cowley.tech/static/dist/assets/icons/icon.svg"
  #     scopes: "email profile"
  config:
    database:
      DB_TYPE: mysql
      HOST: forgejo-mariadb
      NAME: forgejo
      USER: forgejo
    indexer:
      ISSUE_INDEXER_TYPE: bleve
      REPO_INDEXER_ENABLED: true
  additionalConfigFromEnvs:
    - name: FORGEJO__SERVICE__DISABLE_REGISTRATION
      value: "true"
    - name: FORGEJO__DATABASE__PASSWD
      valueFrom:
        secretKeyRef:
          name: forgejo-mariadb
          key: password
service:
  ssh:
    type: LoadBalancer
    annotations:
      metallb.universe.tf/address-pool: lab-pool
      metallb.universe.tf/allow-shared-ip: ingress-nginx
    externalTrafficPolicy: Cluster
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    external-dns.alpha.kubernetes.io/controller: dns-controller
    nginx.ingress.kubernetes.io/proxy-body-size: 8G
  hosts:
    - host: code.lab.cowley.tech
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: forgejo-tls
      hosts:
        - code.lab.cowley.tech
There is quite a bit going on here, so let’s take it one step at a time.
I am using my own DB, but I chose to let Helm manage Redis, so I make sure the PostgreSQL sub-charts are disabled and enable the Redis sub-chart.
I chose to persist data on Longhorn. I tried it on NFS, but the performance was not great.
Of course I enable metrics; I have Prometheus running, so why wouldn't I?
In .gitea.admin I decided to use an existing secret to pre-create my admin user. This is simply a secret (called forgejo-admin) that contains two bits of data:
- username (cannot be admin)
- password
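For reference, the unsealed version of that secret looks something like this; the values shown here are placeholders, not my real credentials:

apiVersion: v1
kind: Secret
metadata:
  name: forgejo-admin
  namespace: forgejo
stringData:
  username: forgejo_admin
  password: a-long-random-password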
After that we have a commented-out section to do with OAuth. I run Authentik and use it for SSO with most of my applications, but it did not work on my first attempt with Forgejo. I will return to that, as I am sure it is something silly.
Next we come to the .gitea.config block, where we configure Forgejo itself. We start with the DB config (minus the password) in .gitea.config.database. The password goes into .gitea.additionalConfigFromEnvs, in an environment variable called FORGEJO__DATABASE__PASSWD. This is populated from the forgejo-mariadb secret that was created earlier.
The final bit of config for Forgejo is an environment variable to disable registration (FORGEJO__SERVICE__DISABLE_REGISTRATION). Notice that the environment variables start with FORGEJO__. Everything in the app.ini file can be defined using environment variables, following the rules described here. In this instance, it sets the key DISABLE_REGISTRATION in the [service] section.
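In other words, that one environment variable is equivalent to putting this in app.ini:

[service]
DISABLE_REGISTRATION = true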
The final piece dedicated to Forgejo is in .service. I want to be able to clone using SSH over port 22, so I need the LoadBalancer service to also serve TCP port 22 on the same address my Ingress controller uses. Fortunately MetalLB supports this exact scenario. This creates a LoadBalancer service that shares the same IP as my Nginx ingress controller. My Ingress controller has the corresponding configuration:
controller:
  service:
    annotations:
      metallb.universe.tf/address-pool: lab-pool
      metallb.universe.tf/allow-shared-ip: ingress-nginx
    externalTrafficPolicy: Cluster
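With both services sharing that IP and DNS pointing code.lab.cowley.tech at it, cloning over SSH works as you would expect; the repository path here is just an example:

git clone git@code.lab.cowley.tech:chris/some-repo.git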
Finally we have a block that creates my Ingress, including the annotations for cert-manager to create a certificate automatically using Let's Encrypt. The only interesting bit in the .ingress block is the nginx.ingress.kubernetes.io/proxy-body-size: 8G annotation, which allows pushing larger files to Forgejo. It was the container registry that prompted that in my case. I chose 8GB pretty randomly - anyone who creates container images larger than that should be sacked, quite frankly.
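As a rough illustration of what that annotation is for, pushing an image to Forgejo's built-in container registry looks something like this (the owner and image names are placeholders):

podman login code.lab.cowley.tech
podman tag some-image:latest code.lab.cowley.tech/chris/some-image:latest
podman push code.lab.cowley.tech/chris/some-image:latest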
All that will be put into a configMapGenerator provided by Kustomize. This generates a new ConfigMap each time the file is modified and updates the HelmRelease accordingly.
Finally, we want a release.yaml for our HelmRelease:
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: forgejo
  namespace: forgejo
spec:
  chartRef:
    kind: OCIRepository
    name: forgejo
    namespace: forgejo
  interval: 30m
  timeout: 10m
  install:
    crds: CreateReplace
    remediation:
      retries: 3
  upgrade:
    crds: CreateReplace
  valuesFrom:
    - kind: ConfigMap
      name: forgojo-values
Nothing special there, except to note that there is no version. As it refers to an OCIRepository, the version is handled there rather than in the HelmRelease.
To bring all that together, our kustomization.yaml looks like this:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: forgejo
resources:
  - source.yaml
  - db.yaml
  - forgejo-mariadb-sealed-secret.yaml
  - forgejo-admin-sealed-secret.yaml
  - forgejo-runner-sealed-secret.yaml
  - release.yaml
configMapGenerator:
  - name: forgojo-values
    files:
      - values.yaml=values.yaml
  - name: forgejo-runner-values
    files:
      - values.yaml=values-runner.yaml
configurations:
  - kustomizeconfig.yaml
That refers to kustomizeconfig.yaml, which tells Kustomize to rewrite the generated ConfigMap name (hash suffix and all) anywhere it appears in a HelmRelease's spec/valuesFrom/name field:
nameReference:
  - kind: ConfigMap
    version: v1
    fieldSpecs:
      - path: spec/valuesFrom/name
        kind: HelmRelease
That will get you a working install, and you can log in with the admin credentials you created earlier.
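Once Flux has reconciled everything, a quick sanity check looks something like this:

flux get helmreleases -n forgejo
kubectl -n forgejo get pods
kubectl -n forgejo get ingress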
Forgejo Runner
What we are currently missing is a way of running CI jobs. I have run Drone in the past and considered using Woodpecker, but after about five minutes' thought I decided to just stick with the built-in Actions, which meant I needed a runner.
Once again I turned to ArtifactHub, where I found a ready-made chart. It isn't the best documented, so I had to dig around in its values.yaml a little, but I came up with something like the following.
Add the repo to our existing source.yaml:
...
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: forgejo-runner
  namespace: forgejo
spec:
  interval: 10m0s
  provider: generic
  ref:
    tag: 0.2.8
  url: oci://codeberg.org/wrenix/helm-charts/forgejo-runner
We need a values-runner.yaml
to configure the release. This simply contains:
runner:
  config:
    existingSecret: forgejo-runner-secret
That secret contains a single key, .runner, which holds a file we generate using a temporary container. Grab a registration token from the Forgejo UI, then run the forgejo-runner image on your workstation to register the runner:
podman run -it -v ~/data:/data code.forgejo.org/forgejo/runner:3.5.1 \
  forgejo-runner register \
  --no-interactive \
  --token ${TOKEN} \
  --name <runner-name> \
  --instance https://<url.to.your.forgejo>
That will create the file ~/data/.runner
which you can use to create the secret:
cd ~/data
kubectl create secret generic forgejo-runner-secret \
  --from-file .runner
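Since kustomization.yaml expects a forgejo-runner-sealed-secret.yaml rather than a plain Secret, the same kubeseal dance from earlier applies here too. A sketch, assuming the secret belongs in the forgejo namespace:

kubectl create secret generic forgejo-runner-secret \
  --namespace forgejo \
  --from-file .runner \
  --dry-run=client -o yaml | kubeseal --format yaml > forgejo-runner-sealed-secret.yaml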
Now you can add the HelmRelease to release.yaml:
...
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: forgejo-runner
  namespace: forgejo
spec:
  chartRef:
    kind: OCIRepository
    name: forgejo-runner
    namespace: forgejo
  interval: 30m
  timeout: 10m
  install:
    crds: CreateReplace
    remediation:
      retries: 3
  upgrade:
    crds: CreateReplace
  valuesFrom:
    - kind: ConfigMap
      name: forgejo-runner-values
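Once the runner pod is up and registered, a minimal workflow dropped into any repository as .forgejo/workflows/demo.yaml will prove that jobs get picked up. The runs-on label below is an assumption; match it to whatever labels your runner registered with:

name: demo
on: [push]

jobs:
  hello:
    runs-on: docker  # assumed label; change to match your runner
    steps:
      - run: echo "Hello from Forgejo Actions"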
And there we have it: our own self-hosted GitHub (okay, not exactly) running on our K8s cluster.