Compare commits


170 Commits

Author SHA1 Message Date
Jack Jackson
1e767ec1eb Formatting the secret as JSON 2025-04-18 12:55:01 -07:00
Jack Jackson
6aba9bf11b Try using Vault Sidecar Injection
Referencing
[here](https://developer.hashicorp.com/vault/tutorials/kubernetes/kubernetes-sidecar#configure-kubernetes-authentication),
comparing with the Secrets Operator that I used
[here](https://blog.scubbo.org/posts/base-app-infrastructure/). I
_think_ I prefer this because:

* It doesn't create a Kubernetes secret (which is, contrary to
  expectation, [not entirely
  secure](https://kubernetes.io/docs/concepts/configuration/secret/))
* The YAML/template changes required are smaller
* It looks like it _might_ be able to write a whole Vault path as a
  single file, rather than one-file-per-key - though it'll need some
  template wizardry (in a follow-on commit) to format that right.
2025-04-18 12:42:52 -07:00
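For reference, the sidecar-injection approach boils down to pod annotations like the ones that later appear in `charts/edh-elo/values.yaml` - the `toJSON` template is the "template wizardry" mentioned above that writes the whole Vault path out as a single JSON file (a sketch using this repo's role and path names):

```yaml
podAnnotations:
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/role: "edh-elo"
  # Write the whole secret at edh-elo/data/google-credentials to a single file...
  vault.hashicorp.com/agent-inject-secret-google-credentials.json: "edh-elo/data/google-credentials"
  # ...and format it as JSON rather than the default key=value dump
  vault.hashicorp.com/agent-inject-template-google-credentials.json: |
    {{- with secret "edh-elo/data/google-credentials" -}}
    {{- .Data.data | toJSON -}}
    {{- end -}}
```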
Imagebot
f49906b12f Update EDH ELO commit to "9b4e6c3b4d852883a372332461253ef9eae6d014" 2025-04-18 08:34:26 +00:00
Jack Jackson
9c504e5145 More port-alignment, and enable Ingress 2025-04-07 20:26:08 -07:00
Jack Jackson
a225b0130a Specify miniflux target port explicitly 2025-04-07 16:18:07 -07:00
Jack Jackson
5e1bf66aeb Try circumventing miniflux readiness 2025-04-07 15:53:40 -07:00
Jack Jackson
d379cafc7b Remove Drone (in favour of Gitea) 2025-04-07 15:40:30 -07:00
Jack Jackson
b6ce1b3a24 Update miniflux to latest version 2025-04-07 15:39:55 -07:00
Imagebot
4c29e3f62e Update EDH ELO commit to "0434ec1e98b127f58d5c95f548d30fb40ec09918" 2025-04-07 06:39:53 +00:00
Imagebot
441b4c5e3c Update EDH ELO commit to "2506f4fa6dae5e09022c75aac045ec6b998a5b57" 2025-04-07 06:34:04 +00:00
Jack Jackson
4d743a87bd Latest version of edh-elo 2025-04-05 16:10:16 -07:00
Jack Jackson
0946871712 Use newer version of edh-elo 2025-04-05 13:48:57 -07:00
Jack Jackson
a90cc33d1c Add Plugins dir for Vault 2025-03-17 15:38:32 -07:00
Jack Jackson
d8cad832ba Switch Vault to Jsonnet definition
As a precursor to:
* Enabling Plugins
* So that I can get GitHub credentials from Vault via [this
    plugin](https://github.com/martinbaillie/vault-plugin-secrets-github)
* So that I can use [this history-syncing
    plugin](https://gitea.scubbo.org/scubbo/commit-report-sync) without
    needing to refresh tokens, including in _this_ repo.
* At which point I want to [use LetsEncrypt to provide certs for Traefik
   Ingresses](https://adamtheautomator.com/letsencrypt-with-k3s-kubernetes/#Ensuring_Seamless_Certificate_Renewals_with_a_ClusterIssuer)
* So that I can use Keycloak, which [demands an https
    scheme](https://github.com/keycloak/keycloak/issues/30977#issuecomment-2208679081).

What a deep rabbit-hole I am in! :)
2025-03-14 20:46:59 -07:00
Jack Jackson
fb7e8cd98e Migrate blog to a) -deployment repo, b) jsonnet-format definition 2025-02-26 19:35:53 -08:00
Jack Jackson
5e08c653a3 Add scubbo.org Ingress host 2025-02-21 20:20:46 -08:00
Jack Jackson
6925418684 Add notes on Jellyfin external availability 2025-02-10 09:32:32 -08:00
Jack Jackson
ddd9be2280 Explicitly request GPU 2025-02-08 16:05:33 -08:00
Jack Jackson
4710c36228 Set env variables 2025-02-08 14:33:37 -08:00
Jack Jackson
dcb62c838d Use nvidia runtimeClassName 2025-02-08 14:13:49 -08:00
Jack Jackson
cbc77be2a3 Direct-mount /dev/dri in 2025-02-08 12:35:31 -08:00
Jack Jackson
a5f24642ae Revert mounting-in card via smarter-devices 2025-02-08 12:29:41 -08:00
Jack Jackson
668e1c01bb Install openwebui 2025-02-06 21:01:30 -08:00
Jack Jackson
6dbc94cec0 Remove non-truenas Media storage 2025-02-02 22:15:26 -08:00
Jack Jackson
37704b2433 Add note on onboard graphics 2025-01-22 23:08:21 -08:00
Jack Jackson
19c0577655 Add supplementalGroups for permissions to access video card 2025-01-21 22:50:28 -08:00
Jack Jackson
1dd75693cb Mount-in render device 2025-01-21 22:47:12 -08:00
Jack Jackson
e9145df641 Mount devices with Smarter Device Management 2025-01-21 22:32:33 -08:00
Jack Jackson
807785daca Mount video card into Jellyfin container
To permit Hardware Acceleration.

See [here](https://stackoverflow.com/a/59291859),
[here](https://jellyfin.org/docs/general/administration/hardware-acceleration/),
and
[here](https://old.reddit.com/r/jellyfin/comments/i2r4h9/how_to_enable_hardware_acceleration_with_docker/).
2025-01-21 21:59:34 -08:00
Jack Jackson
60417775be Update EDH ELO commit to "c6a279a703" 2024-11-19 04:01:29 +00:00
Jack Jackson
1b617368b8 Introduce Miniflux 2024-11-13 20:39:46 -08:00
Jack Jackson
b5eee54ac3 Remove unused config PVC 2024-11-01 12:02:07 -07:00
Jack Jackson
46d6ee105f Remove hardcoded nodename that was overriding nodeSelector set in previous commit 2024-11-01 11:54:47 -07:00
Jack Jackson
42e40bf23e NodeSelect to Epsilon while rasnu1 is misbehaving 2024-10-29 16:39:01 -07:00
Jack Jackson
73accc5b7b Update EDH ELO commit to "8b5d96e76f" 2024-09-05 05:01:48 +00:00
Jack Jackson
ab2d7a3c30 Update EDH ELO commit to "5d2183bbf0" 2024-08-23 17:02:11 +00:00
Jack Jackson
78302757cd Update EDH ELO commit to "f120336f1d" 2024-08-23 16:39:22 +00:00
Jack Jackson
f5cbefc00e Enable Readarr 2024-08-21 20:02:17 -07:00
Jack Jackson
843252d917 Update EDH ELO commit to "460467bd0b" 2024-08-08 05:45:05 +00:00
Jack Jackson
0244e53970 Update EDH ELO commit to "bf7997bd1d" 2024-08-08 05:29:24 +00:00
Jack Jackson
144c55c2b1 Update EDH ELO commit to "fff79bf883" 2024-08-08 04:19:54 +00:00
Jack Jackson
489ad4b726 Update EDH ELO commit to "c874de3a4c" 2024-08-08 04:18:18 +00:00
Jack Jackson
34e6f91ba0 Update EDH ELO commit to "b01831b5ac" 2024-08-08 04:16:28 +00:00
Jack Jackson
a64a420e94 Update EDH ELO commit to "2ba4cf9a30" 2024-08-01 03:56:14 +00:00
Jack Jackson
facac2a99f Update EDH ELO commit to "5d71f422f3" 2024-08-01 03:53:38 +00:00
Jack Jackson
2fd086fa34 Update EDH ELO commit to "fff4fa6c57" 2024-07-30 17:02:54 +00:00
Jack Jackson
0671898319 Update EDH ELO commit to "3b1c3d7eb3" 2024-07-30 03:18:56 +00:00
Jack Jackson
492bf8e10d Update EDH ELO commit to "105dc438bd" 2024-07-29 02:21:31 +00:00
Jack Jackson
f71cbf8c50 Update EDH ELO commit to "0e69452403" 2024-07-28 04:28:57 +00:00
Jack Jackson
8ee06464a7 Update EDH ELO commit to "2fb5a291e5" 2024-07-28 03:09:50 +00:00
Jack Jackson
cb8d11ec1a Revert "Revert "Update to latest version of Sonarr""
This reverts commit d204131de34c2d03a2b2e207e4b779548464336f.
2024-07-15 22:35:48 -07:00
Jack Jackson
d204131de3 Revert "Update to latest version of Sonarr"
This reverts commit 378046ac62767d3ec7f1d2411c65c4ce6f189ff5.
2024-07-15 22:09:17 -07:00
Jack Jackson
378046ac62 Update to latest version of Sonarr 2024-07-15 21:34:46 -07:00
Jack Jackson
6004858c85 Update EDH ELO commit to "1c9aa30721" 2024-07-11 04:23:47 +00:00
Jack Jackson
19089def9b Update EDH ELO commit to "a4b17daca0" 2024-07-11 04:02:45 +00:00
Jack Jackson
1ae48be3ea Testing credentials 2024-07-09 01:01:09 -07:00
Jack Jackson
46c20001ca Standardize on existing secret for postgres auth 2024-07-01 21:37:02 -07:00
Jack Jackson
322db77194 Update EDH ELO tag 2024-07-01 20:59:54 -07:00
Jack Jackson
7e6c394929 Update edh-elo commit 2024-06-27 10:45:48 -07:00
Jack Jackson
be10ebe8a4 Update edh-elo commit 2024-06-27 10:15:48 -07:00
Jack Jackson
93dd5c424f Update edh-elo commit 2024-06-27 10:14:46 -07:00
Jack Jackson
e879b0ba05 Use legal database-user name 2024-06-27 09:43:05 -07:00
Jack Jackson
89511e3747 Update edh-elo commit 2024-06-27 09:35:34 -07:00
Jack Jackson
864b8189e3 Update git commit 2024-06-26 19:17:25 -07:00
Jack Jackson
2ff2c4224c Deploy edh-elo 2024-06-24 21:11:16 -07:00
Jack Jackson
8d70bbe78b Enable Drone Kubernetes Secrets Chart
Interestingly, the existence of this chart somewhat contradicts the
[docs](https://docs.drone.io/runner/extensions/kube/), which suggest you
should "_\[d\]eploy the secret extension in the same Pod as your
Kubernetes runner_". Though the interaction appears to be via an HTTP
call, so that doesn't seem like it would be an issue.
2024-06-05 15:05:53 -07:00
Jack Jackson
4cc1c531e2 Provide a k8s secret containing Mastodon Access Token
To auto-post on publishing a new blog post.
2024-06-04 17:03:09 -07:00
Jack Jackson
2d1fd9ef0c Specify MaxTTL for Tokens from BaseAppInfra
I encountered an issue where tokens were being created without TTLs and
thus clogging up the storage of the system. I haven't found a smoking
gun pointing to this being the cause, but I do suspect that it's
_something_ to do with the Vault/Crossplane integration, since a) that's
really my only use-case for Vault, and b) there's the string
`vault-provider` in the display_name below:

```
$ vault token lookup -accessor zcRF0YAUQtP7vrbZHTW5y322
Key                 Value
---                 -----
accessor            zcRF0YAUQtP7vrbZHTW5y322
creation_time       1715766311
creation_ttl        0s
display_name        token-vault-provider-token
entity_id           n/a
expire_time         <nil>
explicit_max_ttl    0s
id                  n/a
issue_time          2024-05-15T09:45:11.720412011Z
meta                <nil>
num_uses            0
orphan              false
path                auth/token/create
policies            [root]
renewable           false
ttl                 0s
type                service
```
2024-06-04 15:43:42 -07:00
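The eventual fix is to give these tokens an explicit TTL. Purely as an illustration (this is not the actual change in this commit, and the durations are assumptions), the token auth mount itself can be capped so that newly-created tokens eventually expire:

```bash
# Hedged sketch - mount path and durations are assumptions, not this commit's change
vault auth tune -default-lease-ttl=768h -max-lease-ttl=768h token/
```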
Jack Jackson
496c2f13b0 Expand (and explicitly specify storageclass of) Vault storage
Due to a currently-unknown fault, my Vault storage filled up (I
_suspect_ it's due to not setting a default TTL on Tokens, and so they
all hung around. Surprised they were created at such a rate, but w/e). I
wasn't able to directly expand the volume - and, anyway, it's on
Longhorn which is a Storage Provisioner that I'm moving away from - so
the solution was to:
* Create a temporary PV (on FreeNas, though that doesn't actually
  matter) and copy data onto it (by mounting both it and the existing
  Volume onto a debug pod, using a variant of [this
  script](https://blog.scubbo.org/posts/pvc-debug-pod/))
* Delete the existing PVC and PV
* Make this update, and sync
  * A new _empty_ PV will be created (and probably populated with some
    stuff)
* Scale-down the StatefulSet, do the double-mount-to-debug-pod trick
  again, and copy data from the temporary PV onto this one
* Delete Debug Pod, re-scale-up StatefulSet...and hope that there's
  nothing stateful in the data which means that copying it from one
  volume to another makes it invalid (e.g. if encrypted with an
  encryption key which would change on a new spin-up of the pod - which
  _seems_ unlikely, but 🤷)
2024-06-04 14:07:45 -07:00
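A minimal sketch of the double-mount debug pod used for the copy steps above (pod, namespace, and claim names are assumptions; the real claim names come from the Vault StatefulSet):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-debug
  namespace: vault
spec:
  containers:
    - name: debug
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: source
          mountPath: /source
        - name: target
          mountPath: /target
  volumes:
    - name: source
      persistentVolumeClaim:
        claimName: data-vault-0        # existing (full) volume - name assumed
    - name: target
      persistentVolumeClaim:
        claimName: vault-data-temp     # temporary FreeNAS-backed volume - name assumed
```

The copy itself is then just `kubectl exec -n vault pvc-debug -- cp -a /source/. /target/`.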
Jack Jackson
e798564692 First steps in Crossplane-Vault integration 2024-05-08 23:45:39 -07:00
Jack Jackson
bcb2bd28d7 Enable sabnzbd 2024-05-08 07:35:36 -07:00
Jack Jackson
4c82c014f8 Add vault-sourced secret in Drone setup 2024-04-21 14:02:43 -07:00
Jack Jackson
1926560274 Jsonnify Drone 2024-04-21 13:08:41 -07:00
Jack Jackson
b856fd2bc5 Set up Vault Secrets Operator
Prerequisite that Vault is configured with authentication per
https://developer.hashicorp.com/vault/tutorials/kubernetes/vault-secrets-operator#configure-vault

The plan would eventually be to manage Vault objects via
[Crossplane](https://www.crossplane.io/).
2024-04-21 12:46:01 -07:00
Jack Jackson
3140ea8b0d Correctly represent env variable 2024-04-20 13:45:13 -07:00
Jack Jackson
185af7901a Remove initContainer backup approach 2024-04-20 13:21:41 -07:00
Jack Jackson
b4c9947e4c Try including date in backup name 2024-04-19 21:32:13 -07:00
Jack Jackson
6d338157fa Put Keycloak backup volumes in right namespace 2024-04-19 21:01:26 -07:00
Jack Jackson
abc71fd7f1 Set securityContext to permit truenas file operations 2024-04-10 17:49:16 -07:00
Jack Jackson
40427c0426 Add Keycloak Backup job 2024-04-06 17:33:07 -07:00
Jack Jackson
a98d915658 Add backup as crontab 2024-04-06 14:53:42 -07:00
Jack Jackson
68f83a23b3 Install keycloak 2024-04-06 13:20:14 -07:00
Jack Jackson
de944bac48 Remove Grafana Oncall 2024-03-12 19:10:13 -07:00
Jack Jackson
b107f1e839 Dehelmify, and install Crossplane via Jsonnet
Need to remove `Chart.yaml` so that Argo doesn't try to treat
`app-of-apps/` as a Helm application (because that would stop it from
using Jsonnet parsing).
2024-03-12 18:49:06 -07:00
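For context, once `Chart.yaml` is gone the app-of-apps Application falls back to directory-source parsing, which is what lets Argo render the Jsonnet files. A hedged sketch of what that Application might look like (the name and the explicit `directory` block are assumptions - Argo can also auto-detect the source type):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argo
spec:
  project: default
  source:
    repoURL: https://gitea.scubbo.org/scubbo/helm-charts.git
    targetRevision: HEAD
    path: app-of-apps
    directory:
      jsonnet: {}   # render *.jsonnet files in the directory
  destination:
    server: https://kubernetes.default.svc
    namespace: argo
  syncPolicy:
    automated:
      prune: true
```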
Jack Jackson
d1e000dc10 Avoid Drone-runner on the cursed node 2024-02-19 13:42:35 -08:00
Jack Jackson
7c3364fef9 Addressing Sonarr DB Migration error 2024-02-17 18:44:14 -08:00
Jack Jackson
3dfc818f5f First attempt at installing OpenProject 2024-01-14 20:00:56 -08:00
Jack Jackson
a3b154adf8 Mount Truenas directly at /data 2023-12-08 21:45:58 -08:00
Jack Jackson
5548684b7a Create admin Drone user 2023-12-01 22:56:40 -08:00
Jack Jackson
657942071a Fully migrate to TrueNas for Nzbget 2023-11-28 19:17:16 -08:00
Jack Jackson
feee5d6979 Add Blog application 2023-11-24 14:28:36 -08:00
Jack Jackson
ab1bc63f84 Re-enable Vault
Note that I was wrong before - there was no need to disable while
setting up TrueNAS, because Vault suggests using integrated storage.
2023-10-30 22:13:46 -07:00
Jack Jackson
7eb215f7fa Remove Longhorn Media volumes now fully migrated 2023-10-04 10:00:57 -07:00
Jack Jackson
69b15c1ad6 Temporarily mount TrueNAS to Jellyfin as Read-Write to transfer data from Longhorn volumes 2023-10-01 19:40:59 -07:00
Jack Jackson
a3e807c406 Mount TrueNAS volume for Usenet Downloads 2023-09-30 16:53:15 -07:00
Jack Jackson
499d3acaf5 Mount TrueNas volume on all appropriate containers 2023-09-30 14:50:15 -07:00
Jack Jackson
b183c2bf6b Reintroduce TrueNAS storage after reconfiguration 2023-09-23 20:13:43 -07:00
Jack Jackson
58bc49412e Remove TrueNAS volume from Jellyfin while reconfiguring 2023-09-23 19:27:59 -07:00
Jack Jackson
0bc8d9b219 Temporarily delete Vault app while I reconfigure TrueNAS 2023-09-23 19:13:59 -07:00
Jack Jackson
7373ba6346 Introduce TrueNas volume for Jellyfin 2023-09-22 22:39:43 -07:00
Jack Jackson
9689cbc52e Enable Ingress 2023-09-20 21:38:34 -07:00
Jack Jackson
1dd97e7338 Deploy Vault 2023-09-20 20:53:44 -07:00
Jack Jackson
6f73b57afe Add Affinity in Jellyfin Metrics 2023-08-30 20:08:48 -07:00
Jack Jackson
98ae54614b Bind Drone Runner to arm64 node 2023-08-30 19:47:57 -07:00
Jack Jackson
311c15b4a8 Update Oncall versions 2023-08-26 18:34:10 -07:00
Jack Jackson
22bc25bc1d Update to latest Grafana version 2023-08-26 17:41:26 -07:00
Jack Jackson
f73941fb8c Add Private Apps 2023-08-05 18:54:57 -07:00
Jack Jackson
a0957a85ea Re-add Oncall, having removed Retained PersistentVolumes 2023-07-27 17:31:35 -07:00
Jack Jackson
f22892e482 Remove Oncall - still need postgres password passthrough 2023-07-26 21:48:38 -07:00
Jack Jackson
f2cd112341 Re-enable Grafana Oncall
Setting redis `nodeSelector` as per [Bitnami
chart](https://github.com/bitnami/charts/blob/main/bitnami/redis/values.yaml)
2023-07-26 21:08:38 -07:00
Jack Jackson
9fdb389814 Disable Grafana Oncall 2023-07-26 19:53:08 -07:00
Jack Jackson
ed039061bd Try Grafana Oncall on x86 2023-07-26 19:21:17 -07:00
Jack Jackson
b13c2a3c50 Fully remove volume 4 2023-07-26 18:52:50 -07:00
Jack Jackson
8d2b346490 Unmount large volume - just wait for NAS 2023-07-26 18:19:48 -07:00
Jack Jackson
9c84e93e65 Create larger volume now rasnu2 is available 2023-07-26 14:20:01 -07:00
Jack Jackson
dd63fb1d2c Longhorn TV volume 3 2023-07-26 00:02:14 -07:00
Jack Jackson
766998c026 Second Longhorn TV Volume 2023-07-25 14:34:51 -07:00
Jack Jackson
a01a1a68f4 Revert "Temporarily mount base media dir as ReadWriteMany to copy data out to Longhorn volume"
This reverts commit 2d622ee971c9bad617b3f9c55d48254c81219b27.
2023-07-25 09:25:02 -07:00
Jack Jackson
2d622ee971 Temporarily mount base media dir as ReadWriteMany to copy data out to Longhorn volume 2023-07-25 09:14:28 -07:00
Jack Jackson
56ef7ddcc4 Re-add Longhorn for (hopefully) last time 2023-07-25 07:53:13 -07:00
Jack Jackson
d9d4031ab7 Remove Volume so Longhorn StorageClass can be recreated with Retain 2023-07-24 22:16:55 -07:00
Jack Jackson
3b58d942ae Recreate volume 2023-07-24 22:03:14 -07:00
Jack Jackson
6ab568964c Undefine PVC 2023-07-24 21:54:55 -07:00
Jack Jackson
f693819cb6 Detach Volume 2023-07-24 21:52:30 -07:00
Jack Jackson
82f7405d4e Re-attach volume 2023-07-24 21:49:15 -07:00
Jack Jackson
db60c3ba9c Remove PVC 2023-07-24 21:44:35 -07:00
Jack Jackson
1f46cad533 Reintroduce PVC 2023-07-24 21:35:29 -07:00
Jack Jackson
bdf2c5dc65 Remove PVC to downsize 2023-07-24 21:15:41 -07:00
Jack Jackson
4c257cdf15 Unbind (but do not undefine) PVC 2023-07-24 21:13:39 -07:00
Jack Jackson
6cd7779aae Reintroduce larger JellyfinTV Longhorn volume 2023-07-24 21:09:07 -07:00
Jack Jackson
e9c311d837 Remove Longhorn TV Volume to resize-up 2023-07-24 21:05:02 -07:00
Jack Jackson
2b1e5e7f5b Remove Ceph/Rook charts 2023-07-24 17:42:00 -07:00
Jack Jackson
808a64b3d4 Reintroduce Longhorn volume 2023-07-24 16:36:51 -07:00
Jack Jackson
3e3dddeaec Remove Ceph PVC 2023-07-24 16:33:49 -07:00
Jack Jackson
67cf86bf60 Recreate Ceph Volume 2023-07-23 18:29:46 -07:00
Jack Jackson
670f32b424 Disable LonghornClaim so it can be deleted 2023-07-23 16:15:54 -07:00
Jack Jackson
36c5c3a41d Try using Ceph 2023-07-23 15:43:52 -07:00
Jack Jackson
91d7b2cc72 Disable values.yaml (PVC-based means look _elsewhere_ for storage, not to provide _via_ storage) 2023-07-23 15:30:42 -07:00
Jack Jackson
3b10ad2abd Create Ceph cluster 2023-07-23 14:21:03 -07:00
Jack Jackson
0534e973de Install Rook to expected namespace 2023-07-23 13:56:06 -07:00
Jack Jackson
f7de513633 Specify version of rook-ceph chart 2023-07-23 13:48:13 -07:00
Jack Jackson
324479a769 Deploy Ceph Operator 2023-07-23 13:36:33 -07:00
Jack Jackson
6c4f138bac Attach smaller volume 2023-07-23 12:58:17 -07:00
Jack Jackson
4fab765f0b Create PVC with smaller size 2023-07-23 12:57:14 -07:00
Jack Jackson
9a808e31ea Remove (delete?) LonghornPV (so it can be downsized) 2023-07-23 12:54:48 -07:00
Jack Jackson
be0dc53e2b Cannot downsize a volume, and cannot delete an attached volume 2023-07-23 12:49:14 -07:00
Jack Jackson
9c4fdc923d Reduce size of TV data volume to make it fit 2023-07-22 19:27:22 -07:00
Jack Jackson
5ba0766dad Longhorn volumes can only be ReadWriteOnce 2023-07-22 18:31:05 -07:00
Jack Jackson
780114f87e Add Longhorn TV volume 2023-07-20 21:46:18 -07:00
Jack Jackson
84d5759cda Prometheus and Grafana tolerate x86 2023-07-18 11:02:07 -07:00
Jack Jackson
ceba50d6f7 Revert "Tolerate x86 architecture in drone-runner"
This reverts commit c7a24e08472578c9940fead531020de1ac9c1b8a.
2023-07-17 19:58:59 -07:00
Jack Jackson
c7a24e0847 Tolerate x86 architecture in drone-runner
This will cause some builds to fail (or, possibly, to build unusable
images), until their builds are migrated to use
https://github.com/thegeeklab/drone-docker-buildx
2023-07-17 19:47:08 -07:00
Jack Jackson
57b22e6cdb Select Jellyfin to Epsilon 2023-07-16 22:23:44 -07:00
Jack Jackson
8a65baafa8 Commenting on dual-image build strategy (do not have one yet!) 2023-07-16 21:55:17 -07:00
Jack Jackson
9e28dd26de Disable Grafana Oncall 2023-07-16 21:43:19 -07:00
Jack Jackson
d04f1bc8f5 Drone tolerations 2023-07-16 21:02:47 -07:00
Jack Jackson
30dccb06fa More docs on bootstrapping 2023-07-16 20:48:13 -07:00
Jack Jackson
c06acb6b74 Ombi tolerate x86 2023-07-16 20:48:00 -07:00
Jack Jackson
86b2b339a8 Add Drone 2023-07-11 19:45:42 -07:00
Jack Jackson
1f455c9e34 Add Grafana-oncall 2023-06-28 20:11:56 -07:00
Jack Jackson
a2d2e9cdc4 Add Ombi 2023-06-28 11:58:24 -07:00
Jack Jackson
e0536fd808 Add ProtonVPN 2023-06-27 20:44:22 -07:00
Jack Jackson
b9325384f1 Grafana Persistence ReadWriteMany
https://stackoverflow.com/questions/70945223/kubernetes-multi-attach-error-for-volume-pvc-volume-is-already-exclusively-att
2023-06-26 22:44:53 -07:00
Jack Jackson
7041bc3757 Move Grafana values to block-file format 2023-06-26 22:30:40 -07:00
Jack Jackson
a66af40b62 Add Prometheus as Datasource to Grafana 2023-06-23 20:13:56 -07:00
Jack Jackson
dec37388b8 Add Kubernetes monitoring 2023-06-23 19:49:51 -07:00
Jack Jackson
e42bda91b0 Enable Grafana sidecar
Precursor to https://github.com/dotdc/grafana-dashboards-kubernetes
2023-06-22 20:26:49 -07:00
Jack Jackson
b40081eec7 Add deletion finalizers 2023-06-21 21:41:31 -07:00
Jack Jackson
5e37beb9fb Disable grafana 2023-06-21 21:27:54 -07:00
Jack Jackson
160a204a28 App-of-apps 2023-06-21 21:07:09 -07:00
67 changed files with 3395 additions and 80 deletions


@ -1,33 +0,0 @@
kind: pipeline
name: publish
type: docker
platform:
os: linux
arch: arm64
trigger:
branch:
- main
- testing-ci
steps:
- name: "Upload New Versions"
image: alpine
commands:
- ./build-tools/upload-new-versions.sh
environment:
GITEA_PASSWORD:
from_secret: gitea_password
ARGO_TOKEN:
from_secret: argo_token
- name: "Install kubectl"
image: bitnami/kubectl
environment:
KUBE_TOKEN:
from_secret: kube_token
commands:
- kubectl apply -s rassigma.avril:6443 --token $KUBE_TOKEN -f application-manifests.yaml
image_pull_secrets:
- dockerconfigjson

NOTES.md (new file, 86 lines)

@ -0,0 +1,86 @@
# Device exposure
For [Jellyfin Hardware Acceleration](https://jellyfin.org/docs/general/administration/hardware-acceleration/), following instructions [here](https://github.com/kubernetes/kubernetes/issues/7890#issuecomment-766088805) (originally from [here](https://old.reddit.com/r/jellyfin/comments/i2r4h9/how_to_enable_hardware_acceleration_with_docker/)), I used [smarter-device-manager](https://gitlab.com/arm-research/smarter/smarter-device-manager) to expose devices from the host node (`epsilon`) into containers.
This was installed via a manual `kubectl apply`, though it should be migrated into GitOps-managed definitions. Note that I had to make some alterations to get the ConfigMap to be read.
```yaml
# smarter-management-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: smarter-device-manager
namespace: smarter-device-management
data:
conf.yaml: |
- devicematch: ^fb0$
nummaxdevices: 2
# smarter-management-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: smarter-device-management
namespace: smarter-device-management
spec:
# Mark this pod as a critical add-on; when enabled, the critical add-on
# scheduler reserves resources for critical add-on pods so that they can
# be rescheduled after a failure.
# See https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/
priorityClassName: "system-node-critical"
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
hostname: smarter-device-management
nodeName: epsilon
containers:
- name: smarter-device-manager
image: registry.gitlab.com/arm-research/smarter/smarter-device-manager:v1.20.11
imagePullPolicy: IfNotPresent
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: ["ALL"]
resources:
limits:
cpu: 100m
memory: 10Mi
requests:
cpu: 10m
memory: 10Mi
volumeMounts:
- name: device-plugin
mountPath: /var/lib/kubelet/device-plugins
- name: dev-dir
mountPath: /dev
- name: sys-dir
mountPath: /sys
- name: config
mountPath: /root/config
volumes:
- name: device-plugin
hostPath:
path: /var/lib/kubelet/device-plugins
- name: dev-dir
hostPath:
path: /dev
- name: sys-dir
hostPath:
path: /sys
- name: config
configMap:
name: smarter-device-manager
terminationGracePeriodSeconds: 30
```
Re: the `device-plugin` path - that apparently changed (from `/var/lib/rancher/k3s/agent/kubelet/device-plugins`, which was the provided value) [some time ago](https://github.com/k3s-io/k3s/issues/2664#issuecomment-742013918).
This also required the [Device Plugin Feature Gate](https://github.com/k3s-io/k3s/discussions/4596) to be enabled.
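A hedged sketch of how that might be enabled on a k3s node (the exact mechanism depends on the k3s/Kubernetes version - see the linked discussion):

```yaml
# /etc/rancher/k3s/config.yaml - assumption: config-file form rather than CLI flags
kubelet-arg:
  - "feature-gates=DevicePlugins=true"
```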
Further useful links:
* [Reddit thread](https://old.reddit.com/r/jellyfin/comments/y7i3uc/trouble_with_quicksync_trancoding_on_new_11th_gen/)
* [Enabling iGPU](https://community.hetzner.com/tutorials/howto-enable-igpu)
---
I spent a couple of hours going down the rabbit-hole above before noticing that my server doesn't have an integrated graphics card, so that was all for naught :) Luckily, that is a problem that can be entirely solved with money (those are rare!) - a suitable card should arrive over the weekend and the hacking can continue.


@ -14,18 +14,21 @@ $ curl --user <username>:<password> -X POST --upload-file ./<package>.tgz https:
### Installation
```bash
$ helm repo add --username <username> --password <password> <repo-alias> https://hostname.of.gitea/api/packages/<user>/helm
$ helm install <release-name> <repo-alias>/<name>
```
Bootstrap with `kubectl apply -f main-manifest.yaml`
and/or
TODO: [App-of-apps](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) to manage whole-cluster configuration in a more programmatic way.
```bash
$ kubectl apply -f application-manifests.yaml
```
## Initial bootstrap
TODO: [App-of-apps](https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/#app-of-apps-pattern) to manage whole-cluster configuration.
Note that you need to have manually connected the source Repository _in_ ArgoCD before installing the App-of-apps.
TODO - when we have a better secrets management system, export Gitea user password so that it can be used by ArgoCD to initialize that repository directly (https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#repositories)
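For reference, the declarative form that TODO points at is just a labelled Secret; a sketch under the assumption that this cluster's ArgoCD lives in the `argo` namespace (credentials are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: gitea-helm-charts-repo
  namespace: argo
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://gitea.scubbo.org/scubbo/helm-charts.git
  username: gitea-user     # placeholder
  password: gitea-password # placeholder - the value to export from Gitea
```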
## Jsonnet
As of 2024, I started using Jsonnet to define apps in a less repetitious way.
To check the output before submitting, use `jsonnet -J app-of-apps app-of-apps/<filename>.jsonnet`
## Other links


@ -0,0 +1,162 @@
{
helmApplication(
name,
sourceRepoUrl,
sourceChart,
sourceTargetRevision,
namespace="",
helmValues={}) ::
{
apiVersion: "argoproj.io/v1alpha1",
kind: "Application",
metadata: {
name: name,
namespace: "argo",
finalizers: ["resources-finalizer.argocd.argoproj.io"]
},
spec: {
project: "default",
source: {
chart: sourceChart,
repoURL: sourceRepoUrl,
targetRevision: sourceTargetRevision,
[if helmValues != {} then "helm"]: {
valuesObject: helmValues
}
},
destination: {
server: "https://kubernetes.default.svc",
namespace: if namespace == "" then name else namespace
},
syncPolicy: {
automated: {
prune: true
},
syncOptions: ["CreateNamespace=true"]
}
}
},
localApplication(
name,
path="",
namespace="",
nonHelmApp=false) ::
{
apiVersion: "argoproj.io/v1alpha1",
kind: "Application",
metadata: {
name: name,
namespace: "argo",
finalizers: ["resources-finalizer.argocd.argoproj.io"]
},
spec: {
project: "default",
source: {
repoURL: "https://gitea.scubbo.org/scubbo/helm-charts.git",
targetRevision: "HEAD",
path: if path == "" then std.join('/', ['charts', name]) else path,
// I _think_ every locally-defined chart is going to have a `values.yaml`, but we can make this
// parameterized if desired
[if nonHelmApp != true then "helm"]: {
valueFiles: ['values.yaml']
}
},
destination: {
server: 'https://kubernetes.default.svc',
namespace: if namespace == "" then name else namespace
},
syncPolicy: {
automated: {
prune: true
},
syncOptions: ["CreateNamespace=true"]
}
}
},
kustomizeApplication(
name,
repoUrl="",
namespace="",
path="") ::
{
apiVersion: "argoproj.io/v1alpha1",
kind: "Application",
metadata: {
name: name,
namespace: "argo",
finalizers: ["resources-finalizer.argocd.argoproj.io"]
},
spec: {
project: "default",
source: {
repoURL: if repoUrl=="" then std.join('', ['https://gitea.scubbo.org/scubbo/', name, '-deployment']) else repoUrl,
targetRevision: "HEAD",
path: if path == "" then "." else path
},
destination: {
server: 'https://kubernetes.default.svc',
namespace: if namespace == "" then name else namespace
},
syncPolicy: {
automated: {
prune: true
},
syncOptions: ["CreateNamespace=true"]
}
}
},
# Sometimes we want to use an existing remote Helm chart
# but add some locally-defined resources into the Application
helmRemotePlusLocalApplication(
name,
sourceRepoUrl,
sourceChart,
sourceTargetRevision,
pathToLocal="",
namespace="",
helmValues={},
nonHelmApp=false) ::
{
apiVersion: "argoproj.io/v1alpha1",
kind: "Application",
metadata: {
name: name,
namespace: "argo",
finalizers: ["resources-finalizer.argocd.argoproj.io"]
},
spec: {
project: "default",
sources: [
{
chart: sourceChart,
repoURL: sourceRepoUrl,
targetRevision: sourceTargetRevision,
[if helmValues != {} then "helm"]: {
valuesObject: helmValues
}
},
{
repoURL: "https://gitea.scubbo.org/scubbo/helm-charts.git",
targetRevision: "HEAD",
path: if pathToLocal == "" then std.join('/', ['charts', name]) else pathToLocal,
// I _think_ every locally-defined chart is going to have a `values.yaml`, but we can make this
// parameterized if desired
[if nonHelmApp != true then "helm"]: {
valueFiles: ['values.yaml']
}
}
],
destination: {
server: "https://kubernetes.default.svc",
namespace: if namespace == "" then name else namespace
},
syncPolicy: {
automated: {
prune: true
},
syncOptions: ["CreateNamespace=true"]
}
}
}
}


@ -3,6 +3,8 @@ kind: Application
metadata:
name: cert-manager
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
@ -30,6 +32,8 @@ kind: Application
metadata:
name: prom-crds
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
@ -37,6 +41,12 @@ spec:
repoURL: https://github.com/prometheus-community/helm-charts.git
path: charts/kube-prometheus-stack/crds/
targetRevision: kube-prometheus-stack-45.7.1
helm:
values: |
tolerations:
- key: architecture
operator: Equal
value: x86
directory:
recurse: true
@ -57,6 +67,8 @@ kind: Application
metadata:
name: prometheus-community
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
@ -109,6 +121,8 @@ kind: Application
metadata:
name: grafana
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
@ -118,17 +132,39 @@ spec:
targetRevision: "6.49.0"
helm:
parameters:
- name: image.tag
value: "9.3.2"
- name: ingress.enabled
value: true
- name: ingress.hosts[0]
value: grafana.avril
- name: persistence.enabled
value: true
- name: persistence.storageClassName
value: longhorn
values: |
image:
tag: "10.1.0"
tolerations:
- key: architecture
operator: Equal
value: x86
ingress:
enabled: true
hosts:
- grafana.avril
persistence:
enabled: true
storageClassName: longhorn
accessModes:
- ReadWriteMany
sidecar:
dashboards:
enabled: true
defaultFolderName: General
label: grafana_dashboard
labelValue: "1"
folderAnnotation: grafana_folder
searchNamespace: ALL
provider:
foldersFromFilesStructure: "true"
datasources:
datasources.yaml:
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
url: http://prometheus.avril
destination:
server: "https://kubernetes.default.svc"
@ -140,6 +176,37 @@ spec:
syncOptions:
- CreateNamespace=true
---
# https://github.com/dotdc/grafana-dashboards-kubernetes/blob/master/argocd-app.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: grafana-dashboards-kubernetes
namespace: argo
labels:
app.kubernetes.io/name: grafana-dashboards-kubernetes
app.kubernetes.io/version: HEAD
app.kubernetes.io/managed-by: argocd
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default # You may need to change this!
source:
path: ./
repoURL: https://github.com/dotdc/grafana-dashboards-kubernetes
targetRevision: HEAD
destination:
server: https://kubernetes.default.svc
namespace: monitoring
syncPolicy:
## https://argo-cd.readthedocs.io/en/stable/user-guide/auto_sync
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
- Replace=true
---
# TODO - use Jsonnet or similar to automate building this from all the directories
# (and pull out the common config)
apiVersion: argoproj.io/v1alpha1
@ -147,6 +214,8 @@ kind: Application
metadata:
name: jellyfin
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
@ -168,3 +237,86 @@ spec:
prune: true
syncOptions:
- CreateNamespace=true
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: proton-vpn
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://gitea.scubbo.org/scubbo/helm-charts.git
targetRevision: HEAD
path: charts/proton-vpn
helm:
valueFiles:
- values.yaml
destination:
server: "https://kubernetes.default.svc"
namespace: proton-vpn
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: ombi
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://gitea.scubbo.org/scubbo/helm-charts.git
targetRevision: HEAD
path: charts/ombi
helm:
valueFiles:
- values.yaml
destination:
server: "https://kubernetes.default.svc"
namespace: ombi
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
---
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: jackjack-app-of-apps-private
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://gitea.scubbo.org/scubbo/private-apps.git
targetRevision: HEAD
path: app-of-apps
destination:
server: "https://kubernetes.default.svc"
namespace: default
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true

app-of-apps/blog.jsonnet (new file, 5 lines)

@ -0,0 +1,5 @@
local appDef = import './app-definitions.libsonnet';
[
appDef.kustomizeApplication(name="blog")
]


@ -0,0 +1,53 @@
// https://docs.crossplane.io/v1.15/software/install/#installed-deployments
local appDef = import './app-definitions.libsonnet';
// Installation of the Vault Provider is left as a manual step, since it relies on secret creation:
// https://github.com/upbound/provider-vault
//
// Also required creating a role to bind to the ServiceAccount:
//
// apiVersion: rbac.authorization.k8s.io/v1
// kind: ClusterRoleBinding
// metadata:
// name: vault-provider-role-binding
// namespace: crossplane-system
// roleRef:
// apiGroup: rbac.authorization.k8s.io
// kind: ClusterRole
// name: vault-provider-role
// subjects:
// - kind: ServiceAccount
// name: provider-vault-b61923ede364
// namespace: crossplane-system
// ---
// apiVersion: rbac.authorization.k8s.io/v1
// kind: ClusterRole
// metadata:
// name: vault-provider-role
// namespace: crossplane-system
// rules:
// - apiGroups:
// - identity.vault.upbound.io
// resources:
// - mfaoktas
// - groupmembergroupidsidses
// - groupmemberentityidsidses
// verbs:
// - get
// - list
// - watch
// - apiGroups:
// - mfa.vault.upbound.io
// resources:
// - oktas
// verbs:
// - get
// - list
// - watch
appDef.helmApplication(
name="crossplane",
sourceRepoUrl="https://charts.crossplane.io/stable",
sourceChart="crossplane",
sourceTargetRevision="1.15.0",
namespace="crossplane-system"
)

app-of-apps/drone.jsonnet (new file, 65 lines)

@ -0,0 +1,65 @@
local appDef = import './app-definitions.libsonnet';
[
appDef.localApplication(name="drone"),
// TODO - maybe extract this, too?
{
apiVersion: "secrets.hashicorp.com/v1beta1",
kind: "VaultAuth",
metadata: {
name: "static-auth",
namespace: "drone"
},
spec: {
method: "kubernetes",
mount: "kubernetes",
kubernetes: {
role: "vault-secrets-operator",
serviceAccount: "default",
audiences: ["vault"]
}
}
},
// Note that currently this secret is created manually and statically. It'd be really cool for cold-start setup if OAuth
// App creation could be triggered at Gitea startup, and a secret automatically created!
{
apiVersion: "secrets.hashicorp.com/v1beta1",
kind: "VaultStaticSecret",
metadata: {
name: "gitea-oauth-creds",
namespace: "drone"
},
spec: {
type: "kv-v2",
mount: "shared-secrets",
path: "gitea/oauth-creds",
destination: {
name: "gitea-oauth-creds",
create: true
},
refreshAfter: "30s",
vaultAuthRef: "static-auth"
}
},
{
apiVersion: "secrets.hashicorp.com/v1beta1",
kind: "VaultStaticSecret",
metadata: {
name: "mastodon-creds",
namespace: "drone"
},
spec: {
type: "kv-v2",
mount: "shared-secrets",
path: "mastodon/creds",
destination: {
name: "mastodon-creds",
create: true
},
refreshAfter: "30s",
vaultAuthRef: "static-auth"
}
}
]


@ -0,0 +1,5 @@
local appDef = import './app-definitions.libsonnet';
[
appDef.localApplication(name="edh-elo")
]


@ -0,0 +1,159 @@
apiVersion: batch/v1
kind: CronJob
metadata:
name: keycloak-backup
namespace: keycloak
spec:
# Arbitrary non-midnight time.
schedule: "10 2 * * *"
jobTemplate:
spec:
template:
spec:
initContainers:
- args:
- -ec
- |
#!/bin/bash
cp -r /opt/bitnami/keycloak/lib/quarkus/* /quarkus
command:
- /bin/bash
image: docker.io/bitnami/keycloak:24.0.2
imagePullPolicy: IfNotPresent
name: init-quarkus-directories
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: true
runAsUser: 1001
seccompProfile:
type: RuntimeDefault
volumeMounts:
- mountPath: /tmp
name: empty-dir
subPath: tmp-dir
- mountPath: /quarkus
name: empty-dir
subPath: app-quarkus-dir
containers:
- args:
- /script/backup_keycloak.sh
env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: BITNAMI_DEBUG
value: "false"
- name: KEYCLOAK_ADMIN_PASSWORD
valueFrom:
secretKeyRef:
key: admin-password
name: keycloak
- name: KEYCLOAK_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
key: password
name: keycloak-postgresql
- name: KEYCLOAK_HTTP_RELATIVE_PATH
value: /
- name: KEYCLOAK_CACHE_TYPE
value: local
envFrom:
- configMapRef:
name: keycloak-env-vars
image: docker.io/bitnami/keycloak:24.0.2
imagePullPolicy: IfNotPresent
name: backup-container
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 7800
name: infinispan
protocol: TCP
volumeMounts:
- mountPath: /tmp
name: empty-dir
subPath: tmp-dir
- mountPath: /opt/bitnami/keycloak/conf
name: empty-dir
subPath: app-conf-dir
- mountPath: /opt/bitnami/keycloak/lib/quarkus
name: empty-dir
subPath: app-quarkus-dir
- mountPath: /backup
name: backup-dir
- mountPath: /script
name: script-volume
restartPolicy: Never
securityContext:
# https://stackoverflow.com/questions/50156124/kubernetes-nfs-persistent-volumes-permission-denied
runAsUser: 501
fsGroup: 501
volumes:
- emptyDir: {}
name: empty-dir
- name: backup-dir
persistentVolumeClaim:
claimName: backup-dir-pvc
- name: script-volume
configMap:
name: keycloak-backup-script
defaultMode: 0777
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: backup-dir-pv
namespace: keycloak
spec:
capacity:
storage: 2M
accessModes:
- ReadWriteMany
nfs:
server: galactus.avril
path: /mnt/high-resiliency/manual-nfs/backups/keycloak/
mountOptions:
- nfsvers=4.2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: backup-dir-pvc
namespace: keycloak
spec:
storageClassName: ""
volumeName: backup-dir-pv
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 2M
---
apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: "2024-04-20T04:14:45Z"
name: keycloak-backup-script
namespace: keycloak
data:
backup_keycloak.sh: |+
env
echo 'That was the env, now running export'
/opt/bitnami/keycloak/bin/kc.sh export \
--file "/backup/realm-export-$(date '+%Y-%m-%d').json" \
--realm avril \
--db postgres \
--db-url jdbc:postgresql://keycloak-postgresql-hl/bitnami_keycloak \
--db-password "$KEYCLOAK_DATABASE_PASSWORD" \
--db-username bn_keycloak
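Restoring from one of these exports is not covered here, but would presumably be the mirror image - something like the following, run in a pod with the same env and mounts (a sketch, not a tested command; the filename is a placeholder):

```bash
/opt/bitnami/keycloak/bin/kc.sh import \
  --file "/backup/realm-export-2024-04-20.json" \
  --db postgres \
  --db-url jdbc:postgresql://keycloak-postgresql-hl/bitnami_keycloak \
  --db-password "$KEYCLOAK_DATABASE_PASSWORD" \
  --db-username bn_keycloak
```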


@ -0,0 +1,24 @@
local appDef = import './app-definitions.libsonnet';
appDef.helmApplication(
name="keycloak",
sourceRepoUrl="https://charts.bitnami.com/bitnami",
sourceChart="keycloak",
sourceTargetRevision="19.3.4",
helmValues={
ingress: {
enabled: true,
hostname: "keycloak.avril"
},
image: {
tag: "24.0.2"
},
extraEnvVars: [
{
// https://github.com/keycloak/keycloak/issues/28384
name: "KEYCLOAK_CACHE_TYPE",
value: "local"
}
]
}
)


@ -0,0 +1,5 @@
local appDef = import './app-definitions.libsonnet';
[
appDef.localApplication(name="miniflux")
]


@ -0,0 +1,36 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: open-project
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
chart: openproject
repoURL: https://charts.openproject.org
targetRevision: 4.3.0
helm:
values: |
ingress:
host: openproject.avril
persistence:
storageClassName: freenas-nfs-csi
postgresql:
auth:
existingSecret: postgres-auth
global:
storageClass: freenas-iscsi-csi
destination:
server: "https://kubernetes.default.svc"
namespace: open-project
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true


@ -0,0 +1,8 @@
local appDef = import './app-definitions.libsonnet';
appDef.helmApplication(
name="openwebui",
sourceRepoUrl="https://open-webui.github.io/helm-charts",
sourceChart="open-webui",
sourceTargetRevision="5.10.0"
)


@ -0,0 +1,3 @@
local appDef = import './app-definitions.libsonnet';
appDef.localApplication(name="vault-crossplane-integration", nonHelmApp=true)


@ -0,0 +1,38 @@
// https://developer.hashicorp.com/vault/tutorials/kubernetes/vault-secrets-operator
//
// Note that this has a prerequisite that the Vault system has been configured with appropriate
// authentication first. In particular, the set of namespaces that secrets can be synced to is set
// in `bound_service_account_namespaces` in the Vault role.
local appDef = import './app-definitions.libsonnet';
appDef.helmApplication(
name="vault-secrets-operator",
sourceRepoUrl="https://helm.releases.hashicorp.com",
sourceChart="vault-secrets-operator",
sourceTargetRevision="0.5.2",
namespace="vault-secrets-operator-system",
helmValues={
defaultVaultConnection: {
enabled: true,
address: "http://vault.vault.svc.cluster.local:8200",
skipTLSVerify: false
},
controller: {
manager: {
clientCache: {
persistenceModel: "direct-encrypted",
storageEncryption: {
enabled: true,
mount: "demo-auth-mount",
keyName: "vso-client-cache",
transitMount: "demo-transit",
kubernetes: {
role: "auth-role-operator",
serviceAccount: "demo-operator"
}
}
}
}
}
}
)
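A sketch of the corresponding Vault-side role (the role name matches the `VaultAuth` in `app-of-apps/drone.jsonnet`; the policy name, TTL, and bound namespaces are assumptions):

```bash
vault write auth/kubernetes/role/vault-secrets-operator \
  bound_service_account_names=default \
  bound_service_account_namespaces=drone \
  policies=shared-secrets-read \
  ttl=24h
```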

app-of-apps/vault.jsonnet (new file, 69 lines)

@ -0,0 +1,69 @@
local appDef = import './app-definitions.libsonnet';
appDef.helmRemotePlusLocalApplication(
name="vault",
sourceRepoUrl="https://helm.releases.hashicorp.com",
sourceChart="vault",
sourceTargetRevision="0.25.0",
helmValues={
global: {
namespace: "vault"
},
ui: {
enabled: true
},
serverTelemetry: {
serviceMonitor: {
enabled: true
}
},
server: {
ingress: {
enabled: true,
ingressClassName: "traefik",
hosts: [
{
host: "vault.avril",
paths: []
}
]
},
dataStorage: {
size: "20Gi",
storageClass: "freenas-iscsi-csi"
},
standalone: {
config: |||
ui = true
listener "tcp" {
tls_disable = 1
address = "[::]:8200"
cluster_address = "[::]:8201"
}
storage "file" {
path = "/vault/data"
}
# Everything above this line is the default.
#
# Enable Plugins (originally for GitHub Secrets Plugin)
plugin_directory = "/etc/vault/plugins"
|||
},
volumes: [
{
name: "plugins",
persistentVolumeClaim: {
claimName: "vault-plugin-claim"
}
}
],
volumeMounts: [
{
name: "plugins",
mountPath: "/etc/vault/plugins"
}
]
}
}
)
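With `plugin_directory` set and the plugin binary copied into the `vault-plugin-claim` volume, registration would look roughly like this (a sketch; the sha256 derivation, binary name, and mount path are placeholders):

```bash
# Run from inside the vault-0 pod (or with the same filesystem view)
SHA256="$(sha256sum /etc/vault/plugins/vault-plugin-secrets-github | cut -d' ' -f1)"
vault plugin register -sha256="$SHA256" secret vault-plugin-secrets-github
vault secrets enable -path=github vault-plugin-secrets-github
```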

charts/drone/Chart.yaml (new file, 20 lines)

@ -0,0 +1,20 @@
apiVersion: v2
name: drone-scubbo
description: A personalized Helm chart to deploy Drone to Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
dependencies:
- name: drone
repository: https://charts.drone.io
version: "0.6.4"
alias: drone-server
- name: drone-runner-docker
repository: https://charts.drone.io
version: "0.6.1"
alias: drone-runner
- name: drone-kubernetes-secrets
repository: https://charts.drone.io
version: "0.1.4"

charts/drone/README.md (new file, 13 lines)

@ -0,0 +1,13 @@
TODO:
* Create the following in an initContainer if they don't exist:
* The Gitea OAuth application at startup
* The Prometheus user (https://cogarius.medium.com/3-3-complete-guide-to-ci-cd-pipelines-with-drone-io-on-kubernetes-drone-metrics-with-prometheus-c2668e42b03f) - probably by mounting the volume, using sqlite3 to parse out the admin password, then using that to make an API call
* Create `gitea_password` Organization Secret at init.
Ensure that Vault has a secret at `shared-secrets/gitea/oauth-creds` with keys `DRONE_GITEA_CLIENT_ID` and `DRONE_GITEA_CLIENT_SECRET` (see the application definition in `app-of-apps/drone.jsonnet` to see how the secret is injected from Vault into k8s). Remember also to create an Organization Secret named `gitea_password` for pulling.
For MTU problem diagnosis:
https://github.com/gliderlabs/docker-alpine/issues/307#issuecomment-634852419
https://liejuntao001.medium.com/fix-docker-in-docker-network-issue-in-kubernetes-cc18c229d9e5

Binary file not shown.

Binary file not shown.


@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "drone-scubbo.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "drone-scubbo.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "drone-scubbo.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "drone-scubbo.labels" -}}
helm.sh/chart: {{ include "drone-scubbo.chart" . }}
{{ include "drone-scubbo.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "drone-scubbo.selectorLabels" -}}
app.kubernetes.io/name: {{ include "drone-scubbo.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "drone-scubbo.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "drone-scubbo.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@ -0,0 +1,22 @@
{{- /*
https://itnext.io/manage-auto-generated-secrets-in-your-helm-charts-5aee48ba6918
*/}}
apiVersion: v1
kind: Secret
metadata:
name: "kubernetes-secrets-secret"
annotations:
"helm.sh/resource-policy": "keep"
type: Opaque
data:
# retrieve the secret data using lookup function and when not exists, return an empty dictionary / map as result
{{- $existing_secret := (lookup "v1" "Secret" .Release.Namespace "kubernetes-secrets-secret") | default dict }}
{{- $secretData := (get $existing_secret "data") | default dict }}
# set $secret to existing secret data or generate a random one when not exists
{{- $secret := (get $secretData "secret") | default (randAlphaNum 32 | b64enc) }}
# generate 32 chars long random string, base64 encode it and then double-quote the result string.
SECRET_KEY: {{ $secret | quote }}
# Duplicate the secret-value with a different key so that it can be mounted into the environment of a pod which
# required a different name (to the best of my knowledge, there's no way to mount a secret as an env variable but
# transform the key)
DRONE_SECRET_PLUGIN_TOKEN: {{ $secret | quote }}


@ -0,0 +1,20 @@
{{- /*
https://itnext.io/manage-auto-generated-secrets-in-your-helm-charts-5aee48ba6918
*/}}
{{- if empty .Values.manualRPCSecretName }}
apiVersion: v1
kind: Secret
metadata:
name: "rpc-secret"
annotations:
"helm.sh/resource-policy": "keep"
type: Opaque
data:
# retrieve the secret data using lookup function and when not exists, return an empty dictionary / map as result
{{- $existing_secret := (lookup "v1" "Secret" .Release.Namespace "rpc-secret") | default dict }}
{{- $secretData := (get $existing_secret "data") | default dict }}
# set $secret to existing secret data or generate a random one when not exists
{{- $secret := (get $secretData "secret") | default (randAlphaNum 32 | b64enc) }}
# generate 32 chars long random string, base64 encode it and then double-quote the result string.
secret: {{ $secret | quote }}
{{- end }}

charts/drone/values.yaml (new file, 74 lines)

@ -0,0 +1,74 @@
drone-server:
env:
DRONE_SERVER_HOST: drone.scubbo.org
DRONE_SERVER_PROTO: https
DRONE_RPC_SECRET: rpc-secret
DRONE_GITEA_SERVER: https://gitea.scubbo.org
DRONE_USER_CREATE: username:scubbo,admin:true
extraSecretNamesForEnvFrom:
- gitea-oauth-creds
service:
port: 3500
persistentVolume:
storageClass: longhorn
# Keep the Runner untolerant for now, until I progress to intentionally building dual-architecture images.
tolerations:
- key: architecture
operator: Equal
value: x86
drone-runner:
env:
DRONE_RPC_SECRET: rpc-secret
DRONE_RPC_HOST: drone-drone-server:3500 # The drone-server service (and port) that the runner connects to
DRONE_RUNNER_NETWORK_OPTS: "com.docker.network.driver.mtu:1450"
DRONE_SECRET_PLUGIN_ENDPOINT: "http://drone-drone-kubernetes-secrets:3000"
extraSecretNamesForEnvFrom:
- kubernetes-secrets-secret
dind:
commandArgs:
- "--host"
- "tcp://localhost:2375"
- "--mtu=1450"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
# Avoid the cursed node!
- key: kubernetes.io/hostname
operator: NotIn
values:
- rasnu2
drone-kubernetes-secrets:
rbac:
secretNamespace: drone
env:
KUBERNETES_NAMESPACE: drone
extraSecretNamesForEnvFrom:
- kubernetes-secrets-secret
drone:
server: "drone.scubbo.org"
volume:
nfsServer: rassigma.avril
nfsPath: /mnt/BERTHA/drone
service:
type: ClusterIP
port: 3500
gitea:
server: https://gitea.scubbo.org
# Secret with keys `clientId` and `clientSecret`
oauthSecretName: gitea-oauth-creds
# Set this if you want to use an existing secret for the RPC
# secret (otherwise, a fresh one will be created if necessary)
manualRPCSecretName: ""


@ -0,0 +1,6 @@
dependencies:
- name: postgresql
repository: https://charts.bitnami.com/bitnami
version: 15.5.9
digest: sha256:7f365bc259a1e72293bc76edb00334d277a58f6db69aa0f2021c09c1bab5a089
generated: "2024-06-23T15:37:12.419204-07:00"

charts/edh-elo/Chart.yaml (new file, 17 lines)

@ -0,0 +1,17 @@
apiVersion: v2
name: edh-elo
description: A personalized Helm chart to deploy edh-elo to Kubernetes
type: application
version: 0.1.0
appVersion: "1.0.0"
dependencies:
- name: postgresql
version: "15.5.9"
repository: https://charts.bitnami.com/bitnami
condition: postgresql.enabled
tags:
- services
- db
- write

Binary file not shown.


@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "edh-elo.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "edh-elo.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "edh-elo.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "edh-elo.labels" -}}
helm.sh/chart: {{ include "edh-elo.chart" . }}
{{ include "edh-elo.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "edh-elo.selectorLabels" -}}
app.kubernetes.io/name: {{ include "edh-elo.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "edh-elo.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "edh-elo.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}


@ -0,0 +1,47 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "edh-elo.fullname" . }}
labels:
{{- include "edh-elo.labels" . | nindent 4 }}
spec:
selector:
matchLabels:
{{- include "edh-elo.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "edh-elo.selectorLabels" . | nindent 8 }}
spec:
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
{{- if .Values.extraEnv }}
{{- with .Values.extraEnv }}
env:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- end}}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "edh-elo.fullname" . }}
labels:
{{- include "edh-elo.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: 8000
protocol: TCP
sessionAffinity: ClientIP
selector:
{{- include "edh-elo.selectorLabels" . | nindent 4 }}

charts/edh-elo/values.yaml (new file, 104 lines)

@ -0,0 +1,104 @@
image:
repository: gitea.scubbo.org/scubbo/edh-elo
tag: "9b4e6c3b4d852883a372332461253ef9eae6d014"
pullPolicy: IfNotPresent
extraEnv:
- name: DATABASE_URL
value: postgresql://db_user:pass@edh-elo-postgresql/postgres
- name: SPREADSHEET_ID
value: 1ITgXXfq7KaNP8JTQMvoZJSbu7zPpCcfNio_aooULRfc
- name: PATH_TO_GOOGLE_SHEETS_CREDENTIALS
value: /vault/secrets/google-credentials.json
postgresql:
auth:
existing-secret: edh-elo-postgresql
primary:
persistence:
enabled: true
initdb:
# TODO - switch to using a secret (and update `extraEnv`, above)
scripts:
psql.sql: |
CREATE USER db_user WITH PASSWORD 'pass';
GRANT ALL PRIVILEGES ON DATABASE postgres TO db_user;
GRANT ALL ON SCHEMA public TO db_user;
############
# Defaults #
############
replicaCount: 1
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations:
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-inject-status: update
vault.hashicorp.com/role: "edh-elo"
vault.hashicorp.com/agent-inject-secret-google-credentials.json: "edh-elo/data/google-credentials"
vault.hashicorp.com/agent-inject-template-google-credentials.json: |
{{- with secret "edh-elo/data/google-credentials" -}}
{{- .Data.data | toJSON -}}
{{- end -}}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: LoadBalancer
port: 8000
ingress:
enabled: false
className: "traefik"
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
# hosts:
# - host: edh-elo.avril
# paths:
# - path: /
# pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
# architecture: x86
tolerations: {}
# - key: architecture
# operator: Equal
# value: x86
affinity: {}

charts/jellyfin/NOTES.md (new file, 91 lines)

@ -0,0 +1,91 @@
For external availability - use the following CloudFormation template:
```
AWSTemplateFormatVersion: 2010-09-09
Resources:
SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupName: TailnetProxySecurityGroup
GroupDescription: Tailnet Proxy Security Group
SecurityGroupEgress:
- CidrIp: 0.0.0.0/0
FromPort: 443
ToPort: 443
IpProtocol: -1
- CidrIp: 0.0.0.0/0
FromPort: 80
ToPort: 80
IpProtocol: -1
SecurityGroupIngress:
- CidrIp: 0.0.0.0/0
FromPort: 22
ToPort: 22
IpProtocol: -1
VpcId: vpc-952036f0
LaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Properties:
LaunchTemplateName: TailnetLaunchTemplate
LaunchTemplateData:
UserData:
Fn::Base64: |
#!/bin/bash
# https://docs.docker.com/engine/install/ubuntu/
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
cat <<EOF | sudo docker compose -f - up -d
services:
app:
image: 'jc21/nginx-proxy-manager:latest'
restart: unless-stopped
ports:
- "80:80"
- "81:81"
- "443:443"
volumes:
- data:/data
- letsencrypt:/etc/letsencrypt
volumes:
data:
letsencrypt:
EOF
curl -fsSL https://tailscale.com/install.sh | sh
# Manual setup:
# * Access `<public>:81`, log in with `admin@example.com // changeme` - prompted to create new account
# * Create "New Proxy Host" from Domain Name to jellyfin.avril
# * Set DNS to forward jellyfin.scubbo.org -> <public IP>
# * `sudo tailscale up` and follow the resultant URL to connect to the TailNet
#
# TODO - provide a secret in an AWS Secret so `sudo tailscale up` can be autonomous (then don't need to open port 81)
JellyfinProxyInstance:
Type: AWS::EC2::Instance
DependsOn: "LaunchTemplate"
Properties:
# ImageId: ami-00beae93a2d981137
ImageId: ami-04b4f1a9cf54c11d0
InstanceType: t2.micro
LaunchTemplate:
LaunchTemplateName: TailnetLaunchTemplate
Version: "1"
NetworkInterfaces:
- AssociatePublicIpAddress: "true"
DeviceIndex: "0"
GroupSet:
- Ref: "SecurityGroup"
SubnetId: "subnet-535f3d78"
```
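The template can be deployed with the AWS CLI - a minimal sketch (the file and stack names here are illustrative):
```
aws cloudformation deploy \
  --template-file tailnet-proxy.yaml \
  --stack-name jellyfin-tailnet-proxy
```
For the TODO about making `tailscale up` autonomous, one hedged approach would be to pre-create a Tailscale auth key, store it in AWS Secrets Manager, and have the UserData fetch it (the secret name is illustrative, and the instance role would need permission to read it):
```
TS_AUTHKEY=$(aws secretsmanager get-secret-value --secret-id tailscale-auth-key --query SecretString --output text)
sudo tailscale up --authkey "${TS_AUTHKEY}"
```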

View File

@ -26,18 +26,26 @@ spec:
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
{{- if .Values.runtimeClassName }}
runtimeClassName: {{ .Values.runtimeClassName }}
{{- end }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: NVIDIA_DRIVER_CAPABILITIES
value: all
- name: NVIDIA_VISIBLE_DEVICES
value: all
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- mountPath: /media
name: jf-media
readOnly: True
- mountPath: /truenas-media
name: jf-truenas-media
# readOnly: True
- mountPath: /config
name: jf-config
- mountPath: /cache
@ -48,9 +56,9 @@ spec:
value: bad
effect: NoSchedule
volumes:
- name: jf-media
- name: jf-truenas-media
persistentVolumeClaim:
claimName: jf-media-pvc
claimName: jf-truenas-media-pvc
- name: jf-config
persistentVolumeClaim:
claimName: jf-config-pvc

View File

@ -25,4 +25,8 @@ spec:
secretKeyRef:
name: jellyfin-metrics-secret
key: api-key
{{- with .Values.metrics.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@ -32,6 +32,32 @@ spec:
requests:
storage: {{ .config.size | quote }}
{{- end}}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jf-truenas-media-pvc
spec:
storageClassName: ""
volumeName: jf-truenas-media-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20T
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: jf-truenas-media-pv
spec:
capacity:
storage: 20T
accessModes:
- ReadWriteMany
nfs:
server: galactus.avril
path: /mnt/low-resiliency-with-read-cache/ombi-data/
# ---
# # https://forum.jellyfin.org/t-could-not-apply-migration-migrateactivitylogdatabase
# apiVersion: v1

View File

@ -28,8 +28,14 @@ podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
securityContext:
runAsUser: 1000
fsGroup: 1000
supplementalGroups:
- 44 # `getent group video | cut -d: -f3` - https://jellyfin.org/docs/general/administration/hardware-acceleration/intel#kubernetes
capabilities:
add:
- "SYS_ADMIN"
# drop:
# - ALL
# readOnlyRootFilesystem: true
@ -51,22 +57,21 @@ ingress:
paths:
- path: /
pathType: ImplementationSpecific
- host: jellyfin.scubbo.org
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# https://github.com/NVIDIA/k8s-device-plugin?tab=readme-ov-file#running-gpu-jobs
resources:
requests:
nvidia.com/gpu: 1
limits:
nvidia.com/gpu: 1
autoscaling:
enabled: false
@ -75,9 +80,13 @@ autoscaling:
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
nodeSelector:
architecture: x86
tolerations: []
tolerations:
- key: architecture
operator: Equal
value: x86
affinity: {}
@ -104,17 +113,21 @@ volumes:
nfs:
server: rassigma.avril
path: "/mnt/BERTHA/etc/jellyfin/config"
- name: media
config:
size: 3T
accessMode: ReadOnlyMany
nfs:
server: rasnu2.avril
path: "/mnt/NEW_BERTHA/ombi-data/media"
metricsImage:
repository: gitea.scubbo.org/scubbo/jellyfin-library-count-prometheus-exporter
tag: latest
runtimeClassName: nvidia
metrics:
apiUrl: "http://jellyfin.avril"
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64

View File

@ -0,0 +1,7 @@
apiVersion: v2
name: miniflux-scubbo
description: A personalized Helm chart deploying Miniflux
type: application
version: 0.1.0
appVersion: "1.0.0"

View File

@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "miniflux.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "miniflux.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "miniflux.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "miniflux.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

View File

@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "miniflux.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "miniflux.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "miniflux.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "miniflux.labels" -}}
helm.sh/chart: {{ include "miniflux.chart" . }}
{{ include "miniflux.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "miniflux.selectorLabels" -}}
app.kubernetes.io/name: {{ include "miniflux.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "miniflux.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "miniflux.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,98 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "miniflux.fullname" . }}
labels:
{{- include "miniflux.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "miniflux.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "miniflux.labels" . | nindent 8 }}
{{- with .Values.podLabels }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "miniflux.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
env:
- name: DATABASE_URL
value: postgres://miniflux:secret@localhost:5432/miniflux?sslmode=disable
- name: RUN_MIGRATIONS
value: "1"
- name: CREATE_ADMIN
value: "1"
- name: ADMIN_USERNAME
value: "admin"
- name: ADMIN_PASSWORD
value: "test123"
# Note - values above are only used for initialization. After first installation, they're changed manually.
# (It'd be super-cool to have a Job as part of the deployment that makes that change, but :shrug:)
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8080
name: http
protocol: TCP
# livenessProbe:
# httpGet:
# path: /
# port: http
# readinessProbe:
# httpGet:
# path: /
# port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.volumeMounts }}
volumeMounts:
{{- toYaml . | nindent 12 }}
{{- end }}
- name: postgres
image: "postgres:17-alpine"
env:
- name: POSTGRES_USER
value: miniflux
- name: POSTGRES_PASSWORD
value: secret
- name: POSTGRES_DB
value: miniflux
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-data
{{- with .Values.volumes }}
volumes:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@ -0,0 +1,61 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "miniflux.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
{{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
{{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
{{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "miniflux.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "miniflux.fullname" . }}
labels:
{{- include "miniflux.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "miniflux.selectorLabels" . | nindent 4 }}

View File

@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "miniflux.serviceAccountName" . }}
labels:
{{- include "miniflux.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
automountServiceAccountToken: {{ .Values.serviceAccount.automount }}
{{- end }}

View File

@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: "{{ include "miniflux.fullname" . }}-test-connection"
labels:
{{- include "miniflux.labels" . | nindent 4 }}
annotations:
"helm.sh/hook": test
spec:
containers:
- name: wget
image: busybox
command: ['wget']
args: ['{{ include "miniflux.fullname" . }}:{{ .Values.service.port }}']
restartPolicy: Never

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-data-pvc
spec:
storageClassName: "freenas-iscsi-csi"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 500Mi

View File

@ -0,0 +1,96 @@
# Default values for miniflux.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: docker.io/miniflux/miniflux
pullPolicy: IfNotPresent
tag: "2.2.7"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Automatically mount a ServiceAccount's API credentials?
automount: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podLabels: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: LoadBalancer
port: 8597
ingress:
enabled: true
className: "traefik"
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: miniflux.avril
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
# Additional volumes on the output Deployment definition.
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: postgres-data-pvc
# Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - name: foo
# mountPath: "/etc/foo"
# readOnly: true
nodeSelector: {}
tolerations: []
affinity: {}

24
charts/ombi/Chart.yaml Normal file
View File

@ -0,0 +1,24 @@
apiVersion: v2
name: ombi
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"

36
charts/ombi/README.md Normal file
View File

@ -0,0 +1,36 @@
Expects a secret named `nzbget-creds`, with key `password`
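For reference, a minimal sketch of creating it (namespace and password are placeholders):
```
kubectl -n <namespace> create secret generic nzbget-creds --from-literal=password=<nzbget-password>
```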
# Supporting services
Ombi, Sonarr, Radarr, and NzbGet do nothing in isolation - you need to hook them up to supporting services to access any data.
## Indexers
These are the services that translate search requests into sets of Usenet post addresses to be downloaded and collated.
I currently use:
* NzbPlanet
And have been advised to try:
* DrunkenSlug
* Nzb.su
* NZBFinder
* NZBGeek
## Providers
These are the services that host the actual data.
I use:
* Usenetserver
And have been advised to try:
* usenet.farm
# See also
The helm chart under `proton-vpn`

View File

@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "ombi.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "ombi.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "ombi.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "ombi.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}

View File

@ -0,0 +1,103 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "ombi.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "ombi.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "ombi.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "ombi.labels" -}}
helm.sh/chart: {{ include "ombi.chart" . }}
{{ include "ombi.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "ombi.selectorLabels" -}}
app.kubernetes.io/name: {{ include "ombi.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "ombi.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "ombi.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
Templatify creation of standard config PV-and-PVCs
Accepts `service` as a parameter, which should be a mapping containing:
* name - a string (like `sonarr` or `qbit`)
* size - a string (with the standard Kubernetes restrictions on size-strings)
* path - a string (defining the path in the NFS server where this config dir lives)
Note that this assumes NFS as the storage type. A more extensible definition would permit arbitrary storage types. But hey, this is just for me :P
Not currently working, but I'm keeping it checked-in for future inspiration!
*/}}
{{- define "ombi.configvolumedefinition" -}}
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-{{ .name }}-config-pv
spec:
capacity:
storage: {{ .size }}
accessModes:
- ReadWriteMany
nfs:
server: {{ $.Values.volume.nfsServer }}
path: {{ .path }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-{{ .name }}-config-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-{{ .name }}-config-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: {{ .size }}
{{- end }}

View File

@ -0,0 +1,216 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "ombi.fullname" . }}
labels:
{{- include "ombi.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "ombi.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "ombi.selectorLabels" . | nindent 8 }}
spec:
# Necessary for Pod to have a static hostname in order to expose ports:
# https://docs.k8s-at-home.com/guides/pod-gateway/#exposing-routed-pod-ports-from-the-gateway
hostname: ombi
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
{{ if .Values.ombi.enabled }}
- name: ombi
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
- name: TZ
value: "America/Los_Angeles"
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- mountPath: /config
name: ombi-config
{{- end -}}
{{ if .Values.sonarr.enabled }}
- name: sonarr
securityContext:
{{- toYaml .Values.securityContext | nindent 12}}
image: "lscr.io/linuxserver/sonarr:{{ .Values.sonarr.tag | default "latest" }}"
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- mountPath: /config
name: sonarr-config
- mountPath: /data
name: ombi-truenas-data
{{- end -}}
{{ if .Values.radarr.enabled }}
- name: radarr
securityContext:
{{- toYaml .Values.securityContext | nindent 12}}
image: "lscr.io/linuxserver/radarr:latest"
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- mountPath: /config
name: radarr-config
- mountPath: /data
name: ombi-truenas-data
{{- end -}}
{{ if .Values.readarr.enabled }}
- name: readarr
securityContext:
{{- toYaml .Values.securityContext | nindent 12}}
image: "lscr.io/linuxserver/readarr:develop"
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- mountPath: /config
name: readarr-config
- mountPath: /data
name: ombi-truenas-data
{{- end -}}
{{if .Values.prowlarr.enabled}}
- name: prowlarr
securityContext:
{{- toYaml .Values.securityContext | nindent 12}}
image: "lscr.io/linuxserver/prowlarr:latest"
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
resources:
{{- toYaml .Values.resources | nindent 12}}
volumeMounts:
- mountPath: /config
name: prowlarr-config
- mountPath: /data
name: ombi-truenas-data
{{- end -}}
{{ if .Values.nzbget.enabled }}
- name: nzbget
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "lscr.io/linuxserver/nzbget:latest"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
- name: TZ
value: "America/Los_Angeles"
- name: NZBGET_USER
value: nzbget
- name: NZBGET_PASS
valueFrom:
secretKeyRef:
name: nzbget-creds
key: password
optional: false
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- mountPath: /config
name: nzbget-config
- mountPath: /data/usenet
name: usenet-truenas-downloads
{{ end }}
{{ if .Values.sabnzbd.enabled }}
- name: sabnzbd
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "lscr.io/linuxserver/sabnzbd:latest"
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: PUID
value: "1000"
- name: PGID
value: "1000"
- name: TZ
value: "America/Los_Angeles"
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumeMounts:
- mountPath: /config
name: sabnzbd-config
- mountPath: /data/usenet
name: usenet-truenas-downloads
{{ end }}
volumes:
- name: ombi-config
persistentVolumeClaim:
claimName: {{ include "ombi.fullname" . }}-ombi-config-pvc
- name: ombi-data
persistentVolumeClaim:
claimName: {{ include "ombi.fullname" . }}-data-pvc
- name: ombi-truenas-data
persistentVolumeClaim:
claimName: {{ include "ombi.fullname" . }}-truenas-data-pvc
- name: sonarr-config
persistentVolumeClaim:
claimName: {{ include "ombi.fullname" . }}-sonarr-config-pvc
- name: radarr-config
persistentVolumeClaim:
claimName: {{ include "ombi.fullname" . }}-radarr-config-pvc
- name: readarr-config
persistentVolumeClaim:
claimName: {{ include "ombi.fullname" . }}-readarr-config-pvc
- name: prowlarr-config
persistentVolumeClaim:
claimName: {{ include "ombi.fullname" . }}-prowlarr-config-pvc
- name: nzbget-config
persistentVolumeClaim:
claimName: {{include "ombi.fullname" .}}-nzbget-config-pvc
- name: sabnzbd-config
persistentVolumeClaim:
claimName: {{include "ombi.fullname" .}}-sabnzbd-config-pvc
- name: usenet-truenas-downloads
persistentVolumeClaim:
claimName: {{ include "ombi.fullname" . }}-truenas-usenet-downloads-pvc
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}

View File

@ -0,0 +1,121 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "ombi.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }}
{{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }}
{{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}}
{{- end }}
{{- end }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "ombi.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
{{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
pathType: {{ .pathType }}
{{- end }}
backend:
{{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
service:
name: {{ $fullName }}
port:
number: {{ $svcPort }}
{{- else }}
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
- host: sonarr.avril
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ include "ombi.fullname" . }}-sonarr
port:
number: {{ .Values.service.sonarrPort }}
- host: radarr.avril
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ include "ombi.fullname" . }}-radarr
port:
number: {{ .Values.service.radarrPort }}
- host: readarr.avril
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ include "ombi.fullname" . }}-readarr
port:
number: {{ .Values.service.readarrPort }}
- host: prowlarr.avril
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{include "ombi.fullname" .}}-prowlarr
port:
number: {{.Values.service.prowlarrPort}}
- host: nzbget.avril
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ include "ombi.fullname" . }}-nzbget
port:
number: {{ .Values.service.nzbgetWebPort }}
- host: sabnzbd.avril
http:
paths:
- path: /
pathType: ImplementationSpecific
backend:
service:
name: {{ include "ombi.fullname" . }}-sabnzbd
port:
number: {{ .Values.service.sabnzbdWebPort }}
{{- end }}

View File

@ -0,0 +1,104 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "ombi.fullname" . }}
labels:
{{- include "ombi.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: 3579
protocol: TCP
selector:
{{- include "ombi.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "ombi.fullname" . }}-sonarr
labels:
{{- include "ombi.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.sonarrPort }}
targetPort: 8989
protocol: TCP
selector:
{{- include "ombi.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "ombi.fullname" . }}-radarr
labels:
{{- include "ombi.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.radarrPort }}
targetPort: 7878
protocol: TCP
selector:
{{- include "ombi.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "ombi.fullname" . }}-readarr
labels:
{{- include "ombi.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.readarrPort }}
targetPort: 8787
protocol: TCP
selector:
{{- include "ombi.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "ombi.fullname" . }}-prowlarr
labels:
{{- include "ombi.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.prowlarrPort }}
targetPort: 9696
protocol: TCP
selector:
{{- include "ombi.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "ombi.fullname" . }}-nzbget
labels:
{{- include "ombi.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.nzbgetWebPort }}
targetPort: 6789
protocol: TCP
selector:
{{- include "ombi.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
name: {{ include "ombi.fullname" . }}-sabnzbd
labels:
{{- include "ombi.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.sabnzbdWebPort }}
targetPort: 8080
protocol: TCP
selector:
{{- include "ombi.selectorLabels" . | nindent 4 }}

View File

@ -0,0 +1,265 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-data-pv
namespace: {{ .Release.Namespace }}
spec:
capacity:
storage: 5T
accessModes:
- ReadWriteMany
nfs:
server: {{ .Values.volume.dataNFSServer }}
path: {{ .Values.volume.dataNFSPath }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-data-pvc
namespace: {{ .Release.Namespace }}
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-data-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5T
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-truenas-data-pv
namespace: {{ .Release.Namespace }}
spec:
capacity:
storage: 20T
accessModes:
- ReadWriteMany
nfs:
server: galactus.avril
path: /mnt/low-resiliency-with-read-cache/ombi-data/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-truenas-data-pvc
namespace: {{ .Release.Namespace }}
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-truenas-data-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20T
# TODO - templatize these similar definitions
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-ombi-config-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
nfs:
server: {{ $.Values.volume.configNFSServer }}
path: /mnt/BERTHA/etc/ombi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-ombi-config-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-ombi-config-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-sonarr-config-pv
spec:
capacity:
storage: 10M
accessModes:
- ReadWriteMany
nfs:
server: {{ $.Values.volume.configNFSServer }}
path: /mnt/BERTHA/etc/sonarr
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-sonarr-config-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-sonarr-config-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10M
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-radarr-config-pv
spec:
capacity:
storage: 10M
accessModes:
- ReadWriteMany
nfs:
server: {{ $.Values.volume.configNFSServer }}
path: /mnt/BERTHA/etc/radarr
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-radarr-config-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-radarr-config-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10M
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-readarr-config-pv
spec:
capacity:
storage: 10M
accessModes:
- ReadWriteMany
nfs:
server: {{ $.Values.volume.configNFSServer }}
path: /mnt/BERTHA/etc/readarr
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-readarr-config-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-readarr-config-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10M
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-prowlarr-config-pv
spec:
capacity:
storage: 10M
accessModes:
- ReadWriteMany
nfs:
server: {{ $.Values.volume.configNFSServer }}
path: /mnt/BERTHA/etc/prowlarr
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-prowlarr-config-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-prowlarr-config-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10M
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-nzbget-config-pv
spec:
capacity:
storage: 10M
accessModes:
- ReadWriteMany
nfs:
server: {{ $.Values.volume.configNFSServer }}
path: /mnt/BERTHA/etc/nzbget
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-nzbget-config-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-nzbget-config-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10M
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-sabnzbd-config-pv
spec:
capacity:
storage: 10M
accessModes:
- ReadWriteMany
nfs:
server: {{ $.Values.volume.configNFSServer }}
path: /mnt/BERTHA/etc/sabnzbd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-sabnzbd-config-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-sabnzbd-config-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10M
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ include "ombi.fullname" . }}-truenas-usenet-downloads-pv
spec:
capacity:
storage: 1T
accessModes:
- ReadWriteMany
nfs:
server: galactus.avril
path: /mnt/low-resiliency-with-read-cache/ombi-data/usenet
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ include "ombi.fullname" . }}-truenas-usenet-downloads-pvc
spec:
storageClassName: ""
volumeName: {{ include "ombi.fullname" . }}-truenas-usenet-downloads-pv
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1T

123
charts/ombi/values.yaml Normal file
View File

@ -0,0 +1,123 @@
# Default values for ombi.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
repository: linuxserver/ombi
pullPolicy: IfNotPresent
# Overrides the image tag whose default is the chart appVersion.
tag: "latest"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 3579
sonarrPort: 8989
radarrPort: 7878
readarrPort: 8787
prowlarrPort: 9696
nzbgetWebPort: 6789
sabnzbdWebPort: 8080
ingress:
enabled: true
className: ""
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: ombi.avril
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector:
kubernetes.io/hostname: epsilon
tolerations:
- key: architecture
operator: "Equal"
value: x86
affinity: {}
# Custom values below here
ombi:
enabled: true
sonarr:
enabled: true
# Hard-coded to address https://forums.sonarr.tv/t/unraid-binhex-sonarr-crashes-constantly-epic-fail/33175/
# https://github.com/Sonarr/Sonarr/issues/5929 / https://old.reddit.com/r/sonarr/comments/15p160j/v4_consoleapp_epic_fail_error/
# tag: "develop-version-4.0.0.613"
tag: "4.0.7"
radarr:
enabled: true
readarr:
enabled: true
prowlarr:
enabled: true
nzbget:
enabled: true
sabnzbd:
enabled: true
volume:
configNFSServer: rassigma.avril
dataNFSServer: rasnu2.avril
dataNFSPath: /mnt/NEW_BERTHA/ombi-data

View File

@ -0,0 +1,31 @@
apiVersion: v2
name: proton-vpn
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
dependencies:
# https://github.com/k8s-at-home/charts/tree/master/charts/stable/pod-gateway
# https://github.com/k8s-at-home/charts/commit/bc8aee9648feb02fbe03246026e799cd1bd50ae5
- name: pod-gateway
version: "2.0.0"
repository: https://k8s-at-home.com/charts/

View File

@ -0,0 +1,73 @@
Implements [this guide](https://docs.k8s-at-home.com/guides/pod-gateway/). Note that I only tested this with OpenVPN, not Wireguard.
## Dependencies
### Cert-manager
Depends on the CRDs installed as part of `cert-manager`, which apparently will not be installed if that chart is a dependency of this one - so it's installed manually in its own directory.
If you need to install it manually, run `helm repo add jetstack https://charts.jetstack.io; helm repo update; helm install --create-namespace -n security jetstack/cert-manager cert-manager --set installCRDs=true`
## Secrets
Note that the names of both of these secrets are arbitrary (though the keys within them are not) - the expected names are set in `values.yaml`.
### Config file
Depends on the existence of a secret called `openvpn-config`, with a key `vpnConfigfile` that contains the appropriate config file. Download it from [here](https://account.protonvpn.com/downloads) and upload it with:
```
kubectl -n proton-vpn create secret generic openvpn-config --from-file=vpnConfigfile=<path_to_config_file>
```
### OpenVPN creds
Fetch from [here](https://account.protonvpn.com/account) (note - these are different from your ProtonVPN credentials!), then upload with:
```
kubectl -n proton-vpn create secret generic openvpn-creds --from-literal="VPN_AUTH=<username>;<password>"
```
Note that you can (apparently!) append various suffixes to the OpenVPN username to enable extra features if you are a paying member:
* `<username>+f1` as username to enable anti-malware filtering
* `<username>+f2` as username to additionally enable ad-blocking filtering
* `<username>+nr` as username to enable Moderate NAT
I haven't tested - use at your own risk! Probably best to get a functioning connection working before messing around with extra features.
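If you do want to try one, the suffix just gets appended when creating the creds secret - e.g. for ad-blocking (username and password are placeholders):
```
kubectl -n proton-vpn create secret generic openvpn-creds --from-literal="VPN_AUTH=<username>+f2;<password>"
```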
### update-resolv-conf
TODO: (Not sure if this is required for all servers...) This is required by the ProtonVPN OpenVPN configuration (line 124)
## Debugging
### `GATEWAY_IP=';; connection timed out; no servers could be reached'`
As per [here](https://docs.k8s-at-home.com/guides/pod-gateway/#routed-pod-fails-to-init), "_try setting the_ `NOT_ROUTED_TO_GATEWAY_CIDRS:` _with your cluster cidr and service cidrs_". The way to find those values is described [here](https://stackoverflow.com/questions/44190607/how-do-you-find-the-cluster-service-cidr-of-a-kubernetes-cluster)
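One hedged way to dig those values out of a running cluster is to grep the apiserver/controller-manager flags out of a cluster dump - depending on how your distribution runs its control plane this may need adjusting:
```
kubectl cluster-info dump | grep -m 1 -E "cluster-cidr|service-cluster-ip-range"
```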
## More info
Some OpenVPN server configurations rely on a script at `/etc/openvpn/update-resolv-conf.sh`, which isn't provided by default. It [looks like](https://github.com/dperson/openvpn-client/issues/90) it's been replaced with `/etc/openvpn/up.sh` and `.../down.sh` - you should be able to manually edit the `.ovpn` file to reference those scripts instead.
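If you take that approach, the edited `.ovpn` directives would look roughly like this (a sketch - check which directives your downloaded config actually uses):
```
script-security 2
up /etc/openvpn/up.sh
down /etc/openvpn/down.sh
```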
If you really need the original file - get it from [here](https://github.com/alfredopalhares/openvpn-update-resolv-conf) and provide it in a ConfigMap:
```
curl -s https://raw.githubusercontent.com/alfredopalhares/openvpn-update-resolv-conf/master/update-resolv-conf.sh -o /tmp/update-resolv-conf
```
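Then wrap it in a ConfigMap (a sketch - the name is arbitrary, and you'd still need to mount it into the VPN container at `/etc/openvpn/update-resolv-conf`):
```
kubectl -n proton-vpn create configmap openvpn-update-resolv-conf --from-file=update-resolv-conf=/tmp/update-resolv-conf
```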
### Debugging image
Useful tools to install:
```
apt update -y
apt install -y traceroute net-tools iputils-ping dnsutils
```
## References
* [Values definition for VPN](https://github.com/k8s-at-home/library-charts/blob/2b4e0aa1ef5f8c6ef4ac14c2335fc9a008394ed6/charts/stable/common/values.yaml#L479)
* [Charts for VPN](https://github.com/k8s-at-home/library-charts/tree/2b4e0aa1ef5f8c6ef4ac14c2335fc9a008394ed6/charts/stable/common/templates/addons/vpn)
* [Pod Gateway templates](https://github.com/k8s-at-home/charts/tree/master/charts/stable/pod-gateway/templates)

Binary file not shown.

View File

@ -0,0 +1,62 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "proton-vpn.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "proton-vpn.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "proton-vpn.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "proton-vpn.labels" -}}
helm.sh/chart: {{ include "proton-vpn.chart" . }}
{{ include "proton-vpn.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "proton-vpn.selectorLabels" -}}
app.kubernetes.io/name: {{ include "proton-vpn.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "proton-vpn.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "proton-vpn.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

View File

@ -0,0 +1,11 @@
# Note these are _not_ the namespaces for the items created by this chart, but rather the namespaces of pods that will
# be routed _through_ this VPN
{{- range (index .Values "pod-gateway" "routed_namespaces") }}
---
apiVersion: v1
kind: Namespace
metadata:
name: {{ . }}
labels:
routed-gateway: "true"
{{- end }}

View File

@ -0,0 +1,59 @@
pod-gateway:
routed_namespaces:
- "vpn"
- "ombi"
settings:
NOT_ROUTED_TO_GATEWAY_CIDRS: "10.42.0.0/16 10.43.0.0/16 192.168.0.0/16"
VPN_BLOCK_OTHER_TRAFFIC: true
# https://github.com/k8s-at-home/charts/tree/master/charts/stable/pod-gateway
VPN_INTERFACE: tun0 # For OpenVPN. For Wireguard, use `wg0`
VPN_TRAFFIC_PORT: 1194 # UDP port - which is generally preferred over TCP. If you use TCP, 443 is probably correct
publicPorts:
- hostname: ombi
IP: 9
ports:
- type: udp
port: 6789
- type: tcp
port: 6789
addons:
# https://github.com/k8s-at-home/library-charts/blob/2b4e0aa1ef5f8c6ef4ac14c2335fc9a008394ed6/charts/stable/common/templates/addons/vpn/openvpn/_container.tpl
# https://github.com/k8s-at-home/library-charts/blob/2b4e0aa1ef5f8c6ef4ac14c2335fc9a008394ed6/charts/stable/common/values.yaml#L477
vpn:
enabled: true
type: openvpn
openvpn:
authSecret: openvpn-creds
configFileSecret: openvpn-config
livenessProbe:
exec:
# Change "CA" to whatever country your VPN connects to
command:
- sh
- -c
- if [ $(curl -s https://ipinfo.io/country) == 'CA' ]; then exit 0; else exit $?; fi
initialDelaySeconds: 30
periodSeconds: 60
failureThreshold: 1
networkPolicy:
enabled: true
egress:
- ports:
- protocol: UDP # Setting settings.VPN_TRAFFIC_PORT is insufficient
port: 1194
to:
- ipBlock:
cidr: 0.0.0.0/0
- to:
- ipBlock:
cidr: 10.0.0.0/8
scripts:
up: true
down: true

View File

@ -0,0 +1,178 @@
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
name: xbaseapplicationinfrastructures.scubbo.org
spec:
group: scubbo.org
names:
kind: xBaseApplicationInfrastructure
plural: xbaseapplicationinfrastructures
claimNames:
kind: BaseAppInfra
plural: baseappinfras
versions:
- name: v1alpha1
served: true
referenceable: true
schema:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
appName:
type: string
---
# Sources for the Vault resources are here:
# https://developer.hashicorp.com/vault/tutorials/kubernetes/vault-secrets-operator#configure-vault
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
name: base-application-infrastructure
spec:
compositeTypeRef:
apiVersion: scubbo.org/v1alpha1
kind: xBaseApplicationInfrastructure
resources:
- name: vault-role
base:
apiVersion: kubernetes.vault.upbound.io/v1alpha1
kind: AuthBackendRole
spec:
providerConfigRef:
name: vault-provider-config
forProvider:
audience: vault
boundServiceAccountNames:
- default
tokenMaxTtl: 86400
tokenTtl: 86400
patches:
- type: FromCompositeFieldPath
# https://docs.crossplane.io/latest/concepts/composite-resources/#claim-namespace-label
fromFieldPath: metadata.labels["crossplane.io/claim-namespace"]
toFieldPath: spec.forProvider.boundServiceAccountNamespaces
transforms:
- type: string
string:
type: Format
fmt: "[\"%s\"]"
- type: convert
convert:
toType: array
format: json
- type: FromCompositeFieldPath
fromFieldPath: spec.appName
toFieldPath: spec.forProvider.roleName
transforms:
- type: string
string:
type: Format
fmt: "vault-secrets-operator-%s-role"
- type: FromCompositeFieldPath
fromFieldPath: spec.appName
toFieldPath: spec.forProvider.tokenPolicies
transforms:
- type: string
string:
type: Format
fmt: "[\"vault-secrets-operator-%s-policy\"]"
- type: convert
convert:
toType: array
format: json
- name: vault-secrets-mount
base:
apiVersion: vault.vault.upbound.io/v1alpha1
kind: Mount
spec:
providerConfigRef:
name: vault-provider-config
forProvider:
type: kv-v2
patches:
- type: FromCompositeFieldPath
fromFieldPath: spec.appName
toFieldPath: spec.forProvider.path
transforms:
- type: string
string:
type: Format
fmt: "app-%s-kv"
- type: FromCompositeFieldPath
fromFieldPath: spec.appName
toFieldPath: spec.forProvider.description
transforms:
- type: string
string:
type: Format
fmt: "KV storage for app %s"
- name: vault-policy
base:
apiVersion: vault.vault.upbound.io/v1alpha1
kind: Policy
spec:
providerConfigRef:
name: vault-provider-config
forProvider: {}
patches:
- type: FromCompositeFieldPath
fromFieldPath: spec.appName
toFieldPath: spec.forProvider.name
transforms:
- type: string
string:
type: Format
fmt: "vault-secrets-operator-%s-policy"
- type: FromCompositeFieldPath
fromFieldPath: spec.appName
toFieldPath: spec.forProvider.policy
transforms:
- type: string
string:
type: Format
fmt: "path \"app-%s-kv/*\" {capabilities=[\"read\"]}"
# Note that this is an `Object` created by provider-kubernetes, not by provider-vault
- name: vault-auth
base:
apiVersion: kubernetes.crossplane.io/v1alpha2
kind: Object
spec:
providerConfigRef:
name: kubernetes-provider
forProvider:
manifest:
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
spec:
method: kubernetes
mount: kubernetes # Hard-coded - this is what I used in my setup, but this could be customizable
kubernetes:
serviceAccount: default
audiences:
- vault
patches:
# The Vault Role created earlier in this Composition
- type: FromCompositeFieldPath
fromFieldPath: spec.appName
toFieldPath: spec.forProvider.manifest.spec.kubernetes.role
transforms:
- type: string
string:
type: Format
fmt: "vault-secrets-operator-%s-role"
- type: FromCompositeFieldPath
fromFieldPath: spec.appName
toFieldPath: spec.forProvider.manifest.metadata.name
transforms:
- type: string
string:
type: Format
fmt: "vault-auth-%s"
- type: FromCompositeFieldPath
fromFieldPath: metadata.labels["crossplane.io/claim-namespace"]
toFieldPath: spec.forProvider.manifest.metadata.namespace
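# A hedged example of the claim this XRD exposes (names are illustrative) - creating one of these in an
# app's namespace should fan out into the Vault AuthBackendRole, Mount, Policy, and VaultAuth defined above:
# ---
# apiVersion: scubbo.org/v1alpha1
# kind: BaseAppInfra
# metadata:
#   name: example-app-base-infra
#   namespace: example-app
# spec:
#   appName: example-app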

7
charts/vault/Chart.yaml Normal file
View File

@ -0,0 +1,7 @@
apiVersion: v2
name: vault-extra-resources
description: Extra resources in support of Vault official Helm Chart
type: application
version: 0.1.0
appVersion: "1.0.0"

View File

@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: vault-plugin-claim
spec:
accessModes:
- "ReadWriteOnce"
storageClassName: "freenas-iscsi-csi"
resources:
requests:
storage: "1Gi"

1
charts/vault/values.yaml Normal file
View File

@ -0,0 +1 @@
# No configuration required

24
main-manifest.yaml Normal file
View File

@ -0,0 +1,24 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: jackjack-app-of-apps
namespace: argo
finalizers:
- resources-finalizer.argocd.argoproj.io
spec:
project: default
source:
repoURL: https://gitea.scubbo.org/scubbo/helm-charts.git
targetRevision: HEAD
path: app-of-apps
destination:
server: "https://kubernetes.default.svc"
namespace: default
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true