#+TITLE: Scalable ConcourseCI with Nomad and Nix
In this blog post, I will explain how you can deploy ConcourseCI on HashiCorp Nomad with fully automatic and
op-free scaling. We will utilize three HashiCorp tools, namely Nomad, Vault, and Consul, then Nix (not necessary, it can be
replaced) and finally ConcourseCI itself.
* Requirements
+ a functional Nomad installation with Consul and Vault integration
+ a Nomad cluster with more than 1 node, to actually witness the scaling
+ under 10 GB of free space; around 5 GB is actually needed, but have 10 GB to be safe
* Versions utilized
+ Consul - v1.9.3
+ Nomad - v1.0.3
+ Vault - v1.6.2
+ Linux - 5.11.0
+ Nix - 2.4pre20201205_a5d85d0
+ ConcourseCI - 7.0.0

* Overview
Our goal is to be able to add a Nomad node to the cluster and have ConcourseCI automatically expand to that node (we
can restrict this later with constraints). For this purpose we'll use the ~system~ scheduler; quoting the Nomad docs:
#+BEGIN_QUOTE
The ~system~ scheduler is used to register jobs that should be run on all clients that meet the job's constraints. The
~system~ scheduler is also invoked when clients join the cluster or transition into the ready state. This means that
all registered system jobs will be re-evaluated and their tasks will be placed on the newly available nodes if the
constraints are met.
#+END_QUOTE
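
To make this concrete, a minimal sketch of a worker job using the ~system~ scheduler could look like this (the job
name, datacenter, and image tag here are placeholder assumptions, not taken from an actual deployment):

#+BEGIN_SRC hcl
job "concourse-worker" {
  datacenters = ["dc1"]

  # The system scheduler places one allocation on every eligible client,
  # including clients that join the cluster after the job is registered.
  type = "system"

  group "worker" {
    task "worker" {
      driver = "docker"
      config {
        image = "concourse/concourse:7.0.0"
        args  = ["worker"]
      }
    }
  }
}
#+END_SRC

Any constraints added to this job later simply narrow the set of nodes that receive an allocation.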
A ConcourseCI worker node needs its own key pair. In the best-case scenario, we would generate this key pair every
time a worker node is brought up, and store it in Vault. Fortunately, this is possible with a ~pre-start~ task and
Consul Template. \\
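
As a sketch of what such a pre-start task could look like (the task name, Vault policy name, secret path, and the
exact commands are illustrative assumptions, not tested configuration; ~${node.unique.name}~ is Nomad's interpolation
for the client's node name, and the ~@file~ syntax of ~vault kv put~ reads a value from a file):

#+BEGIN_SRC hcl
task "generate-keys" {
  # Runs to completion before the main worker task is started.
  lifecycle {
    hook    = "prestart"
    sidecar = false
  }

  driver = "exec"

  vault {
    # Assumed policy granting write access to the worker key subtree.
    policies = ["concourse-worker"]
  }

  config {
    command = "/bin/sh"
    args = ["-c", <<EOF
# Generate a fresh worker key pair, then store it in Vault under this node's name.
concourse generate-key -t ssh -f "$NOMAD_ALLOC_DIR/worker_key"
vault kv put "concourse/workers/${node.unique.name}" \
  private_key=@"$NOMAD_ALLOC_DIR/worker_key" \
  public_key=@"$NOMAD_ALLOC_DIR/worker_key.pub"
EOF
    ]
  }
}
#+END_SRC

This assumes the ~concourse~ and ~vault~ binaries are on the task's path and that the Vault address and token are
available in the task environment (Nomad's ~vault~ stanza injects the token).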
That's about it when it comes to the special and interesting bits of this post, so if you already know how to do this,
or you want to take a stab at solving it yourself, you can stop reading. For those who are still with me, please open
a terminal and follow along.
* Realization
** Vault setup
We'll only use the KV store, version 2, as it's the easiest to use and works fine for this use case. I've decided
to structure it like so, but you are free to change it around; the only thing you need to keep the same is a
directory with entries representing the individual worker nodes, such as =concourse/workers/<worker-hostname>=.
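
For reference, a Vault policy granting the worker job write access to that subtree could look like the following
sketch (the policy name and the mount point are assumptions; note that KV version 2 inserts =data/= into the API
path after the mount point):

#+BEGIN_SRC hcl
# concourse-worker.hcl -- assumed policy name, assuming a KV v2 engine mounted at "concourse/"
path "concourse/data/workers/*" {
  capabilities = ["create", "update", "read"]
}
#+END_SRC

It would be loaded with ~vault policy write concourse-worker concourse-worker.hcl~ and referenced from the job's
~vault~ stanza.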
#+BEGIN_VERBATIM
#+END_VERBATIM