Setting up an environment
An environment is defined by the Consul server instances, all the nodes
that are connected to those server instances, and the configurations of those nodes.
The first part of creating a new environment is to configure the Consul servers, which form the
core of the environment. The number of Consul servers is determined by the need to maintain
quorum between the servers. In practice this
means that there should be at least three servers,
and possibly five depending on the degree of fault tolerance required.
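The relationship between server count and fault tolerance follows directly from Raft's majority requirement; a small sketch:

```python
def quorum(servers: int) -> int:
    # Raft quorum: a strict majority of the server nodes
    return servers // 2 + 1

def fault_tolerance(servers: int) -> int:
    # Number of servers that may fail while quorum is still reachable
    return servers - quorum(servers)

for n in (3, 5, 7):
    print(f"{n} servers: quorum {quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

Three servers tolerate one failure, five tolerate two; even numbers add cost without adding tolerance, which is why odd cluster sizes are recommended.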
Initially, when starting a new environment, you need to bootstrap
Consul. In general it is sensible to set the expected number of server nodes
(via the bootstrap_expect setting), which allows Consul
to bootstrap itself automatically once the desired number of server nodes have connected.
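A minimal sketch of a server node's configuration, assuming three expected servers (the datacenter name and data directory shown here are placeholders, not values prescribed by Calvinverse):

```json
{
  "server": true,
  "bootstrap_expect": 3,
  "datacenter": "calvinverse-01",
  "data_dir": "/var/lib/consul"
}
```

With `bootstrap_expect` set, the cluster elects a leader on its own as soon as the third server joins; no manual bootstrap command is needed.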
One of the more difficult parts of creating an environment is determining a way in which the
nodes, server or client, can find the server nodes.
This requires some way for Consul server nodes to identify themselves to a node that
wants to join the cluster. Several options exist:
- The simplest way is to make sure that the Consul server nodes always have a fixed set of
IP addresses and then have each node try to connect to those addresses. From a performance
perspective it makes sense to list the addresses most likely to be alive first in the
configuration. This method is simple but does require that the Consul server nodes always get
an IP address allocated from the fixed pool. This can for instance be done by setting a
static IP on those nodes or by combining a known MAC address with DHCP reservations.
The obvious drawback is that this method is harder to automate.
- A more complex but better method is to use known DNS names that are attached to the Consul hosts
when they start. Nodes could obtain these via initial provisioning, or a service on the node
could request a free name from a pool on start-up.
- Finally, if you are running in a cloud, you can use the
cloud auto-join method, which uses information
known to the cloud provider to locate the server nodes.
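All three options map onto Consul's `retry_join` setting; a sketch in HCL form (the addresses, names and tags below are placeholders):

```hcl
# Option 1: fixed IP addresses, with the ones most likely to be alive listed first
retry_join = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

# Option 2: known DNS names attached to the Consul hosts
# retry_join = ["consul-1.example.internal", "consul-2.example.internal"]

# Option 3: cloud auto-join (AWS shown; other providers use similar key/value strings)
# retry_join = ["provider=aws tag_key=consul-role tag_value=server"]
```

The agent retries each entry in order until it manages to join, so a node that boots before the servers do will still find the cluster eventually.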
In order to get the initial configuration into a resource instance Calvinverse assumes that for
VMs an ISO image is attached to the VM when it is first created. This ISO should contain files
with the configuration information for Consul;
for server nodes additional server-specific configuration is expected.
Finally the ISO also contains the
zone configuration file
for Unbound, the caching DNS resolver that
is installed on all resources to handle DNS requests.
The Calvinverse.Infrastructure repository provides an example of the different ISO files that can be made. From this repository
three different ISO files will be created:
- Consul server on Linux
- Consul client on Linux
- Consul client on Windows
Configuration of services
Configurations for all other services are obtained from the
Consul Key-Value store via
Consul-Template. Retrieving the configurations after
the VM has connected to the Consul environment makes it easy to change configurations later,
and it decouples the configuration of the services from the provisioning of the resource.
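As a sketch of how this works (the paths, key names and service name below are hypothetical), a Consul-Template configuration stanza pairs a template with a destination file and a reload command:

```hcl
# /etc/consul-template.d/myservice.hcl -- hypothetical paths and service name
template {
  source      = "/etc/consul-template.d/myservice.conf.ctmpl"
  destination = "/etc/myservice/myservice.conf"
  command     = "systemctl reload myservice"
}
```

The template itself reads values with the `key` function, e.g. `port = {{ key "config/services/myservice/port" }}`, so changing a value in the KV store re-renders the file and reloads the service without re-provisioning the VM.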
Again the Calvinverse.Infrastructure
repository provides examples of the different configuration values that need to be set.
For resource specific configuration information the readme for the different resource
repositories provides the required information.
With the provisioning and configuration information collected the final part of the setup is to
create instances of the desired resources in the correct order.
- Determine which method of discovery will be used for Consul nodes to discover the server
nodes, be that via IP address, DNS name or some other discovery method. If you plan to use
either IP addresses or DNS names, decide on them ahead of time, then add them to the
retry_join entry in the Consul
configuration file that is going to be included in the different ISO images.
- Determine the DNS name for the environment. This will later be pointed at the IP address, or
addresses, of the reverse proxies, thereby giving users access to the services provided by
the environment via a single entry point instead of having to use the IP addresses of the individual instances.
- The first instances that should be created are those for the Consul servers.
As indicated, at least three virtual machines should be created from the virtual hard drives. The
correct virtual machine settings are described in the readme for the repository. In order for an
environment to form, the server instances need to connect to each other.
- Start a single instance and, once it is running, make sure that it has either a known IP
address or a known DNS name. Make sure this is one of the IP addresses / DNS names that
was decided on earlier. Once the instance is available, verify that Consul is running. Note
that it will not have elected itself as leader because it expects multiple
servers to be present before it starts acting as a server.
- Start the next instances one by one. Verify that they have connected to the other nodes via the
consul members command.
- Once the last server instance connects the Consul cluster will bootstrap itself and a leader
will be elected. From this point on the Consul cluster will be active and ready to work.
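A quick way to verify that the cluster has formed (these commands run against the local agent on any server node):

```shell
# Lists all known nodes and whether they are alive
consul members

# Shows the Raft peer set and which server is the elected leader
consul operator raft list-peers
```

Until the expected number of servers has joined, `list-peers` will report no leader; once bootstrap completes, one server should show up as leader.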
- Once the cluster is ready to work, the next step is to upload the key-value pairs. These can be
pushed to the cluster via the KV store endpoints
that Consul provides.
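Values can be pushed either with the Consul CLI or the HTTP API; a sketch with hypothetical key paths (adjust them to your environment's layout):

```shell
# Via the CLI, talking to the local agent
consul kv put config/services/queue/host rabbitmq.service.example

# Via the HTTP API, uploading a file as the value of a key
curl -X PUT --data-binary @myservice.json \
    http://127.0.0.1:8500/v1/kv/config/services/myservice
```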
- With the Consul servers up and running and the configuration values set in the KV store the next
step is to create the resource instances that provide insight into the environment. These are
the reverse proxies which allow services to be easily reached outside the environment and the
Consul UI for checking the Consul cluster status.
- Create one or more instances of the reverse proxy
resource. The Fabio load balancer does not automatically
provide high availability; however, it is fairly trivial to achieve high availability with a
DNS round-robin approach by pointing a single DNS name to the IP addresses of all reverse proxy
instances. Once the first instance is provisioned you can find the UI for Fabio on
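Since Unbound handles DNS on all resources, the round-robin can be expressed as multiple A records for one name; a sketch with placeholder names and addresses:

```
# Unbound configuration snippet: one name resolving to all proxy instances
local-zone: "example.environment." transparent
local-data: "proxy.example.environment. A 10.0.0.21"
local-data: "proxy.example.environment. A 10.0.0.22"
```

Clients resolving `proxy.example.environment` then receive both addresses and spread their requests across the proxy instances.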
- Create one or more instances of the Consul UI
resource. Once the instance is provisioned the UI can be found on
- Once the Consul servers and the Consul management services are available, the supporting services
can be added to the environment. Supporting services that are optional are marked as such; if you
have decided not to include them, you can skip those steps.
- Create at least three instances of the
RabbitMQ resource. The RabbitMQ
documentation should help you decide how many
instances you need for your purposes. In general an odd number of nodes is recommended, with a
minimum of three nodes for high availability. After the instances have been provisioned you
can reach the management page via
- Once the RabbitMQ cluster is up you can create the necessary vhosts
and users. The Calvinverse.Infrastructure repository
provides a description of the minimum vhosts, users and queues that should be created.
Make sure to create at least an administrator-level user, which will be used by
Vault to create temporary users in RabbitMQ for services that
write to exchanges or read from queues.
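The vhost and user creation can be done with rabbitmqctl on any cluster node; the names below are placeholders, not values prescribed by Calvinverse:

```shell
# Hypothetical vhost and administrator user for Vault to manage
rabbitmqctl add_vhost builds
rabbitmqctl add_user vault.admin 'REPLACE_ME'
rabbitmqctl set_user_tags vault.admin administrator
rabbitmqctl set_permissions -p builds vault.admin ".*" ".*" ".*"
```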
- Create at least two instances of the Vault resource.
- Once the Vault instances have been provisioned you can
initialize one instance. This
provides a number of unseal keys, of which
a subset is required to unseal each Vault instance. Note that initialization only needs
to be done on a single node, but all nodes need to be unsealed individually before they can be used.
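A sketch of the initialize-then-unseal sequence using the Vault CLI (the share and threshold counts are illustrative defaults, not required values):

```shell
# Run once, on a single node; prints the unseal keys and the initial root token
vault operator init -key-shares=5 -key-threshold=3

# Run on EVERY node, repeated with different unseal keys until the
# threshold is reached on that node
vault operator unseal

# Confirm the node reports 'Sealed: false'
vault status
```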
- Once the Vault instances are initialized and unsealed you can mount
secret engines and set
policies that describe how the
secret engines may be used. The minimum secret engine that should be mounted is the
RabbitMQ secret engine. Additionally
at least one authentication method should be configured for authenticating users. The Calvinverse.Infrastructure repository
provides scripts and configuration files to mount both the RabbitMQ secret engine and the
LDAP authentication method for user authentication.
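A sketch of mounting the RabbitMQ secret engine and enabling LDAP authentication (the connection URI and credentials are placeholders; they must match the administrator user created in RabbitMQ):

```shell
# Mount the RabbitMQ secret engine and point it at the cluster's management API
vault secrets enable rabbitmq
vault write rabbitmq/config/connection \
    connection_uri="http://rabbitmq.service.example:15672" \
    username="vault.admin" \
    password="REPLACE_ME"

# Enable an authentication method for users
vault auth enable ldap
```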
- Next deploy the metrics instances so that you can get information about the status of all your
instances. Deploy the metrics instances in the following order:
- First deploy an instance of the InfluxDb
resource. Since the open-source version does not support high availability it is sensible to
deploy only one instance. Once the instance has been provisioned and connected to the Consul
cluster, metrics should start streaming into the database.
- Deploy an instance of the Grafana
resource. After provisioning the instance you can reach it on
http://<ENVIRONMENT_DNS_NAME>/dashboards/metrics. Initially this instance will not have any
dashboards. You can either create those manually or import them by pushing the
dashboard definitions to the
Consul K-V store, from where they will automatically be provisioned into Grafana.
- Finally you can optionally deploy an instance of Kapacitor and Chronograf.
These services provide alerting and a different way of displaying metrics information.
- The last of the supporting services are the document and log processing services. These are used
to process, store and display logs and other documents which are generated in the environment.
Deploy these resources in the following order:
- First deploy multiple instances of the Elasticsearch
resource. As with the other H/A resources you will need an odd number of instances with three
being the minimum.
- Once the Elasticsearch cluster is running you can deploy a single instance of the
Kibana resource. Once provisioning
is complete you can find it on
The monitoring tab in Kibana provides
information about your Elasticsearch cluster.
- Finally you can deploy one or more instances of the Logstash
resource. You can have as many of these instances as you need to process all your logs.
Log-processing rules can be loaded into the Consul K-V store, from where they will be provided
to the Logstash instances. For an example have a look at the
- Finally the build instances can be added to the environment. The first instance that should be
added is that of the Jenkins build controller.
- Once the build controller has been provisioned, one or more
build agents can be provisioned.
The agents will automatically connect to the build controller once they are given
authorization to connect to Vault, from where they obtain the username and password to
connect to the build controller.
- The final resource that can optionally be added is the Nexus
resource which stores artefacts, packages and Docker image layers.