If a network like `matrix-whatever` already exists for some reason,
the `docker_network` module would not create our `matrix` network.
We work around it by avoiding `docker_network` and creating the network manually.
Fixes GitHub issue #12.
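A minimal shell sketch of the manual approach (the actual task in the playbook
may differ slightly): check for a network with exactly that name before creating it.

```sh
# `docker network inspect` matches the exact name, unlike the fuzzy name matching
# that tripped up the `docker_network` module.
docker network inspect matrix >/dev/null 2>&1 || docker network create matrix
```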
`--log-driver=none` is used for all Docker containers now.
All these containers are started through systemd anyway, so their output already ends up in journald;
there's no need for Docker to log the same thing again using the default `json-file` driver.
That duplicate logging kept growing `/var/lib/docker/containers/..` until the service/container was restarted.
As a result of this, things like `docker logs matrix-synapse` won't work anymore.
`journalctl -u matrix-synapse` is how one can see the logs.
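To illustrate the trade-off (the container below is a throwaway example,
not one of the playbook's units):

```sh
docker run --log-driver=none --name=logging-test alpine echo hello
docker logs logging-test    # fails: the "none" driver keeps nothing for `docker logs` to read
docker rm logging-test

# The playbook's services are systemd units, so their output is read from journald:
journalctl -u matrix-synapse        # full log for the unit
journalctl -u matrix-synapse -f     # follow new entries
```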
When the playbook was run with `--tags=setup-nginx-proxy`,
it wouldn't go into `setup_corporal.yml`, which meant it skipped
a bunch of `set_fact` calls that override important
nginx proxy configuration.
These variable overrides now run on every invocation (tagged with `always`)
to avoid such problems in the future.
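With the overrides tagged `always`, even a tag-limited run like the following
executes them before doing the nginx proxy work (the inventory and playbook
paths below are illustrative):

```sh
ansible-playbook -i inventory/hosts setup.yml --tags=setup-nginx-proxy
```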
This disables federation on port 80, as it's
not necessary there. We also disable the old Angular webclient.
On the federation port (8448), we disable the client APIs,
as they are not necessary there. They can even cause trouble
if one doesn't know about them and assumes that guarding the client
APIs on port 80 is enough.
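A rough way to probe that split from the outside (matrix.example.com is a
placeholder; the exact error responses depend on the nginx configuration in use):

```sh
curl 'http://matrix.example.com/_matrix/federation/v1/version'           # federation API on port 80: should be rejected
curl -k 'https://matrix.example.com:8448/_matrix/client/versions'        # client API on the federation port: should be rejected
curl -k 'https://matrix.example.com:8448/_matrix/federation/v1/version'  # federation where it belongs: should answer
```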
Moving away from using the default bridge network to using our own.
This isolates our services from other Docker containers running
on the default network on the same host.
The benefits are:
- isolation is a little better - we no longer share a default
bridge network with any other containers that might be running on the host
- there are no longer hard dependencies - we do service discovery
by DNS name, and not via explicit `--link` usage during container start,
so containers can start out of order and fail without bringing down others
with them
(`matrix-nginx-proxy` can continue running, even if one of the other services dies)
In the future, when other services get introduced,
the increased resilience and simplicity will help as well.
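A stripped-down illustration of the idea (image and container names here are
examples, not the playbook's exact invocations):

```sh
docker network create matrix

# Containers on this network resolve each other by container name via Docker's
# embedded DNS - no `--link`, no fixed start order.
docker run -d --network=matrix --name=matrix-postgres -e POSTGRES_PASSWORD=example postgres:latest
docker run --rm --network=matrix alpine ping -c 1 matrix-postgres
```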
Until now, we were starting from a fresh configuration generated
by Synapse and manipulating it with regexes and line replacements
until it worked.
That approach is fragile and unpredictable, so we're moving to a static
configuration file generated from a Jinja template.
The upside is that the configuration will be stable and predictable.
The downside of this new approach is that any manual configuration changes
made after the playbook is done will be thrown away on future playbook
invocations.
There are 2 ways to work around the need for manual configuration
changes though:
- making them part of this playbook and its default template
configuration files (which benefits everyone)
- going your own way for a given host and overriding the template files
that get used (that is, the
`matrix_synapse_template_synapse_homeserver` or
`matrix_synapse_template_synapse_log` variables), as sketched below
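For the second option, one way to point the playbook at your own template is an
ordinary Ansible variable override. The variable names are the playbook's own;
the path and the `--extra-vars` approach below are just an example (a host_vars
file works as well), assuming the variable holds the template file to use:

```sh
ansible-playbook -i inventory/hosts setup.yml \
  --extra-vars 'matrix_synapse_template_synapse_homeserver=/path/to/my/homeserver.yaml.j2'
```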
This playbook does not set up guest access in Synapse anyway,
so until the need arises (or someone asks for it), guest access
is removed from riot-web's UI too.
As for supporting custom URLs, that also doesn't seem like
something that'd be useful to most deployments.
Since cbee084ac1, this playbook supports Postgres 10.x,
but keeps existing Postgres-9.x installs on 9.x.
This playbook can now also be run with `--tags=upgrade-postgres`
to make it upgrade from Postgres 9.x to 10.x (or other versions
in the future).
The playbook simply avoids trying to set up a Postgres 10
database on top of existing 9.x data files, as that makes Postgres complain.
Due to this, existing installs (still on 9.x) are detected
and left on Postgres 9.x.
They need to be upgraded to Postgres 10.x manually.
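An explicit upgrade run, using the tag mentioned above, looks something like
this (the inventory and playbook paths are illustrative):

```sh
ansible-playbook -i inventory/hosts setup.yml --tags=upgrade-postgres
```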
Switching from avhost/docker-matrix (silviof/docker-matrix)
to matrixdotorg/synapse.
The avhost/docker-matrix (silviof/docker-matrix) image used to bundle
in the coturn STUN/TURN server, so as part of the move,
we're separating this into a separately-run service
(matrix-coturn.service, powered by instrumentisto/coturn-docker-image).
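Since coturn is a separate systemd service now, it can be inspected and
restarted on its own, for example:

```sh
systemctl status matrix-coturn.service
journalctl -u matrix-coturn.service        # its logs, like the others', live in journald
systemctl restart matrix-coturn.service    # restarting it doesn't touch matrix-synapse
```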
A `.log.config` file may be generated with a different
level of indentation depending on what generates it
(Docker image, etc.).
With this patch, we tolerate different levels of indentation
(2 spaces, 4 spaces, etc.) and don't break the configuration.
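The general technique looks roughly like this, shown with sed; the playbook does
it via Ansible, and both the key and the value below are made up for illustration:

```sh
# Capture any amount of leading whitespace and re-emit it, so files indented with
# 2 or 4 spaces (or tabs) are handled without breaking the YAML structure.
sed -E -i 's/^([[:space:]]*)filename:.*/\1filename: \/data\/homeserver.log/' homeserver.log.config
```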
When using matrix-nginx-proxy, the file permissions are organized
in a way that lets matrix-nginx-proxy read the challenge files
produced by acmetool.
However, when one's own/external webserver was used (like nginx
with our generated sample configuration), this didn't work.
From now on, we proxy the HTTP requests to port 402 in such a case,
which fixes the problem.
The matrix-nginx-proxy was reloaded on the 3rd day of the month (`15 4 3 * *`),
which makes no sense - that's far too infrequent.
The reload is now aligned with the certificate renewal time (+5 minutes).
The non-working script is supposed to be fixed
by https://github.com/matrix-org/synapse/pull/2375
To have it work, we'd need an updated Docker image
of `silviof/matrix-riot-docker:latest`, which is not yet available
at the time of this commit.
Still, the previous patched synapse_port_db didn't work well either,
so it's not like we're regressing much by getting rid of it.
As described in
https://github.com/matrix-org/synapse/issues/2438#issuecomment-327424711,
using our own SSL certificates for the federation port is more fragile,
as renewing them could cause federation outages.
The recommended setup is to use the self-signed certificates generated
by Synapse.
On the port 443 (matrix-nginx-proxy) side, we still use the Let's Encrypt
certificates, which ensures API consumers work without having to trust
"our own CA".
Having done this, we never need to restart Synapse anymore,
as no new SSL certificates need to be applied there.
It's just matrix-nginx-proxy that needs to pick up the renewed certificates,
and it doesn't even need a full restart, as an "nginx reload" does the job
of switching to the new SSL certificates.
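In practice, picking up renewed certificates is as light as one of these (the
exact mechanism depends on how the matrix-nginx-proxy unit is set up; both are
shown as examples):

```sh
# reload via systemd, if the unit defines an ExecReload
systemctl reload matrix-nginx-proxy.service

# or reload nginx directly inside the running container
docker exec matrix-nginx-proxy nginx -s reload
```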
The move keeps everything in the /matrix directory, so that we
don't contaminate anything else on the system or risk
clashing with something else.
We're also retrieving certificates separately for the Riot and Matrix domains,
which should help in multiple ways:
- it allows them to be very different (completely separate base domains..)
- it allows Riot to be disabled in the playbook some time later
without breaking the code
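Conceptually, it amounts to asking acmetool for each domain on its own (example
domains; the playbook drives acmetool for you, keeping its state under /matrix):

```sh
acmetool want matrix.example.com
acmetool want riot.example.com
```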