By default, relay bot functionality is disabled. The bridge user permissions depend on the relay bot: if it is enabled, base domain users are on the `relay` level; otherwise they remain on `user`.
I changed the conditional statement in the Prosody systemd template to bind the localhost port by default if people have set `matrix_nginx_proxy_enabled: false`.
Hopefully that should make it the default behaviour now.
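Conceptually, the template change boils down to something like the following sketch (the flag, port and variable handling in the actual template may differ):

```
{# Sketch only: publish Prosody's HTTP/websocket port on localhost when the playbook's own reverse-proxy is disabled. #}
{% if not matrix_nginx_proxy_enabled | default(false) %}
-p 127.0.0.1:5280:5280 \
{% endif %}
```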
Jitsi-meet enabled websockets by default, claiming better reliability.
Matrix-nginx-proxy configuration has been set up according to the
Prosody documentation: https://prosody.im/doc/websocket
**HSTS Preloading:**
In its strongest and recommended form, the [HSTS policy](https://www.chromium.org/hsts) includes all subdomains, and indicates a willingness to be “preloaded” into browsers:
`Strict-Transport-Security: max-age=31536000; includeSubDomains; preload`
**X-Xss-Protection:**
`1; mode=block` which tells the browser to block the response if it detects an attack rather than sanitising the script.
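For reference, emitting these headers in nginx looks roughly like this (illustrative; not necessarily the exact lines the playbook's templates produce):

```
# Illustrative nginx directives for the two headers discussed above.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
add_header X-XSS-Protection "1; mode=block" always;
```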
Expected to have regressed after https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/1008
This patch comes with its own downsides (as described in the comments
for matrix_prometheus_node_exporter_container_http_host_bind_port),
but at least there's:
- no security issue
- metrics remain readable from matrix-prometheus (even if the network metrics are inaccurate)
A better patch is certainly welcome.
These old versions of TLS rely on MD5 and SHA-1, both now broken, and contain other flaws. TLS 1.0 is no longer PCI-DSS compliant and the TLS working group has adopted a document to deprecate TLS 1.0 and TLS 1.1.
See these last commits:
tulir/mautrix-signal@4fc34330c1f6947aece67863b0d04da34c776f80
tulir/mautrix-signal@64bc5c36a509ba435a0b01cf44afb1b5d2642efd
tulir/mautrix-signal@ddda1666d41d28750cc59d070e4388b24add6ad9
Related to https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/963
This also simplifies Prerequisites, which is great.
It'd be nice if we were doing these checks in some optional manner
and reporting them as helpful messages (using
`matrix_playbook_runtime_results`), but that's more complicated.
I'd rather drop these checks completely.
This variable was previously undefined in the role and was only getting
defined via `group_vars/matrix_servers`.
We now properly initialize it (and its good default value) in the role
itself.
Fixes a regression caused by a5ee39266c.
If the user id and group id were different from 991:991
(which used to be a hardcoded default for us long ago),
there was a mismatch between what Synapse was trying to use (991:991)
and what it was actually started with (in `--user=..`). It was then
trying to change ownership, which was failing.
This was mostly affecting newer installations which were not using the
991:991 defaults we had long ago (since a1c5a197a9).
We're talking about a webserver running on the same machine, which
imports the configuration files generated by the `matrix-nginx-proxy`
in the `/matrix/nginx-proxy/conf.d` directory.
Users who run an nginx webserver on some other machine will need to do
something different.
This gives us the possibility to run multiple instances of
workers that don't expose a port.
Right now, we don't support that, but in the future we could
run multiple `federation_sender` or `pusher` workers, without
them fighting over naming (previously, they'd all be named
something like `matrix-synapse-worker-pusher-0`, because
they'd all define `port` as `0`).
This leads to much easier management and potential safety
features (validation). In the future, we could try to avoid port
conflicts as well, but it didn't seem worth the effort to do it now.
Our port ranges seem large enough.
This can also pave the way for a "presets" feature
(similar to `matrix_nginx_proxy_ssl_presets`) which makes it even easier
for people to configure worker counts.
The quotes around "host" for both `--pid` and `--net` were
causing trouble for me:
> docker: --pid: invalid PID mode.
and:
> docker: Error response from daemon: network "host" not found.
I've also changed the `-v` call to `--mount` for consistency with the
rest of the playbook.
Also includes the dashboards for Synapse and for Node Exporter.
Again, this has only been tested on Debian amd64 so far, but the Grafana Docker image is available for arm64 and arm32. Nice.
Basic system stats, to show things the Synapse metrics
can't show, such as resource usage by bridges, etc.
Seems to work fine as well.
This too has only been tested on Debian amd64 so far.
I felt that adding another variable was probably going to be the easiest way to do this. I may end up adding another variable to enable this feature, for consistency with some of the other things.
This passes any arguments given to 'matrix-postgres-cli' to the 'psql' command.
Examples:
$ # start an interactive shell connected to a given db
$ sudo matrix-postgres-cli -d synapse
$ # run a query, non-interactively
$ sudo matrix-postgres-cli -d synapse -c 'SELECT group_id FROM groups;'
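Under the hood, this amounts to appending `"$@"` to the `psql` invocation inside the generated script. A minimal sketch of the idea (the real script's container invocation, user and environment handling differ):

```
#!/bin/bash
# Illustrative wrapper only: forward all arguments passed to this script to psql.
# The container name and database user below are assumptions.
docker exec -it matrix-postgres psql -U matrix "$@"
```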
If they do, our next playbook runs would simply revert it
and report "changed" for that task.
There's no benefit to letting the bridge spew a new config file.
This does not apply to the mautrix whatsapp bridge, because that one
is written in Go (not Python) and takes different flags. There's no
equivalent flag there.
Fixes a regression introduced in f6097fbba1, which was causing Synapse
to die with this error message:
> ValueError: sender_localpart needs characters which are not URL encoded.
These are just defensive cleanup tasks that we run.
In the good case, there's nothing to kill or remove, so they trigger an
error like this:
> Error response from daemon: Cannot kill container: something: No such container: something
and:
> Error: No such container: something
People often ask us if this is a problem, so instead of always having to
answer with "no, this is to be expected", we'd rather eliminate it now
and make logs cleaner.
In the event that:
- a container is really stuck and needs cleanup using kill/rm
- and cleanup fails, and we fail to report it because of error
suppression (`2>/dev/null`)
.. we'd still get an error when launching ("container name already in use .."),
so it shouldn't be too hard to investigate.
Not specifying bind addresses for the worker resulted in this warning:
> synapse.app - 47 - WARNING - None - Failed to listen on 0.0.0.0, continuing because listening on [::]
Additionally, metrics listening only on 127.0.0.1 seems like a no-op.
Only having it accessible from within the container is likely not what
we intend. Changed that to all interfaces as well.
Whether it actually gets exposed or not depends on the systemd service
and `matrix_synapse_workers_container_host_bind_address`.
This switches the `docker exec` method of spawning
Synapse workers inside the `matrix-synapse` container with
dedicated containers for each worker.
We also have dedicated systemd services for each worker,
so things are now:
- more consistent with everything else (we don't use systemd
instantiated services anywhere)
- we don't need the "parse systemd instance name into worker name +
port" part
- we don't need to keep track of PIDs manually
- we don't need jq (fewer dependencies)
- workers dying would be restarted by systemd correctly, like any other
service
- `docker ps` shows each worker separately and we can observe resource
usage
We do this by creating one more layer of indirection.
First we reach some generic vhost handling matrix.DOMAIN.
A bunch of override rules are added there (capturing traffic to send to
ma1sd, etc). nginx-status and similar generic things also live there.
We then proxy to the homeserver on some other vhost (only Synapse being
available right now, but repointing this to Dendrite or other will be
possible in the future).
Then that homeserver-specific vhost does its thing to proxy to the
homeserver. It may or may not use workers, etc.
Without matrix-corporal, the flow is now:
1. matrix.DOMAIN (matrix-nginx-proxy/matrix-domain.conf)
2. matrix-nginx-proxy/matrix-synapse.conf
3. matrix-synapse
With matrix-corporal enabled, it becomes:
1. matrix.DOMAIN (matrix-nginx-proxy/matrix-domain.conf)
2. matrix-corporal
3. matrix-nginx-proxy/matrix-synapse.conf
4. matrix-synapse
(matrix-corporal gets injected at step 2).
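In nginx terms, the indirection looks roughly like this (a simplified sketch; port numbers and file layout are assumptions, not the literal generated config):

```
# matrix-domain.conf -- generic vhost for matrix.DOMAIN
server {
    listen 8080;
    server_name matrix.DOMAIN;
    # ma1sd overrides, nginx-status, etc. live here ...
    location / {
        proxy_pass http://127.0.0.1:12080; # the homeserver-specific vhost
    }
}

# matrix-synapse.conf -- homeserver-specific vhost
server {
    listen 12080;
    location / {
        proxy_pass http://matrix-synapse:8008;
    }
}
```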
This removes some `multi-target.wants` symlinks as well, etc.
But despite systemd saying:
> Removed symlink /etc/systemd/system/matrix-synapse.service.wants/matrix-synapse-worker@appservice:0.service
.. I still see such symlinks there for some reason, so keeping the
code (below) to find & delete them still seems like a good idea.
There was a `matrix_nginx_proxy_enabled|default(False)` check, but:
- it didn't seem to work reliably for some reason (hmm)
- referring to a `matrix_nginx_proxy_*` variable from within the
`matrix-synapse` role is not ideal
- exposing always happened on `127.0.0.1`, which may not be good enough
for some rarer setups (where the own webserver is external to the host)
I guess it didn't hurt to do it until now, but it's not great serving
federation APIs on the client-server API port, etc.
matrix-corporal doesn't work yet (still something to be solved in the
future), but its firewalling operations will also be sabotaged
by Client-Server APIs being served on the federation port (it's a way to get around its firewalling).
If we load it at runtime, during matrix-synapse role execution,
it's good enough for matrix-synapse and all roles after that,
but.. it breaks when someone uses `--tags=setup-nginx-proxy` alone.
The downside of including this vars file like this in `setup.yml`
is that the variables contained in it cannot be overridden by the user
(in their inventory's `vars.yml`).
... but it's not like overriding these variables was possible anyway
when including them at runtime.
Some people run Coturn or Jitsi, etc., by themselves and disable it
in the playbook.
Because the playbook is trying to be nice and clean up after itself,
it was deleting these Docker images.
However, some people wish to pull and use them separately and would rather
they not get deleted.
We could make this configurable for the sake of this special case, but
it's simpler to just avoid deleting these images.
It's not like this "cleaning things up" thing works anyway.
As time goes on, the playbook gets updated with newer image tags
and we leave so many images behind. If one doesn't run
`docker system prune -a` manually once in a while, they'd get swamped
with images anyway. Whether we leave a few images behind due to the lack
of this cleanup now is pretty much irrelevant.
We log everything in systemd/journald for every service already,
so there's no need for double-logging, bridges rotating log files
manually and other such nonsense.
In short, this makes Synapse a 2nd class citizen,
preparing for a future where it's just one-of-many homeserver software
options.
We also no longer have a default Postgres superuser password,
which improves security.
The changelog explains more as to why this was done
and how to proceed from here.
I had intentionally held it back in 39ea3496a4
until:
- it received more testing (there were a few bugs during the
migration, but now it seems OK)
- this migration guide was written
While administering the server, we occasionally invoke this script interactively (with the "non-interactive" switch still in place) and then sit at the desk waiting 300 seconds for this timer to run out.
The systemd-timer already uses a 3h randomized delay for automatic renewals, which serves this purpose well.
The `mobile` branch got merged to `master`, which ends up becoming
`:latest`. It's a "rewrite" of the bridge's backend and only
supports a Postgres database.
We'd like to go back (well, forward) to `:latest`, but that will take
a little longer, because:
- we need to handle and document things for people still on SQLite
(especially those with external Postgres, who are likely on SQLite for
bridges)
- I'd rather test the new builds (and migration) a bit before
releasing it to others and possibly breaking their bridge
Brave ones who are already using the bridge with Postgres
can jump on `:latest` and report their experience.
Fixes https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/756
Related to https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/737
I feel like timers are somewhat more complicated and dirty (compared to
cronjobs), but they come with these benefits:
- log output goes to journald
- on newer systemd distros, you can see when the timer fired, when it
will fire, etc.
- we don't need to rely on cron (reducing our dependencies to just
systemd + Docker)
Cronjobs work well, but it's one more dependency that needs to be
installed. We were even asking people to install it manually
(in `docs/prerequisites.md`), which could have gone unnoticed.
Once in a while someone says "my SSL certificates didn't renew"
and it's likely because they forgot to install a cron daemon.
Switching to systemd timers means that installation is simpler
and more unified.
This reverts commit 2a25b63bb6.
Looking at other roles, we trigger building regardless of this.
It's better to always trigger it, because it's less fragile.
If the build fails and we only trigger it on "git changes"
then we won't trigger it for a while. That's not good.
Triggering it each and every time may seem like a waste,
but it supposedly runs quickly due to Docker caching.
This variable has been useless since 2019-01-08.
We probably don't need to check for its usage anymore,
given how much time has passed since then, but ..
Before we potentially clone to that path, we'd better make sure it exists.
We also simplify `when` statements a bit.
Given that we're in `setup_install.yml`, we know that the bridge is enabled,
so there's no need to check for that.
Not sure if it breaks with them or not, but no other directive
uses quotes and the nginx docs show examples without quotes,
so we're being consistent with all of that.
The different configurations are now all lower case, for consistent
naming.
`matrix_nginx_proxy_ssl_config` is now called
`matrix_nginx_proxy_ssl_preset`. The different options for "modern",
"intermediate" and "old" are stored in the main.yml file, instead of
being hardcoded in the configuration files. This will improve the
maintainability of the code.
The "custom" preset was removed. Now, if one of the variables is set, it
is used instead of the preset's value. This allows mixing and matching
more easily, for example using all the intermediate options but only
supporting TLSv1.2. This also provides better backward
compatibility.
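For example, in one's `vars.yml` (illustrative values):

```
# Use the Mozilla "intermediate" preset, but restrict the protocols further.
matrix_nginx_proxy_ssl_preset: "intermediate"
matrix_nginx_proxy_ssl_protocols: "TLSv1.2"
```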
While it's kind of nice having it, it's also somewhat raw
and unnecessary.
Having a good default and not even mentioning it seems better
for most users.
People who need a more exposed bridge (rare) can
override the default configuration using
`matrix_mautrix_signal_configuration_extension_yaml`.
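An override via that variable could look like the sketch below. The `hostname` key is only an assumption about the bridge's configuration schema, shown to illustrate the extension mechanism:

```
matrix_mautrix_signal_configuration_extension_yaml: |
  appservice:
    # Hypothetical example: listen on all interfaces instead of localhost only.
    hostname: 0.0.0.0
```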
The answer to these is: it's good to have them in both places.
The role defines the obvious things it depends on (not knowing
what setup it will find itself into), and then
`group_vars/matrix_servers` "extends" it based on everything else it
knows (the homeserver being Synapse, whether or not the internal
Postgres server is being used, etc.)
We need to suppress systemd service-stopping requests in certain rare
cases like https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/771
That issue seems to describe a case, where a migration from mxisd to
ma1sd was happening (DB files had just been moved), and then we were
attempting to stop `matrix-ma1sd.service` so we could import that database into
Postgres. However, there's neither `matrix-mxisd.service`, nor
`matrix-ma1sd.service` after `migrate_mxisd.yml` had just run, so
stopping `matrix-ma1sd.service` was failing.
Fixes a problem like this:
> File "/usr/lib/python3.8/site-packages/mautrix/bridge/e2ee.py", line 79, in __init__
> raise RuntimeError("Unsupported database scheme")
mautrix-python's e2ee.py module expects to find `postgres://` instead of
`postgresql://`.
Our old (base-path -> data-path) SQLite migration can't work otherwise.
It's probably not necessary to keep it anymore, but since we still do,
at least we should take care to ensure it works.
Raspbian doesn't seem to support arm64, so this is somewhat pointless
right now.
However, they might in the future. Doing this should also unify us
some more with `setup_debian.yml` with the ultimate goal of
eliminating `setup_raspbian.yml`.
Until now, we've only supported non-amd64 on Raspbian.
Seems like there are now people running Debian/Ubuntu on ARM,
so we were forcing them into amd64 Docker packages.
I've gotten a report that this change fixes support
for Ubuntu Server 20.04 on RPi 4B.
A new variable called `matrix_nginx_proxy_ssl_config` is created for
controlling how the nginx proxy configures SSL. A new configuration
validation option and other auxiliary variables are also created.
A new configuration variable called `matrix_nginx_proxy_ssl_config` is
created. It allows setting the SSL configuration easily, using the
default options proposed by Mozilla. The default configuration is set to
"Intermediate", removing the weak ciphers used in the old
configurations.
The new variable can also be set to "Custom" for more granular control.
This allows setting three other variables:
- `matrix_nginx_proxy_ssl_protocols`,
- `matrix_nginx_proxy_ssl_prefer_server_ciphers`
- `matrix_nginx_proxy_ssl_ciphers`
Also a new task is added to validate the SSL configuration variable.
Revert "Correct inabillity for appservice-discord to connect"
This reverts commit 673e19f830.
While certain things do work even with such a local URL, sending
messages leads to an error like this:
> [DiscordBot] verbose: DiscordAPIError: Invalid Form Body
> avatar_url: Not a well formed URL.
Fixes https://github.com/Half-Shot/matrix-appservice-discord/issues/649
The sample configuration file for appservice-discord
c29cfc72f5/config/config.sample.yaml (L8)
explicitly says that we need a public URL.
Now that 0.7.2 is out, the Docker image supports Postgres
and we can do the (SQLite -> Postgres) migration.
I've also found out that we needed to fix up the `tokens.ex_date` column
data type a bit to prevent matrix-registration from raising exceptions
when comparing `datetime.now()` with `ex_date` coming from the database.
Example:
> File "/usr/local/lib/python3.8/site-packages/matrix_registration/tokens.py", line 58, in valid
> expired = self.ex_date < datetime.now()
> TypeError: can't compare offset-naive and offset-aware datetimes
In cases where pgloader is not enough and we need to do some additional
migration work after it, we can now use
`additional_psql_statements_list` and
`additional_psql_statements_db_name`.
This is to be used when migrating `matrix-registration`'s data at the
very least.
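A hypothetical usage sketch (variable names as mentioned above; the SQL statement is illustrative, not necessarily the exact one needed for matrix-registration):

```
additional_psql_statements_db_name: matrix_registration
additional_psql_statements_list:
  - "ALTER TABLE tokens ALTER COLUMN ex_date TYPE TIMESTAMP WITH TIME ZONE;"
```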
This switches us to a container image maintained by the
matrix-registration developer.
0.7.2 also supports a `base_url` configuration option we can use to
make it easier to reverse-proxy at a different base URL.
We still keep some workarounds, because of this issue:
https://github.com/ZerataX/matrix-registration/issues/47
We were running into conflicts, because having initialized
the roles (users) and databases, trying to import leads to
errors (role XXX already exists, etc.).
We were previously ignoring the Synapse database (`homeserver`)
when upgrading/importing, because that one gets created by default
whenever the container starts.
For our additional databases, it's a similar situation now.
It's not created by default as soon as Postgres starts with an empty
database, but rather we create it as part of running the playbook.
So we either need to skip those role/database creation statements
while upgrading/importing, or to avoid creating the additional database
and rely on the import for that. I've gone for the former, because
it's already similar to what we were doing and it's simpler
(it lets `setup_postgres.yml` be the same in all scenarios).
Auto-migration and everything seems to work. It's just that
matrix-registration cannot load the Python modules required
for talking to a Postgres database.
Tracked here: https://github.com/ZerataX/matrix-registration/issues/44
Until this gets fixed, we'll continue default to 'sqlite'.
I was thinking that it makes sense to be more specific,
and using `_postgres_` also separated these variables
from the `_database_` variables that ended up in bridge configuration.
However, @jdreichmann makes a good point
(https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/740#discussion_r542281102)
that we don't need to be so specific and can allow for other engines (like MySQL) to use these variables.
Regression since 2d99ade72f and 9bf8ce878e, respectively.
When SQLite is to be used, these bridges expect an `sqlite://`
connection string, and not a plain file name (path), like Appservice
Discord and mautrix-whatsapp do.
Instead of passing the connection string, we can now pass a name of a
variable, which contains a connection string.
Both are supported for having extra flexibility.
Since we'll likely have generic SQLite database importing
via [pgloader](https://pgloader.io/) for migrating bridge
databases from SQLite to Postgres, we'd rather avoid
calling the "import Synapse SQLite database" command
as just `--tags=import-sqlite-db`.
Similarly, for the media store, we'd like to mention that it's
related to Synapse as well.
We'd like to be more explicit, so as to be less confusing,
especially in light of other homeserver implementations
coming in the future.
People can toggle between them now. The playbook also defaults
to using SQLite if an external Postgres server is used.
Ideally, we'd be able to create databases/users in external Postgres
servers as well, but our initialization logic (and `docker run` command,
etc.) hardcode too many things right now.
While these modules are really nice and helpful, we can't use them
for at least 2 reasons:
- for us, Postgres runs in a container on a private Docker network
(`--network=matrix`) without usually being exposed to the host.
These modules execute on the host so they won't be able to reach it.
- these modules require `psycopg2`, so we need to install it before
using it. This might or might not be its own can of worms.
The tasks in `create_additional_databases.yml` will likely
ensure `matrix-postgres.service` is started, etc.
If no additional databases are defined, we'd rather not execute that
file and all these tasks that it may do in the future.
> Invalid data passed to 'loop', it requires a list, got this instead: matrix_postgres_additional_databases. Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup.
Well, or working around it, as I've done in this commit (which seems
more sane than `wantlist=True` stuff).
To avoid needing to have `jq` installed on the machine, we could:
- try to run jq in a Docker container using some small image providing
that
- better yet, avoid `jq` altogether
Moving it above the "uninstalling" set of tasks is better.
Extracting it out to another file at the same time, for readability,
especially given that it will probably have to become more complex in
the future (potentially installing `jq`, etc.)
v0.7.0 is broken right now, because it calls
`/_matrix/client/r0/admin/register`, which is now at
`/_synapse/admin/v1/register`.
This has been fixed here: 6b26255fea
.. but it's not part of any release.
Switching to `master` (`docker.io/devture/zeratax-matrix-registration:latest`) until it gets resolved.
Reported upstream here: https://github.com/ZerataX/matrix-registration/issues/43
Starting with Docker 20.10, `--hostname` seems to have the side-effect
of making Docker's internal DNS server resolve said hostname to the IP
address of the container.
Because we were giving the mailer service a hostname of `matrix.DOMAIN`,
all requests destined for `matrix.DOMAIN` originating from other
services on the container network were resolving to `matrix-mailer`.
This is obviously wrong.
Initially reported here: https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/748
We normally try to not use the public hostname (and IP address) on the
container network and try to make services talk to one another locally,
but it sometimes could happen.
With this, we use a `matrix-mailer` hostname for the matrix-mailer
container. My testing shows that it doesn't cause any trouble with
email deliverability.
If a service is enabled, a database for it is created in Postgres with a unique password. The service can then use this database for data storage instead of relying on SQLite.
The Docker 19.04 -> 20.10 upgrade contains the following change
in `/usr/lib/systemd/system/docker.service`:
```
-BindsTo=containerd.service
-After=network-online.target firewalld.service containerd.service
+After=network-online.target firewalld.service containerd.service multi-user.target
-Requires=docker.socket
+Requires=docker.socket containerd.service
Wants=network-online.target
```
The `multi-user.target` requirement in `After` seems to be in conflict
with our `WantedBy=multi-user.target` and `After=docker.service` /
`Requires=docker.service` definitions, causing the following error on
startup for all of our systemd services:
> Job matrix-synapse.service/start deleted to break ordering cycle starting with multi-user.target/start
A workaround which appears to work is to add `DefaultDependencies=no`
to all of our services.
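In unit-file terms, the workaround amounts to something like this sketch (only the relevant options; the real unit files contain much more):

```
[Unit]
DefaultDependencies=no
After=docker.service
Requires=docker.service

[Install]
WantedBy=multi-user.target
```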
After recently updating my matrix-docker-ansible-deploy installation, matrix-appservice-discord would refuse to start, logging ECONNREFUSED to https://matrix.[mydomain]:443, which was resolving to 172.18.0.2 because the matrix-mailer container's `--hostname` had claimed that hostname.
Curious why the IRC bridge didn't have this issue, I looked into it and found it connecting to `http://matrix-synapse:8008`. Pointing appservice-discord at that URL resolved the issue.
ma1sd requires the openid endpoints for certain functionality.
Example: 90b2b5301c/src/main/java/io/kamax/mxisd/auth/AccountManager.java (L67-L99)
If federation is disabled, we still need to expose these openid APIs on the
federation port.
Previously, we were doing similar magic for Dimension.
As per its documentation, when running unfederated, one is to enable
the openid listener as well. As per their recommendation, people
are advised to enable it on the Client-Server API port
and use the `federationUrl` variable to override where the federation
port is (making federation requests go to the Client-Server API).
Because ma1sd always uses the federation port (unless you do some
DNS overwriting magic using its configuration -- which we'd rather not
do), it's better if we just default to putting the `openid` listener
where it belongs - on the federation port.
With this commit, we retain the "automatically enable openid APIs" thing
we've been doing for Dimension, but move it to the federation port instead.
We also now do the same thing when ma1sd is enabled.
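For reference, putting the `openid` resource on the federation listener in `homeserver.yaml` looks roughly like this (a sketch; the port number and other listener options are assumptions):

```
listeners:
  - port: 8048
    tls: false
    type: http
    x_forwarded: true
    resources:
      - names: [federation, openid]
        compress: false
```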
We've had a report of the `connection` value getting cut off,
supposedly because it contains something that breaks off the string.
Using `|to_json` takes care of it.
This may be a bit premature, because the bridge didn't work for me
the last time I tried it (RC3).
Some bugs have been fixed to make our config compatible with v1.0.0
though, so it may work for some people (especially those starting
fresh).
I'm not for shipping potentially broken things, but given that we were
using `docker.io/halfshot/matrix-appservice-discord:latest` and that
points to v1.0.0 already (with no other tag we can use), our setup was
already broken in any case.
Now, at least it has some chance of running.
Many people probably didn't even know this - that ansible can be
quite a bit picky about what it will be willing to work with remotely.
Thanks @maxklenk !
Some people requested that `--tags=start` not set up service autostart.
One can now do `--tags=start --extra-vars="matrix_services_autostart_enabled=false"`
to just start services once and not set up autostarting.
It's not like it worked anyway, because we don't have the necessary
services installed for transcription (Jigasi), nor recording (Jibri).
Disabling these, should hopefully disable their related elements
in the Jitsi Web UI.
Fixes https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/726
This supersedes/fixes-up this Pull Request:
https://github.com/spantaleev/matrix-docker-ansible-deploy/pull/719
The Jitsi Web and JVB containers now (in build 5142) always
start by building their own default configuration
(`config.js` and `sip-communicator.properties`, respectively).
The fact that we were generating these files ourselves was no longer of use,
because our configuration was thrown away in favor of the one created
by the containers on startup.
With this commit, we're completely redoing things. We no longer
generate these configuration files. We try to pass the proper
environment variables, so that Jitsi services can generate the
configuration files themselves.
Besides that, we try to use the "custom configuration" mechanism
provided by Jitsi Web and Jitsi JVB (`custom-config.js` and
`custom-sip-communicator.properties`, respectively), so that
we and our users can inject additional configuration.
Some configuration options we had are gone now. Others are no longer
controllable via variables and need to be injected using
the `_config_extension` variables that we provide.
The validation logic that is part of the role should take care
to inform people about how to upgrade (if they're using some custom
configuration, which needs special care now). Most users should not
have to do anything special though.
Since the switch from `-v` to `--mount` (in 1fca917ad1),
we've regressed when `matrix_ssl_retrieval_method == 'none'`.
In such a case, we don't create `/matrix/ssl` directories at all
and shouldn't be trying to mount them into the `matrix-nginx-proxy`
container.
Previously, with `-v`, Docker would auto-create them, effectively hiding
our mistake. Now that `--mount` doesn't do such auto-creation magic,
the `matrix-nginx-proxy` container was failing to start.
Fixes https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/734
`-v` magically creates the source destination as a directory,
if it doesn't exist already. We'd like to avoid this magic
and the potential breakage that it might cause.
We'd rather fail while Docker tries to find things to `--mount`
than have it automatically create directories and fail anyway,
while having contaminated the filesystem.
There's a lot more `-v` instances remaining to be fixed later on.
This is just some start.
Things like `matrix_synapse_container_additional_volumes` and
`matrix_nginx_proxy_container_additional_volumes` were not changed to
use `--mount`, as options for each one are passed differently
(`ro` is `ro`, but `rw` doesn't exist and `slave` is `bind-propagation=slave`).
To avoid breaking people's custom volume mounts, we keep it as it is for now.
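As an illustration of the differing syntax (paths are just examples):

```
# -v form (silently creates a missing source path as a directory):
docker run ... -v /matrix/ssl:/matrix/ssl:ro -v /some/path:/some/path:slave ...

# --mount equivalents (fail if the source path does not exist):
docker run ... \
  --mount type=bind,src=/matrix/ssl,dst=/matrix/ssl,ro \
  --mount type=bind,src=/some/path,dst=/some/path,bind-propagation=slave ...
```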
A deficiency with `--mount` is that it lacks the `z` option (SELinux
ownership changes), and some of our `-v` instances use that. I'm not
sure how supported SELinux is for us right now, but it might be,
and breaking that would not be a good idea.
Fixes https://github.com/spantaleev/matrix-docker-ansible-deploy/issues/716
This patch makes us use more fully-qualified container image names
(either prefixed with docker.io/ or with localhost/).
The latter happens when self-building is enabled.
We've recently had issues where if an image was removed manually
and the service was restarted (making `docker run` fetch it from Docker Hub, etc.),
we'd end up with a pulled image, even though we're aiming for a self-built one.
Re-running the playbook would then not do a rebuild, because:
- the image with that name already exists (even though it's something
else)
- we sometimes had conditional logic where we'd build only if the git
repo changed
By explicitly changing the name of the images (prefixing with localhost/),
we avoid such confusion and the possibility that we'd automatically pull something
which is not what we expect.
Also, I've removed that condition where building would happen on git
changes only. We now always build (unless an image with that name
already exists). We just force-build when the git repo changes.
We'd like the roles to be self-contained (as much as possible).
Thus, the `matrix-nginx-proxy` shouldn't reference any variables from
other roles. Instead, we rely on injection via
`group_vars/matrix_servers`.
Related to #681 (Github Pull Request)
Having it unset in the role itself (while referencing it) is a little strange.
Now people can look at the `roles/matrix-dynamic-dns/defaults/main.yml`
file and figure out everything that's necessary to run the role.
Related to #681 (Github Pull Request)
This broke in 63a49bb2dc.
Proxying the OpenID Connect endpoints is now possible,
but needs to be enabled explicitly now.
Supersedes #702 (Github Pull Request).
This patch builds up on the idea from that Pull Request,
but does things in a cleaner way.
We do this to match Synapse's new default "max_upload_size" (50MB).
This `matrix_nginx_proxy_proxy_matrix_client_api_client_max_body_size_mb`
default value only affects standalone usage of the `matrix-nginx-proxy`
role. When the role is used in the context of the playbook,
the value is dynamically assigned from `group_vars/matrix_servers`.
Somewhat related to #692 (Github Issue).
The regex introduced in 63a49bb2dc seems to take precedence
over the bare location blocks, causing a regression.
> It is important to understand that, by default, Nginx will serve regular expression matches in preference to prefix matches.
> However, it evaluates prefix locations first, allowing the administrator to override this tendency by specifying locations using the = and ^~ modifiers.
Source: https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms
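A minimal illustration of the matching rules (upstream names and paths are hypothetical, not the actual generated config):

```
# A regex location normally wins over a plain prefix location:
location ~* ^/_matrix/identity {
    proxy_pass http://matrix-ma1sd:8090;
}

# The ^~ modifier makes the longest-matching prefix win outright,
# so regex locations are never consulted for these requests:
location ^~ /_matrix/client {
    proxy_pass http://matrix-synapse:8008;
}
```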
also, worker.yaml.j2:
- hone worker_name
- remove worker_pid_file entry (would only be used if worker_daemonize
set to true; also, synapse only knows about the container namespace
and thus can not provide the required host-view PID)
- cherry-pick "Ensure worker config exists in systemd service (#7528)"
from synapse d74cdc1a42e8b487d74c214b1d0ca575429d546a:
"check that the worker config file exists instead of silently failing."
If the SQLite database was from an older version of Synapse, it appears
that Synapse would try to run migrations on it first, before importing.
This was failing, because the file wasn't writable.
Hopefully, this fixes the problem.
Interestingly, no one has reported this failure before #662 (Github
Issue).
It doesn't make sense to keep saying that we support such old Ansible
versions, when we're not even testing on anything close to those.
Time is also passing and such versions are getting more and more
ancient. It's time we bumped our requirements to something that is more
likely to work.
showLabsSettings is the new enableLabs I guess. enableLabs doesn't seem to do anything anymore. It had been deprecated for a while.
This PR also removes @riot-bot:matrix.org as the default welcome_user_id since it doesn't exist anymore.
We recently had a report of the Postgres backup container's log file
growing the size of /var/lib/docker until it ran out of disk space.
Trying to prevent similar problems in the future.
Certain more-minimal Debian installations may not have
lsb-release installed, which makes the playbook fail.
We need lsb-release on Debian, so that ansible_lsb
could tell us if this is Debian or Raspbian.
In #628 I proposed a CORS change that turns out not to be the root of the issue. Caffeine-addled diagnosis leads to sloppy thinking, and this change should be reverted. In fact, if left it will cause problems for new installations.
Even with the v2 updates listed in #503 and partially addressed in #614, this is still needed to enable identity services to function with Element Desktop/Web. Testing on multiple clients with a clean config has confirmed this, at least for my installation.
Fix regression since 2a50b8b6bb (#597).
Dimension is intended to be embedded in various clients,
be it the Element service that we host (at element.DOMAIN),
some other Element (element-desktop running locally), etc.
The when statement is supposed to be on the block, not on the individual task.
It affects all tasks within the block (they're all to be executed when ma1sd is enabled and self-building is requested).
The tag format used in the `ma1sd` repo has changed. Versions no longer
start with 'v', and when building for non-amd64, we also need to strip
off the '-$arch' bit from the Docker image name.
Further, when building the .jar file, `ma1sd` currently names the .jar
based on the project's directory, which we call 'docker-src'. This means
other parts of the `ma1sd` build can't find the .jar file. Remedy this
by ensuring that the dir is called `docker-src/ma1sd`.
`matrix_container_images_self_build` was not really doing anything
anymore. It previously was influencing `matrix_*_self_build` variables,
but it's no longer the case since some time ago.
Individual `matrix_*_self_build` variables are still available.
People that would like to toggle self-building for a specific component
ought to use those.
These variables are also controlled automatically (via
`group_vars/matrix_servers`) depending on `matrix_architecture`.
In other words, self-building is being done automatically for
all components when they don't have a prebuilt image for the specified
architecture. Some components only support `amd64`, while others also
have images for other architectures.
There's no change in the source code. Just a release bump for packaging
reasons. It doesn't matter much for us here, but let's be on the latest
tag anyway.
Postgres setup like
matrix_mautrix_telegram_configuration_extension_yaml: |
appservice:
database: "postgres://XXX:XXX@matrix-postgres:5432/mxtg"
will fail without the right Docker network.
`/etc/nginx/conf.d/default.conf` was previously causing
some issues when used with our `--user`.
It's not the case anymore, so we can remove it.
Fixes #369 (Github Issue).
Depending on the distro, common commands like sleep and chown may either
be located in /bin or /usr/bin.
Systemd added path lookup to ExecStart in v239, allowing only the
command name to be put in unit files and not the full path as
historically required. At least Ubuntu 18.04 LTS is however still on
v237 so we should maintain portability for a while longer.
The current version fails the import, because the volume for the media store is missing. It still fails if the optional shared secret password provider is enabled, so that might need another mount. Commenting out the password provider in homeserver.yaml during the run works as well.
This is mostly here to guard against problems happening
due to server migration and doing `chown -R matrix:matrix /matrix`.
Normally, the file is owned by `1000:1000`, as expected.
If ownership changes, Dimension could still start, but it will fail the
first time it tries to write to the database. Explicitly chowning
before startup guards against this.
Related to #485 and #486 (Github Pull Requests).
Also related to ccc7aaf0ce.
Dimension runs as the `node` user in the container (`1000:1000`).
It doesn't seem like we have a way around it. Thus, its configuration
must also be readable by that user (or group, in this case).
We don't really need to fail in such a spectacular way,
but it's probably good to do. It will only happen for people
who are defining their own user/group id, which is rare.
It seems like a good idea to tell them that this doesn't work
as they expect anymore and to ask them to remove these variables,
which otherwise give them a fake sense of hope.
Related to #486 (Github Pull Request).
If one runs the playbook with `--tags=setup-all`, it would have been
fine.
But running with a specific tag (e.g. `--tags=setup-riot-web`) would
have made that initialization be skipped, and the `matrix-riot-web` role
would fail, due to missing variables.
Ansible will migrate the ownership of the base path and config path, but
manual intervention will be required in order to migrate the ownership
of files in those directories (i.e. dimension.db).
Stop the services:
(local)$ ansible-playbook -i inventory/hosts setup.yml --tags=stop
Fix the permissions on the server:
(server)# chown -Rv "{{ matrix_user_username }}:{{ matrix_user_username }}" "{{ matrix_dimension_base_path }}"
which would typically look like:
(server)# chown -Rv matrix:matrix /matrix/dimension/
Reconfigure Dimension and start the services:
(local)$ ansible-playbook -i inventory/hosts setup.yml --tags=setup-dimension,start
* add permalinkPrefix to riot-web config
* add feature to change default theme of riot-web via its config file
* remove matrix_riot_web_change_default_theme and provide sane default
· 😅 How to keep this in sync with the matrix-synapse documentation?
· regex location matching is expensive
· nginx syntax limit: one location only per block / statement
· thus, lots of duplicate statements in this file
Well, actually 8cd9cde won't work, unless we put the
`|to_nice_yaml` thing on a new line.
We can, but that takes more lines and makes things look uglier.
Using `|to_json` seems good enough.
The whole file is parsed as YAML later on and merged with the
`_extension` variable before being dumped as YAML again in the end.
Hopefully fixes an error like this (which I haven't been able to
reproduce, but..):
> [modules/xmpp/strophe.util.js] <Object.i.Strophe.log>: Strophe: Error: Failed to construct 'RTCPeerConnection': 'matrix.DOMAIN' is not one of the supported URL schemes 'stun', 'turn' or 'turns'.
We define this password in the `sip-communicator.properties`
configuration file, so this is not needed for actually running JVB.
However, it does a (useless) safety check during container startup,
and we need to make that check happy.
We do this for 2 reasons:
- so we can control things which are not controllable using environment
variables (for example `stunServers` in jitsi/web, since we don't wish
to use the hardcoded Google STUN servers if our own Coturn is enabled)
- so playbook variable changes will properly rebuild the configuration.
When using Jitsi environment variables, the configuration is only built
once (the first time) and never rebuilt again. This is not
consistent with the rest of the playbook and with how Ansible operates.
We're not perfect at it (yet), because we still let the Jitsi containers
generate some files on their own, but we are closer and it should be
good enough for most things.
Related to #415 (Github Pull Request).
This keeps the roles cleaner and more independent of matrix-base,
which may be important for people building their own playbook
out of the individual roles and not using the matrix-base role.
This adds into the Riot config.json the field
'default_server_config.m.homeserver.server_name'
with, by default, the value of the playbook's 'matrix_domain' variable.
Riot displays this string in its login page and will now say 'Sign in to
your Matrix account on example.org' (the server name) instead of 'Sign
in ... on matrix.example.org' (the server domain-name).
This string can be configured by setting the playbook variable
'matrix_riot_web_default_server_name'
to any string, so we can make Riot say for example 'Sign in ... on Our
Server'.
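In `config.json` terms, the new field sits alongside the existing homeserver base URL (sketch, with example.org as a placeholder):

```
{
  "default_server_config": {
    "m.homeserver": {
      "base_url": "https://matrix.example.org",
      "server_name": "example.org"
    }
  }
}
```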
This fixes an incorrect indentation in the database specification for
appservice-irc which caused matrix-appservice-irc to refuse to start
with the remarkably unhelpful error message:
```
ERROR:CLI Failed to run bridge.
```
This also updates doc links to the new matrixdotorg repo because the
tedomum repo contains out-of-date documentation.
Synapse v1.9.0 changed some things which made the REST Auth Password
Provider break.
The ma1uta/matrix-synapse-rest-password-provider implements some
workarounds for now and will likely deliver a proper fix in the future.
Not much has changed between the 2 projects, so this should be a
painless transition.
This change allows us to work with both our existing Docker image
(`tedomum/matrix-appservice-irc:latest`) and with the
official Docker image (`matrixdotorg/matrix-appservice-irc`).
The actual change to the official Docker image requires more testing
and will be done separately.
Can you double check that the way I have this set only exposes it locally? It is important that the manhole is not available to the outside world since it is quite powerful and the password is hard coded.
Switching to the official image (vectorim/riot-web) should ensure:
- there's less breakage, as it's maintained by the same team as riot-web
- there's fewer actors we need to trust
- we can upgrade riot-web faster, as newer versions should be released
on Docker hub at the same time riot-web releases are made
Riot used to be fine with it being blank but now it complains. This creates an ugly looking comma when there is an identity server configured but I guess that's fine.
Prompted by: https://matrix.org/blog/2019/11/09/avoiding-unwelcome-visitors-on-private-matrix-servers
This is a bit controversial, because.. the Synapse default remains open,
while the general advice (as per the blog post) is to make it more private.
I'm not sure exactly what kind of server people set up and whether they
want to make the room directory public. Our general goal is to favor
privacy and security when running personal (family & friends) and corporate
homeservers, both of which likely benefit from having a more secure default.
Don't mention systemd-journald adjustment anymore, because
we've changed log levels to WARNING and Synapse is not chatty by default
anymore.
The "excessive log messages may get dropped on CentOS" issue no longer
applies to most users and we shouldn't bother them with it.
matrix_synapse_storage_path is already defined in matrix-synapse/defaults/main.yml (with a default of "{{ matrix_synapse_base_path }}/storage"), but was not being used for its presumed purpose in matrix-synapse.service.j2. As a result, if matrix_synapse_storage_path was overridden (in a vars.yml), the synapse service failed to start.
Discord client IDs are numeric (e.g. 12345).
Passing them as integers however, causes the Discord bridge's YAML parser
to parse them as integers and its config schema validation will fail.
Fixes #240 (Github Issue)
This only gets triggered if:
- the Synapse role is used standalone and the default values are used
- the whole playbook is used, with `matrix_mxisd_enabled: false`
Continuation of #234 (Github Pull Request).
I had unintentionally updated the documentation for the feature,
saying the page is available at `https://matrix.DOMAIN/nginx_status`.
Looks like it wasn't the case, going against my expectations.
I'm correcting this with this patch.
The status page is being made available on both HTTP and HTTPS.
Serving over HTTP is likely necessary for services like
Longview
(https://www.linode.com/docs/platform/longview/longview-app-for-nginx/)
# Auth server config
auth:
# Publicly accessible base URL for the login endpoints.
# The prefix below is not implicitly added. This URL and all subpaths should be proxied
# or otherwise pointed to the appservice's webserver to the path specified below (prefix).
# This path should usually include a trailing slash.
public: http://example.com/login/
# Internal prefix in the appservice web server for the login endpoints.
prefix: /login
Also discussed previously in #213 (Github Pull Request).
shared-secret-auth and rest-auth logging is still at `INFO`
intentionally, as user login events seem more important to keep.
Those modules typically don't spam as much.
We recently had someone in the support room who set it to `false`
and the playbook ran without any issues.
This currently seems to yield the same result as 'none', but it's
better to avoid such behavior.
It adds support for a new `DISABLE_SENDER_VERIFICATION` environment
variable that can be used to disable verification of sender addresses.
It doesn't matter for us, but we upgrade to keep up with latest.
Looks like these client ids are actually integers,
but unless we pass them as a string, the bridge would complain with
an error like:
{"field":"data.auth.clientID","message":"is the wrong type","value":123456789012345678,"type":"string","schemaPath":["properties","auth","properties","clientID"]}
Explicitly-casting to a string should fix the problem.
The Discord bridge should probably be improved to handle both ints and
strings though.
Regression since 174a6fcd1b, #204 (Github Pull Request),
which only affects new servers.
Old servers which had their passkey.pem file relocated were okay.
ef5e4ad061 intentionally makes us conform to
the logging format suggested by the official Docker image.
Reverting this part, because it's uglier.
This likely should be fixed upstream as well though.
Somewhat related to #213 (Github Pull Request).
We've been moving in the opposite direction for quite a long time.
All services should just leave logging to systemd's journald.
Fixes a regression introduced during the upgrade to
Synapse v1.1.0 (in 2b3865ceea).
Since Synapse v1.1.0 upgraded to Python 3.7
(https://github.com/matrix-org/synapse/pull/5546),
we need to use a different modules directory when mounting
password provider modules.
Well, `config.yaml` has been playbook-managed for a long time.
It's now extended to match the default sample config of the Discord
bridge.
With this patch, we also make `registration.yaml` playbook-managed,
which leads us to consistency with all other bridges.
Along with that, we introduce `./config` and `./data` separation,
like we do for the other bridges.
I've been thinking of doing this before, but haven't.
Now that the Whatsapp bridge does it (since 4797469383),
it makes sense to do it for all other bridges as well.
(Except for the IRC bridge - that one manages most of registration.yaml by itself)
appservice-irc doesn't have permission to create files in its project
directory and the intention is to log to the console, anyway. By
commenting out the file names, appservice-irc won't attempt to open the
files.
This means we need to explicitly specify a `media_url` now,
because without it, `url` would be used for building public URLs to
files/images. That doesn't work when `url` is not a public URL.
Until now, if `--tags=setup-synapse` was used, bridge tasks would not
run and bridges would fail to register with the `matrix-synapse` role.
This means that Synapse's configuration would be generated with an empty
list of appservices (`app_service_config_files: []`).
.. and then bridges would fail, because Synapse would not be aware of
there being any bridges.
From now on, bridges always run their init tasks and always register
with Synapse.
For the Telegram bridge, the same applies to registering with
matrix-nginx-proxy. Previously, running `--tags=setup-nginx-proxy` would
get rid of the Telegram endpoint configuration for the same reason.
Not anymore.
With most people on Synapse v0.99+ and Synapse v1.0 now available,
we should no longer try to be backward compatible with Synapse 0.34,
because this just complicates the instructions for no good reason.
Using `|to_json` on a string is expected to correctly wrap it in quotes (e.g. `"4"`).
Wrapping it explicitly in double-quotes results in undesirable double-quoting (`""4""`).
We do use some `:latest` images by default for the following services:
- matrix-dimension
- Goofys (in the matrix-synapse role)
- matrix-bridge-appservice-irc
- matrix-bridge-appservice-discord
- matrix-bridge-mautrix-facebook
- matrix-bridge-mautrix-whatsapp
It's terribly unfortunate that those software projects don't release
anything other than `:latest`, but that's how it is for now.
Updating that software requires that users manually do `docker pull`
on the server. The playbook didn't force-repull images that it already
had.
With this patch, it starts doing so. Any image tagged `:latest` will be
force re-pulled by the playbook every time it's executed.
It should be noted that even though we ask the `docker_image` module to
force-pull, it only reports "changed" when it actually pulls something
new. This is nice, because it lets people know exactly when something
gets updated, as opposed to giving the indication that it's always
updating the images (even though it isn't).
Previously, it only mentioned exposing for psql-usage purposes.
Realistically, it can be used for much more. Especially given that
psql can be easily accessed via our matrix-postgres-cli script,
without exposing the container port.
We log to journald anyway. There's no need for double-logging.
It should be noted that matrix-synapse logs to journald and to files,
but that's likely to change in the future as well.
Because Synapse's logs are insanely verbose right now (and may get
dropped by journald), it's more reliable to have file-logging too.
As Synapse matures and gets more stable, its logging should hopefully
decrease, and we should be able to only use journald and stop writing to
files for it as well.
Using a separate directory allows easier backups
(only need to back up the Ansible playbook configuration and the
bridge's `./data` directory).
The playbook takes care of migrating an existing database file
from the base directory into the `./data` directory.
In the future, we can also mount the configuration read-only,
to ensure the bridge won't touch it.
For now, mautrix-facebook is keen on rebuilding the `config.yaml`
file on startup though, so this will have to wait.
Related to #193, but for the Facebook bridge.
(other bridges can be changed to do the same later).
This patch makes the bridge configuration entirely managed by the
Ansible playbook. The bridge's `config.yaml` and `registration.yaml`
configuration files are regenerated every time the playbook runs.
This allows us to apply updates to those files and to avoid
people having to manage the configuration files manually on the server.
-------------------------------------------------------------
A deficiency of the current approach to dumping YAML configuration in
`config.yaml` is that we strip all comments from it.
Later on, when the bridge actually starts, it will load and redump
(this time with comments), which will make the `config.yaml` file
change.
Subsequent playbook runs will report "changed" for the
"Ensure mautrix-facebook config.yaml installed" task, which is a little
strange.
We might wish to improve this in the future, if possible.
Still, it's better to have a (usually) somewhat meaningless "changed"
task than what we had -- never rebuilding the configuration.
Bridges start matrix-synapse.service as a dependency, but
Synapse is sometimes slow to start, while bridges are quick to
hit it and die (if unavailable).
They'll auto-restart later, but .. this still breaks `--tags=start`,
which doesn't wait long enough for such a restart to happen.
This attempts to slow down bridge startup enough to ensure Synapse
is up and no failures happen at all.
Attempt to fix #192 (Github Issue), potential regression since
70487061f4.
Serializing as JSON/YAML explicitly is much better than relying on
magic (well, Python serialization being valid YAML..).
It seems like Python may prefix strings with `u` sometimes (Python 3?),
which causes Python serialization to not be compatible with YAML.
This doesn't replace all usage of `-v`, but it's a start.
People sometimes troubleshoot by deleting files (especially bridge
config files). Restarting Synapse with a missing registration.yaml file
for a given bridge, causes the `-v
/something/registration.yaml:/something/registration.yaml:ro` option
to force-create `/something/registration.yaml` as a directory.
When a path that's provided to the `-v` option is missing, Docker
auto-creates that path as a directory.
This causes more breakage and confusion later on.
We'd rather fail, instead of magically creating directories.
Using `--mount`, instead of `-v` is the solution to this.
From Docker's documentation:
> When you use --mount with type=bind, the host-path must refer to an existing path on the host.
> The path will not be created for you and the service will fail with an error if the path does not exist.
Related to #189 (Github Issue).
People had proxying problems if:
- they used the whole playbook (including the `matrix-nginx-proxy` role)
- and they were disabling the proxy (`matrix_nginx_proxy_enabled: false`)
- and they were proxying with their own nginx server
For them,
`matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks`
would not be modified to inject the necessary proxying configuration.
While using certbot means we'll have both files retrieved,
it's actually the fullchain.pem file that we use in nginx configuration.
Using that one for the check makes more sense.
Reasoning is the same as for matrix-org/synapse#5023.
For us, the journal used to show `docker` as the identifier for all services, which
is not very helpful when looking at them all together (`journalctl -f`).
The goal is to move each bridge into its own separate role.
This commit starts off the work on this with 2 bridges:
- mautrix-telegram
- mautrix-whatsapp
Each bridge's role (including these 2) is meant to:
- depend only on the matrix-base role
- integrate nicely with the matrix-synapse role (if available)
- integrate nicely with the matrix-nginx-proxy role (if available and if
required). mautrix-telegram bridge benefits from integrating with
it.
- not break if matrix-synapse or matrix-nginx-proxy are not used at all
This has been provoked by #174 (Github Issue).
As discussed in #151 (Github Pull Request), it's
a good idea to not selectively apply casting, but to do it in all
cases involving arithmetic operations.
Previously, we'd show an error like this:
{"changed": false, "item": null, "msg": "Detected an undefined required variable"}
.. which didn't mention the variable name
(`matrix_ssl_lets_encrypt_support_email`).
Looks like we may not have to do this,
since 1.4.2 fixes edge cases for people who used the broken
1.4.0 release.
We jumped straight to 1.4.1, so maybe we're okay.
Still, upgrading anyway, just in case.
Use an int conversion in the computation of the value of
matrix_nginx_proxy_tmp_directory_size_mb, to have the integer value
multiplied by 50 instead of having the string repeated 50 times.
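In Jinja2 terms, the fix is roughly the following (the source variable name here is an assumption, not necessarily the one the playbook multiplies):
```
# Before: if matrix_synapse_max_upload_size_mb arrives as a string (e.g. "50"),
# `* 50` repeats the string 50 times instead of multiplying
matrix_nginx_proxy_tmp_directory_size_mb: "{{ matrix_synapse_max_upload_size_mb * 50 }}"

# After: casting to int first makes it an arithmetic operation
matrix_nginx_proxy_tmp_directory_size_mb: "{{ matrix_synapse_max_upload_size_mb | int * 50 }}"
```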
It doesn't hurt to attempt renewal more frequently, as it only does
real work if it's actually necessary.
We postpone reloading a bit more, because certbot adds a random delay
(between 1 and 8 * 60 seconds) when renewing. We want to ensure
we reload at least 8 minutes after a renewal attempt, which wasn't the case before.
To make it even safer (in case future certbot versions use a longer
delay), we reload a whole hour later. We're in no rush to start using
the new certificates anyway, especially given that we attempt renewal
often.
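The timing relationship, roughly (the script and service names below are hypothetical, not necessarily what the playbook installs):
```
# Attempt renewal often; certbot only does real work when a certificate is actually due
15 */3 * * * root /usr/local/bin/matrix-ssl-renew-certificates
# Reload nginx a whole hour after each renewal attempt, well past certbot's
# random (1 to 8 * 60 seconds) renewal delay
15 1-23/3 * * * root /bin/systemctl reload matrix-nginx-proxy.service
```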
Somewhat fixes #146 (Github Issue)
The code used to check for a `homeserver.yaml` file and generate
a configuration (+ key) only if such a configuration file didn't exist.
Certain rare cases (setting up with one server name and then
changing to another) could lead to `homeserver.yaml` being present,
while the `matrix.DOMAIN.signing.key` file was missing (because the domain
changed).
A new signing key file would never get generated, because `homeserver.yaml`'s
existence used to be (incorrectly) satisfactory for us.
From now on, we don't mix things up like that.
We don't care about `homeserver.yaml` anymore, but rather
about the actual signing key.
The rest of the configuration (`homeserver.yaml` and
`matrix.DOMAIN.log.config`) is rebuilt by us in any case, so whether
it exists or not is irrelevant and doesn't need checking.
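A minimal sketch of the new check (paths and task structure are illustrative, not the playbook's exact tasks):
```
- name: Check whether a Synapse signing key already exists
  stat:
    path: "/matrix/synapse/config/matrix.example.com.signing.key"
  register: matrix_synapse_signing_key_stat

# Configuration/key generation now only runs when the signing key itself is missing,
# regardless of whether homeserver.yaml happens to exist:
- name: Report whether a new signing key needs to be generated
  debug:
    msg: "No signing key found - one needs to be generated"
  when: "not matrix_synapse_signing_key_stat.stat.exists"
```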
- matrix_enable_room_list_search - Controls whether searching the public room list is enabled.
- matrix_alias_creation_rules - Controls who's allowed to create aliases on this server.
- matrix_room_list_publication_rules - Controls who can publish and which rooms can be published in the public room list.
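For example, these could be set in the inventory host variables roughly like this (the rule values below are purely illustrative; they map onto Synapse's corresponding homeserver.yaml options):
```
matrix_enable_room_list_search: false

matrix_alias_creation_rules:
  - user_id: "*"
    alias: "*"
    room_id: "*"
    action: deny

matrix_room_list_publication_rules:
  - user_id: "@admin:example.com"
    alias: "*"
    room_id: "*"
    action: allow
```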
`{% matrix_s3_media_store_custom_endpoint_enabled %}` should have
been `{% if matrix_s3_media_store_custom_endpoint_enabled %}` instead.
Related to #132 (Github Pull Request).
In most cases, there's not really a need to touch the system
firewall, as Docker manages iptables by itself
(see https://docs.docker.com/network/iptables/).
All ports exposed by Docker containers are automatically whitelisted
in iptables and wired to the correct container.
This made installing firewalld and whitelisting ports pointless,
as far as this playbook's services are concerned.
People that wish to install firewalld (for other reasons), can do so
manually from now on.
This is inspired by and fixes #97 (Github Issue).
Fixes #129 (Github Issue).
Unfortunately, we rely on `service_facts`, which is only available
in Ansible >= 2.5.
There's little reason to stick to an old version such as Ansible 2.4:
- some time has passed since we've raised version requirements - it's
time to move into the future (a little bit)
- we've recently (in 82b4640072) improved the way one can run
Ansible in a Docker container
From now on, Ansible >= 2.5 is required.
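For reference, this is roughly how `service_facts` gets used (the specific check below is only an illustration):
```
- name: Populate service facts
  service_facts:

- name: React to an already-installed service (example)
  debug:
    msg: "firewalld is present on this system"
  when: "'firewalld.service' in ansible_facts.services"
```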
By default, `--tags=self-check` no longer validates certificates
when `matrix_ssl_retrieval_method` is set to `self-signed`.
Besides this default, people can also enable/disable validation using the
individual role variables manually.
Fixes #124 (Github Issue)
Most (all?) of our Matrix services are running in the `matrix` network,
so they were safe -- not accessible from Coturn to begin with.
Isolating Coturn into its own network is a security improvement
for people who were starting other services in the default
Docker network. Those services were potentially reachable over the
private Docker network from Coturn.
Discussed in #120 (Github Pull Request)
This is more explicit than hiding it in the role defaults.
People who reuse the roles in their own playbook (and not only) may
incorrectly define `ansible_host` to be a hostname or some local address.
Making it more explicit is more likely to prevent such mistakes.
Currently the nginx reload cron fails on Debian 9 because the path to
systemctl is /bin/systemctl rather than /usr/bin/systemctl.
CentOS 7 places systemctl in both /bin and /usr/bin, so we can just use
/bin/systemctl as the full path.
This allows overriding the default value for `include_content`. Setting
this to false allows homeserver admins to ensure that message content
isn't sent in the clear through third party servers.
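In Synapse's homeserver.yaml terms, this corresponds to something like the following (the playbook variable wiring around it is omitted here):
```
push:
  include_content: false
```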
`matrix_nginx_proxy_data_path` has always served as a base path,
so we're renaming it to reflect that.
Along with this, we're also introducing a new "data path" variable
(`matrix_nginx_proxy_data_path`), which is really a data path this time.
It's used for storing additional, non-configuration, files related to
matrix-nginx-proxy.
It's been reported that YAML parsing errors
would occur on certain Ansible/Python combinations for some reason.
It appears that a bare `{{ matrix_dimension_admins }}` would sometimes
yield things like `[u'@user:domain.com', ..]` (note the `u` string prefix).
To prevent such problems, we now explicitly serialize with `|to_json`.
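In template terms, the change is roughly this (the `admins` key name is just an example):
```
# Before: relies on Python's own serialization, which may emit entries like u'@user:domain.com'
admins: {{ matrix_dimension_admins }}

# After: explicit JSON serialization, which is always valid YAML
admins: {{ matrix_dimension_admins|to_json }}
```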
The Server spec says that redirects should be followed for
`/.well-known/matrix/server`. So we follow them.
The Client-Server spec doesn't mention redirects, so we don't
follow them there.
Using `docker_container` with a `cap_drop` argument requires
Ansible >=2.7.
We want to support older versions too (2.4), so we either need to
stop invoking it with `cap_drop` (insecure), or just stop using
the module altogether.
Since it was suffering from other bugs too (not deleting containers
on failure), we've decided to remove `docker_container` usage completely.
Some resources shouldn't be cached right now,
as per https://github.com/vector-im/riot-web/pull/8702
(note that not all of the suggestions from that pull request were applied,
because some of them do not seem relevant - no such files here)
Fixes #98 (Github Issue)
`matrix_synapse_no_tls` is now implicit, so we've gotten rid of it.
The `homeserver.yaml.j2` template has been synchronized with the
configuration generated by Synapse v0.99.1 (some new options
are present, etc.)
For consistency with all our other listeners,
we make this one bind on the `::` address too
(both IPv4 and IPv6).
Additional details are in #91 (Github Pull Request).
People who wish to rely on SRV records can prevent
the `/.well-known/matrix/server` file from being generated
(and thus, served.. which causes trouble).
If someone decides to not use `/.well-known/matrix/server` and only
relies on SRV records, then they would need to serve tcp/8448 using
a certificate for the base domain (not for the matrix domain).
Until now, they could do that by giving the certificate to Synapse
and setting it to terminate TLS. That makes swapping certificates
more annoying (Synapse requires a restart to re-read certificates),
so it's better if we can support it via matrix-nginx-proxy.
Mounting certificates (or any other file) into the matrix-nginx-proxy container
can be done with `matrix_nginx_proxy_container_additional_volumes`,
introduced in 96afbbb5a.
Certain use-cases may require that people mount additional files
into the matrix-nginx-proxy container. Similarly to how we do it
for Synapse, we are introducing a new variable that makes this
possible (`matrix_nginx_proxy_container_additional_volumes`).
This makes the htpasswd file for Synapse Metrics (introduced in #86,
Github Pull Request) to also perform mounting using this new mechanism.
Hopefully, for such an "extension", keeping htpasswd file-creation and
volume definition in the same place (the tasks file) is better.
All other major volumes' mounting mechanism remains the same (explicit
mounting).
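A volume definition might look roughly like this (assuming list entries with `src`/`dst`/`options` keys; the paths are placeholders):
```
matrix_nginx_proxy_container_additional_volumes:
  - src: /some/path/on/the/host/fullchain.pem
    dst: /some/path/in/the/container/fullchain.pem
    options: ro
```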
Continuation of 1f0cc92b33.
As an explanation for the problem:
when saying `localhost` on the host, it sometimes gets resolved to `::1`
and sometimes to `127.0.0.1`. On the unfortunate occasions when
it gets resolved to `::1`, the container won't be able to serve the
request, because Docker containers don't have IPv6 enabled by default.
To avoid this problem, we simply prevent any lookups from happening
and explicitly use `127.0.0.1`.
This reverts commit 0dac5ea508.
Relying on pyOpenSSL is the Ansible way of doing things, but is
impractical and annoying for users.
`openssl` is easily available on most servers, even by default.
We'd better use that.
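For reference, generating a self-signed certificate with the `openssl` CLI boils down to something like this (not necessarily the playbook's exact invocation):
```
openssl req -x509 -nodes -newkey rsa:4096 \
  -keyout matrix.example.com.key -out matrix.example.com.crt \
  -days 3650 -subj "/CN=matrix.example.com"
```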
Seems like we unintentionally removed the mounting of certificates
(the `/matrix-config` mount) as part of splitting the playbook into
roles in 51312b8250.
It appears that those certificates weren't necessary for coturn to
function though, so we might just get rid of the configuration as well.
We run containers as a non-root user (no effective capabilities).
Still, if a setuid binary is available in a container image, it could
potentially be used to give the user the default capabilities that the
container was started with. For Docker, the default set currently is:
- "CAP_CHOWN"
- "CAP_DAC_OVERRIDE"
- "CAP_FSETID"
- "CAP_FOWNER"
- "CAP_MKNOD"
- "CAP_NET_RAW"
- "CAP_SETGID"
- "CAP_SETUID"
- "CAP_SETFCAP"
- "CAP_SETPCAP"
- "CAP_NET_BIND_SERVICE"
- "CAP_SYS_CHROOT"
- "CAP_KILL"
- "CAP_AUDIT_WRITE"
We'd rather prevent such a potential escalation by dropping ALL
capabilities.
The problem is nicely explained here: https://github.com/projectatomic/atomic-site/issues/203
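Combined with running as a non-root user, containers now get started roughly like this (991:991 and the `alpine` image are just for demonstration):
```
# Drop all default capabilities and run as an unprivileged user
docker run --rm --user=991:991 --cap-drop=ALL alpine id
```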
This is a known/intentional regression since f92c4d5a27.
The new stance on this is that most people would not have
dnspython, but may have the `dig` tool. There's no good
reason for not increasing our chances of success by trying both
methods (Ansible dig lookup and using the `dig` CLI tool).
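The two resolution methods, roughly (the record being looked up here is only an example):
```
- name: Resolve via the Ansible dig lookup (requires dnspython)
  debug:
    msg: "{{ lookup('dig', 'example.com/TXT') }}"

- name: Resolve via the dig CLI tool
  command: "dig +short TXT example.com"
  register: dig_cli_result
  changed_when: false
```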
Fixes #85 (Github issue).
This makes all containers (except mautrix-telegram and
mautrix-whatsapp) start as a non-root user.
We do this, because we don't trust some of the images.
In any case, we'd rather not trust ALL images and avoid giving
`root` access at all. We can't be sure they would drop privileges
or what they might do before they do it.
Because Postfix doesn't support running as non-root,
it had to be replaced by an Exim mail server.
The matrix-nginx-proxy nginx container image is patched up
(by replacing its main configuration) so that it can work as non-root.
It seems like there's no other good image that we can use and that is up-to-date
(https://hub.docker.com/r/nginxinc/nginx-unprivileged is outdated).
Likewise for riot-web (https://hub.docker.com/r/bubuntux/riot-web/),
we patch it up ourselves when starting (replacing the main nginx
configuration).
Ideally, it would be fixed upstream so we can simplify.
We match these defaults anyway (when nothing is customized),
but people can override `matrix_user_uid` and `matrix_user_gid`,
and the assumption wouldn't be correct then.
In any case, it's better to be explicit about such an important thing.
If this is a brand new server and Postgres had never started,
detecting it before we even start it is not possible.
This moves the logic, so that it happens later on, when Postgres
would have had the chance to start and possibly initialize
a new empty database.
Fixes #82 (Github issue)
The matrix-nginx-proxy role can now be used independently.
This makes it consistent with all other roles, with
the `matrix-base` role remaining as their only dependency.
Separating matrix-nginx-proxy was relatively straightforward, with
the exception of the Mautrix Telegram reverse-proxying configuration.
Mautrix Telegram, being an extension/bridge, does not feel important enough
to justify its own special handling in matrix-nginx-proxy.
Thus, we've introduced the concept of "additional configuration blocks"
(`matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks`),
where any module can register its own custom nginx server blocks.
For such dynamic registration to work, the order of role execution
becomes important. To make it possible for each module participating
in dynamic registration to verify that the order of execution is
correct, we've also introduced a `matrix_nginx_proxy_role_executed`
variable.
It should be noted that this doesn't make the matrix-synapse role
dependent on matrix-nginx-proxy. It's optional runtime detection
and registration, and it only happens in the matrix-synapse role
when `matrix_mautrix_telegram_enabled: true`.
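A rough sketch of the registration pattern, as seen from some role that runs before matrix-nginx-proxy (the nginx block content and task structure are purely illustrative):
```
- name: Fail if matrix-nginx-proxy has already executed
  fail:
    msg: "The matrix-nginx-proxy role must run after this role, so it can pick up the registered block"
  when: "matrix_nginx_proxy_role_executed | default(False) | bool"

- name: Register an additional nginx server configuration block
  set_fact:
    matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks: >-
      {{
        (matrix_nginx_proxy_proxy_matrix_additional_server_configuration_blocks | default([]))
        +
        ["location /example/ { proxy_pass http://127.0.0.1:8080; }"]
      }}
```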
With this change, the following roles are now only dependent
on the minimal `matrix-base` role:
- `matrix-corporal`
- `matrix-coturn`
- `matrix-mailer`
- `matrix-mxisd`
- `matrix-postgres`
- `matrix-riot-web`
- `matrix-synapse`
The `matrix-nginx-proxy` role still does too much and remains
dependent on the others.
Wiring up the various (now-independent) roles happens
via a glue variables file (`group_vars/matrix-servers`).
It's triggered for all hosts in the `matrix-servers` group.
According to Ansible's rules of priority, we have the following
chain of inclusion/overriding now:
- role defaults (mostly empty or good for independent usage)
- playbook glue variables (`group_vars/matrix-servers`)
- inventory host variables (`inventory/host_vars/matrix.<your-domain>`)
All roles default to enabling their main component
(e.g. `matrix_mxisd_enabled: true`, `matrix_riot_web_enabled: true`).
Reasoning: if a role is included in a playbook (especially separately,
in another playbook), it should "work" by default.
Our playbook disables some of those if they are not generally useful
(e.g. `matrix_corporal_enabled: false`).