parent 881fdd28f0
commit ac7cb3619d

@@ -0,0 +1,137 @@
# Storing Matrix media files on Amazon S3 with Goofys (optional)

If you'd like to store Synapse's content repository (`media_store`) files on Amazon S3 (or another S3-compatible service), you can let this playbook configure [Goofys](https://github.com/kahing/goofys) for you.

Another (better performing) way to use S3 storage with Synapse is [synapse-s3-storage-provider](configuring-playbook-synapse-s3-storage-provider.md).

Using a Goofys-backed media store works, but performance may not be ideal. If possible, use a region which is close to your Matrix server.

If you'd like to move your locally-stored media store data to Amazon S3 (or another S3-compatible object store), we also provide migration instructions below.

## Usage

After [creating the S3 bucket and configuring it](configuring-playbook-s3.md#bucket-creation-and-security-configuration), you can proceed to configure Goofys in your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`):

```yaml
matrix_s3_media_store_enabled: true
matrix_s3_media_store_bucket_name: "your-bucket-name"
matrix_s3_media_store_aws_access_key: "access-key-goes-here"
matrix_s3_media_store_aws_secret_key: "secret-key-goes-here"
matrix_s3_media_store_region: "eu-central-1"
```

You can use any S3-compatible object store by **additionally** configuring these variables:

```yaml
matrix_s3_media_store_custom_endpoint_enabled: true
matrix_s3_media_store_custom_endpoint: "https://your-custom-endpoint"
```
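
As an illustration (the hostname and port below are hypothetical), pointing the media store at a self-hosted MinIO instance could look like this:

```yaml
matrix_s3_media_store_enabled: true
matrix_s3_media_store_bucket_name: "matrix-media"
matrix_s3_media_store_aws_access_key: "access-key-goes-here"
matrix_s3_media_store_aws_secret_key: "secret-key-goes-here"
matrix_s3_media_store_region: "us-east-1"
matrix_s3_media_store_custom_endpoint_enabled: true
matrix_s3_media_store_custom_endpoint: "https://minio.example.com:9000"
```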

If you have local media store files and wish to migrate to Backblaze B2 subsequently, follow our [migration guide to Backblaze B2](#migrating-to-backblaze-b2) below instead of applying this configuration as-is.

## Migrating from local filesystem storage to S3

It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before migrating your local media store to an S3-backed one.

Follow one of the guides below for a migration path from a locally-stored media store to one stored on S3-compatible storage:

- [Storing Matrix media files on Amazon S3 with Goofys (optional)](#storing-matrix-media-files-on-amazon-s3-with-goofys-optional)
- [Usage](#usage)
- [Migrating from local filesystem storage to S3](#migrating-from-local-filesystem-storage-to-s3)
- [Migrating to any S3-compatible storage (universal, but likely slow)](#migrating-to-any-s3-compatible-storage-universal-but-likely-slow)
- [Migrating to Backblaze B2](#migrating-to-backblaze-b2)

### Migrating to any S3-compatible storage (universal, but likely slow)

It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before doing this.

1. Proceed with the steps below without stopping Matrix services

2. Start by adding the base S3 configuration to your `vars.yml` file (seen above; it may differ depending on the S3 provider of your choice)

3. In addition to the base configuration you see above, add this to your `vars.yml` file:

```yaml
matrix_s3_media_store_path: /matrix/s3-media-store
```

4. Note that, at this point, S3 support is enabled and the S3 storage bucket will be mounted at `/matrix/s3-media-store`, without being hooked to your homeserver yet. Your homeserver will still continue using your local filesystem for its media store.
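
Putting it together, the intermediate `vars.yml` configuration for this stage of the migration might look like this (all values are placeholders):

```yaml
matrix_s3_media_store_enabled: true
matrix_s3_media_store_bucket_name: "your-bucket-name"
matrix_s3_media_store_aws_access_key: "access-key-goes-here"
matrix_s3_media_store_aws_secret_key: "secret-key-goes-here"
matrix_s3_media_store_region: "eu-central-1"

# Temporary, only while migrating. Mounts the bucket at this path
# without pointing the homeserver's media store at it yet.
matrix_s3_media_store_path: /matrix/s3-media-store
```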

5. Run the playbook to apply the changes: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`

6. Do an **initial sync of your files** by running this **on the server** (it may take a very long time):

```sh
sudo -u matrix -- rsync --size-only --ignore-existing -avr /matrix/synapse/storage/media-store/. /matrix/s3-media-store/.
```

You may need to install `rsync` manually.

7. Stop all Matrix services (`ansible-playbook -i inventory/hosts setup.yml --tags=stop`)

8. Start the S3 service by running this **on the server**: `systemctl start matrix-goofys`

9. Sync the files again by re-running the `rsync` command from step #6

10. Stop the S3 service by running this **on the server**: `systemctl stop matrix-goofys`

11. Get the old media store out of the way by running this command on the server:

```sh
mv /matrix/synapse/storage/media-store /matrix/synapse/storage/media-store-local-backup
```

12. Remove the `matrix_s3_media_store_path` configuration from your `vars.yml` file (undoing step #3 above)

13. Run the playbook: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`

14. You're done! Verify that loading existing (old) media files works and that you can upload new ones.

15. When confident that it all works, get rid of the local media store directory: `rm -rf /matrix/synapse/storage/media-store-local-backup`

### Migrating to Backblaze B2

It's a good idea to [make a complete server backup](faq.md#how-do-i-backup-the-data-on-my-server) before doing this.

1. While all Matrix services are running, run the following command on the server (you need to adjust the 3 `--env` lines below with your own data):

```sh
docker run -it --rm -w /work \
  --env='B2_KEY_ID=YOUR_KEY_GOES_HERE' \
  --env='B2_KEY_SECRET=YOUR_SECRET_GOES_HERE' \
  --env='B2_BUCKET_NAME=YOUR_BUCKET_NAME_GOES_HERE' \
  -v /matrix/synapse/storage/media-store/:/work \
  --entrypoint=/bin/sh \
  docker.io/tianon/backblaze-b2:2.1.0 \
  -c 'b2 authorize-account $B2_KEY_ID $B2_KEY_SECRET > /dev/null && b2 sync /work/ b2://$B2_BUCKET_NAME'
```

This is an initial file sync, which may take a very long time.

2. Stop all Matrix services (`ansible-playbook -i inventory/hosts setup.yml --tags=stop`)

3. Run the command from step #1 again. This syncs any new files that may have been created locally in the meantime. Now that Matrix services aren't running, Backblaze B2 and your local media store are guaranteed to end up fully in sync.

4. Get the old media store out of the way by running this command on the server:

```sh
mv /matrix/synapse/storage/media-store /matrix/synapse/storage/media-store-local-backup
```

5. Put the [Backblaze B2 settings seen above](#backblaze-b2) in your `vars.yml` file

6. Run the playbook: `ansible-playbook -i inventory/hosts setup.yml --tags=setup-all,start`

7. You're done! Verify that loading existing (old) media files works and that you can upload new ones.

8. When confident that it all works, get rid of the local media store directory: `rm -rf /matrix/synapse/storage/media-store-local-backup`

@@ -0,0 +1,104 @@
# Storing Synapse media files on Amazon S3 with synapse-s3-storage-provider (optional)

If you'd like to store Synapse's content repository (`media_store`) files on Amazon S3 (or another S3-compatible service), you can use the [synapse-s3-storage-provider](https://github.com/matrix-org/synapse-s3-storage-provider) media provider module for Synapse.

An alternative (with worse performance) is to use [Goofys to mount the S3 store to the local filesystem](configuring-playbook-s3-goofys.md).

## How does it work?

The summary here is inspired by [this article](https://quentin.dufour.io/blog/2021-09-14/matrix-synapse-s3-storage/).

The way media storage providers work in Synapse has some caveats:

- Synapse still continues to use locally-stored files (for creating thumbnails, serving files, etc.)
- the media storage provider is just an extra storage mechanism (in addition to the local filesystem)
- all files are stored locally at first, and then copied to the media storage provider (either synchronously or asynchronously)
- if a file is not available on the local filesystem, it's pulled from the media storage provider
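
For reference, the module is wired into Synapse via a `media_storage_providers` section in `homeserver.yaml`. The playbook generates this for you, so the snippet below (with placeholder values) is only an illustration of what ends up configured:

```yaml
media_storage_providers:
  - module: s3_storage_provider.S3StorageProviderBackend
    store_local: True
    store_remote: True
    store_synchronous: True
    config:
      bucket: your-bucket-name
      region_name: eu-central-1
      access_key_id: access-key-goes-here
      secret_access_key: secret-key-goes-here
```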

You may be thinking: **if all files are stored locally as well, what's the point?**

You can run some scripts to delete the local files once in a while, thus freeing up local disk space. If these files are needed in the future (for serving them to users, etc.), Synapse will pull them from the media storage provider on demand.

While you will still need some local disk space, it's only used to accommodate current activity and won't grow as large as your S3 store.

## Installing

After [creating the S3 bucket and configuring it](configuring-playbook-s3.md#bucket-creation-and-security-configuration), you can proceed to configure s3-storage-provider in your configuration file (`inventory/host_vars/matrix.<your-domain>/vars.yml`):

```yaml
matrix_synapse_ext_synapse_s3_storage_provider_enabled: true
matrix_synapse_ext_synapse_s3_storage_provider_config_bucket: your-bucket-name
matrix_synapse_ext_synapse_s3_storage_provider_config_region_name: some-region-name # e.g. eu-central-1
matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url: https://.. # delete this whole line for Amazon S3
matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id: access-key-goes-here
matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key: secret-key-goes-here
matrix_synapse_ext_synapse_s3_storage_provider_config_storage_class: STANDARD # or STANDARD_IA, etc.

# For additional advanced settings, take a look at `roles/matrix-synapse/defaults/main.yml`
```

If you have existing files in Synapse's media repository (`/matrix/synapse/media-store/..`):

- new files will start being stored both locally and on the S3 store
- the existing files will remain on the local filesystem only, until [you migrate them to the S3 store](#migrating-your-existing-media-files-to-the-s3-store)
- at some point (and periodically in the future), you can delete local files which have already been uploaded to the S3 store

## Migrating your existing media files to the S3 store

Migrating your existing data can happen in multiple ways:

- [using the `s3_media_upload` script from `synapse-s3-storage-provider`](#using-the-s3_media_upload-script-from-synapse-s3-storage-provider) (very slow when dealing with lots of data)
- [using another tool in combination with `s3_media_upload`](#using-another-tool-in-combination-with-s3_media_upload) (quicker when dealing with lots of data)

### Using the `s3_media_upload` script from `synapse-s3-storage-provider`

Instead of using `s3_media_upload` directly, which is very slow and painful for an initial data migration, we recommend [using another tool in combination with `s3_media_upload`](#using-another-tool-in-combination-with-s3_media_upload).

To copy your existing files, SSH into the server and run `/usr/local/bin/matrix-synapse-s3-storage-provider-shell`.

This launches a Synapse container which has access to the local media store, the Postgres database and the S3 store, and which has some convenient environment variables configured for you to use (`MEDIA_PATH`, `BUCKET`, `ENDPOINT`, `UPDATE_DB_DURATION`, etc.).

Then use the following commands (`$` values come from environment variables - they're **not placeholders** that you need to substitute):

- `s3_media_upload update-db $UPDATE_DB_DURATION` - creates a local SQLite database (`cache.db`) with a list of media repository files (from the `synapse` Postgres database) eligible for operating on
  - `$UPDATE_DB_DURATION` is influenced by the `matrix_synapse_ext_synapse_s3_storage_provider_update_db_day_count` variable (which defaults to `0`)
  - `$UPDATE_DB_DURATION` defaults to `0d` (0 days), which means **include files which haven't been accessed for more than 0 days** (that is, **all files will be included**)
- `s3_media_upload check-deleted $MEDIA_PATH` - checks whether files in the local cache still exist in the local media repository directory
- `s3_media_upload upload $MEDIA_PATH $BUCKET --delete --endpoint-url $ENDPOINT` - uploads locally-stored files to S3 and deletes them from the local media repository directory
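
For example, to restrict these operations to files which haven't been accessed for more than 30 days, you could set this in your `vars.yml` file before running the playbook (which would make `$UPDATE_DB_DURATION` equal to `30d`):

```yaml
matrix_synapse_ext_synapse_s3_storage_provider_update_db_day_count: 30
```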

The `upload` command may take a lot of time to complete.

### Using another tool in combination with `s3_media_upload`

To migrate your existing local data to S3, we recommend that you:

- **first** use another tool ([`aws s3`](#copying-data-to-amazon-s3), [`b2 sync`](#copying-data-to-backblaze-b2), etc.) to copy the local files to the S3 bucket

- **only then** [use the `s3_media_upload` tool to finish the migration](#using-the-s3_media_upload-script-from-synapse-s3-storage-provider) (this checks to ensure all files are uploaded and then deletes the local files)

#### Copying data to Amazon S3

Generally, you need to use the `aws s3` tool.

This documentation section could use an improvement. Ideally, we'd come up with a guide like the one used in [Copying data to Backblaze B2](#copying-data-to-backblaze-b2) - running `aws s3` in a container, etc.

#### Copying data to Backblaze B2

To copy to Backblaze B2, start a container like this:

```sh
docker run -it --rm \
  -w /work \
  --env='B2_KEY_ID=YOUR_KEY_GOES_HERE' \
  --env='B2_KEY_SECRET=YOUR_SECRET_GOES_HERE' \
  --env='B2_BUCKET_NAME=YOUR_BUCKET_NAME_GOES_HERE' \
  --mount type=bind,src=/matrix/synapse/storage/media-store,dst=/work,ro \
  --entrypoint=/bin/sh \
  tianon/backblaze-b2:3.6.0 \
  -c 'b2 authorize-account $B2_KEY_ID $B2_KEY_SECRET > /dev/null && b2 sync /work b2://$B2_BUCKET_NAME --skipNewer'
```

@@ -0,0 +1,5 @@
---

- ansible.builtin.set_fact:
    matrix_systemd_services_list: "{{ matrix_systemd_services_list + ['matrix-synapse-s3-storage-provider-migrate.timer'] }}"
  when: matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

@@ -0,0 +1,10 @@
---

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/s3-storage-provider/validate_config.yml"
  when: matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/s3-storage-provider/setup_install.yml"
  when: matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

- ansible.builtin.import_tasks: "{{ role_path }}/tasks/ext/s3-storage-provider/setup_uninstall.yml"
  when: not matrix_synapse_ext_synapse_s3_storage_provider_enabled | bool

@@ -0,0 +1,54 @@
---

# We install this into Synapse by making `matrix_synapse_ext_synapse_s3_storage_provider_enabled` influence other variables:
# - `matrix_synapse_media_storage_providers` (via `matrix_synapse_media_storage_providers_auto`)
# - `matrix_synapse_container_image_customizations_enabled`
# - `matrix_synapse_container_image_customizations_s3_storage_provider_installation_enabled`
#
# Below are additional tasks for setting up various helper scripts, etc.

- name: Ensure s3-storage-provider env file installed
  ansible.builtin.template:
    src: "{{ role_path }}/templates/synapse/ext/s3-storage-provider/env.j2"
    dest: "{{ matrix_synapse_ext_s3_storage_provider_path }}/env"
    mode: 0640

- name: Ensure s3-storage-provider data path exists
  ansible.builtin.file:
    path: "{{ matrix_synapse_ext_s3_storage_provider_path }}/data"
    state: directory
    mode: 0750
    owner: "{{ matrix_user_username }}"
    group: "{{ matrix_user_groupname }}"

- name: Ensure s3-storage-provider database.yaml file installed
  ansible.builtin.template:
    src: "{{ role_path }}/templates/synapse/ext/s3-storage-provider/database.yaml.j2"
    dest: "{{ matrix_synapse_ext_s3_storage_provider_path }}/data/database.yaml"
    mode: 0640

- name: Ensure s3-storage-provider scripts installed
  ansible.builtin.template:
    src: "{{ role_path }}/templates/synapse/ext/s3-storage-provider/usr-local-bin/{{ item }}.j2"
    dest: "{{ matrix_local_bin_path }}/{{ item }}"
    mode: 0750
  with_items:
    - matrix-synapse-s3-storage-provider-shell
    - matrix-synapse-s3-storage-provider-migrate

- name: Ensure matrix-synapse-s3-storage-provider-migrate.service and timer are installed
  ansible.builtin.template:
    src: "{{ role_path }}/templates/synapse/ext/s3-storage-provider/systemd/{{ item }}.j2"
    dest: "{{ matrix_systemd_path }}/{{ item }}"
    mode: 0640
  with_items:
    - matrix-synapse-s3-storage-provider-migrate.service
    - matrix-synapse-s3-storage-provider-migrate.timer
  register: matrix_synapse_s3_storage_provider_systemd_service_result

- name: Ensure systemd reloaded after matrix-synapse-s3-storage-provider-migrate.service installation
  ansible.builtin.service:
    daemon_reload: true
  when: matrix_synapse_s3_storage_provider_systemd_service_result.changed | bool

@@ -0,0 +1,24 @@
---

- name: Ensure matrix-synapse-s3-storage-provider-migrate.service and timer don't exist
  ansible.builtin.file:
    path: "{{ matrix_systemd_path }}/{{ item }}"
    state: absent
  with_items:
    - matrix-synapse-s3-storage-provider-migrate.timer
    - matrix-synapse-s3-storage-provider-migrate.service
  register: matrix_synapse_s3_storage_provider_migrate_service_removal

- name: Ensure systemd reloaded after matrix-synapse-s3-storage-provider-migrate.service removal
  ansible.builtin.service:
    daemon_reload: true
  when: matrix_synapse_s3_storage_provider_migrate_service_removal.changed | bool

- name: Ensure s3-storage-provider files don't exist
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  with_items:
    - "{{ matrix_local_bin_path }}/matrix-synapse-s3-storage-provider-shell"
    - "{{ matrix_local_bin_path }}/matrix-synapse-s3-storage-provider-migrate"
    - "{{ matrix_synapse_ext_s3_storage_provider_path }}"
@@ -0,0 +1,18 @@
---

- name: Fail if required s3-storage-provider settings not defined
  ansible.builtin.fail:
    msg: >-
      You need to define a required configuration setting (`{{ item }}`) for using s3-storage-provider.
  when: "vars[item] == ''"
  with_items:
    - "matrix_synapse_ext_synapse_s3_storage_provider_config_bucket"
    - "matrix_synapse_ext_synapse_s3_storage_provider_config_region_name"
    - "matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id"
    - "matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key"

- name: Fail if matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url looks invalid
  ansible.builtin.fail:
    msg: >-
      `matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url` needs to look like a URL (`http://` or `https://` prefix).
  when: "matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url != '' and not matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url.startswith('http')"
@@ -1,3 +1,7 @@
FROM {{ matrix_synapse_docker_image }}

{% if matrix_synapse_container_image_customizations_s3_storage_provider_installation_enabled %}
RUN pip install synapse-s3-storage-provider=={{ matrix_synapse_ext_synapse_s3_storage_provider_version }}
{% endif %}

{{ matrix_synapse_container_image_customizations_dockerfile_body_custom }}
@@ -0,0 +1,5 @@
user: {{ matrix_synapse_database_user | to_json }}
password: {{ matrix_synapse_database_password | to_json }}
database: {{ matrix_synapse_database_database | to_json }}
host: {{ matrix_synapse_database_host | to_json }}
port: {{ matrix_synapse_database_port | to_json }}
@@ -0,0 +1,16 @@
AWS_ACCESS_KEY_ID={{ matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id }}
AWS_SECRET_ACCESS_KEY={{ matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key }}
AWS_DEFAULT_REGION={{ matrix_synapse_ext_synapse_s3_storage_provider_config_region_name }}

ENDPOINT={{ matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url }}
BUCKET={{ matrix_synapse_ext_synapse_s3_storage_provider_config_bucket }}

PG_USER={{ matrix_synapse_database_user }}
PG_PASS={{ matrix_synapse_database_password }}
PG_DB={{ matrix_synapse_database_database }}
PG_HOST={{ matrix_synapse_database_host }}
PG_PORT={{ matrix_synapse_database_port }}

MEDIA_PATH=/matrix-media-store-parent/{{ matrix_synapse_media_store_directory_name }}

UPDATE_DB_DURATION={{ matrix_synapse_ext_synapse_s3_storage_provider_update_db_day_count }}d
@@ -0,0 +1,14 @@
module: s3_storage_provider.S3StorageProviderBackend
store_local: {{ matrix_synapse_ext_synapse_s3_storage_provider_store_local | to_json }}
store_remote: {{ matrix_synapse_ext_synapse_s3_storage_provider_store_remote | to_json }}
store_synchronous: {{ matrix_synapse_ext_synapse_s3_storage_provider_store_synchronous | to_json }}
config:
  bucket: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_bucket | to_json }}
  region_name: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_region_name | to_json }}
  endpoint_url: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_endpoint_url | to_json }}
  access_key_id: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_access_key_id | to_json }}
  secret_access_key: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_secret_access_key | to_json }}

  storage_class: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_storage_class | to_json }}

  threadpool_size: {{ matrix_synapse_ext_synapse_s3_storage_provider_config_threadpool_size | to_json }}
@@ -0,0 +1,7 @@
[Unit]
Description=Migrates locally-stored Synapse media store files to S3

[Service]
Type=oneshot
Environment="HOME={{ matrix_systemd_unit_home_path }}"
ExecStart={{ matrix_local_bin_path }}/matrix-synapse-s3-storage-provider-migrate
@@ -0,0 +1,10 @@
[Unit]
Description=Migrates locally-stored Synapse media store files to S3

[Timer]
Unit=matrix-synapse-s3-storage-provider-migrate.service
OnCalendar=*-*-* 05:00:00
RandomizedDelaySec=2h

[Install]
WantedBy=timers.target
@@ -0,0 +1,13 @@
#jinja2: lstrip_blocks: "True"
#!/bin/bash

{{ matrix_host_command_docker }} run \
--rm \
--env-file={{ matrix_synapse_ext_s3_storage_provider_path }}/env \
--mount type=bind,src={{ matrix_synapse_storage_path }},dst=/matrix-media-store-parent,bind-propagation=slave \
--mount type=bind,src={{ matrix_synapse_ext_s3_storage_provider_path }}/data,dst=/data \
--workdir=/data \
--network={{ matrix_docker_network }} \
--entrypoint=/bin/bash \
{{ matrix_synapse_docker_image_final }} \
-c 's3_media_upload update-db $UPDATE_DB_DURATION && s3_media_upload --no-progress check-deleted $MEDIA_PATH && s3_media_upload --no-progress upload $MEDIA_PATH $BUCKET --delete --endpoint-url $ENDPOINT'
@@ -0,0 +1,13 @@
#jinja2: lstrip_blocks: "True"
#!/bin/bash

{{ matrix_host_command_docker }} run \
-it \
--rm \
--env-file={{ matrix_synapse_ext_s3_storage_provider_path }}/env \
--mount type=bind,src={{ matrix_synapse_storage_path }},dst=/matrix-media-store-parent,bind-propagation=slave \
--mount type=bind,src={{ matrix_synapse_ext_s3_storage_provider_path }}/data,dst=/data \
--workdir=/data \
--network={{ matrix_docker_network }} \
--entrypoint=/bin/bash \
{{ matrix_synapse_docker_image_final }}