Changed the code renderers

Roger Gonzalez 2020-04-27 10:33:09 -03:00
parent 29d309db51
commit e0d43b6682
5 changed files with 315 additions and 302 deletions

View File

@ -22,60 +22,64 @@ To start, I'm using their $5 server which at the time of this writing includes:
## Installation
On my first SSH into the server, I perform basic tasks such as updating and upgrading:
```bash
sudo apt update && sudo apt upgrade -y
```
Then I install some essentials like `software-properties-common` (used to add new repositories using `add-apt-repository`), NGINX, htop, Git and Emacs, the best text editor on this planet <small>vim sucks</small>
```bash
sudo apt install software-properties-common nginx htop git emacs
```
For SSL certificates I'm going to use Certbot, because it is the simplest and most useful tool for the job. It requires some extra steps:
```bash
sudo add-apt-repository ppa:certbot/certbot -y
sudo apt update
sudo apt install python-certbot-nginx -y
```
By default, DigitalOcean servers have no `swap`, so I'll add it by pasting some [DigitalOcean boilerplate](https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-18-04) onto the terminal:
```bash
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo cp /etc/fstab /etc/fstab.bak
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
sudo sysctl vm.swappiness=10
sudo sysctl vm.vfs_cache_pressure=50
sudo echo "vm.swappiness=10" >> /etc/sysctl.conf
sudo echo "vm.vfs_cache_pressure=50" >> /etc/sysctl.conf
```
This adds 2 GB of `swap`.
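A quick sanity check (not in the original notes, just a couple of standard commands) to confirm the swap is active:
```bash
# Show active swap devices/files and overall memory usage
sudo swapon --show
free -h
```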
Then I set up my firewall with UFW:
```bash
sudo ufw allow 22 #SSH
sudo ufw allow 80 #HTTP
sudo ufw allow 443 #HTTPS
sudo ufw allow 25 #SMTP
sudo ufw allow 143 #IMAP
sudo ufw allow 993 #IMAPS
sudo ufw allow 110 #POP3
sudo ufw allow 995 #POP3S
sudo ufw allow 587 #SMTP
sudo ufw allow 465 #SMTPS
sudo ufw allow 4190 #Manage Sieve
sudo ufw enable
```
Finally, I install `docker` and `docker-compose`, which are going to be the main software running on both servers.
```bash
# Docker
curl -sSL https://get.docker.com/ | CHANNEL=stable sh
systemctl enable docker.service
systemctl start docker.service
# Docker compose
curl -L https://github.com/docker/compose/releases/download/$(curl -Ls https://www.servercow.de/docker-compose/latest.php)/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
Now that everything is done, we can continue configuring the first server!
@ -90,14 +94,14 @@ For my email I chose Mailcow. Why?
## Installation & Setup
Installation was simple. First, I followed the instructions in their [official documentation](https://mailcow.github.io/mailcow-dockerized-docs/i_u_m_install/):
```bash
cd /opt
git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized
./generate_config.sh
# The process will ask you for your FQDN to automatically configure NGINX.
# Mine is mail.rogs.me, but yours might be whatever you want
```
I pointed my subdomain to the server (an A record in Cloudflare), opened my browser, visited [https://mail.rogs.me](https://mail.rogs.me), and there it was, as beautiful as I expected.
![Captura-de-pantalla-de-2019-03-20-17-20-49](/Captura-de-pantalla-de-2019-03-20-17-20-49.png)


@ -28,43 +28,44 @@ The first step is to set up the server. I'm not going to explain that again, but
## Installation
For my Nextcloud installation I went straight to the [official Docker documentation](https://github.com/nextcloud/docker) and extracted this `docker-compose` file:
```yaml
version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    restart: always
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=my_super_secure_root_password
      - MYSQL_PASSWORD=my_super_secure_password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  app:
    image: nextcloud
    ports:
      - 8080:80
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    restart: always
```
**Some mistakes were made**
I forgot to mount the volumes to a path of my choosing, so Docker automatically created them under /var/lib/docker/volumes/. This is a small problem I haven't solved yet, because it hasn't brought any serious issues. If someone knows whether this will be problematic in the long run, please let me know. I didn't want to fix it just for these posts; I'm writing about my experience, and of course it wasn't perfect.
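For reference, explicit bind mounts would have looked roughly like this in the `docker-compose` file; the host paths below are hypothetical, not the ones my server actually uses:
```yaml
services:
  db:
    volumes:
      - /opt/nextcloud/db:/var/lib/mysql      # hypothetical host path for the database
  app:
    volumes:
      - /opt/nextcloud/data:/var/www/html     # hypothetical host path for the Nextcloud files
```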
I created the directory `/opt/nextcloud` to keep my `docker-compose` file and finally ran:
```bash
docker-compose pull
docker-compose up -d
```
It was that simple! The app was running on port 8080! But that is not what I wanted: I wanted it running on ports 80 and 443. For that I used a reverse proxy with NGINX and Let's Encrypt.
@ -73,7 +74,7 @@ It was that simple! The app was running on port 8080! But that is not what I wan
Configuring NGINX is dead simple. Here is my configuration
`/etc/nginx/sites-available/nextcloud:`
```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
@ -110,11 +111,11 @@ Configuring NGINX is dead simple. Here is my configuration
    # Set the client_max_body_size to 1000M so NGINX doesn't cut uploads
    client_max_body_size 1000M;
}
```
Then I created a symbolic link from the configuration file to the "sites-enabled" folder:
```bash
ln -s /etc/nginx/sites-available/nextcloud /etc/nginx/sites-enabled
```
and that was it!
In this configuration you will see that I'm already referencing the SSL certificates, even though they don't exist yet. We are going to create them in the next step.
@ -122,15 +123,16 @@ In this configuration you will see that I'm already referencing the SSL certific
## Let's Encrypt configuration
To generate the SSL certificates, you first need to point your domain/subdomain to your server. Every DNS manager is different, so you will have to figure that part out. The command I will use throughout this blog series to create certificates is the following:
```bash
sudo -H certbot certonly --nginx -d mydomain.com
```
The first time you run Certbot, it will ask for your email address and a few questions. Enter that information and finish the process.
To enable automatic SSL certificate renewal, create a new cron job (`crontab -e`) with the following line:
```bash
0 3 * * * certbot renew -q
```
This will run every morning at 3 AM and check whether any of your certificates need to be renewed. If they do, it will renew them.
At the end, you should be able to visit [https://myclouddomain.com](https://myclouddomain.com) and be greeted with a nice NextCloud screen:
@ -149,9 +151,9 @@ Once that was fixed, Nextcloud was 100% ready to be used!
![Captura-de-pantalla-de-2019-03-28-16-19-13](/Captura-de-pantalla-de-2019-03-28-16-19-13.png)
After that I went straight to "Settings/Basic settings" and noticed that my background jobs were set to "AJAX". That's not good, because if I don't open the site, the tasks will never run. I changed it to "Cron" and created a new cron job on my server with the following line:
```bash
*/15 * * * * /usr/bin/docker exec --user www-data nextcloud_app_1 php cron.php
```
This runs the Nextcloud cron job inside the Docker container every 15 minutes.
Then, in "Settings/Overview" I noticed a bunch of errors on the "Security & setup warnings" part. Those were very easy to fix, but since all installations aren't the same I won't go deep into this. [DuckDuckGo](https://duckduckgo.com/) is your friend.
@ -178,13 +180,13 @@ Now that NextCloud was up and running, I needed my "Google Docs" part. Enter Col
If you don't know what it is, Collabora is like Google Docs / Sheets / Slides, but free and open source. You can read more about the project [here](https://nextcloud.com/collaboraonline/)
This was a very easy installation. I ran it directly with Docker:
```bash
docker run -t -d -p 127.0.0.1:9980:9980 -e 'domain=mynextclouddomain.com' --restart always --cap-add MKNOD collabora/code
```
Created a new NGINX reverse proxy
`/etc/nginx/sites-available/collabora`:
```nginx
# Taken from https://icewind.nl/entry/collabora-online/
server {
    listen 443 ssl;
@ -229,15 +231,15 @@ Created a new NGINX reverse proxy
        proxy_read_timeout 36000s;
    }
}
```
Created the SSL certificate for the Collabora installation:
```bash
sudo -H certbot certonly --nginx -d office.mydomain.com
```
And finally I created a symbolic link from the configuration file to the "sites-enabled" folder:
```bash
ln -s /etc/nginx/sites-available/collabora /etc/nginx/sites-enabled
```
Pretty easy stuff.
## Nextcloud configuration


@ -18,53 +18,53 @@ On this post, we get to the fun part: What am I going to do to improve my online
[Ghost](https://ghost.org/) is an open source, headless blogging platform made in NodeJS. The community is quite large and, most importantly, it fit all my requirements (open source and runs in a Docker container).
For the installation, I kept it simple. I went to the [DockerHub page for Ghost](https://hub.docker.com/_/ghost/) and used their base `docker-compose` config for myself. This is what I came up with:
```yaml
version: '3.1'

services:

  ghost:
    image: ghost:1-alpine
    restart: always
    ports:
      - 7000:2368
    environment:
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: my_super_secure_mysql_password
      database__connection__database: ghost
      url: https://blog.rogs.me

  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: my_super_secure_mysql_password
```
Simple enough. The base ghost image and a MySQL db image. Simple, readable, functional.
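Bringing it up is the same routine as with Nextcloud; assuming the compose file lives somewhere like `/opt/ghost` (any folder works, that path is just an example):
```bash
cd /opt/ghost
docker-compose pull
docker-compose up -d
```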
For the NGINX configuration I used a simple proxy:
```nginx
server {
    listen 80;
    listen [::]:80;

    server_name blog.rogs.me;

    add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;

    location / {
        proxy_pass http://127.0.0.1:7000;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_read_timeout 5m;
    }

    client_max_body_size 10M;
}
```
What does this mean? This config just says "Hey NGINX! Proxy port 7000 through port 80 please, thanks".
And that was it. So simple, there's nothing much to say. Just like the title of the series, `¯\_(ツ)_/¯`
@ -80,15 +80,15 @@ I have always admired tech people that have their own wikis. It's like a place w
While doing research, I found [Dokuwiki](https://www.dokuwiki.org/dokuwiki), which is not only open source, but it uses no database! Everything is kept in files which compose your wiki. P R E T T Y N I C E.
On this one, DockerHub had no official Dokuwiki image, but I used a very good one from the user [mprasil](https://hub.docker.com/r/mprasil/dokuwiki). I used his recommended configuration (no `docker-compose` needed since it is a single Docker image):
```bash
docker run -d -p 8000:80 --name my_wiki \
-v /data/docker/dokuwiki/data:/dokuwiki/data \
-v /data/docker/dokuwiki/conf:/dokuwiki/conf \
-v /data/docker/dokuwiki/lib/plugins:/dokuwiki/lib/plugins \
-v /data/docker/dokuwiki/lib/tpl:/dokuwiki/lib/tpl \
-v /data/docker/dokuwiki/logs:/var/log \
mprasil/dokuwiki
```
**Some mistakes were made, again**
I was following instructions blindly; I'm dumb. I mounted the Dokuwiki files in the `/data/docker` directory, which is not what I wanted. In the process of working on this project, I have learned one big thing:
@ -97,26 +97,26 @@ _Always. check. installation. folders and/or mounting points_
Just like the last one, I didn't want to fix this just for the posts; I'm writing about my experience, and of course it wasn't perfect.
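If I ever get around to fixing it, the move would look roughly like this (a sketch; `/opt/dokuwiki` is just an example target, and the container has to be re-created with the new paths):
```bash
# Stop and remove the running container, move the data, then re-create it
docker stop my_wiki && docker rm my_wiki
sudo mv /data/docker/dokuwiki /opt/dokuwiki
docker run -d -p 8000:80 --name my_wiki \
    -v /opt/dokuwiki/data:/dokuwiki/data \
    -v /opt/dokuwiki/conf:/dokuwiki/conf \
    -v /opt/dokuwiki/lib/plugins:/dokuwiki/lib/plugins \
    -v /opt/dokuwiki/lib/tpl:/dokuwiki/lib/tpl \
    -v /opt/dokuwiki/logs:/var/log \
    mprasil/dokuwiki
```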
Let's continue. Once the Docker container was running, I configured NGINX with another simple proxy redirect:
```nginx
server {
    listen 80;
    listen [::]:80;

    server_name wiki.rogs.me;

    add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_redirect off;
        proxy_read_timeout 5m;
    }

    client_max_body_size 10M;
}
```
Just as the other one: "Hey NGINX! Forward port 8000 to port 80 please :) Thanks!"
![Captura-de-pantalla-de-2019-11-16-20-15-35](/Captura-de-pantalla-de-2019-11-16-20-15-35.png)


@ -34,59 +34,59 @@ So, by using GPG I can encrypt my files before uploading to Wasabi, so if for an
# Script
## Nextcloud
```bash
#!/bin/sh
# Nextcloud
echo "======================================"
echo "Backing up Nextcloud"
cd /var/lib/docker/volumes/nextcloud_nextcloud/_data/data/roger
NEXTCLOUD_FILE_NAME=$(date +"%Y_%m_%d")_nextcloud_backup
echo $NEXTCLOUD_FILE_NAME
echo "Compressing"
tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/
echo "Compressing"
tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz
echo "Uploading"
aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Deleting"
rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
echo "Uploading"
aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Deleting"
rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
```
### A breakdown
```bash
#!/bin/sh
```
This specifies that this is a shell script. It's the standard for this type of script.
```bash
# Nextcloud
echo "======================================"
echo "Backing up Nextcloud"
cd /var/lib/docker/volumes/nextcloud_nextcloud/_data/data/roger
NEXTCLOUD_FILE_NAME=$(date +"%Y_%m_%d")_nextcloud_backup
echo $NEXTCLOUD_FILE_NAME
```
Here, I `cd` to where my Nextcloud files are located. In [De-Google my life part 3](https://blog.rogs.me/2019/03/29/de-google-my-life-part-3-of-_-tu-_-nextcloud-collabora/) I talk about my mistake of not setting my volumes correctly; that's why I have to go to this location. I also create a new filename for my backup file using the current date.
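For example, if the script ran on April 1st, 2019, the variable would expand like this (illustrative date):
```bash
echo $NEXTCLOUD_FILE_NAME
# 2019_04_01_nextcloud_backup
```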
```bash
echo "Compressing"
tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/
echo "Compressing"
tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz
```
Then, I compress the folder into a `tar.gz` file. After that comes the encryption: I have a file somewhere on my server with my GPG passphrase, and it is used to encrypt my files with the `gpg` command. The command returns a `filename.tar.gz.gpg` file, which is then uploaded to Wasabi.
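The passphrase file itself is nothing special; something along these lines would create it (the path and permissions here are an example, not my real setup):
```bash
# Store the GPG passphrase in a root-only file so the script can read it non-interactively
echo 'my-long-random-passphrase' | sudo tee /root/.backup_passphrase > /dev/null
sudo chmod 600 /root/.backup_passphrase
```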
```bash
echo "Uploading"
aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Uploading"
aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Deleting"
rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
echo "Deleting"
rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
```
Finally, I upload everything to Wasabi using `awscli` and delete the file, so I keep my filesystem clean.
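For completeness, restoring one of these backups would be roughly the reverse process (a sketch; the file name is an example and the passphrase file is the same one used for encryption):
```bash
# Download the encrypted backup from Wasabi
aws s3 cp s3://backups-cloud/Nextcloud/2019_04_01_nextcloud_backup.tar.gz.gpg . --endpoint-url=https://s3.wasabisys.com

# Decrypt it with the same passphrase file
gpg --passphrase-file the/location/of/my/passphrase --batch -o 2019_04_01_nextcloud_backup.tar.gz -d 2019_04_01_nextcloud_backup.tar.gz.gpg

# Unpack the files
tar xzf 2019_04_01_nextcloud_backup.tar.gz
```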
## Is that it?
@ -94,26 +94,26 @@ Finally, I upload everything to Wasabi using `awscli` and delete the file, so I
This is the basic setup for backups, and it is repeated among all my apps, with a few variations.
## Dokuwiki
```bash
# Dokuwiki
echo "======================================"
echo "Backing up Dokuwiki"
cd /data/docker
DOKUWIKI_FILE_NAME=$(date +"%Y_%m_%d")_dokuwiki_backup
echo "Compressing"
tar czf /root/$DOKUWIKI_FILE_NAME.tar.gz dokuwiki/
echo "Compressing"
tar czf /root/$DOKUWIKI_FILE_NAME.tar.gz dokuwiki/
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$DOKUWIKI_FILE_NAME.tar.gz
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$DOKUWIKI_FILE_NAME.tar.gz
echo "Uploading"
aws s3 cp /root/$DOKUWIKI_FILE_NAME.tar.gz.gpg s3://backups-cloud/Dokuwiki/$DOKUWIKI_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Deleting"
rm /root/$DOKUWIKI_FILE_NAME.tar.gz /root/$DOKUWIKI_FILE_NAME.tar.gz.gpg
echo "Uploading"
aws s3 cp /root/$DOKUWIKI_FILE_NAME.tar.gz.gpg s3://backups-cloud/Dokuwiki/$DOKUWIKI_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Deleting"
rm /root/$DOKUWIKI_FILE_NAME.tar.gz /root/$DOKUWIKI_FILE_NAME.tar.gz.gpg
```
Pretty much the same as the last one, so here is a quick explanation:
* `cd` to a folder
@ -123,42 +123,43 @@ Pretty much the same as the last one, so here is a quick explanation:
* delete the local files
## Ghost
```bash
# Ghost
echo "======================================"
echo "Backing up Ghost"
cd /root
GHOST_FILE_NAME=$(date +"%Y_%m_%d")_ghost_backup
docker container cp ghost_ghost_1:/var/lib/ghost/ $GHOST_FILE_NAME
docker exec ghost_db_1 /usr/bin/mysqldump -u root --password=my-secure-root-password ghost > /root/$GHOST_FILE_NAME/ghost.sql
echo "Compressing"
tar czf /root/$GHOST_FILE_NAME.tar.gz $GHOST_FILE_NAME/
echo "Compressing"
tar czf /root/$GHOST_FILE_NAME.tar.gz $GHOST_FILE_NAME/
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$GHOST_FILE_NAME.tar.gz
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$GHOST_FILE_NAME.tar.gz
echo "Uploading"
aws s3 cp /root/$GHOST_FILE_NAME.tar.gz.gpg s3://backups-cloud/Ghost/$GHOST_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Deleting"
rm -r /root/$GHOST_FILE_NAME.tar.gz $GHOST_FILE_NAME /root/$GHOST_FILE_NAME.tar.gz.gpg
echo "Uploading"
aws s3 cp /root/$GHOST_FILE_NAME.tar.gz.gpg s3://backups-cloud/Ghost/$GHOST_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Deleting"
rm -r /root/$GHOST_FILE_NAME.tar.gz $GHOST_FILE_NAME /root/$GHOST_FILE_NAME.tar.gz.gpg
```
## A few differences!
```bash
docker container cp ghost_ghost_1:/var/lib/ghost/ $GHOST_FILE_NAME
docker exec ghost_db_1 /usr/bin/mysqldump -u root --password=my-secure-root-password ghost > /root/$GHOST_FILE_NAME/ghost.sql
```
Something new! Since for Ghost I didn't mount any volumes, I had to copy the files directly from the Docker container and then take a DB dump for safekeeping. Nothing too groundbreaking, but worth explaining.
# All done! How do I run it automatically?
Almost done! I just need to run everything automatically, so I can set it and forget it. Just like before, whenever I want to run something on a schedule, I use a cron job:
```bash
0 0 * * 1 sh /opt/backup.sh
```
This means:
_Please, can you run this script every Monday at 0:00? Thanks, server :*_
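Broken down, the five fields of that cron expression are:
```bash
# ┌──────── minute        (0)
# │ ┌────── hour          (0)
# │ │ ┌──── day of month  (every day)
# │ │ │ ┌── month         (every month)
# │ │ │ │ ┌ day of week   (1 = Monday)
# 0 0 * * 1 sh /opt/backup.sh
```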


@ -19,15 +19,15 @@ Wow. **523.803.417 records**.
At least the model was not that complex:
On `models.py`:
```python
class HugeTable(models.Model):
    """Huge table information"""
    search_field = models.CharField(max_length=10, db_index=True, unique=True)
    is_valid = models.BooleanField(default=True)

    def __str__(self):
        return self.search_field
```
So for Django admin, it should be a breeze, right? **WRONG.**
## The process
@ -35,11 +35,12 @@ So for Django admin, it should be a breeze, right? **WRONG.**
First, I just added the search field to the admin:
On `admin.py`:
```python
class HugeTableAdmin(admin.ModelAdmin):
    search_fields = ('search_field', )

admin.site.register(HugeTable, HugeTableAdmin)
```
And it worked! I had a functioning search field on my admin.
![2020-02-14-154646](/2020-02-14-154646.png)
@ -66,36 +67,38 @@ A quick look at the Django docs told me how to deactivate the "see more" query:
> Set show_full_result_count to control whether the full count of objects should be displayed on a filtered admin page (e.g. 99 results (103 total)). If this option is set to False, a text like 99 results (Show all) is displayed instead.
On `admin.py`:
```python
class HugeTableAdmin(admin.ModelAdmin):
    search_fields = ('search_field', )
    show_full_result_count = False

admin.site.register(HugeTable, HugeTableAdmin)
```
That fixed one problem, but what about the other? It seemed I needed to build my own paginator.
Thankfully, I found an _awesome_ post by Haki Benita called ["Optimizing the Django Admin Paginator"](https://hakibenita.com/optimizing-the-django-admin-paginator) that explained exactly that. Since I didn't need to know the record count, I went with the "Dumb" approach:
On `admin.py`:
```python
from django.core.paginator import Paginator
from django.utils.functional import cached_property

class DumbPaginator(Paginator):
    """
    Paginator that does not count the rows in the table.
    """
    @cached_property
    def count(self):
        return 9999999999

class HugeTableAdmin(admin.ModelAdmin):
    search_fields = ('search_field', )
    show_full_result_count = False
    paginator = DumbPaginator

admin.site.register(HugeTable, HugeTableAdmin)
```
And it worked! The page was loading blazingly fast :) But the search was still **ultra slow**. So let's fix that.
![2020-02-14-153840](/2020-02-14-153840.png)
@ -105,36 +108,39 @@ And it worked! The page was loading blazingly fast :) But the search was still *
I checked A LOT of options. I almost went with [Haystack](https://haystacksearch.org/), but it seemed a bit overkill for what I needed. I finally found this super cool tool: [djangoql](https://github.com/ivelum/djangoql/). It allows searching the table using _SQL-like_ operations, so I could search by `search_field` and make use of the index. So I installed it:
On `settings.py`:
```python
INSTALLED_APPS = [
    ...
    'djangoql',
    ...
]
```
On `admin.py`:
```python
from django.core.paginator import Paginator
from django.utils.functional import cached_property
from djangoql.admin import DjangoQLSearchMixin

class DumbPaginator(Paginator):
    """
    Paginator that does not count the rows in the table.
    """
    @cached_property
    def count(self):
        return 9999999999

class HugeTableAdmin(DjangoQLSearchMixin, admin.ModelAdmin):
    show_full_result_count = False
    paginator = DumbPaginator

admin.site.register(HugeTable, HugeTableAdmin)
```
And it worked! By performing the query:
search_field = "my search query"
```python
search_field = "my search query"
```
I get my results in around 1 second.