Changed the code renderers

Roger Gonzalez 2020-04-27 10:33:09 -03:00
parent 29d309db51
commit e0d43b6682
5 changed files with 315 additions and 302 deletions


@@ -22,60 +22,64 @@ To start, I'm using their $5 server which at the time of this writing includes:
## Installation
On my first SSH into the server, I perform basic tasks such as updating and upgrading it:
```bash
sudo apt update && sudo apt upgrade -y
```
Then I install some essentials like `software-properties-common` (used to add new repositories using `add-apt-repository`), NGINX, htop, Git, and Emacs, the best text editor on this planet <small>vim sucks</small>:
```bash
sudo apt install software-properties-common nginx htop git emacs
```
For SSL certificates I'm going to use Certbot, because it is the simplest and most useful tool for the job. It requires some extra steps:
```bash
sudo add-apt-repository ppa:certbot/certbot -y
sudo apt update
sudo apt install python-certbot-nginx -y
```
By default, DigitalOcean servers have no `swap`, so I'll add it by pasting some [DigitalOcean boilerplate](https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-18-04) into the terminal:
```bash
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
sudo cp /etc/fstab /etc/fstab.bak
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
sudo sysctl vm.swappiness=10
sudo sysctl vm.vfs_cache_pressure=50
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf
echo "vm.vfs_cache_pressure=50" | sudo tee -a /etc/sysctl.conf
```
This adds 2GB of `swap`.
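To confirm the new swap is actually in use, a quick verification (not shown in the original steps) is:
```bash
# Show active swap areas and overall memory/swap usage
sudo swapon --show
free -h
```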
Then I set up my firewall with UFW:
```bash
sudo ufw allow 22 #SSH
sudo ufw allow 80 #HTTP
sudo ufw allow 443 #HTTPS
sudo ufw allow 25 #SMTP
sudo ufw allow 143 #IMAP
sudo ufw allow 993 #IMAPS
sudo ufw allow 110 #POP3
sudo ufw allow 995 #POP3S
sudo ufw allow 587 #SMTP
sudo ufw allow 465 #SMTPS
sudo ufw allow 4190 #Manage Sieve
sudo ufw enable
```
Finally, I install `docker` and `docker-compose`, which are going to be the main software running on both servers.
```bash
# Docker
curl -sSL https://get.docker.com/ | CHANNEL=stable sh
systemctl enable docker.service
systemctl start docker.service

# Docker compose
curl -L https://github.com/docker/compose/releases/download/$(curl -Ls https://www.servercow.de/docker-compose/latest.php)/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
Now that everything is done, we can continue configuring the first server!
@@ -90,14 +94,14 @@ For my email I chose Mailcow. Why?
## Installation & Setup
Installation was simple: first, I followed the instructions in their [official documentation](https://mailcow.github.io/mailcow-dockerized-docs/i_u_m_install/):
```bash
cd /opt
git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized
./generate_config.sh
# The process will ask you for your FQDN to automatically configure NGINX.
# Mine is mail.rogs.me, but yours can be whatever you want
```
I pointed my subdomain at the server (an A record in Cloudflare), finally opened my browser, visited [https://mail.rogs.me](https://mail.rogs.me), and there it was, as beautiful as I was expecting.
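Before opening the browser, it's worth confirming the record has propagated; a quick check from any machine (not part of the original steps):
```bash
# Should print the droplet's public IP once the A record is live
dig +short mail.rogs.me A
```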
![Captura-de-pantalla-de-2019-03-20-17-20-49](/Captura-de-pantalla-de-2019-03-20-17-20-49.png)


@@ -28,14 +28,14 @@ The first step is to set up the server. I'm not going to explain that again, but
## Installation
For my Nextcloud installation I went straight to the [official docker documentation](https://github.com/nextcloud/docker) and extracted this docker-compose file:
```yaml
version: '2'

volumes:
  nextcloud:
  db:

services:
  db:
    image: mariadb
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
@@ -57,14 +57,15 @@ For my Nextcloud installation I went straight to the [official docker documentat
    volumes:
      - nextcloud:/var/www/html
    restart: always
```
**Some mistakes were made**
I forgot to mount the volumes, so Docker automatically mounted them in `/var/lib/docker/volumes/`. This is a small problem I haven't solved yet, because it hasn't brought any serious issues. If someone knows whether this will be problematic in the long run, please let me know. I didn't want to fix it just for these posts: I'm writing about my experience, and of course it wasn't perfect.
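If you hit the same situation, Docker itself can tell you where those auto-created volumes live. The volume name below is illustrative:
```bash
# List all volumes, then print the on-disk location of one of them
docker volume ls
docker volume inspect nextcloud_nextcloud --format '{{ .Mountpoint }}'
```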
I created the directory `/opt/nextcloud` to keep my docker-compose file, and finally ran:
```bash
docker-compose pull
docker-compose up -d
```
It was that simple! The app was running on port 8080! But that was not what I wanted: I wanted it running on ports 80 and 443. For that, I used a reverse proxy with NGINX and Let's Encrypt.
@@ -73,7 +74,7 @@ It was that simple! The app was running on port 8080! But that is not what I wan
Configuring NGINX is dead simple. Here is my configuration,
`/etc/nginx/sites-available/nextcloud`:
```nginx
server {
    listen 80 default_server;
    listen [::]:80 default_server;
@@ -110,11 +111,11 @@ Configuring NGINX is dead simple. Here is my configuration
    # Set the client_max_body_size to 1000M so NGINX doesn't cut uploads
    client_max_body_size 1000M;
}
```
Then I created a soft link from the configuration file to the "sites-enabled" folder:
```bash
ln -s /etc/nginx/sites-available/nextcloud /etc/nginx/sites-enabled
```
and that was it!
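One step implied but not shown: NGINX only picks up the new site after a reload. Assuming a standard systemd setup, that would be:
```bash
# Validate the configuration, then reload NGINX if the syntax check passes
sudo nginx -t && sudo systemctl reload nginx
```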
In this configuration you will see that I'm already referencing the SSL certificates, even though they don't exist yet. We are going to create them in the next step.
@@ -122,15 +123,16 @@ In this configuration you will see that I'm already referencing the SSL certific
## Let's Encrypt configuration
To generate the SSL certificates, you first need to point your domain/subdomain to your server. Every DNS manager is different, so you will have to figure that part out. The command I will use throughout this blog series to create certificates is the following:
```bash
sudo -H certbot certonly --nginx -d mydomain.com
```
The first time you run Let's Encrypt, you have to configure some things: it will ask for your email and a few questions. Input that information and finish the process.
To enable automatic SSL certificate renewal, create a new cron job (`crontab -e`) with the following information:
```bash
0 3 * * * certbot renew -q
```
This will run every morning at 3 AM and check whether any of your domains need to be renewed. If they do, it renews them.
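Certbot also provides a safe way to rehearse that cron job without touching the live certificates:
```bash
# Simulate the renewal process end to end; nothing is actually replaced
sudo certbot renew --dry-run
```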
At the end, you should be able to visit [https://myclouddomain.com](https://myclouddomain.com) and be greeted with a nice Nextcloud screen:
@@ -149,9 +151,9 @@ Once that was fixed, Nextcloud was 100% ready to be used!
![Captura-de-pantalla-de-2019-03-28-16-19-13](/Captura-de-pantalla-de-2019-03-28-16-19-13.png)
After that I went straight to "Settings/Basic settings" and noticed that my background jobs were set to "Ajax". That's not good, because if I don't open the site, the tasks will never run. I changed it to "Cron" and created a new cron job on my server with the following information:
```bash
*/15 * * * * /usr/bin/docker exec --user www-data nextcloud_app_1 php cron.php
```
This will run the Nextcloud cron job inside the docker container every 15 minutes.
Then, in "Settings/Overview", I noticed a bunch of errors in the "Security & setup warnings" section. Those were very easy to fix, but since not all installations are the same, I won't go deep into this. [DuckDuckGo](https://duckduckgo.com/) is your friend.
@@ -178,13 +180,13 @@ Now that NextCloud was up and running, I needed my "Google Docs" part. Enter Col
If you don't know what it is, Collabora is like Google Docs / Sheets / Slides, but free and open source. You can check out more about the project [here](https://nextcloud.com/collaboraonline/).
This was a very easy installation. I ran it directly with docker:
```bash
docker run -t -d -p 127.0.0.1:9980:9980 -e 'domain=mynextclouddomain.com' --restart always --cap-add MKNOD collabora/code
```
Created a new NGINX reverse proxy,
`/etc/nginx/sites-available/collabora`:
```nginx
# Taken from https://icewind.nl/entry/collabora-online/
server {
    listen 443 ssl;
@@ -229,15 +231,15 @@ Created a new NGINX reverse proxy
        proxy_read_timeout 36000s;
    }
}
```
Created the SSL certificate for the Collabora installation:
```bash
sudo -H certbot certonly --nginx -d office.mydomain.com
```
And finally, I created a soft link from the configuration file to the "sites-enabled" folder:
```bash
ln -s /etc/nginx/sites-available/collabora /etc/nginx/sites-enabled
```
Pretty easy stuff.
## Nextcloud configuration


@@ -18,10 +18,10 @@ In this post, we get to the fun part: What am I going to do to improve my online
[Ghost](https://ghost.org/) is an open source, headless blogging platform made in NodeJS. The community is quite large and, most importantly, it fit all my requirements (open source and runs in a docker container).
For the installation, I kept it simple. I went to the [DockerHub page for Ghost](https://hub.docker.com/_/ghost/) and used their base `docker-compose` config for myself. This is what I came up with:
```yaml
version: '3.1'

services:
  ghost:
    image: ghost:1-alpine
@@ -41,12 +41,12 @@ For the installation, I kept it simple. I went to the [DockerHub page for Ghost]
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: my_super_secure_mysql_password
```
Simple enough: the base Ghost image and a MySQL DB image. Simple, readable, functional.
For the NGINX configuration I used a simple proxy:
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name blog.rogs.me;
@@ -63,8 +63,8 @@ For the NGINX configuration I used a simple proxy:
        proxy_read_timeout 5m;
    }
    client_max_body_size 10M;
}
```
What does this mean? This config just says: "Hey NGINX! Proxy port 7000 through port 80 please, thanks."
And that was it. So simple, there's not much to say. Just like the title of the series, `¯\_(ツ)_/¯`
@@ -80,15 +80,15 @@ I have always admired tech people that have their own wikis. It's like a place w
While doing research, I found [Dokuwiki](https://www.dokuwiki.org/dokuwiki), which is not only open source, but uses no database! Everything is kept in files that compose your wiki. P R E T T Y N I C E.
For this one, DockerHub had no official Dokuwiki image, but I used a very good one from the user [mprasil](https://hub.docker.com/r/mprasil/dokuwiki). I used his recommended configuration (no `docker-compose` needed, since it was a single docker image):
```bash
docker run -d -p 8000:80 --name my_wiki \
    -v /data/docker/dokuwiki/data:/dokuwiki/data \
    -v /data/docker/dokuwiki/conf:/dokuwiki/conf \
    -v /data/docker/dokuwiki/lib/plugins:/dokuwiki/lib/plugins \
    -v /data/docker/dokuwiki/lib/tpl:/dokuwiki/lib/tpl \
    -v /data/docker/dokuwiki/logs:/var/log \
    mprasil/dokuwiki
```
**Some mistakes were made, again**
I was following instructions blindly; I'm dumb. I mounted the Dokuwiki files in the `/data/docker` directory, which is not what I wanted. In the process of working on this project, I have learned one big thing:
@@ -97,8 +97,8 @@ _Always. check. installation. folders and/or mounting points_
Just like the last one, I didn't want to fix this just for these posts: I'm writing about my experience, and of course it wasn't perfect.
Let's continue. Once the docker container was running, I configured NGINX with another simple proxy redirect:
```nginx
server {
    listen 80;
    listen [::]:80;
    server_name wiki.rogs.me;
@@ -115,8 +115,8 @@ Let's continue. Once the docker container was running, I configured NGINX with a
        proxy_read_timeout 5m;
    }
    client_max_body_size 10M;
}
```
Just like the other one: "Hey NGINX! Forward port 8000 to port 80 please :) Thanks!"
![Captura-de-pantalla-de-2019-11-16-20-15-35](/Captura-de-pantalla-de-2019-11-16-20-15-35.png)


@@ -34,59 +34,59 @@ So, by using GPG I can encrypt my files before uploading to Wasabi, so if for an
# Script
## Nextcloud
```bash
#!/bin/sh

# Nextcloud
echo "======================================"
echo "Backing up Nextcloud"
cd /var/lib/docker/volumes/nextcloud_nextcloud/_data/data/roger

NEXTCLOUD_FILE_NAME=$(date +"%Y_%m_%d")_nextcloud_backup
echo $NEXTCLOUD_FILE_NAME

echo "Compressing"
tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/

echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz

echo "Uploading"
aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com

echo "Deleting"
rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
```
### A breakdown
```bash
#!/bin/sh
```
This specifies that it is a shell script. It's the standard shebang for this type of script.
```bash
# Nextcloud
echo "======================================"
echo "Backing up Nextcloud"
cd /var/lib/docker/volumes/nextcloud_nextcloud/_data/data/roger

NEXTCLOUD_FILE_NAME=$(date +"%Y_%m_%d")_nextcloud_backup
echo $NEXTCLOUD_FILE_NAME
```
Here, I `cd` into the directory where my Nextcloud files are located. In [De-Google my life part 3](https://blog.rogs.me/2019/03/29/de-google-my-life-part-3-of-_-tu-_-nextcloud-collabora/) I talk about my mistake of not setting up my volumes correctly; that's why I have to go to this location. I also build a new filename for the backup file using the current date.
```bash
echo "Compressing"
tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/
echo "Compressing" echo "Encrypting"
tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/ gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz
```
echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz
Then I compress the folder into a `tar.gz` file. Afterwards comes the encryption: I have a file somewhere on my server with my GPG passphrase, and it is used to encrypt my files with the `gpg` command. The command produces a "filename.tar.gz.gpg" file, which is then uploaded to Wasabi.
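For completeness (the post never shows it), decrypting one of these backups would look roughly like this; on newer GnuPG versions you may also need `--pinentry-mode loopback`:
```bash
# Decrypt with the same passphrase file, then unpack the archive
gpg --passphrase-file the/location/of/my/passphrase --batch -o backup.tar.gz -d backup.tar.gz.gpg
tar xzf backup.tar.gz
```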
```bash
echo "Uploading"
aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
echo "Uploading" echo "Deleting"
aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
```
echo "Deleting"
rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
Finally, I upload everything to Wasabi using `awscli` and delete the local files, so I keep my filesystem clean.
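This assumes `awscli` is already installed and configured with Wasabi credentials; a minimal, hypothetical setup would be:
```bash
# Install the AWS CLI and store the Wasabi access keys (values are placeholders)
sudo apt install awscli -y
aws configure
```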
## Is that it?
@@ -94,26 +94,26 @@ Finally, I upload everything to Wasabi using `awscli` and delete the file, so I
This is the basic setup for backups, and it is repeated across all my apps with few variations.
## Dokuwiki
```bash
# Dokuwiki
echo "======================================"
echo "Backing up Dokuwiki"
cd /data/docker

DOKUWIKI_FILE_NAME=$(date +"%Y_%m_%d")_dokuwiki_backup

echo "Compressing"
tar czf /root/$DOKUWIKI_FILE_NAME.tar.gz dokuwiki/

echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$DOKUWIKI_FILE_NAME.tar.gz

echo "Uploading"
aws s3 cp /root/$DOKUWIKI_FILE_NAME.tar.gz.gpg s3://backups-cloud/Dokuwiki/$DOKUWIKI_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com

echo "Deleting"
rm /root/$DOKUWIKI_FILE_NAME.tar.gz /root/$DOKUWIKI_FILE_NAME.tar.gz.gpg
```
Pretty much the same as the last one, so here is a quick explanation:
* `cd` to a folder
@@ -123,42 +123,43 @@ Pretty much the same as the last one, so here is a quick explanation:
* delete the local files
## Ghost
```bash
# Ghost
echo "======================================"
echo "Backing up Ghost"
cd /root

GHOST_FILE_NAME=$(date +"%Y_%m_%d")_ghost_backup

docker container cp ghost_ghost_1:/var/lib/ghost/ $GHOST_FILE_NAME
docker exec ghost_db_1 /usr/bin/mysqldump -u root --password=my-secure-root-password ghost > /root/$GHOST_FILE_NAME/ghost.sql

echo "Compressing"
tar czf /root/$GHOST_FILE_NAME.tar.gz $GHOST_FILE_NAME/

echo "Encrypting"
gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$GHOST_FILE_NAME.tar.gz

echo "Uploading"
aws s3 cp /root/$GHOST_FILE_NAME.tar.gz.gpg s3://backups-cloud/Ghost/$GHOST_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com

echo "Deleting"
rm -r /root/$GHOST_FILE_NAME.tar.gz $GHOST_FILE_NAME /root/$GHOST_FILE_NAME.tar.gz.gpg
```
## A few differences!
```bash
docker container cp ghost_ghost_1:/var/lib/ghost/ $GHOST_FILE_NAME
docker exec ghost_db_1 /usr/bin/mysqldump -u root --password=my-secure-root-password ghost > /root/$GHOST_FILE_NAME/ghost.sql
```
Something new! Since I didn't mount any volumes for Ghost, I had to copy the files directly out of the docker container and then take a DB dump for safekeeping. Nothing too groundbreaking, but worth explaining.
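Restoring would be the reverse operation. A hypothetical sketch (not from the original post), reusing the same names:
```bash
# Copy the files back into the container, then load the SQL dump into MySQL
docker container cp $GHOST_FILE_NAME/. ghost_ghost_1:/var/lib/ghost/
docker exec -i ghost_db_1 mysql -u root --password=my-secure-root-password ghost < /root/$GHOST_FILE_NAME/ghost.sql
```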
# All done! How do I run it automatically?
Almost done! I just need to run everything automatically, so I can set it and forget it. Just like before, whenever I want to run something programmatically, I use a cron job:
```bash
0 0 * * 1 sh /opt/backup.sh
```
This means:
_Please, can you run this script every Monday at 0:00? Thanks, server :*_


@@ -19,15 +19,15 @@ Wow. **523,803,417 records**.
At least the model was not that complex:
On `models.py`:
```python
class HugeTable(models.Model):
    """Huge table information"""
    search_field = models.CharField(max_length=10, db_index=True, unique=True)
    is_valid = models.BooleanField(default=True)

    def __str__(self):
        return self.search_field
```
So for the Django admin, it should be a breeze, right? **WRONG.**
## The process
@@ -35,11 +35,12 @@ So for Django admin, it should be a breeze, right? **WRONG.**
First, I just added the search field to the admin.
On `admin.py`:
```python
class HugeTableAdmin(admin.ModelAdmin):
    search_fields = ('search_field', )

admin.site.register(HugeTable, HugeTableAdmin)
```
And it worked! I had a functioning search field in my admin.
![2020-02-14-154646](/2020-02-14-154646.png)
@@ -66,23 +67,24 @@ A quick look at the Django docs told me how to deactivate the "see more" query:
> Set show_full_result_count to control whether the full count of objects should be displayed on a filtered admin page (e.g. 99 results (103 total)). If this option is set to False, a text like 99 results (Show all) is displayed instead.
On `admin.py`:
```python
class HugeTableAdmin(admin.ModelAdmin):
    search_fields = ('search_field', )
    show_full_result_count = False

admin.site.register(HugeTable, HugeTableAdmin)
```
That fixed one problem, but what about the other? It seemed I needed to build my own paginator.
Thankfully, I found an _awesome_ post by Haki Benita called ["Optimizing the Django Admin Paginator"](https://hakibenita.com/optimizing-the-django-admin-paginator) that explained exactly that. Since I didn't need to know the record count, I went with the "dumb" approach:
On `admin.py`:
```python
from django.core.paginator import Paginator
from django.utils.functional import cached_property

class DumbPaginator(Paginator):
    """
    Paginator that does not count the rows in the table.
    """
@@ -90,12 +92,13 @@ On `admin.py`:
    def count(self):
        return 9999999999

class HugeTableAdmin(admin.ModelAdmin):
    search_fields = ('search_field', )
    show_full_result_count = False
    paginator = DumbPaginator

admin.site.register(HugeTable, HugeTableAdmin)
```
And it worked! The page was loading blazingly fast :) But the search was still **ultra slow**. So let's fix that.
![2020-02-14-153840](/2020-02-14-153840.png)
@@ -105,20 +108,21 @@ And it worked! The page was loading blazingly fast :) But the search was still *
I checked A LOT of options. I almost went with [Haystack](https://haystacksearch.org/), but it seemed a bit overkill for what I needed. I finally found this super cool tool: [djangoql](https://github.com/ivelum/djangoql/). It allowed me to search the table using _SQL-like_ operations, so I could search by `search_field` and make use of the index. So I installed it:
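The post jumps straight to configuration, so one assumed prerequisite is installing the package itself:
```bash
# djangoql is available on PyPI
pip install djangoql
```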
On `settings.py`:
```python
INSTALLED_APPS = [
    ...
    'djangoql',
    ...
]
```
On `admin.py`:
```python
from django.core.paginator import Paginator
from django.utils.functional import cached_property
from djangoql.admin import DjangoQLSearchMixin

class DumbPaginator(Paginator):
    """
    Paginator that does not count the rows in the table.
    """
@@ -126,15 +130,17 @@ On `admin.py`:
    def count(self):
        return 9999999999

class HugeTableAdmin(DjangoQLSearchMixin, admin.ModelAdmin):
    show_full_result_count = False
    paginator = DumbPaginator

admin.site.register(HugeTable, HugeTableAdmin)
```
And it worked! By performing the query:
```python
search_field = "my search query" search_field = "my search query"
```
I get my results in around 1 second.