authorRoger Gonzalez <roger@rogs.me>2020-04-25 16:50:56 -0300
committerRoger Gonzalez <roger@rogs.me>2020-04-25 16:50:56 -0300
commit29d309db5196099982d96933acdc4c0d0ae45436 (patch)
treed05796b60487d2befc2780d35ae1f864eff225c7 /content
Initial commit, migrating from Ghost
Diffstat (limited to 'content')
-rw-r--r--content/posts/degoogle-my-life-part-1.md60
-rw-r--r--content/posts/degoogle-my-life-part-2.md129
-rw-r--r--content/posts/degoogle-my-life-part-3.md279
-rw-r--r--content/posts/degoogle-my-life-part-4.md134
-rw-r--r--content/posts/degoogle-my-life-part-5.md188
-rw-r--r--content/posts/how-to-search-in-a-huge-table-on-django-admin.md147
-rw-r--r--content/posts/my-mom-was-always-right.md56
7 files changed, 993 insertions, 0 deletions
diff --git a/content/posts/degoogle-my-life-part-1.md b/content/posts/degoogle-my-life-part-1.md
new file mode 100644
index 0000000..b88c252
--- /dev/null
+++ b/content/posts/degoogle-my-life-part-1.md
@@ -0,0 +1,60 @@
+---
+title: "De-Google my life - Part 1 of ¯\_(ツ)_/¯: Why? How?"
+url: "/2019/03/15/de-google-my-life-part-1-of-_-tu-_-why-how"
+date: 2019-03-15T15:59:00-04:00
+lastmod: 2020-04-25T12:35:53-03:00
+tags : [ "degoogle", "devops" ]
+---
+
+Hi everyone! I'm here with my first project of the year. It is almost done, but I think it is time to start documenting everything.
+
+One day I was hanging out with my girlfriend, looking online for trips to Japan, and found myself bombarded by ads that were disturbingly specific. We realized right then that Google knows A LOT about us, and we were not happy about that. With my tech knowledge I knew there were a lot of alternatives to Google, but first I needed to answer a bigger question:
+
+# Why?
+
+I told my techie friends about the craziness I was trying to accomplish and they all answered in unison: Why?
+
+So I came up with the following list:
+
+* **Privacy**. The internet is a scary place if you don't know what you are doing. I don't like big corporations knowing everything about me just to sell ads or use my data for whatever they want. I have learned that if something is free, it's because [**you** are the product](https://twitter.com/rogergonzalez21/status/1067816233125494784), **EXCEPT** in open source (thanks to [/u/SnowKissedBerries](https://www.reddit.com/user/SnowKissedBerries) for that clarification).
+* **Security**. I live in a very controlled country (Venezuela). Over here almost every government agency is looking at you, so using self-hosted alternatives and a VPN brings peace of mind to me and my family.
+* **To learn**. Learning all these skills is going to be good for my career as a Backend / DevOps engineer.
+* **Because I can and it is fun**. Narrowing it all down, I'm doing this because I can. It might be overkill, dumb and unreliable, **but** it is really fun. Learning new skills is always a good, fun experience, at least for me.
+
+Perfect! I have all the "Whys" detailed, but how am I going to achieve all of this?
+
+# How?
+
+First of all, I went to the experts (shout out to [/r/selfhosted!](https://www.reddit.com/r/selfhosted)) and read all the interesting topics over there that I could use for my self-hosting endeavours. After 1 week of reading and researching, I came up with the following setup:
+
+2 servers, each with its own stack:
+
+* **Server 1: Mail server**
+ Mailcow for my SMTP / IMAP email server
+* **Server 2: Everything-else server**
+ Nextcloud for my files, calendar, tasks and contacts
+ Some other apps (?) (More on that for the following posts)
+
+I chose DigitalOcean for the hosting because it is cheap and I have a ton of experience with their servers (I have set up more than 100 servers on their platform).
+
+For the VPN I chose [PIA](https://www.privateinternetaccess.com/pages/buy-a-vpn/1218buyavpn?invite=U2FsdGVkX1_cGyzYzdmeUMjhrUAwTzDBCMY-PsW-pXA%2CSawh3XnBRwlSt_9084reCHGX1Kk). The criteria for this decision? One of my friends lent me his account for ~2 weeks and it worked great. Sometimes I didn't even realize the VPN was on, because the internet was still super fast.
+
+# Some self-imposed challenges
+
+I knew this wasn't going to be easy, so of course I added more challenges just because <s>I'm dumb</s>.
+
+* **Only use open source software**
+ I wasn't going to install more proprietary software on my servers. I wanted free and open source alternatives for my setup.
+* **Only use Docker**
+ I had "Learn docker" in my backlog for too long, so I used this opportunity to learn it the hard way.
+* **Use a cheap but reliable backup solution**
+ One of the parts that scared me about having my own servers was the backups. If one of the servers goes down, almost all of my work goes with it, so I needed to have a reliable but cheap backup solution.
+
+# Conclusion
+
+This is only the first part, but I'm planning on this being a long and very cool project. I hope I didn't bore you to death with all my yapping; I promise the next posts will be more entertaining, with code, server configurations and all of that good stuff.
+
+[Click here for part 2](https://blog.rogs.me/2019/03/22/de-google-my-life-part-2-of-_-tu-_-servers-and-emails/)
+[Click here for part 3](https://blog.rogs.me/2019/03/29/de-google-my-life-part-3-of-_-tu-_-nextcloud-collabora/)
+[Click here for part 4](https://blog.rogs.me/2019/11/20/de-google-my-life-part-4-of-_-tu-_-dokuwiki-ghost/)
+[Click here for part 5](https://blog.rogs.me/2019/11/27/de-google-my-life-part-5-of-_-tu-_-backups/)
diff --git a/content/posts/degoogle-my-life-part-2.md b/content/posts/degoogle-my-life-part-2.md
new file mode 100644
index 0000000..6f55a8c
--- /dev/null
+++ b/content/posts/degoogle-my-life-part-2.md
@@ -0,0 +1,129 @@
+---
+title: "De-Google my life - Part 2 of ¯\_(ツ)_/¯: Servers and Emails"
+url: "/2019/03/22/de-google-my-life-part-2-of-_-tu-_-servers-and-emails"
+date: 2019-03-22T21:03:00-04:00
+lastmod: 2020-04-25T12:35:53-03:00
+tags : [ "degoogle", "devops" ]
+---
+
+Hello everyone! Welcome to the second post of this blog series that aims to de-Google my life as much as possible. If you haven't read the first one, you should [definitely check it out](https://blog.rogs.me/2019/03/15/de-google-my-life-part-1-of-_-tu-_-why-how/). This installment focuses more on code and configurations, so I promise it won't be as boring :)
+
+# Servers configuration
+
+As I mentioned in the previous post, I'll be using two servers that are configured almost identically, so I'm only going to explain the setup once. To host my servers I'm using [DigitalOcean](https://m.do.co/c/cf0ff9cae16a), because I'm very used to their UI, their prices are excellent and they accept PayPal. If you haven't yet, you should check them out.
+
+To start, I'm using their $5 server, which at the time of this writing includes:
+
+* Ubuntu 18.04 (64-bit)
+* 1GB RAM
+* 1 CPU
+* 1000 GB of monthly transfer
+
+## Installation
+
+On my first SSH into the server I perform basic tasks such as updating and upgrading it:
+
+    sudo apt update && sudo apt upgrade -y
+
+Then I install some essentials: `software-properties-common` (used to add new repositories with `add-apt-repository`), NGINX, htop, git and Emacs, the best text editor on this planet <small>vim sucks</small>
+
+ sudo apt install software-properties-common nginx htop git emacs
+
+For SSL certificates I'm going to use Certbot, because it is the simplest and most useful tool for the job. It requires some extra steps:
+
+ sudo add-apt-repository ppa:certbot/certbot -y
+ sudo apt update
+ sudo apt install python-certbot-nginx -y
+
+By default DigitalOcean servers have no `swap`, so I'll add it by pasting some [DigitalOcean boilerplate](https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-18-04) into the terminal (note the `tee` calls: a plain `sudo echo ... >> file` wouldn't work, because the redirection runs as the unprivileged user):
+
+    sudo fallocate -l 2G /swapfile
+    sudo chmod 600 /swapfile
+    sudo mkswap /swapfile
+    sudo swapon /swapfile
+    sudo cp /etc/fstab /etc/fstab.bak
+    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
+    sudo sysctl vm.swappiness=10
+    sudo sysctl vm.vfs_cache_pressure=50
+    echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf
+    echo "vm.vfs_cache_pressure=50" | sudo tee -a /etc/sysctl.conf
+
+This adds 2GB of `swap`.
+
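+To confirm the swap is actually active, you can ask the kernel directly:
+
+    sudo swapon --show
+    free -h
+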
+Then I set up my firewall with UFW:
+
+    sudo ufw allow 22 #SSH
+    sudo ufw allow 80 #HTTP
+    sudo ufw allow 443 #HTTPS
+    sudo ufw allow 25 #SMTP
+ sudo ufw allow 143 #IMAP
+ sudo ufw allow 993 #IMAPS
+ sudo ufw allow 110 #POP3
+ sudo ufw allow 995 #POP3S
+ sudo ufw allow 587 #SMTP
+ sudo ufw allow 465 #SMTPS
+ sudo ufw allow 4190 #Manage Sieve
+
+ sudo ufw enable
+
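+Once the firewall is enabled, it's worth double-checking which ports actually ended up open:
+
+    sudo ufw status verbose
+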
+Finally, I install `docker` and `docker-compose`, which are going to be the main software running on both servers.
+
+ # Docker
+ curl -sSL https://get.docker.com/ | CHANNEL=stable sh
+ systemctl enable docker.service
+ systemctl start docker.service
+
+ # Docker compose
+ curl -L https://github.com/docker/compose/releases/download/$(curl -Ls https://www.servercow.de/docker-compose/latest.php)/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose
+ chmod +x /usr/local/bin/docker-compose
+
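+To make sure both tools installed correctly, a quick version check doesn't hurt:
+
+    docker --version
+    docker-compose --version
+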
+Now that everything is done, we can continue configuring the first server!
+
+# Server #1: Mailcow
+
+For my email I chose Mailcow. Why?
+
+* It checks all of my "challenges list" items from last week's post ([open source and dockerized](https://github.com/mailcow/mailcow-dockerized)).
+* The documentation is fantastic, explaining each detail one by one.
+* It has a huge community behind it.
+
+## Installation & Setup
+
+The installation was simple. First, I followed the instructions in their [official documentation](https://mailcow.github.io/mailcow-dockerized-docs/i_u_m_install/):
+
+ cd /opt
+ git clone https://github.com/mailcow/mailcow-dockerized
+ cd mailcow-dockerized
+ ./generate_config.sh
+ # The process will ask you for your FQDN to automatically configure NGINX.
+ # Mine is mail.rogs.me, but yours might be whatever you want
+
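+From there, the official docs have you pull the images and start the whole stack, the same compose routine that shows up throughout this series:
+
+    docker-compose pull
+    docker-compose up -d
+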
+I pointed my subdomain at the server (an A record in Cloudflare), opened my browser, visited [https://mail.rogs.me](https://mail.rogs.me) and there it was, as beautiful as I was expecting.
+
+![Captura-de-pantalla-de-2019-03-20-17-20-49](/Captura-de-pantalla-de-2019-03-20-17-20-49.png)
+<small>What a beautiful cow</small>
+
+After that, I just followed the documentation to [configure their Let's Encrypt docker image](https://mailcow.github.io/mailcow-dockerized-docs/firststeps-ssl/), [added more records to my DNS](https://mailcow.github.io/mailcow-dockerized-docs/prerequisite-dns/) and tested a lot with [https://www.mail-tester.com/](https://www.mail-tester.com/) until I got a good score:
+
+![Captura-de-pantalla-de-2019-03-20-17-25-14](/Captura-de-pantalla-de-2019-03-20-17-25-14.png)
+<small>My actual score. Everything is perfect in self-hosted-mail-land</small>
+
+I know that sometimes that score doesn't mean much, but at least it is nice to know my email is completely configured.
+
+## Backups
+
+Since I keep all my emails local, I didn't want a huge backup solution for this server, so I went with the DigitalOcean backup, which costs $1 per month. Cheap, reliable and it just works.
+
+## Edit Nov 23-26 2019
+
+As of now, I'm not using PIA anymore, because [they were bought by Kape Technologies, a company known for shipping malware in their software and for being scummy in general](https://www.reddit.com/r/homelab/comments/e05ce4/psa_piaprivateinternetaccess_has_been_bought_by/). I'm now using [Mullvad](https://mullvad.net/), [which really focuses on security](https://mullvad.net/es/help/no-logging-data-policy/). If you were using PIA, I really recommend changing providers.
+
+# Conclusion
+
+With all of this, my first server was done, but it was also the easiest one: a pretty straightforward installation with nothing fancy going on, no custom backups, no NGINX configuration, nothing much. On the plus side, I had my email working really quickly, and it was a very satisfying and rewarding experience. This is when the "selfhost everything" bug bit me and this project really started to ramp up in speed. In the next post we will talk about the second server, which includes fun stuff such as [Nextcloud](https://nextcloud.com/), [Collabora](https://www.collaboraoffice.com/), [Dokuwiki](https://www.dokuwiki.org/dokuwiki) and many more.
+
+Stay tuned!
+
+[Click here for part 3](https://blog.rogs.me/2019/03/29/de-google-my-life-part-3-of-_-tu-_-nextcloud-collabora/)
+[Click here for part 4](https://blog.rogs.me/2019/11/20/de-google-my-life-part-4-of-_-tu-_-dokuwiki-ghost/)
+[Click here for part 5](https://blog.rogs.me/2019/11/27/de-google-my-life-part-5-of-_-tu-_-backups/)
diff --git a/content/posts/degoogle-my-life-part-3.md b/content/posts/degoogle-my-life-part-3.md
new file mode 100644
index 0000000..e670073
--- /dev/null
+++ b/content/posts/degoogle-my-life-part-3.md
@@ -0,0 +1,279 @@
+---
+title: "De-Google my life - Part 3 of ¯\_(ツ)_/¯: Nextcloud & Collabora"
+url: "/2019/03/29/de-google-my-life-part-3-of-_-tu-_-nextcloud-collabora"
+date: 2019-03-28T19:07:00-04:00
+lastmod: 2020-04-25T12:35:53-03:00
+tags : [ "degoogle", "devops" ]
+---
+
+
+Hello everyone! Welcome to the third post of my blog series "De-Google my life". If you haven't read the other ones, you definitely should! ([Part 1](https://blog.rogs.me/2019/03/15/de-google-my-life-part-1-of-_-tu-_-why-how/), [Part 2](https://blog.rogs.me/2019/03/22/de-google-my-life-part-2-of-_-tu-_-servers-and-emails/)). Today we are moving forward with one of the most important apps I'm running on my servers: [Nextcloud](https://nextcloud.com/). A big part of my Google usage was Google Drive (and all its derivative apps). With Nextcloud I was looking to replace:
+
+* Docs
+* Drive
+* Photos
+* Contacts
+* Calendar
+* Notes
+* Tasks
+* More (?)
+
+I also wanted some new features, like connecting to an S3 bucket directly from my server and having a web interface to interact with it.
+
+The first step is to set up the server. I'm not going to explain that again, but if you want to read more about it, I explain it a bit better in the [second post](https://blog.rogs.me/2019/03/22/de-google-my-life-part-2-of-_-tu-_-servers-and-emails/).
+
+# Nextcloud
+
+## Installation
+
+For my Nextcloud installation I went straight to the [official docker documentation](https://github.com/nextcloud/docker) and extracted this docker compose:
+
+ version: '2'
+
+ volumes:
+ nextcloud:
+ db:
+
+ services:
+ db:
+ image: mariadb
+ command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
+ restart: always
+ volumes:
+ - db:/var/lib/mysql
+ environment:
+ - MYSQL_ROOT_PASSWORD=my_super_secure_root_password
+ - MYSQL_PASSWORD=my_super_secure_password
+ - MYSQL_DATABASE=nextcloud
+ - MYSQL_USER=nextcloud
+
+ app:
+ image: nextcloud
+ ports:
+ - 8080:80
+ links:
+ - db
+ volumes:
+ - nextcloud:/var/www/html
+ restart: always
+
+**Some mistakes were made**
+I forgot to mount the volumes, so Docker automatically mounted them in /var/lib/docker/volumes/. This is a small problem I haven't solved yet, because it hasn't caused any serious issues. If someone knows whether this is going to be problematic in the long run, please let me know. I didn't want to fix this just for the posts: I'm writing about my experience, and of course it wasn't perfect.
+
+I created the directory `/opt/nextcloud` to keep my docker-compose file, and finally ran:
+
+ docker-compose pull
+ docker-compose up -d
+
+It was that simple! The app was running on port 8080! But that is not what I wanted: I wanted it on ports 80 and 443. For that I used a reverse proxy with NGINX and Let's Encrypt.
+
+## NGINX configuration
+
+Configuring NGINX is dead simple. Here is my configuration:
+
+`/etc/nginx/sites-available/nextcloud:`
+
+ server {
+ listen 80 default_server;
+ listen [::]:80 default_server;
+ server_name myclouddomain.com;
+ return 301 https://$server_name$request_uri;
+ }
+
+ server {
+ listen 443 ssl;
+ server_name myclouddomain.com;
+ add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
+
+ ssl on;
+ ssl_certificate /etc/letsencrypt/live/myclouddomain.com/fullchain.pem;
+ ssl_certificate_key /etc/letsencrypt/live/myclouddomain.com/privkey.pem;
+
+ location / {
+ proxy_pass http://127.0.0.1:8080;
+ proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto https;
+ proxy_redirect off;
+ proxy_read_timeout 5m;
+ }
+
+ location = /.well-known/carddav {
+ return 301 $scheme://$host/remote.php/dav;
+ }
+ location = /.well-known/caldav {
+ return 301 $scheme://$host/remote.php/dav;
+ }
+ # Set the client_max_body_size to 1000M so NGINX doesn't cut uploads
+ client_max_body_size 1000M;
+ }
+
+Then I created a soft link from the configuration file to the "sites enabled" folder:
+
+ ln -s /etc/nginx/sites-available/nextcloud /etc/nginx/sites-enabled
+
+and that was it!
+
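+Well, almost: NGINX only picks up a new site after a config test and a reload, so don't forget something like:
+
+    sudo nginx -t
+    sudo systemctl reload nginx
+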
+In this configuration you will see that I'm already referencing the SSL certificates even though they don't exist yet. We are going to create them on the next step.
+
+## Let's Encrypt configuration
+
+To generate the SSL certificates, you first need to point your domain/subdomain to your server. Every DNS manager is different, so you will have to figure that out. The command I will use throughout this blog series to create certificates is the following:
+
+    sudo -H certbot certonly --nginx -d mydomain.com
+
+The first time you run Certbot you have to configure some stuff: it will ask for your email and a few questions. Input that information and finish the process.
+
+To enable automatic SSL certificates renovation, create a new cron job (`crontab -e`) with the following information:
+
+ 0 3 * * * certbot renew -q
+
+This will run every morning at 3AM and check whether any of your domains need to be renewed. If they do, it renews them.
+
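+To be sure the renewal will actually work before a deadline ever hits, Certbot ships a harmless rehearsal mode:
+
+    sudo certbot renew --dry-run
+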
+At the end, you should be able to visit [https://myclouddomain.com](https://myclouddomain.com) and be greeted with a nice NextCloud screen:
+
+![Captura-de-pantalla-de-2019-03-28-10-51-04](/Captura-de-pantalla-de-2019-03-28-10-51-04.png)
+<small>Beautiful yet frustrating blue screen</small>
+
+## Nextcloud configuration
+
+In this part I got super stuck. I had everything up and running, but I couldn't get my database to connect. It was SUPER FRUSTRATING. Here is why it was failing:
+
+Since I had called the MariaDB container `db` in my docker-compose file, the database host was not `localhost` but `db`.
+
+Once that was fixed, Nextcloud was 100% ready to be used!
+
+![Captura-de-pantalla-de-2019-03-28-16-19-13](/Captura-de-pantalla-de-2019-03-28-16-19-13.png)
+
+After that I went straight to "Settings/Basic settings" and noticed that my background jobs were set to "Ajax". That's not good, because if I don't open the site, the tasks will never run. I changed it to "Cron" and created a new cron on my server with the following information:
+
+ */15 * * * * /usr/bin/docker exec --user www-data nextcloud_app_1 php cron.php
+
+This runs the Nextcloud cronjob inside the Docker container every 15 minutes.
+
+Then, in "Settings/Overview" I noticed a bunch of errors on the "Security & setup warnings" part. Those were very easy to fix, but since all installations aren't the same I won't go deep into this. [DuckDuckGo](https://duckduckgo.com/) is your friend.
+
+## Extra stuff
+
+The Nextcloud apps store is filled with some interesting applications. The ones I have installed are:
+
+* [Contacts](https://apps.nextcloud.com/apps/contacts)
+* [Calendar](https://apps.nextcloud.com/apps/calendar)
+* [Notes](https://apps.nextcloud.com/apps/notes)
+* [Tasks](https://apps.nextcloud.com/apps/tasks)
+* [Markdown editor](https://apps.nextcloud.com/apps/files_markdown)
+* [PhoneTrack](https://apps.nextcloud.com/apps/phonetrack)
+
+But you can add as many as you want! You can check them out [here](https://apps.nextcloud.com/)
+
+# Collabora
+
+Now that NextCloud was up and running, I needed my "Google Docs" part. Enter Collabora!
+
+## Installation
+
+If you don't know what it is, Collabora is like Google Docs / Sheets / Slides but free and open source. You can check more about the project [here](https://nextcloud.com/collaboraonline/)
+
+This was a very easy installation. I ran it directly with docker:
+
+ docker run -t -d -p 127.0.0.1:9980:9980 -e 'domain=mynextclouddomain.com' --restart always --cap-add MKNOD collabora/code
+
+Created a new NGINX reverse proxy
+
+`/etc/nginx/sites-available/collabora`:
+
+ # Taken from https://icewind.nl/entry/collabora-online/
+ server {
+ listen 443 ssl;
+ server_name office.mydomain.com;
+
+ ssl_certificate /etc/letsencrypt/live/office.mydomain.com/fullchain.pem;
+ ssl_certificate_key /etc/letsencrypt/live/office.mydomain.com/privkey.pem;
+
+ # static files
+ location ^~ /loleaflet {
+ proxy_pass https://localhost:9980;
+ proxy_set_header Host $http_host;
+ }
+
+ # WOPI discovery URL
+ location ^~ /hosting/discovery {
+ proxy_pass https://localhost:9980;
+ proxy_set_header Host $http_host;
+ }
+
+ # main websocket
+ location ~ ^/lool/(.*)/ws$ {
+ proxy_pass https://localhost:9980;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "Upgrade";
+ proxy_set_header Host $http_host;
+ proxy_read_timeout 36000s;
+ }
+
+ # download, presentation and image upload
+ location ~ ^/lool {
+ proxy_pass https://localhost:9980;
+ proxy_set_header Host $http_host;
+ }
+
+ # Admin Console websocket
+ location ^~ /lool/adminws {
+ proxy_pass https://localhost:9980;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "Upgrade";
+ proxy_set_header Host $http_host;
+ proxy_read_timeout 36000s;
+ }
+ }
+
+Created the SSL certificate for the Collabora installation:
+
+    sudo -H certbot certonly --nginx -d office.mydomain.com
+
+And finally I created a soft link from the configuration file to the "sites enabled" folder:
+
+ ln -s /etc/nginx/sites-available/collabora /etc/nginx/sites-enabled
+
+Pretty easy stuff.
+
+## Nextcloud configuration
+
+In Nextcloud I installed "Collabora" from the "Apps" menu. On "Settings/Collabora Online" I added my Collabora URL, applied it and voila!
+
+![Captura-de-pantalla-de-2019-03-28-14-53-08](/Captura-de-pantalla-de-2019-03-28-14-53-08.png)
+<small>Sweet Libre Office feel</small>
+
+# S3 bucket
+
+One of my biggest motivations for this project was a cheap, long-term storage solution for files I don't interact with every day: music, movies, videos, ISOs, etc. I used to have a bunch of HDDs, but because of all the power outages in Venezuela almost all of them have died, and new ones are very expensive here, not to mention all the issues with importing them from the US.
+
+I went looking for something S3-like, but as cheap as possible.
+
+In my investigations I found [Wasabi](https://wasabi.com/). Not only was it S3-like, it was **dirt cheap**: $6 per month for 1TB of data. 1TB OF DATA FOR $6!! I could not believe it!
+
+I created an account and installed the "external storage support" plugin in Nextcloud. After it was installed, I went to "Settings/External Storages" and filled up the information:
+
+![Captura-de-pantalla-de-2019-03-28-15-32-50](/Captura-de-pantalla-de-2019-03-28-15-32-50.png)
+![Captura-de-pantalla-de-2019-03-28-15-34-18](/Captura-de-pantalla-de-2019-03-28-15-34-18.png)
+<small>My bucket name is "long-term-storage" and my local folder name is "Long term storage". You will need to generate API keys for the connection.</small>
+
+I applied the changes and that was it! I could not believe how simple it was, so I uploaded a file just to test:
+
+![Captura-de-pantalla-de-2019-03-28-15-38-38](/Captura-de-pantalla-de-2019-03-28-15-38-38.png)
+![Captura-de-pantalla-de-2019-03-28-15-39-12](/Captura-de-pantalla-de-2019-03-28-15-39-12.png)
+<small>[Classic _noice_ meme](https://knowyourmeme.com/memes/noice) uploaded in Nextcloud, ready to download in Wasabi. _tongue sound_ **Nice**</small>
+
+# Conclusion
+
+The project is looking good! In one sitting I have replaced almost every Google product and even added a humongous amount of storage (virtually infinite!) to the project. In the next delivery I'll add new and fun stuff I've always wanted to host myself, like a wiki, a [blog](https://blog.rogs.me) (this very same blog!) and many more!
+
+Stay tuned.
+
+[Click here for part 4](https://blog.rogs.me/2019/11/20/de-google-my-life-part-4-of-_-tu-_-dokuwiki-ghost/)
+[Click here for part 5](https://blog.rogs.me/2019/11/27/de-google-my-life-part-5-of-_-tu-_-backups/)
+
diff --git a/content/posts/degoogle-my-life-part-4.md b/content/posts/degoogle-my-life-part-4.md
new file mode 100644
index 0000000..6aeff6f
--- /dev/null
+++ b/content/posts/degoogle-my-life-part-4.md
@@ -0,0 +1,134 @@
+---
+title: "De-Google my life - Part 4 of ¯\_(ツ)_/¯: Dokuwiki & Ghost"
+url: "/2019/11/20/de-google-my-life-part-4-of-_-tu-_-dokuwiki-ghost"
+date: 2019-11-20T19:29:00-03:00
+lastmod: 2020-04-25T12:35:53-03:00
+tags : [ "degoogle", "devops" ]
+---
+
+
+Hello everyone! Welcome to the fourth post of my blog series "De-Google my life". If you haven't read the other ones, you definitely should! ([Part 1](https://blog.rogs.me/2019/03/15/de-google-my-life-part-1-of-_-tu-_-why-how/), [Part 2](https://blog.rogs.me/2019/03/22/de-google-my-life-part-2-of-_-tu-_-servers-and-emails/), [Part 3](https://blog.rogs.me/2019/03/29/de-google-my-life-part-3-of-_-tu-_-nextcloud-collabora/)).
+
+First of all, sorry for the long wait. I had a couple of IRL things to take care of (we will discuss those in further posts, I promise ( ͡° ͜ʖ ͡°)), but now I have plenty of time to work on more blog posts and other projects. Thanks for sticking around, and if you are new, welcome to this journey!
+
+In this post we get to the fun part: what am I going to do to improve my online presence? I began with the simplest answer: a blog (this very same blog you are reading right now lol).
+
+# Ghost
+
+[Ghost](https://ghost.org/) is an open source, headless blogging platform made in NodeJS. The community is quite large and, most importantly, it fit all my requirements (open source and runs in a Docker container).
+
+For the installation, I kept it simple. I went to the [DockerHub page for Ghost](https://hub.docker.com/_/ghost/) and used their base `docker-compose` config for myself. This is what I came up with:
+
+ version: '3.1'
+
+ services:
+
+ ghost:
+ image: ghost:1-alpine
+ restart: always
+ ports:
+ - 7000:2368
+ environment:
+ database__client: mysql
+ database__connection__host: db
+ database__connection__user: root
+ database__connection__password: my_super_secure_mysql_password
+ database__connection__database: ghost
+ url: https://blog.rogs.me
+
+ db:
+ image: mysql:5.7
+ restart: always
+ environment:
+ MYSQL_ROOT_PASSWORD: my_super_secure_mysql_password
+
+Simple enough: the base Ghost image and a MySQL image. Readable and functional.
+
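+As with the other services, the stack comes up with the usual compose commands:
+
+    docker-compose pull
+    docker-compose up -d
+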
+For the NGINX configuration I used a simple proxy:
+
+ server {
+ listen 80;
+ listen [::]:80;
+ server_name blog.rogs.me;
+ add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
+
+ location / {
+ proxy_pass http://127.0.0.1:7000;
+ proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto https;
+ proxy_redirect off;
+ proxy_read_timeout 5m;
+ }
+ client_max_body_size 10M;
+ }
+
+What does this mean? This config is just "Hey NGINX! proxy port 7000 through port 80 please, thanks"
+
+And that was it. So simple there's not much to say. Just like the title of the series, `¯\_(ツ)_/¯`
+
+![Captura-de-pantalla-de-2019-11-16-19-52-30](/Captura-de-pantalla-de-2019-11-16-19-52-30.png)
+
+After that, it was just configuration and setup. I modified [this theme](https://github.com/kathyqian/crisp) to better match my website's colors and theme. I think it came out pretty nice :)
+
+# Dokuwiki
+
+I have always admired tech people who have their own wikis. A wiki is a place where you can find out more about them in a fast and easy way: what they use, what their configurations are, tips, cheatsheets, scripts, anything! I don't consider myself someone worthy of a wiki, but I wanted one just for the funsies.
+
+While doing research, I found [Dokuwiki](https://www.dokuwiki.org/dokuwiki), which is not only open source but also uses no database! Everything is kept in files that make up your wiki. P R E T T Y N I C E.
+
+For this one, DockerHub had no official Dokuwiki image, but I found a very good one from the user [mprasil](https://hub.docker.com/r/mprasil/dokuwiki). I used his recommended configuration (no `docker-compose` needed, since it is a single Docker image):
+
+ docker run -d -p 8000:80 --name my_wiki \
+ -v /data/docker/dokuwiki/data:/dokuwiki/data \
+ -v /data/docker/dokuwiki/conf:/dokuwiki/conf \
+ -v /data/docker/dokuwiki/lib/plugins:/dokuwiki/lib/plugins \
+ -v /data/docker/dokuwiki/lib/tpl:/dokuwiki/lib/tpl \
+ -v /data/docker/dokuwiki/logs:/var/log \
+ mprasil/dokuwiki
+
+**Some mistakes were made, again**
+I was following instructions blindly (I'm dumb) and mounted the Dokuwiki files under the /data/docker directory, which is not what I wanted. In the process of working on this project, I have learned one big thing:
+
+_Always. check. installation. folders and/or mounting points_
+
+Just like the last one, I didn't want to fix this just for the posts: I'm writing about my experience, and of course it wasn't perfect.
+
+Let's continue. Once the docker container was running, I configured NGINX with another simple proxy redirect:
+
+ server {
+ listen 80;
+ listen [::]:80;
+ server_name wiki.rogs.me;
+ add_header Strict-Transport-Security "max-age=15552000; includeSubDomains" always;
+
+ location / {
+ proxy_pass http://127.0.0.1:8000;
+ proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
+ proxy_set_header Host $host;
+ proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+ proxy_set_header X-Forwarded-Proto https;
+ proxy_redirect off;
+ proxy_read_timeout 5m;
+ }
+ client_max_body_size 10M;
+ }
+
+Just as with the other one: "Hey NGINX! Forward port 8000 to port 80 please :) Thanks!"
+
+![Captura-de-pantalla-de-2019-11-16-20-15-35](/Captura-de-pantalla-de-2019-11-16-20-15-35.png)
+<small>Simple dokuwiki screen, nothing too fancy</small>
+
+Again, just like the other one, configuration and setup and voila! Everything was up and running.
+
+# Conclusion
+
+I was getting the hang of this "Docker" flow. There were mistakes, yes, but nothing critical enough to hurt me in the long run. Everything was running smoothly, and with just a few commands I had everything running and proxied. Just what I wanted.
+
+Stay tuned for the next delivery, where I'm going to talk about GPG encrypted backups to an external Wasabi "S3 like" bucket. I promise this one won't take 8 months.
+
+[Click here for part 5](https://blog.rogs.me/2019/11/27/de-google-my-life-part-5-of-_-tu-_-backups/)
+
diff --git a/content/posts/degoogle-my-life-part-5.md b/content/posts/degoogle-my-life-part-5.md
new file mode 100644
index 0000000..984f6e9
--- /dev/null
+++ b/content/posts/degoogle-my-life-part-5.md
@@ -0,0 +1,188 @@
+---
+title: "De-Google my life - Part 5 of ¯\_(ツ)_/¯: Backups"
+url: "/2019/11/27/de-google-my-life-part-5-of-_-tu-_-backups"
+date: 2019-11-27T19:30:00-04:00
+lastmod: 2020-04-25T12:35:53-03:00
+tags : [ "degoogle", "devops" ]
+---
+
+Hello everyone! Welcome to the fifth post of my blog series "De-Google my life". If you haven't read the other ones you definitely should! ([Part 1](https://blog.rogs.me/2019/03/15/de-google-my-life-part-1-of-_-tu-_-why-how/), [Part 2](https://blog.rogs.me/2019/03/22/de-google-my-life-part-2-of-_-tu-_-servers-and-emails/), [Part 3](https://blog.rogs.me/2019/03/29/de-google-my-life-part-3-of-_-tu-_-nextcloud-collabora/), [Part 4](https://blog.rogs.me/2019/11/20/de-google-my-life-part-4-of-_-tu-_-dokuwiki-ghost/)).
+
+At this point, our server is up and running and everything is working 100% fine, but we can't always trust that. We need a way to securely back up everything to a place from which we can restore quickly if needed.
+
+# Backup location
+
+The backup location was an easy choice: I already had a Wasabi subscription, so why not use it to store my backups as well?
+
+I created a new bucket on Wasabi, just for my backups and that was it.
+
+![Captura-de-pantalla-de-2019-11-24-18-13-55](/Captura-de-pantalla-de-2019-11-24-18-13-55.png)
+<small>There is my bucket, waiting for my _sweet sweet_ backups</small>
+
+# Security
+
+Just uploading everything to Wasabi wasn't secure enough for me, so I'm encrypting my tar files with GPG.
+
+## What is GPG?
+
+From their website:
+
+> GnuPG ([GNU Privacy Guard](https://gnupg.org/)) is a complete and free implementation of the OpenPGP standard as defined by RFC4880 (also known as PGP). GnuPG allows you to encrypt and sign your data and communications; it features a versatile key management system, along with access modules for all kinds of public key directories. GnuPG, also known as GPG, is a command-line tool with features for easy integration with other applications. A wealth of frontend applications and libraries are available. GnuPG also provides support for S/MIME and Secure Shell (ssh).
+
+So, by using GPG I can encrypt my files before uploading them to Wasabi; if for any reason there is a leak, my files will still be protected by my GPG password.
+
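+The whole scheme rests on GPG's symmetric mode: `-c` encrypts with a passphrase and `-d` decrypts. A tiny round trip with generic file names (note that some GnuPG 2.x setups also want `--pinentry-mode loopback` when using `--passphrase-file`):
+
+    gpg --passphrase-file /path/to/passphrase --batch -c backup.tar.gz    # writes backup.tar.gz.gpg
+    gpg --passphrase-file /path/to/passphrase --batch -d backup.tar.gz.gpg > backup.tar.gz
+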
+# Script
+
+## Nextcloud
+
+ #!/bin/sh
+
+ # Nextcloud
+ echo "======================================"
+ echo "Backing up Nextcloud"
+ cd /var/lib/docker/volumes/nextcloud_nextcloud/_data/data/roger
+
+ NEXTCLOUD_FILE_NAME=$(date +"%Y_%m_%d")_nextcloud_backup
+ echo $NEXTCLOUD_FILE_NAME
+
+ echo "Compressing"
+ tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/
+
+ echo "Encrypting"
+ gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz
+
+ echo "Uploading"
+ aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
+
+ echo "Deleting"
+ rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
+
+### A breakdown
+
+ #!/bin/sh
+
+This specifies that this is a shell script. It is the standard shebang for this kind of script.
+
+ # Nextcloud
+ echo "======================================"
+ echo "Backing up Nextcloud"
+ cd /var/lib/docker/volumes/nextcloud_nextcloud/_data/data/roger
+
+ NEXTCLOUD_FILE_NAME=$(date +"%Y_%m_%d")_nextcloud_backup
+ echo $NEXTCLOUD_FILE_NAME
+
+Here, I `cd` into where my Nextcloud files are located. In [De-Google my life part 3](https://blog.rogs.me/2019/03/29/de-google-my-life-part-3-of-_-tu-_-nextcloud-collabora/) I talk about my mistake of not setting my volumes correctly; that's why I have to go to this location. I also create a new filename for my backup file using the current date.
+
+ echo "Compressing"
+ tar czf /root/$NEXTCLOUD_FILE_NAME.tar.gz files/
+
+ echo "Encrypting"
+ gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$NEXTCLOUD_FILE_NAME.tar.gz
+
+Then I compress the files into a `tar.gz` file. Afterwards comes the encryption: I have a file located somewhere on my server with my GPG password, and it is used to encrypt my backup using the `gpg` command. The command returns a "filename.tar.gz.gpg" file, which is then uploaded to Wasabi.
+
+ echo "Uploading"
+ aws s3 cp /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg s3://backups-cloud/Nextcloud/$NEXTCLOUD_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
+
+ echo "Deleting"
+ rm /root/$NEXTCLOUD_FILE_NAME.tar.gz /root/$NEXTCLOUD_FILE_NAME.tar.gz.gpg
+
+Finally, I upload everything to Wasabi using `awscli` and delete the local files, so I keep my filesystem clean.
+
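+Restoring is the same pipeline in reverse: download, decrypt, untar. A sketch with a Nextcloud backup, assuming `awscli` is already configured with the Wasabi keys (the exact file name depends on the date):
+
+    aws s3 cp s3://backups-cloud/Nextcloud/2019_11_24_nextcloud_backup.tar.gz.gpg . --endpoint-url=https://s3.wasabisys.com
+    gpg --passphrase-file the/location/of/my/passphrase --batch -d 2019_11_24_nextcloud_backup.tar.gz.gpg > 2019_11_24_nextcloud_backup.tar.gz
+    tar xzf 2019_11_24_nextcloud_backup.tar.gz
+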
+## Is that it?
+
+This is the basic setup for backups, and it is repeated for all my apps with a few variations.
+
+## Dokuwiki
+
+ # Dokuwiki
+ echo "======================================"
+ echo "Backing up Dokuwiki"
+ cd /data/docker
+
+ DOKUWIKI_FILE_NAME=$(date +"%Y_%m_%d")_dokuwiki_backup
+
+ echo "Compressing"
+ tar czf /root/$DOKUWIKI_FILE_NAME.tar.gz dokuwiki/
+
+ echo "Encrypting"
+ gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$DOKUWIKI_FILE_NAME.tar.gz
+
+ echo "Uploading"
+ aws s3 cp /root/$DOKUWIKI_FILE_NAME.tar.gz.gpg s3://backups-cloud/Dokuwiki/$DOKUWIKI_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
+
+ echo "Deleting"
+ rm /root/$DOKUWIKI_FILE_NAME.tar.gz /root/$DOKUWIKI_FILE_NAME.tar.gz.gpg
+
+Pretty much the same as the last one, so here is a quick explanation:
+
+* `cd` to a folder
+* tar it
+* encrypt it with gpg
+* upload it to a Wasabi bucket
+* delete the local files
+
+## Ghost
+
+ # Ghost
+ echo "======================================"
+ echo "Backing up Ghost"
+ cd /root
+
+ GHOST_FILE_NAME=$(date +"%Y_%m_%d")_ghost_backup
+
+ docker container cp ghost_ghost_1:/var/lib/ghost/ $GHOST_FILE_NAME
+ docker exec ghost_db_1 /usr/bin/mysqldump -u root --password=my-secure-root-password ghost > /root/$GHOST_FILE_NAME/ghost.sql
+
+ echo "Compressing"
+ tar czf /root/$GHOST_FILE_NAME.tar.gz $GHOST_FILE_NAME/
+
+ echo "Encrypting"
+ gpg --passphrase-file the/location/of/my/passphrase --batch -c /root/$GHOST_FILE_NAME.tar.gz
+
+ echo "Uploading"
+ aws s3 cp /root/$GHOST_FILE_NAME.tar.gz.gpg s3://backups-cloud/Ghost/$GHOST_FILE_NAME.tar.gz.gpg --endpoint-url=https://s3.wasabisys.com
+
+ echo "Deleting"
+ rm -r /root/$GHOST_FILE_NAME.tar.gz $GHOST_FILE_NAME /root/$GHOST_FILE_NAME.tar.gz.gpg
+
+## A few differences!
+
+ docker container cp ghost_ghost_1:/var/lib/ghost/ $GHOST_FILE_NAME
+ docker exec ghost_db_1 /usr/bin/mysqldump -u root --password=my-secure-root-password ghost > /root/$GHOST_FILE_NAME/ghost.sql
+
+Something new! Since I didn't mount any volumes for Ghost, I had to copy the files directly from the Docker container and then get a DB dump for safekeeping. Nothing too groundbreaking, but worth explaining.
+
+# All done! How do I run it automatically?
+
+Almost done! I just need to run everything automatically, so I can set it and forget it. Just like before, whenever I want to run something programmatically, I use a cronjob:
+
+ 0 0 * * 1 sh /opt/backup.sh
+
+This means:
+_Please, can you run this script every Monday at 0:00? Thanks, server_ :*
+
+# Looking good! Does it work?
+
+Look for yourself :)
+
+![Captura-de-pantalla-de-2019-11-24-19-26-45](/Captura-de-pantalla-de-2019-11-24-19-26-45.png)
+<small>Nextcloud</small>
+
+![Captura-de-pantalla-de-2019-11-24-19-28-09](/Captura-de-pantalla-de-2019-11-24-19-28-09.png)
+<small>Dokuwiki</small>
+
+![Captura-de-pantalla-de-2019-11-24-19-29-04](/Captura-de-pantalla-de-2019-11-24-19-29-04.png)
+<small>Ghost</small>
+
+# Where do we go from here?
+
+I don't know. I only know this project is not over. I have other apps running (Wallabag, Matomo and Commento), but I don't find them as interesting to write about (of course, if you still want to read about them, I will gladly do it).
+
+I hope you all learned from and enjoyed this experience with me because I sure have! I've had amazing feedback from the community and that's what always kept this project on my mind.
+
+A big thank you to [/r/selfhosted](https://reddit.com/r/selfhosted) and more recently [/r/degoogle](https://www.reddit.com/r/degoogle); I learned A LOT from those communities. If you liked this series, you will definitely like those subreddits.
+
+I'm looking to turn all this knowledge into educational talks soon, so if you are in the Montevideo area, stay tuned for a _possible_ meetup! (I know this is a long shot in a country of around 4 million people, but it's worth trying hehe).
+
+Again, thank you for joining me on this journey and stay tuned! There is more content coming :)
diff --git a/content/posts/how-to-search-in-a-huge-table-on-django-admin.md b/content/posts/how-to-search-in-a-huge-table-on-django-admin.md
new file mode 100644
index 0000000..3519431
--- /dev/null
+++ b/content/posts/how-to-search-in-a-huge-table-on-django-admin.md
@@ -0,0 +1,147 @@
+---
+title: "How to search in a huge table on Django admin"
+url: "/2020/02/17/how-to-search-in-a-huge-table-on-django-admin"
+date: 2020-02-17T17:08:00-04:00
+lastmod: 2020-04-25T12:35:53-03:00
+tags : [ "python", "django", "programming" ]
+---
+
+
+Hello everyone!
+
+We all know that the Django admin is a super cool tool for Django. You can check your models, and add/edit/delete records from the tables. If you are familiar with Django, I'm sure you already know about it.
+
+I was given a task: Our client wanted to search in a table by one field. It seems easy enough, right? Well, the tricky part is that the table has **523.803.417 records**.
+
+Wow. **523.803.417 records**.
+
+At least the model was not that complex:
+
+On `models.py`:
+
+ class HugeTable(models.Model):
+ """Huge table information"""
+ search_field = models.CharField(max_length=10, db_index=True, unique=True)
+ is_valid = models.BooleanField(default=True)
+
+ def __str__(self):
+ return self.search_field
+
+So for Django admin, it should be a breeze, right? **WRONG.**
+
+## The process
+
+First, I just added the search field on the admin.py:
+
+On `admin.py`:
+
+ class HugeTableAdmin(admin.ModelAdmin):
+ search_fields = ('search_field', )
+
+ admin.site.register(HugeTable, HugeTableAdmin)
+
+And it worked! I had a functioning search field on my admin.
+![2020-02-14-154646](/2020-02-14-154646.png)
+
+Only one problem: It took **3mins+** to load the page and **5mins+** to search. But at least it was working, right?
+
+## WTF?
+
+First, let's split the issues:
+
+1. Why was it taking 3mins+ just to load the page?
+2. Why was it taking 5mins+ to search, if the search field was indexed?
+
+I started tackling the first one and found the cause quite easily: Django was only getting 100 records at a time, but **it had to calculate the total count for the paginator and the "see more" button on the search bar**.
+![2020-02-14-153605](/2020-02-14-153605.png)
+<small>So near, yet so far</small>
+
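+In other words, every admin page load was paying for two full-table counts. You can feel the pain with a one-liner like this (`myapp` is a placeholder for the real app path; `count()` issues a `SELECT COUNT(*)` over the 500M+ rows):
+
+    python manage.py shell -c "from myapp.models import HugeTable; print(HugeTable.objects.count())"
+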
+## Improving the page load
+
+A quick look at the Django docs told me how to deactivate the "see more" query:
+
+[ModelAdmin.show_full_result_count](https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#django.contrib.admin.ModelAdmin.show_full_result_count)
+
+> Set show_full_result_count to control whether the full count of objects should be displayed on a filtered admin page (e.g. 99 results (103 total)). If this option is set to False, a text like 99 results (Show all) is displayed instead.
+
+On `admin.py`:
+
+ class HugeTableAdmin(admin.ModelAdmin):
+ search_fields = ('search_field', )
+ show_full_result_count = False
+
+ admin.site.register(HugeTable, HugeTableAdmin)
+
+That fixed one problem, but what about the other? It seemed I needed to build my own paginator.
+
+Thankfully, I found an _awesome_ post by Haki Benita called ["Optimizing the Django Admin Paginator"](https://hakibenita.com/optimizing-the-django-admin-paginator) that explained exactly that. Since I didn't need to know the records count, I went with the "Dumb" approach:
+
+On `admin.py`:
+
+    from django.core.paginator import Paginator
+    from django.utils.functional import cached_property
+
+ class DumbPaginator(Paginator):
+ """
+ Paginator that does not count the rows in the table.
+ """
+ @cached_property
+ def count(self):
+ return 9999999999
+
+ class HugeTableAdmin(admin.ModelAdmin):
+ search_fields = ('search_field', )
+ show_full_result_count = False
+ paginator = DumbPaginator
+
+ admin.site.register(HugeTable, HugeTableAdmin)
+
+And it worked! The page was loading blazingly fast :) But the search was still **ultra slow**. So let's fix that.
+![2020-02-14-153840](/2020-02-14-153840.png)
+
+## Improving the search
+
+I checked A LOT of options. I almost went with [Haystack](https://haystacksearch.org/), but it seemed a bit overkill for what I needed. I finally found this super cool tool: [djangoql](https://github.com/ivelum/djangoql/). It allowed me to search the table using _SQL-like_ operations, so I could search by `search_field` and make use of the index. So I installed it:
+
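+The package itself comes straight from PyPI:
+
+    pip install djangoql
+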
+On `settings.py`:
+
+ INSTALLED_APPS = [
+ ...
+ 'djangoql',
+ ...
+ ]
+
+On `admin.py`:
+
+ from django.core.paginator import Paginator
+ from django.utils.functional import cached_property
+ from djangoql.admin import DjangoQLSearchMixin
+
+ class DumbPaginator(Paginator):
+ """
+ Paginator that does not count the rows in the table.
+ """
+ @cached_property
+ def count(self):
+ return 9999999999
+
+ class HugeTableAdmin(DjangoQLSearchMixin, admin.ModelAdmin):
+ show_full_result_count = False
+ paginator = DumbPaginator
+
+ admin.site.register(HugeTable, HugeTableAdmin)
+
+And it worked! By performing the query:
+
+ search_field = "my search query"
+
+I get my results in around 1 second.
+
+![2020-02-14-154418](/2020-02-14-154418.png)
+
+## Is it done?
+
+Yes! Now my client can search by `search_field` on a table of 523.803.417 records, very easily and very quickly.
+
+I'm planning to post more Python/Django things I'm learning by working with this client, so you might want to stay tuned :)
diff --git a/content/posts/my-mom-was-always-right.md b/content/posts/my-mom-was-always-right.md
new file mode 100644
index 0000000..d97487f
--- /dev/null
+++ b/content/posts/my-mom-was-always-right.md
@@ -0,0 +1,56 @@
+---
+title: "My mom was always right | Rant on social media"
+date: 2020-04-25T12:35:53-03:00
+lastmod: 2020-04-25T12:35:53-03:00
+tags : [ "socialmedia", "rant" ]
+---
+
+My mom always hated social media. My brother and I always made fun of her because she was always late to all the news. Her main argument against social media was "why would I want everyone to know what I'm doing? And why should I care what they are doing?". I didn't understand it at the time, but now I do.
+
+I remember social media starting to ramp up when I was 13 years old. I created a MySpace account to go with the flow. I won't lie, I liked MySpace a lot: creating a website that "defined me", sharing my music, posting funny pictures and checking my friends' profiles. It all seemed pretty cool and innovative.
+
+Then came Facebook, where you couldn't create your own site or share music players like on MySpace, but you had a wall, and your friends could leave messages on it! How cool was that? You could share pictures, thoughts and opinions, and your friends didn't have to go to your profile to check them out: Facebook had a feed with all your friends' posts, so there was no need to visit a profile just to see what someone was up to. It sounded very cool at the moment!
+
+After that, Twitter. Microblogging, 140 characters max (now it's 280, double what it was before), interactions with people from all around the internet; you didn't need to be friends with someone to send them a tweet or a private message. Discussions, threads, memes...
+
+Instagram. Sharing pictures, stories, following my friends to see their travels, following superstars to see their perfect lives, ads, paid content.
+
+Stop.
+
+Just stop.
+
+Social media has become too overwhelming in the last couple of years. People are lonelier and more depressed because of social media ([link](https://guilfordjournals.com/doi/10.1521/jscp.2018.37.10.751)). The [FOMO (Fear of missing out)](https://en.wikipedia.org/wiki/Fear_of_missing_out) is at an all-time high.
+
+Now that I know all of this, **why would I want to be part of something that could give me anxiety, depression and self-esteem issues?** That is the question I asked myself around 6 months ago. I consider myself an exaggerated person, so I went all in: the goal was to stop using **all** social media for 1 month and see the results. So here are my thoughts on the experiment.
+
+## I feel more relaxed
+
+I don't have the need to open my phone when I'm at a bus stop or just doing nothing. I don't care what my friends are posting, I don't care if an influencer bought 'x' thing or traveled to 'y' place. Before leaving social media, those things had little impact on me and my daily life, so why should I care?
+
+## I have more free time, or time to do more productive stuff
+
+During the first week I realized how much time I had been wasting on social media: between 3 and 4 hours a day. Now I have that time for myself: I have built a server for my apartment, improved my programming, updated my website, updated and improved my work PC, and many other things.
+
+## I can appreciate things a lot more
+
+This decision came at the same time I had to migrate from Venezuela to Uruguay, so, being in a new country, I wanted to visit a lot of new places. I went to museums, parks, beaches (in winter, a bit dumb), monuments and many other tourist attractions. It is funny how I was one of the few who enjoyed the moment instead of being neck-deep in their phone. I was free to enjoy the new city I live in.
+
+## My "reach into my pocket for my phone" tic stopped
+
+I wasn't checking my phone as much as before. I could meet and talk to people without my phone being on the table, and I also realized how rude it is to be ignored because everyone at the table is checking Instagram on their phones.
+
+## But all of this wasn't always pretty
+
+I had to make a lot of changes because I depended on social media for many other things:
+
+* I installed a feed aggregator on my PC and added sources for all the news I want to follow (and also memes lol). I try to keep it to as few sources as possible: when I see news repeating, I delete the least interesting source. The main difference is that **feed aggregators have endings.** I check it once a day and never spend more than 10 mins on it.
+* My girlfriend now has to handle all the "Instagram business". Being foreigners in a strange land, we sometimes need supplies from home. We have found a few suppliers by word of mouth, but we had to resort back to Instagram and check with groups of local Venezuelans.
+* I have missed some family pictures, but that is easily fixable. My new cousin was born a month ago, and since I don't use any type of social media, I asked my uncle to send me some pictures and I now have a bunch of pictures of the baby, my other cousins, and more family members. Now they even send me pictures without me asking! The conversations have become more intimate than before, where I would just swipe on a feed and "double-tap" to like.
+
+## Conclusions?
+
+My mom was right, as always. The information overload was fun at first, but then it became _overwhelming_. Social media has entered our society as a spy and made us dependent on it, and that's bad. We need to get rid of it.
+
+This experiment started in November 2019, and I can happily say that I left social media for 1 month and never went back. I want to close my Facebook / Instagram accounts (I need to back up my content first), and I'm only keeping Twitter / LinkedIn because I sometimes use them for work.
+
+I now recommend everyone I know to shut down everything for a while and see how it feels. They might find out that there is a big world out there if they just move their head up and away from their phones.