feat: migrated blogs

Signed-off-by: Ameya Shenoy <shenoy.ameya@gmail.com>
This commit is contained in:
Ameya Shenoy 2025-06-23 17:23:55 +05:30
parent e74f75ae4b
commit aaf98406a2
17 changed files with 2615 additions and 32 deletions

File diff suppressed because it is too large

@@ -0,0 +1,14 @@
import MarkdownBlogComponent from "@/components/MarkdownBlogComponent";
import path from "path";
import { fileURLToPath } from "url";
export default async function BlogPage() {
return (
<MarkdownBlogComponent
blogFilePath={path.join(
path.dirname(fileURLToPath(import.meta.url)),
"content.md",
)}
/>
);
}

@@ -0,0 +1,81 @@
# CoWIN - Exploring the API 💉
<!-- tags: ["cowin","india","vaccine","covid"] -->
> Just to be fair this has not been tested. This is **not** the way I booked my slot for the vaccination.
**TL;DR**: You can probably automate vaccination slot booking
It is common knowledge that CoWIN has opened up its APIs. You can find more details about version 2 of their APIs on the [API Setu website](https://apisetu.gov.in/public/marketplace/api/cowin/cowin-protected-v2) (I'm yet to meet someone who gets their APIs right in v1 😛).
Most of the write-ups I found online were from people using the API to get notified when new slots open up. Some have already built notification services around it, like [getjab](https://getjab.in/) and [VaccinateMe](https://www.vaccinateme.in), along with a few Telegram groups popping up to serve the same purpose. Even PayTM has launched a Vaccine Slot Finder tool.
But all these tools were missing a critical component: actually booking the slot. Due to the vaccine shortage, these slots get filled before anyone has time to react to the notifications. I wanted to go one step further and automate the booking process as soon as a slot opens up.
The APIs are divided into two parts: [Public APIs](https://apisetu.gov.in/public/marketplace/api/cowin) and [Protected APIs](https://apisetu.gov.in/public/marketplace/api/cowin/cowin-protected-v2). The most important are the **Appointment Availability APIs**, which are essentially what power the notifications feature. They have both a public and a private endpoint. The public endpoint does not need any form of auth; however, it may return relatively old data (up to 30 minutes old) since responses are served from a cache. The private endpoint requires auth (although I've hit it numerous times without auth, and it has returned the available slots successfully).
But for booking a slot using [/v2/appointment/schedule](https://apisetu.gov.in/public/marketplace/api/cowin/cowin-protected-v2#/Vaccination%20Appointment%20APIs/schedule) I believe auth is critical.
A POST request needs to be made to the above-mentioned endpoint with the following data:
```python
data = {
    "center_id": center_id,
    "session_id": session_id,
    "beneficiaries": [beneficiary],
    "slot": slot,
    "dose": 1
}
```
The **center_id**, **session_id** and **slot** parameters are obtained from the response of the Appointment Availability APIs.
The **dose** parameter is either **1** or **2** based on which vaccination shot you're going in for.
And the **beneficiary** is obtained from decoding your JWT Access Token.
While testing this out I faced numerous **403 Forbidden** responses; the likely reasons are mentioned at the end of the post. A **403** potentially indicates a ban, and since I didn't have enough data to infer the cause, I chose to spoof the headers on my request so as to make it appear to be coming from a browser. You can find the headers used in my request below.
```python
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:87.0) Gecko/20100101 Firefox/87.0',
    'Accept': 'application/json, text/plain, */*',
    'Accept-Language': 'en-US,en;q=0.5',
    'Origin': 'https://selfregistration.cowin.gov.in',
    'Authorization': f'Bearer {access_token}',
    'DNT': '1',
    'Connection': 'keep-alive',
    'Referer': 'https://selfregistration.cowin.gov.in/',
    'Sec-GPC': '1',
    'TE': 'Trailers',
}
```
The **access_token** mentioned above is used to authenticate your request. You may read more about JWTs at [jwt.io](https://jwt.io/); they also have an in-browser JWT decoder on their homepage. To get your personal JWT access token you'll have to log in to the [self registration cowin portal](https://selfregistration.cowin.gov.in/) (use a PC). Once logged in, check the Session Storage of your browser (told you to use a PC) and your JWT access token will be present in the **userToken** variable. You may decode it on [jwt.io](https://jwt.io) to fetch more details.
```json
{
    "user_name": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "user_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "user_type": "BENEFICIARY",
    "mobile_number": xxxxxxxxxx,
    "beneficiary_reference_id": xxxxxxxxxxxxxx,
    "ua": "Mozilla/5.0 (X11; Linux x86_64; rv:87.0) Gecko/20100101 Firefox/87.0",
    "date_modified": "2021-05-05T14:32:15.194Z",
    "iat": xxxxxxxxxx,
    "exp": xxxxxxxxxx
}
```
For obvious reasons I've censored the sensitive information. You may use this decoded JSON to get your beneficiary reference ID, since it is a required argument in the POST request. And the entire encoded userToken can be used as-is in the headers for auth.
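To make the two halves concrete, here's a rough, untested sketch (in the spirit of the rest of this post) of decoding the userToken and assembling the POST body. The helper names are mine, not part of the CoWIN API:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the claims section of a JWT (no signature verification)."""
    payload = token.split(".")[1]           # JWT = header.payload.signature
    payload += "=" * (-len(payload) % 4)    # restore the stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

def build_booking_payload(claims: dict, center_id: int,
                          session_id: str, slot: str, dose: int = 1) -> dict:
    """Assemble the POST body for /v2/appointment/schedule."""
    return {
        "center_id": center_id,
        "session_id": session_id,
        "beneficiaries": [claims["beneficiary_reference_id"]],
        "slot": slot,
        "dose": dose,
    }
```

You'd then send the result with `requests.post(url, json=data, headers=headers)`, using the spoofed headers above.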
I've added a simple Python script to automate the entire process, from finding a slot to booking an appointment, in a [GitHub Gist](https://gist.github.com/codingCoffee/9ef47b80054291a1e236607339efc388). The code could certainly use a lot of cleanup; it is just intended to be a proof of concept. Feel free to use / modify it as required.
## Personal Observations and Speculative Notes
- Frequently hitting the API will get you banned. I'm not sure if this is purely an IP-level ban or a ban on unauthenticated requests from your IP. I've had a server IP of mine banned; even authenticated requests from it get 403 responses. This could also be a location (country) based ban.
- User tokens are valid for around 30 mins. After that you may need to login again via OTP and update your access token in the script.
- The CoWIN API website claims that one can make up to 100 requests in 5 minutes. However, I believe the threshold is much lower, maybe something like 10 requests per minute. Again, more data is needed to infer this; I didn't want to risk getting my home IP banned, so I didn't experiment further.
- The Vaccination Appointment APIs accessible via the GET HTTP method behave the same with or without auth. I'm not sure if this is intended (since the data is openly accessible anyway) or just a misconfiguration. The POST methods, however, require auth, else they return a **401 Unauthorized**.
- Using all the headers mentioned is obviously not necessary, but I have faced **403**s for not spoofing at least the "User-Agent" header. I presume this is checked at the backend to verify the request is coming from a browser and not something like the Python requests library or curl, since those send their own default headers.
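Given the fuzzy rate limits above, any notifier or booking script should space out its calls and give up after a bounded number of attempts. Below is a minimal, hypothetical Python helper (the function and its parameters are mine, not part of the CoWIN API); `fetch` would wrap a GET to one of the Appointment Availability endpoints:

```python
import time

def poll_slots(fetch, interval_s=30.0, max_polls=10, sleep=time.sleep):
    """Call fetch() until it returns a non-empty list of sessions.

    Spacing calls ~30s apart keeps you at ~2 requests/minute, well under
    the claimed 100-requests-per-5-minutes limit (and my guessed 10/min one).
    """
    for attempt in range(max_polls):
        sessions = fetch()
        if sessions:
            return sessions
        if attempt < max_polls - 1:
            sleep(interval_s)  # wait before the next poll
    return []
```

The `sleep` argument is injectable purely so the loop can be exercised without actually waiting.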

@@ -0,0 +1,14 @@
import MarkdownBlogComponent from "@/components/MarkdownBlogComponent";
import path from "path";
import { fileURLToPath } from "url";
export default async function BlogPage() {
return (
<MarkdownBlogComponent
blogFilePath={path.join(
path.dirname(fileURLToPath(import.meta.url)),
"content.md",
)}
/>
);
}

@@ -1,10 +1,7 @@
---
title: Docker Primer
date: 2018-02-13T13:56:12-05:00
tags: ["docker", "containers"]
type: Docker
summary: Docker basics to get you started
---
# Docker Primer
<!-- tags: ["docker", "containers"] -->
When we think of virtualization today, we may think of VirtualBox, which abstracts away the system processes and lets you run a complete operating system from within another. Think of Docker as VirtualBox, but extremely lightweight (in terms of resource consumption). Obviously I'm oversimplifying the explanation a little, and a whole lot of things get lost in the simplification, but for now this will do.

@@ -1,24 +1,14 @@
import MarkdownRenderer from "@/components/MarkdownRenderer";
import { promises as fs } from "fs";
import MarkdownBlogComponent from "@/components/MarkdownBlogComponent";
import path from "path";
import { fileURLToPath } from "url";
export default async function BlogPage() {
try {
// Read markdown file from project root
const filePath = "src/app/blog/docker_primer/content.md";
const markdownContent = await fs.readFile(filePath, "utf8");
return (
<main className="flex flex-1 flex-col justify-end items-center font-[family-name:var(--font-inter-sans)] pt-10 md:pt-20 pb-25 md:pb-0">
<div className="md:w-[786px] w-[95%]">
<div className="p-5 pt-10 rounded-lg markdown">
<MarkdownRenderer markdown={markdownContent} />
</div>
</div>
</main>
);
} catch (error) {
console.error("Error loading markdown:", error);
return <div>Failed to load content</div>;
}
return (
<MarkdownBlogComponent
blogFilePath={path.join(
path.dirname(fileURLToPath(import.meta.url)),
"content.md",
)}
/>
);
}

@@ -0,0 +1,75 @@
# Firewall init
<!-- tags: ["firewall","ufw","linux","hardening"] -->
A firewall is used to keep a check on incoming and outgoing connections.
We shall be using `ufw` (**Uncomplicated Firewall**) to close unwanted incoming connections from the Internet while allowing outgoing ones.
It's preferable to avoid Scaleway servers, as something or the other used to get messed up on those; Digital Ocean and Vultr seem to do just fine.
Make sure this is the first thing you do when setting up a server, so that there is no data or time loss if anything goes wrong. There is a real chance of losing access to the server entirely if you misconfigure something, so back up your data locally or on another server beforehand.
- Install `ufw`
```sh
apt install ufw
```
- Edit `/etc/default/ufw` and modify the three default policy lines as shown below
```
DEFAULT_INPUT_POLICY="ACCEPT"
DEFAULT_OUTPUT_POLICY="ACCEPT"
DEFAULT_FORWARD_POLICY="ACCEPT"
```
- Append a drop-all rule to the INPUT chain: Edit `/etc/ufw/after.rules`, add this line just before the final `COMMIT` line:
```
-A ufw-reject-input -j DROP
```
- Disable `ufw` logging (this seems to cause issues with Scaleway's default kernel):
```sh
ufw logging off
```
That's it, `ufw` is up and running, and NBD (Scaleway's network block device storage) shouldn't cause issues.
- The following is also necessary, since there seems to be a permissions issue with these folders:
```sh
chmod 751 /etc/default
chmod 751 /etc
chmod 751 /usr
```
- Set up a basic configuration to allow SSH, HTTP and HTTPS incoming
```sh
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
```
In a new terminal window, check that you can still access your host via SSH before closing your current session.
- You can check the configuration at any time with:
```sh
ufw status verbose
```
- You can disable the `ufw` configuration at any time with:
```sh
sudo ufw disable
```

@@ -0,0 +1,14 @@
import MarkdownBlogComponent from "@/components/MarkdownBlogComponent";
import path from "path";
import { fileURLToPath } from "url";
export default async function BlogPage() {
return (
<MarkdownBlogComponent
blogFilePath={path.join(
path.dirname(fileURLToPath(import.meta.url)),
"content.md",
)}
/>
);
}

@@ -1,8 +1,32 @@
export default function Blog() {
import { promises as fs } from "fs";
import Link from "next/link";
import { Suspense } from "react";
async function getFolders(dirPath: string) {
try {
const items = await fs.readdir(dirPath, { withFileTypes: true });
return items.filter((item) => item.isDirectory()).map((item) => item.name);
} catch (err) {
console.error("Error reading directory:", err);
return [];
}
}
export default async function BlogList() {
const folders = await getFolders("src/app/blog/");
return (
<main className="flex items-center justify-center h-screen">
This is blogs index page
<main className="flex flex-1 flex-col justify-end items-center font-[family-name:var(--font-spacegrotesk-sans)] pt-10 md:pt-20 pb-25 md:pb-0">
<div className="md:w-[786px] w-[95%]">
<h1>Blogs</h1>
<Suspense fallback={<div>Loading...</div>}>
{folders.map((folder) => (
<Link href={`/blog/${folder}`} key={folder}>
<div>{folder}</div>
</Link>
))}
</Suspense>
</div>
</main>
);
}

@@ -0,0 +1,112 @@
# The Resolvable Paranoia
Ahh, where should we start? Okay, let's start with this: I'm the relatively paranoid type.
And given the super ambitious nature of agencies like the NSA and governments around the world, I didn't want to leave any blind spots. So I decided that it's about time we do this!
Throughout this blog I'm going to use mostly layman's terms, while citing the technical sources, which you can check out if you want to.
Also, this blog is not targeted at you if you are an amateur, because if you are one, you are probably already doing 10 things incorrectly, and this is the least of your worries. You can still have a good read though. The blog mostly targets intermediate developers and intends to be the single place you need to look at for setting up a secure machine.
## SSH and GPG keys
Alright so, being on a Linux platform, there are going to be two types of keys you ever need to worry about.
### SSH Keys
You'll use these keys as a sort of authentication mechanism, to prove your identity. Popular examples are passwordless access to servers, or pushing commits to your git repository. It is not technically passwordless, since your key acts as your password. They are composed of two parts: the public key and the private key. The public key sits on the server you want to access; you can openly disclose it to the world without fearing any compromise, whereas the private key is supposed to stay with you.
### GPG Keys
Next are the GPG keys; these are a little more complicated than SSH keys. GPG keys can be used for signing messages, commits, or software, to prove that you are indeed the one providing the information. They can also be used to encrypt messages, so that they can be read only by the person or organization they are intended for.
## How many keys does a man require?
Seems like the modern version of Leo Tolstoy's story. Anyway, jokes aside: more keys mean more security, but also more complexity in handling them. To start off, let's lay down some logical statements.
1. Keys shouldn't be transferred between devices. Why, you ask? Well, if you have the same set of keys everywhere and one of your devices is compromised, your entire identity is compromised. If you have separate keys, however, you can easily block access to the key present on the compromised device.
2. Keys shouldn't be passwordless. This ensures that even in case of a hardware compromise, the keys are unusable. For instance, say a malicious actor gets hold of your device and has his hands on the keys; he is still unable to use them without the password which only you know.
3. It's a bad idea to use the same keys for a lifetime. This is a sort of precautionary measure. Assuming a malicious actor already has your keys but hasn't used them for nefarious purposes, the threat will be neutralized upon key renewal. For renewal it is a good idea to maintain a list of places where the public key has been uploaded, for example GitHub, GitLab, a personal server etc., so that it can easily be replaced. I replace mine every year. Maybe as I get more paranoid, I'll decrease the duration :)
4. Don't use the same GPG key for signing and encrypting. Explaining this one is a little complicated and requires a bit of math. You may look it up [here](https://crypto.stackexchange.com/questions/12090/using-the-same-rsa-keypair-to-sign-and-encrypt), but like the other logical assumptions I've made above, this one also stands.
5. The GPG key which you use for signing shouldn't be replaced. This is the only key you need to protect with a great deal of precaution, because it will be used to build your web of trust. Replacing it unverifies all your signed entities. Now, it is not the end-of-the-world scenario I'm making it out to be; nothing will stop working. It's just that, the way I work, I try to seek perfection where possible. And this might be a little far-fetched, but I think it is achievable. In an ideal world, if I sign a commit and it is verified, I intend it to be verified till the end of time. If my signing keys are compromised, I'll have to remove and replace them, rendering all my older commits unverified. It has happened before and I'm not proud of it.
6. Now you may say that maintaining so many GPG keys, and that too securely, will be a painful task. But don't worry, OpenPGP has you covered. It has this concept of [subkeys](https://wiki.debian.org/Subkeys), wherein the main key need not even be on any of the devices you're using. Which brings me to my next point: don't keep the main GPG key on any of your devices; rather, keep it on an air-gapped device for super security.
7. Do not delete old keys using `rm`. Use `shred` instead.
8. Use FLOSS only. Okay, maybe I'm going a bit overboard with this one. But if you truly want a secure system, you need to be able to trust the software you're using, and the best way to do that is to use software whose code is out in the open for anyone to review. Don't get me wrong, I'm not saying FLOSS software doesn't have security flaws. All I'm saying is that, with the codebase in the open, a lot of people have eyes on the code to spot flaws, report them and get them fixed. I also get that sometimes this is a little hard to do. Maybe you desperately want to play that game, or use that one piece of software which gets everything right for you. One way to resolve this is to have a dual-boot system, one side being FLOSS. I, for the most part, have replaced most of my applications with FLOSS.
9. For SSH keys, prefer elliptic curve cryptography (ECC) over RSA. ECC keys are a lot shorter while providing the same level of security as RSA, and they are less exposed to certain side-channel attacks. To generate an RSA key you have to generate two large random primes, and the code that does this is complicated and so can more easily be (and in the past has been) compromised to generate weak keys. However, certain older systems might not support ECC, so it is a good idea to have an RSA key as a backup in case ED25519 is not implemented.
## Ground Rules
Based on the rules above, I think it is safe to assume the following ground rules
1. Every system should have its own key.
2. Keys should have a passphrase on them.
3. Replace keys every year.
4. The GPG key used for signing shouldn't be replaced.
5. Use different GPG keys for signing and for encrypting.
6. Keep your main GPG key on an air-gapped device.
7. Delete keys using `shred` instead of `rm`.
8. Use FLOSS preferably.
9. For SSH, have two keys: one using the ED25519 implementation, and the other RSA.
Boy, we've reached 9 rules. I was expecting 3 or 4 at max. Anyway, we've got to get rid of the paranoia.
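As a quick illustration of rule 7: `shred` overwrites a file's contents before unlinking it, unlike `rm`, which only removes the directory entry. A throwaway example:

```sh
# create a dummy "key" and securely destroy it
echo "dummy private key material" > /tmp/old_key
shred --iterations=3 --zero --remove /tmp/old_key  # overwrite 3 times, zero out, then unlink
```

(Note that on journaling or copy-on-write filesystems `shred`'s guarantees are weaker; see the caveats in `man shred`.)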
## Storage Mechanisms and Backups
The next problem is storing the keys and having backups. You need backups, because it is a bad idea to put all your ~~eggs~~ keys in one ~~basket~~ device. Your PC may fatally crash, your phone may go dead, or your USB stick may just stop responding. Hence it is a bad idea to have everything at a single location.
Here we follow the 3-2-1 backup rule. It is best practice to have 3 copies of the data; this way, even if you lose any 2 of them, which in itself is an unlikely occurrence, you can still recover.
You may also use something like a physical printout of your key on paper, using something like [paperkey](http://www.jabberwocky.com/software/paperkey). Or maybe in the form of a QR code, maybe at the bottom of the ocean / in someone's grave / inside a nuclear reactor. Whatever, get creative with your ideas! Or keep it simple and have multiple plain old USB sticks in different locations, or a CD-ROM (if anyone even uses those nowadays). Or you could store it on Google Drive, defeating the entire purpose and effort. The cloud could also be considered an option, but then again, I won't necessarily trust the cloud provider not to access my data.
Now, earlier I suggested that it is a bad idea to store keys on your device. However, you may also store them inside a [tomb](https://github.com/dyne/Tomb). More on that later.
## Random number generation
Before key creation you may want to install `rng-tools` to help with entropy generation.
## Creating an SSH Key
Alright, now we finally come to the key creation part! The initial seed should be super random.
Make sure you have OpenSSH installed on your system. Most systems have it, however in case it is not present, simply look it up on the internet, and install it using your package manager.
Creating the ED25519 Key:
```sh
ssh-keygen -a 100 -t ed25519 -f ~/.ssh/id_ed25519 -C "john.doe@mail.com"
```
Creating the RSA Key:
```sh
ssh-keygen -a 100 -t rsa -b 4096 -f ~/.ssh/id_rsa -C "john.doe@mail.com"
```
In both cases you'll have to enter your passphrase while creating the key. Make sure you choose a secure one, preferably at least 10 characters long, with a combination of symbols, numbers, and capital and small letters. Also make sure you don't forget it!
## Creating a GPG Key
Make sure you're using `gpg2` instead of `gpg`.
The creation process for the two GPG keys is similar, except for some minor changes. To start the process you invoke the same command in both cases.
```sh
gpg2 --verbose --full-gen-key
```
Let's create the signing key first.
You'll be presented with a few prompts. For the 1st prompt, select 4, i.e. `RSA (sign only)`. Next, make sure your keys are `4096` bits long. As explained above, given this is the signing key, the validity of the key should be `0`, i.e. the key does not expire. Finally, you'll need to confirm your inputs, enter your name, email ID and a comment, and set your passphrase. In the comment, you may mention `Signing Key`. Note that the last 3 fields are inconsequential to the key generation; however, they're important for identification purposes. Done, your signing key will be created.
For creation of the encryption key, invoke the same command as above.
Filling in the prompts will be a little different this time around. For the 1st prompt, select 1, i.e. `RSA and RSA`. Next, make sure your keys are `4096` bits long. For the key expiration period, you may select 1 year, i.e. `1y`. Confirm everything and enter a passphrase like above, and your encryption keys are ready!
Again, I cannot emphasise enough the importance of using a strong passphrase! The passphrase is what protects your private key in case it is stolen.
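For reference, the interactive prompts above can also be scripted via GnuPG's batch mode. This is a hypothetical unattended equivalent of the signing-key answers (RSA sign-only, 4096 bits, no expiry); treat it as a sketch, and substitute a real passphrase:

```sh
# parameter file mirroring the interactive answers
cat > signing-key-params <<'EOF'
Key-Type: RSA
Key-Length: 4096
Key-Usage: sign
Name-Real: John Doe
Name-Comment: Signing Key
Name-Email: john.doe@mail.com
Expire-Date: 0
Passphrase: use-a-strong-passphrase-here
%commit
EOF
gpg2 --verbose --batch --gen-key signing-key-params
shred --zero --remove signing-key-params  # the file contains your passphrase
```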

@@ -0,0 +1,14 @@
import MarkdownBlogComponent from "@/components/MarkdownBlogComponent";
import path from "path";
import { fileURLToPath } from "url";
export default async function BlogPage() {
return (
<MarkdownBlogComponent
blogFilePath={path.join(
path.dirname(fileURLToPath(import.meta.url)),
"content.md",
)}
/>
);
}

@@ -0,0 +1,863 @@
# Ubuntu - The start of an amazing journey
I originally started off with the Windows operating system. It was good until I realized I couldn't customize anything the way I wanted.
As of now I have transitioned to using Arch Linux as my daily driver, since it brings along a ton of customization capabilities and a minimalistic approach which I very much like. But I usually suggest that people start off with something like Ubuntu because of the ease with which you can set up your system.
This blog post lists the problems I faced while transitioning from Windows to Ubuntu. Treat it like an FAQ of sorts, since most of the solutions apply to any Linux distribution in general.
# Ubuntu Setup
## Windows settings
### Allocate Space for Ubuntu
If you plan to dual boot your system, then you'll need to allocate space for Ubuntu. For the record, Ubuntu is as capable an OS as Windows, if not more so. It just needs some time to get used to if you're coming from a Windows background.
1. Open Disk Utility
2. Select a Drive which has enough space to spare
3. Right click, select Shrink Volume, and enter the amount of space you want to spare.
I recommend around 200 GB, but there is no hard and fast rule; allocate as you like.
### Disable fastboot on Windows
Before you install Ubuntu you'll have to disable fastboot (fast startup) on your Windows system. Fastboot is a type of hybrid shutdown in which the data present in your RAM is stored on your hard disk for faster boot-up speeds; however, it renders the hard disk unusable by other operating systems, since there is cached data present on it from Windows.
1. Right click on battery symbol
2. Go to Power Options
3. On the left side select "Choose what the power buttons do"
4. Click on "Change settings that are currently unavailable"
5. Uncheck "Turn on fast startup (recommended)"
## Make a bootable USB
1. Download and install [Etcher](https://etcher.io/)
2. Download the [Ubuntu Image](https://www.ubuntu.com/download/desktop?)
3. Make a bootable USB by opening Etcher, inserting the USB and selecting the ISO file you just downloaded.
## Install Ubuntu
You're all set and ready to install Ubuntu!
1. Insert the bootable USB and restart your PC. Boot into the BIOS using the function keys on startup. (On HP systems it's usually the `F10` key which needs to be pressed.)
2. Highlight the USB drive and hit Return (ENTER)
The system will boot into Ubuntu and you'll have a Graphical Interface to aid you with the installation.
### Partition Scheme
The only complicated part of the installation is the partition scheme. I prefer having my `/home` and `/` (root) partitions separate, and hence this is the partition scheme I use.
Select "Something else" from the installer window, then partition the space you allocated via Windows as shown below.
```
swap -- 11 GB - PRIMARY (swap should roughly be 1.5x RAM but is not really mandatory)
/ -- 75 GB - LOGICAL - ext4 (recommended min 50GB)
/boot -- 1 GB - LOGICAL - ext4 (min 500MB)
/home -- 75 GB - LOGICAL - ext4 (or rest of it)
```
Feel free to resize as you see fit as per the amount of space you have to spare.
# FAQs
## Installed Ubuntu 16.04, and wanted to move to 14.04
Make a 14.04 bootable USB and boot it up. Select 'Uninstall Ubuntu
16.04 and reinstall'. \#Make sure the USB you\'re using will be able to
revert back to data storage from bootable mode, since some USB\'s don't
support this, eg. Kingston etc.
## In case the USB gets messed up and cannot be used for data storage follow this:
[Fixing your USB](http://askubuntu.com/questions/198065/how-to-format-a-usb-drive)
## Terminal
All the commands need to be executed inside a terminal. To open the terminal press `Ctrl + Alt + T`
## To update your package list and upgrade your packages
```sh
sudo apt update
sudo apt upgrade
```
*NOTE: This is when it'll ask for your password for `sudo` privileges. `sudo` privileges are required when something needs to be changed in the root directory. Think of `root` as the "C" drive on your Windows system: it has the OS installed along with all the other software on your system, hence you need to be a little careful while using it. Do not execute any random command as root!*
## Install LibreOffice
### Remove LibreOffice 4.x if installed using
```sh
sudo apt-get remove --purge libreoffice*
sudo apt-get clean
sudo apt-get autoremove
```
### Install LibreOffice 5.x, open Terminal and type:
```
sudo add-apt-repository ppa:libreoffice/ppa
sudo apt-get update
sudo apt-get upgrade
```
## To access an FTP server
Open the file explorer, select "Connect to server", and type in the FTP address.
## To install Spotify
Visit [this link](https://www.spotify.com/int/download/linux/) and follow the instructions.
## Copy files using terminal
1. If you have a file in `/path/to/file` and you want to copy it to `/new/path` without deleting anything in the `/new/path` directory.
By using -i for interactive you will be asked if you would like
to replace the file:
```
cp -i /path/to/file /new/path
```
or you can use -b to create a backup of your file:
```
cp -b /path/to/file /new/path
```
2. If you want to copy a directory (folder) from `/path/to/directory` to `/path/to/new_directory/` and do not want to delete any thing on `/path/to/new_directory/`
Use -R for recursive and -i for interactive:
```
cp -Ri /path/to/directory /path/to/new_directory
```
## Moving files using terminal
If you want to move a folder/file to another place without deleting files in that directory, use the same commands as above, but with the `mv` command instead of the `cp` command.
## Delete files using terminal
```
rm -rf folderName
```
Be careful with this command. If it says `Permission denied`, do not simply use `sudo`. You are getting `Permission denied` because you're trying to delete something you don't have permissions over. This could very well mean you're trying to delete something owned by `root`, which is not recommended unless you know what you're doing.
Note: this is assuming you are already on the same level of the folder
you want to delete in terminal, if not:
```
rm -r /path/to/folderName
```
FYI: you can use letters -f, -r, -v:
- -f = to ignore non-existent files, never prompt
- -r = to remove directories and their contents recursively
- -v = execute commands verbosely
## To install Firefox Developer 64-bit edition
### Uninstall existing Firefox installation if any
```
sudo apt-get purge firefox
```
### Download Firefox
[From here](https://www.mozilla.org/en-US/firefox/developer/all/)
After the download completes, extract it into the Downloads folder and rename the `firefox` folder to `firefox_dev`. Copy it to the `/opt` directory with the command
```
sudo cp -r Downloads/firefox_dev /opt
```
Open terminal and type:
```
vim ~/.local/share/applications/firefox_dev.desktop
```
I'm using `vim` here. `vim` is a text editor inside the terminal. You can choose any text editor which you like, for e.g, `nano` etc.
Then insert the following text into the file:
```
[Desktop Entry]
Name=Firefox Developer
GenericName=Firefox Developer Edition
Exec=/opt/firefox_dev/firefox
Terminal=false
Icon=/opt/firefox_dev/browser/icons/mozicon128.png
Type=Application
Categories=Application;Network;X-Developer;
Comment=Firefox Developer Edition Web Browser.
```
Now open your file explorer, navigate to the `/opt` folder, open the `firefox_dev` folder and run `firefox`
Tip: *Pin this to the taskbar for easy startup*
Also install the security update ppa using:
```
sudo add-apt-repository ppa:ubuntu-mozilla-security/ppa
sudo apt-get update
```
## Install .deb files
Refer: http://unix.stackexchange.com/questions/159094/how-to-install-a-deb-file-by-dpkg-i-or-by-apt
## To install sublime text 3
Install the GPG key:
```
wget -qO - https://download.sublimetext.com/sublimehq-pub.gpg | sudo apt-key add -
```
Ensure apt is set up to work with https sources:
```
sudo apt-get install apt-transport-https
```
Select the channel to use:
Stable
```
echo "deb https://download.sublimetext.com/ apt/stable/" | sudo tee /etc/apt/sources.list.d/sublime-text.list
```
Dev
```
echo "deb https://download.sublimetext.com/ apt/dev/" | sudo tee /etc/apt/sources.list.d/sublime-text.list
```
Update apt sources and install Sublime Text
```
sudo apt-get update
sudo apt-get install sublime-text
```
Tips: *To set it as the default text editor*
Refer: http://askubuntu.com/questions/396938/how-do-i-make-sublime-text-3-the-default-text-editor
## Remove unwanted architectures
*Note: DO NOT follow this unless you're using a 64-bit CPU*
```
dpkg --print-architecture
```
should print `amd64`
```
dpkg --print-foreign-architectures
```
should print `i386` only
```
sudo dpkg --remove-architecture i386
```
Here `i386` is the unwanted foreign architecture reported by the previous command.
## To install wineHQ on Ubuntu
Wine is used to run any Windows programs which you desperately need on Linux.
TL;DR Refer: http://askubuntu.com/questions/316025/how-to-install-and-configure-wine
```
sudo dpkg --add-architecture i386
sudo add-apt-repository ppa:wine/wine-builds
sudo apt update
sudo apt install winehq-staging
winecfg
```
## To install uTorrent
Refer: https://www.youtube.com/watch?v=oSiUcgGyiGM
<http://localhost:8080/gui>
User: admin
Password: (blank)
(Yes, the default password is empty.)
## To install IDM
Refer: http://askubuntu.com/questions/554062/how-i-can-install-internet-download-manager-on-ubuntu-14-04
## To install VLC
```
sudo apt update
sudo apt install vlc browser-plugin-vlc
```
Refer: http://www.videolan.org/vlc/download-ubuntu.html
*Note: To fix the overlay problem, go to **Tools** > **Preferences** > **Video**, and uncheck **Accelerated Video Output (Overlay)***
## Why use apt update
Ref: http://askubuntu.com/questions/337198/is-sudo-apt-get-update-mandatory-before-every-package-installation
## To be able to recognize exfat formatted USB drives:
```
sudo apt-get install exfat-fuse exfat-utils
```
## To install pip
For pip you'll also need Python on your system. Python 2 is the default on Ubuntu; however, I suggest you migrate to Python 3.
```
sudo apt install python3
curl https://bootstrap.pypa.io/get-pip.py | sudo python3
```
## In case of compatibility issues (causing IGN in terminal)
Ref: https://discuss.elastic.co/t/apt-repositories-are-failing-with-404/47713/7
## Setting up virtualenv
Refs: TL;DR
http://stackoverflow.com/questions/5506110/is-it-possible-to-install-another-version-of-python-to-virtualenv
To install virtual environment
```
sudo pip install virtualenv
```
To create a virtual environment
```
mkdir Environments
cd Env*
virtualenv project1_env
```
To enter the virtual environment
```
source project1_env/bin/activate
```
To exit the virtual environment
```
deactivate
```
*Note: This virtual environment is only for the python packages you install. You need not use `sudo` to install packages using `pip`*
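To see this isolation for yourself, here's a quick sketch using the standard library's `venv` module (which behaves the same as `virtualenv` for this purpose; `demo_env` is just an example name):
```
python3 -m venv demo_env        # create a throwaway environment
. demo_env/bin/activate         # activate it, same as with virtualenv
command -v python               # now resolves inside demo_env/bin
```
Packages installed with `pip` while the environment is active land inside `demo_env`, which is why `sudo` isn't needed.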
## To install Git
```
sudo apt install git
```
## To install Android Studio
Ref: https://developer.android.com/studio/index.html
## To install Zotero
Ref: https://www.zotero.org/download/
## To quit anything abruptly in terminal
`Ctrl + C`
## To close terminal
```
exit
```
## Difference between "cd ~/folder_name" and "cd folder_name"
The tilde (i.e. `~`) before the folder path indicates that it is located in the home directory; without the tilde, the path is relative to the current directory.
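A quick illustration:
```
cd /tmp          # some arbitrary working directory
echo ~           # the tilde expands to your home directory
cd ~             # goes to the home directory from anywhere
pwd              # confirms we're in the home directory
```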
## To install opencv
Steps for installing opencv 2 in python 2
Ref:
1. http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_install/linux_install.html#linux-installation
2. https://medium.com/@manuganji/installation-of-opencv-numpy-scipy-inside-a-virtualenv-bf4d82220313#.ndrkgkel7
Remember:
These are the parameters I passed to the `cmake` command for opencv 2 on python 2
```
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=$VIRTUAL_ENV/local/ -D PYTHON_EXECUTABLE=$VIRTUAL_ENV/bin/python -D PYTHON_PACKAGES_PATH=$VIRTUAL_ENV/lib/python2.7/site-packages -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_GTK=ON -D WITH_OPENGL=ON ..
```
These are the parameters I passed to the `cmake` command for opencv 3 in python 3
```
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=$VIRTUAL_ENV/local/ -D INSTALL_C_EXAMPLES=OFF -D INSTALL_PYTHON_EXAMPLES=ON -D PYTHON_EXECUTABLE=$VIRTUAL_ENV/bin/python -D PYTHON_PACKAGES_PATH=$VIRTUAL_ENV/lib/python3.5/site-packages ..
```
```
sudo apt-get install liblapacke-dev checkinstall
make -j4
sudo checkinstall
```
Ref:
1. http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/
2. http://answers.opencv.org/question/121651/fata-error-lapacke_h_path-notfound-when-building-opencv-32/
3. http://docs.opencv.org/3.0-beta/doc/tutorials/introduction/linux_install/linux_install.html
*Note: Remember opencv2 cannot be installed with python 3*
## To install other useful packages
```
pip install scipy
pip install matplotlib
pip install pandas
pip install dlxsudoku
pip install -U scikit-learn
pip install ipython
pip install graphviz
pip install pydotplus
sudo apt install python-xlib python-tk
pip install Image
pip install python-xlib
pip install PyAutoGUI
```
Refs:
1. http://www.numpy.org/
2. https://www.scipy.org/
3. http://matplotlib.org/
4. http://pandas.pydata.org/
5. https://pypi.python.org/pypi/dlxsudoku
6. http://scikit-learn.org/stable/
7. https://ipython.org/
8. http://www.graphviz.org/
9. https://pypi.python.org/pypi/pydotplus
10. http://jupyter.org/
11. https://pyautogui.readthedocs.org/
## To install tensorflow
Ref: https://www.tensorflow.org
## To execute python files using terminal
```
python filename.py
```
## To cd to a directory whose name has a space in it
```
cd folder\ name\ with\ spaces\ in\ it
```
The backslash acts as an escape character.
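Quoting the name works just as well as escaping each space:
```
mkdir -p "folder name with spaces in it"   # demo directory
cd folder\ name\ with\ spaces\ in\ it      # backslash-escaped
cd ..
cd "folder name with spaces in it"         # quoted, equivalent
cd ..
```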
## To use IP camera as webcam in Python
Ref: https://junise.wordpress.com/2015/05/01/stream-from-wifi-ip-camera-android-phone-wifi-ip-camera-using-python/
## To setup Sublime Text 3
1. Open Sublime Text.
2. Go to 'View', and then to 'Show Console'.
3. Visit this site: https://packagecontrol.io/installation#st3 and copy the code snippet under the heading "SUBLIME TEXT 3".
4. Paste it in the console of Sublime Text and press 'Enter'. Package Control will be installed.
5. In Sublime press 'Ctrl + Shift + P'
6. Type in "install"
7. Click on "Package Control: Install Package".
8. Then select SublimeREPL. It will install it automatically.
9. Again in Sublime press 'Ctrl + Shift + P', type in "install". Again click on "Package Control: Install Package". Then select "SublimeLinter". This will install it automatically.
10. Again in Sublime press 'Ctrl + Shift + P', type in "install". Again click on "Package Control: Install Package". Then select "LaTeXing". This will install it automatically.
## To shutdown PC from terminal
```
sudo poweroff
```
## To reboot PC from terminal
```
sudo reboot
```
## To install arduino
Ref: https://www.arduino.cc/en/Guide/Linux
*Note: Before uploading to the board you'll always have to execute these commands in the terminal (replace `ttyACM0` below with the port the board is connected to, and `username` with your user):*
```
sudo usermod -a -G dialout username
sudo chmod a+rw /dev/ttyACM0
```
## To run an ftp server
Refs:
1. https://help.ubuntu.com/lts/serverguide/ftp-server.html
2. https://www.youtube.com/watch?v=wXSuqzwLnL4yu
3. http://askubuntu.com/questions/649935/installed-15-04-cannot-restart-ssh-daemon
4. http://askubuntu.com/questions/198567/vsftpd-installed-but-i-cant-restart-stop-it
The username and password will be your Ubuntu username and password.
Note: vsftpd runs by default; to uninstall:
```
sudo apt remove vsftpd
```
## To check network activity
Ref: http://askubuntu.com/questions/37847/is-there-a-command-that-returns-network-utilization
## To check the manual for any command
```
man command
```
## To remove or uninstall any unwanted packages
```
sudo apt-get remove package
```
## To monitor network activity
Ref: http://askubuntu.com/questions/532424/how-to-monitor-bandwidth-usage
```
sudo nethogs wlo1
```
## To install steam
Ref: https://linuxconfig.org/how-to-install-steam-on-ubuntu-16-04-xenial-xerus
## To transfer data between two PCs
Ref: http://askubuntu.com/questions/475697/how-can-i-transfer-data-between-two-computers-using-ethernet-cable-and-ftp-softw
## To run C programs
To compile the file:
```
gcc file_name.c -o compiled_file_name
```
To execute the compiled binary:
```
./compiled_file_name
```
## To run C++ programs
To compile the file:
```
g++ file_name.cpp -o compiled_file_name
```
To execute the compiled binary:
```
./compiled_file_name
```
## To install django
```
pip install django
```
Ref: https://docs.djangoproject.com/en/1.10/
## To install MEGAsync
```
sudo dpkg -i megasync-xUbuntu_16.04_amd64.deb
sudo apt-get -f install
```
## To resize partitions
Ref: http://askubuntu.com/questions/291888/can-i-adjust-reduce-my-partition-size-for-ubuntu
## To remote control Ubuntu
Ref: http://askubuntu.com/questions/155477/how-do-i-remotely-control-another-ubuntu-desktop-from-ubuntu
## To install docker
Ref: https://docs.docker.com/engine/installation/linux/ubuntulinux/
## To install ROS Kinetic Kane and Turtlebot
It is highly recommended to install ROS on docker. This is to prevent
dependency issues. Some known issues are:
1. ROS Kinetic isn't compatible with turtlebot. However it's working as of now. No bugs encountered yet.
2. ROS Indigo is compatible with turtlebot, however Ubuntu 16.04 isn't compatible with ROS Indigo.
Refs:
1. http://wiki.ros.org/kinetic/Installation/Ubuntu
2. http://wiki.ros.org/turtlebot/Tutorials/indigo/Turtlebot%20Installation
The 2nd link is for installing turtlebot on ROS. Since we're using ROS Kinetic, replace "indigo" with "kinetic" in all package names in the turtlebot installation. Two packages won't be available; delete them from the list rather than installing them.
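The renaming can be scripted; a sketch assuming the package names are listed in a file (`turtlebot-packages.txt` is a hypothetical name):
```
# build a sample package list, then swap the distro name in-place
printf 'ros-indigo-turtlebot\nros-indigo-turtlebot-apps\n' > turtlebot-packages.txt
sed -i 's/indigo/kinetic/g' turtlebot-packages.txt
cat turtlebot-packages.txt
```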
```
roslaunch turtlebot_gazebo turtlebot_world.launch
roslaunch turtlebot_teleop keyboard_teleop.launch
```
*To uninstall ROS Kinetic Kane and Turtlebot*
```
sudo apt-get purge ros-*
sudo apt-get autoremove
```
Then modify the `~/.bashrc` file by deleting the line `source /opt/ros...`
## To install sox
```
sudo apt-get install sox
```
Ref: http://sox.sourceforge.net/
## To install theano
Ref: http://deeplearning.net/software/theano/install.html
```
pip install nose
pip install nose-parameterized
```
## To increase swap size in Ubuntu
Ref: http://askubuntu.com/questions/178712/how-to-increase-swap-space
## To open jupyter notebook in a particular directory
```
jupyter notebook /path/to/directory
```
## To install Pytables
```
pip install tables
```
Ref: http://www.pytables.org/usersguide/installation.html
## To install SublimeHighlight in Sublime Text
Ref: http://stackoverflow.com/questions/21037711/sublime-text-2-paste-with-colors-to-ms-word
*Note: The name of the package is "Highlight" and not "SublimeHighlight"*
Set the theme as "monokai"
## To install Compare Side-By-Side in Sublime Text
Ref: https://packagecontrol.io/packages/Compare%20Side-By-Side
*Note: You could also consider "Sublimerge 3"*
## To install other useful packages
```
pip install h5py
```
Ref: http://docs.h5py.org/en/latest/index.html
## How to connect Tata photon +(Huawei EC156) dongle
Ref: http://askubuntu.com/questions/536371/how-to-connect-tata-photon-huawei-ec156-in-ubuntu-14-04
## Uninstall package built from source
Ref: http://unix.stackexchange.com/questions/64759/how-do-i-remove-uninstall-a-program-that-i-have-complied-from-source
Problems were faced while uninstalling octave-4.0.3 built from source; even after following the above steps, the leftover files had to be located and removed individually with `sudo rm`. To avoid such situations in the future, consider using `checkinstall` instead of `make install`.
Ref: https://help.ubuntu.com/community/CheckInstall
## To install Octave
Ref: http://wiki.octave.org/Octave_for_Debian_systems#Compiling_from_source
Use `./configure --with-hdf5-includedir=/usr/include/hdf5/serial --with-hdf5-libdir=/usr/lib/x86_64-linux-gnu/hdf5/serial JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64`
*Note: After `make`, instead of `sudo make install` do `sudo checkinstall`*
## Using LaTeX in IPython notebooks (Jupyter Notebook)
Just put your LaTeX math inside `$$`
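For example, a Markdown cell containing the following renders as a centered equation:
```
$$
\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}
$$
```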
## How to create WiFi hotspot using Ubuntu
Go to "Edit Connections", click "Add", and select "WiFi". Choose an "SSID" (name) for the network, select "Hotspot" mode, select the "2.4 GHz" band, and click "Save".
## How to execute MATLAB/Octave code in terminal
Run `octave file_name.m` in the terminal.
You cannot call individual Octave functions this way; either call them from the Octave prompt, or wrap the call in an `m` file and execute that file with Octave.
## To install Octave kernel in IPython notebooks (Jupyter Notebook)
```
pip install octave_kernel
python -m octave_kernel.install
```
Ref: https://github.com/Calysto/octave_kernel
## To install Octave-Forge package in Octave
```
pkg install -forge package_name
```
Ref: http://octave.sourceforge.net
## To uninstall Octave-Forge package in Octave
```
pkg uninstall package_name
```
It is not recommended to install the unmaintained Octave-Forge packages like nnet, due to compatibility issues. Instead, you could download the `m` files for the required functions and modify them accordingly.
## Create a function in Octave which is available in MATLAB but not in Octave
Create an `m` file whose file name matches the function name it contains. Understand the function definition from the MATLAB help page online, modify the function contents accordingly, and save the file in the directory from which it is called.
## How to remove an added ppa
Ref: http://askubuntu.com/questions/307/how-can-ppas-be-removed
## To install XDM (Xtreme Download Manager)
Download the tar.gz file, extract it, `cd` into the extracted folder, and execute `sudo ./install.sh`
Ref: http://xdman.sourceforge.net/#downloads
## To install unity tweak tool
```
sudo apt install unity-tweak-tool
```
## To install various themes and icons for Ubuntu
For Macbuntu theme:
```
sudo add-apt-repository ppa:noobslab/macbuntu
sudo apt-get update
sudo apt-get install macbuntu-os-icons-lts-v7
sudo apt-get install macbuntu-os-ithemes-lts-v7
```
For Arc theme and icons:
```
sudo add-apt-repository ppa:noobslab/themes
sudo apt-get update
sudo apt-get install arc-theme
sudo add-apt-repository ppa:noobslab/icons
sudo apt-get update
sudo apt-get install arc-icons
```
For Vivacious theme and icons:
```
sudo add-apt-repository ppa:ravefinity-project/ppa
sudo apt-get update
sudo apt-get install vivacious-colors-gtk-dark
```
## To install network speed indicator
```
sudo add-apt-repository ppa:nilarimogard/webupd8
sudo apt-get update
sudo apt-get install indicator-netspeed
```
## To install system monitor indicator
```
sudo apt-get install indicator-multiload
```
## To uninstall ImageMagick
Don't remove it. It might harm your system.
Ref: http://askubuntu.com/questions/764553/how-to-uninstall-image-magick
## XMind
To install:
Ref: http://www.xmind.net/download/linux/
To uninstall:
Open "Ubuntu Software", go to the "Installed" tab, and click "Remove" next to XMind.
## Install new themes in jupyter
One way to do this is to replace the `custom.css` file in `~/.jupyter/custom`
Or you can also tweak using the command line.
Ref: https://github.com/dunovank/jupyter-themes
## To install texlive
```
sudo apt-get install texstudio
sudo apt-get install texlive-full
```
Refs:
1. http://www.texstudio.org/
2. https://www.tug.org/texlive/
## Using namebench to determine best DNS server for you
Ref: https://code.google.com/archive/p/namebench
## To change DNS
Go to "Edit Connections" by clicking the WiFi icon in the notification bar. Select your WiFi connection and click "Edit". Go to "IPv4 Settings" and change the method to "Automatic (DHCP) addresses only". Enter the DNS address in the DNS field and save. Then restart the WiFi connection by clicking your network again in the notification bar.
Ref: https://developers.google.com/speed/public-dns/docs/using
## To install Kodi
```
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:team-xbmc/ppa
sudo apt-get update
sudo apt-get install kodi
```
Ref: http://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux
## To install Go
```
sudo apt-get install golang
```
## To install NodeJs
```
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo apt-get install -y build-essential
```
Ref: https://nodejs.org/en/download/package-manager/
## LIFE SAVER for graphics issues caused by CUDA
Ref: https://askubuntu.com/questions/760934/graphics-issues-after-while-installing-ubuntu-16-04-16-10-with-nvidia-graphics
## CUDA installation references
Ref:
1. https://askubuntu.com/questions/57994/root-drive-is-running-out-of-disk-space-how-can-i-free-up-space
2. http://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions
3. https://askubuntu.com/questions/799184/how-can-i-install-cuda-on-ubuntu-16-04
4. https://askubuntu.com/questions/767269/how-can-i-install-cudnn-on-ubuntu-16-04
**Note: Don't do it in a virtualenv**
Export path needs to be added to `~/.bashrc`
```
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
export CUDA_HOME=/usr/local/cuda${CUDA_HOME:+:${CUDA_HOME}}
```
## Symbolic links are like shortcuts in Windows
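A minimal example with `ln -s`:
```
echo "hello" > original.txt
ln -s original.txt shortcut.txt   # -s creates a symbolic (soft) link
cat shortcut.txt                  # reads through the link, prints "hello"
readlink shortcut.txt             # shows where the link points
```
Deleting `shortcut.txt` leaves `original.txt` untouched, just like deleting a Windows shortcut.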
## Caffe installation references
Ref: https://software.intel.com/en-us/forums/intel-math-kernel-library/topic/509013
Install order should be:
*virtualenv; numpy, scipy, etc.; OpenBLAS; OpenGL (already present on the system); Intel MKL; NVIDIA drivers; CUDA; cuDNN; OpenCV; Caffe; TensorFlow; TFLearn; Theano; OpenAI; Keras; PyTorch*
@ -0,0 +1,14 @@
import MarkdownBlogComponent from "@/components/MarkdownBlogComponent";
import path from "path";
import { fileURLToPath } from "url";
export default async function BlogPage() {
return (
<MarkdownBlogComponent
blogFilePath={path.join(
path.dirname(fileURLToPath(import.meta.url)),
"content.md",
)}
/>
);
}
@ -0,0 +1,259 @@
# WireGuard on Kubernetes with Adblocking
Let's be frank: the Internet is simply unusable with all the ads floating around.
I use the [uBlock Origin](https://github.com/gorhill/uBlock) extension in my browser, as do most of the people reading this genre of articles, but the same is not true for the majority of the population, including other members of my family. So in order to enhance their web browsing experience I decided to block ads at the DNS level.
But why stop there, I thought; why not also improve their privacy while I'm at it? So I also decided to set up a VPN. Now let me clarify some things here: I'm not a big fan of VPNs as they're advertised by the big companies; here's a [great video by Tom Scott](https://www.youtube.com/watch?v=WVDQEoe6ZWY) explaining what I mean. But they also have their use cases, some of which are:
- Prohibiting ISPs from collecting data on my browsing patterns
- Circumvent internet censorship
- Connect to my home network from anywhere
I took a look at apps like [Blokada](https://blokada.org/) and [DNS66](https://github.com/julian-klode/dns66), which grant you device-wide ad blocking on mobile devices. On Android, [the way this works is](https://block.blokada.org/post/2018/06/17/how-does-blokada-work) by creating an internal VPN on the phone, so that all traffic from the device is routed through it and filtered against a file of blacklisted domains.
But there's a caveat with this approach. I cannot use another VPN to route my traffic, due to an Android [limitation](https://developer.android.com/reference/android/net/VpnService).
> There can be only one VPN connection running at the same time. The existing interface is deactivated when a new one is created.
This means my browsing patterns are still accessible to my ISP. So I started looking for ad-blocking DNS servers that I could point Android's global DNS to. I found [AdGuard](https://adguard.com) and [PiHole](https://pi-hole.net) to be the top projects. Hosting these at home on a Raspberry Pi seemed like a plausible solution, but then again one can't use it while traveling. The obvious solution is to host it on a publicly accessible server. Of the two, I found AdGuard more appealing for the following reasons:
- It has out of the box support for DNS-over-TLS
- It maintains a single file for its entire configuration
- It's written in Golang, and is much lighter on resources compared to PiHole
You can find more about their differences [here](https://github.com/AdguardTeam/AdGuardHome#how-does-adguard-home-compare-to-pi-hole). Both are great projects, but AdGuard met my requirements perfectly.
When it comes to the VPN, I did not even consider using OpenVPN; [WireGuard](https://www.wireguard.com) was the obvious choice because of its speed, its smaller and easily auditable codebase (not that I was going to audit it, but still), its cross-platform compatibility, and its integration into the Linux kernel.
Being a big fan of Kubernetes and maintaining infrastructure as code, I wanted a way to easily deploy and version-control my deployment. After much searching I stumbled upon [kilo](https://github.com/squat/kilo), a network overlay built on WireGuard for Kubernetes. It could do exactly what I wanted, while also enhancing the security of my cluster by encrypting inter-pod communication and allowing me to build secure clusters over nodes spanning multiple cloud providers. It would also give me the added benefit of easily debugging applications deployed on my Kubernetes cluster: when connected, I would be a peer on the network, with access to all the private IPs of the deployments, services, etc. You can watch [this talk by Lucas Servén Marín](https://www.youtube.com/watch?v=iPz_DAOOCKA) to know more.
Without further ado, let's jump right into the setup. I'll be explaining the steps for setting it up on a [k3s](https://k3s.io) cluster; you may need to modify them as per your cluster. After deploying k3s, the first thing to do is set up kilo. Start by downloading the [manifest](https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-k3s.yaml) for kilo on k3s.
```sh
curl -LO https://raw.githubusercontent.com/squat/kilo/master/manifests/kilo-k3s.yaml
```
Now modify the manifest, adding `- --mesh-granularity=full` under `args` for the `kilo` container in the `DaemonSet` section.
```
...
containers:
- name: kilo
image: squat/kilo
args:
- --kubeconfig=/etc/kubernetes/kubeconfig
- --hostname=$(NODE_NAME)
- --mesh-granularity=full
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
...
```
This is done to ensure all our nodes are meshed together regardless of the datacenter. Then simply apply the manifest.
```sh
kubectl apply -f kilo-k3s.yaml
```
This will be useful later when setting up the WireGuard VPN. Now we can proceed to set up AdGuard. Here is the spec for AdGuard.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: adguardhome
spec:
selector:
matchLabels:
app: adguardhome
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 0
template:
metadata:
labels:
app: adguardhome
spec:
volumes:
- name: tls-cert-secret
secret:
secretName: production-tls-cert
- name: adguard-config
hostPath:
path: "/path/to/store/conf"
type: DirectoryOrCreate
- name: adguard-logs
hostPath:
path: "/path/to/store/work"
type: DirectoryOrCreate
containers:
- name: adguardhome
image: adguard/adguardhome:v0.102.0
ports:
# Regular DNS Port
- containerPort: 53
hostPort: 53
protocol: UDP
- containerPort: 53
hostPort: 53
protocol: TCP
# DNS over TLS
- containerPort: 853
hostPort: 853
protocol: TCP
volumeMounts:
- name: tls-cert-secret
mountPath: /certs
- name: adguard-config
mountPath: /opt/adguardhome/conf
- name: adguard-logs
mountPath: /opt/adguardhome/work
terminationGracePeriodSeconds: 20
---
apiVersion: v1
kind: Service
metadata:
name: adguardhome
labels:
app: adguardhome
spec:
type: ClusterIP
selector:
app: adguardhome
ports:
- port: 80
# targetPort: 3000
targetPort: 80
protocol: TCP
```
The `RollingUpdate` is intentionally configured to not wait till a new pod is up, and directly terminate the existing pod during deploys. On the surface it seems like an anti-pattern, but since I'm using the `hostPort` directive, a new Pod wouldn't get scheduled unless port `53` was available on the host for it to bind to, so the existing Pod has to terminate before a new Pod can be deployed.
Also, I initially intended to use a `ConfigMap` for holding `AdGuardHome.yml`, but there was some issue with AdGuard trying to write to it while initially coming up; since `ConfigMap`s are mounted as `ReadOnly`, Pod creation used to fail, so I decided to go with a volume instead until I could figure out the issue.
For the initial setup, the AdGuard admin UI will be accessible on port 3000, so you'll have to switch the `targetPort` to `3000` in the `adguardhome` service initially, access the admin UI and set up the password, and then revert the `targetPort` to `80`.
You may also enable `DNSSEC` under `DNS Settings` for guaranteeing authenticity of DNS responses by signing them, and making tampering detectable.
The volume mounts for `tls-cert-secret` are only necessary if you want to enable DNS-over-TLS. And you need to configure your ingress resource before mounting it here.
For enabling DNS-over-TLS, go to `Encryption Settings` -> `Enable Encryption` in the AdGuard admin UI, and enter the Certificate path as `/certs/tls.crt` and the Key path as `/certs/tls.key`. Again, let me reiterate that your Ingress resource needs to be configured properly and you need a valid TLS certificate for the domain you're hosting AdGuard on.
On Android phones running Android Pie or later, go to `Settings` -> `WiFi and Internet` -> `Private DNS`, select `Private DNS provider hostname`, and set it to the domain name you've configured above. You may also configure it on your home router to give all your devices DNS-level ad blocking.
This concludes the AdGuard part of the setup.
---
Now coming to setting up WireGuard.
I'll be referring to the k3s cluster as the server and the local laptop as the client from here on.
You'll need WireGuard installed on both your server and client machine. Follow the [steps as per your distribution](https://www.wireguard.com/install) to install it. If you're using a bleeding-edge distro like Arch Linux or Gentoo, you don't need to do anything on the server side, since WireGuard is already baked into the kernel at this point. On the client side, however, you'll still need to install it to get the command-line tools for enabling / disabling the interface.
Another useful tool to have on the client side is `kgctl`. You can install it using
```sh
go get github.com/squat/kilo/cmd/kgctl
```
Now we need to create a private and a public key pair on the client.
```sh
wg genkey | tee privatekey | wg pubkey > publickey
```
This'll create 2 files with the respective key contents. This key pair needs to be authorized on the server. You can do this simply by creating a peer resource. Create a file named `archie.yaml`
```yaml
apiVersion: kilo.squat.ai/v1alpha1
kind: Peer
metadata:
name: archie
spec:
allowedIPs:
- 10.120.120.1/32 # This is just an example; you can use any valid available CIDR here
publicKey: CLIENT_PUBLIC_KEY # Enter the public key here, the one you just generated
persistentKeepalive: 10
```
Finally apply the manifest
```sh
kubectl apply -f archie.yaml
```
Remember, the `allowedIPs` should be a valid CIDR, which is available on both the server and the client. Now we can use the `kgctl` tool to generate the `peer` section of the client WireGuard config.
```sh
kgctl showconf peer archie
```
This will return something like
```
[Peer]
AllowedIPs = 10.42.0.0/24, 10.42.0.0/32, 10.4.0.1/32
Endpoint = YOUR_SERVER_IP:51820
PersistentKeepalive = 10
PublicKey = SERVER_PUBLIC_KEY
```
Create a file on the client named `adguard.conf` in `/etc/wireguard` and add the following contents to it
```
[Interface]
Address = 10.120.120.1/32 # Use the same CIDR whitelisted in the Peer manifest
PrivateKey = CLIENT_PRIVATE_KEY # The one you generated above on your client
DNS = YOUR_SERVER_IP # Enter the IP of the server to block ads; an FQDN doesn't work on Android for some reason
[Peer]
AllowedIPs = 0.0.0.0/0, ::/0 # This modification is important to route all the traffic from your machine via wireguard interface
Endpoint = YOUR_SERVER_IP:51820
PersistentKeepalive = 10
PublicKey = SERVER_PUBLIC_KEY # as received from the config above
```
Now to enable the VPN on your client machine you may use
```sh
wg-quick up adguard
```
You can verify you're connected to the VPN server by visiting [ifconfig.io](https://ifconfig.io). It should show your IP as the IP of your server.
To disconnect, you may use
```sh
wg-quick down adguard
```
Using the same process outlined above you can add multiple Peers and create multiple configs. You can also install and use the `qrencode` utility to convert the same config to a QR code for easy scanning on mobile phones.
```sh
qrencode -t ansiutf8 < /etc/wireguard/adguard.conf
```
This will print a QR code right in your terminal. You can install the [WireGuard Android app](https://f-droid.org/en/packages/com.wireguard.android/) on your phone and scan this QR code to import the config.
And voilà: whenever you're connected, your ISP can't snoop on the websites you're visiting; all your traffic is filtered for ads; your data is routed through the server, protecting you against malicious actors on your local network; and, as the cherry on top, all your devices can connect and talk to each other regardless of the location or network they are on.
---
Guess this post made it to the front page on Y Combinator's Hacker News! You can find the [HN post here](https://news.ycombinator.com/item?id=23812063). And here's the [web archive link](https://web.archive.org/web/20200712214849/https://news.ycombinator.com) (8th one on this). This invited a good amount of traffic onto my blog :)
![Bandwidth Usage](../images/codingcoffee-dev-cloudflare-webtraffic-20200714.png)
@ -0,0 +1,14 @@
import MarkdownBlogComponent from "@/components/MarkdownBlogComponent";
import path from "path";
import { fileURLToPath } from "url";
export default async function BlogPage() {
return (
<MarkdownBlogComponent
blogFilePath={path.join(
path.dirname(fileURLToPath(import.meta.url)),
"content.md",
)}
/>
);
}
@ -185,6 +185,16 @@
user-select: none;
}
.markdown h1 {
font-size: 2rem;
text-align: center;
}
.markdown h2 {
font-size: 1.5rem;
font-weight: 600;
}
.markdown hr {
height: 4px;
background: var(--sidebar-accent);
@ -0,0 +1,27 @@
import MarkdownRenderer from "@/components/MarkdownRenderer";
import { promises as fs } from "fs";
export default async function MarkdownBlogComponent({
blogFilePath,
}: {
blogFilePath: string;
}) {
console.log(blogFilePath);
try {
const markdownContent = await fs.readFile(blogFilePath, "utf8");
return (
<main className="flex flex-1 flex-col justify-end items-center font-[family-name:var(--font-spacegrotesk-sans)] pt-10 md:pt-20 pb-25 md:pb-0">
<div className="md:w-[786px] w-[95%]">
<div className="p-5 pt-10 rounded-lg markdown">
<MarkdownRenderer markdown={markdownContent} />
</div>
</div>
</main>
);
} catch (error) {
console.error("Error loading markdown:", error);
return <div>Failed to load content</div>;
}
}