blog.mirabellette.eu

A blog about digital independence and autonomy

Welcome to blog.mirabellette.eu

By Mirabellette

I hope you will enjoy your visit and find some interesting knowledge. Below is a short summary of the information and services available on the blog:

  • Articles - All published articles, sorted by category - very useful for browsing
  • Why - A short explanation of why I think digital independence matters today
  • Projects - Some information about the purpose of this blog
  • Services - A list of free services I host for you
  • Media - Some information about how to contact me
  • About - Some information about me and the license of the articles
I wish you a good visit,
Mirabellette

A highly secure OpenVPN 2.4 configuration in 2018

Written by Mirabellette / 4 November 2018 / 2 comments

Hello everyone,

Introduction

Today I would like to talk about OpenVPN. For those who do not know, OpenVPN is "a free and open-source software application that implements virtual private network (VPN) techniques to create secure point-to-point or site-to-site connections in routed or bridged configurations and remote access facilities" (from Wikipedia). It is developed by OpenVPN Inc., which also offers a commercial VPN service called Private Tunnel.

Nowadays, there are plenty of OpenVPN tutorials describing how to install and configure it. Unlike most of them, I will try to choose the most secure configuration possible in 2018 for OpenVPN 2.4.0 on Debian 9.5. I will describe each step of the tutorial in as much detail as possible so that it can be clearly understood.

Preliminaries

First of all, we have to install the OpenVPN package and some extra tools as the root user:

apt update
apt upgrade
apt install -y iptables-persistent openvpn vim sudo

Generate certificates

Downloading and configuring EasyRSA

To generate the certificates, we will use EasyRSA, a command-line utility to build and manage keys and certificates. You can download the latest version here.

mkdir /tmp/openvpn
cd /tmp/openvpn/
wget https://github.com/OpenVPN/easy-rsa/releases/download/v3.0.5/EasyRSA-nix-3.0.5.tgz
tar xf EasyRSA-nix-3.0.5.tgz
cd EasyRSA-3.0.5
cp vars.example vars
vim vars

You must now modify the vars file to enable elliptic-curve mode and improve the hash algorithm used.

# Enable elliptic crypto mode
set_var EASYRSA_ALGO ec

# Define the named curve - choose what you like and what is supported - openvpn --show-curves
set_var EASYRSA_CURVE secp521r1

# In how many days should the root CA key expire?
set_var EASYRSA_CA_EXPIRE 3650

# In how many days should certificates expire?
set_var EASYRSA_CERT_EXPIRE 3650

# In how many days should the certificate revocation list (CRL) expire?
set_var EASYRSA_CRL_DAYS 3650

# Define the cryptographic digest to use; unfortunately, only MD5 and the SHA family are currently available with EasyRSA
set_var EASYRSA_DIGEST "sha512"
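For reproducible setups, the edits above can be scripted with sed instead of made in vim. The sketch below applies them to a miniature stand-in vars file so it can be run anywhere; on the real server you would point the sed command at the vars file in the EasyRSA-3.0.5 directory.

```shell
D=$(mktemp -d)
# Miniature stand-in for vars.example; only the lines we change are reproduced.
cat > "$D/vars" <<'EOF'
#set_var EASYRSA_ALGO rsa
#set_var EASYRSA_CURVE secp384r1
#set_var EASYRSA_CA_EXPIRE 3650
#set_var EASYRSA_CERT_EXPIRE 1080
#set_var EASYRSA_CRL_DAYS 180
#set_var EASYRSA_DIGEST "sha256"
EOF
# Uncomment each option and set the hardened values chosen above.
sed -i \
    -e 's|^#*set_var EASYRSA_ALGO .*|set_var EASYRSA_ALGO ec|' \
    -e 's|^#*set_var EASYRSA_CURVE .*|set_var EASYRSA_CURVE secp521r1|' \
    -e 's|^#*set_var EASYRSA_CA_EXPIRE .*|set_var EASYRSA_CA_EXPIRE 3650|' \
    -e 's|^#*set_var EASYRSA_CERT_EXPIRE .*|set_var EASYRSA_CERT_EXPIRE 3650|' \
    -e 's|^#*set_var EASYRSA_CRL_DAYS .*|set_var EASYRSA_CRL_DAYS 3650|' \
    -e 's|^#*set_var EASYRSA_DIGEST .*|set_var EASYRSA_DIGEST "sha512"|' \
    "$D/vars"
cat "$D/vars"
```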

Generate certificates

Choose the Common Name of each certificate carefully, especially for the server certificate: the client will use it to verify the server certificate and avoid man-in-the-middle attacks.

# create the PKI directory that will store all the files
./easyrsa init-pki

# generate the certificate authority (CA)
./easyrsa build-ca nopass

# create server key (server.key) and certificate signing request (server.req)
./easyrsa gen-req server nopass

# sign the server certificate signing request with the authority certificate (produces server.crt)
./easyrsa sign-req server server

# create client key (client.key) and certificate signing request (client.req)
./easyrsa gen-req client nopass

# sign the client certificate signing request with the authority certificate (produces client.crt)
./easyrsa sign-req client client

# we will now create an easier-to-read directory tree
mkdir /tmp/openvpn/server/
mkdir /tmp/openvpn/server/certificates
cp /tmp/openvpn/EasyRSA-3.0.5/pki/ca.crt /tmp/openvpn/server/certificates/ca.crt
cp /tmp/openvpn/EasyRSA-3.0.5/pki/issued/server.crt /tmp/openvpn/server/certificates/server.crt
cp /tmp/openvpn/EasyRSA-3.0.5/pki/private/server.key /tmp/openvpn/server/certificates/server.key

mkdir /tmp/openvpn/client/
mkdir /tmp/openvpn/client/certificates
cp /tmp/openvpn/EasyRSA-3.0.5/pki/ca.crt /tmp/openvpn/client/certificates/ca.crt
cp /tmp/openvpn/EasyRSA-3.0.5/pki/issued/client.crt /tmp/openvpn/client/certificates/client.crt
cp /tmp/openvpn/EasyRSA-3.0.5/pki/private/client.key /tmp/openvpn/client/certificates/client.key
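You can check the Common Name with openssl. The sketch below builds a throwaway EC authority and server certificate (with the hypothetical CN vpn.example.org) so it runs without the EasyRSA tree; on the real server, point the last two commands at the files in /tmp/openvpn/server/certificates/ instead.

```shell
D=$(mktemp -d)
# Throwaway demo CA and EC server certificate, stand-ins for the real PKI.
openssl ecparam -name prime256v1 -genkey -noout -out "$D/ca.key"
openssl req -new -x509 -key "$D/ca.key" -subj "/CN=demo-ca" -days 1 -out "$D/ca.crt"
openssl ecparam -name prime256v1 -genkey -noout -out "$D/server.key"
openssl req -new -key "$D/server.key" -subj "/CN=vpn.example.org" -out "$D/server.csr"
openssl x509 -req -in "$D/server.csr" -CA "$D/ca.crt" -CAkey "$D/ca.key" \
    -CAcreateserial -days 1 -out "$D/server.crt" 2>/dev/null
# The subject CN printed here is what verify-x509-name must match on the client:
openssl x509 -noout -subject -in "$D/server.crt"
# The chain check the client performs against ca.crt:
openssl verify -CAfile "$D/ca.crt" "$D/server.crt"
```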

Network configuration

Let's assume that the SSH server listens on port 22 and the VPN server on port 443.

Firewall rules

We have to define firewall rules in the file /etc/iptables/rules.v4:

*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# Forward the VPN traffic to eth0
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

COMMIT

*filter

# Allow all loopback (lo) traffic and reject anything
# to localhost that does not originate from lo.
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -s 127.0.0.0/8 -j REJECT
-A OUTPUT -o lo -j ACCEPT

# Allow ping and ICMP error returns.
-A INPUT -p icmp -m state --state NEW --icmp-type 8 -j ACCEPT
-A INPUT -p icmp -m state --state ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -p icmp -j ACCEPT

# Allow SSH.
-A INPUT -i eth0 -p tcp -m state --state NEW,ESTABLISHED --dport 22 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m state --state ESTABLISHED --sport 22 -j ACCEPT

# Allow OpenVPN traffic on port 443.
-A INPUT -i eth0 -p tcp -m state --state NEW,ESTABLISHED --dport 443 -j ACCEPT
-A OUTPUT -o eth0 -p tcp -m state --state ESTABLISHED --sport 443 -j ACCEPT

# Allow DNS resolution on eth0.
# Necessary for updating the server and keeping time.
-A INPUT -i eth0 -p udp -m state --state ESTABLISHED --sport 53 -j ACCEPT
-A OUTPUT -o eth0 -p udp -m state --state NEW,ESTABLISHED --dport 53 -j ACCEPT

# Allow traffic on the TUN interface.
-A INPUT -i tun0 -j ACCEPT
-A OUTPUT -o tun0 -j ACCEPT

# Reject everything else.
-A INPUT -j REJECT
-A OUTPUT -j REJECT

COMMIT

Be sure to adapt these rules to your needs before applying them.

sudo iptables-restore < /etc/iptables/rules.v4

We can check that the rules were correctly applied:

sudo iptables -L

Enable IPv4 forwarding and disable IPv6

In the file /etc/sysctl.d/99-sysctl.conf, add the following lines to enable IPv4 forwarding and disable IPv6:

net.ipv4.ip_forward = 1

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.eth0.disable_ipv6 = 1

and then apply the new configuration:

sudo sysctl -p

Comment out the IPv6 lines in /etc/hosts:

#::1 localhost ip6-localhost ip6-loopback

Reject IPv6 traffic by editing the file /etc/iptables/rules.v6; it must contain:

*filter

-A INPUT -j REJECT
-A FORWARD -j REJECT
-A OUTPUT -j REJECT

COMMIT

and apply:

sudo ip6tables-restore < /etc/iptables/rules.v6

Examples of a very secure configuration

To continue, we will use the highly secure configuration below. Be careful: you need to follow the tutorial to the end to make it work.

Example of server configuration

Please copy the following code into /tmp/openvpn/server/server.conf. It is the OpenVPN server configuration. Don't forget to replace IP_OF_YOUR_OPENVPN_SERVER on the local line with your server's IP address.

#/tmp/openvpn/server/server.conf
local IP_OF_YOUR_OPENVPN_SERVER
dev tun
topology subnet
proto tcp
port 443
server 10.8.0.0 255.255.255.0
tls-server

ca /etc/openvpn/server/certificates/ca.crt
cert /etc/openvpn/server/certificates/server.crt
key /etc/openvpn/server/certificates/server.key
tls-crypt /etc/openvpn/server/certificates/tls_crypt.key

dh none
ecdh-curve secp521r1
auth SHA512
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-256-GCM-SHA384
cipher AES-256-GCM
ncp-ciphers AES-256-GCM
tls-version-min 1.2
persist-tun
compress

persist-key
keepalive 10 120

user ovpn
group ovpn

status /var/log/openvpn-status.log
log /var/log/openvpn.log

push "redirect-gateway"
push "dhcp-option DNS 10.8.0.1"
push "dhcp-option WINS 10.8.0.1"
push "route-ipv6 2000::/3"

Example of client configuration

Please copy the following code into /tmp/openvpn/client/client.conf. It is the OpenVPN client configuration. Don't forget to replace IP_OF_YOUR_OPENVPN_SERVER on the remote line with your server's IP address, and COMMON_NAME_OF_THE_SERVER_CERTIFICATE with the Common Name you gave to the server certificate (cf. Generate certificates).

# /tmp/openvpn/client/client.conf
client
dev tun
remote IP_OF_YOUR_OPENVPN_SERVER 443
proto tcp
resolv-retry infinite
compress
nobind
verify-x509-name "COMMON_NAME_OF_THE_SERVER_CERTIFICATE" name
remote-cert-tls server
auth SHA512
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-256-GCM-SHA384
cipher AES-256-GCM
tls-version-min 1.2
auth-nocache
persist-key
persist-tun

status /var/log/openvpn-status.log
log /var/log/openvpn.log
verb 3

ca /etc/openvpn/client/certificates/ca.crt
cert /etc/openvpn/client/certificates/client.crt
key /etc/openvpn/client/certificates/client.key
tls-crypt /etc/openvpn/client/certificates/tls_crypt.key

Harden OpenVPN configuration

This part will describe most of the security parameters chosen in the final configuration. You will be able to find an example of the final configuration in the last part.

TCP/IP protocol and port listening

To get around most firewall restrictions, it is highly recommended to switch from the UDP protocol to TCP. OpenVPN over TCP is a little slower, but it has a better chance of being reachable from a network you do not control.

You also have to configure the OpenVPN server to listen on port 443. Port 443 is the usual HTTPS port and is open in most firewalls. It will also make your OpenVPN server harder to detect, because it is not listening on the standard OpenVPN port, 1194.

proto tcp4
port 443

Diffie-Hellman

Diffie–Hellman key exchange is a method of securely exchanging cryptographic keys over a public channel. In OpenVPN, this protocol is used in the first steps of establishing the TLS connection. With a key of 1024 bits or less, it is vulnerable to the Logjam attack. The authors of the vulnerability recommend using primes of 2048 bits or more as a defense, or switching to elliptic-curve Diffie–Hellman. In this tutorial, we use elliptic-curve Diffie–Hellman.

HMAC signature during TLS handshake

Since OpenVPN 2.4, it is recommended to replace tls-auth with tls-crypt, mainly because tls-crypt also encrypts the TLS control channel. That means adding the following line to server.conf (note that tls-crypt, unlike tls-auth, takes no key-direction argument):

tls-crypt /etc/openvpn/server/certificates/tls_crypt.key

and generating the HMAC key file in the OpenVPN server directory, then copying it to the client directory:

openvpn --genkey --secret /tmp/openvpn/server/certificates/tls_crypt.key
cp /tmp/openvpn/server/certificates/tls_crypt.key /tmp/openvpn/client/certificates/tls_crypt.key
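tls-crypt uses a single symmetric key shared by both sides, so the two copies must be byte-identical. A standalone sketch of that check, using a random stand-in key so it runs without OpenVPN installed:

```shell
D=$(mktemp -d)
# Stand-in for the output of `openvpn --genkey --secret ...`.
head -c 256 /dev/urandom > "$D/server_tls_crypt.key"
cp "$D/server_tls_crypt.key" "$D/client_tls_crypt.key"
# Both sides must hold byte-identical copies of the shared tls-crypt key.
cmp -s "$D/server_tls_crypt.key" "$D/client_tls_crypt.key" && echo "keys match"
```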

Persistent tun/tap device and key

If your connection is interrupted and OpenVPN is trying to reconnect, you may be using the default network routes again, bypassing the tunnel. When accessing private networks this might not be a big issue, as the network addresses may not be reachable from outside the tunnel, but it may expose information you would rather keep private, like an HTTP request containing cookies. The persist-tun option keeps the tun device open across restarts and avoids this.

persist-key is not a security option but solves a problem: if OpenVPN restarts after dropping privileges, the OpenVPN user can no longer read the key file. This parameter avoids that situation by keeping the key in memory.

persist-tun
persist-key

Limited user

This is probably one of the most important configuration settings. To limit the impact of an OpenVPN vulnerability, it is highly recommended to run it as a user with limited rights. To do that, we create a dedicated user with limited privileges for OpenVPN:

adduser --system --shell /usr/sbin/nologin --no-create-home ovpn
groupadd ovpn
usermod -a -G ovpn ovpn

and specify the user in the OpenVPN configuration:

user ovpn
group ovpn

Ciphers and digests

In the file /etc/openvpn/server.conf, we have to specify the ciphers and digests we want to use. It is recommended to use GCM instead of CBC. More information about why here and here.

tls-version-min 1.2
ncp-ciphers AES-256-GCM
tls-cipher TLS-ECDHE-ECDSA-WITH-AES-256-GCM-SHA384
cipher AES-256-GCM
ecdh-curve secp521r1
dh none

We force SHA512 for the authentication mechanism; unfortunately, the best digests available are still from the SHA family. Enabling the auth-nocache parameter prevents passwords from being cached in memory.

auth-nocache
auth SHA512

Pushing configuration to the client

OpenVPN allows the server to push some network rules to the client. The most important is push "redirect-gateway", which forces all the client traffic through the VPN. This parameter replaces the default gateway route in the client configuration.

A second very useful possibility is to push a route that sends all IPv6 traffic into the tunnel, where it goes nowhere. This prevents an IPv6-capable client from leaking traffic outside the VPN.

push "route-ipv6 2000::/3"

Compression algorithms

In OpenVPN 2.3, the LZO compression algorithm was the default. Since OpenVPN 2.4, the LZ4 algorithm is available. Contrary to what the documentation says, it is not the best one available: there is also a version 2, lz4-v2, which is available but not yet documented.

If security is your only criterion, you should disable this feature. Indeed, a family of vulnerabilities such as BEAST, CRIME and VORACLE exists. These vulnerabilities can allow an attacker to gain information about an encrypted communication in very specific circumstances. Put differently, if the attacker knows exactly what the victim is supposed to receive and can capture enough packets with small, predictable changes, this can reduce the time required to break the encryption.

In my opinion, exploiting this kind of vulnerability in the real world is really complicated. The OpenVPN company recommends disabling compression and has already done so in its products. In the configuration files provided here it is, of course, disabled: the bare compress directive enables the compression framing without selecting any algorithm, which effectively disables compression. Feel free to run some performance tests with speedtest if you hesitate.

Tuning and performance improvement

OpenVPN can be tuned in many ways to improve performance. I will not say much about that because it deserves an article of its own. You could, for example, change the encryption algorithm or the transport protocol, or adjust the following parameters:

  • mssfix
  • fragment
  • tun-mtu
  • compression algorithm

Feel free to read this page to know more about tuning OpenVPN.

Deployment and cleaning

Congratulations! The most complicated part is over. We now just have to deploy each directory in its proper place.

Directory structure

You should have the following directory structure in /tmp/openvpn/server/. The server needs these files to run properly:

  • server.conf
  • certificates/ca.crt
  • certificates/server.key
  • certificates/server.crt
  • certificates/tls_crypt.key

You should also have the following directory structure in /tmp/openvpn/client/. The client needs these files to run properly:

  • client.conf
  • certificates/client.crt
  • certificates/client.key
  • certificates/tls_crypt.key

A reader made a very good comment about the client configuration file. You can embed in client.conf all the cryptographic material required to establish the connection to the OpenVPN server, which makes it easier to transport and manage. To do that, remove the ca, cert, key and tls-crypt lines and replace them with the following blocks.

<ca>
--STRIPPED INLINE CA CERT--
</ca>
<cert>
--STRIPPED INLINE CERT--
</cert>
<key>
--STRIPPED INLINE KEY--
</key>
<tls-crypt>
--STRIPPED INLINE KEY--
</tls-crypt>
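This replacement can be scripted. The sketch below assembles a single-file client.ovpn from a configuration file and dummy stand-in PEM files so it runs anywhere; on a real system you would point it at the files under /tmp/openvpn/client/.

```shell
D=$(mktemp -d)
mkdir -p "$D/certificates"
# Dummy stand-ins for the real PEM files generated earlier.
printf 'DUMMY CA\n'        > "$D/certificates/ca.crt"
printf 'DUMMY CERT\n'      > "$D/certificates/client.crt"
printf 'DUMMY KEY\n'       > "$D/certificates/client.key"
printf 'DUMMY TLS-CRYPT\n' > "$D/certificates/tls_crypt.key"
# Minimal stand-in for client.conf, including one file-based directive.
printf 'client\ndev tun\nca %s/certificates/ca.crt\n' "$D" > "$D/client.conf"
{
  # Keep every directive except the four file-based ones...
  grep -vE '^(ca|cert|key|tls-crypt) ' "$D/client.conf"
  # ...then inline the corresponding file contents between matching tags.
  printf '<ca>\n';        cat "$D/certificates/ca.crt";        printf '</ca>\n'
  printf '<cert>\n';      cat "$D/certificates/client.crt";    printf '</cert>\n'
  printf '<key>\n';       cat "$D/certificates/client.key";    printf '</key>\n'
  printf '<tls-crypt>\n'; cat "$D/certificates/tls_crypt.key"; printf '</tls-crypt>\n'
} > "$D/client.ovpn"
cat "$D/client.ovpn"
```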

Last configuration and deployement

Before moving the directories, we will make them readable only by the root user. We keep the execute bit on directories so they can still be traversed; a plain chmod -R 400 would make the directories themselves inaccessible.

chmod -R u=rX,go= /tmp/openvpn/

We will also copy all the public and private keys to a safe place and move the server directory to the right place:

cp -r /tmp/openvpn/server /etc/openvpn/
cp -r /tmp/openvpn/ /etc/ssl/openvpn-pki
ln -s /etc/openvpn/server/server.conf /etc/openvpn/server.conf

On the client, which should be another computer, you just have to copy the directory /tmp/openvpn/client to /etc/openvpn/ and create a symbolic link from /etc/openvpn/client/client.conf to /etc/openvpn/client.conf.
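The expected layout on the client can be sketched with a throwaway directory standing in for the client's /etc/openvpn, showing the symbolic link the init scripts look for:

```shell
D=$(mktemp -d)                       # stand-in for the client's /etc/openvpn
mkdir -p "$D/client"
printf 'client\ndev tun\n' > "$D/client/client.conf"   # stand-in configuration
# The top-level symlink is what lets OpenVPN find client.conf.
ln -s "$D/client/client.conf" "$D/client.conf"
cat "$D/client.conf"
```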

Running

To run OpenVPN with systemd, you need to modify /etc/default/openvpn by uncommenting AUTOSTART="all" and replacing "all" with the name of the configuration file. For example, on the server side, you should replace "all" with "server". Then reload systemd:

systemctl daemon-reload

You can now start it with systemctl:

systemctl start openvpn

You need to do the same thing for the client.
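The /etc/default/openvpn edit can also be done non-interactively. A standalone sketch on a miniature copy of the file (editing the real one requires root):

```shell
D=$(mktemp -d)
# Miniature stand-in for /etc/default/openvpn.
printf '#AUTOSTART="all"\n' > "$D/openvpn"
# Uncomment AUTOSTART and name the configuration to start, here "server".
sed -i 's|^#AUTOSTART="all"|AUTOSTART="server"|' "$D/openvpn"
cat "$D/openvpn"
```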

Cleaning

Be careful: do not forget to remove the directory /tmp/openvpn. It contains all your certificates and private keys.
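Assuming everything has already been copied to /etc/openvpn and /etc/ssl/openvpn-pki, the cleanup itself is a single, irreversible command:

```shell
# Irreversible: double-check the copies under /etc before running this.
rm -rf /tmp/openvpn
```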

Conclusion

Social media

Thank you for reading; I hope this article was helpful. Don't hesitate to comment, and if you think I made a mistake, it will be a pleasure to discuss it. If you find this article interesting, feel free to subscribe to the blog's RSS feed and to follow me on Mastodon.

Sources

This article would not have been possible if other people had not shared information about the topic. Their work helped me a lot, thank you!

I failed to install Firefox Accounts Server

Written by Mirabellette / 6 October 2018 / 3 comments

Note: the Firefox Accounts Server is not the Firefox Sync Server.

In order to become more and more independent, and because I trust the Mozilla Foundation less and less, I decided to manage the Firefox authentication system myself (without Docker). For those who do not know, Firefox separated the authentication system from the storage management system. You can manage your data (bookmarks, history, tabs, profile) with Firefox Sync; I deployed it previously and a tutorial is available here. After hosting the most important part of the data Firefox manages, I wanted to host the whole thing. I worked on it for 21 hours and was still not able to run it properly, so I decided to share my experience.

Criticism

The Firefox Accounts Server is built following a microservices architecture. For those who do not know it, this divides an application into smaller applications, each of which should have a specific role and perimeter: for example, a microservice dedicated to sending email, or another dedicated to the user interface. However, if not well built and documented, this architecture can have some disadvantages. You can find below a list from Wikipedia:

  • Services form information barriers
  • Inter-service calls over a network have a higher cost in terms of network latency and message processing time than in-process calls within a monolithic service process
  • Testing and deployment are more complicated
  • Moving responsibilities between services is more difficult. It may involve communication between different teams, rewriting the functionality in another language or fitting it into a different infrastructure
  • Viewing the size of services as the primary structuring mechanism can lead to too many services when the alternative of internal modularization may lead to a simpler design.

Unfortunately, I think the Firefox Accounts Server falls into most of these traps. They are improving it, but there is still much work to do, especially because it seems the Mozilla Foundation wants to maintain compatibility with the past. You can find below the list of issues I ran into, which made it really hard to deploy and which shows why it is obsolete.

  • Each microservice has its own structure. In some of them the configuration is in config/index.js, in another it is in /server/config/local.json, and in yet another there are two files to configure
  • Each microservice has its own way of running. The run command may differ, and in some cases you need to build the code before it can run
  • The documentation is clearly lacking (no systemd unit, no reverse-proxy configuration). Anybody who tries to run it by following the documented process will usually fail, because parts of it are undocumented or obsolete

Regarding the Firefox Accounts Server in general, I am sorry to say it, but it is clearly out of date and contains vulnerabilities. Regarding obsolescence, there is the requirement to use MySQL 5.6; regarding vulnerabilities, the Node module vulnerabilities. It is not ready to be deployed by anyone who does not work on this project or on the Mozilla Firefox platform. I cannot imagine for one second a system administrator without development skills being able to deploy it in less than 3 days.

Just another example of the mess: I opened an issue here about the difficulties I had. Two people from Mozilla answered. The first answer was pertinent and helped me in the process. The second one was clearly off-topic; I am not even sure he read the issue. He just repeated one thing I had said, a thing which does not work, and he closed the issue without giving a fuck. Yes, he closed it without waiting for my answer. I had spent three days trying to make it work before asking for help, and my issue was closed with, essentially, "OK, thank you".

My responsibility

My lack of knowledge was, of course, one reason I could not succeed at this task. Even though I have deployed dozens of applications, I am not used to deploying microservice applications. The only comfort I have is that I am not the only one who did not succeed.

Installation process

I spent three days deploying and configuring the Firefox Accounts Server. For those who are interested, you can find below the process I followed to run the services. I was able to run 5 services (maybe more are required to make the whole thing work), but some of them still have issues. The list of microservices I deployed:

  • fxa-auth-db-mysql
  • fxa-auth-server
  • fxa-content-server
  • fxa-oauth-server
  • fxa-profile-server

Global installation

To prepare the system, you need to do the following:

adduser --system --shell /usr/sbin/nologin --group firefox

As npm needs a home directory, we do not add the --no-create-home option.

apt update && apt install -y git python sudo make gcc g++

On Debian 9, you need to install mysql-server itself, not MariaDB:

apt install lsb-release # necessary to install mysql
wget https://dev.mysql.com/get/mysql-apt-config_0.8.10-1_all.deb
dpkg -i mysql-apt-config_0.8.10-1_all.deb
apt update
apt install mysql-server

You have to choose MySQL version 5.6; I tested with version 8 and with MariaDB and they do not work.

cd /opt
# Get the latest stable version of Node.js
wget https://nodejs.org/dist/v8.12.0/node-v8.12.0-linux-x64.tar.xz -P /opt
tar xf node-v8.12.0-linux-x64.tar.xz
ln -s /opt/node-v8.12.0-linux-x64/bin/node /bin/
ln -s /opt/node-v8.12.0-linux-x64/bin/npm /bin/

Tips

To find the configuration files easily, I recommend using grep as much as possible and reading the package.json file, which can help you find the run commands. You can find interesting things with:

grep -R 127.0.0.1 --exclude-dir=node_modules *
grep -R public_url -i --exclude-dir=node_modules *

Part of the installation process of Firefox Accounts database service

I still have issues with it. db.example.com

git clone https://github.com/mozilla/fxa-auth-db-mysql.git
chown firefox:firefox -R fxa-auth-db-mysql
cd /opt/fxa-auth-db-mysql
sudo -u firefox npm install
# found 28 vulnerabilities (21 low, 5 moderate, 1 high, 1 critical)

sudo -u firefox NODE_ENV=prod npm start

vim config/config.js

Firefox Accounts Server

I still have issues with it. auth.example.com

git clone git://github.com/mozilla/fxa-auth-server.git
chown firefox:firefox fxa-auth-server
cd /opt/fxa-auth-server
sudo -u firefox npm install --production
sudo -u firefox NODE_ENV=prod npm start

To change the listen address of the server, you have to modify the file config/index.js and replace the following block:

publicUrl: {
  format: 'url',
  default: 'http://127.0.0.1:9000',
  env: 'PUBLIC_URL'
},

Firefox Accounts Content Server

account.example.com
You will need to install OpenJDK. Run apt-cache search java | grep openjdk and install the most recent version available for your distribution; for me, it was openjdk-8-jre.

apt update && apt install openjdk-8-jre

git clone https://github.com/mozilla/fxa-content-server.git
chown firefox:firefox -R fxa-content-server
cd /opt/fxa-content-server
sudo -u firefox npm install --production
sudo -u firefox npm install bluebird
sudo -u firefox npm run build-production
# found 7 vulnerabilities (6 low, 1 moderate)
sudo -u firefox NODE_ENV=production npm run start-production

All the configuration is in the file server/config/local.json-dist. The Firefox Content Server loads its configuration from a file we must create, which should be a copy of local.json-dist.

cd config/
sudo -u firefox cp local.json-dist local.json
# First of all, we have to replace the secret "YOU_MUST_CHANGE_ME":
head -c 20 /dev/urandom | sha1sum

vim server/lib/configuration.js

default: 'http://127.0.0.1:3030'

public_url: {
  default: 'http://127.0.0.1:3030',
  doc: 'The publically visible URL of the deployment',
  env: 'PUBLIC_URL'
},

I recommend disabling CSP, because their implementation is completely obsolete: they are still using x-content-security-policy, even though it has been obsolete since Firefox 23!

vim server/config/production.json
# csp:false

Firefox Accounts OAuth Server

oauth.example.com

git clone https://github.com/mozilla/fxa-oauth-server.git
chown firefox:firefox -R fxa-oauth-server/
cd /opt/fxa-oauth-server/
sudo -u firefox npm install
# found 7 vulnerabilities (5 low, 1 high, 1 critical)
sudo -u firefox npm audit fix
sudo -u firefox npm start

Firefox Accounts Profile Service

profile.example.com

apt update && apt -y install graphicsmagick

git clone https://github.com/mozilla/fxa-profile-server.git
chown firefox:firefox -R fxa-profile-server
cd /opt/fxa-profile-server
sudo -u firefox npm install
# found 14 vulnerabilities (7 low, 6 moderate, 1 high)
sudo -u firefox NODE_ENV=prod npm start
vim lib/config.js

Sources

Conclusion

I hope this will motivate you NOT to try to install it, and save you time. I hope they will improve it and make it easier to configure and deploy. Maybe one day we will be able to use only the Mozilla Firefox browser and manage everything behind it ourselves. Maybe.

Social media

If you find this article useful, feel free to subscribe to my RSS feed and to follow me on Mastodon. Don't hesitate to share it if you think it could interest someone.

Some news about the blog 5 : July-August 2018

Written by Mirabellette / 10 September 2018 / no comments

Hello everyone,

I had decided to publish an article about the blog in general every month. Contrary to what I said, I now publish it every two months; a monthly article no longer seems really relevant. In this article, you will find:

  • What I achieved during this period.
  • What I contributed to the community.
  • How popular the blog and its services are.
  • A balance sheet of the period.
  • Some words about my plans for the next period.

This article is about the month of July-August 2018.

Period achievements

Articles

Events

  • Nothing special.

The blog

  • Improved the accuracy of the statistics tool.
  • Added a daily count of RSS requests grouped by IP address. It helps me know whether the blog interests people.
  • Added a short text about the personal data I store and how I handle it.

Give back to the community

  • As I based my bot filter on a GitHub repository of known bots, I can now automatically extract bots which request my website but are not yet in that repository. I will extract this list each month and contribute it back to the repository.
  • I submitted a small documentation commit to Mastodon.
  • A small donation, as every month, to an association or service I find useful.

Balance sheet of the period

Statistics for this period

Some charts about the month of July:

[Charts for July: views per day, views per page, referers]

Some charts about the month of August:

[Charts for August: views per day, views per page, referers]

  • Except for the days around publication on social media, visits are around 30 per day

My point of view

I took a little time off at the end of August. I did not really know what to write about and was questioning the point of it all. Even though I know I primarily write for myself, I expected to bring something useful to the community. I am beginning to accept that this blog will not change anything, and to think about using my time in a better way.

For the next month

  • I do not know.

Classified in : Blog / Tags : none

Why and when to install a custom Android distribution?

Written by Mirabellette / 4 September 2018 / no comments

Hello guys,

Sorry for the small delay, but I was not sure what I wanted to write about for September.


Introduction

Today, I would like to talk about mobile operating systems, especially those based on Android. For those who do not know, Android is an open-source operating system, and each manufacturer may customise it with features or tweaks. A customised Android operating system is called a distribution. I do not know the iOS environment, which is why I will not talk about it here.

A little lexicon below:

  • iOS: iPhone operating system
  • FAD: Factory Android Distributions
  • CAD: Custom Android Distributions

The issues with the Factory Android Distribution (FAD)

Manufacturers put a lot of work into providing a good mobile phone. However, they are motivated by money, whereas users are motivated by a good experience and good products.

Firstly, the most important issue concerns updates. Android mobile phones tend to be updated for only about two years. After this period, your smartphone will not be updated any more, which means it will contain known vulnerabilities with no possibility of fixing them.

Your phone has very sensitive features (GPS, microphone, camera, sensitive personal data), so a compromised mobile phone can create a lot of problems. For example, the GPS could be used in an abusive way. See, for instance, the vulnerability published on the 29th of August.

You can find below the distribution of Android versions deployed on smartphones.

[Chart: Android version distribution]

You can see that in February 2018 there were:

  • Around 10% on Android 4.4 (released October 31, 2013)
  • Around 25% on Android 5.0-5.1 (released November 12, 2014)
  • Around 28% on Android 6.0 (released October 5, 2015)
  • Around 25% on Android 7.0-7.1 (released August 22, 2016)

I do not know if you realise how bad this is. It means around 90% of FADs are not up to date and contain known vulnerabilities; or, if we are less demanding, 65% are obsolete. For me, that means one thing: never trust your Android smartphone or the Android smartphones of your friends. iOS (the operating system for Apple phones) is better but not perfect about security updates. I could not find the chart, but most of those devices are "up to date".

Secondly, manufacturers are mainly interested in profit, or have to follow government rules. It appears that some devices track phone calls, contacts, data and phone usage.

Pros

  • Custom Android Distributions (CAD) generally provide a more recent Android version. That means better security, better performance, better features and better battery life.
  • A CAD does not contain the manufacturer's features and tweaks, and you are also free not to install the Google applications. That means no tracking features.
  • A CAD generally adds features that improve the management of your phone, for example better tools to manage backups, updates or security. They often have features to manage privacy more precisely, and some applications made by the maintainers are free to install.
  • I do not know about the other distributions, but the LineageOS community provides a very good tutorial on how to install it on your smartphone. An example can be found here with the Galaxy S3.

    Cons

    • Replacing the factory Android distribution with one of your choice is not easy and requires time. You need to understand the different steps of the process and, broadly, how an Android operating system works. Contrary to what you might think, you will not do any development. You also need to do a little analysis of what you will gain and lose, and you need to make the required backups. It took me approximately 12 hours to end up with a fully operational mobile phone, even though I did not have much prior knowledge of the process.
    • CADs do not contain manufacturer features and additions. This can be positive, but it can also be negative: you could lose manufacturer tweaks and get worse performance. You will never know before trying.
    • Most of the time, unlocking the bootloader (a step required to replace your Android distribution) will void the warranty.
    • Some features may not work properly (high energy consumption, cameras that do not work, or even occasional crashes). However, this may be fixed in a later release; LineageOS publishes one each week. For example, I spent one month without a front camera and GPS.
    • A CAD is less stable than a FAD: the phone may crash, and there is a higher chance of losing your data during an update. Fortunately, you also have better tools to get it back, but they may not work every time.

    When to replace the factory Android distribution?

    lineageos_logo replicant_logo

    For casual users or users who do not want a lot of issues,
    replace it when your mobile phone is no longer updated. In this situation, your phone is older than two years, so the CAD should be quite stable, the tutorials should be quite complete, and issues should be known, fixed or have workarounds available.

    For expert and experimental users,
    replace it some months after the manufacturer releases the new phone. This should give the maintainers time to develop a version stable enough for your phone. In case of issues, you should be able to roll back to the previous release on your own.

    Advice and warnings about a mobile phone with a CAD

    • Choose a fairly popular mobile phone. The more people use it, the more likely it is that a custom Android distribution will support it well. Fairly popular does not mean full of hardware backdoors; you have some choices.
    • Do as little as possible with your phone. First, because the mobile environment is far more dangerous than the desktop environment: proprietary applications can literally siphon your data, track your location, use your camera and listen to what is around you.

      Even if your phone is recent and up to date, it could be exploited to listen to what is around you, to locate you, or to film your surroundings. Second, because using a CAD means less stability; you should be ready for that.

    • Each custom Android distribution has its own purpose. Choose the one you will install carefully, with regard to stability, performance, security and maintainability.

    Conclusion

    You now have some arguments to help you make your decision.

    Sources

    Social media

    If you find this article interesting, feel free to subscribe to my RSS feed and to follow me on Mastodon. Don't hesitate to share it if you think it could interest someone else.

Important principles in cybersecurity - 2/2

Written by Mirabellette / 01 august 2018 / no comments

Introduction

Today, I would like to share the second part of the article about important principles in cybersecurity. You can find the first part of this series here.

No usability means no security

This is probably one of the most important principles in cybersecurity. When you are a security professional, you are concerned about the risk of leaks and password disclosure. That means you are ready to make some effort to prevent this. However, even if you are aware of that, it is tiring and demanding.

Let's go with an example most of us know. Imagine you have hundreds of websites you need to log in to. Nowadays, websites ask for long, complex passwords. People who are not concerned about security will choose a password and write it down next to their keyboard or, worse, pick something easy to remember. That means that even if you force users to use only very difficult passwords, if complying is not easy for them, they will find a way around it.
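To see why such password policies are demanding, consider the entropy of a random password. A rough sketch (the character-set sizes are illustrative assumptions, not a policy from any particular website):

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Bits of entropy for a uniformly random password of the given length and charset."""
    return length * math.log2(charset_size)

# An 8-character lowercase password versus a 12-character mixed password
weak = entropy_bits(8, 26)     # lowercase letters only
strong = entropy_bits(12, 94)  # printable ASCII: letters, digits, symbols

print(f"8 lowercase chars: ~{weak:.0f} bits")   # ~38 bits
print(f"12 mixed chars:    ~{strong:.0f} bits")  # ~79 bits
```

Remembering dozens of such high-entropy passwords without help is unrealistic, which is exactly the usability problem described above.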

Speed is crucial

Each day, multiple vulnerabilities are published and accessible to anybody. In an interview, the NSA claimed to be able to turn a vulnerability into a usable exploit in 24 hours. That means that if you are targeted by them, you should be able to patch your services and systems before the exploit is ready. And if one agency can do that in 24 hours, we can presume that other agencies can do the same with similar efficiency.

Back in the real world, we are just system administrators and developers maintaining systems and applications. Patches tend to be created before exploits spread; that was the case for Petya and NotPetya. That means that if you are fast enough, you can update your systems before they are attacked. But what can you do if you cannot?

layer

Multiple layers of security are the answer to threats

You must accept that each of your security layers could be vulnerable and compromised. It is your responsibility as a system administrator, software developer or cybersecurity expert to reduce the vulnerability of the layers you are responsible for to the minimum. An example of an effective layer is the user management system found in every operating system: there is a normal user with reduced rights and a superuser, or root, with more rights. It is basic security advice, but not everybody really follows it, even in the cybersecurity field, where the famous penetration-testing distribution Kali Linux ships by default with a single user that has all rights.
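The normal-user versus superuser layer mentioned above can be checked programmatically; a minimal Python sketch for Unix-like systems:

```python
import os

def running_as_root() -> bool:
    """Return True when the process runs with the superuser's effective UID (0)."""
    return os.geteuid() == 0

if running_as_root():
    print("Warning: running as root; everyday work should use an unprivileged account")
else:
    print(f"Running as unprivileged user (UID {os.geteuid()})")
```

A script like this at the top of an administration tool is one small way to make the privilege layer explicit instead of implicit.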

Always be sure about the information before doing something

There is a lot of mythology and approximation in every field, and cybersecurity is no exception. In an important position in a company, your words matter and can have important consequences. That means you must be sure about what you say. Often, people speak without knowing enough. For cybersecurity, that means you should answer these 3 questions:

  • Is the vulnerability real?
  • Could some of our systems or applications be threatened by it?
  • Should I or how can I mitigate it?

Most of the time, people will ask you about the vulnerability or threat before you have a clear idea of the situation. It is important not to make presumptions. The more accurate you are about what you know, the better you will be able to react to the situation.

Let's try this with the shiny vulnerability Efail.

efail

We had a wonderful website and a public communication from the EFF about what we should do BEFORE any information was publicly disclosed. The Electronic Frontier Foundation (EFF) is an international non-profit digital rights group based in San Francisco, California. The EFF recommended immediately disabling and/or uninstalling tools that automatically decrypt PGP-encrypted email. They are widely listened to and have quite a good reputation. However, doing what they recommend means changing something based only on the trust we have in them. At this point, your warning bell should be ringing loudly, telling you to wait a little in order to learn more.

The day after, the explanation of the vulnerability was published. Reading it carefully, it appears it does not concern OpenPGP itself but only some email clients. Please find below the specific conditions necessary for an opponent to exploit it:

  • Your email client must be vulnerable.
  • Your email client must decrypt encrypted email automatically.
  • The private key for the encrypted email must be loaded in your email client.
  • HTML rendering must be enabled.
  • You must open the email.
  • The attacker must already hold encrypted content of yours that he wants to decrypt.

For me, the noise made before the explanation was released suggested a highly critical vulnerability. But after reading it, it was serious, though not as critical as the noise let us imagine, mainly because many conditions are required to exploit it. NIST quite agrees with me; it gave the two vulnerabilities behind Efail a complexity grade of high and a global score of 5.9 (medium).

This was an example of waiting until you have enough information before doing something.

Conclusion

I hope you enjoyed this second article about cybersecurity. The tone I used was a little more engaged than usual.
Feel free to comment if you want to add ideas or discuss it. If you find this article useful, you can subscribe to the RSS feed of the blog or follow me on Mastodon. Don't hesitate to share it if you think it could interest someone.

Sources