Linux

I recently visited my parents and realised it would be great to still be able to easily access my network at home while away. Sadly, as a Virgin Media customer, my upload speed is poor, so the VPN I had already set up isn’t the answer, as all traffic would always be routed through my home connection.

Looking around the internet I found the answer to my problem. Even better, it was already available on Tomato Firmware, which I use on my routers. The solution was tinc.

What is tinc

tinc describes itself as a Virtual Private Network (VPN) daemon that uses tunnelling and encryption to create a secure private network between hosts on the Internet. It has a few nifty features, such as encryption, compression, mesh routing and a super simple configuration.

My setup

I am in the fortunate position that both my networks have an Asus RT-N66U (the Asus RT-AC66U is its successor), both running Tomato Firmware by Shibby. This made the configuration very straightforward. Please make sure you use the AIO builds, which include tinc, as not all builds include it.

Router Setup

The networks were configured with the following IP ranges – for this example we will only look at IPv4, not IPv6:

  • London (LDN): 10.0.0.0/24
  • Luxembourg (LUX): 10.10.0.0/24

As neither side has a static IP, I also have DynDNS hostnames set up for both.

Now that the basics are there, let’s configure tinc.

Configuring tinc on Tomato Firmware

Log in to your first router (we’ll start with LDN) and head to VPN Tunneling -> Tinc Daemon.

I recommend running tinc in ‘tun’ mode. For tun, each node must use a different subnet, and these subnets must fit within the ‘VPN Netmask’ found in the Config tab. In our example, as we’re using 10.X.X.X IP addresses for our networks, we can use the full 10.0.0.0/8 space, meaning the netmask value would be 255.0.0.0. Tomato by default uses a /24 netmask for its networks, so you can then use any /24 subnet from 10.0.0.0/24 to 10.255.255.0/24.

Once you set the interface type to tun and set your VPN Netmask, you can set the name for your node. We’ll set this to ldn for our first router, and lux for our second one.
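
For reference, these settings map onto a tinc.conf roughly like the one below. This is only an illustrative sketch: Tomato generates the real file for you, and the exact interface name may differ.

# tinc.conf on the ldn router (illustrative sketch, not the file Tomato actually writes)
Name = ldn            # the node name set above
Mode = router         # the 'tun' interface type roughly corresponds to tinc's router mode
Interface = tun0      # virtual interface created by the firmware; the name may differ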

Next we’ll go to the Generate Keys tab and press Generate Keys at the bottom. You should end up with a set of keys like the following:

tinc generated keys

Starting with tinc 1.1pre11, only Ed25519 keys are required. RSA keys are only needed for backwards compatibility, in order to connect to tinc 1.0 nodes.

Copy both the Ed25519 Private Key and the RSA Private Key (if you want to support tinc 1.0) into the Config tab.

Next we’ll head to the Hosts tab. We must create an entry for the node itself in the Hosts section. This information will be shared with other nodes to create connections. As such, on the router ldn, you would create a host ldn with the keys from the Generate Keys tab for that router. Copy the public keys into the fields.

For the address, use your static public IP address if you have one, or a DynDNS hostname. In the subnet column, enter the network IP range that you want that host to share. In the case of LDN, it would be 10.0.0.0/24, LUX would be 10.10.0.0/24.

Once you have done this on both routers, add each router’s host entry to the other and select the ConnectTo checkbox. The nodes share host information to help each other connect, so it isn’t necessary to define every node on every router. If Node A and Node B are connected, and Node A and Node C are connected, then Node B and Node C will learn about each other through Node A and should then be able to communicate directly with each other.
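
For reference, the host entry and the ConnectTo checkbox map onto tinc’s plain-text configuration roughly as follows. This is only a sketch: the DynDNS hostname is a placeholder, and Tomato writes the actual files for you.

# hosts/ldn (sketch): the host entry that gets shared with other nodes
Address = ldn.example.dyndns.org                     # placeholder DynDNS hostname
Subnet = 10.0.0.0/24                                 # the LDN network range
Ed25519PublicKey = <key from the Generate Keys tab>

# and ticking ConnectTo for ldn on the lux router adds this to lux's tinc.conf:
ConnectTo = ldn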

The hosts table should look something like this:

tinc hosts

Now you just need to hit Save and Start on both routers.

The Status area is active when tinc is running, and will give you some information about the mesh.

tinc status

‘Edges’ and ‘Connections’ show nodes for which ConnectTo was defined on one or both sides. If you don’t see a connection between two particular nodes, this doesn’t mean they aren’t communicating directly with each other; it means that neither had ConnectTo defined for the other, which is fine. The ‘info’ button will give you more detailed information about a particular node. Sometimes it says “Reachability: unknown” if neither of those nodes has attempted to communicate with the other yet.

There must be some path of ConnectTo definitions through the network so that all nodes can learn about each other.

The ‘Scripts’ tab allows you to define scripts to run whenever a subnet or host becomes available or unavailable.
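
For example, a minimal host-up script could simply log when another node becomes reachable. This is a sketch using environment variables that tinc exports to its scripts ($NODE and $REMOTEADDRESS):

#!/bin/sh
# host-up: tinc runs this whenever a remote node becomes reachable
logger "tinc: node $NODE is now reachable at $REMOTEADDRESS"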

That’s it. Enjoy your connected network.


Having spent the last 24 hours trying to get Proxmox to play nice with the new VRack 1.5, it looks like it works perfectly, including online live migration of venet-based OpenVZ containers, which didn’t work in VRack 1.0.

The configuration makes eth1 the default interface for traffic from vmbr0, while still allowing eth0 to function alongside it so that you don’t lose out on monitoring features. We also route IPv6 traffic through the VRack on vmbr0 and add additional IP ranges to vmbr0 for your VMs to use.

All the configuration that’s needed is done in: /etc/network/interfaces.

Here is my resulting configuration:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# for Routing
auto vmbr1
iface vmbr1 inet manual
    post-up /etc/pve/kvm-networking.sh
    bridge_ports dummy0
    bridge_stp off
    bridge_fd 0

# vmbr0: Bridging. Make sure to use only MAC addresses that were assigned to you.
auto vmbr0
iface vmbr0 inet static
    address 94.23.XXX.10
    netmask 255.255.255.0
    network 94.23.XXX.0
    broadcast 94.23.XXX.255
    gateway 94.23.XXX.254
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0
# A secondary IP subnet used for VMs
    up /sbin/ip route add 178.XXX.YYY.128/26 dev vmbr0
    up /sbin/ip route flush cache

#VRack IPv6
iface vmbr0 inet6 static
        address 2001:41d0:XXXX:6810::10
        netmask 56
        post-up /sbin/ip -f inet6 route add 2001:41d0:XXXX:68ff:ffff:ffff:ffff:ff7f dev vmbr0
        post-up /sbin/ip -f inet6 route add default via 2001:41d0:XXXX:68ff:ffff:ffff:ffff:ff7f
        pre-down /sbin/ip -f inet6 route del default via 2001:41d0:XXXX:68ff:ffff:ffff:ffff:ff7f
        pre-down /sbin/ip -f inet6 route del 2001:41d0:XXXX:68ff:ffff:ffff:ffff:ff7f dev vmbr0

auto eth0
iface eth0 inet static
    address 5.XXX.YYY.25
    netmask 255.255.255.0
    broadcast 5.XXX.YYY.255
    #Setting up the routing
    up /sbin/ip route flush table 80
    up /sbin/ip route add table 80 to 5.XXX.YYY.0/24 dev eth0
    up /sbin/ip route add table 80 to default via 5.XXX.YYY.254 dev eth0
    up /sbin/ip rule add from 5.XXX.YYY.0/24 table 80 priority 80
    up /sbin/ip route flush cache
    post-down /sbin/ip route flush table 80
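
After saving the file, I apply the changes and sanity-check the routing roughly like this (a sketch; restarting networking briefly interrupts connectivity, and a plain reboot works just as well):

# apply the new configuration (Debian ifupdown)
service networking restart

# verify routes and policy rules
ip route show
ip rule show
ip -6 route show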

I’ve been spending this morning optimizing the Flosoft.biz website’s load times in browsers, and one key element of that is sending the correct expires headers to allow browsers to cache the data.

Now, as of Plesk 11.5, you can edit nginx settings via the Control Panel, but this isn’t always straightforward, so I thought I’d write a small tutorial.

In the Control Panel:

  1. Select your Domain
  2. Click Web Server Settings
  3. Scroll down to nginx settings
  4. If you have “Serve static files directly by nginx” checked (which I recommend), you’ll need to remove the file extensions you’re going to use below, such as jpg,gif,…
  5. In the text box “Additional nginx directives” copy / paste the following configuration:

location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
    expires 30d;
    add_header Pragma public;
    add_header Cache-Control "public";
    try_files $uri @fallback;
}

That’s it. Just hit OK and enjoy a website that sends the correct headers for your static images and CSS.
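
If you want to double-check, you can inspect the response headers of one of your static files with curl (hypothetical URL); with the directives above you should see an Expires date roughly 30 days in the future plus the Cache-Control headers:

curl -I http://www.example.com/images/logo.png
# Expect headers along the lines of:
#   Expires: <a date about 30 days from now>
#   Cache-Control: max-age=2592000
#   Cache-Control: public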

 


One of the legacy systems we still use at Flosoft.biz is Plesk. Over the last few years it has slowly gotten better (don’t worry, it still completely breaks on every version upgrade) and nowadays comes with nginx.

However, I noticed that for some obscure reason it doesn’t enable GZip compression for the web pages it serves. This is quite odd; having worked a lot with nginx myself over the last few years, enabling it is part of my default configuration!

Don’t worry though, it’s quite easy to enable:

Just edit the following file as root: /etc/nginx/conf.d/gzip.conf

gzip on;
gzip_proxied any;
gzip_types text/plain text/xml text/css application/x-javascript;
gzip_vary on;
gzip_disable "msie6";

Then run nginx -t to test the configuration and if that’s all ok, restart nginx by running /etc/init.d/nginx restart.

That’s it. Your webserver will now be serving your pages with GZip compression.
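
To confirm it’s actually working, request a page while advertising gzip support and look for Content-Encoding: gzip in the response headers (hypothetical URL):

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://www.example.com/
# A compressed response should include:
#   Content-Encoding: gzip
#   Vary: Accept-Encoding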


Today I discovered a very interesting initiative: DataPortability.

This might be the future for all websites and internet-related applications. As a user, your profile, contacts, photos, videos and any other form of data are stored on the service provider’s servers. This means that for every website you need to create a new login, manually invite all your friends to the service, upload your avatar, and so on.

The next issue is that you need to have some trust in the operator of the service, because they store your password and other personal information.

Why shouldn’t it be possible to use a single login for everything? All your account data would be stored and managed by yourself, so that service providers don’t get hold of your personal information.

There have been some attempts in that direction, such as OpenID, but the Web needs a lot more.

As mentioned above, your login, contacts and files should be portable from any service to another.

And this is where Jabber comes in. I would like to be able to migrate my contacts from any Jabber server to another. Instead of issuing re-invites to everyone and starting with a new roster, you should be able to transport all your data from any service provider to another.

But what about identity theft? Isn’t it more dangerous if you use a single identity for every service? I.e. one e-mail address gets hacked, and boom, you lose your identity?

Is this worthy of an XEP?

Think about it 🙂

Tell me what you think.

/Florian Jensen

Links: XSF; DataPortability


Proprietary protocols are a thing of the past. Today, open-source technologies are taking over the world! AOL / ICQ has just launched a test server using XMPP, an open technology. This means that you’ll soon be able to talk to your ICQ / AIM contacts via Jabber. Google has already started using it. So who’s next? MSN!

AOL seems to be making its ICQ and AIM services compatible with XMPP: xmpp.oscar.aol.com. PS: it’s still buggy and is only officially claimed to work in Exodus and Coccinella, but in practice it works in nearly all Jabber clients; any glitches are probably because the server is overloaded.

There has been a lively discussion today in the Jdev MUC room about this. It looks like Jabber will be the solution that rules the future of all messengers.

You can try to log in to ICQ with the username icqnumber@aol.com on server xmpp.oscar.aol.com on port 5222. TLS is required.
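
If you want to poke at the server yourself before setting up a client, newer OpenSSL builds can speak the XMPP STARTTLS handshake, so something like this should show the server’s certificate (a sketch; it assumes your openssl supports -starttls xmpp):

openssl s_client -connect xmpp.oscar.aol.com:5222 -starttls xmpp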

Let’s see what happens in the next few months with Jabber.

You can find Jabber hosting for your domain on Flosoft.biz. You’ll probably be able to chat to your ICQ contacts soon!

UPDATE: AIM seems to work too!

UPDATE2: There is a tutorial here on how to set up your Jabber client

UPDATE3: I just received this picture from jjkobra. It works with Gajim!

UPDATE4: It seems to work in Psi too!

UPDATE5: A comment by AOL. 


This howto explains how to install a graphical frontend on your server that is accessible via NX. The tutorial is for Debian-based systems and has been tested on a Flosoft.biz FlexServ (RPS).

1. Check the basic Debian setup.

We need to modify the sources.list

vi /etc/apt/sources.list

Add the following 2 lines:

deb http://ftp.debian.org/debian etch main contrib non-free
deb-src http://ftp.debian.org/debian etch main contrib non-free

Save and close (:wq), then run this command to update:

apt-get update

Then check if your system is up to date, and if necessary install updates.

apt-get dist-upgrade

2. Installing the X server, xorg.

Just type in:

apt-get install xserver-xorg-core xorg

There will be a few questions at the end, for now just go with the defaults.

3. The Login Manager

Now you have 3 options; you can install any of the following login managers:

  1. KDM
    KDM is probably the best if you want to use KDE
  2. GDM
    GDM is probably the best if you want to use Gnome
  3. XDM
    XDM is probably the best if you want to use Fluxbox or XFCE

Once you have chosen a login manager, run the corresponding command:

apt-get install kdm
apt-get install gdm
apt-get install xdm
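
If you end up installing more than one of them, you can change which one starts by default later on; Debian asks for the default display manager via debconf, for example (a sketch):

dpkg-reconfigure kdm

(or run it against gdm / xdm, depending on which ones you installed)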

4. The GUI

Now again, you have a choice of different Graphical User Interfaces.

  1. KDE
    Personally my favourite on Debian
  2. Gnome
    My favourite on Ubuntu
  3. Fluxbox
    Never used it
  4. XFCE4
    Never used it

Once you have chosen one of the GUIs, run the corresponding command:

apt-get install kde
apt-get install gnome
apt-get install xfce4
apt-get install fluxbox

That’s all for the base setup.

5. Reboot

You should reboot to make sure the X server starts.

shutdown -r now

6. Create your user

Once your server has rebooted, and you have relogged in, you should create a user which you will use for the GUI.

adduser mynewusername

7. Getting the NX packages

Now we need to set up the NX server, so that you are able to connect to the server from home. First, download the NX server packages:

wget http://64.34.161.181/download/3.1.0/Linux/nxclient_3.1.0-2_i386.deb
wget http://64.34.161.181/download/3.1.0/Linux/nxnode_3.1.0-3_i386.deb
wget http://64.34.161.181/download/3.1.0/Linux/FE/nxserver_3.1.0-2_i386.deb

8. Installing the NX packages

Now that you have the packages in your directory, you need to install them via dpkg.

dpkg -i nxclient_3.1.0-2_i386.deb
dpkg -i nxnode_3.1.0-3_i386.deb
dpkg -i nxserver_3.1.0-2_i386.deb

9. The Services

Now we need to make sure the services are running.

/etc/init.d/ssh restart
/etc/init.d/nxserver restart

10. The Browser

Last, but not least… well, actually least… Firefox! You will need a nice browser, so Firefox is the way to go.

apt-get install firefox

Now your system is set up and you’re ready to use it. Simply set up your NX Client and have fun!
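
Since NX tunnels its sessions over SSH (port 22 by default), a quick sanity check before configuring the NX Client is simply logging in over SSH as the new user (hypothetical hostname):

ssh mynewusername@your-server.example.com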

If you have any questions, don’t hesitate to ask me or just leave a comment.


Hey,

are you tired of being stuck with stupid Flash 7 on your Linux box? Of not being able to watch any of the cool stuff on the web because it requires Flash 8 or Flash 9? Are you sick of the audio not being in sync with the video?

Well, finally someone at Adobe has thought about us Linux users. They have published the first beta of Flash Player 9 for Linux.

The only problem now is: Why is there no .deb for my Kubuntu?

Well, there is no need for one. If you have flashplugin-nonfree installed you can simply do this:

wget http://download.macromedia.com/pub/labs/flashplayer9_update/FP9_plugin_beta_101806.tar.gz
tar xvzf FP9_plugin_beta_101806.tar.gz
sudo cp flash-player-plugin-9.0.21.55/libflashplayer.so /usr/lib/flashplugin-nonfree/
sudo cp flash-player-plugin-9.0.21.55/libflashplayer.so /usr/lib/firefox/plugins/

This will download and install Flash Player 9 for you. Then just restart Firefox / Opera and you’ll have Flash Player 9!
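
If you want to be sure before restarting the browser, check that the plugin landed in both locations (the paths from the commands above):

ls -l /usr/lib/flashplugin-nonfree/libflashplayer.so /usr/lib/firefox/plugins/libflashplayer.so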

Have fun watching all that video content on YouTube, Google Video, DailyMotion or MySpace.
