Featured

Project background

Virtuaaliluonto’s objective is to increase travel to Finnish nature and rural attractions, as well as to develop digital know-how in Finland. It is a digital service that combines the strengths and possibilities of Finnish nature, both online and on location, into a tempting digital experience.

What we are trying to achieve is a Raspberry Pi based IoT device with two to three sensors: movement, flame and perhaps smoke. The basic idea is to put the whole thing next to a campfire and collect useful data about the current status of that particular spot. The collected data is then sent over the web to our server, which hosts a site that hikers and campers can use to see whether the campfire spot is already occupied by others or whether it has a fire going on.

[Image: final concept]

One of the key features is to respect people’s privacy, so our prototype will not collect any data that can be linked to a specific person. After all, these days everything is somewhat connected to the internet, and there is a lot of talk about people’s privacy.

The project’s GitHub repository contains all of the code used in the project: https://github.com/wikkii/raspluonto

The project is being funded by the European Agricultural Fund for Rural Development (EAFRD).

[Image: EU logo]

The end?

Here we are at the end of our project.

We had our presentation and, in short, the teachers liked it, and we can say with certainty that we achieved all of the goals and expectations that were set for us.

It’s been a crazy ride for these four months. Lots of ups and downs, but we managed to climb over every obstacle that was thrown in our path. Plus, we didn’t set any unwanted place on fire.

What next?

Our journey is at an end, but the school’s part in the Virtuaaliluonto project will continue with other universities and the EU for at least two more years, perhaps until 2019 or 2020. We have completed our task and proved that our prototype and concept work. So now we leave the next implementation to the next group.

A big thank you to all of our readers, teachers and schoolmates. We wish you a good new year 2018.

Thank you, and our roads go ever on and on, to new challenges.

The end?
Yes, The End.

P.S. The GitHub repository will stay online, so feel free to try out our code:
https://github.com/wikkii/raspluonto


Testing in the real world

Hello again,

Like the title says: testing in the real world. Everyone in our project group knew that these days would come sooner or later.
No matter how much we tested inside, in “perfect conditions”, taking the prototype outside would be a whole different ballgame.

We had planned at least three different tests at the Nuuksio site, plus others if needed. Of course, we needed more.

The balcony test

The first outdoor test was conducted on a balcony. The main goal was to find out how long the prototype runs on the small battery (2,600 mAh) and how long our Wi-Fi module’s battery (3,000 mAh) lasts.

Prototype on the balcony.

The prototype worked with both sensors, and we had a small candle to show the status of the flame sensor.

The result was that the small battery (2,600 mAh) lasted 6 hours and 11 minutes at about 0 °C.
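For reference, that runtime implies a rough average current draw. This is just back-of-the-envelope arithmetic on our own numbers; it ignores converter losses and the capacity drop a battery suffers in the cold:

```python
# Rough average draw implied by the balcony test:
# 2,600 mAh drained in 6 h 11 min (ignores converter losses and cold-weather effects)
capacity_mah = 2600
runtime_h = 6 + 11 / 60  # 6 hours 11 minutes

avg_draw_ma = capacity_mah / runtime_h
print(round(avg_draw_ma))  # ≈ 420 mA on average
```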

First Nuuksio test

The first Nuuksio test was to confirm several things:

1. The flame sensor’s ability to detect a flame from about 2.5 meters
2. The movement sensor’s range
3. Whether there are problems with the sensors outdoors
4. Whether the prototype breaks in the current weather conditions

On the first test day the weather was wet, to say the least.

The whole time, we monitored the situation with a laptop.

All in all, the first test was a success: everything worked fine. The only thing that needed tweaking was the server side, so that it would not crash. The box also needed some black tape to conceal the LEDs; we did not disable them, because they also let us see the status of the prototype at a glance.

The second Nuuksio test

The second test was to confirm the changes made after the first test and to try a few new things.
This time our list was to test:

1. How long the small battery lasts
2. Suitable tick rates for the sensors, i.e. the detection counts that determine the site activity shown on the website
3. How big a flame the flame sensor needs to work properly

Like the first time, the second test took place in really wet conditions.

This was the critical amount of flame that the sensor no longer picked up.
Basically, a normal burning fire is enough, but when the flame is reduced to embers, detection stops. This was perfect for us.

The tick rates we picked were 25 for movement and 5 for flame, both within a 5-minute window.
So 25 movement detections in 5 minutes mean the site shows as occupied on the website, and 5 flame detections in 5 minutes mean the site shows a fire going.
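As a minimal Python sketch of that decision rule (the names here are ours, just for illustration; the production logic lives in the website’s main.js, and whether the limit itself counts as a hit is a detail we gloss over):

```python
# Thresholds from our Nuuksio tests, counted over a 5-minute window
MOVEMENT_LIMIT = 25  # PIR detections per window -> site shows as occupied
FLAME_LIMIT = 5      # flame detections per window -> site shows a fire going

def site_status(pir_ticks, flame_ticks):
    """Map raw detection counts from one 5-minute window to website statuses."""
    occupancy = "occupied" if pir_ticks >= MOVEMENT_LIMIT else "available"
    fire = "fire going" if flame_ticks >= FLAME_LIMIT else "no fire"
    return occupancy, fire

print(site_status(30, 0))  # ('occupied', 'no fire')
```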

The third test/demo day

Third time’s the charm. We had also agreed with our teacher that this would be the time we demo our accomplishment to him.

Finalized box with some black duct tape, screwed in place.

On the last test day, winter finally came. This was also the coldest test, at about -5 °C.

In this test we only confirmed that everything works, and we also left the prototype there to see how long our main battery (30,000 mAh) would last. The battery had to power both the prototype and the Wi-Fi module.

The over-the-weekend test would have gone as planned if a USB cable had not failed us, so the prototype was on site for only about 24 hours before we had to fetch it. To test how long the battery would last, we made one last test in a backyard, in conditions similar to Nuuksio.

The backyard test


This was the final test, and we just needed to find out how long the battery lasts. The theoretical time was 42 hours; ours lasted 31 hours, which was still way beyond our expectations.

All in all, everything went better than expected, and one thing remains: writing the whole report and presenting it to our teachers and classmates.

Until next time.

Displaying sensor data on the webpage

This is a continuation post to https://raspluonto.wordpress.com/2017/09/27/basic-webpage-for-reading-status-data/.

We had our site’s visual look established very early on in this project. Now that we know how we’re sending, storing and retrieving data, I could make the final changes to the front-end.

[Image: the webpage]

Our server’s back-end serves sensor data at the request of the main.js file. The source code can be found on GitHub: https://github.com/wikkii/raspluonto. With AJAX (Asynchronous JavaScript and XML), the webpage makes a GET request every 5 seconds and receives the number of detections from both sensors as JSON. This only updates the tables on the page, so there’s no need to refresh the page all the time.

I’m not going through the entire code here, but I’ll explain the most important bits. After all, it’s better to just check the code that we have on GitHub.

First, the page makes a new XMLHttpRequest (AJAX), which then makes the GET request to Python Flask, and the data is returned as JSON.

function getData () {

    // Request & receive JSON data
    var pageRequest = new XMLHttpRequest();
    pageRequest.open('GET', '/data');
    pageRequest.onload = function() {

        // Save the JSON data to a variable
        var mySensorData = JSON.parse(pageRequest.responseText);

        // Call the renderData function and pass the mySensorData variable to it
        renderData(mySensorData);
    };

    // Send the request
    pageRequest.send();

} // Function ends here

// Call the function once before setting an interval
getData();

// Set the interval. The last argument is in milliseconds.
// NOTE: setInterval() keeps triggering the function again and again unless you tell it to stop
setInterval( getData, 5000 );

The JSON contains the number of rows in the database for both sensors within the last 5 minutes. Since we only store sensor data in MySQL when the sensors are actually detecting something, we can easily count the number of detections simply by checking the number of rows in their dedicated tables.

For example, currently the contents of nuotiovahti.info/data look like this:

{
 "flame": 0, 
 "pir": 0
}

So that means there have been no detections by either of the two sensors in the last 5 minutes.
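On the back end, producing that JSON is just a row count per sensor table. As a self-contained illustration of the idea (our real server uses Flask with MySQL; this sketch uses SQLite instead, and the table names and the unix-timestamp column ts are assumptions, not necessarily our exact schema):

```python
import json
import sqlite3
import time

def detections_last_5min(conn, table):
    """Count rows newer than 5 minutes in the given sensor table (assumed 'ts' column)."""
    cutoff = time.time() - 5 * 60
    (count,) = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE ts > ?", (cutoff,)
    ).fetchone()
    return count

# Demo with an in-memory database standing in for MySQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pir (ts REAL)")
conn.execute("CREATE TABLE flame (ts REAL)")
conn.execute("INSERT INTO pir VALUES (?)", (time.time(),))  # one fresh detection

data = {
    "pir": detections_last_5min(conn, "pir"),
    "flame": detections_last_5min(conn, "flame"),
}
print(json.dumps(data))  # {"pir": 1, "flame": 0}
```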

Now, we can set a detection limit which has to be exceeded before we display the campsite as reserved. This is done to ensure that a bird or the like can’t fool our device and make the spot look like it’s occupied by actual people. So essentially, we’re doing an error check of sorts: a few detected movements are not enough to display the spot as occupied online.

Based on our test results in the Nuuksio national park, we found that 25 detected movements makes a reliable detection limit. The flame sensor is less prone to mistakes, so we set a limit of 5 for it.

// If the number of rows from the PIR sensor is higher than the limit, use the HTML class "AreaStatusYes"
if (data.pir > 25) {
    pirhtml = "<td class='AreaStatusYes'>";
}

// If the number of rows from the flame sensor is higher than the limit, use the HTML class "FlameStatusYes"
if (data.flame > 5) {
    flamehtml = "<td class='FlameStatusYes'>";
}

Basically, we’re just changing the CSS class of the sensors’ dedicated table cells. Green is the default and means that the place is available and there’s no fire. Red means either occupied or a detected fire.

The elements are then added to a htmlString, which builds the HTML table on the webpage.

// Add elements to htmlString
htmlString += "<tr><td>" + "Mustalampi" + "</td>"+ pirhtml + data.pir + "</td>" + flamehtml + data.flame + "</td></tr>";

With jQuery’s .text() method, we then add the respective text content for each case.

// Add htmlString as content to HTML
$("#datatable tbody").html(htmlString);

// Text values for the classes
$( ".AreaStatusNo" ).text("Available (detections: " + data.pir + ")");

$( ".AreaStatusYes" ).text("Occupied (detections: " + data.pir + ")");

$( ".FlameStatusNo" ).text("No fire (detections: " + data.flame + ")");

$( ".FlameStatusYes" ).text("Burning (detections: " + data.flame + ")");

For example, if the PIR sensor has detected movement over 25 times within the last 5 minutes and the flame sensor has not detected anything, the page looks like this:

[Image: the webpage showing the site as occupied]

So, the campsite is currently occupied by people, but there’s no fire. The detection counter was displayed on the site just for the sake of testing: it’s not really helpful to actual users, but for our testing purposes it helped a lot.

The final product:

[Image: the final webpage on a phone]

Configuring SaltStack

Remote management with reverse SSH alone is not very functional in terms of scalability, so it’s not good enough for production purposes. Understandably, we’re just prototyping, but it doesn’t hurt to think long-term here. With Salt, we get the basic remote management functionality, like running simple shell commands for troubleshooting, but also centralized management capabilities. Configuring our SSH-based remote management for multiple devices is not practical, whereas Salt was made exactly for that.

Salt is one of the main centralized management tools, alongside Puppet, Chef and Ansible. If you’re interested in any of those, I’d suggest taking a look at a project by another group on the same course: https://github.com/joonaleppalahti/CCM. I personally found their material very useful when configuring Salt for our project. Jori Laine, one of their members, personally gave us some input after we had some initial problems with Salt’s installation: https://jorilaine.wordpress.com/.


The basic idea

[Image: Salt Master-Minion diagram]

Our Raspberry Pi -based sensor device operates in a mobile network behind NAT, so we can’t reach it directly from the internet. This is not a problem with Salt, which works on a Master-Minion principle.

The Salt-Master gives commands to its minions, but it does not need to know where they are located: it identifies them by their unique salt keys, not their IP addresses. The Salt-Minions are the ones maintaining the connection between themselves and the master. This is why it does not matter whether they are behind NAT, as long as the Salt-Master is accessible to them.


Installing the Salt-Master

For the master’s installation to work, we had to get a server with more memory, since our original one with 1 GB of RAM was not cutting it. In the end, we went with 3 GB of RAM, 1 CPU and a 20 GB SSD from DigitalOcean.

The first step of the installation process was to download the Salt Bootstrap installation script. I downloaded it with curl and installed a specific release version based on its Git tag.

curl -L https://bootstrap.saltstack.com -o install_salt.sh
sudo sh install_salt.sh -P -M git v2017.7.2

The -M flag installs a Salt-Master. After the installation succeeded, I changed the master’s config file in /etc/salt/master.

# Set an ID for your master:
master_id: NuotiovahtiMaster

# The address of the interface to bind to:
interface: 46.101.235.80

The address above is the server’s own. After the changes, I restarted the salt-master service. I also had to make sure the server’s firewall allows incoming connections on ports 4505 and 4506:

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
4505                       ALLOW IN    Anywhere
4506                       ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
4505 (v6)                  ALLOW IN    Anywhere (v6)
4506 (v6)                  ALLOW IN    Anywhere (v6)

The Salt-Master was now configured to allow connections from the minions.


Installing the Salt-Minion

The Salt Bootstrap installation script did not work on the Raspberry Pi; it kept complaining about missing dependencies. This is why I installed the Salt-Minion manually instead, first importing the SaltStack repository key with wget:

wget -O - https://repo.saltstack.com/apt/debian/8/armhf/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -

After that, I edited /etc/apt/sources.list.d/salt-stack.list and added the following line:

deb http://repo.saltstack.com/apt/debian/8/armhf/latest jessie main

Then I updated the apt package lists and installed salt-minion through apt-get.

sudo apt-get update
sudo apt-get install salt-minion

The installation completed, and I added the Salt-Master’s IP address to the minion configuration file, /etc/salt/minion.

# Set the location of the salt master server. If the master server cannot be
# resolved, then the minion will fail to start

master: 46.101.235.80

Finally, I restarted salt-minion and it started looking for its master. That’s everything for the minion.


Testing the connection between Master and Minion

Now, all that’s left to do is accept the minion’s key on the master and we’re good. I listed all keys with the command "sudo salt-key --list all". Raspberrypi was listed under "unaccepted keys". I accepted all unaccepted keys with "sudo salt-key -A". The Pi was now listed under "accepted keys":

[Image: salt-key output]

It lists the master as a minion too, because I originally tested both the Salt-Master and the Minion locally on the server.

Now, I could test the connection between our Raspberry Pi and the server with “sudo salt raspberrypi test.ping“. The minion responded:

[Image: test.ping response]

This meant that we could now run shell commands on the Pi from our server! The syntax is "sudo salt raspberrypi cmd.run 'command'", where "raspberrypi" is the name of the minion and "command" is the shell command I wish to run on the Pi. If I had more minions, I could run the commands on all of them simultaneously by using '*' in place of the minion’s name in the previous example.

Let’s test it out by checking the results of ifconfig on the Pi:

sudo salt raspberrypi cmd.run 'ifconfig'

[Image: ifconfig output via Salt]

We can now use Salt for restarting the sensor device or for running basic troubleshooting commands. Salt is way more powerful than this, but for now, this is all we need.

Playing with Fire

The thing with time is that you can never rewind it, so now we have to write some backlog posts. This one is about our fire sensor.

During our project we had several questions about our fire sensor:

– Detection angle
– Detection distance
– Detection through surfaces (glass, plastic, etc.)
– Sensitivity
– How it works together with the PIR sensor

The sensor is a generic one from AliExpress; unfortunately, there was zero information about the circuit board or any other component. A manual? Pfft, overrated to include.

Testing the sensor

Picture 1 – detection angle – no detection

The picture above shows that the flame sensor is on but cannot see the flame. The green LED is the power indicator.

Picture 2 – detection angle – flame detected

The picture above shows that by moving the candle slightly towards the center, the flame sensor can detect the flame. The angle is small, but enough for our purposes.
Also, when the PIR sensor is on, an active flame in front of it does not affect the movement detection.

The actual range with one candle is about 1 meter or 3 feet.

Picture 3 – detection through materials

The picture above shows that the flame sensor can see through the glass of a lantern. Moreover, the glass does not affect the detection range.

Picture 4 – mounted flame sensor

Throughout testing the sensor, one small thing was found: the sensor has an adjustment screw. If you turn it towards the maximum, the sensor will “detect” flame even when the power has only just been turned on and the code is not being executed.

The answer is to adjust the sensitivity to the highest point before false detections occur, like in the picture above.

Other than that, the sensor has worked really well and we have no complaints.

Until next time.


Automating scripts on the Raspberry Pi

For the overall functionality of the prototype, it’s essential that the Raspberry Pi functions independently, without any manual labour. Whoever ends up maintaining the device should be able to operate it just by powering it on. Currently, the Pi’s functionality has been automated with two different tools: rc.local and Cron.

Running sensor scripts with rc.local

Since I could not get Cron to run the Python sensor scripts reliably at boot, I had to settle for rc.local, which essentially does the same thing: it runs commands when the Raspberry Pi boots. This worked by editing the file /etc/rc.local with root privileges and adding the following two commands before the line “exit 0“.

(sleep 90
python /home/pi/sensors/mqtt_pir_sensor.py) &

(sleep 120
python /home/pi/sensors/mqtt_flame_sensor.py) &

The “&” at the end makes sure the commands run in a separate process, and the Raspberry Pi continues to boot up with the processes running in the background.

First, I tried to run the two Python scripts without the sleep command. The sensor code ran, however, only for a brief moment before it got interrupted. That’s why I made both commands sleep for at least a minute before running the scripts. “mqtt_flame_sensor.py” has a longer sleep time because “mqtt_pir_sensor.py” has an internal 30-second learning period within its code. This way, both sensors now start sending MQTT data simultaneously!

[Image: both sensors sending MQTT data]

Running scripts with Cron

Cron is a tool for scheduling tasks. We’re currently running two scripts with it. The following lines were added to the Cron table with the command “crontab -e“.

# crontab -e

*/30 * * * * /home/pi/automation/remoteconnection.sh >/dev/null 2>&1
*/15 * * * * /home/pi/automation/pingserver.sh >/dev/null 2>&1

Remoteconnection.sh creates the reverse SSH tunnel we talked about in an earlier blog post. We ended up running it every 30 minutes. By default, Cron tries to send a notification of the task’s completion to the user’s email; this can be avoided by adding “>/dev/null 2>&1” at the end of the line.

Pingserver.sh pings our server every 15 minutes. This was to make sure we know when the device has lost power after we leave it outside for the night. The Raspberry Pi losing power all of a sudden is not the same as a regular shutdown, which makes it harder to find a log entry of it ever actually happening.

#!/bin/bash

#Ping nuotiovahti.info and output to uptime.txt.
ping -c 1 139.59.140.158 | while read pong; do echo "$(date): $pong"; done >> uptime.txt

As can be seen, we ping the server once and add a timestamp to it. The results look something like this:

[Image: sample entries in uptime.txt]

Now we at least know the shutdown time within a margin of 15 minutes. To make sure Cron really is running the needed scripts, we kept an eye on /var/log/syslog, which is where the processes are logged.
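Reading the approximate shutdown time out of uptime.txt just means looking at the timestamp of the last line. A small hypothetical helper for that (our own illustration; the exact line format depends on what date prints on the Pi):

```python
def last_ping_timestamp(lines):
    """Return the timestamp prefix of the newest ping entry, or None if there is none."""
    for line in reversed(lines):
        if ": " in line:
            # pingserver.sh writes "$(date): $pong", so split on the first ": "
            return line.split(": ", 1)[0]
    return None

# Example with made-up entries in the "$(date): $pong" format
log = [
    "Mon Dec  4 12:00:01 EET 2017: 64 bytes from 139.59.140.158: icmp_seq=1 ttl=53",
    "Mon Dec  4 12:15:01 EET 2017: 64 bytes from 139.59.140.158: icmp_seq=1 ttl=53",
]
print(last_ping_timestamp(log))  # Mon Dec  4 12:15:01 EET 2017
```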

The Raspberry Pi is now doing everything on its own, without the need for manual labour.


Problems, problems and more problems

It’s all fun and games until “everything” breaks.

The past couple of weeks have involved a lot of testing and, obviously, discovering problems.

The first discovered problem:

When we use the full stack (Raspberry, Witty Pi and the GSM HAT), the motion sensor (PIR) simply won’t work, or the data jumps from movement to no movement without any logic.

Unfortunately, this was just the tip of the iceberg.

Picture 1 – complete stack with sensors

I’m not going to go into the full details of the head-banging against tables and walls, so here is a simplified version of what we have been able to discover after many hours of testing the PIR sensor with configurations such as:

a) only the Raspberry
b) Raspberry with Witty Pi
c) Raspberry with the GSM HAT
d) Raspberry with Witty Pi, then the GSM HAT, as shown in picture 1
e) Raspberry with the GSM HAT, then Witty Pi (reversed order of picture 1)
f) GSM module on and off
g) switching cables and switching the GPIO pins where we mount the cables, in all of the above configurations
h) flashing the whole system and installing everything again
i) trying to mount the GSM HAT with wires (picture 2)

Picture 2 – wire mess

After all of these tests, we have come to the following conclusions:

1. Sakis3G messes something up
2. The PIR has a number of power-related problems; the earliest reports I could find were from 2009
3. The PIR is not as accurate as we had hoped, so the box needs to be altered
4. Something in the GSM HAT might be broken, but if nothing is broken, then there are simply a lot of compatibility issues and we need a plan B

So, some problems and some progress in finding solutions, but there is still work to be done.

After all, the window for the final on-site testing is approaching fast.
Until next time.


Building the Box

Finally! We have reached the point where every part needed for the prototype box has arrived.
So, time to build!

First thing: the box. An ordinary plastic food-storage box from the grocery store. Advantages: cheap and waterproof. Plus, this model had a handy plastic grill that keeps our Raspberry and battery floating above the bottom, so we don’t have to worry about condensation.

Picture 1. The box.

We bought a “rack” case for our Raspberry, but after the project it’s going to be the school’s property, so I cannot “destroy” the original. That’s why I made a copy from a plastic sheet. This will be the mounting point for the Pi.

Picture 2. New base plate.

Picture 3. Making the plate.

Picture 4. Finished base plate with mounting screws.

Picture 5. Screw attachment point.

Because most of the screws belong to the school, I had to improvise. The solution was to use standard computer motherboard spacer screws, hot-glued to the plastic grill. This way, the base plate can be detached from the grill after the project.

Next, I made the mounting points for the sensors.

Picture 6. Drilling.

The PIR sensor had a 23 mm diameter, which was a bit of a problem: I didn’t have a drill bit of that size, so I made a smaller hole and used a Dremel multitool to carefully file it to the correct size.

Picture 7. Fit-testing the PIR.

The PIR sensor fit like a glove, so it was time to move on to the flame sensor.

Picture 8. Flame sensor hole and support plate.

This was trickier: the flame sensor needs to sit at an angle of at least 45 degrees to “see” the actual fire.
What I did was drill a hole of the right size and then file it into an angle that would suit the sensor. Then I hot-glued a plastic support that keeps the sensor in the right place and at the right angle.

Finished build

Picture 9. Finished build.

As a side note, I painted the box black (the teacher’s wish). The square is the PIR sensor and the circle is the flame sensor (hard to see). I also added some plastic that insulates the sensors and makes them more waterproof, plus a little drop of super glue to keep the plastic in place.
The battery will be attached with some double-sided tape. The paint has already flaked, so it needs a new layer and some varnish.

And that’s it. The finished box.

We are ready for winter.

Problems with remote management

Our remote connection script is currently not very efficient, as it runs every minute and creates new processes on both the Pi and the server. These reserve an insane amount of memory, which we did not realize when we originally wrote the script. Therefore, I tried to make our reverse SSH script check whether a connection is already up before creating a new one.


Old reverse SSH script

Here is the old script that Cron was running all the time.

#!/bin/bash
# Script for creating a reverse SSH tunnel from Raspberry Pi to the server

        ssh -f -N -o BatchMode=yes -R 2222:localhost:22 markus@139.59.140.158

So originally, we thought it wasn’t a problem to create new connections all the time, as they would fail because the server’s remote management port 2222 was already listening to a reverse SSH connection. However, it still created an insane number of processes on the server. We realized this after our MySQL database crashed twice during testing.

[Image: sshd processes on the server]

As can be seen, all these processes are taking up memory, completely unnecessarily.
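One way to see the scale of the problem is to count the leftover session processes. A hypothetical helper that parses the text output of ps aux (the sample lines below are made up for illustration):

```python
def count_sshd_sessions(ps_output, user="markus"):
    """Count 'sshd: <user>' session lines in the text output of `ps aux`."""
    needle = "sshd: " + user
    return sum(1 for line in ps_output.splitlines() if needle in line)

# Made-up ps aux sample: one sshd daemon plus two leftover tunnel sessions
sample = (
    "root      1001  0.0  0.4  65508  4380 ?  Ss  10:00  0:00 /usr/sbin/sshd -D\n"
    "markus    1740  0.0  0.1  95368  1660 ?  S   10:05  0:00 sshd: markus\n"
    "markus    1802  0.0  0.1  95368  1664 ?  S   10:06  0:00 sshd: markus\n"
)
print(count_sshd_sessions(sample))  # 2
```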


New attempt for a smarter reverse SSH script

So we approached this problem by trying to edit our script to first check whether a successful connection had already been established. It ended up being a lot more difficult than we expected.

pi@raspberrypi:~ $ ps aux | egrep "[m]arkus"
pi 1740 0.0 0.1 8888 1660 ? Ss 20:06 0:00 ssh -f -N -o BatchMode=yes -R 2222:localhost:22 markus@139.59.140.158

Checking the connection by looking at the processes on the Pi did not work: just because the SSH command was given, say, 20 minutes ago does not guarantee that the tunnel connection is still working from end to end.

So the first attempt was this:

#!/bin/bash
# Script for creating a reverse SSH tunnel from Raspberry Pi to the server

output=$(ps aux | egrep "[m]arkus")
connection=$output

if [ "$connection" ]; then
:
else
 ssh -f -N -o BatchMode=yes -R 2222:localhost:22 markus@139.59.140.158
fi

But like I said, it was not reliable at all. After I manually killed the connection on the server’s side, the Raspberry Pi did not create a new tunnel, because it still saw a process of the earlier tunnel connection in ps aux.

The next attempt was with netstat:

#!/bin/bash
# Script for creating a reverse SSH tunnel from Raspberry Pi to the server

output=$(netstat | awk '$5 ~ /^139.59.140.158:ssh/ && $6 ~ /ESTABLISHED/')

connection=$output

if [ "$connection" ]; then
:
else
ssh -f -N -o BatchMode=yes -R 2222:localhost:22 markus@139.59.140.158
fi

So, the script basically checks whether there is an established SSH connection to our server’s IP address before creating the tunnel again. But again, even though the connection had already been dropped on the server, the Raspberry Pi failed to create a new tunnel: netstat still saw an established connection to the server, so the tunnel was never re-created.

We could have set a timeout on the SSH command to drop the connection after a certain amount of time. However, it would have been a bit sloppy to kill the connection in case someone happened to be remotely connected to the Pi at that moment.

I tried a program called AutoSSH, which looked very promising. Every time I manually killed the connection on the server, AutoSSH immediately brought it right back up. However, it was not capable of restoring the connection after a couple of minutes of internet downtime, so I had to look at other options.

EDIT 15.12.2017: One of our teachers commented on this, and said that he has used AutoSSH for maintaining reverse SSH tunnels successfully for years. Perhaps I did not have it configured correctly.

All of my attempts were unsuccessful, which is why, for now, I had to do something a bit radical to make sure we’re not running out of RAM on the server.


Script for killing the useless SSHD processes on the server

I created a shell script called terminatesshd.sh, which kills the extra, pointless sshd processes on the server. First, it checks the PID (process ID) of the established reverse SSH tunnel, because we don’t want to kill that process. I also made it check whether any users are SSH’d into the server, because I don’t want to kick users out when I kill the processes. Had I had more time, I would have made it check the PIDs of those users too.

In the end, this is what I came up with:

#!/bin/bash

CheckPID=$(sudo lsof -ti TCP:2222)
PID=$CheckPID

CheckSession=$(who)
session=$CheckSession

if [ "$session" ]; then
:
#echo 'Someone is logged in! Aborting...'

else
ps aux | grep 'sshd: markus' | awk '{print $2}' | grep -v $PID | sudo xargs kill -9
#echo 'Just killed all processes!'

fi

So in short: the script checks the PID of the SSH tunnel, and then, if any users are logged in to the server, it does nothing. If no users are logged in, it kills all sshd processes except the one with the PID we checked at the beginning of the script.

It should be noted, though, that this has not been thoroughly tested due to time limits, but it seemed to work as expected. We can now run it with Cron every once in a while.

This is in no way a perfect solution, which is why I’ll be configuring SaltStack to work alongside it.


Reverse SSH tunneling for remote management

Preview:

One of the main requirements for this project is a reliable method for managing our prototype remotely. The problem is that the device is in a mobile network behind NAT, which means that accessing it via normal SSH is not possible, since it does not have a static public IP address that we could connect to.

We discussed different ways to approach this. A reverse SSH tunnel, connected from the Raspberry Pi to the server, would eliminate the problem with NAT, as our server is accessible via a public IP address, whereas the connection would not work the other way around. We also thought about using remote configuration tools like Puppet or Salt.

In the end, we decided that SSH is the most versatile way to go about this. However, we have to figure out how taxing it is in terms of resources; after all, our Raspberry will be running off a battery in a mobile network. Whether we will use this approach later in production depends on the reliability of the connection.

[Image: reverse SSH tunnel diagram]


Testing a reverse tunnel manually:

So the idea is to start the tunnel connection from the Raspberry Pi. To begin with, I tested how the reverse tunnel worked manually.

ssh -N -R 2222:localhost:22 markus@139.59.140.158

The connection was refused. This was because the server was not yet allowing connections to port 2222.

This is when I logged in to our public server, the one we’re trying to connect to from the Pi. There, I first made a hole in the firewall to allow incoming connections to port 2222:

sudo ufw allow 2222/tcp

After that, I went back to the Raspberry Pi and tried the ssh command above, this time with promising results.

I was warned that the authenticity of the host couldn’t be established and asked whether I wanted to continue connecting, which is completely normal for a first connection. After answering “yes”, I typed in the user markus’ password. After that, nothing happened visually; the terminal just sat waiting.

Back on the server, I typed in the following command:

ssh -l pi -p 2222 localhost

I was prompted for the pi user’s password and, after successful authentication, I was logged in to the Pi.

So the reverse SSH was working!

Essentially, what’s happening here is that the public server listens on port 2222 for incoming SSH connections. When a connection comes in, it is forwarded through the tunnel that the Pi has already established. So while the tunnel is up, connecting to port 2222 on the server redirects you to the Raspberry Pi.
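As a side note, the same tunnel can be written as an entry in the Pi’s ~/.ssh/config (a sketch; the Host alias “tunnel” is made up):

```shell
# ~/.ssh/config on the Raspberry Pi
Host tunnel
    HostName 139.59.140.158
    User markus
    RemoteForward 2222 localhost:22
```

With this in place, `ssh -N tunnel` opens the same reverse tunnel as the full command above.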


Automating the process:

In order for this to make any sense, I had to make sure the Pi automatically keeps the reverse SSH tunnel up at all times. I did this with a simple shell script, which is run every minute by a Cron job.

SSH key pair

The first order of business was to generate SSH keys for the Pi and the server. Otherwise, running the script gets more complicated, because the Pi is prompted for the server user’s password. I wanted to avoid this, so I generated a key pair: a private key for the Pi and a public key for the server.

On the Raspberry Pi:

cd ~/.ssh
ssh-keygen -t rsa

When asked, I did not create a passphrase. After the keys were generated, I transferred the public key “id_rsa.pub” to the server with scp:

scp id_rsa.pub markus@139.59.140.158:.ssh/authorized_keys

Now, I am no longer prompted for a password when connecting from the Pi to the server. (Note that the scp above overwrites any existing authorized_keys on the server; ssh-copy-id markus@139.59.140.158 would append the key instead.)

Shell script for connection

I made a shell script called “remoteconnection.sh” for automating the connection. It’s worth noting that I added two options to the ssh command: -o BatchMode=yes makes the connection fail outright if the authentication keys are not set up correctly (instead of falling back to a password prompt), and -f tells ssh to background itself after authentication.

#!/bin/bash
# Script for creating a reverse SSH tunnel from Raspberry Pi to the server

ssh -f -N -o BatchMode=yes -R 2222:localhost:22 markus@139.59.140.158

Now the script should be ready to go. Before scheduling a Cron job, I tested it manually. However, before I could run it, I had to make it executable with:

chmod 700 ~/remoteconnection.sh

Then I ran the script:

./remoteconnection.sh

SSH went to the background and I was not prompted for a password. On the server side, I connected again with the same command as earlier:

ssh -l pi -p 2222 localhost

After typing in the Pi’s password, I was in.

Cron job for running the script

For initial testing purposes, Cron runs the script every minute. I want to be able to access the Raspberry Pi at all times, not wait up to 15 minutes before being able to connect.

On the Pi I edited Cron jobs with:

crontab -e

and added in the following line:

*/1 * * * * /home/pi/remoteconnection.sh >/dev/null 2>&1

By default, Cron likes to mail the output of each job run to the user. I avoided this by adding “>/dev/null 2>&1”, which discards both standard output and standard error.
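An alternative (not what we used) is to silence cron’s mail for the whole crontab with an empty MAILTO variable:

```shell
# At the top of the crontab: an empty MAILTO disables mail for all jobs below
MAILTO=""
*/1 * * * * /home/pi/remoteconnection.sh
```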

Our Pi should now create a reverse SSH tunnel every minute, if the tunnel is not already set up.
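Worth noting: the script as written does not itself check for an existing tunnel before connecting, so stale sessions can pile up. A guard along these lines (a sketch; the pgrep pattern is an assumption based on the ssh command above) would make the cron job a no-op while a tunnel is alive, and OpenSSH’s -o ExitOnForwardFailure=yes makes ssh give up when port 2222 is already taken on the server:

```shell
#!/bin/bash
# Guard sketch: only start a new tunnel if none is running yet.
port=2222
if pgrep -f "ssh .*-R ${port}:localhost:22" > /dev/null; then
    msg="tunnel already up - nothing to do"
else
    msg="no tunnel running - would start one here"
    # The real script would run:
    # ssh -f -N -o BatchMode=yes -o ExitOnForwardFailure=yes \
    #     -R 2222:localhost:22 markus@139.59.140.158
fi
echo "$msg"
```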

I checked to see whether Cron was actually running the script at all. Log details can be found in /var/log/syslog:

sudo tail -F /var/log/syslog

And I was pleased to see that Cron was indeed running it every minute:

cronworking

(The picture above is a bit old, which is why the script name is different, but the results are the same.)


Testing the connection:

During testing, I noticed that after a reboot the connection did not always come up automatically. I often had to log in to the Pi and start the script manually, and even then it was hit or miss. When it worked, however, the Pi maintained the connection very reliably. The problem is that after rebooting the device, remote port forwarding fails to listen on port 2222.

I realized that this happens because, after the old connections have dropped, port 2222 on the server is still held open waiting for them. To fix this, I made a script to kill the old processes using port 2222 on the server.

#!/bin/bash
# Script for killing old connections to port 2222. Run this after remote connection is over.
# Note! This is run on the server! You can check whether CLOSE_WAIT connections are still alive with: "sudo lsof -i TCP:2222".

sudo lsof -t -i tcp:2222 -s tcp:listen | sudo xargs kill

To check whether old connections are still in the CLOSE_WAIT state, you can use the command:

sudo lsof -i TCP:2222

Now, every time I want to connect remotely to the Raspberry Pi, I first check whether old connections are still waiting and, if they are, kill them with the script above. After that, I just have to wait for the Pi to reconnect (which it does every minute).

Another “problem” we discovered was that the mobile network adds very noticeable latency when connecting remotely to the Raspberry Pi. Everything is remarkably delayed, but still functional.

A week back, we were away from school for a holiday, which gave us a good opportunity to test the reliability of the mobile connection and our remote management tools.


We left the Raspberry Pi running on the mobile network for over a week. Every day we tested the remote connection twice and every attempt was successful.

It’s not perfect, but it’s working. Now that the course resumes, I’ll be developing this further to make sure it’s reliable and works immediately at startup.
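One option for the startup goal (a sketch, not part of the current setup; the unit name is made up) is a systemd service that supervises the tunnel and restarts it whenever it drops:

```shell
# /etc/systemd/system/reverse-tunnel.service
[Unit]
Description=Reverse SSH tunnel to the management server
After=network-online.target

[Service]
User=pi
# Foreground ssh (no -f) so systemd can supervise and restart it
ExecStart=/usr/bin/ssh -N -o BatchMode=yes -o ExitOnForwardFailure=yes -R 2222:localhost:22 markus@139.59.140.158
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
```

Enabled with `sudo systemctl enable reverse-tunnel`, this would replace the cron job entirely.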