The aim of this project is to monitor the health of a bee hive (eventually several hives). To do this I want to know the size of the cluster of bees in the main brood body of the hive, to see how much flight activity there is and to listen to the buzzing – with a view to detecting preparation for swarming and other things demanding attention.
I have found two systems already available that perform at least some of this. There is a modification to the Open Energy Monitor that measures 4 temperatures and has a very neat display based on emoncms (http://openenergymonitor.org/emon/beehive/v2). It would cost about £60 per hive pretty much pre-built, but it measures temperature only.
There is also a commercial system, Arnia (http://www.arnia.co.uk/), that collects a richer set of data. It is quite pricey and has ongoing subscription charges (£240 plus £70 p.a. for a single hive), with a 3-4 month battery life. It communicates wirelessly from each hive to an apiary data collection point and then to the Arnia server via GPRS.
The system described here uses the Spark's wifi, which, with a carefully positioned router, reaches my wife's apiary. For additional hives I can either use more Sparks or an 868MHz wireless Arduino.
This is work in progress. I have the basic hive data collection sensors and software up and running and have a simple python script that collects the data. I’m publishing now because I could use some advice on getting the most from the Spark RAM.
The hardware is very simple. I have a Sparkfun microphone feeding A7, a photoresistor on A0 and 47k thermistors on A1-A6.
A0-A6 are all pulled up to 3.3v with a 47k resistor network and have 10uF decoupling capacitors to ground. There is an LED on D0.
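The thermistor arithmetic can be sanity-checked off the board. Here is a short Python sketch of the same Beta-model conversion the firmware uses (constants copied from the firmware; the divider puts the thermistor on the low side of the 47k pull-up):

```python
import math

# Constants from the firmware: Beta-model thermistor on a 47k pull-up,
# read by the Spark's 12-bit ADC (readings 0..4095).
R_INF = 0.06      # thermistor resistance at infinite temperature, ohms
BETA = 3990.0     # Beta coefficient, kelvin
R_UP = 47000.0    # pull-up resistor, ohms

def adc_to_celsius(v):
    """Convert a raw ADC reading to degrees C (thermistor on the low side)."""
    r_therm = R_UP * v / (4096 - v)          # resistance from the divider ratio
    return BETA / math.log(r_therm / R_INF) - 273

# A mid-scale reading means the thermistor equals the pull-up (47k)
print(round(adc_to_celsius(2048), 1))  # → 21.0
```

Note the divider sense: a warmer (lower-resistance) thermistor pulls the ADC reading down, so smaller readings mean higher temperatures.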
The Spark software is …
// beehive monitor
// v0.01
// help yourself
// don't blame me
// don't get stung
#include "application.h"
#include <math.h>

// pin definitions
int led = D0;
int therm[2];           // two thermistors for demonstration, expandable to 6
int micPin = A7;
int optPin = A0;
int dynamicData[128];   // we are very short of memory so reuse this for optical and audio
int packetSize;
char UDPin[16];
UDP udp;

int v[2];                   // thermistor ADC readings
const float Rinf = 0.06;    // thermistor constants for 57k thermistor
const float B = 3990.0;
const float Rup = 47000.0;  // pullup resistor value
float t[2];                 // temperature measurements
char output[48];            // for Spark variable

void setup() {
    therm[0] = A1;
    therm[1] = A2;
    Spark.variable("temperature", &output, STRING);
    pinMode(led, OUTPUT);
    udp.begin(5208);
}

void loop() {
    if (udp.parsePacket() > 1) {
        digitalWrite(led, HIGH);        // turn ON the LED to show a packet received
        udp.read(UDPin, 16);
        if (UDPin[0] == 's') {          // if "sn" received, sleep for n minutes
            int i = UDPin[1] - '0';
            if (i > 0 && i < 10) {
                i *= 60;                // add a 0 for longer sleeps when debugged
                Spark.sleep(SLEEP_MODE_DEEP, i);
            }
        }
        if (UDPin[0] == 'd') {          // if "d?" received, send monitoring data
            udp.beginPacket(udp.remoteIP(), udp.remotePort());
            if (UDPin[1] == 't') {
                // update temperatures
                for (int i = 0; i < 2; i++) {
                    v[i] = analogRead(therm[i]);
                    t[i] = B / log(Rup * v[i] / (4096 - v[i]) / Rinf) - 273;
                }
                sprintf(output, "%d, %d, %.1f, %.1f", v[0], Network.RSSI(), t[0], t[1]);
                udp.write((unsigned char*)&output, sizeof(output));
            }
            else if (UDPin[1] == 'a') {
                // update sound
                for (int i = 0; i < 128; i++) {
                    dynamicData[i] = analogRead(micPin);
                }
                udp.write((unsigned char*)&dynamicData[0], sizeof(dynamicData));
            }
            else if (UDPin[1] == 'o') {
                // update light level
                for (int i = 0; i < 128; i++) {
                    dynamicData[i] = analogRead(optPin);
                }
                udp.write((unsigned char*)&dynamicData[0], sizeof(dynamicData));
            }
            udp.endPacket();
        }
        digitalWrite(led, LOW);         // turn OFF the LED
    }
}
And here is the Python stub that shows the data being collected …
#!/usr/bin/python
# Beehive monitor
#
import socket   # Import socket module
import time
import struct

host = "192.168.1.112"   # use address of Spark server
port = 5208              # Reserve a port for your service.

micData = [0] * 128
optData = [0] * 128

while True:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # Create a socket object
    print('+')
    s.settimeout(3)
    while s:
        try:
            s.connect((host, port))   # connect to server if awake
            s.sendall(b'da\0 ')       # da = send audio data
            print('sent send audio data message')
        except socket.error:
            print('unable to connect')
            break
        try:                          # check for audio data
            r = s.recv(1024)
            if not r:                 # an empty reply means the sender has closed for good
                print('socket disconnected')
            # read audio data packed in a buffer
            micData = struct.unpack('128i', r)
            print(micData)
            s.sendall(b'do\0 ')       # do = send optical data
            print('sent send optical data message')
        except socket.error:
            print('unable to connect')
            break
        try:                          # check for optical data
            r = s.recv(1024)
            if not r:
                print('socket disconnected')
            # read optical data packed in a buffer
            optData = struct.unpack('128i', r)
            print(optData)
            s.sendall(b'dt\0 ')       # dt = send temperature data
            print('sent send temperature data message')
        except socket.error:
            print('unable to connect')
            break
        try:                          # check for temperature data
            r = s.recv(1024)
            if not r:
                print('socket disconnected')
            # read string with temperature data
            try:
                text = r.decode("utf-8")
            except UnicodeDecodeError:
                text = "Can't decode"
            print(text)
            # s.sendall(b's1\0 ')     # sn = sleep for n minutes
            # print('sent go to sleep message')
        except socket.timeout:
            print('.')
        time.sleep(10)
    s.close()
print("Finished...")
You will see that the python code collects a set of readings and can then (though this is presently commented out) send the Spark to sleep. To stop the core going to sleep you just stop the python code from running, so that when the core wakes it stays awake for reprogramming. This is a good thing!
Data are collected each time the python code requests them.
Communication is by UDP. This seems the quickest way to get the data and handles the varying packet sizes. The python struct library looks after unpacking the raw integer data.
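On the wire, each audio or optical reply is the raw 512-byte buffer (128 32-bit ints, little-endian on the STM32) and each temperature reply is the NUL-terminated 48-byte sprintf string. A sketch of the unpacking, exercised here with synthetic datagrams standing in for `s.recv()`:

```python
import struct

def unpack_samples(datagram):
    """Unpack one 512-byte audio/optical datagram into 128 ints."""
    return struct.unpack('<128i', datagram)

def unpack_temps(datagram):
    """Decode the 48-byte temperature string, dropping the NUL padding."""
    return datagram.split(b'\0', 1)[0].decode('ascii')

# Synthetic round-trip, standing in for a real s.recv(1024):
fake_audio = struct.pack('<128i', *range(128))
print(unpack_samples(fake_audio)[:3])                         # → (0, 1, 2)
print(unpack_temps(b'2048, -55, 21.0, 20.5\0' + b'\0' * 25))  # → 2048, -55, 21.0, 20.5
```

The `'<'` prefix pins the byte order; on a little-endian host the plain `'128i'` used in the stub behaves identically.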
Issues
I have used the same array to store audio and optical data, collected sequentially; I found that I get an out-of-heap SOS if I hold both audio and optical data simultaneously. I would really like to collect more data points but I run out of space at around 256 samples. I'm not sure how the program gets through the 6k of user RAM, so any advice on finding a bit more space would be appreciated.
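One assumption worth testing (I have not tried it on the Core): the ADC only produces 12-bit values, so a uint16_t buffer would hold the same samples in half the space, i.e. 256 samples in the 512 bytes the current 128-entry int array occupies. The only host-side change is the struct format letter, 'H' instead of 'i':

```python
import struct

N = 256
# 12-bit ADC samples fit comfortably in unsigned 16-bit slots
samples = [i % 4096 for i in range(N)]
packed = struct.pack('<%dH' % N, *samples)   # what a uint16_t buffer would send
print(len(packed))                           # → 512: same bytes, twice the samples
unpacked = struct.unpack('<%dH' % N, packed)
assert list(unpacked) == samples             # lossless round trip
```

On the firmware side the matching change would be declaring the buffer as `uint16_t dynamicData[256]` and casting the analogRead result, but I have not verified that against the heap limit.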
I have 3 AA batteries that are trickle charged by a pair of 6v 90mA solar panels. As long as the Spark sleeps most of the time these power it nicely. I had originally planned to power both a Ciseco RFu-328 and the Spark. The RFu-328s are cheap as chips and have built-in 868MHz radio transceivers. The idea is to use one for each beehive in the apiary and have the Spark consolidate the data and forward it. Unfortunately the 1015 regulator I was using couldn't handle the power requirements of both the RFu and the Spark. For the time being I'm going ahead with just the Spark.
I have experimented with antennae and I have found that I get better range with the chip antenna than with a uFL-connected wifi antenna. I will pursue this in more detail now that I have found the Network.RSSI() function. It is great that the Spark Team has added plenty of new functionality - I really must re-read the documentation to see what else is new.
What Next?
I need to get off the breadboard and into a hive - I thought I’d got there until the regulator wimped out. When I get a beefier regulator I’ll get both processors talking by I2C which will be a new learning experience.
I have found that the Spark ADCs give a cleaner signal than the 328 unless I access the 328 using low-level code that disables interrupts. The conversion rates are roughly similar but I know the Spark has stacks of headroom if I dig into the wiring library. I want to do that as I'd like to see whether there is any interesting audio information in the ultrasonic range, but that will wait till I have the basic system working.
Spark ADC output from Sparkfun microphone - 440Hz tone input
RFu-328 output for the same tone - high-level code
RFu-328 output - low-level code
RFu-328 output - low-level code with interrupts disabled
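Since the test input is a known 440Hz tone, those same captures can also calibrate the effective sample rate: if the peak FFT bin of an N-sample capture is k, then fs ≈ 440·N/k. A sketch with numpy, using a synthetic capture in place of real microphone data (the 10kHz rate is invented for the demo; with only 128 samples the bin resolution makes this a rough estimate):

```python
import numpy as np

def estimate_sample_rate(samples, tone_hz=440.0):
    """Infer the ADC sample rate from a capture of a known pure tone."""
    x = np.asarray(samples, dtype=float)
    x -= x.mean()                       # remove the DC bias from the mic stage
    spectrum = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spectrum))        # peak bin index
    return tone_hz * len(x) / k

# Synthetic check: 128 samples of a 440 Hz tone at a pretend 10 kHz rate
fs_true = 10000.0
t = np.arange(128) / fs_true
capture = 2048 + 500 * np.sin(2 * np.pi * 440.0 * t)
print(round(estimate_sample_rate(capture)))   # within ~10% of fs_true
```

A longer capture (or parabolic interpolation around the peak bin) would tighten the estimate considerably.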
Signal processing
I was going to do most of this on the Spark so that I could send alerts using Spark publish. However while I’m struggling with RAM I’m doing the processing remotely. The volume of data isn’t a problem with the UDP communications.
The thermistors will measure the temperature along a line running through the middle of the bee brood nest. Where the bees cluster, the temperature will hold steady at around 30C, so this will give me the size of the brood cluster.
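With the six thermistors spaced along that line, the cluster size falls out of a simple count of how many adjacent sensors sit at brood temperature. A hypothetical sketch (the 30C threshold and 5cm spacing are assumptions for illustration, not measured values):

```python
SENSOR_SPACING_CM = 5.0    # assumed spacing of the thermistors along the line
BROOD_THRESHOLD_C = 30.0   # temperature the bees hold where they cluster

def cluster_extent_cm(temps):
    """Estimate cluster width from the longest run of warm sensors."""
    best = run = 0
    for t in temps:
        run = run + 1 if t >= BROOD_THRESHOLD_C else 0
        best = max(best, run)
    return best * SENSOR_SPACING_CM

# Three warm sensors in the middle of the line → a 15 cm cluster
print(cluster_extent_cm([12.0, 31.5, 34.0, 33.2, 14.1, 11.0]))  # → 15.0
```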
I will FFT the audio and optical signals.
The flicker on the optical signal will be a bit like the output of a Malvern Instruments particle counter and will give me information about the number and speed of bees passing through the hive entrance.
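Before any FFT, the raw count of bees passing can be sketched as dips in the photoresistor trace: each bee occluding the sensor drops the reading below a threshold, and each recovery ends one event. The baseline and dip depth below are invented numbers for illustration:

```python
def count_passages(samples, baseline, drop=200):
    """Count bees crossing the entrance as dips in the photoresistor signal."""
    events = 0
    shadowed = False
    for v in samples:
        if not shadowed and v < baseline - drop:
            shadowed = True       # a bee has started to occlude the sensor
            events += 1
        elif shadowed and v >= baseline - drop:
            shadowed = False      # light level has recovered
    return events

# Two dips in an otherwise steady trace
trace = [3000] * 5 + [2500] * 3 + [3000] * 5 + [2400] * 2 + [3000] * 5
print(count_passages(trace, baseline=3000))  # → 2
```

Dip duration would give the speed estimate in the same pass; the hysteresis (one event per excursion) keeps noise from double-counting a single bee.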
There have been several scholarly articles about bees buzzing. They mostly seem to use low audio frequencies and nothing more sophisticated than frequency-time traces and principal components analysis. The main thing is that there is a huge amount of data, but most of it is irrelevant or contains a lot of redundant information. I intend to distil out the key bits of information by using dimension-reduction techniques.

The basic idea is to transform each new set of data (2 spectrograms and a bunch of temperatures, plus time of day, weather, or anything else that seems relevant) into a short vector that consolidates the useful data and rejects everything else. I've done this in the past for monitoring engineering systems using nonlinear principal components, K-means and Kohonen mapping, and cleverer people than me have used Gaussian mixtures, the wonderfully named Generative Topographic Mapping, and Support Vector Machines. (This may seem to be drifting off topic, but I want the information in the public domain so that someone doesn't grab the IP from in front of all those hardworking beekeepers.)

The main thing is that once the current state of a hive is described by a short vector, it is statistically meaningful to compare it with a library of known states, either by finding the nearest known vector or by using a neural network classifier. This way I can provide an alert when the bees are about to swarm, are running short of space, are under attack by animals or disease, or have been disturbed. Furthermore, if I can't confidently classify the current state then it is novel, and that means someone needs to go and take a look.
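The nearest-known-state comparison with a novelty flag can be prototyped in a few lines, assuming the dimension reduction has already produced short vectors upstream. The state vectors, labels and distance threshold below are all made up for illustration:

```python
import numpy as np

def classify(vector, library, labels, novelty_threshold=2.0):
    """Nearest-known-state classifier with a novelty flag.

    `library` holds one short vector per known hive state, `labels` their
    names; anything farther than the threshold from every known state is
    reported as novel - meaning someone needs to go and take a look.
    """
    dists = np.linalg.norm(library - vector, axis=1)
    i = int(np.argmin(dists))
    return labels[i] if dists[i] <= novelty_threshold else "novel"

# Made-up reduced vectors for three known states
library = np.array([[0.0, 0.0], [3.0, 0.5], [0.5, 3.0]])
labels = ["quiet", "pre-swarm", "disturbed"]
print(classify(np.array([2.8, 0.4]), library, labels))   # → pre-swarm
print(classify(np.array([9.0, 9.0]), library, labels))   # → novel
```

The same structure drops straight into a probabilistic version (Gaussian mixtures replace the distance, a likelihood floor replaces the threshold) without changing the interface.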
After I have identified the key features I may not need to collect so much data. I hope that this is the case, because then I can get the processing back onto the Spark and make better use of the Cloud communications.
Solar trickle charger - 5.5v output cells, but with 3 AA cells connected it gives 3.4 to 4.11v
Breadboard Spark beehive monitor
Prototype board for the two-processor beehive monitor, with (for now) an inadequate power supply