Real-world applications with event-driven / multithreaded code

I would really love to use the Spark Core for real-world applications, however I am having some trouble setting it up right now.
A real-world sensor would capture data (for example, temperature) every second and send it to a webserver. Here are the problems:

  1. The sampling needs to be interrupt driven so that it really happens every second; this can be solved with a timer library.
  2. The data needs to be sent to the webserver asynchronously and without blocking (and saved in flash if there is no connection). And we cannot stop the sampling in order to access the temperature variable.
  3. When the core tries to reconnect to the cloud or to the webserver, the temperature measurement every second still needs to take place; the reconnect must not block it.

Spark professionals out there, can you please point me in the right direction? This is a simple real-world example that should be solvable with the Spark.

You basically need a buffer, possibly a ring buffer if you don’t mind overwriting your oldest data should the cloud connection be down for too long, and then your main loop would write values from the buffer to the cloud until it’s empty. You should probably timestamp your buffer entries so that you can recreate the realtime data on your webserver.

The temperature sampling every second can be done a number of ways, but it must be fault tolerant with regards to the cloud connected state. I.e., it must always take a sample, and push that sample to the end of the buffer.

Your main loop would then process the buffer asynchronously from oldest to newest (FIFO).

It would probably be wise to add some flags that prevent the buffer from being read and written at the same time. Since the buffer could fill up with more data than can be written to the server in one second, it might be best to send only one value to the server per pass through the main loop. You might also be able to use a small extra buffer for the incoming samples.
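A minimal sketch of such a timestamped ring buffer; the `Sample` and `SampleRing` names are my own illustration, not an existing Spark library:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical sample record: timestamp plus temperature reading.
struct Sample {
    uint32_t timestamp;  // e.g. seconds since boot
    float    tempC;
};

// Fixed-size ring buffer. When full, the oldest sample is overwritten,
// so a long cloud outage costs only the oldest data.
template <size_t N>
class SampleRing {
public:
    bool empty() const { return count == 0; }
    bool full()  const { return count == N; }

    void push(const Sample& s) {
        buf[head] = s;
        head = (head + 1) % N;
        if (count == N) {
            tail = (tail + 1) % N;   // overwrite: step tail past the oldest
        } else {
            ++count;
        }
    }

    // Pops the oldest sample (FIFO). Returns false if the buffer is empty.
    bool pop(Sample& out) {
        if (count == 0) return false;
        out = buf[tail];
        tail = (tail + 1) % N;
        --count;
        return true;
    }

private:
    Sample buf[N];
    size_t head = 0, tail = 0, count = 0;
};
```

On the Core, the timer interrupt would call `push()` and the main loop would call `pop()`, either until empty or just once per pass as suggested above.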

I’m going to wait and see if anyone chimes in that has done this already, surely someone has?.. and already has a good example worked up for this. If not, this is definitely a great example to work up for all.


Hello BDub,

We are on the same page here, I agree with your principle, however there are some practical issues:

  1. How do I make the buffer “thread safe”?
  2. How do I make sure the temperature measurements keep taking place even if the Spark has lost its cloud or WiFi connection?
  3. How do I make the communication (which sends the contents of the buffer) run in parallel with, and without blocking, the rest?

Does anyone have some practical knowledge / example pieces for the different functionalities?

@GrtVHecke, in the Spark, for all intents and purposes, there are only two cooperative threads - loop() and main(). Your interrupt-driven sensor would be another, so to speak.

  1. Loop() must cede control back to main(), the background task, to maintain the cloud connection. To be “thread safe” on your ring buffer you can either a) disable interrupts when reading the ring buffer in loop() or b) set a semaphore and make the ring buffer a mutually exclusive resource.

  2. If wifi and cloud are enabled, then if wifi or the cloud drop out, loop() will not run (@BDub?). This is a problem that Spark is well aware of and is addressing by creating a “true” non-blocking background task.

  3. At this time this is not possible to my knowledge.

Is the Spark Cloud connection necessary or are you using TCPClient to connect to a server? You mention data every second but many samples can be buffered before they go “stale”. Is the sensor capable of being sampled once a second?

:smile:
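To make option (b) above concrete, here is a hedged sketch of a flag-style semaphore: the timer ISR skips a sample while loop() holds the buffer. All names are illustrative; on the real Core, `onSecondTick()` would be the timer interrupt handler.

```cpp
// Shared state between the timer ISR and loop(). On the Core these would
// be file-scope globals; `volatile` keeps the compiler from caching them.
volatile bool bufferBusy = false;   // semaphore: loop() is reading the buffer
volatile int  droppedSamples = 0;   // samples the ISR had to skip

const int BUF_SIZE = 60;
volatile float sampleBuf[BUF_SIZE];
volatile int   sampleCount = 0;

// Called from the 1 Hz timer interrupt. If loop() holds the buffer,
// skip this sample rather than corrupt the read in progress.
void onSecondTick(float tempC) {
    if (bufferBusy || sampleCount >= BUF_SIZE) {
        droppedSamples = droppedSamples + 1;
        return;
    }
    sampleBuf[sampleCount] = tempC;
    sampleCount = sampleCount + 1;
}

// Called from loop(): drain the buffer while holding the semaphore.
int drainBuffer(float* out) {
    bufferBusy = true;              // ISR will now skip instead of writing
    int n = sampleCount;
    for (int i = 0; i < n; ++i) out[i] = sampleBuf[i];
    sampleCount = 0;
    bufferBusy = false;
    return n;
}
```

As noted above, this trades a possible missed reading for never blocking the ISR; if a missed reading is unacceptable, disable interrupts around the drain instead.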


@peekay123,

Thank you for the reply!

  1. Can you show me an example of how to use a semaphore for mutual exclusion in Spark please? It would be very helpful.

  2. Do you have any idea of the timeframe in which Spark will address the blocking task in loop()? Can we use main() in the Spark for this and override it? If so, do you have some examples of this?

  3. Will item 3 be possible once item 2 is fixed? That functionality will really be necessary.

  4. The connection will be made with TCPClient (and I will address a Node.js server). And yes, data will be gathered for 1 minute and then sent to the webserver. And yes, the sensor is capable of being sampled each second (RTD).

Geert

@GrtVHecke,

  1. A semaphore is a type of flag. In your case, you can simply set a flag to TRUE while the foreground is reading the buffer. In the ISR, the code checks the flag and skips putting the sensor reading in the buffer. This is similar to disabling interrupts but not as disruptive. If missing one reading is not acceptable, then you will have to disable/enable interrupts.

  2. The work on the non-blocking background code is ongoing but I don’t have a timeline yet. I will inquire at our next Spark Hangout.

  3. Doing the communications “in parallel” should be achievable when item 2 is completed. The size of the buffer, however, may be an issue if the core is off-line for a long duration.

If I understand correctly you sample for 1 minute (60 samples) then send that “bundle” to the server and so on. Using TCP, I assume the server will respond with an “ack” to each bundle. You will need to write some non-blocking finite-state machines (FSM) to handle the communication, buffer and sensor states but it is all feasible.
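The non-blocking send FSM mentioned here could be sketched roughly as below. The state names and the `step()` signature are my own assumptions for illustration; the real TCPClient and ack handling are replaced by injected booleans so the state logic itself can run off-device.

```cpp
// States for sending one buffered "bundle" to the server without blocking.
enum SendState { SEND_IDLE, SEND_CONNECTING, SEND_TX, SEND_WAIT_ACK };

class SendFsm {
public:
    SendState state = SEND_IDLE;

    // One step per pass through loop(). In real code, `connected` would come
    // from TCPClient and `ackReceived` from parsing the server's response.
    void step(bool bundleReady, bool connected, bool ackReceived) {
        switch (state) {
        case SEND_IDLE:
            if (bundleReady) state = SEND_CONNECTING;  // kick off connect
            break;
        case SEND_CONNECTING:
            if (connected) state = SEND_TX;            // socket is up
            break;
        case SEND_TX:
            // client.write(bundle) would go here; assume it completes
            state = SEND_WAIT_ACK;
            break;
        case SEND_WAIT_ACK:
            if (ackReceived) state = SEND_IDLE;        // server acked bundle
            break;
        }
    }
};
```

Real code would also add a timeout to SEND_CONNECTING and SEND_WAIT_ACK so a dead server returns the FSM to idle with the bundle retained for retry.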

@peekay123,

First of all, thank you for the answer. I still have some questions, but it is becoming more clear.

  1. The flag (true or false) will be written and read by both the foreground process and the ISR code. Will that not give any memory issues, since it is the same kind of problem (accessing the same memory at the same time, namely the flag)?

  2. Please ask the Spark people to get us informed, so that we have a timeline. I hope it will not take too long because I really need this functionality.

  3. I think the best solution might be the possibility of building multiple non-blocking loops into the Spark. Like loop() and main(), you could also have UserLoop01() … UserLoop10(), which would be able to run in parallel.

  4. About main(). You mentioned before that there is a main() as well, as in a pure C program. Can we define a main() in the user code as well, and will it have priority over loop()? Can you explain more about how these two interact and how to use them, please?

  5. Yes, the server will respond with ACK messages. The non-blocking code would actually go in loop() or main(), with the sensor read in a timer. For this kind of purpose it would be good to have multiple UserLoops(), as I described, together with a way to safely exchange data between the UserLoops, like you can do in high-level languages, or else the possibility to create a process (as in some other microcontroller environments).

@GrtVHecke,

  1. You are correct, though the ISR only reads the flag. There is a very small probability that the flag will be read by the ISR while the foreground task is setting it. At that point, you disable interrupts around the write operations.

  2. Will do

  3. What you are talking about are “threads” which can run concurrently. This brings your requirement to an RTOS level, which is being discussed for Spark Core II. You can create cooperative loops in the existing code but there is no parallelism.

  4. main() is a C construct whereas setup() and loop() are Arduino constructs. In the Core, main() is used to run the “background” code, whereas setup() and loop() are used for the user code. In this context, you should not consider using main() in any way.

  5. You are now looking for an RTOS, which is beyond the capability of the current Spark Core.

HOWEVER, one member did port the NuttX RTOS to the Core, though with few resources remaining:

@GrtVHecke, my gut tells me that you are over-designing your device! Most, if not all, of the functionality you require can be done without threads!
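The "disable interrupts around the write" advice from item 1 can be sketched like this. The Wiring `noInterrupts()`/`interrupts()` calls are stubbed here so the snippet compiles off-device; on the Core you would use the real ones.

```cpp
// Stand-ins for the Wiring calls of the same name, so this compiles
// off-device; on the Core the real calls mask/unmask interrupts globally.
static bool interruptsEnabled = true;
void noInterrupts() { interruptsEnabled = false; }
void interrupts()   { interruptsEnabled = true; }

volatile bool readingBuffer = false;   // the "semaphore" flag

// Foreground: wrap the flag write in a critical section so the ISR can
// never fire between testing and setting the flag.
void beginBufferRead() {
    noInterrupts();
    readingBuffer = true;
    interrupts();
}

void endBufferRead() {
    noInterrupts();
    readingBuffer = false;
    interrupts();
}
```

On the Cortex-M3, a single bool write is already atomic, so this is mostly belt-and-braces; it matters more once the flag grows into a multi-byte structure shared with the ISR.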

@peekay123,

Thank you again for your reply.

Please find my ideas below:

  1. I will disable interrupts at the moment of the read. I just wonder: if I read a value, disable interrupts, and enable them again, say 0.5 seconds after the last timer event, will the timer start counting over from 0 (so that the next event comes after 1.5 seconds), or does it just mean I will miss that one event?

  2. If I don’t do it with threads, how should I make the TCPClient non-blocking? If that is possible, I will do it that way.

  3. Any ideas on features and planning for the Core II (just a question, I suppose it will not be on time for my project)?

Kind regards,

Geert

I’ve been working on something similar for a thermostat. Right now the upload-to-server part does block the temperature-reading code, but it is set up to buffer to a file on an SD card. With that in mind, I’ve switched to compiling locally because I’ve included the most recent RAM optimizations (decreasing the TCP buffer and modifying the makefile). Code can be found at https://github.com/mumblepins/core-firmware

@GrtVHecke,

Can you please tell me what experience you have programming on other micro-controllers?

  1. Disabling an interrupt does not disable the hardware timer with which it is associated so it will still fire at the defined interval. The IntervalTimer library has a function for ONLY disabling a timer's interrupt instead of disabling ALL interrupts so as not to affect the entire system.

  2. The loop() runs forever, ceding control to the background task for only 5ms or so (typically). Creating a set of finite state machines, coupled with timing loops should give you what you need. For example, I have a program which reads a sensor every 100ms into a small buffer. Every second I grab the buffer, calculate some values and update a display. Every 30 seconds, I (optionally) log those values to a microSD and every minute I Spark.publish() them. All these are timer based using the elapsedMillis library, are mutually exclusive and each are state driven (FSM) so they are not blocking. It is just a matter of designing to requirements.

  3. You can check out the Core II topic for more information:
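The timer-based pattern in item 2 can be sketched without the elapsedMillis library, using only `millis()` arithmetic. `millis()` is faked here so the logic runs off-device; on the Core you would call the real function.

```cpp
#include <cstdint>

// Fake clock so the sketch runs off-device; on the Core, delete these two
// lines and use the real millis().
static uint32_t fakeNow = 0;
uint32_t millis() { return fakeNow; }

// One cooperative, non-blocking periodic task: poll() returns immediately
// unless its interval has elapsed. Several of these, each with its own
// interval (100ms sample, 1s display, 30s log, 1min publish), give the
// scheduling described above.
struct PeriodicTask {
    uint32_t interval;
    uint32_t last = 0;
    int runs = 0;

    void poll() {
        if (millis() - last >= interval) {   // unsigned math survives rollover
            last += interval;                // += avoids cumulative drift
            ++runs;                          // real work goes here
        }
    }
};
```

Using `last += interval` instead of `last = millis()` keeps the long-term rate exact even when a poll arrives late, and the unsigned subtraction handles the 49-day millis() rollover.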

@peekay123,

Thanks again for the response.

My background is in Linux-based systems, but for this project we need a smaller and cheaper board. From an education standpoint, I have an engineering degree in electronics, which is why I was assigned this task.

Would it be possible to have some example code for your FSM? I wonder how you do it when, for example, a connection to the webserver can easily take up to 3 seconds (as I tested it). Some example code would be a very interesting starting point.

Kind regards,

Geert

@GrtVHecke, I will try and dig up some FSM code for you. The key to connecting to the web server, in your example, is not how long it takes but monitoring the STATE of that process and acting accordingly. The selection of a monitoring interval is also important. For example, I will not monitor the web server connection state every 1 ms! Instead, I may monitor it every 100 or 500ms depending on the requirements.

I now understand your “thread” view because of your linux background :stuck_out_tongue:

@peekay123,

Thank you very much again for your reply, and thank you for “digging up” some FSM code as an example; it will be very helpful. Sorry for the questions, but coming from a Linux world, these seemed like the logical questions to me :wink:

@GrtVHecke, here is an example of an FSM that I wrote for a Teensy 3 board connected to a CC3000 breakout while waiting for the Spark Core. The program allowed the user to turn wifi on or off and this non-blocking FSM managed the wifi connection. The code was called from loop() which I also included in the gist. :slight_smile:
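In the same spirit as the gist described above (this is my own illustrative sketch, not the actual gist code), a non-blocking wifi on/off FSM driven from loop() might look like:

```cpp
// Connection lifecycle states for a user-controlled wifi link.
enum WifiState { WIFI_OFF, WIFI_STARTING, WIFI_ON, WIFI_STOPPING };

class WifiFsm {
public:
    WifiState state = WIFI_OFF;

    // Called once per pass through loop(). `wantOn` is the user's request;
    // `linkUp` would come from the CC3000 driver on real hardware.
    void step(bool wantOn, bool linkUp) {
        switch (state) {
        case WIFI_OFF:
            if (wantOn) state = WIFI_STARTING;        // kick off connect
            break;
        case WIFI_STARTING:
            if (linkUp)       state = WIFI_ON;
            else if (!wantOn) state = WIFI_STOPPING;  // user changed mind
            break;
        case WIFI_ON:
            if (!wantOn)      state = WIFI_STOPPING;
            else if (!linkUp) state = WIFI_STARTING;  // link dropped: retry
            break;
        case WIFI_STOPPING:
            state = WIFI_OFF;                         // teardown is quick
            break;
        }
    }
};
```

Each call returns immediately, so loop() stays responsive no matter how long the radio takes to associate.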