Blinking red using Spark publish

After a couple of seconds it comes back to cyan, but then it goes back to blinking red again. (I am calling Spark.publish at 80ms intervals.) Any ideas why this is happening?

Every time I want to reflash the core I need to apply a factory reset, because it isn’t able to find the core while it is busy sending with Spark.publish (I guess). By the way, how bad is it to apply a factory reset several times?
Thanks! :blush:

Hi @afromero

There is a rate limit for Spark.publish() of an average of one per second, with a burst of up to four allowed, but that should not be crashing your core, just dropping events.

Can you read the blinking red pattern? It will likely be SOS, n flashes, SOS, n flashes, where SOS is three quick flashes followed by three slower flashes followed by three quick ones again. The number of flashes tells you the failure. See this doc:

http://docs.spark.io/troubleshooting/#troubleshoot-by-color-flashing-red

Since @bko beat me to it (yet again :scream:) I’ll just stick to the remaining questions.

You can reprogram the Core about 100,000 times I believe (don’t hold me to that number), so doing a couple of resets a day shouldn’t hurt at all.

It would be useful if you could post your code here, so we can see what you’re talking about and perhaps track down any problems there might be. Please take a look at this post so you know how to properly format your code when posting to the forums. Thanks in advance!

It seems to be an “invalid case”, which I don’t know the meaning of. But if you say there is a rate limit for Spark.publish, then I don’t think it will work for what I need. I used your code (Getting started) and it seemed very good for my purpose, but apparently I cannot use it more than a couple of times per second. What do you suggest if I want to send many samples (e.g. one every 20ms)? Thanks.

Thank you @Moors7. Much appreciated. :slight_smile:

Hi @afromero

If you post your code, we can help you debug it.

If you have only 8-bit or so samples at 20ms, you could still use publish by grouping the data into a string (63-byte max for publish) and sending it that way. You need to get 50 values in a string to make the rate average out.
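
For example, here is a minimal sketch of that grouping idea, assuming you can live with squeezing each 8-bit sample into one printable character (the event name and batch size here are made up):

    // Hypothetical sketch: batch 50 reduced-resolution samples into one
    // publish so the average rate works out to one event per second.
    #define SAMPLES_PER_EVENT 50

    char eventData[SAMPLES_PER_EVENT + 1];  // 51 bytes, under the 63-byte limit
    int sampleIndex = 0;

    void addSample(uint8_t sample) {
        // Map the 8-bit sample into the printable range '!'..'~' (94 levels),
        // trading some resolution for a publish-safe string.
        eventData[sampleIndex++] = '!' + (sample * 93) / 255;
        if (sampleIndex == SAMPLES_PER_EVENT) {
            eventData[sampleIndex] = '\0';
            Spark.publish("samples", eventData);  // 50 samples x 20ms = 1 event/s
            sampleIndex = 0;
        }
    }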

You could also use a pull model with a Spark.variable() which can be a much longer string.
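
As a rough sketch of that pull model (the variable name and buffer are made up):

    // Hypothetical sketch: expose a string buffer as a cloud variable and
    // let the phone poll it over the REST API at whatever rate it likes.
    char dataBuffer[622];  // Spark.variable strings can be up to 622 bytes

    void setup() {
        Spark.variable("samples", dataBuffer, STRING);
        // Fill dataBuffer elsewhere; each GET of the variable returns its current contents.
    }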

You could also use regular TCPClient which is what a lot of folks do with services like Google spreadsheets and Azure and other cloud data repositories.
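
A bare-bones TCP push could look like the sketch below; the host name and port are placeholders for whatever service you point it at:

    // Hypothetical sketch: send a batch of samples to your own server over TCP.
    TCPClient client;

    void sendBatch(const char* payload) {
        if (client.connect("my-server.example.com", 8080)) {
            client.println(payload);
            client.stop();
        }
    }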

@bko, all of this is inside a function that is called when a falling edge is detected via an interrupt.
What I do here is read data over SPI (I use an ADS1299 for that). Say the input is a 1Hz sine wave; I sample it at 250sps (every 4ms), and each sample is 96 bits. The point of all this is to watch the wave on my cellphone, and I cannot afford to lose data, so I cannot stop sending.

//Every 4ms a falling-edge interrupt lands here
digitalWrite(CS, LOW);
        long output[9];
        long dataPacket = 0;                        // must start at 0 before bytes are shifted in

        static unsigned long lastTime = micros();   // static, so it persists across interrupts
        char publishString[40];

        for (int i = 0; i < 4; i++) {               // 4 channels...
            for (int j = 0; j < 3; j++) {           // ...each one with 3 bytes (24 bits)
                digitalWrite(SS, LOW);
                byte dataByte = SPI.transfer(0x00);
                digitalWrite(SS, HIGH);
                dataPacket = (dataPacket << 8) | dataByte;
            }
            output[i] = dataPacket;
            dataPacket = 0;
        }
        digitalWrite(CS, HIGH);

        outputCount++;
        unsigned long now = micros();
        if (outputCount == 5) {
            outputCount = 0;
            now = now - lastTime;
            lastTime = micros();                    // restart the window for the next 5 samples
            sprintf(publishString, "Time: %luus", now);  // %lu, since micros() returns unsigned long
            //Serial.println(publishString);
            Spark.publish("Uptime", publishString); // calling this from an interrupt is the problem discussed below
        }

I want “publishString” to reach my cellphone in some way. Once the uptime is getting there every 1~50ms (or even more), I will find another way to send “output[]”. Any clue? Thank you :slight_smile:

@bko, I am thinking about Spark.variable using a string (up to 622 bytes). What if I use one variable per channel (24 bits per sample, one sample every 4ms) and accumulate half a second of data in the variable as a buffer? That would be 375 bytes, which I could then send. I would send data every 0.5s. Can Spark.variable handle that rate? It would be much better than sending every 4ms, which would be inefficient.

OK, so publishing from inside the interrupt handler has been a problem in the past; perhaps @mdma can chime in, since I thought that was fixed. Try slowing your interrupt down to once per second for testing and see if that works.

You should probably gather data in your interrupt routine but send it periodically in the main loop.
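
As a minimal sketch of that split (the 125-sample batch matches your 0.5s idea; readSampleFromSPI() and publishBuffer() are hypothetical helpers):

    volatile long sampleBuffer[125];  // half a second of samples at 4ms each
    volatile int sampleCount = 0;
    volatile bool bufferReady = false;

    void onFallingEdge() {            // attached with attachInterrupt()
        if (!bufferReady && sampleCount < 125) {
            sampleBuffer[sampleCount++] = readSampleFromSPI();  // hypothetical SPI read
            if (sampleCount == 125) {
                bufferReady = true;   // hand the batch off to loop()
            }
        }
    }

    void loop() {
        if (bufferReady) {
            publishBuffer();          // hypothetical: format + Spark.publish, safely outside the ISR
            sampleCount = 0;
            bufferReady = false;
        }
    }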

Spark.variable() will work, but your rate is pretty high. You could also roll your own with TCP.

It’s not possible to successfully use any Spark cloud functions from an interrupt handler. It’s best to store the data you want to publish in a buffer and then publish on the main thread, like @bko suggests.

(@bko Maybe you’re thinking of a recent fix that prevented a SOS when calling Spark.publish() while not connected to the cloud.)

FWIW, I seemed to have a problem when I overdid some debug output to Serial1 and accidentally created continuous output at 115200 on that serial channel. After that I couldn’t re-program the Spark; I had to reset it and reclaim it. Then the same problem happened again. I took out the Serial1 prints and (after another reset-reclaim) all was okay again. So, unless I am attributing this problem wrongly, I would say it may have some bearing on the foregoing discussion (which, yeah, is a couple of months old now).