Scope problems with Spark.variable()

As uncovered in this topic, there appear to be scope issues with Spark.variable().

There is no detection/protection for variables that go out of scope that are referenced later.

Reads may return garbage, writes may break things badly by scribbling on the stack/heap.

I would be delighted to be proved wrong.

@ScruffR, can you repost your test case here? It is not a particularly pathological test case, but it is a good start.

This is to demonstrate how things might look good at first, but get corrupted after a while - e.g. after function calls.

/********* Now tested and fails brilliantly  *********/

// just to create and destroy some String instances for demonstration
String produceGarbage(const String &s) {
    String s1 = String(s);
    String s2 = String(s);
  
    s1.toUpperCase();
    s2.toLowerCase();

    return s1 + s2;
}

int blinkLED(String c) {
    String test = produceGarbage(c);   // disturb the Stack and Heap a bit
    
    digitalWrite(D7, !digitalRead(D7));
    
    return digitalRead(D7);
}

void setup() {
    String idStr = Spark.deviceID();
    
    Spark.variable("id1", (void*)idStr.c_str(), STRING);
    Spark.variable("id2", (void*)Spark.deviceID().c_str(), STRING);

    Spark.function("test", blinkLED);
    
    pinMode(D7, OUTPUT);
    
    delay(10000);   // some time to request the variables a first time
}

void loop() {
    delay(1000);
    produceGarbage("Now start messing things up");
}

Request the Spark variables “id1” and “id2” immediately after reboot (within the 10-second delay) and they will look fine.
Request them again after a few calls to the Spark function “test”, or just wait a while, and things might turn bad, although nothing is actually touching the Spark variables.

@ScruffR, is this an issue with Spark.variable or really a known problem with String class memory management?

Actually it’s a bit of both and more.

String is a nice and easy test object - and sometimes an obscure one, since it doesn’t fail immediately the way char[] does - for showing the underlying problem that everybody will face who doesn’t take the scope and lifetime of variables into consideration, and who forgets that a solution has to be tested in (near) real-life situations, otherwise you might fall for false positives.
Things look good at first, but go topsy-turvy when put into real action.

So when you use the wrong kind of variables as Spark variables, you might get punished and not even know why :wink:

You can eliminate the String class stuff by just passing in a character array that will go out of scope, so at its root this is a persistence/scope issue.
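For example, something like this (a hypothetical sketch of the same mistake without String) has exactly the same problem:

void setup() {
    char msg[32] = "hello";               // lives on setup()'s stack
    Spark.variable("msg", msg, STRING);   // the cloud keeps a pointer into that stack frame
}                                         // msg is gone the moment setup() returns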

Also

void setup() {
  int foo = 12;                      // local - gone as soon as setup() returns
  Spark.variable("foo", &foo, INT);  // the registered pointer now dangles
}

As I said in the other thread, global scope and static allocation are the solution, of course.
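
For the record, this is the safe pattern (a minimal sketch; the variable names are just for illustration):

int counter = 0;                  // global - lives for the life of the program
char status[64] = "starting";     // global buffer that is never reallocated

void setup() {
    Spark.variable("counter", &counter, INT);
    Spark.variable("status", status, STRING);
}

void loop() {
    counter++;                    // safe to update - the registered pointers stay valid
    strcpy(status, "running");
    delay(1000);
}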

Maybe it is possible but I don’t know how you could write Spark.variable to test for this.

We must find a way to deal with this.

Maybe some C++ syntactic sugar can do it, maybe some compiler flags can bounce problematic requests.

At this point I am not sure, and just want to brainstorm.

When you try to return a pointer to a local variable you'll get warning: address of local variable 's' returned [-Wreturn-local-addr]. If you have no errors you'll normally not see this warning in the WebIDE, and you might not pay attention to the warnings on a local build either, but maybe this could be (ab)used for that purpose? :eyes: :blush:
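
For reference, this is the kind of code that triggers the warning (a minimal example; the function name is just for illustration):

const char* leaky() {
    char s[16] = "temporary";   // s lives on this function's stack
    return s;                   // warning: address of local variable 's' returned [-Wreturn-local-addr]
}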

So long as we are using pointers in Spark.variable, there is no easy fix - we don’t know where the pointer has come from or if memory has been deallocated. In that case, the best fix is to stress in the docs variables should be globally allocated.

There’s sadly nothing to stop someone passing a pointer to Spark.variable that is a buffer on the stack, which will contain garbage when the code leaves the function, or as in @ScruffR’s example, passing a pointer to memory that is later freed.

Enabling warnings as errors may catch some instances. I’ve started on this path in a development branch, and very much hope the next release of the firmware comes with warnings as errors enabled.

In general this is a side effect of passing strings by reference to Spark.variable, rather than by value (copying) - it requires the referent to remain valid at all times. As @bko said, the simplest way to do that is to allocate variables globally.

A potential fix is to change Spark.variable() to use pass by value semantics so the value is copied. The difference is then that the user will not be able to update the string buffer independently - they will have to call Spark.variable() again to register the new value. Thoughts on that?

Related issue:

OK - what about turning the semantics on its head: instead of attaching a cloud thing to a local variable of unknown scope, we make declaring the web thing return an object that you access via set/get semantics?

There are resource issues if you forget about the object, but they seem less awful than the ability to read and write memory that no longer belongs to you.

Sure, that’s a possibility. E.g.

String& s = Spark.variable("myVar", STRING);   // initially empty

// later
s = "Hello World";
// myVar now shows "Hello World" when requested in the cloud.

// or, convenience to create and set in one. 
String& s = Spark.variable("myVar", STRING, "initial value");

This is the same as pass-by-value semantics that I mentioned above.

But this brings references into the mainstream, which may be a point of confusion for beginner coders.

To be honest, I’m not sure this is an issue. Sure, it’s surprising when you first encounter it, but it’s easily avoided once you know how. And while I normally believe “what you don’t know shouldn’t hurt you” - here we can’t fix C/C++'s pointer semantics, unless we move to a reference-counted object allocation like ruby/javascript etc.

I was thinking along the lines of Spark.variable() returning a < to be defined > object that can later be manipulated, but I’m known to be a bit of a knuckle-dragger when it comes to cute C++ syntax in limited capability embedded systems.

Maybe that’s what your code does, in which case: Yes.

It does. See the first example. The string is empty until the user populates it with some data.

A reference is just a pointer in disguise, but you don't need the -> syntax. Makes working with strings, ints similar to how they are now with locally declared instances.

It seems to me that we are trying to find a problem for a solution! Having to declare globals for use in Spark.variable() is a simple and clear solution for the bulk of users. Why complicate that?

How do you enforce it ?

Heck, even with globals, it can screw up: if you pass String.c_str() and then screw around with the String enough that it realloc()s the memory, you’re now pointing to hyperspace.
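
Something like this contrived sketch is all it takes (the variable name is just for illustration):

String msg = "short";   // global, so scope itself is not the problem

void setup() {
    Spark.variable("msg", (void*)msg.c_str(), STRING);   // registers the buffer as it exists right now
}

void loop() {
    msg += " and longer every second";   // may reallocate the internal buffer...
    delay(1000);                         // ...leaving the registered pointer stale
}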

You really think a system should be that fragile? [And yes, I know there are a million ways to get a core to auger in, but I for one think they should be patently illegal/stupid, not just “you didn’t read the FAQ closely, did you?”]

YMMV

But @AndyW, that brings us back to the real issue - Strings, garbage collection. I agree that systems should not be that fragile but then there’s the CC3000 (!!!) :stuck_out_tongue:

I agree with what @peekay123 said: defining a global is a clean solution. Also, when we use a Spark.variable it usually means the value will change over time, like sensor readings. If we declare a local variable, how do we change it inside the loop() function?

Really ? I guess I don’t see that beyond my usual reservations about cute C++ constructs on small embedded platforms. But perhaps we agree, and I’m just not seeing it.

Why indeed! There are many programming pitfalls; this is just one of them, and at a guess it happens to less than 1% of users?

I sketched out a solution above because I was nerd sniped by the problem (and to show how potentially simple the solution could be), but I really do feel there is little to gain by attempting to fix this, and potentially we create more confusion than we solve.

To change the variable when it's local, simply re-register it with another call to Spark.variable(). The issue I referred to above says this doesn't work presently, but I feel redefinition is a natural need and the fix is straightforward, so it will happen! :slight_smile:
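
A sketch of what that could look like once re-registration is supported (hypothetical - as noted, it doesn't work in the current firmware):

void loop() {
    char reading[16];
    snprintf(reading, sizeof(reading), "%d", (int)analogRead(A0));
    Spark.variable("reading", reading, STRING);   // re-registering would copy/refresh the value
    delay(1000);
}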