Today, I put together a Python module to access Spark devices through the cloud API.
I thought it might help some people out, so I published it: https://github.com/Alidron/spyrk
It doesn't support every feature yet, but it covers what I need for the moment: easy programmatic access to my cores without constantly having to deal with curl, the access token, or the device IDs.
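For anyone curious what a module like this hides, the raw cloud API is just HTTP calls. This is only a sketch: the URL shapes follow the Spark cloud REST endpoints (`GET /v1/devices`, `POST /v1/devices/{id}/{function}`), while the helper names, token, and device ID are made-up placeholders.

```python
# Minimal sketch of the raw cloud API that a wrapper module hides.
# ACCESS_TOKEN and DEVICE_ID values below are placeholders.
BASE = "https://api.spark.io/v1"

def list_devices_url(access_token):
    """URL that lists all devices registered to an account."""
    return "{}/devices?access_token={}".format(BASE, access_token)

def call_function_url(device_id, function):
    """URL that calls a named function exposed by one device."""
    return "{}/devices/{}/{}".format(BASE, device_id, function)

print(list_devices_url("1234abcd"))
# https://api.spark.io/v1/devices?access_token=1234abcd
print(call_function_url("53ff6f06", "digitalwrite"))
# https://api.spark.io/v1/devices/53ff6f06/digitalwrite
```

The wrapper's job is then just to remember the token and device IDs so you never type them twice.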
Also, I don't know why, but the GET calls that retrieve the list of devices, functions, and variables are super slow right now; it takes a few seconds for the module to initialize.
The calls to list devices are indeed slow, maybe too slow. The API pings all of your cores before it can tell you which ones are online. I believe the timeout is 10 seconds, so we could make that timeout settable as a parameter. We're also working on better real-time feedback about what is online, but it may be a few weeks before that's ready.
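If the timeout did become settable, one guess at how a client might expose it is as an extra query parameter. To be clear, the `timeout` parameter below is purely hypothetical; it does not exist in the real API today and only illustrates the suggestion.

```python
# Hypothetical sketch: let the caller bound the server-side ping wait.
# The "timeout" query parameter is NOT a real API flag; it is only an
# illustration of making the 10-second default settable.
BASE = "https://api.spark.io/v1"

def devices_url_with_timeout(access_token, timeout=None):
    """Build the device-list URL, optionally capping the ping wait."""
    url = "{}/devices?access_token={}".format(BASE, access_token)
    if timeout is not None:
        url += "&timeout={}".format(timeout)  # hypothetical parameter
    return url

print(devices_url_with_timeout("1234abcd", timeout=3))
# https://api.spark.io/v1/devices?access_token=1234abcd&timeout=3
```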
Thanks @Lexa! If all your Cores are online, the response should be pretty quick. However, if any of them are offline, as David said, the Cloud gives them 10 seconds to respond.
Haaaa, that explains it then. I tried with both of my cores connected, and indeed it is way faster!
Doesn't a core ping the cloud every 15 seconds? We could say that if a core hasn't pinged the cloud in the last 15*2 (or *3, or *4) seconds, then it is probably offline. That way the REST call doesn't block.
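That heuristic is easy to sketch. The 15-second ping interval is from the post above; the function name and the default multiplier are my own choices for illustration.

```python
# Sketch of the proposed heuristic: a core that has not pinged the
# cloud within `factor` ping intervals is assumed offline, so the
# online/offline answer needs no blocking ping.
PING_INTERVAL = 15  # seconds between a core's pings, per the post above

def probably_online(seconds_since_last_ping, factor=2):
    """True if the core pinged recently enough to be assumed online."""
    return seconds_since_last_ping <= PING_INTERVAL * factor

print(probably_online(10))            # True: pinged 10 s ago
print(probably_online(45))            # False with factor=2 (threshold 30 s)
print(probably_online(45, factor=4))  # True (threshold 60 s)
```

The trade-off is only in `factor`: a small value flags cores offline faster but misfires if a single ping is dropped, while a larger value is more forgiving but slower to notice a disconnect.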