The Spark Core Firmware

@dorth is correct - a new build will be uploaded to the web IDE by the end of the week. In general we’ll be trying to push new firmware to the cloud every two weeks.


That is awesome. It is really good to see how connected you guys are to this as a team.

I get this error when building on Ubuntu 12.04.



Building file: …/src/spark_wiring_i2c.cpp
Invoking: ARM GCC CPP Compiler
mkdir -p obj/src/
arm-none-eabi-gcc -g3 -gdwarf-2 -Os -mcpu=cortex-m3 -mthumb -I…/inc -I…/…/core-common-lib/CMSIS/Include -I…/…/core-common-lib/CMSIS/Device/ST/STM32F10x/Include -I…/…/core-common-lib/STM32F10x_StdPeriph_Driver/inc -I…/…/core-common-lib/STM32_USB-FS-Device_Driver/inc -I…/…/core-common-lib/CC3000_Host_Driver -I…/…/core-common-lib/SPARK_Firmware_Driver/inc -I…/…/core-communication-lib/lib/tropicssl/include -I…/…/core-communication-lib/src -I. -ffunction-sections -Wall -fmessage-length=0 -MD -MP -MF obj/src/spark_wiring_i2c.o.d -DUSE_STDPERIPH_DRIVER -DSTM32F10X_MD -DDFU_BUILD_ENABLE -fno-exceptions -fno-rtti -c -o obj/src/spark_wiring_i2c.o …/src/spark_wiring_i2c.cpp

Building file: …/src/spark_wiring_interrupts.cpp
Invoking: ARM GCC CPP Compiler
mkdir -p obj/src/
arm-none-eabi-gcc -g3 -gdwarf-2 -Os -mcpu=cortex-m3 -mthumb -I…/inc -I…/…/core-common-lib/CMSIS/Include -I…/…/core-common-lib/CMSIS/Device/ST/STM32F10x/Include -I…/…/core-common-lib/STM32F10x_StdPeriph_Driver/inc -I…/…/core-common-lib/STM32_USB-FS-Device_Driver/inc -I…/…/core-common-lib/CC3000_Host_Driver -I…/…/core-common-lib/SPARK_Firmware_Driver/inc -I…/…/core-communication-lib/lib/tropicssl/include -I…/…/core-communication-lib/src -I. -ffunction-sections -Wall -fmessage-length=0 -MD -MP -MF obj/src/spark_wiring_interrupts.o.d -DUSE_STDPERIPH_DRIVER -DSTM32F10X_MD -DDFU_BUILD_ENABLE -fno-exceptions -fno-rtti -c -o obj/src/spark_wiring_interrupts.o …/src/spark_wiring_interrupts.cpp
…/src/spark_wiring_interrupts.cpp:59:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:60:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:61:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:62:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:63:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:64:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:65:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:66:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:67:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:68:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:69:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:70:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:71:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:72:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:73:7: error: expected primary-expression before ‘.’ token
…/src/spark_wiring_interrupts.cpp:74:7: error: expected primary-expression before ‘.’ token
make: *** [obj/src/spark_wiring_interrupts.o] Error 1
darcy@PXE:~/spark/core-firmware/build$

From …/src/spark_wiring_interrupts.cpp

//Array to hold user ISR function pointers
static exti_channel exti_channels[] = {
{ .handler = NULL }, // EXTI0
{ .handler = NULL }, // EXTI1
{ .handler = NULL }, // EXTI2
{ .handler = NULL }, // EXTI3
{ .handler = NULL }, // EXTI4
{ .handler = NULL }, // EXTI5
{ .handler = NULL }, // EXTI6
{ .handler = NULL }, // EXTI7
{ .handler = NULL }, // EXTI8
{ .handler = NULL }, // EXTI9
{ .handler = NULL }, // EXTI10
{ .handler = NULL }, // EXTI11
{ .handler = NULL }, // EXTI12
{ .handler = NULL }, // EXTI13
{ .handler = NULL }, // EXTI14
{ .handler = NULL } // EXTI15
};

My system info

darcy@PXE:~/spark/core-firmware/build$ uname -a
Linux PXE 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:40:43 UTC 2013 i686 i686 i386 GNU/Linux
darcy@PXE:~/spark/core-firmware/build$ cat /etc/issue
Ubuntu 12.04.4 LTS \n \l

Anyone know what I missed to get this error?

I had problems in Win 8.1 also but found a solution. Check out THIS TOPIC.

Hope it helps!

:smiley:


When I get something right on my main PC, I can’t reproduce the success with my laptop. :confused:

In this case, the original guide (in the first post) is actually the simplest method and works well. There were just a few points to pay attention to:

  • Use Git Bash instead of the Windows Command Prompt. It installs with Git and runs from the context menu (right mouse click).
  • As @Dave said, dfu-util must be 0.7 from http://dfu-util.gnumonks.org/releases/dfu-util-0.7-binaries.7z
  • Make sure everything is in the Windows PATH variable so the tools can be found and run anywhere.
  • Do not install Git or Make under Program Files. A space in certain places breaks things in a unix-style environment.

That last part took me a whole Saturday to figure out.

I re-downloaded clean Spark sources from GitHub, and they seem to build well without modifications to the makefiles. The first build on my i3 laptop is slowish, but after that I only have to rebuild the changes related to application.cpp, which is surprisingly fast compared to the time it takes the online IDE to send me new firmware.

Cool, glad you got it building again! The original makefile was OS independent, but the much faster makefile has a few more requirements. As a Windows user myself, I’m looking forward to trying this out.

The build time on the build server is also under a second, so if you’re watching your core, by the time your core flashes magenta once, the cloud has already transferred your code, built it, and started sending chunks to your core. Most of the time is spent making sure all the packets get there safely, and making sure you have firmware to fall back on if the transfer fails. :smile:

Edit: Which is to say, we’ve been brainstorming on ways to make the OTA updates faster, and I think we have some good solutions in the pipeline to speed that up significantly.

It just occurred to me that the speed of the reflashing must be related to the speed at which the CC3000 and STM32 work together to read bytes over TCP. And if the firmware is about 70KB or so, then the transfer speed is a lot slower than I think it could be… it sounds like 1 to 2 KB per second (my flashing process takes about 45 seconds), and it should be way faster, at least 30 KB/s or more.

This is due to the gcc version.
See this post: https://community.spark.io/t/solved-firmware-compile-error/794
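For anyone hitting the same error: the failing lines use C99-style designated initializers ({ .handler = NULL }), which some arm-none-eabi-g++ releases reject when compiling C++. Besides upgrading GCC as the linked post suggests, here is a minimal workaround sketch (assuming, hypothetically, that the struct’s only member is the handler pointer) using positional initializers instead:

```cpp
#include <cstddef>

// Hypothetical reduction of the failing table from spark_wiring_interrupts.cpp.
// Positional initializers ({ NULL }) express the same all-NULL table and are
// accepted by any C++ compiler, unlike the designated-initializer form.
typedef void (*voidFuncPtr)(void);

struct exti_channel {
    voidFuncPtr handler;  // user ISR function pointer
};

// One slot per external interrupt line, EXTI0 through EXTI15.
static exti_channel exti_channels[16] = {
    { NULL }, { NULL }, { NULL }, { NULL },  // EXTI0..EXTI3
    { NULL }, { NULL }, { NULL }, { NULL },  // EXTI4..EXTI7
    { NULL }, { NULL }, { NULL }, { NULL },  // EXTI8..EXTI11
    { NULL }, { NULL }, { NULL }, { NULL }   // EXTI12..EXTI15
};
```

(With a single-member struct, `= {}` would zero the whole array too; the explicit form just keeps the per-line EXTI comments from the original.)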


A big component of the OTA delay has to do with the small receive buffer size (256 bytes, so more RAM is left available), and the positive confirmation message after every packet is received. This means on a medium-latency network, say ~100ms ping, you have to wait an extra ((70000/256)*100)/1000 ~= 27 seconds. The trade-off is that this is very robust in a high-error environment, but slow in a clean one. My plan was to try to carefully send more than one chunk at a time in a way that doesn’t overwhelm the core. I think this is possible, but it requires some careful coding, and it needs to be backwards compatible as well. :smile:
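That arithmetic can be sketched as a tiny helper (the 70 KB image, 256-byte chunk, and 100 ms round-trip figures are the post’s hypothetical numbers, not measured protocol constants):

```cpp
// Rough stop-and-wait model: one acknowledgement round trip per chunk,
// so the ack latency alone is (number of chunks) x (round-trip time).
int ack_delay_seconds(int firmware_bytes, int chunk_bytes, int rtt_ms) {
    // Number of chunks needed to cover the image, rounding up.
    int chunks = (firmware_bytes + chunk_bytes - 1) / chunk_bytes;
    return chunks * rtt_ms / 1000;
}
```

With the post’s numbers, ack_delay_seconds(70000, 256, 100) comes out to ~27 seconds of pure waiting, matching the estimate above.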


I see your math, and it looks correct, but when I saw 30 KB/s on a CC3000 breakout, it was using 64-byte buffers in code… but the CC3000 itself buffers up to about 1520 bytes if I remember correctly, so in theory, after you fill up your 256-byte buffer on the Spark Core, the CC3000 still has (1520 - 256) bytes to give you without any delay.

Hmm, sounds like we could safely be sending at least four 256-byte chunks at a time without running into buffer issues?

Yeah, that’s been my experience… if I printed out a message with how many bytes I received, it would spit out “Received 64 bytes!” a whole bunch of times up to a total of about 1520, with a little “hiccup” while it got the next chunk of data, then repeat. If you are handshaking every 256-byte packet, I think you are not really trusting TCP very much. :smile: You should just be able to dump 70KB at the CC3000 and let it manage collecting the data as fast as you pull it out.

Right now we’re doing CRC checks every 256 bytes, and giving the core a chance to re-request a packet if it doesn’t like it. (Yes, very distrustful of TCP in this context.) The trade-off was that the cost of resending one packet was better than resending all the packets, but if we split the difference and kept just as many checks while optimizing for larger chunk sizes, I think we’d save a lot of time.
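A rough sketch of the saving from acknowledging a window of chunks instead of every chunk (using the same hypothetical numbers as the ~27-second estimate earlier: 274 chunks of 256 bytes, ~100 ms per round trip):

```cpp
// Ack once per window of `window_chunks` chunks: the number of round trips,
// and therefore the ack latency, divides roughly by the window size.
int windowed_ack_delay_seconds(int chunks, int window_chunks, int rtt_ms) {
    // Number of ack round trips needed, rounding up.
    int rounds = (chunks + window_chunks - 1) / window_chunks;
    return rounds * rtt_ms / 1000;
}
```

Under this model, a window of 1 gives ~27 s of ack latency, while a window of 4 (the four-chunk idea above) cuts it to ~6 s. This is only a latency model; it ignores transmission time and the buffer limits discussed above.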

I’m not a big web-protocol guy, but if you are using TCP, isn’t the error checking already built in? Assuming it is, you should be able to look at the header to see how much data is being sent; if you don’t get it all, you don’t try to program, and you revert to the last known good firmware. CRC check the whole 70KB just for good measure?

We do a 32-bit checksum on the packets, as opposed to TCP’s 16-bit checksum. Since a firmware update is so important, we really don’t want any bits flipped: http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Error_detection
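For reference, a 32-bit CRC over the standard reflected 0xEDB88320 polynomial looks like the sketch below. This is an illustrative implementation only; the Core’s actual protocol may use different CRC parameters:

```cpp
#include <cstdint>
#include <cstddef>

// Bitwise CRC-32 (reflected 0xEDB88320 polynomial, as used by zlib/Ethernet).
// Table-free, so it trades speed for a tiny footprint -- a plausible fit for
// a RAM-constrained device, though not necessarily what the Core runs.
uint32_t crc32_ieee(const uint8_t* data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit) {
            // Branch-free: XOR in the polynomial only when the low bit is set.
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
    }
    return ~crc;
}
```

The standard check value for this variant is crc32_ieee over the ASCII string "123456789" = 0xCBF43926, which is a handy self-test when wiring up any CRC implementation.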

I think TCP can get weird when you introduce radio transmissions, but it’s possible we could lean on TCP a bit more than we are currently. Another benefit of this approach could be that our protocol would be resilient with something closer to a UDP if that was something that made sense down the road.

I agree. Software sent over TCP doesn’t need additional per-packet checksums; the practical chance of a handful of random bit flips slipping past TCP’s checksum is vanishingly small. If you feel you need to verify integrity, you should be looking at sending a SHA hash of the original data and verifying the whole image on receipt. Or better still, send a digital signature. That solves multiple problems in one hit: both integrity and provenance.

I don’t know the Spark in detail, but my impression is that the RSA keys are used as part of the handshake, not to validate the data sent. This seems backwards. It’s much more flexible to allow an open channel and verify the information coming over it than to insist on a secure channel to begin with.


Seconding this. If you're using TCP, you only need a checksum of the binary blob after upload. If you're going to verify each packet, you might as well use UDP to lower the overhead.

Wireless actually works very well with TCP because of the data integrity guarantees. Wireless is prone to dropping or corrupting packets, and TCP makes sure the data gets there intact, at the expense of speed. (When I say speed, I'm talking about large amounts of continuous data, like video streaming; for the small < 128 KB files being sent to the Core, TCP is plenty fast!)

For example, look at something like 6LoWPAN, which still uses TCP to talk with very low-speed wireless sensors.

Sounds like we should give this a try! :slight_smile:

Can someone confirm that the “compile-server2” branch represents the firmware currently being deployed by the Web IDE?

thanks,
Chris

Hi @chrisb2,

Yup! That’s the branch that the build site is using.

Thanks,
David