The Spark Core Firmware

I see your math, and it looks correct, but when I saw 30KB/s on a CC3000 breakout, the code was using 64-byte buffers… The CC3000 itself buffers up to something like 1520 bytes if I remember correctly, so in theory, after you fill up your 256-byte buffer on the Spark Core, the CC3000 still has (1520 - 256) bytes to give you without any delay.

Hmm, sounds like we could safely send at least four 256-byte chunks each time without running into buffer issues?

Yeah, that’s been my experience… when I printed a message with how many bytes I received, it would spit out “Received 64 bytes!” a whole bunch of times, up to a total of about 1520, with a little “hiccup” while it got the next chunk of data, and then repeat. If you are handshaking each 256-byte packet, you’re not really trusting TCP very much, are you? :smile: You should just be able to dump 70KB at the CC3000 and let it manage collecting the data as fast as you pull it out.

Right now we’re doing CRC checks every 256 bytes, and giving the Core a chance to re-request a packet if it doesn’t like it. (Yes, very distrustful of TCP in this context.) The trade-off was that the cost of resending one packet is better than resending all the packets, but if we split the difference and did just as many checks while optimizing for larger chunk sizes, I think we’d save a lot of time.
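To make the scheme above concrete, here is a minimal sketch of per-chunk CRC verification with re-request on mismatch. The function names, the `request_resend` callback, and the in-memory transport are all invented for illustration — this is not the actual Spark protocol code, just a model of the 256-byte-chunk-plus-CRC idea described above.

```python
import zlib

CHUNK_SIZE = 256  # chunk size discussed in the thread; purely illustrative


def split_with_crcs(firmware: bytes):
    """Split a firmware image into fixed-size chunks, each paired with its CRC32."""
    chunks = [firmware[i:i + CHUNK_SIZE] for i in range(0, len(firmware), CHUNK_SIZE)]
    return [(chunk, zlib.crc32(chunk)) for chunk in chunks]


def receive(chunks_with_crcs, request_resend):
    """Reassemble the image, calling request_resend(index) for any corrupted chunk."""
    image = bytearray()
    for index, (chunk, expected_crc) in enumerate(chunks_with_crcs):
        while zlib.crc32(chunk) != expected_crc:
            chunk = request_resend(index)  # re-request just this one chunk
        image.extend(chunk)
    return bytes(image)
```

The point of the design is visible in the loop: a bad CRC costs one 256-byte retransmission, not the whole 70KB image, at the price of a check (and, in the real protocol, a round trip) per chunk.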

I’m not a super web-protocol guy, but if you are using TCP, isn’t the error checking already built in? Assuming it is, you should be able to look at the header to see how much data is being sent; if you don’t get it all, you don’t try to program, and you revert to the last known good firmware. CRC-check the whole 70KB just for good measure?

We do a 32-bit checksum on the packets, as opposed to TCP’s 16-bit checksum. Since a firmware update is so important, we really don’t want any bits flipped: http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Error_detection
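A small sketch of the difference being discussed: the TCP checksum is a 16-bit ones'-complement sum (RFC 1071 style), which is order-insensitive — exchanging two 16-bit words leaves it unchanged — while a 32-bit CRC catches that kind of reordering. The `internet_checksum` function below is an illustrative reimplementation, not code from either TCP or the Spark firmware.

```python
import zlib


def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum, in the style of the TCP checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while total >> 16:   # fold carry bits back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF


payload = b"\x12\x34\x56\x78\x9a\xbc"
swapped = b"\x56\x78\x12\x34\x9a\xbc"  # first two 16-bit words exchanged

# Reordered words fool the 16-bit sum (addition is commutative),
# but not the 32-bit CRC:
assert internet_checksum(payload) == internet_checksum(swapped)
assert zlib.crc32(payload) != zlib.crc32(swapped)
```

That said, for random single-bit flips both checks will fire; the CRC's advantage is in the classes of error it is guaranteed to detect and its much larger value space.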

I think TCP can get weird when you introduce radio transmissions, but it’s possible we could lean on TCP a bit more than we currently do. Another benefit of this approach is that our protocol would remain resilient over something closer to UDP, if that made sense down the road.

I agree. Software sent over TCP doesn’t need additional per-packet checksums - the practical odds against a handful of random bit flips slipping through are far more than 2^16 to 1. If you feel you need to verify integrity, you should look at sending a SHA hash of the original data and verifying the whole image on receipt. Or better still, send a digital signature. That solves multiple problems in one hit - both integrity and provenance.
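The whole-image verification suggested above could look something like this. The function names and the stand-in image bytes are hypothetical; a real deployment would additionally sign the digest with a private key to get the provenance half of the argument.

```python
import hashlib


def firmware_digest(image: bytes) -> str:
    """SHA-256 over the whole image; the server would publish this with the binary."""
    return hashlib.sha256(image).hexdigest()


def verify_firmware(image: bytes, expected_digest: str) -> bool:
    """Accept the update only if the received bytes hash to the expected value."""
    return firmware_digest(image) == expected_digest


image = b"\x7fELF" + b"\x00" * 60        # stand-in for a real firmware binary
digest = firmware_digest(image)

assert verify_firmware(image, digest)                     # intact image passes
assert not verify_firmware(image[:-1] + b"\x01", digest)  # any corruption fails
```

One hash over the full 70KB replaces hundreds of per-packet checks, and TCP's own checksum handles in-flight corruption.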

I don’t know the Spark in detail, but my impression is that the RSA keys are used as part of the handshake, not to validate the data sent. This seems backwards. It’s much more flexible to allow an open channel and verify the information coming over it, rather than insist on a secure channel to begin with.


Seconding this. If you're using TCP, you only need to be doing a checksum of the binary blob after upload. If you're going to verify each packet you might as well use UDP to lower the overhead.

Wireless actually works very well with TCP networks because of the data integrity. Wireless is prone to dropping or corrupting packets, and TCP makes sure they get there intact, at the expense of speed. (When I say speed, I'm talking about large amounts of continuous data, like video streaming; for the small, under-128KB files being sent to the Core, TCP is plenty fast!)

For example, look at something like 6LoWPAN, which can still run TCP to talk with very low-speed wireless sensors.

Sounds like we should give this a try! :slight_smile:

Can someone confirm that the “compile-server2” branch represents the firmware currently being deployed by the Web IDE?

thanks,
Chris

Hi @chrisb2,

Yup! That’s the branch that the build site is using.

Thanks,
David

Just wanna know what the backup firmware does and when it kicks in or gets updated :smiley:

0x40000	BackUp Firmware Location	128 KB max

@satishgn any idea? :smiley:

@kennethlimcp, the OTA firmware is downloaded to External Flash (0x60000). After reset, the bootloader takes a backup of the current firmware in Internal Flash (0x08005000) to External Flash (0x40000) before copying the OTA-downloaded code to the work area in Internal Flash. If a power failure or any other dramatic event interrupts this copy process, then on the next reset the code in the backup area is restored. Hope this answers your question! Thanks.
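The backup-then-install sequence described above can be modeled in a few lines. Everything here is invented for illustration - the region names echo the addresses in the post, and the `"<garbage>"` marker stands in for whatever corruption check the real bootloader performs (most likely a CRC over the work area, not a magic value).

```python
# Toy model of the OTA update flow; this is NOT the actual bootloader code.

INTERNAL_APP = "internal_0x08005000"   # running firmware (internal flash)
EXT_BACKUP = "external_0x40000"        # backup area (external flash)
EXT_OTA = "external_0x60000"           # OTA download area (external flash)


class PowerFailure(Exception):
    pass


def apply_update(flash: dict, fail_mid_copy: bool = False):
    """Back up the current firmware, then copy the OTA image into place."""
    flash[EXT_BACKUP] = flash[INTERNAL_APP]      # step 1: back up current firmware
    if fail_mid_copy:
        flash[INTERNAL_APP] = b"<garbage>"       # copy interrupted partway through
        raise PowerFailure()
    flash[INTERNAL_APP] = flash[EXT_OTA]         # step 2: install the OTA image


def on_reset(flash: dict):
    """On boot, revert to the backup if the work area looks corrupt."""
    if flash[INTERNAL_APP] == b"<garbage>":
        flash[INTERNAL_APP] = flash[EXT_BACKUP]


flash = {INTERNAL_APP: b"v1", EXT_BACKUP: b"", EXT_OTA: b"v2"}
try:
    apply_update(flash, fail_mid_copy=True)      # power fails mid-copy...
except PowerFailure:
    on_reset(flash)
assert flash[INTERNAL_APP] == b"v1"              # ...and the old firmware survives
```

The key property is that the backup is written before the work area is ever touched, so there is no window in which a power failure leaves the Core without a bootable image.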


It does :slight_smile: Thanks!

Just wanted to make sure i was understanding this correctly heh.

Haven’t seen how this gets kicked in yet, but it would be nice to explore it :smiley:

Dear BDub, I had the same issue with the .o/.d files running under Win7 Cygwin. I added the Windows make to the arm-gcc compiler and the problem went away. To me it looks like it is rooted in the Cygwin make (I have version 4.0). Evidently the Windows make does everything correctly. So if you want to get rid of your batch file, just install Windows make into the GCC compiler directory. If you add it in a separate directory, you have to add the path to the $PATH variable so that it is found correctly.

Thanks for the update! I have actually been problem-free for a while now; I noticed one day that I didn't need to clean all every time anymore. I think it may have been due to a change in the makefile by Spark, or it might have been when I converted my PATH variable to DOS 8.3 filename structure. Either way, I'm enjoying the good life with fast build times now! :smile:

Some history from the other thread where I tinkered with this problem:


I’m trying to build and load the firmware for the first time, and have the firmware compiled. I am at the step of loading the firmware onto the Spark Core. I have dfu-util installed and working, it seems.

root@ubuntu:~/Spark/core-firmware/build# dfu-util -l
dfu-util 0.7

Copyright 2005-2008 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2012 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to dfu-util@lists.gnumonks.org

Found DFU: [1d50:607f] ver=0200, devnum=4, cfg=1, intf=0, alt=1, name="@SPI Flash : SST25x/0x00000000/512*04Kg", serial="8D86087F5055"
Found DFU: [1d50:607f] ver=0200, devnum=4, cfg=1, intf=0, alt=0, name="@Internal Flash /0x08000000/20*001Ka,108*001Kg", serial="8D86087F5055"

The problem is with flashing the firmware:
root@ubuntu:~/Spark/core-firmware/build# dfu-util -d 1d50:607f -a 0 -s 0x08005000:leave -D core-firmware.bin
dfu-util 0.7

Copyright 2005-2008 Weston Schmidt, Harald Welte and OpenMoko Inc.
Copyright 2010-2012 Tormod Volden and Stefan Schmidt
This program is Free Software and has ABSOLUTELY NO WARRANTY
Please report bugs to dfu-util@lists.gnumonks.org

dfu-util: Invalid DFU suffix signature
dfu-util: A valid DFU suffix will be required in a future dfu-util release!!!
Opening DFU capable USB device…
ID 1d50:607f
Run-time device DFU version 011a
Claiming USB DFU Interface…
Setting Alternate Setting #0
Determining device status: state = dfuDNLOAD-IDLE, status = 0
aborting previous incomplete transfer
Determining device status: state = dfuIDLE, status = 0
dfuIDLE, continuing
DFU mode device DFU version 011a
Device returned transfer size 1024
DfuSe interface name: "Internal Flash "
Downloading to address = 0x08005000, size = 72136
Download [= ] 4% 3072 bytesdfu-util: Error during special command "ERASE_PAGE" get_status

Not sure if this is a possible cause, but I am running Ubuntu 12.04 inside a VM on VMware Workstation using the USB emulation.

Any suggestions?

Hi @shmorgan

I think you are having the problem in this thread, for which you need to build dfu-util differently on Ubuntu. As a nice side effect, this makes dfu-util faster!


Hey Dave, I wonder if there was any progress with regards to making OTA updates faster?

Hi @eranarbel,

We haven’t had a chance to streamline the over the air firmware push quite yet, but there are a few good alternatives in the meantime if you have really really slow OTA updates. We still plan on making this faster though. The challenge here is just making sure everything is backwards compatible, but I’m still hoping we can do this sometime during the summer.

Thanks!
David

Compiling remotely / flashing locally via dfu:

  1. Try “verify”-ing on the build IDE (refresh the page first - there’s a bug), then click “download binary” next to the project name
    OR

  2. Install the CLI ( https://github.com/spark/spark-cli ), and use spark cloud compile to get the binary locally

  3. Then put your core into dfu mode

  4. Then you can flash locally with spark flash firmware your_binary.bin
