Let's get (non)volatile

Does anyone have any good ideas on the easiest way to add some non-volatile config information to the Spark? Without recompiling, I mean. I would love to be able to stash a few hundred bytes and get em later.

I see from the EEPROM thread that this is not a high priority… but does anyone have any ideas? I could just put an SD reader into the circuit - but then I need a library for that too :smile:

Ken

@Soulhuntre: Have you had a try with the Arduino SD library?
I'd imagine this wouldn't be as hard to port as EEPROM, since it doesn't sit as close to the Core hardware; it just uses SPI, which the Core has.
I guess it's more or less a matter of adjusting the pin numbers to the Core ones.

I’ll have a try with it right away :wink:
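
If the port works, usage should look much the same as on an Arduino. A rough sketch of what I'd expect, assuming the library ends up on the Core's user SPI pins (A2 as chip select is just my guess, and a Core port of the Arduino SD library is assumed to exist):

#include "SD.h"   // assumes a Core port of the Arduino SD library

const int chipSelect = A2;  // Core user SPI: A2=SS, A3=SCK, A4=MISO, A5=MOSI

void setup() {
    Serial.begin(9600);
    if (!SD.begin(chipSelect)) {
        Serial.println("SD init failed");
        return;
    }
    // stash a few hundred bytes of config, Arduino-SD style
    File cfg = SD.open("config.txt", FILE_WRITE);
    if (cfg) {
        cfg.println("my settings here");
        cfg.close();
    }
}

void loop() {
}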

1 Like

One option is to use the external flash; the Core has 2MB of external flash, of which we’re only using 0.5MB, and the remaining space is reserved for the user.

Here’s a quick interface written up by @zachary, but not yet integrated into our code:


The following two functions can be used similarly to EEPROM.read() and EEPROM.write() to access the 1.5MB available on the external flash chip.

Addresses passed to these two functions can range from 0 to 1,572,863 = 0x17FFFF. The corresponding address on the external flash is from 0x80000 to 0x1FFFFF. In case you haven’t seen it, the memory map for the external flash chip is here:

http://docs.spark.io/#/hardware/memory-mapping-external-flash-memory-map

One significant quirk—this chip can only be accessed 16 bits at a time, so you can NOT pass odd addresses. All addresses must be even.

Another “just in case you didn’t know”—you can use dfu-util to read and write directly between your computer and the external flash chip on the Core. That’s how we program keys and factory reset firmware onto external flash during manufacturing.


int SparkFlash_read(int address)
{
  if (address & 1)
    return -1; // error, can only access half words

  uint8_t values[2];
  sFLASH_ReadBuffer(values, 0x80000 + address, 2);
  return (values[0] << 8) | values[1];
}

int SparkFlash_write(int address, uint16_t value)
{
  if (address & 1)
    return -1; // error, can only access half words

  uint8_t values[2] = {
    (uint8_t)((value >> 8) & 0xff),
    (uint8_t)(value & 0xff)
  };
  sFLASH_WriteBuffer(values, 0x80000 + address, 2);
  return 2; // or anything else signifying it worked
}
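
For example, a minimal test sketch calling these two functions might look like this (my sketch, not part of @zachary's write-up, and it assumes the target sector has already been erased so the write can take effect):

void setup()
{
    Serial.begin(9600);

    SparkFlash_write(0, 0x1234);              // store a half word at offset 0
    Serial.println(SparkFlash_read(0), HEX);  // prints 1234
    Serial.println(SparkFlash_read(1));       // prints -1: odd addresses are rejected
}

void loop()
{
}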

Additionally, as you can see and probably guess, it’s much more efficient to write large buffers than to write a half word at a time. If your surrounding code:

  • guarantees even addresses and numbers of bytes
  • adds 0x80000 to addresses, and
  • converts data to/from a byte array

then you would do better to just call sFLASH_ReadBuffer or sFLASH_WriteBuffer directly with a larger number of bytes as the final argument.
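
For instance, a sketch of saving a small config struct in a single call (the struct layout and the FLASH_USER_BASE name are just illustrative assumptions, not part of the firmware):

#define FLASH_USER_BASE 0x80000   // start of the user-reserved area

struct MyConfig {
    uint16_t version;
    uint16_t interval_s;
    char     name[28];
};  // 32 bytes: even size, stored at an even offset, so the 16-bit rule holds

void saveConfig(const MyConfig& cfg)
{
    // assumes the sector at FLASH_USER_BASE has already been erased
    sFLASH_WriteBuffer((uint8_t*)&cfg, FLASH_USER_BASE, sizeof(MyConfig));
}

void loadConfig(MyConfig& cfg)
{
    sFLASH_ReadBuffer((uint8_t*)&cfg, FLASH_USER_BASE, sizeof(MyConfig));
}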

3 Likes

Functions like sFLASH_ReadBuffer and sFLASH_WriteBuffer for working with the external flash chip are defined in the core-common-lib, here:

Additionally, you may in some cases have to erase a sector using sFLASH_EraseSector(uint32_t SectorAddr) before writing to it. I encountered this when working on the thermostat. Most of the rest of the following commit is incidental—the thing that made it work was the addition of sFLASH_EraseSector before writing:

The size of the erasable sectors is 4kB = 0x1000. So e.g., if you erase the sector at 0x80000 like I do in the thermostat code, that erases everything from 0x80000 to 0x80FFF. The next sector begins at 0x81000.
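
As a sketch of that pattern (not the actual thermostat code), an erase-then-write helper for data that fits inside a single sector could look like:

// Assumes data fits entirely within the single 4kB sector containing addr.
void eraseAndWrite(uint32_t addr, uint8_t* data, uint32_t len)
{
    uint32_t sectorStart = addr & ~(uint32_t)0xFFF;  // round down to a 0x1000 boundary
    sFLASH_EraseSector(sectorStart);                 // every byte in the sector -> 0xFF
    sFLASH_WriteBuffer(data, addr, len);             // now the write can clear bits
}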

1 Like

Thanks for the info, I’ll try it when I have my LCD sorted!

Ken

Got interested in trying this SD idea last Sunday:

  • soldered a standard 8-pin header to an SD card adapter sold with a microSD (luckily the pin spacing of the SD adapter allows it)
  • inserted a 4GB microSD card in the adapter
  • connected it via SPI to the Core with basic breadboard wires
  • reused/adapted/optimized sample code from the web for SPI SD cards

After some effort in debugging/optimizing (no DMA yet), I can successfully read (at 3 Mbit/s) and write (at 1.5 Mbit/s) the SD card in 512-byte sectors, so my Core now has 4GB of storage for very little extra hardware :wink:

4 Likes

That’s a lot of storage! World’s smallest Dropbox clone? :slight_smile:

2 Likes

Any idea when the EEPROM library will be included in the Web builds?

Also, what is the expected write endurance of this chip?

The datasheet for the SST25VF016B flash chip lists the endurance as 100,000 cycles (typical).

Because we have multiple sources of non-volatile storage (STM32 internal flash, SST25VF016B flash chip, CC3000 nvmem), we’ve debated exactly what the EEPROM library should do. We’re leaning toward using the CC3000 because it’s the smallest space with the highest endurance.

Don’t know when we’ll get to it, but, of course, we accept pull requests. Here’s what EEPROM should do:

Cheers!

I have done this, @zachary, with the EEPROM library I built. I'll release the code later today.

I also implemented the lower-level AVR EEPROM functions: eeprom_write_block, eeprom_read_block, etc.
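
Assuming the port keeps the standard avr-libc signatures, usage would presumably look something like this sketch (the Settings struct is just an illustration):

struct Settings {
    uint8_t  mode;
    uint8_t  brightness;
    uint16_t interval_s;
};

Settings s;

void saveSettings() {
    // write sizeof(s) bytes starting at emulated EEPROM address 0
    eeprom_write_block(&s, (void*)0, sizeof(s));
}

void loadSettings() {
    eeprom_read_block(&s, (const void*)0, sizeof(s));
}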

3 Likes

Awesome @mdma, would love to add this to the core-firmware repository!

So we have to erase a 4kB block even if we want to change just 1 byte in that sector? Is there a more efficient way of changing 1 byte in the sector than loading it into memory, changing it, then writing the whole sector back?

My work on the Spark EEPROM/flash library is on pause while I'm busy with other commitments, but I have coded a system where the number of erases is significantly lower than the number of writes (say 8x less). Also, an erase is only needed if a write attempts to change a 0 back to a 1. So writing 0xFF, then 0xF0, then 0x40, then 0x00 to the same location would not require any erases, since no 0 bit is ever turned back into a 1.

Another alternative is opening a stream, so that the flash library can coordinate erases appropriately and client code doesn't have to worry about that.

Finally, appropriate wear-levelling algorithms can help ensure that the erases are distributed throughout the flash rather than having one 4k block being continually erased for updates to a single address.
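
A minimal sketch of that erase decision (my illustration, not @mdma's actual code): flash writes can only clear bits (1 to 0), so an erase is only required when the new value would need some bit set back to 1.

// Returns true if updating 'current' to 'desired' needs a sector erase first.
bool updateNeedsErase(uint8_t current, uint8_t desired)
{
    // Any bit that is 0 in 'current' but 1 in 'desired' can't be written
    // without an erase resetting the whole sector to 0xFF first.
    return (desired & ~current) != 0;
}

// 0xFF -> 0xF0 -> 0x40 -> 0x00: updateNeedsErase() is false at every step,
// which is why that sequence needs no erases.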

@zachary
I have used the functions sFLASH_ReadBuffer and sFLASH_WriteBuffer to work with the external flash memory. The problem is that the code only works when the number of bytes to write is less than 300, which is weird, and I have an array of 20KB+ that I want to store in the flash memory.

The procedure I am following to read/write the external flash is:

In setup():

  • Erase a block, which erases 4096 bytes

  • sFLASH_WriteBuffer with a size of 300 bytes

In loop():

  • Read and Serial.print 4 bytes from some address to check that the data was written correctly.

Writing 300 bytes to the flash memory works and I read back the correct values, but when I increase it to 400 or 1000 bytes, the Spark Core blinks blue and nothing gets printed over the serial connection.

Do you have any idea on how I can solve this problem?

Thanks

Hi @Muez,

Sorry if this is a silly question, but I'm wondering if this could just be a problem of allocating too much RAM at once? Are you allocating a solid 400-byte chunk?

Thanks!
David

Hi @Dave

I have the array initialized with 1000 values and I am just loading 400 bytes of it. It compiles and loads to the Spark Core correctly, but the Core flashes blue and there is no output on the serial connection.

I saw in one of the docs that flashing blue indicates a failure to connect to the cloud. Maybe I am messing with the CC3000 WiFi module.

If the problem is allocating too much RAM at once, is there a way to work around it and load a big array into the external flash memory?

regards
Muez

Hmm, is it just a problem of writing 400 bytes at once? Can you write more in smaller chunks? Any chance you could share your code so we could try to find the issue from there?

Thanks,
David

Hi @Dave

I tested it by writing small chunks of data (100 bytes each) into two different locations of the external flash memory, but it didn't work.

You can check out the code :

#define size 400
#define Flash_Address 0x81000
#define Flash_Address1 0x91000

uint16_t array[1000]; // initialized with some values (1000 elements)

void setup() {

    uint8_t values[size];

    int i, j = 0;
    for (i = 0; i < size; i += 2)
    {
        values[i]   = (uint8_t)((array[j] >> 8) & 0xff);
        values[i+1] = (uint8_t)(array[j] & 0xff);
        j++;
    }

    sFLASH_EraseSector(Flash_Address);
    //delay(10);  // This didn't help
    sFLASH_EraseSector(Flash_Address1);

    sFLASH_WriteBuffer(values, Flash_Address, 100);
    sFLASH_WriteBuffer(values + 100, Flash_Address1, 100);

    Serial.begin(9600);

    while (Serial.available() == 0);
    Serial.println("Starting up...");
}

void loop() {

    uint8_t _read[4];

    while (Serial.available() == 0);
    Serial.print("Value In array = ...");
    Serial.println(array[50]);

    Serial.print("Value Read1 = ...");
    sFLASH_ReadBuffer(_read, Flash_Address + 100, 2);

    Serial.print((_read[0] << 8) | _read[1]);
    Serial.println("");

    delay(3000); // 3 second delay
}

The code compiles and loads to the core but it keeps flashing blue.

regards,
Muez

I'm guessing you're running into RAM limitations by allocating a 400-byte array and a 2000-byte array?

In addition to the RAM limits that @Dave mentions, I think you could be having other troubles too. If you are just trying to get read-only data into the external flash so your program can use it, I would try dfu-util to just load it over the USB in bootloader mode. The address map is in the hardware section of the doc.
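
For example, with the Core in DFU mode (flashing yellow), something along these lines should write a file to the start of the user-reserved area; alt setting 1 is the external flash and 0x80000 is the start of the user space per the memory map, but check the output of dfu-util -l to confirm on your setup:

dfu-util -d 1d50:607f -a 1 -s 0x80000 -D mydata.bin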

If not, read on.

First off, try making your data const to get it out of RAM and into program flash:

const uint16_t array[] = { ... };

Since the SPI bus used by the external flash is shared with the TI CC3000 WiFi module, I think you could be having problems related to interrupts and IO for the WiFi interfering with your flash operations. Another user reported that he was able to make his external flash work by turning off the CC3000 interrupts. Note that this SPI is not the same as the user SPI on the Spark Core pins.

Try turning off interrupts for the CC3000 around every call that uses the SPI bus to work with the external flash. Note that I would not recommend just turning these off for a long time since you will get data overruns from the cloud part of the firmware. Here’s an example for the writes:

NVIC_DisableIRQ(CC3000_WIFI_INT_EXTI_IRQn);
NVIC_DisableIRQ(CC3000_SPI_RX_DMA_IRQn);
NVIC_DisableIRQ(CC3000_SPI_TX_DMA_IRQn);

sFLASH_WriteBuffer(values, Flash_Address, 100);
sFLASH_WriteBuffer(values+100, Flash_Address1, 100);

NVIC_EnableIRQ(CC3000_WIFI_INT_EXTI_IRQn);
NVIC_EnableIRQ(CC3000_SPI_RX_DMA_IRQn);
NVIC_EnableIRQ(CC3000_SPI_TX_DMA_IRQn);

As @david_s5 said earlier, we really need some kind of locking/arbitration mechanism for this shared SPI bus.

3 Likes