TCPClient read() data missing

I have been reading weather info from the Weather Underground API.
The code that I am using is:

static TCPClient client;

    while (client.available())
    {                                   
        data = client.read(); 
        cnt++;
        if (data == '{')
            open_cnt++;
        if (data == '}')
            close_cnt++;
    }

GET /api/your ID/forecast/q/autoip.json HTTP/1.1
GET /api/your ID/yesterday/q/autoip.json HTTP/1.1

Using the "Forecast", I can success getting the correct data (7946 bytes).
But when testing the "Yesterday", this should have (41501 bytes) return. I had validated this with CURL.
The data receiving is start missing over ~10000 bytes. So, I set a total count and number of "{ }" received. The number of bytes received always at "12410" and of course, the "{ }" will not match also. In correct case, "{ }" will match.

I’m just guessing, but I bet you’re running out of variable RAM. The Spark Core doesn’t have a huge amount of RAM available.

It would probably help to see the full code.


@Dougal is right on here: that is a lot of data, and seeing your full code would help debug it.

You should also look into using HTTP chunked requests so that the server does not send you all 40K+ bytes at once. Without that, you are depending on TCP flow control to hold back the data stream as you read it.
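
If the server does switch to chunked transfer coding, the body arrives as a series of chunks, each prefixed by its length in hex on its own line, with a zero-length chunk marking the end. A rough, untested sketch of how that framing could be stripped on the Core (readChunkSize() and readChunkedBody() are just illustrative helper names, it assumes the response headers have already been consumed, and timeouts are omitted for brevity):

#include "application.h"   // Spark Core header; the Web IDE adds this automatically for .ino files

// Sketch only: strip HTTP/1.1 chunked-transfer framing and hand back body bytes.
long readChunkSize(TCPClient &c) {
    char line[16];
    int n = 0;
    for (;;) {
        while (!c.available()) delay(1);    // no timeout here, for brevity
        char ch = c.read();
        if (ch == '\n') break;              // size line ends with CRLF
        if (ch != '\r' && n < (int)sizeof(line) - 1) line[n++] = ch;
    }
    line[n] = 0;
    return strtol(line, NULL, 16);          // chunk length is hexadecimal
}

void readChunkedBody(TCPClient &c) {
    long len;
    while ((len = readChunkSize(c)) > 0) {  // a zero-length chunk ends the body
        for (long i = 0; i < len; i++) {
            while (!c.available()) delay(1);
            char data = c.read();
            // ... count '{' / '}' or otherwise process each body byte here ...
        }
        readChunkSize(c);                   // swallow the CRLF that follows the chunk data
    }
}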


The code I am using to debug is pretty simple. The following is called right after the client connect succeeds and this command has been sent:

GET /api/your ID/yesterday/q/autoip.json HTTP/1.1 

parseData() will read each byte. Unless the RAM you are referring to is used internally by the Core API, I am only using the variables shown below.

For this command, I cannot use chunking, as the data I want to read (the weather summary) is at the end of the 40 KB. How could I control the TCP flow if the Core CPU is not fast enough?

static TCPClient client;

void parseData(int mode)
{
    int i, cnt=0, open_cnt=0, close_cnt=0;
    char data;
    unsigned long t1 = millis();

    for (i=0; i<2000; i++)                  // Wait inter data timeout
    {
        while (client.available())          // Client has data
        {                                   // Yes
            data = client.read();           // read received data
            cnt++;
            //
            // Use open-close {} to determine end
            //
            if (data == '{')
                open_cnt++;
            if (data == '}')
                close_cnt++;
            i=0;                            // reset count if data avail
        }
        if (open_cnt==close_cnt && open_cnt)
        {
            if(DEBUG==true) Serial.println(cnt);
            if(DEBUG==true) Serial.println(open_cnt);
            if(DEBUG==true) Serial.println(close_cnt);
            Serial.print("- Completed - ");
            Serial.println(millis()-t1);
            return;
        }
        delay(1);                           // 1ms wait
    }
    if(DEBUG==true) Serial.println(cnt);
    if(DEBUG==true) Serial.println(open_cnt);
    if(DEBUG==true) Serial.println(close_cnt);
    Serial.print("- Timeout exit - ");
    Serial.println(millis()-t1);
}

Does anyone from the Spark team have a good suggestion for why receiving 40 KB of data from TCPClient does not work?

The same code works on the same site with a different request (less than 9000 bytes).

Hey @Dilbert, nice looking project you have there! I seem to recall there is an interesting way you need to handle larger amounts of data. And requesting via HTTP 1.1 will allow chunked data, so it’s likely that you are getting chunks and not processing them, which is why you see less data returned.

I don’t have the time to really test your code out today, but I have a suggestion. Take a look at my FacebookLikesAlert and see how I did this part:

unsigned long lastRead = millis();

while (client.connected() && (millis() - lastRead) < 10000) {
  while (client.available()) {
    char c = client.read();
    lastRead = millis();   // any byte received resets the 10 s timeout
    // ... process c ...
  }
  // between chunks available() goes false, but connected() keeps us in the loop
}

From what I remember, between chunks client.available() is not true… thus I needed the client.connected() timeout to keep things alive.

If you are still not seeing a way forward, ping me again and I’ll try to do some actual testing of your code.

BTW, this is very clever :wink:

if (open_cnt==close_cnt && open_cnt)

I agree with @BDub and his example would be a good one to look at.

Just tried adding client.connected(), but the result is the same.

I also looked into chunked transfer and added the header below, but the result is the same.

TE: chunked 

I also tried a Range header, but the server does not seem to support it. It is still sending all the data :frowning:

Range: bytes=20000-

HTTP/1.1 200 OK
Server: Apache/1.3.42 (Unix) PHP/5.3.2
X-CreationTime: 0.206
Last-Modified: Fri, 26 Sep 2014 22:28:44 GMT
Content-Type: application/json; charset=UTF-8
Expires: Fri, 26 Sep 2014 22:28:45 GMT
Cache-Control: max-age=0, no-cache
Pragma: no-cache
Date: Fri, 26 Sep 2014 22:28:45 GMT
Content-Length: 27926
Connection: keep-alive
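
A complete request with the range might look something like this (the host name shown here is an assumption):

GET /api/your ID/yesterday/q/autoip.json HTTP/1.1
Host: api.wunderground.com
Range: bytes=20000-
Connection: close

A server that honours the range would answer "206 Partial Content" with only the requested bytes, so the "200 OK" status above confirms this API simply ignores the Range header.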

With more data analysis between the good and bad cases, the Core somehow seems to get stuck in a certain state and responds very slowly after 8030 bytes have been received.
Below is the time measured in the data-reading loop.

receiving 8030 bytes --> total time = 2339 ms
receiving 8031 bytes --> total time = 4327 ms

P.S. Since this code runs inside loop(), is the Core firmware OK with such a long delay without loop() returning? Is there a limit on how long loop() can execute?

I did more testing today and got some interesting results. My code has two sections while processing the TCPClient data:

  1. Process Data while available
  2. Wait for timeout while data not available

I put a time measurement on each section and got the results below.

Total data received : 6570 bytes
Process time : 1070 ms
Waiting period : 674 ms
Total data received : 8030 bytes (diff : 1460)
Process time : 303 ms
Waiting period : 1509 ms
Total data received : 9490 bytes (diff : 1460)
Process time : 308 ms
Waiting period : 3258 ms
Total data received : 10950 bytes (diff : 1460)
Process time : 302 ms
Waiting period : 6768 ms
Total data received : 12410 bytes (diff : 1460)
Process time : 300 ms
2 s timeout exit : 12410 bytes

As you can see, there are two strange results after the first 6570 bytes (1070 ms) have been received:

  • Additional data arrives in increments of 1460 bytes.
  • The waiting period doubles each time.

I suppose the designer of this part of the code could easily point out what has happened? Is this an internal timeout or a circular buffer issue?

Thanks for sharing! This is indeed strange. This sprint there is time available for looking into TCP on the core!!

Are you able to make the full source of the app available? Could I have the URL of the server you are connecting to, and anything else I might need to reproduce the problem?

I modified the code late last night to use read(buffer, len) instead of single-byte reads, roughly along the lines sketched below.
It shows that there are always 5 x 128-byte reads followed by a 90-byte read. That totals 730 bytes; is this the TI internal buffer?

The first long waiting period (215 ms) begins after 3650 bytes received.
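
The buffered read inside parseData() looks roughly like this (a sketch, not the exact code; open_cnt and close_cnt are the counters from parseData(), and TCPClient::read(buffer, length) returns the number of bytes actually copied, or a value <= 0 when nothing was read):

uint8_t buf[128];
int total = 0;

while (client.available()) {
    int n = client.read(buf, sizeof(buf));  // number of bytes actually copied into buf
    if (n <= 0) break;
    total += n;                             // n came back as 128, 128, 128, 128, 128, then 90
    for (int k = 0; k < n; k++) {
        if (buf[k] == '{') open_cnt++;
        if (buf[k] == '}') close_cnt++;
    }
}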

It looks like this is the same issue reported in the TI forums, with a workaround: http://e2e.ti.com/support/wireless_connectivity/f/851/t/365625.aspx

730 does appear to be a magic number with the cc3000. And the slowdown you see after 8030 bytes (which is 730*11) seems to further hint at some magic behind that number.

This looks like normal TCP/IP exponential back off when the receiver (in this case the Spark core) overflows its buffers and sends a NAK. The TI data sheet says that max packet buffer size is 1468 bytes which is a magic number from the old wired Ethernet packet days.

So the solution to not overflowing and not getting exponential back off and retry is to not send packets larger than the TI part can accept, which implies 1468 byte maximum size payloads.
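
For reference, the 1460-byte increments measured above line up with this: a standard 1500-byte Ethernet MTU minus a 20-byte IP header and a 20-byte TCP header leaves 1460 bytes of payload per TCP segment, which sits just under the 1468-byte buffer figure from the TI data sheet.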

That makes sense. I was going to check my sources before writing something similar.

Is there any way to force the sender to use a smaller send buffer size? The TI article I linked mentions the problem disappears with buffer sizes less than 2240.

That setting is typically called the MTU, or maximum transmission unit, and is set on the network interface (NIC) of the sending host. There is a path MTU discovery protocol, which I doubt Spark implements, where the receiving host (Spark in this case) sends a TCP NAK and an ICMP "fragmentation needed" message. Lots of hosts and routers ignore this, and the sending host figures out via NAK what a good transfer size and rate are, just as in this case.

If you are in control of the sending host, going into the NIC settings and making sure the MTU is set to 1500 and Jumbo packets are off will help.

Do you think a patch will be available in a Spark release for testing? I could try this out very quickly.

Is any update available yet?

@Dilbert are you in control of the sending host—the server to which the Core is connecting?


No, I am using the Core to connect to a weather station API and get reports. One of the reports is very big, over 40 KB. Unfortunately, the summary section (which I need) is at the end of the report.

Happy to consider implementing something that would alleviate this problem—i.e., have the CC3000 encourage the hosts to which it connects to avoid using jumbo frames—but I’d need more info. The CC3000 doesn’t expose any “send a TCP NAK”-level functions in its API, and a quick search of TI’s forum didn’t lead to any info there. I would normally suspect that the CC3000 would send a NAK on its own without our intervention, but skepticism would be wise. :wink: