UDP issues and workarounds

Wow, I followed your links and what a disappointment! Ugh… the CC3000 not fixing that issue is crazy, and so many people are going to ditch that chip. A proxy for the UDP packets may be the only thing you can do. What a waste, though, that an additional hop has to be made.

What about the CFOD or CBOD when sending rapidly? Anything to make that work? I can’t keep my core alive for long periods of time. That is also frustrating. I hope some advances are being made to keep the user loop running even when the WiFi trips up.

Thanks

Yes, it’s not a great situation, but I think we can make it better.

The CFOD/CBOD - this may be because the host maintains a free packet buffer count, which it updates when it sends a packet, or when the CC3000 sends a message about packets it has sent. However, this isn’t properly guarded - it’s updated both on the main thread and also in an interrupt, leading to the typical issues associated with concurrent updates to a shared value.
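
To make the hazard concrete, here is a minimal hypothetical sketch of the pattern, not the actual firmware code: an unguarded read-modify-write on a counter shared between the main loop and an ISR, and the kind of critical section (CMSIS __disable_irq/__enable_irq on the STM32) that a fix would use.

volatile int free_buffer_count = 6;   // shared between main thread and ISR (names invented)

void on_packets_freed_isr(int n)      // runs in interrupt context
{
	free_buffer_count += n;           // one side of the race
}

void send_packet()
{
	// Unguarded, free_buffer_count-- can interleave with the ISR's update
	// and corrupt the count; disabling interrupts makes it atomic.
	__disable_irq();
	free_buffer_count--;
	__enable_irq();
}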

I have a proposed fix for this, but don’t have any good test cases to stress the problem. If you have a test case I can use, I can investigate further. But I’m soon on vacation until the end of the month, so I’m afraid I can’t look at it until August.

@mdma if you get a chance, the code in the following link causes the core to die every time.

If you change the delay to under 200 ms, the core dies even faster. The longest I have seen the code run is a couple of hours. If you follow along, you’ll see that disabling the cloud helps, but the core still dies. I did not update the thread to reflect this information; I felt like the discussion was going nowhere fast.

At the time of the original post on the other thread, there were two possible outcomes with the above code.

  1. CFOD
  2. The core appears to be alive and the cloud REST functions still poll the device, but the user loop is not running. You can verify this by using a Spark.variable and the D7 LED: the variable stops updating well under the maximum value an int can hold, and the D7 LED no longer blinks (a sketch of this check is below).

I do not know if any cloud code updates have been done since then. I was also running the latest TI firmware at the time as well.
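
For reference, here is a minimal sketch of that liveness check using only the standard Spark API (the timing is arbitrary): expose a counter as a cloud variable and blink D7 from loop(); if the cloud still answers REST calls but the counter freezes and D7 stops blinking, the user loop has stalled.

int counter = 0;

void setup()
{
	pinMode(D7, OUTPUT);
	Spark.variable("counter", &counter, INT);   // poll via the cloud REST API
}

void loop()
{
	counter++;                        // freezes if the user loop stalls
	digitalWrite(D7, counter & 1);    // stops blinking when loop() stops running
	delay(250);
}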

Thanks Again

Time to call it broken, I think.

Does this influence DNS hostname resolution?

I have not seen DNS be affected, but I guess it is possible. There have been issues for folks with a complicated or slow DNS setup. If your wireless access point/router is gatewaying for you, normally everything is fine. One satellite internet user wrote his own DNS resolver with a much longer timeout, since the default never worked for him.

Writing the proxy to do as described would be impossible: the boundaries of received UDP datagrams cannot be determined. Hmm, many weeks later, I understand now: the proxy would not be on a Spark, so it would see the packet boundaries; it can insert an end-of-packet marker and re-transmit, and the Spark can search for the marker.
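
To make that concrete, here is a hedged sketch of the Spark side, assuming the proxy appends a known delimiter byte to every datagram it forwards; MARKER, rxBuf, and handlePacket are invented for this example, and a real design would also need to escape the marker if it can occur in the payload.

void handlePacket(const uint8_t *data, int len);   // assumed application callback

const uint8_t MARKER = 0x7E;   // end-of-packet byte appended by the proxy
uint8_t rxBuf[512];            // accumulates possibly-coalesced reads
int rxLen = 0;

void pollUdp(UDP &udp)
{
	int n = udp.parsePacket();
	if (n > 0 && rxLen + n <= (int)sizeof(rxBuf)) {
		rxLen += udp.read(rxBuf + rxLen, n);
	}
	// Split the accumulated bytes on the marker to recover the original
	// datagram boundaries, even when two packets arrived glued together.
	int start = 0;
	for (int i = start; i < rxLen; i++) {
		if (rxBuf[i] == MARKER) {
			handlePacket(rxBuf + start, i - start);   // one original datagram
			start = i + 1;
		}
	}
	memmove(rxBuf, rxBuf + start, rxLen - start);     // keep any partial tail
	rxLen -= start;
}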

UDP.parsePacket and UDP.read are broken. The Spark will sometimes receive incoming packets (AAA), (BBBBB) as (AAABBBBB). I need to be able to rely on the return from parsePacket to tell me the length of the next unread packet.

It’s an embarrassment for a serious library to get the basic UDP fundamentals wrong. This will give newbies a skewed idea of UDP and will frustrate people who know what they are doing. Please fix this.

2 Likes

UDP.write and UDP.endPacket are broken. According to the Spark docs, and common sense, UDP.write provides a buffer for you so that you don’t have to do your own buffering. Then when you call UDP.endPacket, everything you have written gets sent as a single packet.

Instead, UDP.write sends a packet every time it is called. UDP.endPacket does not function.

Since UDP inherits from Stream, this is especially problematic. Some of the write methods call UDP.write many times in the process of writing a single piece of data, but then UDP.write fires each tiny piece off in its own individual packet.
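
For a concrete, hypothetical illustration (remoteIP and the port are assumptions): Print::println(int) writes one character at a time, so with the broken behavior each character leaves the Core as its own datagram.

Udp.beginPacket(remoteIP, 8888);
Udp.println(1234);    // intended: one packet containing "1234\r\n"
Udp.endPacket();      // actual: six separate one-byte packets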

Can we expect this to ever be resolved on the Spark Core?

2 Likes

Here is the work-around for the write endPacket issue. Try it–works great!

The read and parsePacket issue is related to the TI chip and is not fixable by Spark. That is one of the many reasons they are moving to a different WiFi chip for the Photon. Encode the packet length in the packet, or if you can’t, build a parser that can figure it out. If you are using a well-known network service over UDP, we can probably help you figure it out.
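
Before the write-side workaround below, here is one hedged sketch of the "encode the length" idea for the read side: the sender prefixes each payload with a 2-byte big-endian length, and the receiver consumes the possibly-coalesced byte stream message by message. The names buf, have, and handleMessage are illustrative, not part of the Spark API.

void handleMessage(const uint8_t *data, int len);   // assumed application callback

uint8_t buf[512];
int have = 0;

void readFramed(UDP &udp)
{
	int n = udp.parsePacket();
	if (n > 0 && have + n <= (int)sizeof(buf)) {
		have += udp.read(buf + have, n);
	}
	while (have >= 2) {
		int len = (buf[0] << 8) | buf[1];      // 2-byte big-endian length header
		if (have < 2 + len) break;             // message not complete yet
		handleMessage(buf + 2, len);           // deliver one original message
		memmove(buf, buf + 2 + len, have - 2 - len);
		have -= 2 + len;
	}
}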

//----- UDP + overriding the broken UDP send functions of the Spark Core (REQUIRED !)
class myUDP : public UDP {
private :
	uint8_t myBuffer[128];   // accumulates writes until endPacket() flushes them
	size_t offset = 0;
public :
	virtual int beginPacket(IPAddress ip, uint16_t port){
		offset = 0;          // start a fresh packet
		return UDP::beginPacket(ip, port);
	};
	virtual int endPacket(){
		// The underlying UDP::write sends one datagram per call, so flushing
		// the whole buffer with a single call yields a single packet.
		return UDP::write(myBuffer, offset);
	};
	virtual size_t write(uint8_t buffer) {
		return write(&buffer, 1);
	}
	virtual size_t write(const uint8_t *buffer, size_t size) {
		if (offset + size > sizeof(myBuffer))
			size = sizeof(myBuffer) - offset;   // clamp rather than overflow
		memcpy(&myBuffer[offset], buffer, size);
		offset += size;
		return size;
	}
};

myUDP Udp;

P.S. You will get better answers if you stick to one thread. I see @Moors7 tidied up for you a bit–he beat me to it again!

2 Likes

It’s good to know about the Photon. Thank you! Any chance the endPacket issue will be fixed in the firmware?

To get around the lack of buffering, I wrote a MemoryStream class (http://pastebin.com/rSqrRKax) inspired by C#'s MemoryStream. I can write:

MemoryStream* stream = new MemoryStream();
stream->write((byte)'h');
stream->write((byte)Spark.deviceID().length());
stream->write(Spark.deviceID());
udpClient.beginPacket(serverAddress, port);
udpClient.write(stream->getBuffer(), stream->getSize());
udpClient.endPacket();
delete stream;

MemoryStream is helpful because it expands dynamically and efficiently. The firmware API could use a solution like this behind the scenes rather than fixed size buffers.

@jnm2 makes a good point. The endPacket problem/feature/bug and buffering for sending could and should be fixed. I would attempt to fix it if I were proficient enough in C++.

Good luck! Personally, I have abandoned the Spark Core for receiving UDP packets. I can't design something around a part that doesn't follow network standards. Even if I could tell third parties that I needed them to encapsulate their UDP packets in an envelope, they would laugh; nor do they want to spend the development time implementing it. I refuse to write a UDP proxy either: one of the main points of UDP is speed, and a proxy takes away from that. I also don't want to have to support a proxy, increasing my total cost of ownership.

I hope, from all the threads about UDP, that the Photon addresses all these problems. I really like the Spark feature set: cloud, web IDE, community, price, the ability to use WiFi without much difficulty, and all the other features. Nice job, Spark team. Please have this fixed in the Photon.

Thanks

Hi @jnm2

Your MemoryStream class looks very nice and is interesting, but heap fragmentation is a real problem on processors like the Spark's, and calling new many times for small numbers of bytes is usually not a good way to maintain stability. Static allocation is generally more stable and leads to fewer problems on smaller processors like this. The Spark team is moving away from allocating static buffers at compile time, toward allocating only the buffers your code really needs; right now there can be unused buffers allocated in RAM by the base firmware.
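
As one hedged illustration of the static approach, the packet from your earlier example could be built in a fixed buffer that is reused for every send, so the heap never fragments (txBuf and its size are assumptions for this sketch):

static uint8_t txBuf[64];   // reused for every packet; no heap churn
size_t txLen = 0;

txBuf[txLen++] = 'h';
String id = Spark.deviceID();
txBuf[txLen++] = (uint8_t)id.length();
memcpy(txBuf + txLen, id.c_str(), id.length());
txLen += id.length();

udpClient.beginPacket(serverAddress, port);
udpClient.write(txBuf, txLen);
udpClient.endPacket();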

One other point: there is no point in growing the UDP buffer larger than the MTU of the TI CC3000, which is fixed at 1468 bytes maximum, if you intend to call UDP.write on that buffer. You are dealing with the packet-level interface, and there is no other buffer in the system to absorb your write requests other than the packet buffers inside the TI part.

I had thought about sending in a pull request to fix the UDP write/endPacket problem too, but then the Photon happened and things are going to be different in the future. You can try out the branch that will be used on the Photon right now on your Core if you set up the local gcc toolchain and build locally, but I don’t think it addresses these issues yet.

@bko please submit your write/endPacket fix. This seems to be in the realm of firmware under Spark’s control.

2 Likes

Having an issue after flashing:

In setup() there is a begin() call binding to a port:
void loop()
{
	if (Udp.parsePacket() > 0)
	{
		//
	}
}

After that, the Photon flashes red. If I comment out parsePacket in loop, it works. Firmware 0.4.4.

Any ideas? It worked on 0.4.3.

Hi @Aka_Abe

Can you show us the minimal program that demonstrates the problem? I have used 0.4.4 with my UDP code and it is working for me. Maybe something else is causing trouble for you?

Hi,

I think the problem is in the call to WiFi.localIP(). Please see the code below:

char localIp[4];

UDP Udp;

void initLocalIp()
{
	IPAddress mylIp = WiFi.localIP();
	sprintf(localIp, "%d.%d.%d.%d", mylIp[0], mylIp[1], mylIp[2], mylIp[3]);
}

void setup()
{
	Udp.begin(8888);
	initLocalIp();
}

void loop()
{
	if (Udp.parsePacket() > 0)
	{
	}
}

That char localIp[4] is way too small for the dotted IP address you are printing, so sprintf overwrites something else in memory.

Try increasing it to char localIp[16]; or larger, since there are up to 3 digits per address byte, plus three dots, plus the trailing zero byte that terminates the string.
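
That worst case works out as follows:

// "255.255.255.255" = 4 * 3 digits + 3 dots + 1 terminating NUL = 16 bytes
char localIp[16];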

3 Likes

Solved. Thanks!

1 Like