UDP received DGRAM boundaries lost: read(), parsePacket(), available() all broken

@phec - I’m unsure what you mean by “chunking”, but what I wrote at the top, in the initial posting, is correct. UDP datagrams arrive as sent (if they arrive at all), and not necessarily in the order sent. TCP data arrives in a stream: byte (n+1) always arrives immediately after byte (n). You can read the stream in any size (chunk?) you like. But even if you attempt to read in exactly the sizes that were sent, and even if that size is constant, nothing in the TCP protocol guarantees that the attempt will succeed. The sender may send 100, 100, 100, and you may attempt 100-byte reads (chunks?) only to find that the reads return 100, 20, 100, 80 bytes.

Ordinarily, where not too many network hops are made and traffic is not very congested, you will see 100, 100, 100, but the point is this: if you want to write robust code, you must not rely on that. Plenty of programmers do idiotic things such as write one database record per TCP sendto() call and are then surprised that the error-corrected, reliable TCP connection does not always deliver one recvfrom() per sendto().

The network programmer always has to choose between two things: either worry about message boundaries and use TCP, or worry about lost/duplicated/out-of-sequence datagrams and use UDP. Sometimes the latter is much easier. But not (currently) on the Spark Core!
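To make the boundary point concrete, here is a minimal POSIX C sketch (plain BSD sockets, not the Spark API; the name recv_exact is mine) of the loop every robust TCP reader ends up writing, precisely because a single recv() may return 20 bytes when you asked for 100:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

/* Read exactly len bytes from a connected TCP socket. We must loop:
 * TCP preserves byte order but not message boundaries, so one recv()
 * may return fewer bytes than requested (100, 20, 100, 80 instead of
 * 100, 100, 100). Returns 0 on success, -1 on error or peer close. */
static int recv_exact(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)          /* 0 = peer closed, <0 = error */
            return -1;
        p   += n;            /* advance past the bytes we did get */
        len -= (size_t)n;
    }
    return 0;
}
```

With UDP you never need this loop: each recvfrom() hands you one whole datagram (possibly truncated if your buffer is too small), which is exactly the property Spark UDP is currently losing.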

I am starting to suspect that Spark UDP is derived from Spark TCP. You quote UDP.available() as being inherited from the Stream class, but UDP is not a stream, not in the way Stevens (and doubtless the RFCs) use the term. TCP uses SOCK_STREAM sockets; UDP uses SOCK_DGRAM.
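For reference, this is the distinction at the BSD socket level that I would expect the firmware to preserve (again plain POSIX C, not Spark code):

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    /* TCP: a byte stream; record boundaries are not preserved. */
    int tcp_fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

    /* UDP: discrete datagrams; boundaries are part of the service. */
    int udp_fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    printf("tcp=%d udp=%d\n", tcp_fd, udp_fd);
    return 0;
}
```

Inheriting UDP.available() from Stream bakes the wrong model in: the natural question to ask of a datagram socket is “how long is the next datagram?”, not “how many bytes are buffered?”.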

Where are “remoteSockAddr.sadata[6]” and “remoteSockAddr.sadata[7]” documented?