Assume you have a client application which is known to connect to a given IP, send a fixed-size message (of size X, roughly a dozen bytes) upon connecting, and then wait for a reply.
If you're writing a server, can you guarantee that the first successful (non-EAGAIN, non-error) call to read() on the non-blocking socket after accept() will return all X bytes?
My understanding is that TCP is a stream protocol, so this isn't guaranteed, but it might hold in practice, since such a small message is unlikely (though perhaps still possible?) to be split during transmission.
In practice it depends in part on the client application. If the client disables local buffering (e.g. Nagle's algorithm, via TCP_NODELAY) and writes the bytes to the socket one at a time, then it is entirely possible that the 12 bytes are transferred in multiple wire-level packets, and the server's first read() may return fewer than 12 bytes.
And in part on the way the server application is written. Note that a plain blocking read() requesting 12 bytes returns as soon as *any* data is available, which may be fewer than 12 bytes; to block until all 12 have arrived you need either recv() with MSG_WAITALL or a read loop. Also, servers handling many connections typically use non-blocking sockets for scalability, in which case a read loop is the only option.
TCP is a stream-oriented protocol. It is entirely legal for either operating system, a router, or any other device involved in the connection to break up writes of any size into multiple packets, or to recombine them. A well-written program must handle short reads.