[ENet-discuss] Large data packets over ENet

Jacco Bikker jacco.bikker at overloaded.com
Thu Feb 3 01:21:09 PST 2005


> If you compress the entire frame section and then send
> in fragments, would the remainder be corrupt if you lost a fragment?

That's not necessary. The ray tracer thinks in tiles: the screen is
divided into 16x16-pixel blocks (1024 bytes each once rendered). This
improves cache coherency when traversing the spatial data structures. I
can thus simply compress the data for each tile and send it over. That
would behave much like MPEG streaming: packet loss would merely result in
tiles not getting updated.
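Roughly, the send side then looks like this (a minimal sketch with
illustrative names; compress_tile() stands in for the LZO call, and
flags == 0 gives an unreliable ENet packet):

#include <enet/enet.h>
#include <string.h>

#define TILE_BYTES (16 * 16 * 4)   /* one rendered 16x16 tile, 32-bit pixels */

/* illustrative: compresses one tile into dst, returns the compressed size */
extern size_t compress_tile(const unsigned char *tile, unsigned char *dst);

void send_frame_tiles(ENetPeer *peer, const unsigned char *tiles, int tile_count)
{
    unsigned char buf[4 + TILE_BYTES * 2];   /* index + worst-case output */
    ENetPacket *pkt;
    size_t n;
    int i;

    for (i = 0; i < tile_count; ++i) {
        memcpy(buf, &i, 4);                  /* tile index for the master */
        n = compress_tile(tiles + (size_t)i * TILE_BYTES, buf + 4);
        /* flags == 0 -> unreliable: a lost packet just leaves that tile stale */
        pkt = enet_packet_create(buf, 4 + n, 0);
        enet_peer_send(peer, 0, pkt);
    }
}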

> I mean the socket buffers (UDP/Winsock if you like).  You do this with the
> following code:

Thanks for that snippet. I'll check it out.
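For reference, since the snippet itself isn't quoted above: the usual
approach is to enlarge the UDP socket buffers with setsockopt, something
like the following (where sock is assumed to be the underlying socket
handle):

#include <winsock2.h>   /* or <sys/socket.h> on Unix */

void enlarge_socket_buffers(SOCKET sock)
{
    int size = 256 * 1024;   /* e.g. 256KB each way */
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (const char *)&size, sizeof(size));
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (const char *)&size, sizeof(size));
}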

> Switching to 16bit will definitely make a huge difference.  It's a classic
> quality/performance trade off.  I think 16bit at 40fps would be much more
> impressive than 32bit at 25fps.

I'm not sure about that. After all, I'm ray tracing, so part of the wow
factor is image quality. Refraction and soft shadows in particular look much
better at 32-bit. Perhaps I can do something in between by using dithering.
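Something like ordered dithering before truncating to 16-bit could hide
most of the banding; a minimal sketch (assuming 0x00RRGGBB pixels and
RGB565 output):

static const int bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 }
};

/* converts one 32-bit pixel to RGB565, spreading the quantisation error
   with a 4x4 Bayer threshold instead of plain truncation */
unsigned short dither565(unsigned int rgb, int x, int y)
{
    int t = bayer4[y & 3][x & 3];
    int r = (rgb >> 16) & 0xFF;
    int g = (rgb >>  8) & 0xFF;
    int b =  rgb        & 0xFF;
    r += t >> 1; if (r > 255) r = 255;   /* 5-bit channel, step 8 */
    g += t >> 2; if (g > 255) g = 255;   /* 6-bit channel, step 4 */
    b += t >> 1; if (b > 255) b = 255;
    return (unsigned short)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}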

> How fast is LZO, out of curiosity?  I use Huffman for game data,
> but the implementation is dog slow.  I have to optimise it.

I use the code from www.oberhumer.com. It's old, but the author claims
that it is still the fastest compression code around. He claims 5 MB/s on a
P133 (it's that old ;) ); all I can say is that it compresses the full
768KB in about 40ms on my system (1.7 GHz). That's very little overhead,
and just like rendering, it is distributed over the slave systems.
Decompression is much faster still; I believe he mentions something
like 16 MB/s. Of course that part needs to be handled by the master alone.
It's also very easy to use; it took me an hour or so to add to my code.
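For anyone curious, the miniLZO interface really is tiny; a sketch of how
compressing a frame can look (buffer names and error handling are mine, and
lzo_init() should be called once at startup):

#include "minilzo.h"

/* work memory for the fast LZO1X-1 compressor, aligned as the library requires */
static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                          / sizeof(lzo_align_t)];

/* returns the compressed size, or 0 on error; 'dst' must hold at least
   src_len + src_len / 16 + 64 + 3 bytes to cover incompressible input */
lzo_uint compress_frame(const unsigned char *src, lzo_uint src_len,
                        unsigned char *dst)
{
    lzo_uint dst_len = 0;
    if (lzo1x_1_compress(src, src_len, dst, &dst_len, wrkmem) != LZO_E_OK)
        return 0;
    return dst_len;
}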

> You would have to buffer the data from the thread anyway, until the main
> thread wants it.  I think John Carmack agrees with me that threads aren't
> necessary for high performance.

I'm going to check out this buffering issue ASAP. It looks like a very good
candidate for my current stalling problems. :) It would also explain why
debug mode always has far fewer problems than release mode.
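If it turns out to be the culprit, the fix is probably the hand-off the
quote describes: the network thread appends received data to a lock-protected
buffer, and the main thread drains it when it is ready for a new frame. A
minimal sketch (all names illustrative):

#include <pthread.h>   /* or a CRITICAL_SECTION on Win32 */
#include <string.h>

#define QUEUE_BYTES (1024 * 1024)

static unsigned char   queue[QUEUE_BYTES];
static size_t          queue_used = 0;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* called from the network thread for every received packet */
void buffer_packet(const unsigned char *data, size_t len)
{
    pthread_mutex_lock(&queue_lock);
    if (queue_used + len <= QUEUE_BYTES) {      /* drop on overflow */
        memcpy(queue + queue_used, data, len);
        queue_used += len;
    }
    pthread_mutex_unlock(&queue_lock);
}

/* called from the main thread once per frame; returns the bytes copied out */
size_t drain_packets(unsigned char *dst, size_t dst_size)
{
    size_t n;
    pthread_mutex_lock(&queue_lock);
    n = (queue_used <= dst_size) ? queue_used : dst_size;
    memcpy(dst, queue, n);
    memmove(queue, queue + n, queue_used - n);
    queue_used -= n;
    pthread_mutex_unlock(&queue_lock);
    return n;
}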

> I believe ENet provides unreliable transfer.  I'm just not sure the
> architecture is geared towards high throughput.

I don't think it is. One of the problems I stumbled upon is the fact that
ENet allocates memory for each packet that is created, and performs a
memcpy for each packet that is sent. Under normal circumstances that's
probably perfectly acceptable, but with one packet per tile (roughly 768
per frame) it's probably too much overhead for me.

Nevertheless, ENet really helped me to get things working in the first
place. :)



