The overhead goes up as the packet size goes down. Check out this write-up for the gory details in an entertaining story:
http://www.tamos.net/~rhay/overhead/ip-packet-overhead.htm
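As a rough illustration (the header sizes below are approximations, and the ENet figure in particular is only a ballpark assumption, not a number from the ENet source), a 7-byte payload ends up wrapped in far more header than data:

/* Rough per-packet overhead sketch. Header sizes are assumptions:
 * typical Ethernet II / IPv4 / UDP framing plus a ballpark figure
 * for ENet's own per-packet bookkeeping. */
#include <stdio.h>

int main(void)
{
    const int payload  = 7;   /* one pixel per packet, as in the test below */
    const int ethernet = 14;  /* Ethernet II header (preamble/FCS not counted) */
    const int ipv4     = 20;  /* minimal IPv4 header */
    const int udp      = 8;   /* UDP header */
    const int enet_hdr = 12;  /* assumed rough size of ENet's protocol/command headers */

    int overhead = ethernet + ipv4 + udp + enet_hdr;
    printf("payload: %d bytes, headers: ~%d bytes (~%.0f%% overhead)\n",
           payload, overhead, 100.0 * overhead / (overhead + payload));
    return 0;
}

With those assumed numbers, roughly nine out of every ten bytes on the wire are headers rather than pixel data.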
On Mon, Sep 27, 2010 at 3:33 PM, Nicholas J Ingrassellino <nick@lifebloodnetworks.com> wrote:
> This just inspired me to do another test.
>
> I am now only sending 1 out of every ~10,000 pixels. It still takes
> about half a second to receive ~50 pixels (one 7-byte packet per
> pixel). All the CPU usage is on the client, not the server. I am very
> familiar with this graphics library (Allegro), having used it many
> times before. If I receive the packets, discard them, and do not
> render the pixels, my CPU usage remains at ~100%, leading me to
> believe it is enet_host_service() and not something to do with
> rendering data onto the screen.
>
> Is ~350 bytes split into ~50 unreliable, unsequenced packets still too
> much?
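For illustration, here is a minimal sketch of packing many of those 7-byte pixel updates into a single unsequenced ENet packet, so the per-datagram header cost is paid once per batch rather than once per pixel. The Pixel struct and send_pixels() helper are hypothetical, not from the original code; only the ENet calls are real API.

/* Sketch only: batch pixel updates into one unsequenced packet. */
#include <enet/enet.h>
#include <stddef.h>

typedef struct {          /* hypothetical 7-byte pixel update */
    unsigned char x[2];   /* x coordinate */
    unsigned char y[2];   /* y coordinate */
    unsigned char r, g, b;
} Pixel;

static void send_pixels(ENetPeer *peer, const Pixel *pixels, size_t count)
{
    /* One packet carries 'count' pixels, so the UDP/IP/ENet header cost
     * is paid once instead of 'count' times. */
    ENetPacket *packet = enet_packet_create(pixels,
                                            count * sizeof(Pixel),
                                            ENET_PACKET_FLAG_UNSEQUENCED);
    enet_peer_send(peer, 0, packet);
    /* The queued packet actually goes out on the next
     * enet_host_service() / enet_host_flush() call. */
}

Batched this way, the ~50 pixels in the test above would travel as one ~350-byte payload instead of fifty separate 7-byte payloads.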