[ENet-discuss] Bandwidth/latency optimization

Blair Holloway thynameisblair at chaosandcode.com
Tue Dec 3 15:16:54 PST 2013


Here are my thoughts:

1) Packet size

Smaller is nearly universally better, especially if any of your data are
reliable, as your available bandwidth can quickly be eaten up by resent data
over poor connections.

A second advantage of smaller data sizes is that you can generally support a
higher maximum player count, or more stuff happening in your game.

2) Compression

Of all the games I've worked on, we only ever used bit-packing or a range
encoder to compress the data; we never used a more general-purpose
compression algorithm.
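
To make the bit-packing idea concrete, here is a minimal sketch (generic C,
not part of ENet; the names are mine, and the writer assumes a zeroed,
large-enough buffer): each value is written using only as many bits as its
range actually requires.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t *buf;      /* destination buffer, assumed zeroed and large enough */
    size_t   bit_pos;  /* next free bit position in buf */
} BitWriter;

/* Append the low `bits` bits of `value` to the stream, LSB first. */
static void bw_write(BitWriter *w, uint32_t value, unsigned bits)
{
    for (unsigned i = 0; i < bits; ++i) {
        if (value & (1u << i))
            w->buf[w->bit_pos >> 3] |= (uint8_t)(1u << (w->bit_pos & 7));
        ++w->bit_pos;
    }
}

/* Example: a health value 0..100 needs 7 bits and a weapon id 0..15 needs 4,
   so both fit in 11 bits instead of the 2+ bytes a naive struct would use. */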

One exception was a game whose packets often contained a lot of repeated
string data. We generated a dictionary at runtime using previously seen
strings, and used indices into that dictionary during future transmissions.
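
The idea was roughly the following (a simplified sketch with made-up names;
the real thing also had to keep both sides' tables in sync, for instance by
only adding strings that had been delivered reliably):

#include <stdlib.h>
#include <string.h>

#define MAX_STRINGS 256

static char *g_strings[MAX_STRINGS];  /* previously seen strings */
static int   g_count = 0;

/* Return the dictionary index for `s`, adding it on first sight.
   Once both ends know the string, only the index needs to be sent. */
static int string_index(const char *s)
{
    for (int i = 0; i < g_count; ++i)
        if (strcmp(g_strings[i], s) == 0)
            return i;
    if (g_count == MAX_STRINGS)
        return -1;                      /* table full: send the string itself */
    g_strings[g_count] = malloc(strlen(s) + 1);
    strcpy(g_strings[g_count], s);
    return g_count++;
}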

In general, the best "compression" we implemented was always "remove the
need for X to be sent". :)

3) Channels

Can you envision a need up-front for high-priority reliable data? I've
never encountered such a case myself, but I *have* found cases where
certain unreliable data need to be prioritized ahead of others. Sometimes
this can be implemented as simply as "put the high priority stuff in the
packet first; continue until pre-set limit reached".
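
As a rough sketch of that approach (the struct and function names are just
made up for illustration, and the update list is assumed to be already sorted
by priority):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    const void *data;
    size_t      size;
    int         priority;   /* lower value = more important */
} Update;

/* Copies entries into `packet` until the byte budget would be exceeded;
   whatever is left over simply waits for the next send. Returns bytes used. */
static size_t fill_packet(uint8_t *packet, size_t budget,
                          const Update *updates, size_t count)
{
    size_t used = 0;
    for (size_t i = 0; i < count; ++i) {
        if (used + updates[i].size > budget)
            break;          /* or `continue` to try to squeeze in smaller items */
        memcpy(packet + used, updates[i].data, updates[i].size);
        used += updates[i].size;
    }
    return used;
}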

4) Server cycle

The only thing I can really say here is that the smaller the simulation time
step your server takes, the lower your latency will be. Increasing the
timestep will save you bandwidth, but it can make your game seem laggier.
For an FPS, you probably want to aim for an even higher packet rate -- 30
packets per second wouldn't be unusual.
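
For reference, the skeleton of such a fixed-timestep server loop with ENet
might look something like this (a sketch only; simulate() and handle_input()
stand in for your own game code, and the snapshot broadcast is left as a
comment):

#include <enet/enet.h>

#define TICK_MS 50   /* 20 updates per second; lower it for less latency */

static void run_server(ENetHost *server)
{
    enet_uint32 next_tick = enet_time_get() + TICK_MS;

    for (;;) {
        ENetEvent   event;
        enet_uint32 now  = enet_time_get();
        enet_uint32 wait = (now < next_tick) ? next_tick - now : 0;

        /* Drain network events, blocking at most until the next tick. */
        while (enet_host_service(server, &event, wait) > 0) {
            switch (event.type) {
            case ENET_EVENT_TYPE_RECEIVE:
                /* handle_input(event.peer, event.packet);  (your game code) */
                enet_packet_destroy(event.packet);
                break;
            default:
                break;
            }
            wait = 0;   /* don't block again within the same tick */
        }

        if (enet_time_get() >= next_tick) {
            /* simulate(TICK_MS);  advance the game state (your game code)    */
            /* ENetPacket *snap = enet_packet_create(data, len, 0);
               enet_host_broadcast(server, 0, snap);  unreliable snapshot     */
            next_tick += TICK_MS;
        }
    }
}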

Cheers,

- Blair



On Tue, Dec 3, 2013 at 1:45 PM, Jérémy Richert <jeremy.richert1 at gmail.com> wrote:

> Hi all,
>
> I am currently developing a multiplayer FPS, and I am wondering how to
> tune ENet to have a network engine as optimized as possible on a bandwidth
> usage and a latency point of view.
> I would like to have your thoughts on the following aspects:
>
> ----------------
> 1. Packet size
> ----------------
>
> I was wondering whether it was better to send large packets or to divide
> them into small packets.
>
> Pros of the large packets :
> - Less overhead due to the protocols (8 bytes for UDP, 20 bytes for IPv4,
> 10 for ENet)
>
> Pros of the small packets :
> - When a packet is lost, less data is lost
> - When a reliable packet is lost, resending it requires less bandwidth
> - Lower latency
>
> From what I have read, most people agree that it is better to send small
> packets to avoid packet splitting. This means that the application has to
> ensure that the size of the data sent does not exceed the MTU. Also, as
> the MTU depends on the routers along the path, some people recommend using
> packets that will never be split, i.e. <= 576 bytes.
> What is your experience on this point?
> For now I have capped the data size at 1500 bytes, but I am thinking of
> reducing it for better latency. Does anyone know the typical packet loss rate?
>
> ----------------
> 2. Compression
> ----------------
>
> Has anyone used compression in a network engine? If yes, which compression
> algorithm? What was the average gain?
> I have read that John Carmack used Huffman compression in the Quake 3
> network engine because it was well suited to network data, but I still
> need to find some time to implement it in my program and run some tests.
>
> ----------------
> 3. Channels
> ----------------
>
> What is your network channel policy? How many channels do you use? What do
> you send on each channel?
> At the moment I have 2 channels: one for sending unreliable data (the bulk
> of the traffic), and another one for reliable data.
> I have chosen this organization to avoid blocking the unreliable data while
> waiting for an ACK on the reliable data. I am thinking of adding another
> channel for high-priority reliable data, but I am not sure of the benefit,
> as I already group the reliable data before sending it to limit the
> blocking. It may be useful if the packet loss rate is too high.
>
> ----------------
> 4. Server cycle
> ----------------
>
> I know this aspect depends on the game type, but I would be interested in
> knowing how your applications work on this.
> On my side, based on Valve's introduction to networking concepts (plus some
> reading on the UT and Quake network engines), I have decided to implement
> a 50-ms cycle on the server side. This means that the server only updates
> the simulation and notifies the clients every 50 ms. In between, it only
> reads the network messages to empty the network event buffer and to handle
> reliable packet sending.
> On the client side, I have introduced a deliberate delay of 70 ms, which
> goes unnoticed from the user's point of view, but helps a lot with overall
> fluidity, as the client will almost always already have the next world
> update (unless a packet is lost).
> I will soon try to increase the server cycle time to 60 ms to save 10-20%
> of the bandwidth. I also plan to separate the display delay of the player
> (which will stay at 70 ms) from the rest of the world (increased to 100 ms).
> However, I am afraid of the impact on responsiveness.
>
> Does anyone have a similar architecture? What is your experience with
> timings?
> If not, has anyone already developed an FPS or a game that depends heavily
> on network speed? If so, what is your advice?
>
> ----------------
> 5. Other
> ----------------
>
> If anyone has useful advice on how to improve the network engine of a
> game/application, I would be pleased to hear it (well, to read it at
> least).
>
>
> Thanks in advance for all your experience sharing.
>
> Best regards,
> Jeremy Richert
>
> _______________________________________________
> ENet-discuss mailing list
> ENet-discuss at cubik.org
> http://lists.cubik.org/mailman/listinfo/enet-discuss
>
>