[ENet-discuss] A unified rate-based throttle for reliable and unreliable packets

Lee Salzman lsalzman at telerama.com
Wed Mar 31 15:42:11 PST 2004


I've been giving some thought to trying to make the throttling
mechanisms more holistic, and particularly more sane for the case of
reliable data. Currently the windowing mechanism and the packet dropping
mechanism sort of work together, but in an almost antagonistic way.

So I propose the following throttling mechanism in place of what's
there, that would both get rid of the reliable packet window and replace
the two throttling mechanisms with one integrated scheme:

The main variable of the throttle will be the current sending rate. This
is the maximum rate, in bytes per second, at which data should be sent
over a connection. The rate ultimately floats up and down in response to
network conditions. Whether throttling reliable or unreliable data,
this sending rate is always obeyed.

If bandwidth limits for the connection are specified, we then have at
our disposal a maximum throughput that ENet should not exceed. Since
this is only optionally specified, it can't be counted upon, but when
present it can be used to set an upper limit on the current sending rate.

Whenever we need to send out data (on a call to either enet_host_flush or
enet_host_service), we determine a data budget. This data budget is
defined as the time since data was last sent multiplied by the current
sending rate, capped at the amount of data sendable in one second (or
some fraction thereof), so:
  budget = min(timeSinceLastSent, 1) * sendingRate
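
To make that concrete, here's a rough sketch of the budget computation in
C. The names (sendingRate, lastSendTime, currentTime) are just placeholders
for whatever fields the peer would actually carry, not existing ENet
internals:

#include <stddef.h>

/* Placeholder names, not actual ENet fields. */
static size_t
compute_data_budget (double sendingRate, double currentTime, double lastSendTime)
{
    double timeSinceLastSent = currentTime - lastSendTime;

    /* Never budget more than one second's worth of data at once. */
    if (timeSinceLastSent > 1.0)
        timeSinceLastSent = 1.0;

    return (size_t) (timeSinceLastSent * sendingRate);
}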

For reliable packets, we send out as many packets as possible within
this budget. Any remaining data simply stalls until the budget grows
large enough to send it.
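
Something along these lines is what I have in mind for draining the
reliable queue, with the queue and packet types below as stand-ins rather
than ENet's real outgoing command structures:

#include <stddef.h>

/* Stand-in packet type; ENet's real structures would be used instead. */
typedef struct _Packet
{
    size_t size;
    struct _Packet * next;
} Packet;

static void
send_reliable_within_budget (Packet ** queue, size_t * budget)
{
    /* Send queued reliable packets for as long as they fit in the
       remaining budget; whatever is left just waits for the next flush. */
    while (* queue != NULL && (* queue) -> size <= * budget)
    {
        Packet * packet = * queue;

        * budget -= packet -> size;
        * queue = packet -> next;

        /* transmit (packet);  -- actual sending elided */
    }
}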

For unreliable packets, minimal latency of sending is much more
important than whether the packets get there, since ENet assumes a
high sending rate for unreliable data. To accomplish this, ALL backlogged
unreliable packets are sent out, subject to packet dropping. To compute
the probability that a packet should be dropped, we consider the TOTAL
amount of backlogged data (both reliable and unreliable) in excess of
the data budget. The probability of dropping a byte in a packet is then:
  dropProbability = max(backloggedData - dataBudget, 0)
                      / backloggedData
The probability of dropping an individual packet is its size in bytes
multiplied by this per-byte probability.
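
As a sketch, the drop test could look roughly like this, using a plain
uniform random number; note that clamping the per-packet probability at 1
for very large packets is my own assumption, not something spelled out
above:

#include <stdlib.h>

static int
should_drop_packet (size_t packetSize, size_t backloggedData, size_t dataBudget)
{
    double dropProbability, packetDropProbability;

    /* Everything fits within the budget, so nothing gets dropped. */
    if (backloggedData <= dataBudget)
        return 0;

    /* Per-byte probability: fraction of backlogged data over budget. */
    dropProbability = (double) (backloggedData - dataBudget)
                        / (double) backloggedData;

    /* Per-packet probability: size in bytes times the per-byte
       probability, clamped to 1 (the clamp is my own assumption). */
    packetDropProbability = (double) packetSize * dropProbability;
    if (packetDropProbability > 1.0)
        packetDropProbability = 1.0;

    return ((double) rand () / (double) RAND_MAX) < packetDropProbability;
}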

The criteria for floating the sending rate up are the same as currently,
those being "good" RTTs. The criteria for dropping it down are "bad"
RTTs or dropped packets.
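
I haven't pinned down the exact step sizes, but as an illustration only
(the constants are made up, and the bandwidth cap is the optional limit
mentioned earlier), the rate adjustment might look like:

/* The 1.05 and 0.75 factors are purely illustrative; a bandwidthLimit
   of 0 means no limit was specified for the connection. */
static void
adjust_sending_rate (double * sendingRate, int rttWasGood, int packetDropped,
                     double bandwidthLimit)
{
    if (rttWasGood && ! packetDropped)
        * sendingRate *= 1.05;  /* float the rate up on "good" RTTs */
    else
        * sendingRate *= 0.75;  /* back off on "bad" RTTs or drops */

    if (bandwidthLimit > 0.0 && * sendingRate > bandwidthLimit)
        * sendingRate = bandwidthLimit;
}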

So, do you guys think this is good, or would it not work? Gimme
feedback!

Lee



