[ENet-discuss] A unified rate-based throttle for reliable and
bruce at cubik.org
Fri Apr 2 00:27:49 PST 2004
Much of this isn't in my area of experience. So, I have no real basis
on which to make a judgement as to whether or not this will work.
I do have other comments though. :)
If something like this goes into place, it appears to me that having
some visualization/logging tools will be that much more important for
ENet in the near-term future.
If I'm understanding this correctly, sending reliable data has the
potential to choke the flow of unreliable data. It seems likely, then,
that some way of throttling the flow of reliable data into the ENet
layer may be needed.
But how would you know that something like that was happening? It seems
highly useful to be able to log, either to a file for later visualization
or to a socket for visualization in realtime, the current sending rate,
the current budget, how much of that budget is going to reliable data,
how much to unreliable data, and so on.
(It'd be even nicer if integrated with a level above ENet to flag the
types of reliable/unreliable data being sent to see what is happening at
an even finer-grained level.)
I may take it upon myself to produce a small C library to produce this
sort of trace data that could be used by other things. I know I have
enough projects that could use that type of thing. If I do, I'll
probably end up doing a UI for it in Eclipse.
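To make the idea concrete, here's a rough sketch of what such a trace
library might record. Everything here is my own invention (the struct,
field names, and CSV format are hypothetical, not part of ENet):

```c
#include <stdio.h>

/* Hypothetical throttle-trace record, one per send cycle.
 * All names are mine, not ENet identifiers. */
typedef struct ThrottleTrace {
    double timestamp;        /* seconds since trace start */
    unsigned sendingRate;    /* current sending rate, bytes/sec */
    unsigned budget;         /* data budget for this cycle, bytes */
    unsigned reliableSent;   /* bytes of reliable data sent */
    unsigned unreliableSent; /* bytes of unreliable data sent */
    unsigned dropped;        /* bytes of unreliable data dropped */
} ThrottleTrace;

/* Append one record as a CSV line; the stream could be a file for
 * later visualization or a socket for realtime viewing. */
static void trace_log(FILE *out, const ThrottleTrace *t)
{
    fprintf(out, "%.3f,%u,%u,%u,%u,%u\n",
            t->timestamp, t->sendingRate, t->budget,
            t->reliableSent, t->unreliableSent, t->dropped);
}
```

A level above ENet could add its own columns for the types of
reliable/unreliable data being sent.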
Lee Salzman wrote:
> I've been giving some thought to trying to make the throttling
> mechanisms more holistic, and particularly more sane for the case of
> reliable data. Currently the windowing mechanism and the packet dropping
> mechanism sort-of work together, but in an almost antagonistic way.
> So I propose the following throttling mechanism in place of what's
> there, that would both get rid of the reliable packet window and replace
> the two throttling mechanisms with one integrated scheme:
> The main variable of the throttle will be the current sending rate. This
> is the rate, in bytes per second, which at most data should be sent over
> a connection. The rate ultimately floats up and down in response to
> network conditions. Regardless of throttling reliable or unreliable data,
> this sending rate is always obeyed.
> If bandwidth limits for the connection are specified, we then have at
> our disposal a maximum throughput that ENet should not exceed. Since
> this is only optionally specified, it may not be counted upon but can be
> taken advantage of to set an upper limit on the current sending rate.
> Whenever we need to send out data (on a call to either enet_host_flush or
> enet_host_service), we determine a data budget. This data budget is
> defined as the time since data was last sent multiplied by the current
> sending rate, capped at the amount of data sendable in one
> second (or some fraction thereof), so:
> budget = min(timeSinceLastSent, 1) * sendingRate.
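If I'm reading that right, the budget computation would look something
like this sketch (names are mine, not actual ENet code):

```c
/* Sketch of the proposed data-budget computation.
 * budget = min(timeSinceLastSent, 1) * sendingRate
 * Identifiers are hypothetical, not ENet's. */
static unsigned compute_budget(double timeSinceLastSent, unsigned sendingRate)
{
    /* Cap at one second's worth of data. */
    double t = timeSinceLastSent < 1.0 ? timeSinceLastSent : 1.0;
    return (unsigned)(t * sendingRate);
}
```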
> For reliable packets, we send out as many packets as possible within
> this budget. Any remaining data is simply stalled there until the budget
> is large enough to send.
> For unreliable packets, minimal sending latency is much more
> important than whether the packets arrive, since ENet assumes a high
> sending rate. To accomplish this, ALL backlogged unreliable
> packets are sent out subject to packet dropping. To compute the
> probability that a packet should be dropped, we consider the TOTAL
> amount of backlogged data (both reliable and unreliable) in excess of
> the data budget. The probability of dropping a byte in a packet is then:
> dropProbability = max(backloggedData - dataBudget, 0)
> / backloggedData
> The probability of dropping an individual packet is its size in bytes
> multiplied by this probability.
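As I understand the drop rule, it might be sketched like this. The
names are mine, and I've added a cap at 1.0 for the per-packet
probability, which the proposal doesn't spell out:

```c
#include <stdlib.h>

/* Sketch of the proposed per-byte drop probability:
 * dropProbability = max(backloggedData - dataBudget, 0) / backloggedData
 * All identifiers are hypothetical, not ENet's. */
static double per_byte_drop_probability(unsigned backloggedData, unsigned dataBudget)
{
    if (backloggedData == 0 || backloggedData <= dataBudget)
        return 0.0;
    return (double)(backloggedData - dataBudget) / backloggedData;
}

/* An individual packet's drop probability is its size in bytes times
 * the per-byte probability; capped at 1.0 here, since the product can
 * exceed 1 for large packets (my assumption, not in the proposal). */
static int should_drop_packet(unsigned packetSize, double perByteProbability)
{
    double p = packetSize * perByteProbability;
    if (p > 1.0)
        p = 1.0;
    return (double)rand() / RAND_MAX < p;
}
```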
> The criteria for floating the sending rate up are the same as currently,
> that being "good" RTTs. The criteria for dropping it down are "bad"
> RTTs or dropped packets.
> So, do you guys think this is good, or would it not work? Gimme