[ENet-discuss] ENet ain't what it could be, Was: ENET_EVENT_TYPE_DISCONNECT with peer address == 0 + additional questions

Intripoon intripoon at gmx.de
Wed Jul 5 03:58:44 PDT 2006


Hi !

I think the real solution for the disconnection waiting issue, without going
into detailed reasons, is to wait until all outgoing packets have been sent
and, while waiting, ignore all incoming packets. Then disconnect. But that's
easy to add on top of ENet. I just wanted to make sure the current behaviour
is how ENet is intended to behave before I implement something that might
already be meant to be there.
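Roughly, the "drain, ignore, then disconnect" idea could look like the sketch
below. The signatures follow the released ENet API (enet_host_service,
enet_peer_disconnect, enet_packet_destroy) and may differ slightly from the
CVS version under discussion; peer_has_queued_output() is a hypothetical,
application-provided helper, e.g. driven by application-level
acknowledgements, since ENet does not expose its internal queues for this.

#include <enet/enet.h>

/* Hypothetical, application-provided: returns non-zero while packets we sent
 * have not yet been acknowledged at the application level. */
extern int peer_has_queued_output(ENetPeer *peer);

static void drain_and_disconnect(ENetHost *host, ENetPeer *peer)
{
    ENetEvent event;

    /* Keep pumping the host until our own bookkeeping says everything we
     * queued has gone out, discarding any packets that arrive meanwhile. */
    while (peer_has_queued_output(peer))
    {
        if (enet_host_service(host, &event, 100) > 0 &&
            event.type == ENET_EVENT_TYPE_RECEIVE)
            enet_packet_destroy(event.packet);   /* ignore incoming data */
    }

    enet_peer_disconnect(peer, 0);

    /* Service a little longer so the disconnect request actually gets sent
     * and acknowledged before the host is torn down. */
    while (enet_host_service(host, &event, 1000) > 0)
    {
        if (event.type == ENET_EVENT_TYPE_RECEIVE)
            enet_packet_destroy(event.packet);
        else if (event.type == ENET_EVENT_TYPE_DISCONNECT)
            break;
    }
}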

I ran a few more tests with more than one peer: while one large send between
two peers blocks all channels, sending packets to other peers in the meantime
still works. The blocking issue could also be dealt with on top of ENet: just
split large packets into smaller ones, put them into a priority queue, and
merge them back correctly on receive. Something like that is probably done by
ENet anyway, so this might be called a bad workaround. But if adding it to
ENet would break things because of the extra complexity, I'd prefer the
"workaround".

While you say ENet is not optimized for lots of reliable data, in my tests it
still performs better than other libraries. For example, sending 10 MB of
data reliably with ENet over a 100 Mbit LAN transfers it at around 6 MB/s.
The overhead is around 1 MB, so about 11 MB actually went over the wire;
whether that comes from resent data or just UDP overhead, I don't know.
Trying the same with RakNet gives me 250 kB/s and an overhead of 3 MB...

Best regards
	Marc




-----Original Message-----
From: enet-discuss-bounces at cubik.org [mailto:enet-discuss-bounces at cubik.org]
On behalf of Lee Salzman
Sent: Wednesday, July 5, 2006 01:13
To: aspirin at ntlworld.com; Discussion of the ENet library
Subject: Re: [ENet-discuss] ENet ain't what it could be, Was:
ENET_EVENT_TYPE_DISCONNECT with peer address == 0 + additional questions

Adam D. Moss wrote:
> 
> You have to implement an application-level reliable acknowledgement of 
> disconnect.  Not ideal IMHO, but that's the way it is.
> 

Feel free to suggest some sensible implementation of synchronous disconnect
that takes into account the queued outgoing packets in channels, the incoming
packets that arrive while those remaining packets are sent out, and any other
new packets the host sends out after issuing the disconnect request.

The more I thought about it, the less I could find a satisfying way of doing
it. Say, while the remaining outgoing packets are being sent out, you get
new incoming packets. Do you deliver those? What if they require a response
from the host that just issued the disconnect in the first place? Do you
queue those new outgoing responses too, or just drop them?
Hell, what if there's a huge amount of outgoing reliable packets queued up,
and the disconnect only happens an extremely long time later? Too many "what
if"s. So, for now, I've just given up and stuck with the asynchronous
disconnect.

> 
> I'd say that your expectation seems reasonable, but not something that 
> could be relied upon - really these ARE sequentially being shot to the 
> same port no matter how much ENet gives a multiplexed view of it, so 
> there's a hidden queue for various reasons.  Hm.
> 
> --adam

Channels are there to combat the case of the resend mechanism blocking stuff
from getting sent; they don't really address saturation issues. So, if on
channel 0 you have one reliable packet that gets dropped over and over, then
rather than everything idling until that packet gets through, stuff on
channel 1 goes through unimpeded. But if you've got a never-ending stream of
data to send on channel 0, all bets are off.
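To illustrate the intended usage with the standard send API: the channel
numbers and the split below (bulk reliable data on channel 0, small state
updates on channel 1) are just example choices, not anything ENet prescribes.

#include <enet/enet.h>

static void send_split_across_channels(ENetPeer *peer,
                                       const void *bulkData, size_t bulkLength,
                                       const void *stateData, size_t stateLength)
{
    /* Channel 0: a big reliable transfer; a dropped fragment here causes
     * resends that stall only this channel. */
    ENetPacket *bulk = enet_packet_create(bulkData, bulkLength,
                                          ENET_PACKET_FLAG_RELIABLE);
    enet_peer_send(peer, 0, bulk);

    /* Channel 1: frequent unreliable updates keep flowing even while
     * channel 0 is waiting on a retransmission. */
    ENetPacket *update = enet_packet_create(stateData, stateLength, 0);
    enet_peer_send(peer, 1, update);
}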

Currently there is one outgoing queue for reliable packets and one for
unreliable packets, with the reliable packets taking priority. I guess my
original reason for this is that reliable packets can time out and also
control the throttling, so I wanted to make sure they got sent out ASAP.
That, and ENet is mostly made for shuttling around unreliable stuff, with the
occasional thing you can't afford to lose in transit. I hate to admit it, but
I didn't architect it well enough for lots of reliable data.
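As a toy model of why that starves unreliable traffic (this is just the
described policy in miniature, not ENet's actual code): give each flush a
fixed send budget, spend it on the reliable queue first, and only hand
whatever is left to the unreliable queue.

#include <stdio.h>

int main(void)
{
    int reliable = 6, unreliable = 3;   /* packets waiting in each queue */
    int flush;

    for (flush = 1; reliable + unreliable > 0; ++flush)
    {
        int budget = 2;                 /* sends allowed per flush */

        /* Reliable packets go first... */
        while (budget > 0 && reliable > 0) { --reliable; --budget; }

        /* ...unreliable packets only get the leftovers, if any. */
        while (budget > 0 && unreliable > 0) { --unreliable; --budget; }

        printf("flush %d: reliable left = %d, unreliable left = %d\n",
               flush, reliable, unreliable);
    }
    return 0;
}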

To fix this and the other issues people have noted lately would require a
non-sucky packet throttle (which I still have not figured out yet) that:
    1) treats reliable and unreliable packets uniformly
    1.1) merges the outgoing queues
    1.2) includes some new delivery options for reliable packets, like quickest delivery
    2) supports QoS for packets/channels/peers (time-to-live, priorities, subsumption of old packets by newer ones, etc.)
    2.1) maybe gets rid of the random unreliable dropping in favor of better QoS, or makes it a specific QoS policy
    3) better estimates and limits the available bandwidth automatically
    3.1) uses the estimated bandwidth to control the outgoing rate

Again, if anyone has any grand ideas for this stuff, and even better,
outlines of how to implement them, please feel free to suggest them.

Now, on a brighter note, I do have some protocol improvements that reduce
bandwidth usage, which I may put into CVS soon, since they seem to be working
well enough in Sauerbraten.

Lee
_______________________________________________
ENet-discuss mailing list
ENet-discuss at cubik.org
http://lists.cubik.org/mailman/listinfo/enet-discuss



