[ENet-discuss] enet comments

Brian Hook enet-discuss@lists.puremagic.com
Fri, 7 Mar 2003 16:17:15 -0800


>The typedefs I've already fixed in the latest source tree. However,
>as for poshlib, ENet is a relatively stand-alone library. It has no
>external dependencies whatsoever. I'd almost rather keep it that
>way.

No problem there.  posh.h, however, is public domain, so you can just 
"bring it with you" without requiring someone to download a new 
distro.

>I'm still not entirely sold on this point. I think that's a pretty
>sloppy development practice

That, unfortunately, is how a lot of otherwise successful software is 
developed.  Torque is a monolithic build system, for example -- no 
libs or anything.  Just one huge ass project with a bunch of files.

>Similarly to the problem you describe with clashing object files,
>include files can clash as well. Ensuring the include files are
>properly isolated in a directory isolates this problem. If nothing
>else, "enet/enet.h" would be better than "enet.h" for usage within
>the source code of ENet itself, so someone else's "win32.h" doesn't
>get accidentally included.

I'm all for that, I just don't like the angle brackets and their 
associated requirements.  In fact, all my code uses relative paths.  
So if I were writing enet, I'd probably do something like:

#include "include/enet.h"

That won't have a conflict.  And someone using your library would be 
able to do:

#include "<absolute path>/enet/include/enet.h"

if they wanted.  Once again, this is how most stuff I've used works 
-- the user is responsible for finding the header file they need 
however they want, but the project itself uses relative paths to 
find its own local header files.

>As I described above, I think this is a REALLY sloppy and abusive
>practice that really shouldn't work at all and really shouldn't be
>catered to.

Pragmatism vs. idealism, your call though =)

>I have no problems with allowing user-specifiable allocation and
>deallocation callbacks. However, I don't think ENet needs to go
>through great pains internally to manage reuse. A good malloc, and
>even mediocre ones, will pool like-sized objects together and try to
>reuse from these pools as often as possible.

Unfortunately, not all mallocs are good or even mediocre.  In 
addition, on embedded-style systems there may be more stringent 
memory requirements.  Anyone who has ported their software to 
Palm or GBA will be familiar with this type of issue.
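Such callbacks might look something like this -- a sketch with 
illustrative names, not ENet's actual interface:

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical allocator-callback scheme (names are illustrative,
   not ENet's actual API): the host application installs its own
   malloc/free -- say, a pooled or tracked allocator -- before the
   library allocates anything. */
typedef struct {
    void *(*malloc_fn)(size_t size);
    void  (*free_fn)(void *ptr);
} net_allocator;

/* Defaults to the C runtime allocator. */
static net_allocator g_allocator = { malloc, free };

/* Install user callbacks; pass NULL to keep the current one. */
void net_set_allocator(void *(*malloc_fn)(size_t), void (*free_fn)(void *))
{
    if (malloc_fn) g_allocator.malloc_fn = malloc_fn;
    if (free_fn)   g_allocator.free_fn = free_fn;
}

/* All internal allocations route through these two entry points. */
void *net_malloc(size_t size) { return g_allocator.malloc_fn(size); }
void  net_free(void *ptr)     { g_allocator.free_fn(ptr); }
```

An overloaded operator new, a pooled allocator, or a tracking 
wrapper all plug in the same way, without the library caring.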

>most people underestimate the efficiency of modern malloc
>implementations.

There you go again, assuming "modern" =)

Keep in mind I'm coming from the point of view of having a server 
brought to its knees because of fragmentation.  The performance 
profiles you're seeing may not scale all the way up to thousands of 
simultaneous clients with up-times of 3 weeks. 

Loss of available memory because of address space fragmentation may 
be only 5% a day, but after 3 weeks, you're pretty much hosed.  Part 
of this also depends on the interleaving of allocations between the 
app and enet.

Realistically, it's probably not a problem for 95% of apps, but for 
that last group, it's a significant issue.

Keep in mind that some developers may have an overloaded operator new 
that they want to be used instead because they do memory 
characteristic tracking, etc.

That said, they can trivially replace memory.c, so no worries.

>The only real barrier to this is how ENet iterates over
>all clients looking for packets to send out and dispatch in some
>cases.
>So, as we had discussed earlier, I may just add in either a single
>active peer list or a few separate queues for peers with packets to
>send or dispatch.

This may not even be necessary.  A linear search over 1000 clients 
every 50-100ms isn't going to hose things that badly, even if those 
clients are empty.

>In ENet, there's a certain amount of bidirectionality implied in
>even the handshakes that take place to implement various protocol
>commands. So there was no reason to not make connections
>bidirectional, since I had to implement it for the protocol. I also
>think that the bidirectional connections, with multiple packet
>channels, just offers a simpler interface.

So ENet will require users to modify their NAT/firewall settings no 
matter what?  If so, then while I agree there's a certain amount of 
architectural elegance to that, for mass-market applications that's 
pretty much a deal killer.

It may not be so difficult to just store their originating address 
and update it (to handle shifting ports) as new packets arrive.  
That way the client never has to open a port locally, and I don't 
think this would require that big a change.
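That address-tracking idea could look roughly like this -- a sketch, 
not ENet code, assuming peers are matched by some session id carried 
in the packet rather than by source address:

```c
#include <arpa/inet.h>
#include <netinet/in.h>

/* Illustrative sketch: track each peer by its last-seen source
   address so a NAT that remaps the peer's public port doesn't
   break the session. */
typedef struct {
    unsigned int       session_id;  /* carried in each packet */
    struct sockaddr_in address;     /* last address we heard from */
} peer;

/* Called for every incoming datagram: refresh the stored return
   address if the NAT has shifted the peer's public endpoint.
   Returns 1 if the address changed, 0 if it was already current. */
int peer_update_address(peer *p, const struct sockaddr_in *from)
{
    if (p->address.sin_addr.s_addr != from->sin_addr.s_addr ||
        p->address.sin_port       != from->sin_port) {
        p->address = *from;
        return 1;
    }
    return 0;
}
```

Replies then go to whatever address the last packet arrived from, so 
the client side never needs an explicitly opened port.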

>The problem with that is then every person needs to be an expert on
>DoS attacks. I think it's better to encapsulate this expertise in
>the library itself, and just let the developer write games.

That's reasonable.

>Currently this can be exploited in ENet, insofar as there is a
>maximum limit on clients. You could spam a lot of connect packets
>and fill up all the connection slots. However, other than preventing
>new connections for a few minutes, it shouldn't harm anything
>currently.

Preventing new connections for a few minutes is a pretty major DoS =)

>clients to connect. Combine with just preventing obscene numbers of
>reconnects within a certain time frame from a given host, and you
>could effectively prevent this attack.

This may actually be the easiest thing to do -- just limit the number 
of connection attempts from a specific host and make this 
configurable with a #define.  It needs to be reasonably high since 
you can have a situation where large numbers of players are behind a 
single NAT (cybercafes, LAN parties, etc.), but there's still a huge 
difference between that and 1000 connect requests in a minute.
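A sketch of such a limit, with a single-slot table for brevity (a 
real implementation would key a hash table by host address; the 
names and the #define value are illustrative):

```c
#include <time.h>

/* Hypothetical per-host connect throttle.  The limit needs to be
   high enough for many players behind one NAT (cybercafes, LAN
   parties), but low enough to shrug off a connect flood. */
#define MAX_CONNECTS_PER_MINUTE 64

typedef struct {
    unsigned int host;          /* peer IP address */
    time_t       window_start;  /* start of the current 60s window */
    int          attempts;      /* connects seen in this window */
} connect_throttle;

/* Returns 1 if a connect attempt from this host should be accepted,
   0 if the host has exceeded its budget for the current window. */
int throttle_allow(connect_throttle *t, unsigned int host, time_t now)
{
    if (t->host != host || now - t->window_start >= 60) {
        t->host = host;          /* new host or new window: reset */
        t->window_start = now;
        t->attempts = 0;
    }
    return ++t->attempts <= MAX_CONNECTS_PER_MINUTE;
}
```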

>Alternatively, you could just employ this scheme on the connection
>structures themselves, and not bother with distinguishing between
>potential and established connections.

I haven't studied your handshake protocol, so "potential" vs. 
"established" doesn't make much sense to me (since my own library 
doesn't even have the concept of a connection, it just has the 
concept of a validated client).

>Delta compression has always intrigued me in that a lot of things
>seem to use it, but I've never been sure on what kind of potential
>space savings it would really offer.

Delta compression provided in a generic layer will never, ever be as 
optimal as delta compression that leverages application-specific 
knowledge.  BUT, a generic delta compression can give you a large 
part of the benefit, and the API is trivially simple -- compress 
packet A using packet B as its reference, and decompress packet A 
using packet B as its reference.  Do it in 4-byte chunks.
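A minimal sketch of that, assuming XOR as the delta operator (one 
common choice; subtraction works too) and equal-length packets whose 
size is a multiple of 4:

```c
#include <stddef.h>
#include <stdint.h>

/* Generic delta compression in 4-byte chunks: XOR the packet
   against a reference packet so unchanged regions become runs of
   zero, which a cheap RLE or entropy pass then squeezes out.
   Because XOR is its own inverse, decompression is the identical
   call with the same reference. */
void delta_xor(uint32_t *packet, const uint32_t *reference, size_t words)
{
    size_t i;
    for (i = 0; i < words; i++)
        packet[i] ^= reference[i];  /* identical chunks -> 0 */
}
```

Applying it once deltas the packet; applying it again against the 
same reference restores the original bytes.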

You can even make this an optional send flag.

It's trivial to write -- maybe 30 minutes -- and it can give a really 
significant boost in performance for almost no pain.  The ideal 
situation is an FPS where the user has run over to a corner and is 
just sitting there.  Their state will not change at all, so the 
information that comes over will be delta compressed to 0 -- then the 
payload will be primarily the ENet header and UDP header.

On the flip side, the server updates can now be entire snapshots of 
game state, which would normally be far too large to send raw, but 
if you can rely on some form of delta compression, it should 
"just work". 

This is something that can easily be done at the app level, but it 
can also be hidden in the net layer as well.  Storing the delta 
reference frames can lead to some bookkeeping grossness (having to 
store N prior frames, etc.) so I still leave it at the app layer.
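That bookkeeping can amount to a small ring buffer of recent frames 
keyed by sequence number -- a sketch with illustrative sizes and 
names:

```c
#include <stdint.h>
#include <string.h>

/* Keep the last N sent frames so a delta can be built against
   whichever frame the receiver last acknowledged.  A frame older
   than FRAME_HISTORY sequence numbers has been overwritten. */
#define FRAME_HISTORY 8
#define FRAME_BYTES   256

typedef struct {
    uint32_t sequence[FRAME_HISTORY];
    uint8_t  data[FRAME_HISTORY][FRAME_BYTES];
} frame_history;

/* Record a sent frame in its ring-buffer slot. */
void history_store(frame_history *h, uint32_t seq, const uint8_t *frame)
{
    unsigned slot = seq % FRAME_HISTORY;
    h->sequence[slot] = seq;
    memcpy(h->data[slot], frame, FRAME_BYTES);
}

/* Return the stored frame for seq, or NULL if it was overwritten
   by a newer frame that landed in the same slot. */
const uint8_t *history_lookup(const frame_history *h, uint32_t seq)
{
    unsigned slot = seq % FRAME_HISTORY;
    return h->sequence[slot] == seq ? h->data[slot] : NULL;
}
```

A NULL lookup means the reference is gone and the sender has to fall 
back to a full (non-delta) frame -- which is exactly the grossness 
being described.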

Brian