[vworld-tech] Modern MUD server design

Brian Hook hook_l at pyrogon.com
Thu Jan 22 14:55:48 PST 2004


> Of course, these are not do-it-yourself packages, they often
> include kernel modules and involve specific hardware
> configurations. Not something you could - for example - just put on
> your home PC.

I'm thinking of something pretty ghetto, admittedly =)

The kernel itself will act as the intermediate cache, which will speed 
up the most common queries, but obviously this may not scale as well.  
Without an idea of the number of mobs, rooms, players, etc. it's hard 
to get a grip on what is going to be overkill, adequate or woefully 
unacceptable.  I'm mostly trying to get a good idea of what I need to 
worry about in a year.
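As a rough sketch of what I mean by the kernel acting as the cache -- a read-through cache sitting in front of the DB, where `fetch_from_db` is a hypothetical stand-in for whatever the real query layer ends up being:

```python
class ReadThroughCache:
    """Keep the hottest rows (rooms, item stats, etc.) inside the kernel."""

    def __init__(self, fetch_from_db):
        self._fetch = fetch_from_db   # hypothetical backing query function
        self._cache = {}

    def get(self, key):
        # Common queries are served from memory; only misses hit the DB.
        if key not in self._cache:
            self._cache[key] = self._fetch(key)
        return self._cache[key]

    def invalidate(self, key):
        # Must be called whenever the underlying row changes.
        self._cache.pop(key, None)
```

Obviously the hard part isn't the cache, it's the invalidation when the world state changes out from under you.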

Then for the DB I was thinking of having two parallel mirroring 
servers running that occasionally sync up with each other; in the 
event that the kernel loses its connection to the primary, it 
automatically kicks over to the secondary while periodically checking 
on the status of the primary.
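The kick-over logic itself is pretty simple -- something like this sketch, where `query` and `ping` are hypothetical stand-ins for whatever the client library actually exposes:

```python
class FailoverDB:
    """Route queries to the primary; kick over to the secondary if it dies."""

    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary
        self.on_primary = True

    def query(self, sql):
        db = self.primary if self.on_primary else self.secondary
        try:
            return db.query(sql)
        except ConnectionError:
            if not self.on_primary:
                raise            # both down -- nothing left to kick over to
            self.on_primary = False
            return self.secondary.query(sql)

    def check_primary(self):
        # Called occasionally (e.g. off a timer) to see if the primary is back.
        if not self.on_primary and self.primary.ping():
            self.on_primary = True
```

The ugly part this glosses over is re-syncing the primary before you fail back to it.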

I don't know enough about the MySQL client library and libpq to 
understand how good or bad this will be (e.g. if all their calls are 
blocking, that's going to be an issue).

In fact, this may be a situation where I'll require separate threads 
and queued queries, so that I don't have any halts just because some 
subsystem wanted to see how much a steel helm weighs, the DB took a 
dump, and now we're stuck waiting for the API to figure out that the 
other system isn't just a little busy, it's actually dead.
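The threads-plus-queue idea is basically this -- a worker thread drains a queue of queries so the main loop never blocks on the DB (again a sketch, with `execute` standing in for the real blocking query call):

```python
import queue
import threading

class AsyncQueryRunner:
    """Run DB queries on a worker thread so the main loop never stalls."""

    def __init__(self, execute):
        self._execute = execute          # hypothetical blocking query function
        self._pending = queue.Queue()
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, sql, callback):
        # Returns immediately; the callback fires once the result is in.
        self._pending.put((sql, callback))

    def _run(self):
        while True:
            sql, callback = self._pending.get()
            if sql is None:
                break                    # shutdown sentinel
            callback(self._execute(sql)) # only the worker ever blocks here

    def shutdown(self):
        self._pending.put((None, None))
        self._worker.join()
```

If the DB is actually dead, only the worker sits there waiting for the timeout; the steel-helm subsystem just doesn't get its answer this tick.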

But that can be retrofitted later pretty easily.

> Sorry if I can't help more - if you do find any good technical
> papers that explain which system stats, hardware and software, as
> well as specific network topology and the sort, please share.  I'm
> keenly interested in how some of the big boys work behind the
> scenes.

I'm more interested in how you can achieve this on the low end.  Being 
able to construct a "mostly" redundant, fail-safe MUD server for a 
few thousand bucks would be pretty cool.  At the very least you have 
to have redundant hardware, software to automatically switch over on 
failure, and regular backups/mirroring from one DB to another (or, if 
you want to get hardcore, just run two servers in parallel like 
mirrored hard drives).

Brian




