11 Aug, 2009, Silenus wrote in the 1st comment:
Votes: 0
Hi all,

I have been looking at the possibility of extending my server to support concurrency, and perhaps distribution of some sort, and I am wondering how best to understand the issues involved in this kind of programming. I have only done a limited amount of concurrent programming and no distributed programming at all. Specifically, I am wondering whether an Erlang-style message-passing model could be incorporated into the language design, and whether something like that is general-purpose enough for a MUD programming language.
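
A minimal sketch of the Erlang-style idea being asked about, in Java (all names here are hypothetical): each actor owns its own state, runs on its own thread, and is affected only by messages posted to its mailbox, so the state itself never needs locking.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Each actor's state is touched only by its own thread; other code
    // influences it solely by posting messages to the mailbox.
    class RoomActor implements Runnable {
        private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();
        private int playersPresent = 0;   // private state, never shared directly

        void post(String message) { mailbox.offer(message); }

        public void run() {
            try {
                while (true) {
                    String msg = mailbox.take();          // blocks until a message arrives
                    if (msg.equals("enter")) playersPresent++;
                    else if (msg.equals("leave")) playersPresent--;
                    else if (msg.equals("stop")) {
                        System.out.println("players present: " + playersPresent);
                        return;
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public class ActorDemo {
        public static void main(String[] args) throws InterruptedException {
            RoomActor room = new RoomActor();
            Thread t = new Thread(room);
            t.start();
            room.post("enter");
            room.post("enter");
            room.post("leave");
            room.post("stop");
            t.join();
        }
    }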
11 Aug, 2009, quixadhal wrote in the 2nd comment:
Votes: 0
It's extremely difficult to distribute processing in a game system that's as tightly interconnected as a MUD. Because so many of the actors require access to so much of the world data, you usually spend so much time acquiring locks that most of the gains from concurrency are lost.

In general, you can win if you put things like I/O into one or more threads, and sometimes things like world events that don't directly touch players (such as having NPCs decide where to migrate, or updating weather systems). But the core systems for combat, player interaction, etc… all depend on lots of data being at their fingertips AND on things happening in the right order (which means spinlocks).
11 Aug, 2009, Tyche wrote in the 3rd comment:
Votes: 0
11 Aug, 2009, flumpy wrote in the 4th comment:
Votes: 0
Lots of truth there.

Programming in a multi threaded way is like driving on the motorway. You (in your car) know what you are doing, but if you don't look in your wing mirrors when changing lanes everything can go very wrong.

That is to say, think about the problem domain first. Threads are only useful when you want two things to happen concurrently (strangely enough). There is no inherent speed benefit from using a thread, and it only complicates your code if you use one needlessly.
11 Aug, 2009, Silenus wrote in the 5th comment:
Votes: 0
Well, I am well aware of these issues; however, on a modern multicore machine it is likely you will see a certain amount of speedup if you partition the problem the right way. The difficulty, as noted, is that in the design of most MUDs there is no clear partitioning of the game state, but I speculate this could be overcome to some extent. The question is really about how to design a language that supports concurrency for MUDs; how and when those features are used is up to the developer.
11 Aug, 2009, David Haley wrote in the 6th comment:
Votes: 0
You're asking several very different questions here.

1. When is concurrency appropriate?
2. When is concurrency appropriate for MUDs specifically?
3. How does one design a language to make concurrency possible or easy?
4. How does one learn to program effectively in a concurrent environment?

It would help if we could narrow down what we're talking about, at the least to treat each issue in turn.

I would recommend reading this thread from MB as well.

My gut instinct is the same as what I said over there: be very, very wary of this. The benefits are likely to be far less than you appear to think, and the approaches that do yield benefits are likely to be so complicated that the resulting system is very difficult to use. There are some places where concurrency can give you immense gains; I/O handling is one of these, as you do not want your whole system blocked while waiting on a socket, for example. But note that this gain isn't actually in terms of how much stuff you can get done, but in terms of not sitting around waiting!

MUDs are simply not a very concurrent environment. Put another way, they are very statefully interconnected. Almost everything has side effects, potentially on a very large number of other things.

The words you use in your post indicate to me (frankly, and no offense) that you don't really know what you want to do at all, and that you just saw this concept and thought it would be cool but without really understanding it. You say things like "likely", "certain amount", "speculate", etc. My recommendation would be to first understand concurrency in general and when it helps you, and then you will be better able to evaluate whether it would be appropriate here. Based on my experience with MUDs, my very strong suspicion is that except for certain very specific things, the complication introduced by concurrency far outweighs performance benefits.
11 Aug, 2009, Silenus wrote in the 7th comment:
Votes: 0
Well, I tend to use coding as a vehicle for understanding issues with respect to code. For example, my whole excursion into building a compiler system to run MUD code has been quite illuminating; I feel I know a fair deal more than when I started.

For me this aspect of the project is similar. I haven't done much work with threads beyond some simple exercises in Java and, of course, some reading, like the first edition of Doug Lea's book on the subject. So you are quite correct that I don't really understand what I am dealing with, which is precisely why I am trying it.

Now, for me this is a platform support issue. It is somewhat difficult to speculate on how it may or may not be used, if it's used at all. The I/O system will most likely be asynchronous already, and certain other parts of the system may work on callbacks (i.e. recompile the following code and, when done, call me back).
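
That callback idea might be sketched as follows in Java; AsyncCompiler, CompileCallback, and doCompile are hypothetical names, not anything that exists yet. Note that the callback fires on the worker thread, so a real server would probably queue the result back to the main loop rather than act on it directly.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    interface CompileCallback {
        void onCompiled(String objectName, boolean success);
    }

    class AsyncCompiler {
        // One background worker, so the main game loop never blocks on compilation.
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        void compileAsync(final String objectName, final CompileCallback callback) {
            worker.submit(new Runnable() {
                public void run() {
                    boolean ok = doCompile(objectName);   // potentially slow work
                    callback.onCompiled(objectName, ok);  // "when done, call me back"
                }
            });
        }

        private boolean doCompile(String objectName) {
            return true;   // stand-in for the real compilation step
        }
    }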

As for your concern about MUDs being stateful in this way: sure (I think I stated this in my post). However, I have a sense that a lot of things are potentially localized and don't cause data cascades throughout the system. So I speculate it may be possible to isolate certain parts of the system from others and not rely on locks.

But I would first like an answer to 4) so I can better assess this.
11 Aug, 2009, David Haley wrote in the 8th comment:
Votes: 0
I don't really have a solid science for learning concurrent programming, and my understanding of the collective wisdom is that, generally speaking, it is far more of an art. The basic notion is in some sense extremely simple: given a sequence of computations where each depends on the previous computation's result, you cannot parallelize them. And if several concurrent computations access the same variable (where at least one is modifying it), then that access must be protected so that consistency is guaranteed.

Putting this relatively simple concept into practice is of course the tricky bit. There are all kinds of issues, ranging from simply forgetting to synchronize some particular access, to synchronizing things so aggressively that you end up in deadlocks.
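
To make that "at least one writer means the access must be protected" rule concrete, here is a toy Java sketch (GoldCounter is a hypothetical name). Remove the synchronized keywords and two threads can interleave the read-modify-write and silently lose updates.

    class GoldCounter {
        private int gold = 0;

        // 'gold += amount' is a read, an add, and a write; without synchronization
        // two threads can both read the old value and one update is lost.
        synchronized void award(int amount) { gold += amount; }

        synchronized int total() { return gold; }
    }

For example, two threads each calling award(1) a thousand times should always leave total() at 2000; with the synchronized keywords removed, it can come out lower.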

I do not think that implementing concurrency as a language feature is at all an appropriate way of learning concurrency. Before doing that, I would strongly recommend writing truly concurrent systems, ranging from relatively simple producer/consumer I/O machines to more complex things like (even simple) GUI applications with background processing. Then of course there are many paradigms for concurrency, from the straightforward providing of synchronization tools, to languages that do not provide explicit synchronization but also don't allow writing to variables in the normal way.
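
A minimal version of the producer/consumer exercise mentioned above might look like the following in Java (purely illustrative: the producer stands in for a socket reader, and a "poison pill" string shuts the consumer down).

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ProducerConsumerDemo {
        public static void main(String[] args) throws InterruptedException {
            final BlockingQueue<String> lines = new ArrayBlockingQueue<String>(64);

            // Producer: pretends to read lines from a socket and hands them off.
            Thread producer = new Thread(new Runnable() {
                public void run() {
                    try {
                        for (int i = 0; i < 10; i++) lines.put("input line " + i);
                        lines.put("QUIT");              // poison pill ends the consumer
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });

            // Consumer: the "game" side, processing one command at a time.
            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            String line = lines.take();
                            if (line.equals("QUIT")) return;
                            System.out.println("handled: " + line);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }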

Silenus said:
So I speculate it may be possible to isolate certain parts of the system from others and not rely on locks.

This is highly unlikely for anything terribly interesting.
11 Aug, 2009, Noplex wrote in the 9th comment:
Votes: 0
Ah, the amazing world of concurrency. When I was running a MUD, the ONLY reason we had to spin off another process was to handle DNS lookups. Everything else worked on a single thread of execution, and honestly, if I were to begin working on a new MUD engine today I do not think I'd even consider threads for anything other than offloading some database operations.
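
That DNS pattern might look roughly like the following in Java (names are hypothetical): the lookup, which can block for seconds, runs on a background thread and drops its result into a queue that the single-threaded main loop polls once per pulse.

    import java.net.InetAddress;
    import java.util.concurrent.ConcurrentLinkedQueue;

    class DnsResolver {
        // Completed lookups land here; the main loop calls results.poll() each pulse.
        final ConcurrentLinkedQueue<String[]> results = new ConcurrentLinkedQueue<String[]>();

        void lookupAsync(final String ip) {
            new Thread(new Runnable() {
                public void run() {
                    String host;
                    try {
                        host = InetAddress.getByName(ip).getCanonicalHostName(); // may block
                    } catch (Exception e) {
                        host = ip;   // fall back to the raw address
                    }
                    results.add(new String[] { ip, host });
                }
            }).start();
        }
    }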

I think the bigger issue is that as soon as you spawn a second thread of execution you need to take into consideration any shared memory between the two, and at that point you'll need to start using locks around any piece of code that may touch a shared memory structure. But I agree with what David (and others) have expressed above: I'm not sure that working on a MUD is the best way to learn concurrency.

An open source project that I have been working on for nearly a year is a spreadsheet application with multiple sources of input, and because of the nature of the project (having to read and index massive files) I have learned more about concurrency with C/C++ than I ever would have using only managed languages such as Java, C#, and Python.

My suggestion would be to look into concurrency and parallel programming with a managed language, learn all of the basics that you can, and, if you feel up to it, look into the Boost libraries (http://www.boost.org/doc/libs/1_39_0/doc...), which can be used with C++ so that you do not have to write wrappers around the pthread code.
12 Aug, 2009, Silenus wrote in the 10th comment:
Votes: 0
Well, I have considered writing a Go program that uses threads, but in a very primitive way. I am curious what other sorts of applications exploit concurrency and might be interesting to write. I have become reasonably comfortable with C++ in the last month or so, but I suspect I could go back to Java or something and play with a managed language. GUI stuff, however, really isn't my cup of tea :S.
12 Aug, 2009, quixadhal wrote in the 11th comment:
Votes: 0
From the small amount of parallel processing I did back in college (we had an nCube supercomputer), most of the problems that really benefitted from parallelism were ones that could be partitioned neatly, and that had fixed data sets. Consider one of the classic problems… pathfinding. It seems like this would benefit from parallel processing immensely, but there are some caveats that limit how useful it would really be in a MUD.

The first one that comes to mind is doors. Opening or closing (locked) doors will change the weights of a given path (possibly invalidating the path entirely). That means that as your tasks process sets of nodes, any nodes which contain doors have to be mutex-locked, so that the door state can't change until the whole set of paths has been evaluated. If your game has a significant number of doors, you may end up with quite a few locks to establish, and the time you spend waiting for all of that to click into place might be nearly as much as you were saving by splitting the problem up in the first place.
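
A rough Java sketch of the locking being described (Door and the costs are hypothetical): every door on a candidate path is locked before evaluation and released afterwards, which is exactly where the waiting creeps back in. The locks would also have to be taken in a fixed global order so that two overlapping pathfinding tasks cannot deadlock each other.

    import java.util.List;
    import java.util.concurrent.locks.ReentrantLock;

    class Door {
        final ReentrantLock lock = new ReentrantLock();
        volatile boolean open = true;
    }

    class PathEvaluator {
        // doorsOnPath is assumed to be sorted in a fixed global order (e.g. by id)
        // so that concurrent evaluations always acquire locks in the same order.
        int evaluate(List<Door> doorsOnPath) {
            for (Door d : doorsOnPath) d.lock.lock();
            try {
                int cost = 0;
                for (Door d : doorsOnPath) cost += d.open ? 1 : 1000;  // closed doors are expensive
                return cost;
            } finally {
                for (Door d : doorsOnPath) d.lock.unlock();
            }
        }
    }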

Then there's the issue of how you debug it once it breaks. Debugging a multi-threaded application is NOT FUN. Duplicating the conditions where the problem manifests becomes FAR more difficult, since it may only show up when things happen to run in the right order (no longer guaranteed, since tasks are independent).

If you think it's interesting enough, by all means… I'm not saying don't do it. Just be aware of the level of pain you're walking into, and don't be surprised if you don't get nearly the benefit you expected at the end of the day.
12 Aug, 2009, Silenus wrote in the 12th comment:
Votes: 0
I suspect the doors situation with pathfinding would be a bit tricky, not only because you might have to mutex the doors but because it is questionable whether an agent should have access to this information in the first place. That is, unless a door is in line of sight, the agent in question is unlikely to magically know whether it's locked. The agent might have to assess certain things about the situation in order to plan the correct course to take; if it has keys for some of the doors, for example, the pathfinding algorithm might need to take this into account. So mutexing the doors in this case might make little sense. Rather, you might have to associate some sort of probability with each door being open or closed, do a kind of success/fail analysis, and calculate the average distance required, or, if one wanted to be a bit more sophisticated, also indicate to the agent how to find the key.
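
As a small worked example of that average-distance idea (all numbers invented): if the route through the door is 10 rooms and the door is open with probability 0.7, while finding it shut after walking 4 rooms forces a 14-room detour, the expected cost is 0.7 × 10 + 0.3 × (4 + 14) = 12.4 rooms, which still beats a guaranteed 13-room doorless route, but only just.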
12 Aug, 2009, David Haley wrote in the 13th comment:
Votes: 0
A classic example of parallelization is search, yes. An easy way to apply this here without worrying about noise like doors is minimax search of some game tree. Write a simple tic-tac-toe player that uses parallel minimax search and see if you can get it working. An interesting twist on this is an agent that takes TCP connections and plays tic-tac-toe with people, applying minimax in each game (but without parallelizing individual games' minimax searches).
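
A rough sketch of that exercise in Java, parallelizing only at the root of the search so that each candidate first move is scored by its own task (everything below is illustrative and not taken from any existing codebase):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelTicTacToe {
        static final char EMPTY = ' ', X = 'X', O = 'O';

        // Score from X's point of view: +1 win for X, -1 win for O, 0 draw.
        static int minimax(char[] board, boolean xToMove) {
            Character w = winner(board);
            if (w != null) return w == X ? 1 : -1;
            boolean full = true;
            int best = xToMove ? -2 : 2;
            for (int i = 0; i < 9; i++) {
                if (board[i] != EMPTY) continue;
                full = false;
                board[i] = xToMove ? X : O;
                int score = minimax(board, !xToMove);
                board[i] = EMPTY;
                best = xToMove ? Math.max(best, score) : Math.min(best, score);
            }
            return full ? 0 : best;
        }

        static Character winner(char[] b) {
            int[][] lines = {{0,1,2},{3,4,5},{6,7,8},{0,3,6},{1,4,7},{2,5,8},{0,4,8},{2,4,6}};
            for (int[] l : lines)
                if (b[l[0]] != EMPTY && b[l[0]] == b[l[1]] && b[l[1]] == b[l[2]])
                    return b[l[0]];
            return null;
        }

        // Each legal first move for X is scored by a separate worker task.
        static int bestMoveForX(char[] board) throws Exception {
            ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
            try {
                List<Integer> moves = new ArrayList<Integer>();
                List<Future<Integer>> scores = new ArrayList<Future<Integer>>();
                for (int i = 0; i < 9; i++) {
                    if (board[i] != EMPTY) continue;
                    final char[] copy = board.clone();   // each task searches its own board
                    copy[i] = X;
                    moves.add(i);
                    scores.add(pool.submit(new Callable<Integer>() {
                        public Integer call() { return minimax(copy, false); }
                    }));
                }
                int bestMove = -1, bestScore = -2;
                for (int k = 0; k < moves.size(); k++) {
                    int s = scores.get(k).get();         // blocks until that subtree is done
                    if (s > bestScore) { bestScore = s; bestMove = moves.get(k); }
                }
                return bestMove;
            } finally {
                pool.shutdown();
            }
        }

        public static void main(String[] args) throws Exception {
            char[] empty = new char[9];
            java.util.Arrays.fill(empty, EMPTY);
            System.out.println("best opening move for X: " + bestMoveForX(empty));
        }
    }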
12 Aug, 2009, Silenus wrote in the 14th comment:
Votes: 0
I suspect the first exercise might be slightly more challenging to make efficient. I should probably look at how to implement something like reduction over trees efficiently when they are imbalanced; I suspect problems like this have already been solved. I assume this is an example of an online algorithm, since in principle one does not know the complete state of the system during the execution of the algorithm. What I think I will do is find a book on concurrency (I already have one on parallel algorithms, but it may be a bit too difficult for me), read through it, and try some of the exercises.

Parallel algorithms seem to be a pretty interesting area, so I will try to get through that other book, but I am uncertain how useful it is in most programming situations, since massively parallel techniques are usually applied to scientific applications (if CPU-bound) or perhaps information retrieval applications.
12 Aug, 2009, Erok wrote in the 15th comment:
Votes: 0
Noplex said:
Ah, the amazing world of concurrency. When I was running a MUD, the ONLY reason we had to spin off another process was to handle DNS lookups. Everything else worked on a single thread of execution, and honestly, if I were to begin working on a new MUD engine today I do not think I'd even consider threads for anything other than offloading some database operations.

I have the impression that more servers are taking socket I/O out of the main loop. I have background loading of zones as well, though I suppose that could be considered a database operation. I find that a thread pool of two threads handles this partitioned work most of the time, when simulating the activity of several users, and three makes it pretty much seamless. I just enjoy writing efficient code, so that was my motivation, but I agree that the typical server can get by with one thread, spawning a temporary thread whenever a DNS lookup is needed.
12 Aug, 2009, David Haley wrote in the 16th comment:
Votes: 0
If you use non-blocking sockets, I don't think you really get any efficiency gain if you do I/O in another thread – you might even lose some due to synchronization. Disk or database I/O is another story, of course.
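
For reference, the single-threaded non-blocking approach being described might look like this skeleton in Java NIO (the port number and structure are placeholders, and error handling is omitted):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // One thread, non-blocking sockets: select() reports which sockets are
    // ready, so the loop never sits blocked on any single read.
    public class SingleThreadedServer {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(4000));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buf = ByteBuffer.allocate(1024);
            while (true) {
                selector.select(100);   // wake at least every 100 ms for game pulses
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buf.clear();
                        int n = client.read(buf);       // returns immediately
                        if (n < 0) { key.cancel(); client.close(); }
                        // otherwise: hand buf's contents to the command parser here
                    }
                }
                // run pending game events here, as in any single-threaded pulse
            }
        }
    }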

Silenus: I think you should start simple, not with something complex like tree reduction… This homework assignment is perhaps a more reasonable set of problems to solve when getting familiar with threads. There are more handouts available at the course website, too.
12 Aug, 2009, Noplex wrote in the 17th comment:
Votes: 0
David Haley said:
If you use non-blocking sockets, I don't think you really get any efficiency gain if you do I/O in another thread – you might even lose some due to synchronization. Disk or database I/O is another story, of course.

Well, you gave the same answer I would have.

Those homework assignments give me something to do when I don't want to write web-code anymore, thanks.
12 Aug, 2009, Erok wrote in the 18th comment:
Votes: 0
It would be interesting to do some performance testing.

What I find appealing about handing off the I/O is that the main execution loop can be entirely event driven, whereas polling is required otherwise. The latter model usually results in game events being processed on the next polling interval instead of as they happen, although it's likely not noticeable in terms of game-play.
12 Aug, 2009, David Haley wrote in the 19th comment:
Votes: 0
Polling intervals can be essentially instantaneous; there's no need to poll only once every game tick, for example. In my single-threaded implementation of socket I/O, things are still event-based as far as the rest of the system is concerned. The difference is that the event handler exhausts pending events before moving on to the socket I/O, but really, I think that's more or less how a multi-threaded version would work as well, assuming a FIFO queue of events.
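
That loop might be sketched as follows in Java (names are hypothetical): drain every pending event in FIFO order, then do one round of non-blocking socket work, then repeat. Since everything runs on one thread, the queue itself needs no locking.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Single-threaded pulse: exhaust the event queue, then poll the sockets.
    class MainLoop {
        private final Queue<Runnable> events = new ArrayDeque<Runnable>();

        void post(Runnable event) { events.add(event); }

        void runOnePulse() {
            Runnable e;
            while ((e = events.poll()) != null) {
                e.run();            // handlers may post further events; they run this pulse too
            }
            pollSockets();          // non-blocking socket work, as in the earlier sketch
        }

        private void pollSockets() { /* select() / read, as shown above */ }
    }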
12 Aug, 2009, Erok wrote in the 20th comment:
Votes: 0
For a MUD, I agree polling is just fine, whether the interval is on the tick or some multiple of the tick.

I suppose it's my embedded development background that makes me view polling as less efficient than what is roughly analogous to interrupts.