23 Jul, 2009, donky wrote in the 21st comment:
Votes: 0
David Haley said:
Well, for one, you don't have to sit there managing the async calls; you don't have to register the call explicitly nor pay attention to when it's completed. The call gets launched implicitly, and the language handles blocking for you when you need its result.


While this sounds like a nice idea in theory, in practice I have found that I need to know when things will block, so that consistency checks can be made after the blocking.

One example that always sits at the back of my mind with regard to this is where I had deterministic combat working on the client and server. Given cooperative scheduling and knowledge of what should cause the current microthread to yield, the combat simulations should have run identically on both sides (due to random seed usage). However, some supposedly non-blocking operation, like starting a new microthread, mistakenly blocked the current one, and perhaps ran the newly started one first. As a result, the random number sequence was consumed for different purposes on each side, causing the simulations to drift apart.
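Just to illustrate the kind of drift I mean, here is a contrived Python sketch (the stray draw stands in for the extra random number that the unexpected yield ends up consuming on one side only):

import random

# Client and server seed their RNGs identically, so identical
# sequences of draws produce identical combat results.
client_rng = random.Random(12345)
server_rng = random.Random(12345)

def damage_roll(rng):
    return rng.randint(1, 20)

# On the server, an operation that was supposed to be non-blocking
# sneaks in an extra draw before the next combat step.
_stray = server_rng.randint(1, 20)

# From this point on the two simulations drift apart.
print(damage_roll(client_rng), damage_roll(server_rng))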

Now, the simple argument for handwaving this away would be to suggest distinct random seeds. But that misses the point: combat was simulated identically, for all ongoing actions against whatever actors, in the same order on both client and server. Something could only go wrong if something else was already broken in a way that needed fixing anyway.

An advantage of not having to write callback-based code, and instead being able to just block while other code runs in the meantime, is that the code can be written in a straightforward synchronous manner that is inherently readable. Readable, that is, in a way that code broken up and structured around callbacks is not. With preemptive scheduling, where blocking may happen at almost any point in the code, the programmer needs to take a lot of care to program defensively. Cooperative scheduling, however, lets the programmer know where blocking may occur, which reduces the defensive programming overhead. So where the cost with callbacks is the disjointed code, the cost with a threading solution is the defensive programming.
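For what it's worth, here is a minimal Python sketch of the two shapes I mean (the scheduler and combat names are made up, and a plain generator stands in for a microthread):

import random

def damage_roll():
    return random.randint(1, 8)

# Callback style: the combat logic ends up split across functions.
def attack_with_callback(scheduler, on_done):
    scheduler.append(lambda: on_done(damage_roll()))

def on_attack_done(damage):
    print("callback style dealt", damage)

# Microthread style: the code reads top to bottom, and the single
# yield is the only point where it can be suspended.
def attack_microthread():
    damage = yield
    print("microthread style dealt", damage)

scheduler = []
attack_with_callback(scheduler, on_attack_done)

micro = attack_microthread()
next(micro)                    # run the microthread up to its yield
for task in scheduler:         # run the deferred callback
    task()
try:
    micro.send(damage_roll())  # resume the microthread with its roll
except StopIteration:
    pass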

I wonder how much defensive programming would be needed in a game written using a language that worked this way. I'd speculate that it would be between the amount needed for preemptive scheduling and cooperative scheduling, but closer to the former than the latter.

Thoughts?
23 Jul, 2009, David Haley wrote in the 22nd comment:
Votes: 0
It's actually pretty easy to know when it will block: it will block if you try to read it and it's not done yet. You wouldn't really have to worry about blocking, unless the function is doing something that affects your code beyond producing a value. Basically, the idea is to have a function that produces a value, and evaluate it in parallel using a future. The idea is not to spawn off some computation that has all kinds of side effects and then pretend that you can ignore synchronization, because you can't. However, when you're evaluating something that is functional in nature (as in, has no side effects), you formally have no need for synchronization except at the point of consumption of the value, as you obviously cannot consume the value until it is ready.
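To make that concrete, here is a rough Python sketch using an explicit future (in the kind of language we're discussing the future would be implicit, but the blocking point is the same: the read of the result):

from concurrent.futures import ThreadPoolExecutor

def total_damage(rolls):
    # Purely functional: no side effects, just computes a value.
    return sum(r * 2 for r in rolls)

with ThreadPoolExecutor(max_workers=1) as pool:
    # Launch the evaluation; nothing blocks here.
    future = pool.submit(total_damage, [3, 5, 1])

    # ... other code runs in the meantime ...

    # The only point that can block is reading the value.
    print(future.result())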

I'm not sure what to say about your combat simulation; it seems like a fragile model and I would definitely implement it without depending on the order in which threads happen to be run on one end of the communication. I'm not sure why you think that stricter regimentation of calculation order is just handwaving the problem away.

When your asynchronous calls are strictly functional, there is no synchronization to worry about – the fact that it's asynchronous happens purely in the background. The difference is that instead of registering a callback and poking around waiting for it to be called, you block when you try to read the value that's not yet ready. So in this case, the amount of 'defensive programming' required is basically the same as cooperative scheduling, i.e. very, very little if any at all.
23 Jul, 2009, Silenus wrote in the 23rd comment:
Votes: 0
I probably should take a look at this when I find some time:

http://www.cs.cmu.edu/afs/cs.cmu.edu/pro...

I wonder what kind of results are in the paper.