03 Sep, 2011, Rarva.Riendf wrote in the 21st comment:
Votes: 0
kiasyn said:
an interesting way to tackle this would be to just store footprints/etc in each room.

a skilled ranger etc would look for these signs and they would be displayed in the description. it would also mean you could throw people off your track by walking into rooms unnecessarily.

I already have footprints stored, but those are used for another purpose (as the result of a spell), as is blood dripping on the ground from wounds.
But the purpose of the function is not really related to players; it is also used for a map.
And last but not least… yes, it is fast enough once you do not calculate 100 paths in a row when you only need to calculate one in the first place.
04 Sep, 2011, Tyche wrote in the 22nd comment:
Votes: 0
Doesn't anyone actually read and understand the code they are discussing anymore?
04 Sep, 2011, Runter wrote in the 23rd comment:
Votes: 0
Are you referring to the fact that the code Rarva linked to does not do the inefficient things people have claimed? Yeah, I realize. I also realize it was fast enough in 1993 when it was written, and it's still fast enough now for Rarva. I don't see any point at all in rewriting the code if it achieves its purpose without flaw. The time spent chasing the so-called speed gains of a redesign is better spent elsewhere.
05 Sep, 2011, Tyche wrote in the 24th comment:
Votes: 0
Runter said:
Are you referring to the fact that the code Rarva linked to does not do the inefficient things people have claimed? Yeah, I realize. I also realize it was fast enough in 1993 when it was written, and it's still fast enough now for Rarva. I don't see any point at all in rewriting the code if it achieves its purpose without flaw. The time spent chasing the so-called speed gains of a redesign is better spent elsewhere.


No. I mean it doesn't do any of the things he claims it does.
It doesn't search locations more than once.
It doesn't calculate the path and then recalculate it repeatedly.
It does exactly what it's constructed to do.
It returns the direction of the shortest path[1] to the target room to the player.

A* search is useful when there are some directional heuristics available to optimize search.
There are no directional heuristics available in any of the muds the snippet was written for.
Dijkstra's is useful when there are weights attached to the edges of a graph.
There are no weighted edges in any of the muds the snippet was written for.
Absent any of the above conditions, the BFS algorithm it uses is likely the best candidate for the task.
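What the snippet does, as described above, amounts to a plain BFS that returns only the first step of a shortest path. A minimal illustrative sketch in Python (not the snippet itself; the `world` dict mapping each room to its exits is a made-up stand-in for the MUD's room data):

```python
from collections import deque

def first_step(world, start, target):
    """Breadth-first search: return the direction of the first move
    along a shortest path from start to target, or None."""
    if start == target:
        return None
    visited = {start}
    queue = deque()
    # Seed the queue with the start room's exits, remembering which
    # initial direction each branch of the search came from.
    for direction, dest in world[start].items():
        if dest not in visited:
            visited.add(dest)
            queue.append((dest, direction))
    while queue:
        room, first_dir = queue.popleft()
        if room == target:
            return first_dir
        for _, dest in world[room].items():
            if dest not in visited:
                visited.add(dest)
                queue.append((dest, first_dir))
    return None  # target unreachable
```

Because each room is enqueued at most once, the work is linear in the number of rooms and exits explored.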

Oh there are inefficiencies. Someone hinted at one of them.
It actually does 4 to 6 hash table searches at every location.
It could quite easily be modified to do only one hash table search per location.
It doesn't require a rewrite to use a different algorithm.
A few modifications could get you around a 400% boost in performance.
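The single-lookup change could be sketched roughly as follows (again an illustrative Python sketch, not the actual code): resolve each vnum against the hash table once, when the room is first discovered, and carry the resolved room through the queue. The `room_table` layout is hypothetical.

```python
from collections import deque

def bfs_one_lookup(room_table, start_vnum, target_vnum):
    """BFS that hits the vnum hash table exactly once per room:
    each destination vnum is resolved when first discovered, and the
    resolved room dict is carried through the queue thereafter."""
    lookups = 0
    def get_room(vnum):
        nonlocal lookups
        lookups += 1  # count hash accesses, for illustration only
        return room_table[vnum]
    visited = {start_vnum}
    queue = deque([(get_room(start_vnum), None)])
    while queue:
        room, first_dir = queue.popleft()
        if room["vnum"] == target_vnum:
            return first_dir, lookups
        for direction, dest_vnum in room["exits"].items():
            if dest_vnum not in visited:
                visited.add(dest_vnum)
                # first_dir is None only on the first layer, where the
                # exit taken from the start room becomes the answer.
                queue.append((get_room(dest_vnum),
                              first_dir or direction))
    return None, lookups
```

The lookup counter is only there to make the one-per-room property visible; the real saving in C would come from not re-hashing the vnum for every membership test.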

It could also be modified to return the entire path, if one wanted, although the coders clearly didn't intend that.

[1] Obviously, when choosing the option to ignore locked doors, it necessarily would not be.
05 Sep, 2011, Rarva.Riendf wrote in the 25th comment:
Votes: 0
Quote
**
It returns the direction of the shortest path[1] to the target room to the player.
**
It could also be modified to return the entire path, if one wanted, although the coders clearly didn't intend that.

And that is what I did (stored, for each node, the path taken to get there).
The bottleneck I was suffering from was outside the method: to calculate the path, it used an easy recursive method. I missed those 3 or 4 lines, so I thought the lag was coming from the method itself, as I was using it for the whole world instead of an area. And there were many comments hinting that it should be limited to areas because of the lag it could induce.
I just trusted the previous coders to have made at least a moderate effort to make it fast; I was wrong. The command was not available to players, so I guess they just used it for themselves to build paths between areas. Hence the quick and dirty hack.
And I also made the change hinted at (fewer hash accesses) in the process. It is fast enough to avoid going multithreaded for now, but it is not THAT fast. Especially since all exits are explored in the same order: if your path lies in the 'wrong' (i.e. last explored) direction, you can explore a lot of nodes uselessly (no need to explore the north of the map if your exit is at the far south). (But since I also have a map command to build a whole map, I may change the order in which exits are explored depending on which general direction the path goes, by building a grid and counting the number of north/east etc. steps as if all exits existed and each room were in this grid.)
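That exit-ordering idea could be sketched roughly like this in Python; the `DIR_DELTAS` table and the (x, y) grid coordinates are assumptions, standing in for whatever the map command actually produces:

```python
# Unit grid offsets for each exit direction (hypothetical layout).
DIR_DELTAS = {"north": (0, 1), "south": (0, -1),
              "east": (1, 0), "west": (-1, 0)}

def ordered_exits(exits, here, goal):
    """Sort a room's exits so that directions which reduce the grid
    distance to the goal are explored first.  `here` and `goal` are
    (x, y) coordinates taken from a pre-built map grid."""
    def score(item):
        direction, _dest = item
        dx, dy = DIR_DELTAS[direction]
        nx, ny = here[0] + dx, here[1] + dy
        # Manhattan distance from the next square to the goal.
        return abs(goal[0] - nx) + abs(goal[1] - ny)
    return sorted(exits.items(), key=score)
```

Exploring exits in this order biases the search toward the target without changing its correctness, since every exit is still eventually tried.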
06 Sep, 2011, David Haley wrote in the 26th comment:
Votes: 0
Quote
(But since I also have a map command to build a whole map, I may change the order in which exits are explored depending on which general direction the path goes, by building a grid and counting the number of north/east etc. steps as if all exits existed and each room were in this grid.)

This is your best bet, by far, for improving the speed of a search algorithm, and that is why I suggested it from the start. You can improve the existing algorithm only so much; the real slowness is the approach more than the implementation (and multithreading wouldn't speed that up).
06 Sep, 2011, Silenus wrote in the 27th comment:
Votes: 0
I do wonder about A* algorithms in general, since lately I have been doing some rudimentary book work on algorithm analysis. (Were you encoding Tic Tac Toe in Prolog, David?) My understanding is that these algorithms are somewhat heuristic, based on a geometric distance bound. I think in LPC, given the overhead of interpretation, a naive implementation of this is not particularly quick (especially if one uses a poor choice of pq's).

I am curious, though, how many states you could burn through here on a modern super-pipelined GHz proc with modern cache hierarchies/sizes.
06 Sep, 2011, David Haley wrote in the 28th comment:
Votes: 0
Quote
were you encoding Tic Tac Toe in Prolog David?

It wasn't Prolog, but it was a similar language. You can read more about it here, and here is the language specification. (Yes, I was involved in that project.)

Quote
My understanding is that these algorithms are somewhat heuristic, based on a geometric distance bound.

A* is entirely heuristic-based, but not necessarily a geometric heuristic (some search spaces aren't geometric at all). The whole point of A* is to use heuristics to guide a search by preferring the "right direction", however one defines "right". In geometric cases, it's much easier to understand what going in the right direction means, but even then, straight geometric distance is only the simplest of heuristics and has a number of issues.
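For illustration, a minimal generic A* might look like the following Python sketch; the `neighbors` and `h` callables are assumptions, not anything from the snippet under discussion:

```python
import heapq

def astar(neighbors, start, goal, h):
    """Generic A*: `neighbors(n)` yields (next_node, step_cost) pairs,
    `h(n)` is the heuristic estimate of the remaining cost to goal.
    Returns the cost of the cheapest path found, or None."""
    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a cheaper route was found later
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None
```

With `h` returning 0 everywhere this degenerates into Dijkstra's algorithm, and with unit edge costs on top of that it explores essentially what BFS explores; the heuristic is the only thing A* adds.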

Quote
I think in LPC, given the overhead of interpretation, a naive implementation of this is not particularly quick (especially if one uses a poor choice of pq's).

A naive implementation of anything on anything will not be particularly quick. :smile:
I wouldn't worry about the interpretation overhead, unless the interpreter is particularly slow.
06 Sep, 2011, Runter wrote in the 29th comment:
Votes: 0
Yeah, mistakes in algorithms may be an issue in a high-level language that could be overlooked in a lower-level one. However, I believe that good higher-level languages bring better tools to your fingertips, encouraging more efficient code in general. It wouldn't surprise me if many applications in higher-level languages run more efficiently than a poorly written counterpart in C. That isn't to say that a C program can't use the best data structures and algorithms for the job. It just usually doesn't make sense, productivity-wise, for the programmer to implement relatively complex things for small parts of the app. They don't have to: they can use libraries for these things. In my experience, that rarely happens in the C world compared to high-level languages. I'm not sure why.
06 Sep, 2011, Silenus wrote in the 30th comment:
Votes: 0
This link of yours seems to point to a Datalog-with-negation thing, i.e. Prolog without terms but with negative clauses. I am not sure which project you are referring to in this case, and I did not read it particularly carefully; it seems like some game-playing competition of sorts.

I am curious in this case how closely related toys like Datalog are to Cook's theorem. I am not very familiar with A*, and unfortunately, since I am traveling, I don't have the chapter in question on me (I sort of skimmed over it). Isn't A* metric-dependent (it requires a distance function)? I may be off on this point, though. IIRC the chapter in question did provide some sort of bound calculation as well.
06 Sep, 2011, Silenus wrote in the 31st comment:
Votes: 0
Runter, I agree with you mostly here. Times are such now that computational resources are plentiful, cheap, and generally commoditized; very high-level languages à la Ruby have numerous advantages. You can write a lot less code in many cases to get the same effect, and in many cases this results in fewer errors.

However, in general, algorithmic efficiency is important (though asymptotic analysis probably does not tell the whole story, due to the stupid constants and N > n issues). I think there is an article by Knuth that describes this (some paperback I found at a Borders bookstore).
06 Sep, 2011, Runter wrote in the 32nd comment:
Votes: 0
Silenus said:
Runter, I agree with you mostly here. Times are such now that computational resources are plentiful, cheap, and generally commoditized; very high-level languages à la Ruby have numerous advantages. You can write a lot less code in many cases to get the same effect, and in many cases this results in fewer errors.

However, in general, algorithmic efficiency is important (though asymptotic analysis probably does not tell the whole story, due to the stupid constants and N > n issues). I think there is an article by Knuth that describes this (some paperback I found at a Borders bookstore).


Well, my point is that when writing in a low-level language, it's often too difficult or onerous for programmers to implement the best algorithm even if they know what it is, so they opt for something Good Enough to keep productivity reasonably high. In a higher-level language, though, the best algorithm may be implemented for you out of the box. For example: binary trees, efficient associative arrays, etc.
06 Sep, 2011, Rarva.Riendf wrote in the 33rd comment:
Votes: 0
David Haley said:
Quote
(and multithreading wouldn't speed that up).

The only reason I was going for multithreading was not to speed up the result anyway; it was only to avoid inducing lag in the main thread.
Maybe you could speed up the process with multithreading by trying to calculate the path from the start and the end at the same time, but only if your paths are the same whichever way you go.
It would be the easiest thing to do, but it depends on your world. Maybe I will try that one day if I am bored. Dunno if putting a lock on rooms to test whether they are already visited is more costly than letting one thread calculate everything. Gains in multithreading are so unpredictable when you have locks involved.
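The two-ended idea (minus the threading) could be sketched single-threaded like this, purely for illustration; it assumes an undirected world where, as noted, paths are the same whichever way you go:

```python
from collections import deque

def bidirectional_bfs_dist(world, start, goal):
    """Alternate BFS layers from both ends until the frontiers meet;
    returns the length of a shortest path, or None if unreachable.
    Assumes every exit can be walked back (an undirected graph)."""
    if start == goal:
        return 0
    dist_s, dist_g = {start: 0}, {goal: 0}
    front_s, front_g = deque([start]), deque([goal])
    while front_s and front_g:
        # Expand one full layer of the smaller frontier.
        if len(front_s) <= len(front_g):
            frontier, dist, other = front_s, dist_s, dist_g
        else:
            frontier, dist, other = front_g, dist_g, dist_s
        best = None
        for _ in range(len(frontier)):
            node = frontier.popleft()
            for nxt in world[node]:
                if nxt in other:  # the two searches have met
                    cand = dist[node] + 1 + other[nxt]
                    if best is None or cand < best:
                        best = cand
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    frontier.append(nxt)
        if best is not None:  # finish the layer, keep the best meeting
            return best
    return None
```

Even single-threaded, this explores roughly two half-depth balls instead of one full-depth ball on large maps; whether splitting the two frontiers across threads pays for the locking is exactly the question raised here.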
06 Sep, 2011, David Haley wrote in the 34th comment:
Votes: 0
Silenus said:
This link of yours seems to point to a Datalog-with-negation thing, i.e. Prolog without terms but with negative clauses.

There are free terms.

Silenus said:
I am not sure which project you are referring to in this case but I did not read this particularly carefully- seems like some game playing competition of sorts.

The general game playing project, which includes the game specification language, the game server, etc.

Silenus said:
Isn't A* metric dependent (it requires a distance function)?

Yes, although I'd call the metric a score rather than a distance; it's the same thing, really. It's just not necessarily geometric distance.

Runter said:
Well, my point is that when writing in a low-level language, it's often too difficult or onerous for programmers to implement the best algorithm even if they know what it is, so they opt for something Good Enough to keep productivity reasonably high. In a higher-level language, though, the best algorithm may be implemented for you out of the box. For example: binary trees, efficient associative arrays, etc.

I'd say this isn't quite right… in fact, you'll often find that high-level language projects will revert to low-level languages to implement the really tight loops/computation-heavy portions. That said, I agree that the high-level languages often provide very efficient data structures – often by implementing them in C, not the high-level language. :wink: Everything is a tradeoff between speed and productivity. If you're talking about a core algorithm that's going to run hundreds of times a second and is a bottleneck for the application, you can be quite sure that it's worth spending more time on it to make it fast, not merely good enough.

The nice thing about high-level languages is that you have this flexibility: write what you want in one language, write C/C++ bindings for the portions that need to be efficient…
06 Sep, 2011, David Haley wrote in the 35th comment:
Votes: 0
Rarva said:
Maybe you could speed up the process with multithreading by trying to calculate the path from the start and the end at the same time, but only if your paths are the same whichever way you go.

This wouldn't help you at all, really, unless you had some reason to believe that the backward search space is somehow different from the forward search space – but I doubt that this is a guarantee you could make. I'd focus efforts on a good heuristic rather than something like this. If you have a good heuristic, then it can start making sense to search in both directions, but honestly that's a little too clever for this particular application and you're more likely to hurt than help yourself.

Rarva said:
Gains in multithreading are so unpredictable when you have locks involved.

If your multiple threads are in contention over the same locked resources, then the gains are quite predictable: very small to zero, if not negative. Multiple threads are useful when computation can occur in parallel.
06 Sep, 2011, Rarva.Riendf wrote in the 36th comment:
Votes: 0
David Haley said:
This wouldn't help you at all, really, unless you had some reason to believe that the backward search space is somehow different from the forward search space

This could help, as you could use two cores instead of one sitting idle (if you have multiple cores, that is).

Quote
If your multiple threads are in contention over the same locked resources, then the gains are quite predictable: very small to zero, if not negative. Multiple threads are useful when computation can occur in parallel.

That again depends on how often and for how long you need the lock, compared to the gain from the parallelized work.

But I totally agree this would just be for educational purposes; there's probably nothing to gain, and the code would be a lot more complex. As I said, only if I am bored :)
06 Sep, 2011, David Haley wrote in the 37th comment:
Votes: 0
Quote
This could help, as you could use two cores instead of one sitting idle (if you have multiple cores, that is).

Might as well use two cores to process one search queue. Two-directional search isn't simple.

Don't forget that you need one of your cores to run your normal MUD stuff, too…
06 Sep, 2011, Runter wrote in the 38th comment:
Votes: 0
DH said:
That said, I agree that the high-level languages often provide very efficient data structures – often by implementing them in C, not the high-level language.


That's kinda the point, though. It shouldn't be surprising that people in C would rather use recursion, for example, to solve certain problems better done by iteration, because it's so onerous to implement and test a list or map structure in C. And I stand by my statement that it's easier to implement the correct algorithm in a higher-level language, and therefore it will be implemented correctly more often in a high-level language.

I think we all agree that low-level languages have a place, and that there are times when speed is absolutely paramount. But I believe this assessment is done poorly by most programmers. There are very few things that need to be written in such a low-level language. On a mud dev site, I guess it's appropriate to say that muds, almost monolithically, aren't one of those types of software. Unless your target hardware is your MP3 player.
06 Sep, 2011, Rarva.Riendf wrote in the 39th comment:
Votes: 0
Quote
On a mud dev site, I guess it's appropriate to say that muds, almost monolithically, aren't one of those types of software. Unless your target hardware is your MP3 player.

The server I have has only 16 megs of RAM left :) (256 megs total to run everything, including the forum); the mud itself uses like 35 megs, so 70 with a build port.
Of course, I am a cheap bastard :) and the server only costs like $100 a year.
And it was code I took over. It is more a hobby for me and some old friends than a real attempt at turning an old piece of history into something modern.
If I could fix all the bugs, it would be a great achievement; it was worse than spaghetti when I began.
06 Sep, 2011, Runter wrote in the 40th comment:
Votes: 0
I think like 16 megs is about what mp3 players have. ;)

https://vpscheap.info/compare-vps.html

That's 10 gigs of hard drive, 200G of bandwidth, 128M of RAM, free backups, and managed hardware for $1.50 per month, or $15 a year. I don't really understand running any software on something so archaic these days. Just skip a coffee and pay for a good server. :p