22 Mar, 2009, elanthis wrote in the 1st comment:
Votes: 0
Are there any common MUD platforms that do not support alloca? I know it's non-standard and frowned upon and all that, but dealing with the internal temporary memory allocation for MSSP/ENVIRON/TTYPE parsing in libtelnet is kind of driving me nuts. Making it safe is not hard, just tedious. Making it safe when used in C++ apps is impossible (what if an exception gets thrown from inside a libtelnet event handler?) without adding either a full memory tracking system (ugh) or C++ hacks that require compiling libtelnet as C++ (similar to what Lua does). Using alloca just makes it all so much simpler. C99 variable-length arrays would help, but I know for a fact that some MUDs still try to support pre-C99 compilers. Even compilers that do C99 sometimes make you ask for it explicitly, e.g. GCC 4.4 still seems to operate in C89 mode by default. :/
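
For reference, the C99 alternative would look something like this minimal sketch (hypothetical names, not the actual libtelnet code); it only helps if the compiler is actually running in C99 mode:

    #include <string.h>

    /* Sketch only: a C99 variable-length array gives the same automatic
     * release as alloca, but needs C99 mode (e.g. gcc -std=c99). */
    static void handle_subneg(const unsigned char *buffer, size_t size,
                              void (*cb)(const char *text))
    {
        char copy[size + 1];          /* C99 VLA, released on return */
        memcpy(copy, buffer, size);
        copy[size] = '\0';
        cb(copy);
    }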

libtelnet does already rely on a few (very very commonly supported) extensions to ANSI C89. I'm just wondering if alloca is another one I can add to the list.

Not looking for a debate on the evils of alloca. I know what they are. Hence why I'm asking instead of just using it.
23 Mar, 2009, David Haley wrote in the 2nd comment:
Votes: 0
No preference here. I'm not a huge fan of supporting legacy compilers unless we really, really have to, and I don't know of MUD platforms that must use old compilers. This is the kind of thing that pisses me off about writing in C, oh well. (It's also probably why most people take shortcuts and just use static buffers.)
23 Mar, 2009, Scandum wrote in the 3rd comment:
Votes: 0
Aren't alloca allocations automatically de-allocated when the function call ends? Unless you're planning to create a thread for each socket's telnet handling.
23 Mar, 2009, Les wrote in the 4th comment:
Votes: 0
Scandum said:
Aren't alloca allocations automatically de-allocated when the function call ends? Unless you're planning to create a thread for each socket's telnet handling.


Since alloca basically just grows the current stack frame beyond what's needed for the function call's arguments, I don't think using it would by itself rule out the function being reentrant (a concurrent call would have its own stack, etc.).
23 Mar, 2009, elanthis wrote in the 5th comment:
Votes: 0
Scandum said:
Aren't alloca allocations automatically de-allocated when the function call ends?


Yes.

Scandum said:
Unless you're planning to create a thread for each socket's telnet handling.


No.
23 Mar, 2009, Scandum wrote in the 6th comment:
Votes: 0
Les said:
Scandum said:
Aren't alloca allocations automatically de-allocated when the function call ends? Unless you're planning to create a thread for each socket's telnet handling.


Since alloca basically just grows the current stack frame beyond what's needed for the function call's arguments, I don't think using it would by itself rule out the function being reentrant (a concurrent call would have its own stack, etc.).

It shouldn't, unless you run out of stack space. But I'm not sure what good alloca would do in a state machine, since you might have to bail out in the middle of things with broken packets. The only way I can think of for it to be useful is with a scan-ahead approach like mth uses; perhaps elanthis saw the light? :devil:
23 Mar, 2009, David Haley wrote in the 7th comment:
Votes: 0
I think you might want to think a bit more about what Elanthis is doing here, and why/where he might need temporary, dynamically allocated storage that only has to live for the duration of a function call and should disappear when that call ends :wink:
23 Mar, 2009, Tyche wrote in the 8th comment:
Votes: 0
Just a guess: you're implementing your Telnet state machine on the stack? That works great for a compiler, but not for multiple Telnet streams, because you have to jump out any time you run out of data. If so, then Scandum may be correct: you either need to go multi-threaded… or use some sort of continuation mechanism.
23 Mar, 2009, elanthis wrote in the 9th comment:
Votes: 0
::sigh:: I put that last line in the original post for a reason.

No, I am not implementing the state machine on the stack.

When the complete subnegotiation buffer has been received, I parse it into an array of strings so that the application doesn't have to deal with that crap itself. That entails allocating an array to hold all the data, plus strings to copy the MSSP/TTYPE/ENVIRON data into, because the data in the subnegotiation buffer is not NUL-terminated (it isn't sent that way over the wire). This array is passed on to another function (a user-supplied callback), and then that memory can safely be released once the subnegotiation handler finishes. This is entirely separate from the state machine; as I said, all persistent memory is allocated with malloc.

The two reasons I wanted to use alloca are (a) it removes the need to write 40 lines of repetitive code for freeing strings if a parse error occurs halfway through the process, and (b) it makes it safe to use libtelnet with C++ apps that may throw an exception from inside the user-supplied callback function. Dealing with just (a) I can do; dealing with (b) takes quite a bit more work, unfortunately.
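
To give a feel for (a), here's a rough sketch of the kind of parse I mean (hypothetical helper and callback, not libtelnet's actual interface): with alloca, the early return on a parse error needs no cleanup at all.

    #include <alloca.h>
    #include <string.h>

    #define MSSP_VAR 1
    #define MSSP_VAL 2

    /* Sketch only: copy each VAR/VAL token out of the non-NUL-terminated
     * subnegotiation buffer and hand a NUL-terminated string to the
     * callback.  The pool lives on the stack, so bailing out is free. */
    static int parse_mssp(const unsigned char *buffer, size_t size,
                          void (*cb)(unsigned char type, const char *text))
    {
        /* each type byte in the input pays for one NUL terminator in the
         * copies, so size + 1 bytes is always enough */
        char *pool = alloca(size + 1);
        size_t used = 0, i = 0;

        while (i < size) {
            unsigned char type = buffer[i];
            size_t start, length;

            if (type != MSSP_VAR && type != MSSP_VAL)
                return -1;              /* parse error: nothing to free */

            start = ++i;
            while (i < size && buffer[i] != MSSP_VAR && buffer[i] != MSSP_VAL)
                ++i;
            length = i - start;

            memcpy(pool + used, buffer + start, length);
            pool[used + length] = '\0';
            cb(type, pool + used);
            used += length + 1;
        }
        return 0;                       /* pool vanishes with the frame */
    }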

ALL I WANT TO KNOW is whether or not people have an objection to alloca solely because they're on a platform it isn't supported on. I know when to use it, how to use it, when not to use it, etc. I just want to know if anyone's using platforms that have no alloca, thereby forcing me to jump through the hoops of manual error/exception handling.

If you want to see the code, it's in the same public git repo it's been in since I started.
23 Mar, 2009, Scandum wrote in the 10th comment:
Votes: 0
So basically you're scanning ahead now; welcome to the club. I guess that means all previous complaints against mth are null and void now.
23 Mar, 2009, David Haley wrote in the 11th comment:
Votes: 0
Uh…? :thinking:

I strongly encourage you to return to post #7 and think a little more about what's going on here.
23 Mar, 2009, elanthis wrote in the 12th comment:
Votes: 0
Scandum, you really are a dumbass.

No, I am not scanning ahead. I am still using a state machine for parsing TELNET. I am NOT using a state machine for parsing the subnegotiation buffers, but that's because I am guaranteed to have the _whole buffer_ at the time of parsing: the state machine dealt with buffering it up until the IAC SE.
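
In sketch form, the split looks roughly like this (made-up state names, not the actual libtelnet states): the state machine only accumulates bytes, and the buffered subnegotiation gets parsed in one pass once IAC SE shows up.

    #include <stddef.h>

    #define IAC 255
    #define SB  250
    #define SE  240

    /* Sketch only; WILL/WONT/DO/DONT and their option bytes are glossed
     * over here to keep the example short. */
    enum state { S_DATA, S_IAC, S_SB_OPTION, S_SB_DATA, S_SB_IAC };

    struct demux {
        enum state state;
        unsigned char option;
        unsigned char buffer[1024];
        size_t length;
    };

    static void feed_byte(struct demux *d, unsigned char byte)
    {
        switch (d->state) {
        case S_DATA:
            if (byte == IAC) d->state = S_IAC;
            break;
        case S_IAC:
            d->state = (byte == SB) ? S_SB_OPTION : S_DATA;
            break;
        case S_SB_OPTION:
            d->option = byte;
            d->length = 0;
            d->state = S_SB_DATA;
            break;
        case S_SB_DATA:
            if (byte == IAC)
                d->state = S_SB_IAC;
            else if (d->length < sizeof d->buffer)
                d->buffer[d->length++] = byte;
            break;
        case S_SB_IAC:
            if (byte == SE) {
                /* whole buffer present: parse d->buffer[0..length) now */
                d->state = S_DATA;
            } else if (byte == IAC) {
                /* IAC IAC escapes a literal 255 inside the data */
                if (d->length < sizeof d->buffer)
                    d->buffer[d->length++] = IAC;
                d->state = S_SB_DATA;
            } else {
                d->state = S_DATA;      /* protocol error: drop the subneg */
            }
            break;
        }
    }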

Were I trying to parse the subnegotiation commands without buffering them, I would obviously use a state machine, because doing a scan ahead without knowing for sure that I have the whole thing would be pure stupidity. Like your code.
23 Mar, 2009, Guest wrote in the 13th comment:
Votes: 0
MUDs running on Fedora aren't likely to have an issue. alloca() exists on my setup and looks to have been around for some time. It's likely anyone using gcc will have it, depending on how far back the function goes.
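
If portability is the worry, the usual guard looks roughly like this (the macro name here is made up; this is just the standard dance the autoconf manual describes):

    /* Sketch: gcc has __builtin_alloca, MSVC puts _alloca in <malloc.h>,
     * and most other Unix compilers provide <alloca.h>. */
    #if defined(__GNUC__)
    # define local_alloc(size) __builtin_alloca(size)
    #elif defined(_MSC_VER)
    # include <malloc.h>
    # define local_alloc(size) _alloca(size)
    #else
    # include <alloca.h>
    # define local_alloc(size) alloca(size)
    #endif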
23 Mar, 2009, Scandum wrote in the 14th comment:
Votes: 0
My code only parses a subnegotiation if it successfully scans ahead. Looks like you're scanning backwards, so I guess that would make your code backwards. :evil:
23 Mar, 2009, David Haley wrote in the 15th comment:
Votes: 0
I have to admit that I'm not sure if you're trying to save yourself or trying to make a joke.
23 Mar, 2009, elanthis wrote in the 16th comment:
Votes: 0
scanning backwards? like, wtf and stuff, lolz.