30 Jan, 2013, Telgar wrote in the 1st comment:
Votes: 0
I'm writing a MUD in Javascript. One of the cool things about this is that you can dynamically reload code. This has obvious advantages, but it also has one particularly nasty gotcha: you can redefine constructor functions. If you have persistent data, this is a problem.

Example:

// in some module
function Player(name) {
    this.name = name;
}

Player.prototype.isImmortal = function() {
    return this.name === "Adam";
};

//… later on, you create some persistent game state …
player_table["Adam"] = new Player("Adam");

player_table.Adam instanceof Player; /* returns true */


Now say you need to reload the module defining Player…

// in some module
function Player(name) {
    this.name = name;
}

Player.prototype.isImmortal = function() {
    return this.name === "Steve";
};


Now what happens?

player_table.Steve = new Player("Steve");
player_table.Steve.isImmortal() /* returns true */
player_table.Adam.isImmortal() /* also returns true */

uh oh

player_table.Adam instanceof Player /* returns false */


Obviously it is a really dumb idea to code like this; the example is just for illustration. The reason this happens is that we have redefined the Player function, which now has a brand-new prototype object. Changes to that prototype only affect new players; older, pre-existing players still reference the old prototype and never see your fancy new code updates.
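For contrast, here's a minimal sketch (toy code of my own, not from the MUD) showing that mutating the *existing* prototype object in place does propagate to old instances; the breakage only comes from replacing the constructor, and with it the prototype, wholesale:

```javascript
function Player(name) {
    this.name = name;
}
Player.prototype.isImmortal = function() {
    return this.name === "Adam";
};

var adam = new Player("Adam");

// "Reload" by patching the shared prototype rather than redefining Player:
// old instances see the change, because they still point at the same object.
Player.prototype.isImmortal = function() {
    return this.name === "Steve";
};

console.log(adam.isImmortal());      // false: Adam lost immortality
console.log(adam instanceof Player); // true: the constructor was never replaced
```

Keeping that property once the constructor lives in a reloadable module is what the machinery below is for.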

Exactly the opposite of what you want for reloadable code. I'm curious how others have gone about solving this problem. I'll post my solution, which turned out to be a bit non-trivial.

// get access to the global object
var GLOBAL_OBJECT = Function('return this')();

Object.defineConstant = (function() {
    var prop = {
        enumerable: true,
        writable: false,
        configurable: false,
        value: null
    };
    return function(obj, key, value) {
        prop.value = value;
        Object.defineProperty(obj, key, prop);
    };
})();

Function.staticClass = function (name, func, parent) {
    if (!GLOBAL_OBJECT.hasOwnProperty(name)) {
        Object.defineConstant(GLOBAL_OBJECT, name,
            function() {
                if (typeof this === "undefined" ||
                        !this instanceof GLOBAL_OBJECT[name]) {
                    return new (Function.prototype.bind.apply(
                        GLOBAL_OBJECT[name], arguments));
                } else {
                    if (parent) {
                        parent.apply(this, arguments);
                    }
                    func.apply(this, arguments);
                    return this;
                }
            }
        );
        if (parent) {
            var Holder = function() {};
            Holder.prototype = parent.prototype;
            GLOBAL_OBJECT[name].prototype = new Holder();
        }
    }
};


Phew!!! A lot of the fancy footwork there is to stop you from accidentally invoking the constructor without new.
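The guard idea in isolation, as a sketch with a toy class of my own (one detail worth calling out: JavaScript operator precedence makes the parenthesization significant, since `!x instanceof C` parses as `(!x) instanceof C`):

```javascript
function Point(x, y) {
    // Called without `new`, `this` is not a Point (it's the global object,
    // or undefined in strict mode), so re-invoke properly.
    if (!(this instanceof Point)) {
        return new Point(x, y);
    }
    this.x = x;
    this.y = y;
}

var a = new Point(1, 2);
var b = Point(3, 4);             // forgot `new`, but still get an instance
console.log(b instanceof Point); // true
```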

Now you can do things like this..

Function.staticClass("Mobile", function(name) {
    this.name = name;
    this.gender = Global.Gender.NEUTRAL;
    this.genderName = undefined;
    this.race = undefined;
    this.class = undefined;
    this.stats = Global.Abilities.getDefaultStats();
});

Function.staticClass("Player", function(name, pid) {
    this.playerId = pid;
}, Mobile);


And now I have a Player class that inherits from the Mobile class. I can change the prototypes on both classes, and all player class objects inherit those changes.

You might spy that I have to reference globals in the above functions. I have a module system that encapsulates functionality without polluting global state, but I can't use it here. This is the only thing I don't entirely like, but it turns out to be necessary, since the constructor functions can only be defined once (redefining them would create a new prototype, recreating all the problems above). If I used local references to modules, I wouldn't get new values when those modules are reloaded.

It might look like this poses a problem, since I can't redefine the constructors, but it is really simple to work around:

Function.staticClass("Mobile", function(name) {
    this.initialize.apply(this, arguments);
});

Mobile.prototype.initialize = function(name) {
    this.name = name;
    this.gender = Global.Gender.NEUTRAL;
    this.genderName = undefined;
    this.race = undefined;
    this.class = undefined;
    this.stats = Global.Abilities.getDefaultStats();
};
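To illustrate the payoff (with a plain constructor standing in for the pinned staticClass one): once the constructor object is fixed, a reload only needs to overwrite prototype members, and every existing instance picks them up immediately.

```javascript
// Stand-in for a constructor that is defined once and never replaced.
function Mobile() {
    this.initialize.apply(this, arguments);
}

Mobile.prototype.initialize = function(name) {
    this.name = name;
};

var orc = new Mobile("orc");

// A "reload" just reassigns members on the same prototype object.
Mobile.prototype.shortDesc = function() {
    return "a fearsome " + this.name;
};

console.log(orc.shortDesc()); // "a fearsome orc": old instance, new method
```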
30 Jan, 2013, Runter wrote in the 2nd comment:
Votes: 0
I've solved the problem by only using code reloading for quality of life in development. I don't suggest using code reloading in production. (Or editing production code in the first place…) If you have OLC, I suggest doing it only on a building port where the code reloading can be tightly controlled, if you need it.
30 Jan, 2013, Telgar wrote in the 3rd comment:
Votes: 0
Oh, I just realized using the .initialize pattern also means I can get rid of the globals… thus I am pretty happy with this solution.

Mostly the reloading will be useful for bug-fixing at run-time. I'd reload in the test instance first, validate the fix, then reload in production.

Most of the metadata is abstracted out into a schema based OLC system, so I don't even need reloading to do things like add new classes, skills, etc.. but if there is a bug that can be exploited for infinite gold, I can now patch it without taking down production.
30 Jan, 2013, quixadhal wrote in the 4th comment:
Votes: 0
You might want to take a look at the Gurbalib mudlib for the DGD game driver to see how it handles this kind of thing. It takes things a step further by making a persistent world, where everything remains until it's explicitly deleted. The tricky part there is that an LPMUD supports class inheritance, so you have to be able to (potentially) update all the children who inherit the thing you just changed, but ensure that nothing loses data since the source for a given object might not even exist (on disk) anymore.

Looks promising!
30 Jan, 2013, Telgar wrote in the 5th comment:
Votes: 0
The persistent world part is easy… I have only a very small number of persistent tables, and I think it is reasonable to make these top-level objects from C/C++ so that I could debug them if the need ever arises (plus, I get to specify things like the array size on construction to make sure they perform well and I'm not down-converting my VNUM array to an object, set read-only properties, make them non-deletable, etc, etc). SpiderMonkey has a pretty easy way to do that…

/* Create the zone table */
zone_table = JS_NewArrayObject(ctx, MAX_ZONES, NULL);
if (!zone_table) {
    fprintf(stderr, "Failed to create zone table!\n");
    return -1;
}
if (!JS_DefineProperty(ctx, global, "zone_table",
                       OBJECT_TO_JSVAL(zone_table), NULL, NULL,
                       PROP_READONLY)) {
    fprintf(stderr, "Could not add zone table to global!\n");
    return -1;
}

/* Create the obj_proto table */
obj_proto_table = JS_NewArrayObject(ctx, NUM_VNUM, NULL);
if (!obj_proto_table) {
    fprintf(stderr, "Failed to create obj_proto table!\n");
    return -1;
}
if (!JS_DefineProperty(ctx, global, "obj_proto_table",
                       OBJECT_TO_JSVAL(obj_proto_table), NULL, NULL,
                       PROP_READONLY)) {
    fprintf(stderr, "Could not add obj_proto table to global!\n");
    return -1;
}


While I could use the same approach for "classes", creating Javascript functions which take args, etc. from C++ is rather cumbersome, and then I would need to define all my Javascript "classes" in C++.
30 Jan, 2013, plamzi wrote in the 6th comment:
Votes: 0
First off, I'd like to point out that this is not a challenge unique to JS. If you have instanced objects in memory and you wish to alter the constructor at runtime, you'd be writing a lot of "meta-code" in any language. And in most languages, it will look pretty convoluted and headache-inducing.

Secondly, while I'm a fan of runtime updates for most things, I agree with Runter that pulling stunts like these should be confined to a dev environment, where technically you don't need to worry about converting objects already in memory, and where, honestly, it's not worth it to write and maintain such code because you can just reboot whenever you modify structures.

You definitely get points for cleverness and your solution proves your JS understanding is better than mine at this time. That said, not everything that can be done is worth doing. I've done a good bit with dynamic updates in C and node.js, but I think trying to make everything update-able can be a huge time sink with no practical gain whatsoever.
30 Jan, 2013, quixadhal wrote in the 7th comment:
Votes: 0
Telgar said:
The persistent world part is easy… I have only a very small number of persistent tables


You misunderstand the nature of the kind of persistence I'm talking about. A persistent game like the one I referred to doesn't just persist a few tables, it persists the entire game state. Literally, you do a shutdown, boot the new driver (passing descriptors to prevent disconnects), and restore the entire game state so players don't even notice. Combat continues where it was, damaged objects lying in rooms remain damaged.

DGD helps quite a bit with that by providing a state dump mechanism, however you still have to manage code updates to your object hierarchy, as a new code version may require you to perform one-time translation of data structures as the new code takes over.
30 Jan, 2013, Rarva.Riendf wrote in the 8th comment:
Votes: 0
Well, you can 'version' things and write code that morphs the old objects into new ones when it detects a different version.
I occasionally did that for players: run the code once, then DELETE it… so you don't pull your hair out later wondering why you had this code in the first place.

edit: nm, just realised Quix said that already…
30 Jan, 2013, Telgar wrote in the 9th comment:
Votes: 0
quixadhal said:
Telgar said:
The persistent world part is easy… I have only a very small number of persistent tables


You misunderstand the nature of the kind of persistence I'm talking about. A persistent game like the one I referred to doesn't just persist a few tables, it persists the entire game state. Literally, you do a shutdown, boot the new driver (passing descriptors to prevent disconnects), and restore the entire game state so players don't even notice. Combat continues where it was, damaged objects lying in rooms remain damaged.

DGD helps quite a bit with that by providing a state dump mechanism, however you still have to manage code updates to your object hierarchy, as a new code version may require you to perform one-time translation of data structures as the new code takes over.


I think we're talking about the same kind of persistence. You're talking about a copyover-type implementation, while my persistence model is lazy, in-place conversion. I propose that getting the above fully working in a Diku-derivative type MUD can be achieved with the following persistent data structures:

zone_table - a table of zones, holding last reset time, # of players in zone, etc.

room_vnum_table - a table of room VNUMs loaded from zone files.
room_rnum_table - a table of instantiated rooms

object_vnum_table - a table of object VNUMs loaded from zone files.
object_rnum_table - a table of instantiated objects

mobile_vnum_table - a table of mobile VNUMs ….
mobile_rnum_table - a table of instantiated mobiles

finally, to hold player data

player_table - a hashtable mapping player name to player data, maintained for active or recently active players.

That's it. Almost everything else in game state can be either derived from those structures, or (things like mail, message boards, auction inventory) is stored offline in files. Mobiles have program state stored directly in their objects; rooms have lists of players contained within; players and mobiles have combat state and engaged enemy; shops have object rnums for available items stored on the mobiles. Since Javascript has runtime garbage collection, I don't need to explicitly copy any of that over; whatever chaff I had in the form of cached objects or private data (such as Unicode and terminfo tables) is dropped and reloaded.

Keeping those 8 tables across code reload makes 99% of the game state persistent. Notably, descriptor state, which dangles off the player, remains intact, and I don't need to squirrel away negotiated telnet options or whatnot - they still exist. This is one point where copyover schemes have been a bit weak, though it could be improved.

Course there are a few outliers, like ongoing auctions and maybe public chat channels (if you support IRC #topic-style channels - note that private channels, since they are attached to player state, are still attached). Support for these could be arranged without any help from the runtime engine, but every object we want to persist requires some pollution of global state.

As for versioning objects, this is mostly unnecessary in Javascript. The typing is completely dynamic, so if I want to add a new property, "hoursOfLight", to all objects of class "Torch", then I simply update the schema for "Torch" to include the following field:

"hoursOfLight" : { "type" : "int", "default" : 100 }

Now any references to torches which exist offline but haven't defined this property fall back to a schema lookup based on their class, and pick up the default property value. Any instances of in-game objects are corrected simply by having their prototype adjusted. Since the prototype for the class is static, as I achieved above: poof… presto, magical conversion. No versioning necessary.
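One cheap way to realize that fallback, as a sketch of the idea only (applySchemaDefaults is my own name, and the real system presumably routes this through its schema machinery), is to copy schema defaults onto the shared prototype so pre-existing instances inherit the new field:

```javascript
// Copy each field's default onto the class prototype. Instances created
// before the field existed fall back to it via the prototype chain.
function applySchemaDefaults(proto, schema) {
    for (var field in schema) {
        if (schema.hasOwnProperty(field)) {
            proto[field] = schema[field]["default"];
        }
    }
}

function Torch() {}             // stand-in for a pinned class
var oldTorch = new Torch();     // created before the schema change

applySchemaDefaults(Torch.prototype, {
    "hoursOfLight": { "type": "int", "default": 100 }
});

console.log(oldTorch.hoursOfLight); // 100, via prototype lookup
oldTorch.hoursOfLight = 42;         // burning it shadows the default per-instance
```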

For a more complex case, say I want to change a player setting from

"unicode" : { "type" : "boolean", "default" : false }

to something supporting sub-selections of supported font code points

"unicode" : { "type" : "set", "enum" : "Unicode:codepoints" }

Then, to accommodate both versions, I have to code defensively:

if (typeof player.unicode === "boolean") {
    // do things the old way..
} else {
    // do things the new way..
}

Or maybe I want to iterate over the player table and update in place, or lazily on login or logout. I don't really want to perform changes like that live, nor do I see the need to be doing it often, but the point is, in Javascript, this is entirely possible without any copyover, and in most cases, without even writing any explicit version conversion function.
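A lazy, idempotent migration along those lines might look like this sketch (migratePlayer and the set representation are my own illustration, not the actual schema code):

```javascript
// Convert an old boolean `unicode` flag into the newer set form, in place,
// the first time the record is touched (e.g. on login). Safe to call twice.
function migratePlayer(player) {
    if (typeof player.unicode === "boolean") {
        // Old true/false becomes a (possibly empty) set of code-point groups.
        player.unicode = player.unicode ? ["BMP"] : [];
    }
    return player;
}

var oldRecord = { name: "Adam", unicode: true };
migratePlayer(oldRecord);
console.log(oldRecord.unicode); // ["BMP"]
migratePlayer(oldRecord);       // already migrated: a no-op
```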
31 Jan, 2013, Telgar wrote in the 10th comment:
Votes: 0
plamzi said:
First off, I'd like to point out that this is not a challenge unique to JS. If you have instanced objects in memory and you wish to alter the constructor at runtime, you'd be writing a lot of "meta-code" in any language. And in most languages, it will look pretty convoluted and headache-inducing.

Secondly, while I'm a fan of runtime updates for most things, I agree with Runter that pulling stunts like these should be confined to a dev environment, where technically you don't need to worry about converting objects already in memory, and where, honestly, it's not worth it to write and maintain such code because you can just reboot whenever you modify structures.

You definitely get points for cleverness and your solution proves your JS understanding is better than mine at this time. That said, not everything that can be done is worth doing. I've done a good bit with dynamic updates in C and node.js, but I think trying to make everything update-able can be a huge time sink with no practical gain whatsoever.


There's a second piece you need to make everything dynamically updatable; with a little work you should be able to get this working in node.js.

I'm sure if you have done much node.js programming, you'll have noticed right away that once you reload any module, references to the obsolete module become a problem. They get implicitly cached in all sorts of places: closures, local variables. This is an unfortunate side effect of node.js' otherwise quite excellent module system; it was designed to support complex load dependencies, but not to support loading a module more than once.
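For concreteness, node's standard escape hatch only gets you halfway. Deleting the cache entry makes the next require() re-evaluate the file, but it does nothing for references other modules captured earlier (the "./reports" path is hypothetical):

```javascript
// Force the next require() of a module to re-read it from disk.
function reloadModule(path) {
    delete require.cache[require.resolve(path)];
    return require(path);
}

// Usage (hypothetical module path):
//   var reports = require("./reports");      // cached copy
//   var fresh   = reloadModule("./reports"); // freshly evaluated copy
//   // ...but any module that grabbed the old `reports` object earlier
//   // still holds the stale reference.
```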

The solution is to use a global namespace manager which updates all those local variables and even peeps into closures for you. How?

////////////////////////////////////////////////////////////////////////////
// //
// NAMESPACES //
// //
// This module allows clients to register private namespaces, detect //
// collisions, and find module access points using namespaces, rather //
// than global identifiers //
// //
////////////////////////////////////////////////////////////////////////////

function Namespaces() {

var debug = true;
var _Namespace = {};
var _topLevel;
var _callouts = {};
var _recurse_node = "";

if (!(this instanceof Namespaces)) {
return new Namespaces();
};

function NamespaceError(message) {
if (!(this instanceof NamespaceError)) {
return new NamespaceError(message);
}
this.name = "NamespaceError";
this.message = message;
return this;
}
NamespaceError.inherits(Error);

function Node(parent, name) {
if (!(this instanceof Node)) {
return new Node(parent, name);
}
this._affixes = {};
this._parent = parent;
this._callout = undefined;
this._interface = undefined;
this._name = name;
return this;
};

Node.prototype.append = function(affix, n) {
this._affixes[affix] = n;
};

Node.prototype.hasAffix = function(affix) {
return this._affixes.hasOwnProperty(affix);
};

Node.prototype.getAffix = function(affix) {
return this._affixes[affix];
};

Node.prototype.deleteAffix = function(affix) {
delete this._affixes[affix];
return this;
};

Node.prototype.erase = function() {
this._interface = undefined;
if (this.hasOwnProperty("_callout")) {
delete _callouts[this._callout];
}
this._callout = undefined;
};

Node.prototype.detach = function() {
this._parent = undefined;
};

Node.prototype.calloutMatches = function(callout) {
return !this.hasOwnProperty("_callout") || this._callout === undefined ||
this._callout === null || this._callout === callout;
};

Node.prototype.getInterface = function() {
return this.hasOwnProperty("_interface") ? this._interface : null;
};

Node.prototype.init = function(impl, callout) {
if (impl) {
this._interface = impl;
}
if (callout) {
this._callout = callout;
}
};

Node.prototype.getParent = function() {
return this._parent;
};

Node.prototype.getChildren = function() {
var out = [];
for (var affix in this._affixes) {
if (this._affixes.hasOwnProperty(affix)) {
out.push(affix);
}
}
return out;
};

Node.prototype.hasChildrenOrData = function() {
for (var affix in this._affixes) {
if (this._affixes.hasOwnProperty(affix)) {
return true;
}
}
if (this._interface) {
return true;
}
return false;
};

Node.prototype.recurse = function(func) {
if (this._interface) {
if (debug) {
_recurse_node = this._name;
}
func(this._interface);
}
for (var affix in this._affixes) {
if (this._affixes.hasOwnProperty(affix)) {
this._affixes[affix].recurse(func);
}
}
};

function onAllNodes(func)
{
_topLevel.recurse(func);
};


// Namespace.register
//
// Adds a namespace to the tree, creating any parent namespaces
// embedded in the name
//
// @param name - colon-delimited namespace identifier
// @param callout - if present, a function name which should be searched
// for in all interfaces, called and passed our module
// @param impl - the interface module provided by this namespace
// @param descr - a description of this namespace

function register(name, callout, impl, descr) {
if (_Namespace.hasOwnProperty(name) &&
!_Namespace[name].calloutMatches(callout)) {
throw NamespaceError(
"Namespace " + name + " already defined with another callout"
);
}
this[name] = impl;
if (callout && _callouts.hasOwnProperty(callout) &&
_callouts[callout].namespace !== name) {
throw NamespaceError(
"Namespace " + name + " attempting to reuse callout defined by " +
_callouts[callout].namespace
);
}
if (impl && !(impl instanceof Object)) {
throw NamespaceError(
"Namespace " + name + " has non-object interface"
);
}
var namespace_tree = name.split(":");
if (namespace_tree.length >= 1) {
var prefix = "";
var outer = _topLevel;
for (var i = 0, len = namespace_tree.length; i < len; i++) {
var affix = namespace_tree[i];
prefix += affix;
if (!outer.hasAffix(affix)) {
var n = new Node(outer, affix);
outer.append(affix, n);
_Namespace[prefix] = n;
}
prefix += ":";
outer = outer.getAffix(affix);
}
outer.init(impl, callout);

// If we have an implementation, scan for callouts
if (impl) {
for (var func in _callouts) {
if (!_callouts.hasOwnProperty(func)) {
continue;
}
var module = _callouts[func].module;
if (impl.hasOwnProperty(func)) {
if (debug) {
mudlog("Making callout " + func + " for " + name);
}
impl[func](module);
}
}
if (impl.hasOwnProperty("initialize")) {
impl.initialize();
}
}

// If we added a callout, check if any nodes need to re-register
if (callout) {
var callout_obj = {
namespace: name,
module: impl
};
_callouts[callout] = callout_obj;

// Call any pre-existing nodes that need to register
// with the new module

onAllNodes(
function(node) {
if (node.hasOwnProperty(callout)) {
if (debug) {
mudlog("Making callout " + callout + " for " +
_recurse_node);
}
node[callout](impl);
}
});
}

return outer;
}
return null;
};


// Namespace.delete
//
// Removes a namespace from the namespace tree
//
// @param name - namespace to remove
// @param recursive - delete all nodes beneath this node
// @param cleanup - clean up unused parent nodes

function deleteInternal(name, recursive, cleanup) {
cleanup = (cleanup === undefined) ? true : cleanup;

if (!_Namespace.hasOwnProperty(name)) {
throw NamespaceError(
"Namespace " + name + " undefined or already defined with another key"
);
}
var victim = _Namespace[name];
victim.erase();

if (recursive) {
var children = victim.getChildren();
for (var i = 0, len = children.length; i < len; i++) {
var sub_space = name + ":" + children[i];
try {
deleteInternal(sub_space, true, false);
} catch (e) {
error_log("Unable to delete protected subspace " + sub_space);
}
}
}

var namespace_tree = name.split(":");
var prefix = name;
for (len = namespace_tree.length, i = len - 1; i >= 0; i--) {
if (victim.hasChildrenOrData()) {
return;
}
var affix = namespace_tree[i];
var outer = victim.getParent();
assert_true("Tree Integrity test", outer.hasAffix(affix));
outer.deleteAffix(affix);
assert_false("Tree Integrity test", outer.hasAffix(affix));
victim.detach();
victim = outer;
delete _Namespace[prefix];
prefix = prefix.delimitedParent(':');
if (!cleanup) {
return;
}
}
};


// Namespace.getInterface
//
// Gets a module interface from a namespace name
//
// @param name - colon-delimited namespace identifier

function getInterface(name) {
if (_Namespace.hasOwnProperty(name)) {
return _Namespace[name].getInterface();
}
return null;
};


// Namespace.getChildren
//
// Gets a list of child Namespace given a namespace identifier
//
// @param name - colon-delimited namespace identifier

function getChildren(name) {
if (_Namespace.hasOwnProperty(name)) {
return _Namespace[name].getChildren();
}
return null;
};

function resetInternal() {
_Namespace = {};
_topLevel = new Node(null, "");
_Namespace[""] = _topLevel;
};

resetInternal();

return {
register: register,
delete: deleteInternal,
getInterface: getInterface,
getChildren: getChildren,
reset: function() {
resetInternal();
}
};
};

(function unitTest() {
try {
error_log("BEGIN UNIT TEST: Namespace");
var Namespace = new Namespaces();

// Set to true to debug this test
var log = true;

var impl = { a: "test" };
var ns = Namespace.register("Animal:Mammal:Dog:Snoopy", "Secret", impl);
assert_true ("Namespace Test 2A: register", ns.getInterface() === impl, log);
assert_true ("Namespace Test 2B: register",
Namespace.getInterface("Animal:Mammal:Dog:Snoopy") === impl, log);
assert_equals ("Namespace Test 3: getChildren",
Namespace.getChildren("Animal:Mammal:Dog"), [ "Snoopy" ], log);
assert_equals("Namespace Test 4: get top level",
Namespace.getChildren("").length, 1, log);
assert_true ("Namespace Test 5: get top level",
Namespace.getChildren("").indexOf("Animal") >= 0, log);
test_exception_func("Namespace Test 6: exceptions",
Namespace.register,
["Animal:Mammal:Dog:Snoopy", "Oops", { a: "bad" }],
"NamespaceError", log);
var impl2 = { a: "test2" };
var ns2 = Namespace.register("Animal:Mammal:Dog:Snoopy", "Secret", impl2);
assert_true ("Namespace Test 7: re-register", ns2.getInterface() === impl2, log);
var impl3 = { a: "test3" };
Namespace.register("Animal:Mammal", "FieFouFum", impl3);
assert_true ("Namespace Test 8: sub-register",
Namespace.getInterface("Animal:Mammal") === impl3, log);
test_exception_func("Namespace Test 9: exception on bad deregister",
Namespace.delete,
["Animal:Mammal:Doog"],
"NamespaceError", log);
Namespace.register("Animal:Mammal:Dog:Snoopy:Bone:Pillow", null, impl2);
Namespace.register("Animal:Mammal:Dog:Snoopy:Bone", "VerySecret", impl);
Namespace.delete("Animal:Mammal:Dog:Snoopy", "Secret", true);

error_log("PASSED UNIT TEST: Namespace");
} catch (e) {
var errstr = e.name + ": " + e.message;
error_log(errstr);
error_log("FAILED UNIT TEST: Namespace");
throw e;
}
}());

if (typeof Global === "undefined") {
var Namespace = new Namespaces();
var Global = Namespace;
Namespace.register("Namespace", "withNamespace", Namespace, "Global namespace");
}


Now, every module you define gets wrapped by the global Namespace, and defines a link procedure, typically "withModuleXYZ".

Then I can write a new module, reload it independently of all the others, and the above code makes sure all private linked copies of it are updated. And it's a near fool-proof guarantee: thanks to the closure wrapping above, nobody can hold a reference to my namespace unless they register a "withMyModule" linker of their own.

For instance,

/////////////////////////////////////////////////////////////////
// Reports
/////////////////////////////////////////////////////////////////

Namespace.register("Reports", "withReports", (function() {

const THIS_MODULE = "Reports";
const MAX_REPORT = 3000;

// Imports
var FileManager;

function createReport(type, player, data) {
var lower = type.toLowerCase();
var id = FileManager.createUniqueId("Reports", lower+".id");
var abuse = (type === "Abuse");
var fname = "Reports/" + type.capitalize() +
(abuse ? "s" : "") + "/" + id;
var schema = "Messages:" + lower;
var report = { text: data,
playerId: player.playerId };
if (!abuse) {
report.player = player.name;
}
FileManager.save(fname, report, schema, player.name);
return id;
};

function withFileManager(module) {
FileManager = module;
};

function withSchemaManager(SchemaManager) {
SchemaManager.importSchema("Messages:bug");
SchemaManager.importSchema("Messages:typo");
SchemaManager.importSchema("Messages:idea");
SchemaManager.importSchema("Messages:petition");
SchemaManager.importSchema("Messages:abuse");
};

return {
MAX_REPORT : MAX_REPORT,
withFileManager : withFileManager,
withSchemaManager : withSchemaManager,
createReport : createReport
};
})());


I can reload the Reports module, and anyone with a cached reference will get their reference updated by the withReports callback. Similarly, if I change the FileManager, to say, catch exceptions on file errors so the caller doesn't need to, this reports code above gets updated with a new reference to the FileManager module.
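A consumer of Reports would then look something like this sketch (CommandModule and doBug are illustrative names of mine; the manual withReports calls below play the role the namespace manager plays automatically):

```javascript
// Consumer keeps its import in a private slot that the linker callback
// refreshes on every (re)load of the Reports module.
var CommandModule = (function() {
    var Reports;                   // import slot

    function withReports(module) { // linker callback
        Reports = module;
    }

    function doBug(player, text) {
        return Reports.createReport("Bug", player, text);
    }

    return { withReports: withReports, doBug: doBug };
})();

// What the namespace manager does on each load of Reports, done by hand:
var reportsV1 = { createReport: function() { return "filed by v1"; } };
CommandModule.withReports(reportsV1);
console.log(CommandModule.doBug({}, "oops")); // "filed by v1"

var reportsV2 = { createReport: function() { return "filed by v2"; } };
CommandModule.withReports(reportsV2);         // "reload": swap the reference
console.log(CommandModule.doBug({}, "oops")); // "filed by v2", no stale copy
```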

I just think it's awesome that with 47 lines of code, I now have a module that can file bugs, typos, ideas, petitions, and abuse reports.. this is why I am a Javascript convert.
01 Feb, 2013, plamzi wrote in the 11th comment:
Votes: 0
Telgar said:
I'm sure if you have done much node.js programming, you'll have noticed right away that once you reload any module, references to the obsolete module become a problem. They get implicitly cached in all sorts of places: closures, local variables. This is an unfortunate side effect of node.js' otherwise quite excellent module system; it was designed to support complex load dependencies, but not to support loading a module more than once.

The solution is to use a global namespace manager which updates all those local variables and even peeps into closures for you. How?

384 + 47 lines of code

I can reload the Reports module, and anyone with a cached reference will get their reference updated by the withReports callback. Similarly, if I change the FileManager, to say, catch exceptions on file errors so the caller doesn't need to, this reports code above gets updated with a new reference to the FileManager module.

I just think it's awesome that with 47 lines of code, I now have a module that can file bugs, typos, ideas, petitions, and abuse reports.. this is why I am a Javascript convert.




Yes, I'm aware of the implicit module caching behavior. What you're doing is a lot of fancy footwork. If I had to go down that road just to get dynamic updates, I would have probably given up. Fortunately, there's a way to think outside of modules and end up with a one-line solution that gives me all the reloadability I need:


loadF: function(f) { try { eval(fs.readFileSync(o.path+f)+'') } catch(err) { srv.log(err) } }


In my particular project, the global state is kept in the database, so I don't have to worry about that part at all (and so I have 100% reloadability). But even if I had the state in memory, the complexities of writing and maintaining the kind of code that can update instanced objects far outweigh, in my mind, any potential benefits (pushing fundamental changes without having to take the server down?).

I can totally see that you're taking this on as a challenge, and you're learning a lot in the process. That's a valid approach. My approach is to make a beeline for the finish. I also end up learning a lot, every time :)

P. S.
node.js is my new god
01 Feb, 2013, Runter wrote in the 12th comment:
Votes: 0
Quote
node.js is my new god


I'm glad to see others in the community besides myself taking interest in it.
01 Feb, 2013, Telgar wrote in the 13th comment:
Votes: 0
plamzi said:

loadF: function(f) { try { eval(fs.readFileSync(o.path+f)+'') } catch(err) { srv.log(err) } }


In my particular project, the global state is kept in the database, so I don't have to worry about that part at all (and so I have 100% reloadability). But even if I had the state in memory, the complexities of writing and maintaining the kind of code that can update instanced objects far outweigh, in my mind, any potential benefits (pushing fundamental changes without having to take the server down?).

I can totally see that you're taking this on as a challenge, and you're learning a lot in the process. That's a valid approach. My approach is to make a beeline for the finish. I also end up learning a lot, every time :)

P. S.
node.js is my new god


The database approach is a nice way to handle it. The problem, of course, is that you have to put EVERYTHING in the database. And implementing full copyover support across reboots is a bitch. Even after you get the file descriptors passed over intact, you then realize you need to copy over horrible things like telnet negotiation state and possibly even buffer state.

As far as my approach goes, it's not so much the appeal of the challenge; I would say not having it is a challenge to progress. Since you can't exactly sit down and debug server-side embedded Javascript in a debugger (at least, I haven't written one yet…), and it's a giant PITA to restart the entire server, log back in, and re-create whatever player state you were debugging every time you want to update a single line of code for testing, this approach eventually becomes necessary. If you have modularized your code, you are at a significant disadvantage for reloading. However, I would argue that the long-term benefits of modularity in terms of code maintainability outweigh the temporary benefits of a monolithic strategy.

In my case, I want both. I want the simplicity of integration that monolithic code has, along with the unit-testability and isolation that modules provide. I won't compromise on this. I am DJ Rhuby Rhod: I don't want one position. I want ALL positions. I looked extensively at Node.js before going home-brew, and for a while it almost won, but after hitting this problem, I realized, there had to be a better way. Simply eval()ing new code works for lots of cases, but when you are dealing with code that has callbacks and thus cached references to previous instances of your code, you rapidly approach nightmarish amounts of un-debuggability.

There's actually a bug in the fancy code above, btw… specifically in Function.staticClass… I have fixed it but the fix itself looks like an alien artifact. If anyone can spot it, PM me for a code to get a free "Ice Cream Cone" offering permanent nourishment (in exchange for continuous weight gain) once this MUD is up and running. If you can find a fix, talk to me about possibly becoming a coder…
01 Feb, 2013, Runter wrote in the 14th comment:
Votes: 0
I assure you that what you are describing isn't necessary anywhere. It's more useful in development and testing environments, which mitigates the need for it to be bug-free. It isn't even necessary for large commercial ventures with tens or hundreds of people developing.

It's extremely bad practice to write live code in production, for many reasons, even when it's sandboxed. I think if we're not careful here, people are going to take away a really bad practice from this. Some employers would fire their devs if they did what you describe and they found out about it later. It's extremely dangerous and does players no service.
01 Feb, 2013, Telgar wrote in the 15th comment:
Votes: 0
I assure you it is, in exactly the environment you suggest, development and testing. And if you like to develop in a buggy dev environment, that is your business. I do not.

Nobody is advocating developing code in production. Apparently you've never heard of the term hotfix. And while I do know of many people who have been fired for not being able to do such a thing, I have yet to hear of a case where someone was fired for adding another option to a crisis manager's toolbox.
02 Feb, 2013, plamzi wrote in the 16th comment:
Votes: 0
Telgar said:
I assure you it is, in exactly the environment you suggest, development and testing. And if you like to develop in a buggy dev environment, that is your business. I do not.

Nobody is advocating developing code in production. Apparently you've never heard of the term hotfix. And while I do know of many people who have been fired for not being able to do such a thing, I have yet to hear of a case where someone was fired for adding another option to a crisis manager's toolbox.


For a brief post, this one confuses a remarkable number of things. I can only assume it's an emotional reaction to misunderstanding Runter's (and my own) points. Just keep in mind that both Runter and I speak from experience. Maybe you have to live it to believe it.

Hopefully, you won't get to a point late in the game where you find out that there's something fundamentally dysfunctional about writing your entire code this way. That would be a huge waste. If you end up with functional code that's hard to maintain and that only you can understand, that's the lesser evil.

P. S. Also, the term "hotfix" doesn't mean what you think it means.
02 Feb, 2013, Rarva.Riendf wrote in the 17th comment:
Votes: 0
I know of no one who has been fired for not being able to fix code while it is running.
Mostly because it is more dangerous to replace faulty code (whose fault you at least know) with other code which may also be buggy and even worse (as it was rushed into production with less testing than the faulty code to begin with), meaning you could corrupt your data even more (and now with two different bugs in a row; kind of nightmarish to fix).

I am curious to know in which field of work that kind of capability is a prerequisite, and so frequently needed that you know many people who have been fired over it.

Who the hell needs that for a MUD anyway, when a copyover recover can take less than a few seconds…
02 Feb, 2013, Runter wrote in the 18th comment:
Votes: 0
Telgar said:
I assure you it is, in exactly the environment you suggest, development and testing. And if you like to develop in a buggy dev environment, that is your business. I do not.

Nobody is advocating developing code in production. Apparently you've never heard of the term hotfix. And while I do know of many people who have been fired for not being able to do such a thing, I have yet to hear of a case where someone was fired for adding another option to a crisis manager's toolbox.


I don't know of any products where hotfixes are regular that don't require a reboot of some kind. The term certainly has nothing to do with reloading code on production.

You know people who have been fired for not being able to do what?
When do you think there would ever be such a crisis that requires you to apply code this way to the production server over a simple reboot?

The typical process for almost every product in this case is emergency maintenance, where the game is down long enough to fix or roll back the changes. It's almost never a 2-second fix, and even if it were, you shouldn't apply it directly to the production server without testing it on development. Also, when I say testing I don't mean log in and test what you just did; I mean writing test suites. Assuming you write a test in your suite every time you add features, and usually tens of tests per feature, you will need to run them all at least once before doing any deployment, especially a hotfix. In almost every non-trivial app I've worked on, the test suite can take anywhere from 10 minutes to an hour depending on the kind of tests being run.

I have no doubt you're a good programmer. My point is that there's no dire need for this code to be in place. I was wholly okay with what you were saying until you said this is an exercise of necessity. Other readers may come here later thinking that's true.
02 Feb, 2013, quixadhal wrote in the 19th comment:
Votes: 0
Even with my somewhat limited experience (compared to others here), I can assure you that "hotfix" doesn't mean "slap a code correction onto the live server, untested." Even in a little startup, when we identified a problem (even a "critical" problem), we still had to develop and TEST the fix on a test server and get approval before pushing it live. "Hotfix" means a fix that's applied outside the NORMAL upgrade/maintenance schedule – in no way does that imply it hasn't been tested first.

In practice, most businesses have a regular maintenance schedule and schedule upgrades several weeks in advance. So, normal bugfixes and updates get rolled together to fit into one of those timeframes. If your bug is so critical that it requires an immediate response, you have the choice of shutting down your service with a short ETA, or restoring an earlier backup of both code AND data. If you go the backup route, you not only have to test your fixes, but you have to do forensics on the corrupted database to try and salvage as much data as you can from the time between your snapshot and the time you shut it down. Merging those changes into your NEW live-from-backup server is often a much harder problem than fixing the bug.

Of course, you're not running a business where people's livelihoods and/or lives are on the line… so you can just stick up a web page saying "Down for maintenance, be back soon!" and field any complaints about missing items/XP/etc. by reimbursement, as needed.
02 Feb, 2013, Telgar wrote in the 20th comment:
Votes: 0
You all are muddying the point, as am I, by misusing terminology.

For me, reboot = reload Javascript code.
For me, persistent game state = data reachable from objects defined in the C++ embedder
For me, stable game state = persistent game state which has been flushed to disk

Thus, a hotfix is exactly what I speak of, a "reboot" which consists of re-loading Javascript code. The fact that the server process is not restarted is no longer relevant because we are cooking inside the box.
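For the curious, the core idea behind surviving such a "reboot" can be sketched like this (a deliberately simplified illustration with hypothetical names, not the actual staticClass code): pin one constructor/prototype pair per class name in a global registry, and have each reload swap the methods on the SAME prototype object instead of creating a new one.

```javascript
// Simplified sketch of reload-safe classes: the constructor and its
// prototype are created exactly once and pinned in a registry; a reload
// only replaces the methods in place, so existing instances pick up the
// new behavior and instanceof keeps working across reloads.
var CLASS_REGISTRY = {};

function reloadableClass(name, define) {
  if (!CLASS_REGISTRY[name]) {
    // First load: create the one-and-only constructor for this name.
    CLASS_REGISTRY[name] = function () {
      if (this.init) this.init.apply(this, arguments);
    };
  }
  var ctor = CLASS_REGISTRY[name];
  // Strip the old (enumerable) methods, then install the new ones
  // onto the same prototype object shared by all existing instances.
  Object.keys(ctor.prototype).forEach(function (key) {
    delete ctor.prototype[key];
  });
  define(ctor.prototype);
  return ctor;
}

// First load of the "module":
var Player = reloadableClass("Player", function (proto) {
  proto.init = function (name) { this.name = name; };
  proto.isImmortal = function () { return this.name === "Adam"; };
});
var adam = new Player("Adam");

// "Reboot": the module source is re-evaluated with updated code.
var PlayerReloaded = reloadableClass("Player", function (proto) {
  proto.init = function (name) { this.name = name; };
  proto.isImmortal = function () { return this.name === "Steve"; };
});

adam instanceof PlayerReloaded;  // still true: same constructor object
adam.isImmortal();               // false now: old instance, new code
```

Contrast this with the naive version in the first post, where redefining Player gave new instances a fresh prototype and left old ones stranded on the stale code.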

As for where the notion that untested code is going directly into production came from, I have no idea. Certainly, given the system I have, that is possible, but why on god's earth would I do that when I can pursue the methodology of writing up a fix, testing the code on a test instance, then after we are sure everything is good and passes unit tests, "reboot" production?