21 Jul, 2016, Rhien wrote in the 1st comment:
Votes: 0
I'm still using flat files in my code base, which I plan on keeping, but I would like to insert data into a sqlite database for use elsewhere. I write a lot of database code, but never from C. I have the code below, which works, but as this is my first time writing to a sqlite db from a C program I wanted a sanity check on it in case anybody can see any blatant errors I've made (leaks, poor logic, etc.).

/*
 * Export data about each area file.
 */
void export_areas(void)
{
    sqlite3 *db;
    int rc;
    sqlite3_stmt *stmt;
    AREA_DATA *pArea;

    rc = sqlite3_open(EXPORT_DATABASE_FILE, &db);

    if (rc != SQLITE_OK)
    {
        bugf("export_areas -> Failed to open %s", EXPORT_DATABASE_FILE);
        goto out;
    }

    // Total reload every time, drop the table if it exists.
    if ((sqlite3_exec(db, "DROP TABLE IF EXISTS area;", 0, 0, 0)))
    {
        bugf("export_areas -> Failed to drop table: area");
        goto out;
    }

    // Create the table
    if ((sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS area(name TEXT PRIMARY KEY, min_level INTEGER, max_level INTEGER, builders TEXT, continent INTEGER, area_flags INTEGER);", 0, 0, 0)))
    {
        bugf("export_areas -> Failed to create table: area");
        goto out;
    }

    // Begin a transaction
    sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, NULL);

    // Prepare the insert statement that we'll re-use in the loop
    sqlite3_prepare(db, "INSERT INTO area (name, min_level, max_level, builders, continent, area_flags) VALUES (?1, ?2, ?3, ?4, ?5, ?6);", -1, &stmt, NULL);

    // Loop over all areas and save the area data for each entry.
    for (pArea = area_first; pArea; pArea = pArea->next)
    {
        sqlite3_bind_text(stmt, 1, pArea->name, -1, SQLITE_STATIC);
        sqlite3_bind_int(stmt, 2, pArea->min_level);
        sqlite3_bind_int(stmt, 3, pArea->max_level);
        sqlite3_bind_text(stmt, 4, pArea->builders, -1, SQLITE_STATIC);
        sqlite3_bind_int(stmt, 5, pArea->continent);
        sqlite3_bind_int(stmt, 6, pArea->area_flags);

        rc = sqlite3_step(stmt);

        if (rc != SQLITE_DONE)
        {
            printf("export_areas -> Error inserting data for %s: %s\n", pArea->name, sqlite3_errmsg(db));
        }

        sqlite3_reset(stmt);
    }

    sqlite3_exec(db, "COMMIT TRANSACTION", NULL, NULL, NULL);
    sqlite3_finalize(stmt);

out:
    // Cleanup
    sqlite3_close(db);
    return;
}
22 Jul, 2016, Pymeus wrote in the 2nd comment:
Votes: 0
I'd recommend checking the return value of *every* sqlite call, including the transaction begin/commit, reset, and value bindings.
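If the checks start feeling repetitive, a tiny wrapper keeps them cheap. Untested sketch, reusing the bugf() logger from your snippet (the wrapper name is made up):

/* Untested sketch: wrap sqlite3_exec() so every call gets checked and logged. */
static int exec_checked(sqlite3 *db, const char *sql, const char *caller)
{
    char *err = NULL;
    int rc = sqlite3_exec(db, sql, NULL, NULL, &err);

    if (rc != SQLITE_OK)
    {
        bugf("%s -> sqlite error %d: %s", caller, rc, err ? err : "unknown");
        sqlite3_free(err);
    }

    return rc;
}

/* Usage in export_areas():
 *     if (exec_checked(db, "BEGIN TRANSACTION", "export_areas") != SQLITE_OK)
 *         goto out;
 * The prepare, bind, step, and reset calls all return an int that can be
 * checked the same way. */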

(Also, unless you're experienced enough with databases to know what you're getting yourself into, I'd recommend numeric primary keys. Minor performance differences aside, text primary keys tend to bite back in unexpected ways later on.)
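For the area table that would just mean keying on an integer instead of the name, e.g. (sketch only; assumes your AREA_DATA has some numeric id like a vnum to use, and error checks are omitted for brevity):

/* Sketch: same columns, keyed on an integer vnum instead of the name. */
const char *create_area_sql =
    "CREATE TABLE IF NOT EXISTS area("
    "vnum INTEGER PRIMARY KEY, "
    "name TEXT, "
    "min_level INTEGER, "
    "max_level INTEGER, "
    "builders TEXT, "
    "continent INTEGER, "
    "area_flags INTEGER);";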
22 Jul, 2016, Rhien wrote in the 3rd comment:
Votes: 0
Thanks for the advice. After I wrote this I actually changed this instance to use the area->vnum instead of the name as the key (it's assigned dynamically based on load order, but since this would reload the game data from scratch every time, any queries would continue to work unless they reference that key value specifically). Normally I'd join on the key but query on the name.

Once I started loading the various flag tables in and joining things up, I quickly found missing pieces.
25 Jul, 2016, Rhien wrote in the 4th comment:
Votes: 0
I think I'm going to split the bits for things like extra_flags out into foreign key tables, as that would be the best relational way to do it, but sqlite also supports bitwise searches, which means objects could be queried with a syntax much like IS_SET, which is kind of cool.

E.g.

select * from object where extra_flags & (4 | 8)


vs.

select * from object o
inner join object_extra_flags oef on oef.vnum = o.vnum
where (oef.flag = 4 or oef.flag = 8)
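If I go the foreign key route, the export side would be something like this (rough sketch; object_extra_flags matches the join above, pObj and the 32-bit flag width are assumptions, and error checks are trimmed for brevity):

/* Rough sketch: one row per set bit, storing the bit value itself so the
 * query above (oef.flag = 4 or oef.flag = 8) keeps working. */
int bit;

sqlite3_exec(db,
    "CREATE TABLE IF NOT EXISTS object_extra_flags(vnum INTEGER, flag INTEGER);",
    0, 0, 0);

sqlite3_prepare_v2(db,
    "INSERT INTO object_extra_flags (vnum, flag) VALUES (?1, ?2);",
    -1, &stmt, NULL);

/* ... inside the loop over each object pObj ... */
for (bit = 0; bit < 32; bit++)
{
    if (pObj->extra_flags & (1 << bit))
    {
        sqlite3_bind_int(stmt, 1, pObj->vnum);
        sqlite3_bind_int(stmt, 2, 1 << bit);

        if (sqlite3_step(stmt) != SQLITE_DONE)
            bugf("export_objects -> insert failed: %s", sqlite3_errmsg(db));

        sqlite3_reset(stmt);
    }
}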
30 Jul, 2016, quixadhal wrote in the 5th comment:
Votes: 0
My personal suggestion is to try and keep your SQL as close to standard SQL as you can.

Things like "CREATE TABLE IF NOT EXISTS" or "INSERT OR UPDATE" seem like nice time saving things, until you decide to switch databases in another year or two, and those all stop working.

Database code in C is pretty ugly stuff, because you can't do any self-inspection of data types, nor can you create new classes on the fly… so you get to do all the legwork.

In terms of flags, I'd prefer the usual concept of foreign keys into lookup tables.

select * from object natural join extra_flag_types where extra_flag IN ('GLOW', 'LIGHT');


And of course, extra_flag_types would just be a two column table of extra_flag_id being the bit number, and extra_flag being the text name of it. If you had the bit numbers, you'd then be using extra_flag_id instead. :)
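Populating it from the C side is a one-off, something like this (sketch; the id values are placeholders for whatever your real bit numbers are):

/* Sketch: two-column lookup of bit number -> flag name. */
sqlite3_exec(db,
    "CREATE TABLE IF NOT EXISTS extra_flag_types("
    "extra_flag_id INTEGER PRIMARY KEY, extra_flag TEXT);",
    0, 0, 0);

sqlite3_exec(db,
    "INSERT INTO extra_flag_types (extra_flag_id, extra_flag) VALUES "
    "(0, 'GLOW'), (1, 'LIGHT');",
    0, 0, 0);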
30 Jul, 2016, Rhien wrote in the 6th comment:
Votes: 0
Thanks for the advice. You've got good thoughts, and the more I've plugged away at it and queried the data as I've finished pieces, the more I've updated the schema to be relational (the object table links to an object_affect table, etc.). I've kept the int values for the flags in the tables, but I've also started busting each one out into a foreign key table.

I think I'm going to leave the sqlite-specific SQL in, if only because there won't be a ton of it, and I assume that if I were ever to change platforms, the most cumbersome part would be updating all of the C code that would have to change anyway.

It's working well so far (I like that sqlite is self-contained and easily usable from anywhere without a lot of setup for someone).
31 Jul, 2016, Pymeus wrote in the 7th comment:
Votes: 0
Of course, to many developers actual standard SQL feels surprisingly limited after developing on any mature RDBMS for a while. For that matter, as far as I know no actual RDBMS fully implements any version of the standard in the first place.

Another common approach, if you want the ability to migrate arbitrarily between RDBMS systems, is to try to shoehorn some kind of suitable abstraction layer (or even an ORM when outside of C/C++) in between the database calls and the main program logic. This gives you the ability to address the syntactic and behavioral incompatibilities between different database systems in a single place instead of in each SQL query sprinkled throughout your code. We could debate the merits of either approach endlessly – neither is one-size-fits-all.
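In C that layer tends to boil down to a small table of function pointers that the rest of the code calls instead of touching the driver directly. Very rough sketch, all names made up:

/* Very rough sketch of a backend-neutral seam; names are made up. */
typedef struct db_backend
{
    int  (*open)(void **handle, const char *dsn);
    int  (*exec)(void *handle, const char *sql);
    void (*close)(void *handle);
} DB_BACKEND;

/* One implementation wraps sqlite3, another could wrap libpq or the mysql
 * client; the game code only ever sees the DB_BACKEND it was handed. */
extern const DB_BACKEND sqlite_backend;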

To just scratch the surface, consider that at a very basic level sqlite is dynamically typed, postgres is fairly strictly statically typed, and mysql/mariadb is hovering somewhere in between. Or the syntactic conflict of mysql/mariadb's AUTO_INCREMENT vs sqlite3's AUTOINCREMENT vs postgres SERIAL; you often don't want to work out that logic each time that your code needs to CREATE TABLE. Or even the nasty surprises that can spring up when you migrate your sqlite or postgres project complete with carefully-constructed CHECK constraints over to mysql/mariadb, only to realize later that CHECKs are syntactically accepted but otherwise ignored by those systems and your tables have been silently filling with garbage since you introduced that subtle bug 3 weeks ago.
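Just the auto-increment conflict looks roughly like this across the three (from memory, so double-check before copying):

/* Same "id" column, three dialects. */
const char *ddl_sqlite   = "CREATE TABLE area (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);";
const char *ddl_mysql    = "CREATE TABLE area (id INT AUTO_INCREMENT PRIMARY KEY, name TEXT);";
const char *ddl_postgres = "CREATE TABLE area (id SERIAL PRIMARY KEY, name TEXT);";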
01 Aug, 2016, Rhien wrote in the 8th comment:
Votes: 0
If I were running all of the game data off of these tables I would be concerned with most of what you brought up because they're good thoughts. Currently, I'm only exporting the game tables and data (areas, objects, help files, etc.) so they can be easily consumed from something like a web interface. I'd like to give someone a leg up on integrating their game data with a modern web site (I didn't see much of this type of snippet available).

If I were writing in C# the data layer could be easily abstracted with interfaces and a middle tier ORM that does most of the leg work of moving between providers. This IMO is complicated by C (and although I <3 C# there isn't enough time available for me to port ROM to it so I'm sticking with the one that brung me). I hate null terminated character arrays but I love ROM. What can I say. Hehe.
01 Aug, 2016, quixadhal wrote in the 9th comment:
Votes: 0
Another option you might consider…

You could write a database layer in the language of your choice (perl, python, C#, whatever) and have it listen on a localhost-only socket. Then have your MUD connect to that socket and send requests for data to that script, which would collect the data and send it back using JSON or XML or whatever you can parse easily on the C side.

If you used a proxy like that, you would only change the proxy when you switched databases, and your web clients could also talk to the proxy to get their data.

It would, of course, be more work… but it wouldn't be database handling in C, just the same kind of I/O you already do for files, only over a socket instead of a regular file.
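The MUD-side client for a proxy like that stays small. Rough sketch only; the port number and the line-oriented request/response format here are made up for illustration:

/* Sketch of the MUD-side client: connect to the localhost proxy, send one
 * request line, read the reply text (JSON or whatever the proxy speaks). */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int query_proxy(const char *request, char *reply, size_t reply_size)
{
    struct sockaddr_in addr;
    ssize_t n;
    int fd;

    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(7500);                    /* made-up proxy port */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");  /* localhost only */

    if (connect(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0)
    {
        close(fd);
        return -1;
    }

    /* e.g. request = "GET areas\n"; the proxy replies with JSON text. */
    write(fd, request, strlen(request));

    /* Single read keeps the sketch short; a real version would loop. */
    n = read(fd, reply, reply_size - 1);
    reply[n > 0 ? n : 0] = '\0';

    close(fd);
    return n > 0 ? 0 : -1;
}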