05 Feb, 2016, Rhien wrote in the 1st comment:
Votes: 0
I've changed the crypt call in my code base to use the sha256_crypt function that many other MUDs have used (I pulled my version verbatim from SmaugFUSS).

It creates the hash consistently under Ubuntu and seems to work well. It also creates the hash consistently under Raspbian (as expected), and that hash matches the Ubuntu one. Under Windows, the same code produces a consistent hash, but it differs from the Linux versions, which makes the pfiles non-portable. I've checked the input char array in the debugger on both Linux and Windows and the input (e.g. the password) appears identical character for character. Could this somehow be an encoding issue between platforms?

Anybody have a thought off the top of their head what I might be missing?

As a side note, I assumed this returned a plain SHA-256 hash, but it doesn't match up with "echo -n thepassword | sha256sum" in either environment. The code is on GitHub; I can share links to the relevant sections if needed.
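(If the SmaugFUSS sha256_crypt follows the usual crypt(3)-style SHA-256 password scheme, the output includes a salt and many rounds, so it would never match a raw digest. For comparison purposes, a raw digest that should line up with sha256sum can be computed with the plain hashing interface from sha256.c; this is only a sketch and assumes the usual SHA256_Init/SHA256_Update/SHA256_Final names, which may differ in your copy.)

#include <stdio.h>
#include <string.h>
#include "sha256.h"   /* Colin Percival's sha256.c/sha256.h (names assumed) */

int main(void)
{
    const char *password = "thepassword";
    unsigned char digest[32];
    SHA256_CTX ctx;
    size_t i;

    /* Raw, unsalted SHA-256 of the password bytes only. */
    SHA256_Init(&ctx);
    SHA256_Update(&ctx, password, strlen(password));
    SHA256_Final(digest, &ctx);

    /* Print as lowercase hex; this is what sha256sum prints. */
    for (i = 0; i < sizeof(digest); i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}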
07 Feb, 2016, Rhien wrote in the 2nd comment:
Votes: 0
For posterity, I found what's causing it, and I can make it consistent between environments (just tested), but I'm not sure of the implications. I think this would affect, under certain circumstances, most implementations that use the sha256.c from Colin Percival.

There are directives in there that swap some functions (be32enc_vect and be32dec_vect) in or out based on the byte order (assuming that's the actual byte order of the system; I haven't yet found where those macros get defined).

This is the part that got me and where the difference is. On Ubuntu/Raspbian it was running the != BIG_ENDIAN section, while on 64-bit Windows 10 through Visual Studio it was hitting the == BIG_ENDIAN section, and the two routes returned different hashes for the same input text (I think this behavior would be consistent across other cross-platform code bases that use these files unaltered).

#if BYTE_ORDER == BIG_ENDIAN

/* Copy a vector of big-endian int into a vector of bytes */
#define be32enc_vect(dst, src, len) \
        memcpy((void *)dst, (const void *)src, (size_t)len)

/* Copy a vector of bytes into a vector of big-endian int */
#define be32dec_vect(dst, src, len) \
        memcpy((void *)dst, (const void *)src, (size_t)len)

#else /* BYTE_ORDER != BIG_ENDIAN */

/*
 * Encode a length len/4 vector of (int) into a length len vector of
 * (unsigned char) in big-endian form. Assumes len is a multiple of 4.
 */
static void be32enc_vect(unsigned char *dst, const int *src, size_t len)
{
    size_t i;

    for (i = 0; i < len / 4; i++)
        be32enc(dst + i * 4, src[i]);
}

/*
 * Decode a big-endian length len vector of (unsigned char) into a length
 * len/4 vector of (int). Assumes len is a multiple of 4.
 */
static void be32dec_vect(int *dst, const unsigned char *src, size_t len)
{
    size_t i;

    for (i = 0; i < len / 4; i++)
        dst[i] = be32dec(src + i * 4);
}
#endif /* BYTE_ORDER != BIG_ENDIAN */


I didn't think this would work, but I tested removing the directive so both environments use the same code path. I tried forcing the memcpy macros in both environments, and then tried forcing the functions Colin provided, and both seemed to work in both environments (e.g. I could move a pfile from Linux to Windows and get the same hash). Like I said, I don't fully understand the implications of commenting out one branch and forcing both platforms onto the other, but it appears to work in very limited testing (I only tested 20 different salted passwords, but those came out identical).
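For what it's worth, I think the reason forcing the != BIG_ENDIAN branch can still work everywhere is that be32enc/be32dec (at least in the sysendian.h that usually ships alongside this sha256.c; check your copy) are written with byte shifts, so they produce big-endian output regardless of the host's byte order. Roughly, they look something like this:

#include <stdint.h>

/* Store a 32-bit value into dst in big-endian order, regardless of host
 * byte order. */
static void be32enc(void *dst, uint32_t x)
{
    unsigned char *p = (unsigned char *)dst;

    p[0] = (x >> 24) & 0xff;
    p[1] = (x >> 16) & 0xff;
    p[2] = (x >> 8) & 0xff;
    p[3] = x & 0xff;
}

/* Load a 32-bit big-endian value from src, regardless of host byte order. */
static uint32_t be32dec(const void *src)
{
    const unsigned char *p = (const unsigned char *)src;

    return ((uint32_t)p[3]) | ((uint32_t)p[2] << 8) |
           ((uint32_t)p[1] << 16) | ((uint32_t)p[0] << 24);
}

The memcpy branch, by contrast, is only a shortcut that is valid when the host really is big-endian and the bytes are already in the right order.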

My question has changed: is forcing one of the two sections of that directive across the different environments going to get me in trouble in the long run? The goal is that someone could move systems and not have to worry about their hashed passwords changing between environments.
08 Feb, 2016, Tyche wrote in the 3rd comment:
Votes: 0
This might help:
http://stackoverflow.com/questions/21003...

You will need both sections of code so that the hash comes out the same on big-endian and little-endian systems.
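One thing worth checking: if BYTE_ORDER and BIG_ENDIAN aren't defined at all (they normally come from <endian.h> or <sys/param.h> on Unix-like systems, and MSVC doesn't provide them by default, so this may be what's happening on Windows), the preprocessor treats both as 0 and "#if BYTE_ORDER == BIG_ENDIAN" becomes true on a little-endian box. A guard along these lines (just a sketch) makes that failure loud instead of silent:

/* An undefined identifier evaluates to 0 inside #if, so if neither macro is
 * defined, "BYTE_ORDER == BIG_ENDIAN" becomes "0 == 0" and silently selects
 * the big-endian branch. Fail the build instead. */
#if !defined(BYTE_ORDER) || !defined(BIG_ENDIAN) || !defined(LITTLE_ENDIAN)
#error "Byte-order macros are not defined; define BYTE_ORDER for this platform"
#endif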
08 Feb, 2016, Rhien wrote in the 4th comment:
Votes: 0
Thanks for the link. I'm going to use the SO code to see if it identifies the big/little-endian system differently than the "#if BYTE_ORDER == BIG_ENDIAN" check that was there.
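For reference, a simple runtime check along these lines (not necessarily the exact code from the SO answer) should show what the machine actually is:

#include <stdint.h>
#include <stdio.h>

/* Returns 1 on a big-endian host, 0 on a little-endian host, by looking at
 * which byte of a 32-bit 1 comes first in memory. */
static int host_is_big_endian(void)
{
    const uint32_t one = 1;

    return *(const unsigned char *)&one == 0;
}

int main(void)
{
    printf("This host is %s-endian.\n",
           host_is_big_endian() ? "big" : "little");
    return 0;
}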

I'll also double-check that I didn't confuse myself when testing, because what I did shouldn't have worked unless BYTE_ORDER was coming back incorrect. I'll re-run my test and make sure I didn't dork it up, which is fully possible.