02 Dec, 2008, Hades_Kane wrote in the 21st comment:
Votes: 0
That might be my fault…

I pooped in his cornflakes this morning.

My bad!
02 Dec, 2008, Grimble wrote in the 22nd comment:
Votes: 0
elanthis said:
Even a make -j4 on my quad-core machine takes 18 seconds to build the whole thing from scratch.


Yes, I'm running into the same compiling performance issues, also due to heavy use of templates and third-party libraries. It takes about 5 minutes to do a full build, although this is on an older low-end single-core PC. Weeding out any unnecessary includes is about all I can do short of upgrading the hardware. Still, I've worked on projects IRL that have taken hours to build, which often leads to one having time to whittle away at side projects :).
02 Dec, 2008, David Haley wrote in the 23rd comment:
Votes: 0
Have you tried using precompiled headers? More recent versions of gcc support them, although sometimes it's a little flaky. It can dramatically speed up compilation if you put the standard headers (system includes, etc.) into a precompiled header.
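The rough recipe, if you want to try it (double-check the details against your gcc version's docs, since I'm going from memory), is to compile the header itself once so gcc drops a .gch file next to it, and then make sure your normal compiles use the same flags. Something like:

[code]
# stdinc.h is a hypothetical common header that pulls in <vector>, <map>, <string>, etc.
g++ -O2 -Wall -x c++-header stdinc.h      # produces stdinc.h.gch

# later compiles that #include "stdinc.h" first (and use the same flags)
# pick up the .gch automatically
g++ -O2 -Wall -c player.cpp -o player.o
[/code]

If the flags differ or the header isn't the very first include, gcc just silently falls back to the plain header, which is part of why it can feel flaky.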
02 Dec, 2008, elanthis wrote in the 24th comment:
Votes: 0
DavidHaley said:
Have you tried using precompiled headers? More recent versions of gcc support them, although sometimes it's a little flaky. It can dramatically speed up compilation if you put the standard headers (system includes, etc.) into a precompiled header.


In my particular case, most of the slowness is just due to having a rat's nest of includes. Every header includes every header it might maybe kinda need, not to mention a ton it did at one point need but no longer does. I just have to clean everything up, which I'm slowly doing. Once that's done, I may look into getting a precompiled header just for the STL stuff that I use all over the place (vector, map, string, etc.). C++ projects that I've started within recent years generally have much faster compilation times, even if they're significantly larger; I'm just more careful with include files.

Just for fun, I just took a couple minutes to move all the STL includes to a common header. G++ crashes trying to use the precompiled header now. ;)
02 Dec, 2008, David Haley wrote in the 25th comment:
Votes: 0
I try to avoid a mess of includes by very aggressively using forward declarations, and trying to only include headers in .cpp files. Not perfect, but not too bad. At least it cuts down a bit more on needless recompiling, even though it doesn't speed up individual compilations.

And yeah, the precompiled header support is sometimes kind of flaky. :wink:
02 Dec, 2008, elanthis wrote in the 26th comment:
Votes: 0
DavidHaley said:
I try to avoid a mess of includes by very aggressively using forward declarations, and trying to only include headers in .cpp files. Not perfect, but not too bad. At least it cuts down a bit more on needless recompiling, even though it doesn't speed up individual compilations.


Yeah, that's exactly what I'm doing. A lot of my .cc files also have a ton of unnecessary includes though. Slowly cleaning it all up…

Quote
And yeah, the precompiled header support is sometimes kind of flaky. :wink:


:) I managed to get it to build after rearranging a couple of the includes. The STL algorithm header seems to give it the most trouble. Anyways, now with all the STL, IOStreams, ANSI C, and POSIX headers moved to a common precompiled header, I managed to shave about 5 seconds off the average build time with make -j4. That's almost 33% of the total build time, so not too shabby. I'm sure once I finish cleaning up the header nest it'll be a lot better.

It is rather amazing how much time templates add to compilation. That build time includes compiling Lua as well, and the entire set of Lua source files compiles in less time than any two of the Source MUD C++ files (which use the STL/templates).

That's yet another reason C++0x will be so nice. Extern templates. Each template specialization only needs to be compiled once (assuming you instantiate them explicitly in one of your source files) instead of once for every single translation unit. Assuming libstdc++ includes specializations of all the built-in and common STL types for the container classes, and you explicitly instantiate the ones for your custom types, compilation times should vastly improve for C++. … Actually, g++ might already support that; I should look into it.
03 Dec, 2008, Igabod wrote in the 27th comment:
Votes: 0
[code]
# $Id $

CC = gcc
RM = rm
EXE = merc.exe
PROF = -O -ggdb
CYGWIN = -DCYGWIN
ECHOCMD = echo -e
LIBS = -lcrypt

#more colors just to piss Tyche off
D_RED = \e[0;31m
L_RED = \e[1;31m
D_BLUE = \e[0;34m
L_BLUE = \e[1;34m
D_GREEN = \e[0;32m
L_GREEN = \e[1;32m
L_GREY = \e[0;37m
L_WHITE = \e[1;37m
L_NRM = \e[0;00m
D_PURPLE = \e[0;35m
L_MAGENTA = \e[1;35m
D_BROWN = \e[0;33m
L_YELLOW = \e[1;33m
D_CYAN = \e[0;36m
L_CYAN = \e[1;36m

# Use these two lines to use crypt(), ie on Linux systems.
# C_FLAGS = $(PROF) -Wall
# L_FLAGS = $(PROF) -lcrypt

# Uncomment these two lines to use plaintext passwords.
# This is how you fix the 'crypt' linking errors!
L_FLAGS = $(PROF)
C_FLAGS = -Wall $(PROF) -DNOCRYPT -DQMFIXES

# Source Files
SRC_FILES := $(wildcard *.c)

# Backup Files
BCK_DIR = backup
BCK_FILES := $(BCK_DIR)/*

# Object Files
OBJ_DIR = obj
OBJ_FILES := $(patsubst %.c,$(OBJ_DIR)/%.o,$(SRC_FILES))

# Header Files need to find out where to put these in the make
H_FILES = $(wildcard *.h)

merc: $(OBJ_FILES)
	@$(ECHOCMD) "$(L_RED)[- $(L_WHITE)Rebuilding MUD executable: $(L_GREEN)merc.exe$(L_RED) -]$(L_NRM)"
	@$(ECHOCMD) "$(L_RED)[- $(L_YELLOW)**********$(L_CYAN)Compile Complete$(L_YELLOW)**********$(L_RED) -]$(L_NRM)"
	@$(RM) -f $(EXE)
	@$(CC) $(L_FLAGS) -o $(EXE) $(OBJ_FILES) $(LIBS)

$(OBJ_DIR)/%.o: %.c
	@$(ECHOCMD) "$(L_RED)--> $(L_WHITE)Compiling file: $(L_MAGENTA)$<$(L_RED) <--$(L_NRM)"
	@$(CC) $(C_FLAGS) -c -o $@ $<
	@$(ECHOCMD) "$(L_RED)[- $(L_YELLOW)$<$(L_NRM) compiled $(L_GREEN)OK$(L_RED) -]$(L_NRM)"

clean:
	@$(ECHOCMD) "$(L_RED)--> $(L_BLUE)Cleaning up for full make… $(L_NRM)\c"
	@$(RM) -f $(OBJ_FILES) $(EXE) *~ *.bak *.orig *.rej
	@$(ECHOCMD) "$(L_RED)--> $(L_GREEN)done$(L_NRM). $(L_RED)<--$(L_NRM)"
	@make

cleanup:
	@$(ECHOCMD) "$(L_RED)--> $(L_BLUE)Making clean for backing up $(L_NRM)\c"
	@$(RM) -f $(OBJ_FILES) $(BCK_FILES) $(EXE) *~ *.bak *.orig *.rej
	@$(ECHOCMD) "$(L_RED)--> $(L_GREEN)Ready to make a backup now$(L_NRM). $(L_RED)<--$(L_NRM)"
[/code]
i made some corrections based on what tyche said, but i was unable to find any makefiles that had header files mentioned at all except afkmud, and that was under an indent option which i don't know anything about. apparently my makefile is lacking just as much as every makefile i've seen. maybe someone could make a suggestion as to where to put my header files so that they are compiled on regular make. any other suggestions are welcome as well.

also, i added BCK_FILES to the cleanup option cause i have a directory called backup where i store copies of the src directory temporarily during questionable coding projects.

oh and to answer tyche's question of "why another make clean": one is for when i make backups and don't want all the object files and all that crap making it bigger, which i thought was quite obvious by the ECHOCMD "Making clean for backing up". the reason i have it doing make at the end of the clean option is to save myself one more step, since every time i make clean i make. if i want it to just be clean without compiling i'll just use the make cleanup command.

[edit to add]also, lobotomy mentioned colorizing the error messages themselves, i haven't figured out how to do this yet but would gladly colorize that instead of the other stuff if someone could tell me how.
03 Dec, 2008, Lobotomy wrote in the 28th comment:
Votes: 0
Igabod said:
[edit to add]also, lobotomy mentioned colorizing the error messages themselves, i haven't figured out how to do this yet but would gladly colorize that instead of the other stuff if someone could tell me how.

It's simply a matter of causing intentional color bleed by placing the color code you want for your error messages at the end of your compile status line (i.e., the "Compiling Example.o…" line). If an error occurs, the error text is then shown in that color, since it has no default color of its own. If no error occurs, the stray color code does no harm, as the color code at the start of the next status line overrides it.
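For example, adapting the compile rule from your makefile, it would look roughly like this (untested; the trailing $(L_RED) is the part doing the work):

[code]
# error messages here should come out red: end the status line with the
# error color and don't reset it (a sketch adapted from Igabod's rule)
$(OBJ_DIR)/%.o: %.c
	@$(ECHOCMD) "$(L_WHITE)Compiling file: $(L_MAGENTA)$<$(L_NRM) $(L_RED)"
	@$(CC) $(C_FLAGS) -c -o $@ $<
	@$(ECHOCMD) "$(L_GREEN)$< compiled OK$(L_NRM)"
[/code]

Any warnings or errors gcc prints come out while the terminal is still red; the "compiled OK" line (or the next file's status line) then switches the color back.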
03 Dec, 2008, Kayle wrote in the 29th comment:
Votes: 0
Look up colorgcc. It's a Perl script that acts as a wrapper for gcc/g++, allowing you to colorize errors and warnings.
03 Dec, 2008, elanthis wrote in the 30th comment:
Votes: 0
Igabod said:
maybe someone could make a suggestion as to where to put my header files so that they are compiled on regular make.


You don't compile header files. Unless you're trying to do the precompiled header thing (really only helpful for C++), but that requires a lot more than just adding them to make somewhere.

If you're talking about dependency handling, I already linked you an article spelling out how to do that very explicitly. If that's too much to handle, you could try doing it the lazy way. Change this line:
$(OBJ_DIR)/%.o: %.c
into this updated line:
$(OBJ_DIR)/%.o: %.c $(H_FILES)

That would cause all source files to automatically be recompiled whenever a header is changed or added.
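If you later want the full (non-lazy) setup, where only the sources that actually use a changed header get rebuilt, the usual trick is to let gcc write the dependency lists for you. Very roughly, and untested against your makefile (the article I linked covers the details):

[code]
# ask gcc to emit dependency files alongside the objects
C_FLAGS += -MMD -MP

$(OBJ_DIR)/%.o: %.c
	@$(CC) $(C_FLAGS) -c -o $@ $<

# pull in the generated obj/*.d files; '-' ignores ones that don't exist yet
-include $(OBJ_FILES:.o=.d)
[/code]

gcc then drops a .d file next to each .o listing the headers that object actually depends on, and make picks those up on the next run.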

Quote
any other suggestions are welcome as well. also, i added BCK_FILES to the cleanup option cause i have a directory called backup where i store copies of the src directory temporarily during questionable coding projects.


Well, the best suggestion I have is to stop dicking around with "source backups" and to just use a proper source control system. If you're doing all your coding on the host itself, I would highly recommend git, as it requires no setup or server to be installed. There are dozens of high-quality tutorials around on using git, so I'm not going to waste space explaining it. Seriously, copying files around and trying to manually maintain "safe" and "experimental" code is the best way to accidentally screw yourself in a bad way. Any proper source control system will be leaps and bounds better than what you're doing.

Quote
oh and to answer tyche's question of "why another make clean": one is for when i make backups and don't want all the object files and all that crap making it bigger, which i thought was quite obvious by the ECHOCMD "Making clean for backing up". the reason i have it doing make at the end of the clean option is to save myself one more step, since every time i make clean i make. if i want it to just be clean without compiling i'll just use the make cleanup command.


The "problem" (it really isn't one) is that it breaks very common practice. It's a Makefile just for your needs, so obviously the de facto standards of rule naming and behavior have little importance to you, but to most others that setup just feels wrong and dirty. Common practice would be to have clean just clean, nothing else, and then to add a dist rule to create full source+data backups. Whatever works for you is fine, but anyone else that looks at that Makefile is going to scratch their heads and cringe a little at seeing a make clean that also results in a full build.

If you use a real source control system and if you implement the full dependency handling setup then you'll also find that you would never again want to run make clean, unless you're just bored and want to time how long it takes to build your MUD from scratch. :p

Quote
also, lobotomy mentioned colorizing the error messages themselves, i haven't figured out how to do this yet but would gladly colorize that instead of the other stuff if someone could tell me how.


Some Linux distros ship with a gcc-color command that you call instead of gcc, which nicely colorizes error messages. I'm sure with a little Googling you could find a copy of that script if your host doesn't have it already. I still think you should just use better tools that open the errors directly up in your editor instead of having to comb through error output and find the lines yourself. The amount of time and energy people waste on making errors pretty kind of confuses me, given that most decent editors already make it unnecessary.
03 Dec, 2008, Igabod wrote in the 31st comment:
Votes: 0
elanthis said:
You don't compile header files. Unless you're trying to do the precompiled header thing (really only helpful for C++), but that requires a lot more than just adding them to make somewhere.

If you're talking about dependency handling, I already linked you an article spelling out how to do that very explicitly. If that's too much to handle, you could try doing it the lazy way. Change this line:
$(OBJ_DIR)/%.o: %.c
into this updated line:
$(OBJ_DIR)/%.o: %.c $(H_FILES)

That would cause all source files to automatically be recompiled whenever a header is changed or added.

ah yes that's what i was referring to, i didn't go back and re-read what was said in earlier posts, just went off my very poor memory, glad you figured out what i meant though.

elanthis said:
Well, the best suggestion I have is to stop dicking around with "source backups" and to just use a proper source control system. If you're doing all your coding on the host itself, I would highly recommend git, as it requires no setup or server to be installed. There are dozens of high-quality tutorials around on using git, so I'm not going to waste space explaining it. Seriously, copying files around and trying to manually maintain "safe" and "experimental" code is the best way to accidentally screw yourself in a bad way. Any proper source control system will be leaps and bounds better than what you're doing.

ok so i don't even know what a "source control system" is let alone how to do it. if you're talking about making full backups of the code then yes i do that, i just use the backup directory to make things easier on me. this isn't a mud that i intend on having people play, it's just a project for me to work on while i sit here at work for 8 hours a night (11pm-7am) with nothing to do except wait for the phone to ring (i'm the night manager of a hotel for those curious people).

elanthis said:
The "problem" (it really isn't one) is that it breaks very common practice. It's a Makefile just for your needs, so obviously the de facto standards of rule naming and behavior have little importance to you, but to most others that setup just feels wrong and dirty. Common practice would be to have clean just clean, nothing else, and then to add a dist rule to create full source+data backups. Whatever works for you is fine, but anyone else that looks at that Makefile is going to scratch their heads and cringe a little at seeing a make clean that also results in a full build.

If you use a real source control system and if you implement the full dependency handling setup then you'll also find that you would never again want to run make clean, unless you're just bored and want to time how long it takes to build your MUD from scratch. :p
see above for my responses to this portion, however if someone could suggest a better name for my version of make clean i would use that, i just wasn't feeling all that creative at the moment.

elanthis said:
Some Linux distros ship with a gcc-color command that you call instead of gcc, which nicely colorizes error messages. I'm sure with a little Googling you could find a copy of that script if your host doesn't have it already. I still think you should just use better tools that open the errors directly up in your editor instead of having to comb through error output and find the lines yourself. The amount of time and energy people waste on making errors pretty kind of confuses me, given that most decent editors already make it unnecessary.

gcc-color huh, i wonder if that is part of cygwin… i'll look that up.

as for using the other editors, i took a look at vim and was slightly confused by it. this may be because i just typed vim and looked at it rather than reading the man page for it, but nevertheless i'm used to nano/pico, it's all i've ever known other than wordpad and it's what i'm comfortable with. editplus looks nice but i'd rather edit in the shell (or cygwin for offline editing) for some reason, even though i'm not all that fluent with linux.

i know i'm rather stubborn with some things but i'm always willing to at least check things out before deciding not to use them. and if i were to learn vim (some other time) and like it, i am not above switching to it, but as it stands i just don't feel like learning it today, maybe tomorrow.
03 Dec, 2008, David Haley wrote in the 32nd comment:
Votes: 0
Igabod said:
ok so i don't even know what a "source control system" is let alone how to do it.

Google for "git vcs" or "bzr vcs" – those are the two I'd recommend.

Version control systems are basically tools that help you make backups intelligently. You make changes, then you lump those changes together into a "commit" (or "revision"). The VCS lets you go back and forth in those revisions if you realize you made a mistake. You can also do things like see what your code looked like at date X.

Most importantly for new and old programmers alike, it tells you exactly what has changed in your source files. It makes debugging much easier if you have enough discipline to regularly commit working code: you simply compare your working source files against the last known working revision. No more guessing games, no more "well, I didn't actually change anything… well, I did, but it was trivial, so I don't see why it matters… I don't really remember what it was…".

Basically, some kind of VCS is an essential component to any programmer's toolkit.
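To give a flavor of the day-to-day workflow, a first session with git looks roughly like this (bzr is nearly identical; the directory and commit messages are just made up for the example):

[code]
cd ~/mud/src
git init                     # turn the existing source directory into a repository
git add .                    # stage every file
git commit -m "initial import"

# ...hack for a while...
git diff                     # shows exactly what changed since the last commit
git commit -a -m "fix broken weather messages"
git log                      # the full history, with dates and messages
[/code]

That git diff step is the answer to the "what did I actually change?" question I was talking about.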
03 Dec, 2008, Kayle wrote in the 33rd comment:
Votes: 0
I would say SVN is an option, but I'd hate to have to try and walk you through attempting to install that in CYGWIN. I imagine it wouldn't be pretty.
03 Dec, 2008, Noplex wrote in the 34th comment:
Votes: 0
I would also suggest looking into using git; it would be my personal choice over subversion if it weren't for my job requiring me to use the latter. There are subtle differences between the syntax of each of the version control systems, but each usually has a certain case where it excels over the others. For example, git is good for larger projects but has an (arguably) more difficult syntax to understand than subversion.

The nice thing about git (which, from what I hear, subversion is also implementing) is local check-ins. This means that all of your developers can check changes into their local copy of the repository and then push their changes to the main repository (or to another developer) when they are ready. This lets you create a nice chain of command (only certain people can push to the trunk repository).
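In practice that chain looks something like this (the repository address is made up for illustration):

[code]
# each developer grabs a full local copy of the shared repository
git clone ssh://devbox/srv/git/mud.git
cd mud

# commit locally as often as you like; nobody else sees these yet
git commit -a -m "rework combat round handler"

# when the work is ready, push it up to the main repository
git push origin master
[/code]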

Once you get over the differences from subversion, git actually is a lot nicer. The native Windows tools for git are a little lacking, but I'm not sure about CYGWIN.

With that said: how powerful is the workstation you are running CYGWIN on? You may be better off downloading Sun xVM VirtualBox and running Linux in a sandbox. Since VM software is freely available, it has basically made projects like CYGWIN moot.

You can grab virtual box at: http://www.virtualbox.org/
03 Dec, 2008, David Haley wrote in the 35th comment:
Votes: 0
bzr has the advantages of git in terms of distributed control and local commits, with the main disadvantage being that it is slower. But, it has a much friendlier syntax: it is extremely similar to cvs/svn.
03 Dec, 2008, Noplex wrote in the 36th comment:
Votes: 0
DavidHaley said:
bzr has the advantages of git in terms of distributed control and local commits, with the main disadvantage being that it is slower. But, it has a much friendlier syntax: it is extremely similar to cvs/svn.
Sounds like a good trade-off, although git can be learned quickly with about an hour of practice. I still find myself looking up commands for subversion when I need to do complex things like merging. The problem with subversion is that there are no local operations. At least with git (and, from the way it sounds, bzr), if you screw up you didn't do it on the trunk.
03 Dec, 2008, David Haley wrote in the 37th comment:
Votes: 0
In bzr, merging is pretty simple, at least for the typical use cases. I haven't played around very much with more complicated things like specifying the revision.

If you want to merge into the local tree from the parent repository: bzr merge
or if you want to change repositories: bzr merge (some repository address)

Then you resolve conflicts (if any), commit, and your local repository has the merged stuff.

If you then want to push that back to the parent repository: bzr push
It won't let you push unless you have all of the parent repository's changes, so if somebody pushes some stuff you have to merge again before pushing.

And yes, it is extremely nice to not screw up the trunk! :smile:
03 Dec, 2008, Tyche wrote in the 38th comment:
Votes: 0
Igabod said:
first off tyche, your rudeness was uncalled for, i didn't ask for someone to tear apart my makefile and make fun of everything in it, i asked for HELP with a specific part of it, which i was given. if you don't like my makefile thats fine, you don't have to use it. but you also don't have the right to put it down. while you did point out some helpful information, you did it in a piss poor way. if you were truly trying to help me out you would have left out all of the "lame" remarks and just said why what i have doesn't work. i don't know why all that hostility is directed toward me, i've never done anything to deserve it. maybe you should take a few minutes and think about what is going on in your life to make you think that was an appropriate way of addressing this issue. i do appreciate you pointing out where i have problems however so i will give you thanks for that and make the corrections where they apply to my mud. to answer a few of your questions posed in your response, it is slightly screwy cause i'm NOT experienced with makefiles (obviously) and just looked at other makefiles and did my best to take the parts i liked and use them. apparently i made some mistakes that can and will be fixed.


I am perfectly aware that you aren't knowledgeable enough to be responsible for the festering lameness in the makefile. If the criticisms are invalid then others knowledgeable on 'make' will surely pipe up and argue.

However, I ought to ask you about 'output' which you are responsible for. Why is your own 'output' above in hard to read arbitrarily punctuated lowercase run-on sentences? Is it laziness? Are you trying to be rude or annoying? Do you think it's cute, 31337, fashionable or something else? Do you think others perceive the strange grammatical construction in your posts as looking more serious and intelligent, or less serious and more childlike?
03 Dec, 2008, Tyche wrote in the 39th comment:
Votes: 0
Kayle said:
I would say SVN is an option, but I'd hate to have to try and walk you through attempting to install that in CYGWIN. I imagine it wouldn't be pretty.


Run setup.
Click on svn (or git, bzr, cvs) package to select it.
Click OK.

Pretty much the same as using the GUI tools in Mandriva or Ubuntu.
The amount of configuration to set up any of them as a server is pretty much the same.
04 Dec, 2008, The_Fury wrote in the 40th comment:
Votes: 0
Tyche said:
However, I ought to ask you about 'output' which you are responsible for. Why is your own 'output' above in hard to read arbitrarily punctuated lowercase run-on sentences?


If you check his earlier posts i think you might find he mentioned that he has some kind of Dyslexia, unless i am mistaking him for someone else.