28 Mar, 2013, Vigud wrote in the 21st comment:
Votes: 0
Rarva.Riendf said:
Oh, since I have just seen it: a new version of GCC is out with a new compile option

http://gcc.gnu.org/gcc-4.8/changes.html

AddressSanitizer, a fast memory error detector, has been added and can be enabled via -fsanitize=address. Memory access instructions will be instrumented to detect heap-, stack-, and global-buffer overflow as well as use-after-free bugs. To get nicer stacktraces, use -fno-omit-frame-pointer. The AddressSanitizer is available on IA-32/x86-64/x32/PowerPC/PowerPC64 GNU/Linux and on x86-64 Darwin.
Vigud said:
Yeah, I recently learned about cppcheck and it's good. I knew about clang-analyzer before, and another thing on the horizon is clang -fsanitize: http://clang.llvm.org/docs/UsersManual.h...


Besides, notice the new optimization flag -Og.
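
For anyone who wants to see it in action, here is a minimal sketch of the kind of bug -fsanitize=address is meant to catch (the file name asan_demo.c and the gcc-4.8 command name are just my assumptions; adjust to however the new compiler is installed). -fno-omit-frame-pointer and -Og should just make the report easier to read:

/* asan_demo.c -- hypothetical example, not taken from the GCC release notes.
 * Build with something like:
 *   gcc-4.8 -g -Og -fsanitize=address -fno-omit-frame-pointer asan_demo.c -o asan_demo
 * Running ./asan_demo should abort with a heap-buffer-overflow report
 * and a stack trace pointing at the bad write below.
 */
#include <stdlib.h>

int main(void)
{
    char *buf = malloc(8); /* 8-byte heap allocation */
    buf[8] = 'x';          /* writes one byte past the end of the buffer */
    free(buf);
    return 0;
}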
28 Mar, 2013, Rarva.Riendf wrote in the 22nd comment:
Votes: 0
Kline said:
Rarva.Riendf said:
Oh, since I have just seen it: a new version of GCC is out with a new compile option

http://gcc.gnu.org/gcc-4.8/changes.html

AddressSanitizer, a fast memory error detector, has been added and can be enabled via -fsanitize=address. Memory access instructions will be instrumented to detect heap-, stack-, and global-buffer overflow as well as use-after-free bugs. To get nicer stacktraces, use -fno-omit-frame-pointer. The AddressSanitizer is available on IA-32/x86-64/x32/PowerPC/PowerPC64 GNU/Linux and on x86-64 Darwin.


Thanks; guess I'll get to try it without having to build it myself in about two years when Debian catches up.


Or just use a virtual machine :) It is said it will come in Fedora 19.
http://fedoraproject.org/wiki/Releases/1...
28 Mar, 2013, Vigud wrote in the 23rd comment:
Votes: 0
Can't you just compile it and install into /opt? It takes less than 90 MB if stripped.
28 Mar, 2013, Rarva.Riendf wrote in the 24th comment:
Votes: 0
Vigud said:
Can't you just compile it and install into /opt? It takes less than 90 MB if stripped.


Because doing anything in Linux that is not in the repositories is a pain to maintain.
Not to mention that afterwards you have to reconfigure your IDE every time you make a change.

For a hobby, it may not be worth it.
28 Mar, 2013, Nathan wrote in the 25th comment:
Votes: 0
quixadhal said:
As someone who's done this in the early 1990s, before Linux… compiling a compiler is a 3-step process.

First, you download the new compiler source and compile it with your existing compiler (full debug, no optimizations).
Then you set that compiler aside and unpack a fresh copy. You set your environment variables/path/etc. so it uses your newly compiled compiler, and compile it again (full debug, no optimizations).
THEN you do the same thing again, using copy #2 to compile copy #3. If those two copies are identical (diff --binary), you have a working compiler that you can safely install in your system.

Optionally, you can then use that third copy to recompile it with optimizations, if you trust such things. :)

The reason for this… you need to verify that it can compile itself properly, and you can't do that with the native compiler because it WILL produce different code.

Sound like a pain in the arse? Welcome to normal sysadmin activity before things got so warm and cushy. ;)


Thanks for the insight; I didn't realize it was that much of a pain. Then again, maybe I just trust that on the rare occasion I use a Linux machine, compilation will work as expected and whatnot… Foolishness, perhaps. Why do you say 'before Linux'? Did something change, or are you just placing this in a time frame?
28 Mar, 2013, quixadhal wrote in the 26th comment:
Votes: 0
A bit of both, actually.

Back in the late '80s and early '90s, that was the normal way you installed a new compiler. If it was meant to be the main system compiler, you would then ALSO go and recompile everything in the system area with it, since the only reason you'd normally do such an upgrade was a substantial improvement, or because you wanted to stress-test the new compiler (and what better way than recompiling your entire OS, from the kernel and libc down to everything in /usr).

When Red Hat came into the picture, the idea of distributing pre-compiled binaries became a thing. Prior to that, even Linux people would typically download the tarball and compile from source. Binary RPMs (and also Debian's .deb system) were a new concept. As they became popular, and really once package dependency tracking was in place, it became more and more common to just download and install the binaries.

Today, compiling from source is somewhat rare… Usually only done when the packaged version is hopelessly out of date (Debian stable), a test version is to be installed outside the normal system area, or you have special requirements that mean modifying the source yourself.

EDIT: Oh, and FWIW, dependency tracking doesn't require binary packages. My favorite system is the one OpenBSD (and FreeBSD) uses… where they set up a /ports directory structure which is kept up-to-date via CVS (probably Subversion or Git now). To install a package there, you cd /ports/subdir/package and type "make"… the makefile then FTPs the required files down, unpacks them, and compiles them… if it has dependencies, it goes into those package directories and invokes their makefiles the same way. End result: you get the same automatic package system, but compiled from source. Very slick.
28 Mar, 2013, Kline wrote in the 27th comment:
Votes: 0
quixadhal said:
Usually only done when the packaged version is hopelessly out of date (Debian stable)

I love me some Debian (since potato), but yes, when they say "stable" they mean "stable, as proven for the last 3 years". They've started a two-year release cycle now, which is nice, but it can still take a while to even see a new package make it into the experimental or testing branches just to play with. Then actually getting to use it is another challenge, depending on what dependencies will break. So, headaches aside for what is a hobby to me, I'll seriously be waiting until they have a pre-packaged binary that will play nice with dependencies; so, about two years.

Case in point, for how outdated some packages are:
stable said:
Package: g++
Version: 4:4.4.5-1

testing said:
Package: g++
Version: 4:4.7.2-1
28 Mar, 2013, Rarva.Riendf wrote in the 28th comment:
Votes: 0
As I said: a virtual machine with the latest Fedora, and just code in there :) Fedora is not 'stable' though :) but if you only code in it, you don't care about what breaks on a regular basis…
29 Mar, 2013, Vigud wrote in the 29th comment:
Votes: 0
Even Fedora won't have GCC 4.8 until Fedora 19. And there's nothing particularly hard about building and installing GCC 4.8 outside the directories you have in your PATH. It's not like a compiler you wish to use has to become part of the system… A simple build+install process worked for me with clang 3.3, nwcc, pcc, tinycc, TenDRA… There may be build problems like a lack of supported standard library implementations or missing dependencies, but once the thing is built, there's nothing to maintain as you suggested.