12 Aug, 2010, Koron wrote in the 101st comment:
Votes: 0
Good for you.
12 Aug, 2010, KaVir wrote in the 102nd comment:
Votes: 0
Koron said:
This is another logical inconsistency with the D&D source material. These undead are "mindless" and can't be charmed, but they can be controlled through an equivalent spell designed to, essentially, charm only the undead.

No, the spells are not equivalent. Charm Person explicitly states that it "does not enable you to control the charmed person as if it were an automaton, but it perceives your words and actions in the most favorable way". You must "win an opposed Charisma check to convince it to do anything it wouldn't ordinarily do", and it "never obeys suicidal or obviously harmful orders".

If you command a skeleton or golem to jump off a bridge, it will, without hesitation. If it somehow survives, you can order it to do the same again and again, until it is eventually destroyed.
12 Aug, 2010, David Haley wrote in the 103rd comment:
Votes: 0
Deimos said:
Myself excluded, I haven't seen anyone on this thread actually offer up a list of criteria that they use to define intelligence.

Oh? You haven't been paying much attention then. :wink:
In post #46, I gave a reference to a page discussing a whole pile of criteria.
In post #50, I gave an explicit list of several criteria (a list of necessary but not sufficient criteria).
In post #59, I discussed learning briefly. In post #71, I briefly referred back to the criteria in post #50.
And those are just my posts, after a quick browsing through…

Deimos said:
How does something that can't remember follow instructions? Where exactly does it store those instructions? As previously pointed out, D&D is too paradoxical to be a decent source to pull from. Also, the criteria "any creature that can think, learn, or remember" is even broader than my own list, and would encompass everything I've listed thus far (including your absurdly expensive coffee-maker), and more.

Huh? Absolutely not. The coffee machine is not thinking, and it is certainly not learning beyond accumulating statistics. As I say in the posts I refer to above, that is an extraordinarily shallow definition of learning, and certainly not what other intelligent creatures do (animals, let alone humans).
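To make the "accumulating statistics" distinction concrete, here is a toy sketch (the class and drink names are invented for illustration) of a machine that appears to "learn" your preference but is really just keeping a frequency table:

```python
from collections import Counter

# "Learning" by accumulating statistics: the machine pre-selects the
# most frequently ordered drink. It builds a frequency table and
# nothing more -- no generalisation, no model of the user.
class CoffeeMachine:
    def __init__(self):
        self.orders = Counter()

    def order(self, drink: str) -> None:
        self.orders[drink] += 1

    def suggest(self) -> str:
        # Highest count wins; ties broken arbitrarily.
        return self.orders.most_common(1)[0][0]

machine = CoffeeMachine()
for drink in ["espresso", "latte", "espresso"]:
    machine.order(drink)
print(machine.suggest())  # "espresso"
```

The point of the sketch is that the machine's behaviour changes with use, yet it would be a stretch to call a tally of button presses "learning" in the sense animals or humans learn.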

Koron said:
As a fast definition, this is adequate, but I'm not sure it stands up to higher scrutiny. Learning, remembering, and reasoning are undoubtedly qualities that contribute to our considering a thing intelligent, but I'm not convinced they are necessary. Someone who has suffered brain damage may be incapable of long-term memory, but this new inability to learn from anything that happened more than thirty seconds ago does not make one no longer intelligent. Does it make one less intelligent? Yeah, probably.

Such a person is not incapable of memory; they have impaired memory. A more interesting case would be somebody truly incapable of any memory whatsoever.

Koron said:
but an AI still has the I for a reason (even if it is A).

Um, yes, it has the 'I' in it because that's just the label people gave it. See the second quotation in post #55. To save you the trouble of finding it… here is what an AI researcher had to say about all of this:
"I myself have worked on projects like machine vision algorithms that can detect human facial expressions in order to animate avatars or recognize individuals. Some would say these too are examples of A.I., but I would say it is research on a specific software problem that shouldn't be confused with the deeper issues of intelligence."

Koron said:
Our beloved super coffee dispenser strikes me as [yes] and {1} to represent an incredibly shitty intelligence capable of only the most basic comparisons.

By your reasoning, any program with an if statement here and there becomes at least marginally "intelligent", because it can make comparisons. Surely there is more to it than that?
16 Aug, 2010, Koron wrote in the 104th comment:
Votes: 0
The distinction is certainly not an easy one. In the case of machine vision algorithms that identify individuals/emotions, I would have a hard time calling such a program intelligent without useful memory applications. How does it identify and handle deception? I'm going to go out on a limb and assume that a forced smile, even after a bout of hardcore crying in front of the camera, will register as happy. A machine capable of identifying patterns over time would seem more likely to be considered intelligent. There's also the question of what the machine does with this information. Does it select its behavior from a vast pool of options, or does it follow a set routine for each identified individual/emotion? Ultimately, these kinds of programs exhibit intelligence analogous to that of bacteria; they follow basic instructions, even if there's no case to be made for animal intelligence.

Frankly, I'm not sure what the quote is referring to when it says "deeper issues of intelligence." Is it (smart/brainy/dynamic/capable of interpreting and expressing meaning in a variety of ways) like you or me? Clearly not. Does this inherently disqualify it from being somehow intelligent? I'm not so sure it should.

Quote
By your reasoning, any program with an if statement here and there becomes at least marginally "intelligent", because it can make comparisons. Surely there is more to it than that?

The dictionary definition of intelligence gives us two definitions under which this is an accurate description:
Quote
1. capacity for learning, reasoning, understanding, and similar forms of mental activity; aptitude in grasping truths, relationships, facts, meanings, etc.

3. the faculty of understanding.

There is reasoning involved, even if only at a low level, based on the understanding and interpretation of facts (variables). Granted, there is a case to be made that our coffee maker is not "grasping truths" so much as "being spoon-fed information," but is this distinction an important one? Is x=1 inherently inferior to (or less intelligent than?) a function that assigns x a value based on facial recognition software?

There has to be a point at which a given machine moves beyond its default factory settings and starts modifying its behavior based on predictions it has made about its environment. (For example, this is the case with SSD/HDD hybrid drives, which move high-priority files to the SSD side of things to speed up performance.) Once it has crossed this threshold, has it become more intelligent than before? (Granted, this question is moot if it wasn't intelligent before, but if that's the case, is it still unintelligent after this line has been crossed?)
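The hybrid-drive behavior can be sketched as simple access-frequency promotion. This is a toy model; the class name, threshold, and tier bookkeeping are my own invention, not any vendor's actual caching policy:

```python
# Toy model of SSD/HDD hybrid promotion: files read often enough get
# moved to the fast tier, on the "prediction" that past access
# frequency indicates future demand.
class HybridDrive:
    PROMOTE_AFTER = 3  # invented threshold: reads before promotion

    def __init__(self):
        self.access_counts = {}   # filename -> number of reads
        self.ssd_tier = set()     # filenames currently on the fast tier

    def read(self, filename: str) -> str:
        count = self.access_counts.get(filename, 0) + 1
        self.access_counts[filename] = count
        if count >= self.PROMOTE_AFTER:
            self.ssd_tier.add(filename)  # hot in the past, so assume hot later
        return "ssd" if filename in self.ssd_tier else "hdd"

drive = HybridDrive()
for _ in range(3):
    tier = drive.read("boot.log")
print(tier)  # third read is served from the promoted SSD tier: "ssd"
```

Whether counting accesses and acting on the count qualifies as "modifying behavior based on predictions" is exactly the question raised above: the mechanism is a tally and a threshold, yet the drive's behavior genuinely changes with experience.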

I think the term intelligence may be afforded too much normative power. We think of ourselves as intelligent (and surely we are), and we think of ourselves as clearly superior to the programs we write (see previous comment), so it seems natural to consider them unintelligent, but isn't it a common goal to try to endow our creations with some of our own intelligence? After all, we do want them to behave in helpfully predictable ways.

We've been conditioned to respect the idea of intelligence largely through subjective means of comparison (e.g., I have an IQ of X, so I'm smarter than a sixth-grader/Einstein/a hunk of limestone), but even someone with a mental handicap is clearly still intelligent (i.e., capable of learning and reasoning), even if those reasoning skills are well below average. Based on this, I think it's fair to say that a rudimentary program with a few if-checks here and there is marginally intelligent, as long as there is an understanding that we are several orders of magnitude (perhaps dozens?) more intelligent.