Knowledge Tree

As my husband points out in his comment on my last post, I made an error of omission, describing the tree from which Eve ate as the ‘tree of knowledge’ rather than the tree of knowledge of good and evil. I apologize for the misquote. The Word of God is infallible; I am not.

I actually realized my misquote last night, and had shrugged it off as defensible in the context of my argument. As I looked at it last night, self-awareness would inevitably require some understanding of good vs. evil in order to evaluate threats to one’s self, to evaluate decisions within some sort of context. I’m no longer so sure of that argument. True, there’d have to be some knowledge of ‘good outcome’ vs. ‘bad outcome’, but those wouldn’t have to correspond to what we typically think of as good vs. evil. There seems to be a baseline standard of good that a self-aware computer system might or might not “agree” with. For instance, most moral systems of the world hold that killing is wrong. Now imagine a system that didn’t agree with that tenet. Whether humans accept it from a humanistic point of view (humans wouldn’t last very long if we went around killing each other off) or from a divine edict (thou shalt not kill), the end result is the same: there’s benefit to each of us in not knocking each other off indiscriminately, and so, whether or not one accepts the divine edict, we agree that killing is wrong. A computer system might or might not see such benefit, and I think we’d have a hard time proselytizing a computer to recognize divine edict.

I’m no theologian, so my arguments are a mix of limited understanding, limited faith, and some amount of blind acceptance. (Most things in life that we take as true end up being such a mix, unless something’s in our particular area of expertise.) I believe that man was created in God’s image, and thus that the things we take to be fundamentally true and good are those that He values. I also believe that we couldn’t clearly describe those values, that they’ve been muddied in us. “We know it when we see it” ends up being our descriptor of what’s truly good. If we can’t describe those values, then we can’t teach them to a computer and, more importantly, can’t clearly define their applications. So we couldn’t program a system to have the same value system that’s embedded within us.

That returns me to my argument that man won’t ever be able to create a truly intelligent computer, because that computer won’t know what is good versus what is not good. (Man didn’t have knowledge of evil until he ate of the fruit, so knowledge of evil must not be a condition of intelligence.) Man can’t teach it, so the computer can’t know it.

If we ever create a computer that we believe is intelligent, beware what power we give it or that it can obtain. Hitler was an intelligent person with a value system most folks would say didn’t match what we understand to be good. If we examined his values, I believe we’d find several that were warped (in very significant ways!), but that his core set still resembled our own. Now imagine an ‘intelligent’ creature with a very different value system. If warped values can achieve massive evil, what could missing, incomplete, or diluted values do?
