
MindForth Programming Journal (MFPJ) 2010 August 19

Thurs.19.AUG.2010 -- Discovering a Major Problem

We are at a stage now where we may home in on the goal of having the AI maintain a continuous chain of self-referential thought.

We rename 17aug10A.F as 19aug10A.F and we run the AI code in search of correctable glitches. When we notice erroneous output like "WHO IS AM I", we go into the WhoBe module and we reset the "mfnflag" variable to zero after it causes the saying of "IS", so that the AI will stop unwarrantedly inserting "IS".
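
In rough outline, the fix amounts to something like the following Forth sketch, assuming that "mfnflag" is an ordinary Forth variable and that the "IS" output occurs inside WhoBe itself; the real 19aug10A.F code is more involved.

variable mfnflag

: WhoBe ( -- )   \ simplified stand-in for the real WhoBe module
  mfnflag @ 1 = IF
    ."  IS "       \ say the be-verb called for by the flag
    0 mfnflag !    \ reset at once, so "IS" cannot be inserted again
  THEN ;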

Then we notice that stray conceptual activations are carrying over even after KbTraversal is invoked, although KbTraversal is supposed to heavily activate a particular concept in a pre-ordained queue of activand concepts. So at the start of KbTraversal we comment out three mild calls to PsiDecay, and we instead insert a call to the harsh PsiClear module, so that only the designated activand concept shall be activated. The problem of interference from stray activations seems to go away.
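
The shape of the change at the top of KbTraversal is roughly as follows; PsiDecay and PsiClear appear here only as stubs standing in for the real MindForth modules.

: PsiDecay ( -- ) ;   \ stub: would mildly lower all conceptual activations
: PsiClear ( -- ) ;   \ stub: would wipe all conceptual activations to zero

: KbTraversal ( -- )   \ simplified stand-in for the real module
  \ PsiDecay  PsiDecay  PsiDecay   \ the three mild calls, now commented out
  PsiClear                         \ harsh wipe, so only the activand stands out
  \ ...then heavily activate the next concept in the activand queue...
;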

Then we notice a major problem, one worthy of our focused attention and of a major, upload-worthy update of the MindForth AI. We notice that the AI properly activates the concept of God during KbTraversal and properly asks the resulting question "GOD WHO IS GOD", but the AI does not remember our input of "God is Jesus" and repeats the question "GOD WHO IS GOD" during KbTraversal, even though a link from "GOD" to "JESUS" is still present in the recent memory of the AI, as the engrams shown below attest.


424 : 55 0 0 100 100 7 66 55 to WHO
427 : 66 0 2 55 55 8 100 66 to IS
431 : 100 39 1 66 55 5 66 100 to GOD
435 : 100 39 1 100 0 5 58 100 to GOD
438 : 58 23 2 100 100 8 111 58 to BE
444 : 111 0 2 58 100 5 0 111 to JESUS
450 : 111 0 2 111 0 5 55 111 to JESUS
454 : 55 0 0 111 111 7 66 55 to WHO
457 : 66 1 2 55 55 8 111 66 to IS
463 : 111 0 2 66 55 5 66 111 to JESUS

We suspect immediately that KbTraversal is reactivating the "GOD" concept at an activation so low that the WhoBe module gets called repeatedly for the low-activation "GOD" concept, even though there is an engrammatic link between "GOD" and "JESUS". Upon inspection, however, that is not the case: KbTraversal sends a rather high activation of sixty-two (62) into NounAct.

After the AI asks, "WHO IS GOD", the word "GOD" is left with an activation of thirty-nine (39). That activation should revive the "known" answer, namely that "GOD IS JESUS". However, let us check what the threshold activation is for invoking the WhoBe module. Oh, WhoBe is called by AskUser when a be-verb activation is less than forty (40). We could try gradually lowering that threshold from "40" down towards thirty and lower. We could also insert some diagnostic message code that will reveal to us what real values within BeVerb are letting AskUser call WhoBe.
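
A hedged sketch of the threshold test and of the sort of diagnostic line we have in mind; the variable name "beact" is from the actual code, but the call chain shown here is simplified.

variable beact   \ activation found on a be-verb engram

: WhoBe ( -- ) ." WHO IS ..." cr ;   \ stub for the question-asking module

: AskUser ( -- )   \ simplified stand-in for the real module
  cr ." diagnostic: beact = " beact @ .   \ reveal the real value in play
  beact @ 40 < IF    \ activation below threshold: no knowledge found?
    WhoBe            \ then ask the who-is question
  THEN ;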

If we solve this glitch properly so that the AI initially asks a who-is question but thereinafter remembers the answer supplied by the human user, we will have a very powerful demonstration of cognitive ability on the part of the AI. The testing of that ability will be worthy of mention in the MindForth user manual.

Thurs.19.AUG.2010 -- Novelty: Testing for Lowest Maximum

BeVerb was testing "beact" for an activation lower than forty (40), and even after we switched to testing for an activation lower than twelve (12), we still did not escape calling AskUser and WhoBe, because one value of "beact" was "1" and another value of "beact" was "14". Immediately we realized that we need to test not for a single lowest value of "beact", but for a lowest maximum value.
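
The "lowest maximum" idea can be sketched as follows: instead of reacting to any single low "beact" value, we keep a running maximum in "maxbeact" and treat the concept as unknown only if even that maximum stays under the threshold. The helper word here is hypothetical.

variable beact
variable maxbeact

: track-beact ( -- )        \ hypothetical helper, run once per be-verb engram
  beact @ maxbeact @ > IF
    beact @ maxbeact !      \ remember the highest beact seen so far
  THEN ;

\ after the scan:  maxbeact @ 12 < IF ( call AskUser and WhoBe ) THEN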

Fri.20.AUG.2010 -- Knowledge-Base Responses to Who-Queries

In our coding yesterday we were able to isolate "maxbeact" as a variable that would prevent calls from BeVerb to AskUser and on to WhoBe if a single item of engrammatic knowledge about a concept exceeded a threshold level, while disregarding sub-maximum activations which would have caused a call to WhoBe, and which did indeed cause a call to WhoBe when the human user had not yet entered any knowledge about the concept in question.

Unfortunately, calls to WhoBe were still getting through -- by way of the legacy "recon" system for posing a what-query upon the introduction of a previously unknown noun. Immediately we found ourselves in a quandary, because the conflicting decision-routines for "maxbeact" and for "recon" were relying upon different levels of threshold activation. We noticed yesterday that the EnCog (English thinking) module in our Forth code since 10 December 2009 has contained the comment remark that "recon" may soon be phased out. We do not remember exactly why we were thinking of phasing out "recon", but we see in retrospect that the "recon" system was too indirect in its method of generating a question about an unfamiliar noun.

Although yesterday we were daunted by the prospect of having to integrate the "recon" system and the "maxbeact" system, today with more clarity we realize that we need only to comment out the central test of the "recon" value in order to permit the "maxbeact" system to operate without interference from the "recon" system. Then, if the unimpeded "maxbeact" system works, in the sense of letting the AI initially ask questions before knowledge is gained, and in the sense of recalling the knowledge instead of asking unwarranted questions, we may proceed to dismantle the obsolete "recon" system in a careful, non-disruptive way. So now we try to comment out the heart of the "recon" system.
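
In skeleton form, the surgery looks something like this; the word and variable names are illustrative condensations, not the literal MindForth code.

variable recon
variable maxbeact

: question-needed? ( -- flag )   \ hypothetical condensation of the decision
  \ recon @ 0 > ...              \ legacy recon test, now commented out
  maxbeact @ 12 < ;              \ rely on the lowest-maximum test alone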

After commenting out the assignment of "recon" in the VerbPhrase module, we no longer obtained "what-is" questions about unfamiliar nouns, but we also did not obtain factual answers to who-queries about knowledge that had been entered about a previously mysterious noun. Perhaps a threshold was still operating to prevent the generation of a statement in response to a who-query from the human user. Or perhaps not enough activation was going into "spreading activation" by way of the NounAct module. We decided to insert a diagnostic message at the start of the NounAct module.
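
The diagnostic probe is of roughly this form, assuming that NounAct receives its target concept and its activation through variables such as these; the names are hypothetical and the real module has many more parameters.

variable nounval   \ concept number being activated
variable actval    \ activation being spread

: NounAct ( -- )   \ simplified stand-in for the real module
  cr ." NounAct: concept " nounval @ .
  ." gets activation " actval @ .        \ diagnostic at module start
  \ ...spreading activation proper would follow here...
;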

We are still not getting the knowledge back out. Perhaps the threshold belonging to the old "recon" system is still preventing the formulation of a statement of knowledge. Having knocked out the operation of "recon" as a determinant in the BeVerb module, perhaps we should either lower or eliminate the threshold used in conjunction with "recon". In BeVerb, we are using a threshold of "12" for "maxbeact", while in VerbPhrase the threshold in connection with "recon" has long been set at "20". We will comment out the "20" threshold in VerbPhrase and see what happens with a much lower threshold of "12" -- the same as with "maxbeact" -- although we seem to recall that the threshold of "20" was chosen in order to prevent spurious statements of false knowledge.
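
The intended adjustment in VerbPhrase, sketched with a hypothetical word name, is simply to match the old "20" threshold to the "12" used for "maxbeact" in BeVerb.

variable maxbeact

: enough-knowledge? ( -- flag )   \ hypothetical gate on stating knowledge
  \ maxbeact @ 20 >               \ former threshold tied to "recon"
  maxbeact @ 12 > ;               \ now matched to the BeVerb threshold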

Ah, now we are getting somewhere, as the following exchange shows.

Robot:
Human: god

Robot: GOD WHO IS GOD
Human: god is jesus

Robot: JESUS BE JESUS
Human: who is god

Robot: GOD BE JESUS
Human:

User Command: halt

Currently, the BeVerb module is set up to choose a proper form of be-verb only for personal pronouns like "HE SHE IT", etc. If we enhance the BeVerb module to let it find a be-verb for a noun, we may start getting the proper generation of knowledge-based responses to who-queries.
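
A hedged sketch of the proposed enhancement: besides handling personal pronouns, BeVerb would fall back to "IS" for any third-person singular noun subject. The person and number variables here are stand-ins, not the actual MindForth parameters.

variable subjpers   \ grammatical person of the subject: 1 = first person
variable subjnum    \ grammatical number of the subject: 1 = singular

: BeVerb ( -- )   \ simplified stand-in for the real module
  subjpers @ 1 = subjnum @ 1 = and IF ." AM "  EXIT THEN
  subjnum @ 1 = IF ." IS "  EXIT THEN   \ new: a singular noun subject gets IS
  ." ARE " ;                            \ default for plural subjects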

In the VerbPhrase module, we have a test which detects the imminent selection of AM, IS or ARE and shunts the continuation of the sentence-generation off to the BeVerb module. Let us try adding the be-verb "BE" to the group of verbs that will shunt generation off to the BeVerb module.
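
The shunt can be sketched as below, using the concept numbers visible in the engram listing above (66 for IS, 58 for BE) and omitting AM and ARE for brevity; the word names are illustrative.

variable verbpsi     \ concept number of the verb about to be spoken

: BeVerb ( -- ) ;    \ stub for the real be-verb module

: shunt-to-BeVerb? ( -- flag )
  verbpsi @ 66 =        \ IS
  verbpsi @ 58 = or ;   \ BE, newly added to the group

: VerbPhrase ( -- )   \ simplified stand-in for the real module
  shunt-to-BeVerb? IF BeVerb EXIT THEN
  \ ...otherwise continue with normal verb selection...
;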

When we tried to use BeVerb to switch from "BE" to "IS" in who-query responses, the AI failed to state the correct predicate nominative, so we will comment out that attempt and release our semi-successful code, with a view to switching "BE" to "IS" in a later release. Our current code shows the AI at least finding the factual knowledge to make a response, albeit a grammatically awkward one, to a who-query.
