Older blog entries for AI4U (starting at number 38)

JavaScript AI Mind Programming Journal -- Thurs.2.SEP.2010

Thurs.2.SEP.2010 -- Implementing the "prsn" Variable

Today we have imported the "prsn" variable and some associated code from MindForth into the JavaScript AI (JSAI). In so doing, we have also switched the SpeechAct() module from conditionally adding an inflectional "S" for a third-person verb, to merely outputting an "S" if directed to do so by the VerbPhrase() module.

In the JSAI WhoBe() module, we have brought in the following code


  if (subjpsi==50) prsn=1; // 1st person "I";  2sep2010
  if (subjpsi==53) prsn=1; // 1st person "WE"  2sep2010
  if (subjpsi==56) prsn=2; // 2nd person YOU;  2sep2010
  if (subjpsi==49) prsn=3; // 3rd person HE;   2sep2010
  if (subjpsi==80) prsn=3; // 3rd person SHE;  2sep2010
  if (subjpsi==95) prsn=3; // 3rd person IT;   2sep2010
  if (subjpsi==52) prsn=3; // 3rd person THEY; 2sep2010

as a mutatis mutandis solid block from MindForth. WhoBe() does not yet make use of the above code, but we install it right away because in MindForth we have learned that tracking the person and number of verb-subjects makes the general AI coding easier.
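
As a minimal JavaScript sketch of the intended division of labor (hypothetical code, not the shipped JSAI; only "prsn" and "num" are real variable names here), VerbPhrase() would make the decision about the third-person "S" and SpeechAct() would merely obey:

  var prsn = 0;  // person of the verb-subject: 1, 2 or 3
  var num = 0;   // number of the verb-subject: 1 = singular

  // Hypothetical helper: does a regular present-tense verb need the "S"?
  function needsThirdPersonS() {
    return (prsn == 3 && num == 1);
  }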

We need to port a lot more of MindForth into the JavaScript AI -- especially the recent seq-skip code that vastly improves the comprehension of input -- and we expect the JSAI to prove to many Netizens that MindForth is worth looking into.


MindForth Programming Journal (MFPJ) 2010 August 30

Mon.30.AUG.2010 -- On the Shoulders of Giants?

The "prsn" variable in MindForth artificial intelligence (AI) enables the AI to think thoughts in the first, second or third person with English verbs in the present tense. MindForth is different from most natural language processing (NLP) software because previous NLP software may be intricately crafted for the generation of grammatically correct English sentences but not for the thinking necessary to drive the NLP mechanisms. Because MindForth has a conceptual core that actually thinks, MindForth is an AI engine that may be "reinventing the wheel" in terms of tacking on NLP routines that have already been invented elsewhere unbeknownst to the Mentifex (mindmaker) originator of MindForth, but MindForth remains the original invention of an artificial mind that needs its own special forms of NLP software. Other advanced NLP software may translate ideas from one natural language to another, but MindForth is ideation software that thinks up its own ideas, thank you, and becomes more skillful at thinking co-extensively with the growing sophistication of its NLP generativity. We are met today on a mindgrid of that generativity, and we must generate AI Mind code for self-referential thinking in English. MindForth is like an AI rodent that scurries about while giant NLP dinosaurs tower overhead.

Mon.30.AUG.2010 -- VerbPhrase Orchestrates Inflection

Our current code is abandoning the stopgap measure of using the SpeechAct module to add an inflectional "S" to regular verbs in the third person singular. The control of verb inflections is now shifting into the VerbPhrase module where it belongs. We will try to use an old "inflex1" variable from the 20may09A.F version of MindForth to carry each phonemic character of an inflectional ending (such as "S" or "ING") from the VerbPhrase module into the SpeechAct module. An old MindForth Programming Journal (MFPJ) entry describes the original usage of "inflex1" to carry an "S" ending into SpeechAct. Now we would like to expand the usage so that "inflex1" and "inflex2" and "inflex3" may carry all three characters of an "ING" ending into SpeechAct. First we rename all (three) instances of "inflex1" as simply "inflex" so that we may confirm our notion that "inflex1" was not yet affecting program-flow, before we re-introduce "inflex1" as a variable that does indeed influence program-flow. We run the AI code, and nothing seems amiss.
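
A hedged JavaScript illustration of the idea (the real work is in Forth; everything here except the "inflex1", "inflex2" and "inflex3" names is a placeholder):

  var inflex1 = 0;  // first character of the inflection, as an ASCII code
  var inflex2 = 0;  // second character, or zero if none
  var inflex3 = 0;  // third character, or zero if none

  // Hypothetical helper called on the VerbPhrase side.
  function setInflection(ending) {
    inflex1 = ending.length > 0 ? ending.charCodeAt(0) : 0;
    inflex2 = ending.length > 1 ? ending.charCodeAt(1) : 0;
    inflex3 = ending.length > 2 ? ending.charCodeAt(2) : 0;
  }

  // setInflection("S") loads 83; setInflection("ING") loads 73, 78 and 71.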

Then we rename our instances of the temporary "inflec1" from yesterday (29aug2010) as the henceforth genuine "inflex1" to make sure that we still have the functionality from yesterday. Again we run the code, and all is well. Now we need to clean up the test routines from yesterday and smooth out glitches such as the tendency to tack on an extra "S" each time that a verb is used in the third person singular.

We still have the variable "lastpho" from the 24may09A.F AI, for avoiding an extra "S" on verbs. That variable is continually being set in the SpeechAct module. First in VerbPhrase we use a test message to report to us what values are flowing through the "lastpho" variable. Then in VerbPhrase we make the setting of "inflex1" to ASCII 83 "S" dependent upon the "lastpho" not being "S", but the method initially does not work. We suspect that the "lastpho" value is being set too early at almost the beginning of the SpeechAct module.

When VerbPhrase sends an inflectional "S" via inflex1 into SpeechAct, all the conditionality about person, number, gender, etc., should be kept in VerbPhrase and should no longer play a role in SpeechAct. SpeechAct itself should not care why it is being asked to add an "S" or an "ING" onto a word being spoken. Therefore much of the conditional code in SpeechAct after the detection of an intended "32" space should be removed, and SpeechAct should simply speak the inflection.
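
Under that division of labor the speaking side reduces to something like the following continuation of the hypothetical JavaScript sketch above; it appends whatever characters have been loaded and then clears them, with no grammatical tests of its own:

  // Hypothetical simplification of the speaking side, continuing the sketch above.
  function speakInflection(word) {
    if (inflex1 > 0) word += String.fromCharCode(inflex1);
    if (inflex2 > 0) word += String.fromCharCode(inflex2);
    if (inflex3 > 0) word += String.fromCharCode(inflex3);
    inflex1 = inflex2 = inflex3 = 0;  // reset so the ending is spoken only once
    return word;  // e.g. "WORK" becomes "WORKS" or "WORKING"
  }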


MindForth Programming Journal (MFPJ) 2010 August 28

Sat.28.AUG.2010 -- First-Person Consciousness

The declaration today of a "prsn" (person) variable in the MindForth robot AI has a bearing not only on the proper use of English verb forms in the first, second and third person, but also on self-awareness and artificial consciousness. There is no consciousness module in MindForth, because consciousness emerges not from a single location but rather from the overall functionality of the mind qua mind. Since the new "prsn" variable will help the robot Mind to think about itself and talk about itself in the first person singular, the person variable will reinforce the very concept of the robot self as the ego of a conscious mind. As the AI Mind speaks confidently and grammatically about itself during interaction with other persons -- human peers or robot peers -- the evolution of AI reaches the all-important milestone of self-referential thought.

MindForth did not previously have a "prsn" variable because initially all utterances of the proof-of-concept AI were in the third person plural by default. When the goal was to demonstrate thinking and not yet to trigger a Singularity, the simplest way to deal with grammatical person was not to worry about it at all. As the AI Mind has advanced in complexity and in functionality, bugs and glitches began to appear which could be resolved only by taking person into consideration. The issue was forestalled while special coding for be-verbs dealt with first-person forms like "am" and with be-verbs required for use with English pronouns, but now the coding of general verb-usage requires the adoption of a person variable to make things work. The variable shall be "prsn" for two reasons: brevity and clarity. The chosen name of the variable has clarity because it refers not to a general concept of "person" as perhaps a legal entity or as perhaps a dramatic character, but rather to the specific idea of first person, second person and third person. The "prsn" variable will hold values of "1", "2" or "3" accordingly, and may hold a zero ("0") value for use with infinitive forms such as "to be".
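
As a rough JavaScript illustration of how those values would be consumed (a hypothetical helper, not actual MindForth code), the present-tense be-verb forms map onto "prsn" and a number flag like this:

  // Hypothetical helper: choose a present-tense be-verb form from person and number.
  function beForm(prsn, num) {
    if (prsn == 0) return "BE";              // infinitive, as in "to be"
    if (prsn == 1 && num == 1) return "AM";  // first person singular: "I AM"
    if (prsn == 3 && num == 1) return "IS";  // third person singular: "GOD IS"
    return "ARE";                            // "WE ARE", "YOU ARE", "THEY ARE"
  }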


MindForth Programming Journal (MFPJ) 2010 August 27

Fri.27.AUG.2010 -- Fixing the VerbPhrase Mind-Module

Today let us explore why the AI Mind can have a grammatically incorrect exchange like the following.

Human: what is god
Robot: A GOD BES SPIRIT

First we must determine which part of the free AI source code for autonomous robots is mistakenly saying "BES" instead of "IS". If we eliminate this bug, and the next, and the next, we gradually approach a functional AI Mind ready to inhabit myriad AI robots.

By inserting diagnostic messages to track the generation of thoughts, we determine that the VerbPhrase module is mistakenly saying "BE" instead of "IS". We temporarily fix this bug by inserting the following code into VerbPhrase, before it has a chance to utter "BE" as the main verb of a thought.


motjuste @ 58 = IF  \ shift from BE; 27aug2010
  num @ 1 = IF  \ singular; 27aug2010
    midway @  t @  DO  \ search backwards in time
      I       0 en{ @  66 = IF  \ most recent instance
        66 motjuste ! ( 66=IS; 27aug2010 )
        I     7 en{ @  aud !  \ get recall-vector
        LEAVE  \ after finding most recent "IS"; 27aug2010
      THEN     \ end of test for 66=IS; 27aug2010
    -1 +LOOP \ end of retrieval loop for "IS"; 27aug2010
  THEN  \ end of test for singular; 27aug2010
  ( following code covers undeclared plurals; 27aug2010 )
  num @ 1 = NOT IF  \ other than singular; 27aug2010
    midway @  t @  DO  \ search backwards in time
      I       0 en{ @  67 = IF  \ most recent instance
        67 motjuste ! ( 67=ARE; 27aug2010 )
        I     7 en{ @  aud !  \ get recall-vector
        LEAVE  \ after finding recent "ARE"; 27aug2010
      THEN     \ end of test for 67=ARE; 27aug2010
    -1 +LOOP \ end of retrieval loop for "ARE"; 27aug2010
  THEN  \ end of test for not singular; 27aug2010
THEN  \ end of test for 58=BE; 27aug2010

The code above rejects "BE" as a verb and says "IS" or "ARE" instead. Although "IS" is definitely selected by the "num(ber)" flag, "ARE" is the default alternative when the number of a subject is not known. This code fixes some problems in the AI, but introduces or accentuates other problems which were perhaps lurking unseen in the AI codebase. These emerging problems are minor in nature and pleasantly challenging for the AI appcoder. The MindForth AI is becoming a functional AI Mind that needs cosmetic improvements and functional add-ons.


MindForth Programming Journal (MFPJ) 2010 August 25

Wed.25.AUG.2010 -- Coordinating InStantiate and WhoBe

A problem exists in the following man-machine dialogue, where "a spirit" is unwarrantedly treated as a "who".

Robot:
Human: god

Robot: GOD WHO IS GOD
Human: god is a spirit

Robot: A SPIRIT WHO IS SPIRIT
Human:

The following report from the "En" English lexical array does not reveal to us why the word "WHO" was used at time point "t=433", but it does show an "mfn" gender flag set to one ("1") for masculine.

389 100 10 0 1 100 5 100 386  to GOD
393 100 10 0 1 100 5 100 390  to GOD
397 55 6 0 0 55 7 55 394  to WHO
400 66 0 2 0 66 8 58 398  to IS
404 100 10 0 1 100 5 100 401  to GOD
408 100 12 0 1 100 5 100 405  to GOD
411 66 0 2 0 66 8 58 409  to IS
413 1 15 0 0 1 1 1 412  to A
420 109 31 1 0 109 5 109 414  to SPIRIT
422 1 15 1 0 1 1 1 421  to A
429 109 36 1 0 109 5 109 423  to SPIRIT
433 55 0 1 0 55 7 55 430  to WHO
436 66 0 2 0 66 8 58 434  to IS
443 109 0 1 0 109 5 109 437  to SPIRIT
t nen act num mfn fex pos fin aud

By forcing diagnostic reports on the values of "mfn" and "mfnflag", we tracked down a bugfix for the problem of WhoBe saying "WHO IS SPIRIT" instead of "WHAT IS SPIRIT". In the InStantiate module, we added one (indicated) line of code.

  mfn @ 1 =  mfn @ 2 = OR IF  \ masc or fem; 17aug2010
    CR ." InSt: setting mfnflag to " mfn @ . \ 25aug2010
    mfn @ mfnflag !  \   17aug2010
  THEN   \ 17aug2010
  mfn @ 0 = IF  0 mfnflag ! THEN  \ test; 25aug2010
  0 mfn !  \ Test code applies only to En array.
  0 preset !

The AI properly began saying "WHAT IS SPIRIT", but it also began to leave out the word "IS" from the output of the WhoBe module, as shown in the dialog reproduced below.

Robot:
Human: god

Robot: GOD WHO GOD
Human: god is a spirit

Robot: SPIRIT WHAT IS SPIRIT
Human:

It turns out that the WhoBe module was testing for a positive value of "mfn" or "mfnflag" as a pre-condition for saying "IS" after "WHO". Once we began resetting the "mfnflag" to zero in the InStantiate module, WhoBe stopped saying "IS" -- until we removed the IF-THEN precondition.

Wed.25.AUG.2010 -- Tweaking the Codebase

Now we have a problem because the user can answer a who-query with a sentence of input, but the AI does not regurgitate the knowledge upon receiving the same who-query.

Thurs.26.AUG.2010 -- Changelog Entry

The changelog entry on yesterday's upload of free AI source code to the Web reads, "25aug10A.F retrieves knowledge gained from human users." The AI is becoming better able to ask a question about a new concept, comprehend the answer by storing it with the proper associations recorded among concepts, and respond with the assimilated knowledge when queried about the original concept. Curiously, the evolving albeit buggy AI responds differently to asking about a new concept with "who" and with "what", even when the same information provides the content of the response. To illustrate this point, we run the AI right now and we obtain the following dialog.

Robot:
Human: god

Robot: GOD WHO IS GOD
Human: god is spirit

Robot: SPIRIT WHAT IS SPIRIT
Human: who is god

Robot: GOD IS A SPIRIT
Human: what is god

Robot: A GOD BES SPIRIT
Human:

The answers recorded above are different because different parts of the AI Mind software respond differently to who-queries and to what-queries. Meanwhile we inch closer to our next AI milestone of demonstrating self-referential thought.


MindForth Programming Journal (MFPJ) 2010 August 24

Tues.24.AUG.2010 -- Enhancing Query-Responses

Today in 24aug10A.F we have modified EnCog so that query-answers will state the "qus" query-subject in the response. Before the direct-object of the response, we have put a call to EnArticle so that the response may insert "A" or "THE" before the direct object.

Tues.24.AUG.2010 -- Anticipating EnAdjective

Because of the "seq-skip" work done on Sat.21.AUG.2010, it becomes possible to introduce new adjectives by using them just before a noun already known to the AI. For instance, if the AI already knows the word "book" but not the adjective "new", a user could type in the sentence, "i have a new book", and the seq-skip mechanism, detecting "book" as a noun and as the direct object of "have", could tentatively parse "new" as an adjective.
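
A minimal JavaScript sketch of that tentative parse (the lexicon and function names are hypothetical; the real mechanism lives in the Forth seq-skip code):

  // Hypothetical parts-of-speech lookup for words already known to the AI.
  var knownPos = { "i": "pron", "have": "verb", "a": "art", "book": "noun" };

  function tentativePos(word, nextWord) {
    if (knownPos[word]) return knownPos[word];       // the word is already known
    if (knownPos[nextWord] == "noun") return "adj";  // new word right before a known noun
    return "noun";                                   // default guess for a new word
  }

  // In "i have a new book", the unknown "new" precedes the known noun "book",
  // so tentativePos("new", "book") returns "adj".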


MindForth Programming Journal (MFPJ) 2010 August 23

Mon.23.AUG.2010 -- Calling WhoBe or WhatIs Properly

Today we would like to use the most-glaring-defect principle to start working on the problem of MindForth giving responses with "BE" instead of "IS".

As an obiter dictum, let us record here the idea that we may get the AI to respond differently to who-queries and to what-queries by potentiating the EnArticle module for English articles. If we ask, "Who are you?", we might get, "I AM ANDRU". If we ask, "What are you?", we may hope to get "I AM AN ANDRU" or "I AM A ROBOT". The slight proclivity to use "A" or "AN" shows a good response to a what-query. It may also help with our endeavors to code Is-a functionality.
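
The surface choice between "A" and "AN" is simple enough to sketch in JavaScript (a hypothetical helper, not the EnArticle module itself; a crude letter-based test that ignores exceptions like "a university"):

  // Hypothetical helper: pick the indefinite article by the noun's first letter.
  function indefArticle(noun) {
    return /^[AEIOU]/i.test(noun) ? "AN" : "A";
  }

  // indefArticle("ANDRU") yields "AN"; indefArticle("ROBOT") yields "A".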

At the start of any True AI coding session, we may pursue either the "most glaring defect" or simply the first bug that presents itself, such as now when we type in "cat" and we receive the output "CAT WHO IS CAT". That unfortunate output indicates that the WhoBe module is being called unwarrantedly, because the word "cat" is not known to be either masculine or feminine, as a human person might be. Now we have solved the "cat" problem by tracking down the "mfn" and "mfnflag" values, and by zeroing out not only "mfn" but also "mfnflag" at the end of the EnBoot English-bootstrap module. After the debugging, we entered the single word "cat" and we got the reply, "CAT WHAT IS CAT CAT". We are not sure why the word "CAT" is reduplicated at the end of the output.

After hours of coding, we have gotten the AI Forthmind to ask "who is" or "what is" questions, depending upon whether an input word seems to denote a person or a thing. We have removed several bugs. We record the following dialog.

Robot:
Human: god

Robot: GOD WHO IS GOD
Human: god is jesus

Robot: JESUS WHO IS JESUS
Human: jesus is lord

Robot: LORD WHAT IS LORD
Human: lord is a spirit

Robot: A SPIRIT WHAT IS SPIRIT
Human:

We have not reached the point where we may act like Knuth and offer a bounty on MindForth bugs, but we are now releasing our most robust ever AI code.


MindForth Programming Journal (MFPJ) 2010 August 20

Fri.20.AUG.2010 -- Restoring the "recon" System

We had to upload the 19aug10A.F MindForth with only semi-successful code that answered a who-query with "BE" instead of "IS". In BeVerb we could force the word "IS" to be selected, but then the wrong predicate nominative was chosen. In our new code we want to explore why the switch from "BE" to "IS" was causing problems.

In our recent 19aug10A.F code we had a conflict between activation-thresholds for the governance of program-flow. The old "recon" system was using a threshold of "20" and the new "beact" system was using a threshold of "12". It has occurred to us meanwhile that we might solve some problems by tracking down the etiology of the "beact" activations and exerting an upwards push on them so that they would share the same threshold level of "20" with the "recon" system -- which was at a point carefully chosen to avoid spurious associations.

In our 20aug10A.F AI code, let us see what forces are at work to influence and shape the "beact" levels. The "beact" variable is first stored within the VerbPhrase module, as the activation on the winning verb selected for inclusion in a sentence. As we use a diagnostic message to reveal the values of both "beact" and ordinary "act" within VerbPhrase, we discover that they hold numerically the exact same values. Why, then, are the threshold levels so different?

We should probably start using "predact" (for "predicate activation") instead of simple "act" to test the quasi-recon threshold, so that "beact" and "predact" together will make more sense as variables.

The "recon" comparison involved setting a threshold of twenty (20), below which validly associated verbs were empirically not being found for a mystery noun, so that the noun could be treated as the proper subject of a "what- is" question. Perhaps we could proceed by returning to reliance upon the recon-system, and by using the WhatIs module or its likeness as the arena for decisions about invoking the WhoBe module.

If we shift things around here and not only go back to using the "recon" system, but also use "recon" to differentiate between calling WhatIs and WhoBe, then we have made a major change in MindForth which may lead to the creation of an AI worth studying by many neophyte AI programmers. Only the AI that thinks and works is worth studying and reverse-engineering. How we arrived at the working AI will not be anywhere near as important as figuring out how the AI-complete software works, so that AI coders can work on maintaining and improving the AI.

Sat.21.AUG.2010 --

In our work now on implementing the generation of who-queries and on the successful retrieval of knowledge stored when who-queries are answered, we discover that conditions in the MindForth program are much messier and problematically more complicated than we had imagined. For instance, it causes a problem if we enter "Andru is a robot" and the AI associates the be-verb to the article "A" instead of to "ROBOT". The problem is that we cannot retrieve the basic knowledge that "Andru is robot". If we enter "Andru is robot" without the article "a", we can ask "what is andru" to retrieve the knowledge, but the AI answers, "ANDRU BES ROBOT", as if "be" were a regular verb that may take an inflectional "s" ending.

Just now we typed in "andru is robot" and "what does andru be". We received the answer, "HE BE ROBOT".

We seem to recall that either in Forth or in JavaScript, we had coded a mechanism for InStantiate to skip over an article when storing the association between an input verb and its direct object. Since we can not find such code, it probably does not exist. We will compose new code to do the job. Since we can intercept "a" or "the" and not store them as a "seq" associated with a verb, at the same time we can set a "lackseq" flag to indicate that there exists a condition where a recent engram lacks a "seq" value. Then we can wait for a candidate "seq" to come in, and we can have InStantiate or some other competent module retroactively store the valid "seq" while resetting the "lackseq" flag to zero.

It looks as though InStantiate stores the "seq" value only retroactively, anyway, so we may superimpose code to prevent the articles "a" and "the" from being stored as a false "seq".
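
A hedged JavaScript sketch of the proposed mechanism ("lackseq" and "seq" are the real terms; the array and function names here are placeholders):

  var psi = [];      // stand-in for the conceptual flag-panel array
  var lackseq = 0;   // 1 when a verb engram has been left without a valid "seq"
  var lackverb = 0;  // index of the verb engram still awaiting its "seq"

  function assignSeq(verbIdx, word, wordPsi) {
    if (word == "A" || word == "THE") {
      lackseq = 1;         // intercept the article; do not store it as a "seq"
      lackverb = verbIdx;  // remember which engram is still owed a valid "seq"
      return;
    }
    var target = (lackseq == 1) ? lackverb : verbIdx;
    psi[target] = psi[target] || {};
    psi[target].seq = wordPsi;  // store the valid "seq", retroactively if need be
    lackseq = 0;
  }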


MindForth Programming Journal (MFPJ) 2010 August 19

Thurs.19.AUG.2010 -- Discovering a Major Problem

We are at a stage now where we may home in on the goal of having the AI maintain a continuous chain of self-referential thought.

We rename 17aug10A.F as 19aug10A.F and we run the AI code in search of correctable glitches. When we notice erroneous output like "WHO IS AM I", we go into the WhoBe module and we reset the "mfnflag" variable to zero after it causes the saying of "IS", so that the AI will stop unwarrantedly inserting "IS".

Then we notice that stray conceptual activations are carrying over even after KbTraversal is invoked, although KbTraversal is supposed to heavily activate a particular concept in a pre-ordained queue of activand concepts. So at the start of KbTraversal we comment out three mild calls to PsiDecay, and we instead insert a call to the harsh PsiClear module, so that only the designated activand concept shall be activated. The problem of interference from stray activations seems to go away.

Then we notice a major problem, worthy of focussing our attention on for a major, upload-worthy update of the MindForth AI. We notice that the AI properly activates the concept of God during KbTraversal and properly asks the resulting question "GOD WHO IS GOD", but the AI does not remember our input of "God is Jesus" and repeats the question "GOD WHO IS GOD" during KbTraversal, even though a link from "GOD" to "JESUS" is still present in the recent memory of the AI, as shown in some old engrams.


424 : 55 0 0 100 100 7 66 55 to WHO
427 : 66 0 2 55 55 8 100 66 to IS
431 : 100 39 1 66 55 5 66 100 to GOD
435 : 100 39 1 100 0 5 58 100 to GOD
438 : 58 23 2 100 100 8 111 58 to BE
444 : 111 0 2 58 100 5 0 111 to JESUS
450 : 111 0 2 111 0 5 55 111 to JESUS
454 : 55 0 0 111 111 7 66 55 to WHO
457 : 66 1 2 55 55 8 111 66 to IS
463 : 111 0 2 66 55 5 66 111 to JESUS

We suspect immediately that KbTraversal is reactivating the "GOD" concept at an activation so low that the WhoBe module gets called repeatedly for the low-activation "GOD" concept, even though there is an engrammatic link between "GOD" and "JESUS". No, KbTraversal sends a rather high activation of sixty-two (62) into NounAct.

After the AI asks, "WHO IS GOD", the word "GOD" is left with an activation of thirty-nine (39). That activation should revive the "known" answer, namely that "GOD IS JESUS". However, let us check what is the threshold activation for invoking the WhoBe module. Oh, WhoBe is called by AskUser when a be-verb activation is less than forty (40). We could try gradually lowering the threshold activation from "40" down towards thirty and lower. We could also insert some diagnostic message code that will reveal to us what real values within BeVerb are letting AskUser call WhoBe.

If we solve this glitch properly so that the AI initially asks a who-is question but thereafter remembers the answer supplied by the human user, we will have a very powerful demonstration of cognitive ability on the part of the AI. The testing of that ability will be worthy of mention in the MindForth user manual.

Thurs.19.AUG.2010 -- Novelty: Testing for Lowest Maximum

Although BeVerb was testing "beact" for an activation lower than forty (40) and we switched to testing for lower than twelve (12), we still did not escape calling AskUser and WhoBe, because one value of "beact" was "1" and another value of "beact" was "14". Immediately we realized that we need to test not for a single lowest value of "beact", but for a lowest maximum value.
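
In JavaScript terms the distinction looks like this (a hypothetical sketch; only "maxbeact" and the threshold of twelve come from the Forth code):

  // Decide whether to ask a who-query, based on the strongest be-verb activation.
  function shouldAskWho(beactValues, threshold) {
    var maxbeact = 0;
    for (var i = 0; i < beactValues.length; i++) {
      if (beactValues[i] > maxbeact) maxbeact = beactValues[i];
    }
    return maxbeact < threshold;  // ask only if even the maximum is weak
  }

  // With the values seen above, shouldAskWho([1, 14], 12) is false, because the
  // maximum of 14 clears the threshold, whereas testing the single value 1 alone
  // would have wrongly triggered the call to AskUser and WhoBe.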

Fri.20.AUG.2010 -- Knowledge-Base Responses to Who-Queries

In our coding yesterday we were able to isolate "maxbeact" as a variable that would prevent calls from BeVerb to AskUser and on to WhoBe if a single item of engrammatic knowledge about a concept exceeded a threshold level, while disregarding sub-maximum activations which would have caused a call to WhoBe, and which did indeed cause a call to WhoBe when the human user had not yet entered any knowledge about the concept in question. Unfortunately, calls to WhoBe were still getting through -- by way of the legacy "recon" system for posing a what-query upon the introduction of a previously unknown noun. Immediately we found ourselves in a quandary, because the conflicting decision-routines for "maxbeact" and for "recon" were relying upon different levels of threshold activation.

We noticed yesterday that the EnCog (English thinking) module in our Forth code since 10 December 2009 has contained the comment-remark that "recon" may soon be phased out. We do not remember exactly why we were thinking of phasing out "recon", but we see in retrospect that the "recon" system was too indirect in its method of generating a question about an unfamiliar noun. Although yesterday we were daunted by the prospect of having to integrate the "recon" system and the "maxbeact" system, today with more clarity we realize that we need only to comment out the central test of the "recon" value in order to permit the "maxbeact" system to operate without interference from the "recon" system.

Then, if the unimpeded "maxbeact" system works, in the sense of letting the AI initially ask questions before knowledge is gained, and in the sense of recalling the knowledge instead of asking unwarranted questions, we may proceed to dismantle the obsolete "recon" system in a careful, non-disruptive way. So now we try to comment out the heart of the "recon" system.

After commenting out the assignment of "recon" in the VerbPhrase module, we no longer obtained "what-is" questions about unfamiliar nouns, but we also did not obtain factual answers to who-queries about knowledge that had been entered about a previously mysterious noun. Perhaps a threshold was still operating to prevent the generation of a statement in response to a who-query from the human user. Or perhaps not enough activation was going into "spreading activation" by way of the NounAct module. We decided to insert a diagnostic message at the start of the NounAct module.

We are still not getting the knowledge back out. Perhaps the competent threshold for the old "recon" system is still preventing the formulation of a statement of knowledge. Having knocked out the operation of "recon" as a determinant in the BeVerb module, perhaps we should either lower or eliminate the threshold used in conjunction with "recon". In BeVerb, we are using a threshold of "12" for "maxbeact", while in VerbPhrase the threshold in connection with "recon" has long been set at "20". We will comment out the "20" threshold in VerbPhrase and see what happens with a much lower threshold of "12" -- the same as with "maxbeact" -- although we seem to recall that the threshold of "20" was chosen in order to prevent spurious statements of false knowledge.

Ah, now we are getting somewhere, as the following exchange shows.

Robot:
Human: god

Robot: GOD WHO IS GOD
Human: god is jesus

Robot: JESUS BE JESUS
Human: who is god

Robot: GOD BE JESUS
Human:

User Command: halt

Currently, the BeVerb module is set up to choose a proper form of be-verb only for personal pronouns like "HE SHE IT", etc. If we enhance the BeVerb module to let it find a be-verb for a noun, we may start getting the proper generation of knowledge-based responses to who-queries.

In the VerbPhrase module, we have a test which detects the imminent selection of AM, IS or ARE and shunts the continuation of the sentence-generation off to the BeVerb module. Let us try adding the be-verb "BE" to the group of verbs that will shunt generation off to the BeVerb module.

When we tried to use BeVerb to switch from "BE" to "IS" in who-query responses, the AI failed to state the correct predicate nominative, so we will comment out that attempt and release our semi-successful code, with a view to switching "BE" to "IS" in a later release. Our current code shows the AI at least finding the factual knowledge for making an albeit grammatically awkward response to a who-query.


MindForth Programming Journal (MFPJ) 2010 August 17

Tues.17.AUG.2010 -- Using Gender to Trigger Who-Queries

Today we would like to see if the AI can ask a who-query rather than a default what-query, if the gender of a noun in question is known to be masculine or feminine. In English, as opposed to German or Russian, a non-neuter gender indicates that an entity is a "who" and not simply a "what".
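
In effect the query word follows directly from the gender flag, as in this hypothetical JavaScript helper (the real decision is spread across InStantiate, AskUser and WhoBe using the "mfn" and "mfnflag" values):

  // mfn: 1 = masculine, 2 = feminine; anything else is treated as a thing.
  function queryWord(mfn) {
    return (mfn == 1 || mfn == 2) ? "WHO" : "WHAT";
  }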

When we rename 11aug10A.F as 17aug10A.F and run the Forthmind, entering just the word "god" causes the following exchange.

Robot: GOD WHAT IS GOD GOD
Human:

Next in the AskUser module we insert a diagnostic message to reveal any value held in the "mfn" gender variable.

Robot: GOD
AskU: mfn = 0 WHAT IS GOD GOD

Robot: GOD WHAT IS GOD GOD
Human:

Apparently any value that may have been held in "mfn" for "GOD" has been reset to zero by the time the AskUser module is called. We should be able to run a ".psi" report and check for sure. Oops! We chose the wrong report. We run the ".en" report.

324 100 0 1 1 100 5 100 322  to GOD
329 101 0 0 0 101 2 101 326  to HERE
333 102 0 0 1 102 5 102 331  to MAN
339 103 0 0 0 103 5 103 335  to MEDIA
346 104 0 0 0 104 5 104 341  to PERSON
352 105 0 0 0 105 2 105 348  to THERE
357 106 0 0 0 106 7 106 354  to WHOM
363 107 0 0 2 107 5 107 359  to WOMAN
367 56 0 0 0 56 7 50 365  to YOU
371 67 0 0 0 67 8 58 369  to ARE
380 108 0 0 0 108 5 108 376  to MAGIC
383 58 0 0 0 58 8 58 382  to BE
389 100 0 0 1 100 5 100 386  to GOD
393 100 0 0 1 100 5 100 390  to GOD
398 54 0 0 3 54 7 54 394  to WHAT
401 66 0 2 0 66 8 58 399  to IS
405 100 0 0 1 100 5 100 402  to GOD
409 100 0 0 1 100 5 100 406  to GOD
t nen act num mfn fex pos fin aud

The above ".en" report on the English lexical array is encouraging, because it shows that the word "GOD" retains its "mfn" value of one (1) for masculine each time that the word "GOD" is used. However, the software may be blanking out the "mfn" value in advance of the AskUser module. We need to run a search on "mfn" in the Forth code to see in what situations the "mfn" value is reset to zero.

Hmm, "mfn" is reset to zero after storage in the InStantiate module. In order not to disturb the extremely fundamental InStantiate functionality, we should perhaps create "mfnflag" as a variable to pass the gender information from InStantiate to the AskUser module.

Tues.17.AUG.2010 -- Post-Upload Upshot

We did create and use "mfnflag" to get the AI to ask "Who" when a noun had a male or female gender, but not without some difficulty. We were coding under time-pressure, and the new "mfnflag" kept losing its value somewhere between its initial setting in the InStantiate module and its utilization in the WhoBe module, but we could not at first detect that the value of the "mfnflag" was being changed -- probably by the occurrence of a zero-gender word like "WHO" itself. Our fix was to protect the "mfnflag" value within an IF-THEN clause in the InStantiate module, so that the positive value of "1" for male or "2" for female would persist until dealt with in the WhoBe module. Unfortunately, such a quick fix may be less than ideal for many normal situations.

It is typical of our AI coding that we latch onto even a sub-optimal algorithm that proves our point, so that we can get the functionality up and running. We were in such a hurry that we tested the AI only by entering the word "god" and seeing our desired response of "GOD WHO IS GOD" and not "GOD WHAT IS GOD". Maybe right now we will test the AI to see if it reaches the fourth call to ReJuvenate and then properly asks, "GOD WHO IS GOD".

We tested the 17aug10A.F AI and we let it run through the four activand concepts of KbTraversal. When it activated the concept of God, it said first "GOD WHO IS" and then "GOD WHO IS GOD", so there are still some bugs to be worked out. The AI also said, "I WHO IS AM I", which is a step backwards in functionality. On the whole, however, the AI is approaching self-referential thought.

We will need to firm up strongly the concept of self or "I", making it so robust that chains of thought do not derail when the AI is thinking about itself. We may need to have a routine that intercepts the name of the AI Mind (typically "ANDRU") and substitutes the pronoun "I" or "ME" instead. We may also need a routine to accept vocative calls of "ANDRU" without regarding the word "ANDRU" as a suggested topic for a new thought. In fact, software conversion of the name "ANDRU" to an activation of the concept of self or "I" may serve both these purposes at once: prevention of reference to self as "ANDRU", and acceptance of the input name "ANDRU" as merely an attention-getter, giving the AI an opportunity to say something like "YES" or "I AM HERE".
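
A hedged JavaScript sketch of such an interception routine (nothing like it exists in the code yet; the concept number 50 for "I" is taken from the person-variable code near the top of this page, and the rest is hypothetical):

  var EGO = 50;        // concept number of "I", the concept of self
  var activation = {}; // hypothetical activation table keyed by concept number

  function interceptName(word, conceptNum) {
    if (word == "ANDRU") {                           // the AI's own name as input
      activation[EGO] = (activation[EGO] || 0) + 62; // arbitrary boost to the self-concept
      return EGO;      // treat the name as an attention-getter aimed at the self
    }
    return conceptNum; // any other word passes through unchanged
  }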


29 older entries...
