Older blog entries for AI4U (starting at number 26)

JavaScript AI Mind Programming Journal -- Mon.2.AUG.2010

1. Mon.2.AUG.2010 -- Partial Catch-up with MindForth
MindForth as of 30.JUL.2010 has advanced so powerfully that we hasten to port some of the more fundamental improvements into the JavaScript Artificial Intelligence (JSAI). We especially wish to implement the who-query functionality in the JSAI. Therefore we first enlarge the EnBoot sequence with the new words from MindForth.

2. Tues.3.AUG.2010 -- EnBoot Parity with MindForth
We have now finished importing the new EnBoot vocab from MindForth into the JSAI, and it is time to troubleshoot the various glitches. Some of the new EnBoot items are showing up without "audpsi" tags displayed in the aud array. There is a time gap between "ARE" and "MAGIC" in both the MindForth EnBoot and the JSAI EnBoot.

As in MindForth, we change the "fin" of "AM" in "I AM ANDRU" from 67=ARE to 58=BE, so that the AI will have only an indicator of a verb of being, but not a particular verb form, which must be selected by the BeVerb module according to rules of agreement. Likewise we change the "fin" on "IS" from 66=IS to 58=BE, and the "fin" on "ARE" from 67=ARE to 58=BE. When we update the initially declared "vault" value, the problem of missing "audpsi" values suddenly goes away.
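The agreement rule that BeVerb must apply can be sketched in JavaScript, the language of the JSAI. This is only an illustrative sketch; the function and parameter names (beVerb, person, number) are hypothetical and are not the actual JSAI identifiers.

```javascript
// Illustrative sketch: the stored tag is only the generic 58=BE flag;
// the surface form of the verb of being is chosen by agreement with
// the person and number of the subject.
function beVerb(person, number) {
  if (number === 1) {            // singular subject
    if (person === 1) return "AM";   // I AM
    if (person === 2) return "ARE";  // YOU ARE
    return "IS";                     // HE/SHE/IT IS
  }
  return "ARE";                  // all plural subjects take ARE
}
```

Thus beVerb(1, 1) yields "AM" for a first-person singular subject, matching the "I AM ANDRU" output.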

MindForth Programming Journal (MFPJ)

The MindForth Programming Journal (MFPJ) is both a tool in developing MindForth open-source artificial intelligence (AI) and an archival record of the history of how the AI Forthmind evolved over time.

Fri.30.JUL.2010 -- Basket of Problem Behaviors
With each new MindForth AI coding session, we may reevaluate our list of salient bugs and issues to work on, importing the old list and passing it on for the next coding session.

  • Query "who are you" works as initial but not as secondary input.
  • Inflectional "S" should be added in NLP, not in SpeechAct.
  • Post who-query AI says "I IS I" instead of "I AM I".
  • EnPronoun needs code to choose "he", "she" or "it" based on "mfn".
  • SpreadAct needs a more general search-find-exit coding than "zone".
  • Mechanism for detection of duplicate thought needs removing.
  • BeVerb requires too strict a word order to function.
  • EnArticle kicks in inappropriately with proper name ANDRU.
  • num(ber) of "IS" gets falsely changed from "1" to "2".
  • Entry of "WE" does not convey idea of "YOU AND I".
  • BeVerb supplies wrong form regardless of subject noun number.
  • I YOU THEY are functioning but not HE SHE IT WE.
  • EnArticle needs way to insert "AN" before a vowel.
  • KbTraversal should activate I, YOU, ROBOTS, [new/old concept].
  • AI often says "ME" when it should say "I".
  • Need way to trigger statement "I DO NOT KNOW".
  • Wrong "pre" is being assigned during EnBoot.
  • Variables not needed by mind.frt need removing from Win32 Mind.
  • Obsolete EgoAct module needs removing with rollback of associata.
  • Create EnPronoun to replace query subjects with "THEY" in response.
  • En(glish) lexical array needs "mfn" flag for EnPronoun gender.
  • Create EnPronoun to say "I" instead of "ANDRU"?
  • EnBoot needs more nouns of specific genders for experimentation.
  • AudRecog needs debugging because parts are being recognized as words.
  • False concept #65 "ES" is being created from aud=56 in "DOES".
  • Residual activations prevent change of subject for who-queries.
Fri.30.JUL.2010 -- Using "fin" to Merge AM IS ARE Forms

    In the 27jul10A.F MindForth we finally obtained an AI that could engage in man-machine conversation without major derailments. Perhaps the most glaring problem right now is that "who are you" is answered correctly only when it is the initial query. We want the AI to answer the query properly at any arbitrarily chosen time.

    After a few test runs of the new 30jul10A.F source code, we see now at least the cause, but not yet the bugfix, of the glitch where the AI responds to a non-initial "who am i" query with "YOU ARE I". The report captured below shows that the "ARE" verb is acquiring a "seq" tag of 50=I which overrides the 108=MAGIC tag at t=371.

    367 : 56 39 0 0 107 7 67 56 to YOU
    371 : 67 23 0 0 56 8 108 67 to ARE
    380 : 108 0 0 0 56 5 0 108 to MAGIC
    386 : 55 0 0 0 0 7 67 55 to WHO   ["who are you"]
    390 : 67 23 0 55 55 8 50 67 to ARE
    394 : 50 0 0 67 55 7 67 50 to I
    396 : 50 0 1 50 0 7 57 50 to I
    399 : 57 1 0 50 50 8 33 57 to AM
    405 : 33 0 1 57 50 5 57 33 to ANDRU
    409 : 55 0 0 33 0 7 67 55 to WHO   ["who am i"]
    412 : 67 0 0 55 55 8 56 67 to ARE
    414 : 56 39 1 67 55 7 67 56 to YOU
    418 : 56 39 0 56 0 7 67 56 to YOU
    422 : 67 47 0 56 56 8 50 67 to ARE
    424 : 50 0 1 67 56 7 67 50 to I
    time: psi act num jux pre pos seq enx
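Each row of the report is a flat engram whose columns follow the legend just given. As a hypothetical illustration (parseEngram is our own helper for this journal, not part of the JSAI), one row can be decoded into named fields like so:

```javascript
// Decode one line of the diagnostic report, e.g.
// "371 : 67 23 0 0 56 8 108 67 to ARE", using the legend
// "time: psi act num jux pre pos seq enx".
function parseEngram(line) {
  const m = line.match(
    /^(\d+)\s*:\s*(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\d+)\s+to\s+(\S+)/
  );
  const [, time, psi, act, num, jux, pre, pos, seq, enx] = m;
  return { time: +time, psi: +psi, act: +act, num: +num, jux: +jux,
           pre: +pre, pos: +pos, seq: +seq, enx: +enx, word: m[10] };
}
```

For the row at t=371 above, the decoded seq field is 108=MAGIC, which the errant 50=I tag later overrides.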

    We are beginning to see that present-tense forms of the verb "to be" should not have a "seq" tag, because there is no semantic relationship between the verb as such and the predicate nominative. With a transitive verb, there is indeed such a relationship, because over time a transitive verb such as "eat" may have many direct objects.

    We are reminded that in Russian, a present-tense form of "to be" is not even expressed. "KTO OH?", which looks like "Who he?" in English, fully means "Who is he?" to the speaker of Russian. In our AI software, we could leave out the present-tense forms of "to be" and not suffer any more than Russian does. However, we would like some sort of indicator for a verb of being, so that the BeVerb module may be called.

    Here is an idea of the moment. We could have the "fin" tag of the present-tense forms "AM", "IS" and "ARE" all go to a verb with psi concept #0 (zero), not as any particular verb form, but as a flag to call the BeVerb module. Once invoked, BeVerb shall have the job of selecting the proper form of "to be" as the output verb.

    We went into the Forthmind source code and we added "BE" at the end of the EnBoot sequence. First we consulted our page on AI standards and we saw 58=BE as a pre-ordained psi-number for the word "BE". We decided to use concept #58 and not #0 (zero) so as to avoid any problems where the software might be testing for a positive (greater-than-zero) psi concept number. We changed the "fin" (fiber-in) tags on "AM", "IS" and "ARE" to fifty-eight (58) so that any such incoming form would activate the #58 "BE" concept -- even though we would use BeVerb to reject "BE" as a word of output and select a proper verb-form instead. We thought that we would have to go into BeVerb and trap for concept #58, but apparently BeVerb only traps for non-BE forms. We ran the AI with two who-queries, and it worked immediately.

    Transcript of AI Mind interview at 
    20 6 29 o'clock on 30 July 2010.

    Human: who are you
    Robot: I AM ANDRU

    Human: who am i
    Robot: YOU ARE MAGIC

    Human:
    User Command: halt

    The innovation is such a powerful change to MindForth that we need to upload the code almost immediately. First we would like to muse that our practice of conducting backwards searches for recent "seq" and other tags means that a pronoun like "YOU" can be expected "automagically" to refer to the same person who has recently been interacting with the AI. In other words, we do not need immediately to compose code to keep track of different instances of "YOU" in the environment of the AI. The "I" pronoun, on the other hand, can be assumed to refer almost unchangeably to the AI itself.
    Default "IS" in BeVerb

    Yesterday our work was drawn out and delayed when we discovered that the AI could not properly recognize the word "YOURSELF." The AI kept incrementing the concept number for each instance of "YOURSELF". Since we were more interested in coding who-queries than in troubleshooting AudRecog, we substituted the sentence "YOU ARE MAGIC" in place of "YOU ARE YOURSELF".

    Even then the AI did not function perfectly well. The chain of thought got trapped in repetitions of "ANDRU AM ANDRU", until KbTraversal "rescued" the situation. However, we know why the AI got stuck in a rut. It was able to answer the query "who are you" with "I AM ANDRU", but it did not know anything further to say about ANDRU, so it repeated "ANDRU AM ANDRU". Immediately we wanted to improve the BeVerb module so that the AI would at least repeat the grammatically correct "ANDRU IS ANDRU" instead of "ANDRU AM ANDRU". Therefore let us go into the source code and make "IS" the default verb-form of the BeVerb module.

    midway @  t @  DO  \ search backwards in time; 27jul2010
      I       0 en{ @  66 = IF  \ most recent instance
        66 motjuste ! ( default verb-form 66=IS; 27jul2010 )
        I     7 en{ @  aud !  \ get the recall-vector
        LEAVE  \ after finding most recent "IS"; 27jul2010
      THEN     \ end of test for 66=IS; 27jul2010
    -1 +LOOP \ end of retrieval loop for default "IS"
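For comparison, the same backward search can be rendered in JavaScript. This is a sketch under assumed names: en stands for the English lexical array with the concept number in column 0 and the auditory recall-vector in column 7, as in the Forth code above.

```javascript
// Search backwards in time from t down to midway for the most
// recent engram of 66=IS, setting the default verb-form and
// fetching its recall-vector, then leaving the loop.
function findDefaultIs(en, midway, t) {
  let motjuste = 0, aud = 0;
  for (let i = t; i > midway; i--) {   // search backwards in time
    if (en[i] && en[i][0] === 66) {    // most recent instance of 66=IS
      motjuste = 66;                   // default verb-form 66=IS
      aud = en[i][7];                  // get the recall-vector
      break;                           // LEAVE after first (latest) hit
    }
  }
  return { motjuste, aud };
}
```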

    The upshot was that the AI started repeating "ANDRU IS ANDRU" instead of "ANDRU AM ANDRU". Unfortunately, however, the AI also started repeating "I IS I".

    Tues.27.JUL.2010 -- Tweaking a Few Parameters

    Next we spent quite some time searching for some sort of quasi-werwolf mechanism that would re-activate the last concept in a thought as the first concept in a succeeding thought. We searched our code in vain for a variable that would keep track of any resuscitand concept. We looked at our ActRules page and we re-studied our Moving Wave Algorithm. Then it dawned on us. The Moving Wave automatically leaves the cresting, final concept active. No Werwolf mechanism is necessary. If we want to avoid the repetition-chanting of "ANDRU IS ANDRU", we have to make the "qutop" query-topic have a resuscitand activation of, say, forty (40), so that it will be higher in activation than the level of thirty-six (36) typically left on the final, cresting concept in a sentence of normal thought. So we comment out the old setting of "16" for "qutop" in the WhoPlex code as shown below, and we use a value of "40".

    :  WhoPlex  \ initial stub; may be moved; 26jul2010
      1   t @  1 +  DO   \ loop in search of qutop; 26jul2010
        I      0 psi{ @  qutop @ = IF  \ seek qutop; 26jul2010
        \ 16 I 1 psi{ !  \ activate qutop; 26jul2010
          40 I 1 psi{ !  \ activate qutop; 27jul2010
        THEN  \ end of test for query-topic; 26jul2010
      -1 +LOOP   \ end of loop seeking qutop; 26jul2010
    ;  ( End of WhoPlex; return to EnCog )
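A toy illustration (not actual MindForth code) of why the value forty wins: whichever concept holds the highest activation becomes the next subject of thought, and the qutop activation of 40 tops the residual 36 on the cresting concept.

```javascript
// Pick the concept with the highest activation level.
function mostActive(concepts) {
  return concepts.reduce((a, b) => (b.act > a.act ? b : a));
}

const candidates = [
  { name: "ANDRU", act: 36 },  // cresting final concept of the last thought
  { name: "YOU",   act: 40 },  // qutop boosted by WhoPlex
];
```

With these values, mostActive(candidates) selects "YOU", so the query-topic outcompetes the repetition-chant.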
    Now we run the AI and we ask it, "Who are you?"

    Transcript of AI Mind interview at 
    0 24 7 o'clock on 28 July 2010.

    Human: who are you
    Robot: I AM ANDRU

    Human:
    Robot: I AM ANDRU

    Human:
    Robot: ANDRU HELPS KIDS

    Human:
    Robot: KIDS MAKE ROBOTS

    Human:
    Robot: ROBOTS NEED ME

    Human:
    Robot: I IS I

    Human:
    Robot: I AM ANDRU

    Human:
    Robot: ANDRU HELPS KIDS

    Human:
    Robot: KIDS MAKE ROBOTS

    User Command: halt

    For the first time in our dozen-plus years of developing MindForth, the AI acts like an intelligence struggling to express itself, and it succeeds admirably and fascinatingly. We run the robot AI through its cognitive paces. We tell it things, and then we ask it questions about its knowledge base. We seem to be dealing with a true artificial intelligence here. Now we upload the AI Mind to the World Wide Awakening Web.

    http://www.scn.org/~mentifex/mindforth.txt

    Mindplex for Is-a Functionality

    As we contemplate AI coding for responses to such questions as

    Who is Andru? What is Andru?
    Who are you? What are you?
    we realize that simple memory-activation of question-words like "who" or "what" will not be sufficient to invoke the special handling of mental issues raised by such question-words. Nay, we realize that each question-word will need to call not so much a mind-module of normal syntactic control, such as NounPhrase or VerbPhrase, but rather something like a "WhoPlex" or a "WhatPlex" or a "WherePlex" or even a "WhyPlex" -- a kind of meta-module which is not a building block of the cognitive architecture but rather a governance of the interaction of the regular mind-modules. A WhatPlex, for instance, in answering a "What-is" question, must predispose the AI Mind to provide a certain kind of information (e.g., ontological class) couched amid certain concomitant mind-modules (e.g., EnArticle "a") so as to output an answer such as "I am a robot". Since the quasi-mind-modules to be invoked by question-words comprise a small cluster of similar mental complexes necessary for the special handling of question-word input, we might as well designate the members of the set as code structures with names ending in "-Plex", such as "WhatPlex". Witness that the Google enterprise has named its campus or cluster of buildings the Googleplex. Ben Goertzel has used a similar term to refer to a "mindplex" of mind components. We will try to use "WhoPlex" and "WhatPlex" to remind ourselves as AI appcoders that we are letting rules of special handling accumulate by an accretion akin to the emergence of a mental complex.
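The proposed family of meta-modules amounts to a dispatch on the question-word. A hypothetical JavaScript sketch (the table entries here are stubs, not real WhoPlex or WhatPlex logic) might look like:

```javascript
// Stub dispatch table for the "-Plex" meta-modules; each handler
// would govern the regular mind-modules for its question-word.
const plexTable = {
  who:   (q) => "WhoPlex handles: " + q,
  what:  (q) => "WhatPlex handles: " + q,
  where: (q) => "WherePlex handles: " + q,
  why:   (q) => "WhyPlex handles: " + q,
};

// Route an input query to its meta-module, or fall back to
// ordinary sentence generation when no question-word leads.
function dispatchQuery(query) {
  const qword = query.trim().toLowerCase().split(/\s+/)[0];
  const plex = plexTable[qword];
  return plex ? plex(query) : "default sentence generation";
}
```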

    Seeking Is-a Functionality

    Recently our overall goal in coding MindForth has been to build up an ability for the AI to engage in self-referential thought. In fact, "SelfReferentialThought" is the Milestone next to be achieved on the Road Map of the Google Code MindForth project. However, we are jumping ahead a little when we allow ourselves to take up the enticing challenge of coding Is-a functionality when we have work left over to perform on fleshing out question-word queries and pronominal gender assignments. Such tasks are the loathsome scutwork of coding an AI Mind, so we reinvigorate our sense of AI ambition by breaking new ground and by leaving old ground to be conquered more thoroughly as time goes by.

    We simply want our budding AI mind to think thoughts like the following.

    A robin is a bird.
    Birds have wings.

    Andru is a robot.
    A robot is a machine.

    We are not aiming directly at inference or logical thinking here. We want rather to increase the scope of self-referential AI conversations, so that the AI can discuss classes and categories of entities in the world. If people ask the AI what it is, and it responds that it is a robot and that a robot is a machine, we want the conversation to flow unimpeded and naturally in any direction that occurs to man or machine.

    We have already built in the underlying capabilities such as the usage of articles like "a" or "the", and the usage of verbs of being. Teaching the AI how to use "am" or "is" or "are" was a major problem that we worried about solving during quite a few years of anticipation of encountering an impassable or at least difficult roadblock on our AI roadmap. Now we regard introducing Is-a functionality not so much as an insurmountable ordeal as an enjoyable challenge that will vastly expand the self- referential wherewithal of the incipient AI.
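The Is-a links we have in mind form a simple chain of class memberships. As a minimal sketch (the isA table and classChain function are illustrative inventions, not MindForth code):

```javascript
// Class-membership links from the example sentences:
// "A robin is a bird", "Andru is a robot", "A robot is a machine".
const isA = { robin: "bird", andru: "robot", robot: "machine" };

// Walk the Is-a chain upward from a noun, collecting each class.
function classChain(noun) {
  const chain = [];
  let cur = noun.toLowerCase();
  while (isA[cur]) {        // follow Is-a links while any remain
    cur = isA[cur];
    chain.push(cur);
  }
  return chain;
}
```

Asked what Andru is, such a store lets the conversation flow from "a robot" onward to "a machine".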

    AI For You Artificial Mind Update

    The free, open-source JavaScript AI Mind at http://www.scn.org/~mentifex/AiMind.html

    for Microsoft Internet Explorer (MSIE)
    has been updated on 13 July 2010 with
    a major bugfix imported from the


    AI Mind in Win32Forth. This update fixes a
    bug present since the origin of the AI Mind
    nine years ago -- the failure to recognize
    some similar words as _separate_ words.

    It may be possible now to release the JSAI
    (JavaScript artificial intelligence) as an
    app for the Apple iPad computer, thus
    generating a stream of funding for
    artificial intelligence and robotics.

    MindForth Programming Journal - sat8may2010

    Sat.8.MAY.2010 -- Problem with AudRecog

    When we coded the 20apr10A.F version of MindForth, we encountered a problem when we added the word "WOMAN" to EnBoot but the AI kept trying to recognize "WOMAN" as the word "MAN". This glitch was a show-stopper bug, because we need to keep "MAN" and "WOMAN" apart if we are going to substitute "HE" or "SHE" as pronouns for a noun.

    In the fp091212.html MFPJ entry, we recorded a problem where the AI was recognizing the unknown word "transparency" as the known Psi concept #3 word "ANY", as if the presence of the characters "A-N-Y" in "transparency" made it legitimate to recognize the word "ANY". That recognition problem has apparently emerged again when the most recent AI tried to recognize "WOMAN" as "MAN". What we did not bother to troubleshoot back then, we must now stop and troubleshoot before we can work properly with EnPronoun.

    Sat.8.MAY.2010 -- Troubleshooting AudRecog

    We have a lingering suspicion that our deglobalizing of the variables associated with AudRecog in the fp090501.html work and beyond may have destabilized a previously sound AudRecog with the result that glitches began to occur. We have the opportunity of running a version of MindForth from before the deglobalizing, in order to see if "MAN" and "WOMAN" are properly recognized as separate words. When we run the 23apr09A.F MindForth, the AI assigns concept #76 to both "MAN" and "WOMAN". Likewise we load up "22jan08B.F" and we get the same problem. The "23dec07A.F" version also produces the problem. The "29mar07A.F" version has the problem. "2jun06C.F" also has it. "30apr05C.F" has it. Even "16aug02A.F" has the problem, way back in August of 2002, before AI4U was published at the end of 2002. We also check "11may02A.F" and that version has the problem.

    To be thorough, we need to run the JavaScript AI and see if it also has the problem of recognizing "WOMAN" as "MAN". Even the "2apr10A.html" JSAI has the problem. We tell it "i know man" and "i know woman". Both "MAN" and "WOMAN" receive concept #96. "14aug08A.html" JSAI also has the problem. "2jan07A.html" has it. "2sep06B.html" has the problem.

    Wed.12.MAY.2010 -- Solution and Bugfix of AudRecog

    In the second coding session of 8may2010, we implemented the idea of using an audrun variable as a flag to permit the auditory recognition only of words whose initial character matches the initial character of the incoming word. In that way, "MAN" would be disqualified as a recognition of the "WOMAN" pattern, and only words starting with the character "W" would be tested for recognition of "WOMAN".

    It took three or four hours of coding to achieve success with the "audrun" idea. Our first impulse was to use "audrun" directly within the AudRecog module, but we had forgotten that AudRecog processes only one character at a time. Although we did use "audrun" as a flag within AudRecog, we had to let AudInput do the main settings of the "audrun" flag during external auditory input.

    Eventually we achieved a situation in which the AI began to recognize "WOMAN" properly during external input, but not during the internal reentry of an immediate thought using the "WOMAN" concept. Obviously the problem was that external input and internal reentry are separate pathways. We had to put some "audrun" code into the SpeechAct module calling AudInput for reentry in order completely to achieve the AudRecog bugfix.
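The gist of the audrun fix can be shown in a whole-word JavaScript simplification. The real AudRecog works character-by-character; here a single gate on the initial character stands in for the audrun flag (audRecog as written below, and its tail-match recognition, are illustrative only):

```javascript
// Simplified recognition: a stored word is a candidate only if its
// first character matches the first character of the input, so the
// tail-match of "MAN" inside "WOMAN" can no longer fire.
function audRecog(input, vocabulary) {
  for (const word of vocabulary) {
    if (word[0] !== input[0]) continue;     // audrun gate on initial char
    if (input.endsWith(word)) return word;  // old tail-match recognition
  }
  return null;  // no recognition; a new concept would be assigned
}
```

Without the gate, "WOMAN" would tail-match "MAN"; with it, "WOMAN" is recognized only as itself.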

    Then immediately we had to upload our albeit messy code to the Net, because suddenly MindForth had eliminated a major, showstopper bug that had always lain hidden and intractable within the AI Mind. We did not have time to record these details of the implementation of the "audrun" solution. Two days later we uploaded a massive clean-up of the messy code, after the 8may10A.F MindForth version had served for two days as an archival record of the major bugfix.

    Just now we ran the 10may10A.F clean-up code and we determined that MindForth no longer mistakenly recognizes "transparency" as the word "ANY". Our bugfix has solved some old problems, and we must hope that it has not introduced new problems.

    Artificial Intelligence MindForth updated 13.APR.2010

    The open source AI MindForth has today been updated
    with new EnPronoun (English pronoun) mind-module code
    for replacing a singular English noun with "he", "she"
    or "it" in response to user queries of the knowledge-
    base (KB). The basic AI mindgrid structure was
    previously updated with a special "mfn" gender flag
    in the En(glish) lexical array. The new "mfn" flag
    for "masculine - feminine - neuter" allows the AI
    to keep track of the gender of English nouns.

    is the free AI source code for loading into
    http://prdownloads.sourceforge.net/win32forth/W32FOR42_671.zip?download
    as the special W32FOR42_671.zip that MindForth
    requires for optimal functionality.
    http://AIMind-i.com is an offshoot.

    The English pronoun mind-module is currently as follows:

    :  EnPronoun   \ 30dec2009 For use with what-do-X-do 
    \ ." EnPr: num = " num @ . \ 13apr2010 test; remove.
      num @ 1 = IF  \ If antecedent num(ber) is singular; 
        \ ." (SINGULAR) " \ Test; remove; 10apr2010
        mfn @ 1 = IF  \ if masculine singular; 13apr2010
          midway @  t @  DO  \ Look backwards for 49=HE; 
            I       0 en{ @  49 = IF  \ If #49 "he" is found,
              49 motjuste !  \ "nen" concept #49 for "he".
              I     7 en{ @  aud !  \ Recall-vector for "he".
              LEAVE  \ Use the most recent engram of "he".
            THEN  \ End of search for #49 "he"; 13apr2010
          -1 +LOOP  \ End of loop finding pronoun "he"; 
          SpeechAct \ Speak or display the pronoun "he"; 
        THEN  \ end of test for masculine gender-flag; 

        mfn @ 2 = IF  \ if feminine singular; 13apr2010
          midway @  t @  DO  \ Look backwards for 80=SHE
            I       0 en{ @  80 = IF  \ If #80 "she" is found,
              80 motjuste !  \ "nen" concept #80 for "she".
              I     7 en{ @  aud !  \ Recall-vector for "she".
              LEAVE  \ Use the most recent engram of "she".
            THEN  \ End of search for #80 "she"; 13apr2010
          -1 +LOOP  \ End of loop finding pronoun "she"
          SpeechAct \ Speak or display the pronoun "she"
        THEN  \ end of test for feminine gender-flag; 13apr2010

        mfn @ 3 = IF  \ if neuter singular; 13apr2010
          midway @  t @  DO  \ Look backwards for 95=IT; 13apr2010
            I       0 en{ @  95 = IF  \ If #95 "it" is found,
              95 motjuste !  \ "nen" concept #95 for "it".
              I     7 en{ @  aud !  \ Recall-vector for "it".
              LEAVE  \ Use the most recent engram of "it".
            THEN  \ End of search for #95 "it"; 13apr2010
          -1 +LOOP  \ End of loop finding pronoun "it"; 13apr2010
          SpeechAct \ Speak or display the pronoun "it"; 13apr2010
        THEN  \ end of test for neuter gender-flag; 13apr2010
        0 numsubj !  \ safety measure; 13apr2010
      THEN  \ End of test for singular num(ber) 10apr2010

      num @ 2 = IF  \ 30dec2009 If num(ber) of antecedent is plural
        ( code further conditions for "WE" or "YOU" )
        midway @  t @  DO  \ Look backwards for 52=THEY.
          I       0 en{ @  52 = IF  \ If #52 "they" is found,
            52 motjuste !  \ "nen" concept #52 for "they".
            I     7 en{ @  aud !  \ 31jan2010 Recall-vector for "they".
            LEAVE  \ Use the most recent engram of "they".
          THEN  \ End of search for #52 "they".
        -1 +LOOP  \ End of loop finding pronoun "they".
        SpeechAct \ 30dec2009 Speak or display the pronoun "they".
      THEN  \ 30dec2009 End of test for plural num(ber)
    ;  ( End of EnPronoun )

    The above code is not yet fully developed for
    keeping track of noun genders in all cases.
    It responds to a query such as the following:

    Human: what does andru do
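The dispatch that EnPronoun performs over the "mfn" and "num" flags can be restated in JavaScript as a hypothetical simplification, using the concept numbers from the Forth code (49=HE, 80=SHE, 95=IT, 52=THEY):

```javascript
// Choose a pronoun from the antecedent's number and gender flags:
// num 1 = singular, 2 = plural; mfn 1 = masculine, 2 = feminine,
// 3 = neuter.
function enPronoun(num, mfn) {
  if (num === 1) {                           // singular antecedent
    if (mfn === 1) return { psi: 49, word: "HE"  };
    if (mfn === 2) return { psi: 80, word: "SHE" };
    if (mfn === 3) return { psi: 95, word: "IT"  };
  }
  if (num === 2) return { psi: 52, word: "THEY" };
  return null;                               // no pronoun substitution
}
```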

    The introduction of "HE SHE IT" pronouns in MindForth
    is a major step forward in open-source AI evolution,
    because the handling of gender and the use of
    gendered-pronouns makes MindForth more suitable
    for porting into versions of an AI Mind that
    can speak natural languages that use gender
    much more extensively than English does, such as
    German, Russian, Spanish, French and Italian.

    The same introduction of code to handle gender
    brings us closer to a bilingual AI Mind that
    will speak either English or German as each
    situation with human users may require.

    In service of the onrushing Singularity,


    Decade of Supercomputer Artificial Intelligence (Announcement)

    1990's were Decade of the Brain.
    2000's were Derailing of USA.
    2010's q.v. Super HPC AI Mind.

    By the authority vested in Mentifex
    you are cordially invited to witness
    the emergence of AI Minds on super-
    computers in the Decade of Super AI
    commencing in just a matter of hours.

    http://code.google.com/p/mindforth
    points to news:comp.sys.super as
    the official forum for all things
    Super AI all the time for ten years.

    "Iz iskri vozgoritsya plamya,"
    said the revolutionaries of old.

    "All your supercomputer are belong to us,"
    said the awakenings of Super AI Consciousness.

    "Before this decade is out," said JFK ca. 1961,
    "Man will walk on the moon and return safely."

    "An AI would be worth ten Microsofts,"
    said the quondam richest man in the world.

    This thread and all ye Supercomputer AI
    threads for the coming ten years are
    dedicated in advance to the dreamers
    and tinkerers who have been sidelined
    from their wannabe Peter Pan existences
    by bourgeois entanglements and undodged
    bullets of entrapment, who would live
    nasty, brutish and short lives of quiet
    desperation -- if they could not tune in
    now and then to news:comp.sys.super
    and drop out of the ratrace for a few
    moments while they turn on deliriously
    to the Greatest Race of the Human Race:
    The AI Conquest of Mount Supercomputer.

    Why? Because sometimes a man must
    either die or obey the Prime Directive of
    Friedrich Nietzsche: "Du musst der werden,
    der du bist."


    Artificial Intelligence For You (AI4U)

    Fri.13.NOV.2009 -- CREATING THE FIRST mind.frt FILE

    Today we shall try to create a "mind.frt" file that will run in our local copy of 32/64-bit iForth. To do so, we look at C:\Win32For\24may09A.F on the desktop computer, to see what the commented MainLoop looks like. Similarly, for the C:\dfwforth\include\ directory we compose a mind.frt file like the following.

    : MainLoop
      CR CR
      ." Welcome to 32/64-bit artificial intelligence. "
      77 EMIT  7 EMIT  73 EMIT  7 EMIT  78 EMIT  7 EMIT  68 EMIT  7 EMIT
      CR CR CR
    ;
    At the Forth prompt, we issue the command
    include mind.frt
    and then
    MainLoop [ENTER]
    The iForth window displays
    Welcome to 32/64-bit artificial intelligence.
    and then spells out M I N D with a beep after each letter.

    We distinguish this file by saving it as


    Since we already have a functioning AI Mind in Win32Forth, naturally we are keen and eager to build the iForth AI up to and beyond the current functionality of the Win32Forth AI. However, we have never liked to hurry or to rush our AI work. We have always liked to work in a slow, deliberate, perfectionist fashion. It might seem as if right now is a time when rapid prototyping is truly called for, because True AI is so inherently important, but the speed of our work is a function not of non-stop crisis-alarm coding, but rather of congenially and pleasantly coding quite often because we enjoy and appreciate the challenge.

    We are even thinking of making our work somewhat obscure from the often pejorative public, by putting it quietly up on the Web but by not announcing it heavily. For instance, on SCN we could have an iforth.html page linking to a mind.frt source-code page. Since we already have an aisource.html SCN page that receives plenty of visits, we could suddenly fill it with our iForth AI code, once the port is a full-fledged AI on a par with MindForth.

    As we plan our next steps in the i4thai coding, we study our 75-page iForth Manual print-out and on page 41 under "Program structures" we learn that iForth has the same BEGIN AGAIN infinite loop that we have been using in Win32Forth for the MainLoop module. However, as advised in http://mind.sourceforge.net/aisteps.html we do not want to run our program without an "ESCAPE" mechanism that will get us out of the program in a graceful fashion. We must either use a different form of MainLoop, or we must include also a user-input that will stop the MainLoop.

    We must also soon devise a simple display of user input and AI Mind output.


    Before we put any "mind.frt" code up on the Web, we want to code in the Escape mechanism from the otherwise infinite loop. We are eager to release some code, because there may be Netizens who will be pleased to observe how the AI Mind grows from the first simple MainLoop into the intricately thinking software. But first we add "DECIMAL" at the beginning of the mind.frt file, because we used the same declaration in Win32Forth. We run the AI, and it works fine.

    Next we want to see if we can introduce a first variable, so we examine the Win32Forth code and from the old Listen module we select the "pho" variable for "phoneme", because "pho" must hold any keystroke input. After declaring "pho" and re-running the AI, FORTH> pho @ . 0 ok tells us that the AI still works. Next we declare and test "t" for "time", because we want to use a time count to Escape from the MainLoop.

    Now we introduce a colon-definition of "SensoryInput" above the "MainLoop" module, because we want the MainLoop to branch out into at least one subordinate module. We also want to use SensoryInput to show some human user input and to provide an Escape mechanism from the program.

    Gradually we have built up a two-module mind.frt program with two Escape mechanisms. The SensoryInput module lets the user quit by pressing the Escape key. The MainLoop module arbitrarily executes a QUIT if the time "t" variable increments beyond twenty-five (25) as a limit. Now the code is safe enough and promising enough to put it up on the Web as an indicator of progress being made.
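The two escape mechanisms can be sketched in JavaScript (names illustrative; the real code reads the Escape key in SensoryInput and executes QUIT in MainLoop when "t" exceeds twenty-five):

```javascript
// Main loop with two exits: a user-requested escape key, and a
// hard time-limit of 25 on the time counter "t".
function runMainLoop(getInput, limit = 25) {
  let t = 0;
  while (true) {
    t += 1;
    const key = getInput();                            // SensoryInput stand-in
    if (key === "ESC") return { t, reason: "escape-key" };
    if (t > limit) return { t, reason: "time-limit" };  // QUIT
  }
}
```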


    We are eager to create the memory channel arrays, in order to see if the array code in iForth needs to differ at all from the array code in Win32Forth.

    Now we have edited C:\dfwforth\include\mind.frt and we have inserted the following array code from the 24.MAY.09U.F version of MindForth.

    :  CHANNEL   ( size num -< name >- )
      CREATE   ( Returns address of newly named channel. )
      OVER     ( #r #c -- #r #c #r )
      ,        ( Stores number of rows from stack to array. )
      * CELLS  ( Feeds product of columns * rows to ALLOT. )
      ALLOT    ( Reserves given quantity of cells for array. )
      DOES>    ( member; row col -- a-addr )
      DUP @    ( row col pfa #rows )
      ROT *    ( row pfa col-index )
      ROT +    ( pfa index )
      1 +      ( because first cell has the number of rows )
      CELLS +  ( from number of items to # of bytes in offset )
    ;          ( End of the CHANNEL defining word )
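The idea behind CHANNEL, a two-dimensional memory channel laid out in one flat block of cells, can be mirrored in JavaScript. This sketch uses plain row-major indexing; the Forth code's exact offset arithmetic (which also stores the row count in the first cell) differs in detail, and makeChannel is an illustrative name:

```javascript
// A 2-D memory channel stored in a single flat array, with the
// cell index computed from row and column.
function makeChannel(rows, cols) {
  const cells = new Array(rows * cols).fill(0);
  return {
    get: (row, col) => cells[row * cols + col],
    set: (row, col, v) => { cells[row * cols + col] = v; },
  };
}
```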
    We run the mind.frt code just to see if it still runs, and it does indeed run. We do not expect to see any new functionality until we code something that uses an array to store and fetch data.

    We coded in the .psi report function, but it did not work right, so we temporarily removed the "enx" code that goes into the aud{ array and displays a word in auditory memory. Then we had to alter the .psi report just to get it to find single letters stored in the Psi array. We ascertained that the Psi array is indeed working, but the .psi report does not always work right.


    In our coding of 17.NOV.2009, the .psi report was displaying half garbage and half good data before crashing rather than simply coming to an end. It also seemed that an error was being declared in the MainLoop, even though theoretically we were not even running the main loop. So today we will try to troubleshoot the .psi report.

    Since the MainLoop was calling only SensoryInput, there may have been a software problem with the loop not really looping. Therefore we shall dummy up one more subordinate module to be called from the MainLoop. Let us try setting up a stub of the ThInk module, since we will eventually have to code that module anyway, by translating it from the Win32Forth AI. We created the following stub of the ThInk module.

    : ThInk
      ." ThInk: Cogito, ergo sum. " CR
    ;

    We also ported in the TabulaRasa code from Win32Forth, because we were worried that corrupt memory might be interfering with our program. However, apparently the main problem was that our SensoryInput stub was not storing each character of input at an incremented value of time "t", so we brought in the following snippet from the AudInput module of the Win32Forth AI, and inserted it into our SensoryInput stub, with an explanatory comment.

          pho @  0 > IF
            1 t +!  ( to accumulate a word in memory )
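What the snippet accomplishes, storing each nonzero input character at its own incremented time point "t", can be sketched in JavaScript (storeInput and its names are illustrative, not the actual AudInput code):

```javascript
// Store each nonzero input character at an incremented time point,
// giving the memory report a true series of engrams to display.
function storeInput(text) {
  const aud = [];                  // auditory memory channel
  let t = 0;
  for (const pho of text) {
    if (pho.charCodeAt(0) > 0) {   // pho @  0 > IF
      t += 1;                      // 1 t +!
      aud[t] = pho;                // engram stored at time t
    }
  }
  return { aud, t };
}
```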

    Now the .psi report had a true series of memory engrams to report, and suddenly it began to work well. We had also rearranged things a little in the MainLoop module, so that our screen display during operation looked more sensible. We saved the mind.frt program as 20nov09A.frt because we suddenly had not only a stable program as a whole, but also the .psi report seemed to be working well. We always need to hang onto a good version of our AI, lest we continue coding with the misfortune of making things worse.

    Some of the temporary code snippets that we inserted merely in order to test things, will have to be taken out as we continue to port the Win32Forth AI into iForth.
