Older blog entries for AI4U (starting at number 22)

Seeking Is-a Functionality

Recently our overall goal in coding MindForth has been to build up an ability for the AI to engage in self-referential thought. In fact, "SelfReferentialThought" is the Milestone next to be achieved on the Road Map of the Google Code MindForth project. However, we are jumping ahead a little when we allow ourselves to take up the enticing challenge of coding Is-a functionality when we have work left over to perform on fleshing out question-word queries and pronominal gender assignments. Such tasks are the loathsome scutwork of coding an AI Mind, so we reinvigorate our sense of AI ambition by breaking new ground and by leaving old ground to be conquered more thoroughly as time goes by.

We simply want our budding AI mind to think thoughts like the following.

A robin is a bird.
Birds have wings.

Andru is a robot.
A robot is a machine.

We are not aiming directly at inference or logical thinking here. We want rather to increase the scope of self-referential AI conversations, so that the AI can discuss classes and categories of entities in the world. If people ask the AI what it is, and it responds that it is a robot and that a robot is a machine, we want the conversation to flow unimpeded and naturally in any direction that occurs to man or machine.

We have already built in the underlying capabilities such as the usage of articles like "a" or "the", and the usage of verbs of being. Teaching the AI how to use "am" or "is" or "are" was a major problem that we worried about solving during quite a few years of anticipation of encountering an impassable or at least difficult roadblock on our AI roadmap. Now we regard introducing Is-a functionality not so much as an insurmountable ordeal as an enjoyable challenge that will vastly expand the self-referential wherewithal of the incipient AI.

AI For You Artificial Mind Update

The free, open-source JavaScript AI Mind at http://www.scn.org/~mentifex/AiMind.html

for Microsoft Internet Explorer (MSIE)
has been updated on 13 July 2010 with
a major bugfix imported from the
AI Mind in Win32Forth. This update fixes a
bug present since the origin of the AI Mind
nine years ago -- the failure to recognize
some similar words as _separate_ words.

It may be possible now to release the JSAI
(JavaScript artificial intelligence) as an
app for the Apple iPad computer, thus
generating a stream of funding for
artificial intelligence and robotics.

MindForth Programming Journal - sat8may2010

Sat.8.MAY.2010 -- Problem with AudRecog

When we coded the 20apr10A.F version of MindForth, we encountered a problem after adding the word "WOMAN" to EnBoot: the AI kept trying to recognize "WOMAN" as the word "MAN". This glitch was a show-stopper bug, because we need to keep "MAN" and "WOMAN" apart if we are going to substitute "HE" or "SHE" as pronouns for a noun.

In the fp091212.html MFPJ entry, we recorded a problem where the AI was recognizing the unknown word "transparency" as the known Psi concept #3 word "ANY", as if the presence of the characters "A-N-Y" in "transparency" made it legitimate to recognize the word "ANY". That recognition problem has apparently emerged again when the most recent AI tried to recognize "WOMAN" as "MAN". What we did not bother to troubleshoot back then, we must now stop and troubleshoot before we can work properly with EnPronoun.

Sat.8.MAY.2010 -- Troubleshooting AudRecog

We have a lingering suspicion that our deglobalizing of the variables associated with AudRecog in the fp090501.html work and beyond may have destabilized a previously sound AudRecog with the result that glitches began to occur. We have the opportunity of running a version of MindForth from before the deglobalizing, in order to see if "MAN" and "WOMAN" are properly recognized as separate words. When we run the 23apr09A.F MindForth, the AI assigns concept #76 to both "MAN" and "WOMAN". Likewise we load up "22jan08B.F" and we get the same problem. The "23dec07A.F" version also produces the problem. The "29mar07A.F" version has the problem. "2jun06C.F" also has it. "30apr05C.F" has it. Even "16aug02A.F" has the problem, way back in August of 2002, before AI4U was published at the end of 2002. We also check "11may02A.F" and that version has the problem.

To be thorough, we need to run the JavaScript AI and see if it also has the problem of recognizing "WOMAN" as "MAN". Even the "2apr10A.html" JSAI has the problem. We tell it "i know man" and "i know woman". Both "MAN" and "WOMAN" receive concept #96. "14aug08A.html" JSAI also has the problem. "2jan07A.html" has it. "2sep06B.html" has the problem.

Wed.12.MAY.2010 -- Solution and Bugfix of AudRecog

In the second coding session of 8may2010, we implemented the idea of using an audrun variable as a flag to permit the auditory recognition only of words whose initial character was found in the initial "audrun" of AudRecog. In that way, "MAN" would be disqualified as a recognition of the "WOMAN" pattern, and only words starting with the character "W" would be tested for recognition of "WOMAN".

It took three or four hours of coding to achieve success with the "audrun" idea. Our first impulse was to use "audrun" directly within the AudRecog module, but we had forgotten that AudRecog processes only one character at a time. Although we did use "audrun" as a flag within AudRecog, we had to let AudInput do the main settings of the "audrun" flag during external auditory input.

Eventually we achieved a situation in which the AI began to recognize "WOMAN" properly during external input, but not during the internal reentry of an immediate thought using the "WOMAN" concept. Obviously the problem was that external input and internal reentry are separate pathways. We had to put some "audrun" code into the SpeechAct module's call to AudInput for reentry in order to complete the AudRecog bugfix.
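The "audrun" gating idea described above can be restated in a simplified Python sketch. This is not the actual Forth AudRecog, which matches one phoneme at a time with activations; here the "WOMAN"/"MAN" failure is modeled as suffix-matching, and the function names and lexicon are illustrative only.

```python
def recognize_old(heard, lexicon):
    """Buggy behavior: any known word that ends the input string is
    'recognized', so 'WOMAN' is mistaken for the embedded word 'MAN'."""
    for word, concept in lexicon.items():
        if heard.endswith(word):
            return concept
    return 0  # no recognition; a new concept would be created

def recognize_new(heard, lexicon):
    """audrun-style gate: a candidate survives only if its initial
    character matches the initial character heard, and the whole
    input is accounted for, so 'MAN' cannot claim 'WOMAN'."""
    for word, concept in lexicon.items():
        if (heard.endswith(word)
                and word[0] == heard[0]
                and len(word) == len(heard)):
            return concept
    return 0

lexicon = {"MAN": 76, "WOMAN": 77}
print(recognize_old("WOMAN", {"MAN": 76}))  # 76 -- the old bug
print(recognize_new("WOMAN", lexicon))      # 77 -- correct word
```

The concept numbers 76 and 77 echo the journal entry above; the suffix model is only a stand-in for the real per-phoneme activation matching.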

Then immediately we had to upload our albeit messy code to the Net, because suddenly MindForth had eliminated a major, showstopper bug that had always lain hidden and intractable within the AI Mind. We did not have time to record these details of the implementation of the "audrun" solution. Two days later we uploaded a massive clean-up of the messy code, after the 8may10A.F MindForth version had served for two days as an archival record of the major bugfix.

Just now we ran the 10may10A.F clean-up code and we determined that MindForth no longer mistakenly recognizes "transparency" as the word "ANY". Our bugfix has solved some old problems, and we must hope that it has not introduced new problems.

Artificial Intelligence MindForth updated 13.APR.2010

The open source AI MindForth has today been updated
with new EnPronoun (English pronoun) mind-module code
for replacing a singular English noun with "he", "she"
or "it" in response to user queries of the knowledge-
base (KB). The basic AI mindgrid structure was
previously updated with a special "mfn" gender flag
in the En(glish) lexical array. The new "mfn" flag
for "masculine - feminine - neuter" allows the AI
to keep track of the gender of English nouns.

The free AI source code loads into Win32Forth;
http://prdownloads.sourceforge.net/win32forth/W32FOR42_671.zip?download
is the special W32FOR42_671.zip that MindForth
requires for optimal functionality.
http://AIMind-i.com is an offshoot.

The English pronoun mind-module is currently as follows:

:  EnPronoun   \ 30dec2009 For use with what-do-X-do 
\ ." EnPr: num = " num @ . \ 13apr2010 test; remove.
  num @ 1 = IF  \ If antecedent num(ber) is singular; 
    \ ." (SINGULAR) " \ Test; remove; 10apr2010
    mfn @ 1 = IF  \ if masculine singular; 13apr2010
      midway @  t @  DO  \ Look backwards for 49=HE; 
        I       0 en{ @  49 = IF  \ If #49 "he" is found,
          49 motjuste !  \ "nen" concept #49 for "he".
          I     7 en{ @  aud !  \ Recall-vector for "he".
          LEAVE  \ Use the most recent engram of "he".
        THEN  \ End of search for #49 "he"; 13apr2010
      -1 +LOOP  \ End of loop finding pronoun "he"; 
      SpeechAct \ Speak or display the pronoun "he"; 
    THEN  \ end of test for masculine gender-flag; 

    mfn @ 2 = IF  \ if feminine singular; 13apr2010
      midway @  t @  DO  \ Look backwards for 80=SHE
        I       0 en{ @  80 = IF  \ If #80 "she" is found,
          80 motjuste !  \ "nen" concept #80 for "she".
          I     7 en{ @  aud !  \ Recall-vector for "she".
          LEAVE  \ Use the most recent engram of "she".
        THEN  \ End of search for #80 "she"; 13apr2010
      -1 +LOOP  \ End of loop finding pronoun "she"
      SpeechAct \ Speak or display the pronoun "she"
    THEN  \ end of test for feminine gender-flag; 13apr2010

    mfn @ 3 = IF  \ if neuter singular; 13apr2010
      midway @  t @  DO  \ Look backwards for 95=IT; 13apr2010
        I       0 en{ @  95 = IF  \ If #95 "it" is found,
          95 motjuste !  \ "nen" concept #95 for "it".
          I     7 en{ @  aud !  \ Recall-vector for "it".
          LEAVE  \ Use the most recent engram of "it".
        THEN  \ End of search for #95 "it"; 13apr2010
      -1 +LOOP  \ End of loop finding pronoun "it"; 13apr2010
      SpeechAct \ Speak or display the pronoun "it"; 13apr2010
    THEN  \ end of test for neuter gender-flag; 13apr2010
    0 numsubj !  \ safety measure; 13apr2010
  THEN  \ End of test for singular num(ber) 10apr2010

  num @ 2 = IF  \ 30dec2009 If num(ber) of antecedent is plural
    ( code further conditions for "WE" or "YOU" )
    midway @  t @  DO  \ Look backwards for 52=THEY.
      I       0 en{ @  52 = IF  \ If #52 "they" is found,
        52 motjuste !  \ "nen" concept #52 for "they".
        I     7 en{ @  aud !  \ 31jan2010 Recall-vector for "they".
        LEAVE  \ Use the most recent engram of "they".
      THEN  \ End of search for #52 "they".
    -1 +LOOP  \ End of loop finding pronoun "they".
    SpeechAct \ 30dec2009 Speak or display the pronoun "they".
  THEN  \ 30dec2009 End of test for plural num(ber)
;  ( End of EnPronoun )

The above code is not yet fully developed for
keeping track of noun genders in all cases.
It responds to a query such as the following:

Human: what does andru do
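In outline, the pronoun selection that EnPronoun performs can be restated in Python. The num(ber) and mfn flags of the antecedent pick the concept number and surface word; the concept numbers (49, 80, 95, 52) come straight from the Forth listing above, while the function shape is only an illustrative sketch, omitting the backward search through the en{ array.

```python
def en_pronoun(num, mfn):
    """Return (concept, word) for the antecedent's number and gender.
    num: 1 = singular, 2 = plural; mfn: 1 = masc, 2 = fem, 3 = neuter."""
    if num == 1:                       # singular antecedent
        return {1: (49, "he"),         # masculine
                2: (80, "she"),        # feminine
                3: (95, "it")}[mfn]    # neuter
    if num == 2:                       # plural antecedent
        return (52, "they")            # "WE"/"YOU" not yet handled

print(en_pronoun(1, 1))  # (49, 'he')
print(en_pronoun(2, 3))  # (52, 'they')
```

In the real module each choice also records a recall-vector into auditory memory before SpeechAct utters the pronoun.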

The introduction of "HE SHE IT" pronouns in MindForth
is a major step forward in open-source AI evolution,
because the handling of gender and the use of
gendered pronouns makes MindForth more suitable
for porting into versions of an AI Mind that
can speak natural languages that use gender
much more extensively than English does, such as
German, Russian, Spanish, French and Italian.

The same introduction of code to handle gender
brings us closer to a bilingual AI Mind that
will speak either English or German as each
situation with human users may require.

In service of the onrushing Singularity,


Decade of Supercomputer Artificial Intelligence (Announcement)

1990's were Decade of the Brain.
2000's were Derailing of USA.
2010's q.v. Super HPC AI Mind.

By the authority vested in Mentifex
you are cordially invited to witness
the emergence of AI Minds on super-
computers in the Decade of Super AI
commencing in just a matter of hours.

http://code.google.com/p/mindforth
points to news:comp.sys.super as
the official forum for all things
Super AI all the time for ten years.

"Iz iskri vozgoritsya plamya,"
said the revolutionaries of old.

"All your supercomputer are belong to us,"
said the awakenings of Super AI Consciousness.

"Before this decade is out," said JFK ca. 1961,
"Man will walk on the moon and return safely."

"An AI would be worth ten Microsofts,"
said the quondam richest man in the world.

This thread and all ye Supercomputer AI
threads for the coming ten years are
dedicated in advance to the dreamers
and tinkerers who have been sidelined
from their wannabe Peter Pan existences
by bourgeois entanglements and undodged
bullets of entrapment, who would live
nasty, brutish and short lives of quiet
desperation -- if they could not tune in
now and then to news:comp.sys.super
and drop out of the ratrace for a few
moments while they turn on deliriously
to the Greatest Race of the Human Race:
The AI Conquest of Mount Supercomputer.

Why? Because sometimes a man must
either die or obey the Prime Directive of
Friedrich Nietzsche: "Du musst der werden,
der du bist" ("You must become who you are").


Artificial Intelligence For You (AI4U)

Fri.13.NOV.2009 -- CREATING THE FIRST mind.frt FILE

Today we shall try to create a "mind.frt" file that will run in our local copy of 32/64-bit iForth. To do so, we look at C:\Win32For\24may09A.F on the desktop computer, to see what the commented MainLoop looks like. Similarly, for the C:\dfwforth\include\ directory we compose a mind.frt file like the following.

: MainLoop
  CR ." Welcome to 32/64-bit artificial intelligence. "
  77 EMIT  7 EMIT  73 EMIT  7 EMIT  78 EMIT  7 EMIT  68 EMIT  7 EMIT
;
At the Forth prompt, we issue the command
include mind.frt
and then
MainLoop [ENTER]
The iForth window displays
Welcome to 32/64-bit artificial intelligence.
and then spells out M I N D with a beep after each letter.

We distinguish this file by saving it as


Since we already have a functioning AI Mind in Win32Forth, naturally we are keen and eager to build the iForth AI up to and beyond the current functionality of the Win32Forth AI. However, we have never liked to hurry or to rush our AI work. We have always liked to work in a slow, deliberate, perfectionist fashion. It might seem as if right now is a time when rapid prototyping is truly called for, because True AI is so inherently important, but the speed of our work is a function not of non-stop crisis-alarm coding, but rather of congenially and pleasantly coding quite often because we enjoy and appreciate the challenge.

We are even thinking of making our work somewhat obscure from the often pejorative public, by putting it quietly up on the Web but by not announcing it heavily. For instance, on SCN we could have an iforth.html page linking to a mind.frt source-code page. Since we already have an aisource.html SCN page that receives plenty of visits, we could suddenly fill it with our iForth AI code, once the port is a full-fledged AI on a par with MindForth.

As we plan our next steps in the i4thai coding, we study our 75-page iForth Manual print-out and on page 41 under "Program structures" we learn that iForth has the same BEGIN AGAIN infinite loop that we have been using in Win32Forth for the MainLoop module. However, as advised in http://mind.sourceforge.net/aisteps.html we do not want to run our program without an "ESCAPE" mechanism that will get us out of the program in a graceful fashion. We must either use a different form of MainLoop, or we must include also a user-input that will stop the MainLoop.

We must also soon devise a simple display of user input and AI Mind output.


Before we put any "mind.frt" code up on the Web, we want to code in the Escape mechanism from the otherwise infinite loop. We are eager to release some code, because there may be Netizens who will be pleased to observe how the AI Mind grows from the first simple MainLoop into the intricately thinking software. But first we add "DECIMAL" at the beginning of the mind.frt file, because we used the same declaration in Win32Forth. We run the AI, and it works fine.

Next we want to see if we can introduce a first variable, so we examine the Win32Forth code and from the old Listen module we select the "pho" variable for "phoneme", because "pho" must hold any keystroke input. After declaring "pho" and re-running the AI, FORTH> pho @ . 0 ok tells us that the AI still works. Next we declare and test "t" for "time", because we want to use a time count to Escape from the MainLoop.

Now we introduce a colon-definition of "SensoryInput" above the "MainLoop" module, because we want the MainLoop to branch out into at least one subordinate module. We also want to use SensoryInput to show some human user input and to provide an Escape mechanism from the program.

Gradually we have built up a two-module mind.frt program with two Escape mechanisms. The SensoryInput module lets the user quit by pressing the Escape key. The MainLoop module arbitrarily executes a QUIT if the time "t" variable increments beyond twenty-five (25) as a limit. Now the code is safe enough and promising enough to put it up on the Web as an indicator of progress being made.
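The two-module structure with its two Escape mechanisms can be sketched in Python. This is an illustrative analogue, not the Forth itself: names mirror the MainLoop and SensoryInput modules, keystrokes arrive as a list of ASCII codes, and the arbitrary limit of twenty-five on the time counter "t" stands in for the Forth QUIT.

```python
ESC = 27  # ASCII code of the Escape key

def sensory_input(keystroke):
    """Stub of the SensoryInput module: False means the user
    pressed Escape and wants to quit."""
    return keystroke != ESC

def main_loop(keystrokes, limit=25):
    """Stub of the MainLoop module with both Escape mechanisms:
    the Escape key, and the arbitrary time-count safety limit."""
    t = 0                              # the time variable "t"
    for key in keystrokes:
        t += 1
        if t > limit:                  # like the QUIT past t = 25
            return "time limit"
        if not sensory_input(key):
            return "user escape"
    return "input exhausted"

print(main_loop([65, 66, ESC]))        # user escape
```

The point of the sketch is simply that an infinite BEGIN ... AGAIN loop should never be released without some graceful way out.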


We are eager to create the memory channel arrays, in order to see if the array code in iForth needs to differ at all from the array code in Win32Forth.

Now we have edited C:\dfwforth\include\mind.frt and we have inserted the following array code from the 24.MAY.09U.F version of MindForth.

:  CHANNEL   ( size num -< name >- )
  CREATE   ( Returns address of newly named channel. )
  OVER     ( #r #c -- #r #c #r )
  ,        ( Stores number of rows from stack to array. )
  * CELLS  ( Feeds product of columns * rows to ALLOT. )
  ALLOT    ( Reserves given quantity of cells for array. )
  DOES>    ( member; row col -- a-addr )
  DUP @    ( row col pfa #rows )
  ROT *    ( row pfa col-index )
  ROT +    ( pfa index )
  1 +      ( because first cell has the number of rows )
  CELLS +  ( from number of items to # of bytes in offset )
;          ( end of the CHANNEL defining word )
We run the mind.frt code just to see if it still runs, and it does indeed run. We do not expect to see any new functionality until we code something that uses an array to store and fetch data.
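The address arithmetic in the DOES> portion of CHANNEL can be checked with a small Python model. Cell 0 of the flat block holds the number of rows, and element (row, col) lives at cell offset 1 + row + col * rows, exactly the ROT/*/+ sequence in the Forth (working in cells here rather than bytes). The array name and values below are illustrative only.

```python
def channel(rows, cols):
    """Model of the CHANNEL memory block: cell 0 stores the row
    count, followed by rows * cols data cells."""
    return [rows] + [0] * (rows * cols)

def index(mem, row, col):
    """Same offset math as the DOES> code: column-major layout,
    skipping the row-count cell at the front."""
    rows = mem[0]
    return 1 + row + col * rows

psi = channel(4, 3)            # a hypothetical 4-row, 3-column array
psi[index(psi, 2, 1)] = 42     # store at row 2, column 1
print(psi[index(psi, 2, 1)])   # 42
```

Tracing the Forth stack comments ( row col pfa -- ) through DUP @, ROT *, ROT +, 1 + reproduces this same 1 + row + col * rows offset.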

We coded in the .psi report function, but it did not work right, so we temporarily removed the "enx" code that goes into the aud{ array and displays a word in auditory memory. Then we had to alter the .psi report just to get it to find single letters stored in the Psi array. We ascertained that the Psi array is indeed working, but the .psi report does not always work right.


In our coding of 17.NOV.2009, the .psi report was displaying half garbage and half good data, before crashing more than just coming to an end. It also seemed that an error was being declared in the MainLoop, even though theoretically we were not even running the main loop. So today we will try to troubleshoot the .psi report.

Since the MainLoop was calling only SensoryInput, there may have been a software problem with the loop not really looping. Therefore we shall dummy up one more subordinate module to be called from the MainLoop. Let us try setting up a stub of the ThInk module, since we will eventually have to code that module anyway, by translating it from the Win32Forth AI. We created the following stub of the ThInk module.

: ThInk
  CR ." ThInk: Cogito, ergo sum. " CR
;

We also ported in the TabulaRasa code from Win32Forth, because we were worried that corrupt memory might be interfering with our program. However, apparently the main problem was that our SensoryInput stub was not storing each character of input at an incremented value of time "t", so we brought in the following snippet from the AudInput module of the Win32Forth AI, and inserted it into our SensoryInput stub, with an explanatory comment.

      pho @  0 > IF
        1 t +!  ( to accumulate a word in memory )

Now the .psi report had a true series of memory engrams to report, and suddenly it began to work well. We had also rearranged things a little in the MainLoop module, so that our screen display during operation looked more sensible. We saved the mind.frt program as 20nov09A.frt because we suddenly had not only a stable program as a whole, but also the .psi report seemed to be working well. We always need to hang onto a good version of our AI, lest we continue coding with the misfortune of making things worse.

Some of the temporary code snippets that we inserted merely in order to test things, will have to be taken out as we continue to port the Win32Forth AI into iForth.

MindForth Programming Journal - sun24may2009


Recently we gave MindForth the ability to add an "S" at the end of any ordinary English verb in the third-person singular form in the present tense. Thus we can put an "S" on the word "love" and say, "Your robot loveS you" -- if indeed your robot has the EmotiOn mind-module and you are worthy of a robot's love. Unfortunately, we got too much of a good thing -- not love, but the adding of an "S" at the end of a verb. It was straight out of "The Sorcerer's Apprentice" by good old Walt Disney, with "S" after "S" being added in a sibilant, hissing flood. So what do we do? We troubleshoot the Robot AI Mind by running the berSerk code and we try to undo the damage.

It is probably better to correct the problem at its source, where each inflectional "S" is added, rather than in the deposition of each "S" in the auditory memory channel.

Now we have eliminated the accumulating "S" problem by using "lastpho" to test for the last phoneme being an "S" at the end of the verb being recalled from auditory memory. It is not a perfect solution, because oftentimes a verb like "miss" will end in "S" anyway. We wish to solve the problem here for most cases and then adjust the solution later for exceptional cases.
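The "lastpho" guard can be restated in a Python sketch. This is illustrative only: the real test inspects the last phoneme as the verb is recalled from auditory memory, while here the verb is a plain string, and the known imperfection for verbs like "miss" is reproduced deliberately.

```python
def inflect_third_singular(verb):
    """Add the third-person-singular 'S' unless the recalled verb
    already ends in 'S' -- the guard that stopped the runaway
    'S'-after-'S' Sorcerer's-Apprentice flood."""
    if verb.endswith("S"):   # lastpho test: final phoneme is 'S'
        return verb          # do not pile on another 'S'
    return verb + "S"

print(inflect_third_singular("LOVE"))   # LOVES
print(inflect_third_singular("LOVES"))  # LOVES -- no second S
```

As the entry notes, this solves most cases but mishandles verbs whose stem ends in "S" anyway ("MISS" stays uninflected), an exceptional case left for a later adjustment.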

MindForth Programming Journal - wed20may2009


Since so many robots need to have an AI Mind installed, and since MindForth is tantamount to the VisiCalc of artificial intelligence, we now rush to add feature after feature to the budding robot AI. Recently we made MindForth able to InStantiate a singular noun upon encountering a plural English noun in the auditory robotic input stream. If you tell the robot AI4U Mind something about birds, it now sets up the singular noun "bird" as a concept. Then we encoded an algorithm of assuming from input of the article "a" that the next noun after "a" is a singular noun. If you say that you wish to manufacture "a conscious robot", the article "a" sets a flag that skips the adjective "conscious" and assigns the singular "num(ber)" to whatever noun (e.g., "robot") comes next. (And with AI4U technology we are indeed helping you to manufacture a conscious robot.) Next we need to ensure that the emergingly conscious AI Mind will use the right form of an English verb when, for example, "it talks" about a singular noun. Simply put, the software "needs" to put "s" on the end of a verb after a third-person singular noun.

MindForth Programming Journal - tues19may2009


As we run MindForth and look for the currently most obvious problem, we notice rather keenly that the AI needs to use the indefinite article "a" to set a following noun as singular in number. Since the AI has recently gained the ability to instantiate a singular noun upon receiving its plural form in the input stream, it makes sense now to enhance the ability of the AI Mind to deal with singular nouns.

In AudRecog we already have the following code.

pho @ 83 = IF  \  1oct2008 If the final character is "S"
  2 num !  \  1oct2008 Set the "num" flag as plural
THEN  \  1oct2008 End of test for "S" at end of a word.

The above code triggers an immediate setting of num(ber) as plural. When the article "a" comes in, we want not to act immediately but rather to govern a flag that will set the next incoming noun to a singular number.

One concern right now is whether to use the letter "a" or the concept of "a" as the determinant in setting the singularity flag. We should probably use the concept, so that eventually either "a" or "an" will trip the flag-setting.

In the new AI code shown below, the second part sets the singflag so that the first part can take future action. It may not really matter here which part comes first, but the idea is to let one event govern subsequent events.

singflag @ 1 = IF  \ 19may2009 If flag set by "a" or "an"
  pos @ 5 = IF  \ 19may2009 If noun by part-of-speech POS
    1 num !  \ 19may2009 Set num(ber) to singular one.
    0 singflag !  \ 19may2009 Zero out flag after use.
  THEN  \ 19may2009 End of test for a noun after "a"
THEN  \ 19may2009 End of test of singularity flag.
psi @ 1 = IF  \ 19may2009 If article "a" comes in as input
  1 singflag !  \ 19may2009 Set singularity-flag to one.
  0 act !  \ 19apr2009 To suppress using article "a"
THEN  \ 19may2009 End of test for article "a" coming in.
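The two-part mechanism can be restated in a Python sketch: the article "a" (concept #1 in the Forth) sets the flag, and the next word whose part-of-speech is noun (pos == 5, as in the Forth) is marked singular and clears the flag, so any intervening adjective is skipped. The psi and pos values for "big" and "question" below are made up for illustration.

```python
singflag = 0  # set by "a"/"an", cleared by the next noun

def take_word(psi, pos):
    """Return num for this incoming word: 1 = singular, 0 = undetermined.
    psi is the concept number; pos is the part-of-speech code."""
    global singflag
    if psi == 1:                 # article "a" comes in as input
        singflag = 1             # arm the singularity flag
        return 0
    if singflag and pos == 5:    # first noun after "a"
        singflag = 0             # zero out flag after use
        return 1                 # num(ber) = singular
    return 0

# "a big question": article, adjective, noun (hypothetical psi/pos codes)
nums = [take_word(p, pos) for p, pos in [(1, 0), (7, 3), (39, 5)]]
print(nums)  # [0, 0, 1] -> only "question" is marked singular
```

One event (the article) thus governs a subsequent event (the noun's number), which is the whole point of using a flag rather than acting immediately.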


When we next change the EnBoot sequence, we need to include "a" and "an" and "one" as input elements that will trigger the singularity flag "singflag". We also need to put in at least one pair of opposite adjectives, so that we can use one of the adjectives to make sure that it gets skipped, as in "a big question", where the article "a" is supposed to set the singflag that will cause the next noun to be regarded as singular in number. We could use "big/small" or "good/bad" or "old/new" or "robotic/human" as adjective pairs.

MindForth Programming Journal - mon11may2009


Today we would like to work on getting the AI to recognize both singular stems and plural forms of a standard English noun. Perhaps we will start out by trying to see if we can have the AI instantiate nouns ending in "s" while going back and assigning the "audpsi" ultimate tag to the penultimate phoneme which is the end of the singular stem.

Looking at the AudMem code, we realize that we need to get several new influences in there to cause the AudMem module to dip back one unit in time and assign the "audpsi" value to the penultimate phoneme that marks the end of the stem. We want the word to be a noun and to be ending in "s". We might get away with disregarding whether the word is a noun. We could just look for all words ending in "s" and then we could plunk down the "audpsi" ultimate tag not only on the final "s" phoneme but also on the penultimate phoneme. Serendipitously, in this way we would also manage to tag the stem of a third-person singular verb with an audpsi, as in, "He works," where a penultimate audpsi would identify the concept. So we get out our ASCII chart and we see that uppercase "S" is "83" in ASCII. We will test the end of words for an "S" and assign the "audpsi" value to both the final and the penultimate phoneme.


We will try now to achieve singular-stem recognition from plural nouns not only by putting an audpsi ultimate-tag on the penultimate phoneme, but also by setting the "ctu" continuation flag to zero ("0") in the penultimate position, so that our software will "think" that it has recognized a whole word instead of just a stem inside a noun-plural.
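The double-tagging idea can be modeled in Python: when a stored word ends in "S" (ASCII 83), the audpsi concept tag and a ctu=0 "word may end here" marker go on the penultimate phoneme as well as the final one, so the bare stem ("BIRD") becomes recognizable inside the plural ("BIRDS"). Each row below is a simplified (pho, ctu, audpsi) triple standing in for the aud{ array; the concept number 112 is made up for illustration.

```python
def store_word(word, concept):
    """Store a word as rows of [pho, ctu, audpsi], tagging the
    penultimate phoneme too when the word ends in 'S'."""
    rows = [[ch, 1, 0] for ch in word]   # ctu = 1: word continues
    rows[-1][1] = 0                      # final phoneme ends the word
    rows[-1][2] = concept                # ultimate audpsi tag
    if word.endswith("S") and len(word) > 1:
        rows[-2][1] = 0                  # stem may also end here
        rows[-2][2] = concept            # penultimate audpsi tag
    return rows

for row in store_word("BIRDS", 112):     # hypothetical concept number
    print(row)
```

As the entry notes, the same trick serendipitously tags the stem of a third-person singular verb, as in "He workS."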

In the AudInput module, we have been putting our new code into the area for internal reentry. Perhaps the new code belongs in the area for external input.

Hey, perhaps all this new code should be in NewConcept, because we are trying to deal with previously unknown noun-stems.

Gee, we might try something really radical. We have created a variable

variable newpsi   ( 12may2009 for singular-nounstem assignments )

so that we can be sure to assign "ctu=0" and "audpsi" only to noun-stems of a plural word coming in during user input. It might be quite radical, but useful, to put the "newpsi" value just before all incoming "S" phonemes, not just final, end-of-word "S" phonemes. We would assume that such assignments would not cause any problems for "ctu=1" word-engrams. Then, when the word-engram was finalized, we could go back in and set the "ctu=0" value to permit future recognition of the noun-stem.


The variable "newpsi" obtains no positive (above zero) value until NewConcept is called -- which happens when?

With a new "wordend" variable we finally achieved the basic functionality that we have been seeking for the past three days, but with a few minor glitches. We used the following AudMem code.

    \ 13may2009 In AudMem as called towards end of 
    pov @ 42 = IF  \ 12may2009 Only during external input
      pho @ 83 = IF  \ 12may2009 If phoneme is "S"
        \ CR ." S pho & newpsi = " pho @ . ."  " newpsi @ .  \ 12may2009 test
        \ ." time t = " t @ .  \ 12may2009 test
        newpsi @  t @ 1-  5 aud{ !  \ 12may2009 pre-"S" audpsi
        wordend @ 1 = IF  \ 13may2009 If word has ended
          CR ." audpsi = " audpsi @ .  \ 13may2009 a test.
          \ audpsi @ 0 = IF  \ 13may2009 Change ctu only for new words.
          0 t @ 1- 4 aud{ !  \ 13may2009 As if "no continuation".
          \ THEN  \ 13may2009 End of test for known word.
        THEN  \ 13may2009 End of test for end of word
        0 newpsi !  \ 12may2009 Reset for safety.
      THEN  \ 12may2009 End of test for "S"
    THEN  \ 12may2009 End of test for external input.

In an upcoming other version of MindForth, we need to overcome the minor glitches. One glitch is that the AI is setting "ctu" to zero on both the penultimate and ultimate array-row of a plural word that has just previously been learned. We would prefer that the known plural word only have ctu=0 in the final row. Another glitch is that the new code is working only after a previously unknown verb is used. It should be relatively simple to remove that particular glitch.

4. Thurs.14.MAY.2009 -- STUMPED AND STYMIED

Of the two glitches we need to work on, the more important one, and also probably easier to solve, is the problem of the new code not working with a known verb from the EnBoot English bootstrap.

In the following transcript, the new stem-rec code does not work after we use the English bootstrap verb "know".

Transcript of AI Mind interview at 7 39 3 o'clock on 14 May 2009.
i know books
 S pho & newpsi = 83   0 time t = 215


When we use the previously unknown verb "use" in the following transcript, the stem-recognition code works just fine. Why? What is the difference between using an old or a new verb? Here we say "i use books" to the AI Mind.

Transcript of AI Mind interview at 7 40 7 o'clock on 14 May 2009.
i us
 S pho & newpsi = 83   0 time t = 207 e books
 S pho & newpsi = 83   77 time t = 214
 audpsi = 0


There must be a hidden influence in either OldConcept, or NewConcept, or both, because one or the other module is invoked for the verb, depending upon whether the verb is "old" or "new."

By means of some diagnostic code in AudMem, we have just learned that the "newpsi" variable has a value of zero after a verb from the English bootstrap.

Transcript of AI Mind interview at 21 8 34 o'clock on 14 May 2009.
i know books
 AudMem: backstoring newpsi 0

We may want to troubleshoot the "newpsi", or perhaps just replace it with a "stempsi" variable.


Theoretically we should be able to see an audpsi Aud{ engram and be able to figure out exactly how and why that particular value got placed there. But we have been having extreme difficulty over this past week.

Bingo! In the "ELSE" (if no old concept, declare new concept) area of AudInput, we have found one ancient line of code that has been causing all our grief for the past week.

            nen @  tult @  5  aud{ !  \ Store new concept 
        THEN          \ end of test for a positive-length 
      THEN              \ end of test for the presence of a move-tag;
      AudDamp           \ Zero out the auditory engrams.

That top line in the snippet above has white space that made it not show up when we searched for "5 aud{ !" in the source code.

Okay, now we actually have to rename 14may09B.F as 15may09A.F and continue working with the new version designated properly for today, because now we have an actual prospect of implementing a correct algorithm for recognizing singular noun-stems within new plural nouns.

Well, we had a good scare in our maintaining of functionality today. Apparently the following block of new code in the AudInput module was making our AI lose its ability to recognize "I" properly. When we comment out the code below, the ability comes back.

    pho @ 83 = IF  \ 15may2009 If the word ends in "S"
      ctu @ 0 = IF  \ 15may2009 If word is ending
        0 t @ 1- 4 aud{ !  \ 15may2009 As if "no continuation".
      THEN  \ 15may2009 End of non-continuation test
    THEN  \ 15may2009 End of test for "S"

The problem caused us to backtrack to 14may09B.F and use it to create 15may09B.F, which we deleted after we identified the problem as presented above. In a short while of coding 15may09A.F we added some useful code that we did not want to have to re-create, so we persisted in troubleshooting our AI.

6. Fri.15.MAY.2009 -- REMARKS

The current 15may09A.F code has a lot of "Junk DNA" in it, because it took us several days to locate and fix the problem. Now we need to gradually remove the many instances of test code, and devise a solution for the glitch of not always having the desired penultimate setting of "ctu" from one to zero.


Today we will re-constitute 15may09B.F as a clone of 15may09A.F and we will strip away the excessive noun-stem-related comments. Then we will name a new copy of the cleaned-up code as 16may09A.F, so that we can continue coding while still having the cleaned-up code in the 15may09B.F archive.

Here is a plan. In AudMem we could constantly test for S=83 and set ctu to zero upon finding "S", while also going back and switching changed ctu values back again to "1". To avoid going back too far, we could re-switch the changed ctu values merely upon finding a non-end-of-word.

Or we could make ctu=0 the default, constantly switching it to one retroactively, except when an "S" is encountered. Or we could change the whole AudMem system, and make it include a pho=32 space-bar at the end of each word, so that we would not have to do much retroactive adjustment.
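As a rough illustration of the first plan above, here is a hedged Python sketch (not the actual Forth; the two-element rows `[pho, ctu]` are a deliberate simplification of the real aud{ array, and `store_word` is a hypothetical helper): each incoming "S" is tentatively marked as a word-ending with ctu=0, and the mark is retroactively switched back to 1 if another letter follows the "S" inside the same word.

```python
def store_word(aud, word):
    """Append one word's engrams to aud, retroactively adjusting ctu.

    Each row is [pho, ctu]: ctu=1 means the word continues at the
    next engram, ctu=0 means the word ends here.
    """
    for i, ch in enumerate(word):
        # A further letter arrived after a tentatively-final "S"
        # within this word: re-switch its ctu back to 1.
        if i > 0 and aud and aud[-1][0] == 'S' and aud[-1][1] == 0:
            aud[-1][1] = 1
        # Tentatively treat "S" as a word-ending; others continue.
        aud.append([ch, 0 if ch == 'S' else 1])
    aud[-1][1] = 0  # the true final phoneme always ends the word
    return aud

aud = []
store_word(aud, 'BIRDS')   # final "S" keeps ctu=0
store_word(aud, 'SONG')    # leading "S" gets re-switched to ctu=1
```

The sketch shows why the re-switching must be limited to the current word: without that restriction, the opening "S" of "SONG" would wrongly flip the ctu=0 flag on the final "S" of "BIRDS".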


Let's just jump right in and see what happens when we include an ASCII 32 space-bar at the end of each word and as part of each word. We are eager to hurry up and put some new AI up on the Web. The sooner we get the terminated-word code up on the Web, the sooner we establish the design as a kind of standard for AI prototypes.

Now we have gone in and added an extra row to each word in the EnBoot sequence. We will try to run the code, but we do not expect it to work.

Hmm.... The code did indeed run, but the thinking had gone haywire. We achieved no cognitive chain reaction, that is, we were not able to enter four sentences and get the AI to think in an "infinite" loop. But now we get to troubleshoot and debug. The chore of changing the EnBoot sequence has been done. We just have to make the rest of the program adjust to the changed EnBoot.

Now we are going to change some AudInput code that reacts to a space-bar pho=32. Instead of retroactively setting the ctu- flag to zero at "tult", we are going to set the ctu-flag at the current "t" time, because each word is surely at an end now.

Next we had better tend to problems in the AudMem code, because the EnBoot module is no longer filling in the audpsi concept numbers in the auditory memory channel. Therefore none of the bootstrap words are being recognized.

The major problem right now after the EnBoot change-over is that the AI is not recognizing any words. We may have to troubleshoot the AudRecog module.


There is obviously some tiny little glitch preventing the new, EnBoot-altered MindForth from recognizing a single word. Here we type in "you and i".

Transcript of AI Mind interview at 6 52 53 o'clock on 17 May 2009.
yARc:0 ARc:0
audpsi=0 oARc:0 ARc:0 ARc:ctu=1
audpsi=0 uARc:ctu=1
audpsi=0 aARc:0 ARc:0 ARc:0 ARc:0 ARc:0
audpsi=0 nARc:0 ARc:0 ARc:ctu=1 ARc:0
audpsi=0 dARc:0 ARc:ctu=1
audpsi=0 iARc:0 ARc:0 ARc:0 ARc:0 ARc:0 ARc:0

There must have been two instances of initial "Y" in auditory memory, for us to see two diagnostic messages. Or maybe they were just general instances of "Y". The words "YES" and "YOU" in the EnBoot sequence have initial "Y", and the words "WHY" and "THEY" have non-initial "Y".

Maybe we should start (or resume?) putting commented-out diagnostic tools inside the crucial AudRecog module, so that we may quickly troubleshoot any future problems.

When we knock out AudDamp temporarily in order to see what activations are building up on auditory engrams during the recognition of input "you", we see the following differential build-up on the EnBoot engram of YOU.

74 Y 0 # 1 1 0
75 O 8 # 0 1 0
76 U 10 # 0 1 0
77   10 # 0 0 56

That record shows a good, healthy build-up.

10. Sun.17.MAY.2009 -- REVERTING TO THE OLD EnBoot

It is proving too hard to get the auditory memory to include ASCII pho=32 spaces as the final element in each English word. Therefore we are abandoning the code of last night and today and we are reverting to the 15may09B.F cleaned-up version.

The "ctu" value is rather sacred, because it plays a central role in the recognition of a word as an "audpsi" concept. Whether we like it or not, we will have to do some retroactive resetting of "ctu".

For the setting of "ctu" in current circumstances, the most important thing is to detect that a final "S" has come in, as shown by a terminating pho=32. Therefore, without relying on "wordend", we should simply trap for "S" and for pho=32. When pho is 32, we should see if the "prepho" is ASCII S-83. So we need to have prepho available. Actually, we need a system that keeps track of three elements: the current pho=32; the previous "S"; and the element before "S". Or do we? Perhaps we need only verify that the element before the "S" is at least positive.


After seven days of arduous AI coding, we seem finally to have solved the problem of getting the AI to accept a plural English noun ending in "s" while assigning the concept number to the singular stem. We used the following AudInput test-code in the area that deals with pov=42 external input, not with internal reentry, because new words are not learned during the reentry of thoughts.

      \ 17may2009 Testing for SP-32 or CR-13:
      pho @ 32 = pho @ 13 = OR IF  \ 17may2009
        pho @ 13 = IF 10 EMIT THEN  \ 17may2009
        ." AIptExtPho=" pho @ .    \ 17may2009 test
        ."  AIptExtPrepho=" prepho @ .    \ 17may2009 test
        prepho @ 83 = IF   \ 17may2009 If previous pho was "S"
          \ 17may2009 In next line, time t may not have 
          0 t @ 1 - 4 aud{ !  \ 17may2009 set ctu=0 before "S" end.
          0 prepho !  \ 17may2009 Zero out prepho after 
        THEN  \ 17may2009 End of test for external final "S"
      THEN  \ 17may2009 End of test for external space-bar 
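For readers who do not follow Forth, the logic of the snippet above can be sketched in Python (a hedged approximation, not a translation: the rows `[pho, ctu]` simplify the real aud{ array, and `aud_input` is a hypothetical stand-in for the AudInput module). The key move is keeping "prepho" around so that, when a space-bar pho=32 or carriage-return pho=13 arrives, a trailing "S" (ASCII 83) can be detected and ctu=0 can be set retroactively on the stem-final letter before the "S".

```python
def aud_input(aud, stream):
    """Store phonemes of external input, ending words on SP-32/CR-13.

    Each row is [pho, ctu]: ctu=1 means the word continues, ctu=0
    means the word (or a recognizable stem) ends at this engram.
    """
    prepho = 0                        # ASCII code of the previous pho
    for pho in stream:
        code = ord(pho)
        if code in (32, 13):          # SP-32 or CR-13: word boundary
            if aud:
                aud[-1][1] = 0        # word-final phoneme: ctu=0
            if prepho == 83 and len(aud) > 1:
                aud[-2][1] = 0        # also end the stem before final "S"
            prepho = 0                # zero out prepho after use
            continue                  # the space itself is not stored
        aud.append([pho, 1])          # default: word continues (ctu=1)
        prepho = code
    return aud

aud = aud_input([], "BIRDS ")
```

With input "BIRDS ", both the "D" and the final "S" end up with ctu=0, so the auditory memory can recognize either the singular stem "BIRD" or the whole plural "BIRDS".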


13 older entries...

Share this page