Older blog entries for AI4U (starting at number 86)

Mentifex on the predictive-text-like brain mechanism

The predictive-text-like brain mechanism mentioned in the article may work because each word, as a concept, has many associative tags linking it to other concept-words frequently used in connection with the triggering word. A similar neural network of associative tags is at work in the Mind.Forth Strong AI, which has been ported into Strawberry Perl 5 and which you may download free of charge in order to study the Theory of Mind depicted in the diagram below, also available as an animated brain-mind GIF:
./^^^^^^^\..SEMANTIC MEMORY../^^^^^^^\
| Visual. | .. syntax ..... |Auditory |
| Memory..| .. /..\---------|-------\ |
| Channel.| . ( .. )function|Memory | |
| . . . . | .. \__/---/ . \ | . . . | |
| . /-----|---\ |flush\___/ | . . . | |
| . | . . | . | |vector | . | .word | |
| ._|_ .. | . v_v_____. | . | .stem | |
| / . \---|--/ . . . .\-|---|--/ .\ | |
| \___/ . | .\________/ | . | .\__/ | |
| percept | . concepts _V . | .. | .| |
| . . . . | . . . . . / . \-|----' .| |
| . . . . | . . . . .( . . )| ending| |
| . . . . | inflection\___/-|->/..\_| |
| . . . . | . . . . . . . . | .\__/.. |
Syntax generates thought from concepts.
AI Mind Maintainer jobs will be like working in a nuclear power plant control room.
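
The associative-tag mechanism described above can be illustrated with a minimal Perl sketch, which is not taken from Mind.Forth or from the ghost.pl source code: each word accumulates weighted links to the words that tend to follow it, and the heaviest link supplies the predicted next word.

use strict;
use warnings;

my %assoc;   # $assoc{word}{next_word} = co-occurrence count (the associative tag)

# Build associative tags from a tiny, invented corpus of word sequences.
my @corpus = ('kids make robots', 'robots need me', 'kids make tools');
for my $sentence (@corpus) {
    my @words = split ' ', $sentence;
    for my $i (0 .. $#words - 1) {
        $assoc{ $words[$i] }{ $words[$i + 1] }++;
    }
}

# Predict the most strongly associated follow-on word for a triggering word.
sub predict {
    my ($word) = @_;
    my $links = $assoc{$word} or return '';
    my ($best) = sort { $links->{$b} <=> $links->{$a} } keys %$links;
    return $best;
}

print predict('kids'), "\n";   # prints "make", the most frequent successor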

Ghost Perl AI uses the AudListen() mind-module to detect keyboard input.

Yesterday we may finally have learned how to let the Ghost Perl AI think indefinitely without stopping to wait for a human user to press "Enter" after typing a message to the AI Mind. We want the Perlmind to pause only periodically, in case the human attendant wishes to communicate with the AI. Even if a human types a message and fails to press the Enter key, we want the Perl AI to register a CR (carriage return) by default and to follow chains of thought internally, with or without outside influence from a human user.

Accordingly, today we create the AudListen() module between the auditory memory modules and the AudInput() module. We move the new input code from AudInput() into AudListen(), but the code does not accept any input, so we remove the current code and store it in an archival test file. Then we insert some obsolete but working code into AudListen(). We start getting primitive input, as we did yesterday in the ghost181.pl program. Then we start moving in required functionality from the MindForth AI, such as the ability to press the "Escape" key to stop the program.
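
A non-blocking listen loop of the kind described here might look roughly like the following Perl sketch. It uses the CPAN module Term::ReadKey instead of the actual AudListen() and AudInput() code of ghost182.pl, and the key handling and the think() and process_input() stand-ins are assumptions for illustration only.

use strict;
use warnings;
use Term::ReadKey;

ReadMode('cbreak');              # read single keystrokes without waiting for Enter

my $buffer = '';
while (1) {
    my $key = ReadKey(-1);       # -1 means non-blocking; undef if no key is pending
    if (!defined $key) {
        think();                 # no input pending, so keep thinking internally
        next;
    }
    last if ord($key) == 27;     # Escape key stops the program, as in MindForth
    if ($key eq "\r" or $key eq "\n") {
        process_input($buffer);  # treat CR as the end of the human message
        $buffer = '';
    } else {
        $buffer .= $key;         # accumulate the characters of the human message
    }
}

ReadMode('restore');             # give the terminal back its normal line mode

sub think         { select(undef, undef, undef, 0.1) }   # stand-in for one thought cycle
sub process_input { print "\nheard: $_[0]\n" }           # stand-in for AudMem storage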

Eventually we obtain the proper recognition and storage of input words in auditory memory, but the ghost182.pl AI is not switching over to thinking. Instead, it is trying to process more input. Probably no escape is being made from the AudInput() loop that calls the AudListen() module. We implement an escape from the AudInput() module.

The ghost182.pl program is now able to take in a sentence of input and generate a sentence of output, so we will upload it to the Web. We still need to port from MindForth the code that pauses only to accept human input and then returns to the thinking of the AI.

Machine Translation by Artificial Intelligence

As an independent scholar in polyglot artificial intelligence, I have just today, on March 21, 2017, stumbled upon a possible algorithm for implementing machine translation (MT) in my bilingual Perlmind and MindForth programs. Heretofore my Ghost Perl AI has thought in either English or Russian, but not in both languages interchangeably. Likewise my Forth AI MindForth thinks in English, while its Teutonic version Wotan thinks in German.

Today, like Archimedes crying "Eureka" in the bathtub (though I was showering and displacing no bath-water), I realized that I could add an associative tag, mtx, to the flag-panel of each conceptual memory engram in order to link and cross-identify any concept in one language with the same concept in another language. The mtx variable stands for "machine-translation xfer (transfer)". The AI software will use the spreading-activation SpreadAct module to transfer activation from a concept in English to the same concept in Russian or German.

Assuming that an AI Mind can think fluently in two languages, with a large vocabulary in both, the nub of machine translation will be the simultaneous activation of the same set of concepts in both languages. Thus the consideration of an idea expressed in English will transfer the conceptual activation to a target language such as Russian. The generation modules will then render the English idea as a Russian idea.

Inflectional endings will not pass from the source language directly to the target language, because the mtx tag identifies only the basic psi concept in both languages. The generation modules of the target language will assign the proper inflections as required by the linguistic parameters governing each sentence being translated.
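
A hedged Perl sketch of the mtx idea, with invented concept numbers, field names and activation values, might look like the following: each concept record carries an mtx field pointing at the same concept in the other language, and a SpreadAct-style routine passes activation across that link.

use strict;
use warnings;

# A tiny conceptual memory: each concept has a word, a language, an activation
# level, and an mtx tag naming its counterpart concept in the other language.
my %concept = (
    528  => { word => 'KIDS', lang => 'en', act => 0, mtx => 1528 },
    1528 => { word => 'DETI', lang => 'ru', act => 0, mtx => 528  },
);

# SpreadAct-style transfer: activating a concept also activates its
# same-meaning counterpart in the target language by way of the mtx tag.
sub spread_act_mtx {
    my ($psi, $amount) = @_;
    $concept{$psi}{act} += $amount;
    my $twin = $concept{$psi}{mtx};
    $concept{$twin}{act} += $amount if defined $twin and exists $concept{$twin};
}

spread_act_mtx(528, 32);    # thinking of the English concept "KIDS"...
printf "%s act=%d\n", $concept{$_}{word}, $concept{$_}{act} for 528, 1528;
# ...also activates the Russian counterpart, ready for the Russian generation modules.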

Seed AI code structures to be implemented

In the last two days [15&16.FEB.2017] I have figured out that responses generated from the VisRecog module should be assigned a high truth value, because "seeing is believing."

In January 2017 I figured out that preterite assertions like "Roger is here" should default to a very low truth value.
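
As a rough sketch of these two defaults, with invented variable names and numeric truth values that are not taken from the actual code:

use strict;
use warnings;

# Hypothetical truth-value defaults: high for VisRecog output, low for preterite assertions.
sub truth_value {
    my (%idea) = @_;
    return 90 if $idea{source} eq 'VisRecog';    # "seeing is believing"
    return 10 if $idea{tense}  eq 'preterite';   # stale assertions default very low
    return 50;                                   # otherwise a neutral default
}

print truth_value(source => 'VisRecog', tense => 'present'),   "\n";   # 90
print truth_value(source => 'AudInput', tense => 'preterite'), "\n";   # 10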

So maybe now I have garnered two building blocks of the Full AI; Compleat AI; Terminal AI; Advanced AI; Whatever AI. I need a term to express the idea that it will no longer be a partial AI. It will be an AI as the completion of my lifework. Maybe I should call it the Seed AI. That term has been bruited about for a decade or more and it may now fit the bill for what I have been doing.

MindForth Programming Journal (MFPJ)


Sat.27.AUG.2016 -- Creating the MindGrid trough of inhibition

In agi00031.F we are trying to figure out why we have lost the functionality of ending human input with a 13=CR and still getting a recognition of the final word of the input. We compare the current AudMem code with the agi00026.F version, and there does not seem to be any difference. Therefore the problem probably lies in the major revisions made recently to the AudInput module.

From the diagnostic report messages that appear when we run agi00031.F, it looks as though the 13=CR carriage return is not getting through from the AudInput module to the AudMem module. When we briefly insert a revealing diagnostic at the start of the agi00026.F AudMem module, we see from "g AudMem: pho= 71" and "o AudMem: pho= 79" and "d AudMem: pho= 68" and "AudMem: pho= 13" that the carriage return is indeed getting through there. Therefore in AudInput we need to find a way of sending the final 13=CR into AudMem. Upshot: it turns out that in AudInput we only had to restore "pho @ 31 > pho @ 13 = OR IF \ 2016aug27: CR, SPACE or alphabetic letter" as a line of code that lets 13=CR be one of the conditions for calling the AudMem module.
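
In Perl-like pseudocode the restored gating condition amounts to the following self-contained sketch; the aud_mem() stand-in and the sample input are invented for illustration, and only the "pho > 31 OR pho = 13" test comes from the Forth line quoted above.

use strict;
use warnings;

sub aud_mem { print "AudMem: pho= $_[0]\n" }     # stand-in for the real AudMem module

# Send each phoneme onward when it is above ASCII 31 (a printable character)
# OR when it is 13 (carriage return), so that the final 13=CR also gets through.
for my $char ('G', 'O', 'D', chr(13)) {          # "GOD" followed by a carriage return
    my $pho = ord($char);
    aud_mem($pho) if $pho > 31 or $pho == 13;    # CR, SPACE or alphabetic letter
}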

Next in the InStantiate module we need to remove a test that only lets words with a positive "rv" recall-vector get instantiated, because we must set "rv" to zero for personal pronouns being re-interpreted as "you" or "I" during communication with a human user. Apparently the Perlmind just ignores the engrams with a zero "rv" and finds the correct forms with a search based on parameters.

Now we would like to see how close we are to fulfilling all the conditions for a proper "trough" of inhibition in the AI MindGrid. When we run the ghost175.pl Perl AI and we enter "You know God," we see negative activations in the present-most trough for both the input and the concepts of "I HELP KIDS" as the output. In the Forth AGI, we wonder why we do not see any negative activations in the present-most trough. Oh, we were not yet bothering to store the "act" activation-level in the Forth InStantiate module. We insert the missing code, and we begin to see the trough of inhibition in both the recent-most input and the present-most output.

Visualizing the MindGrid as Theater of Neuronal Activations

Recently we have developed the ability to visualize the MindGrid as a theater of neuronal activations. At the most recent, advancing front of the MindGrid, we see an inhibited trough of negative activations. We see an input sentence from a human user activating concept-fibers stretching back to the earliest edge of the MindGrid. We see an old idea becoming fresh output and then being inhibited into negative activation at its origin. We see outputs of the AGI passing through ReEntry() to re-enter the Mind as inhibited engrams while re-activating old engrams. We see the front-most trough of inhibition preventing the most recent ideas from preoccupying and monopolizing the artificial consciousness.

In ghost174.pl, we have now commented out some code in the InStantiate() mind-module that was letting only nouns or pronouns of human input be re-activated along the length of the MindGrid. The plan now is to let all parts of an incoming sentence re-activate the engrams of its component concepts.

Now, how do we make sure that the front-most engrams of the sentence of human input will be inhibited with negative activation in the trough of recent mental activity on the MindGrid? It appears that InStantiate() makes a sweep of old engrams to set a positive activation, and then at the $tult penultimate-time it sets an activation for the current, front-most input. In order to keep a trough of recent inhibition, let us try setting a negative activation at the $tult time-point.
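
In Perl pseudocode the change amounts to something like the following sketch; the names $tult, psi and act follow the journal text, while the array layout and the numeric activation values are assumptions.

use strict;
use warnings;

my @psy;          # conceptual memory: $psy[$t] = { psi => concept, act => activation }
my $tult = 2426;  # hypothetical penultimate time-point of the current input

# Sweep of older engrams of the same concept: positive re-activation.
sub reactivate_old_engrams {
    my ($psi) = @_;
    for my $t (0 .. $tult - 1) {
        next unless $psy[$t] and $psy[$t]{psi} == $psi;
        $psy[$t]{act} = 30;                      # positive activation for old engrams
    }
}

# Front-most engram of the current input: negative activation at $tult,
# so that the newest idea cannot monopolize the artificial consciousness.
sub instantiate_front {
    my ($psi) = @_;
    $psy[$tult] = { psi => $psi, act => -46 };   # the trough of inhibition
}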

After input of "I see kids" and a response by the AI of "KIDS MAKE ROBOTS", in minddata.txt we see the sweep of positive activation of old engrams.

At t=477, "YOU" has an activation of thirty (30).

At t=518, "YOU" has an activation of thirty (30).

At t=317, 820=SEE has an activation of thirty (30).

At t=575, 528=KIDS has an activation of 62, apparently because there was also a re-entry of "KIDS".

As a result of the $tult trough-inhibition,
at t=2426, 707=YOU has a negative -46 activation.
At t=2430, 820=SEE has a negative -46 activation.
At t=2435, 528=KIDS has a negative -14 activation, apparently because the AI response of "KIDS MAKE ROBOTS" made a backwards sweep to impose a positive thirty-two (32) points of activation upon the pre-existing negative -46 points, resulting in -46 + 32 = -14, which is still part of the negative trough.

Now the AGI is making its series of innate self-referential statements ("I AM A PERSON"; "I AM A ROBOT"; "I AM ANDRU"; "I HELP KIDS"), but why is it not using SpreadAct() to jump from the reentrant concept of "KIDS" to the innate idea of "KIDS MAKE ROBOTS"? Let us see if SpreadAct() is being called, and from where. We do not see SpreadAct() being called in the diagnostic messages on-screen while we run the AGI. Let us check the Perlmind source code. We see that the OldConcept() module has been calling SpreadAct() for recognized nouns since ghost162.pl, but now we delete that snippet of code because we see in our MindGrid theater that we do not want OldConcept() to make any calls to SpreadAct(). The AGI still runs.

We see that SpreadAct() is potentially being called from the ReEntry() mind-module, but the trigger is not working properly, so we change the trigger. Then we get SpreadAct() re-activating nouns, and we begin to see a periodic association from the innate self-referential statements to "KIDS MAKE ROBOTS" and from there to "ROBOTS NEED ME". Apparently the inhibitions have to be cancelled out before the old memories can re-surface in the internal chains of thought of the AGI.

Artificial Intelligence in German (Amazon Kindle e-book)

If your humanoid robot needs an AI Mind to think in English or German, a new Amazon Kindle e-book goes into great detail about robotic thought processes.



This e-book in English about AI in German (and English and Russian) contains the entire AI source code in Forth; because the code takes up so much of the book, most of the editorial portion (18 of 20 chapters) falls within the free preview and can be read without charge.



InFerence for Robot Artificial Intelligence (Mind-Module)

InFerence is now an Amazon Kindle e-book with a "Click to LOOK INSIDE!" free preview, so that robot-makers and AI enthusiasts who may not have a credit card can get the gist of the information free of charge from the product description and the first few chapters of the preview. The e-book is available across the World Wide Web in Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom and the United States. So far the robot AI e-book has been reviewed with four stars out of five. The robot AI software is free to download in English, German and Russian.



64-bit Supercomputer Forth Chips for Strong AI

Imagine a four-core, 64-bit Forth AI CPU designed to run a not-quite-maspar (massively parallel) but still somewhat parallel artificial intelligence in English (http://www.scn.org/~mentifex/mindforth.txt) or in German (http://www.scn.org/~mentifex/DeKi.txt).

Such a specialized Strong AI Forth CPU could devote one core to visual processing and memory; a second core to auditory input and memory; a third core to robotic motor memory and output; and a fourth core to automated reasoning with InFerence (http://code.google.com/p/mindforth/wiki/InFerence) in English, German or Russian.

The 64-bit Forth CPU could be architecturally simple by dint of leaving out all the customary circuitry used for floating-point arithmetic, and Forth would serve as its own AI operating system.

JavaScript Artificial Intelligence Programming Journal

Wed.3.APR.2013 -- "nounlock" May Not Need Parameters

In the English JSAI (JavaScript artificial intelligence), the "nounlock" variable holds onto the time-point of the direct object or predicate nominative for a specific verb. Since the auditory engram being fetched is already in the proper case, there may not be any need to specify any parameters during the search.

Fri.5.APR.2013 -- Orchestrating Flags in NounPhrase

As we run the English JSAI at length without human input and with the inclusion of diagnostic "alert" messages, we discover that the JSAI is sending a positive "dirobj" flag into NounPhrase without checking first for a positive "predflag".

Sat.6.APR.2013 -- Abandoning Obsolete Number Code

Yesterday we commented out NounPhrase code which was supposed to "make sure of agreement; 18may2011" but which was doing more harm than good. The code was causing the AI to send the wrong form of the self-concept "701=I" into the SpeechAct module. Now we can comment out our diagnostic "alert" messages and see if the free AI source code is stable enough for an upload to the Web. Yes, it is.

77 older entries...
