
MindForth Programming Journal (MFPJ) 2010 September 5

Sun.5.SEP.2010 -- Looking Before We Leap

As we gear up for self-referential thought in autonomous robots, we want each robot AI Mind to be able to handle questions in three different formats, as exemplified below.

1. What do robots make?

2. What do robots do?

3. What are robots?

The questions listed above go from the very specific to the very general. The first question, "What do robots make?", is an example of the "what-do-X-VERB?" format, where the "verb" slot may be filled with any suitable verb, such as "think" or "need". We use the verb "make" here because it will allow the AI to recall a long list of the direct objects of the noun-verb combination, "robots make".

The second question is an example of the more general "what-do-X-DO" format, where no particular verb is supplied and the AI Mind is free to come up with a long list of verbs, with or without objects, that would complete a thought beginning with "robots" as a subject.

The third question, "What are robots?", is included here only for completeness in the consideration of questions that intelligent robots might be called upon to answer. We are concerned today with the answering of "what-do-X-VERB" and "what-do-X-DO" questions. It may seem to the casual peruser of these AI Lab Notes that such questions are ridiculously simple and should present no difficulty at all to any True AI worthy of the name, but a reality check is in order here: how a software program deals intelligently with such simple questions is itself a profound question requiring devilishly deep thought to answer. And if you did not smile at the mention of deep thought in the previous sentence, then you have no business here and you are really Joe Sixpack, not Joe Appcoder.

Now excuse us for a moment, because we have had to respond urgently to the travails of some young graduate student who has become lost on the Web and needs the help of a webfooted wizard at the prestigious AI Forum. We found what he was looking for, and the guy was beside himself with astonishment and thanks. In order to wring the last drop of memetic advantage out of the rescue-episode, we propose to follow up with the following tongue-in-cheek tradecraft.

It's so outstanding to hear from you again,
young coderpup.

How totally bodacious for you to do work on neural nets.

Bright and shining your future must be,
for you stick with your awesome goals and
fail-or-no-fail you care not.

To answer your further questions ready am I.

Just ask the Old AI Dude when the going gets ungoogly.

Sun.5.SEP.2010 -- Natura Non Facit Saltum

When we ask the AI, "What do robots make?", the responses could include cars, tools, parts, and even more robots. We need to change the AI MindGrid in such a way that the AI will be able to make statement after statement until the possible answers have been exhausted in the knowledge base (KB). Somehow we need a way to make each succeeding answer drop out of the queue, so that the next answer may surface in consciousness. We may need to create an InHibit mind-module that will lower the activation on a particular node on the quasi-fiber of the verb (such as "make") figuring in the responses to the query.
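
For concreteness, here is a minimal sketch in Forth of what such an InHibit word might look like. This is not the actual MindForth code: the activation array "act", the size constant "#nodes", and the chosen inhibition level are illustrative stand-ins for whatever conceptual array really holds the node activations.

  40 CONSTANT #nodes                 \ illustrative number of nodes
  CREATE act #nodes CELLS ALLOT      \ one activation cell per node (stand-in array)

  -15 CONSTANT inhibition            \ a "definitely negative" level of activation

  : InHibit ( node# -- )
    CELLS act +                      \ address of the selection-winning node's cell
    inhibition SWAP ! ;              \ drive only that one node below zero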

Suddenly we see a way to achieve our goal of enabling multiple answers to a what-do-X-VERB query. It will involve radical changes perhaps not to the underlying MindGrid, but certainly to several mind-modules operating across the MindGrid.

At the heart of the solution is the brand-new idea that, during a query-response, after a verb-node wins selection into a thought, the entire verb-concept shall not be psi-damped down to zero, but rather only the selection-winning node shall be inhibited down to a negative level of activation, such as minus-fifteen or lower. Furthermore, the PsiDecay module shall be made to work in two directions, both downwards towards zero and upwards towards zero, so that any inhibited node shall gradually lose its inhibition. Mind-modules that try to zero out an entire range of concepts shall be rewritten ("Get me Re-Write!") to zero out only positive activations on concepts, and to leave negative activations alone.

At the same time as all these changes are in effect, the subject of the query shall have a special status of persistence, so that the AI shall try to issue a series of statements about the subject in combination with the query-verb, until all pertinent nodes on the query-verb have been knocked down into a sub-zero inhibition. At that point, any thought beginning with the query-subject will surely fail to connect with the query-verb, and may or may not find a different verb for the generation of a KB-valid sentence. We may let the special status of the query-subject persist only so long as valid thoughts emerge in connection (in synergy) with the query-verb, with a release-mechanism to dislodge the subject from its special status when the knowledge base has been exhausted.
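
Again only as a hedged sketch rather than the actual rewrite, the two-way PsiDecay and the "zero out only positive activations" rule might look something like the following, reusing the illustrative "act" array and "#nodes" constant from the sketch above.

  : PsiDecay ( -- )
    #nodes 0 DO
      act I CELLS + @                \ fetch the activation of node I
      DUP 0 > IF 1- ELSE             \ positive: decay downwards toward zero
        DUP 0 < IF 1+ THEN           \ negative (inhibited): decay upwards toward zero
      THEN
      act I CELLS + !                \ store the adjusted activation
    LOOP ;

  : ZeroPositive ( -- )
    #nodes 0 DO
      act I CELLS + @ 0 > IF         \ touch only nodes above zero
        0 act I CELLS + !            \ zero out the positive activation
      THEN                           \ negative (inhibited) activations are left alone
    LOOP ;

Each call to PsiDecay moves an inhibited node one step back toward zero, so a node knocked down to minus-fifteen regains eligibility only after many cycles of thought.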

The beauty of inhibiting serial same-verb nodes down to a definitely negative level of activation lies in the realization that the sentence-generation process will continue to work the old-fashioned way. The VerbPhrase module will flush out the next same-verb node to win thought-selection, oblivious to the fact that one node is now out of commission at a deep level (deep unthought) of negative activation. There is some elegance to a solution in which you change one phenomenon (the post-selection activation-level) while everything else still works in the same old way. It is like evolution, which does not make massive saltations all at once, but only makes one tiny mutation at a time.
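
To show why nothing else needs to change, here is one last illustrative sketch of the unchanged selection step: picking whichever same-verb node has the highest activation. A node sitting at minus-fifteen simply never wins, so VerbPhrase itself needs no special case. The word name BestNode is hypothetical; in MindForth the real selection is spread across the actual mind-modules.

  : BestNode ( -- node# )
    0 -10000                         \ candidate node and best activation so far
    #nodes 0 DO
      act I CELLS + @ OVER > IF      \ does node I beat the best so far?
        2DROP I act I CELLS + @      \ yes: remember node I and its activation
      THEN
    LOOP
    DROP ;                           \ discard the activation, leave the node number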

