Older blog entries for Flanneltron (starting at number 118)

Minor Sim Tribulations and Second-Order Cybernetics

As part of my effort towards developmental systems for cognitive architectures, I’ve been trying to beat some sense into an Alife simulator. I’ve used the Breve simulator in the past, and although it’s no longer supported, it still works fine. My existing Anibots code uses Breve [1]—which in turn uses Open Dynamics Engine—as the physical […]

Syndicated 2013-10-02 07:35:25 from SynapticNulship

On “Humanoid robots as ‘The Cultural Other’: are we able to love our creations?”

I just noticed a recently published Springer article titled “Humanoid robots as ‘The Cultural Other’: are we able to love our creations?” by Min-Sun Kim and Eun-Joo Kim [1], which cites my own article “Would You Still Love Me If I Was A Robot?” [2]. At the moment I do not have access to the full […]

Syndicated 2013-09-07 05:31:15 from SynapticNulship

Artificial Intelligence is a Design Problem

Take a look at this M.C. Escher lithograph, “Ascending and Descending”. Is this art? Is it graphic design? Is it mathematical visualization? It is all of those things. One might even say that it’s also an architectural plan given that it has been used to implement physical 3-dimensional structures which feature the same paradox when […]

Syndicated 2013-09-03 02:53:04 from SynapticNulship

The World’s End: Change and Consequences

It has been said that true science fiction requires a story in which the world is changed—and never goes back to the way it was (I don’t remember the source of this definition). By this definition, techno-thrillers, such as everything by Michael Crichton, are not science fiction, since the world is returned to normal after […]

Syndicated 2013-08-26 14:24:57 from SynapticNulship

Elysium and Science Fiction Films that Hate Science and Technology

I haven’t seen Elysium yet, but Ryan Britt’s article “Our Science Fiction Movies Hate Science Fiction” is interesting nonetheless: “Ripping off the heads of robots like a sweaty space-age cyberpunk Robin Hood, Matt Damon is delivering future-social-justice this week in Elysium.” Alright, so what does this have to do with anti-science-fiction? As Britt writes: But […]

Syndicated 2013-08-11 06:31:25 from SynapticNulship

A World of Affect

Back in the fall of 2005 I took a class at the MIT Media Lab called Commonsense Reasoning for Interaction Applications, taught by Henry Lieberman and TA’d by Hugo Liu. For the first programming assignment I made a project called AffectWorld, which allows the user to explore, in 3D space, the affective (emotional) appraisal of […]

Syndicated 2013-08-02 04:22:05 from SynapticNulship

AAAI Accepted My Paper “An Ecological Development Abstraction for Artificial Intelligence”

My short paper, “An Ecological Development Abstraction for Artificial Intelligence,” will be featured in the symposium “How Should Intelligence be Abstracted in AI Research: MDPs, Symbolic Representations, Artificial Neural Networks, or _____?” and will be published in the AAAI (Association for the Advancement of Artificial Intelligence) technical report for the AAAI 2013 Fall Symposium Series. […]

Syndicated 2013-07-13 01:31:13 from SynapticNulship

Nature-Inspired Development as an AI Abstraction

I’m working on some ideas and a paper to present my version of biologically inspired development: not just as a single project or a technique, but as an abstraction level. It’s hard to explain, so let me first digress with this: The agent approach became a mainstream part of AI in the […]

Syndicated 2013-06-13 01:55:13 from SynapticNulship

Language Does Not Shape Thought

Cognition causes language, not the other way around. Correlations between changes in thought and changes in language abound, but the arguments for causality from language to cognition in this context are very weak. What do people mean by “language shapes thought”? Lera Boroditsky likes to spread the meme that language shapes thought. Others have used it […]

Syndicated 2013-05-18 06:24:56 from SynapticNulship

Symbol Grounding and Symbol Tethering

Philosopher Aaron Sloman claims that symbol grounding is impossible. I say it is possible, indeed necessary, for strong AI. Yet my own approach may be compatible with Sloman’s. Sloman equates “symbol grounding” with concept empiricism, thus rendering it impossible. However, I don’t see the need to equate all symbol grounding with concept empiricism. And what […]

Syndicated 2013-04-03 04:17:37 from SynapticNulship

109 older entries...
