Tuesday, March 14, 2006

Conceptual AI

Jantunen talks about artificial intelligence that is capable of bringing about the so-called "technological singularity". This means that the AI is human-like and intelligent enough to perform various informational tasks that require independent assimilation of facts. When the AI is sophisticated enough to design software and hardware, we can tell it to design a better AI, and so we have reached the technological singularity.

First, I characterize what I mean by conceptual AI, and then I describe how it may become reality.

This writing is about conceptual AI: an artificial intelligence that can communicate with humans in natural language. It can also think with concepts that are not hard-wired. Although current state-of-the-art game AIs can beat most humans in many areas (Deep Blue, Master of Orion, ...), they wouldn't be conceptual AIs even if they had a natural-language interface, since they can only handle a finite set of concepts. The game AIs couldn't use new concepts to analyze the situation.

The conceptual AI can be divided roughly into two parts: the language unit and the domain model. The language unit would analyze textual descriptions of the domain model. It would try to map the concepts of the natural language to the domain-model phenomena they denote. It would recognize and handle various linguistic problems: identifying what pronouns, substantives and verbs denote, recognizing ambiguous expressions, guessing the standard of comparison when something is described as "high", "bad" or "special", assimilating new concepts based on definitions, and recognizing when the other end is using a concept in a wrong way.
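One of these abilities, assimilating new concepts from definitions, can be sketched as a toy program. This is only a minimal illustration of the idea, not a design: the class and method names (ConceptMap, define, resolve) and the seed vocabulary are my own inventions.

```python
# Toy sketch of a language unit that maps words to known domain concepts,
# learns new concepts from explicit definitions, and flags unknown terms.
# All names and the seed vocabulary are illustrative assumptions.

class ConceptMap:
    def __init__(self):
        # seed vocabulary: word -> domain-model concept it denotes
        self.concepts = {"stock": "EQUITY", "share": "EQUITY"}

    def define(self, word, existing_word):
        """Assimilate a new concept by defining it via a known one."""
        if existing_word not in self.concepts:
            raise ValueError(f"cannot define {word!r}: {existing_word!r} is unknown")
        self.concepts[word] = self.concepts[existing_word]

    def resolve(self, word):
        """Map a word to a domain concept, or report that it is unknown."""
        return self.concepts.get(word, "UNKNOWN")

cm = ConceptMap()
print(cm.resolve("share"))    # EQUITY
print(cm.resolve("equity"))   # UNKNOWN until defined
cm.define("equity", "stock")  # told that "an equity is a stock"
print(cm.resolve("equity"))   # EQUITY
```

A real language unit would of course need far richer structure than word-to-concept pairs, but the key behavior is visible even here: the set of usable concepts grows at runtime instead of being fixed at design time.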

The domain model would contain the mechanics and dynamics of the subject matter. For example, if the topic of discussion is stock markets, the domain model would contain a database of financial information about the companies, their production methods, the dynamics of the industries, recent news events affecting the economy, etc. This domain information would be structured in a format suitable for the AI. Some parts of the format would depend on what concepts the AI considers relevant, and the format would be open-ended to enable extension when new kinds of information are hypothesized to be relevant.
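The open-endedness can be sketched with a minimal record type for the stock-market example. The field names and the `extra` mapping are illustrative assumptions, not a proposed schema.

```python
# Toy sketch of an open-ended domain-model record for the stock-market example.
# Fixed fields cover what the AI currently treats as relevant; the `extra`
# mapping lets new kinds of information be attached without changing the schema.
# All field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class CompanyRecord:
    name: str
    industry: str
    revenue: float
    extra: dict = field(default_factory=dict)  # open-ended extension point

acme = CompanyRecord(name="Acme", industry="manufacturing", revenue=1.2e9)
# Later, a new kind of information is hypothesized to be relevant:
acme.extra["recent_news"] = ["factory fire reported 2006-03-10"]
print(acme.extra["recent_news"][0])
```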

The conceptual AI wouldn't contain emotions, goals (except answering the user's questions and performing the tasks given to it), self-interests, visual memory (except if the domain model requires visual thinking), or other human characteristics.

How Could Conceptual AI Be Developed?

I'm going to consider three scenarios, rejecting two of them and giving the green light to one.

Rejected: An expert system is generalized into a conceptual AI. The scenario is basically that initially there is a special-purpose AI with a complex and extensive domain model (an expert system). The expert system is augmented with a restricted natural-language interface to make it more usable and accessible for the people who need it. Gradually, the natural-language interface is extended and made more flexible, and metaconceptual elements are added. Why this is not credible: I haven't heard of expert systems that could answer a wide variety of different questions, let alone questions they were not designed to answer. Even if the domain model is complex, it can probably handle only a finite set of question types. Therefore, there is little or no benefit from the metaconceptual language module. The flexibility wouldn't be useful because of the rigid domain model.

Rejected: Data mining. The scenario is that we have a huge database of numerical and textual data on some topic. We also have means to do some preliminary analysis of the data: putting it into nice tables in a relational database, parsing the textual data, etc. First, a straightforward natural-language interface is added to answer simple queries about the data. Second, the AI is sharpened with metaconceptual facilities. It is trained to recognize "normal" levels of various attributes based on their distributions, so it can also recognize "higher than usual". It is made to understand definitions. It is made to point out trends and outlier cases. This time, there is a real advantage from the metaconceptual facilities. Why this is not credible: My impression is that development is moving towards more flexible query languages. SQL (the current standard database query language) could be replaced by Prolog-based query languages, providing much more flexibility and deduction. Therefore, there is little incentive to add a natural-language interface when specialized query languages still have a long way to go.
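Recognizing "higher than usual" from a distribution is easy to sketch: learn the mean and spread of past values and flag anything far above them. The two-standard-deviation threshold below is an arbitrary illustrative choice.

```python
# Toy sketch of learning "normal" levels from a distribution so the AI can
# recognize "higher than usual": here, anything more than n_sigma standard
# deviations above the mean. The threshold of 2.0 is an assumption.

from statistics import mean, stdev

def higher_than_usual(history, value, n_sigma=2.0):
    """Return True if `value` is unusually high relative to `history`."""
    mu, sigma = mean(history), stdev(history)
    return value > mu + n_sigma * sigma

daily_volumes = [100, 104, 98, 101, 97, 103, 99, 102]
print(higher_than_usual(daily_volumes, 130))  # True
print(higher_than_usual(daily_volumes, 105))  # False
```

Real data would need more care (skewed distributions, trends, seasonality), which is exactly where the metaconceptual facilities would earn their keep.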

Probable: Tech support. Suppose that we are talking about an ISP support line. The AI would be developed gradually:

  • Phase 0 (where we are now): Asks the user to press 1 if the problem concerns broadband connections, 2 for phones and 3 for other issues.

  • Phase 1: Asks the user to describe the problem. Uses the information to redirect the call to the person who has handled these kinds of problems before.

  • Phase 2: Asks the user for some initial information to look up the customer record. Uses the description to check whether the call reports a problem that has already been reported by someone else.

  • Phase 3: The AI is extended to solve very simple but common problems.

  • Phase 4: The metaconceptual side is extended so that it can better understand ambiguous descriptions and provide some initial information before the call is redirected to a human. This way, less of the queueing time is wasted.

  • ...

  • Phase 20: The domain model is extended with information on how specific applications (Word, etc.) work.

  • ...

  • Phase 56: The domain model is extended with an actual computer. If the user describes a problem, the AI can try to reproduce the problem on the computer.
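The duplicate-report check in Phase 2 can be sketched with simple word-overlap similarity between the new description and already-reported ones. The Jaccard measure, the 0.5 threshold, and all names here are illustrative assumptions; a real system would need something much more robust.

```python
# Toy sketch of Phase 2: checking whether a new problem description matches
# one that has already been reported, using Jaccard word overlap.
# The 0.5 threshold and all names are illustrative assumptions.

def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def already_reported(description, known_reports, threshold=0.5):
    """Return the best-matching known report, or None if nothing is close."""
    best = max(known_reports, key=lambda r: jaccard(description, r), default=None)
    if best is not None and jaccard(description, best) >= threshold:
        return best
    return None

known = ["broadband connection drops every evening",
         "email password reset not working"]
print(already_reported("my broadband connection drops every evening too", known))
print(already_reported("printer on fire", known))  # None: no close match
```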

In this scenario, the incentive for a good metaconceptual unit is great right from the start, since user descriptions are seldom very clear or logical. Second, each additional step towards a more intelligent AI is economically justified, as it directly replaces human effort. The AI is useful right from the start, and there isn't any discontinuity point or big gap where a big technical leap is needed to make the AI more useful. Therefore, there is constant incentive to take small steps towards a much better AI.
Why it hasn't started yet: This is possible only after suitably robust speech recognition technology is available. The speech recognition technology must be able to recognize normal speech practically without errors, although it can ask the user to talk clearly. When the speech-to-text conversion is available, the work can start.