Kurzweil's latest book: How to Create a Mind

I read Ray Kurzweil's latest book. It is a good book that explains, in a simple way, Kurzweil's ideas, his work, and his progress in the effort to build a mind, to build a strong artificial intelligence.

I agree with most of what Kurzweil says, especially with the main idea of the mind as pattern recognition, the PRTM (Pattern Recognition Theory of Mind). A pattern recognizer, a classifier, is a problem solver at the "top level": if you have an engine that solves the general classification problem, you can solve every problem.

Kurzweil implements the classifier using an HHMM (hierarchical hidden Markov model).

That is a graph of states linked by transition probabilities. There is a big problem with this model: how do you define the topology of the graph? How many states do you need? How many levels do you need in the hierarchical structure? I think this model is not flexible enough to let every algorithm emerge.
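
To make the kind of model concrete, here is a minimal sketch of the forward algorithm on a flat HMM (not Kurzweil's hierarchical variant); the transition matrix, emission matrix, and initial distribution below are invented for illustration:

```python
import numpy as np

# A flat HMM, not Kurzweil's hierarchical HHMM. All probabilities are invented.
A = np.array([[0.7, 0.3],   # transition probabilities: row i = P(next state | state i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],   # emission probabilities: P(observation | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])   # initial state distribution

def forward(observations):
    """Probability of an observation sequence under the HMM (forward algorithm)."""
    alpha = pi * B[:, observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
    return alpha.sum()

print(forward([0, 1, 0]))  # likelihood of the sequence 0, 1, 0
```

Note that the topology (two states, fully connected) is fixed by hand here, which is exactly the problem: nothing in the model tells you how many states or levels to use.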

Kurzweil is aware of these limitations, and also implements a genetic/evolutionary algorithm in the system to solve this flexibility problem.

Before continuing, it is better to clarify a concept. Why do we need flexibility? How much flexibility do we need?

Why not use a simple table to solve every problem? For a given problem we can build a table where every possible input has a corresponding output, so we have the correct answer for every instance of the problem. It is very simple.

The problem is not only a matter of space; the problem is that we don't know the answer for every instance of the problem. Learning a table requires training on every input/output pair of the system; in some sense we have to spend too much in the learning (programming) phase to build a solver implemented as a table. And a tree? A tree structure requires less training data to learn, but again it may not be small enough. As an exercise, try to implement the sorting problem with a table, a tree, and a graph, given training data; a toy version of the table case is sketched below.
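
Here is the table case as a toy sketch: the "sorter" memorizes the training pairs and cannot generalize at all, so it fails on any input it has not been taught. The training examples are invented:

```python
# A table-based "sorter": it knows only the exact input/output pairs it was trained on.
table = {}

def train(inputs):
    for x in inputs:
        table[tuple(x)] = tuple(sorted(x))  # the teacher supplies the correct answer

def solve(x):
    return table.get(tuple(x))  # no generalization: unseen inputs have no answer

train([[3, 1, 2], [2, 1, 3]])
print(solve([3, 1, 2]))  # (1, 2, 3) -- seen during training
print(solve([9, 5, 7]))  # None -- never trained, the table is silent
```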

At the opposite end of the flexibility scale there is ILS (inverse Levin search), which gives the best solution, the best program, for the given training data. So why not use ILS? The problem with this solver is that it is too flexible: it makes no assumptions, so the search space becomes too wide. In general, if you know some constraints, assumptions, or characteristics of the problem, restricting the search space can be very useful (using a tree instead of a graph in a learning system is useful if we know there is no possibility of connections from the bottom nodes back to the top, etc.).
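
To give a feel for the idea, here is a crude sketch of a shortest-first program search over a made-up DSL of three primitives; real Levin search also interleaves running-time budgets proportional to 2^-length, which is omitted here:

```python
from itertools import product

# Tiny invented DSL: a program is a sequence of primitives applied left to right.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def search(training_pairs, max_len=5):
    """Enumerate programs shortest-first; return the first one consistent
    with the training data. The search space grows exponentially with length,
    which is the point: with no assumptions, it quickly becomes too wide."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in training_pairs):
                return program
    return None

# Find a program mapping 1 -> 4 and 2 -> 6, i.e. f(x) = 2 * (x + 1).
print(search([(1, 4), (2, 6)]))  # ('inc', 'dbl')
```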

So how flexible, how generic, must a solver be? I prefer to look at the problem, to understand whether there are constraints or inferences we can use to restrict the genericity of the solver. A generic classifier has no restrictions, but a mind is surely not based on a generic classifier; it is only a restricted classifier.

Kurzweil seems to prefer to look at the solver (the brain) to derive these restrictions, implementing a system similar to the structure of the brain (as many other projects are doing), but without using a neural network (!?), because it is not flexible enough…

And worse, he inserts a GA because the HHMM is, again, not flexible enough. I think it would be better to define the required level of flexibility first, and then implement a solver with exactly that level of flexibility.

In the book Kurzweil describes his speech recognizer, which seems to work very well but is based on very strong restrictions, and it can be very dangerous to make such restrictions without strong evidence. The system classifies points in 16 dimensions, where each cluster is defined as a circle that includes all the points of the class. There are also many restrictions on the size of the data, etc., but I see no big problem with those. The problem is the assumption that the 16 dimensions are related like spatial dimensions! Why? And why use a circle to identify a cluster? Why not a 16-dimensional plane, or a curve in N dimensions? An answer to these questions may come from acoustic physics, and probably there is an answer, given the system's good performance. But my doubt is not there; my doubt is how such a system would work on a completely different set of problems. I am sure the pattern recognizers of the brain are very similar, but I think the restrictions made in the speech recognizer are too strong. I think it is very unlikely that a system with those restrictions can work in a totally different domain.
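
As I understand the description, each class is represented by a hypersphere (a "circle" in 16 dimensions) enclosing its training points. Here is a sketch of that geometry; apart from the dimension, everything (the data, the centroid-plus-radius fitting) is my own invention for illustration, and the hard-coded sphere is exactly the assumption being questioned:

```python
import numpy as np

DIM = 16  # each input becomes a point in 16 dimensions, per the book

class SphereCluster:
    """A class modeled as a hypersphere: a centroid plus an enclosing radius."""
    def fit(self, points):
        self.center = points.mean(axis=0)
        self.radius = np.linalg.norm(points - self.center, axis=1).max()
        return self

    def contains(self, x):
        # The shape of the decision boundary is baked in: a sphere, nothing else.
        return np.linalg.norm(x - self.center) <= self.radius

rng = np.random.default_rng(0)
cluster = SphereCluster().fit(rng.normal(0.0, 1.0, size=(100, DIM)))
print(cluster.contains(np.zeros(DIM)))       # near the centroid: True
print(cluster.contains(np.full(DIM, 10.0)))  # far away: False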

Another question that comes to me: if a system with those restrictions is enough for good performance, why not use an SVM (Support Vector Machine)? It seems perfect for that problem!
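
An SVM handles exactly this kind of fixed-dimensional classification, and a kernel lets the decision surface be something other than a sphere. A minimal scikit-learn sketch on synthetic 16-dimensional data (the data is invented; only the setup mirrors the problem):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two synthetic classes of 16-dimensional points standing in for speech features.
X = np.vstack([rng.normal(0.0, 1.0, (100, 16)),
               rng.normal(3.0, 1.0, (100, 16))])
y = np.array([0] * 100 + [1] * 100)

clf = SVC(kernel="rbf")  # the RBF kernel does not assume spherical classes
clf.fit(X, y)
print(clf.predict(rng.normal(3.0, 1.0, (1, 16))))  # likely class 1
```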

Now, about the GA. Why do GAs work in nature? Or better, why do evolutionary algorithms work in nature? As shown by the NFL (No Free Lunch) theorems (1, 2), there are some problems here. A good answer can be found in Investigations by Stuart Kauffman, where it is explained simply that in nature the fitness evolves along with the object of the evolution; in other words, the problem changes. This is not the case if we fix an arbitrary problem (but we cannot do that for large problems). I am not saying that a GA cannot be a good solver, only that it can be a bad one (genetic algorithms open deep questions… for example: is every existing problem natural, that is, shaped to be solved by an evolutionary algorithm? Do the laws of nature constrain an evolutionary environment?).

Before using an evolutionary algorithm I always ask why the algorithm should be a good solution: on what criteria can I claim the evolutionary algorithm will work? I look at the fitness of the problem, and if the fitness can be represented such that "good solutions" are close enough together, the GA can be a good choice.
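
A toy illustration of that criterion: a minimal GA on OneMax (maximize the count of 1 bits), a problem chosen precisely because similar genomes have similar fitness, so "good solutions" are close together and the landscape is smooth. All the parameters are arbitrary:

```python
import random

random.seed(0)
GENOME, POP, GENS = 32, 40, 60

def fitness(bits):
    # OneMax: nearby genomes have nearby fitness, the smooth landscape GAs like.
    return sum(bits)

def mutate(bits, rate=0.02):
    return [b ^ (random.random() < rate) for b in bits]  # flip each bit with prob `rate`

def crossover(a, b):
    cut = random.randrange(1, GENOME)  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]  # truncation selection: keep the best half
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(POP - len(parents))]

print(max(fitness(ind) for ind in pop))  # approaches GENOME on this easy landscape
```

On a deceptive or rugged fitness landscape the same loop can wander indefinitely, which is the point of asking the question first.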


I cannot find an answer to these questions in the book. I don't know if there are answers; probably the objective of the book is to be "non-technical" enough that such explanations are out of scope.

Ok, I cannot close this post without reporting these two issues.

Self-reference

Very often Kurzweil speaks about "self-…" as a powerful feature. How powerful can self-reference be? On page 188 he explicitly says: "…as well as for self-modifying code (if the program store is writable), which enables a powerful form of recursion." Self-modifying code… and if the program store is not writable, what can we not do? What is impossible to do if we have a universal language and a program in read-only memory? Nothing!

To explain better, think about how self-modifying code changes itself. To change itself it must follow a program (otherwise we have an oracle), but we can make self-modifying code that also modifies that program (the one describing how it modifies the code), and again we can build a self-self-self-modifier. Where does this chain of self-references end? It ends at universality! You can never do better than a universal program. The only thing you can save with self-modifying code is a constant C (the space of the code).

For every self-…-self-rewriting program there exists a non-rewriting program that uses (in the worst case) a constant C of additional space.
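
A sketch of the argument: put the would-be self-modifying program in ordinary data memory and run it with a fixed interpreter. The interpreter's own code is never rewritten, yet "self-modification" still happens; the only cost is the constant-size interpreter itself. The toy instruction set below is invented:

```python
# A fixed (read-only) interpreter whose *data* includes the program being run.
# "Self-modification" becomes an ordinary write into that data: no new power,
# only the constant cost C of the interpreter itself.

def interpret(program, steps=20):
    program = list(program)  # the program lives in writable data memory
    pc, acc = 0, 0
    for _ in range(steps):
        op, arg = program[pc]
        if op == "ADD":
            acc += arg
        elif op == "STORE":      # the program rewrites one of its own instructions
            program[arg] = ("ADD", acc)
        elif op == "HALT":
            return acc
        pc += 1
    return acc

# Instruction 2 starts as a no-op and is overwritten at run time.
print(interpret([("ADD", 5), ("STORE", 2), ("ADD", 0), ("HALT", 0)]))  # prints 10
```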

Cellular automata

In chapter 9 Kurzweil speaks about cellular automata, describing how class 4 automata behave, and here there seems to be a misunderstanding of Wolfram's theory and of what happens during a cellular automaton's evolution.

It is true that, given an arbitrary cell of a cellular automaton, it may be impossible to know the value of that cell after N steps without executing the entire evolution for N steps (page 238). But it is also possible that we can say exactly what the value is without executing all N steps. The point is that we can sometimes know whether it is possible to claim the value of a cell without executing N steps, which is different from saying that we can never know the value without executing the cellular automaton.
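
A concrete case where a cell can be predicted without running the automaton: in the additive rule 90 (admittedly class 3, not class 4), a single seed cell grows into Pascal's triangle mod 2, so the cell at offset k after n steps is alive exactly when C(n, (n+k)/2) is odd. The sketch below checks the closed form against the full simulation:

```python
from math import comb

def rule90_step(live):
    # Rule 90: a cell is alive iff exactly one of its two neighbors was alive (XOR).
    left  = {i + 1 for i in live}
    right = {i - 1 for i in live}
    return left ^ right  # symmetric difference = per-cell XOR

def simulate(n):
    live = {0}  # single seed cell at position 0
    for _ in range(n):
        live = rule90_step(live)
    return live

def predict(n, k):
    # Closed form: alive iff C(n, (n+k)/2) is odd (Pascal's triangle mod 2),
    # computed directly, without executing the n steps.
    return (n + k) % 2 == 0 and comb(n, (n + k) // 2) % 2 == 1

n = 16
assert simulate(n) == {k for k in range(-n, n + 1) if predict(n, k)}
print(sorted(simulate(n)))  # the prediction matches the full simulation
```

For a class 4 rule such as rule 110 no such shortcut is known in general, but that is a statement about which cells admit theorems, not a blanket impossibility.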

This is absolutely not in contrast with scientific laws, because the point Kurzweil misses is that we can also find theorems, with proofs, in a cellular automaton system, and these theorems can claim that a cell must have a specific value after N steps, or that a group of cells must have a special value after M steps. We could even find a group of theorems equivalent to Newton's laws. There is no contradiction. We cannot predict the value of some cells in the same way we cannot predict the position of a subatomic particle of a satellite orbiting the Earth, even though we can predict the position of the satellite. The point is that there are proofs asserting that there exist cells whose value we cannot predict without executing the entire evolution.
