# Kurzweil's latest book: How to Create a Mind

I read Ray Kurzweil's latest book. It is a good book that explains in a simple way the ideas, the work, and the progress of Kurzweil in his effort to build a human-like strong artificial intelligence.

I agree with most of what Kurzweil says, especially with the main idea of the mind as pattern recognition, the PRTM (Pattern Recognition Theory of Mind), because a pattern recognizer, a classifier, is a problem solver at the "top level": if you have an engine that solves the general classification problem, you can solve every problem.

Kurzweil implements the classifier using an HHMM (hierarchical hidden Markov model).

That is a graph of states linked by transitions with different probabilities. There is a big problem in this model: how do you define the topology of the graph? How many states do you need? How many levels do you need in the hierarchical structure? I think this model is not flexible enough to let every algorithm emerge.
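To make the state-graph idea concrete, here is a minimal sketch of the forward algorithm on a flat (non-hierarchical) HMM, the building block that HHMMs stack into levels. The two states, the probabilities, and the observation alphabet are invented for illustration; nothing here is Kurzweil's actual model.

```python
# Minimal forward algorithm on a flat (non-hierarchical) HMM.
# All numbers are illustrative assumptions, not Kurzweil's parameters.
states = ["A", "B"]
start = {"A": 0.6, "B": 0.4}                 # initial state probabilities
trans = {"A": {"A": 0.7, "B": 0.3},          # transition probabilities
         "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.9, "y": 0.1},           # emission probabilities
        "B": {"x": 0.2, "y": 0.8}}

def forward(observations):
    """Return P(observations) summed over all hidden state paths."""
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return sum(alpha.values())

print(forward(["x", "x", "y"]))
```

Even in this tiny example the topology (two states, full connectivity) is fixed by hand, which is exactly the design problem raised above.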

Kurzweil is aware of these limitations and also adds a genetic/evolutionary algorithm to the system to solve this flexibility problem.

Before continuing, it is better to clarify a concept. Why do we need flexibility? How much flexibility do we need?

Why not use a simple table to solve every problem? We can build a table for a problem where every possible input has a corresponding output, so we have the correct answer for every instance of the problem. It is very simple.

The problem is not only a matter of space; the problem is that we don't know the answer for every instance of the problem. The learning process of a table requires training for every input/output pair of the system; in some sense we have to spend too much in the learning (programming) phase to construct a solver implemented by a table. And a tree? A tree structure requires less training data to learn, but again it may not be small enough. As an exercise, you can try to implement the sorting problem using a table, a tree, and a graph with training data.
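A minimal sketch of the point, using the sorting exercise above (Python, with a toy setup of my own): a table-based "solver" only knows the instances it was trained on, so the number of training pairs explodes with the input size, while a real program generalizes for free.

```python
from itertools import permutations

# A "table solver" for sorting: it memorizes every trained input/output pair.
table = {}
for perm in permutations((1, 2, 3)):       # train on ALL 3! = 6 instances
    table[perm] = tuple(sorted(perm))

print(table[(3, 1, 2)])                    # known instance -> correct answer
print(table.get((4, 1, 3, 2)))             # unseen instance -> no answer (None)

# The table needs n! entries just for the length-n permutations,
# while `sorted` is one small program that covers every n.
```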

On the opposite side of flexibility there is ILS (inverse Levin search), which gives the best solution, the best program, for the given training data. So why not use an ILS? The problem with this solver is that it is too flexible: it makes no assumptions, so the search space becomes too wide. In general, if you know some constraints, some assumptions, some characteristics of the problem, restricting the search space can be very useful (using a tree instead of a graph in a learning system can be useful if we know there is no possibility of connections from a bottom node to the top, etc.).

So how flexible/generic must a solver be? I prefer to look at the problem to understand if there are some constraints, some inferences we can make, to restrict the genericity of the solver. A generic classifier has no restrictions, but surely a mind is not based on a generic classifier; it is only a restricted classifier.

Kurzweil seems to prefer to look at the solver (the brain) to implement these restrictions, building a system similar to the structure of the brain (like many other projects are doing) but without using a neural network (!?) because it is not flexible enough…

And worse, he inserts a GA because the HHMM is again not flexible enough. I think it would be better to first define the level of flexibility required and then implement a solver with exactly that level of flexibility.

In the book Kurzweil describes his speech recognizer, which seems to work very well but is based on very strong restrictions, and it can be very dangerous to make such restrictions without strong evidence. The system uses a classifier of points in 16 dimensions, where each cluster is defined as a circle that includes all the points of the class. There are also a lot of restrictions on the size of the data, etc., but I don't see big problems with those. The problem is the assumption that the 16 dimensions relate to each other like spatial dimensions! Why? And why use a circle to identify a cluster? Why not a 16-dimensional plane, or a curve in N dimensions? An answer to these questions may come from acoustic physics, and probably there is an answer, given the effectively good performance of the system, but my doubt is not there: my doubt is how such a system would work on a completely different set of problems. I am sure the pattern recognizers of the brain are very similar, but I think the restrictions made in the speech recognizer are too strong. I think it is very unlikely that a system with those restrictions can work in a totally different domain.
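To illustrate what a "circle" (hypersphere) cluster restriction means, here is a minimal sketch assuming made-up data in 16 dimensions; nothing here is Kurzweil's actual recognizer. A point belongs to a class when it falls inside the sphere that encloses that class's training points.

```python
import math, random

DIM = 16
random.seed(0)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def make_cluster(center, n=20, spread=0.5):
    """Sample n points around a center; the cluster is (center, radius)."""
    pts = [[c + random.uniform(-spread, spread) for c in center] for _ in range(n)]
    radius = max(dist(p, center) for p in pts)   # sphere enclosing all points
    return center, radius

clusters = {"class_a": make_cluster([0.0] * DIM),
            "class_b": make_cluster([3.0] * DIM)}

def classify(point):
    """A point belongs to every class whose sphere contains it."""
    return [name for name, (c, r) in clusters.items() if dist(point, c) <= r]

print(classify([0.1] * DIM))   # a point near class_a's center
```

The restriction is visible immediately: a sphere cannot represent an elongated or curved class boundary, which is exactly the doubt raised above.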

Another question that comes to me: if a system with those restrictions is enough for good performance, why not use an SVM (Support Vector Machine)? It seems perfect for that problem!

Now about the GA. Why do GAs work in nature? Or better, why do evolutionary algorithms work in nature? As shown by the NFL theorems (1, 2), there are some problems. A good answer can be found in Investigations by Stuart Kauffman, where it is explained simply that in nature the fitness evolves like the object of the evolution; in other words, the problem changes. This is not the case if we fix an arbitrary problem (but we cannot do this for large problems). I am not saying that a GA cannot be a good solver, but that it can be a bad one (genetic algorithms open deep questions… for example: is every existing problem natural? Trained to be solved by an evolutionary algorithm? Do the laws of nature constrain an evolutionary environment?).

Before using an evolutionary algorithm I always ask myself why the algorithm should be a good solution: on which criteria can I claim that the evolutionary algorithm will work? I look at the fitness of the problem, and if it is possible to represent the fitness such that "good solutions" are close enough together, it can be a good choice.
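A minimal sketch of this criterion, with two toy fitness functions of my own choosing: a simple (1+1) evolutionary algorithm finds the optimum quickly when good solutions are close together (OneMax), while a needle-in-a-haystack fitness gives it no gradient at all.

```python
import random
random.seed(1)

N = 32  # bit-string length

def onemax(bits):
    """Smooth fitness: good solutions cluster around the all-ones string."""
    return sum(bits)

def needle(bits):
    """Flat fitness with a single spike: no structure to exploit."""
    return 1 if all(bits) else 0

def hill_climb(fitness, steps=5000):
    """A (1+1) evolutionary algorithm: flip one bit, keep if not worse."""
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(steps):
        y = x[:]
        y[random.randrange(N)] ^= 1
        if fitness(y) >= fitness(x):
            x = y
    return fitness(x)

print(hill_climb(onemax))   # reaches the optimum N
print(hill_climb(needle))   # almost surely stuck at fitness 0
```

Both runs use the same search operator; only the shape of the fitness landscape decides whether evolution helps, which is the point of the NFL discussion above.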

I cannot find an answer to these questions in the book. I don't know if there are answers; probably the objective of the book is to be "non-technical" enough to avoid the possibility of these explanations.

Ok, I cannot close the post without reporting these two issues.

## Self-reference

Very often Kurzweil speaks about "self-…" as a powerful feature. How powerful can self-reference be? On page 188 he explicitly says "…as well as for self-modifying code (if the program store is writable), which enables a powerful form of recursion." Self-modifying code… and if the program store is not writable, what can we not do? What is impossible to do if we have a universal language to write a program in read-only memory? Nothing!

To explain better, we can try to think about how self-modifying code changes itself. To change itself it must follow a program (otherwise we have an oracle), but we can make a self-modifying code that also self-modifies that program (the one describing how it modifies the code), and again we can build a self-self-self-modifier… where is the end of these self-references? The end is universality! You can never do something better than a universal program. The only thing you can save with self-modifying code is a constant C (the space of the code).

For every self-…-self-rewriting program there exists a non-rewriting program that uses (in the worst case) a constant C of additional space.
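A minimal sketch of the argument, with a made-up toy instruction set of my own: a fixed (never self-modifying) interpreter holds the "modifiable" code as ordinary data, so anything a self-modifying program does, a read-only universal program does too, paying only the constant space of the interpreter itself.

```python
# A toy machine whose program can rewrite its own instructions.
# The interpreter below is FIXED (read-only); the self-modification
# happens entirely inside `code`, which it treats as data.
def run(code, steps=100):
    code = list(code)            # the "writable program store", held as data
    acc, pc = 0, 0
    for _ in range(steps):
        if pc >= len(code):
            break
        op, arg = code[pc]
        if op == "add":
            acc += arg
        elif op == "store_op":   # self-modification: overwrite an instruction
            code[arg] = ("add", acc)
        elif op == "halt":
            break
        pc += 1
    return acc

# This program rewrites instruction 3 before reaching it.
program = [("add", 5),
           ("store_op", 3),     # instruction 3 becomes ("add", 5)
           ("add", 1),
           ("halt", 0)]         # ...so this halt is replaced
print(run(program))
```

The interpreter never rewrites itself, yet it reproduces the self-modifying behaviour exactly; its own source is the constant C.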

## The cellular automata

In chapter 9 Kurzweil speaks about cellular automata, describing how class 4 behaves, and here there seems to be a misunderstanding of Wolfram's theory and of what happens in the evolution of a cellular automaton.

It is true that, given an arbitrary cell of a cellular automaton, there is the possibility that we cannot know the value of that cell after N steps without executing the entire evolution for N steps (page 238), but it is also possible that we can say exactly what the value is without executing N steps. The point is that we can sometimes prove whether it is possible to claim the value of a cell without executing N steps, which is different from saying that we never know the value without executing the cellular automaton.

This is absolutely not in contrast with scientific laws, because the point Kurzweil misses is that we can also find theorems, with proofs, about a cellular automaton system, and these theorems can claim that a cell must have a specific value after N steps, or that a group of cells must have a special value after M steps. We can also find a group of theorems equivalent to Newton's laws. There is no contradiction. We cannot predict the value of a cell in the same way we cannot predict the position of a subatomic particle of a satellite orbiting the Earth, even if we can predict the position of the satellite. The point is that there are proofs asserting that there are cells for which we cannot predict the value without executing the entire evolution.
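A minimal sketch of a provable-without-simulation statement, using rule 30 from a single black cell (my choice of example): information travels at most one cell per step, so every cell farther than N from the initial cell is provably 0 after N steps. That is a small "theorem" we can state without running the evolution, and the simulation confirms it.

```python
def rule30_step(cells):
    """One step of rule 30: new cell = left XOR (center OR right)."""
    padded = [0] + cells + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

N = 50
# A single black cell with more than N empty cells on each side.
row = [0] * (N + 5) + [1] + [0] * (N + 5)
center = row.index(1)
for _ in range(N):
    row = rule30_step(row)

# "Theorem", provable without simulating: the light cone of the initial 1
# grows one cell per step, so cells at distance > N must still be 0.
outside = [row[i] for i in range(len(row)) if abs(i - center) > N]
print(all(v == 0 for v in outside))
```

The cells inside the light cone are a different story: for some of them no shortcut is known, which is exactly the distinction made above.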

# The Measure of Intelligence

The measure of intelligence of an object X is exactly the Kolmogorov complexity of X.

This comes from the simple observation that difficult problems have high KC (Kolmogorov complexity), and for every high KC there are difficult problems.

The interesting thing is to compare this with a utility function on solving problems. There are a lot of definitions of intelligence based on a measure of utility. The idea is to ask: how useful can it be to solve a problem X? The answer comes from the universal distribution.

From here it is simple to understand that simple problems have high probability, so a solution for these problems is more important. The ability to solve a simple problem is more useful than the ability to solve a difficult one!
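A minimal sketch of the intuition, using zlib-compressed length as a crude stand-in for Kolmogorov complexity (a standard but very rough approximation, my choice): under the universal distribution, where a string x gets weight about 2^(-K(x)), the simpler string receives exponentially more probability.

```python
import random, zlib

def complexity(data: bytes) -> int:
    """Crude upper bound on Kolmogorov complexity: compressed length in bits."""
    return 8 * len(zlib.compress(data, 9))

simple = b"ab" * 500                       # a very regular 1000-byte string
random.seed(0)
messy = bytes(random.randrange(256) for _ in range(1000))  # incompressible-looking

for name, s in (("simple", simple), ("messy", messy)):
    k = complexity(s)
    print(name, k, 2.0 ** -min(k, 1000))   # weight ~ 2^-K under the universal distribution
```

The regular string gets a far smaller K and hence an overwhelmingly larger weight, which is why solving simple problems dominates the utility measure.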

This is a utility measure, but we cannot accept that solving a simple problem is more intelligent than solving a difficult one, so we must split the two definitions: Utility and Intelligence.

Anyway, there is this incredible fact that the universal distribution tells us: intelligence is not so useful!

# Proving Darwin

Gregory Chaitin is one of my favorite authors, perhaps the best one: I have read all his books and all his papers. He is the man who discovered Ω, who discovered algorithmic randomness, and since I do not believe in the existence of stochastic processes, I think the correct definition of randomness is algorithmic randomness.

Chaitin's objective is to prove that a random walk in software space can increase complexity. In trying to do this, what he reaches is a very interesting result.

I don't agree with Chaitin on the starting point, not on the conclusion.

To have evolving software with infinitely increasing complexity we need the existence of:

1. a random stochastic source
2. an oracle to solve the halting problem

Point 1 is a main problem; I think the universe is deterministic. It is very difficult to construct a stochastic universe that appears so deterministic. Why should we introduce a stochastic source where everything can be explained by deterministic systems? Without a stochastic source we cannot define an "evolving program": in that case the program changes following deterministic rules, so the program never really changes; what really changes is its state. A program never changes, by definition.

Without a stochastic source we cannot have a random walk in software space. What we can do is use an algorithmically random string S as a finite source of random data, but in this case the random walk has a limit on the complexity that the evolving software can reach.

If we have a stochastic random source, the injection of complexity from the random source into the evolving programs is not the most interesting part; we should move our attention to the stochastic random source with its infinite complexity. This strange object becomes much more interesting than the evolving programs.

Following the dictates of Occam's razor, isn't it simpler to believe in a deterministic universe that follows deterministic rules, so that, with a finite algorithmic complexity, the execution of such a program expresses evolutionary characteristics without strange things like oracles and stochastic sources?

Why does the universe exhibit evolutionary behaviour? Can this evolutionary scenario be only a point of view on the behaviour of a deterministic program?

What Chaitin develops is an interesting tool to use with approximations: approximating our poor knowledge of the deterministic behaviour of the evolutionary walk in software space so that we can treat it as a random walk, plus other approximations like the oracle, because I really don't think this is the reality.

When Chaitin moved to the field of biology I had a lot of doubts about the relevance of the results it was possible to reach, but I am amazingly surprised by these results, and I trust Chaitin's development of this field will give me a lot of surprises.

I hope I have given the reader enough curiosity to read the new Proving Darwin book.

# Exact string matching


I have made a lot of improvements since the first implementation of the exact string matching algorithm, especially for its parallelization with the CUDA implementation, but now it is time for some speed-ups again.

In the best case the program runs the search in O(M/N), so the idea is to maximize the frequency of this best case.

To make the search I build a boolean lookup table: for each word used as an index, I set true when the word is inside the string to search for; then I look the words up while scanning the first string.

So to maximize the speed of the algorithm I need to maximize the size of the word. Now I use two different word sizes, 3 bytes and 4 bytes: the first for very short strings and the second in all the other cases.

The limit for the usage of a 4-byte word used to be 10 bytes, because the first byte and the last are subject to a mask due to the 8-bit shifts needed to manage all the bit alignments inside the bytes, and I need two words to cover the word misalignment possibility.

This was not the best limit, because I can enlarge the step in the search for the words in the first string. The max step in bytes is:

$(\lfloor |S|/|W| \rfloor - 1)\cdot|W| + (|S| \bmod |W|) + 1$

So the limit now is 6 bytes: for each string starting from a size of 6 bytes I can use the 4-byte word map.
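For |W| = 4 the step formula works out to |S| − |W| + 1: any |W|-byte window fully inside an occurrence of S is one of the words in the table, and an occurrence contains exactly |S| − |W| + 1 such windows, so sampling the text at that stride cannot miss one. Here is a minimal single-threaded sketch of this word-filter idea (my own simplified reconstruction; the CUDA version and the bit-shift masking are left out):

```python
def find_all(text: bytes, pattern: bytes, w: int = 4):
    """Word-filter exact matching: sample one w-byte word per stride and
    only run a real comparison near sampled words that belong to the
    pattern. Stride = len(pattern) - w + 1."""
    m = len(pattern)
    assert m >= w
    words = {pattern[i:i + w] for i in range(m - w + 1)}   # boolean lookup table
    step = m - w + 1
    hits = []
    for i in range(0, len(text) - w + 1, step):
        if text[i:i + w] in words:
            # A sampled word matched: verify every alignment that could
            # place an occurrence covering position i.
            for p in range(max(0, i - m + w), min(i, len(text) - m) + 1):
                if text[p:p + m] == pattern:
                    hits.append(p)
    return sorted(set(hits))

print(find_all(b"abracadabra abracadabra", b"abracad"))
```

Longer patterns give a longer stride, which is why enlarging the step (and the word size) pushes the search toward the O(M/N) best case.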
Implementing this on CUDA there is a problem when the step in the search is less than the word size, i.e. less than 4 bytes, because there are conflicts in the memory access (more threads access the same memory for reading) and this slows down the computation.
To avoid this problem I use more than one parallel job, every job with a search step of one word and each job with a different starting point in the search, a phase that depends on $(|S| \bmod |W|) + 1$.

After the implementation I made some tests, and with 7-byte strings the program now runs 2x faster!

P.S. It is incredible: every time I put my hands in the code I get a 2x faster program, or better… is there an end to this process?

# 145 million rows


I am experimenting again with rule 30: I have reached 145 × 10^6 rows of evolution.

Evolving 2*10^16 cells!

So for everyone who wants to know what happens after 145 million rows, I saved some images in BMP format, 8192×2048 pixels each.

This is the link to the gallery: here

I am sorry there is no title for the images and they are not correctly sorted, but the name of each image is in the format "1_POS.bmp", where POS is the byte number from left to right.

# Experimenting with compression


Some time ago I made some tests on compression using my inference engine, and now I have collected the statistical data.

First of all, my objective is not to develop a compressor, because that needs an estimation of compression ratio vs. file types vs. compression time; it is not to develop a compressor for "random data", because random data is not compressible by definition; it is not to compress exclusively one specific file, because it is always possible to compress one specific file to 0; and it is not about perpetual compression.

My objective is to evaluate how good my inference engine is.

To do this, a good (and perhaps the best) empirical test is to compress pseudo-random files.

So I tried to compress the famous million random digit file from the Mark Nelson blog.

I split the file into blocks of 5 bytes each, and then ran the inference engine trying to compress each block into 36 bits (in other words, the sequence predictor must correctly predict 4 bits of a 40-bit sequence).
Sometimes the compressor gives me the compressed sequence and sometimes it gives no result; for simplicity I write "1" when the compressor finds the sequence and "0" when it is not able to find it.
If the compressor and the blocks are "statistically independent" (and I hope this is not the case), the frequency of compressed sequences must be
$2^{36} / 2^{40} = 2^{-4} = 1/16$.

I ran the compressor and my frequency is 1/12!

It seems a great result! The problem is how to encode the data without losing everything.
The idea is to write the compressed sequence where the compressor finds a 36-bit sequence, and the original 40 bits where the compressor is not able to find one. So the problem is how to distinguish compressed sequences from original sequences.

The million random digit file has a size of 415241 bytes, so I use 83048 blocks of 5 bytes, and I can code about 6920 blocks with 36 bits instead of 40 bits; so I can use only less than 6920*4 = 27680 bits to identify the positions of the 6920 compressed blocks, and since I want to compress at least 1 byte, my limit is 27672 bits. I cannot use more than 27672 bits.
With this value I cannot simply add one flag bit in front of each block, because that requires 83048 bits.
A first idea is to write a header where I put one bit per block ("1" meaning compressed block, "0" not compressed) and then compress this sequence of 83048 bits, but I would need a compression ratio of 27672/83048 ≈ 1/3.
By the entropy I have $-((1/12)\log_2(1/12) + (11/12)\log_2(11/12)) \approx 0.41$, so even maximizing the entropy I cannot reach enough compression.

A way to reach the entropy limit is to use arithmetic coding, or a binomial enumeration, for example with the combinadic program.

Using Mathematica I answered some questions, like: which frequency would let me encode the compression?

N[FindInstance[-((x)*Log[2,(x)]+(1-x)*Log[2,(1-x)])==(83048*x*4)/83048,x]]

N[FindInstance[-((x)*Log[2,(x)]+(1-x)*Log[2,(1-x)])==(x*4),x]]

0.156417354964272015502158337639=1/6.66667
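The same root can be found without Mathematica. Here is a minimal bisection sketch (the interval endpoints are my own choice) solving H(x) = 4x, i.e. finding the flag-bit frequency at which the header's entropy exactly eats the 4 bits saved per compressed block:

```python
import math

def H(x):
    """Binary entropy in bits."""
    return -(x * math.log2(x) + (1 - x) * math.log2(1 - x))

def bisect(f, lo, hi, iters=100):
    """Find a root of f by bisection; f(lo) and f(hi) must differ in sign."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Break-even: header entropy per block == bits saved per block (4 * x).
x = bisect(lambda t: H(t) - 4 * t, 0.05, 0.5)
print(x)
```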

How much can I compress with a frequency of 1/6 ?

p=1/6

N[(83048*p*4-Log[2,Binomial[83048,83048*p]])/8]=173 bytes

And with 1/7 ?

p=1/7

N[(83048*p*4-Log[2,Binomial[83048,83048*p]])/8]=-209 bytes
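These two numbers can be checked with a short Python sketch using the log-Gamma function for the (non-integer) binomial coefficient, the same generalization Mathematica applies:

```python
import math

N_BLOCKS = 83048
BITS_SAVED = 4          # each compressed block saves 40 - 36 = 4 bits

def log2_binomial(n, k):
    """log2 of the generalized binomial coefficient via log-Gamma."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1)) / math.log(2)

def bytes_gained(p):
    """Bits saved by compression minus bits needed to enumerate which
    blocks were compressed, converted to bytes."""
    k = N_BLOCKS * p
    return (N_BLOCKS * p * BITS_SAVED - log2_binomial(N_BLOCKS, k)) / 8

print(bytes_gained(1 / 6))   # positive: the coding fits
print(bytes_gained(1 / 7))   # negative: the header costs more than we save
```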

In conclusion, this is only one of the problems to deal with before claiming that it is possible to compress the million digit file; another is to be sure the frequency really is 1/12, because I tested only 1/100 of the blocks and that is really too small a sample.

However, it seems incredible to me that I need a frequency of 1/6 to be able to encode the compression! I need to correctly predict 4 bits of a 40-bit sequence with frequency 1/6, against a baseline probability of 1/16. It looks like a deficiency in the coding phase, but the entropy tells me the contrary: it is not possible to code out this dependence, and doing it more practically with the binomial enumeration, I can gain bits only with a frequency of 1/6!

This does not mean that with a frequency of 1/12 I cannot code a compression at all: if, for example, all the "1" bits were close together, it would be possible to compress the flag bits and obtain a total compression. But that would require a non-uniform distribution of compressible blocks, and my experiment is about the compression of pseudo-random bits, so I suppose this is not the case; and if the distribution did let me compress, it would not be interesting for me, and I would probably need another test file.

So my final question is: is it really true that it is not possible to efficiently encode this statistical dependence?