The great inventions of informatics

[Image: the Analytical Engine]

Looking at the whole history of informatics from an objective point of view, there are three main inventions:

  • Computer
  • Internet
  • Bitcoin/blockchain

 

…when an invention is too close to us, it is difficult to recognize its importance.


The "it is not AI" rules


There are simple rules to quickly recognize that a program is not an AI:

 

 

  • Is it generic? Can it work for different problems? (AGI)
  • Can the solution found by the AI be a program with arbitrary space/time complexity, or is there a limit on the complexity of the solutions it can produce?
  • Is it computable? A true AI must, in general, be non-computable.
  • Is it an implementation of ILS (inverse Levin search)?
  • How does it deal with the NFL (No Free Lunch) theorems?
  • Is it "universal"?

Intelligent like a stone

One difficulty in accepting my definition of intelligence as equal to Kolmogorov complexity is its independence from time and interaction. It can be useful to think of examples where very difficult solutions do not imply interaction, time, execution, or an executor. For example, it is possible to have a very complex program that gives solutions to very hard problems; given that program, you only need a simple, stupid executor to get the result. It seems clear to me that the most intelligent (and interesting) part is in the program, not in its execution. One book that can help is "The End of Time" by Julian B. Barbour, the best book I have ever read about time. Reading it, it is possible to understand how time emerges from the laws of physics as the result of a special state of nature and is not a foundational law; causality is only a result of the shape of nature.

Deep Learning: the limits


Nowadays there is a new name going around in the AI field: "deep learning", the new revolutionary algorithm that solves everything… I am bored of reading deep, deeep, deeeep everywhere. Why deep? It is large, not deep! It is only a new name for old stuff running on more powerful hardware. Let me explain where the troubles are. There are two main problems, or limits, in what deep learning can achieve:

  1. The fitness of the objective
  2. The low computational power of the agent (is it only a tree? Is it universal?)

(1) The learning phase needs a function that scores how good the behavior of the agent is, and, as the NFL theorems say, it is not possible to solve the problem for every fitness; it is possible only for special fitness functions (if there is no gradually increasing fitness, I think the trainer is in trouble). (2) As every programmer knows, there are very different programs and very different problems, and it is difficult for one program to work well as a solution for different problems. An agent is a program, and if it is an NN with many layers it is something like a tree, only one step beyond a table. A tree-program is very limited: it is difficult to run a SAT-solver implemented by a tree (so don't try to teach a deep learning system how to solve SAT). Perhaps the "deep learning" uses an RNN; ok, here we have a graph, another step forward, but again, what complexity? You cannot solve problems of high complexity with low-complexity programs. If you want, you can do the opposite: for example, it is possible to implement a sorting solver using a SAT-solver, but that kind of mismatch between the complexity of the problem and the complexity of the solver is really a bad idea, and in older posts I explain why.
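A quick back-of-the-envelope sketch (my own illustration, not taken from any specific deep-learning system) of why a fixed, shallow structure cannot absorb a problem of growing complexity: any decision tree that sorts n items by yes/no comparisons must distinguish all n! orderings, so its depth must be at least log2(n!), which keeps growing with n.

```python
# A minimal sketch: lower bound on the depth of any comparison/decision tree
# that sorts n items. The tree must separate all n! possible orderings, so
# its depth is at least ceil(log2(n!)); no fixed-depth architecture suffices.
import math

for n in (4, 8, 16, 64, 128):
    min_depth = math.ceil(math.log2(math.factorial(n)))
    print(f"n = {n:>3}  ->  minimum tree depth {min_depth}")
```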

Kurzweil's latest book: How to Create a Mind

I read the latest book by Ray Kurzweil. It is a good book, explaining in a simple way Kurzweil's ideas, work, and progress in the effort to build a human mind, to build a Strong Artificial Intelligence.

I agree with the major part of what Kurzweil says, especially the main idea of the mind as pattern recognition, the PRTM (Pattern Recognition Theory of Mind), because a pattern recognizer, a classifier, is a problem solver at the "top level": if you have an engine that solves the general classification problem, you can solve every problem.

Kurzweil implements the classifier using an HHMM (hierarchical hidden Markov model).

That is a graph of states linked by transition probabilities. There is a big problem in this model: how do you define the topology of the graph? How many states do you need? How many levels do you need in the hierarchical structure? I think this model is not flexible enough to let every algorithm emerge.
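To make the topology problem concrete, here is a minimal sketch of a flat HMM in Python (my own toy example, not Kurzweil's system): the number of states, the allowed transitions, and the emission alphabet are all fixed by hand before any learning happens, and a hierarchical HMM only multiplies these hand-made choices across levels.

```python
# A toy HMM sampler: every structural choice below (number of states, fully
# connected transitions, emission alphabet) is decided up front by the designer.
import random

N_STATES = 3                                                        # chosen by hand
transition = [[1 / N_STATES] * N_STATES for _ in range(N_STATES)]  # fully connected
emission = [{"a": 0.9, "b": 0.1},                                   # per-state symbol probabilities
            {"a": 0.5, "b": 0.5},
            {"a": 0.1, "b": 0.9}]

def sample(length, seed=0):
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(length):
        symbols = list(emission[state])
        weights = list(emission[state].values())
        out.append(rng.choices(symbols, weights)[0])
        state = rng.choices(range(N_STATES), transition[state])[0]
    return "".join(out)

print(sample(20))
```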

Kurzweil is also aware of these limitations, and adds a genetic/evolutionary algorithm to the system to solve this flexibility problem.

Before continuing, it is better to clarify a concept: why do we need flexibility? How much flexibility do we need?

Why not use a simple table to solve every problem? We can build a table where every possible input has a corresponding output, so we have the correct answer for every instance of the problem. It is very simple.

The problem is not only a matter of space; the problem is that we do not know the answer for every instance of the problem. The learning process for a table requires training for every input/output pair of the system; in some sense we have to spend too much in the learning (programming) phase to construct a solver implemented as a table. And a tree? A tree structure requires less training data to learn, but again it cannot be made small enough. As an exercise, you can try to implement the sorting problem using a table, a tree, and a graph with training data.
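Here is a minimal sketch of the table case (my own toy example): a lookup-table "solver" for sorting, trained purely from input/output examples, gives the right answer for every instance it has seen and nothing else; there is no generalization at all.

```python
# A lookup-table "solver" for sorting, built only from training examples.
import itertools

def train_table(examples):
    """Map each seen input to its memorized answer."""
    return {tuple(x): tuple(y) for x, y in examples}

# Training data: every permutation of (1, 2, 3).
examples = [(p, sorted(p)) for p in itertools.permutations([1, 2, 3])]
table = train_table(examples)

print(table[(3, 1, 2)])         # (1, 2, 3): seen in training, answered correctly
print(table.get((4, 1, 3, 2)))  # None: an unseen instance, the table cannot generalize
```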

On the opposite side of flexibility there is ILS (inverse Levin search), which gives the best solution, the best program, for the given training data. So why not use an ILS? The problem with this solver is that it is too flexible: it makes no assumptions, so the search space becomes too wide. In general, if you know some constraints, some assumptions, some characteristics of the problem, restricting the search space can be very useful (using a tree instead of a graph in a learning system can be useful if we know there is no possibility of connections from a bottom node to the top, etc.).
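In the same spirit, a toy sketch of the fully flexible extreme (my own illustration, not a real inverse Levin search): enumerate candidate programs over a tiny instruction set in order of length and return the shortest one consistent with the training data. It finds good short programs, but the search space explodes exponentially with program length.

```python
# Brute-force program search over a tiny instruction set, shortest programs first.
from itertools import product

PRIMITIVES = {
    "rev":  lambda xs: xs[::-1],
    "sort": lambda xs: sorted(xs),
    "tail": lambda xs: xs[1:],
    "dup0": lambda xs: xs[:1] + xs,
}

def run(program, xs):
    for op in program:
        xs = PRIMITIVES[op](xs)
    return xs

def search(examples, max_len=3):
    for length in range(1, max_len + 1):          # shorter (simpler) programs first
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, list(x)) == y for x, y in examples):
                return program
    return None

# Training data: "drop the first element, then sort the rest".
examples = [([3, 1, 2], [1, 2]), ([5, 4, 9, 0], [0, 4, 9])]
print(search(examples))   # ('tail', 'sort')
```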

So how flexible/generic must a solver be? I prefer to look at the problem to understand whether there are constraints, some inference we can make, to restrict the genericity of the solver. A generic classifier has no restrictions, but a mind is surely not based on a generic classifier; it is only a restricted classifier.

Kurzweil seems to prefer to look at the solver (the brain) to derive these restrictions, implementing a system similar to the structure of the brain (like many other projects are doing), but without using a neural network (!?) because it is not flexible enough…

And worse, he inserts a GA because the HHMM is again not flexible enough. I think it would be better to define the level of flexibility required first, and then implement a solver with that level of flexibility.

In the book Kurzweil describes his speech recognizer, which seems to work very well but is based on very strong restrictions, and it can be very dangerous to make such restrictions without strong evidence. The system uses a classifier of points in 16 dimensions, where each cluster is defined as a circle that includes all the points of the class. There are also a lot of restrictions on the size of the data etc., but I do not see big problems with those. The problem is the assumption that the 16 dimensions are related like spatial dimensions! Why? And why use a circle to identify a cluster? Why not a 16-dimensional plane, or a curve in N dimensions? An answer to these questions may come from acoustic physics, and probably there is an answer, given the effectively good performance of the system, but my doubt is not there; my doubt is how such a system would work on a completely different set of problems. I am sure the pattern recognizers of the brain are very similar to each other, but I think the restrictions made in the speech recognizer are too strong. I think it is very unlikely that a system with those restrictions can work in a totally different domain.

Another question that comes to me: if a system with those restrictions is enough for good performance, why not use an SVM (Support Vector Machine)? It seems perfect for that problem!
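As a minimal sketch of what I mean (synthetic data invented here, nothing to do with Kurzweil's actual recognizer), an off-the-shelf SVM handles points in 16 dimensions without assuming that each class is a sphere around a center:

```python
# An RBF-kernel SVM on synthetic 16-dimensional data: two Gaussian blobs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
center_a, center_b = np.zeros(16), np.full(16, 2.0)      # two class centers in 16-D
X = np.vstack([rng.normal(center_a, 1.0, (200, 16)),
               rng.normal(center_b, 1.0, (200, 16))])
y = np.array([0] * 200 + [1] * 200)

clf = SVC(kernel="rbf").fit(X, y)                         # default parameters
print("training accuracy:", clf.score(X, y))
```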

Now about the GA. Why do GAs work in nature? Or better, why do evolutionary algorithms work in nature? As the NFL theorems (1, 2) show, there are some problems. A good answer can be found in Investigations by Stuart Kauffman, where it is explained simply that in nature the fitness evolves together with the object of the evolution; in other words, the problem changes. This is not the case if we fix an arbitrary problem (but we cannot do this for large problems). I am not saying that a GA cannot be a good solver, but that it can be a bad solver (genetic algorithms open deep questions… for example: is every existing problem natural? Trained to be solved by an evolutionary algorithm? Do the laws of nature constrain an evolutionary environment?).

Before using an evolutionary algorithm I always ask myself why the algorithm should be a good solution: on what criteria can I claim that the evolutionary algorithm will work? I look at the fitness of the problem, and if it is possible to represent the fitness such that "good solutions" are close enough together, it can be a good choice.
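A minimal sketch of this point (my own toy experiment): the same (1+1) evolutionary algorithm on two fitness functions over 20-bit strings. On OneMax the fitness increases gradually, good solutions are close together, and the algorithm climbs to the optimum; on a needle-in-a-haystack fitness there is no gradient and it does no better than random search.

```python
# A (1+1) evolutionary algorithm: keep a single individual, mutate it, accept
# the child if it is not worse. Compare a gradual fitness with a flat one.
import random

def evolve(fitness, n_bits=20, steps=2000, seed=1):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        y = [b ^ (rng.random() < 1.0 / n_bits) for b in x]   # flip each bit with prob 1/n
        if fitness(y) >= fitness(x):
            x = y
    return fitness(x)

def onemax(x):                 # gradual fitness: count of 1-bits
    return sum(x)

def needle(x):                 # flat fitness: 1 only on the single all-ones string
    return int(all(x))

print("OneMax :", evolve(onemax))   # typically reaches 20 (the optimum)
print("Needle :", evolve(needle))   # almost always stays at 0
```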

[Figure: fitness evolution in a genetic algorithm]

I cannot find an answer to these questions in the book. I do not know if there are answers; probably the objective of the book is to be "non-technical" enough, which rules out these explanations.

Ok, I cannot close the post without reporting these two issues.

Self-reference

Very often Kurzweil speaks about "self-…" as a powerful feature. How powerful can self-reference be? On page 188 he explicitly says "…as well as for self-modifying code (if the program store is writable), which enables a powerful form of recursion." Self-modifying code… and if the program store is not writable, what can we not do? What is impossible to do if we have a universal language and write the program in read-only memory? Nothing!

To explain better, think about how self-modifying code changes itself: to change itself, it must follow a program (otherwise we have an oracle), but we can make self-modifying code that also modifies that program (the rules for how it modifies the code), and again we can build a self-self-self-modifier. Where does this chain of self-references end? The end is universality! You can never do better than a universal program. The only thing you can save with self-modifying code is a constant C (the space of the code).

For every self-…-self-rewriting program there exists a non-rewriting program that uses (in the worst case) only a constant C more space.
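A minimal sketch of this argument (my own construction, not from the book): a fixed interpreter whose own code never changes, while the program it runs lives in ordinary writable data. The stored program rewrites one of its own instructions before executing it, so nothing expressible with "self-modifying code" is lost; the only extra cost is the constant C paid once for the interpreter.

```python
# A read-only interpreter running a self-rewriting program stored as data.
def run(memory):
    pc = 0
    while isinstance(memory[pc], tuple):
        op, a, b = memory[pc]
        if op == "set":              # memory[a] = b  (can overwrite instructions too)
            memory[a] = b
        elif op == "print":
            print(memory[a])
        elif op == "halt":
            return
        pc += 1

memory = [
    ("set", 1, ("print", 3, None)),  # rewrite the *next* instruction before it runs
    ("halt", None, None),            # replaced at run time by the print above
    ("halt", None, None),
    "hello from rewritten code",
]
run(memory)                          # prints: hello from rewritten code
```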

The cellular automata

In chapter 9 Kurzweil speaks about cellular automata, describing how class 4 automata behave, and here there seems to be a misunderstanding of Wolfram's theory and of what happens during the evolution of a cellular automaton.

It is true that, given an arbitrary cell of a cellular automaton, it is possible that we cannot know the value of that cell after N steps without executing the entire evolution for N steps (page 238), but it is also possible that we can say exactly what the value is in fewer than N steps, without executing the N-step evolution. The point is that we can sometimes know whether it is possible to claim the value of a cell without executing N steps, which is different from saying that we can never know the value without executing the cellular automaton.

This is absolutely not in contrast with scientific laws, because the point Kurzweil misses is that we can also find theorems, with proofs, in a cellular-automaton system, and these theorems can claim that a cell must have a specific value after N steps, or that a group of cells must have a special value after M steps. We can also find a group of theorems equivalent to Newton's laws. There is no contradiction. We cannot predict the value of a single cell in the same way that we cannot predict the position of a subatomic particle of a satellite orbiting the Earth, even if we can predict the position of the satellite. The point is that there are proofs asserting that there exist cells whose value we cannot predict without executing the entire evolution.
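A concrete sketch of this point (my own example, using Rule 90 rather than a class-4 rule): starting from a single 1-cell, there is a known theorem that the cell at position x after t steps equals a binomial coefficient mod 2, so that cell can be predicted directly, without simulating t steps; for class-4 rules such as Rule 110 no such general shortcut is known.

```python
# Rule 90 from a single 1 at position 0: the theorem says cell (t, x) equals
# C(t, (t+x)/2) mod 2, which by Lucas' theorem is 1 iff the bits of (t+x)/2
# are a subset of the bits of t. So some cells are predictable with no simulation.
def predicted(t, x):
    if abs(x) > t or (t + x) % 2 != 0:
        return 0
    k = (t + x) // 2
    return 1 if (t & k) == k else 0

def simulate_rule90(t):
    """Evolve Rule 90 for t steps from a single 1 (sparse representation)."""
    row = {0: 1}
    for _ in range(t):
        cells = {x + d for x in row for d in (-1, 1)}
        row = {x: row.get(x - 1, 0) ^ row.get(x + 1, 0) for x in cells}
    return row

t = 64
sim = simulate_rule90(t)
assert all(sim.get(x, 0) == predicted(t, x) for x in range(-t, t + 1))
print(predicted(10**9, 12))   # a cell one billion steps ahead, with no simulation
```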

The Measure of Intelligence

The measure of the intelligence of an object X is exactly the Kolmogorov complexity of X.

This comes from the simple observation that difficult problems have high Kolmogorov complexity (KC), and for every high KC there are difficult problems.

The interesting thing is to compare this with a utility function over solving problems. There are a lot of definitions of intelligence based on a measure of utility. The idea is to ask: how useful is it to solve a problem X? The answer comes from the Universal Distribution:

$$m(x) \;=\; \sum_{p \,:\, U(p) = x} 2^{-\ell(p)} \;\approx\; 2^{-K(x)}$$

(where $U$ is a universal machine and $\ell(p)$ is the length of program $p$)

From here it is simple to understand that simple problems have high probability, and so a solution for these problems is more important. The ability to solve a simple problem is more useful than the ability to solve a difficult one!
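As a rough worked comparison (the bit counts below are made-up numbers, only for illustration): under the universal distribution, a problem whose shortest description is about 20 bits occurs overwhelmingly more often than one whose shortest description is about 1000 bits,

$$\frac{m(x_{\text{simple}})}{m(x_{\text{hard}})} \;\approx\; \frac{2^{-20}}{2^{-1000}} \;=\; 2^{980},$$

so the expected utility of being able to solve the simple problem dominates, even though solving the hard one requires more intelligence.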

This is a utility measure, but we cannot accept that solving a simple problem is more intelligent than solving a difficult one, so we must separate the two definitions: Utility and Intelligence.

Anyway, there is this incredible fact that the Universal Distribution tells us: intelligence is not so useful!

Despite this deduction, my attraction and my research are reserved for intelligence.