One difficulty in accepting my definition of intelligence as Kolmogorov complexity is its independence from time and interaction. It helps to think of examples where very difficult solutions do not imply interaction, time, execution, or an executor. For example, it is possible to have a very complex program that gives solutions to very hard problems; given that program, you need only a simple, stupid executor to get the result. It seems clear to me that the most intelligent (and interesting) part lies in the program, not in its execution. One book that can help is "The End of Time" by Julian B. Barbour, the best book I have ever read about time. Reading it, one can understand how time emerges from the laws of physics as the result of a special state of nature and is not a foundational law; causality is only a result of the shape of nature.
The measure of intelligence of an object X is exactly the Kolmogorov complexity of X.
This comes from the simple observation that difficult problems have high Kolmogorov complexity, and that every high Kolmogorov complexity corresponds to difficult problems.
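Kolmogorov complexity itself is uncomputable, but a compressor gives a computable upper bound, which is enough to make the idea concrete. A minimal sketch in Python (the names and examples are mine, not from the text):

```python
import hashlib
import zlib

def kc_upper_bound(data: bytes) -> int:
    """Compressed size is a computable upper bound (up to an additive
    constant) on the Kolmogorov complexity of `data`; the true value
    is uncomputable."""
    return len(zlib.compress(data, 9))

# A highly regular object: short description, low complexity.
simple = b"ab" * 500

# A deterministic but random-looking object: a SHA-256 hash chain.
chunk, chunks = b"seed", []
for _ in range(128):
    chunk = hashlib.sha256(chunk).digest()
    chunks.append(chunk)
random_looking = b"".join(chunks)          # 4096 bytes

print(kc_upper_bound(simple))              # tiny
print(kc_upper_bound(random_looking))      # close to 4096 bytes
```

Note the irony of the second example: the hash chain has a short generating program, so its true Kolmogorov complexity is low even though no generic compressor can see that — compression only ever gives an upper bound.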
The interesting thing is to compare this with a utility function on solving problems. There are many definitions of intelligence based on a measure of utility. The idea is to ask: how useful is the ability to solve a problem X? The answer comes from the Universal Distribution.
From here it is simple to understand that simple problems have high probability, and so a solution for these problems is more important. The ability to solve a simple problem is more useful than the ability to solve a difficult one!
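The Universal Distribution the argument appeals to can be written, in the standard Solomonoff–Levin formulation (my notation, not the author's):

```latex
m(x) \;=\; \sum_{p \,:\, U(p)=x} 2^{-|p|} \;\approx\; 2^{-K(x)}
```

where U is a universal machine, |p| is the length of program p, and K(x) is the Kolmogorov complexity of x. An object x with low K(x) therefore gets high probability, which is exactly the claim above: simple things dominate.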
This is a utility measure, but we cannot accept that solving a simple problem is more intelligent than solving a difficult one, so we must keep the two definitions, Utility and Intelligence, separate.
Anyway, there is this incredible fact that the Universal Distribution tells us: intelligence is not so useful!
Despite this deduction, my attraction and my research are reserved for intelligence.
I have almost finished the book "Asymmetry: The Foundation of Information" by Scott J. Muller.
I like its treatment of the concepts of entropy, symmetry, and algorithmic information, but the author did not understand what information is.
I report a piece of evidence from Table 4.1 on page 96:
Symmetry   Entropy   Information
High       Low       Low
Low        High      High
Here there is a big mistake:

Low entropy = low information
High symmetry = low information
Low Kolmogorov complexity = low information
High entropy != high information
Low symmetry != high information
High Kolmogorov complexity = high information
This is the correct table:
Symmetry   Entropy   Information
High       Low       Low
Low        High      ?
When we have low entropy, we know a way to shorten the description of the informational object. High entropy, on the other hand, does not hand us a way to shorten the description — but that does not mean no such way exists.
Moreover, there are examples of high-entropy objects with low Kolmogorov complexity, where little information is needed to describe the object. An example is again the Rule 30 cellular automaton, which produces high-entropy data; yet to describe those data you only need to know the rule number, plus a log(N) term if you want a complete measure.
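The Rule 30 point can be checked directly. A sketch (my code, not the author's): generate the centre column of Rule 30, which looks statistically random and which zlib barely compresses, even though the whole sequence is determined by "rule 30, single starting cell, N steps":

```python
import zlib

def rule30_center_bits(n: int) -> bytes:
    """First n bits of Rule 30's centre column, packed into bytes.

    Rule 30 update: new_cell = left XOR (center OR right).
    The row is encoded as a Python int, one bit per cell."""
    c = n + 2                       # centre bit index, with room to spread
    row = 1 << c                    # single black cell in the middle
    bits = []
    for _ in range(n):
        bits.append((row >> c) & 1)
        row = (row >> 1) ^ (row | (row << 1))
    packed = bytearray()
    for i in range(0, n, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        packed.append(byte)
    return bytes(packed)

data = rule30_center_bits(8192)     # 1024 bytes of random-looking bits
print(len(data), len(zlib.compress(data, 9)))
# zlib gains almost nothing here, yet the generating description is just
# the rule number plus N: high entropy, low Kolmogorov complexity.
```

This is exactly the gap between entropy and Kolmogorov complexity: a statistical compressor sees noise where a short program hides.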
This mistaken concept recurs throughout the entire book.
The starting point for defining what I think about the will is my philosophy that "everything is a program" — or better, that the relevance of everything is defined by its behaviour as a program.
Starting from here, every human and everything else can be defined by a program.
Now, by my program classification, it is possible to divide programs into two main classes.
The first class is the class of programs with finite states. Inside this class there is no universality, so these are programs of limited power: their behaviour is predictable and finite. In this class, splitting a program into two parts, code and parameter, the most significant part is the code; what defines the behaviour is the program.
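A sketch of this first class (my example, not from the text): a finite-state acceptor, where the transition table — the code — fixes every possible behaviour in advance, whatever the input:

```python
def run_fsm(transitions: dict, start: str, accepting: set, inp: str) -> bool:
    """Finite-state program: every possible behaviour is enumerated in the
    finite transition table, so the 'code' dominates over the input."""
    state = start
    for ch in inp:
        state = transitions.get((state, ch))
        if state is None:          # undefined transition: reject
            return False
    return state in accepting

# Hypothetical example: accept binary strings with an even number of 1s.
t = {("even", "0"): "even", ("even", "1"): "odd",
     ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_fsm(t, "even", {"even"}, "11"))    # True
print(run_fsm(t, "even", {"even"}, "1101"))  # False
```

No input can make this machine do anything the table does not already describe — which is the sense in which class-1 behaviour is "predictable and finite".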
The second class contains possibly non-halting programs and universality; this property opens the programs to every possible behaviour (through universality). In this case, splitting the program into the two parts, code and parameter, the parameter becomes the more interesting part: what the program does lives in the parameter, and the program becomes something like a translator.
Because of this characteristic, it is clear that increasing the expressive power of a program diminishes its role in defining the behaviour.
Coming back to free will (it is clear that I think that, knowing a program and its state, and given no resource limitations, it is absolutely possible to know the next state and what the program will do): moving from class 1 towards class 2 you obtain more free will, because the behaviour becomes more complex and there are fewer limitations imposed by the program; but at the same time you lose identity, because programs in the second class can do everything, and nothing is constrained by the definition of the program.
To understand this better, think of universality as a universal language: you can change the interpreter, but if your interpreter is universal, there is a way to do everything in that language.
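A sketch of this code/parameter inversion (a toy illustration only — a truly universal interpreter would also need conditionals and loops): the interpreter below is fixed and generic, and all the interesting behaviour lives in the parameter string it is given:

```python
def run(parameter: str) -> list:
    """A fixed, generic interpreter for a toy stack language.
    The 'code' is just a translator; the behaviour is in `parameter`.
    Instructions: digits push, '+' adds, '*' multiplies, 'd' duplicates."""
    stack = []
    for op in parameter:
        if op.isdigit():
            stack.append(int(op))
        elif op == "+":
            stack.append(stack.pop() + stack.pop())
        elif op == "*":
            stack.append(stack.pop() * stack.pop())
        elif op == "d":
            stack.append(stack[-1])
    return stack

print(run("34+"))   # [7]
print(run("5d*"))   # [25]
```

The same `run` function "does" addition or squaring depending only on its parameter — the point of the second class: the identity of the program tells you almost nothing about what it will do.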
In conclusion: by increasing the power and the free will of a program, you erode its identity.