I am waiting to get my hands on my new Oculus Rift (I also have the first release, the DK1), and I have been thinking about how to predict whether a new device will be a success.
There is a parameter that is normally underestimated, or not taken into account at all, when evaluating new devices.
It is the size of the communication channel between the device and the user. This is one of the most important indices, perhaps the most important, for estimating how much of an advance a technology will be.
For the Oculus Rift, and for virtual reality devices in general, the increase in the communication channel is huge. The user gets an image for each eye, and each image can saturate the input channel of that eye: it is possible to build a device with an image definition so high that the eye cannot detect more detail, and that completely covers the field of view of each eye. A monitor can never reach the same saturation, even with ten or a hundred times the resolution, because the user is not in a fixed position and probably 50% or more of the information output by a monitor is lost.
That is only an analysis of the 2D part, but a virtual reality device outputs stereoscopic images carrying depth information, so it adds a 3D component.
The last component is movement: every head movement or rotation produces a different image, which means another great increase in the information the user receives.
So the increase in the information delivered to the user by a VR device is so big compared to a monitor that every other side effect is secondary.
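To make the channel-size argument concrete, here is a rough back-of-envelope estimate. The figures used below (about 60 pixels per degree of eye resolving power, roughly 110°×110° of field of view per eye, a desktop monitor subtending around 30°×17°) are common approximations I am assuming for illustration, not measurements, so treat the result as an order of magnitude only.

```python
# Back-of-envelope estimate of the "channel size" argument.
# Assumed figures (approximations, not measurements):
#   - eye resolving power: ~60 pixels per degree
#   - VR field of view per eye: ~110 x 110 degrees
#   - a monitor covers ~30 x 17 degrees of view at desk distance

PX_PER_DEG = 60

vr_pixels_per_eye = (110 * PX_PER_DEG) * (110 * PX_PER_DEG)
vr_total = 2 * vr_pixels_per_eye          # one image per eye

monitor_pixels_seen = (30 * PX_PER_DEG) * (17 * PX_PER_DEG)

print(f"eye-saturating VR display: {vr_total:,} pixels")
print(f"monitor at desk distance:  {monitor_pixels_seen:,} pixels")
print(f"ratio: ~{vr_total / monitor_pixels_seen:.0f}x")
```

Even before counting the 3D and head-movement components, the 2D channel alone is tens of times larger.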
This means that this device will not be just a device for nerds; in the future it will reach a very wide range of users.
Based on this analysis, my suggestion is not to put too much effort into solving the sickness side effects: that is only a problem for people not yet trained to use the device (something like giving a mouse to someone for the first time). What really matters is the size of the channel, so the direction of development should be to increase the definition. Increase the resolution!
There are simple rules to recognize quickly whether a program is not an AI:
- Is it generic? Can it work on different problems? (AGI)
- Can the solution found by the AI be a program with a different space/time complexity? Is there a limit on the complexity of its solutions?
- Is it computable? An AI must, in general, not be computable.
- Is it an implementation of ILS?
- How does it deal with the NFL theorems?
- Is it “universal”?
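These tests are abstract, but the first one (genericity) can be made concrete with a toy example. The two agents below are invented for illustration: a lookup table that memorizes answers for one fixed problem, against a procedure that, however trivial, works on any input of the same form.

```python
# Hypothetical toy agents illustrating the first test (genericity).
# The "lookup agent" memorizes answers to one fixed problem and cannot
# transfer; the "generic agent" computes, so it works on unseen inputs.

lookup_agent = {           # memorized question -> answer pairs
    (2, 3): 5,
    (4, 1): 5,
}

def generic_agent(problem):
    a, b = problem
    return a + b           # a procedure, not a memory

new_problem = (7, 8)       # never seen before
print(generic_agent(new_problem))       # 15: it generalizes
print(lookup_agent.get(new_problem))    # None: it fails the first test
```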
Is human immortality possible?
Before trying to answer this question, we need to define what life is.
If we think of ourselves as biological entities, this becomes a biological question; if we think of ourselves as physical matter, it becomes a physics question.
I think the human being is in the mind, and the mind is not a physical object.
Anyway, I think everything is ultimately a program, so I will try to give an answer by defining a person as a program.
I am not saying that we can define a human as the program implemented by the brain, because we know the brain changes; it changes its matter and (most important) its connections, so it changes its program. If we want a person to still be the same person after a few hours, once his brain has changed, we cannot bind the definition to the brain itself but to the program that changes the brain.
The main question is: what happens if we run a program for an unlimited amount of time? There are two possibilities:
1) it enters a loop (an infinite loop) where it reuses previous states (class 1 programs);
2) it always uses new states, and so eats up every information resource.
In the first case the program is dead, because it always does the same things: it remembers the same things, and it cannot remember anything across different loops. It is not like a movie where the protagonist remembers, every day, that the previous day was equal to the new one. Remembering that, recognizing that, is impossible, because it would need more memory, more states, and the program has no new states left, so it is really a death.
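The first case is just the pigeonhole principle: a deterministic program confined to finitely many states must eventually revisit one, and from then on it repeats forever. A minimal sketch, with an arbitrary toy transition function (not a model of a brain):

```python
# Case (1) as code: a deterministic program over a finite state space
# must eventually revisit a state (pigeonhole) and then loop forever.
# The transition function is an arbitrary toy example.

N = 1000                                  # size of the finite state space

def step(state):
    # any deterministic update over a finite domain would do
    return (state * 31 + 7) % N

# Floyd's tortoise-and-hare detects the loop without storing the history
slow = fast = 0
while True:
    slow = step(slow)
    fast = step(step(fast))
    if slow == fast:
        break                             # a state has been reused

# measure the period of the loop the program is now trapped in
period = 1
probe = step(slow)
while probe != slow:
    probe = step(probe)
    period += 1

print("loop detected, period:", period)
```

Whatever transition function you pick, with `N` states the loop must begin within at most `N` steps; no extra memory exists in which to notice the repetition from the inside.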
In the second case the program becomes U, because the difference between the program and U (where U is the computational implementation of the unification of the physical laws) becomes infinitely less significant.
In this scenario I assume the universe is discrete, and the conclusion is that the immortality of programs (in its extreme definition of an infinite life) is not possible.
I am what is called an “early adopter” of Bitcoin, and I have followed the evolution of this technology since the beginning. I feel there is an aspect that is not emphasized enough. I believe in strong AI (…a little bit…), and I also know that intelligence and a dominant position are two different things: a dominant position can exist even without intelligence. Normally, when people fear AI, they fear losing their dominant position. They do not understand that this can happen independently of any growth of intelligence. Anyway, no technology has ever had the capability to hold a dominant position against humans… until now. With Bitcoin, a program can be the only one to know the keys of a wallet, and this lets it choose whom to pay and for what. So I do not know if Bitcoin will increase its value to 1,000,000 $, or lose it down to 0 $, or if there will be a new cryptocurrency, but Bitcoin shows how to build a technology that can gain a dominant position against humans.
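A tiny sketch of that point. This is not real Bitcoin key handling (real wallets use secp256k1 signatures and proper address derivation); it only shows that a secret can be created and held entirely inside a running process, with no human ever seeing it.

```python
# Sketch: a program can be the sole holder of a secret.
# NOT real Bitcoin cryptography -- sha256 here is just a stand-in
# to derive a public identifier from a private secret.

import hashlib
import secrets

private_key = secrets.token_bytes(32)              # exists only in this process
pub_id = hashlib.sha256(private_key).hexdigest()   # a stand-in "address"

print("public identifier:", pub_id)
# The private key is never printed, logged, or shown to any human:
# only this running program can decide what to "sign" with it.
```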
One difficulty in accepting my definition of intelligence as equal to Kolmogorov complexity is often its independence from time and interaction. It can be useful to think of examples where very difficult solutions do not imply interaction, time, execution, or an executor. For example, it is possible to have a very complex program that gives solutions to very hard problems; given that program, you need only a simple, stupid executor to get the result. It seems clear to me that the most intelligent (and interesting) part is in the program, not in its execution. One book that can help is “The End of Time” by Julian B. Barbour, the best book I have ever read about time. Reading it, it is possible to understand how time emerges from the laws of physics as the result of a special state of nature, and is not a foundational law; causality is only a result of the shape of nature.
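A toy illustration of the program-versus-executor split, with made-up instances: all the “intelligence” sits in the program text (here, a precomputed table standing in for a very complex program), while the executor is deliberately stupid.

```python
# The table stands in for a very complex program whose text already
# encodes solutions to hard instances (imagine they were expensive
# to compute). The executor contributes no intelligence at all.

SOLUTIONS = {
    "factor 91":   (7, 13),
    "factor 8051": (83, 97),
}

def stupid_executor(problem):
    # a pure lookup: no search, no reasoning, no time needed
    return SOLUTIONS[problem]

print(stupid_executor("factor 8051"))   # (83, 97)
```

The execution takes constant time regardless of how hard the instance was; whatever made the answer hard to find lives entirely in the program text.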
Nowadays there is a new name going around the AI field: “deep learning”, the new revolutionary algorithm that solves everything… I am bored of reading deep, deeep, deeeep everywhere. Why deep? It is large, not deep! It is only a new name for old stuff running on more powerful hardware. Let me explain where the troubles are. There are two main problems, or limits, in what deep learning can achieve:
- The fitness of the objective
- The low computing power of the agent (is it only a tree? Is it universal?)
(1) The learning phase needs a function that scores how good the behavior of the agent is, and, as the NFL theorems say, it is not possible to solve the problem for every fitness; it can be solved only for special fitnesses (if the fitness does not increase gradually, I think the trainer is in trouble). (2) As every programmer knows, there are very different programs and very different problems, and it is difficult to have one program as a solution that works well across different problems. An agent is a program, and if it is a NN with many layers, it is something like a tree: only one step beyond a lookup table. A tree-program is very limited; it is difficult to implement a SAT solver as a tree… (so don't try to teach a deep learning system how to solve SAT). Perhaps the “deep learning” uses an RNN; OK, here we have a graph, another step forward, but again, what complexity? You cannot solve high-complexity problems with low-complexity programs… if you want, you can do the opposite: for example, it is possible to implement a sorting solver using a SAT solver. But solving a higher complexity with a lower complexity is really a bad idea, and in older posts I explained why.
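The complexity mismatch can be felt in code: a fixed-depth feedforward pass always performs the same constant number of operations per input, while even the simplest SAT procedure must explore a space that grows with the number of variables. Below is a deliberately brute-force toy solver (not a serious SAT algorithm) run on a tiny unsatisfiable formula, counting how many assignments it has to try.

```python
# Toy brute-force SAT: clauses are lists of ints, negative = negated
# literal, variable i is at index i-1. Worst case: 2^n assignments,
# a cost that no fixed-depth (constant-work) evaluator can pay.

from itertools import product

def brute_force_sat(clauses, n_vars):
    tried = 0
    for bits in product([False, True], repeat=n_vars):
        tried += 1
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits, tried            # satisfying assignment found
    return None, tried                    # unsatisfiable

# an unsatisfiable toy formula over 3 variables: (x1) AND (NOT x1) AND (x2 OR x3)
clauses = [[1], [-1], [2, 3]]
model, tried = brute_force_sat(clauses, 3)
print(model, "assignments tried:", tried)   # None, 8 -- all 2^3 of them
```

A network with a fixed number of layers does the same amount of work whether the input formula is trivial or hard; a solver cannot, and that gap is the point.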