The Mind 42

What is the size required to implement a mind?
This is an old question…

I remember, many years ago, an estimate of about 1 GB. The space needed to record a complete mind!
It was clear then, as it is now, that the estimate would keep growing as time passes.
Today I see estimates of about 10 or 100 TB. OK, we are just waiting for new hardware to get estimates in the petabytes…

From physics it seems clear that there is an upper bound on the numbers in our universe, and that number is about 10^100. What this means is that if we write the program U that runs the universe (a TOE, theory of everything), we only need a number of about the same size as 10^100 to uniquely identify a mind.

So… yes, 1 petabyte is not enough… we need 42 special bytes!

With 42 bytes we can, for sure, execute every existing mind!
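The 42 comes straight out of the arithmetic: uniquely indexing one state among ~10^100 needs log2(10^100) ≈ 332 bits, which rounds up to 42 whole bytes. A quick check in Python:

```python
import math

# Assumed number of distinguishable states in the universe (the 10^100 upper bound).
STATES = 10 ** 100

# Bits needed to uniquely index one state among 10^100.
bits = math.log2(STATES)  # ≈ 332.19 bits

# Round up to whole bytes.
bytes_needed = math.ceil(bits / 8)

print(bytes_needed)  # 42
```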

 


The Processor

Do we really need a processor to execute programs?

For people who think that a requisite of a program is an executor, there is a very interesting thought experiment.

A very telling thought experiment arises from watching an episode of the famous series Black Mirror (season 3, “San Junipero”), where in a hypothetical future people, instead of dying, are uploaded into a simulated virtual world.

Do we really need to run the simulation?
If you think yes, at what speed? Does one second of the external world correspond to one second, one microsecond, or one year of the simulated virtual world?
We need to run the simulation only to interact with the virtual world. This is the only purpose of a processor: it is just a communication channel. The processor navigates the state space, and we can choose at which state to interact; if we don’t need to interact, we don’t need the processor.
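This point of view can be sketched in a few lines: a deterministic world is just a pure step function, and the processor only navigates its state space to the point where we want to interact. The step rule and the integer state here are toy assumptions, of course, not a real physics.

```python
def step(state: int) -> int:
    """One tick of a toy deterministic world (a simple linear congruential rule)."""
    return (state * 6364136223846793005 + 1442695040888963407) % 2 ** 64

def state_at(initial: int, t: int) -> int:
    """Navigate to the state at time t; 'running' the world is only this navigation."""
    s = initial
    for _ in range(t):
        s = step(s)
    return s

# We only need the processor when we want to interact:
# pick a time, compute that state, and open the channel there.
print(state_at(42, 1000))
```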

The channel theory


I am waiting to get my hands on my new Oculus Rift (I also have the first release, the DK1), and I was thinking about how to know whether a new device will be a success.

There is a parameter that is normally underestimated, or not taken into account at all, in new devices.

It is the size of the communication channel between the user and the device. This is one of the most important, and perhaps the most important, indexes for estimating how much of an advance a technology will be.

For the Oculus Rift, and for virtual reality devices in general, there is a great increase in the communication channel. The user gets an image for each eye, and each image can saturate that eye’s input channel: it is possible to build a device whose image definition is such that the eye cannot detect more detail, and which completely covers each eye’s field of view. It is impossible for a monitor to reach the same saturation, even increasing the resolution ten or a hundred times, because the user is not in a fixed position and probably 50% or more of the information output by a monitor is lost.

That is only an analysis of the 2D part, but a virtual reality device outputs stereoscopic images, giving depth information, so it adds a 3D component.

The last component is movement: every head movement or rotation gives a different image, which means another great increase in the information the user receives.

So the increase in the information a V.R. device gives the user is so big compared to monitors that every other side effect is secondary.
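A back-of-envelope comparison gives a feeling for the sizes involved. All the figures below are illustrative assumptions (a common desktop monitor, a plausible per-eye VR panel), not measurements of any specific device:

```python
def raw_rate(width: int, height: int, hz: int, bits_per_pixel: int = 24) -> int:
    """Raw pixel information rate in bits per second."""
    return width * height * hz * bits_per_pixel

monitor = raw_rate(1920, 1080, 60)   # assumed desktop monitor
monitor_effective = monitor * 0.5    # assume ~50% lost to viewing position

vr_per_eye = raw_rate(2160, 2160, 90)  # assumed per-eye VR panel
vr = vr_per_eye * 2                    # one image per eye

print(f"monitor (effective): {monitor_effective / 1e9:.2f} Gbit/s")
print(f"VR headset:          {vr / 1e9:.2f} Gbit/s")
print(f"ratio:               {vr / monitor_effective:.1f}x")
```

With these assumed numbers the headset delivers roughly 13.5 times the raw channel of the monitor, and that is before counting the depth and head-movement components.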

This means that this device will not be only a nerdy device; it will be deployed in the future to a very wide range of users.

From this analysis, what I suggest is not to put too much effort into solving the sickness side effects: it is only a problem for people not yet trained enough to use the device (a bit like giving a mouse to someone for the first time). What really matters is the size of the channel, so the direction of development should be to increase the definition, increase the resolution!

 

It is not AI rules


There are simple rules to quickly recognize whether a program is not an AI:

 

 

  • Is it generic? Can it work on different problems? (AGI)
  • Can the solution found by the AI be a program with a different space/time complexity? Is there a limit on the complexity of the solution?
  • Is it computable? An AI must, in general, not be computable.
  • Is it an implementation of ILS?
  • How does it solve NFL (No Free Lunch)?
  • Is it “universal”?