The User Illusion: Cutting consciousness down to size

In this entertaining and ambitious book, Tor Nørretranders argues that consciousness, that hallmark of higher intelligence, is nothing more than an illusion: a picture of reality created by our brain that we mistake for the real thing. The book received good reviews and was very well received both in his native Denmark and around the world.

Using fairly objective data, Nørretranders makes his main point: consciousness has a very limited bandwidth, probably no more than 20 bits a second. This means that we cannot consciously process more than a few bits a second, distilled from the megabytes of information processed by our senses in the same period. Furthermore, this stream of information creates a simulation of reality, which we mistake for the real thing, together with the illusion that our conscious self (the “I”) is in charge, while the unconscious self (the “me”) follows the orders given by the “I”.


There is significant evidence that Nørretranders’ main point is well taken. We know (and he points it out in his book) that consciousness lags behind our actions, even conscious ones, by about half a second. As Daniel Dennett also argues in his book Consciousness Explained, consciousness controls much less than we think. Consciousness is more of a module that observes what is going on and explains it in terms of “conscious decisions” and “conscious attention”. In other words, consciousness is more an observer of our actions than the agent that determines them. Our feeling that we consciously control our desires, actions, and sentiments is probably far from the truth, and a lot of what we consciously observe is a simulation carefully crafted by our “consciousness” module. Nørretranders notes that some people believe consciousness is a recent phenomenon, maybe no more than a few thousand years old, as Julian Jaynes argued in his famous book, The Origin of Consciousness in the Breakdown of the Bicameral Mind.

Nørretranders uses these arguments to argue that we should pay less attention to conscious decisions (the “I”, as he describes it) and more to unconscious urges (the “me”, in his book), leaving the unconscious “me”, which has access to vastly larger amounts of information, in control of more of our decisions.

Tesla announces full self-driving ability for all its cars

Tesla Motors announced that all current and future Tesla cars will be built with a ‘Full Self Driving Hardware’ package. This package is the next step in the development of Autopilot, and it will enable Model S, Model X and Model 3 cars to handle junctions, twisting rural roads and parking lots.

According to the press release, this hardware includes eight surround cameras providing 360-degree visibility around the car at up to 250 meters of range, twelve updated ultrasonic sensors, and a forward-facing radar with enhanced processing ability.


The video released by Tesla, on Tesla's website, shows the car driving autonomously in a number of different road conditions and parking itself after searching for a free parking space. Elon Musk tweeted: “When searching for parking, the car reads the signs to see if it is allowed to park there, which is why it skipped the disabled spot.” He added that in 2017 a driverless Tesla will travel from LA to NYC.


A review of Microsoft HoloLens

By a kind invitation from Microsoft, I had the opportunity to try out, from a user’s perspective, the new Microsoft HoloLens. Basically, I was able to wear the device for a while and to interact with a number of applications that were spread around a room.

From the outside, the result is not very impressive, as the picture above shows. In a mostly empty room (except for the other guests, wearing similar devices), you can see me wearing the lenses, raising my hand in the menu pull-up gesture.

From the inside, things are considerably more interesting. During configuration, the software identifies the relevant features of the room, and creates an internal model of the space and of the furniture in it.

Applications, both 3D and 2D, can then be deployed in different spaces in the room, using a number of control gestures and menus. Your view of the applications is superimposed on the view of the room, leading to a semi-realistic impression of virtual reality mixed with the “real” reality. You can walk around the 3D holograms placed in the room (in this case an elephant, a mime and a globe, like the one below, among others).


You can also interact with them using a virtual pointing device (basically a mouse, controlled by your head movements). 2D applications, like video streaming, appear as suspended screens (or screens lying on top of desks and tables) and can be controlled using the same method. Overall, the impression is very different from the one obtained using 3D Virtual Reality goggles, like Google Cardboard or Oculus Rift. For instance, in a conversation (pictured below) you would be sitting in a chair, facing a hologram of your guest, possibly discussing some 3D object sitting between the two of you.


Overall, I was much more impressed with the possibilities of this technology than I was with Google Glass, which I tried a few years back. The quality of the holograms was quite good, and the integration with the real world quite convincing. Compelling applications still need to be developed, though.

On the minus side, the device is somewhat heavy and less than comfortable to wear for extended periods. This limitation could probably be addressed by future developments of the device.

Darwin and the Elephants

The basic idea underlying Charles Darwin's theory of evolution is that the number of individuals in a given species would grow exponentially in the absence of pressures against population growth. Only selective pressures can curb this exponential growth, and select some species over others.

In Charles Darwin’s own words, in The Origin of Species:

“There is no exception to the rule that every organic being increases at so high a rate, that if not destroyed, the earth would soon be covered by the progeny of a single pair. Even slow-breeding man has doubled in twenty-five years, and at this rate, in a few thousand years, there would literally not be standing room for his progeny. Linnaeus has calculated that if an annual plant produced only two seeds – and there is no plant so unproductive as this – and their seedlings next year produced two, and so on, then in twenty years there would be a million plants. The elephant is reckoned to be the slowest breeder of all known animals, and I have taken some pains to estimate its probable minimum rate of natural increase: it will be under the mark to assume that it breeds when thirty years old, and goes on breeding till ninety years old, bringing forth three pairs of young in this interval; if this be so, at the end of the fifth century there would be alive fifteen million elephants, descended from the first pair.”


Even though Charles Darwin got his numbers wrong, as pointed out by William Thomson, later to become Lord Kelvin, the idea is entirely correct. The number of elephants at generation n is given by the recurrence a(n) = 2 × a(n-1) − a(n-3).

This sequence of numbers converges rapidly to a ratio of 1.618 (the golden ratio) between the number of elephants at generation n and the number at generation n-1.

If one plugs the numbers in, one realizes that even though only 14 elephants are alive after one hundred years and 8360 after five hundred years (not 15 million, as Darwin stated), there would be almost 30 million elephants alive after a thousand years. After three thousand years, there would be a billion trillion elephants, with a combined mass equal to that of planet Earth. Assuming the population could grow indefinitely, after only seven thousand years the solid sphere of roughly 10^50 elephants, by that time with a diameter of 200 light-years, would be expanding outward faster than the speed of light.
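The convergence of this recurrence is easy to check numerically. The sketch below iterates a(n) = 2 × a(n-1) − a(n-3) from hypothetical seed values (the starting counts used above are not stated, so the seeds here are illustrative) and shows the generation-to-generation ratio approaching the golden ratio, which is the dominant root of the characteristic equation x³ − 2x² + 1 = (x − 1)(x² − x − 1) = 0.

```python
# Sketch: iterate the elephant recurrence a(n) = 2*a(n-1) - a(n-3)
# and watch the generation-to-generation growth ratio.
# Seed values below are hypothetical, for illustration only.

def elephant_counts(seeds, generations):
    """Population at each generation under a(n) = 2*a(n-1) - a(n-3)."""
    a = list(seeds)  # needs three seeds: a(0), a(1), a(2)
    for n in range(3, generations):
        a.append(2 * a[n - 1] - a[n - 3])
    return a

counts = elephant_counts([2, 4, 6], 40)
ratio = counts[-1] / counts[-2]

# The characteristic equation x^3 - 2x^2 + 1 = 0 factors as
# (x - 1)(x^2 - x - 1) = 0, whose dominant root is the golden ratio.
golden = (1 + 5 ** 0.5) / 2
print(ratio)   # approaches 1.618... after a few dozen generations
print(golden)  # 1.618033988749895
```

Whatever (non-degenerate) seeds are chosen, the exponential term with base 1.618 dominates the others (bases 1 and −0.618), which is why the growth rate settles on the golden ratio so quickly.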



A robot chef in every kitchen?

Advances in robotics, image processing and artificial intelligence are quickly opening the door to new areas of application for robotics. Moley Robotics has been developing the world’s first fully automated and intelligent cooking robot, which cooks recipes by mimicking the movements of a master chef and (more importantly) cleans up the kitchen when it is done.

As you can see in the video (also available on Moley's website), the robotic hands manipulate food, cooking tools, and other implements in much the same way a human chef would.


The first prototype was developed in collaboration with Shadow Robotics, Yachtline, DYSEGNO, Sebastian Conran and Stanford. The robot consists of a pair of articulated robotic hands that can reproduce the entire function of human hands with the same speed, sensitivity and movement.

According to Moley, “The cooking skills of Master Chef Tim Anderson, winner of the BBC Master Chef title were recorded on the system – every motion, nuance and flourish – then replayed as his exact movements through the robotic hands.”

It remains unclear how adaptable the system is to changes in the position of the ingredients, tools, and plates, but these are challenges that will become less and less serious as the technology evolves.

Image credits: Moley Robotics

Empathy with robots – science fiction or reality?

A number of popular videos made available by Boston Dynamics (a Google company) have shown different aspects of the potential of bipedal and quadrupedal robots to move around in rough terrain and to carry out complex tasks. The behavior of the robots is strangely natural, even though they are clearly man-made mechanical contraptions.


In a recent interview at Disrupt SF, Boston Dynamics CEO Marc Raibert put the emphasis on making the robots friendlier. Being around a 250-pound robot that can move very fast may be dangerous to humans, and the company is creating smaller and friendlier robots that can move around safely inside people’s houses.

This means that these robots can have many more applications beyond military ones. They may serve as butlers, servants or even as pets.

It is hard to predict what sort of emotional relationship these robots may eventually be able to create with their owners. Their animal-like behavior makes them almost likeable to us, despite their obviously mechanical appearance.

In some of these videos, humans intervene to make the tasks harder for the robots, kicking them and moving things around in ways that look frustrating to the robots. To many viewers, this may amount to acts of actual robot cruelty, since the robots seem to become sad and frustrated. You can see some of those images around minute 3 of the video above, made available by TechCrunch and Boston Dynamics, or in the (fake) commercial below.

Our idea that robots and machines don’t have feelings may be challenged in the near future, when human- or animal-like mechanical creatures become common. After all, extensive emotional attachment to Roomba robotic vacuum cleaners is nothing new!

Videos made available by TechCrunch and Boston Dynamics.

You can now hail an Uber self-driving car

If you are in Pittsburgh, you can now hail an Uber self-driving vehicle, and see for yourself what the fuss is all about. In fact, you can even ask Siri to hail you an Uber car, which will come by itself and take you wherever you want. Or simply use the easy-to-use Uber app that has already changed the world of private transportation so much.

As you can see in the video, the car comes with a resident engineer. However, he or she is not normally involved in driving the vehicle, and is there mostly to reassure the customers and, possibly, to comply with existing regulations.

Although many people (about a third of Americans, polls say) are still wary of using driverless vehicles, Uber took a step forward this week and made the technology available to anyone who wants to try it.

As The Verge reports, the technology still has a few quirks, but the self-driving systems of the Ford Fusion cars in use manage to address most of the challenges normally posed to a Pittsburgh driver.

Uber users in other cities will have to wait a little longer, as the system, extensively developed by CMU researchers, is certainly more mature in Pittsburgh than in other places around the world.

Video source: Uber