You’re not the customer, you’re the product!

The attention that each one of us pays to an item, and the time we spend on a site, article, or application, is the most valuable commodity in the world, as witnessed by the fact that the companies that sell it, wholesale, are among the largest in the world. Attracting and selling our attention is, indeed, the business of Google and Facebook but also, to a lesser extent, of Amazon, Apple, Microsoft, Tencent, and Alibaba. We may believe we are the customers of these companies but, in fact, many of the services they provide serve only to attract our attention and sell it to the highest bidder, in the form of advertising or personal information. In the words of Richard Serra and Carlota Fay Schoolman, later reused by a number of people including Tom Johnson, if you are not paying, “You’re not the customer; you’re the product.”

Attracting and selling attention is an old business, well described in Tim Wu’s book The Attention Merchants. First created by newspapers, then by radio and television, the market for attention came to maturity with the Internet. Although newspapers, radio programs, and television shows have all been designed to attract our attention and use it to sell advertising, none of them had the potential of the Internet, which can attract and retain our attention by tailoring content to each and every person.

The problem is that with excessive customization comes a significant and very prevalent side effect. As sites, social networks, and content providers fight to attract our attention, they show us exactly the things we want to see, and not things as they are. Each person lives, nowadays, in a reality that is different from everyone else’s. The creation of a separate and different reality for each person has a number of negative consequences, which include the creation of paranoia-inducing rabbit holes, the radicalization of opinions, the inability to establish democratic dialogue, and the difficulty of distinguishing reality from fabricated fiction.

Wu’s book addresses this issue in no uncertain terms, but the Netflix documentary The Social Dilemma makes an even stronger point: that customized content, as shown to us by social networks and other content providers, is unraveling society and creating a host of new and serious problems. Social networks are even more worrying than other content providers because they put pressure on children and young adults to conform to a reality that is fabricated and presented to them in order to retain (and resell) their attention.

Do humankind’s best days lie ahead?

This book, which transcribes one of the several Munk Debates, organized by an initiative financed by Peter and Melanie Munk, addresses the question of whether the future of humanity will be better or worse than the present.

The debate, also available on video, takes place between four formidable names: the wizards Steven Pinker and Matt Ridley (apologists for the theory that technology will continue to bring progress) and the prophets Alain de Botton and Malcolm Gladwell (doubters of the idea that further technological developments will keep improving the world).


The dialogue that takes place between the Pollyannas and the Cassandras (to use an expression coined in the debate itself) is vivid, interesting and, at times, highly emotional. Not one of the debaters doubts that progress has immensely improved the human condition in the last few centuries, but the consensus ends there. Will we be able to use science and technology to surmount the environmental, social, and political challenges faced by humanity, or have we already reached “peak development”, so that the future will be worse than the past? Read or watch the debate, and decide for yourself.

My take is that the Pollyannas, Steven Pinker and Matt Ridley, with their optimistic take on the future, win the debate by a large margin against the Cassandras. Their arguments that the world will continue to improve, based both on historical trends and on the hope that technology will solve the significant challenges we face, meet no coherent resistance from Alain de Botton and Malcolm Gladwell. At least, the latter did not manage to convince me that famines, cybersecurity threats, climate change, and inequality will be enough to reverse the course of human progress.

Enlightenment Now: The case for reason, science, humanism and progress

Steven Pinker’s latest book, Enlightenment Now, deserves high praise and careful attention, in a world where reason and science are increasingly threatened. Bill Gates called it “my new favorite book of all time”, which may be somewhat of an exaggeration. Still, the book is, definitely, a must-read, and should figure in the top 10 of any reader who believes that science plays an important role in the development of humanity.

Pinker’s main point is that the values of the Enlightenment, which he lists as reason, science, humanism, and progress, have not only enabled humanity to evolve immensely since they were adopted, somewhere in the 18th century, but are also our best hope for the future. He argues that these values have improved our lives immensely in the last two and a half centuries, and will lead us to vastly improved lives in the future. “Dare to understand”, the cry for reason made by David Deutsch in The Beginning of Infinity, is the key argument made by Pinker in this book. The critical use of reason leads to understanding, and understanding leads to progress, unlike beliefs in myths, religions, miracles, and signs from God(s). Pinker’s demolition of all the values not based on the critical use of reason is complete and utterly convincing. Do not read this book if, at some level, you believe in things that cannot be explained by reason.

To be fair, a large part of the book is dedicated to showing that progress has, indeed, been remarkable since the 18th century, when reason and science took hold and replaced myths and religions as the major references for the development of nations and societies. No fewer than 17 chapters are dedicated to describing the many ways humanity has progressed in the last two and a half centuries, in fields as diverse as health, democracy, wealth, peace and, yes, even sustainability. Pinker may come across as an incorrigible optimist, describing a world so much better than that which existed in the past, so at odds with the popular current view that everything is going to the dogs. However, the evidence he presents is compelling, well documented, and discussed at length. Counter-arguments against the idea that progress is real and unstoppable are analyzed in depth and disposed of with style and elegance.

But the book is not only about past progress. In fact, it is mostly about the importance of viewing the Enlightenment values as the only ones that will safeguard a future for humanity. If we want a future, we need to preserve them, in a world where fake news, false science, and radical politics are endangering progress, democracy, and human rights.

It is comforting to find a book that so powerfully defends science, reason, and humanistic values against the claims that only a return to the ways of the past will save humanity from certain doom. Definitely a must-read, if you believe in, and care for, humanity.

Hello World: how to be human in the age of the machine

Computers, algorithms, and data are controlling our lives, powering our economy, and changing our world. Unlike a few decades ago, the largest companies on the planet deal mostly in data, manipulated by powerful algorithms that help us decide what we buy, which songs we like, where we go, and how we get there. More and more, we are becoming unwitting slaves to these algorithms, which are with us all the time, running on cell phones, computers, servers, and smart devices. And yet, few people understand what an algorithm is, what artificial intelligence really means, or what machine learning can do.

Hannah Fry’s new book opens a window on this world of algorithms and on the ways they are changing our lives and societies. Despite its name, this book is about neither programming nor programs. It is about algorithms, and the ways they are being used in the most diverse areas to process data and obtain results of economic or societal value.

While leading us through the many different areas where algorithms are used these days, Fry passes on her own views about the benefits they bring, but also about the threats they carry with them. The book starts by addressing the question of whether we, humans, are handing too much power to algorithms and machines. This has nothing to do with the fear of intelligent machines taking over the world, the fear that a superintelligence will rule us against our will. On the contrary, the worry is that algorithms that are effective, but not that intelligent, will be trusted to take decisions on our behalf; that our privacy is being endangered by our willingness to provide personal data to companies and agencies; and that sub-optimal algorithms working on insufficient data may bring upon us serious unintended consequences.

As Fry describes, trusting algorithms to run our lives is made all the more dangerous by the fact that each one of us is handing over huge amounts of personal data to big companies and government agencies, which can use them to infer information that many of us would rather keep private. Even the data we deem most innocent, like what we buy at the grocery store, is valuable and can be used to extract valuable and, sometimes, surprising information. You will learn, for instance, that pregnant women, in their second trimester, are more likely to buy moisturizer, effectively signaling to the data analysts at the stores that a baby is due in a few months. The book is filled with interesting, sometimes fascinating, descriptions of cases like these, where specific characteristics of the data can be used, by algorithms, to infer valuable information.
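
To make the idea concrete, here is a toy sketch of the kind of scoring such analysts might run. The signal items and weights are entirely hypothetical, invented for illustration rather than taken from the book:

```python
# A toy sketch of inferring private information from shopping data:
# score baskets on purchases that (hypothetically) correlate with a
# condition of interest. All items and weights below are made up.
SIGNALS = {"unscented moisturizer": 0.4,
           "vitamin supplements": 0.3,
           "cotton balls": 0.2}

def pregnancy_score(basket):
    """Sum the (hypothetical) weights of signal items in a basket."""
    return sum(SIGNALS.get(item, 0.0) for item in basket)

basket = ["unscented moisturizer", "vitamin supplements", "bread"]
if pregnancy_score(basket) > 0.5:
    print("flag this customer for baby-product marketing")
```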

Several chapters are dedicated to a number of different areas where data processing and algorithmic analysis have been extensively applied. Fry describes how algorithms are currently being used in areas as diverse as justice, transportation, medicine, and crime prevention. She explains and analyses how algorithms can be used to drive cars, influence elections, diagnose cancers, make decisions on parole cases and rulings in courts, guess where crimes will be committed, recognize criminals in surveillance videos, predict the risk of Alzheimer’s from early-age linguistic ability, and many other important and realistic applications of data analysis. Most of these algorithms use what we now call artificial intelligence and machine learning, but it is clear that, to the author, these techniques are just toolboxes for algorithm designers. The many examples included in these chapters are, in themselves, very interesting and, in some cases, riveting. However, what is most important is the way the author uses these examples to make what I feel is the central point of the book: using an algorithm implies a tradeoff, and every application brings with it benefits and risks, which have to be weighed. If we use face recognition algorithms to spot criminals, we have to accept the risk of an algorithm sending an innocent person to jail. If we police more heavily the locations where crimes are most likely to take place, people in those areas may feel they are treated unfairly. If we use social data to target sales campaigns, then it can also be used to market political candidates and manipulate elections. The list of tradeoffs goes on and on, and every one of them is complex.

As every engineer knows, there is no such thing as 100% reliability or 100% precision. Every system designed to perform a specific task will have a given probability of failing at it, however small. All algorithms that aim at identifying specific targets will make mistakes. They will falsely classify some non-target cases as targets (false positives) and will miss some real targets (false negatives). An autonomous car may be safer than a normal car with a human driver but will, in some rare cases, cause accidents that would not have happened otherwise. How many spurious accidents are we willing to tolerate, in order to make roads safer for everyone? These are difficult questions, and this book does a good job of reminding us that technology will not make those choices for us. It is our responsibility to make sure that we, as a society, clearly assess and evaluate the benefits and risks of each and every application of algorithms, so that the overall result is positive for the world.
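
The tradeoff between the two kinds of error is easy to see in a minimal sketch. The toy classifier below flags cases whose score exceeds a threshold; moving the threshold only trades false positives for false negatives, never eliminating both (the scores and labels are made up for illustration):

```python
def confusion_counts(scores, labels, threshold):
    """Count true/false positives and negatives at a given threshold."""
    tp = fp = tn = fn = 0
    for score, is_target in zip(scores, labels):
        predicted = score >= threshold
        if predicted and is_target:
            tp += 1
        elif predicted and not is_target:
            fp += 1   # non-target case flagged as a target
        elif not predicted and is_target:
            fn += 1   # real target missed
        else:
            tn += 1
    return tp, fp, tn, fn

# Toy data: classifier scores and ground-truth labels (True = real target).
scores = [0.95, 0.80, 0.72, 0.60, 0.40, 0.30, 0.10]
labels = [True, True, False, True, False, False, False]

for threshold in (0.3, 0.5, 0.7):
    tp, fp, tn, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
```

Raising the threshold from 0.3 to 0.7 removes false positives but starts missing real targets; where to sit on that curve is exactly the kind of societal choice the book insists we must make deliberately.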

The final chapter addresses a different and subtler point, which can be framed in the same terms that Ada Lovelace put it, more than 150 years ago: can computers originate new things, can they be truly creative? Fry does not try to find a final answer to this conundrum, but she provides interesting data on the subject, for readers to decide for themselves. By analyzing the patterns in the music written by a composer, algorithms can create new pieces that, in many cases, will fool the majority of people and even many experts. Does this mean that computers can produce novel art? And, if so, is it good art? The answer is made all the more difficult by the fact that there are no objective measures of the quality of works of art. Many experiments, some of them described in this chapter, show clearly that beauty is, in many cases, in the eye of the beholder. Computer-produced art is good enough to be treated like the real thing, at least when the origin of the work is not known. But many people will argue that copying someone else’s style is not really creating art. Others will disagree. Nonetheless, this final chapter provides an interesting introduction to the problem of computer creativity, and the interested reader can pick up some of the leads provided by the book to investigate the issue further.
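
For readers curious about how pattern-based generation can work at all, here is a deliberately minimal sketch using a first-order Markov chain over notes. It illustrates the general principle of learning transition patterns from a corpus and sampling new sequences, not the far more sophisticated systems Fry discusses:

```python
# Minimal pattern-based music generation: learn which notes follow
# which in a training corpus, then sample a new melody.
import random
from collections import defaultdict

def train(corpus):
    """Record, for each note, the notes observed to follow it."""
    transitions = defaultdict(list)
    for melody in corpus:
        for current, nxt in zip(melody, melody[1:]):
            transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length):
    """Sample a new melody by following learned transitions."""
    melody = [start]
    for _ in range(length - 1):
        followers = transitions.get(melody[-1])
        if not followers:
            break
        melody.append(random.choice(followers))
    return melody

# Toy "corpus" of melodies in the style to imitate (note names only).
corpus = [["C", "E", "G", "E", "C"], ["C", "E", "G", "A", "G", "E"]]
print(generate(train(corpus), start="C", length=8))
```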

Overall, Hello World is definitely worth reading, for those interested in the ways computers and algorithms are changing our lives.

Note: this is an edited version of the full review that appeared in Nature Electronics.


The Evolution of Everything, or the use of Universal Acid, by Matt Ridley

Matt Ridley never disappoints, but his latest book, The Evolution of Everything, is probably his most impressive one. Daniel Dennett called evolution the universal acid, an idea that dissolves every preconception we may have about the world. Ridley uses this universal acid to show that the ideas behind evolution apply not only to living beings but to all sorts of things in the world and, particularly, to society. He applies it to deconstruct our preconceptions about history and to present his own view that centralized control does not work, and that bottom-up evolution is the engine behind progress.

When Ridley says everything, he is not exaggerating. The chapters in this book cover, among many others, topics as different as the universe, life, morality, culture, technology, leadership, education, religion, and money. To all these topics Ridley applies the universal acid, to arrive at the conclusion that (almost) all that is planned and directed leads to bad results, and that all that evolves under the pressures of competition and natural selection provides advances and improvements in society. Bottom-up mechanisms, he argues, are what creates innovation in the world, be it in the natural world, in culture, in technology, or in any other area of society. For this view, he gives explicit credit to Lucretius who, in his magnum opus De Rerum Natura, from the first century BC, proposed essentially the same idea, and to Adam Smith who, in The Wealth of Nations, proposed the central role of commerce in the development of society.

Sometimes his arguments look too far-fetched, as when he argues that the state should stay out of the education business, or that the 2008 crisis was caused not by runaway private initiative but by wrong governmental policies. Nonetheless, even in these cases, the arguments are very persuasive and always entertaining. Even someone like me, who believes that there are some roles to be played by the state, ends up doubting his own convictions.

All in all, a must-read.


Meet Duplex, your new assistant, courtesy of Google

Advances in natural language processing have enabled systems such as Siri, Alexa, Google Assistant, and Cortana to be at the service of anyone owning a smartphone or a computer. Still, so far, none of these systems has managed to cross the thin dividing line that would make us take them for humans. When we ask Alexa to play music or Siri to dial a telephone number, we know very well that we are talking with a computer, and the systems’ replies would remind us, were we to forget it.

It was to be expected that, with the evolution of the technology, this type of interaction would become more and more natural, possibly reaching a point where a computer could impersonate a real human, taking us closer to Alan Turing’s vision: a situation where you cannot tell a human apart from a computer simply by talking to both.

In an event widely reported in the media, Google demonstrated, at the I/O 2018 conference, Duplex, a system able to process and execute requests in specific areas, interacting in a very human way with human operators. While Google states that the system is still under development, and only able to handle very specific situations, one gets the feeling that, soon enough, digital assistants will be able to interact with humans without disclosing their artificial nature. You can read the Google AI blog post here, or just listen to a couple of examples, where Duplex is scheduling a haircut or making a restaurant reservation. Both the speech recognition system and the speech synthesis system, as well as the underlying knowledge base and natural language processing engines, operate flawlessly in these cases, reinforcing the widely held premonition that AI systems will soon be replacing humans in many specific tasks.

Photo by Kevin Bhagat on Unsplash

European Commission releases communication on Artificial Intelligence

Today, April 25th, 2018, the European Commission released a communication entitled Artificial Intelligence for Europe, and a related press release, addressing what could become the European strategy for Artificial Intelligence.

The document states that “Like the steam engine or electricity in the past, AI is transforming our world, our society and our industry. Growth in computing power, availability of data and progress in algorithms have turned AI into one of the most strategic technologies of the 21st century.”

The communication argues that “The EU as a whole (public and private sectors combined) should aim to increase this investment [in Artificial Intelligence] to at least EUR 20 billion by the end of 2020. It should then aim for more than EUR 20 billion per year over the following decade.” These figures should be compared with the EUR 4-5 billion currently spent on AI.

The communication also addresses some questions raised by the increased ability of AI systems to replace human jobs: “The first challenge is to prepare the society as a whole. This means helping all Europeans to develop basic digital skills, as well as skills which are complementary to and cannot be replaced by any machine such as critical thinking, creativity or management. Secondly, the EU needs to focus efforts to help workers in jobs which are likely to be the most transformed or to disappear due to automation, robotics and AI. This is also about ensuring access for all citizens, including workers and the self-employed, to social protection, in line with the European Pillar of Social Rights. Finally, the EU needs to train more specialists in AI, building on its long tradition of academic excellence, create the right environment for them to work in the EU and attract more talent from abroad.”

This initiative, which has already received significant press coverage, may become Europe’s answer to the strong investments China and the United States are making in Artificial Intelligence technologies. There is also a fact sheet about the communication.

The Second Machine Age

The Second Machine Age, by Erik Brynjolfsson and Andrew McAfee, two MIT professors and researchers, offers mostly an economist’s point of view on the consequences of the technological changes that are remaking civilisation.

Although a fair number of chapters are dedicated to the technological innovations that are shaping the first decades of the 21st century, the book is at its best when the economic issues are presented and discussed.

The book is particularly interesting in its treatment of the bounty vs. spread dilemma: will economic growth be fast enough to lift everyone’s standard of living, or will increased concentration of wealth lead to such an increase in inequality that many will be left behind?

The chapter that provides evidence on the steady increase in inequality is especially appealing and convincing. While average income in the US has been increasing steadily in the last decades, median income (the income of those who are exactly in the middle of the pay scale) has stagnated for several decades, and may even have decreased in the last few years. For the ones at the bottom of the scale, the situation is much worse now than it was decades ago.
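
A toy example makes the distinction between the two statistics clear: if gains go only to the top earner, the mean rises while the median does not. The numbers below are invented purely for illustration:

```python
# Mean vs. median: concentrated gains at the top move the average
# but leave the middle of the distribution unchanged.
from statistics import mean, median

incomes_1980 = [20, 30, 40, 50, 60, 70, 200]  # thousands, hypothetical
incomes_2010 = [20, 30, 40, 50, 60, 70, 600]  # only the top earner gains

for year, incomes in (("1980", incomes_1980), ("2010", incomes_2010)):
    print(year, "mean:", round(mean(incomes), 1), "median:", median(incomes))
# mean rises from ~67.1 to ~124.3; the median stays at 50
```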

Abundant evidence of this trend also comes from the analysis of the shares of GDP that are due to wages and to corporate profits. Although these two fractions of GDP have fluctuated somewhat in the last century, there is mounting evidence that the fraction due to corporate profits is now increasing, while the fraction due to wages is decreasing.

All this evidence, put together, leads to the inevitable conclusion that society has to explicitly address the challenges posed by the fourth industrial revolution.

The last chapters are, indeed, dedicated to this issue. The authors do not advocate a universal basic income, but come out in defence of a negative income tax for those whose earnings fall below a given level. The mathematics of the proposal are somewhat unclear but, in the end, one thing remains certain: society will have to address the problem of mounting inequality brought on by technology and globalisation.
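
For readers unfamiliar with the mechanism, here is a minimal sketch of how a negative income tax can work; the guarantee level and phase-out rate are hypothetical parameters chosen for illustration, not figures from the book:

```python
# Negative income tax (NIT), minimal sketch: everyone is guaranteed an
# income floor, and the benefit phases out as earnings rise.
GUARANTEE = 10_000   # hypothetical income floor, per year
PHASE_OUT = 0.5      # benefit reduced by 50 cents per dollar earned

def nit_benefit(earnings):
    """Subsidy paid when earnings fall below the break-even point."""
    return max(0.0, GUARANTEE - PHASE_OUT * earnings)

for earnings in (0, 10_000, 20_000, 30_000):
    total = earnings + nit_benefit(earnings)
    print(f"earned {earnings:>6} -> benefit {nit_benefit(earnings):>7.0f},"
          f" total income {total:>7.0f}")
# Below the break-even point (20,000 here), the "tax" is negative:
# the state pays the worker, and working always increases total income.
```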

The wealth of humans: work and its absence in the twenty-first century

The Wealth of Humans, by Ryan Avent, a senior editor at The Economist, addresses the economic and social challenges imposed on societies by the rapid development of digital technologies.  Although the book includes an analysis of the mechanisms, technologies, and effects that may lead to massive unemployment, brought by the emergence of digital technologies, intelligent systems, and smart robots, the focus is on the economic and social effects of those technologies.

The main point Avent makes is that market mechanisms may be relied upon to create growth and wealth for society, and to improve the average condition of humans, but cannot be relied upon to ensure adequate redistribution of the generated wealth. Left to themselves, the markets will tend to concentrate wealth. This happened in the industrial revolution, but society adapted (unions, welfare, education) to ensure that adequate redistribution mechanisms were put in place.

To Avent, this tendency towards increased income asymmetry between the top earners and the rest, which is already so clear, will only be made worse by the inevitable glut of labor that will be created by digital technologies and artificial intelligence.

There are many possible redistribution mechanisms, from universal basic income to minimum wage requirements but, as the author points out, none is guaranteed to work well in a society where a large majority of people may become unable to find work. The largest and most important asymmetry that remains is, probably, the one that exists between developed countries and underdeveloped ones. Although this asymmetry was somewhat reduced by the recent economic development of the BRIC countries, Avent believes that was a one-time event that will not reoccur.

Avent points out that the strength of the developed economies is not a direct consequence of the factors that are most commonly thought to be decisive: more capital, adequate infrastructures, and better education. These factors do indeed play a role but what makes the decisive difference is “social capital”, the set of rules shared by members of developed societies that makes them more effective at creating value for themselves and for society. Social capital, the unwritten set of rules that make it possible to create value, in a society, in a country or in a company, cannot be easily copied, sold, or exported.

This social capital (which, interestingly, closely matches the idea of shared beliefs Yuval Harari describes in Sapiens) can be assimilated by immigrants or new hires, who can learn how to contribute to the creation of wealth, and benefit from it. However, as countries and societies become averse to receiving immigrants, and companies reduce workforces, social capital becomes more and more concentrated.

In the end, Avent concludes that no public policies, no known economic theories, are guaranteed to fix the problems of inequality, mass unemployment, and lack of redistribution. It comes down to society as a whole, i.e., to each one of us, to decide to be generous and altruistic, in order to make sure that the wealth created by the invisible hand of the market benefits all of mankind.

A must-read if you care about the effects of asymmetries in income distribution on societies.

Europe wants to have one exascale supercomputer by 2023

On March 23rd, in Rome, seven European countries signed a joint declaration on High Performance Computing (HPC), committing to an initiative that aims at securing the required budget and developing the technologies necessary to acquire and deploy two exascale supercomputers, in Europe, by 2023. Other Member States will be encouraged to join this initiative.

Exascale computers, defined as machines that execute 10^18 operations per second, will be roughly ten times more powerful than the existing fastest supercomputer, the Sunway TaihuLight, which clocks in at 93 petaflop/s, or 93×10^15 floating point operations per second. No country in Europe has, at the moment, any machine among the 10 most powerful in the world. The declaration, and related documents, do not fully specify that these machines will clock at more than one exaflop/s, given that the requirements for supercomputers are changing with the technology, and floating point operations per second may not remain the right measure.
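
The “roughly ten times” figure is just the ratio of the two speeds:

```python
# Arithmetic behind the "roughly ten times" claim.
exaflop = 1e18               # operations per second for an exascale machine
taihulight = 93e15           # Sunway TaihuLight: 93 petaflop/s
print(exaflop / taihulight)  # ~10.75, roughly an order of magnitude
```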

This renewed interest of European countries in High Performance Computing highlights the fact that this technology plays a significant role in economic competitiveness and in research and development. Machines with these characteristics are used mainly in complex system simulations, in physics, chemistry, materials science, and fluid dynamics, but they are also useful for storing and processing the large amounts of data required to create intelligent systems, namely by using deep learning.

Andrus Ansip, European Commission Vice-President for the Digital Single Market remarked that: “High-performance computing is moving towards its next frontier – more than 100 times faster than the fastest machines currently available in Europe. But not all EU countries have the capacity to build and maintain such infrastructure, or to develop such technologies on their own. If we stay dependent on others for this critical resource, then we risk getting technologically ‘locked’, delayed or deprived of strategic know-how. Europe needs integrated world-class capability in supercomputing to be ahead in the global race. Today’s declaration is a great step forward. I encourage even more EU countries to engage in this ambitious endeavour”.

The European Commission press release includes additional information on the next steps that will be taken in the process.

Photo of the signature event, by the European Commission. In the photo, from left to right, the signatories: Mark Bressers (Netherlands), Thierry Mandon (France), Etienne Schneider (Luxembourg), Andrus Ansip (European Commission), Valeria Fedeli (Italy), Manuel Heitor (Portugal), Carmen Vela (Spain) and Herbert Zeisel (Germany).