Are Fast Radio Bursts a sign of aliens?

In a paper recently published in The Astrophysical Journal Letters, Manasvi Lingam and Abraham Loeb, from the Harvard Center for Astrophysics, propose a rather intriguing explanation for the phenomenon known as Fast Radio Bursts (FRBs). FRBs are very powerful and very short bursts of radio waves, originating, as far as is known, in galaxies other than our own. FRBs last for only a few milliseconds but, during that interval, they shine with the power of millions of suns.

The origin of FRBs remains a mystery. Although they were first detected in 2007, in archived data taken in 2001, and a number of FRBs have been observed since then, no clear explanation of the phenomenon has yet been found. They could be emitted by supermassive neutron stars, or they could be the result of massive stellar flares, millions of times larger than anything observed in our Sun. All of these explanations, however, remain speculative, as they fail to fully account for the data and to explain the exact mechanisms that generate these massive bursts of energy.

The rather puzzling, and possibly far-fetched, explanation proposed by Lingam and Loeb is that these short-lived, intense pulses of radio waves could be artificial radio beams, used by advanced civilizations to power light sail starships.

Light sail starships have been discussed as one technology that could possibly be used to send missions to other stars. A light sail, attached to a starship, is deployed into space and accelerated by a powerful light source, such as a laser, fed by energy generated on the sending planet. Existing proposals are based on the idea of using very small starships, possibly weighing only a few grams, which could be accelerated by pointing a powerful laser at them. Such a starship could be accelerated to a significant fraction of the speed of light in only a few days and could reach the nearest stars in only a few decades.
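As a rough illustration of the physics involved (not a calculation from the paper), the sketch below uses the standard radiation-pressure relation, a = (1 + r)P/(cm), with assumed values for beam power, sail mass, and reflectivity:

```python
# Back-of-the-envelope light sail estimate. The beam power, sail mass, and
# reflectivity below are illustrative assumptions, not figures from the paper.

C = 299_792_458.0  # speed of light, m/s

def sail_acceleration(laser_power_w: float, mass_kg: float, reflectivity: float = 1.0) -> float:
    """Acceleration of a sail under radiation pressure: a = (1 + r) * P / (c * m)."""
    return (1.0 + reflectivity) * laser_power_w / (C * mass_kg)

# Assumed scenario: a 1 GW beam pushing a 10-gram sail plus payload.
a = sail_acceleration(laser_power_w=1e9, mass_kg=0.010)
target_speed = 0.2 * C                      # 20% of the speed of light
time_days = target_speed / a / 86_400       # non-relativistic approximation

print(f"acceleration ~ {a:.0f} m/s^2")      # about 667 m/s^2, roughly 68 g
print(f"time to 0.2c ~ {time_days:.1f} days")  # on the order of a day
```

Scaling the beam power up and the payload mass down shortens the acceleration phase accordingly.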

In their article, Lingam and Loeb discuss the rather intriguing idea that FRBs could be the flashes caused by such a technology, used by other civilizations to power their light sail spaceships. By analyzing the characteristics of the bursts, they conclude that these civilizations would have to use massive amounts of energy to produce the pulses, which would power starships weighing many thousands of tons. The characteristics of the bursts are, according to computations performed by the authors, compatible with an origin on a planet approximately the size of the Earth.

The authors use the available data to compute an expected number of FRB-enabled civilizations in the galaxy, under the assumption that such a technology is widespread throughout the universe. They reach the conclusion that a few thousand civilizations of this type in our galaxy would account for the expected frequency of observed FRBs. Needless to say, a vast number of assumptions is needed to reach such a conclusion, which is, as they point out, consistent with the values one obtains from the Drake equation with optimistic parameters.

The paper has been covered by many secondary sources, including The Economist and The Washington Post.

 

Image source: ESO. Available at Wikimedia Commons.

DNA as an efficient data storage medium

In an article recently published in the journal Science, Yaniv Erlich and Dina Zielinski showed that it is possible to store high-density digital information in DNA molecules and reliably retrieve it. As they report, they stored a complete operating system, a movie, and other files, totaling more than 2 MB, and managed to retrieve all the information with zero errors.

One of the critical factors for success is using appropriate coding methods: “Biochemical constraints dictate that DNA sequences with high GC content or long homopolymer runs (e.g., AAAAAA…) are undesirable, as they are difficult to synthesize and prone to sequencing errors.”
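As a toy illustration of these constraints (not the authors' actual screening code, and with thresholds chosen only for the example), one could screen candidate oligos like this:

```python
# Toy screening of candidate DNA oligos against the biochemical constraints
# mentioned in the paper: GC content and long homopolymer runs.
# The thresholds below are illustrative assumptions, not the paper's exact values.

import re

MAX_HOMOPOLYMER = 3        # assumed: reject runs longer than, e.g., AAA
GC_RANGE = (0.45, 0.55)    # assumed: keep GC content close to 50%

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in the sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def longest_homopolymer(seq: str) -> int:
    """Length of the longest run of identical bases."""
    return max(len(run.group(0)) for run in re.finditer(r"(.)\1*", seq))

def is_valid_oligo(seq: str) -> bool:
    low, high = GC_RANGE
    return low <= gc_content(seq) <= high and longest_homopolymer(seq) <= MAX_HOMOPOLYMER

print(is_valid_oligo("ACGTGCATTGCA"))   # True: balanced GC, short runs
print(is_valid_oligo("ACGAAAAAATGC"))   # False: long homopolymer run
```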

Using the so-called DNA Fountain strategy, they managed to overcome the limitations that arise from biochemical constraints and recovery errors. As they report in the Science article: “We devised a strategy for DNA storage, called DNA Fountain, that approaches the Shannon capacity while providing robustness against data corruption. Our strategy harnesses fountain codes, which have been developed for reliable and effective unicasting of information over channels that are subject to dropouts, such as mobile TV (20). In our design, we carefully adapted the power of fountain codes to overcome both oligo dropouts and the biochemical constraints of DNA storage.”

The encoded data was written using DNA synthesis and the information was retrieved by performing PCR and sequencing the resulting DNA using Illumina sequencers.

Other studies, including the pioneering one by Church and colleagues in 2012, predicted that DNA storage could theoretically achieve a maximum information density of 680 petabytes per gram of DNA. The authors managed to perfectly retrieve the information from a physical density of 215 petabytes per gram. For comparison, a flash memory device weighing about one gram can currently hold up to 128 GB, a density roughly six orders of magnitude lower.
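The comparison with flash memory can be sanity-checked with a couple of lines of arithmetic, using the figures quoted above:

```python
# Rough comparison of storage densities, using the figures quoted in the text.

dna_density_bytes_per_gram = 215e15      # 215 petabytes per gram (Erlich & Zielinski)
flash_density_bytes_per_gram = 128e9     # roughly 128 GB in a one-gram device

ratio = dna_density_bytes_per_gram / flash_density_bytes_per_gram
print(f"DNA is roughly {ratio:,.0f}x denser")   # about 1.7 million, i.e. ~6 orders of magnitude
```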

The authors report that the cost of storage and retrieval, which was $3500/Mbyte, still represents a major bottleneck.

Arrival of the Fittest: why are biological systems so robust?

In his 2014 book, Arrival of the Fittest, Andreas Wagner addresses important open questions in evolution: how are useful innovations created in biological systems, enabling natural selection to perform its magic of creating ever more complex organisms? Why is it that changes in these complex systems do not lead only to non-working systems? What is the origin of variation upon which natural selection acts?

Wagner’s main point is that “Natural selection can preserve innovations, but it cannot create them. Nature’s many innovations—some uncannily perfect—call for natural principles that accelerate life’s ability to innovate, its innovability.”

In fact, natural selection can apply selective pressure, favoring organisms that have useful phenotypic variations, caused by underlying genetic variations. However, for this to happen, genetic mutations and variations have to occur and, with sufficiently high frequency, they have to lead to viable, fitter organisms.

In most man-made systems, almost all changes to the original design lead to systems that do not work, or that perform much worse than the original. Performing almost any random change in a plane, a computer, or a program leads to a system that either performs worse than the original or fails catastrophically. Biological systems seem much more resilient, though. In this book, Wagner explores several types of (conceptual) biological networks: metabolic networks, protein interaction networks, and gene regulatory networks.

Each node in these networks corresponds to a genotype encoding one specific biological function: in the first case, a metabolic network, where chemical entities interact; in the second case, a protein interaction network, where proteins interact to create complex functions; and in the third case, a gene regulatory network, where genes regulate the expression of other genes. Two nodes in such a network are neighbors if the genotypes that encode them differ in only one DNA position.
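A toy sketch (not Wagner's actual models) makes the neighborhood notion concrete: treat each genotype as a string and connect two genotypes whenever they differ in exactly one position:

```python
# Toy genotype network: nodes are short DNA sequences and two nodes are
# neighbors when they differ in exactly one position (one point mutation).
# The sequences below are made up for illustration; this is not Wagner's model.

from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def genotype_network_edges(genotypes):
    """Edges connecting genotypes that differ in a single position."""
    return {(a, b) for a, b in combinations(genotypes, 2) if hamming_distance(a, b) == 1}

# A tiny set of genotypes assumed, for the sake of the example, to preserve the same function.
viable = ["ACGT", "ACGA", "ACTA", "TCGT"]
print(genotype_network_edges(viable))
# Three edges (in arbitrary order): ACGT-ACGA, ACGA-ACTA, ACGT-TCGT.
# A mutation can thus walk from TCGT to ACTA through intermediate viable genotypes.
```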

He concludes that these networks are robust to mutations and, therefore, open to innovation. In particular, he shows that one can traverse these networks, from node to neighboring node, while keeping the biological function unchanged, only slightly degraded, or even improved. Unlike man-made systems, biological systems are robust to change, and nature can experiment by tweaking them, in the process creating innovation and increasingly complex systems. This is how the amazingly complex richness of life has been created in a mere four billion years.

 

Taxing robots: a solution for unemployment or a recipe for economic disaster?

In a recent interview with Quartz, Bill Gates, who cannot exactly be called a Luddite, argued that a robot tax should be levied and used to help pay for jobs in healthcare and education, which are hard to automate and can only be done by humans (for now). Gates pointed out that humans are taxed on the salaries they make, unlike the robots that could replace them.

Gates argued that governments must take more control of the consequences of increased technological sophistication and not rely on businesses to redistribute the income that is generated by the new generation of robots and artificial intelligence systems.

Although the idea looks appealing, it is in reality equivalent to taxing capital, as this article in The Economist explains. Taxing capital investments will slow down increases in productivity and may lead, in the end, to poorer societies. Bill Gates’ point seems to be that investing in robots does indeed improve productivity, but also causes significant negative externalities, such as long-term unemployment and increased income inequality. These externalities might justify a specific tax on robots, aimed at alleviating them. In the end, it comes down to deciding whether economic growth is more important than ensuring everyone has a job.

As The Economist puts it: “Investments in robots can make human workers more productive rather than expendable; taxing them could leave the employees affected worse off. Particular workers may suffer by being displaced by robots, but workers as a whole might be better off because prices fall. Slowing the deployment of robots in health care and herding humans into such jobs might look like a useful way to maintain social stability. But if it means that health-care costs grow rapidly, gobbling up the gains in workers’ incomes, then the victory is Pyrrhic.”

Gates’ comments have been extensively analyzed in a number of articles, including this one by Yanis Varoufakis, a former finance minister of Greece, who argues that the robot tax will not solve the problem and is, at any rate, much worse than the existing alternative, a universal basic income.

The question of whether robots should be taxed is not a purely theoretical one. On February 17th, 2017, the European Parliament approved a resolution with recommendations to the European Commission, which is heavily based on the draft report proposed by the Committee on Legal Affairs, but leaves out the recommendation (included in the draft report) to consider a tax on robots. The decision to reject the robot tax was, unsurprisingly, well received by the robotics industry, as reported in this article by Reuters.


Image courtesy of NASA/Bill Stafford, James Blair and Regan Geeseman, available at Wikimedia Commons.

 

 

In memory of Raymond Smullyan: An unfortunate dualist

Mind-body dualists believe there are two different realms that define us. One is the physical realm, well studied and described by the laws of physics, while the other is the non-physical realm, where our selves exist. Our essence, our soul, if you will, exists in this non-physical realm, and it interacts with and controls our physical body through some as yet unexplained mechanism. Most religions are based on a dualist theory, including Christianity, Islam, and Hinduism.

On the other side of the discussion are monists, who do not believe in the existence of dual realities. The term monism designates the position that everything is either mental (idealism) or physical (materialism).

Raymond Smullyan, who died two days ago (February 10th, 2017), had a clear view on dualism, which he expressed in this story, published in his book This Book Needs No Title.

An Unfortunate Dualist

Once upon a time there was a dualist. He believed that mind and matter are separate substances. Just how they interacted he did not pretend to know-this was one of the “mysteries” of life. But he was sure they were quite separate substances. This dualist, unfortunately, led an unbearably painful life-not because of his philosophical beliefs, but for quite different reasons. And he had excellent empirical evidence that no respite was in sight for the rest of his life. He longed for nothing more than to die. But he was deterred from suicide by such reasons as: (1) he did not want to hurt other people by his death; (2) he was afraid suicide might be morally wrong; (3) he was afraid there might be an afterlife, and he did not want to risk the possibility of eternal punishment. So our poor dualist was quite desperate.

Then came the discovery of the miracle drug! Its effect on the taker was to annihilate the soul or mind entirely but to leave the body functioning exactly as before. Absolutely no observable change came over the taker; the body continued to act just as if it still had a soul. Not the closest friend or observer could possibly know that the taker had taken the drug, unless the taker informed him. Do you believe that such a drug is impossible in principle? Assuming you believe it possible, would you take it? Would you regard it as immoral? Is it tantamount to suicide? Is there anything in Scriptures forbidding the use of such a drug? Surely, the body of the taker can still fulfill all its responsibilities on earth. Another question: Suppose your spouse took such a drug, and you knew it. You would know that she (or he) no longer had a soul but acted just as if she did have one. Would you love your mate any less?

To return to the story, our dualist was, of course, delighted! Now he could annihilate himself (his soul, that is) in a way not subject to any of the foregoing objections. And so, for the first time in years, he went to bed with a light heart, saying: “Tomorrow morning I will go down to the drugstore and get the drug. My days of suffering are over at last!” With these thoughts, he fell peacefully asleep.

Now at this point a curious thing happened. A friend of the dualist who knew about this drug, and who knew of the sufferings of the dualist, decided to put him out of his misery. So in the middle of the night, while the dualist was fast asleep, the friend quietly stole into the house and injected the drug into his veins. The next morning the body of the dualist awoke-without any soul indeed-and the first thing it did was to go to the drugstore to get the drug. He took it home and, before taking it, said, “Now I shall be released.” So he took it and then waited the time interval in which it was supposed to work. At the end of the interval he angrily exclaimed: “Damn it, this stuff hasn’t helped at all! I still obviously have a soul and am suffering as much as ever!”

Doesn’t all this suggest that perhaps there might be something just a little wrong with dualism?

Raymond M. Smullyan

India considers the adoption of Universal Basic Income

A recent article published in The Economist reports that India is considering the adoption of a Universal Basic Income (UBI) scheme to replace a myriad of existing welfare systems.

Unlike the discussions that are taking place in other countries, this discussion about Universal Basic Income is not motivated by advances in technology and the fear of massive unemployment. The main aim of such a measure would be to replace many existing welfare mechanisms that are expensive, ineffective, and misused.

The scheme would provide every single citizen with a guaranteed basic income of 9 dollars a month (hardly a vast sum) and would cost between 6 and 7% of GDP. The 950 existing welfare schemes cost about 5% of GDP. Such a large-scale experiment would, at least, help clarify the advantages and disadvantages of UBI as a way to make sure every human being has a minimum income, independent of any other considerations or of the existence of jobs.
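A quick sanity check of those percentages, using round, assumed figures for India's population and GDP around 2017 (not numbers from the article), lands in the same ballpark:

```python
# Rough sanity check of the UBI cost figures. Population and GDP are assumed
# round numbers (circa 2017), not values taken from the article.

population = 1.3e9                 # assumed: about 1.3 billion people
gdp_usd = 2.3e12                   # assumed: GDP of roughly 2.3 trillion dollars
monthly_payment_usd = 9            # the figure quoted in the article

annual_cost = population * monthly_payment_usd * 12
print(f"annual cost ~ ${annual_cost / 1e9:.0f} billion")       # about $140 billion
print(f"share of GDP ~ {100 * annual_cost / gdp_usd:.1f}%")    # about 6% of GDP
```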


Photo by Amal Mongia, available at Multimedia Commons.

Will the fourth industrial revolution destroy or create jobs?

The impact of the fourth industrial revolution on jobs has been much discussed.

On one side, there are the traditional economists, who argue that technological advances have always created more and better jobs than the ones they destroyed. On the other side, there are those who believe that, with the arrival of artificial intelligence and robotics, there will simply not be enough jobs left that cannot be done by machines.

So, in this post, I try to present a balanced analysis of the subject, as deep as the space and time available allow.

Many studies have addressed the question of which jobs are more likely to be destroyed by automation. This study, by McKinsey, provides a very comprehensive analysis.


Recently, The Economist also published a fairly balanced analysis of the topic, already posted in this blog. In this analysis, The Economist makes a reference to a number of studies on the jobs that are at high risk but, in the end, it sides with the opinion that enough jobs will be created to replace the ones technology will destroy.

A number of books and articles have been written on the topic, including “Raising the Floor”, “The Wealth of Humans: Work, Power, and Status in the Twenty-first Century”, “The Second Machine Age”, and “No More Work”, some of them already reviewed in this blog.

In most cases, the authors of these books advocate the need for significant changes in the way society is organized and in the types of social contracts that need to be drawn. Guaranteeing everyone a universal basic income is a proposal that has become very popular, as a way to address the question of how humanity will live in a time when there are far fewer jobs to go around.

Further evidence that some deep change is in the cards is provided by data showing that, since the beginning of the twenty-first century, income has been moving away from jobs (and workers) towards capital (and large companies).


On the other side of the debate, there are many people who believe that humans will always be able to adapt and add value to society, regardless of what machines can or cannot do. David Autor, in his TED talk, makes a compelling point that it has been argued many times before that “this time is different”, and it never was.

Other articles, including this one in the Washington Post, argue that the fears are overblown. The robots will not be coming in large numbers, to replace humans. Not in the near future, anyway.

Other economists, such as Richard Freeman, in an article published in Harvard Magazine, agree and also believe that the fears are unwarranted: “We should worry less about the potential displacement of human labor by robots than about how to share fairly across society the prosperity that the robots produce.”

His point is that the problem is not so much the lack of jobs as the depression of wages. Jobs may still exist, but they will not be well paid, and the existing imbalances in income distribution will only become worse.

Maybe, in the end, this opinion represents a balanced synthesis of the two competing views: jobs will still exist, for anyone who wants to take them, but there will be competition (from robots and intelligent agents) for them, pushing down wages.