Google recently announced the Tensor Processing Unit (TPU), an application-specific integrated circuit (ASIC) tailored for machine learning applications that, according to the company, delivers an order-of-magnitude improvement in performance per watt over existing general-purpose processors.

The chip, developed specifically to speed up increasingly common machine learning workloads, has already powered a number of state-of-the-art applications, including AlphaGo and StreetView. According to Google, this kind of workload is more tolerant of reduced numerical precision and can therefore be implemented using fewer transistors per operation. Because of this, Google engineers were able to squeeze more operations per second out of each transistor.
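The idea behind reduced precision can be sketched with a toy linear quantization scheme in NumPy. This is an illustrative assumption, not Google's actual quantization method: it maps float32 values onto 8-bit integers and shows that the round trip loses little accuracy despite using a quarter of the bits.

```python
import numpy as np

# Toy sketch (not the TPU's actual scheme): linear quantization of
# float32 values down to 8-bit integers, the kind of reduced
# precision the article says ML workloads tolerate.
def quantize(x, bits=8):
    # Map the float range onto 2**bits evenly spaced integer levels.
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2**bits - 1)
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    return q.astype(np.float32) * scale + lo

x = np.linspace(-1.0, 1.0, 11, dtype=np.float32)
q, scale, lo = quantize(x)
x_hat = dequantize(q, scale, lo)
# Worst-case round-trip error is bounded by half a quantization step.
print(np.max(np.abs(x - x_hat)))
```

In a hardware context, the payoff is that an 8-bit multiplier needs far fewer transistors than a 32-bit floating-point unit, which is the trade-off the article describes.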

The new chip is tailored for TensorFlow, an open source library that performs numerical computation using data flow graphs. Each node in the graph represents one mathematical operation that acts on the tensors that come in through the graph edges.
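The dataflow-graph model can be illustrated with a minimal toy implementation in plain Python. This is not TensorFlow's actual API, just a sketch of the concept: each node holds one operation and evaluates by pulling the tensors produced by its input edges.

```python
import numpy as np

# Toy dataflow graph (illustrative only; not TensorFlow's API).
# Each node applies one mathematical operation to the tensors that
# arrive along its incoming edges.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def evaluate(self):
        # Recursively evaluate upstream nodes, then apply this op.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    return Node(lambda: np.asarray(value, dtype=np.float32))

# Build the graph y = (a + b) * c; nothing is computed until
# evaluate() is called, mirroring deferred graph execution.
a, b, c = constant([1.0, 2.0]), constant([3.0, 4.0]), constant(2.0)
y = Node(np.multiply, Node(np.add, a, b), c)

print(y.evaluate())  # -> [ 8. 12.]
```

Separating graph construction from evaluation is what lets a system like TensorFlow hand the whole graph to specialized hardware such as the TPU.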

Google stated that the TPU represents a leap of roughly ten years into the future with respect to Moore's Law, which has recently been viewed as finally coming to a halt. Developments like this, whether alternative architectures or alternative ways of performing computation, are likely to keep delivering exponential improvements in computing power for years to come, in line with Moore's Law.