The calculations required to find new drugs are beyond the reach of the human mind: by some estimates, there are more potential molecules to evaluate than atoms in the solar system. Searching such a vast space would take current computer systems an enormous amount of time, but this is changing with a new generation of hardware being deployed at Argonne National Laboratory, near Chicago.
The greatest advance so far has come from deep learning algorithms, a class of machine-learning methods built from multiple layers of processing. These algorithms excel at quickly finding patterns in large amounts of data. Alongside these software improvements, new hardware designs promise to multiply the possibilities many times over.
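To make the "multiple layers" idea concrete, here is a minimal sketch of a deep model's forward pass: each layer applies a linear transform followed by a nonlinearity. The layer sizes, weights, and inputs are purely illustrative assumptions, not taken from any real drug-screening model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0.0, x)

def forward(x, weights):
    """Pass input x through each layer in turn."""
    for W in weights[:-1]:
        x = relu(x @ W)       # hidden layers: linear transform + nonlinearity
    return x @ weights[-1]    # final layer: raw output score

# Three layers of weights: 8 input features -> 16 -> 16 -> 1 output score.
layer_sizes = [8, 16, 16, 1]
weights = [rng.normal(size=(a, b)) for a, b in zip(layer_sizes, layer_sizes[1:])]

batch = rng.normal(size=(4, 8))    # four example inputs
scores = forward(batch, weights)
print(scores.shape)                # one score per input
```

Training such a model means adjusting the weights so the scores match known data, which is exactly the step the new hardware is designed to accelerate.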
Argonne has announced that it is already testing such designs on a new computer built by the start-up Cerebras, which promises to accelerate algorithm training by several orders of magnitude. This computer is part of a new generation of hardware specialized for artificial intelligence.
Currently, the most common chips used in deep learning are known as graphics processing units (GPUs). These chips were traditionally built for video games and computer-generated graphics, and therein lies their biggest stumbling block: they are general-purpose. While GPUs have driven this decade's artificial intelligence revolution, their designs are not optimized for the task.
This inefficiency in hardware design not only limits the speed at which chips can run deep learning algorithms; it also consumes huge amounts of energy in the process. Many specialized chips, moreover, are optimized for commercial deep learning applications, such as computer vision and natural language processing, and may not work as well on the kinds of data found in scientific research.
By comparison, new chip architectures, such as the one being tested at Argonne, have the potential to train deep learning algorithms up to 1,000 times faster than GPUs while using far less energy.
For now, thanks to the size of the Cerebras chip, which is larger than an iPad and holds millions of transistors to perform large amounts of calculations, there is no need to connect several smaller processors together, which would slow down model training. Tests have already cut algorithm training time from weeks to hours. Most interestingly, Argonne has been testing the system in research aimed at developing new cancer drugs.
The goal is to design a deep learning model that can predict how a tumour would respond to a drug or combination of drugs. This would involve developing an AI model to predict the properties of millions of molecular combinations. It is like looking for a needle in a haystack, but thanks to this new architecture, which improves not only the software but also the hardware, everything can now be done much faster.
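The needle-in-a-haystack step can be sketched as a screening loop: score a large batch of candidate combinations with a trained model and keep only the most promising. Everything here is a hypothetical stand-in: the candidate count, the feature vectors, and the linear "model" (Argonne's actual models are deep networks trained on real molecular and tumour data).

```python
import numpy as np

rng = np.random.default_rng(1)

n_candidates = 100_000           # stand-in for "millions of combinations"
n_features = 32                  # hypothetical molecular descriptors per candidate
candidates = rng.normal(size=(n_candidates, n_features))

# Placeholder for a trained model: a simple linear scorer.
trained_weights = rng.normal(size=n_features)

def predicted_response(x, w):
    """Higher score = stronger predicted tumour response (illustrative only)."""
    return x @ w

scores = predicted_response(candidates, trained_weights)
top = np.argsort(scores)[-10:][::-1]   # indices of the ten most promising candidates
print(top.shape)
```

The scoring itself is cheap once the model is trained; the expensive part, and the one the new hardware speeds up, is training a model accurate enough that the top-ranked candidates are worth taking to the lab.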
Cerebras is not the only company to have taken advantage of these new architectures: start-ups such as Graphcore, SambaNova and Groq are also in the game. In a very short time, all of this could bring us discoveries in the field of medicine that would otherwise have taken decades to arrive.