The new version of the machine learning tool

PyTorch 2.0 is the latest version of the popular open-source deep learning framework. This release introduces significant performance improvements and better compatibility with multicore GPUs. Developers can also use the new profiling tools to optimize the performance of their models.
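As a sketch of what that profiling looks like in practice, assuming the `torch.profiler` API shipped with recent PyTorch releases is the tool meant here (the model and shapes below are arbitrary placeholders):

```python
import torch
from torch.profiler import ProfilerActivity, profile

# Toy model and input, purely for illustration.
model = torch.nn.Linear(128, 64)
x = torch.randn(32, 128)

# Record CPU-side operator timings for one forward pass.
with profile(activities=[ProfilerActivity.CPU]) as prof:
    model(x)

# Print the five most expensive operators.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```

The resulting table breaks execution time down per operator, which is where optimization work usually starts.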

PyTorch, one of the most popular tools for machine learning, has just released version 2.0. This new version offers many exciting enhancements and features for data scientists, developers and researchers. With state-of-the-art features, PyTorch 2.0 could well be the answer to the complex challenges of deep learning.

PyTorch 2.0: a major breakthrough for machine learning

PyTorch 2.0 is now available to all after several months of previewing. This release represents a major step forward from PyTorch 1.0, which Facebook launched in 2018. The project has benefited from incremental improvements over the years, and it is one of the open-source frameworks offering tensor computation for deep learning.

The creation of the PyTorch Foundation in September 2022 was intended to foster collaboration and contributions by establishing more transparent governance.

Programmers worked hard to contribute new code and functionality to the open-source initiative. Within this framework, the project has benefited from the participation of 428 different contributors. PyTorch 2.0 focuses on performance, and the new "Accelerated Transformers" feature is a notable example.
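Much of that performance focus centres on `torch.compile`, the flagship API added in 2.0, which JIT-compiles ordinary Python models. A minimal sketch (the function `f` is an arbitrary illustration, and `backend="eager"` is used only so the snippet runs without a GPU toolchain; the default inductor backend is what delivers the speedups):

```python
import torch

def f(x):
    # An arbitrary elementwise computation for illustration.
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

# torch.compile wraps the function without changing how it is called.
compiled_f = torch.compile(f, backend="eager")

x = torch.randn(8)
out = compiled_f(x)
```

The compiled function is a drop-in replacement for the original, which is what makes the feature easy to adopt in existing codebases.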

Modern language models and generative AI are largely powered by these transformers, which help language models bridge the gap between concepts. Accelerating these transformers dramatically improves training speed and overall performance. With PyTorch 2.0, developers can now create better and faster deep learning models, enabling the development of even more innovative applications in artificial intelligence.
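Concretely, the Accelerated Transformers work surfaces through `torch.nn.functional.scaled_dot_product_attention`, a fused attention kernel added in 2.0. A minimal sketch, with toy shapes chosen arbitrarily for illustration:

```python
import torch
import torch.nn.functional as F

# Toy attention inputs: batch 2, 4 heads, sequence length 8, head dim 16.
q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)

# One call replaces the manual softmax(q @ k^T / sqrt(d)) @ v pattern;
# PyTorch dispatches to the fastest fused backend available on the hardware.
out = F.scaled_dot_product_attention(q, k, v)
```

Because the dispatch happens inside the library, models built on this call pick up faster kernels (such as FlashAttention-style implementations, where supported) without code changes.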

Intel: a fervent supporter of PyTorch 2.0

PyTorch 2.0 is supported by numerous contributors, including Intel, the leading silicon manufacturer. The computer processor giant is a fervent supporter of free software and open source, and backed the transition of PyTorch to an open governance model. As a major contributor to PyTorch, Intel is actively involved in the community.

Furthermore, although AI and ML are often associated with GPUs, CPUs also play an important role. This is one of the reasons why Intel is investing heavily in this component. PyTorch has features that enable optimal selection of the quantization method suited to a given hardware platform.
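The backend selection the paragraph alludes to is exposed through `torch.backends.quantized`; a sketch that picks an engine appropriate to the host CPU:

```python
import torch

# Engines available in this build: typically 'fbgemm' on x86 and
# 'qnnpack' on ARM, plus 'onednn' on recent builds.
engines = torch.backends.quantized.supported_engines
print(engines)

# Select FBGEMM when the build supports it (i.e. on x86 CPUs).
if "fbgemm" in engines:
    torch.backends.quantized.engine = "fbgemm"
print(torch.backends.quantized.engine)
```

Quantized operators executed after this point use the selected engine, so one line of configuration retargets a quantized model to the local hardware.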

The unified backend developed by Intel remains compatible with the FBGEMM method. The latter is a matrix calculation library developed by Facebook for x86 CPU architectures. In addition, Intel’s oneDNN technology is available for the TensorFlow open-source library dedicated to machine learning. In short, Intel is making a significant contribution to the development of PyTorch 2.0, providing tools and technologies to enhance the library’s capabilities.
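To illustrate where FBGEMM comes into play: dynamic quantization of linear layers on CPU routes its int8 matrix multiplies through FBGEMM on x86 (or QNNPACK on ARM). A minimal sketch with an arbitrary toy model:

```python
import torch

# A toy float model; the architecture is arbitrary for illustration.
model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU())

# Convert Linear weights to int8; activations are quantized on the fly,
# with the matrix multiplies executed by the active quantized CPU backend.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(4, 64))
```

The quantized model keeps the same interface as the original, trading a small amount of accuracy for lower memory use and faster CPU inference.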