As artificial intelligence has advanced, major technology companies have increased their investment in deep learning, and the National Science Foundation (NSF) has followed suit: it is now funding researchers at several U.S. universities to run deep learning algorithms on FPGAs and supercomputers. These projects represent only one trend, but with major technology companies commercializing the technology and deep learning spreading into university research centers and national laboratories, its outlook is strongly positive.
Machine learning has made great strides in the past few years, thanks in large part to new technologies for scaling compute-intensive workloads. NSF's latest round of funding suggests that what we have seen so far may be just the tip of the iceberg, as researchers work to extend deep learning to more computers and newer processors.
A particularly interesting project at Stony Brook University challenges conventional wisdom by showing that FPGAs (field-programmable gate arrays) can outperform GPUs: the researchers found that deep learning algorithms can run faster and more efficiently on FPGAs.
According to the project summary, the researchers expect the slowest parts of the algorithm to run significantly faster on the FPGA than on the GPU, while the fastest parts will achieve similar performance on the FPGA at far lower power consumption.
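The tradeoff described in the summary (similar speed at much lower power) comes down to performance per watt. A minimal sketch of the arithmetic, with purely hypothetical throughput and power figures that are not from the Stony Brook study:

```python
# Hypothetical numbers for illustration only -- not measurements from the study.
def perf_per_watt(throughput_gflops, power_watts):
    """Performance per watt, the efficiency metric behind the FPGA claim."""
    return throughput_gflops / power_watts

gpu = perf_per_watt(throughput_gflops=1000.0, power_watts=250.0)  # 4.0 GFLOPS/W
fpga = perf_per_watt(throughput_gflops=800.0, power_watts=40.0)   # 20.0 GFLOPS/W

# Even at 20% lower raw throughput, the FPGA is 5x more efficient here.
print(gpu, fpga)
```

Under these assumed numbers, the FPGA loses slightly on raw throughput but wins decisively on efficiency, which is the point the project summary makes.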
In fact, the idea of running these models on hardware other than GPUs is not new. IBM, for example, recently caused a stir with a new brain-inspired chip that it claims is well suited to neural networks and other cognitive-inspired workloads. And in July, Microsoft demonstrated its Adam project, which reworked a popular deep learning technique to run on general-purpose Intel CPUs.
Thanks to their customizability, FPGAs have unique advantages. In June, Microsoft explained how it speeds up Bing search by offloading parts of the processing pipeline to FPGAs. Later that month, at Gigaom's Structure conference, Intel announced a forthcoming hybrid chip architecture that places an FPGA alongside the CPU (the two actually share memory), aimed primarily at specialized big data workloads and use cases like Microsoft Bing's.
However, FPGAs are not the only potential infrastructure for deep learning models. NSF is also sponsoring researchers at New York University to test deep learning algorithms and other workloads over Ethernet-based remote direct memory access (RDMA), a technology widely used in supercomputers that is now making its way into enterprise systems. RDMA speeds up data transfer between computers by writing messages directly into remote memory, bypassing the CPU, switches, and other components that would otherwise delay the process.
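Real RDMA requires NIC hardware support and libraries such as libibverbs, so it cannot be shown in a few portable lines. As a loose, single-process illustration of the zero-copy idea that makes RDMA fast, Python's `memoryview` exposes a buffer without duplicating it, whereas a plain `bytes` slice allocates a fresh copy:

```python
# Loose single-process analogy for the zero-copy idea behind RDMA;
# this is NOT RDMA, just a demonstration of copy vs. no-copy access.
data = bytearray(b"deep learning payload")

# Copying path: slicing bytes allocates a new buffer (extra CPU work).
copied = bytes(data)[:4]

# Zero-copy path: a memoryview references the same underlying buffer.
view = memoryview(data)[:4]

data[0:4] = b"DEEP"   # mutate the original buffer in place
print(bytes(copied))  # b'deep' -- the copy is stale
print(bytes(view))    # b'DEEP' -- the view sees the update; no copy was made
```

The stale copy versus the live view captures, in miniature, why skipping intermediate copies (as RDMA does across machines) saves both time and CPU cycles.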
Speaking of supercomputers, another new NSF-funded project, led by machine learning expert Andrew Ng of Stanford University (also of Baidu and Coursera), supercomputing expert Jack Dongarra of the University of Tennessee, and Geoffrey Fox of Indiana University, aims to make deep learning models programmable in Python and bring them to supercomputers and scalable cloud systems. The project, which received nearly $1 million in NSF funding, is called the Rapid Python Deep Learning Infrastructure (RaPyDLI).
RaPyDLI will be built as a set of open source modules accessible from a Python user interface but implemented in C/C++ or Java environments on the largest supercomputers or clouds, with support for interactive analysis and visualization. It will support GPU accelerators and Intel Phi coprocessors, along with a broad range of storage technologies including files, NoSQL, HDFS, and databases.
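The design described here, a Python user interface driving compiled C/C++ kernels, follows a well-established pattern. As a generic sketch of that pattern (not RaPyDLI's actual API, which the summary does not detail), Python's standard `ctypes` module can call a function from the C math library directly:

```python
import ctypes
import ctypes.util

# Locate the C math library (falls back to the C runtime where sqrt lives there).
libm_path = ctypes.util.find_library("m") or ctypes.util.find_library("c")
libm = ctypes.CDLL(libm_path)

# Declare the C signature: double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

# The "Python user interface": a plain Python call, compiled code underneath.
print(libm.sqrt(2.0))
```

Projects like RaPyDLI layer a friendlier interface on top, but the core idea is the same: Python handles orchestration while performance-critical kernels run as native code.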
All of this work aims to make deep learning algorithms easier to use and faster to run. These three projects are only a small sample, but if the techniques prove useful to technology giants commercially, or find a home in research centers and national laboratories, computers could become far better at solving genuinely complex real-world problems.