To secure a place in AI, Intel cannot rely on hardware alone. Delivering strong performance through software optimization, and making application development more convenient, will be the keys to the success of its strategy.
Part I: Intel AI Strategy Full Analysis (1): Extending the four product lines
[Intel's four levels of AI software support] Intel supports AI applications at four levels: low-level primitive instructions built into the processor software stack, such as MKL-DNN; optimized code contributed to deep learning frameworks; code provided or contributed to big data analytics platforms such as Hadoop and Spark; and integrated software development tools, including the Deep Learning SDK.
Image source: iThome
Anticipating the computing demand that artificial intelligence will bring, Intel has not only launched a range of server-side processors in response. At the Intel AI Day event held in November 2016, the company announced it would develop deep learning applications more actively, integrating libraries of computation and communication routines into its processors as basic instructions (primitives). On the software side, Intel has begun providing support for AI application development that includes link libraries, programming language support, platforms, software development kits, and development frameworks.
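Whether a given math library is actually backed by Intel's optimized kernels depends on how it was built. As a small illustration, the sketch below inspects NumPy's build configuration for an MKL-linked BLAS; NumPy is used here only because its configuration is easy to query, but frameworks such as Caffe and Theano link MKL the same way at build time.

```python
import io
import contextlib
import numpy as np

def mkl_in_numpy_build() -> bool:
    """Return True if this NumPy build reports an MKL-backed BLAS.

    np.__config__.show() prints the libraries NumPy was compiled
    against; we capture that output and search it for 'mkl'.
    """
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        np.__config__.show()
    return "mkl" in buf.getvalue().lower()

print("MKL-backed NumPy:", mkl_in_numpy_build())
```

On builds linked against OpenBLAS or a reference BLAS this prints `False`; the point is simply that the acceleration the article describes is a build-time property, not a runtime switch.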
Coincidentally, IBM and Nvidia announced in the same November that they would jointly develop PowerAI, a deep learning software development kit. It pairs with IBM's OpenPOWER LC, a server purpose-built for AI applications that uses the Power computing architecture and Nvidia's NVLink interconnect technology, to deliver enterprise-class deep learning solutions.
Intel's AI Day announcement thus landed in the same week as IBM and Nvidia's PowerAI, and coincided with Supercomputing 2016, the global high-performance computing conference. The two camps are clearly in fierce competition, and their ultimate target is the same: the next wave of enterprise applications.
Intel AI Solution Overview
In its AI strategy, Intel is not merely providing a processor platform; software support matters even more. The company develops a variety of link libraries and software development platforms, actively supports multiple deep learning application frameworks, and also offers integrated solutions.
Actively supporting deep learning frameworks, with code optimized for IA performance
Taking the development frameworks used to build AI applications as an example, Intel is committed to contributing code optimized for the Intel Architecture (IA) to improve execution performance.
In deep learning, for example, Intel has already contributed IA-optimized code to several popular open source frameworks (Deep Learning Frameworks), including Caffe, Theano, Torch, MXNet, and Neon. As for TensorFlow, Intel and Google Cloud Platform officially announced a strategic partnership at the Intel AI Day event in November, with the optimized code expected to be released as early as the beginning of 2017.
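The payoff of such optimized kernels is easy to see even outside a framework. The sketch below compares a pure-Python dot product with NumPy's BLAS-backed one (MKL where NumPy is built against it); both give the same result, but the vectorized call is typically orders of magnitude faster, which is exactly the gap IA-optimized framework code closes.

```python
import time
import numpy as np

def naive_dot(a, b):
    """Reference dot product in pure Python -- no vectorization,
    no SIMD, no optimized BLAS kernel."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

n = 200_000
rng = np.random.default_rng(0)
a = rng.random(n)
b = rng.random(n)

t0 = time.perf_counter()
slow = naive_dot(a, b)
t1 = time.perf_counter()
fast = float(a @ b)            # dispatched to the linked BLAS library
t2 = time.perf_counter()

# Same mathematical result, very different cost
assert abs(slow - fast) < 1e-6 * n
print(f"pure Python: {t1 - t0:.4f}s  BLAS-backed: {t2 - t1:.6f}s")
```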
Caffe
Caffe is a framework developed by the Berkeley Vision and Learning Center (BVLC). Intel offers a special version, Intel Optimized Caffe, for its Xeon and Xeon Phi processor platforms; it integrates Intel's Math Kernel Library (MKL) and has been tuned for the AVX2 and AVX-512 instruction sets.
How much faster is the optimized Caffe? One example Intel cites is the illegal-video detection application of audio and video service provider LeTV Cloud. Running video classification training with Intel Optimized Caffe on a Xeon E5-2680 v3 server platform, LeTV Cloud achieved a 30-fold performance improvement over its previous setup: BVLC Caffe with the OpenBLAS library handling convolutional neural network computation.
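The AVX2 and AVX-512 tuning mentioned above only pays off on processors that actually expose those instruction sets. A quick way to check on Linux is to read the CPU flags; this is a sketch that reads /proc/cpuinfo and therefore returns an empty set on other platforms.

```python
from pathlib import Path

def x86_simd_flags() -> set:
    """Return the SIMD-related CPU flags visible on Linux.

    Parses /proc/cpuinfo (Linux-specific); on other platforms the
    file is absent and an empty set is returned.
    """
    path = Path("/proc/cpuinfo")
    if not path.exists():
        return set()
    for line in path.read_text().splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            # Keep only the vector/FMA extensions relevant here
            return {f for f in flags if f.startswith(("avx", "sse", "fma"))}
    return set()

flags = x86_simd_flags()
print("AVX2 available: ", "avx2" in flags)
print("AVX-512F available:", "avx512f" in flags)
```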
Theano
Theano is a deep learning framework developed by the LISA laboratory at the University of Montreal in Canada. Intel provides an improved library optimized for multi-core computing environments, which was applied in a case study at the Kyoto University Graduate School of Medicine. In their drug discovery work, accuracy reached 98.1%, and a Deep Belief Network (DBN) workload also achieved an 8-fold performance increase.
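Theano's multi-core behavior is controlled largely through environment variables and `THEANO_FLAGS`, which must be set before the framework is imported. The snippet below is a minimal sketch of enabling OpenMP and pinning MKL threads; the thread counts are illustrative, and the actual `import theano` is left commented out so the snippet runs without the framework installed.

```python
import os

# Thread settings must be exported before importing Theano:
# MKL and the OpenMP runtime read them at load time.
os.environ["OMP_NUM_THREADS"] = "8"      # OpenMP worker threads (illustrative)
os.environ["MKL_NUM_THREADS"] = "8"      # MKL internal threads (illustrative)

# openmp=True lets Theano parallelize element-wise ops across cores
os.environ["THEANO_FLAGS"] = "openmp=True,floatX=float32"

# import theano  # uncomment on a machine where Theano is installed

print(os.environ["THEANO_FLAGS"])
```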
Torch
Torch is another deep learning framework that many people follow closely; its current core maintainers include research scientists and software engineers from Facebook, Twitter, and Google DeepMind. Intel provides optimized support for the framework, integrating the MKL library to improve the efficiency of deep neural network computation on the server hardware side.
Here Intel cited its collaboration with Pikazo Software Inc. on back-end processing for an image style transfer app. Compared with the app's performance at launch, Pikazo's rendering speed has increased 28-fold, and the maximum image size it can process has grown 15-fold.
Neon
Neon is a deep learning library developed by Nervana Systems, the company Intel acquired, and it emphasizes ease of use and high performance. Its existing technology architecture can be divided into three layers: deep learning functions (algorithms), data models, and solutions.
At this Intel AI Day conference, Intel also announced the Intel Nervana Graph Compiler, positioned as a common foundation for the AI application layer that provides a higher-level description of deep learning application architectures.
Providing a high-level graph compiler for neural networks
In Neon's design, access to hardware resources goes through conversion APIs for each platform, so upper-layer applications need not consider hardware differences. Intel will now add a Nervana Graph Compiler layer between Neon's original architecture and the hardware conversion layer, providing high-level graph processing that lets neural network applications execute across multiple hardware devices simultaneously.
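The idea behind such a graph compiler can be sketched with a toy intermediate representation: the application describes its computation once as a graph of operations, and a backend decides how to execute it. The `Node`/`evaluate` names below are hypothetical illustrations of the concept, not the actual Nervana Graph API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    """One operation in a toy dataflow graph (hypothetical IR)."""
    op: str
    inputs: tuple = ()
    value: float = 0.0

# Graph-construction helpers: building a Node records the op,
# it does not execute anything yet.
def const(v): return Node("const", value=v)
def add(a, b): return Node("add", (a, b))
def mul(a, b): return Node("mul", (a, b))

def evaluate(node: Node) -> float:
    """A trivial interpreter 'backend' that walks the graph.

    A real graph compiler would instead lower this IR to kernels
    for whatever CPU, GPU, or accelerator is available.
    """
    if node.op == "const":
        return node.value
    x, y = (evaluate(i) for i in node.inputs)
    return x + y if node.op == "add" else x * y

# y = (2 + 3) * 4, described once, executable by any backend
y = mul(add(const(2.0), const(3.0)), const(4.0))
print(evaluate(y))  # prints 20.0
```

The design point is the separation: applications target the graph layer, and swapping hardware means swapping the backend, not rewriting the model.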