Making applications perform better on Intel® architecture-based clusters, with the flexibility to choose among multiple fabrics
- Now optimized for 2nd Generation Intel® Xeon Phi™ processors and Intel® Omni-Path Architecture
- Designed and developed for high scalability
- Supports the latest MPI-3.1 standard
- MPICH ABI compatibility
Deliver Flexible, Efficient, and Scalable Cluster Messaging
Intel® MPI Library focuses on enabling MPI applications to perform better on clusters based on Intel® architecture by implementing the high-performance MPI-3.1 standard on multiple fabrics. Quickly deliver maximum end-user performance, even if you change or upgrade to new interconnects, without requiring changes to the software or operating environment.
Use this high-performance message-passing library to develop applications that can run on multiple cluster interconnects chosen by the user at runtime. Benefit from a free runtime environment kit for products developed with Intel MPI Library. Get excellent performance for enterprise, divisional, departmental, workgroup, and personal high-performance computing.
Intel MPI Library is available as part of Intel® Parallel Studio XE Cluster Edition and as a free stand-alone version. A license purchase includes Priority Support.
“Fast and accurate state-of-the-art general-purpose CFD solvers are the focus at S & I Engineering Solutions Pvt. Ltd. Scalability and efficiency are key to us when it comes to our choice and use of MPI libraries. The Intel® MPI Library has enabled us to scale to over 10k cores with high efficiency and performance.”
Nikhil Vijay Shende, Director,
S & I Engineering Solutions Pvt. Ltd.
- Scaling verified up to 150,000 processes
- Thread safety allows you to trace hybrid multithreaded MPI applications for optimal performance on multi- and many-core Intel® architecture
- Improved startup scalability through the mpiexec.hydra process manager
- Low-latency MPI implementation, up to two times as fast as alternative MPI libraries
- Enable the optimized shared-memory dynamic connection mode for large SMP nodes
- Increase performance with improved DAPL, OFA, and TMI fabric support
- Accelerate your applications using the enhanced MPI tuning utility
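The tuning utility mentioned above is invoked as mpitune; a typical workflow records tuned settings once and reuses them at launch. The sketch below is illustrative only (the application name and process count are placeholders, and flag spellings should be checked against your installed version's documentation):

```shell
# Cluster-wide tuning: run once after installation; results are stored
# alongside the Intel MPI installation for later reuse.
mpitune

# Application-specific tuning for one command line (paths are placeholders):
mpitune -a \"mpirun -n 32 ./myapp\" -of ./myapp.conf

# Launch with the recorded tuned settings:
mpirun -tune ./myapp.conf -n 32 ./myapp
```

Cluster-wide tuning can take considerable time, so it is usually done once by an administrator rather than per job.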
Interconnect Independence & Flexible Runtime Fabric Selection
- Support for high-performance interconnects, including InfiniBand* and Myrinet*, as well as TCP, shared memory, and others
- Efficiently work through the Direct Access Programming Library (DAPL*), Open Fabrics Association (OFA*), and Tag Matching Interface (TMI*), making it easy to test and run applications on a variety of network fabrics
- Optimizations for all levels of cluster fabrics: from shared memory through Ethernet and RDMA-based fabrics to tag-matching interconnects
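Runtime fabric selection is typically driven by the I_MPI_FABRICS environment variable, whose value names the intra-node and inter-node transports. A minimal sketch, with fabric names drawn from the list above (process counts and the application name are placeholders):

```shell
# Shared memory within a node, a DAPL-capable RDMA fabric between nodes:
export I_MPI_FABRICS=shm:dapl
mpirun -n 64 ./app

# Fall back to TCP sockets everywhere (e.g., plain Ethernet),
# set per-job via -genv instead of exporting:
mpirun -genv I_MPI_FABRICS tcp -n 64 ./app
```

The same binary runs in both cases; only the runtime selection changes.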
MPI 3.0 Standard Support: The release of the MPI-3.0 standard is the next major evolution of the Message Passing Interface. Significant changes to remote memory access (RMA) one-sided communications, the addition of non-blocking collective operations, and support for large-count messages greater than 2 GB enhance usability and performance. Now available in Intel® MPI Library 5.0.

Binary Compatibility: Intel® MPI Library 5.0 offers binary compatibility with existing MPI-1.x and MPI-2.x applications. Even if you are not ready to move to the new standard, you can still take advantage of the latest Intel® MPI Library 5.0 performance improvements without recompiling. Furthermore, Intel is an active collaborator in the MPICH ABI Compatibility Initiative, ensuring that any MPICH-compiled code can use the Intel MPI Library runtimes.

Support for Mixed Operating Systems: Run a single MPI job on a cluster with mixed operating systems (Windows* and Linux*) under the Hydra process manager. Get more flexibility in job deployment with this added functionality.

Latest Processor Support (Haswell, Ivy Bridge, Intel® Many Integrated Core Architecture): Intel consistently offers the first set of tools to take advantage of the latest performance enhancements in the newest Intel products, while preserving compatibility with older Intel and compatible processors. New support includes AVX2, TSX, FMA3, and AVX-512.
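As an illustration of the non-blocking collectives added in MPI-3, the sketch below overlaps an MPI_Ibcast with independent local work. It is a minimal example of the standard API, not Intel-specific code; it needs an MPI implementation to build (e.g., with an mpicc wrapper) and run under mpirun.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, data = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        data = 42;  /* root fills the broadcast buffer */

    /* Start the broadcast without blocking; MPI-3 addition. */
    MPI_Request req;
    MPI_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);

    /* ...independent computation can overlap the communication here... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* broadcast is now complete */
    printf("rank %d received %d\n", rank, data);

    MPI_Finalize();
    return 0;
}
```

Every rank must eventually complete the collective (here via MPI_Wait) before reusing the buffer.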
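Because the runtimes follow the MPICH ABI, a binary built against MPICH can be pointed at the Intel MPI runtime without recompiling, simply by putting Intel MPI first on the library search path. A minimal sketch, assuming a default Linux installation layout (the exact path varies by version, and the binary name is a placeholder):

```shell
# mpivars.sh ships with Intel MPI and prepends its runtime to PATH and
# LD_LIBRARY_PATH; substitute your actual installation path and version.
source /opt/intel/impi/<version>/intel64/bin/mpivars.sh

# The MPICH-built binary now resolves its MPI symbols against Intel MPI:
mpirun -n 16 ./app_built_with_mpich
```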
Implementing the high-performance MPI-3.0 specification on multiple fabrics, Intel® MPI Library 5.0 for Windows* and Linux* focuses on making applications perform better on Intel® architecture-based clusters. Intel® MPI Library 5.0 enables you to quickly deliver maximum end-user performance, even if you change or upgrade to new interconnects, without requiring major modifications to the software or the operating environment. Intel also provides a free runtime environment kit for products developed with Intel® MPI Library.
An optimized shared-memory path for multicore platforms allows higher communication throughput and lower latencies. The native InfiniBand* interface (OFED verbs) also provides lower latencies. Multi-rail capability delivers higher bandwidth and faster interprocess communication, and Tag Matching Interface (TMI) support provides higher performance on Intel® True Scale, QLogic* PSM, and Myricom* MX solutions.
Intel® MPI Library 5.0 Supports Multiple Hardware Fabrics
Whether you need to run over TCP sockets, shared memory, or one of many Remote Direct Memory Access (RDMA)-based interconnects, including InfiniBand*, Intel® MPI Library 5.0 covers all your configurations by providing an accelerated, universal multi-fabric layer for fast interconnects via the Direct Access Programming Library (DAPL*) or Open Fabrics Association (OFA*) methodology. Develop MPI code independent of the fabric, knowing it will run efficiently on whatever network the user chooses at runtime.
Additionally, Intel® MPI Library 5.0 provides new levels of performance and flexibility for applications through improved interconnect support for Intel® True Scale, Myrinet* MX, and QLogic* PSM interfaces; faster on-node messaging; and an application tuning capability that adjusts to the cluster architecture and application structure.
Intel® MPI Library 5.0 establishes connections dynamically, and only when needed, which reduces the memory footprint. It also automatically chooses the fastest transport available. Memory requirements are further reduced by several methods, including a two-phase communication-buffer enlargement capability that allocates only the memory space actually required.
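The lazy connection behavior described above is exposed through a documented environment variable; a sketch of toggling it explicitly (the value shown reflects the default behavior rather than a tuning recommendation, and the process count is a placeholder):

```shell
# Establish connections on first use rather than all at startup,
# keeping per-process memory low at large scale:
export I_MPI_DYNAMIC_CONNECTION=1
mpirun -n 1024 ./app
```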
Several suites are available that combine the tools to build, verify, and tune your application. The products covered in this product brief are highlighted in blue. Named-user or multi-user licenses, along with volume, academic, and student discounts, are available.
Processor support: Validated for use with multiple generations of Intel® and compatible processors, including but not limited to: 2nd Generation Intel® Core™ processors, Intel® Core™2 processors, Intel® Core™ processors, Intel® Xeon® processors, and Intel® Xeon Phi™ coprocessors

Operating systems: Windows* and Linux*

Programming languages: Natively supports C, C++, and Fortran development

System requirements: Refer to www.intel.com/software/products/systemrequirements/ for details on hardware and software requirements.

Support: A free Runtime Environment Kit is available to run applications that were developed using Intel® MPI Library. All product updates, Intel® Premier Support services, and Intel® Support Forums are included for one year. Intel® Premier Support gives you confidential support, technical notes, application notes, and the latest documentation. Join the Intel® Support Forums community to learn, contribute, or just browse! http://software.intel.com/en-us/forums