
Intel® oneAPI Base & HPC Toolkit

Take your HPC, enterprise, AI, and cloud applications to the max with fast, scalable, and portable parallel code

Intel® oneAPI Base & HPC Toolkit is a comprehensive suite of development tools that makes it fast and easy to build modern code that gets every last ounce of performance out of the newest Intel® processors in high-performance computing (HPC) platforms. The toolkit simplifies creating code with the latest techniques in vectorization, multi-threading, multi-node parallelization, memory optimization, and accelerator offloading.

Intel oneAPI Base & HPC Toolkit includes:

  • Industry-leading Compilers: C++ and Fortran compilers for building performance-oriented applications.
  • Performance Libraries: Includes Intel® Math Kernel Library (MKL), Intel® Data Analytics Acceleration Library (DAAL), and Intel® Threading Building Blocks (TBB).
  • Advanced Analysis Tools: For design, MPI, cluster tuning, and cluster health checking to enhance uptime and productivity.
  • MPI Library and Benchmarks: For scalable parallel applications.
  • Support for Standards: OpenMP support for multi-threading and vectorization.
  • Development Environment: Compatible with multiple IDEs and provides consistent programming with Intel® Advanced Vector Extensions (Intel® AVX-512).
  • Scalability and Latency: Features like the next-generation Intel® MPI Library for greater scalability and reduced latency.

The tools that comprised Intel Parallel Studio XE are now included in Intel oneAPI Base & HPC Toolkit. To upgrade your Intel Parallel Studio XE license, contact Alfasoft for the oneAPI upgrade promotion price.


What is Intel oneAPI Toolkit?

Break away from proprietary single-architecture languages and deliver parallel programming productivity with uncompromised performance for Intel® CPUs and accelerators. Take advantage of Priority Support for fast development with direct access to Intel engineers for technical questions.

Intel oneAPI Base & HPC Toolkit helps developers, researchers, and data scientists confidently develop performant code quickly and correctly, and scale compute-intensive workloads that exploit cutting-edge features of Intel CPUs, GPUs, FPGAs, and HPC clusters. It includes industry-leading C++ and Fortran compilers, standards-driven OpenMP support, an MPI library and benchmarks, and advanced analysis tools for design, MPI, cluster tuning, and cluster health checking to enhance uptime and productivity.

Intel oneAPI Base & HPC Toolkit includes all the Intel compilers (C/C++, Fortran, DPC++, etc.) and multiple-platform support (Windows, Linux, and macOS) to give you more flexibility for the future.

Build, analyze, optimize and scale fast HPC applications for various architectures with vectorization, multithreading, multi-node parallelization, and memory optimization techniques using the Intel oneAPI Base & HPC Toolkit.

Simplify cross-architecture HPC application deployment on Intel CPUs and accelerators using Intel’s industry-leading compilers and libraries. Efficiently create fast parallel code and boost application performance that exploits cutting-edge features of current and future Intel® architecture.

Quickly gauge application performance, resource use, and areas for optimization to ensure fast cross-architecture performance.

Learn how resource use impacts your code — including compute, memory, I/O, and more to make sound cross-architecture design decisions.

Supports HPC standards, including C/C++, Fortran, Python, OpenMP and MPI, for easy integration with legacy code.
Works seamlessly with other Intel tools to accelerate specialized workloads (AI analytics, rendering, deep learning inference, video processing, etc.).

Take advantage of Priority Support. Intel offers the ability to connect directly to Intel engineers for answers to technical questions.

Who Needs It

  • C, C++, Data Parallel C++, Fortran, Python, OpenMP, and MPI software developers and architects building HPC, enterprise, AI, and cloud solutions
  • Developers looking to maximize their software’s performance and flexibility across architectures on current and future Intel® platforms

What it Does

  • Creates fast parallel code. Boost application performance that scales on current and future Intel platforms with industry-leading compilers, performance libraries, performance profilers, and code and cluster analysis tools.
  • Builds code faster. Simplify the process of creating fast, scalable, reliable parallel code.
  • Delivers Priority Support. Connect directly to Intel’s engineers for confidential, quick answers to technical questions, access older versions of the products, and receive updates for a year.

oneAPI HPC Toolkit Licensing

The licensing and the naming of the editions have changed with the launch of oneAPI.

Supported platforms

Intel oneAPI Base & HPC Toolkit is offered with support for Windows and Linux. Support for both platforms is included with each license.


Intel oneAPI Base & HPC Toolkit is offered with support for Fortran, C++ and Data Parallel C++. Support for all described languages is included with each license.


Intel oneAPI Base & HPC Toolkit is offered in two editions: Single-Node and Multi-Node.

The target platforms for development and deployment can range from a single workstation to a multi-node cluster, requiring different levels of support. Choose the edition whose support best fits your target deployment model:

  • Intel® oneAPI Base and HPC Toolkit Single-Node: Target platform of shared memory systems including PCs, laptops, or workstations.
  • Intel® oneAPI Base and HPC Toolkit Multi-Node: Target platforms include shared memory systems such as PCs, laptops, and workstations, as well as distributed memory high-performance compute clusters.

Transition from IPS to Intel oneAPI

Converting from Intel Parallel Studio to oneAPI Base and HPC Toolkit

When upgrading, a new serial number is generated for the oneAPI license and the previous IPSXE serial number is marked as “retired”. All IPSXE owners will still be able to use all of their IPSXE tools after the upgrade.

In addition, these IPSXE licenses are in most cases not only extended to include new components; they usually also gain a complete tool suite, including both C++ and Fortran compilers, for an additional operating system.

If you have, for example, IPSXE Composer Edition for Fortran, Windows with active support, as a result of the transition to the Intel® oneAPI Base & HPC Toolkit, you will also receive a license for use under Linux, which then also includes the Intel Fortran Compiler and the MKL for Linux.

Intel also offers a special bundle for Fortran users. Intel Fortran Compilers includes the new LLVM-based Intel Fortran Compiler (ifx), the Intel Fortran Compiler Classic (ifort), and the Math Kernel Library, for Windows or Linux. The new Intel Fortran Compilers suite gives existing Parallel Studio XE Composer for Fortran users with current support the choice to upgrade to Intel Fortran Compilers OR to oneAPI Base & HPC Toolkit (Single-Node), depending on which components the developer needs.

Are you a current Intel Parallel Studio XE user? To minimize the cost and make the transition to oneAPI as smooth as possible, contact Alfasoft BEFORE you upgrade your license in the Intel Registration Center (IRC).


Components included with Intel oneAPI HPC Toolkit

  • Intel® oneAPI DPC++/C++ Compiler (DPCPP/ICX)
    A standards-based CPU, GPU, and FPGA compiler supporting Data Parallel C++, C++, C, SYCL, and OpenMP that leverages well-proven LLVM compiler technology and Intel’s history of compiler leadership for performance. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® C++ Compiler Classic
    A standards-based C/C++ compiler supporting OpenMP focused on CPU development. Take advantage of more cores and built-in technologies in platforms based on Intel® CPU architectures. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® Fortran Compiler (IFX) for XPU development
    A standards-based CPU and GPU compiler supporting Fortran and OpenMP. Leverages well-proven LLVM compiler technology and Intel’s history of compiler leadership for performance. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® Fortran Compiler Classic
    A standards-based Fortran compiler supporting OpenMP focused on CPU development. Take advantage of more cores and built-in technologies in platforms based on Intel® CPU architectures. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® Cluster Checker
    Verify that cluster components work together seamlessly for optimal performance, improved uptime, and lower total cost of ownership.
  • Intel® VTune Profiler
    Performance analysis tool for serial and multithreaded applications. Intel VTune Profiler optimizes application performance, system performance, and system configuration for HPC, cloud, IoT, media, storage, and more.
  • Intel® Inspector
    Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly errors later.
  • Intel® MPI Library
    Deliver flexible, efficient, scalable cluster messaging on Intel® architecture.
  • Intel® Trace Analyzer and Collector
    Understand MPI application behaviour across its full runtime.
  • Intel® oneAPI DPC++ Library
    Speed up data parallel workloads with these key productivity algorithms and functions.
  • Intel® oneAPI Threading Building Blocks
    Simplify parallelism with this advanced threading and memory-management template library.
  • Intel® oneAPI Math Kernel Library
    Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
  • Intel® oneAPI Data Analytics Library
    Boost machine learning and data analytics performance.
  • Intel® oneAPI Video Processing Library
    Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing.
  • Intel® Advisor
    Design code for efficient vectorization, threading and offloading to accelerators.
  • Intel® Distribution for Python
    Achieve fast math-intensive workload performance without code changes for data science and machine learning problems.
  • Intel® DPC++ Compatibility Tool
    Migrate legacy CUDA code to a multi-platform program in DPC++ code with this assistant.
  • Intel® Integrated Performance Primitives
    Speed performance of imaging, signal processing, data compression, cryptography, and more.



Intel oneAPI Toolkit 2024.1 launched!

A concise summary of the updates in the Intel® oneAPI Base Toolkit 2024.1

  • Intel GPU Support: The 2024 update introduces Intel GPU support to the Intel® Distribution for Python within the oneAPI toolkit. This enhancement enables developers to harness the power of Intel GPUs for various workloads.
  • ML.NET Framework Integration: The toolkit now includes support for Microsoft’s open-source ML.NET framework. This integration facilitates machine learning tasks and expands the toolkit’s capabilities.
  • Deep Neural Network Library Optimizations: The 2024 release brings performance optimizations to the Deep Neural Network Library (DNNL) within oneAPI. These improvements enhance the efficiency of deep learning computations.
  • Collective Communications Library Enhancements: Developers can benefit from improved hardware resource utilization in the Collective Communications Library (CCL) of oneAPI. This library is essential for parallel communication in distributed computing.
  • CUDA-to-SYCL Migration: The update includes expanded CUDA-to-SYCL migration capabilities. This feature simplifies transitioning CUDA code to the SYCL programming model.
  • AI Acceleration Improvements: Various enhancements have been made to accelerate AI workloads within the toolkit.
  • oneMKL Math Library: The oneMKL math library now integrates RNG offload, which aids in efficient random number generation.
  • Code Profiling for NPUs: Developers can now profile code offloaded to NPUs (Neural Processing Units).
  • C++ Parallel STL Support: In preview, the toolkit offers C++ parallel STL support for easy GPU offloading.

Intel oneAPI Toolkit 2024 helps accelerate HPC, AI and rendering applications

Intel oneAPI Toolkit 2024 is a set of developer tools with new features and enhancements that accelerate HPC, AI, and rendering applications on various platforms, including Intel CPUs, GPUs, and AI accelerators. The tools are based on oneAPI, an open and unified programming model that supports C++, OpenMP, SYCL, Fortran, MPI, and Python. Developers can access the tools from Intel or via popular repositories and containers, and they can also use the Intel Developer Cloud to evaluate the latest Intel hardware and software.

What challenges are developers and ISVs facing?

Many data-centric workloads run best when deployed across a mix of architectures (CPU, GPU, FPGA, AI and other accelerators). However, different architectures typically require unique languages, tools and libraries, adding complexity for developers and limiting code reuse. This makes it difficult to take advantage of multiarchitecture systems and new architectures, optimize application performance, and maintain code efficiently and cost-effectively.

Intel® Software Development Tools help developers build and optimise high-performance applications efficiently through a complete set of advanced compilers, libraries, optimised frameworks, and analysis, debug and porting tools. Built on a legacy with decades of expertise, the tools support:

Familiar languages: C, C++, SYCL, Fortran, and Python, plus standards including MPI and OpenMP, providing full continuity with existing code.

Intel CPUs, GPUs, and FPGAs: enabling unique hardware features such as those for performance, high-bandwidth memory, AI, and rendering.

Highlights of the 2024 Intel oneAPI Toolkit release

Intel DPC++/C++ Compiler improves productivity and code offload with a near complete SYCL 2020 implementation, an easier way to adapt C++ code using virtual functions to run with SYCL device offload, and improved error messaging and handling for SYCL and OpenMP code.

Intel Fortran Compiler provides initial Fortran 2023 standards support and adds popular LLVM sanitisers to catch various errors and bugs on CPU.

Intel oneAPI Math Kernel Library integrates vector math optimizations into RNGs for HPC simulations, statistical sampling, and more on X86 CPUs and Intel GPUs. It also delivers high-performance benchmarks optimized for Intel Xeon CPU Max Series and Intel Data Center GPU Max Series.

Intel oneAPI Data Analytics Library optimises big data analysis with integration into Microsoft’s open source ML.Net framework to build and ship machine learning models.

Intel oneAPI Deep Neural Network Library streamlines storage efficiency and optimises performance on Intel Xeon processors. It also enhances compatibility with graph compiler capabilities and advances code generation through the Compiler Xbyak backend and accelerated sparse_tensor_dense_matmul() performance on Intel Xeon processors with TensorFlow 2.5.

Intel oneAPI Threading Building Blocks (oneTBB) can be compiled on WebAssembly (Wasm) using Emscripten, facilitating the library’s use by applications running on a web browser.

Intel MPI Library simplifies large MPI message passing and provides more granular process grouping by using MPI Sessions. It also improves MPI application performance on systems with nodes that include the Intel Data Center GPU Max Series through efficient message passing and collective operations infrastructure. It enables Fortran codes to use larger data sets through seamless support of 8-byte integers with native support of ILP64. It also supports systems with software management stacks based on the PMIx standard.

Intel oneAPI Collective Communications Library boosts performance for distributed AI workloads through better utilisation of hardware resources.

Intel Integrated Performance Primitives helps users securely transmit data faster with Intel AVX2 and Intel AVX-512 optimisations for the AES-GCM and RSA algorithms. It also provides fast performance for the image and signal processing domains with Intel AVX-512 optimisations for various functions.

Buy the Intel oneAPI 2024 Toolkits with Priority Support

Alfasoft is an Intel Software Elite Reseller. We can support you with Intel oneAPI licensing advice and discounts. Purchase the Intel oneAPI 2024 Toolkits with Priority Support to gain access to private, dedicated Intel engineer support, access to earlier versions, plus many other benefits.

Please get in touch with Alfasoft via email or call us if you have any queries or need a quote, and we will reach out to you.



Common Hardware Requirements

CPU Processor Requirements

Systems based on the Intel® 64 architectures listed below are supported as host and target platforms.

  • Intel® Core™ processor family or higher
  • Intel® Xeon® processor family
  • Intel® Xeon® Scalable processor family

Requirements for Accelerators

  • Integrated GEN9 or higher GPUs, including the latest Intel® Iris® Xe MAX graphics
  • FPGA Card: see Intel(R) DPC++ Compiler System Requirements.

Disk Space Requirements

  • ~3 GB of disk space (minimum) if installing only the compiler and its libraries: Intel oneAPI DPC++/C++ Compiler, Intel® DPC++ Compatibility Tool, Intel® oneAPI DPC++ Library, and Intel® oneAPI Threading Building Blocks
  • ~24 GB of disk space (maximum) if installing all components

During the installation process, the installer may need up to 6 GB of additional temporary disk storage to manage the download and intermediate installation files.

Memory Requirements

  • 8 GB RAM recommended
  • For FPGA development, see Intel(R) DPC++ Compiler System Requirements.

Common Software Requirements

Operating System Requirements in Intel oneAPI Base & HPC Toolkit

The operating systems listed below are supported on Intel® 64 Architecture. Individual tools may support additional operating systems and architecture configurations. See the individual tool release notes for full details.

For developing applications that offload to accelerators such as a GPU or FPGA, a specific GPU driver version is required for the supported operating system. See the “Install Intel GPU Drivers” section of the Installation Guide for Intel® oneAPI Toolkits for up-to-date information.

For Linux

  • GNU* Bash is required for local installation and for setting up the environment to use the toolkit.

For CPU Host/Target Support

  • Intel® Xeon® processors
  • Intel® Xeon® Scalable processors
  • Intel® Core™ processors

For GPU Accelerator Support

  • Intel® Processor Graphics Gen9 and above
  • Xe architecture

Languages in Intel oneAPI Base & HPC Toolkit

  • Data Parallel C++ (DPC++) and SYCL (note: requires the Intel oneAPI Base Toolkit to be installed)
  • C and C++

Operating systems in Intel oneAPI Base & HPC Toolkit

  • Windows
  • Linux
  • macOS (Not all Intel oneAPI HPC Toolkit components are available for macOS. The following components are included: Intel® C++ Compiler Classic and Intel® Fortran Compiler Classic.)

Development environments in Intel oneAPI Base & HPC Toolkit

  • Compatible with compilers from Microsoft, GCC, Intel, and others that follow established language standards
  • Windows: Microsoft Visual Studio
  • Linux: Eclipse*

Distributed environments

  • MPI

Open Fabrics Interfaces (OFI) framework implementation supporting the following:

  • InfiniBand*
  • iWARP, RDMA over Converged Ethernet (RoCE)
  • Amazon Web Services Elastic Fabric Adapter (AWS EFA)
  • Intel® Omni-Path Architecture (Intel® OPA)
  • Ethernet, IP over InfiniBand (IPoIB), IP over Intel OPA



oneAPI and the transition from Intel Parallel Studio

Intel oneAPI transition matrix

Single-Node vs Multi-Node

  • Single-Node: supported for use on laptop, notebook, desktop, PC, or workstation
  • Multi-Node: supported for use on laptops, notebooks, desktops, PCs, workstations, and distributed memory systems, i.e. HPC clusters
    – The applications the developer is writing will determine which option they should purchase

License Types for oneAPI HPC

Named user

  • 10 Workgroup (former 2-seat Concurrent). Support for up to 10 developers
  • 25 Workgroup (former 5-seat Concurrent). Support for up to 25 developers
  • 50 Workgroup. Support for up to 50 developers

With the oneAPI Toolkits, the need for a license server has been removed. Users must comply with Intel’s EULA and purchase the number of seats to match the number of users who need to use the software concurrently.

  • Academic
    Degree-granting institutions only; colleges and universities that teach students in higher education toward earning a degree
  • Commercial
    All other users, including government and non-profit institutions that are not degree-granting
  • Languages
    No language selection. Intel oneAPI Base & HPC Toolkit includes Fortran, C++, Data Parallel C++, and Python in the same SKU.
  • Operating Systems
    All supported operating systems are included in the same SKU (Windows, Linux, macOS). No OS selection.

Network licenses change with Intel oneAPI HPC Toolkit

Network licenses (concurrent) are still available as 2 and 5 concurrent user licenses, but these are named differently with oneAPI:

  • 10 Workgroup means up to 10 developers can get support. This license type was previously called 2 Concurrent.
  • 25 Workgroup means up to 25 developers can get support. This license type was previously called 5 Concurrent.
  • 50 Workgroup means up to 50 developers can get support.

In other words, for oneAPI a maximum number of developers is set for each network license; these developers can then be registered in the IRC and are entitled to request Intel technical support in the confidential Intel Online Service Center. FlexLM is no longer included for the network licenses (as was the case with IPSXE). Licensing and usage restrictions now follow from the legal basis, i.e. the license agreement (EULA).

Performance Analyzers & Libraries

Analyzers (i.e. VTune Profiler and the MPI tools) and libraries will no longer be available as standalone products.

  • New licenses will only be available through the purchase of oneAPI Base or oneAPI Base & HPC Toolkit Single Node or Multi-Node, depending on the product.
  • Upgrade Promotions to oneAPI are available for existing users with active support

Existing Users – what’s next?

All registered users of Intel Parallel Studio and Intel System Studio (IPSXE or ISS) with active support will receive an email from Intel with the option to upgrade to the oneAPI product corresponding to their existing product free of charge.

  • If the user chooses the free upgrade option, their existing legacy product will be retired, and a new serial number will be issued to the same support end date as their legacy product.
  • Once upgraded to oneAPI, users will be eligible for all version updates & upgrades that Intel releases for oneAPI while their support agreement is active.
  • For continued support, eligible users can purchase oneAPI SSR SKUs
  • If the user does not choose to upgrade to oneAPI, they can continue using their existing product, but they will not receive any new version updates (except security bug fixes); they can only stay on the version they currently own.
  • Users with active support will continue to have the free upgrade link available to them while support for their license remains active. Once support expires, the free upgrade option will go away.
  • Users who have support in the post-expiry renewal range can purchase a post-expiry SSR SKU for their existing product to get their support active. Then, they will receive a link to upgrade to oneAPI for free as part of their service & support agreement.

oneAPI HPC Upgrade Promotion: available to IPSXE users with active support at a discounted price for a limited time

  • The Upgrade Promo SKU might be a more cost-effective option for existing Composer Edition users to purchase instead of selecting the free oneAPI Upgrade offer, depending on the type of license they own.
  • Users need to be aware that the Upgrade Promotions are ONLY eligible for purchase BEFORE they click the “Upgrade” to oneAPI if they are currently under active support.
  • Once a user upgrades for free, the upgrade promotion goes away.
  • New Update: IPSXE users whose support expired less than 6 months ago can also purchase a oneAPI Upgrade Promotion SKU.
  • With this purchase, a new serial number will be issued with 12 months of support extended from their current support expiration date.

oneAPI HPC Upgrade

  • Upgrade to another suite with additional capabilities, i.e. from oneAPI Base Toolkit to oneAPI Base & HPC Toolkit
  • Upgrade from oneAPI Single-Node to Multi-Node
  • With this purchase, a new serial number will be issued with 12 months of support extended from their current support expiration date.


Intel oneAPI Toolkits Comparison

Intel oneAPI Toolkits comparison matrix


Intel oneAPI Base & HPC Toolkit Support

Alfasoft offers paid licenses of Intel oneAPI software with support.

  • 1 & 3-year support options will be available for all toolkits
  • Additional years of support can be added with the corresponding pre-expiry renewal SKU

Support with Intel oneAPI Base & HPC Toolkit includes

  • Submit questions, problems, and other technical support requests
  • Monitor issues you’ve submitted previously
  • Direct and private interaction with Intel’s support engineers, including the ability to submit confidential support requests
  • Accelerated response time for technical questions and other product needs
  • Free download access to all new product updates and continued access to older versions of the product
  • Priority assistance for escalated defects and feature requests
  • Access to a vast library of self-help documentation built from decades of experience with creating high-performance code
  • Access to Intel public community forums supported by community technical experts and monitored by Intel engineers

Without paid support, support for free tools is only available through public community forums and only for the latest version update of the current year.