Intel® oneAPI Base & HPC Toolkit is a comprehensive suite of development tools that make it fast and easy to build modern code that gets every last ounce of performance out of the newest Intel® processors in high-performance computing (HPC) platforms. Intel® oneAPI Base & HPC Toolkit simplifies creating code with the latest techniques in vectorization, multi-threading, multi-node, memory optimization, and accelerator offloading.
- A cross-architecture language, Data Parallel C++ (DPC++), along with C++ and Python (a minimal example follows this list).
- Best-in-class compilers and performance libraries.
- Advanced analyzers and debuggers.
- A CUDA code migration tool.
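To make the cross-architecture idea concrete, here is a minimal Data Parallel C++ (SYCL) sketch of a vector addition that the Intel® oneAPI DPC++/C++ Compiler can target at a CPU, GPU, or FPGA emulator. It is an illustrative sketch only; the file name and the icpx -fsycl compile line are typical usage rather than a prescribed workflow.

```cpp
// vector_add.cpp -- compile with, e.g.:  icpx -fsycl vector_add.cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q;  // the runtime picks a default device (CPU, GPU, ...)
    {
        sycl::buffer<float> A(a.data(), sycl::range<1>(n));
        sycl::buffer<float> B(b.data(), sycl::range<1>(n));
        sycl::buffer<float> C(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                wc[i] = ra[i] + rb[i];  // same kernel source for every device
            });
        });
    }  // buffer destruction copies the result back to the host vectors

    std::cout << "c[0] = " << c[0] << "\n";  // expected: 3
}
```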
About
What is Intel oneAPI?
Break away from proprietary single-architecture languages and deliver parallel programming productivity with uncompromised performance for Intel® CPUs and accelerators. Take advantage of Priority Support for fast development with direct access to Intel engineers for technical questions.
Intel oneAPI Base & HPC Toolkit helps developers, researchers, and data scientists confidently develop performant code quickly and correctly, and scale compute-intensive workloads that exploit cutting-edge features of Intel CPUs, GPUs, FPGAs, and HPC clusters. It includes industry-leading C++ and Fortran compilers, standards-driven OpenMP support, an MPI library and benchmarks, and advanced analysis tools for design, MPI, cluster tuning, and cluster health checking to enhance uptime and productivity.
Intel oneAPI Base & HPC Toolkit includes all the Intel compilers (C/C++, Fortran, DPC++, etc.) and multi-platform support (Windows, Linux, and macOS) to give you more flexibility for the future.
Build, analyze, optimize and scale fast HPC applications for various architectures with vectorization, multithreading, multi-node parallelization, and memory optimization techniques using the Intel oneAPI Base & HPC Toolkit.
- Build
Simplify cross-architecture HPC application deployment on Intel CPUs and accelerators using Intel’s industry-leading compilers and libraries. Efficiently create fast parallel code and boost application performance that exploits cutting-edge features of current and future Intel® architecture.
- Analyze
Quickly gauge application performance, resource use, and areas for optimization to ensure fast cross-architecture performance.
- Optimize
Learn how resource use impacts your code, including compute, memory, I/O, and more, to make sound cross-architecture design decisions.
- Supports HPC standards, including C/C++, Fortran, Python, OpenMP, and MPI, for easy integration with legacy code (see the hybrid MPI + OpenMP sketch after this list).
- Works seamlessly with other Intel tools to accelerate specialized workloads (AI analytics, rendering, deep learning inference, video processing, etc.).
- Take advantage of Priority Support. Intel offers the ability to connect directly to Intel engineers for answers to technical questions.
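As an illustration of the standards support mentioned above, the sketch below combines MPI ranks with OpenMP threads in plain C++. It assumes the Intel MPI compiler wrapper mpiicpc and the -qopenmp flag, which are the usual way to build such hybrid codes with this toolkit, but any standards-conforming MPI and OpenMP implementation would do.

```cpp
// hybrid_hello.cpp -- compile with, e.g.:  mpiicpc -qopenmp hybrid_hello.cpp
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each MPI rank spawns an OpenMP team; run with, e.g., mpirun -n 4 ./a.out
    #pragma omp parallel
    {
        std::printf("rank %d of %d, thread %d of %d\n",
                    rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```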
Who Needs It
- C, C++, Data Parallel C++, Fortran, Python, OpenMP, and MPI software developers and architects building HPC, enterprise, AI, and cloud solutions.
- Developers looking to maximize their software’s performance and flexibility across architectures on current and future Intel® platforms.
What it Does
- Creates fast parallel code. Boost application performance that scales on current and future Intel platforms with industry-leading compilers, performance libraries, performance profilers, and code and cluster analysis tools.
- Builds code faster. Simplify the process of creating fast, scalable, reliable parallel code.
- Delivers Priority Support. Connect directly to Intel’s engineers for confidential, quick answers to technical questions, access older versions of the products, and receive updates for a year.
oneAPI Licensing
The licensing and the naming of the editions have changed with the launch of oneAPI.
Supported platforms
Intel oneAPI Base & HPC Toolkit is offered with support for Windows, Linux, and macOS. Support for all operating systems is included with each license.
Languages
Intel oneAPI Base & HPC Toolkit is offered with support for Fortran, C++ and Data Parallel C++. Support for all described languages is included with each license.
Editions
Intel oneAPI Base & HPC Toolkit is offered in two editions: Single-Node and Multi-Node.
The target platforms for development and deployment can range from a workstation to a multi-node cluster, which require different support efforts. Choose the paid product whose support best fits the deployment model you target:
- Intel® oneAPI Base and HPC Toolkit Single-Node: Target platform of shared memory systems including PCs, laptops, or workstations.
- Intel® oneAPI Base and HPC Toolkit Multi-Node: Target platforms of shared memory systems such as PCs, laptops, and workstations, as well as distributed memory high-performance compute clusters.

Converting from Intel Parallel Studio to oneAPI Base and HPC Toolkit
When upgrading, a new serial number is generated for the oneAPI license and the previous IPSXE serial number is marked as “retired”. All IPSXE owners will still be able to use all of their IPSXE tools after the upgrade.
In addition, these IPSXE licenses are in most cases not only extended to include new components; they also usually gain a complete tool suite, including both the C++ and Fortran compilers, for an additional operating system.
If you have, for example, IPSXE Composer Edition for Fortran, Windows with active support, as a result of the transition to the Intel® oneAPI Base & HPC Toolkit, you will also receive a license for use under Linux, which then also includes the Intel Fortran Compiler and the MKL for Linux.
Are you a current Intel Parallel Studio XE user? To minimize the cost and make the transition to oneAPI as smooth as possible, contact Alfasoft BEFORE you upgrade your license in IRC.
Upgrade Promotion
Customers who are planning a Support Service Renewal (SSR) of their IPSXE license should check whether a paid upgrade to oneAPI is cheaper than an IPSXE support and maintenance renewal. Please contact Alfasoft to check whether this is the case. Important: an “Upgrade Promotion” offer can only be used as long as your IPSXE license has not already been upgraded to oneAPI free of charge. In other words: do not follow the link to the free upgrade in the IRC; ask us beforehand if you are considering an SSR of your IPSXE license.
Read the Intel oneAPI Base and HPC Toolkit whitepaper for further details.
Included
What’s included in the Intel oneAPI Base & HPC Toolkit?
- Intel® oneAPI DPC++/C++ Compiler
A standards-based CPU, GPU, and FPGA compiler supporting Data Parallel C++, C++, C, SYCL, and OpenMP that leverages well-proven LLVM compiler technology and Intel’s history of compiler leadership for performance. Experience seamless compatibility with popular compilers, development environments, and operating systems.
- Intel® C++ Compiler Classic
A standards-based C/C++ compiler supporting OpenMP, focused on CPU development. Take advantage of more cores and built-in technologies in platforms based on Intel® CPU architectures. Experience seamless compatibility with popular compilers, development environments, and operating systems.
- Intel® Fortran Compiler (Beta) for XPU development
A standards-based CPU and GPU compiler supporting Fortran and OpenMP that leverages well-proven LLVM compiler technology and Intel’s history of compiler leadership for performance. Experience seamless compatibility with popular compilers, development environments, and operating systems.
- Intel® Fortran Compiler Classic
A standards-based Fortran compiler supporting OpenMP, focused on CPU development. Take advantage of more cores and built-in technologies in platforms based on Intel® CPU architectures. Experience seamless compatibility with popular compilers, development environments, and operating systems.
- Intel® Cluster Checker
Verify that cluster components work together seamlessly for optimal performance, improved uptime, and lower total cost of ownership.
- Intel® VTune Profiler
A performance analysis tool for serial and multithreaded applications. Intel VTune Profiler optimizes application performance, system performance, and system configuration for HPC, cloud, IoT, media, storage, and more.
- Intel® Inspector
Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly errors later.
- Intel® MPI Library
Deliver flexible, efficient, scalable cluster messaging on Intel® architecture.
- Intel® Trace Analyzer and Collector
Understand MPI application behaviour across its full runtime.
- Intel® oneAPI DPC++ Library
Speed up data parallel workloads with these key productivity algorithms and functions.
- Intel® oneAPI Threading Building Blocks
Simplify parallelism with this advanced threading and memory-management template library.
- Intel® oneAPI Math Kernel Library
Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
- Intel® oneAPI Data Analytics Library
Boost machine learning and data analytics performance.
- Intel® oneAPI Video Processing Library
Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing.
- Intel® Advisor
Design code for efficient vectorization, threading, and offloading to accelerators.
- Intel® Distribution for Python
Achieve fast math-intensive workload performance without code changes for data science and machine learning problems.
- Intel® DPC++ Compatibility Tool
Migrate legacy CUDA code to a multi-platform DPC++ program with this assistant.
- Intel® Integrated Performance Primitives
Speed the performance of imaging, signal processing, data compression, cryptography, and more (a short usage sketch follows this list).
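As a taste of how one of these libraries is used, here is a hedged Intel® IPP sketch that adds two float vectors with a CPU-dispatched primitive. Linking details (for example against the ipps and ippcore libraries) depend on your installation and are mentioned only as a common setup.

```cpp
// ipp_add.cpp -- typically linked against ipps and ippcore
#include <ipp.h>
#include <iostream>
#include <vector>

int main() {
    ippInit();  // select the optimized code path for the running CPU

    std::vector<Ipp32f> src1(8, 1.5f), src2(8, 2.5f), dst(8, 0.0f);
    IppStatus st = ippsAdd_32f(src1.data(), src2.data(), dst.data(),
                               static_cast<int>(dst.size()));

    std::cout << ippGetStatusString(st)
              << ", dst[0] = " << dst[0] << "\n";  // expected: 4
    return 0;
}
```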
News
New features in Intel oneAPI

2022 RELEASE HIGHLIGHTS
- Intel DPC++/C++ Compiler includes new SYCL 2020 features, new OpenMP 5.x features, and new platform optimizations for recently released and upcoming Intel Xeon Scalable Processors.
- Intel Fortran Compiler, based on LLVM technology, is now fully production-ready for CPUs and GPUs.
- Intel® VTune Profiler introduces support for Intel® microarchitecture code named Alder Lake in Microarchitecture Exploration and Memory Access analyses.
- Intel® oneAPI Deep Neural Networks Library now allows you to seamlessly take advantage of new Intel AMX instructions in future Intel Xeon Scalable Processors for int8 inference and bfloat16 training AI workloads.
- The Intel Fortran Compiler now supports OpenMP 5.1 compute offload to GPUs, near complete support for Fortran 2003, and support for the most commonly used Fortran 2008 features.
- Intel MPI includes new performance optimizations for the Google Cloud Platform fabric (OFI/tcp).
- Intel® processor optimizations in compilers and performance libraries, and continued CPU performance enhancements. The Intel® oneAPI DPC++/C++ Compiler adds new SYCL 2020 features and extensions, and expands its OpenMP 5.x support.
- Added support for Microsoft Visual Studio 2022.
- Developers are now able to use tools via the intel-meta layer that is provided through OpenEmbedded or Yocto Project to accelerate development of optimized Yocto Project Linux kernels and applications.
- Python 3.9 is now supported with new data parallel Python technology providing zero copy data exchange performance across packages.
- The Intel® Neural Compressor expands functionality with new algorithms and methods for PyTorch and TensorFlow and a domain-specific acceleration library for NLP models.
- The Intel® Optimization for PyTorch now supports Python 3.9 and Microsoft’s Windows Subsystem for Linux (WSL).
- Now included in the toolkit is the Intel® Implicit SPMD Program Compiler (ISPC), a variant of the C programming language with extensions for “single program, multiple data” (SPMD) programming, for increased rendering performance.
- Intel® Open Image Denoise now supports 16 bit half-precision floating point images and new buffer functions for better overall performance.
- The latest version of Intel® OSPRay improves motion blur with quaternions and rolling shutter support, while Intel® OSPRay Studio improves Python binding usage and enhances glTF support.
- 32-bit Intel® Integrated Performance Primitives (Intel® IPP) and 32-bit Intel® oneAPI Math Kernel Library (oneMKL) on Windows* OS are provided separately as part of the Intel® oneAPI Base Toolkit 32-bit package. It is available as an add-on that requires Intel® oneAPI Base Toolkit to be installed first.
- Added support for Microsoft* Windows Subsystem for Linux 2 (WSL2) on CPU and for limited components on GPU. For more information on usage, refer to Use Intel® oneAPI Toolkits on Microsoft* Windows Subsystem for Linux 2 (WSL 2).
NEW FEATURES (COMPONENT DESCRIPTION)
Intel® oneAPI Fortran Compiler 2022.0
- The Intel® oneAPI product packages provide two Fortran compilers. Intel® Fortran Compiler Classic (ifort) provides best-in-class Fortran language features and performance for CPU. The Intel® Fortran Compiler (ifx) enables developers needing OpenMP* offload to Intel GPUs. The OpenMP 5.0, 5.1 GPU offload features in ifx are not available in ifort. For calendar year 2022 ifort continues to be our best-in-class Fortran compiler for customers not needing GPU offload support. The default compiler for the Microsoft Visual Studio* environment is ifort.
- Our latest compiler, the Intel® Fortran Compiler (ifx), is production-ready for CPUs and GPUs. ifx is based on the Intel® Fortran Compiler Classic (ifort) front end and runtime libraries, but uses LLVM back-end compiler technology. In this initial release, ifx completely implements the Fortran 77, Fortran 90/95, Fortran 2003 (except parameterized derived types), and Fortran 2008 (except coarrays) language standards, along with OpenMP 4.5 and OpenMP 5.0/5.1 directives and offloading features. ifx is binary (.o/.obj) and module file (.mod) compatible: binaries and libraries generated with ifort can be linked with binaries and libraries built with ifx, and .mod files generated with one compiler can be used by the other (64-bit targets only). Both compilers use the same runtime libraries. ifx may or may not match the performance of ifort-compiled applications. Performance and Fortran standard language improvements will be coming in ifx with each update release throughout 2022.
- In this release, the ifx version number is 2022.0.0 and the ifort version number is 2021.5.0.
Intel® oneAPI DPC++/C++ Compiler 2022.0
- The Intel® oneAPI DPC++/C++ Compiler adds new SYCL 2020 features and extensions, and expands its OpenMP 5.x support (a minimal offload sketch follows these bullets).
- New platform optimizations for recently released and upcoming Intel Processors.
- Added support for Microsoft Visual Studio 2022.
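The OpenMP 5.x support called out above includes target offload. Below is a minimal, hedged C++ sketch; the icpx flags shown (-fiopenmp -fopenmp-targets=spir64) are the commonly documented options for offloading to Intel GPUs and may differ between compiler versions. If no offload device is available, the loop simply runs on the host.

```cpp
// saxpy_offload.cpp -- e.g.:  icpx -fiopenmp -fopenmp-targets=spir64 saxpy_offload.cpp
#include <cstdio>
#include <vector>

int main() {
    const int n = 1 << 20;
    std::vector<float> x(n, 1.0f);
    float* px = x.data();

    // OpenMP target offload: map the array to the device, run the loop there,
    // and map the results back to the host.
    #pragma omp target teams distribute parallel for map(tofrom: px[0:n])
    for (int i = 0; i < n; ++i)
        px[i] = 2.0f * px[i] + 1.0f;

    std::printf("x[0] = %.1f\n", x[0]);  // expected: 3.0
    return 0;
}
```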
Intel® oneAPI DPC++ Library (oneDPL) 2021.6
- Intel® oneAPI DPC++ Library adds serial versions of the following algorithms: for_each_n, copy, copy_backward, copy_if, copy_n, is_permutation, fill, fill_n, move and move_backward, allowing developers to invoke these functions directly in SYCL device code. For details, see Tested Standard C++ API References.
- This release adds the ability to use OpenMP for thread-level parallelism. Algorithms launched with dpl::execution::par/par_unseq policies can run on top of OpenMP parallel regions as an alternative to oneTBB or serial execution. This allows developers who already use OpenMP on multicore CPUs to also use oneDPL high-level parallel algorithms in their code without introducing extra dependencies and performance risks (see the sketch below).
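A hedged sketch of what that looks like in practice: a standard parallel algorithm invoked through oneDPL’s par_unseq policy. The backend-selection macro in the comment follows oneDPL’s documented conventions but may vary by release, so treat it as an assumption.

```cpp
// dpl_reduce.cpp -- oneDPL parallel algorithm on the host
// (defining ONEDPL_USE_OPENMP_BACKEND=1 is the documented way to prefer the
//  OpenMP backend over oneTBB; check your oneDPL version's release notes)
#include <oneapi/dpl/execution>
#include <oneapi/dpl/algorithm>
#include <oneapi/dpl/numeric>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> v(1'000'000, 1.0);

    // Parallel transform-reduce: doubles every element, then sums the results.
    double sum = oneapi::dpl::transform_reduce(
        oneapi::dpl::execution::par_unseq,
        v.begin(), v.end(), 0.0,
        std::plus<>(), [](double x) { return 2.0 * x; });

    std::cout << sum << "\n";  // expected: 2e+06
    return 0;
}
```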
Intel® DPC++ Compatibility Tool 2022.0
- Migration is improved for the CUB, CUDA Driver, and CUDA properties APIs.
- Introduced new capability to automatically generate makefiles for newly migrated DPC++ source files
Intel® oneAPI Math Kernel Library (oneMKL) 2022.0
- The new oneMKL support for BLAS and LAPACK LP64/ILP64 allows users to use both 32-bit integer interfaces and 64-bit integer interfaces in the same application (see the sketch after these bullets).
- Added new support for MKL_VERBOSE in FFT and for sparse matrix multiply for scientific computing.
- Intel® oneAPI Math Kernel Library optimizations for Intel® Xeon® and Xe architectures.
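For orientation, the sketch below calls the classic LP64/ILP64-sensitive BLAS entry point cblas_dgemm through oneMKL. Whether MKL_INT is 32-bit (LP64) or 64-bit (ILP64) is chosen at build and link time (for example -DMKL_ILP64 with the ilp64 interface library); the ability to mix both interfaces in one application is the new feature described above and is not shown here.

```cpp
// gemm.cpp -- small dense matrix multiply via the oneMKL CBLAS interface
#include <mkl.h>
#include <iostream>
#include <vector>

int main() {
    const MKL_INT m = 2, n = 2, k = 2;        // MKL_INT width depends on LP64 vs ILP64
    std::vector<double> A = {1, 2, 3, 4};     // 2x2, row-major
    std::vector<double> B = {5, 6, 7, 8};     // 2x2, row-major
    std::vector<double> C(m * n, 0.0);

    // C = 1.0 * A * B + 0.0 * C
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0, A.data(), k, B.data(), n, 0.0, C.data(), n);

    std::cout << "C = [" << C[0] << ", " << C[1] << "; "
              << C[2] << ", " << C[3] << "]\n";  // expected: [19, 22; 43, 50]
    return 0;
}
```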
Intel® oneAPI Threading Building Blocks (oneTBB) 2021.5
- Added support for Microsoft Visual Studio 2022* and Python 3.9*.
- Intel® oneAPI Threading Building Blocks improved its synchronization mechanism to reduce contention when multiple task arenas are used concurrently, allowing task arenas to be more independent and allowing more of them to execute concurrently without a performance impact (see the sketch below).
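A hedged illustration of the task_arena usage this improvement targets: two arenas with their own concurrency limits executing parallel loops independently.

```cpp
// arenas.cpp -- two concurrent oneTBB task arenas
#include <oneapi/tbb/task_arena.h>
#include <oneapi/tbb/parallel_for.h>
#include <vector>

int main() {
    std::vector<float> a(1 << 20, 1.0f), b(1 << 20, 1.0f);

    oneapi::tbb::task_arena arena_a(4);  // at most 4 worker threads
    oneapi::tbb::task_arena arena_b(2);  // at most 2 worker threads

    // Work submitted to each arena is isolated from the other arena's work.
    arena_a.execute([&] {
        oneapi::tbb::parallel_for(std::size_t(0), a.size(),
                                  [&](std::size_t i) { a[i] *= 2.0f; });
    });
    arena_b.execute([&] {
        oneapi::tbb::parallel_for(std::size_t(0), b.size(),
                                  [&](std::size_t i) { b[i] += 1.0f; });
    });
    return 0;
}
```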
Intel® Distribution for GDB* 2021.5
- Added support for Microsoft Visual Studio 2022. Automatically detect and alert developers of a debugger version mismatch between host and target system for Intel® Distribution of GDB in Microsoft Visual Studio.
- Improved GPU debugging experience by adding a new secure server connection mechanism for Intel® Distribution for GDB*.
Intel® Integrated Performance Primitives (Intel IPP) 2021.5
- Added optimizations to Intel® IPP Cryptography’s AES-GCM (Advanced Encryption Standard – Galois Counter Mode) for smaller packet sizes on 3rd Generation Intel® Xeon® Scalable processors (SPR).
- Intel® Integrated Performance Primitives image processing added new functionality for CT and MR machine data types by adding the 16s (signed short) data type with 3D resizing.
Intel® oneAPI Collective Communications Library (oneCCL) 2021.5
- Intel® oneAPI Collective Communications Library adds productivity enhancements: a new event-tracking mechanism based on SYCL events, and an OFI/verbs provider with Linux dmabuf support available directly from the package without needing to build it manually.
Intel® oneAPI Data Analytics Library (oneDAL) 2021.5
- The new Intel® Extension for Scikit-learn* t-SNE (Stochastic Neighbor Embedding) features enhance the developer’s ability to take big high-dimensional data and visualize it on a low-dimensional (think 2d/3d) map.
- Introduced Intel® oneAPI Data Analytics Library distributed support for DPC++ machine learning algorithms, including decision forest, DBSCAN, K-means and covariance.
Intel® oneAPI Deep Neural Networks Library (oneDNN) 2021.5
- Intel® oneAPI Deep Neural Networks Library now allows you to seamlessly take advantage of new Intel® AMX instructions in future Intel® Xeon® Scalable Processor for int8 inference and bfloat16 training AI workloads
Intel® oneAPI Video Processing Library (oneVPL) 2021.7
- The Intel® oneAPI Video Processing Library (oneVPL) now supports Python 3.7*.
- In addition, new features include C++ and Python binding updates that add improved properties, support for AV1 encode temporal layer parameter extensions, and an update to the sample tools.
Intel® Distribution for Python* 2022.0
- Intel® Distribution for Python now supports Python version 3.9
- The dpctl package offers developers increased debugging capabilities with improved error handling and reporting
- Data Parallel Python technology now provides zero-copy data exchange performance across packages
Intel® Advisor 2022.0
- Intel® Advisor has been updated to include recent versions of 3rd party components, which include functional and security updates.
- Now provides actionable recommendations to optimize GPU General Purpose Register Files (GRF) and more comprehensive analysis with expanded GPU memory and compute metrics.
Intel® VTune™ Profiler 2022.0
- Intel® VTune™ Profiler introduces support for Intel® microarchitecture code-named Alder Lake in Microarchitecture Exploration and Memory Access analyses.
- Profile DirectX applications on the CPU host to identify gaps between API calls and the reasons for such inefficiencies.
- Annotate your code and collect arbitrary statistics on FreeBSD OS with little to no overhead using the Instrumentation and Tracing Technology API (ITT API).
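A minimal sketch of the ITT API usage described above, assuming the ittnotify header and static library shipped with VTune are on the include and link paths; the domain and task names are arbitrary examples.

```cpp
// itt_annotated.cpp -- mark a region of interest for VTune via the ITT API
#include <ittnotify.h>

// Domain and task handles are created once and reused.
static __itt_domain* domain = __itt_domain_create("Example.Domain");
static __itt_string_handle* task_name = __itt_string_handle_create("heavy_loop");

void heavy_loop(double* data, int n) {
    __itt_task_begin(domain, __itt_null, __itt_null, task_name);  // region start
    for (int i = 0; i < n; ++i)
        data[i] = data[i] * data[i] + 1.0;
    __itt_task_end(domain);                                       // region end
}

int main() {
    double buf[1024] = {0.0};
    heavy_loop(buf, 1024);
    return 0;
}
```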
Diagnostics Utility for Intel® oneAPI Toolkits
- New supported operating systems:
- SLES* 15 SP3
- RHEL* 8.3
- Ability to update compatibility database from online storage.
- User experience improvements based on feedback from user studies.
- The utility’s source code is available as open source.
- New implemented checks:
- The oneapi_env_check shows the version information of the oneAPI products installed in the environment.
- The dependencies_check verifies the compatibility of oneAPI product versions and GPU driver versions.
- The debugger_check verifies if the environment is ready to use Intel® Distribution for GDB*.
Visual Studio Code Extensions for Intel® oneAPI Toolkits
- DevCloud Connector for Intel® oneAPI Toolkits
- Select a node with the desired hardware (shown with a short description) in the settings and connect to it.
- Error handling if the selected node is unavailable.
- The PBS job created by the extension is now named “vscode”.
- Connection time reduced to 30 sec (from 45-50 sec).
- GDB with GPU Debug Support for Intel® oneAPI Toolkits
- Offline help page for Intel® Distribution for GDB commands.
- Quick default debug configuration settings feature.
- Updates to address user experience issues.
- Environment Configurator for Intel® oneAPI Toolkits
- oneAPI environment initialization in Windows now works without administrator rights.
- Code Sample Browser for Intel® oneAPI Toolkits
- New command palette-based method of browsing samples.
- UI/UX improvement – auto-create a new folder for the selected samples.
- UI/UX improvement – updates to settings reflected in real-time.
- Analysis Configurator for Intel® oneAPI Toolkits
- Code completion snippets & hovers for FPGA attributes.
- Added automatic detection of where VTune and Advisor are installed.
Intel® FPGA Add-On for oneAPI Base Toolkit 2022.1 (Optional)
- The OpenCL runtime environment (part of the DPC++ runtime for FPGAs) is now open-sourced.
- Added support for the Intel® custom platforms with Intel® Quartus® Prime software version 21.3.
- Intel® FPGA Add-on for oneAPI Base Toolkit now supports “fast, flat BSP compile flow”, allowing BSP creators to reduce their BSP size and complexity, which leads to easier and quicker floor planning.
Deprecation Notices
- Microsoft Visual Studio* 2017 integration is deprecated and will be removed in a future release
System
System requirements
Common Hardware Requirements
CPU Processor Requirements
Systems based on the Intel® 64 architecture processor families listed below are supported as host and target platforms.
- Intel® Core™ processor family or higher
- Intel® Xeon® processor family
- Intel® Xeon® Scalable processor family
Requirements for Accelerators
- Integrated Gen9 or newer GPUs, including the latest Intel® Iris® Xe MAX graphics
- FPGA Card: see Intel(R) DPC++ Compiler System Requirements.
Disk Space Requirements
- ~3 GB of disk space (minimum) if installing only the compiler and its libraries: Intel oneAPI DPC++/C++ Compiler, Intel® DPC++ Compatibility Tool, Intel® oneAPI DPC++ Library, and Intel® oneAPI Threading Building Blocks
- A maximum of ~24 GB of disk space if installing all components
Memory Requirements
- 8 GB RAM recommended
- For FPGA development, see Intel(R) DPC++ Compiler System Requirements.
Common Software Requirements
Operating System Requirements
The operating systems listed below are supported on Intel® 64 Architecture. Individual tools may support additional operating systems and architecture configurations. See the individual tool release notes for full details.
For developing applications that offload to accelerators such as GPUs or FPGAs, a specific GPU driver version is required for the supported operating system. Please see the “Install Intel GPU Drivers” section of the Installation Guide for Intel® oneAPI Toolkits for up-to-date information.
For Linux
- GNU* Bash is required for local installation and for setting up the environment to use the toolkit.
For CPU Host/Target Support

For GPU Accelerator Support

For Windows
For CPU Support

For GPU Accelerator Support

For macOS

Processors
- Intel® Xeon® processors
- Intel® Xeon® Scalable processors
- Intel® Core™ processors
GPUs
- Intel® Processor Graphics Gen9 and above
- Xe architecture
Languages
- Data Parallel C++ (DPC++) and SYCL (note: the Intel oneAPI Base Toolkit must be installed)
- C and C++
Operating systems
- Windows
- Linux
- macOS (Not all Intel oneAPI HPC Toolkit components are available for macOS. The following components are included: Intel® C++ Compiler Classic and Intel® Fortran Compiler Classic.)
Development environments
- Compatible with compilers from Microsoft, GCC, Intel, and others that follow established language standards
- Windows: Microsoft Visual Studio
- Linux: Eclipse*
Distributed environments:
- MPI
An Open Fabrics Interfaces (OFI) framework implementation supporting the following:
- InfiniBand*
- iWARP, RDMA over Converged Ethernet (RoCE)
- Amazon Web Services Elastic Fabric Adapter (AWS EFA)
- Intel® Omni-Path Architecture (Intel® OPA)
- Ethernet, IP over InfiniBand (IPoIB), IP over Intel OPA
Licensing
License Options
oneAPI and the transition from Intel Parallel Studio

Single-Node vs Multi-Node
- Single-Node: supported for use on laptop, notebook, desktop, PC, or workstation
- Multi-Node: supported for use on laptops, notebooks, desktops, PCs, workstations, and distributed memory systems, i.e. HPC clusters
- The applications the developer is writing will determine which option they should purchase
License Types
Named user
- 10 Workgroup (formerly 2-seat Concurrent). Support for up to 10 developers.
- 25 Workgroup (formerly 5-seat Concurrent). Support for up to 25 developers.
- 50 Workgroup. Support for up to 50 developers.
With the oneAPI Toolkits the need for a license server has been removed. Users must be compliant with Intel’s EULA and purchase the number of seats to match the number of users who need to use the software concurrently.
- Academic
Degree-granting institutions only; colleges and universities that teach students in higher education towards earning a degree.
- Commercial
All other users, including government and non-profit institutions that are not degree-granting.
- Languages
No language selection. Intel oneAPI Base & HPC Toolkit includes the Fortran, C++, Data Parallel C++, and Python compilers in the same SKU.
- Operating Systems
No OS selection. All supported operating systems (Windows, Linux, macOS) are included in the same SKU.
Network licenses changes with Intel oneAPI Toolkit
Network licenses (concurrent) are still available as 2 and 5 concurrent user licenses, but these are named differently with oneAPI:
- 10 Workgroup means up to 10 developers can get support. This license type was previously called 2 Concurrent.
- 25 Workgroup means up to 25 developers can get support. This license type was previously called 5 Concurrent.
- 50 Workgroup means up to 50 developers can get support.
In other words, for oneAPI a maximum number of developers is set for each network license; these developers can then be registered in the IRC and are entitled to request Intel technical support in the confidential Intel Online Service Center. FlexLM is no longer included for the network licenses (as was the case with IPSXE). Licensing and usage restrictions now follow from the legal basis, i.e. the license agreement (EULA).
Performance Analyzers & Libraries
Analyzers (e.g. VTune Profiler) and libraries (e.g. the Intel MPI Library) will no longer be available as standalone products.
- New licenses will only be available through the purchase of oneAPI Base or oneAPI Base & HPC Toolkit Single Node or Multi-Node, depending on the product.
- Upgrade Promotions to oneAPI are available for existing users with active support
Existing Users – what’s next?
All registered users of Intel Parallel Studio and Intel System Studio (IPSXE or ISS) with active support will receive an email from Intel with the option to upgrade, free of charge, to the oneAPI product corresponding to their existing product.
- If the user chooses the free upgrade option, their existing legacy product will be retired, and a new serial number will be issued with the same support end date as their legacy product.
- Once upgraded to oneAPI, users will be eligible for all version updates & upgrades that Intel releases for oneAPI while their support agreement is active.
- For continued support, eligible users can purchase oneAPI SSR SKUs
- If the user does not choose to upgrade to oneAPI, the user can continue using their existing product, but they will not receive any new version updates (unless there are security bug fixes)
- They will only be able to stay on the version they currently own.
- Users with active support will continue to have the free upgrade link available to them while support for their license remains active. Once support expires, the free upgrade option will go away.
- Users who have support in the post-expiry renewal range can purchase a post-expiry SSR SKU for their existing product to get their support active. Then, they will receive a link to upgrade to oneAPI for free as part of their service & support agreement.
oneAPI Upgrade Promotion: available to IPSXE users with active support at a discounted price for a limited time
- The Upgrade Promo SKU might be a more cost-effective option for existing Composer Edition users to purchase instead of selecting the free oneAPI Upgrade offer, depending on the type of license they own.
- Users need to be aware that the Upgrade Promotions are ONLY available for purchase BEFORE they click the free “Upgrade” to oneAPI link, and only while they are under active support.
- Once a user upgrades for free, the upgrade promotion goes away.
- New update: IPSXE users whose support expired less than 6 months ago can also purchase a oneAPI Upgrade Promotion SKU.
- With this purchase, a new serial number will be issued with 12 months of support extended from their current support expiration date.
oneAPI Upgrade
- Upgrade to another suite with additional capabilities, i.e. from oneAPI Base Toolkit to oneAPI Base & HPC Toolkit
- Upgrade from oneAPI Single-Node to Multi-Node
- With this purchase, a new serial number will be issued with 12 months of support extended from their current support expiration date.
Comparison
Intel oneAPI Toolkits Comparison

Support
Support
Alfasoft offers paid licenses of Intel oneAPI software with support.
- 1- and 3-year support options are available for all toolkits.
- Additional years of support can be added with the corresponding pre-expiry renewal SKU.
Support with oneAPI Toolkit includes
- Confidential communications direct with Intel support engineers to keep your issues private and out of public forums.
- Access to all version updates & upgrades that release during the supported year(s), as well as to older versions (up to 3 versions prior to the current version)
- Direct access to software engineers with insider knowledge of the tools and Intel platforms to help resolve issues quickly.
- Access to and support for current and previous product releases such as Intel Parallel Studio XE and Intel System Studio.
- Focused training and assistance in use direct from Intel engineers.
- Accelerated response time to support tickets.
- Access to a robust set of online tutorials and self-help forums built up over years of efforts.
- Priority solutions for issues and feature requests.
- Help shape future products with suggestions and enhancement requests
What’s included with Intel Priority Support? (video)
Without paid support, support for free tools is only available through public community forums and only for the latest version update of the current year.