
Intel® oneAPI Base & HPC Toolkit

Take your HPC, Enterprise, AI, and Cloud applications to the Max with fast, scalable and portable parallel code

Intel® oneAPI Base & HPC Toolkit is a comprehensive suite of development tools that makes it fast and easy to build modern code that gets every last ounce of performance out of the newest Intel® processors in high-performance computing (HPC) platforms. The toolkit simplifies creating code with the latest techniques in vectorization, multi-threading, multi-node parallelization, memory optimization, and accelerator offloading.

  • A cross-architecture language, Data Parallel C++ (DPC++), alongside C++, Fortran, and Python.
  • Best-in-class compilers and performance libraries.
  • Advanced analyzers and debuggers.
  • A CUDA code migration tool.

The tools that comprised Intel Parallel Studio XE are now included in Intel oneAPI Base & HPC Toolkit. To upgrade your Intel Parallel Studio XE license, contact Alfasoft for the oneAPI Upgrade Promotion price.

Info

What is Intel oneAPI?

Break away from proprietary single-architecture languages and deliver parallel programming productivity with uncompromised performance for Intel® CPUs and accelerators. Take advantage of Priority Support for fast development with direct access to Intel engineers for technical questions.

Intel oneAPI Base & HPC Toolkit helps developers, researchers, and data scientists confidently develop performant code quickly and correctly, and scale compute-intensive workloads that exploit the cutting-edge features of Intel CPUs, GPUs, FPGAs, and HPC clusters. It includes industry-leading C++ and Fortran compilers, standards-driven OpenMP support, an MPI library and benchmarks, and advanced analysis tools for design, MPI, cluster tuning, and cluster health checking to enhance uptime and productivity.

Intel oneAPI Base & HPC Toolkit includes all the Intel compilers (C/C++, Fortran, DPC++, etc.) and multi-platform support (Windows, Linux, and macOS) to give you more flexibility for the future.

Build, analyze, optimize and scale fast HPC applications for various architectures with vectorization, multithreading, multi-node parallelization, and memory optimization techniques using the Intel oneAPI Base & HPC Toolkit.

  • Build: Simplify cross-architecture HPC application deployment on Intel CPUs and accelerators using Intel’s industry-leading compilers and libraries. Efficiently create fast parallel code that exploits cutting-edge features of current and future Intel® architectures to boost application performance.
  • Analyze: Quickly gauge application performance, resource use, and areas for optimization to ensure fast cross-architecture performance.
  • Optimize: Learn how resource use impacts your code, including compute, memory, I/O, and more, to make sound cross-architecture design decisions.
  • Supports HPC standards, including C/C++, Fortran, Python, OpenMP and MPI, for easy integration with legacy code.
  • Works seamlessly with other Intel tools to accelerate specialized workloads (AI analytics, rendering, deep learning inference, video processing, etc.).
  • Take advantage of Priority Support. Intel offers the ability to connect directly to Intel engineers for answers to technical questions.

Who Needs It

  • C, C++, Data Parallel C++, Fortran, Python, OpenMP, and MPI software developers and architects building HPC, enterprise, AI, and cloud solutions
  • Developers looking to maximize their software’s performance and flexibility across architectures on current and future Intel® platforms

What it Does

  • Creates fast parallel code. Boost application performance that scales on current and future Intel platforms with industry-leading compilers, performance libraries, performance profilers, and code and cluster analysis tools.
  • Builds code faster. Simplify the process of creating fast, scalable, reliable parallel code.
  • Delivers Priority Support. Connect directly to Intel’s engineers for confidential, quick answers to technical questions, access older versions of the products, and receive updates for a year.

oneAPI Licensing

The licensing and the naming of the editions have changed with the launch of oneAPI.

Supported platforms

Intel oneAPI Base & HPC Toolkit is offered with support for Windows, Linux, and macOS. Support for all three operating systems is included with each license.

Languages

Intel oneAPI Base & HPC Toolkit is offered with support for Fortran, C++ and Data Parallel C++. Support for all described languages is included with each license.

Editions

Intel oneAPI Base & HPC Toolkit is offered in two editions: Single-Node and Multi-Node.

The target platforms for development and deployment can range from a workstation to a multi-node cluster, requiring different support efforts. Choose the edition with the support that best fits your target deployment model:

  • Intel® oneAPI Base and HPC Toolkit Single-Node: Target platform of shared memory systems including PCs, laptops, or workstations.
  • Intel® oneAPI Base and HPC Toolkit Multi-Node: Target platform of shared memory systems such as PCs, laptops, workstations, or distributed memory high-performance compute clusters.
Transition from IPS to Intel oneAPI

Converting from Intel Parallel Studio to oneAPI Base and HPC Toolkit

When upgrading, a new serial number is generated for the oneAPI license and the previous IPSXE serial number is marked as “retired”. All IPSXE owners will still be able to use all of their IPSXE tools after the upgrade.

In most cases, these upgraded IPSXE licenses are not only extended to include new components; they usually also gain a complete tool suite, including both the C++ and Fortran compilers, for an additional operating system.

If you have, for example, IPSXE Composer Edition for Fortran on Windows with active support, the transition to the Intel® oneAPI Base & HPC Toolkit also gives you a license for use under Linux, which includes the Intel Fortran Compiler and MKL for Linux.

Intel also offers a special bundle for Fortran users. Intel Fortran Compilers includes the new LLVM-based Intel Fortran Compiler (ifx), the Intel Fortran Compiler Classic (ifort), and the Math Kernel Library, for Windows or Linux. The Intel Fortran Compilers suite gives existing Parallel Studio XE Composer for Fortran users with current support the choice to upgrade either to Intel Fortran Compilers or to oneAPI Base & HPC Toolkit (Single-Node), depending on which components the developer needs.

Are you a current Intel Parallel Studio XE user? To minimize the cost and make the transition to oneAPI as smooth as possible, contact Alfasoft BEFORE you upgrade your license in the Intel Registration Center (IRC).

Included

What’s included in the Intel oneAPI Base & HPC Toolkit?

  • Intel® oneAPI DPC++/C++ Compiler
    A standards-based CPU, GPU, and FPGA compiler supporting Data Parallel C++, C++, C, SYCL, and OpenMP that leverages well-proven LLVM compiler technology and Intel’s history of compiler leadership for performance. Experience seamless compatibility with popular compilers, development environments, and operating systems. (A minimal SYCL sketch follows this list.)
  • Intel® C++ Compiler Classic
    A standards-based C/C++ compiler supporting OpenMP focused on CPU development. Take advantage of more cores and built-in technologies in platforms based on Intel® CPU architectures. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® Fortran Compiler (Beta) for XPU development
    A standards-based CPU and GPU compiler supporting Fortran and OpenMP. Leverages well-proven LLVM compiler technology and Intel’s history of compiler leadership for performance. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® Fortran Compiler Classic
    A standards-based Fortran compiler supporting OpenMP focused on CPU development. Take advantage of more cores and built-in technologies in platforms based on Intel® CPU architectures. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® Cluster Checker
    Verify that cluster components work together seamlessly for optimal performance, improved uptime, and lower total cost of ownership.
  • Intel® VTune Profiler
    Performance analysis tool for serial and multithreaded applications. Intel VTune Profiler optimizes application performance, system performance, and system configuration for HPC, cloud, IoT, media, storage, and more.
  • Intel® Inspector
    Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly errors later.
  • Intel® MPI Library
    Deliver flexible, efficient, scalable cluster messaging on Intel® architecture. (A minimal MPI sketch also follows this list.)
  • Intel® Trace Analyzer and Collector
    Understand MPI application behaviour across its full runtime.
  • Intel® oneAPI DPC++ Library
    Speed up data parallel workloads with these key productivity algorithms and functions.
  • Intel® oneAPI Threading Building Blocks
    Simplify parallelism with this advanced threading and memory-management template library.
  • Intel® oneAPI Math Kernel Library
    Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
  • Intel® oneAPI Data Analytics Library
    Boost machine learning and data analytics performance.
  • Intel® oneAPI Video Processing Library
    Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing.
  • Intel® Advisor
    Design code for efficient vectorization, threading and offloading to accelerators.
  • Intel® Distribution for Python
    Achieve fast math-intensive workload performance without code changes for data science and machine learning problems.
  • Intel® DPC++ Compatibility Tool
    Migrate legacy CUDA code to multi-platform DPC++ code with this assistant.
  • Intel® Integrated Performance Primitives
    Speed performance of imaging, signal processing, data compression, cryptography, and more.
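
The DPC++/C++ Compiler item above mentions SYCL support; as a rough, hedged illustration (not taken from Intel documentation), the sketch below shows the kind of cross-architecture kernel it compiles. The file name and the icpx invocation in the comment are assumptions about a typical installation.

    // vector_add.cpp: a minimal SYCL sketch (assumed typical build: icpx -fsycl vector_add.cpp -o vector_add)
    #include <sycl/sycl.hpp>
    #include <vector>
    #include <iostream>

    int main() {
        const size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        sycl::queue q;  // default device: CPU or GPU, whichever the runtime selects

        {   // buffers wrap the host vectors; results copy back when they go out of scope
            sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
            sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
            sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

            q.submit([&](sycl::handler& h) {
                sycl::accessor acc_a(buf_a, h, sycl::read_only);
                sycl::accessor acc_b(buf_b, h, sycl::read_only);
                sycl::accessor acc_c(buf_c, h, sycl::write_only);
                h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                    acc_c[i] = acc_a[i] + acc_b[i];  // element-wise vector add on the device
                });
            });
        }

        std::cout << "c[0] = " << c[0] << "\n";  // expected: 3
        return 0;
    }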

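The Intel MPI Library item above refers to standard MPI message passing; the sketch below is a hedged, minimal example of the MPI C API (usable from C++), not Intel-specific code. The mpiicpc and mpirun commands in the comment are the usual Intel MPI wrappers but may vary by installation.

    // mpi_sum.cpp: a minimal MPI sketch (assumed typical build/run: mpiicpc mpi_sum.cpp -o mpi_sum && mpirun -n 4 ./mpi_sum)
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  // this process's rank
        MPI_Comm_size(MPI_COMM_WORLD, &size);  // total number of ranks

        int value = rank * rank;
        int sum = 0;
        // Combine every rank's value onto rank 0.
        MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("ranks=%d  sum of squares=%d\n", size, sum);

        MPI_Finalize();
        return 0;
    }
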
News

New features in Intel oneAPI Toolkit


Intel oneAPI developer toolkits 2023

Optimised, standards-based support for powerful new architectures

The latest oneAPI and AI 2023 tools continue to empower developers with multiarchitecture performance and productivity, delivering optimised support for Intel’s upcoming portfolio of CPU and GPU architectures and advanced capabilities:

  • 4th Gen Intel Xeon Scalable Processors (formerly codenamed Sapphire Rapids) with Intel Advanced Matrix Extensions (Intel AMX), Quick Assist Technology (QAT), Intel AVX-512, bfloat16, and more
  • Intel Xeon Processor Max Series high-bandwidth memory
  • Intel Data Center GPUs, including Flex Series with hardware AV1 encode and Max Series (formerly codenamed Ponte Vecchio) with datatype flexibility, Intel Xe Matrix Extensions (Intel XMX), vector engine, XE-Link, and other features
  • Existing Intel CPUs, GPUs, and FPGAs

The tools deliver performance and productivity enhancements and also add support for new Codeplay plug-ins for NVIDIA and AMD that make it easier than ever for developers to write SYCL code for non-Intel GPU architectures. These standards-based tools deliver choice in hardware and ease in developing high-performance applications that run on multiarchitecture systems.

What’s new in the 2023 oneAPI and AI tools?

Compilers & SYCL support

  • Intel oneAPI DPC++/C++ Compiler improves CPU and GPU offload performance and broadens SYCL language support for improved code portability and productivity.
  • Intel oneAPI DPC++ Library (oneDPL) expands support of the C++ standard library in SYCL kernels with additional heap and sorting algorithms and adds the ability to use OpenMP for thread-level parallelism (a small sorting sketch follows this list).
  • Intel DPC++ Compatibility Tool (based on the open source SYCLomatic project) improves the migration of CUDA library APIs, including those for runtime and drivers, cuBLAS, and cuDNN.
  • Intel Fortran Compiler implements coarrays, eliminating the need for external APIs such as MPI or OpenMP, expands OpenMP 5.x offloading features, adds DO CONCURRENT GPU offload, and improves optimisations for source-level debugging.
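
As referenced in the oneDPL item above, here is a hedged sketch of device-side sorting with oneDPL's parallel algorithms; it assumes a oneAPI 2023-era toolchain and a device that supports USM shared allocations, and the header and namespace names reflect the oneDPL documentation rather than this page.

    // dpl_sort.cpp: sorting on the default SYCL device with oneDPL (a sketch, not a benchmark)
    #include <oneapi/dpl/execution>
    #include <oneapi/dpl/algorithm>
    #include <sycl/sycl.hpp>
    #include <vector>
    #include <random>
    #include <iostream>

    int main() {
        std::vector<int> data(1 << 20);
        std::mt19937 gen(42);
        std::uniform_int_distribution<int> dist(0, 1000000);
        for (auto& x : data) x = dist(gen);

        sycl::queue q;  // default device
        auto policy = oneapi::dpl::execution::make_device_policy(q);

        // USM shared memory is visible to both host and device.
        int* buf = sycl::malloc_shared<int>(data.size(), q);
        std::copy(data.begin(), data.end(), buf);

        oneapi::dpl::sort(policy, buf, buf + data.size());  // runs as SYCL kernels on the device

        std::cout << "first=" << buf[0] << " last=" << buf[data.size() - 1] << "\n";
        sycl::free(buf, q);
        return 0;
    }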

Performance libraries

  • Intel oneAPI Math Kernel Library increases CUDA library function API compatibility coverage for BLAS and FFT; for Sapphire Rapids, leverages Intel XMX to optimize matrix multiply computations for TF32, FP16, BF16, and INT8 data types; and provides interfaces for SYCL and C/Fortran OpenMP offload programming.
  • Intel oneAPI Threading Building Blocks improves support for the latest C++ standard for parallel sort, offers an improved synchronization mechanism to reduce contention when multiple task arena calls are used concurrently, and adds support for Microsoft Visual Studio 2022 and Windows Server 2022 (see the sketch after this list).
  • Intel oneAPI Video Processing Library supports the industry’s only hardware AV1 codec, in the Intel Data Center GPU Flex Series and Intel® Arc™ graphics; expands OS support to RHEL 9, CentOS Stream 9, SLES 15 SP4, and Rocky Linux 9; and adds a parallel encoding feature to the multi-transcode sample.
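
The oneTBB item above mentions parallel sort and task-based threading; the sketch below is a hedged minimal example of the oneapi::tbb API (parallel_for plus parallel_sort) on the host CPU. The -ltbb link flag in the comment is the usual one but may differ by environment.

    // tbb_example.cpp: host-side threading with oneTBB (assumed typical build: icpx -O2 tbb_example.cpp -ltbb)
    #include <oneapi/tbb/parallel_for.h>
    #include <oneapi/tbb/parallel_sort.h>
    #include <oneapi/tbb/blocked_range.h>
    #include <vector>
    #include <random>
    #include <iostream>

    int main() {
        std::vector<double> v(1 << 22);
        std::mt19937 gen(7);
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        for (auto& x : v) x = dist(gen);

        // Scale every element in parallel across the available cores.
        oneapi::tbb::parallel_for(
            oneapi::tbb::blocked_range<size_t>(0, v.size()),
            [&](const oneapi::tbb::blocked_range<size_t>& r) {
                for (size_t i = r.begin(); i != r.end(); ++i) v[i] *= 2.0;
            });

        // Sort the whole vector using all cores.
        oneapi::tbb::parallel_sort(v.begin(), v.end());

        std::cout << "min=" << v.front() << " max=" << v.back() << "\n";
        return 0;
    }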

Analysis & Debug

  • Intel VTune Profiler enables the ability to identify MPI imbalance issues via its Application Performance Snapshot feature; delivers visibility into Xe Link cross-card traffic for utilisation, bandwidth consumption, and other issues; and adds support for 4th Gen Intel Xeon Scalable Processors (Sapphire Rapids), Max Series (Ponte Vecchio), and 13th Gen Intel Core processors.
  • Intel Advisor adds automated roofline analysis for the Intel Data Center GPU Max Series to identify and prioritize memory, cache, or compute bottlenecks and understand their causes, and delivers actionable recommendations for optimising the data-transfer and reuse costs of CPU-to-GPU offloading.

AI and Analytics

  • Intel AI Analytics Toolkit can now be run natively on Windows with full parity to Linux except for distributed training (GPU support is coming in Q1 2023).
  • Intel oneAPI Deep Neural Network Library further supports the delivery of superior CNN performance by enabling advanced features in 4th Gen Intel Xeon Scalable Processors including Intel AMX, AVX-512, VNNI, and bfloat16.
  • Intel Distribution of Modin integrates with the new heterogeneous data kernels (HDK) solution in the back end, enabling AI solutions to scale from low-compute resources to large or distributed compute resources.
  • Beta additions for Intel Distribution for Python include the compute-follows-data model extension to GPU, data exchange between libraries and frameworks, and data-parallel extensions for NumPy and Numba packages.

Rendering & Visual Computing

  • Intel oneAPI Rendering Toolkit includes the Intel Implicit SPMD Program Compiler runtime library for fast SIMD performance on CPUs.
  • Intel Open Volume Kernel Library increases memory-layout efficiency for VDB volumes and adds an AVX-512 8-wide CPU device mode for increased workload performance.
  • Intel OSPRay and Intel OSPRay Studio add features for multi-segment deformation motion blur for mesh geometry, primitives, and objects; face-varying attributes for mesh and subdivision geometry; new light capabilities such as photometric light types; and instance ID buffers to create segmentation images for AI training.

Why should you use oneAPI HPC?

With 48% of developers targeting heterogeneous systems that use more than one kind of processor*, more efficient multiarchitecture programming is required to address the increasing scope and scale of real-world workloads.

Using oneAPI’s open, unified programming model with Intel’s standards-based multiarchitecture tools provides freedom of choice in hardware, performance, productivity, and code portability for CPUs and accelerators. Code written for proprietary programming models, like CUDA, lacks portability to other hardware, creating a siloed development practice that locks organisations into a closed ecosystem.

*Evans Data Global Development Survey Report 22.1, June 2022

Buy the Intel oneAPI HPC 2023 Toolkits with Priority Support

Alfasoft is an Intel Software Elite Reseller. We can support you with Intel oneAPI licensing advice and discounts. Purchase the Intel oneAPI 2023 Toolkits with Priority Support to gain access to private, dedicated Intel engineer support, access to earlier versions, plus many other benefits.

For questions, please get in touch with Alfasoft via email or call us.

System requirements


Common Hardware Requirements

CPU Processor Requirements

Systems based on Intel® 64 architectures below are supported as host and target platforms.

  • Intel® Core™ processor family or higher
  • Intel® Xeon® processor family
  • Intel® Xeon® Scalable processor family

Requirements for Accelerators

  • Integrated Gen9 or newer GPUs, including the latest Intel® Iris® Xe MAX graphics
  • FPGA Card: see Intel(R) DPC++ Compiler System Requirements.

Disk Space Requirements

  • ~3 GB of disk space (minimum) if installing only the compiler and its libraries: Intel oneAPI DPC++/C++ Compiler, Intel® DPC++ Compatibility Tool, Intel® oneAPI DPC++ Library, and Intel® oneAPI Threading Building Blocks
  • Maximum of ~24 GB of disk space if installing all components

During the installation process, the installer may need up to 6 GB of additional temporary disk storage to manage the download and intermediate installation files.

Memory Requirements

  • 8 GB RAM recommended
  • For FPGA development, see Intel(R) DPC++ Compiler System Requirements.

Common Software Requirements

Operating System Requirements for the Intel oneAPI Base & HPC Toolkit

The operating systems listed below are supported on Intel® 64 Architecture. Individual tools may support additional operating systems and architecture configurations. See the individual tool release notes for full details.

For developing applications that offload to accelerators such as GPUs or FPGAs, a specific GPU driver version is required for the supported operating system. Please visit the “Install Intel GPU Drivers” section of the Installation Guide for Intel® oneAPI Toolkits for up-to-date information.

For Linux

  • GNU* Bash is required for local installation and for setting up the environment to use the toolkit.

For CPU Host/Target Support

For GPU Accelerator Support

For Windows

For CPU Support

For GPU Accelerator Support

For macOS

Processors

  • Intel® Xeon® processors
  • Intel® Xeon® Scalable processors
  • Intel® Core™ processors

GPUs

  • Intel® Processor Graphics Gen9 and above
  • Xe architecture

Languages in the Intel oneAPI Base & HPC Toolkit

  • Data Parallel C++ (DPC++) and SYCL (note: requires the Intel oneAPI Base Toolkit to be installed)
  • C and C++

Operating systems in the Intel oneAPI Base & HPC Toolkit

  • Windows
  • Linux
  • macOS (Not all Intel oneAPI HPC Toolkit components are available for macOS. The following components are included: Intel® C++ Compiler Classic and Intel® Fortran Compiler Classic.)

Development environments in the Intel oneAPI Base & HPC Toolkit

  • Compatible with compilers from Microsoft, GCC, Intel, and others that follow established language standards
  • Windows: Microsoft Visual Studio
  • Linux: Eclipse*

Distributed environments

  • MPI

Open Fabrics Interfaces (OFI) framework implementation supporting the following

  • InfiniBand*
  • iWARP, RDMA over Converged Ethernet (RoCE)
  • Amazon Web Services Elastic Fabric Adapter (AWS EFA)
  • Intel® Omni-Path Architecture (Intel® OPA)
  • Ethernet, IP over InfiniBand (IPoIB), IP over Intel OPA

Licensing

License Options

oneAPI and the transition from Intel Parallel Studio

Intel oneAPI transition matrix

Single-Node vs Multi-Node

  • Single-Node: supported for use on laptop, notebook, desktop, PC, or workstation
  • Multi-Node: supported for use on laptop, notebook, desktop, PC, workstation, and distributed memory systems, i.e. HPC clusters
    – The applications the developer is writing will determine which option they should purchase

License Types for oneAPI HPC

Named user

  • 10 Workgroup (former 2-seat Concurrent). Support for up to 10 developers
  • 25 Workgroup (former 5-seat Concurrent). Support for up to 25 developers
  • 50 Workgroup. Support for up to 50 developers

With the oneAPI Toolkits, the need for a license server has been removed. Users must comply with Intel’s EULA and purchase enough seats to match the number of users who need to use the software concurrently.

  • Academic
    Degree-granting institutions only; colleges and universities that teach students in higher education towards earning a degree
  • Commercial
    All other users, including government and non-profit institutions that are not degree-granting
  • Languages
    No language selection. Intel oneAPI Base & HPC Toolkit includes the Fortran, C++, and Data Parallel C++ compilers, plus the Intel Distribution for Python, in the same SKU.
  • Operating Systems
    All supported operating systems (Windows, Linux, macOS) are included in the same SKU. No OS selection.

Network licenses changes with Intel oneAPI HPC Toolkit

Network licenses (concurrent) are still available as 2 and 5 concurrent user licenses, but these are named differently with oneAPI:

  • 10 Workgroup means up to 10 developers can get support. This license type was previously called 2 Concurrent.
  • 25 Workgroup means up to 25 developers can get support. This license type was previously called 5 Concurrent.
  • 50 Workgroup means up to 50 developers can get support.

In other words, for oneAPI a maximum number of developers is set for each network license; these developers can be registered in the IRC and are entitled to request Intel technical support in the confidential Intel Online Service Center. FlexLM is no longer included for the network licenses (as it was with IPSXE). Licensing and usage restrictions now follow from the license agreement (EULA).

Performance Analyzers & Libraries

Analyzers (e.g. VTune Profiler) and libraries (e.g. the Intel MPI Library) will no longer be available as standalone products.

  • New licenses will only be available through the purchase of oneAPI Base or oneAPI Base & HPC Toolkit Single Node or Multi-Node, depending on the product.
  • Upgrade Promotions to oneAPI are available for existing users with active support

Existing Users – what’s next?

All registered users of Intel Parallel Studio and Intel System Studio (IPSXE or ISS) with active support will receive an email from Intel with the option to upgrade to the oneAPI product corresponding to their existing product free of charge.

  • If the user chooses the free upgrade option, their existing legacy product will be retired, and a new serial number will be issued with the same support end date as their legacy product.
  • Once upgraded to oneAPI, users will be eligible for all version updates & upgrades that Intel releases for oneAPI while their support agreement is active.
  • For continued support, eligible users can purchase oneAPI SSR SKUs
  • If the user does not choose to upgrade to oneAPI, the user can continue using their existing product, but they will not receive any new version updates (unless there are security bug fixes)
  • They will only be able to stay on the version they currently own.
  • Users with active support will continue to have the free upgrade link available to them while support for their license remains active. Once support expires, the free upgrade option will go away.
  • Users who have support in the post-expiry renewal range can purchase a post-expiry SSR SKU for their existing product to get their support active. Then, they will receive a link to upgrade to oneAPI for free as part of their service & support agreement.

oneAPI HPC Upgrade Promotion: available to IPSXE users with active support at a discounted price for a limited time

  • The Upgrade Promo SKU might be a more cost-effective option for existing Composer Edition users to purchase instead of selecting the free oneAPI Upgrade offer, depending on the type of license they own.
  • Users need to be aware that the Upgrade Promotions are ONLY eligible for purchase BEFORE they click the “Upgrade to oneAPI” link while they are under active support.
  • Once a user upgrades for free, the upgrade promotion goes away.
  • New Update: IPSXE users whose support has expired less than 6 months ago can also purchase a oneAPI Upgrade Promotion SKU.
  • With this purchase, a new serial number will be issued with 12 months of support extended from their current support expiration date.

oneAPI HPC Upgrade

  • Upgrade to another suite with additional capabilities, i.e. from oneAPI Base Toolkit to oneAPI Base & HPC Toolkit
  • Upgrade from oneAPI Single-Node to Multi-Node
  • With this purchase, a new serial number will be issued with 12 months of support extended from their current support expiration date.

Comparison

Intel oneAPI HPC Toolkits Comparison

Intel oneAPI Toolkits comparison matrix

Support

Intel oneAPI Base & HPC Toolkit Support

Alfasoft offers paid licenses of Intel oneAPI software with support.

  • 1- and 3-year support options are available for all toolkits
  • Additional years of support can be added with the corresponding pre-expiry renewal SKU

Support with Intel oneAPI Base & HPC Toolkit includes

  • Submit questions, problems, and other technical support requests
  • Monitor issues you’ve submitted previously
  • Direct and private interaction with Intel’s support engineers, including the ability to submit confidential support requests
  • Accelerated response time for technical questions and other product needs
  • Free download access to all new product updates and continued access to older versions of the product
  • Priority assistance for escalated defects and feature requests
  • Access to a vast library of self-help documentation built from decades of experience with creating high-performance code
  • Access to Intel public community forums supported by community technical experts and monitored by Intel engineers

Without paid support, support for free tools is only available through public community forums and only for the latest version update of the current year.