
Intel oneAPI Base & IoT Toolkit

Accelerate the Development of Smart, Connected Devices

Benefit from fast integration across the software stack and optimized performance and power efficiency to quickly bring IoT solutions to market. The Intel oneAPI Base & IoT Toolkit helps enable a more intelligent, efficient IoT, supporting enterprises moving to the edge to capture more data, analyze it faster, and act on it quickly.

  • Build.
    Implement efficient, high-performance code for IoT applications that exploits all the cutting-edge features of powerful Intel® architecture (CPU, GPU, FPGA). Debug threading, memory, and persistent memory issues early in the design cycle. (A minimal cross-architecture code sketch follows this list.)
  • Analyze.
    Quickly pinpoint code-tuning opportunities with deep analysis of performance characteristics, including behavioural system analysis, power-related metrics and hardware-specific optimizations.
  • Unite.
    Unite sensors to devices and devices to the cloud with connectivity tools and sensor libraries.
  • Integrate.
    Work seamlessly with other Intel domain-specific tools (AI analytics, video processing, deep learning inference, etc.) to accelerate specialized IoT applications and workloads.
  • Priority Support.
    Intel offers the ability to connect directly to Intel engineers for confidential answers to technical questions.
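
As a hedged illustration of the cross-architecture model described above, the short SYCL/DPC++ sketch below adds two vectors and can be retargeted from CPU to GPU (or an FPGA emulator) simply by changing the device selector. The file name, data, and layout are illustrative only and not taken from Intel's samples.

```cpp
// vector_add.cpp - minimal SYCL sketch; typically built with the Intel oneAPI
// DPC++/C++ Compiler, e.g. `icpx -fsycl vector_add.cpp` (assumed setup).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // default_selector_v picks the "best" available device (a GPU if present,
    // otherwise the CPU); sycl::cpu_selector_v or an FPGA selector could be
    // substituted without touching the kernel itself.
    sycl::queue q{sycl::default_selector_v};
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(ba, h, sycl::read_only);
            sycl::accessor B(bb, h, sycl::read_only);
            sycl::accessor C(bc, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffer destructors copy the results back into c

    std::cout << "c[0] = " << c[0] << "\n"; // expected: 3
    return 0;
}
```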

Info

What is Intel oneAPI?

Accelerate the development of cross-architecture IoT applications for smart, connected devices. Boost performance and power efficiency using the Intel oneAPI Base & IoT Toolkit. This complete set of tools includes an industry-leading C++ compiler, powerful libraries, and analysis and debug tools for fast, efficient development. Simplify deployment across Intel CPUs, GPUs, and FPGAs, and exploit all the cutting-edge hardware features. Connect edge devices seamlessly to cloud services to transfer data and innovate with access to hundreds of sensors. Take advantage of Priority Support and fast development with direct access to Intel engineers for technical questions.

oneAPI Licensing

The licensing and the naming of the editions have changed with the launch of oneAPI.

Supported platforms

Intel oneAPI Base & IoT Toolkit is offered with support for Windows, Linux, and macOS. Support for all operating systems is included with each license.

Languages

Intel oneAPI Base & IoT Toolkit is offered with support for C++ and Data Parallel C++. Support for all described languages is included with each license.

Editions

Intel oneAPI Base & IoT Toolkit is offered in two editions: Single-Node and Multi-Node.

Included

What’s included in the Intel oneAPI Base & IoT Toolkit?

  • Intel® oneAPI DPC++/C++ Compiler
    A standards-based CPU, GPU, and FPGA compiler supporting Data Parallel C++, C++, C, SYCL, and OpenMP that leverages well-proven LLVM compiler technology and Intel’s history of compiler leadership for performance. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® C++ Compiler Classic
    A standards-based C/C++ compiler supporting OpenMP focused on CPU development. Take advantage of more cores and built-in technologies in platforms based on Intel® CPU architectures. Experience seamless compatibility with popular compilers, development environments, and operating systems.
  • Intel® Cluster Checker
    Verify that cluster components work together seamlessly for optimal performance, improved uptime, and lower total cost of ownership.
  • Intel® Inspector
    Locate and debug threading, memory, and persistent memory errors early in the design cycle to avoid costly errors later.
  • Intel® oneAPI DPC++ Library
    Speed up data parallel workloads with these key productivity algorithms and functions.
  • Intel® oneAPI Threading Building Blocks
    Simplify parallelism with this advanced threading and memory-management template library. (A usage sketch follows this list.)
  • Intel® oneAPI Math Kernel Library
    Accelerate math processing routines, including matrix algebra, fast Fourier transforms (FFT), and vector math.
  • Intel® oneAPI Data Analytics Library
    Boost machine learning and data analytics performance.
  • Intel® oneAPI Video Processing Library
    Deliver fast, high-quality, real-time video decoding, encoding, transcoding, and processing.
  • Intel® Advisor
    Design code for efficient vectorization, threading and offloading to accelerators.
  • Intel® Distribution for Python
    Achieve fast math-intensive workload performance without code changes for data science and machine learning problems.
  • Intel® DPC++ Compatibility Tool
    Migrate legacy CUDA code to a multi-platform DPC++ program with this assistant.
  • Intel® Integrated Performance Primitives
    Speed performance of imaging, signal processing, data compression, cryptography, and more.
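
To give a flavour of how one of the included libraries is used, here is a hedged Intel oneAPI Threading Building Blocks sketch: a parallel reduction over host data. The vector contents and names are placeholders, not part of an Intel sample.

```cpp
// Sum a large array in parallel with oneTBB's parallel_reduce.
#include <oneapi/tbb/blocked_range.h>
#include <oneapi/tbb/parallel_reduce.h>
#include <functional>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> samples(1'000'000, 0.25); // placeholder sensor data

    // Split the index range into chunks, sum each chunk on a worker thread,
    // then combine the partial sums with std::plus.
    const double total = oneapi::tbb::parallel_reduce(
        oneapi::tbb::blocked_range<std::size_t>(0, samples.size()),
        0.0,
        [&](const oneapi::tbb::blocked_range<std::size_t>& r, double acc) {
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                acc += samples[i];
            return acc;
        },
        std::plus<double>());

    std::cout << "total = " << total << "\n";
    return 0;
}
```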

News

New features in Intel oneAPI


Intel oneAPI developer toolkits 2023

Optimised, standards-based support for powerful new architectures

The latest oneAPI and AI 2023 tools continue to empower developers with multiarchitecture performance and productivity, delivering optimised support for Intel’s upcoming portfolio of CPU and GPU architectures and advanced capabilities:

  • 4th Gen Intel Xeon Scalable Processors (formerly codenamed Sapphire Rapids) with Intel Advanced Matrix Extensions (Intel AMX), Intel QuickAssist Technology (Intel QAT), Intel AVX-512, bfloat16, and more
  • Intel Xeon Processor Max Series high-bandwidth memory
  • Intel Data Center GPUs, including the Flex Series with hardware AV1 encode and the Max Series (formerly codenamed Ponte Vecchio) with datatype flexibility, Intel Xe Matrix Extensions (Intel XMX), vector engine, Xe Link, and other features
  • Existing Intel CPUs, GPUs, and FPGAs

The tools deliver performance and productivity enhancements and also add support for new Codeplay plug-ins for NVIDIA and AMD that make it easier than ever for developers to write SYCL code for non-Intel GPU architectures. These standards-based tools deliver choice in hardware and ease in developing high-performance applications that run on multiarchitecture systems.

What’s new in the 2023 oneAPI and AI tools?

Compilers & SYCL support

  • Intel oneAPI DPC++/C++ Compiler improves CPU and GPU offload performance and broadens SYCL language support for improved code portability and productivity.
  • Intel oneAPI DPC++ Library (oneDPL) expands support for the C++ standard library in SYCL kernels with additional heap and sorting algorithms and adds the ability to use OpenMP for thread-level parallelism (see the sketch after this list).
  • Intel DPC++ Compatibility Tool (based on the open-source SYCLomatic project) improves the migration of CUDA library APIs, including those for runtime and drivers, cuBLAS, and cuDNN.
  • Intel Fortran Compiler implements coarrays, eliminating the need for external APIs such as MPI or OpenMP, expands OpenMP 5.x offloading features, adds DO CONCURRENT GPU offload and improves optimisations for source-level debugging.
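
As a hedged sketch of the general oneDPL usage pattern referred to above (not of the new 2023 features specifically), the example below sorts a sycl::buffer on the default SYCL device using a device execution policy; the data is illustrative.

```cpp
// Sort device-resident data with oneDPL's dpcpp_default execution policy.
#include <oneapi/dpl/algorithm>
#include <oneapi/dpl/execution>
#include <oneapi/dpl/iterator>
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> keys = {42, 7, 19, 3, 88, 1};

    {
        sycl::buffer<int> buf(keys.data(), sycl::range<1>(keys.size()));
        // dpcpp_default targets the default SYCL device (CPU or GPU).
        oneapi::dpl::sort(oneapi::dpl::execution::dpcpp_default,
                          oneapi::dpl::begin(buf), oneapi::dpl::end(buf));
    } // the buffer destructor writes the sorted data back into keys

    for (int k : keys) std::cout << k << ' ';
    std::cout << '\n';
    return 0;
}
```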

Performance libraries

  • Intel oneAPI Math Kernel Library increases CUDA library function API compatibility coverage for BLAS and FFT; for Sapphire Rapids, leverages Intel XMX to optimize matrix multiply computations for TF32, FP16, BF16, and INT8 data types; and provides interfaces for SYCL and C/Fortran OpenMP offload programming (a minimal SYCL GEMM sketch follows this list).
  • Intel oneAPI Threading Building Blocks improves support for the latest C++ standard for parallel sort, offers an improved synchronization mechanism to reduce contention when multiple task arena calls are used concurrently, and adds support for Microsoft Visual Studio 2022 and Windows Server 2022.
  • Intel oneAPI Video Processing Library supports the industry’s only hardware AV1 codec in the Intel Data Center GPU Flex Series and Intel Arc™ GPUs; expands OS support to RHEL 9, CentOS Stream 9, SLES 15 SP4, and Rocky Linux 9; and adds a parallel encoding feature to the multi-transcode sample.
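
As a hedged sketch of the oneMKL SYCL interfaces mentioned above, the example below runs a generic single-precision GEMM (C = alpha*A*B + beta*C) through unified shared memory; it assumes a device with USM support and is typically built with something like `icpx -fsycl -qmkl gemm.cpp`. The sizes and values are illustrative.

```cpp
#include <oneapi/mkl.hpp>
#include <sycl/sycl.hpp>
#include <cstdint>
#include <iostream>

int main() {
    const std::int64_t m = 64, n = 64, k = 64;
    sycl::queue q{sycl::default_selector_v};

    // Unified shared memory visible to both host and device.
    float* A = sycl::malloc_shared<float>(m * k, q);
    float* B = sycl::malloc_shared<float>(k * n, q);
    float* C = sycl::malloc_shared<float>(m * n, q);
    for (std::int64_t i = 0; i < m * k; ++i) A[i] = 1.0f;
    for (std::int64_t i = 0; i < k * n; ++i) B[i] = 2.0f;
    for (std::int64_t i = 0; i < m * n; ++i) C[i] = 0.0f;

    using oneapi::mkl::transpose;
    // Column-major GEMM; leading dimensions equal the matrix heights here.
    auto done = oneapi::mkl::blas::column_major::gemm(
        q, transpose::nontrans, transpose::nontrans,
        m, n, k, 1.0f, A, m, B, k, 0.0f, C, m);
    done.wait();

    std::cout << "C[0] = " << C[0] << "\n"; // expected: 128 (k * 1 * 2)
    sycl::free(A, q); sycl::free(B, q); sycl::free(C, q);
    return 0;
}
```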

Analysis & Debug

  • Intel VTune Profiler adds the ability to identify MPI imbalance issues via its Application Performance Snapshot feature; delivers visibility into Xe Link cross-card traffic for utilisation, bandwidth consumption, and other issues; and adds support for 4th Gen Intel Xeon Scalable Processors (Sapphire Rapids), the Max Series (Ponte Vecchio), and 13th Gen Intel Core processors.
  • Intel Advisor adds automated roofline analysis for the Intel Data Center GPU Max Series to identify and prioritize memory, cache, or compute bottlenecks and understand their causes, and delivers actionable recommendations for optimising data-transfer reuse costs of CPU-to-GPU offloading.

AI and Analytics

  • Intel AI Analytics Toolkit can now be run natively on Windows with full parity to Linux except for distributed training (GPU support is coming in Q1 2023).
  • Intel oneAPI Deep Neural Network Library further supports superior CNN performance by enabling advanced features in 4th Gen Intel Xeon Scalable Processors, including Intel AMX, AVX-512, VNNI, and bfloat16.
  • Intel Distribution of Modin integrates with the new heterogeneous data kernels (HDK) solution in the back end, enabling AI solutions to scale from low-compute resources to large or distributed compute resources.
  • Beta additions for Intel Distribution for Python include the compute-follows-data model extension to GPU, data exchange between libraries and frameworks, and data-parallel extensions for NumPy and Numba packages.

Rendering & Visual Computing

  • Intel oneAPI Rendering Toolkit includes the Intel Implicit SPMD Program Compiler runtime library for fast SIMD performance on CPUs.
  • Intel Open Volume Kernel Library increases memory-layout efficiency for VDB volumes and adds an AVX-512 8-wide CPU device mode for increased workload performance.
  • Intel OSPRay and Intel OSPRay Studio add features for multi-segment deformation motion blur for mesh geometry, primitive, and objects; face-varying attributes for mesh and subdivision geometry; new light capabilities such as photometric light types; and instance ID buffers to create segmentation images for AI training.

Why should you use oneAPI?

With 48% of developers targeting heterogeneous systems that use more than one kind of processor*, more efficient multiarchitecture programming is required to address the increasing scope and scale of real-world workloads.

Using oneAPI’s open, unified programming model with Intel’s standards-based multiarchitecture tools provides freedom of choice in hardware, performance, productivity, and code portability for CPUs and accelerators. Code written for proprietary programming models, like CUDA, lacks portability to other hardware, creating a siloed development practice that locks organisations into a closed ecosystem.

*Evans Data Global Development Survey Report 22.1, June 2022

Buy the Intel oneAPI 2023 Toolkits with Priority Support

Alfasoft is an Intel Software Elite Reseller. We can support you with Intel oneAPI licensing advice and discounts. Purchase the Intel oneAPI 2023 Toolkits with Priority Support to gain access to private, dedicated Intel engineer support, access to earlier versions, plus many other benefits.

If you have any questions, please get in touch with Alfasoft via email or phone.

System

System requirements

Common Hardware Requirements

CPU Processor Requirements

Systems based on the Intel® 64 architectures listed below are supported as host and target platforms.

  • Intel® Core™ processor family or higher
  • Intel® Xeon® processor family
  • Intel® Xeon® Scalable processor family

Requirements for Accelerators

  • Integrated Gen9 or higher GPUs, including the latest Intel® Iris® Xe MAX graphics
  • FPGA Card: see Intel(R) DPC++ Compiler System Requirements.

Disk Space Requirements

  • ~3 GB of disk space (minimum) if installing only the compiler and its libraries: Intel oneAPI DPC++/C++ Compiler, Intel® DPC++ Compatibility Tool, Intel® oneAPI DPC++ Library and Intel® oneAPI Threading Building Blocks
  • Maximum of ~24 GB of disk space if installing all components

During the installation process, the installer may need up to 6 GB of additional temporary disk storage to manage the download and intermediate installation files.

Memory Requirements

  • 8 GB RAM recommended
  • For FPGA development, see Intel(R) DPC++ Compiler System Requirements.

Common Software Requirements

Operating System Requirements

The operating systems listed below are supported on Intel® 64 Architecture. Individual tools may support additional operating systems and architecture configurations. See the individual tool release notes for full details.

For developing applications that offload to accelerators such as GPUs or FPGAs, a specific version of the GPU driver is required for the supported operating system. Please see the “Install Intel GPU Drivers” section of the Installation Guide for Intel® oneAPI Toolkits for up-to-date information.

For Linux

  • GNU* Bash is required for local installation and for setting up the environment to use the toolkit.

For CPU Host/Target Support

For GPU Accelerator Support

For Windows

For CPU Support

For GPU Accelerator Support

For macOS

Processors

  • Intel® Xeon® processors
  • Intel® Xeon® Scalable processors
  • Intel® Core™ processors

GPUs

  • Intel® Processor Graphics Gen9 and above
  • Xe architecture

Languages

  • Data Parallel C++ (DPC++) and SYCL (Note: requires the Intel oneAPI Base Toolkit to be installed)
  • C and C++

Operating systems

  • Windows
  • Linux
  • macOS (Not all toolkit components are available for macOS. The following components are included: Intel® C++ Compiler Classic and Intel® Fortran Compiler Classic.)

Development environments

  • Compatible with compilers from Microsoft, GCC, Intel, and others that follow established language standards
  • Windows: Microsoft Visual Studio
  • Linux: Eclipse*

Distributed environments:

  • MPI

Open Fabrics Interfaces (OFI) framework implementation supporting the following

  • InfiniBand*
  • iWARP, RDMA over Converged Ethernet (RoCE)
  • Amazon Web Services Elastic Fabric Adapter (AWS EFA)
  • Intel® Omni-Path Architecture (Intel® OPA)
  • Ethernet, IP over InfiniBand (IPoIB), IP over Intel OPA

Licensing

License Options

Editions: Single-Node vs Multi-Node

Intel oneAPI Base & IoT Toolkit is offered as Single-Node or Multi-Node.

  • Single-Node
    Supported for use on a laptop, notebook, desktop, PC, or workstation
  • Multi-Node
    Supported for use on a laptop, notebook, desktop, PC, workstation, or distributed-memory system, i.e. an HPC cluster

The applications the developer is writing will determine which option they should purchase.

License Types

Named user

  • Concurrent 2-seat: any 2 concurrent users / up to 10 registered developers
  • Concurrent 5-seat: any 5 concurrent users / up to 25 registered developers
    • No more Floating 1-seat option
    • FlexLM will no longer be included

Users must be compliant with Intel’s EULA and purchase the number of seats to match the number of users who need to use the software concurrently.

  • Academic
    Degree-granting institutions only; colleges and universities that teach students in higher education working toward a degree
  • Commercial
    All other users, including government and non-profit institutions that are not degree-granting
  • Languages
    No language selection. The toolkit includes Fortran, C++, Data Parallel C++, and Python support in the same SKU.
  • Operating Systems
    All supported operating systems (Windows, Linux, macOS) will be included in the same SKU. No OS selection.

Performance Analyzers & Libraries

Analyzers (e.g. VTune Profiler) and libraries (e.g. the Intel MPI Library) will no longer be available as standalone products.

  • New licenses will only be available through the purchase of oneAPI Base or oneAPI Base & HPC Toolkit Single Node or Multi-Node, depending on the product.
  • Upgrade Promotions to oneAPI are available for existing users with active support.

Existing Users – what’s next?

All registered users of Intel Parallel Studio XE and Intel System Studio (IPSXE or ISS) with active support will receive an email from Intel with the option to upgrade to the oneAPI product corresponding to their existing product free of charge.

  • If the user chooses the free upgrade option, their existing legacy product will be retired, and a new serial number will be issued to the same support end date as their legacy product.
  • Once upgraded to oneAPI, users will be eligible for all version updates & upgrades that Intel releases for oneAPI while their support agreement is active.
  • For continued support, eligible users can purchase oneAPI SSR SKUs
  • If the user does not choose to upgrade to oneAPI, the user can continue using their existing product, but they will not receive any new version updates (unless there are security bug fixes)
  • They will only be able to stay on the version they currently own.
  • Users with active support will continue to have the free upgrade link available while their license support remains active. Once support expires, the free upgrade option will go away.
  • Users with support in the post-expiry renewal range can purchase a post-expiry SSR SKU for their existing product to get their support active. Then, they will receive a link to upgrade to oneAPI for free as part of their service & support agreement.

oneAPI Upgrade Promotion: available to IPSXE users with active support at a discounted price for a limited time

  • The Upgrade Promo SKU might be a more cost-effective option for existing Composer Edition users to purchase instead of selecting the free oneAPI Upgrade offer, depending on the license type they own.
  • Users need to be aware that the Upgrade Promotions are ONLY eligible for purchase BEFORE they use the free “Upgrade to oneAPI” option while they are under active support.
  • Once a user upgrades for free, the upgrade promotion goes away.
  • New Update: IPSXE users whose support expired less than 6 months ago can also purchase a oneAPI Upgrade Promotion SKU.
  • With this purchase, a new serial number will be issued with 12 months of support extended from their current support expiration date.

oneAPI IoT Upgrade

  • Upgrade to another suite with additional capabilities, e.g. from the oneAPI Base Toolkit to the oneAPI Base & IoT Toolkit
  • Upgrade from oneAPI Single-Node to Multi-Node
  • With this purchase, a new serial number will be issued with 12 months of support extended from their current support expiration date.

Please contact us for an offer today!

Comparison

Intel oneAPI Toolkits Comparison

Intel oneAPI IoT Toolkits comparison matrix

Support


Alfasoft offers paid licenses of Intel oneAPI software with support.

  • 1 & 3-year support options will be available for all toolkits
  • Additional years of support can be added with the corresponding pre-expiry renewal SKU

Support with oneAPI Toolkit includes

  • Submit questions, problems, and other technical support requests
  • Monitor issues you’ve submitted previously
  • Direct and private interaction with Intel’s support engineers, including the ability to submit confidential support requests
  • Accelerated response time for technical questions and other product needs
  • Free download access to all new product updates and continued access to older versions of the product
  • Priority assistance for escalated defects and feature requests
  • Access to a vast library of self-help documentation built from decades of experience with creating high-performance code
  • Access to Intel public community forums supported by community technical experts and monitored by Intel engineers

What’s included with Intel Priority Support? (video)

Without paid support, support for free tools is only available through public community forums and only for the latest version update of the current year.

Click here to get access to Intel Support!