What’s New: Intel today announced two new members of its Intel® Xeon® processor portfolio: Cascade Lake advanced performance (expected to be released in the first half of 2019) and the Intel Xeon E-2100 processor for entry-level servers (general availability today). These two new product families build upon Intel’s foundation of 20 years of Intel Xeon platform leadership and give customers even more flexibility to pick the right solution for their needs.
“We remain highly focused on delivering a wide range of workload-optimized solutions that best meet our customers’ system requirements. The addition of Cascade Lake advanced performance CPUs and Xeon E-2100 processors to our Intel Xeon processor lineup once again demonstrates our commitment to delivering performance-optimized solutions to a wide range of customers.”
–Lisa Spelman, Intel vice president and general manager of Intel Xeon products and data center marketing
How Cascade Lake Performs: Cascade Lake advanced performance represents a new class of Intel Xeon Scalable processors designed for the most demanding high-performance computing (HPC), artificial intelligence (AI) and infrastructure-as-a-service (IaaS) workloads. The processor incorporates a performance-optimized multi-chip package to deliver up to 48 cores per CPU and 12 DDR4 memory channels per socket. Intel shared initial details of the processor in advance of the Supercomputing 2018 conference to provide further insight into the company’s continued innovation across workload types.
Cascade Lake advanced performance processors are expected to extend Intel’s workload-optimized performance leadership by delivering both core CPU performance gains1 and leadership in memory bandwidth-constrained workloads. Performance estimates include:
- Linpack up to 1.21x versus Intel Xeon Scalable 8180 processor and 3.4x2 versus AMD* EPYC* 7601
- Stream Triad up to 1.83x versus Intel Xeon Scalable 8180 processor and 1.3x2 versus AMD EPYC 7601 (see the illustrative kernel sketch after this list)
- AI/Deep Learning Inference up to 17x images-per-second2 versus Intel Xeon Platinum 8180 processor at launch.
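For context on the Stream Triad figures above: Stream Triad measures sustained memory bandwidth with the kernel a[i] = b[i] + q*c[i]. The following is a minimal illustrative sketch in C with OpenMP, not the configuration Intel or AMD tested; the array size, timing method, and bandwidth formula here are simplifying assumptions.

```c
/* Illustrative Stream Triad kernel (a[i] = b[i] + q*c[i]).
 * Simplified sketch; array size and timing are assumptions,
 * not the tested benchmark configuration. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 50000000L   /* ~1.2 GB across three arrays, well beyond last-level cache */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    const double q = 3.0;

    /* Initialize in parallel (also first-touches pages across NUMA nodes). */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        a[i] = b[i] + q * c[i];          /* the Triad kernel */
    double t1 = omp_get_wtime();

    /* Triad touches three 8-byte arrays per element: 24 bytes moved
     * per iteration (ignoring write-allocate traffic). */
    double gbps = (3.0 * 8.0 * N) / (t1 - t0) / 1e9;
    printf("Triad bandwidth: %.1f GB/s\n", gbps);

    free(a); free(b); free(c);
    return 0;
}
```

A build such as `gcc -O3 -fopenmp stream_triad.c -o stream_triad` would run all available cores; measured bandwidth depends heavily on memory channel count and speed, which is what the 12-channel design targets.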
How Intel Xeon E-2100 Processors Enhance Cloud Security: Intel® Software Guard Extensions (Intel® SGX) on the Intel Xeon E-2100 processor family delivers hardware-based security and manageability features to further secure customer data and applications. This feature is currently unique to the Intel Xeon E processor family and allows new entry-level servers featuring an Intel Xeon E-2100 processor to provide an additional layer of hardware-enhanced security when used with properly enabled cloud applications.
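As a rough illustration of how software can tell whether a processor reports SGX at all, the sketch below queries the CPUID feature flag (CPUID.(EAX=07H,ECX=0H):EBX bit 2). This is only a capability check under the assumption of a GCC/Clang toolchain on x86; actually running enclaves additionally requires BIOS enabling and the Intel SGX SDK/PSW, which this snippet does not cover.

```c
/* Minimal sketch: check the CPUID SGX feature flag
 * (CPUID leaf 7, sub-leaf 0, EBX bit 2). A capability probe only;
 * enclave use requires BIOS enabling and the SGX SDK/PSW. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    /* Structured extended feature flags: leaf 7, sub-leaf 0. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 7 not supported on this CPU\n");
        return 1;
    }

    if (ebx & (1u << 2))
        printf("CPU reports Intel SGX support\n");
    else
        printf("CPU does not report Intel SGX support\n");

    return 0;
}
```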
Whom Intel Xeon E-2100 Processors Help: The Xeon E-2100 processor is targeted at small- and medium-size businesses and cloud service providers. The processor supports workloads suitable for entry-level servers, but also has applicability across all computing segments requiring enhanced data protections for the most sensitive workloads.
Small businesses deploying Intel Xeon E-2100 processor-based servers will benefit from the processor’s enhanced performance and data security. These servers help businesses operate smoothly by supporting the latest file-sharing, storage and backup, virtualization, and employee productivity solutions.
How You Get It: Intel Xeon E-2100 processors are available today through Intel and leading distributors.
More Context: Cascade Lake Advanced Performance (Press Presentation) | Xeon E-2100 (Press Presentation)
The Small Print:
1Performance Leadership: Based on our current understanding of the Linpack performance of general purpose processors commercially available in 2019. Unprecedented Memory Bandwidth: Native DDR memory bandwidth. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit https://ift.tt/ThHRhx. Results have been estimated or simulated using internal Intel analysis or architecture simulation or modeling, and provided to you for informational purposes. Any differences in your system hardware, software or configuration may affect your actual performance.
2Performance results are based on testing or projections as of 6/2017 to 10/3/2018 (Stream Triad), 7/31/2018 to 10/3/2018 (LINPACK) and 7/11/2017 to 10/7/2018 (DL Inference) and may not reflect all publicly available security updates.
LINPACK: AMD EPYC 7601: Supermicro AS-2023US-TR4 with 2 AMD EPYC 7601 (2.2GHz, 32 core) processors, SMT OFF, Turbo ON, BIOS ver 1.1a, 4/26/2018, microcode: 0x8001227, 16x32GB DDR4-2666, 1 SSD, Ubuntu 18.04.1 LTS (4.17.0-041700-generic Retpoline), High Performance Linpack v2.2, compiled with Intel(R) Parallel Studio XE 2018 for Linux, Intel MPI version 18.0.0.128, AMD BLIS ver 0.4.0, Benchmark Config: Nb=232, N=168960, P=4, Q=4, Score = 1095GFs, tested by Intel as of July 31, 2018, compared to 1-node, 2-socket 48-core Cascade Lake Advanced Performance processor projections by Intel as of 10/3/2018.
Stream Triad: 1-node, 2-socket AMD EPYC 7601, http://www.amd.com/system/files/2017-06/AMD-EPYC-SoC-Delivers-Exceptional-Results.pdf, tested by AMD as of June 2017, compared to 1-node, 2-socket 48-core Cascade Lake Advanced Performance processor projections by Intel as of 10/3/2018.
DL Inference: Platform: 2S Intel® Xeon® Platinum 8180 CPU @ 2.50GHz (28 cores), HT disabled, turbo disabled, scaling governor set to “performance” via intel_pstate driver, 384GB DDR4-2666 ECC RAM, CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64, SSD: Intel® SSD DC S3700 Series (800GB, 2.5in SATA 6Gb/s, 25nm, MLC). Performance measured with: environment variables KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56; CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with "caffe time --forward_only" command, training measured with "caffe time" command. For "ConvNet" topologies, a dummy dataset was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50) and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to use newer Caffe prototxt format but are functionally equivalent). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with "numactl -l". Tested by Intel as of July 11, 2017, compared to 1-node, 2-socket 48-core Cascade Lake Advanced Performance processor projections by Intel as of 10/7/2018.
No product can be absolutely secure. Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice (Notice Revision #20110804).