New Intel Architectures and Technologies Target Expanded Market Opportunities

SANTA CLARA, Calif., Dec. 12, 2018 – At Intel “Architecture Day,” top executives, architects and fellows revealed next-generation technologies and discussed progress on a strategy to power an expanding universe of data-intensive workloads for PCs and other smart consumer devices, high-speed networks, ubiquitous artificial intelligence (AI), specialized cloud data centers and autonomous vehicles.

Intel demonstrated a range of 10nm-based systems in development for PCs, data centers and networking, and previewed other technologies targeted at an expanded range of workloads.

More: New Intel Architectures and Technologies Target Expanded Market Opportunities


Together these technologies lay the foundation for a more diverse era of computing in an expanded addressable market opportunity of more than $300 billion by 2022.1

The company also shared its technical strategy focused on six engineering segments where significant investments and innovation are being pursued to drive leaps forward in technology and user experience. They include: advanced manufacturing processes and packaging; new architectures to speed up specialized tasks like AI and graphics; super-fast memory; interconnects; embedded security features; and common software to unify and simplify programming for developers across Intel’s compute roadmap.

 

Intel Architecture Day Highlights:

  • Industry-First 3D Stacking of Logic Chips: Intel demonstrated a new 3D packaging technology, called “Foveros,” which for the first time brings the benefits of 3D stacking to enable logic-on-logic integration.
     
    Foveros paves the way for devices and systems combining high-performance, high-density and low-power silicon process technologies. Foveros is expected to extend die stacking beyond traditional passive interposers and stacked memory to high-performance logic, such as CPU, graphics and AI processors.
     
    The technology provides tremendous flexibility as designers seek to “mix and match” technology IP blocks with various memory and I/O elements in new device form factors. It will allow products to be broken up into smaller “chiplets,” where I/O, SRAM and power delivery circuits can be fabricated in a base die and high-performance logic chiplets are stacked on top.
     
    Intel expects to launch a range of products using Foveros beginning in the second half of 2019. The first Foveros product will combine a high-performance 10nm compute-stacked chiplet with a low-power 22FFL base die. It will enable the combination of world-class performance and power efficiency in a small form factor.
     
    Foveros is the next leap forward following Intel’s breakthrough Embedded Multi-die Interconnect Bridge (EMIB) 2D packaging technology, introduced in 2018.
     
  • New Sunny Cove CPU Architecture: Intel introduced Sunny Cove, Intel’s next-generation CPU microarchitecture, designed to increase performance per clock and power efficiency for general-purpose computing tasks and including new features to accelerate special-purpose computing tasks like AI and cryptography. Sunny Cove will be the basis for Intel’s next-generation server (Intel® Xeon®) and client (Intel® Core™) processors later next year. Sunny Cove features include:
     
    • Enhanced microarchitecture to execute more operations in parallel.
    • New algorithms to reduce latency.
    • Increased size of key buffers and caches to optimize data-centric workloads.
    • Architectural extensions for specific use cases and algorithms. For example, new performance-boosting instructions for cryptography, such as vector AES and SHA-NI, and other critical use cases like compression and decompression.

     
    Sunny Cove enables reduced latency and high throughput and offers much greater parallelism, which is expected to improve experiences from gaming to media to data-centric applications.

  • Next-Generation Graphics: Intel unveiled new Gen11 integrated graphics with 64 enhanced execution units, more than double the 24 EUs of previous Intel Gen9 graphics, designed to break the 1 TFLOPS barrier. The new integrated graphics will be delivered in 10nm-based processors beginning in 2019.
     
    The new integrated graphics architecture is expected to double the computing performance per clock compared to Intel Gen9 graphics. With >1 TFLOPS performance capability, this architecture is designed to increase game playability. At the event, Intel showed Gen11 graphics nearly doubling the performance of a popular photo recognition application compared to Intel’s Gen9 graphics. Gen11 graphics is also expected to feature an advanced media encoder and decoder, supporting 4K video streams and 8K content creation in constrained power envelopes. Gen11 will also feature Intel® Adaptive Sync technology, enabling smooth frame rates for gaming.
     
    Intel also reaffirmed its plan to introduce a discrete graphics processor by 2020.
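The >1 TFLOPS figure above can be sanity-checked from the EU count alone. A minimal back-of-the-envelope sketch, assuming (as in prior Intel Gen graphics generations; the release does not state this) that each EU sustains 8 FP32 FMA operations per clock, i.e. 16 FLOPs per clock, at an illustrative 1 GHz graphics clock:

```python
# Back-of-the-envelope check of the >1 TFLOPS claim for Gen11 graphics.
# Assumptions (not stated in the release): 16 FP32 FLOPs per EU per clock
# (8 FMA x 2 FLOPs each) and a 1 GHz graphics clock.
EUS = 64                 # execution units in Gen11, per the announcement
FLOPS_PER_EU_CLOCK = 16  # assumed, based on prior Gen architectures
CLOCK_HZ = 1.0e9         # assumed, illustrative only

peak_tflops = EUS * FLOPS_PER_EU_CLOCK * CLOCK_HZ / 1e12
print(f"{peak_tflops:.3f} TFLOPS")  # prints 1.024 TFLOPS
```

Under those assumed rates, 64 EUs land just past the 1 TFLOPS barrier; a higher or lower clock moves the figure proportionally.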
  • “One API” Software: Intel announced the “One API” project to simplify the programming of diverse computing engines across CPU, GPU, FPGA, AI and other accelerators. The project includes a comprehensive and unified portfolio of developer tools for mapping software to the hardware that can best accelerate the code. A public project release is expected to be available in 2019.
  • Memory and Storage: Intel discussed updates on Intel® Optane™ technology and the products based upon that technology. Intel® Optane™ DC persistent memory is a new product that converges memory-like performance with the data persistence and large capacity of storage. The revolutionary technology brings more data closer to the CPU for faster processing of bigger data sets like those used in AI and large databases. Its large capacity and data persistence reduce the need to make time-consuming trips to storage, which can improve workload performance. Intel Optane DC persistent memory delivers cache-line (64B) reads to the CPU. The average idle read latency with Optane persistent memory is expected to be about 350 nanoseconds when applications direct the read operation to Optane persistent memory, or when the requested data is not cached in DRAM. For scale, an Optane DC SSD has an average idle read latency of about 10,000 nanoseconds (10 microseconds), a remarkable improvement.2 In cases where requested data is in DRAM, either cached by the CPU’s memory controller or directed by the application, memory subsystem responsiveness is expected to be identical to DRAM (<100 nanoseconds).
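The latency figures above describe a three-level hierarchy: DRAM under 100 ns, Optane DC persistent memory around 350 ns, and Optane DC SSDs around 10 microseconds. A minimal sketch of the arithmetic, using the illustrative numbers cited in the text:

```python
# Illustrative comparison of the average idle read latencies cited above.
# Constant values come from the text; variable names are ours.
DRAM_NS = 100            # DRAM-cached reads: < 100 ns
OPTANE_PMEM_NS = 350     # Optane DC persistent memory, average idle read
OPTANE_SSD_NS = 10_000   # Optane DC SSD: ~10 microseconds

ssd_vs_pmem = OPTANE_SSD_NS / OPTANE_PMEM_NS
pmem_vs_dram = OPTANE_PMEM_NS / DRAM_NS
print(f"Persistent memory answers idle reads ~{ssd_vs_pmem:.0f}x faster "
      f"than an Optane SSD, at ~{pmem_vs_dram:.1f}x DRAM latency.")
```

The roughly 29x gap between the SSD and persistent-memory figures is what lets more of the working set sit close to the CPU without round trips to storage.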

The company also showed how SSDs based on Intel’s 1 terabit QLC NAND die move more bulk data from HDDs to SSDs, allowing faster access to that data.
 
The combination of Intel Optane SSDs with QLC NAND SSDs will enable lower-latency access to the data used most frequently. Taken together, these platform and memory advances complete the memory and storage hierarchy, providing the right set of choices for systems and applications.

  • Deep Learning Reference Stack: Intel is releasing the Deep Learning Reference Stack, an integrated, highly performant open source stack optimized for Intel® Xeon® Scalable platforms. This open source community release is part of our effort to ensure AI developers have easy access to all of the features and functionality of Intel platforms. The Deep Learning Reference Stack is highly tuned and built for cloud-native environments. With this release, Intel is enabling developers to quickly prototype by reducing the complexity associated with integrating multiple software components, while still giving users the flexibility to customize their solutions.
    • Operating System: Clear Linux* OS is customizable to individual development needs, tuned for Intel platforms and specific use cases like deep learning;
    • Orchestration: Kubernetes* manages and orchestrates containerized applications for multi-node clusters with Intel platform awareness;
    • Containers: Docker* containers and Kata* containers utilize Intel® Virtualization Technology to help secure containers;
    • Libraries: Intel® Math Kernel Library for Deep Neural Networks (MKL-DNN) is Intel’s highly optimized math library for mathematical function performance;
    • Runtimes: Python*, providing application and service execution runtime support, is highly tuned and optimized for Intel architecture;
    • Frameworks: TensorFlow* is a leading deep learning and machine learning framework;
    • Deployment: KubeFlow* is an open source, industry-driven deployment tool that provides a fast experience on Intel architecture, ease of installation and simplicity of use.

1Intel calculated 2022 total addressable market opportunity derived from industry analyst reports and internal estimates.

2Average idle read latency is the mean time for read data to return to a requesting processor. This is an average; some latencies will be longer. Tests document performance of components on a particular test, in specific systems. Differences in hardware, software or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase. For more complete information about performance and benchmark results, visit www.intel.com/benchmarks.

Forward-Looking Statements

Statements in this news summary that refer to future plans and expectations, including with respect to Intel’s future products and the expected availability and benefits of such products, are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,” “intends,” “goals,” “plans,” “believes,” “seeks,” “estimates,” “continues,” “may,” “will,” “would,” “should,” “could,” and variations of such words and similar expressions are intended to identify such forward-looking statements. Statements that refer to or are based on estimates, forecasts, projections, uncertain events or assumptions, including statements relating to total addressable market (TAM) or market opportunity and anticipated trends in our businesses or the markets relevant to them, also identify forward-looking statements. Such statements are based on the company’s current expectations and involve many risks and uncertainties that could cause actual results to differ materially from those expressed or implied in these forward-looking statements. Important factors that could cause actual results to differ materially from the company’s expectations are set forth in Intel’s earnings release dated October 25, 2018, which is included as an exhibit to Intel’s Form 8-K furnished to the SEC on such date. Additional information regarding these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most recent reports on Forms 10-K and 10-Q. Copies of Intel’s Form 10-K, 10-Q and 8-K reports may be obtained by visiting our Investor Relations website at www.intc.com or the SEC’s website at www.sec.gov.