HYPERSCALERS AND CLOUD ARE THE NEW TREND

For the first time since the Great Recession, chip maker Intel booked less revenue in the fourth quarter of the year than it did in the third quarter of the same year.

The culprit is a slowdown in various parts of the datacenter business, with enterprises and governments pausing their spending at the same time that China reacts to slowing global growth and some of the companies that consume Intel chips work down inventory and buy a bit more slowly than usual.

All of this took Intel by surprise, with the company not seeing it coming as recently as October last year. And with the company turning in a record year of $70.85 billion in sales, up 12.9 percent, and net income of $21.1 billion, an increase of 119.3 percent (almost 2.2X), it is not too hard to see why it was caught off guard.

Intel reckons it has captured about a quarter of a total addressable market that it pegs at $300 billion, and it has been somewhat humbled in recent years by the difficulty of getting its 10 nanometer manufacturing process to market.

But its prospects are beginning to look up as the company winds down the 14 nanometer “Skylake” Xeon SPs that customers have been buying and readies the 10 nanometer “Ice Lake” Xeon SPs that will come to market late this year and into next.

The impending “Cascade Lake” Xeons, which offer only modest benefits over Skylake, will come to market just as the slump in server buying plays out in the first quarter of 2019. It is hard to say what is cause and what is effect here.

The differences between the Skylake and Cascade Lake Xeons may not be big enough for the large hyperscalers and cloud builders to bother upgrading, and those who bought Skylakes last year may simply sit out the Cascade Lake generation and wait to cut a good deal again with the Ice Lake generation.

“We had strong double digit growth in the cloud in 2018, driven by a product cycle as well as the typical multiyear build patterns we see with Xeon Scalable,” Navin Shenoy, executive vice president and general manager of the Data Center Group at Intel, told Wall Street analysts on the call going over the financial results for Q4 and the full 2018 year.

“When you look back at the history of the cloud business, there has always been some lumpiness. There is a period where people build capacity, and then there is a period where they consume what they have bought. From our customers’ buying patterns, it looks like we are now in a consumption period, which started in the second half of the fourth quarter, and we expect that to continue through the first half of this year.”

It is important not to get the wrong idea here: there is a normal sequential decline from the fourth quarter of one year to the first quarter of the next, on the order of 8 percent to 10 percent, in the Data Center Group, and Intel’s chief financial officer and interim chief executive officer warned Wall Street that the decline could be twice that at the beginning of 2019.

And it looks like the pause will extend until the middle of 2019, when the Cascade Lake Xeons are expected to ship in volume and when AMD will field its “Rome” Epyc X86 server chips, giving Intel some competitive grief as well.

We wondered a year ago when the Xeon business would peak, and it looks like we may have been watching that peak as it happened.

Do not get the wrong idea, though.

The Data Center Group is still a very large and very profitable business, and slowing growth does not mean Intel cannot dominate it for many years to come.

It just will not have the same degree of control over the technology and the infrastructure that it has enjoyed for more than a decade, now that it once again faces credible competition.

In the fourth quarter, Intel’s overall sales rose 9.4 percent to $18.66 billion, and net income came in at just over $5.2 billion, against a $687 million loss recorded in the year-ago quarter.

In the quarter, platform sales in the Data Center Group – which includes processors, chipsets, and motherboards – rose 9.9 percent to $5.59 billion, while adjacencies – things like Omni-Path interconnects and other network silicon – brought in sales of $475 million, down 2.5 percent from a year ago. All told, the Data Center Group had $6.07 billion in sales, up 8.7 percent, and posted an operating profit of $3.06 billion.
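
The segment arithmetic is easy to check. Here is a minimal sketch in Python using only the figures reported above; the derived operating margin is our own back-of-the-envelope calculation, not a number Intel itself called out.

```python
# Quick sanity check of the Data Center Group figures cited above.
# Revenues are in billions of US dollars; "adjacencies" covers the
# non-platform products like Omni-Path and other network silicon.

platform = 5.59        # platform sales (processors, chipsets, motherboards)
adjacencies = 0.475    # adjacency sales
group_total = 6.07     # reported Data Center Group total

# Platform plus adjacencies should land on the reported group total,
# give or take rounding in the reported figures.
assert abs((platform + adjacencies) - group_total) < 0.01

# Operating profit as a share of group revenue (derived, not reported).
operating_profit = 3.06
print(f"DCG operating margin: {operating_profit / group_total:.1%}")  # ~50.4%
```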

AMD NAILS ITS EPYC SERVER TARGET, SETS A NEW ONE FOR 2019

The second 5 percent of server market share will likely come easier for chip maker AMD than the first 5 percent, which it attained as it exited the fourth quarter and which, significantly, came on roughly the schedule that chief executive officer Lisa Su stepped up to the plate and committed to at the end of 2016, when the “Naples” Epyc processors were still in the works.

Everyone has been watching to see if AMD would make its server numbers, and in truth, big deals with hyperscalers – notably Amazon, Microsoft, and Tencent – helped bring home the bacon for the Naples Epyc generation.

“Fourth quarter server processor shipments doubled sequentially, based on demand for our high-end 32-core Epyc processors from our cloud, HPC, and virtualized enterprise customers,” Su said yesterday on a call with Wall Street analysts after the markets closed.

“As a result, we believe we achieved our goal of a mid-single-digit unit share of the server market exiting 2018. We had particularly strong momentum with the clouds, highlighted by industry leader Amazon announcing new versions of its most popular EC2 compute instances powered by Epyc processors. Businesses can easily migrate to the AMD instances on AWS and save 10 percent or more based on the advantages of our platform technology. Microsoft Azure also announced the general availability of AMD-based storage instances in the quarter, as well as new HPC instances powered by Epyc processors that deliver 33 percent faster performance than their competitive X86 offerings.”

Su called out several HPC system wins, including Epyc-based clusters at Procter & Gamble, the US Department of Energy (the NERSC-9 “Perlmutter” system), and the University of Stuttgart, as well as a system at Lawrence Livermore National Laboratory that uses a mix of Epyc processors and Instinct GPU accelerators and will be used for data analytics and machine learning workloads. While there has been a fair amount of tire kicking and testing among enterprise and HPC customers, said Su, it is the hyperscalers and cloud builders that are driving Epyc processor shipments at present, and these are the customers that will be at the front of the line for the Rome Epycs, which are socket compatible with Naples, will deliver a big performance jump, and are due to arrive in the middle of the year to kick off the second round.

“As we look at 2019, I would expect that the early ramp of Rome will also be cloud-led,” Su said. “That will come first, but we have a strong set of enterprise platforms, and as I mentioned earlier, the breadth of OEM platforms gives us good confidence that we will make progress in the enterprise as well. But what we have said before is that after reaching mid-single-digit market share exiting the fourth quarter of 2018, we expect it will take four to six quarters to reach 10 percent market share, and I still stand by that.”

AMD may be hedging a bit on when it can get to 10 percent of shipments, but the situation could turn out to be better than that.

If the next generation of Epyc processors ships this year as planned, AMD will be using a hybrid approach, with 7 nanometer core chiplets manufactured by Taiwan Semiconductor Manufacturing Corp married to a 14 nanometer memory and I/O die made by GlobalFoundries.

AMD has renegotiated its wafer supply agreement with GlobalFoundries, gaining the freedom to choose any foundry for processes smaller than 12 nanometers in exchange for committing to GlobalFoundries for everything at 12 nanometers and above.

(GlobalFoundries has halted its 7 nanometer research, development, and production efforts, essentially conceding that the volumes from potential customers did not justify the investment required for the next node shrink; AMD and IBM, the two big server chip designers that depended on GlobalFoundries, saw the writing on the wall and are moving to TSMC and Samsung, respectively, for future chips.)

The point is that the Rome chips are expected to offer 2X the throughput and 4X the floating point performance of Naples, and that is not the kind of performance jump that Intel will deliver with its mainstream Xeon SP processors this year.
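
As a rough illustration of how multipliers like that can fall out of a redesign – assuming, hypothetically, that the gains come from doubling the cores per socket and doubling the floating point width per core, a decomposition the article above does not spell out – the arithmetic looks like this:

```python
# Back-of-the-envelope sketch of how a 4X floating point jump can fall
# out of a chiplet redesign. These multipliers are illustrative
# assumptions, not AMD's disclosed engineering numbers.

core_count_gain = 2.0   # e.g. twice the cores per socket
fp_width_gain = 2.0     # e.g. twice the floating point width per core

peak_flops_gain = core_count_gain * fp_width_gain
throughput_gain = core_count_gain  # integer work only sees the extra cores

print(f"Peak floating point per socket: {peak_flops_gain:.0f}X")  # 4X
print(f"General throughput per socket:  {throughput_gain:.0f}X")  # 2X
```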

Intel is not expected to field its 10 nanometer “Ice Lake” Xeons until early 2020 at the earliest, and in the quarters between a Q3 2019 Rome launch and a Q1 2020 Ice Lake debut, AMD will be able to…

ARM GOES TO WAR IN THE DATACENTER WITH THE ARES DESIGN

When Arm Holdings, the division of the Softbank conglomerate that designs and licenses the key components of the processor architecture that bears its name, launched its Neoverse roadmap for the datacenter and the edge last October, the company put the architecture on a rigorous cadence and promised to deliver 30 percent performance improvements at the system level with each generation.

That roadmap set out a series of performance milestones and process node targets. And right out of the chute this year, the Neoverse “Ares” design, which will formally be known as the N1, has beaten Arm’s targets handily, more than doubling performance over the prior generation and showing even bigger gains on many of the workloads tested on prototype chips and simulators.

It is not so much that Arm lowballed the performance advantages it thought it could get with the Ares chips as that many different factors in the hardware and the compilers came together at the same time to deliver a bigger jump than Arm had planned when it started the Neoverse project, aimed at delivering a processor design customized for the datacenter and the edge, more than five years ago.

When we suggested to Mike Filippo, who is an Arm Fellow and lead chip architect, at the recent Tech Day for chipheads in San Jose that sandbagging targets is how an engineer becomes a hero, he laughed and said: “You might think so, but unfortunately for those of us in engineering, just because we got here this time does not mean we knew we could get here, and we have just made our job harder.”

In any event, we would not expect Arm to beat its stated performance goals so dramatically every time. While there is always a chance that performance can be pushed higher, every process node is harder to extract gains from than the last, and it is always difficult to predict how the other technologies in a system will evolve.

Even a 30 percent performance increase – again, at the system level – every year means performance that is 2.2X higher than the baseline three years into the future. That is better than what the Intel Xeon architecture has delivered in recent years, and slightly faster than what IBM delivered from Power7 through Power9 across the generations spanning 2010 through 2018.
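
The compounding behind that 2.2X figure is easy to verify; here is a minimal sketch:

```python
# Compounding a 30 percent annual system-level gain over three generations.
base = 1.0
annual_gain = 0.30

performance = base * (1 + annual_gain) ** 3
print(f"Performance after three years: {performance:.1f}X the baseline")  # ~2.2X
```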

Frankly, it was time for Arm to get more creative with a server-class core design, since warmed-over Cortex-A72 and Cortex-A75 cores were not sufficient and thus compelled its server chip partners to invest in creating their own custom 64-bit cores.

That had the effect of slowing the delivery of server designs early in Arm’s datacenter push, and the hope is that in the future, Arm licensees already in the server space, or aspiring to be, can take this Neoverse design, drop in the memory controllers, PHYs, and PCI-Express controllers that Arm does not supply, quickly get it to the fab of their choice, and get chips into the field.

(It really comes down to Taiwan Semiconductor Manufacturing Corp at this point, with possibly some business for Samsung Electronics, perhaps someday United Microelectronics Corp, and – if hell freezes over as hard as quartz – maybe Intel, now that GlobalFoundries has taken itself out of the game and stopped at the 14/16 nanometer node.)

The Ares design is tuned for the 7 nanometer node at TSMC; it is unclear how much HiSilicon, the chip design unit of Huawei Technologies, relied on the Ares design to create its Kunpeng 920 processor, which was unveiled back in early January and will be delivered later this year.

But the Kunpeng 920, despite what HiSilicon has said about designing its own ARMv8 cores, looks an awful lot like a 64-core Ares reference chip, with eight DDR4 memory controllers and PCI-Express 4.0 peripheral controllers, as announced at the Arm Tech Day event. Hmmmm….

With the Neoverse design, Arm moved from the ring topology that linked the system elements on a Cortex-A72 chip to a mesh topology with higher bandwidth across all of the components and lower average latency between them, and this, perhaps more than anything else, is what gives the Ares Neoverse N1 chips such a big performance boost over the prior “Cosmos” generation based on the modified Cortex-A72.
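
To see why a mesh cuts latency, consider the average hop count between components under uniform traffic. This is a toy model of generic ring and square-mesh topologies, not of Arm’s actual interconnect:

```python
from itertools import product

def ring_hops(n):
    # Average shortest-path hops between distinct nodes on a bidirectional ring.
    total = sum(min(abs(a - b), n - abs(a - b))
                for a in range(n) for b in range(n) if a != b)
    return total / (n * (n - 1))

def mesh_hops(k):
    # Average Manhattan-distance hops between distinct nodes on a k x k mesh.
    nodes = list(product(range(k), repeat=2))
    total = sum(abs(ax - bx) + abs(ay - by)
                for (ax, ay) in nodes for (bx, by) in nodes
                if (ax, ay) != (bx, by))
    n = len(nodes)
    return total / (n * (n - 1))

# For 64 on-chip components, a ring averages roughly three times the hops
# of an 8x8 mesh, and every extra hop is extra latency.
print(f"64-node ring: {ring_hops(64):.1f} hops on average")  # ~16.3
print(f"8x8 mesh:     {mesh_hops(8):.2f} hops on average")   # ~5.33
```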

“It has been a five-year labor of love, across many development sites, to create this new family of infrastructure products,” Filippo explained. “We went into this project optimizing for performance and power efficiency at the same time…”

THE ART OF TAILORING SYSTEMS TO AI APPLICATIONS

The various applications and algorithms that make up AI come in many flavors, which makes different hardware suitable for different uses. At the Stanford HPC Conference 2019, Brett Newman of Microway offered some general guidelines for matching system designs to the nature of the software.

Although Newman’s official title is vice president of marketing and customer engagement at Microway, a provider of HPC and now AI iron, he is part of the technical team that specs out big iron for buyers.

That team, Newman said, is full of people who “spend their days designing and delivering a large quantity, and more importantly, a large diversity, of HPC and AI systems.”

Over the years, Newman has been involved in hundreds of these system deployments, working with customers to determine what kind of hardware they should buy.

He also served a stint at IBM, where he worked on Power Systems, helping develop Big Blue’s PowerAI machine learning pitch and helping sell GPU-accelerated Power9 systems like those used in the “Summit” supercomputer at Oak Ridge National Laboratory.

Unsurprisingly, Newman’s Stanford presentation was something of a promotion for Microway, a 35 year old HPC system integrator. The company jumped on the GPU bandwagon early, back in 2007, the same year Nvidia released the first version of its CUDA runtime and software development toolkit.

The AI market came along only later, but Microway quickly recognized that its GPU-based systems would benefit from that larger market. “We are not the only vendor out there,” Newman said, “but we are one of the better shops when it comes to AI hardware solutions.”

In his presentation at the Stanford conference, Newman talked about the criteria AI customers should weigh when choosing a system for their workloads, and how to map those criteria onto hardware.

To that end, he laid out a series of questions for users to walk through as they consider the various options. Keep in mind that the options described here all revolve around GPUs from Nvidia, which, as we all know, has become the dominant player in machine learning training.

Setting aside the question of vendor choice for a moment, Newman believes there are two questions customers should ask first when sizing up an AI system: What does your AI data workload look like, and what computations are you running?

The nature of the dataset is key to GPU selection, especially with regard to memory capacity. Will the data fit in GPU memory, and if so, in one GPU or several? (The top-of-the-line Nvidia V100 GPU comes with either 16 GB or 32 GB of local HBM2 memory, so there is some flexibility here.) In most cases, datasets will be larger than 32 GB, so the data will have to be spread across the GPUs in a server or even across a cluster of servers.

That leads to a secondary consideration of how the data can be divided into logical chunks that can be fed into a GPU or GPUs, which ultimately determines the batch size the AI model works through. You should also think about the size of the individual data items (assuming a more or less uniform dataset) and how many of those items will fit in a GPU of a given memory capacity.

For example, if you have a 128 GB dataset made up of 8 GB images, you could use a four-GPU server, with each GPU loaded with four images at a time. That would be an unusually small dataset, but you get the general idea.
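
Here is a minimal sketch of that sizing arithmetic, using the hypothetical numbers from the example above:

```python
import math

# Hypothetical figures from the example above: a 128 GB dataset of 8 GB
# items spread across V100-class GPUs with 32 GB of HBM2 memory apiece.
dataset_gb = 128
item_gb = 8
gpu_memory_gb = 32

items = dataset_gb // item_gb                   # 16 items in total
items_per_gpu = gpu_memory_gb // item_gb        # 4 items fit per GPU
gpus_needed = math.ceil(items / items_per_gpu)  # 4 GPUs, i.e. one 4-GPU server

print(f"{items} items, {items_per_gpu} per GPU -> {gpus_needed} GPUs needed")
```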

For AI model training, Newman advises using NVLink-enabled Nvidia GPUs. NVLink connects GPUs to one another at speeds of up to 300 GB/sec (if you use Nvidia’s NVSwitch) and supports atomic operations and cache coherence. For this work, NVLink delivers better price/performance than the 32 GB/sec PCI-Express 3.0 links that otherwise connect the GPUs.
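
To make that bandwidth gap concrete, here is a rough estimate of how long it takes to move a given payload between GPUs over each link at the peak rates quoted above; real transfers see lower effective bandwidth, so treat the gap as an upper bound:

```python
# Rough time to move a payload between GPUs at the peak link rates quoted
# above (ignoring protocol overhead and latency, so real numbers are worse).

payload_gb = 16              # e.g. a chunk of training data or activations
nvlink_gb_per_sec = 300      # NVLink via NVSwitch, peak
pcie_gb_per_sec = 32         # PCI-Express 3.0 x16, peak

for name, rate in [("NVLink", nvlink_gb_per_sec), ("PCIe 3.0", pcie_gb_per_sec)]:
    print(f"{name}: {payload_gb / rate * 1000:.0f} ms to move {payload_gb} GB")
# NVLink:   ~53 ms
# PCIe 3.0: ~500 ms
```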

Newman said that on multi-GPU servers with NVLink, you can expect a dividend of 10 percent to 20 percent on training runs compared to GPUs linked by the slower PCI-Express. Keep in mind that the NVLink V100 GPUs also provide 11 percent more raw computing capacity than their PCI-Express counterparts (125 versus 112 peak tensor teraflops, respectively), so it is hard to say exactly how much of the boost…