
GEARING UP FOR EXASCALE WITH LENOVO


Lenovo is arguably unique among the world's major HPC system suppliers in having built such a large share of the HPC market.

Lenovo's HPC business was initially built on the System x server business it acquired from IBM a few years back, plus the storage and related gear it licensed from Big Blue as part of the deal. But Lenovo,

which supplies hyperscalers and the Chinese cloud builders, knows a thing or two about computing at scale and about getting machine costs down to competitive levels. And so it has been winning deals and growing its HPC business.

The Top500 position

The November 2018 Top500 ranking of supercomputers, based on the Linpack Fortran benchmark, makes the case for Lenovo. On the current list, Lenovo has 140 systems with an aggregate 234.3 petaflops of Linpack performance (16.6 percent of the total capacity on the list) across 7.74 million cores, putting it ahead of Cray,

which has 193 petaflops of Linpack oomph across 49 machines with more than 7 million cores, representing 13.6 percent of total list capacity.
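
As a quick sanity check, using only the figures quoted above, the two market shares imply roughly the same total list capacity, about 1.4 exaflops; a minimal sketch:

```python
# Back-of-the-envelope check of the Top500 shares quoted above (figures from the article).
lenovo_pf, lenovo_share = 234.3, 0.166   # petaflops, fraction of total list capacity
cray_pf, cray_share = 193.0, 0.136

total_via_lenovo = lenovo_pf / lenovo_share   # ~1,411 petaflops
total_via_cray = cray_pf / cray_share         # ~1,419 petaflops

print(f"Implied total list capacity: ~{total_via_lenovo:.0f} PF vs ~{total_via_cray:.0f} PF")
```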

That is because the “Summit” supercomputer at Oak Ridge National Laboratory and the “Sierra” system at Lawrence Livermore National Laboratory are not labeled as IBM machines (they carry shared labels with Mellanox and Nvidia) even though IBM is the prime contractor,

but if you add these machines to the true Big Blue machines, then IBM has 15 systems on the list with a total of 296.4 petaflops of sustained Linpack performance across 7.6 million cores. Essentially, Lenovo is holding its own against IBM and Cray, and it is still well ahead of Chinese rivals Inspur, Sugon, and Huawei.

At this point, says Scott Tease,

who runs Lenovo's high performance computing business, the company has the broadest installed base in the world, with machines running in 17 countries. Lenovo can compete in China against the homegrown rivals, and thanks to IBM's long HPC history in North America and Europe,

it can compete in those markets, too. Other Chinese vendors want to break into the Western economies, and Western vendors want to break into China; both end up with a mixed record, and none of them has as many machines on the list as Lenovo. That is no small feat, really.

Tease tells The Next Platform that many enterprise, government, and academic HPC centers want to see where you stand on the Top500 as part of the procurement process, even though they could run their own Linpack tests. Building a big system demonstrates that you can build a big system.

But as we head into the exascale era

Lenovo is getting ready for big changes. Exascale systems will all be more complex, thanks to the slowing of Moore's Law, which no longer delivers cheaper chips and steady performance gains on the old schedule.

“One of the things that we are telling our customers is that a CPU-only platform is not going to get us to exascale,” Tease said. “It will be a blend of different accelerator technologies, whether that is Nvidia or AMD GPUs, Intel's future configurable accelerators, or Intel and Xilinx FPGAs; the exascale system will be a mix of CPUs and accelerators.

The angle we take is to leave ourselves open to different partnerships. In Europe, there are RISC-V and Arm; in China, there is homegrown compute plus AMD Epyc; and in the United States, there are CPU-GPU hybrid engines and the CSA from Intel.

We are trying to stay open given our global diversity, and we tell people that whatever investment we make in exascale products, our goal is to take that product and sell it to HPC and AI customers of all sizes. It will not be something we design once and then never sell again.”

One of the changes is that the general purpose Xeon processor, which has been the substrate of the HPC cluster for the past decade, is seeing real competition for the first time in a long while. No one knows yet how that will shake out, but everyone is watching the race closely.

“There is a lot of excitement about AMD coming back to the market, because it gives customers an option and puts a competitor out front again,” explained Tease. “It is good that we no longer see interest in a processor technology fade simply because nobody could keep up with Intel; now there are Arm processors and RISC-V as well.”

TALKING DATABASES WITH HADOOP CREATOR DOUG CUTTING


The list of technologies that have been created because of the limitations of enterprise relational databases is quite long, but those tried and true relational systems remain at the heart of the modern enterprise.

But they are increasingly sharing space in the datacenter with a variety of databases and data stores that try to get around performance or scale constraints to solve very hard problems.

Doug Cutting, creator of the open source Hadoop platform that mimics Google's eponymous file system and its MapReduce data chunking and chewing system, ran up against those limitations while working at Yahoo, did something about it, and in the end helped launch a new industry and solve some very big problems.
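
For readers who have not worked with it, the MapReduce model at the heart of Hadoop is simple to sketch: a map step emits key-value pairs from each chunk of input, the framework groups the values by key, and a reduce step aggregates them. Here is a minimal, single-process illustration in Python (not Hadoop's actual API, just the idea):

```python
from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word in one input split.
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    # Aggregate every value emitted for a single key.
    return word, sum(counts)

def run(documents):
    grouped = defaultdict(list)
    for doc in documents:                  # each document stands in for an HDFS split
        for word, one in map_phase(doc):   # map runs per split (in parallel on a real cluster)
            grouped[word].append(one)      # the "shuffle": group emitted values by key
    return dict(reduce_phase(w, c) for w, c in grouped.items())

print(run(["the quick brown fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```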

As chief architect at Cloudera, the largest distributor of commercial Hadoop platforms and soon to become larger still when it completes its merger with rival Hortonworks, Cutting knows where the Hadoop platform fits today and where it can grow in the future.

The Next Platform recently talked to Cutting about those possibilities.

Timothy Prickett Morgan: I want to know what happens next for Hadoop. Platforms come and go. What do you do for an encore after Hadoop? I mean, we are The Next Platform, so it is right there in the job title to think about what follows.

I just spent a week at the Supercomputing 2018 conference, where the hot topic of conversation was a kind of convergence of traditional simulation and modeling with machine learning, either as part of the workflow or embedded within it: running an ensemble of simulations and feeding the results to a model that steers the next step of the simulation, or using machine learning on the dataset to pick the algorithm, or the piece of the algorithm, to run.

Good platforms seem to absorb new technologies with relative ease, and Hadoop, which started out as a batch processing engine, is a good example: Spark in-memory processing and stream processing have been bolted on, along with machine learning, traditional SQL query support, and a succession of different file systems. All kinds of things have come and gone on the Hadoop platform.

I do not know if you can even call it Hadoop anymore. It is interesting to consider whether such platforms become more focused over time, or whether they diverge and stay different forever until something new comes along and sweeps them up, the way Hadoop itself once did.

It would be interesting to see if anyone can ever build just one analytics platform, but if you look at the database market, the evidence shows that every time you think you have a database that can do everything, specialized databases spring up like mushrooms.

It is hard to get to that kind of platform uniformity.

I think of it as an evolving ecosystem, built primarily on open source projects that benefit from interoperability as new things come along. They interoperate with the older things, and some of the older things become less interesting when something better arrives. Some things keep their utility.

As a result, there are more tools, arriving faster than in the past, because open source innovation is driven more by users. Many new technologies come out of people's frustration with existing tools; they see how they can add something new that builds on existing pieces and gives them what they need.

Doug Cutting: The way I see it, we get more capabilities this way.

So you do not necessarily have solutions that are conceived of and promoted by vendors; instead, somebody quickly builds a new tool because they need it.

They put it out there, and others can try it and see whether it is useful to them as well. When enough people find it useful, open source vendors like ourselves begin to support it. We have seen this happen with Spark, with Kafka, and with a number of other things now.

As for Hadoop proper, I think it is less likely to be disrupted. We have this new style of ecosystem, and that is the fundamental disruption: it is simply no longer the case that everything is built around a relational database management system that is closely guarded by a handful of vendors.

 

INTEL BETS HEAVILY ON STACKING CHIPS IN NEW WAYS


Innovation needs motivation, and there is nothing like a competitor trying to eat your lunch every day to provide it. From a financial point of view, the decline of the RISC/Unix vendors and the long slump at AMD have been very good for Intel, whose hegemony in the datacenter has never been greater and whose revenues and profits keep setting records.

The latter is helped along by the boom among the hyperscalers and cloud builders, which have applied a kind of competitive pressure on Intel that its OEM and ODM partners never could.

Although Intel has sought to leverage its near monopoly on server compute in the datacenter to push into networking (with limited results) and storage (better, thanks to flash memory and now the potentially disruptive Optane 3D XPoint persistent memory), the lack of competition has arguably dulled Intel's formidable engineering edge.

Intel is making buckets of money, to be sure,

and the server market is growing faster than its smaller rivals can eat into it, so the attacks by AMD Epyc and Marvell ThunderX2, and some sword waving by IBM Power9, have not really dented Intel's core business. And the two-year delay of the 10 nanometer process,

which exposes Intel's manufacturing shortcomings, has had little real effect so far. But in 2019, when AMD and Marvell field new generations of devices built on the advanced processes of Taiwan Semiconductor Manufacturing Corp, the fight will come to Intel again, and it will feel the heat.

It is the task of Raja Koduri, senior vice president of the Core and Visual Computing Group, general manager of edge computing solutions, and chief architect at Intel, and Jim Keller, senior vice president of silicon engineering, to blunt those attacks.

Koduri and Keller

are the people who were responsible for the Radeon GPUs and Epyc CPUs that restarted AMD, and the two of them, among other top brass at Intel, laid out the company's attack and defense plans at the Architecture Day event held this week at the former home of Intel co-founder Robert Noyce in Los Altos.

It was a fitting scene, with Intel perched high above Silicon Valley, trying to carve out an even bigger piece of the datacenter for itself.

ROME WASN'T BUILT, OR SACKED, IN A DAY

We will get into more detail on what was said at Architecture Day, on our conversations with both Koduri and Keller, and on what we think about what was revealed, but in this first pass we will just take the high level view that the two of them laid out to start the day.

Everyone has grown accustomed to Intel's tick-tock methodology, championed by Pat Gelsinger, who previously ran Intel's datacenter business, was once an heir apparent at the chip maker, and went on to become the top executive at VMware.

With tick-tock, Intel split the chip development process into two parts to reduce risk: the tick signaling a shrink to a new transistor process, and the tock a change of architecture that uses that process 12 to 18 months later, once it has been refined and debugged.

With the tick-tock approach, Intel could maintain a steady stream of performance improvements, and it worked well, right up to the point where the ticks got harder and the tocks stretched out.

Intel broke that cadence at 14 nanometers, extending the life of the node and turning tick-tock into tick-tock-tock-tock to wring more performance out of the chip-making node, which was necessary because of the delays in getting the 10 nanometer manufacturing process into production.

Five years ago, Intel believed that 10 nanometers could ship in 2015; then it was 2016, then 2017, then late 2019, and now, at least for servers, early 2020, when the “Ice Lake” Xeons appear. Intel has stretched out 14 nanometers, pushed back 10 nanometers, and killed off some products that depended on the 10 nanometer process.

The lesson is to never again have products that depend so heavily on a single process node,

and to learn to mix and match chiplet elements etched in different processes, cramming them into 2D packages or stacking them into 3D packages. In effect, you only shrink the pieces of the chip where it helps, and you leave the rest of the chip in the package, such as the memory controller and the I/O controller, which burn a lot of power, on older processes.

THE VITAL ENGINES OF COMMERCE


It has taken a long time, but if current conditions persist, we could see the server market break through $100 billion in sales next year.

The infrastructure boom is unmistakable, and it reflects not only rising machine prices, driven by expensive processors, memory, flash, and sometimes speedy GPUs or FPGAs, but also the sheer number of machines that modern applications consume and, perhaps, the larger value that organizations are getting out of these vital engines of commerce.

The current wave of server spending,

which drags spending on storage and switching along with it, makes the dot-com boom look like a dress rehearsal. In the third quarter, IDC reckons, the world consumed 3.16 million servers, up 18.3 percent over the year-ago period, and when all the invoices for this iron were added up, they came to $23.37 billion in revenues for the hundreds of machine makers, an increase of 37.7 percent over the third quarter of 2017.
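
Two quick derived figures, not numbers IDC reports here but simple arithmetic on the figures above, show how much of the boom is price rather than volume:

```python
# Derived from the IDC Q3 2018 figures quoted above (approximate, not IDC's own numbers).
revenue = 23.37e9     # dollars of factory revenue in Q3 2018
units = 3.16e6        # servers shipped in Q3 2018

asp = revenue / units                          # implied average selling price per box
asp_growth = (1 + 0.377) / (1 + 0.183) - 1     # revenue grew faster than unit shipments

print(f"Implied ASP: ${asp:,.0f} per server")                 # roughly $7,400
print(f"Implied ASP growth year on year: {asp_growth:.1%}")   # roughly 16 percent
```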

Market prognosticators were right that the server market would eventually explode to this level; they were just about fifteen years too early in their forecasts, not seeing the recessions of 2001-2002 and 2007-2009 on the horizon, nor the wave of virtualization that was about to sweep through corporate datacenters,

driving up utilization and driving down unit shipments. But the compaction from virtualization was absorbed long ago, and these days virtualization, whether through hypervisors or containers, is simply a given for certain workloads, while many of the big workloads pushing sales today run on bare metal.

Some historical figures illustrate the point. In 1997, when the dot-com boom got rolling, x86 server sales stood at $12 billion and 1.8 million units, and revenues and shipments would more than double over the following three years.

At the time, x86 machines made up about half of all server shipments but generated only a fraction of the revenue pie, which came to $55 billion that year, since plenty of big RISC/Unix iron was still doing duty as web servers and dot-com database engines. Back then, IDC figured the server market would hit $90 billion by 2003, which did not happen.

But that time has finally come,

and it took a big shift to service bureau-style utility computing, whether through the consumption of free or nearly free hyperscale applications or through rented virtual infrastructure that lets people run their own applications on someone else's iron, to make the server market go supersonic. We now spend nearly twice as much on servers as we did twenty years ago.

So what does that all add up to? If server shipments rise by 35 percent in the fourth quarter, then 12.4 million machines will have come out of the factories in 2018. And each server has roughly 40X more capacity than it did back then (about 2X on clock speed, about 2X on the work done per clock, and about 10X on the number of cores per machine; these are admittedly very rough figures).

Thus, the total server computing capacity being consumed is something like 275X what it was at the height of the dot-com boom, and that is before you count the many other ways computing capacity gets consumed. More than that, from 1997 to 2018 global gross domestic product grew by about 65 percent, against roughly 60,000 percent growth in server computing capacity.
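
Pulling the rough figures above together, the arithmetic behind that roughly 275X estimate looks like this (all inputs are numbers quoted in this piece, so treat it as back-of-the-envelope only):

```python
# Back-of-the-envelope capacity math using the figures quoted above.
units_1997 = 1.8e6    # the 1997 unit shipment figure cited earlier
units_2018 = 12.4e6   # projected 2018 shipments if Q4 rises 35 percent

# Per-server capacity multiplier: clock speed x work per clock x core count (very rough)
per_server_gain = 2 * 2 * 10          # = 40X

shipment_ratio = units_2018 / units_1997              # ~6.9X more machines
aggregate_gain = shipment_ratio * per_server_gain     # ~276X more shipped capacity

print(f"{shipment_ratio:.1f}X more servers x {per_server_gain}X per server = ~{aggregate_gain:.0f}X")
```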

Let that sink in. That is a lot of server capacity.

As we have said before, there are many factors feeding the current server boom. Increased prices for CPUs, memory, and flash probably account for half of the revenue growth this year, with the rest split between richer configurations sporting GPUs and FPGAs and growth in unit shipments.

That does not make the boom unimportant, mind you. It just shows the real power of Moore's Law, even as it slows, to let us do things, for less money, that were out of reach only a year or two ago.

We live in the future.

There are other factors at work in the 2018 server surge, too. Companies, especially those that remember the contracting IT budgets of the Great Recession that swept the global economy, may be buying ahead of a projected downturn. A new university survey of chief financial officers hints as much.

INTEL UNFOLDS ROADMAPS FOR NEW CPU AND GPU


It would not have been much of an Architecture Day, held earlier this week at the former home of Intel co-founder Robert Noyce, if the chip maker had not opened up a few pages of its roadmaps for future CPUs and GPUs.

The details were somewhat sparse,

as is often the case with roadmaps that are revealed to a broad audience; the key OEMs and ODMs behind the main product lines no doubt see the fine print. But Raja Koduri, senior vice president of the Core and Visual Computing Group, general manager of edge computing solutions, and chief architect at Intel, walked through the highlights in his talk at Architecture Day.

Taken together, those details give some sense of what Intel is counting on in compute, networking, and storage as these products come to market or, as has happened more than once in the past year, do not.

Make no mistake, Intel is facing more competitive pressure than it has seen in more than ten years, and that is healthy for Intel and for its rivals in the datacenter.

But only a fool would think that Intel, whose culture was famously built on the idea that only the paranoid survive, cannot rise to the occasion and put up a good fight. It has done so again and again, as we well know, and that is why this is never easy, for Intel or for its rivals.

At a certain level

it is a wonder that any chip comes out on time at all, more or less behaving the way its designers intended. Modern CPUs, GPUs, and FPGAs are among the most complicated, and most important, devices ever made, and it is worth stepping back to appreciate

what has been accomplished in datacenters over the past decades, and how critical it is for Intel to keep delivering big innovations in process and architecture at a rapid clip. This is the hardest of markets, and that is why the rewards for the winners are so great.

Ronak Singhal

an Intel Fellow and chief architect at the company, walked through some of the key features coming to the CPU cores that will be used in its Core processors and Xeon server chips, and also gave some hints about the Atom processors that end up in systems for storage, network functions, and other such workloads.

He received a bachelor's degree in electrical and computer engineering from Carnegie Mellon and joined Intel after graduating in 1997, notably working on the performance team for the Pentium 4 processor, whose NetBurst design Intel once believed could be pushed to 10 GHz.

(The thermals turned out to be too high for that to work, as the company found to everyone's disappointment.) Singhal led the performance team for the transformational “Nehalem” Xeons, which debuted in 2009 with a revamped architecture, and for the “Westmere” Xeons that followed, and after that he headed core development for the “Haswell” Xeons.

He is now responsible for CPU core design across the Core, Atom, and Xeon families.

There are several changes coming to the cores used in the Core and Xeon processors, and Singhal ran through a number of them in his talk at Architecture Day earlier this week. Undoubtedly, they will come to the Xeon cores a bit more slowly.

Core and Atom:

As you can see, there will be an annual cadence of microarchitecture updates to the cores used in the Core and Xeon lines, matching the annual refinement (and eventual ramp) of the 14 nanometer and 10 nanometer manufacturing processes, as we discussed earlier this week.

This means that the old tick-tock model is officially dead for Cores and Xeons, the approach Intel used for over a decade to reduce risk by changing only one thing at a time, either the manufacturing process or the microarchitecture.

But the rivalry from AMD and the Arm collective has steadily intensified, with annual design improvements coupled with manufacturing process advances, so Intel has to move more quickly and find other ways to mitigate the risk.

We presume that Intel is moving, if slowly, toward converting the monolithic Cores and Xeons into modular, chiplet-style designs that combine different chips with different functions, much as AMD, Xilinx, and Barefoot Networks have done with chips coming in 2019. It would be very impressive if that worked and the “Ice Lake” Xeons then executed well at 10 nanometers.