
THE TRANSFORMATION OF THE NEW HPE

There are two server markets in the world.

One is made up of the hyperscalers and cloud builders, who buy servers 50,000 or 100,000 at a time and thus have enough influence to dictate how their machines are designed and built.

This is a high-volume business with very thin margins. And then there is the enterprise market, which evolves in many ways in step with the hyperscalers and cloud builders but has more complex demands and much lower volumes, and one where producers can still charge premiums. It is not a market that vendors are running away from.

Hewlett Packard Enterprise has made no secret that it wants the latter market rather than the former, and over the past year and a half it has essentially removed itself from what it calls the Tier 1 server business, which includes not only the Super 8 – Amazon, Google, Microsoft, Facebook, Alibaba, Baidu, Tencent, and JD.com – but also smaller Internet companies and some telecommunications and service providers.

If you take the Tier 1 business out of the HPE numbers, core server revenue rose 3 percent in the first quarter of fiscal 2019, which ended in January; if you leave that Tier 1 business in – and it was considerably larger a year ago – then server revenue at HPE dropped 3 percent.

That is what Tarek Robbiati, chief financial officer at HPE, told Wall Street analysts this week when going over the numbers. The Tier 1 business, which mainly covers the Cloudline machines developed in collaboration with Taiwanese manufacturer Foxconn, is being wound down at HPE, but it still contributes to revenue and profits.

In the quarter, HPE’s total revenue dropped 1.6 percent to $7.55 billion, but if you strip out the effect of this Tier 1 business, small as it now is, HPE’s decline was less than 1 percent. Two years ago, this Tier 1 business – which included servers, switches, storage, and software – represented a meaningful slice of HPE’s server sales, we reckon.

It is hard to say how small the Tier 1 business is now, but we think it was about $400 million in fiscal Q1 2019, much of it for servers. Product sales to Tier 1 customers – never named, though we believe Microsoft and eBay were among the big consumers of HPE gear – should shrink enough to have essentially no effect by the end of this year.

In the Americas region, which comprises North, Central, and South America, market share grew in the “double digits,” according to Robbiati. HPE has reshuffled its product categories and the way it reports results several times, and datacenter networking now appears to be folded into Compute within its Hybrid IT category;

HPE also broke out the Pointnext services category and created the Intelligent Edge category, which includes Aruba wired and wireless networking plus other edge computing gear.

As with IBM, it is difficult to figure out what the core systems business at HPE looks like, and it keeps getting more difficult for a number of reasons.

The reshuffling of compute and networking categories makes meaningful comparisons harder, and we have not said much for some time about the campus networking business at HPE, most of which came from 3Com; we do not want to hazard a guess at it once again.

Additionally, the Pointnext segment includes break/fix services for HPE systems as well as the new “GreenLake” pay-per-use packaging and pricing of gear, which the company has been selling for some time and which, says current HPE chief executive officer Antonio Neri, now has 450 customers.

This GreenLake business gives HPE systems a cloud-like consumption model, has been growing at double-digit rates, and is on track for an annual run rate of $1 billion. It is still much smaller than the core ProLiant business, and it is comparable to the Synergy composable infrastructure business (counted in the Compute sector within Hybrid IT), which Neri says is also now at a $1 billion annual run rate.

Here is how the various parts of HPE have performed over the past two years:

The full Compute business, dominated at this point by ProLiant servers, with hyperscale machines like the Cloudlines and a smattering of Synergy composable systems thrown in, fell 3.3 percent to $3.4 billion. Within the Compute segment, what HPE calls value compute (roughly translated, the ProLiant servers) grew 19 percent year on year.

ONE STEP CLOSER TO DEEP LEARNING PROJECTS

A group of researchers at Sandia National Laboratories has developed a tool that can convert standard convolutional neural networks (CNNs) into spiking neural network models that can be run on neuromorphic processors.

The researchers claim the conversion will allow deep learning applications to take advantage of the energy efficiency of neuromorphic hardware, which is designed to mimic the way biological neurons work.

The tool, known as Whetstone, works by adjusting the behavior of artificial neurons during training so that they activate only when their input reaches an appropriate threshold. As a result, neuron activation becomes a binary decision – either it spikes or it does not.

In doing so, Whetstone converts artificial neural networks into spiking neural networks. The tool accomplishes this by applying an incremental “sharpening” process (hence the name Whetstone) to each layer until the network’s activations become discrete.
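
To make the sharpening idea concrete, here is a minimal Keras sketch of the general technique described above. It is not the Whetstone library itself, and the layer name, callback name, ramp shape, and schedule step are our own: a bounded activation whose linear transition region is narrowed after every epoch until it behaves like a 0/1 step.

```python
import tensorflow as tf
from tensorflow import keras

class SharpenedActivation(keras.layers.Layer):
    """Bounded-ReLU-style activation that narrows toward a 0/1 step."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Width of the linear transition region: 1.0 behaves like a bounded
        # ReLU; values near 0 approximate a hard threshold at 0.5 (spike or not).
        self.sharpness = tf.Variable(1.0, trainable=False, dtype=tf.float32)

    def call(self, inputs):
        width = tf.maximum(self.sharpness, 1e-3)  # avoid division by zero
        # Linear ramp centered at 0.5, clipped to [0, 1].
        return tf.clip_by_value((inputs - 0.5) / width + 0.5, 0.0, 1.0)

class SharpeningSchedule(keras.callbacks.Callback):
    """Shrinks the transition width a little after every epoch."""
    def __init__(self, step=0.25):
        super().__init__()
        self.step = step

    def on_epoch_end(self, epoch, logs=None):
        for layer in self.model.layers:
            if isinstance(layer, SharpenedActivation):
                new_width = max(float(layer.sharpness.numpy()) - self.step, 0.0)
                layer.sharpness.assign(new_width)

# Hypothetical usage on MNIST-sized inputs (28x28 grayscale images):
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128),
    SharpenedActivation(),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=8, callbacks=[SharpeningSchedule()])
```

The width schedule is the main knob here: the gradual narrowing during training, rather than an abrupt switch to binary activations, is what the Sandia team credits for keeping accuracy close to that of the original network.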

According to Sandia researcher Brad Aimone, this discrete activation greatly reduces the cost of communication between layers, and therefore energy consumption, with minimal loss of precision. “We continue to be surprised that, without dramatically changing the network, we can get close to the accuracy of a standard neural network,” he said. “We are usually within a percent or so.”

In a paper to be presented later this month at the annual Neuro-Inspired Computational Elements (NICE) workshop, they documented a test case in which a 47,640-neuron CNN reached 98.1 percent accuracy on the MNIST dataset of handwritten digits.

In most cases, the final trained network is portable across different neuromorphic platforms, which is one of its main advantages. The major efforts in this area – the University of Manchester’s SpiNNaker project, IBM’s TrueNorth research processor, BrainChip’s neuromorphic system-on-chip, and Intel’s Loihi – have, for the most part, relied on their own proprietary software for application development.

As Aimone put it, the field of neuromorphic computing today looks a lot like GPU computing before CUDA came along – pretty much the Wild West. By offering a hardware-agnostic, standard training tool, the hope is that a wider community will be able to share development and ecosystem efforts to create early neuromorphic software.

Whetstone’s initial work relied on SpiNNaker hardware (48-node, 864-core boards), but the Sandia researchers hope to gain access to the Loihi platform in the near future.

TrueNorth is another option, though its more unconventional architecture would make the port more complex.

Because all of the training is carried out in a conventional deep learning environment, the researchers have access to a rich ecosystem of tools and frameworks. In their early work, they used TensorFlow and CUDA on Nvidia GPU-accelerated hardware.

The researchers also use Keras, a high-level neural network library, as a wrapper around the lower-level training software.

The additional processing does, however, lengthen training time. According to the researchers, it takes “about twice as much training time as usual,” although this depends on a number of factors. In the future, they hope to optimize the workflow to reduce that time penalty.

Training time aside, the main goal of this work is to leverage standard deep learning technologies to produce inference software capable of running in a much lower power envelope when deployed.

Not only will this make applications suitable for low-power consumer or edge devices, it could also enable web giants like Google, Facebook, and Baidu to curb the energy demands of their datacenters when they serve up their own deep learning applications. Likewise, power-hungry supercomputers that use machine learning in their workflows could benefit as well.

“I think it is not too far off before you could have neuromorphic hardware embedded in traditional datacenter racks or HPC systems,” suggested William Severa, the lead mathematician on the project. That said, he notes that you do not actually need neuromorphic hardware to take advantage of a Whetstone-trained model.

“We just constrain everything to 1s and 0s, and that enables it,” he explained. So conventional, non-neuromorphic hardware like a GPU or CPU will work fine, and it will be more efficient than it otherwise would be, depending on the framework.
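
A toy sketch of Severa’s point (our illustration, not Sandia’s code): once activations are strictly 0 or 1, a dense layer’s multiply-accumulate work collapses into selective additions, which any CPU or GPU can do cheaply.

```python
import numpy as np

# With binary spikes, multiplying by the activation vector reduces a dense
# layer to summing the weight rows of the units that fired -- no multiplies.
rng = np.random.default_rng(0)
weights = rng.normal(size=(128, 10))            # trained layer weights
spikes = (rng.random(128) > 0.5).astype(float)  # 0/1 activations from the previous layer

dense_out = spikes @ weights                    # ordinary dense-layer arithmetic
cheap_out = weights[spikes == 1.0].sum(axis=0)  # additions only

assert np.allclose(dense_out, cheap_out)
```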

IBM PUMPS UP THE VOLUME ON ITS QUANTUM COMPUTER PROJECT

IBM has announced that it has hit a new high-water mark in “quantum volume,” a metric the company uses to quantify the capability of its quantum computers.

The latest quantum volume of 16 was achieved by the newly launched Q System One, double that of the 20-qubit machines offered to Q Network companies, which in turn was twice that of the 5-qubit Tenerife device IBM installed in 2017.

Quantum volume takes several factors into account, including qubit count, gate and measurement errors, coherence times, connectivity, device crosstalk, and circuit compiler efficiency. The metric is meant to be a more comprehensive measure of performance than simply counting the number of qubits in a processor.
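
For reference, here is a paraphrase of how IBM has defined the metric in its published work (the formula is not spelled out in this article): run random “model” circuits on subsets of the machine’s qubits and find the largest square circuit – width equal to depth – that the machine executes successfully.

```latex
% Quantum volume V_Q, paraphrased from IBM's published definition:
% N = qubits on the device, d(n) = largest circuit depth that passes on n qubits.
\log_2 V_Q = \max_{n \le N} \, \min\bigl(n,\; d(n)\bigr)
```

By that measure, a quantum volume of 16 corresponds to reliably running circuits four qubits wide and four layers deep, since log2 16 = 4.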

As we reported in January, Q System One is the company’s first attempt at a quantum computer designed for commercial use.

Although it uses the same 20-qubit chip as the machines deployed on IBM’s commercial Q Network, Q System One adds features such as RF shielding, vibration damping, tight temperature control, and automatic calibration, all designed to reduce error rates.

Along with adding more qubits, reducing errors is seen as the most important factor in developing a universal quantum computer.

And because two-qubit operations are both common in typical quantum applications and more error-prone than single-qubit operations, their measured error rate is considered a good indicator of gate fidelity.

[Chart: IBM quantum performance]

Q System One delivered the lowest error rates yet for an IBM quantum system, with an average two-qubit gate error of just 1.7 percent.
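
As a rough rule of thumb (our arithmetic, not a figure quoted by IBM), a reported gate error rate maps to gate fidelity as approximately one minus the error:

```latex
F \approx 1 - \epsilon = 1 - 0.017 = 0.983
```

In other words, roughly 98.3 percent two-qubit gate fidelity on average.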

That is more than half a percentage point lower than the best result for IBM’s Q20 Poughkeepsie machine, and more than a full point lower than the Q20 Tokyo system. The Q System One error rate looks comparable to that of IonQ’s 11-qubit device, which uses trapped-ion qubits rather than the superconducting technology employed by IBM, Google, Intel, and others.

Google is also focused on error tolerance in its new 72-qubit Bristlecone chip, although the company has not said much about how the device is actually performing.

Its previous 9-qubit chip had a two-qubit error rate of 0.6 percent, and holding that line as the devices scale up is seen as key to making the most of the new chip.

That said, it is generally believed that two-qubit error rates need to fall below 0.5 percent (with at least 49 qubits) to achieve quantum supremacy, and below 0.1 percent (with billions of qubits) to deliver broadly useful, fault-tolerant quantum computers.

That could be a decade or more away, but from IBM’s perspective, practical systems could arrive sooner. In particular, if a quantum system can meaningfully outperform classical systems on major applications such as computational chemistry, financial risk modeling, and supply chain optimization, it will have practical commercial value for many customers.

That threshold is based on what IBM refers to as “quantum advantage,” which the company defines as quantum computations that are hundreds or thousands of times faster than classical ones, that need less memory than classical computers require, or that are simply not possible with classical computers.

IBM also pointed out that it has been able to double its quantum volume every year since 2017. And although that represents only three generations of the technology, if IBM can maintain this better-than-Moore’s-Law pace over the next few years, the company believes it will be able to demonstrate quantum advantage in the 2020s. That desired rate of increase in quantum volume is how IBM has framed its goal.
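
A simple extrapolation of that doubling cadence (our arithmetic, not an IBM projection) shows the trajectory the company is counting on:

```latex
V_Q(\text{year}) \approx 4 \cdot 2^{\,\text{year}-2017}
\quad\Rightarrow\quad
4\ (2017),\; 8\ (2018),\; 16\ (2019),\; \ldots,\; \approx 1024\ (2025)
```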

“Today, we are laying out a roadmap for quantum computing, as our IBM Q team is committed to finding where quantum systems can make a real impact on science and business. As we pursue scientific discovery and early successes for quantum computing, our goal is to continue to push quantum volume higher in order to ultimately demonstrate quantum advantage.”

IBM researcher Sarah Sheldon and IBM Fellow Jay Gambetta offered some guidance on how IBM intends to keep increasing quantum volume in a recent blog post.

One of the most important factors is coherence time, which determines how long the device can maintain a given quantum state. The current average coherence time for Q System One is 73 microseconds.

STARTUP SHEDS SOME LIGHT ON AN OPTICAL COMPUTER

Optalysys, a UK-based startup, has introduced an entry-level optical processing system, the first of its kind on the market. The new system, known as the FT:X2000, is being sold to a select group of partners and customers to prepare the technology for broader commercial release later this year.

In a conversation with The Next Platform, Nick New, CEO and co-founder of Optalysys, described the FT:X2000 as the forerunner of the upcoming FT:X3000, which will be the company’s first full commercial offering. It is scheduled for release in early Q3.

In the meantime, the FT:X2000 prepares the ground, New says, by putting the technology into the hands of people who can put the platform through its paces and start developing software for it.

Over the past few years, a number of optical computing startups have come out of the shadows, propelled by the need for alternative technologies that can provide a viable way to compute beyond the limits of Moore’s Law on CMOS chips.

Since optical computing uses photons rather than electrons as the basis for processing, it can operate at much higher speed – literally, the speed of light – while drawing less power.

In addition to Optalysys, other companies looking to exploit the advantages of optical computing include Lightmatter, LightOn, Fathom Computing, and Lightelligence.

However, optical computing does not yet enjoy the mindshare of other alternative technologies, such as quantum or neuromorphic computing. That means these companies have some extra work to do to educate customers and investors about its advantages.

That said, the Optalysys press release for the FT:X2000 launch pointed out that Google Ventures and Baidu have invested in its competitors – Lightmatter and Lightelligence, respectively – as evidence that optical technology is now getting serious attention.

All of these companies are primarily focused on applications under the artificial intelligence umbrella and on large-scale data analytics, but each takes a different approach to the optical technology.

In the case of Optalysys, the device uses the diffraction properties of light as a way of implementing Fast Fourier Transforms (FFTs), a fundamental mathematical tool that can be used to represent a wide variety of physical and abstract phenomena.

An early adopter application is genomic search, a pattern matching problem that maps efficiently onto Optalysys optical devices. Other case studies include numerical weather prediction and other mathematically intensive processes.
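
To show why FFTs and pattern matching go hand in hand, here is a small NumPy sketch of FFT-based motif search (an illustration of the general principle, not Optalysys code; the function names and DNA strings are our own): correlating a pattern against a long sequence becomes an element-wise multiply in the Fourier domain, which is the kind of operation an optical Fourier engine performs in a single pass of light.

```python
import numpy as np

def correlation_via_fft(text_mask, pattern_mask):
    """Circular cross-correlation of a 0/1 text mask with a shorter 0/1 pattern mask."""
    n = len(text_mask)
    T = np.fft.rfft(text_mask)
    P = np.fft.rfft(pattern_mask, n)          # zero-pad pattern to text length
    return np.fft.irfft(T * np.conj(P), n)    # index k = pattern starting at offset k

def match_counts(text, pattern):
    """For every offset, count how many characters of `pattern` match `text`."""
    counts = np.zeros(len(text))
    for base in "ACGT":
        t_mask = np.array([c == base for c in text], dtype=float)
        p_mask = np.array([c == base for c in pattern], dtype=float)
        counts += correlation_via_fft(t_mask, p_mask)
    return counts

text = "ACGTGGTACGTTACGT"
pattern = "TACG"
counts = match_counts(text, pattern)
hits = np.flatnonzero(np.isclose(counts, len(pattern)))
print("exact matches start at offsets:", hits.tolist())  # [6, 11]
```

The same correlation trick underpins the kinds of correlation-heavy analytics mentioned later in this article; the optical device simply performs the Fourier-domain multiply in hardware rather than in software.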

The Optalysys hardware is based on high-resolution liquid crystal microdisplays, which are used commercially in various consumer and industrial markets.

The company uses these microdisplays to modulate low-power laser light in order to encode the numerical data fed into the FFTs. According to New, there is nothing particularly exotic or expensive about the hardware because it is built from off-the-shelf components.

“There are no large fabrication costs as there would be with a custom chip design,” he said. “The components in a mobile phone, for example, are not very different from what is in our system.”

The initial FT:X2000 product comes with a somewhat makeshift interface, using a DisplayPort connection for input and a USB port for output. The commercial product coming this summer, however, will have a standard PCI-Express interface that is better suited to plugging into a traditional server or desktop.

The idea is to use the optical device as a coprocessor to speed up the underlying numerical operations for machine learning, pattern recognition, and other kinds of correlation-based analysis.

The initial PCI-Express product will draw between 10 watts and 60 watts, but according to New it will be 30 times faster than a GPU on this type of application. Over the next twelve months, the company plans to deliver a more powerful platform expected to be 100 times faster, again with lower power consumption.

New attributes these rapid performance improvements to the inherent scalability of the optical technology. In particular, as the resolution of the microdisplays increases, the processing becomes more effective thanks to the parallelism of the approach.

New will only say that early customers for the FT:X2000 include some major players in computing, defense, and space development, but he is reluctant to name names. The early PCI-Express optical products will be aimed squarely at datacenter users, but there is greater potential in connected, small form factor products further down the road.

A VORACIOUS APPETITE FOR COMPUTE IS THE NEW NORMAL

The hyperscalers and cloud builders of the world swap out roughly half of the machines plugged into their computing base every year or so, because it is cheaper for them to upgrade their machines than to keep paying to power and cool older, less efficient iron. They have also had to buy plenty of extra servers to cope with the Spectre/Meltdown vulnerabilities, which have now been publicly known for some time.

It is a matter of scale, we think.

In any event, with revenues on their massive infrastructure growing like crazy – between 20 percent and 40 percent, depending on the company and the year – there is a hefty server bill that comes due for these big companies.

And when you add in the higher prices that Intel can command for the “Skylake” Xeon SP processors, plus higher memory and flash costs, plus much beefier machines for machine learning (including GPUs and sometimes FPGAs), 2018 turned into a killer year, even though revenues for all makers were a little softer because the hyperscalers and cloud builders took a breather at the end of the year.

Server spending is a proxy for the overall health of the IT sector, which is part of why we care about it, but mostly we just want to know what is happening in this vital part of the datacenter.

Server shipments increased 5 percent to 2.99 million machines in Q4 2018, according to IDC’s latest stats, which was a bit less than expected.

Still, those shipments, dampened by slightly lower than expected spending by the Super 8, were enough – for all of the reasons we outlined above – to drive revenues up 12.6 percent to $23.62 billion. So, in case your memory is short, that is a lot of extra money being spent on servers – nearly double the level of just three years ago.
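
Putting those two IDC figures together (our arithmetic, not an IDC statistic): revenue growing far faster than shipments means the average selling price per server climbed sharply.

```latex
\text{ASP}_{\mathrm{Q4\,2018}} \approx \frac{\$23.62\ \text{billion}}{2.99\ \text{million units}} \approx \$7{,}900 \text{ per machine},
\qquad
\frac{1.126}{1.05} - 1 \approx 7.2\% \text{ ASP growth}
```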

For the full year, server revenues increased 31.4 percent to $88.33 billion – a record year, and one that sets the stage for the world’s first $100 billion server market in 2019.

Among the original design manufacturers that IDC lumps together as a single ODM group and that are big suppliers to the hyperscalers and cloud builders – this includes Quanta Computer, Inventec, WiWynn, Foxconn, and others – growth slowed in Q4, rising just 11.6 percent to $4.74 billion. It is important not to equate this ODM line in IDC’s server statistics directly with the hyperscale and cloud market, since many OEM vendors have custom server units that also sell to the Super 8;

For example, Inspur has a big business with Alibaba and Tencent in China, and Dell does a great deal of business with Microsoft, as Hewlett Packard Enterprise did for a time before it largely backed away from a market where the margins are razor thin.

But the ODM line is a reasonable proxy for the appetite of the Super 8, which collectively consume roughly 40 percent of the world’s servers every year while trying to make money doing so. And some of that appetite may have been deferred because these companies are waiting for the future “Cascade Lake” Xeon SP processors from Intel and “Rome” Epyc processors from AMD, both of which have hardware mitigations for the Spectre/Meltdown vulnerabilities. Although ODM server sales slowed in Q4, for all of 2018 they rose 42.3 percent to $21.1 billion. So no one is going to call that a bad year.

Supplying servers to the hyperscale, cloud, and HPC markets is something of a love-hate relationship – some of you might say an affliction – that vendors have taken on, especially over the last few years, and it is best to look at the market both year by year and quarter by quarter, as shown with sales by vendor at the top of the table and shipments at the bottom:

Money is what matters, so we will focus on that. Dell was the market leader in 2018, with server revenues of $16.36 billion, according to IDC, up 37.5 percent from 2017.

Dell’s server business, as we discussed in our analysis two weeks ago, has just grown and grown, which has been one of the themes since founder Michael Dell took the company private several years ago. Hewlett Packard Enterprise is not growing as fast as the overall market, primarily because it has backed away from the hyperscalers, cloud builders, and service providers.