


When GlobalFoundries decided in August to halt development and deployment of its 7 nanometer node, including extreme ultraviolet (EUV) lithography, it looked like IBM, second only to AMD as a client for the foundry's most advanced fab in Malta, New York, would be left searching for a home for its future Power processors.

But that did not happen, any more than it did with AMD, which switched from GlobalFoundries to Taiwan Semiconductor Manufacturing Corp for future generations of its "Rome" Epyc processors.

As it turns out, IBM is not going with TSMC, where most of the server, networking, and ASIC chips in today's datacenter arena are made, but is instead choosing its long-time research and development partner, Samsung, as its wafer baker.

As we said at the time, you did not need a machine learning inference run on a supercomputer to see that one coming.

Big Blue got out of the fab business four years ago, when it paid GlobalFoundries $1.5 billion to take IBM Microelectronics off its hands. As part of that deal, the former AMD foundry agreed to take over IBM's fab in East Fishkill,

New York,

and to continue making Power8 and Power8' chips based on the 22 nanometer process developed by IBM and transferred to GlobalFoundries. The East Fishkill fab was in the process of transitioning away from the 45 nanometer equipment and processes used to make Power7 chips starting in 2010, as well as from the 32 nanometer process used to make Power7+ chips in 2012.

(By the way, a prime sign signifies a change in the I/O and possibly the memory subsystem of a Power processor; it differs from a plus sign, which signifies a change in process or architecture, or both, in the Power chips. Full major versions, of course, have major changes across the board.)

GlobalFoundries also took over the manufacture of IBM's mainframe processors, which share many elements with the Power chip line but have different instruction sets, memory subsystems, and I/O architectures.

The Power chip roadmap that we saw in August 2015, and that IBM spoke about in more detail in early 2018, called for GlobalFoundries to make IBM's future Power10 processor, which had a good chance of being installed in at least one and perhaps two exascale-class supercomputer systems in the United States in the 2021 to 2022 timeframe.

The expectation back in 2015 was that the Power10 chip would be made using a 10 nanometer process at GlobalFoundries, rather than 7 nanometer transistor gate geometries. Take a look:

This roadmap had not been made public when we wrote about it.

Earlier this year at the OpenPower Summit, IBM put out a roadmap that offers more insight into the key features of Power7 through Power10, and gives an indication of how the processor architecture and the systems built around it will change:

You will notice that the Power10 slot promises only a "new microarchitecture" and "new technology," and does not give the precise dimensions of the process that will be used to etch its transistors.

It does not say 10 nanometers or 7 nanometers,

and we think that is because IBM did not have high confidence either in GlobalFoundries' results with the 7 nanometer node (even though the foundry had equipped parts of its Fab 8 site in Malta for 7 nanometers), or in other chip designers' commitment to use GlobalFoundries at either node; the 10 nanometer node was canceled a few years back in favor of going all-in on 7 nanometers.

The original agreement between IBM and GlobalFoundries was to run from 2014 through 2024, out to the 10 nanometer node. This is part of what Big Blue paid $1.5 billion for: to get IBM Microelectronics off its income statement and, with a $4.7 billion writeoff in 2014 covering its fabs in Vermont and New York, off its books.

But the agreement only really ran for five years, if you consider the upcoming Power9' processor refresh in 2019, based on a refined 14 nanometer process from GlobalFoundries. Power9' is expected to add extra I/O and memory circuits, much as Power8' added NVLink support to the Power8 design.

In the part of the roadmap we saw, IBM was to double the core count to 48 on a monolithic Power10 die using a 10 nanometer process sometime between 2019 and 2020, and then move to 7 nanometers to get even more cores with Power11 further out. Transitions to new chip manufacturing nodes are getting harder for every server chip maker, and when they go wrong, the result can be catastrophic.



With the timeline for early production use still somewhere in the future (if it ever arrives at all), business models for quantum computing have been far from clear-cut.

This is true for relative newcomers like quantum annealing machine maker D-Wave (we spoke about this recently with its CEO), and just as true for the big companies that have tossed their hats into the quantum ring.

Companies like IBM, Intel, and Google have much to lose if the bubble bursts before commercialization, and the R&D investments are considerable, especially for those that build their own devices and hire legions of theoretical physicists to shoulder long-term problems ranging from measurement all the way down to the substrate.

All of this brings us

to Bob Sutor, a 36-year IBM veteran and now VP of Q strategy and ecosystem, who directs the company's quantum business.

As it stands, IBM's quantum effort has two sides: one is built around the free Quantum Experience, a 5-qubit system that lets developers experiment with early-stage applications and forms a base for future quantum development.

On the commercial side is the Q Network, which counts more than 30 academic, research, and corporate members worth paying attention to, JP Morgan Chase among them.

In other words, these are the users who have moved past toying with a few qubits and want to work out what such systems can do at what passes, in these early days of quantum computing, for "scale": 20 qubits.

The quotation marks around scale are not meant to diminish the progress. Getting error rates down and coherence times up is serious business, and Sutor and his teams at the Yorktown Heights, New York labs are well equipped to handle it. IBM Research has a deep bench there, with physicists working alongside the people building the algorithms and the easier on-ramps that will connect users to these machines.

Sutor did not dodge the question of ROI for the quantum business, although for IBM Research, long bets on new technologies that would be risky at other companies are nothing new.

"There is no return on investment for at least three years," Sutor said.

"That has led some of the top analyst firms to call it a poor short-term strategic decision. The thing to keep in mind is that IBM has existed for more than a hundred years, and IBM Research since 1946. We are used to incubating things, and the only real surprise would have been a world in which these computers existed and IBM was not part of it."

What is missing from the conversation about return on investment is any sense of how, or whether, IBM's particular approach can win out over the others.

That, of course, assumes that any of these approaches matures into a business broad enough to matter for a company of this size. And it also assumes that quantum hardware stays differentiated enough to create real competition, rather than everything converging on the most widely used software stacks.

Although the quantum ecosystem was sparse only a few years ago,

Sutor said there is a sense among the limited set of quantum device makers that competition is beginning to heat up. The points of differentiation go beyond raw qubit counts, with the focus on connectivity, coherence, and error rates, but comparing gate-model systems is still difficult (D-Wave being the odd one out with its annealing approach) since the metrics are nuanced.

"Companies are becoming more concerned with relative technical and commercial positioning and with investment prospects. Right now, you can go to the Q Experience and see the error rate for every single qubit on every machine we field, updated every day, and we would be happy for other quantum companies to publish the same statistics," Sutor told The Next Platform.

IBM has made it

a mission to set metrics and standards around its own technology so users can begin measuring relative merits. The per-qubit error reporting in the Q Experience is one element, with another, Quantum Volume, still to come.

Quantum Volume takes a holistic view of a quantum system, balancing what it can achieve against error rates, qubit count, coherence times, and other factors. It is called a "volume" because it is multidimensional, and because new algorithms for these systems will be built on top of it.
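To make the idea of a single multidimensional figure of merit concrete, here is a rough sketch based on an early IBM formulation of Quantum Volume (the definition has since evolved; treat this as an illustration of the concept, not IBM's official metric). The intuition: the circuit depth a machine can sustain is limited by its error rate, so the usable "volume" is capped by whichever is smaller, qubit count or achievable depth.

```python
# Rough sketch of an early Quantum Volume-style metric (illustrative only;
# not IBM's exact definition). Assumes a uniform effective gate error rate.

def achievable_depth(n_qubits: int, error_rate: float) -> float:
    """Circuit depth sustainable before errors dominate: roughly 1/(n * eps)."""
    return 1.0 / (n_qubits * error_rate)

def quantum_volume(max_qubits: int, error_rate: float) -> float:
    """Volume = max over usable widths n of min(n, depth(n))^2."""
    best = 0.0
    for n in range(2, max_qubits + 1):
        v = min(n, achievable_depth(n, error_rate)) ** 2
        best = max(best, v)
    return best

# A 20-qubit machine with a 1 percent effective error rate is limited by
# depth, not qubit count: it behaves like a ~10-wide, ~10-deep machine.
print(round(quantum_volume(20, 0.01)))  # 100
```

The point of the metric is visible in the example: adding qubits without lowering error rates stops paying off once depth becomes the binding constraint.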




The rise of neuromorphic computing has spurred research into new types of memory devices that can mimic biological neurons and synapses. A recent paper surveys the current state of the field and sizes up the most promising technologies available.

Six types of devices are surveyed, including resistive random access memory (ReRAM), i.e. memristors, phase change memory (PCM), spintronics-based magneto-resistive random access memory (MRAM), ferroelectric field-effect transistors (FeFETs), and synaptic transistors. We will take them in turn.


In the "drift" model of the memristor, ReRAM is an electrical switch that exhibits non-volatility; that is, it retains its resistance state even after the voltage is turned off. It can be made from some of the most common oxide compounds.

According to the authors, ReRAM's main advantages are its size, CMOS compatibility, low power consumption, and analog conductance modulation, all of which make it a leading option for the next generation of memory.

Its suitability for neuromorphic computing comes from the memristor's ability to change state based on its voltage history. Thanks to this behavior, it has the temporal and analog character of biological neurons and synapses. However, making these devices uniform enough to operate reliably remains a challenge, the authors say.
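As a rough illustration of that history-dependent, analog behavior, the following toy model (our sketch, not code from the paper) treats a memristive synapse as a bounded conductance that is nudged up or down by each voltage pulse it sees, and that persists between pulses:

```python
# Toy model of a memristive synapse: the conductance (the "weight") changes
# incrementally with each voltage pulse and is retained when the device is
# idle, mimicking analog, non-volatile plasticity. Illustrative sketch only.

class MemristiveSynapse:
    def __init__(self, g_min=0.0, g_max=1.0, step=0.05):
        self.g = (g_min + g_max) / 2  # conductance state, kept between pulses
        self.g_min, self.g_max, self.step = g_min, g_max, step

    def pulse(self, voltage: float) -> float:
        """A positive pulse potentiates (raises conductance); a negative pulse
        depresses it. State is clamped to the device's physical range."""
        self.g += self.step * (1 if voltage > 0 else -1)
        self.g = max(self.g_min, min(self.g_max, self.g))
        return self.g

syn = MemristiveSynapse()
for _ in range(4):
    syn.pulse(+1.0)   # four potentiating pulses move the weight from 0.5 to 0.7
print(round(syn.g, 2))  # 0.7
```

The uniformity problem the authors flag maps directly onto this model: in real arrays, `step` and the conductance bounds vary from device to device, which is exactly what makes training on such hardware hard.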

As we reported in 2017, a group at the University of Michigan's Electrical Engineering and Computer Science department, headed by Wei Lu, demonstrated a neuromorphic device using memristors arranged in a crossbar array. Lu is also helping to commercialize the technology as chief scientist at Crossbar, a company founded in 2010 that is lining up customers to bring ReRAM-based solutions to market.


A variant of the technology based on diffusive memristors, which rely on the diffusion of an active metal species, has also attracted researchers. According to the authors, these memristors can exhibit synaptic plasticity behaviors thanks to a distinctive conductance decay that allows old, short-term information to be forgotten.

Experimental devices combining diffusive memristors with ReRAM have demonstrated unsupervised learning. That work was led by a team of researchers at the University of Massachusetts, which employs three of the paper's authors. So far there is no commercial implementation, although Hewlett Packard Enterprise was keen on memristors for many years, especially for its memory-centric system known as The Machine.


PCM is a high-performance, non-volatile memory based, in this case, on a chalcogenide glass compound that changes resistance as it moves from one phase to another.

The crystalline phase of the material has low resistivity, while the amorphous phase exhibits high resistivity. The phase change is effected by applying or removing an electric current. Unlike conventional NAND-based memory, some PCM devices can withstand writes almost indefinitely.
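The two resistance regimes, plus partially crystallized states in between, are what make PCM attractive for multilevel and analog storage. A minimal sketch of reading a stored level back from a measured resistance, with invented threshold values (real devices are characterized per technology):

```python
# Toy multilevel PCM cell readout: a measured resistance is mapped back to a
# stored level. The threshold values here are invented for illustration.

# Resistance thresholds (ohms) separating four storage levels, from the
# low-resistance crystalline phase to the high-resistance amorphous phase.
THRESHOLDS = [10_000, 100_000, 1_000_000]

def read_level(resistance_ohms: float) -> int:
    """Return the stored level (0 = fully crystalline ... 3 = fully amorphous)."""
    level = 0
    for t in THRESHOLDS:
        if resistance_ohms > t:
            level += 1
    return level

print(read_level(5_000))       # 0  (crystalline, low resistance)
print(read_level(50_000_000))  # 3  (amorphous, high resistance)
```

Storing two bits per cell this way is the same trick multi-level NAND uses, but PCM's intermediate states are also what let it act as an analog synaptic weight in the neuromorphic work described below.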

Among those pursuing the technology is IBM Research, which came up with PCM DIMMs that can serve as a non-volatile cache. In past years, researchers there have also built PCI-Express PCM cards that connect to Power8 servers over the Coherent Accelerator Processor Interface (CAPI).

IBM has also applied PCM to neuromorphic computing, including an effort to design a complete neuromorphic circuit that uses PCM to mimic both neurons and synapses.


Spintronics-based MRAM, sometimes referred to as spin-transfer torque MRAM (STT-MRAM), stores data magnetically but uses electrons to read and write it. The magnetic character provides non-volatility, while the electronics provide speed. Some implementations can write data almost as fast as DRAM.

The storage element consists of two ferromagnetic layers, a free layer and a pinned layer, sandwiching a non-magnetic oxide layer. It operates by applying the current needed to flip the magnetization of the free layer from one direction to the other.

Multiple resistance states can be achieved by introducing a domain wall into the free layer. The stochastic nature of switching the device's state can be exploited to emulate the stochastic behavior of biological neurons and synapses.
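That stochastic switching can be illustrated with a toy binary synapse (our sketch, not from the paper): instead of holding an analog weight, the device flips between two states with a probability set by the drive strength, so the average over many devices or trials behaves like an analog weight.

```python
# Toy stochastic binary synapse: the cell switches to the "on" state with a
# probability determined by the programming strength, loosely mimicking the
# probabilistic switching of an STT-MRAM free layer. Illustrative sketch only.
import random

def program(strength: float, rng: random.Random) -> int:
    """Attempt to switch the cell on; succeeds with probability `strength`."""
    return 1 if rng.random() < strength else 0

rng = random.Random(42)  # fixed seed so the demo is reproducible
trials = 10_000
on_fraction = sum(program(0.3, rng) for _ in range(trials)) / trials

# Averaged over many trials, the binary device looks like an analog
# weight of roughly 0.3.
print(round(on_fraction, 2))
```

This is the design trade the survey alludes to: per-device behavior is noisy and binary, but population-level statistics recover the analog quantity a learning rule needs.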

Commercial STT-MRAM products are being developed at Avalanche Technology, Spin Memory, and Everspin Technologies. Samsung and Intel have also begun to …



The secret to the longevity of any great enterprise is a continuous process of reinvention. The companies that can adapt to the times are the ones that survive, and we need only look at the ongoing decline of General Electric to see how quickly a company's contribution to the global economy can evaporate.

Like it or not, IBM is a company that has kept reinventing itself through rapid change in the information technology sector. That may not be obvious, we know, from the financial results IBM turned in for 2018 and the years before it.

Big Blue

still has one of the biggest systems businesses on the planet, and it is more profitable, in terms of gross profit as a percentage of revenue, than businesses selling server chips, storage, and networking at scale that have, for decades, been larger than IBM's systems business.

(The net effect is that both throw off enormous amounts of cash every year.) The comparison is not apples to apples, of course, because Intel makes processors, chipsets, and motherboards, while IBM does all of that and also creates operating systems, middleware, databases, and other software, and provides support and financing for its systems.

This vast IBM systems business, anchored by the mainline System z mainframe, sits at the heart of the world's largest organizations, and is rounded out by the Power Systems business, which for all intents and purposes was the winner of the epic battle between Sun Microsystems, Hewlett Packard, and others in the open systems Unix server market that raged from the 1980s on for many years. IBM won, but it won a market that has since declined by a factor of 10X. Still, it certainly won the war.

However, inside its RISC/

Unix installed base and its HPC home turf, IBM has been pushing Linux hard on its systems, whether for processing entirely in system memory at large scale, as SAP HANA does, or for scale-out clusters that run various kinds of HPC simulation, financial modeling, and data analytics.

Linux is the development platform of choice in the datacenter; Windows Server has played second fiddle for years as Linux became the platform for new applications. That is clearly why Microsoft has embraced open source software (most of which runs on Linux), and why IBM is shelling out

$34 billion for commercial open source stalwart Red Hat.

The IBM strategy works well enough for the company to keep investing in the future of the mainline System z and Power processors, but these platforms need a vibrant system software stack, and that is what Red Hat will give them.

Let's try to tease apart this systems business, because Big Blue surely likes to make the numbers look better with today's buzzwords and to obscure the underlying business reality. We will start with the fourth quarter as reported and then do some guesstimating.

IBM did not shock Wall Street with yet another of the revenue declines it has posted over the past few years; the company's sales fell 3.5 percent in 2018's fourth quarter, to $21.76 billion. This was largely due to the strengthening of the US dollar against the currencies of the countries where Big Blue does most of its business, a common complaint among multinational behemoths.

IBM booked a gross profit of $10.68 billion on that revenue in the fourth quarter, down 1.6 percent, and income before taxes of $4.34 billion, down fractionally. It brought $1.95 billion to the bottom line, far better than the $1.05 billion loss a year ago, when a number of writedowns hit the books.
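As a quick sanity check on those figures (our arithmetic, using the numbers as reported above):

```python
# Quick arithmetic on IBM's reported fourth-quarter figures (from the text).
revenue = 21.76          # $ billion
gross_profit = 10.68     # $ billion
pretax_income = 4.34     # $ billion
net_income = 1.95        # $ billion

print(f"gross margin:   {gross_profit / revenue:.1%}")   # 49.1%
print(f"pre-tax margin: {pretax_income / revenue:.1%}")  # 19.9%
print(f"net margin:     {net_income / revenue:.1%}")     # 9.0%
```

A nearly 50 percent gross margin on a shrinking top line is the tension the rest of this analysis turns on.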

In the fourth quarter,

IBM sold $2.62 billion in systems hardware (including storage), down 21.3 percent from the year before, when the System z mainframe business surged 71 percent as the z14 generation of machines began shipping in Q4 2017.

IBM sold another $238 million in systems hardware to other parts of the company, but these are internal sales that matter only for gauging the full profit potential of the systems business. In any event, the comparison against the big mainframe quarter a year ago was always going to be very tough.

The Power Systems line spans scale-out and scale-up machines, used both by customers running the AIX and IBM i platforms to host back-end enterprise applications and databases, and increasingly by Linux shops.



Quantum computing is often described as a way to solve esoteric problems that cannot be attempted with conventional computers. But that is not how Airbus thinks of the technology.

The company recently launched the Airbus Quantum Computing Challenge, a global initiative aimed at bringing in quantum computing experts to help solve flight physics problems relevant to aerospace applications.

The challenge is open to individuals or research teams – postgraduates, PhDs, academics, researchers, startups, and other professionals in the field – with proven experience in quantum computing.

Airbus has designated five focus areas for the competition:

computational fluid dynamics, solving partial differential equations, aircraft climb optimization, wingbox design, and aircraft loading. At present, these areas are tackled with traditional engineering approaches that depend on HPC. All of them are important to the company's aerospace businesses and to its ability to differentiate its products from the competition.

The goal here does not seem to be saving money on high performance computing. (The company spends only about 3 percent of its IT budget on HPC.)

Rather, Airbus appears to be interested in quantum computing's potential to deliver better results than brute-force simulation and modeling, and to handle problem sizes greater than anything we can contemplate with digital computers.

This is not the first time Airbus has engaged with the technology.

In 2015, the company set up a quantum computing unit at its facility in Newport, United Kingdom. A year later, Airbus invested in QC Ware, a quantum computing software startup that wants to bring the technology to enterprise users.

Airbus has also used one of D-Wave's 2,000-qubit machines on a project investigating the use of quantum annealing for fault tree analysis (FTA). In the aerospace industry, FTA is a method for determining complex system failures caused by combinations of sub-system failures.

It is typically used during aircraft feasibility studies and certification.

Because FTA is a kind of NP-hard problem, it is a good candidate for quantum computing. The project involved translating the FTA software into a form that would run on the quantum annealer and benchmarking its performance against commercial SAT solvers.

What they found was that the quantum computer's performance was insensitive to the size of the problem, and that it could be used alongside a classical SAT solver to reduce run times by a factor of four.
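To make the connection between fault trees and SAT concrete, here is a small sketch (ours, not Airbus's code): a fault tree of AND/OR gates over basic events is just a Boolean formula, and finding the minimal combinations of basic-event failures that trigger the top event (the minimal cut sets) is the satisfiability-style search that makes FTA NP-hard. The tree and event names below are invented for illustration.

```python
# Brute-force minimal cut set search for a tiny, invented fault tree.
# Real FTA tools (and the SAT/annealing encodings Airbus benchmarked) scale
# far beyond this, but the underlying Boolean structure is the same.
from itertools import combinations

BASIC_EVENTS = ["pump_fail", "valve_fail", "sensor_fail", "power_fail"]

def top_event(failed: set) -> bool:
    """Top event fires if hydraulic pressure is lost AND monitoring is blind.
    Pressure is lost if the pump OR valve fails; monitoring goes blind if
    the sensor OR power fails."""
    pressure_lost = "pump_fail" in failed or "valve_fail" in failed
    monitoring_blind = "sensor_fail" in failed or "power_fail" in failed
    return pressure_lost and monitoring_blind

def minimal_cut_sets():
    """Smallest sets of basic events whose joint failure triggers the top event."""
    cuts = []
    for size in range(1, len(BASIC_EVENTS) + 1):
        for combo in combinations(BASIC_EVENTS, size):
            s = set(combo)
            # keep s only if it triggers the top event and no smaller
            # already-found cut set is contained in it
            if top_event(s) and not any(c < s for c in cuts):
                cuts.append(s)
    return cuts

print(sorted(sorted(c) for c in minimal_cut_sets()))  # four two-event cut sets
```

Brute force is exponential in the number of basic events, which is exactly why both SAT solvers and annealing hardware are interesting for industrial-scale trees.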

More importantly,

the project helped convince Airbus to keep exploring quantum computing for high performance computing use cases.

Airbus's newfound enthusiasm for quantum computing is reflected in its prediction that QC technology will "forever change how aircraft are built and flown." And this is where the company's new quantum computing challenge comes in.

As mentioned, one of the main focus areas is computational fluid dynamics (CFD), a classic HPC application and a critical element of aircraft design.

In particular, CFD is used to determine aerodynamic behavior, with the ultimate goal of reducing drag and improving fuel efficiency. Here the challenge is to find an algorithm by which a quantum computer can solve the problem faster or at larger scale than a conventional machine or, alternatively, be used in conjunction with one.

The related area of partial differential equations addresses another element of aerodynamics. The challenge in this case is more specific: Airbus is interested in implementations that apply deep learning techniques through a quantum computing approach.

The Airbus challenge for aircraft climb efficiency is driven by the growth of short-haul flights, where the takeoff and landing segments matter more than they do on longer flights.

The goal is to reduce the time and fuel cost of the initial climb, using quantum computing to find the optimal cost/benefit trade-off.

The wingbox design challenge centers on balancing the weight and structural integrity of the broader wing structure. It must take into account several elements – aircraft loads, mass modeling, and structural analysis – which have to be computed simultaneously.

That not only makes the process time-consuming, it also leaves the resulting estimates open to question. Airbus thinks quantum computing could let engineers explore a wider design space to reach an optimal design.

The fifth challenge is to improve the loading of the aircraft. Like the others, it needs to balance