Saturday, 5 May 2018

SUPER COMPUTER

Roll back time a half-century or so and the smallest computer in the world was a gargantuan machine that filled a room. When transistors and integrated circuits were developed, computers could pack the same power into microchips as big as your fingernail. So what if you build a room-sized computer today and fill it full of those same chips? What you get is a supercomputer—a computer that's millions of times faster than a desktop PC and capable of crunching the world's most complex scientific problems. What makes supercomputers different from the machine you're using right now? Let's take a closer look!

Photo: This is Titan, a supercomputer based at Oak Ridge National Laboratory. At the time of writing in 2018, it's the world's fifth most powerful machine (it was the third most powerful in 2017). Picture courtesy of Oak Ridge National Laboratory, US Department of Energy, published on Flickr in 2012 under a Creative Commons Licence

What Is a Supercomputer?

Before we make a start on that question, it helps if we understand what a computer is: it's a general-purpose machine that takes in information (data) by a process called input, stores and processes it, and then generates some kind of output (result). A supercomputer is not simply a fast or very large computer: it works in an entirely different way, typically using parallel processing instead of the serial processing that an ordinary computer uses. Instead of doing one thing at a time, it does many things at once.

Chart: Who has the most supercomputers? Almost 90 percent of the world's 500 most powerful machines can be found in just six countries: China, the USA, Japan, Germany, France, and the UK. Drawn in January 2018 using the latest data from TOP500, November 2017.

Serial and Parallel Processing
What's the difference between serial and parallel? An ordinary computer does one thing at a time, so it does things in a distinct series of operations; that's called serial processing. It's a bit like a person sitting at a grocery store checkout, picking up items from the conveyor belt, running them through the scanner, and then passing them on for you to pack in your bags. It doesn't matter how fast you load things onto the belt or how fast you pack them: the speed at which you check out your shopping is entirely determined by how fast the operator can scan and process the items, which is always one at a time. (Since computers first appeared, most have worked by simple, serial processing, inspired by a basic theoretical design called a Turing machine, originally conceived by Alan Turing.)


A typical modern supercomputer works much more quickly by splitting problems into pieces and working on many pieces at once, which is called parallel processing. It's like arriving at the checkout with a giant cart full of items, but then splitting your items up between several different friends. Each friend can go through a separate checkout with a few of the items and pay separately. Once you've all paid, you can get together again, load up the cart, and leave. The more items there are and the more friends you have, the faster it gets to do things by parallel processing—at least, in theory. Parallel processing is more like what happens in our brains.
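The checkout analogy maps neatly onto real code. Here's a toy Python sketch (the "work" being scanned is invented for the example; it just burns some CPU time) that runs the same tasks serially and then in parallel with a pool of worker processes:

```python
# A toy illustration of serial vs. parallel processing. The "work"
# below is invented for the example; it just burns some CPU time.
from multiprocessing import Pool

def scan_item(n):
    """Pretend to 'scan' one item: an independent chunk of work."""
    return sum(i * i for i in range(n))

def serial_checkout(items):
    # One 'cashier' handles every item in turn, one at a time.
    return [scan_item(item) for item in items]

def parallel_checkout(items, workers=4):
    # Several 'cashiers' (worker processes) scan items simultaneously.
    with Pool(processes=workers) as pool:
        return pool.map(scan_item, items)

if __name__ == "__main__":
    items = [200_000] * 8  # eight equal chunks of work
    # Both approaches produce identical results; only the time differs.
    assert serial_checkout(items) == parallel_checkout(items)
```

On a multi-core machine, the parallel version finishes faster once each task is big enough to outweigh the cost of splitting the work up and gathering the results, which is exactly the overhead the supermarket analogy predicts.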

Why Do Supercomputers Use Parallel Processing?

Most of us do quite trivial, everyday things with our computers that don't tax them in any way: looking at web pages, sending emails, and writing documents use very little of the processing power in a typical PC. But if you try to do something more complex, like changing the colors on a very large digital photograph, you'll know that your computer does, occasionally, have to work hard to do things: it can take a minute or so to do really complex operations on very large digital photos. If you play computer games, you'll be aware that you need a computer with a fast processor chip and quite a lot of "working memory" (RAM), or things really slow down. Add a faster processor or double the memory and your computer will speed up dramatically—but there's still a limit to how fast it will go: one processor can generally only do one thing at a time.


Now suppose you're a scientist charged with forecasting the weather, testing a new cancer drug, or modeling how the climate might be in 2050. Problems like that push even the world's best computers to the limit. Just like you can upgrade a desktop PC with a better processor and more memory, so you can do the same with a world-class computer. But there's still a limit to how fast a processor will work and there's only so much difference more memory will make. The best way to make a difference is to use parallel processing: add more processors, split your problem into chunks, and get each processor working on a separate chunk of your problem in parallel.

Massively Parallel Computers
Once computer scientists had figured out the basic idea of parallel processing, it made sense to add more and more processors: why have a computer with two or three processors when you can have one with hundreds or even thousands? Since the 1990s, supercomputers have routinely used many thousands of processors in what's known as massively parallel processing; at the time I'm updating this, in January 2018, the world's fastest supercomputer, the Sunway TaihuLight, has around 40,960 processing modules, each with 260 processor cores, which means 10,649,600 processor cores in total!
Unfortunately, parallel processing comes with a built-in drawback. Let's go back to the supermarket analogy. If you and your friends decide to split up your shopping to go through multiple checkouts at once, the time you save by doing this is obviously reduced by the time it takes you to go your separate ways, figure out who's going to buy what, and come together again at the end. We can guess, intuitively, that the more processors there are in a supercomputer, the harder it will probably be to break up problems and reassemble them to make maximum efficient use of parallel processing. Moreover, there will need to be some sort of centralized management system or coordinator to split the problems, allocate and control the workload between all the different processors, and reassemble the results, which will also carry an overhead.
With a simple problem like paying for a cart of shopping, that's not really an issue. But imagine if your cart contains a billion items and you have 65,000 friends helping you with the checkout. If you have a problem (like forecasting the world's weather for next week) that seems to split neatly into separate sub-problems (making forecasts for each separate country), that's one thing. Computer scientists refer to complex problems like this, which can be split up easily into independent pieces, as embarrassingly parallel computations (EPC)—because they are trivially easy to divide.
But most problems don't cleave neatly that way. The weather in one country depends to a great extent on the weather in other places, so making a forecast for one country will need to take account of forecasts elsewhere. Often, the parallel processors in a supercomputer will need to communicate with one another as they solve their own bits of the problem. Or one processor might have to wait for results from another before it can do a particular job. A typical problem worked on by a massively parallel computer will thus fall somewhere between the two extremes of a completely serial problem (where every single step has to be done in an exact sequence) and an embarrassingly parallel one; while some parts can be solved in parallel, other parts will need to be solved in a serial way. A law of computing (known as Amdahl's law, for computer pioneer Gene Amdahl) explains how the part of the problem that remains serial effectively determines the maximum improvement in speed you can get from using a parallel system.
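Amdahl's law is simple enough to state as a one-line formula: if a fraction p of a job can run in parallel across n processors, the best possible speedup is 1 / ((1 - p) + p/n). A small sketch (the fractions below are invented examples, not figures from any real machine):

```python
# Amdahl's law: the serial fraction of a program caps its parallel speedup.
def amdahl_speedup(parallel_fraction, processors):
    """Maximum speedup when a given fraction of the work can run in parallel."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# If 95% of a job is parallelizable, even infinitely many processors
# can never do better than 1 / 0.05 = 20x overall.
print(round(amdahl_speedup(0.95, 10), 2))      # ≈ 6.9x with 10 processors
print(round(amdahl_speedup(0.95, 10_000), 2))  # ≈ 19.96x with 10,000
```

Notice how the jump from 10 processors to 10,000 buys less than a 3x further improvement: the 5 percent of the job that stays serial dominates everything else.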

Clusters

You can make a supercomputer by filling a giant box with processors and getting them to cooperate on tackling a complex problem through massively parallel processing. Alternatively, you could just buy a load of off-the-shelf PCs, put them in the same room, and interconnect them using a very fast local area network (LAN) so they work in a broadly similar way. That kind of supercomputer is called a cluster. Google does its web searches for users with clusters of off-the-shelf computers dotted in data centers around the world.


Photo: Supercomputer cluster: NASA's Pleiades ICE Supercomputer is a cluster of 112,896 cores made from 185 racks of Silicon Graphics (SGI) workstations. Picture by Dominic Hart courtesy of NASA Ames Research Center.

Grids

A grid is a supercomputer similar to a cluster (in that it's made up of separate computers), but the computers are in different places and connected through the Internet (or other computer networks). This is an example of distributed computing, which means that the power of a computer is spread across multiple locations instead of being located in one, single place (that's sometimes called centralized computing).

Grid supercomputing comes in two main flavors. In one kind, we might have, say, a dozen powerful mainframe computers in universities linked together by a network to form a supercomputer grid. Not all the computers will be actively working in the grid all the time, but generally we know which computers make up the network. CERN's Worldwide LHC Computing Grid, assembled to process data from the LHC (Large Hadron Collider) particle accelerator, is an example of this kind of system. It consists of two tiers of computer systems, with 11 major (tier-1) computer centers linked directly to the CERN laboratory by private networks, which are themselves linked to 160 smaller (tier-2) computer centers around the world (mostly in universities and other research centers), using a combination of the Internet and private networks.


The other kind of grid is much more ad-hoc and informal and involves far more individual computers—typically ordinary home computers. Have you ever taken part in an online computing project such as SETI@home, GIMPS, FightAIDS@home, Folding@home, MilkyWay@home, or ClimatePrediction.net? If so, you've allowed your computer to be used as part of an informal, ad-hoc supercomputer grid. This kind of approach is called opportunistic supercomputing, because it takes advantage of whatever computers just happen to be available at the time. Grids like this, which are linked using the Internet, are best for solving embarrassingly parallel problems that easily break up into completely independent chunks.

What Software Do Supercomputers Run?

You might be surprised to discover that most supercomputers run fairly ordinary operating systems much like the ones running on your own PC, although that's less surprising when we remember that a lot of modern supercomputers are actually clusters of off-the-shelf computers or workstations. The most common supercomputer operating system used to be Unix, but it's now been superseded by Linux (an open-source, Unix-like operating system originally developed by Linus Torvalds and thousands of volunteers). Since supercomputers generally work on scientific problems, their application programs are sometimes written in traditional scientific programming languages such as Fortran, as well as popular, more modern languages such as C and C++.

What Do Supercomputers Actually Do?

As we saw at the start of this article, one essential feature of a computer is that it's a general-purpose machine you can use in all kinds of different ways: you can send emails on a computer, play games, edit photos, or do any number of other things simply by running a different program. If you're using a high-end cellphone, such as an Android phone or an iPhone or an iPod Touch, what you have is a powerful little pocket computer that can run programs by loading different "apps" (applications), which are simply computer programs by another name. Supercomputers are slightly different.
Typically, supercomputers have been used for complex, mathematically intensive scientific problems, including simulating nuclear missile tests, forecasting the weather, simulating the climate, and testing the strength of encryption (computer security codes). In theory, a general-purpose supercomputer can be used for absolutely anything.

While some supercomputers are general-purpose machines that can be used for a wide variety of different scientific problems, some are engineered to do very specific jobs. Two of the most famous supercomputers of recent times were engineered this way. IBM's Deep Blue machine from 1997 was built specifically to play chess (against Russian grandmaster Garry Kasparov), while its later Watson machine (named for IBM's founder, Thomas Watson, and his son) was engineered to play the quiz show Jeopardy! Specially designed machines like this can be optimized for particular problems; so, for example, Deep Blue would have been designed to search through huge databases of potential chess moves and evaluate which move was best in a particular situation, while Watson was optimized to analyze tricky general-knowledge questions phrased in (natural) human language.



Monday, 30 April 2018

Heat Sink

What Is a Heat Sink and How Does It Work?

Though the term heat sink probably isn't one most people think of when they hear the word computer, it should be. Without heat sinks, modern computers couldn't run at the speeds they do. Just as you cool down with a cold bottle of Gatorade after a high-impact workout, heat sinks cool down your computer's processor after it runs multiple programs at once. And without a quality heat sink, your computer processor is at risk of overheating, which could destroy your entire system, costing you hundreds, even thousands of dollars.

But what exactly is a heat sink and how does it work? Simply put, a heat sink is an object that disperses heat from another object. They're most commonly used in computers, but are also found in cell phones, DVD players and even refrigerators. In computers, a heat sink is an attachment for a chip that prevents the chip from overheating and, in modern computers, it's as important as any other component.

If you aren't very tech-savvy, think of the heat sink like a car radiator. The same way a radiator draws heat away from your car's engine, a heat sink draws heat away from your computer's central processing unit (CPU). The heat sink has a thermal conductor that carries heat away from the CPU into fins that provide a large surface area for the heat to dissipate throughout the rest of the computer, thus cooling both the heat sink and processor. Both a heat sink and a radiator require airflow and, therefore, both have fans built in.
Before the 1990s, heat sinks were usually only necessary in large computers where the heat from the processor was a problem. But with the introduction of faster processors, heat sinks became essential in almost every computer because they tended to overheat without the aid of a cooling mechanism.

Next, we'll take a look at some different types of heat sinks, as well as the scientific principles that explain how they work.




Thermal Conductivity 

Heat can be transferred in three different ways: convection, radiation and conduction. Conduction is the way heat is transferred in a solid, and therefore is the way it is transferred in a heat sink. Conduction occurs when two objects with different temperatures come into contact with one another. At the point where the two objects meet, the faster moving molecules of the warmer object crash into the slower moving molecules of the cooler object. When this happens, the faster moving molecules from the warmer object give energy to the slower moving molecules, which in turn heats the cooler object. This process, known as thermal conduction, is how heat sinks transfer heat away from the computer's processor.


Heat sinks are usually made of metal, which serves as the thermal conductor that carries heat away from the CPU. However, there are pros and cons to using every type of metal. First, each metal has a different level of thermal conductivity. The higher the thermal conductivity of the metal, the more efficient it is at transferring heat.
One of the most common metals used in heat sinks is aluminum. Aluminum has a thermal conductivity of 235 watts per meter-kelvin (W/mK); the higher a metal's thermal conductivity number, the more heat it can conduct. Aluminum is also cheap to produce and lightweight. When a heat sink is attached, its weight puts a certain level of stress on the motherboard, which the motherboard is designed to accommodate; aluminum's light weight keeps that added stress to a minimum.

One of the best and most common materials used to make heat sinks is copper. Copper has a very high thermal conductivity of 400 W/mK. It is, however, heavier than aluminum and more expensive. But for systems that require an extensive amount of heat dissipation, copper is frequently used.
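The effect of those conductivity figures can be seen with Fourier's law of steady-state heat conduction, Q = k × A × ΔT / d. The plate dimensions and temperatures below are invented for illustration; only the two conductivity values (235 and 400 W/mK) come from the text:

```python
# Steady-state heat conduction (Fourier's law): Q = k * A * dT / d.
# The geometry below is invented for illustration, not a real heat sink.
def conducted_heat_watts(k, area_m2, delta_t, thickness_m):
    """Heat flow in watts through a slab of thermal conductivity k (W/mK)."""
    return k * area_m2 * delta_t / thickness_m

# A 4 cm x 4 cm contact plate, 5 mm thick, with the CPU side 40 degrees
# Celsius hotter than the fin side.
area, thickness, delta_t = 0.04 * 0.04, 0.005, 40.0
aluminum = conducted_heat_watts(235, area, delta_t, thickness)  # ≈ 3008 W
copper = conducted_heat_watts(400, area, delta_t, thickness)    # ≈ 5120 W
print(f"copper moves {copper / aluminum:.2f}x the heat of aluminum")
```

The ratio of the two results is just 400/235, so a copper plate of the same shape moves about 1.7 times as much heat as an aluminum one for the same temperature difference.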

So where does the heat go once it's been conducted from the processor through the heat sink? A fan inside the computer moves air across the heat sink and out of the computer. Many computers also have an additional fan installed directly above the heat sink to help properly cool the processor. Heat sinks with their own dedicated fans are called active heat sinks, while those that rely on the computer's general airflow alone are called passive heat sinks. The most common fan is the case fan, which draws cool air from outside the computer, blows it through the computer, and expels the hot air out of the rear.

Saturday, 28 April 2018

What Is a Processor?

A processor, or microprocessor, is the 'brains' of a computer system. It is the processor that controls the working of all of the hardware and software.

The processor is sometimes referred to as the Central Processing Unit (CPU).

There are many processors available and processor specification is usually one of the first things considered when buying a new personal computer (PC). The type of processor and its speed have the greatest impact on the overall performance of a computer system. Processor performance is related directly to its speed of operation and its architecture.

Competition among processor manufacturers is fierce and because of this there is a wide and diverse choice of processors in the market place. Processor manufacturers, such as Intel and Advanced Micro Devices (AMD) are continually developing more advanced processors and new models are released within the space of months rather than years. This is in stark contrast to earlier processor developments, such as the 8086, 80286 and 80386 which were released years apart.


A processor, or "microprocessor," is a small chip that resides in computers and other electronic devices. Its basic job is to receive input and provide the appropriate output. While this may seem like a simple task, modern processors can handle trillions of calculations per second.

The central processor of a computer is also known as the CPU, or "central processing unit." This processor handles all the basic system instructions, such as processing mouse and keyboard input and running applications. Most desktop computers contain a CPU developed by either Intel or AMD, both of which use the x86 processor architecture. Mobile devices, such as laptops and tablets, may use Intel and AMD CPUs, but can also use specific mobile processors developed by companies like ARM or Apple.

Modern CPUs often include multiple processing cores, which work together to process instructions. While these "cores" are contained in one physical unit, they are actually individual processors. In fact, if you view your computer's performance with a system monitoring utility like Windows Task Manager (Windows) or Activity Monitor (Mac OS X), you will see separate graphs for each processor. Processors that include two cores are called dual-core processors, while those with four cores are called quad-core processors. Some high-end workstations contain multiple CPUs with multiple cores, allowing a single machine to have eight, twelve, or even more processing cores.
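If you'd rather not open a system monitor, you can check the core count on your own machine from Python's standard library (a small aside, not something from the original article):

```python
# Ask the operating system how many processor cores it can see.
# Note: this counts logical cores, so a chip with hyper-threading
# reports more cores than it physically has.
import os

logical_cores = os.cpu_count()
print(f"This machine reports {logical_cores} logical cores")
```

A dual-core chip without hyper-threading reports 2 here; the same chip with hyper-threading enabled reports 4, which matches the separate per-core graphs you see in Task Manager or Activity Monitor.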

Besides the central processing unit, most desktop and laptop computers also include a GPU (graphics processing unit). This processor is specifically designed for rendering graphics that are output on a monitor. Desktop computers often have a video card that contains the GPU, while mobile devices usually contain a graphics chip that is integrated into the motherboard. By using separate processors for system and graphics processing, computers are able to handle graphic-intensive applications more efficiently.

Difference Between Intel and AMD Processors


Key Difference: AMD and Intel are two different companies competing with each other to capture the semiconductor industry. The major difference between the two is price: while Intel charges premium prices for its products, AMD offers cheaper prices for the masses.

Anyone who has ever gone shopping for a computer or a laptop has heard the names AMD and Intel. Both of these companies are big players in computer microprocessor and semiconductor chip markets. These companies have been serving as the biggest competitors to each other and have even faced a few litigations. Though both the companies produce microprocessors, semiconductors and other computer hardware, they are different from each other in many ways.

AMD (Advanced Micro Devices) is an American multinational company that creates semiconductors, processors and other hardware for computers. It was founded in 1969 and has been in a constant battle for market share with Intel. Intel Corporation is also an American multinational company that produces semiconductors and other hardware for computing systems. It was founded in 1968, one year before AMD and has managed to capture approximately 70% of the market.

In its attempts to capture the market, Intel has often been involved in legal disputes, as it has been accused of employing a lot of ‘underhand’ techniques to maintain its lead. In 1975, AMD and Intel had a short partnership working on Intel’s 8080 microprocessor for IBM. However, that partnership quickly dissolved, with each company starting its own endeavors. While Intel’s objective is to provide the most technologically advanced systems, AMD offers customers value for their money. AMD processors are usually cheaper than Intel processors of the same architecture.

The main rivalry between the companies is often boiled down to motherboards and processors.
The primary difference between AMD and Intel motherboards is that each only accepts its own company's processors. Hence, an AMD motherboard will only work with an AMD processor, and likewise, an Intel motherboard will only work with an Intel processor, and not the other way around.
The reason for this is that each processor requires a different socket type. Intel motherboards have LGA 1156 and LGA 1366 sockets, while AMD motherboards have AM2 and AM3 sockets.

Because an AMD motherboard will only work with an AMD processor, and likewise for Intel, the sales and market share of Intel and AMD motherboards directly correspond to the sales and market share of their processors. The number of slots and the amount of RAM that a motherboard can accommodate depend mainly on its make and model, and this of course has an effect on pricing. Hence, a motherboard with more SATA ports and/or more RAM compatibility will cost more.

Pentium is the brand of processors that belong to Intel, while AMD sells the processors under the AMD name itself. The Pentium processor is a consumer-level product. It is placed higher than the low-end Atom and Celeron products but below the faster Core i3, i5, i7 and the Xeon processors.

Compared to the Intel Core line, Pentium has a lower clock frequency, a partially disabled L3 cache, and disabled hyper-threading and virtualization. One of the main differences between the two companies' processors is that Intel chips are often more expensive than their AMD equivalents. This generalization also applies to Pentium. The price difference is often driven by Intel, as it often holds the top spot when comparing the performance of microprocessors.

Also, Intel processors such as the Pentium have longer pipelines than AMD processors. This allows them to reach a much higher clock speed than could normally be achieved. However, AMD has found another way to compete with the increased clock speed: by changing how the CPU's cache memory is stored and accessed.

Intel Pentium processors store their memory in an L2 (level 2) cache. This is almost double the size of AMD Athlon processors’ cache. The L2 cache is a memory bank that stores and transmits data to the L1 (level 1) cache. The L1 (level 1) cache in turn, stores and transmits data to the processor itself. Hence, the larger the L2 cache, the faster the processing speed.

AMD Athlon processors have roughly half the L2 cache space of a Pentium processor; however, their L2 cache is integrated directly into the processor itself. This allows AMD Athlon processors to access their cache data much more quickly than Intel Pentium processors, providing a faster processing speed despite the smaller cache.
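This trade-off, a bigger but slower cache versus a smaller but faster one, can be made concrete with the textbook average-memory-access-time formula, AMAT = hit time + miss rate × miss penalty. The cycle counts below are invented to illustrate the idea and are not measured figures for any real Pentium or Athlon chip:

```python
# Average memory access time (AMAT) in cycles:
#   AMAT = hit_time + miss_rate * miss_penalty
# The numbers below are invented for illustration, not measured
# figures for any real Pentium or Athlon processor.
def amat(hit_time, miss_rate, miss_penalty):
    """Average cycles per memory access for a single cache level."""
    return hit_time + miss_rate * miss_penalty

# A larger but slower off-die L2 vs. a smaller but faster on-die L2,
# both backed by main memory costing 100 cycles on a miss:
large_slow = amat(hit_time=12, miss_rate=0.04, miss_penalty=100)  # 16.0
small_fast = amat(hit_time=6, miss_rate=0.08, miss_penalty=100)   # 14.0
print(large_slow, small_fast)
```

With these made-up numbers, the smaller on-die cache wins (14 cycles versus 16) because its lower hit time outweighs its higher miss rate, which is the same argument the paragraph above makes about the Athlon.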

So, even though AMD Athlon processors' clock rate and cache space are listed as less on paper, they provide performance comparable to Intel Pentium processors, all at a relatively lower price.
Both companies aim to come out with the next best thing and stay a step ahead of each other. Hence, their products are always closely related, with minor differences that each company thinks will make its product better. Due to this, the processors of the two companies are virtually the same.



Tuesday, 24 April 2018

GIGABYTE AORUS RGB Fusion & Liquid Cooling System

Aorus was described as a ‘new story of Gigabyte’, which is fitting given the product line-up’s growth from the laptop division into computer components via the launch of six Aorus motherboards for the Z270 chipset.
It won’t be long until Aorus’ eagle logo (jokingly described as the ‘bottle opener’ by one member of the press) extends to further components in Gigabyte’s arsenal, such as peripherals and graphics cards (there has already been a sneak peek of the Aorus GTX 1080 Ti).
Main talking points for Gigabyte’s new Aorus motherboards were: RGB Fusion, Smart Fan 5, liquid cooling support (on higher end models), and other features.
