Monday, July 9, 2007






AMD vs. Intel Processors
There now exist only two major brands of central processing units (CPUs): Intel and AMD. AMD, by the way, stands for Advanced Micro Devices. The difference between the two processor brands is sometimes hard to see, as each has its own product line, and each even gives vital statistics such as processor speed in different terms. It can be difficult simply to compare an AMD processor and an Intel processor. Typically, AMD has produced cheaper alternatives to the Intel processor line. But the big question is, has this come at a cost in performance? In the processor industry, just as in any other, it is likely that you get what you pay for. Nonetheless, here we take a look at some AMD and Intel products and try to figure out a comparative cost-to-performance ratio.



Pentium 4 Extreme Edition vs. AMD Athlon FX 64


To compare the companies as fairly as possible, let's take a look at two of the newest processors from each and compare the speed-to-cost ratio. The newest processor from AMD is the AMD Athlon FX 64-bit processor. AMD claims that this processor gives you "leading-edge performance and unparalleled technology with its simultaneous 32-bit and 64-bit computing." 64-bit processing is certainly a new development, and one that is perhaps long overdue. This chip is really something to behold, and it hints that AMD is going to be in this battle for quite some time. Some of the nice features of the FX include a 128-bit integrated DDR memory controller (up to 6.4 GB/sec of memory bandwidth for breakthrough performance and extraordinary cinematic computing experiences) and HyperTransport technology (a high-speed point-to-point interconnect, not to be confused with Intel's Hyper-Threading), which provides "increased bandwidth and reduced I/O bottlenecks for increased system performance and better multitasking." You can purchase one of these processors for about $800 or so, if you look hard enough. You will find that their core operating speeds top out at about 2.6 GHz.

Intel's latest chip is the Pentium 4 Extreme Edition. This processor is also very impressive, with about a 3.2 GHz core operating speed. Like the most recent P4s before it, the Pentium 4 Extreme Edition comes with Hyper-Threading technology. However, it also comes with a hefty 800 MHz system bus and its major distinction from previous versions: a 2 MB L3 cache, which is integrated into the chip and runs at the processor core clock speed. Another interesting architectural note is that the Pentium 4 Extreme Edition has 170 million transistors, compared with the 104 million found on the Athlon FX. This processor currently sells for about $1000 - a bit more expensive.
The cost-to-speed ratios for the Pentium 4 Extreme Edition and the AMD Athlon FX are actually very similar. Deciding which processor to spend your hard-earned money on depends more upon what you actually plan to use your computer for. After various trials, it was found that the Pentium 4 Extreme Edition does better - overall - with gaming programs like Doom 3 and AntiPlanet. However, if you aren't a big gamer, you could probably save yourself some money by going with the FX and not really see any difference in application performance. In fact, you may even find that some games work better with the Athlon FX. It really seems that AMD, with its new 64-bit processor, has put out some stiff competition for Intel and the Pentium line.
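
As a rough sanity check on that claim, here is a small Python sketch comparing dollars per GHz, using the approximate prices and peak clock speeds quoted above; clock speed is of course a crude proxy for real performance, so treat this as illustration only.

# Rough dollars-per-GHz comparison using the approximate figures quoted above.
cpus = [
    {"name": "AMD Athlon FX", "price_usd": 800, "clock_ghz": 2.6},
    {"name": "Pentium 4 Extreme Edition", "price_usd": 1000, "clock_ghz": 3.2},
]

for cpu in cpus:
    dollars_per_ghz = cpu["price_usd"] / cpu["clock_ghz"]
    print(f'{cpu["name"]}: ~${dollars_per_ghz:.0f} per GHz')

# Both work out to roughly $310 per GHz, which is why the ratios look so similar.
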
Comments
Processor speed: Intel Pentium Dual Core vs. AMD Turion?
I know the processor speed isn't the only thing to look at, but if they have the same memory and hard drive space, are both of these processors powerful? Does one outdo the other? I'm looking at the Intel Pentium Dual Core (T2060 or T2080) and the AMD Turion 64 X2 Dual-Core TL-50. I'm not a gamer, but I do multitask with many web and Word windows.
Victor C Lopez Jr.
Leah Mae Evan
Rhea Mae Sanguer
Warren Andojar
John Angelo Genzola
Jemmar Paron
Jayson Villasor
Cesar Ryan Bondoc

Monday, July 2, 2007

group 3 IT-213

The History of Intel


Intel was founded on July 18, 1968 with one main goal in mind: to make semiconductor memory more practical. Intel's first microprocessor, the 4004 microcomputer, was released at the end of 1971. The chip was smaller than a thumbnail, contained 2,300 transistors, and was capable of executing 60,000 operations in one second. Shortly after the release of the 4004, the 8008 microcomputer was released; it was capable of executing twice as many operations per second as the 4004. Intel's commitment to the microprocessor led to IBM's choice of Intel's 8088 chip for the CPU of its first PC. In 1982, Intel introduced the first 286 chip; it contained 134,000 transistors and provided around three times the performance of the other microprocessors of the time. In 1989 the 486 processor was released; it contained 1.2 million transistors and the first built-in math coprocessor. The chip was approximately 50 times faster than Intel's original 4004 processor and equaled the performance of a powerful mainframe computer. In 1993 Intel introduced the Pentium processor, which was five times as fast as the 486; it contained 3.1 million transistors and was capable of 90 million instructions per second (MIPS). In 1995 Intel introduced its new technology, MMX, which was designed to enhance the computer's multimedia performance. Throughout the years that followed, Intel released several lines of processors, including the Celeron, the P2, P3, and P4. Intel processors now reach speeds upwards of 2200 MHz, or 2.2 GHz.



Intel founder: Silicon Valley no longer unique
By Robert McMillan

The region that gave birth to such legendary high technology startups as Apple Computer Inc., Hewlett-Packard Co. and Cisco Systems Inc. may be seeing some of its influence wane, Gordon Moore, one of the founders of Intel Corp., said Wednesday.

Though Silicon Valley was once unparalleled as the natural home of high technology startups, things have changed in the nearly 40 years since Moore, along with Robert Noyce and Andy Grove, founded Intel. "Its uniqueness is not as great as it was in the beginning. Other areas have picked up on the technology," Moore said of the region. "Now it's spread around to a lot of other places."

China, for example, is fast rising as a technology player, he said. "We have very formidable competition in the world. I think the impact of China is just beginning to be felt," he said. "China is training 10 times as many engineers. ... Their technology is catching up fairly rapidly. It's a very entrepreneurial society."

Chief among the challenges ahead for Silicon Valley are the relative weakness of the U.S. public education system, which Moore characterized as a problem for the entire country, and the San Francisco Bay Area's notoriously high cost of living, both of which are making it harder to attract top workers. "It's so damned expensive, especially the housing. It's hard to move young people in."

The median price paid for a Bay Area home was US$534,000 in January, according to real estate research firm DataQuick Information Systems Inc.

But Moore did express a qualified faith in both the region and the country that had given birth to his company. "Silicon Valley is still a great place to start a company," he said. "I expect the U.S. will still be a successful player, but I don't think it will enjoy the position it's had in the past 20 years."

Moore's comments came Wednesday at a press event to honor the 40th anniversary of the April 1965 Electronics magazine article that first articulated Moore's famous law on the rate of growth in the chip industry. Originally a somewhat obscure prediction that the number of components on an integrated circuit would continue to double every year, Moore's Law has come to be regarded as an article of faith in an industry that has defined itself by rapid growth. In 1975, Moore updated his law to predict that components would double every two years.

Though he was at first embarrassed that his observation had become an industry rule -- "it was (in) a McGraw Hill publication that we described as one of the throwaway journals," he said Wednesday -- Moore eventually grew more comfortable with his status as a lawmaker. "Gradually, I got to accept it. It was shorthand for showing what the technology allowed you to do."

With the dimensions of chip components now being measured in atoms, it seems that the ability of engineers to keep doubling the number of transistors they put on chips may now be in jeopardy. But on Wednesday, Moore warned against writing off his famous maxim before its time. "I've never been able to see more than two or three (product) generations ahead without seeing something that appeared to be an impenetrable barrier there," he said.

For example, the 90 nanometer process technology commonly used by chipmakers today once seemed an impossibility, Moore said. "I remember the time that I thought 1 micron was probably going to be the limit," he said. "It wasn't a barrier at all." There are 1,000 nm in a micron, which represents one millionth of a meter.

Though Moore stopped short Wednesday of predicting that his law would hold for another 40 years, he pointed out that it has continually defied a more pessimistic maxim. "Moore's Law is a violation of Murphy's Law," he said. "Everything gets better as you make things smaller."



Moore's Law

The term Moore's Law was coined by Carver Mead around 1970.[4] Moore's original statement can be found in his publication "Cramming more components onto integrated circuits", Electronics magazine, 19 April 1965:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.[1]

Under the assumption that chip "complexity" is proportional to the number of transistors, regardless of what they do, the law has largely stood the test of time to date. However, one could argue that the per-transistor complexity is lower in large RAM cache arrays than in execution units. From this perspective, the validity of one formulation of Moore's Law may be more questionable.

Gordon Moore's observation was not named a "law" by Moore himself, but by the Caltech professor, VLSI pioneer, and entrepreneur Carver Mead.[2] Moore, indicating that it cannot be sustained indefinitely, has since observed "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens."[5]

Moore may have heard Douglas Engelbart, a co-inventor of today's mechanical computer mouse, discuss the projected downscaling of integrated circuit size in a 1960 lecture.[6] In 1975, Moore projected a doubling only every two years. He is adamant that he himself never said "every 18 months", but that is how it has been quoted. The SEMATECH roadmap follows a 24 month cycle.

In April 2005, Intel offered $10,000 to purchase a copy of the original Electronics Magazine.[7]

Understanding Moore's Law

Moore's Law is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is lowest.[1] As more transistors are made on a chip, the cost to make each transistor falls, but the chance that the chip will not work due to a defect rises. If the rising cost of discarded non-working chips is balanced against the falling cost per transistor of larger chips, then, as Moore observed in 1965, there is a number of transistors, or complexity, at which "a minimum cost" is achieved. He further observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".[1]
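
To make the idea concrete, here is a toy Python sketch of that cost minimum; every constant in it (packaging cost, silicon cost per transistor, defect rate) is an invented illustrative assumption, and the simple exponential yield model is a simplification rather than Moore's actual data.

import math

PACKAGE_COST = 2.00      # assumed fixed cost per packaged chip, in dollars
SILICON_COST = 0.00005   # assumed silicon cost per transistor, in dollars
DEFECT_RATE = 0.00002    # assumed chance that any one transistor site is defective

def cost_per_good_transistor(transistors):
    # Simple exponential yield model: bigger chips are more likely to contain a defect.
    yield_fraction = math.exp(-DEFECT_RATE * transistors)
    chip_cost = PACKAGE_COST + SILICON_COST * transistors
    return chip_cost / (transistors * yield_fraction)

# Scan a range of integration levels and report the cheapest complexity.
candidates = [2 ** k for k in range(8, 20)]   # 256 to ~524,000 transistors per chip
for n in candidates:
    print(f"{n:>8} transistors -> ${cost_per_good_transistor(n):.6f} per working transistor")
best = min(candidates, key=cost_per_good_transistor)
print(f"Minimum-cost complexity in this toy model: about {best} transistors per chip")

The cost per working transistor falls at first (the fixed per-chip cost is spread over more components) and then rises again as yield losses dominate, which is exactly the minimum Moore described.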

Formulations of Moore's Law

PC hard disk capacity (in GB). The plot is logarithmic, so the fit line corresponds to exponential growth.

The most popular formulation is that the number of transistors on integrated circuits doubles every 18 months. At the end of the 1970s, Moore's Law became known as the limit for the number of transistors on the most complex chips. However, it is also common to cite Moore's Law to refer to the rapidly continuing advance in computing power per unit cost, because an increase in transistor count is also a rough measure of computer processing power. On this basis, the power of computers per unit cost - or more colloquially, "bangs per buck" - doubles every 24 months (or, equivalently, increases 32-fold in 10 years).
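
For a feel of what a 24-month doubling period implies, here is a minimal Python sketch; the 1971 starting point of 2,300 transistors on the 4004 comes from the history section above, and the projected figures are purely illustrative.

def projected_transistors(start_count, start_year, year, doubling_period_years=2.0):
    # Number of doublings elapsed between start_year and year.
    doublings = (year - start_year) / doubling_period_years
    return start_count * 2 ** doublings

# Starting from the 4004's 2,300 transistors in 1971 (see the history section above):
for year in (1971, 1981, 1991, 2001):
    print(year, round(projected_transistors(2300, 1971, year)))

# A 24-month doubling period also gives 2 ** (10 / 2) = 32, i.e. the
# 32-fold increase per decade mentioned above.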

A similar law (sometimes called Kryder's Law) has held for hard disk storage cost per unit of information.[8] The rate of progression in disk storage over the past decades has actually sped up more than once, corresponding to the utilization of error correcting codes, the magnetoresistive effect and the giant magnetoresistive effect. The current rate of increase in hard drive capacity is roughly similar to the rate of increase in transistor count. However, recent trends show that this rate is dropping, and has not been met for the last three years. See Hard disk capacity.

Another version states that RAM storage capacity increases at the same rate as processing power.

Pixels per dollar based on Australian recommended retail price of Kodak digital cameras.

Similarly, Barry Hendy of Kodak Australia has plotted the "pixels per dollar" as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera price and resolution.

Due to the mathematical power of exponential growth (similar to the financial power of compound interest), seemingly minor fluctuations in the relative growth rates of CPU performance, RAM capacity, and disk space per dollar have caused the relative costs of these three fundamental computing resources to shift markedly over the years, which in turn has caused significant changes in programming styles. For many programming problems, the developer has to decide on numerous time-space tradeoffs, and throughout the history of computing these choices have been strongly influenced by the shifting relative costs of CPU cycles versus storage space.



Amdahl's law


The speedup of a program using multiple processors in parallel computing is limited by the sequential fraction of the program. For example, if half of the program is sequential, the theoretical maximum speedup using parallel computing is 2, no matter how many processors are used, since 1/(0.5 + (1 - 0.5)/N) approaches 2 as N becomes very large.

Amdahl's law, named after computer architect Gene Amdahl, is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.

The generalized Amdahl's law is:

\frac{1}{\sum_{k=1}^{n} \frac{P_k}{S_k}}

where

  • P_k is the percentage of the instructions that can be improved (or slowed),
  • S_k is the speed-up multiplier (where 1 means no speed-up and no slowing),
  • k is a label for each different percentage and speed-up, and
  • n is the number of different speed-ups/slow-downs resulting from the system change.

Description

Amdahl's law is a formula that computes the expected speedup of parallelized implementations of an algorithm relative to the non-parallelized algorithm. For example, if a parallelized implementation of an algorithm can run 12% of the algorithm's operations arbitrarily fast (while the remaining 88% of the operations are not parallelizable), Amdahl's law states that the maximum speedup of the parallelized version is \frac{1}{1 - 0.12} = 1.136 times faster than the non-parallelized implementation.

More technically, the law is concerned with the speedup achievable from an improvement to a computation that affects a proportion P of that computation where the improvement has a speedup of S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the portion affected twice as fast, S will be 2). Amdahl's law states that the overall speedup of applying the improvement will be

\frac{1}{(1 - P) + \frac{P}{S}}.

To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes (which is 1 − P) plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the improved part's former running time divided by the speedup, making the length of time of the improved part P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.
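
Here is a minimal Python sketch of that formula; the function name amdahl_speedup is our own, not part of any standard library.

def amdahl_speedup(p, s):
    # Overall speedup when a proportion p of the work is sped up by a factor s:
    # 1 / ((1 - p) + p / s)
    return 1.0 / ((1.0 - p) + p / s)

# The 12% example above, where the parallelizable part runs "arbitrarily fast":
print(amdahl_speedup(0.12, float("inf")))   # about 1.136

# The 30% / 2x example above:
print(amdahl_speedup(0.30, 2.0))            # about 1.176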

Here's another example. We are given a task which is split up into four parts: P1 = 0.11 (11%), P2 = 0.18 (18%), P3 = 0.23 (23%), and P4 = 0.48 (48%), which add up to 100%. Then we say P1 is not sped up, so S1 = 1 (100%); P2 is sped up 5x, so S2 = 5 (500%); P3 is sped up 20x, so S3 = 20 (2000%); and P4 is sped up 1.6x, so S4 = 1.6 (160%). Using the formula \frac{P1}{S1} + \frac{P2}{S2} + \frac{P3}{S3} + \frac{P4}{S4} for the new running time, we find \frac{0.11}{1} + \frac{0.18}{5} + \frac{0.23}{20} + \frac{0.48}{1.6} = 0.4575, a little less than half the original running time, which we know is 1. Therefore the overall speed boost is \frac{1}{0.4575} = 2.186, or a little more than double the original speed, using the formula \frac{1}{\frac{P1}{S1} + \frac{P2}{S2} + \frac{P3}{S3} + \frac{P4}{S4}}. Notice how the 20x and 5x speedups don't have much effect on the overall speed boost and running time when over half of the task is only sped up 1x (i.e. not sped up) or 1.6x.
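
The same arithmetic can be written as a short Python sketch of the generalized formula; the helper name generalized_amdahl is our own.

def generalized_amdahl(parts):
    # parts: list of (fraction, speedup) pairs whose fractions sum to 1.
    new_running_time = sum(p / s for p, s in parts)
    return 1.0 / new_running_time

parts = [(0.11, 1), (0.18, 5), (0.23, 20), (0.48, 1.6)]
print(sum(p / s for p, s in parts))   # new running time, about 0.4575
print(generalized_amdahl(parts))      # overall speedup, about 2.186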

Victor Lopez Jr.

Jayson Villasor

Rhea Mae Sanguer

Leah Mae Evan

Warren Andojar

Liwayway Gerolaga

Cesar Ryan Bondoc

Karlmax Pacifico

John Angelo Genzola