Well, Charlie from SemiAccurate has a new piece; granted, it's from yesterday, but I've only just read it,
and, as you can guess, the forecasts are once again not good for Fermi. Here are some of the more interesting claims (a short excerpt from the piece; for the full article click the "tekst" link):
...
"For reasons tied to the power consumption and weak transistors, Fermi GF100 simply will not run at high clocks. Last March, sources told SemiAccurate that the intended clock frequencies were 750MHz for the 'low' clock and 1500MHz for the high clock. Since you can only pull off so many miracles with a voltage bump, we hear the A3 production silicon has a top bin of 600/1200MHz, and that is after an average of two shader clusters are turned off."
Nvidia was claiming 60 percent more performance than Cypress last fall. That quickly dropped to 40 percent better, and at CES, Nvidia could only show very select snippets of games and benchmarks that were picked to show off its architectural strengths. Those maxed out at about 60 percent better than Cypress, so consider them a best case.
If that 60 percent was from a fully working 512 shader, 750/1500MHz Fermi GF100, likely the case at 280W power draw, then a 448 shader 600/1200MHz GPU would have 87.5 percent of the shaders and 80 percent of the clock. 1.60 * 0.875 * 0.8 = 1.12, or 112 percent of the performance of ATI's Cypress. This is well within range of a mildly updated and refreshed Cypress chip. Don't forget that ATI has a dual Cypress board on the market, something that Fermi GF100 can't hope to touch for performance.
Fermi GF100 is about 60 percent larger than Cypress, meaning at a minimum that it costs Nvidia at least 60 percent more to make, realistically closer to three times. Nvidia needs to have a commanding performance lead over ATI in order to set prices at the point where it can make money on the chip even if yields are not taken into account. ATI has set the upper pricing bound with its dual Cypress board called Hemlock HD5970.
Rumors abound that Nvidia will only have 5,000 to 8,000 Fermi GF100s, branded GTX480 in the first run of cards. The number SemiAccurate has heard directly is a less specific 'under 10,000'. There will have been about two months of production by the time those launch in late March, and Nvidia bought 9,000 risk wafers late last year. Presumably those will be used for the first run. With 104 die candidates per wafer, 9,000 wafers means 936K chips.
Even if Nvidia beats the initial production targets by ten times, its yields are still in the single digit range. At $5,000 per wafer, 10 good dies per wafer, with good being a very relative term, that puts cost at around $500 per chip, over ten times ATI's cost. The BoM cost for a GTX480 is more than the retail price of an ATI HD5970, a card that will slap it silly in the benchmarks. At these prices, even the workstation and compute cards start to have their margins squeezed."
...
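Charlie's arithmetic is easy to check. The little sketch below just plugs in the rumored numbers from the quoted excerpt (clocks, shader count, wafer price, yield; none of them confirmed) and reproduces his performance and cost estimates:

```python
# Sanity check of the figures quoted above -- all inputs are the rumored
# numbers from the SemiAccurate excerpt, not confirmed specs.

# Performance: a full 512-shader GF100 at 750/1500 MHz is the claimed
# 60 percent best case over Cypress; the rumored shipping part has
# 448 shaders at 600/1200 MHz.
full_part_vs_cypress = 1.60
shader_fraction = 448 / 512        # 87.5 percent of the shaders
clock_fraction = 1200 / 1500       # 80 percent of the intended hot clock

cut_down_vs_cypress = full_part_vs_cypress * shader_fraction * clock_fraction
print(f"cut-down GF100 vs. Cypress: {cut_down_vs_cypress:.0%}")  # ~112%

# First-run volume: 9,000 risk wafers with 104 die candidates each.
wafers = 9_000
die_candidates_per_wafer = 104
print(f"die candidates in the first run: {wafers * die_candidates_per_wafer:,}")  # 936,000

# Cost per chip under the rumored yield: $5,000 per wafer, ~10 good dies.
wafer_cost_usd = 5_000
good_dies_per_wafer = 10
print(f"cost per good die: ${wafer_cost_usd / good_dies_per_wafer:,.0f}")  # ~$500
```

Nothing here adds anything new; it just shows that the 112 percent and the roughly $500 per chip follow directly from the inputs Charlie assumed, so the whole argument stands or falls with those rumored figures.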
Again, I don't know how much truth there is in this; we all know Charlie likes to make interesting predictions, and unfortunately for us and for nVidia they have often turned out to be accurate.
Read the whole article and draw your own conclusions. Once again, nothing good comes out of this for nVidia, and therefore for us either, because in that case nothing significant would change in terms of prices or the balance of power on the market. Fermi-based cards could hardly be overclocked at all, since they would already be running at the limit of their capabilities out of the box; they would be expensive and, in the end, weaker than the ATI beasts.
We'll see it all in a month; it wouldn't be good if this turned out to be the real state of affairs.
Well, that's what I've been saying for a week now, but hardly anyone gets it and most people just keep repeating their own line. I haven't read this article yet, but I've been on AnandTech for years and those guys are very reliable, whereas I have no experience with Charlie.
Why nvidia is in this much trouble can't be explained in two or three sentences, but I'll try. There's a big difference in strategy between ATI (now AMD) and nvidia: while nvidia stubbornly keeps pushing its chips to be bigger and bigger, ATI correctly anticipated the realistic capabilities of 40 nm wafers, gave up on the bigger-is-better race, and back in 2005 started planning Evergreen, which, unlike what actually shipped, was supposed to be 30% larger, yet would still have been about 30% smaller than Fermi. Draw your own conclusions about what to expect from nvidia.