
VGA Guide

author: westfale / posted on december 17th, 2004 / last updated on december 26th, 2004
INNER WORKINGS EXPLAINED:
graphics chip
DirectX/OpenGL
memory
AGP
PCIe
Drivers

THE MANUFACTURERS:
ATI
MATROX
NVIDIA
S3
XGI

i often hear people in games asking which graphics card they should buy. these conversations then often degenerate into flamewars between ATI and nvidia fanboys/fangirls. so i decided to write this little piece to help those who have a real life and thus cannot spend all their time reading reviews on ars technica like me. NO, I DO NOT live in my parents' basement. i live in MY basement :).

at any time there are about a dozen different cards on the market, and choosing between them can be a daunting task. so let's start by discussing what the graphics chip actually does: it calculates the geometry and the lighting in the program you are running, then renders this into the final image that gets displayed on screen.


so there are basically two parts to a graphics chip: the geometry engine (also called the "vertex shader") and the raster engine (a.k.a. "render engine" or "pixel shader"; "pixel" stands for "picture element"). older chips contained only a raster engine and let the CPU handle the geometry (which was called "t&l", or "transform and lighting", before marketing people created the whole "shader" buzzword). all currently available graphics chips have both geometry and raster engines.
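
to make that split a bit more concrete, here is a tiny C sketch of the kind of maths each half does. none of this is real driver or API code and the names are just made up for illustration: the geometry engine transforms each vertex by a 4x4 matrix, and the raster engine then works out a colour for every pixel the resulting triangle covers (here simply a base colour scaled by a lighting factor).

#include <stdio.h>

/* roughly what the geometry engine ("vertex shader") does:
   transform a vertex position by a 4x4 matrix (illustrative only) */
void transform_vertex(float m[4][4], float in[4], float out[4])
{
    for (int row = 0; row < 4; row++) {
        out[row] = 0.0f;
        for (int col = 0; col < 4; col++)
            out[row] += m[row][col] * in[col];
    }
}

/* roughly what the raster engine ("pixel shader") does:
   compute a colour for one pixel, here base colour times a light factor */
void shade_pixel(float base[3], float light, float out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = base[i] * light;
}

int main(void)
{
    float identity[4][4] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}};
    float vertex[4] = {1.0f, 2.0f, 3.0f, 1.0f}, transformed[4];
    float red[3] = {1.0f, 0.0f, 0.0f}, lit[3];

    transform_vertex(identity, vertex, transformed);
    shade_pixel(red, 0.5f, lit);
    printf("vertex: %.1f %.1f %.1f   pixel: %.2f %.2f %.2f\n",
           transformed[0], transformed[1], transformed[2], lit[0], lit[1], lit[2]);
    return 0;
}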

there are at present 5 major manufacturers of graphics chips in the world, and these companies sell the chips to the manufacturers of the actual graphics cards. the cards contain the chip, memory modules, and, if needed, TV encoders or video inputs (so you can record video to your PC from, say, a camcorder or a VCR). the memory stores the next frame to be rendered (the frame buffer), the textures (the texture buffer, duh), and the z-buffer, which contains info on the depth of a scene (x and y data are for width and height, z for depth). the chips themselves contain the RAMDACs (random access memory digital-to-analog converters), which turn the image created by the chip into something your monitor can understand.
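
to illustrate what the z-buffer is actually for, here's a minimal C sketch of a per-pixel depth test (made-up names, real hardware obviously doesn't run plain C): a pixel only makes it into the frame buffer if it is closer to the viewer than whatever was already drawn at that spot.

#include <stdbool.h>

#define WIDTH  640
#define HEIGHT 480

static float        zbuffer[HEIGHT][WIDTH];     /* depth of the closest thing drawn so far */
static unsigned int framebuffer[HEIGHT][WIDTH]; /* the colours that end up on screen */

/* draw one pixel, but only if it is nearer than what's already there
   (smaller z = closer to the viewer in this sketch) */
bool draw_pixel(int x, int y, float z, unsigned int colour)
{
    if (z >= zbuffer[y][x])
        return false;              /* hidden behind something already drawn */
    zbuffer[y][x]     = z;
    framebuffer[y][x] = colour;
    return true;
}

int main(void)
{
    /* start with the whole z-buffer at "as far away as possible" */
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            zbuffer[y][x] = 1.0f;              /* 1.0 = the far plane */

    draw_pixel(100, 100, 0.8f, 0x0000FF);      /* blue pixel, fairly far away */
    draw_pixel(100, 100, 0.3f, 0xFF0000);      /* red pixel in front of it -- it wins */
    return 0;
}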

the memory can be connected to the chip through a 64-bit, 128-bit, or 256-bit data path. this memory is usually of the DDR variety, which means that it sends two pieces of data per clock cycle rather than one. given that 8 bits equal one byte, you can calculate that a 200MHz DDR memory module (effective data transfer rate of 400MHz) would give the graphics chip a memory bandwidth of 3.2 gigabytes per second on a 64-bit bus, 6.4GB/s on a 128-bit bus, and 12.8GB/s on a 256-bitter. the more bandwidth the better, of course, as it makes it easier (faster) for the chip to access the data it needs to work with.
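
if you want to check those numbers yourself, here's a small C sketch of the calculation (the helper name is just made up, but the figures are the ones from the paragraph above): bandwidth = clock rate x 2 (for DDR) x bus width in bytes.

#include <stdio.h>

/* memory bandwidth in GB/s: clock in MHz, times 2 for DDR, times the bus
   width in bytes (bits / 8), divided by 1000 (MB -> GB, the way card
   makers count it) */
double bandwidth_gb_per_s(double clock_mhz, int bus_width_bits)
{
    return clock_mhz * 2.0 * (bus_width_bits / 8.0) / 1000.0;
}

int main(void)
{
    int widths[] = { 64, 128, 256 };
    for (int i = 0; i < 3; i++)
        printf("200MHz DDR on a %3d-bit bus: %.1f GB/s\n",
               widths[i], bandwidth_gb_per_s(200.0, widths[i]));
    return 0;   /* prints 3.2, 6.4 and 12.8 GB/s -- the same numbers as in the text */
}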

then of course, there are the special effects that the chip can calculate. in the old days when man had yet to discover fire (circa 1994), games had to be programmed differently for each specific graphics chip. in my old issues of PCGamer, graphics cards are actually advertised with game compatibility lists rather than speed or price. then came the introduction of openGL and, a while later, directX. these APIs basically act as "reference books", for lack of a better term: they define which effects you can use as a programmer, and how these effects are implemented. as a game developer you then program for openGL (quake runs on openGL) or directX (unreal, planetside). each new version of these "reference books" contains a larger vocabulary than the preceding one, giving developers ever more stuff to play with. all the graphics chip companies have to do is build a chip that accelerates graphics according to the openGL or directX standards, so if you have a directX compatible graphics chip it will play EVERY directX game out there.
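
as a small example of the "reference book" idea, here's a minimal C sketch that asks the driver which openGL implementation it provides. note the big assumption: an openGL context must already be current (created by SDL, GLUT or whatever your game uses), otherwise these calls just return NULL.

#include <stdio.h>
#include <GL/gl.h>   /* on windows, include <windows.h> before this */

int main(void)
{
    /* ASSUMES a current openGL context already exists -- without one,
       glGetString() returns NULL and there is nothing to report */
    const GLubyte *vendor   = glGetString(GL_VENDOR);
    const GLubyte *renderer = glGetString(GL_RENDERER);
    const GLubyte *version  = glGetString(GL_VERSION);

    if (version == NULL) {
        printf("no openGL context is current -- create one first\n");
        return 1;
    }
    printf("vendor:   %s\n", vendor);
    printf("renderer: %s\n", renderer);
    printf("version:  %s\n", version);
    return 0;
}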

a directX 7 chip, like the original radeon, is compatible with directX 7 effects only; you can run DX8 games on the card, but they will render without the eyecandy that a DX8 card (like the geforce3) would give you. morrowind is a good example of this: with hardware DX8 support you get realistic-looking water, without it you get, uh, grey soup. the original radeon simply does not understand the DX8 vocabulary, only the DX7 it was designed for. think of it like ordering pizza with extra pepperoni: the radeon knows what pizza is (DX7), but you need a newer card to understand the pepperoni (DX8). hope i made this clear enough.
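
and here, in code form, is the pizza-and-pepperoni logic. to be clear, these are NOT real directX calls, just a made-up C sketch of the decision a DX8 game makes at startup: ask the card what shader version it supports, and only switch on the fancy water if it speaks DX8.

#include <stdio.h>

/* purely illustrative -- not a real directX structure or function */
struct card_caps {
    int pixel_shader_major;   /* 0 = DX7 class (no pixel shaders), 1 = DX8 class, 2 = DX9 class */
};

void pick_water_effect(const struct card_caps *caps)
{
    if (caps->pixel_shader_major >= 1)
        printf("pixel shaders available: enabling the pretty DX8 water\n");
    else
        printf("DX7-class card: falling back to the grey soup\n");
}

int main(void)
{
    struct card_caps original_radeon = { 0 };   /* DX7 part, no pixel shaders */
    struct card_caps geforce3        = { 1 };   /* DX8 part, pixel shader 1.x */

    pick_water_effect(&original_radeon);
    pick_water_effect(&geforce3);
    return 0;
}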

there is also the question of how much memory your card should have: 128MB or 256MB? there are presently few games that fill 128MB at resolutions like 1024x768, so more RAM does not necessarily mean better performance. however, a card with more RAM will be more future-proof. if given the choice between a geforce FX 5600 256MB and a geforce FX 5600 ULTRA 128MB, the ultra will always be faster because its processor (and its RAM) runs at higher clockspeeds. so more RAM can be a good thing, but it will not make a slow chip perform better. it simply gives the chip access to more stuff directly on the card instead of having to store it in slower main memory.

AGP: all data is transferred to the graphics card through the AGP bus. the faster the bus, the more data can be transferred in a given amount of time. today, motherboards and graphics cards usually support the AGP 4x or AGP 8x standards. current games, however, do not fully utilise the bandwidth of AGP 4x, much less 8x, so there is no appreciable performance difference between the two modes on identical graphics cards. of course, on a card with less onboard memory (like 64MB), the faster AGP rate makes transferring textures from main memory to graphics memory that much faster. once again, this is not necessary on 128 or 256MB cards, as these have enough onboard RAM not to be affected very much by the AGP speed.
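
for reference, here's the back-of-the-envelope maths behind those AGP modes as a small C sketch (these figures aren't in the text above, they're the standard AGP numbers): AGP is a 32-bit bus with a 66MHz base clock, and the 2x/4x/8x modes simply transfer 2, 4 or 8 times per clock.

#include <stdio.h>

/* AGP is a 32-bit (4-byte) bus with a 66.66MHz base clock; the "x" rating
   is how many transfers happen per clock tick */
double agp_mb_per_s(int multiplier)
{
    return 66.66 * 4.0 * multiplier;   /* MHz * bytes per transfer * transfers per clock */
}

int main(void)
{
    int modes[] = { 1, 2, 4, 8 };
    for (int i = 0; i < 4; i++)
        printf("AGP %dx: roughly %4.0f MB/s\n", modes[i], agp_mb_per_s(modes[i]));
    return 0;   /* AGP 1x/2x/4x/8x: roughly 0.27, 0.53, 1.07 and 2.13 GB/s */
}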

PCIe: PCI express (not to be confused with PCI-X) is the replacement for the AGP/PCI slots we have now on our motherboards. this is fairly new technology that won't be widely available for a few more months. the slots for graphics cards are referred to as x16 slots, while slots for other cards (like soundcards) are wired with fewer lanes (x1 or x4).
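
and here's the same sort of maths for first-generation PCI express, again a sketch using the standard published figures rather than anything from the text above: each lane runs at 2.5 gigabits per second, of which 8 out of every 10 bits are actual data, so one lane moves about 250MB/s in each direction at the same time.

#include <stdio.h>

/* first-generation PCI express: 2.5 gigabits/s per lane, 8b/10b encoding
   means only 8 of every 10 bits are payload, and unlike AGP this figure
   applies in each direction simultaneously */
double pcie_mb_per_s(int lanes)
{
    double usable_mbit_per_lane = 2500.0 * 8.0 / 10.0;   /* megabits per second */
    return lanes * usable_mbit_per_lane / 8.0;           /* bits -> bytes */
}

int main(void)
{
    int widths[] = { 1, 4, 16 };
    for (int i = 0; i < 3; i++)
        printf("PCIe x%-2d: about %4.0f MB/s each way\n",
               widths[i], pcie_mb_per_s(widths[i]));
    return 0;   /* about 250 MB/s per lane, so about 4000 MB/s for an x16 graphics slot */
}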

Drivers are very important, as they translate a game's commands into something your graphics card understands and then executes. a new driver often solves small graphics anomalies by giving the card different instructions than the old driver did, and newer drivers can sometimes improve performance as well. you can click on the names of the chip manufacturers below to go directly to their websites, where you can download the latest drivers for your card.

now let's take a look at the actual graphics chips & cards currently available for sale:

ATI:

9000/9100/9200 - based on the old 8500 core. feature compatible with DX8 (most recent version of DX is version 9). slow. available with 128-bit and even slower 64-bit memory busses. too slow to play most current games at acceptable framerates.

9550 - basically a 9600 chip, but at much lower clockspeeds (250MHz core/200MHz RAM, effective 400MHz DDR). very slow and not really suited to any gamer, although if you don't play games much and need a decent card for a media center it'll do. there are both 128-bit and 64-bit memory bus versions out there, with the 64-bitter naturally being even slower.

9600 - compatible with DX9, essentially half a radeon 9800, with 4 rendering pipes instead of 8 and 2 vertex shaders instead of 4. exists in 64-bit and 128-bit versions. the SE is the slowest version (64-bit). the non-PRO is also not the fastest card out there, being handicapped by slow (but cheap) memory. the PRO and XT are fast enough for most current games, and are a pretty decent value. they have a reputation for overclocking very well.

9800 - ATi's fastest card until the arrival of the X800 series. 8 pipes, 4 vertex shaders, 256-bit RAM. the non-PRO, PRO and XT are all very fast. the SE has 4 pipes deactivated and comes in 128 and 256-bit versions. the SE also has a lower clockspeed than the 9600PRO, so a 128-bit 9800SE is actually slower overall than a 9600PRO, but more expensive.

X600 - the replacement for the 9600, but according to ATI it will only be available in a PCIe version. the chip itself is really just a 9600, but with an on-die PCIe controller. there will be several different versions (from slowest to fastest: SE, regular, PRO, and XT). the XT has a 500MHz core clock (same as the 9600XT), but it uses faster memory than the 9600. if you have a 9600 now and need to buy a PCIe card, the X600 is not really much of an upgrade. better to save your money and get something a bit more robust.

X700 - the X700 series is replacing the X600 cards. the top version features 8 rendering pipes and 6 vertex shaders, but only a 128-bit memory bus. performance is a huge step up from the X600, but availability is sketchy at best. there have been rumours that the high-end card will be replaced by a low-end version of the X800. the X800 uses a 256-bit memory bus, so it might be cheaper for manufacturers to make a 256-bitter with slow RAM (like a low-end X800) than the X700XT with its high speed (and thus expensive) 128-bit RAM.

X800 - ATI's replacement for the 9800 series. it is based on the same basic technology as the 9800, but has 16 rendering pipes instead of 8 and 6 vertex shaders instead of 4. clockspeeds vary from version to version, but all are faster than any 9800 series card. the lower-end versions have only 12 pixel pipes activated, and there is talk of an 8-pipe version for OEM customers (large computer companies like dell). the 8-piper should still be somewhat faster than a radeon 9800PRO (due to higher clockspeeds), but if you have a 9800XT now an 8-pipe X800 is not much (if any) of an improvement. the 12-pipe PRO version is a lot faster than the old 9800XT though, and the 16-pipe XT and XT platinum edition are sometimes as much as twice as fast as the leaders of the last generation.

X850 - the X850 is a slightly enhanced X800. it is enhanced ONLY through a new manufacturing process, NOT through architectural changes. the new, more efficient process will increase profits for ATI, but it also allows the canadians to crank up the clockspeed a little for some versions of the card. if you already have an X800 or GF6800, it's not worth upgrading to an X850 (the speed boost is really minor).

MATROX:

parhelia and P650/750 chips. the parhelia is the top of the line matrox card but offers only as much speed as the old radeon 9200. the P650/750 are half a full parhelia: half as many rendering pipes and 128-bit RAM. even slower, naturally. these are quite expensive though, because they are aimed at the workstation market where image quality is more important than speed. the 750 and parhelia have triple monitor support which works in planetside and quake-based games (as well as MS flightsimulator), albeit slowly. matrox claims partial DX9 compatibility for these cards, but matrox's drivers only work in pure DX8 mode. there is now a PCI Express version of the parhelia available, but unfortunately the PCIe connectivity seems to be about the only thing that's new about it; the architecture and clockspeeds haven't changed at all. i'm guessing that P650/750 PCIe cards won't be far behind.

NVIDIA:

5200/ultra - slowest version of nvidia's DX9 family. performance comparable to ATI's 9000/9100/9200, but fully DX9 compatible and thus a better choice. non-ultra VERY slow. exists in 128-bit and even slower 64-bit versions, usually referred to as "SE" models.

5500 - this is a slower clocked version of the 5600. it's pretty slow, especially in its 64-bit memory version. best to avoid this.

5600/ultra - nvidia's midrange card, beaten solidly by ATI's offerings. no longer being made, but still available at a lot of places. avoid.

5700/ultra - the replacement for the underperforming 5600. based on a modified 5900 core rather than the weaker 5800. about even with ATI's 9600 series in most benchmarks. once again, beware of the 64-bit SE/LE/XT models; they're not all that much cheaper than a regular 5700 but don't perform as well.

5900/ultra - nvidia's top of the line until the arrival of the 6800 series. the ultra is now called "5950" after a minor speed boost. about equal to ATI's 9800 series in most games, but ATI retains a lead in next generation shader performance. still, not exactly a bad choice.

5900XT - unlike ATI, which calls its fastest cards "XT", nvidia calls slower versions "XT". the 5900XT uses the same core as the regular 5900, but at 390MHz rather than 400. memory drops from 850MHz DDR to 700. sells for only a few dollars more than the 5700ultra and 9600XT, but beats both into a bloody pulp. the best 200 dollar card currently available, by far - and it even overclocks very well. the spiritual successor to the venerable Ti4200, if you will.

6600 - nvidia's answer to the radeon X700. performs quite well (about on the same level as last generation's high-end cards), but like its ATI counterpart, the high-end version suffers from expensive memory. see, faster RAM costs a LOT more money than slow RAM, so a 6600GT with its 1GHz 128-bit RAM is quite expensive to make. on the other hand, the lower-end models of the 6800 range (the LE and standard cards) all have a 256-bit bus and therefore do not need to run such high RAM clocks to get a lot of bandwidth. thus, a 6800LE can be had for about the same money as a 6600GT. performance-wise there is no clear winner at lower resolutions, but at higher resolutions (or higher AA/AF settings) the 6800's extra bandwidth usually decides things in its favour.

6800 - the replacement for the 5900/5950 series. no idea why they call it geforce6 though - if you count all major architectures in the GF line (NV10/geforce1, NV20/geforce3, NV30/geforceFX, and NV40/geforce 6800) the 6800 is the fourth one; and if you count architectures and their revisions (NV10, NV15/geforce2, NV20, NV25/geforce4, NV30, NV35/FX5700 and 5900, NV40) you end up with the 6800 being number 7. perhaps NV wanted to avoid the 7000s, as ATI used to sell a 7000 line a while ago. whatever the case may be, the 6800 is a major step forward for NV after the debacle that was NV30/35. like its ATI counterpart, the 6800 is available in 12 and 16 pipeline versions (the regular 6800 has 12 pipes; the GT, ultra, and ultra extreme have 16), and there is an 8-pipe version for the dells of this world (although i'm sure someone will start selling these to consumers in a few months, mark my words). in terms of pure performance, the ATI cards are generally somewhat faster than the NV cards, although the margin is very small. in certain tests the NV cards beat the ATIs by some margin, but all in all it's too close to call. nvidia's big selling point here is full support for shader model 3.0 (the geforce 5900 and ATI 9800/X800 use shader model 2.0), which some developers incorporate into their games. SM3.0 is a smaller step up from 2.0 than 2.0 was from 1.x, but an advantage IS an advantage, no matter how small, and NV will exploit that by paying developers (through their "the way it's meant to be played" program) to incorporate SM3.0 support into their games, and then advertising their cards as the only ones "fully" compatible with those games. i'd buy a 6800 for this reason, but if you have an NV allergy, by all means, get an ATI. both the X800 and the 6800 are a HUGE step forward in terms of performance; neither one will disappoint you.

S3:

S8/NITRO - the S8 is the first new chip to be released by S3 since their acquisition by VIA. fully DX9 compatible, the chip performs on about the same level as a radeon 9600PRO. openGL drivers are slow though, and there seem to be display errors in openGL as well. the nitro is clocked slightly faster. pointless to buy because proven cards with solid drivers (9600PRO, 5700 series) are available for similar money; nonetheless, when you consider that this is their first new architecture in five years (!), it's not so bad. shows that they, as a company, have potential. maybe their next chip will give ATI/nvidia a good scare. (update) well, since i wrote that there have been several tests of S8 cards posted on the web, and it seems that in each one of them the S8 had driver problems. it's been a long time since my original VGA Guide post at the planetside forums, but for some reason it seems that S3 haven't done much driver work since then. their openGL support STILL seems to suck as well.

S4/NITRO - this is the smaller version of the S8 chip, with only half as many rendering pipes. performance falls between the radeon 9200 on one side and the radeon 9600 non-PRO and geforce 5600 non-ULTRA on the other. not bad for the price, but if you have to buy something in that category i'd recommend that you go with a more proven solution (like the aforementioned ATI/NV chips), as S3's drivers are not quite there yet.

XGI:

XGI is partially owned by SiS. they currently make three chips, all of which suck. i will only describe the top version of their card; i'm tired of typing and really, they suck.

volari V8 DUO - this has TWO of XGI's volari V8 chips on it, for a total of 16 rendering pipes. fully DX9 compatible. it still only performs as well as a radeon 9600PRO or an nvidia 5700ultra, which does not prevent XGI from suggesting a $400 MSRP. image quality is also less than perfect, and several websites have said that XGI uses lower quality filtering in their drivers to help performance. other versions of the volari include the V8 (single chip, and not a bad performer for the money had it been released when the 9600 cards were new), the V5 (4 pipes), and the V3, which is based on the old (and equally atrocious) trident XP5 chip and is only DX8 compatible. XGI bought trident some time ago, only god knows why. it seems that XGI have gotten their drivers together though, as several sites say that rendering issues have gone away and performance has stabilised over the last few months. i still can't recommend any of the XGI chips to you, but it's good to know that they haven't abandoned their products and are developing drivers for them. hopefully their next chips will be competitive.


so there you have it. i hope this helps some of you make better buying decisions, and i will update this every once in a while when new chips are released.
