Gaming vs Workstation GPUs

This discussion was created from comments split from: Hitfilm on Microsoft Surface Book.

Comments

  • Quadro and GeForce literally run on the same architecture. Drivers being more stable is a marketing pitch; it doesn't actually mean anything in the real world.

    What they do with higher-end cards is add more hardware, such as error-correcting VRAM (not just ECC system RAM), and so on.

    HitFilm works best with GeForce cards (and AMD cards, of course), because "workstation" cards only pay off when you need extremely large amounts of VRAM, error-free rendering (no glitches) and FP64 (a small ECC-query sketch follows below).
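
    For what it's worth, whether ECC is actually active on a given board can be checked from code. Here's a minimal C++ sketch using the CUDA runtime API (my own example, not from the thread; it assumes the CUDA toolkit is installed, and on GeForce cards the flag will normally read 0):

        #include <cuda_runtime.h>
        #include <cstdio>

        int main() {
            int count = 0;
            if (cudaGetDeviceCount(&count) != cudaSuccess) return 1;

            for (int i = 0; i < count; ++i) {
                cudaDeviceProp prop{};
                cudaGetDeviceProperties(&prop, i);
                // ECCEnabled is 1 when error-correcting memory is active
                // (typically Quadro/Tesla parts), 0 otherwise.
                std::printf("%s: ECC %s\n", prop.name,
                            prop.ECCEnabled ? "enabled" : "disabled");
            }
            return 0;
        }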

  • Aladdin4d Moderator

    @CNK

    "Drivers being more stable is a marketing pitch, it doesn't actually mean anything in the real world. "

    Ummm, sorry, but that's wrong. The drivers are quite a bit different. In general, a gaming GPU only has to deal with gaming APIs and single-precision calculations. Visual quality and speed are paramount. Workstation GPUs are used in applications that have radically different requirements.

    CAD applications, for example, need GPUs that can handle polygon counts that are unheard of in the VFX or gaming world. You can't use clever tricks like bump maps to add detail that's just visual fluff; the detail all has to be modeled, and that means more polygons, so even small mechanical models can have over a billion polygons depending on the complexity of the parts (a rough memory estimate follows at the end of this comment). Now scale that up to designing a car, a bridge or the next Airbus. Double-precision calculations, accuracy and data integrity are absolute requirements. A 'glitch' in a model isn't just an annoyance; it's something that can lead to production problems, cost overruns, product recalls, and potentially even mean the difference between life and death for someone. Driver stability is paramount, because a lack of stability means the data can't be trusted.

    Compared to gaming GPU drivers, workstation GPU drivers are, in a sense, incomplete. The demanding requirements come with a high computational penalty, and the only realistic way to overcome that and keep performance acceptable is for application developers to get closer to the 'bare metal'. Workstation GPUs and their drivers allow this in ways gaming GPUs just don't, and it's up to the application developer to complete their custom driver implementation.
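
    A rough back-of-the-envelope C++ sketch of why billion-polygon models push GPU memory so hard (my own illustrative numbers, not anything from NVIDIA or the CAD vendors; it assumes an un-indexed triangle mesh storing only xyz positions, which is a floor, since real CAD kernels store far more per vertex):

        #include <cstdio>

        int main() {
            const double triangles = 1.0e9;             // "over a billion polygons"
            const double vertices  = triangles * 3;     // un-indexed: 3 vertices per triangle
            const double fp32Bytes = vertices * 3 * 4;  // xyz as 32-bit floats
            const double fp64Bytes = vertices * 3 * 8;  // xyz as 64-bit doubles

            std::printf("FP32 positions: %.0f GB\n", fp32Bytes / 1e9); // ~36 GB
            std::printf("FP64 positions: %.0f GB\n", fp64Bytes / 1e9); // ~72 GB
            return 0;
        }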

  • I would agree that the workstation drivers are probably more stable. The number of apps they have to test against is quite small, and they also often do custom drivers to tweak performance. The workstation profit margins allow for more testing and validation.

    Workstation GPUs are commonly the same silicon as the consumer cards; they spec the same. At the package level certain features are jumpered on or off, ECC being a common one. NVIDIA used to differentiate them via different BIOSes, but people hacked consumer cards, flashing the workstation BIOS onto a consumer card to run the workstation drivers on it. These days it isn't hackable.

    The Tesla cards are a different animal, primarily in the greater number of FP64 ALUs on chip versus the graphics-oriented workstation and consumer cards.

    Some driver functions are likely "crippled" in consumer OpenGL drivers. AFAIK, double-sided surfaces are one example: consumer apps, a.k.a. games, don't use them, but CAD apps do (a minimal sketch of that path follows below). Supposedly there can be an order-of-magnitude performance difference between workstation and consumer GL implementations.
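
    For anyone curious, here's what the double-sided path can look like in legacy OpenGL (my own minimal C++ sketch; it assumes a compatibility-profile context is already current, so context creation is omitted). This is the kind of fixed-function call CAD viewers lean on and games generally never touch:

        #include <GL/gl.h>

        // Assumes a legacy/compatibility OpenGL context is current.
        void enableTwoSidedSurfaces() {
            // Don't cull back faces: CAD models are often open shells,
            // so the "inside" of a surface has to stay visible.
            glDisable(GL_CULL_FACE);

            // Light back faces with flipped normals instead of leaving
            // them unlit/black.
            glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
        }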

  • CNK

    I was more or less passing on what the NVIDIA rep told me today.

    I guess a better way to put it would be that they made sure nobody could hack the cards anymore, and that pros want to buy Quadros for more VRAM and for the assurance that, when rendering, not a single pixel has its value wrong due to some glitch.

    They didn't want to give me any more detail than that, but they did send me this PDF, if you're interested: http://www.nvidia.com/object/quadro_geforce.html

    Completely unrelated to the OP's question, I suppose, but it's still interesting: the difference isn't really a difference, since both cards are technically the same, yet they're kept apart because NVIDIA likes money.

    It should also be noted that NVIDIA has recently enabled 10-bit OpenGL support on its GeForce cards (a small sketch of requesting a 10-bit buffer follows at the end of this comment). AMD did that well before NVIDIA, but it's a step in the right direction, I suppose.

    If people still want to discuss this topic, I wouldn't mind seeing a new thread created from this, mods.
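
    Here's roughly what asking for a 10-bit OpenGL framebuffer can look like (my own C++ example using GLFW, which is just one possible windowing library; whether you actually get 10 bits per channel still depends on the driver, GPU and display, so treat this as illustrative):

        #include <GLFW/glfw3.h>
        #include <cstdio>

        int main() {
            if (!glfwInit()) return 1;

            // Ask for 10 bits per colour channel (plus 2 bits alpha = 32 bpp total).
            glfwWindowHint(GLFW_RED_BITS,   10);
            glfwWindowHint(GLFW_GREEN_BITS, 10);
            glfwWindowHint(GLFW_BLUE_BITS,  10);
            glfwWindowHint(GLFW_ALPHA_BITS,  2);

            GLFWwindow* win = glfwCreateWindow(640, 480, "10-bit test", nullptr, nullptr);
            if (!win) { glfwTerminate(); return 1; }
            glfwMakeContextCurrent(win);

            // Query what the driver actually granted (legacy-style query,
            // fine in the default/compatibility context GLFW creates).
            GLint r = 0, g = 0, b = 0;
            glGetIntegerv(GL_RED_BITS,   &r);
            glGetIntegerv(GL_GREEN_BITS, &g);
            glGetIntegerv(GL_BLUE_BITS,  &b);
            std::printf("Got %d/%d/%d bits per channel\n", r, g, b);

            glfwDestroyWindow(win);
            glfwTerminate();
            return 0;
        }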

  • Aladdin4d Moderator

    The profit margins on workstation cards aren't as great as you might think. The support staff ratio alone, compared to what's required for supporting gaming cards, is insane. @CNK, you mentioned not having a single pixel wrong. In a game, if a pixel changes because of a GPU or driver change, it's no big deal. If you're Boeing, it means halting production, and it may mean keeping production halted for a long time.

    In top-tier CAD work the geometry is commonly double precision, with accuracy held down to 1.0e-12 (0.000000000001, a trillionth, i.e. the pico scale) or even finer, on tolerances ranging from roughly 1.0e-3 mm (a micrometre) down to 1.0e-9 m (a nanometre); a tiny float-vs-double illustration of that scale follows at the end of this comment. That accuracy has to be maintained across multiple generations of hardware and software without any issues whatsoever. That requirement is called regression compliance. If a change in hardware or software causes any change in the geometry, regression compliance is broken. You can't manufacture and install parts on an airliner that are different from the ones all the simulation, testing and certification was done on originally. If you can't locate what caused the change, then any new parts made must undergo all the same simulation, testing and certification before they could ever be used.

    Each new GPU and each and every driver revision must undergo regression testing before it can be released. Doing just that takes an army of people who do nothing but test thousands of models in thousands of applications, to the level of model integrity required for each individual industry that will be using the GPU. That's the kind of thing you pay for when you buy a workstation card, and it's just one example of several costs and obligations gaming GPUs aren't subjected to.
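
    To make the precision point concrete, here's a tiny float-vs-double C++ illustration (my own numbers, nothing from Boeing or NVIDIA): a coordinate near 1 metre adjusted by 1.0e-12 m. FP32 carries roughly 7 significant decimal digits and FP64 roughly 15-16, so only the double keeps the adjustment:

        #include <cstdio>

        int main() {
            const double offset = 1.0e-12;       // a one-picometre adjustment

            float  f = 1.0f;
            double d = 1.0;

            f += static_cast<float>(offset);     // absorbed: below FP32 resolution near 1.0
            d += offset;                         // preserved in FP64

            std::printf("float : %.15f\n", f);   // prints 1.000000000000000
            std::printf("double: %.15f\n", d);   // prints 1.000000000001000
            return 0;
        }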

  • Totally interesting thread development, here. I've got two different cards installed on the tower I use HitFilm on. I'm wondering if I ought to try switching cards. I'll have to get out the paperwork with the specs and see if the other card is rated higher than my GeForce.

  • Well, that is all very confusing to someone who knows little about the graphics card market. So imagine my confusion when I am recommended an Nvidia 1070 card and am presented with this lot of choices:

    https://www.mwave.com.au/searchresult?w=Nvidia+1070

    Some are gaming cards, some are not, yet all are Nvidia 1070 cards?

  • @Triem23 Are you sure you're not confusing USD with AUD?

  • Triem23 Moderator

    @CNK entirely possible. Probable even. My bad!
