
GeForce GTX Titan X vs Radeon RX Vega 56

Intro

The GeForce GTX Titan X comes with a core clock frequency of 1000 MHz and a GDDR5 memory frequency of 1750 MHz (7000 MHz effective). It uses a 384-bit bus and is built on a 28 nm process. It features 3072 SPUs, 192 Texture Address Units, and 96 Raster Operation Units.

Compare that to the Radeon RX Vega 56, which comes with a core clock frequency of 1156 MHz and an HBM2 memory frequency of 800 MHz (1600 MHz effective). It uses a 2048-bit bus and is built on a 14 nm process. It comprises 3584 SPUs, 224 Texture Address Units, and 64 Raster Operation Units.


Benchmarks

These are real-world performance benchmarks that were submitted by Hardware Compare users. The scores seen here are the average of all benchmarks submitted for each respective test and hardware.

3DMark Fire Strike Graphics Score

Radeon RX Vega 56 21011 points
GeForce GTX Titan X 17879 points
Difference: 3132 (18%)

Power Usage and Theoretical Benchmarks

Power Consumption (Max TDP)

Radeon RX Vega 56 210 Watts
GeForce GTX Titan X 250 Watts
Difference: 40 Watts (19%)

Memory Bandwidth

In terms of memory bandwidth, the Radeon RX Vega 56 should theoretically be a lot better than the GeForce GTX Titan X.

Radeon RX Vega 56 419430 MB/sec
GeForce GTX Titan X 336000 MB/sec
Difference: 83430 (25%)

Texel Rate

The Radeon RX Vega 56 should be a lot (about 35%) faster at anisotropic filtering than the GeForce GTX Titan X.

Radeon RX Vega 56 258944 Mtexels/sec
GeForce GTX Titan X 192000 Mtexels/sec
Difference: 66944 (35%)

Pixel Rate

If running at high resolutions is important to you, then the GeForce GTX Titan X is the winner by a clear margin.

GeForce GTX Titan X 96000 Mpixels/sec
Radeon RX Vega 56 73984 Mpixels/sec
Difference: 22016 (30%)

Please note that the above 'benchmarks' are theoretical - the results were calculated from each card's specifications, and real-world performance may (and probably will) vary.

Price Comparison

GeForce GTX Titan X: check prices at Amazon.com

Radeon RX Vega 56: check prices at Amazon.com

Please note that the price comparisons are based on search keywords - sometimes it might show cards with very similar names that are not exactly the same as the one chosen in the comparison. We do try to filter out the wrong results as best we can, though.

Specifications


Model                    GeForce GTX Titan X   Radeon RX Vega 56
Manufacturer             nVidia                AMD
Year                     March 2015            September 2017
Code Name                GM200                 Vega 10 XL
Memory                   12288 MB              8192 MB
Core Speed               1000 MHz              1156 MHz
Memory Speed (effective) 7000 MHz              1600 MHz
Power (Max TDP)          250 watts             210 watts
Bandwidth                336000 MB/sec         419430 MB/sec
Texel Rate               192000 Mtexels/sec    258944 Mtexels/sec
Pixel Rate               96000 Mpixels/sec     73984 Mpixels/sec
Unified Shaders          3072                  3584
Texture Mapping Units    192                   224
Render Output Units      96                    64
Memory Type              GDDR5                 HBM2
Bus Width                384-bit               2048-bit
Fab Process              28 nm                 14 nm
Transistors              8000 million          12500 million
Bus Interface            PCIe 3.0 x16          PCIe 3.0 x16
DirectX Version          DirectX 12.0          DirectX 12.0
OpenGL Version           OpenGL 4.5            OpenGL 4.5

Memory Bandwidth: Bandwidth is the maximum amount of data (measured in megabytes per second) that can be transferred over the external memory interface in one second. It is calculated by multiplying the bus width by the memory speed. For DDR-type memory, multiply the result by 2; for GDDR5, multiply by 4 instead (or, equivalently, use the effective memory clock directly). The higher the card's memory bandwidth, the better the card will perform in general. It especially helps with anti-aliasing, HDR, and higher screen resolutions.
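As a rough sketch, the bandwidth formula above can be worked through for both cards (the helper function is just for illustration; clocks are taken from the specification table):

```python
def memory_bandwidth_mb_s(bus_width_bits, effective_mem_clock_mhz):
    # bytes per transfer (bus width / 8) times millions of transfers per second
    return (bus_width_bits // 8) * effective_mem_clock_mhz

# GeForce GTX Titan X: 384-bit bus, GDDR5 at 1750 MHz (x4 = 7000 MHz effective)
print(memory_bandwidth_mb_s(384, 1750 * 4))   # 336000 MB/sec, as listed above
# Radeon RX Vega 56: 2048-bit bus, HBM2 at 800 MHz (x2 = 1600 MHz effective)
print(memory_bandwidth_mb_s(2048, 800 * 2))   # 409600 MB/sec
```

The formula gives 409600 MB/sec for the Vega 56; the 419430 MB/sec figure quoted above appears to come from an additional binary (1024-based) unit conversion.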

Texel Rate: Texel rate is the maximum number of texture map elements (texels) that can be applied per second. This figure is worked out by multiplying the total texture units of the card by the core clock speed of the chip. The higher the texel rate, the better the graphics card will be at texture filtering (anisotropic filtering - AF). It is measured in millions of texels per second.
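Applying that calculation to the two cards (an illustrative sketch; the function name is not part of any real API):

```python
def texel_rate_mtexels_s(texture_units, core_clock_mhz):
    # one texel per texture unit per clock; MHz in -> Mtexels/sec out
    return texture_units * core_clock_mhz

print(texel_rate_mtexels_s(192, 1000))  # GTX Titan X: 192000 Mtexels/sec
print(texel_rate_mtexels_s(224, 1156))  # RX Vega 56: 258944 Mtexels/sec
```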

Pixel Rate: Pixel rate is the maximum number of pixels that the graphics chip can write to its local memory in one second - measured in millions of pixels per second. Pixel rate is calculated by multiplying the number of ROPs by the core speed of the card. ROPs (Raster Operations Pipelines - also called Render Output Units) are responsible for filling the screen with pixels (the image). The actual pixel output rate also depends on many other factors, most notably the memory bandwidth of the card - the lower the memory bandwidth, the lower the potential to reach the maximum fill rate.
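The same kind of back-of-the-envelope calculation reproduces the pixel rate figures above (again an illustrative sketch, not a real API):

```python
def pixel_rate_mpixels_s(rops, core_clock_mhz):
    # one pixel per ROP per clock; MHz in -> Mpixels/sec out
    return rops * core_clock_mhz

print(pixel_rate_mpixels_s(96, 1000))   # GTX Titan X: 96000 Mpixels/sec
print(pixel_rate_mpixels_s(64, 1156))   # RX Vega 56: 73984 Mpixels/sec
```

Note that the Titan X wins here despite its lower clock, because 96 ROPs at 1000 MHz outproduce 64 ROPs at 1156 MHz.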

