We discussed the changes brought by the Ada Lovelace architecture in our GeForce RTX 4090 review, and we refer you to that article for the details. The GeForce RTX 4080 does not use the same graphics chip as the GeForce RTX 4090 (AD102) but a slightly smaller version called AD103, which packs 45.9 billion transistors into a 378.6 mm² die, compared with the 76.3 billion transistors over 608.5 mm² of the higher-end model.
This version has only 7 GPC blocks on board. The classic shading units (CUDA cores) number 9728, roughly halfway between the GeForce RTX 3080 (8704) and the GeForce RTX 3080 Ti (10240); note, however, that these units are not quite the same as those of the Ampere architecture. Beyond that, there are 304 4th-generation Tensor Cores to accelerate AI workloads, 76 3rd-generation RT Cores for raytracing, 304 texture units and 112 ROPs (render output units). Also noteworthy is the 64 MB of L2 cache, versus 4 to 6 MB on the GeForce RTX 3000 series.
The number of compute units therefore does not change much compared with the previous generation of Nvidia graphics cards, but the move to a 5 nm-class process (TSMC's 4N node) lets the manufacturer push clock speeds considerably higher. The GeForce RTX 4080 is rated at a 2505 MHz boost clock, an increase of 795 MHz over the RTX 3080. Combined with the architectural improvements to the various processing units, this should deliver a significant generational leap in performance.
The GPU is paired with 16 GB of 2800 MHz GDDR6X operating on a 256-bit bus, for a bandwidth of 716.8 GB/s. The card as a whole is rated for a maximum TGP of 320 watts, but Nvidia is changing its approach slightly. On the GeForce RTX 2000 and 3000 series, the TGP was very often the power draw observed in most situations; on the GeForce RTX 4000 series, it is more of a power limit. Nvidia indicates, for example, that the average in-game consumption of the GeForce RTX 4080 is closer to 250 watts.
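The bandwidth figure above follows from a standard calculation. Here is a minimal sketch of it, assuming the 2800 MHz figure corresponds to a 22.4 Gbps effective per-pin rate (GDDR6X transfers multiple bits per clock); the formula itself applies to any GDDR memory:

```python
# Peak memory bandwidth sketch for the figures quoted above.
# Assumption: 2800 MHz GDDR6X here corresponds to 22.4 Gbps effective
# per pin; the formula is the standard one for graphics memory.
def memory_bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
    """GB/s = per-pin rate (Gb/s) x bus width (bits) / 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

print(memory_bandwidth_gb_s(22.4, 256))  # RTX 4080: 716.8 GB/s
```

The same formula with a 384-bit bus explains why the RTX 4090, on the same memory, reaches a noticeably higher bandwidth.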
Cooling 250 to 320 watts is usually not a big problem for graphics card manufacturers. Nvidia nonetheless went big on the cooling system, inherited directly from the GeForce RTX 4090's. The result is a bulky card (30.5 cm long, 3 slots thick) that is also quite heavy (no less than 2.2 kg).
The cooling system is still based on a vapor chamber in contact with the GPU and the memory chips. Two 11.6 cm fans push air through aluminum fins on either side of the card. These fans have the good taste of using oil-based (fluid) bearings rather than ball bearings, which offers better durability and lower noise; Noctua fans, for example, use this type of bearing.
While the graphics card runs on a PCIe 4.0 interface, it is powered via a PCIe 5.0-style 12VHPWR cable. This new format reduces the card's power supply to a single cable, but it does not yet seem fully mastered by every cable manufacturer, some GeForce RTX 4090s having seen their power connector melt. In any case, Nvidia provides a 12VHPWR-to-three-8-pin-PCIe adapter so the card can be used with a power supply that does not yet have the dedicated cable.
The reference card, called Founders Edition, offers four video outputs: three DisplayPort 1.4a and one HDMI 2.1a. The latter can output an 8K HDR signal at 60 Hz or 4K HDR at 120 Hz, and also supports variable refresh rate. As a reminder, the video encode and decode blocks now handle the H.264, HEVC, VC-1, VP9 and AV1 formats.
Performance in games
As expected, the GPU's operating frequency is quite high. Across all of our tests, we measured a maximum of 2790 MHz and an average of 2758 MHz in 4K, on a mix of games with classic rasterized rendering, raytracing, and raytracing with DLSS active. In WQHD and Full HD, the average is 2769 MHz.
This high frequency, combined with the improvements of the Ada Lovelace architecture, produces a substantial jump in performance. In games run in rasterization only at 4K, the GeForce RTX 4080 is 50% faster than the GeForce RTX 3080 10 GB, and it comfortably outpaces the GeForce RTX 3090 (a 30% lead on average). The GeForce RTX 4090 remains much faster still, with an average frame rate 26% higher. The Radeon RX 6900 XT is, unsurprisingly, soundly beaten (+45%), but it is not really this card's rival; for that we will have to wait for the verdict on the Radeon RX 7900 XTX and RX 7900 XT.
In raytracing, the evolution is just as striking: on average 67% faster than the RTX 3080 and 43% faster than the GeForce RTX 3090, while the GeForce RTX 4090 is in turn 41% faster in this exercise. A world apart! Nvidia's newcomer therefore excels at running raytraced games, all the more so as DLSS can push the frame rate even higher, a point reinforced by the growing number of DLSS 3 compatible games, where the frame rate is practically doubled compared with DLSS 2. This is a sizeable plus for Nvidia's graphics cards.
In short, the GeForce RTX 4080 is an accomplished graphics card in its design (exemplary finish, well-controlled temperatures and noise) that excels everywhere else as well. Very fast in games, it makes short work of the most technically demanding titles, in particular those that use, and abuse, raytracing. A good point that arrives just as, surprise surprise, games using ever more rays are announced for 2023.
Nvidia also continues to design its graphics cards as a complete experience, adding software on top of the hardware. The latest result is DLSS 3, currently exclusive to the GeForce RTX 4000 series: an upscaling and frame-generation technology that sharply increases frame rates in games without a noticeable impact on rendering quality. Since developers tend to follow Nvidia closely in adopting its technologies, there is little doubt that many games will use this solution in the months to come.
Going by the card's naming, the generational jump in games is quite significant: the GeForce RTX 4080 is on average 55% faster in frame rate than the GeForce RTX 3080, but at the cost of a sharp price increase. The newcomer sells for €1,469 in its Founders Edition version, whereas the previous model launched at €719, later adjusted to €759. The price has doubled while the performance has not, so the player does not really come out ahead, which no doubt partly explains why DLSS 3 is exclusive to the RTX 4000 series; Nvidia argues that this is a hardware limitation, a debatable point.
However, we can also compare the performance/price ratio with the GeForce RTX 3090, launched at €1,549 in its day and now adjusted to €1,199. Against it, the GeForce RTX 4080 is indeed more attractive, with average performance up 33% for a price roughly 22% higher at current prices.
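As a rough illustration of that comparison, here is the arithmetic behind it, using the article's figures; the 1.00 and 1.33 relative-performance values are a simplified baseline, not measured frame rates:

```python
# Performance-per-euro sketch using the prices and the 33% average
# performance lead quoted above; relative values are illustrative.
def perf_per_euro(relative_perf: float, price_eur: float) -> float:
    return relative_perf / price_eur

rtx_3090 = perf_per_euro(1.00, 1199)  # baseline, adjusted price
rtx_4080 = perf_per_euro(1.33, 1469)  # 33% faster on average

price_increase = 1469 / 1199 - 1        # ~22% more expensive
value_gain = rtx_4080 / rtx_3090 - 1    # ~9% better perf per euro

print(f"price +{price_increase:.1%}, perf/price +{value_gain:.1%}")
```

By the same arithmetic, the comparison with the RTX 3080 goes the other way: roughly double the price for 55% more performance is a worse ratio, which is the point made above.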
In short, the price positioning can be viewed in different ways, but the fact remains that this graphics card will stay out of reach of many budgets: most players spend little more than €700 on a graphics card, and the average basket is significantly lower. It will be especially interesting to see what performance the Radeon RX 7900 XT and Radeon RX 7900 XTX, its two real competitors at AMD, have to offer.
Efficient cooling system.
HDMI 2.1a video output.
Nvidia software ecosystem.
Fans stop when the card is idle.
Requires a very well ventilated case.
Cooling design puts CPU air coolers under strain.
Power connector in the middle of the card.