Hello. What exactly determines the maximum number of monitors you can have connected? Is it an overall bandwidth budget shared across the GPU's display output connectors, or does each connector get its own dedicated bandwidth? For example, there is the DSC case: https://nvidia.custhelp.com/app/answers/detail/a_id/5338/~/using-high-resolution%2Frefresh-rate-displays-with-vesa-display-stream

That article makes it seem like, when DSC is used, the bandwidth of two connectors is consumed by a single connector. Is that assumption accurate? And if DSC is not used, can you connect a monitor to each of the GPU's display outputs and run each at its maximum supported resolution / refresh rate?
Hi. In general, a single NVIDIA GPU can drive four monitors: the GPU has four "heads" (the engines that generate the raster timings sent to a monitor). This of course depends on what physical connectors are on your graphics card, though you can use DisplayPort Multi-Stream to connect multiple DisplayPort monitors to one DP connector on the graphics card.

Each head has to fetch pixels from video memory; the memory bandwidth available to the heads is shared across all of them, it is finite, and it varies by configuration. If you're stepping through the code, the "IsModePossible" path validates whether there is sufficient memory bandwidth to satisfy the GPU-wide requested configuration.

Between the head and the monitor sits another hardware block on the GPU called the "Output Resource" (OR), which translates the timings produced by the head into the protocol of the particular monitor (TMDS, DP, etc). Display Stream Compression (DSC) is primarily about compressing the data in the protocol, i.e., between the OR and the monitor.

The section you quote about using two heads in some configurations is a little different: we'll use DSC in a fair number of cases to compress the data between the OR and the monitor, but in some of the most extreme cases a single head cannot fetch and produce pixels fast enough to feed the OR (IIRC, the limit is somewhere around 1.3 Gpixels/second, but my numbers may be off). Those are the cases where we need to use two heads.

The quoted text is describing NVIDIA's Windows driver. The Linux driver does not yet implement using two heads in this way; we're working on it, but I don't have a firm ETA.

I hope that helps.
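To make the "single head cannot produce pixels fast enough" case concrete, here is a rough back-of-the-envelope sketch. The ~1.3 Gpixel/s figure is the approximate number from the reply above (explicitly hedged there), not an official spec, and the blanking totals below are illustrative, not exact VESA timings; the real "IsModePossible" validation accounts for far more than this.

```python
# Rough estimate of a mode's pixel rate versus an approximate single-head limit.
# NOT the driver's actual validation logic; just the arithmetic behind it.

SINGLE_HEAD_LIMIT = 1.3e9  # pixels/second; approximate figure, may be off


def pixel_rate(h_total, v_total, refresh_hz):
    """Pixel clock = total raster size (active + blanking) x refresh rate."""
    return h_total * v_total * refresh_hz


# 4K @ 240 Hz with reduced-blanking-style totals (illustrative numbers only)
rate = pixel_rate(3840 + 160, 2160 + 90, 240)
print(f"{rate / 1e9:.2f} Gpixel/s, "
      f"might need two heads: {rate > SINGLE_HEAD_LIMIT}")
```

For a mode like this, the raster rate comfortably exceeds the quoted single-head figure, which is the kind of configuration where the Windows driver splits the work across two heads feeding one OR.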