NVIDIA Accelerated Linux Graphics Driver README and Installation Guide
NVIDIA Corporation
Last Updated: Sun Jul 5 14:51:37 UTC 2020
Most Recent Driver Version: 450.57
Published by
NVIDIA Corporation
2701 San Tomas Expressway
Santa Clara, CA
95050
NOTICE:
ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS,
DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, "MATERIALS")
ARE BEING PROVIDED "AS IS." NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED,
STATUTORY, OR OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS
ALL IMPLIED WARRANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A
PARTICULAR PURPOSE. Information furnished is believed to be accurate and
reliable. However, NVIDIA Corporation assumes no responsibility for the
consequences of use of such information or for any infringement of patents or
other rights of third parties that may result from its use. No license is
granted by implication or otherwise under any patent or patent rights of
NVIDIA Corporation. Specifications mentioned in this publication are subject
to change without notice. This publication supersedes and replaces all
information previously supplied. NVIDIA Corporation products are not
authorized for use as critical components in life support devices or systems
without express written approval of NVIDIA Corporation.
NVIDIA, the NVIDIA logo, NVIDIA nForce, GeForce, NVIDIA Quadro, Vanta, TNT2,
TNT, RIVA, RIVA TNT, Tegra, and TwinView are registered trademarks or
trademarks of NVIDIA Corporation in the United States and/or other countries.
Linux is a registered trademark of Linus Torvalds. Fedora and Red Hat are
trademarks of Red Hat, Inc. SuSE is a registered trademark of SuSE AG.
Mandriva is a registered trademark of Mandriva S.A. Intel and Pentium are
registered trademarks of Intel Corporation. Athlon is a registered trademark
of Advanced Micro Devices. OpenGL is a registered trademark of Silicon
Graphics Inc. PCI Express is a registered trademark and/or service mark of
PCI-SIG. Windows is a registered trademark of Microsoft Corporation in the
United States and other countries. Other company and product names may be
trademarks or registered trademarks of the respective owners with which they
are associated.
Copyright 2006 - 2013 NVIDIA Corporation. All rights reserved.
______________________________________________________________________________
TABLE OF CONTENTS
______________________________________________________________________________
Chapter 1. Introduction
Chapter 2. Minimum Requirements
Chapter 3. Selecting and Downloading the NVIDIA Packages for Your System
Chapter 4. Installing the NVIDIA Driver
Chapter 5. Listing of Installed Components
Chapter 6. Configuring X for the NVIDIA Driver
Chapter 7. Frequently Asked Questions
Chapter 8. Common Problems
Chapter 9. Known Issues
Chapter 10. Allocating DMA Buffers on 64-bit Platforms
Chapter 11. Specifying OpenGL Environment Variable Settings
Chapter 12. Configuring Multiple Display Devices on One X Screen
Chapter 13. Configuring GLX in Xinerama
Chapter 14. Configuring Multiple X Screens on One Card
Chapter 15. Support for the X Resize and Rotate Extension
Chapter 16. Configuring a Notebook
Chapter 17. Using the NVIDIA Driver with Optimus Laptops
Chapter 18. Programming Modes
Chapter 19. Configuring Flipping and UBB
Chapter 20. Using the /proc File System Interface
Chapter 21. Configuring Power Management Support
Chapter 22. PCI-Express Runtime D3 (RTD3) Power Management
Chapter 23. Using the X Composite Extension
Chapter 24. Using the nvidia-settings Utility
Chapter 25. NVIDIA Spectre V2 Mitigation
Chapter 26. Using the nvidia-smi Utility
Chapter 27. The NVIDIA Management Library
Chapter 28. Using the nvidia-debugdump Utility
Chapter 29. Using the nvidia-persistenced Utility
Chapter 30. Configuring SLI and Multi-GPU Frame Rendering
Chapter 31. Configuring Frame Lock and Genlock
Chapter 32. Configuring Depth 30 Displays
Chapter 33. Offloading Graphics Display with RandR 1.4
Chapter 34. PRIME Render Offload
Chapter 35. Direct Rendering Manager Kernel Modesetting (DRM KMS)
Chapter 36. Configuring External and Removable GPUs
Chapter 37. Addressing Capabilities
Chapter 38. NVIDIA Contact Info and Additional Resources
Chapter 39. Acknowledgements
Appendix A. Supported NVIDIA GPU Products
Appendix B. X Config Options
Appendix C. Display Device Names
Appendix D. GLX Support
Appendix E. Dots Per Inch
Appendix F. i2c Bus Support
Appendix G. VDPAU Support
Appendix H. Audio Support
Appendix I. Tips for New Linux Users
Appendix J. Application Profiles
Appendix K. GPU Names
______________________________________________________________________________
Chapter 1. Introduction
______________________________________________________________________________
1A. ABOUT THE NVIDIA ACCELERATED LINUX GRAPHICS DRIVER
The NVIDIA Accelerated Linux Graphics Driver brings accelerated 2D
functionality and high-performance OpenGL support to Linux x86_64 with the use
of NVIDIA graphics processing units (GPUs).
These drivers provide optimized hardware acceleration for OpenGL and X
applications and support nearly all recent NVIDIA GPU products (see Appendix A
for a complete list of supported GPUs).
1B. ABOUT THIS DOCUMENT
This document provides instructions for the installation and use of the NVIDIA
Accelerated Linux Graphics Driver. Chapter 3, Chapter 4 and Chapter 6 walk the
user through the process of downloading, installing and configuring the
driver. Chapter 7 addresses frequently asked questions about the installation
process, and Chapter 8 provides solutions to common problems. The remaining
chapters include details on different features of the NVIDIA Linux Driver.
Frequently asked questions about specific tasks are included in the relevant
chapters. These pages are posted on NVIDIA's web site (http://www.nvidia.com),
and are installed in '/usr/share/doc/NVIDIA_GLX-1.0/'.
1C. ABOUT THE AUDIENCE
This document assumes that the reader has at least a basic understanding of
Linux techniques and terminology. However, new Linux users can refer to
Appendix I for details on parts of the installation process.
1D. ADDITIONAL INFORMATION
In case additional information is required, Chapter 38 provides contact
information for NVIDIA Linux driver resources, as well as a brief listing of
external resources.
______________________________________________________________________________
Chapter 2. Minimum Requirements
______________________________________________________________________________
2A. MINIMUM SOFTWARE REQUIREMENTS
Software Element Supported versions Check With...
--------------------- --------------------- ---------------------
Linux kernel 2.6.32 and newer `cat /proc/version`
X.Org xserver * 1.7, 1.8, 1.9, 1.10, `Xorg -version`
1.11, 1.12, 1.13,
1.14, 1.15, 1.16,
1.17, 1.18, 1.19,
1.20
Kernel modutils 2.1.121 and newer `insmod --version`
glibc 2.11 `/lib/libc.so.6`
libvdpau ** 0.2 `pkg-config
--modversion vdpau`
* Please see "Q. How do I interpret X server version numbers?" in Chapter 7
for a note about X server version numbers.
** Required for hardware-accelerated video playback. See Appendix G for more
information.
If you need to build the NVIDIA kernel module:
Software Element Min Requirement Check With...
--------------------- --------------------- ---------------------
binutils 2.9.5 `size --version`
GNU make 3.77 `make --version`
gcc 2.91.66 `gcc --version`
All official stable kernel releases from 2.6.32 and up are supported;
pre-release versions, such as 4.19-rc1, are not supported. The Linux kernel
can be downloaded from http://www.kernel.org or one of its mirrors.
binutils and gcc can be retrieved from http://www.gnu.org or one of its
mirrors.
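The versions listed in the tables above can be checked quickly from a shell; a
minimal sketch follows (any of these tools may be absent on a minimal system,
so each check is guarded):

```shell
# Print the running kernel release and the versions of the build tools
# listed above; 'command -v' guards against tools that are not installed.
uname -r
command -v gcc  >/dev/null 2>&1 && gcc  --version | head -n 1 || true
command -v make >/dev/null 2>&1 && make --version | head -n 1 || true
command -v size >/dev/null 2>&1 && size --version | head -n 1 || true
```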
Sometimes very recent X server versions are not supported immediately
following release, but we aim to support all new versions as soon as possible.
Support is not added for new X server versions until after the video driver
ABI is frozen, which usually happens at the release candidate stage.
Prerelease versions that are not release candidates, such as "1.10.99.1", are
not supported.
If you are setting up the X Window System for the first time, it is often
easier to begin with one of the open source drivers that ships with X.Org
(either "vga", "vesa", or "fbdev"). Once your system is operating properly
with the open source driver, you may then switch to the NVIDIA driver.
These software packages may also be available through your Linux distributor.
______________________________________________________________________________
Chapter 3. Selecting and Downloading the NVIDIA Packages for Your System
______________________________________________________________________________
NVIDIA drivers can be downloaded from the NVIDIA website
(http://www.nvidia.com).
The NVIDIA graphics driver uses a Unified Driver Architecture: the single
graphics driver supports all modern NVIDIA GPUs. "Legacy" GPU support has been
moved from the unified driver to special legacy GPU driver releases. See
Appendix A for a list of legacy GPUs.
The NVIDIA graphics driver is bundled in a self-extracting package named
'NVIDIA-Linux-x86_64-450.57.run'. On Linux-x86_64, that file contains
both the 64-bit driver binaries as well as 32-bit compatibility driver
binaries; the 'NVIDIA-Linux-x86_64-450.57-no-compat32.run' file only
contains the 64-bit driver binaries.
______________________________________________________________________________
Chapter 4. Installing the NVIDIA Driver
______________________________________________________________________________
This chapter provides instructions for installing the NVIDIA driver. Note that
after installation, but prior to using the driver, you must complete the steps
described in Chapter 6. Additional details that may be helpful for the new
Linux user are provided in Appendix I.
4A. BEFORE YOU BEGIN
Before you begin the installation, exit the X server and terminate all OpenGL
applications (note that it is possible that some OpenGL applications persist
even after the X server has stopped). You should also set the default run
level on your system such that it will boot to a VGA console, and not directly
to X. Doing so will make it easier to recover if there is a problem during the
installation process. See Appendix I for details.
If you're installing on a system that is set up to use the Nouveau driver,
then you should first disable it before attempting to install the NVIDIA
driver. See Interaction with the Nouveau Driver for details.
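As an illustration only (the exact mechanism varies by distribution), Nouveau
is commonly disabled with a modprobe configuration fragment such as the one
below; the initramfs usually needs to be rebuilt afterward so the change takes
effect at boot:

```
# Hypothetical /etc/modprobe.d/blacklist-nouveau.conf -- prevents the
# nouveau module from loading and disables its kernel modesetting.
blacklist nouveau
options nouveau modeset=0
```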
4B. STARTING THE INSTALLER
After you have downloaded the file 'NVIDIA-Linux-x86_64-450.57.run',
change to the directory containing the downloaded file, and as the 'root' user
run the executable:
# cd yourdirectory
# sh NVIDIA-Linux-x86_64-450.57.run
The '.run' file is a self-extracting archive. When executed, it extracts the
contents of the archive and runs the contained 'nvidia-installer' utility,
which provides an interactive interface to walk you through the installation.
'nvidia-installer' will also install itself to '/usr/bin/nvidia-installer',
which may be used at some later time to uninstall drivers, auto-download
updated drivers, etc. The use of this utility is detailed later in this
chapter.
You may also supply command line options to the '.run' file. Some of the more
common options are listed below.
Common '.run' Options
--info
Print embedded info about the '.run' file and exit.
--check
Check integrity of the archive and exit.
--extract-only
Extract the contents of './NVIDIA-Linux-x86_64-450.57.run', but do
not run 'nvidia-installer'.
--help
Print usage information for the common commandline options and exit.
--advanced-options
Print usage information for common command line options as well as the
advanced options, and then exit.
4C. INSTALLING THE KERNEL INTERFACE
The NVIDIA kernel module has a kernel interface layer that must be compiled
specifically for each kernel. NVIDIA distributes the source code to this
kernel interface layer.
When the installer is run, it will check your system for the required kernel
sources and compile the kernel interface. You must have the source code for
your kernel installed for compilation to work. On most systems, this means
that you will need to locate and install the correct kernel-source,
kernel-headers, or kernel-devel package; on some distributions, no additional
packages are required.
After the correct kernel interface has been compiled, the kernel interface
will be linked with the closed-source portion of the NVIDIA kernel module.
This requires that you have a linker installed on your system. The linker,
usually '/usr/bin/ld', is part of the binutils package. You must have a linker
installed prior to installing the NVIDIA driver.
4D. REGISTERING THE NVIDIA KERNEL MODULE WITH DKMS
The installer will check for the presence of DKMS on your system. If DKMS is
found, you will be given the option of registering the kernel module with
DKMS, and using the DKMS infrastructure to build and install the kernel
module. On most systems with DKMS, DKMS will take care of automatically
rebuilding registered kernel modules when installing a different Linux kernel.
If 'nvidia-installer' is unable to install the kernel module through DKMS, the
installation will be aborted and no kernel module will be installed. If this
happens, installation should be attempted again, without the DKMS option.
Note that versions of 'nvidia-installer' shipped with drivers before release
304 do not interact with DKMS. If you choose to register the NVIDIA kernel
module with DKMS, please ensure that the module is removed from the DKMS
database before using a non-DKMS aware version of 'nvidia-installer' to
install an older driver; otherwise, module source files may be deleted without
first unregistering the module, potentially leaving the DKMS database in an
inconsistent state. Running 'nvidia-uninstall' before installing a driver
using an older installer will invoke the correct `dkms remove` command to
clean up the installation.
Due to the lack of secure storage for private keys that can be utilized by
automated processes such as DKMS, it is not possible to use DKMS in
conjunction with the module signing support built into 'nvidia-installer'.
4E. SIGNING THE NVIDIA KERNEL MODULE
Some kernels may require that kernel modules be cryptographically signed by a
key trusted by the kernel in order to be loaded. In particular, many
distributions require modules to be signed when loaded into kernels running on
UEFI systems with Secure Boot enabled. 'nvidia-installer' includes support for
signing the kernel module before installation, to ensure that it can be loaded
on such systems. Note that not all UEFI systems have Secure Boot enabled, and
not all kernels running on UEFI Secure Boot systems will require signed kernel
modules, so if you are uncertain about whether your system requires signed
kernel modules, you may try installing the driver without signing the kernel
module, to see if the unsigned kernel module can be loaded.
In order to sign the kernel module, you will need a private signing key, and
an X.509 certificate for the corresponding public key. The X.509 certificate
must be trusted by the kernel before the module can be loaded: we recommend
ensuring that the signing key be trusted before beginning the driver
installation, so that the newly signed module can be used immediately. If you
do not already have a key pair suitable for module signing use, you must
generate one. Please consult your distribution's documentation for details on
the types of keys suitable for module signing, and how to generate them.
'nvidia-installer' can generate a key pair for you at install time, but it is
preferable to have a key pair already generated and trusted by the kernel
before installation begins.
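For example, a throwaway key pair could be generated with OpenSSL along the
lines of the sketch below; the parameters shown (RSA 2048, a DER-encoded
certificate) are common choices for module signing, but consult your
distribution's documentation for its preferred settings:

```shell
# Generate a private key and a self-signed X.509 certificate for module
# signing (illustrative parameters; adjust to your distribution's policy).
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
    -subj "/CN=Local kernel module signing key/" \
    -keyout signing.key -outform DER -out signing.x509

# Print the certificate's SHA1 fingerprint so that it can be verified
# after enrollment in a trusted key source.
openssl x509 -inform DER -in signing.x509 -noout -fingerprint -sha1
```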
Once you have a key pair ready, you can use that key pair in
'nvidia-installer' by passing the keys to the installer on the command line
with the --module-signing-secret-key and --module-signing-public-key options.
As an example, it is possible to install the driver with a signed kernel
module in silent mode (i.e., non-interactively) by running:
# sh ./NVIDIA-Linux-x86_64-450.57.run -s \
--module-signing-secret-key=/path/to/signing.key \
--module-signing-public-key=/path/to/signing.x509
In the example above, 'signing.key' and 'signing.x509' are a private/public
key pair, and the public key is already enrolled in one of the kernel's
trusted module signing key sources.
On UEFI systems with secure boot enabled, nvidia-installer will present a
series of interactive prompts to guide users through the module signing
process. As an alternative to setting the key paths on the command line, the
paths can be provided interactively in response to the prompts. These prompts
will also appear when building the NVIDIA kernel module against a kernel which
has CONFIG_MODULE_SIG_FORCE enabled in its configuration, or if the installer
is run in expert mode.
KEY SOURCES TRUSTED BY THE KERNEL
In order to load a kernel module into a kernel that requires module
signatures, the module must be signed by a key that the kernel trusts. There
are several sources that a kernel may draw upon to build its pool of trusted
keys. If you have generated a key pair, but it is not yet trusted by your
kernel, you must add a certificate for your public key to a trusted key source
before it can be used to verify signatures of signed kernel modules. These
trusted sources include:
Certificates embedded into the kernel image
On kernels with CONFIG_MODULE_SIG set, a certificate for the public key
used to sign the in-tree kernel modules is embedded, along with any
additional module signing certificates provided at build time, into the
kernel image. Modules signed by the private keys that correspond to the
embedded public key certificates will be trusted by the kernel.
Since the keys are embedded at build time, the only way to add a new
public key is to build a new kernel. On UEFI systems with Secure Boot
enabled, the kernel image will, in turn, need to be signed by a key that
is trusted by the bootloader, so users building their own kernels with
custom embedded keys should have a plan for making sure that the
bootloader will load the new kernel.
Certificates stored in the UEFI firmware database
On kernels with CONFIG_MODULE_SIG_UEFI, in addition to any certificates
embedded into the kernel image, the kernel can use certificates stored in
the 'db', 'KEK', or 'PK' databases of the computer's UEFI firmware to
verify the signatures of kernel modules, as long as they are not in the
UEFI 'dbx' blacklist.
Any user who holds the private key for the Secure Boot 'PK', or any of the
keys in the 'KEK' list should be able to add new keys that can be used by
a kernel with CONFIG_MODULE_SIG_UEFI, and any user with physical access to
the computer should be able to delete any existing Secure Boot keys, and
install his or her own keys instead. Please consult the documentation for
your UEFI-based computer system for details on how to manage the UEFI
Secure Boot keys.
Certificates stored in a supplementary key database
Some distributions include utilities that allow for the secure storage and
management of cryptographic keys in a database that is separate from the
kernel's built-in key list, and the key lists in the UEFI firmware. A
prominent example is the MOK (Machine Owner Key) database used by some
versions of the 'shim' bootloader, and the associated management
utilities, 'mokutil' and 'MokManager'.
Such a system allows users to enroll additional keys without the need to
build a new kernel or manage the UEFI Secure Boot keys. Please consult
your distribution's documentation for details on whether such a
supplementary key database is available, and if so, how to manage its
keys.
GENERATING SIGNING KEYS IN NVIDIA-INSTALLER
'nvidia-installer' can generate keys that can be used for module signing, if
existing keys are not readily available. Note that a module signed by a newly
generated key cannot be loaded into a kernel that requires signed modules
until its key is trusted, and when such a module is installed on such a
system, the installed driver will not be immediately usable, even if the
installation was successful.
When 'nvidia-installer' generates a key pair and uses it to sign a kernel
module, an X.509 certificate containing the public key will be installed to
disk, so that it can be added to one of the kernel's trusted key sources.
'nvidia-installer' will report the location of the installed certificate: make
a note of this location, and of the certificate's SHA1 fingerprint, so that
you will be able to enroll the certificate and verify that it is correct,
after the installation is finished.
By default, 'nvidia-installer' will attempt to securely delete the generated
private key with 'shred -u' after the module is signed. This is to prevent the
key from being exploited to sign a malicious kernel module, but it also means
that the same key can't be used again to install a different driver, or even
to install the same driver on a different kernel. 'nvidia-installer' can
optionally install the private signing key to disk, as it does with the public
certificate, so that the key pair can be reused in the future.
If you elect to install the private key, please make sure that appropriate
precautions are taken to ensure that it cannot be stolen. Some examples of
precautions you may wish to take:
Prevent the key from being read by anybody without physical access to the
computer
In general, physical access is required to install Secure Boot keys,
including keys managed outside of the standard UEFI key databases, to
prevent attackers who have remotely compromised the security of the
operating system from installing malicious boot code. If a trusted key is
available to remote users, even root, then it will be possible for an
attacker to sign arbitrary kernel modules without first having physical
access, making the system less secure.
One way to ensure that the key is not available to remote users is to keep
it on a removable storage medium, which is disconnected from the computer
except when signing modules.
Do not store the unencrypted private key
Encrypting the private key can add an extra layer of security: the key
will not be useful for signing modules unless it can be successfully
decrypted, first. Make sure not to store unencrypted copies of the key on
persistent storage: either use volatile storage (e.g. a RAM disk), or
securely delete any unencrypted copies of the key when not in use (e.g.
using 'shred' instead of 'rm'). Note that using 'shred' may not be
sufficient to fully purge data from some storage devices, in particular,
some types of solid state storage.
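As a small illustration of the difference (using a placeholder file, not a
real key):

```shell
# shred overwrites the file's contents before unlinking it, whereas rm
# only removes the directory entry; the file below stands in for a key.
echo "placeholder private key material" > scratch.key
shred -u scratch.key
```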
ALTERNATIVES TO THE INSTALLER'S MODULE SIGNING SUPPORT
It is possible to load the NVIDIA kernel module on a system that requires
signed modules, without using the installer's module signing support.
Depending on your particular use case, you may find one of these alternatives
more suitable than signing the module with 'nvidia-installer':
Disable UEFI Secure Boot, if applicable
On some kernels, a requirement for signed modules is only enforced when
booted on a UEFI system with Secure Boot enabled. Some users of such
kernels may find it more convenient to disable Secure Boot; however, note
that this will reduce the security of your system by making it easier for
malicious users to install potentially harmful boot code, kernels, or
kernel modules.
Use a kernel that doesn't require signed modules
The kernel can be configured not to check module signatures, or to check
module signatures, but allow modules without a trusted signature to be
loaded, anyway. Installing a kernel configured in such a way will allow
the installation of unsigned modules. Note that on Secure Boot systems,
you will still need to ensure that the kernel be signed with a key trusted
by the bootloader and/or boot firmware, and that a kernel that doesn't
enforce module signature verification may be slightly less secure than one
that does.
4F. ADDING PRECOMPILED KERNEL INTERFACES TO THE INSTALLER PACKAGE
When 'nvidia-installer' runs, it searches for a pre-compiled kernel interface
layer for the target kernel: if one is found, then the complete kernel module
can be produced by linking the precompiled interface with 'nv-kernel.o',
instead of needing to compile the kernel interface on the target system.
'nvidia-installer' includes a feature which allows users to add a precompiled
interface to the installer package. This is useful in many use cases; for
example, an administrator of a large group of similarly configured computers
can prepare an installer package with a precompiled interface for the kernel
running on those computers, then deploy the customized installer, which will
be able to install the NVIDIA kernel module without needing to have the kernel
development headers or a compiler installed on the target systems. (A linker
is still required.)
To use this feature, simply invoke the '.run' installer package with the
--add-this-kernel option; e.g.
# sh ./NVIDIA-Linux-x86_64-450.57.run --add-this-kernel
This will unpack 'NVIDIA-Linux-x86_64-450.57.run', compile a kernel
interface layer for the currently running kernel (use the --kernel-source-path
and --kernel-output-path options to specify a target kernel other than the
currently running one), and create a new installer package with the kernel
interface layer added.
Administrators of large groups of similarly configured computers that are
configured to require trusted signatures in order to load kernel modules may
find this feature especially useful when combined with the built-in support
for module signing in 'nvidia-installer'. To package a .run file with a
precompiled kernel interface layer, plus a detached module signature for the
linked module, just use the --module-signing-secret-key and
--module-signing-public-key options alongside the --add-this-kernel option.
The resulting package, besides being installable without kernel headers or a
compiler on the target system, has the added benefit of being able to produce
a signed module without needing access to the private key on the install
target system. Note that the detached signature will only be valid if the
result of linking the precompiled interface with 'nv-kernel.o' on the target
system is exactly the same as the result of linking those two files on the
system that was used to create the custom installer. To ensure optimal
compatibility, the linker used on both the package preparation system and the
install target system should be the same.
4G. OTHER FEATURES OF THE INSTALLER
Without options, the '.run' file executes the installer after unpacking it.
The installer can be run as a separate step in the process, or can be run at a
later time to get updates, etc. Some of the more important commandline options
of 'nvidia-installer' are:
'nvidia-installer' options
--uninstall
During installation, the installer will make backups of any conflicting
files and record the installation of new files. The uninstall option
undoes an install, restoring the system to its pre-install state.
--ui=none
The installer uses an ncurses-based user interface if it is able to locate
the correct ncurses library. Otherwise, it will fall back to a simple
commandline user interface. This option disables the use of the ncurses
library.
______________________________________________________________________________
Chapter 5. Listing of Installed Components
______________________________________________________________________________
The NVIDIA Accelerated Linux Graphics Driver consists of the following
components (filenames in parentheses are the full names of the components
after installation). Some paths may be different on different systems (e.g., X
modules may be installed in /usr/X11R6/ rather than /usr/lib/xorg/).
o An X driver ('/usr/lib/xorg/modules/drivers/nvidia_drv.so'); this driver
is needed by the X server to use your NVIDIA hardware.
o A GLX extension module for X
('/usr/lib/xorg/modules/extensions/libglxserver_nvidia.so.450.57');
this module is used by the X server to provide server-side GLX support.
o EGL and OpenGL ES libraries ( '/usr/lib/libEGL.so.1',
'/usr/lib/libGLESv1_CM.so.450.57', and
'/usr/lib/libGLESv2.so.450.57' ); these libraries provide the API
entry points for all OpenGL ES and EGL function calls. They are loaded at
run-time by applications.
o A Wayland EGL external platform library
('/usr/lib/libnvidia-egl-wayland.so.1') and its corresponding
configuration file (
'/usr/share/egl/egl_external_platform.d/10_nvidia_wayland.json' ); this
library provides client-side Wayland support on top of the EGLDevice and
EGLStream families of extensions, for use in combination with an
EGLStream-enabled Wayland compositor:
https://cgit.freedesktop.org/~jjones/weston/
More information can be found along with the EGL external interface and
Wayland library source code at
https://github.com/NVIDIA/eglexternalplatform and
https://github.com/NVIDIA/egl-wayland.
o Vendor neutral graphics libraries provided by libglvnd
('/usr/lib/libOpenGL.so.0', '/usr/lib/libGLX.so.0', and
'/usr/lib/libGLdispatch.so.0'); these libraries are currently used to
provide full OpenGL dispatching support to NVIDIA's implementation of
EGL.
Source code for libglvnd is available at
https://github.com/NVIDIA/libglvnd
o GLVND vendor implementation libraries for GLX
('/usr/lib/libGLX_nvidia.so.0') and EGL ('/usr/lib/libEGL_nvidia.so.0');
these libraries provide NVIDIA implementations of OpenGL functionality
which may be accessed using the GLVND client-facing libraries.
'libGLX_nvidia.so.0' is also used as the Vulkan ICD. Its configuration
file is installed as '/etc/vulkan/icd.d/nvidia_icd.json'.
An additional Vulkan layer configuration file is installed as
'/etc/vulkan/implicit_layer.d/nvidia_layers.json'. These layers add
functionality to the Vulkan loader.
o A GLVND GLX client ICD loader library ('/usr/lib/libGL.so.1'); this
library provides API entry points for all GLX function calls, and is
loaded at run-time by applications.
This library is included as a convenience to support systems that do not
already have an existing libglvnd installation. On systems with libglvnd
libraries already installed, the existing libraries should be used
instead.
o Various libraries that are used internally by other driver components.
These include '/usr/lib/libnvidia-cfg.so.450.57',
'/usr/lib/libnvidia-compiler.so.450.57',
'/usr/lib/libnvidia-eglcore.so.450.57',
'/usr/lib/libnvidia-glcore.so.450.57',
'/usr/lib/libnvidia-glsi.so.450.57',
'/usr/lib/libnvidia-glvkspirv.so.450.57',
'/usr/lib/libnvidia-rtcore.so.450.57',
'/usr/lib/libnvidia-cbl.so.450.57', and
'/usr/lib/libnvidia-allocator.so.450.57'.
o A VDPAU (Video Decode and Presentation API for Unix-like systems) library
for the NVIDIA vendor implementation,
('/usr/lib/vdpau/libvdpau_nvidia.so.450.57'); see Appendix G for
details.
o The CUDA library ('/usr/lib/libcuda.so.450.57') which provides
runtime support for CUDA (high-performance computing on the GPU)
applications.
o The PTX JIT Compiler library
('/usr/lib/libnvidia-ptxjitcompiler.so.450.57') is a JIT compiler
which compiles PTX into GPU machine code and is used by the CUDA driver.
o Two OpenCL libraries ('/usr/lib/libOpenCL.so.1.0.0',
'/usr/lib/libnvidia-opencl.so.450.57'); the former is a
vendor-independent Installable Client Driver (ICD) loader, and the latter
is the NVIDIA Vendor ICD. A config file '/etc/OpenCL/vendors/nvidia.icd'
is also installed, to advertise the NVIDIA Vendor ICD to the ICD Loader.
o The 'nvidia-cuda-mps-control' and 'nvidia-cuda-mps-server' applications,
which allow MPI processes to run concurrently on a single GPU.
o A kernel module ('/lib/modules/`uname
-r`/kernel/drivers/video/nvidia-modeset.ko'); this kernel module is
responsible for programming the display engine of the GPU. User-mode
NVIDIA driver components such as the NVIDIA X driver, OpenGL driver, and
VDPAU driver communicate with nvidia-modeset.ko through the
/dev/nvidia-modeset device file.
o A kernel module ('/lib/modules/`uname
-r`/kernel/drivers/video/nvidia.ko'); this kernel module provides
low-level access to your NVIDIA hardware for all of the above components.
It is generally loaded into the kernel when the X server is started, and
is used by the X driver and OpenGL. nvidia.ko consists of two pieces: the
binary-only core, and a kernel interface that must be compiled
specifically for your kernel version. Note that the Linux kernel does not
have a consistent binary interface like the X server, so it is important
that this kernel interface be matched with the version of the kernel that
you are using. This can be accomplished either by compiling the kernel
interface yourself, or by using precompiled binaries provided for the kernels
shipped with some of the more common Linux distributions.
o NVIDIA Unified Memory kernel module ('/lib/modules/`uname
-r`/kernel/drivers/video/nvidia-uvm.ko'); this kernel module provides
functionality for sharing memory between the CPU and GPU in CUDA
programs. It is generally loaded into the kernel when a CUDA program is
started, and is used by the CUDA driver on supported platforms.
o The nvidia-tls library ('/usr/lib/libnvidia-tls.so.450.57'); this
file provides thread local storage support for the NVIDIA OpenGL
libraries (libGLX_nvidia, libnvidia-glcore, and libglxserver_nvidia).
o The nvidia-ml library ('/usr/lib/libnvidia-ml.so.450.57'); The
NVIDIA Management Library provides a monitoring and management API. See
Chapter 27 for more information.
o The application nvidia-installer ('/usr/bin/nvidia-installer') is
NVIDIA's tool for installing and updating NVIDIA drivers. See Chapter 4
for a more thorough description.
Source code is available at
https://download.nvidia.com/XFree86/nvidia-installer/.
o The application nvidia-modprobe ('/usr/bin/nvidia-modprobe') is installed
as setuid root and is used to load the NVIDIA kernel module and create
the '/dev/nvidia*' device nodes by processes (such as CUDA applications)
that don't run with sufficient privileges to do those things themselves.
Source code is available at
https://download.nvidia.com/XFree86/nvidia-modprobe/.
o The application nvidia-xconfig ('/usr/bin/nvidia-xconfig') is NVIDIA's
tool for manipulating X server configuration files. See Chapter 6 for
more information.
Source code is available at
https://download.nvidia.com/XFree86/nvidia-xconfig/.
o The application nvidia-settings ('/usr/bin/nvidia-settings') is NVIDIA's
tool for dynamic configuration while the X server is running. See Chapter
24 for more information.
o The libnvidia-gtk libraries ('/usr/lib/libnvidia-gtk2.so.450.57' and
on some platforms '/usr/lib/libnvidia-gtk3.so.450.57'); these
libraries are required to provide the nvidia-settings user interface.
Source code is available at
https://download.nvidia.com/XFree86/nvidia-settings/.
o The application nvidia-smi ('/usr/bin/nvidia-smi') is the NVIDIA System
Management Interface for management and monitoring functionality. See
Chapter 26 for more information.
o The application nvidia-debugdump ('/usr/bin/nvidia-debugdump') is
NVIDIA's tool for collecting internal GPU state. It is normally invoked
by the nvidia-bug-report.sh ('/usr/bin/nvidia-bug-report.sh') script. See
Chapter 28 for more information.
o The daemon nvidia-persistenced ('/usr/bin/nvidia-persistenced') is the
NVIDIA Persistence Daemon for allowing the NVIDIA kernel module to
maintain persistent state when no other NVIDIA driver components are
running. See Chapter 29 for more information.
Source code is available at
https://download.nvidia.com/XFree86/nvidia-persistenced/.
o The NVCUVID library ('/usr/lib/libnvcuvid.so.450.57'); The NVIDIA
CUDA Video Decoder (NVCUVID) library provides an interface to hardware
video decoding capabilities on NVIDIA GPUs with CUDA.
o The NvEncodeAPI library ('/usr/lib/libnvidia-encode.so.450.57'); The
NVENC Video Encoding library provides an interface to video encoder
hardware on supported NVIDIA GPUs.
o The NvIFROpenGL library ('/usr/lib/libnvidia-ifr.so.450.57'); The
NVIDIA OpenGL-based Inband Frame Readback library provides an interface
to capture and optionally encode an OpenGL framebuffer.
o The NvFBC library ('/usr/lib/libnvidia-fbc.so.450.57'); The NVIDIA
Framebuffer Capture library provides an interface to capture and
optionally encode the framebuffer of an X server screen.
o An X driver configuration file
('/usr/share/X11/xorg.conf.d/nvidia-drm-outputclass.conf'); If the X
server is sufficiently new, this file will be installed to configure the
X server to load the 'nvidia_drv.so' driver automatically if it is
started after the NVIDIA DRM kernel module (nvidia-drm.ko) is loaded.
This feature is supported in X.Org xserver 1.16 and higher when running
on Linux kernel 3.13 or higher with CONFIG_DRM enabled.
o Predefined application profile keys and documentation for those keys can
be found in the following files in the directory '/usr/share/nvidia/':
'nvidia-application-profiles-450.57-rc',
'nvidia-application-profiles-450.57-key-documentation'.
See Appendix J for more information.
o The OptiX library ('/usr/lib/libnvoptix.so.1'); This library implements
the OptiX ray tracing engine. It is loaded by the 'liboptix.so.*' library
bundled with applications that use the OptiX API.
o The NVIDIA Optical Flow library
('/usr/lib/libnvidia-opticalflow.so.450.57'); The NVIDIA Optical
Flow library can be used for hardware-accelerated computation of optical
flow vectors and stereo disparity values on Turing and later NVIDIA GPUs.
This is useful for some forms of computer vision and image analysis. The
Optical Flow library depends on the NVCUVID library, which in turn
depends on the CUDA library.
Problems will arise if applications use the wrong version of a library. This
can be the case if there are either old libGL libraries or stale symlinks left
lying around. If you think there may be something awry in your installation,
check that the following files are in place (these are all the files of the
NVIDIA Accelerated Linux Graphics Driver, as well as their symlinks):
/usr/lib/xorg/modules/drivers/nvidia_drv.so
/usr/lib/xorg/modules/extensions/libglxserver_nvidia.so.450.57
/usr/lib/xorg/modules/extensions/libglxserver_nvidia.so ->
libglxserver_nvidia.so.450.57
/usr/lib/libGL.so.1.0.0
/usr/lib/libGL.so.1 -> libGL.so.1.0.0
/usr/lib/libGL.so -> libGL.so.1
(libGL.so.1.0.0 is the name of the libglvnd client-side GLX ICD loader
library included with the NVIDIA Linux driver. A compatible ICD loader
provided by your distribution may have a slightly different filename.)
/usr/lib/libnvidia-glcore.so.450.57
/usr/lib/libcuda.so.450.57
/usr/lib/libcuda.so -> libcuda.so.450.57
/lib/modules/`uname -r`/video/nvidia.{o,ko}, or
/lib/modules/`uname -r`/kernel/drivers/video/nvidia.{o,ko}
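The listing above can be spot-checked with a short loop. This is a sketch only: the paths follow the listing, but the installation prefix and the version number (450.57 here) should be adjusted to match your system.

```shell
# Spot-check that a few key driver files are in place; adjust the
# version and prefix to match your installation.
ver=450.57
for f in /usr/lib/xorg/modules/drivers/nvidia_drv.so \
         /usr/lib/libnvidia-glcore.so.$ver \
         /usr/lib/libcuda.so.$ver; do
    if [ -e "$f" ]; then
        echo "ok:      $f"
    else
        echo "missing: $f"
    fi
done
```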
If there are other libraries whose "soname" conflicts with that of the NVIDIA
libraries, ldconfig may create the wrong symlinks. It is recommended that you
manually remove or rename conflicting libraries (be sure to rename clashing
libraries to something that ldconfig will not look at -- we have found that
prepending "XXX" to a library name generally does the trick), rerun
'ldconfig', and check that the correct symlinks were made. An example of a
library that often creates conflicts is "/usr/lib/mesa/libGL.so*".
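The renaming trick can be rehearsed safely in a scratch directory before touching real files. This is a sketch using a stand-in file; on a real system the conflicting library lives under a path like /usr/lib/mesa/, and renaming it (as root) must be followed by rerunning 'ldconfig'.

```shell
# Rehearse the rename with a stand-in file in a temporary directory;
# prepending "XXX" hides the file from ldconfig without deleting it.
dir=$(mktemp -d)
touch "$dir/libGL.so.1.2.0"                  # stand-in for the clashing file
mv "$dir/libGL.so.1.2.0" "$dir/XXXlibGL.so.1.2.0"
renamed=$(ls "$dir")
echo "$renamed"
rm -r "$dir"                                 # clean up the scratch directory
# On a real system (as root), rename the conflicting library in place
# the same way, then run:  ldconfig
```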
If the libraries appear to be correct, then verify that the application is
using the correct libraries. For example, to check that the application
/usr/bin/glxgears is using the libglvnd GLX libraries, run:
% ldd /usr/bin/glxgears
linux-vdso.so.1 (0x00007ffc8a5d4000)
libGL.so.1 => /usr/lib/x86_64-linux-gnu/libGL.so.1 (0x00007f6593896000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f65934f8000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f65931c0000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6592dcf000)
libGLX.so.0 => /usr/lib/x86_64-linux-gnu/libGLX.so.0 (0x00007f6592b9e000)
libGLdispatch.so.0 => /usr/lib/x86_64-linux-gnu/libGLdispatch.so.0
(0x00007f65928e8000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
(0x00007f65926c9000)
/lib64/ld-linux-x86-64.so.2 (0x00007f6593d28000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f65924a1000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f659229d000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f6592099000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6
(0x00007f6591e93000)
libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f6591c7e000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f6591a76000)
In the example above, the list of libraries reported by 'ldd' includes
'libGLX.so.0' and 'libGLdispatch.so.0'. If the GLX client library is something
other than the GLVND 'libGL.so.1', then you will need to either remove the
library that is getting in the way or adjust your dynamic loader search path
using the 'LD_LIBRARY_PATH' environment variable. You may want to consult the
man pages for 'ldconfig' and 'ldd'.
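The check above can be wrapped in a small script. This is a hedged sketch: glxgears is only an example binary, and the grep pattern simply matches the GLVND library names shown in the 'ldd' output earlier.

```shell
# Print the GL-related libraries an application actually resolves to.
# If libGLX.so.0 and libGLdispatch.so.0 do not appear, the binary is
# not resolving GLX through the GLVND libGL.so.1.
app=${1:-/usr/bin/glxgears}        # example application; substitute your own
if [ -x "$app" ]; then
    ldd "$app" | grep -E 'libGL\.so|libGLX\.so|libGLdispatch\.so' \
        || echo "no GL libraries found in $app"
else
    echo "binary not found: $app"
fi
```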
______________________________________________________________________________
Chapter 6. Configuring X for the NVIDIA Driver
______________________________________________________________________________
The X configuration file provides a means to configure the X server. This
section describes the settings necessary to enable the NVIDIA driver. A
comprehensive list of parameters is provided in Appendix B.
The NVIDIA Driver includes a utility called nvidia-xconfig, which is designed
to make editing the X configuration file easy. You can also edit it by hand.
6A. USING NVIDIA-XCONFIG TO CONFIGURE THE X SERVER
nvidia-xconfig will find the X configuration file and modify it to use the
NVIDIA X driver. In most cases, you can simply answer "Yes" when the installer
asks whether it should run nvidia-xconfig. If you need to reconfigure your X
server later, you
can run nvidia-xconfig again from a terminal. nvidia-xconfig will make a
backup copy of your configuration file before modifying it.
Note that the X server must be restarted for any changes to its configuration
file to take effect.
More information about nvidia-xconfig can be found in the nvidia-xconfig
manual page by running:
% man nvidia-xconfig
6B. MANUALLY EDITING THE CONFIGURATION FILE
If you do not have a working X config file, there are a few different ways to
obtain one. A sample config file is included both with the X.Org distribution
and with the NVIDIA driver package (at '/usr/share/doc/NVIDIA_GLX-1.0/'). The
'nvidia-xconfig' utility, provided with the NVIDIA driver package, can
generate a new X configuration file. Additional information on the X config
syntax can be found in the xorg.conf manual page (`man xorg.conf`).
If you have a working X config file for a different driver (such as the "vesa"
or "fbdev" driver), then simply edit the file as follows.
Remove the line:
Driver "vesa"
(or Driver "fbdev")
and replace it with the line:
Driver "nvidia"
Remove the following lines:
Load "dri"
Load "GLCore"
In the "Module" section of the file, add the line (if it does not already
exist):
Load "glx"
If the X config file does not have a "Module" section, you can safely skip the
last step.
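After these edits, the relevant sections of the X config file would look like the following. This is an illustrative fragment: the Identifier string is arbitrary, and other entries in your existing "Device" and "Module" sections should be left in place.

```
Section "Device"
    Identifier  "NVIDIA Card"
    Driver      "nvidia"
EndSection

Section "Module"
    Load        "glx"
EndSection
```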
There are numerous options that may be added to the X config file to tune the
NVIDIA X driver. See Appendix B for a complete list of these options.
Once you have completed these edits to the X config file, you may restart X
and begin using the accelerated OpenGL libraries. After restarting X, any
OpenGL application should automatically use the new NVIDIA libraries. (NOTE:
If you encounter any problems, see Chapter 8 for common problem diagnoses.)
6C. RESTORING THE X CONFIGURATION AFTER UNINSTALLING THE DRIVER
If X is explicitly configured to use the NVIDIA driver, then the X config file
should be edited to use a different X driver after uninstalling the NVIDIA
driver. Otherwise, X may fail to start, since the driver it was configured to
use will no longer be present on the system after uninstallation.
If you edited the file manually, revert any edits you made. If you used the
'nvidia-xconfig' utility, either by answering "Yes" when prompted to configure
the X server by the installer, or by running it manually later on, then you
may restore the backed-up X config file, if it exists and reflects the X
config state that existed before the NVIDIA driver was installed.
If you do not recall any manual changes that you made to the file, or do not
have a backed-up X config file that uses a non-NVIDIA X driver, you may want
to try simply renaming the X configuration file, to see if your X server loads
a sensible default.
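That last resort can be sketched as follows. The path shown is the conventional /etc/X11/xorg.conf, which may differ on your distribution; the command is echoed here so the sketch has no side effects, since the real move requires root privileges.

```shell
# Move the X configuration file aside so the X server starts with
# autodetected defaults; reverse the mv to restore the file.
conf=/etc/X11/xorg.conf            # conventional path; may differ
echo mv "$conf" "$conf.saved"
```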
______________________________________________________________________________
Chapter 7. Frequently Asked Questions
______________________________________________________________________________
This section provides answers to frequently asked questions associated with