Linux Driver



Intel addressed 57 security vulnerabilities during this month's Patch Tuesday, including high severity ones impacting Intel Graphics Drivers.


The Linux USB subsystem is built around the concept of USB request blocks (URBs), which are essential to USB drivers. The first thing a Linux USB driver needs to do is register itself with the Linux USB subsystem, giving it some information about which devices the driver supports and which functions to call when a supported device is inserted into or removed from the system.

There are two ways of programming a Linux device driver: compile the driver into the kernel itself (the Linux kernel is monolithic), or implement the driver as a loadable kernel module, in which case you won't need to recompile the kernel. The kernel manages the machine's hardware in a simple and efficient manner, offering the user a simple and uniform programming interface; the kernel, and in particular its device drivers, forms a bridge between the end user or programmer and the hardware. The module route is sketched below.
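A minimal sketch of the module workflow, assuming a hypothetical out-of-tree module source hello.c with a standard kbuild Makefile:

    # Build the module against the headers of the running kernel (kbuild out-of-tree build)
    make -C /lib/modules/$(uname -r)/build M=$(pwd) modules

    # Load the resulting module into the running kernel, confirm it loaded, then unload it
    sudo insmod hello.ko
    lsmod | grep hello
    sudo rmmod hello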

Forty of the 57 vulnerabilities were found internally by Intel, while the other 17 were externally reported, almost all through Intel's Bug Bounty program.

The security bugs are detailed in the 19 security advisories published by Intel on its Product Security Center, with security and functional updates being delivered to users through the Intel Platform Update (IPU) process.

Intel includes a list of all impacted products and recommendations for vulnerable products at the end of each advisory.

The company also provides contact details for users and researchers who want to report other security issues or vulnerabilities found in Intel branded technology or products.

'While you may be able to retrieve these updates direct from Intel, we recommend that you check with your system manufacturer for updates specific to your system,' Intel's Director of Communications Jerry Bryant said. 'Find links to system manufacturer support sites here.'

February 2021 Intel Platform Update highlights

'The bulk of advisories this month are software driver updates for graphics components and firmware/software updates for ethernet components,' Bryant said.

The vulnerability with the highest severity rating (8.8/10) is tracked as CVE-2020-0544 and it enables authenticated attackers to escalate privileges via local access.

The bug behind it is an insufficient control flow management issue in the kernel mode driver for some Intel graphics drivers prior to version 15.36.39.5145.

Intel graphics driver vulnerabilities patched this month affect multiple Intel processor generations up to the 10th generation, codenamed Comet Lake, and impact several Windows and Linux driver versions.

On Tuesday, Apple also released security updates that fix two arbitrary code execution vulnerabilities in Intel graphics drivers.

Intel microcode updates for Windows

Microsoft has also released Intel microcode updates for Windows 10 20H2, 2004, 1909, and older versions to fix issues impacting current and previously released Windows 10 versions.

These microcode updates are offered to affected devices via Windows Update but they can also be manually downloaded directly from the Microsoft Catalog using these links:

• KB4589212: Intel microcode updates for Windows 10, version 2004 and 20H2, and Windows Server, version 2004 and 20H2
• KB4589211: Intel microcode updates for Windows 10, version 1903 and 1909, and Windows Server, version 1903 and 1909
• KB4589208: Intel microcode updates for Windows 10, version 1809 and Windows Server 2019
• KB4589206: Intel microcode updates for Windows 10, version 1803
• KB4589210: Intel microcode updates for Windows 10, version 1607 and Windows Server 2016
• KB4589198: Intel microcode updates for Windows 10, version 1507

However, it is worth noting that similar microcode updates have caused system hangs and performance issues on older CPUs in the past, due to the way the underlying hardware flaws were mitigated.


To take advantage of the GPU capabilities of Azure N-series VMs backed by NVIDIA GPUs, you must install NVIDIA GPU drivers. The NVIDIA GPU Driver Extension installs appropriate NVIDIA CUDA or GRID drivers on an N-series VM. Install or manage the extension using the Azure portal or tools such as the Azure CLI or Azure Resource Manager templates. See the NVIDIA GPU Driver Extension documentation for supported distributions and deployment steps.

If you choose to install NVIDIA GPU drivers manually, this article provides supported distributions, drivers, and installation and verification steps. Manual driver setup information is also available for Windows VMs.

For N-series VM specs, storage capacities, and disk details, see GPU Linux VM sizes.

Supported distributions and drivers

NVIDIA CUDA drivers

NVIDIA CUDA drivers for NC, NCv2, NCv3, ND, and NDv2-series VMs (optional for NV-series) are supported only on specific Linux distributions. CUDA driver information was current at the time of publication; for the latest CUDA drivers and supported operating systems, visit the NVIDIA website. Ensure that you install or upgrade to the latest CUDA drivers for your distribution.

Tip

As an alternative to manual CUDA driver installation on a Linux VM, you can deploy an Azure Data Science Virtual Machine image. The DSVM editions for Ubuntu 16.04 LTS or CentOS 7.4 pre-install NVIDIA CUDA drivers, the CUDA Deep Neural Network Library, and other tools.

NVIDIA GRID drivers

Microsoft redistributes NVIDIA GRID driver installers for NV and NVv3-series VMs used as virtual workstations or for virtual applications. Install these GRID drivers only on Azure NV VMs, and only on the operating systems listed in the following table. The drivers include licensing for GRID Virtual GPU Software in Azure, so you do not need to set up an NVIDIA vGPU software license server.

The GRID drivers redistributed by Azure do not work on non-NV series VMs like NC, NCv2, NCv3, ND, and NDv2-series VMs.

Distribution                                     Driver
Ubuntu 18.04 LTS                                 NVIDIA GRID 12.0 (driver branch R460)
Ubuntu 16.04 LTS                                 NVIDIA GRID 12.0 (driver branch R460)
Red Hat Enterprise Linux 7.7 to 7.9, 8.0, 8.1    NVIDIA GRID 12.0 (driver branch R460)
SUSE Linux Enterprise Server 12 SP2              NVIDIA GRID 12.0 (driver branch R460)
SUSE Linux Enterprise Server 15 SP2              NVIDIA GRID 12.0 (driver branch R460)

Visit GitHub for the complete list of all previous NVIDIA GRID driver links.

Warning

Installation of third-party software on Red Hat products can affect the Red Hat support terms. See the Red Hat Knowledgebase article.

Install CUDA drivers on N-series VMs

Here are steps to install CUDA drivers from the NVIDIA CUDA Toolkit on N-series VMs.

C and C++ developers can optionally install the full Toolkit to build GPU-accelerated applications. For more information, see the CUDA Installation Guide.

To install CUDA drivers, make an SSH connection to each VM. To verify that the system has a CUDA-capable GPU, run the following command:
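    # List PCI devices and filter for NVIDIA hardware
    lspci | grep -i NVIDIA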

You will see output similar to the following example (showing an NVIDIA Tesla K80 card):
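    # Illustrative line only; the PCI address varies by VM size and topology
    0001:00:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)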

Then run the installation commands specific to your distribution.

Ubuntu

  1. Download and install the CUDA drivers from the NVIDIA website. For example, for Ubuntu 16.04 LTS:
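    One possible sequence, assuming NVIDIA's CUDA repository package for Ubuntu 16.04 (the package name, version, and signing-key URL shown were current at the time; check NVIDIA's repository for the latest):

      CUDA_REPO_PKG=cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
      wget -O /tmp/${CUDA_REPO_PKG} https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/${CUDA_REPO_PKG}
      sudo dpkg -i /tmp/${CUDA_REPO_PKG}
      sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
      rm -f /tmp/${CUDA_REPO_PKG}
      sudo apt-get update
      sudo apt-get install -y cuda-drivers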

    The installation can take several minutes.

  2. To optionally install the complete CUDA toolkit, type:
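      # Optional: the full CUDA toolkit (compilers, libraries, samples)
      sudo apt-get install -y cuda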

  3. Reboot the VM and proceed to verify the installation.


CUDA driver updates

We recommend that you periodically update CUDA drivers after deployment.
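On Ubuntu, for example, an update amounts to refreshing the package index, reinstalling the driver metapackage, and rebooting:

    sudo apt-get update
    sudo apt-get install -y cuda-drivers
    sudo reboot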

CentOS or Red Hat Enterprise Linux

  1. Update the kernel (recommended). If you choose not to update the kernel, ensure that the versions of kernel-devel and dkms are appropriate for your kernel.
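    For example, using the standard CentOS repositories:

      sudo yum install -y kernel kernel-tools kernel-headers kernel-devel
      sudo reboot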

  2. Install the latest Linux Integration Services for Hyper-V and Azure. Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS is not required.
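    A typical LIS installation, assuming Microsoft's download shortlink (verify the current link in the LIS documentation):

      wget https://aka.ms/lis
      tar xvzf lis
      cd LISISO
      sudo ./install.sh
      sudo reboot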

Skip this step if you plan to use CentOS 7.8 (or later), as LIS is no longer required for these versions.

Note that LIS is applicable to Red Hat Enterprise Linux, CentOS, and the Oracle Linux Red Hat Compatible Kernel 5.2-5.11, 6.0-6.10, and 7.0-7.7. Refer to the Linux Integration Services documentation (https://www.microsoft.com/en-us/download/details.aspx?id=55106) for more details.

Skip this step if you are not using the kernel versions listed above.

  3. Reconnect to the VM and continue installation with the following commands:
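    One possible sequence, assuming DKMS from EPEL and NVIDIA's CUDA repository package for RHEL/CentOS 7 (the package name and version shown were current at the time; check NVIDIA's repository for the latest):

      # DKMS lets the NVIDIA kernel module rebuild automatically on kernel updates
      sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
      sudo yum install -y dkms

      # Register NVIDIA's CUDA repository, then install the drivers
      CUDA_REPO_PKG=cuda-repo-rhel7-10.0.130-1.x86_64.rpm
      wget -O /tmp/${CUDA_REPO_PKG} https://developer.download.nvidia.com/compute/cuda/repos/rhel7/x86_64/${CUDA_REPO_PKG}
      sudo rpm -ivh /tmp/${CUDA_REPO_PKG}
      rm -f /tmp/${CUDA_REPO_PKG}
      sudo yum install -y cuda-drivers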

    The installation can take several minutes.

  4. To optionally install the complete CUDA toolkit, type:
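      # Optional: the full CUDA toolkit
      sudo yum install -y cuda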

  5. Reboot the VM and proceed to verify the installation.

Verify driver installation

To query the GPU device state, SSH to the VM and run the nvidia-smi command-line utility installed with the driver.

If the driver is installed, you will see output similar to the following. Note that GPU-Util shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.

RDMA network connectivity

RDMA network connectivity can be enabled on RDMA-capable N-series VMs such as NC24r deployed in the same availability set or in a single placement group in a virtual machine (VM) scale set. The RDMA network supports Message Passing Interface (MPI) traffic for applications running with Intel MPI 5.x or a later version. Additional requirements follow:

Distributions

Deploy RDMA-capable N-series VMs from one of the images in the Azure Marketplace that supports RDMA connectivity on N-series VMs:

  • Ubuntu 16.04 LTS - Configure RDMA drivers on the VM and register with Intel to download Intel MPI:

    1. Install dapl, rdmacm, ibverbs, and mlx4
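      On Ubuntu 16.04, the user-space packages are assumed to be libdapl2 and libmlx4-1 (verify the names in your release):

        sudo apt-get update
        sudo apt-get install -y libdapl2 libmlx4-1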

    2. In /etc/waagent.conf, enable RDMA by uncommenting the following configuration lines. You need root access to edit this file.
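        # Uncomment these lines in /etc/waagent.conf
        OS.EnableRDMA=y
        OS.UpdateRdmaDriver=y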

    3. Add or change the following memory settings in KB in the /etc/security/limits.conf file. You need root access to edit this file. For testing purposes you can set memlock to unlimited. For example: <User or group name> hard memlock unlimited.
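        # /etc/security/limits.conf; fill in the angle-bracket placeholders yourself
        <User or group name> hard memlock <memory required for your application in KB>
        <User or group name> soft memlock <memory required for your application in KB>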

    4. Install Intel MPI Library. Either purchase and download the library from Intel or download the free evaluation version.

      Only Intel MPI 5.x runtimes are supported.

      For installation steps, see the Intel MPI Library Installation Guide.

    5. Enable ptrace for non-root non-debugger processes (needed for the most recent versions of Intel MPI).
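      For example, relaxing the Yama ptrace scope system-wide:

        echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope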

  • CentOS-based 7.4 HPC - RDMA drivers and Intel MPI 5.1 are installed on the VM.

  • CentOS-based HPC - CentOS-HPC 7.6 and later (for SKUs where InfiniBand is supported over SR-IOV). These images have Mellanox OFED and MPI libraries pre-installed.


Note

CX3-Pro cards are supported only through LTS versions of Mellanox OFED. Use the LTS Mellanox OFED version (4.9-0.1.7.0) on N-series VMs with ConnectX3-Pro cards. For more information, see Linux Drivers.

Also, some of the latest Azure Marketplace HPC images have Mellanox OFED 5.1 and later, which don't support ConnectX3-Pro cards. Check the Mellanox OFED version in the HPC image before using it on VMs with ConnectX3-Pro cards.

The following images are the latest CentOS-HPC images that support ConnectX3-Pro cards:

  • OpenLogic:CentOS-HPC:7.6:7.6.2020062900
  • OpenLogic:CentOS-HPC:7_6gen2:7.6.2020062901
  • OpenLogic:CentOS-HPC:7.7:7.7.2020062600
  • OpenLogic:CentOS-HPC:7_7-gen2:7.7.2020062601
  • OpenLogic:CentOS-HPC:8_1:8.1.2020062400
  • OpenLogic:CentOS-HPC:8_1-gen2:8.1.2020062401

Install GRID drivers on NV or NVv3-series VMs

To install NVIDIA GRID drivers on NV or NVv3-series VMs, make an SSH connection to each VM and follow the steps for your Linux distribution.

Ubuntu

  1. Run the lspci command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.

  2. Install updates.
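    For example:

      sudo apt-get update
      sudo apt-get -y upgrade
      # Build tools are needed to compile the NVIDIA kernel module
      sudo apt-get install -y build-essential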

  3. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NVv3 VMs.) To do this, create a file in /etc/modprobe.d named nouveau.conf with the following contents:
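      # Prevent the nouveau driver from binding to the GPU
      blacklist nouveau
      blacklist lbm-nouveau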

  4. Reboot the VM and reconnect. Exit X server:
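      # lightdm is assumed here; substitute your system's display manager
      sudo systemctl stop lightdm.service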

  5. Download and install the GRID driver:
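    For example, with <GRID driver download URL> standing in for the current Microsoft-hosted driver link:

      wget -O NVIDIA-Linux-x86_64-grid.run <GRID driver download URL>
      chmod +x NVIDIA-Linux-x86_64-grid.run
      sudo ./NVIDIA-Linux-x86_64-grid.run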

  6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select Yes.

  7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location /etc/nvidia/
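      sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf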

  8. Add the following to /etc/nvidia/gridd.conf:
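      # gridd settings for GRID licensing on Azure
      IgnoreSP=FALSE
      EnableUI=FALSE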

  9. Remove the following from /etc/nvidia/gridd.conf if it is present:
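      FeatureType=0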

  10. Reboot the VM and proceed to verify the installation.

CentOS or Red Hat Enterprise Linux

  1. Update the kernel and DKMS (recommended). If you choose not to update the kernel, ensure that the versions of kernel-devel and dkms are appropriate for your kernel.
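    For example, with DKMS assumed to come from EPEL:

      sudo yum update -y
      sudo yum install -y kernel-devel
      sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
      sudo yum install -y dkms hyperv-daemons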

  2. Disable the Nouveau kernel driver, which is incompatible with the NVIDIA driver. (Only use the NVIDIA driver on NV or NVv3 VMs.) To do this, create a file in /etc/modprobe.d named nouveau.conf with the following contents:
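      # Prevent the nouveau driver from binding to the GPU
      blacklist nouveau
      blacklist lbm-nouveau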

  3. Reboot the VM, reconnect, and install the latest Linux Integration Services for Hyper-V and Azure. Check if LIS is required by verifying the results of lspci. If all GPU devices are listed as expected, installing LIS is not required.

Skip this step if you are using CentOS/RHEL 7.8 or later.

  4. Reconnect to the VM and run the lspci command. Verify that the NVIDIA M60 card or cards are visible as PCI devices.

  5. Download and install the GRID driver:
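    As in the Ubuntu section, with <GRID driver download URL> standing in for the current Microsoft-hosted driver link:

      wget -O NVIDIA-Linux-x86_64-grid.run <GRID driver download URL>
      chmod +x NVIDIA-Linux-x86_64-grid.run
      sudo ./NVIDIA-Linux-x86_64-grid.run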

  6. When you're asked whether you want to run the nvidia-xconfig utility to update your X configuration file, select Yes.

  7. After installation completes, copy /etc/nvidia/gridd.conf.template to a new file gridd.conf at location /etc/nvidia/
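      sudo cp /etc/nvidia/gridd.conf.template /etc/nvidia/gridd.conf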

  8. Add the following to /etc/nvidia/gridd.conf:
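      # gridd settings for GRID licensing on Azure
      IgnoreSP=FALSE
      EnableUI=FALSE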

  9. Remove the following from /etc/nvidia/gridd.conf if it is present:
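      FeatureType=0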

  10. Reboot the VM and proceed to verify the installation.

Verify driver installation

To query the GPU device state, SSH to the VM and run the nvidia-smi command-line utility installed with the driver.

If the driver is installed, you will see output similar to the following. Note that GPU-Util shows 0% unless you are currently running a GPU workload on the VM. Your driver version and GPU details may be different from the ones shown.

X11 server

If you need an X11 server for remote connections to an NV or NVv3 VM, x11vnc is recommended because it allows hardware acceleration of graphics. The BusID of the M60 device must be manually added to the X11 configuration file (usually /etc/X11/xorg.conf). Add a 'Device' section similar to the following, where the BusID value is a placeholder for your VM's current address:
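    Section "Device"
        Identifier     "Device0"
        Driver         "nvidia"
        VendorName     "NVIDIA Corporation"
        BoardName      "Tesla M60"
        # The address below is a placeholder; query the real value as shown further down
        BusID          "PCI:0:30:0"
    EndSection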

Additionally, update your 'Screen' section to use this device.
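For example, a minimal pairing (your existing 'Screen' section may carry additional options):

    Section "Screen"
        Identifier     "Screen0"
        Device         "Device0"
    EndSection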

The decimal BusID can be found by running:
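    # Print the GPU's PCI BusID in the decimal form X11 expects
    nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}'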

The BusID can change when a VM gets reallocated or rebooted. Therefore, you may want to create a script to update the BusID in the X11 configuration when a VM is rebooted. For example, create a script named busidupdate.sh (or another name you choose) with contents similar to the following:
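A sketch of such a script, assuming the X11 configuration path used above:

    #!/bin/bash
    # Keep the BusID in xorg.conf in sync with the GPU's current PCI address
    XCONFIG="/etc/X11/xorg.conf"
    OLDBUSID=$(awk '/BusID/{gsub(/"/, "", $2); print $2}' "${XCONFIG}")
    NEWBUSID=$(nvidia-xconfig --query-gpu-info | awk '/PCI BusID/{print $4}')

    if [[ "${OLDBUSID}" == "${NEWBUSID}" ]]; then
        echo "NVIDIA BusID unchanged; nothing to do"
    else
        echo "NVIDIA BusID changed from ${OLDBUSID} to ${NEWBUSID}; updating ${XCONFIG}"
        sed -i -e "s|BusID.*|BusID          \"${NEWBUSID}\"|" "${XCONFIG}"
    fi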

Then, create an entry for your update script in /etc/rc.d/rc3.d so the script is invoked as root on boot.
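One conventional approach, assuming the script was saved as /usr/local/bin/busidupdate.sh (path hypothetical) and made executable:

    # SysV-style: run the script as an S-script in runlevel 3 at boot
    sudo ln -s /usr/local/bin/busidupdate.sh /etc/rc.d/rc3.d/S99busidupdate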

Troubleshooting

  • You can set persistence mode using nvidia-smi so the output of the command is faster when you need to query cards. To set persistence mode, execute nvidia-smi -pm 1. Note that if the VM is restarted, the mode setting goes away. You can always script the mode setting to execute upon startup.
  • If you updated the NVIDIA CUDA drivers to the latest version and find RDMA connectivity is no longer working, reinstall the RDMA drivers to reestablish that connectivity.
  • If a certain CentOS/RHEL OS version (or kernel) is not supported for LIS, an error “Unsupported kernel version” is thrown. Please report this error along with the OS and kernel versions.


Next steps


  • To capture a Linux VM image with your installed NVIDIA drivers, see How to generalize and capture a Linux virtual machine.



