I'm a complete beginner, following many online tutorials, and I've been able to overcome my problems until this one. I don't understand the problem, so I'm not sure what info to give as a starting point.
Python 3.8 might no longer be supported.
You are running into a dependency problem, not a "you did something wrong" problem.
From the screenshot, the important part is:
Collecting hf-xet
Installing backend dependencies ... error
pip install ... -- puccinialin
ERROR: Could not find a version that satisfies the requirement puccinialin (from versions: none)
So:
- huggingface_hub depends on hf-xet (PyPI)
- hf-xet needs another tool called puccinialin to build its Rust code (PyPI)
- puccinialin is only built for Python ≥ 3.9 (PyPI)
- Your virtual environment is Python 3.8 (/home/rui/.venv/lib/python3.8/... in the error), so pip can't find any compatible puccinialin, and the whole install fails.
On top of that, current huggingface_hub itself also officially requires Python ≥ 3.9. (PyPI)
So the core issue is: your Python version (3.8) is too old for the version of huggingface_hub you're trying to install and its new Rust-based dependency chain (hf-xet → puccinialin).
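You can confirm this from inside the failing environment with a quick version check. A minimal sketch; the (3, 9) threshold is the requirement reported on PyPI:

```python
import sys

# Recent huggingface_hub and its puccinialin build helper require Python >= 3.9.
needed = (3, 9)
have = sys.version_info[:2]
if have >= needed:
    print(f"Python {have[0]}.{have[1]}: new enough for huggingface_hub")
else:
    print(f"Python {have[0]}.{have[1]}: too old, pip will reject the dependency chain")
```

Run it with the same interpreter your venv uses; on a 3.8 venv it reports "too old".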
Fix 1 (recommended): use Python ≥ 3.9 in a fresh environment
If you can install a newer Python (3.10 or 3.11 is ideal), this is the cleanest fix.
A. Create a new virtual environment with the newer Python
Example for Linux once you have python3.10 installed:
# check what version you have
python3.10 --version
# create a new env (choose any folder you like)
python3.10 -m venv ~/hf-env
# activate it
source ~/hf-env/bin/activate
# upgrade pip (always a good idea)
pip install --upgrade pip
If python3.10 doesn't exist yet, your options are:
- Install a newer Python via your OS packages (e.g. on Ubuntu using a PPA such as "deadsnakes"), or
- Install Miniconda/Anaconda and create a conda env:
# after installing miniconda
conda create -n hf python=3.10
conda activate hf
pip install --upgrade pip
B. Install huggingface_hub inside that new env
pip install "huggingface_hub"
This should now work, because:
- huggingface_hub is happy (Python ≥ 3.9). (PyPI)
- hf-xet can install its Rust helper puccinialin, which also needs Python ≥ 3.9. (PyPI)
C. Test that it actually imported
Once the install succeeds, test like this:
python -c "from huggingface_hub import hf_hub_download; print('import OK')"
If you see import OK, the library is installed and usable.
Fix 2 (workaround): stay on Python 3.8 but use an older huggingface_hub
Only do this if you absolutely cannot upgrade Python.
Older versions of huggingface_hub (≤ 0.30.2) did not depend on hf-xet yet, so they avoid the Rust + puccinialin chain. (GitHub)
Inside your existing 3.8 virtualenv:
# make sure old broken attempt is cleared
pip uninstall -y huggingface_hub hf-xet
# install a version before the hf-xet dependency
pip install "huggingface_hub<0.31"
Then test:
python -c "from huggingface_hub import hf_hub_download; print('import OK')"
Important limitations of this workaround:
- You won't get newer features that expect recent huggingface_hub versions.
- Some tutorials written against very new versions might not match exactly.
But for basic things like hf_hub_download, an older 0.2x/0.3x version is usually fine.
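If you go the pinned route, you can double-check which version pip actually gave you. A small sketch using only the standard library (importlib.metadata is available on Python 3.8):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# After `pip install "huggingface_hub<0.31"` this should print a version below 0.31.
print(installed_version("huggingface_hub"))
```

If it prints None, the install didn't land in the environment you're running Python from, which is itself a useful clue.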
One more thing: that from: can't read /var/mail/huggingface_hub line
At the very top of your screenshot there is:
from huggingface_hub import hf_hub_download
from: can't read /var/mail/huggingface_hub
That error is unrelated to the installation itself. It means:
- You typed from huggingface_hub import hf_hub_download directly in the shell.
- The shell (bash) doesn't understand Python syntax, so it treats from as a command (there is actually a Unix mail utility called from, which is why it complains about /var/mail).
To run Python code you must either:
- Start a Python prompt:
python
Then inside the Python prompt:
from huggingface_hub import hf_hub_download
- Or put Python code in a .py file and run python your_script.py.
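For example, a minimal script (the name test_hf.py is just a suggestion) that you can run with python test_hf.py:

```python
# test_hf.py - run with: python test_hf.py
# Wrapped in try/except so it prints a clear message either way.
try:
    from huggingface_hub import hf_hub_download
    print("import OK")
except ImportError as exc:
    print(f"import failed: {exc}")
```

Seeing "import OK" confirms both that the package installed and that you're running the interpreter from the right environment.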
Suggested next steps for you
- In a new terminal, check Python:
python3 --version
- If it's 3.8, choose either:
  - the upgrade path (Fix 1, new env with Python 3.10+), or
  - the workaround (Fix 2, pip install "huggingface_hub<0.31").
- After installation, verify with:
python -c "from huggingface_hub import hf_hub_download; print('OK')"
Useful references
- Official huggingface_hub installation docs (lists requirements & extras). (Hugging Face)
- huggingface_hub on PyPI (shows that it requires Python ≥ 3.9 and depends on hf-xet). (PyPI)
- hf-xet package on PyPI (Hugging Face's Rust-based storage helper). (PyPI)
- puccinialin package on PyPI (Rust bootstrap helper, requires Python ≥ 3.9). (PyPI)
- GitHub issue showing when hf-xet became a dependency, explaining why older huggingface_hub versions avoid it. (GitHub)
Short summary
- The red error text is not "Hugging Face is broken"; it's pip failing to install puccinialin, a Rust helper that hf-xet needs.
- puccinialin and recent huggingface_hub both require Python ≥ 3.9, but your env is Python 3.8, so pip finds no compatible version.
- Best fix: create a new env with Python 3.10+ and reinstall huggingface_hub there.
- If you cannot upgrade Python, install an older huggingface_hub version (<0.31) that doesn't depend on hf-xet/puccinialin.
- Remember to run Python code (like from huggingface_hub import hf_hub_download) inside Python, not directly in the shell.
Thanks so much for your very detailed response, I had little faith my problem would be even worth a response.
However, I have run into an issue when updating my version of Python, where dpkg returned an error:
Screenshot by Lightshot (not allowed to upload more than 1 piece of embedded media as a new user)
Assuming dpkg was broken, I tried to fix it, only for it to return the same error code:
bash dump below
rui@Rui-AI-Desktop:~$ python3 -V
Python 3.8.10
rui@Rui-AI-Desktop:~$ sudo add-apt-repository ppa:deadsnakes/ppa
This PPA contains more recent Python versions packaged for Ubuntu.
Disclaimer: there's no guarantee of timely updates in case of security problems or other issues. If you want to use them in a security-or-otherwise-critical environment (say, on a production server), you do so at your own risk.
Update Note
===========
Please use this repository instead of ppa:fkrull/deadsnakes.
Reporting Issues
================
Issues can be reported in the master issue tracker at:
https://github.com/deadsnakes/issues/issues
Supported Ubuntu and Python Versions
====================================
- Ubuntu 22.04 (jammy) Python3.7 - Python3.9, Python3.11 - Python3.13
- Ubuntu 24.04 (noble) Python3.7 - Python3.11, Python3.13
- Note: Python 3.10 (jammy), Python3.12 (noble) are not provided by deadsnakes as upstream ubuntu provides those packages.
Why some packages aren't built:
- Note: for jammy and noble, older python versions requre libssl<3 so they are not currently built
- If you need these, reach out to asottile to set up a private ppa
The packages may also work on other versions of Ubuntu or Debian, but that is not tested or supported.
Packages
========
The packages provided here are loosely based on the debian upstream packages with some modifications to make them more usable as non-default pythons and on ubuntu. As such, the packages follow debian's patterns and often do not include a full python distribution with just `apt install python#.#`. Here is a list of packages that may be useful along with the default install:
- `python#.#-dev`: includes development headers for building C extensions
- `python#.#-venv`: provides the standard library `venv` module
- `python#.#-distutils`: provides the standard library `distutils` module
- `python#.#-lib2to3`: provides the `2to3-#.#` utility as well as the standard library `lib2to3` module
- `python#.#-gdbm`: provides the standard library `dbm.gnu` module
- `python#.#-tk`: provides the standard library `tkinter` module
Third-Party Python Modules
==========================
Python modules in the official Ubuntu repositories are packaged to work with the Python interpreters from the official repositories. Accordingly, they generally won't work with the Python interpreters from this PPA. As an exception, pure-Python modules for Python 3 will work, but any compiled extension modules won't.
To install 3rd-party Python modules, you should use the common Python packaging tools. For an introduction into the Python packaging ecosystem and its tools, refer to the Python Packaging User Guide:
https://packaging.python.org/installing/
Sources
=======
The package sources are available at:
https://github.com/deadsnakes/
Nightly Builds
==============
For nightly builds, see ppa:deadsnakes/nightly https://launchpad.net/~deadsnakes/+archive/ubuntu/nightly
More info: https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa
Press [ENTER] to continue or Ctrl-c to cancel adding it.
Get:1 file:/var/cudnn-local-tegra-repo-ubuntu2004-8.6.0.166 InRelease [1,575 B]
Get:1 file:/var/cudnn-local-tegra-repo-ubuntu2004-8.6.0.166 InRelease [1,575 B]
Hit:2 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu focal InRelease
Hit:3 https://repo.download.nvidia.com/jetson/common r35.5 InRelease
Hit:4 https://repo.download.nvidia.com/jetson/t234 r35.5 InRelease
Get:5 https://pkgs.tailscale.com/stable/ubuntu focal InRelease
Hit:6 https://repo.download.nvidia.com/jetson/ffmpeg r35.5 InRelease
Hit:7 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Hit:8 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease
Hit:9 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease
Hit:10 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease
Fetched 6,581 B in 2s (3,495 B/s)
Reading package lists... Done
rui@Rui-AI-Desktop:~$ sudo apt-get update
Get:1 file:/var/cudnn-local-tegra-repo-ubuntu2004-8.6.0.166 InRelease [1,575 B]
Get:1 file:/var/cudnn-local-tegra-repo-ubuntu2004-8.6.0.166 InRelease [1,575 B]
Hit:2 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu focal InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports focal InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports focal-updates InRelease
Hit:5 https://repo.download.nvidia.com/jetson/common r35.5 InRelease
Hit:6 http://ports.ubuntu.com/ubuntu-ports focal-backports InRelease
Hit:7 http://ports.ubuntu.com/ubuntu-ports focal-security InRelease
Hit:8 https://repo.download.nvidia.com/jetson/t234 r35.5 InRelease
Get:9 https://pkgs.tailscale.com/stable/ubuntu focal InRelease
Hit:10 https://repo.download.nvidia.com/jetson/ffmpeg r35.5 InRelease
Fetched 6,581 B in 1s (4,705 B/s)
Reading package lists... Done
rui@Rui-AI-Desktop:~$ apt list | grep python3.10
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
libqgispython3.10.4/focal,now 3.10.4+dfsg-1ubuntu2 arm64 [installed]
rui@Rui-AI-Desktop:~$ sudo apt-get install python3.10
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note: selecting 'libqgispython3.10.4' for regex 'python3.10'
Note: selecting 'libpython3.10-stdlib' for regex 'python3.10'
libqgispython3.10.4 is already the newest version (3.10.4+dfsg-1ubuntu2).
The following packages were automatically installed and are no longer required:
apt-clone archdetect-deb bogl-bterm busybox-static cryptsetup-bin dpkg-repack gir1.2-timezonemap-1.0 gir1.2-xkl-1.0 grub-common
libdebian-installer4 libpaps0 libtimezonemap-data libtimezonemap1 os-prober paps python3-icu python3-pam rdate tasksel
tasksel-data
Use 'sudo apt autoremove' to remove them.
0 to upgrade, 0 to newly install, 0 to remove and 4 not to upgrade.
5 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n]
Setting up nvidia-l4t-bootloader (35.5.0-20240613202628) ...
3701--0005--1--jetson-orin-nano-devkit-
Info. Installing mtdblock.
Info. Active boot storage: mmcblk1
Info. Legacy mode: false
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
Info: Write TegraPlatformCompatSpec with 3701--0005--1--jetson-orin-nano-devkit-.
INFO. Dump slots info:
Current version: 35.4.1
Capsule update status: 0
Current bootloader slot: A
Active bootloader slot: A
num_slots: 2
slot: 0, status: normal
slot: 1, status: normal
INFO. Dump nv_boot_control.conf:
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
ERROR. 3701--0005--1--jetson-orin-nano-devkit- does not match any known boards.
dpkg: error processing package nvidia-l4t-bootloader (--configure):
installed nvidia-l4t-bootloader package post-installation script subprocess returned error exit status 1
Setting up nvidia-l4t-kernel (5.10.192-tegra-35.5.0-20240613202628) ...
Using the existing boot entry 'primary'
3701--0005--1--jetson-orin-nano-devkit-
Info. Installing mtdblock.
Info. Active boot storage: mmcblk1
Info. Legacy mode: false
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
Info: Write TegraPlatformCompatSpec with 3701--0005--1--jetson-orin-nano-devkit-.
Starting kernel post-install procedure.
Rootfs AB is not enabled.
ERROR. Procedure for A_kernel update FAILED.
Cannot install package. Exiting...
dpkg: error processing package nvidia-l4t-kernel (--configure):
installed nvidia-l4t-kernel package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of nvidia-l4t-kernel-headers:
nvidia-l4t-kernel-headers depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-kernel-headers (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of nvidia-l4t-display-kernel:
nvidia-l4t-display-kernel depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-display-kernel (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of nvidia-l4t-kernel-dtbs:
nvidia-l4t-kernel-dtbs depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-kernel-dtbs (--configure):
No apport report written because the error message indicates it's a follow-up error from a previous failure.
No apport report written because MaxReports has already been reached
No apport report written because MaxReports has already been reached
dependency problems - leaving unconfigured
Errors were encountered while processing:
nvidia-l4t-bootloader
nvidia-l4t-kernel
nvidia-l4t-kernel-headers
nvidia-l4t-display-kernel
nvidia-l4t-kernel-dtbs
E: Sub-process /usr/bin/dpkg returned an error code (1)
rui@Rui-AI-Desktop:~$ sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
rui@Rui-AI-Desktop:~$ sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 2
update-alternatives: error: alternative path /usr/bin/python3.10 doesn't exist
rui@Rui-AI-Desktop:~$ sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.10 1
update-alternatives: error: alternative path /usr/bin/python3.10 doesn't exist
rui@Rui-AI-Desktop:~$ sudo dpkg --configure -a
Setting up nvidia-l4t-bootloader (35.5.0-20240613202628) ...
3701--0005--1--jetson-orin-nano-devkit-
Info. Installing mtdblock.
Info. Active boot storage: mmcblk1
Info. Legacy mode: false
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
Info: Write TegraPlatformCompatSpec with 3701--0005--1--jetson-orin-nano-devkit-.
INFO. Dump slots info:
Current version: 35.4.1
Capsule update status: 0
Current bootloader slot: A
Active bootloader slot: A
num_slots: 2
slot: 0, status: normal
slot: 1, status: normal
INFO. Dump nv_boot_control.conf:
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
ERROR. 3701--0005--1--jetson-orin-nano-devkit- does not match any known boards.
dpkg: error processing package nvidia-l4t-bootloader (--configure):
installed nvidia-l4t-bootloader package post-installation script subprocess returned error exit status 1
Setting up nvidia-l4t-kernel (5.10.192-tegra-35.5.0-20240613202628) ...
Using the existing boot entry 'primary'
3701--0005--1--jetson-orin-nano-devkit-
Info. Installing mtdblock.
Info. Active boot storage: mmcblk1
Info. Legacy mode: false
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
Info: Write TegraPlatformCompatSpec with 3701--0005--1--jetson-orin-nano-devkit-.
Starting kernel post-install procedure.
Rootfs AB is not enabled.
ERROR. Procedure for A_kernel update FAILED.
Cannot install package. Exiting...
dpkg: error processing package nvidia-l4t-kernel (--configure):
installed nvidia-l4t-kernel package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of nvidia-l4t-kernel-headers:
nvidia-l4t-kernel-headers depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-kernel-headers (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of nvidia-l4t-display-kernel:
nvidia-l4t-display-kernel depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-display-kernel (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of nvidia-l4t-kernel-dtbs:
nvidia-l4t-kernel-dtbs depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-kernel-dtbs (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
nvidia-l4t-bootloader
nvidia-l4t-kernel
nvidia-l4t-kernel-headers
nvidia-l4t-display-kernel
nvidia-l4t-kernel-dtbs
rui@Rui-AI-Desktop:~$ sudo apt install -f
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
apt-clone archdetect-deb bogl-bterm busybox-static cryptsetup-bin dpkg-repack gir1.2-timezonemap-1.0 gir1.2-xkl-1.0 grub-common
libdebian-installer4 libpaps0 libtimezonemap-data libtimezonemap1 os-prober paps python3-icu python3-pam rdate tasksel
tasksel-data
Use 'sudo apt autoremove' to remove them.
0 to upgrade, 0 to newly install, 0 to remove and 4 not to upgrade.
5 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up nvidia-l4t-bootloader (35.5.0-20240613202628) ...
3701--0005--1--jetson-orin-nano-devkit-
Info. Installing mtdblock.
Info. Active boot storage: mmcblk1
Info. Legacy mode: false
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
Info: Write TegraPlatformCompatSpec with 3701--0005--1--jetson-orin-nano-devkit-.
INFO. Dump slots info:
Current version: 35.4.1
Capsule update status: 0
Current bootloader slot: A
Active bootloader slot: A
num_slots: 2
slot: 0, status: normal
slot: 1, status: normal
INFO. Dump nv_boot_control.conf:
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
ERROR. 3701--0005--1--jetson-orin-nano-devkit- does not match any known boards.
dpkg: error processing package nvidia-l4t-bootloader (--configure):
installed nvidia-l4t-bootloader package post-installation script subprocess returned error exit status 1
Setting up nvidia-l4t-kernel (5.10.192-tegra-35.5.0-20240613202628) ...
Using the existing boot entry 'primary'
3701--0005--1--jetson-orin-nano-devkit-
Info. Installing mtdblock.
Info. Active boot storage: mmcblk1
Info. Legacy mode: false
TNSPEC 3701-501-0005-G.0-1-0-jetson-orin-nano-devkit-
COMPATIBLE_SPEC 3701--0005--1--jetson-orin-nano-devkit-
TEGRA_LEGACY_UPDATE false
TEGRA_BOOT_STORAGE mmcblk1
TEGRA_EMMC_ONLY false
TEGRA_CHIPID 0x23
TEGRA_OTA_BOOT_DEVICE /dev/mtdblock0
TEGRA_OTA_GPT_DEVICE /dev/mtdblock0
Info: Write TegraPlatformCompatSpec with 3701--0005--1--jetson-orin-nano-devkit-.
Starting kernel post-install procedure.
Rootfs AB is not enabled.
ERROR. Procedure for A_kernel update FAILED.
Cannot install package. Exiting...
dpkg: error processing package nvidia-l4t-kernel (--configure):
installed nvidia-l4t-kernel package post-installation script subprocess returned error exit status 1
dpkg: dependency problems prevent configuration of nvidia-l4t-kernel-headers:
nvidia-l4t-kernel-headers depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-kernel-headers (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of nvidia-l4t-display-kernel:
nvidia-l4t-display-kernel depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-display-kernel (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of nvidia-l4t-kernel-dtbs:
nvidia-l4t-kernel-dtbs depends on nvidia-l4t-kernel (= 5.10.192-tegra-35.5.0-20240613202628); however:
Package nvidia-l4t-kernel is not configured yet.
dpkg: error processing package nvidia-l4t-kernel-dtbs (--configure):
dependency problems - leaving unconfigured
No apport report written because the error message indicates it's a follow-up error from a previous failure.
No apport report written because MaxReports has already been reached
No apport report written because MaxReports has already been reached
Errors were encountered while processing:
nvidia-l4t-bootloader
nvidia-l4t-kernel
nvidia-l4t-kernel-headers
nvidia-l4t-display-kernel
nvidia-l4t-kernel-dtbs
E: Sub-process /usr/bin/dpkg returned an error code (1)
rui@Rui-AI-Desktop:~$
I figured I'd rather update Python now to prevent future problems than downgrade my huggingface_hub install.
Thanks again
Edit: for some reason, while tinkering with Python versions newer than 3.10 or 3.11, I managed to update to Python 3.9.5 (?) and followed the rest of your instructions successfully, so thank you.
But if you know a good fix, and it's not too much to ask, I'd still be happy to know how to fix this problem to prevent more in the future.
Oh, a Jetson environment? Seems like it has some pretty specific conventions.
You've already done the important part (getting to Python 3.9.5 and using a venv).
The remaining question is: what's the "right" way to fix and avoid those scary dpkg / nvidia-l4t-* errors on your Jetson so they don't keep coming back?
I'll first recap what's actually wrong, then give you:
- The clean, recommended fix (reflash JetPack / Jetson Linux).
- A short-term workaround if you canât reflash yet.
- A few habits to avoid similar problems in the future.
1. What the error really is (and why it's not your fault)
From your logs:
Setting up nvidia-l4t-bootloader (35.5.0-20240613202628) ...
...
ERROR. 3701--0005--1--jetson-orin-nano-devkit- does not match any known boards.
dpkg: error processing package nvidia-l4t-bootloader (--configure):
installed nvidia-l4t-bootloader package post-installation script subprocess returned error exit status 1
...
dpkg: error processing package nvidia-l4t-kernel (--configure):
installed nvidia-l4t-kernel package post-installation script subprocess returned error exit status 1
...
E: Sub-process /usr/bin/dpkg returned an error code (1)
Key points:
- These packages come from NVIDIA's Jetson Linux / L4T stack:
  - nvidia-l4t-bootloader
  - nvidia-l4t-kernel
  - nvidia-l4t-kernel-headers
  - nvidia-l4t-display-kernel
  - nvidia-l4t-kernel-dtbs
- When apt tries to install or upgrade them, their own post-install scripts run. Those scripts:
  - Read the board ID (TNSPEC, COMPATIBLE_SPEC), and
  - Decide how to update the bootloader/kernel stored on QSPI / eMMC.
- On your board, the script prints:
3701--0005--1--jetson-orin-nano-devkit- does not match any known boards.
That means "this ID string isn't in the list of supported boards in this updater", so the script refuses to continue and exits with status 1.
- dpkg is not actually corrupt. It just reports: "the post-install script failed, so I can't mark these packages as configured".
This is a known Jetson behavior: NVIDIA explicitly warns that partial upgrades or mismatched Jetson Linux packages can cause firmware/bootloader configuration problems, and that those packages are tightly coupled to the board hardware and JetPack release. (NVIDIA Docs)
So:
The problem is a stuck Jetson firmware/kernel update, not anything you did with Python or pip.
2. "Good" fix: reflash to a clean, matching JetPack
The robust, recommended way to fix this so it doesn't haunt future apt commands is:
- Reflash the Jetson with a JetPack / Jetson Linux image that:
  - Matches your board (Jetson Orin Nano devkit), and
  - Has a consistent set of nvidia-l4t-* packages that know your board ID. (NVIDIA Docs)
This is exactly what NVIDIA's SDK Manager (or the official SD-card image) is designed for.
2.1. Why reflashing fixes it
Reflashing:
- Writes a new, known-good bootloader + kernel that match the root filesystem.
- Installs a fresh set of nvidia-l4t-* packages marked as fully configured.
- Resets any half-applied OTA/apt upgrades that confused the updater script.
After a clean flash, sudo apt update && sudo apt upgrade should no longer hit the âdoes not match any known boardsâ error, because the firmware updater now recognises the board.
2.2. High-level steps (when you're ready)
- Back up your data
Copy off anything you care about from /home (projects, notebooks, models, etc.).
- On a separate Ubuntu x86_64 machine, install NVIDIA SDK Manager. (NVIDIA Docs)
- Put the Jetson into recovery mode and connect via USB-C to the host PC (SDK Manager docs show the exact button sequence for Orin Nano).
- In SDK Manager:
  - Select your Jetson model (Jetson Orin Nano devkit).
  - Select the JetPack version you want (e.g. a JetPack 5.1.x or 6.x that matches what you plan to use).
  - Flash the OS. Optionally let it install CUDA/cuDNN and extra components.
- Boot the Jetson, run the first-boot wizard.
- On the Jetson, run:
sudo apt update
sudo apt upgrade
You should no longer see errors from nvidia-l4t-bootloader or nvidia-l4t-kernel.
That's the "clean slate" option: safest, fully supported, and it prevents this particular problem from reappearing unless another partial firmware upgrade happens.
3. Short-term workaround if you canât reflash yet
Sometimes you can't reflash immediately (no spare host, don't want to wipe the board yet, etc.), but apt keeps complaining.
The goal of the workaround is:
- Leave your current bootloader/kernel alone (board is already booting fine).
- Stop the broken nvidia-l4t-* scripts from failing every time dpkg runs.
- Let apt install for normal packages (like Python libs) succeed.
This is more advanced and not officially recommended by NVIDIA, but it's a common pragmatic fix used by experienced users when they're in a bind.
3.1. Idea
Each problematic package has a script like:
- /var/lib/dpkg/info/nvidia-l4t-bootloader.postinst
- /var/lib/dpkg/info/nvidia-l4t-kernel.postinst
- (similar for ...-headers, ...-display-kernel, ...-kernel-dtbs)
Those scripts do the check that fails. You can:
- Back them up, then
- Make them exit 0 immediately, before they do anything, so dpkg thinks configuration succeeded.
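If you want to see the effect safely first, you can rehearse the edit on a throwaway copy instead of the real postinst. The /tmp path and the dummy script contents here are made up for illustration; the sed form inserts exit 0 after line 1, which keeps the shebang where the system expects it:

```shell
# Create a dummy "postinst" that fails the way the NVIDIA one does.
printf '#!/bin/sh\necho "board-ID check that fails"\nexit 1\n' > /tmp/demo.postinst

# Back it up, then insert "exit 0" right after the shebang line.
cp /tmp/demo.postinst /tmp/demo.postinst.bak
sed -i '1a exit 0' /tmp/demo.postinst

# The script now succeeds immediately; the failing logic below never runs.
sh /tmp/demo.postinst && echo "script reports success"
```

Once you're comfortable with what the edit does, the same pattern applies to the real files under /var/lib/dpkg/info (with sudo).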
3.2. Concrete steps
- Go to the dpkg info directory:
cd /var/lib/dpkg/info
- Back up the post-install scripts:
sudo cp nvidia-l4t-bootloader.postinst nvidia-l4t-bootloader.postinst.bak
sudo cp nvidia-l4t-kernel.postinst nvidia-l4t-kernel.postinst.bak
(You can back up the others too if they have .postinst files and also fail.)
- Insert exit 0 right after the shebang line to short-circuit them (keeping the shebang on line 1 so the scripts still execute normally):
sudo sed -i '1a exit 0' nvidia-l4t-bootloader.postinst
sudo sed -i '1a exit 0' nvidia-l4t-kernel.postinst
What this does: each script now returns success before it reaches the board-ID logic.
- Re-run configuration:
sudo dpkg --configure -a
Now dpkg should mark those packages as configured instead of erroring out.
- (Optional) run:
sudo apt update
sudo apt upgrade
You should no longer see the nvidia-l4t-bootloader / "does not match any known boards" messages, because those scripts now exit successfully.
Important consequences:
- Your Jetson keeps using whatever firmware/kernel it already had.
- Those specific nvidia-l4t-* packages won't actually update the bootloader anymore, because you've disabled the code that does it.
- Whenever you do want to cleanly move to a new JetPack version, the right path is still to reflash; at that point you can restore the .bak versions or just let the fresh install provide new scripts.
So this is best treated as "get apt out of the way for now", not a permanent solution.
4. How to avoid similar headaches in the future
4.1. Treat Jetson like an appliance OS, not like stock Ubuntu
NVIDIA's docs explicitly say partial upgrades of Jetson Linux components can cause trouble, because boot firmware, kernel, and user-space libraries are tightly coupled. (NVIDIA Docs)
Practical rules:
- Do not run sudo do-release-upgrade, sudo apt dist-upgrade, or big "full" upgrades without reading Jetson release notes.
- It's generally fine to run sudo apt update and sudo apt install <normal package> (e.g. python3.9, vim, git, etc.).
- When you want to move to a new JetPack (e.g. 5.1 → 6.1), follow NVIDIA's JetPack / Jetson Linux upgrade instructions (Debian-based OTA or full reflash), not generic Ubuntu upgrade guides. (NVIDIA Docs)
4.2. Keep system Python separate from your project Python
You already did the right thing here:
- Let /usr/bin/python3 remain whatever the OS ships (3.8 on Ubuntu 20.04).
- Install extra interpreters like python3.9 or python3.10 beside the system one, not instead of it.
- For each project, make a venv with the interpreter you want:
python3.9 -m venv ~/venvs/hf
source ~/venvs/hf/bin/activate
pip install --upgrade pip
pip install huggingface_hub
This avoids breaking system tools that assume python3 is the original version.
4.3. Avoid touching update-alternatives for python3
The update-alternatives calls you tried failed because /usr/bin/python3.10 didn't exist, but even if they had worked, redirecting /usr/bin/python3 to a different version can break apt and NVIDIA scripts.
Best practice on Jetson: don't register python3 itself with update-alternatives at all.
5. Quick recap
- The `huggingface_hub` issue is already solved by moving to Python 3.9.5 (that's exactly what it needed).
- The `dpkg` error messages are from NVIDIA's `nvidia-l4t-*` packages, whose bootloader/kernel updater script doesn't recognise your board ID and refuses to run.
- Because of that, those packages stay "half configured", and every `apt` action that touches them prints `E: Sub-process /usr/bin/dpkg returned an error code (1)`.
To have a solid, future-proof system:
- Best fix: when possible, reflash the Jetson with a clean JetPack image via SDK Manager. That gives you a consistent bootloader/kernel/rootfs and a clean set of `nvidia-l4t-*` packages that work for your board.
- Short-term workaround: back up and neutralise the failing `nvidia-l4t-*` post-install scripts so that `dpkg --configure -a` completes successfully; then plan a proper reflash later.
- For the future:
  - Avoid generic Ubuntu full upgrades on Jetson; follow NVIDIA's JetPack/Jetson Linux update instructions instead.
  - Keep system Python as-is, use extra interpreters + virtualenvs for your ML work, and avoid changing `/usr/bin/python3` via `update-alternatives`.
Thanks so much for the detailed help, I didn't realise following standard Ubuntu guides could be damaging, so thanks again for letting me know.
To reflash the Jetson, is it absolutely necessary to flash with a separate Ubuntu machine? It's unlikely I'll have one on hand for a long time, so would it be okay to use a Mac? If not, would a Mac running an Ubuntu VM be a working fix?
It should work, but there's definitely some risk when connecting the device and cables through a VM instead of real hardware (VMs are generally fine for high-level processing, but their handling of low-level I/O varies…).
For rewriting SD cards, it seems to work fine on Mac or Windows.
Short version:
- You don't always need a separate Ubuntu PC.
- For some things (writing SD-card images) a Mac alone is fine.
- For full "SDK Manager" flashing (especially QSPI bootloader / NVMe), NVIDIA still expects an x86-64 Ubuntu host. That can be a real PC or an Ubuntu VM/emulator on your Mac, but VMs are a bit fiddly.
Below is the breakdown with context so you can choose a path that actually fits your situation.
1. Two different "flash" jobs
On a Jetson Orin Nano devkit you usually care about two layers:
- Root filesystem / OS (what lives on the microSD card or NVMe)
- QSPI bootloader firmware (a tiny flash chip on the module, basically "BIOS/UEFI + early boot")
NVIDIA exposes two official installation methods for devkits:
- SD Card Image Method: write a prebuilt JetPack image to a microSD card.
- NVIDIA SDK Manager Method: use a host PC running Ubuntu + SDK Manager over USB to flash QSPI and/or the rootfs. (Qiita)
Your dpkg errors (`nvidia-l4t-bootloader`, `nvidia-l4t-kernel` etc.) are essentially the "QSPI/bootloader update" part misbehaving. Reflashing is about getting both layers back into a consistent state.
2. What you can do with just a Mac
2.1. Re-image the SD card (rootfs): a Mac is completely fine
For the SD Card Image Method, NVIDIA's own getting-started guide for the Orin Nano devkit literally shows using Balena Etcher on macOS to write the image.
Process (high level):
- On your Mac, download the JetPack SD card image for "Jetson Orin Nano Developer Kit" from NVIDIA's download center / JetPack page.
- Install Balena Etcher (macOS build).
- Use Etcher to flash the `.img` file to a microSD card.
- Put the card in the Jetson, power on, and go through the normal Ubuntu setup.
This completely resets the root filesystem on that microSD.
No Linux PC is required for this part.
This by itself may already clear a lot of the "random dpkg mess" that came from experimenting on the old OS.
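One extra precaution before flashing: verify the downloaded image against the SHA-256 checksum NVIDIA publishes on the download page, so a corrupted download doesn't masquerade as a broken board later. A small sketch (the filename below is illustrative — point `IMG` at your actual download):

```shell
# Verify a downloaded image's SHA-256 before flashing it to the card.
IMG="$HOME/Downloads/jetson-orin-nano-sd-image.zip"   # illustrative path
if [ ! -f "$IMG" ]; then
  echo "no file at $IMG - set IMG to your actual download"
elif command -v shasum >/dev/null 2>&1; then
  shasum -a 256 "$IMG"     # macOS ships shasum
else
  sha256sum "$IMG"         # most Linux hosts
fi
```

Compare the printed hash with the one on the download page before writing the card.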
2.2. Limitation: QSPI bootloader updates for JetPack 6.x
The catch is the QSPI bootloader:
- NVIDIA's JetPack 6.x notes say: if you use a JetPack 6.x SD card image for the first time, you must first update the QSPI bootloaders by installing JetPack 6 once using SDK Manager. After that one-time step, future JetPack 6.x SD-card images can be used directly. (Qiita)
So:
- If you stay on JetPack 5.x (e.g., 5.1.2 / 5.1.3 / 5.1.4), you can usually get away with just SD-card flashing from the Mac and not worry about SDK Manager right away.
- For a first-time move to JetPack 6.x, or for certain QSPI fixes, NVIDIA still expects you to run a proper flash from an Ubuntu x86-64 host at least once.
Given that your current errors are around `nvidia-l4t-bootloader` wanting to go to 35.5.0, a clean SD-card reflash to a known-good JetPack 5.x image (and then resisting the urge to "apt dist-upgrade to the next major JetPack") is already a big step towards stability, and can be done from the Mac only.
3. Using a Mac + Ubuntu VM/emulator for full SDK Manager flashing
When you do want to properly update QSPI or flash NVMe, you're in "SDK Manager territory".
3.1. Official requirement: x86-64 Linux host
SDK Manager's system requirements say the host must be Ubuntu Desktop on x86-64 (16.04 / 18.04 / 20.04; newer docs also mention 22.04) or a similar Linux, not macOS or Windows. (NVIDIA Docs)
So NVIDIA's official options are:
- A physical x86-64 machine running Ubuntu, or
- An x86-64 Linux environment reached via VM/emulation.
macOS itself is not a supported host for SDK Manager.
3.2. Intel Mac: Ubuntu VM is common (but unofficial)
If your Mac is Intel:
- You can install VirtualBox, VMware Fusion, etc., and run Ubuntu x86-64 as a guest.
- Inside that Ubuntu VM you install SDK Manager and flash the Jetson over USB.
Plenty of people have done this successfully; there are tutorials and forum posts describing the steps and USB pass-through quirks.
Typical gotchas:
- You must configure the VM's USB settings so that the "NVIDIA Corp. APX" device (the Jetson in recovery mode) is attached to the VM, not to macOS.
- If the USB device disconnects or snaps back to macOS mid-flash, SDK Manager will hang or fail.
NVIDIA doesn't officially "support" this, but it's a widely used workaround.
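A quick way to confirm the passthrough actually worked is to look for NVIDIA's USB vendor ID from inside the guest (the vendor ID 0955 is NVIDIA's; the rest is a sketch):

```shell
# Run inside the Ubuntu guest VM. If passthrough is working, a Jetson in
# recovery mode shows up with NVIDIA's USB vendor ID 0955.
lsusb | grep -i '0955' \
  || echo "Jetson not visible in the VM - check the USB passthrough settings"
```

If you only see the fallback message while the Jetson is in recovery mode, the device is still attached to macOS rather than the guest.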
3.3. Apple Silicon (M1/M2/M3) Mac: needs x86 emulation
On Apple Silicon:
- The usual virtualisation tools (Parallels/UTM/VMware) default to ARM64 Ubuntu guests.
- SDK Manager is built for x86-64, so you need an Ubuntu x86-64 guest under emulation (e.g., UTM with QEMU) if you want to run it.
People have successfully done exactly this: running an emulated Ubuntu 22.04 (x86-64) under UTM on an M-series Mac, installing SDK Manager there, and flashing a Jetson Orin Nano.
Trade-offs:
- It works, but it's slower and more fragile than a real x86-64 PC.
- You still have to get USB passthrough right (UTM/Parallels must hand the Jetson's USB device to the guest).
So: it's doable if you're patient and careful, but not "one click".
4. Is a separate Ubuntu PC absolutely necessary?
Putting it all together:
- To just reinstall the OS on SD (clean JetPack 5.x or a consistent JetPack 6.x SD image): no, a separate Ubuntu PC is not required. Your Mac + Balena Etcher is enough.
- To do NVIDIA's official "first-time" QSPI bootloader update and full JetPack install (especially for JetPack 6.x, or NVMe flashing): yes, you effectively need some Ubuntu x86-64 environment, but it can be:
  - A physical Ubuntu PC, or
  - An Ubuntu x86-64 VM/emulator running on your Mac, with USB passthrough configured.
- There are advanced host-less paths (like upgrading an Orin Nano from JetPack 5 to 6 using only SD cards and on-device `apt` steps) that avoid SDK Manager, but they are more complex and easy to get wrong. For where you are now, I'd treat those as "later, when you're comfortable and after reading the full guide".
5. Practical suggestion for you right now
Given your situation (beginner, already on Python 3.9.5, and seeing scary nvidia-l4t-* dpkg errors):
1. Short term, Mac-only "stabilise things" plan
   - Pick a JetPack 5.x SD-card image for the Jetson Orin Nano Devkit (e.g. 5.1.3 or 5.1.4, whichever NVIDIA recommends for your use). (Qiita)
   - On the Mac, use Balena Etcher to flash that image to a new or freshly erased microSD card.
   - Boot the Jetson from that card and do the initial setup.
   - On that fresh system:
     - Avoid `do-release-upgrade` or large `dist-upgrade` jumps that push you into a new JetPack/L4T major version; follow Jetson-specific guides instead of generic Ubuntu ones.
     - Use your Python 3.9.5 + venv approach for ML work.
   - This already gets you onto a known-good base and should avoid the broken `nvidia-l4t-*` state you hit earlier.
2. Later, when/if you want JetPack 6 or a full QSPI reset
   - Decide whether you're willing to:
     - Borrow or build a small x86-64 Ubuntu box, or
     - Invest time in setting up an Ubuntu x86-64 VM/emulator on your Mac and learning how to pass the Jetson's USB device through reliably.
   - When you do that, run SDK Manager once to flash the board fully (QSPI + rootfs) to the JetPack version you want.
   - After that one-time QSPI update, future SD-card images for the same JetPack major line are much simpler. (Qiita)
Tiny summary
- Standard Ubuntu guides can be dangerous on Jetson because they assume a normal PC; Jetson's firmware stack (the `nvidia-l4t-*` packages) is much more tightly coupled to the hardware.
- For re-imaging the SD card, you can absolutely use just your Mac; that is officially documented and common practice.
- For a full "flash everything including QSPI" (especially for first-time JetPack 6.x use), NVIDIA still expects an x86-64 Ubuntu host. That can be:
  - a real Ubuntu PC, or
  - an Ubuntu x86-64 VM/emulator on your Mac with USB passthrough, which works but is more work to set up.
- A sensible path for you is: stabilise the board via an SD-card reflash from the Mac now, and only worry about SDK Manager / full QSPI flashing once you're ready and have some form of Ubuntu host available.
Seems like this?
What you are seeing is not a problem with your Jetson; it is SDK Manager saying:
"For this host OS + target hardware combination I don't have any JetPack SDKs that I'm allowed to offer."
On your screenshot the key line is:
- Host Machine: Ubuntu 24.04, x86_64
- Target Hardware: Jetson AGX Orin modules
- SDK VERSION panel: "There are no SDKs available for installation."
1. What SDK Manager is doing in Step 1
In Step 1, SDK Manager:
1. Looks at:
   - Product = Jetson
   - Host OS version (here: Ubuntu 24.04)
   - Selected target board (AGX Orin / Orin Nano / etc.)
2. Asks NVIDIA's server for a JSON manifest of all JetPack releases.
3. Filters that list by a compatibility matrix:
   - Each JetPack version is tagged with which host OS versions it supports.
   - If none of the JetPack entries match your host OS + target board, the list is empty.
When the filtered list is empty, SDK Manager shows the ghost icon and:
"There are no SDKs available for installation."
So the message means "there is no JetPack version that officially supports Ubuntu 24.04 for this board", not that your board is broken.
2. Why this happens specifically on Ubuntu 24.04
2.1. SDK Manager vs JetPack support
- SDK Manager itself can run on Ubuntu 24.04 (the system requirements list 16.04, 18.04, 20.04, 22.04 and 24.04 as valid hosts).
- But each SDK (JetPack) has its own host OS support. The compatibility matrix for SDK Manager 2.0.0 shows, for example:
  - JetPack 4.x → Ubuntu 16.04 / 18.04
  - JetPack 5.x → Ubuntu 18.04 / 20.04
  - JetPack 6.x → Ubuntu 20.04 / 22.04
  - No JetPack entry lists Ubuntu 24.04 yet in that matrix.
Recent community reports for Jetson Orin Nano/Orin NX say exactly what you are seeing:
- On an Ubuntu 24.04 host, SDK Manager shows the Jetson device, but the SDK Version panel is empty and says "no SDKs available". The suggested fix is to use Ubuntu 20.04 (or 22.04, depending on JetPack version) as the host. (Zenn)
A Jetson AGX Xavier user saw the same message when trying to flash JetPack 5.1 from Ubuntu 22.04, and NVIDIA's response was that JetPack 5.1 requires a 20.04 host; 22.04 is only for JetPack 6.x. (Seeed Studio Forum)
So, your situation matches this pattern:
- You're running Ubuntu 24.04 on the host.
- JetPack versions for Jetson AGX Orin / Orin Nano are currently defined only for 20.04 and/or 22.04, not 24.04.
- The compatibility filter removes every JetPack entry, so nothing is left to display: "no SDKs available".
3. Less common causes (for completeness)
There are a few other things that can trigger similar messages:
- Account / membership or network issues
  - If SDK Manager cannot download the configuration files at all, it may show "No SDKs are available for your account" and complain about failing to fetch a configuration file. (RidgeRun Developer Wiki)
  - That is a different message from what your screenshot shows, and it usually accompanies explicit network errors in the log.
- Very old SDK Manager
  - Very old SDK Manager builds do not know about newer JetPack streams, so they can't list them. Updating to the current 2.x release fixes it. (Seeed Studio Forum)
- Unsupported/custom target boards or BSPs
  - For custom carrier boards or modules that require a custom BSP, NVIDIA sometimes states that SDK Manager can't offer a ready-made JetPack image; you must build/flash manually. (NVIDIA Developer Forums)
Your screenshot shows a standard Jetson AGX Orin module and a modern SDK Manager UI, so the host OS mismatch is by far the most likely root cause.
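You can confirm what SDK Manager is seeing on your host with two read-only commands (values shown in the comments are examples, not requirements of this sketch):

```shell
# What SDK Manager sees on this host: CPU architecture and Ubuntu release.
uname -m                             # SDK Manager needs x86_64 here
grep '^VERSION_ID' /etc/os-release   # e.g. VERSION_ID="24.04"
```

If `uname -m` prints something other than `x86_64`, or `VERSION_ID` is 24.04, that matches the "no SDKs available" pattern described above.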
4. Practical solutions
Option A: use a supported Ubuntu version as host (recommended)
Pick the host version that matches the JetPack you want:
- JetPack 5.x → Ubuntu 20.04 host is recommended.
- JetPack 6.x → Ubuntu 20.04 or 22.04 host.
Steps (high-level):
1. On an x86_64 machine, install Ubuntu 22.04 or 20.04 (dual-boot, separate disk, or another PC).
2. Install the latest NVIDIA SDK Manager from the official download page.
3. Boot your Jetson into recovery mode and connect it via USB-C to the host.
4. Start SDK Manager, log in, and select:
   - Product Category: Jetson
   - Target Hardware: your exact board (e.g., "Jetson Orin Nano Developer Kit" or "Jetson AGX Orin 64GB")
5. The SDK Version panel should now show JetPack 5.x / 6.x entries instead of the ghost icon.
As soon as you move from Ubuntu 24.04 to a supported host version, reports show the "no SDKs available" message disappears. (Zenn)
Option B: use a VM on your existing Ubuntu 24.04 machine
If you don't want to repartition or rebuild your current host:
- Install a hypervisor such as VirtualBox or VMware on your 24.04 host.
- Create a VM running Ubuntu 22.04 or 20.04 (x86_64).
- Install SDK Manager inside the VM.
- Pass the Jetsonâs USB device through to the VM (so the VM sees it as if it were plugged directly into it).
- Run SDK Manager as above.
This is widely used in practice (including on non-Ubuntu hosts), but it adds complexity around USB passthrough and performance.
Option C: "hack" the host into pretending it's 22.04 (not recommended)
Some users edit /etc/lsb-release so that SDK Manager believes a 24.04 machine is 22.04, and JetPack 6.x appears.
This can work, but it is fragile:
- Other tools may now mis-detect your OS.
- You're using a JetPack release on an untested host configuration.
- If something goes wrong, NVIDIA support will typically ask you to move to a supported host anyway.
For a beginner this is usually not worth the risk; using a proper 20.04/22.04 environment is safer.
5. Quick checklist for you
1. Confirm: your host says Ubuntu 24.04, x86_64 in SDK Manager Step 1 (as in your screenshot).
2. Decide which JetPack you want (most Orin tutorials currently target JetPack 5.1.x or 6.x).
3. Set up an Ubuntu 20.04 (for JetPack 5.x) or 22.04 (for JetPack 6.x) environment on x86_64:
   - Separate PC, or
   - Dual-boot, or
   - VM on your existing machine.
4. Install the latest SDK Manager there and re-run the Jetson flashing process.
5. In Step 1, you should now see a dropdown of JetPack SDK versions instead of "There are no SDKs available for installation".
Summary
- The message in your screenshot means: no JetPack release is officially defined for "Jetson AGX Orin + Ubuntu 24.04 host", so SDK Manager has nothing to offer.
- SDK Manager itself supports Ubuntu 24.04, but the JetPack SDKs for Jetson currently support only older host versions (20.04 / 22.04), as shown in NVIDIAâs compatibility matrix and confirmed by multiple Jetson Orin Nano/AGX Orin user reports. (Zenn)
- The straightforward fix is to run SDK Manager from a supported host OS (20.04/22.04), either directly on a machine or inside a VM. Once you do that, the SDK Version panel will list valid JetPack versions and you can proceed with flashing normally.
I have connected the Jetson to the host, now running Ubuntu 22.04. SDK Manager finds it and begins to download and install to my micro SD card, but it returns the following errors:
21:34:43 ERROR: Drivers for Jetson - target_image: [exec_command]: /bin/bash -c /home/ruisdk/.nvsdkm/replays/scripts/JetPack_6.2.1_Linux/NV_L4T_DRIVERS_COMP.sh; [error]: tar: Exiting with failure status due to previous errors
21:14:11 INFO: command finished successfully
21:34:43 ERROR: Drivers for Jetson - target_image: command error code: 11
21:14:11 DEBUG: running command < if ls -l /dev/tty[AU]* > /dev/null 2>&1; then ls -l /dev/tty[AU]* | awk '{ print $NF }'; else echo ""; fi >
21:34:43 ERROR: Drivers for Jetson - target_image: command terminated with error
21:14:11 INFO: command finished successfully
21:34:43 SUMMARY: Drivers for Jetson - target_image: Installation failed.
21:14:54 DEBUG: running command < true >
21:34:43 SUMMARY: Multimedia API - target: Depends on failed component
21:14:54 INFO: command finished successfully
21:34:43 SUMMARY: TensorRT Runtime - target: Depends on failed component
21:14:54 DEBUG: running command < lsusb | grep 0955: | awk '{printf $2 "/" $4 " " $6 ";"}' >
21:34:43 SUMMARY: CUDA Runtime - target: Depends on failed component
21:14:54 INFO: command finished successfully
21:34:43 SUMMARY: CuDNN Runtime - target: Depends on failed component
21:17:02 DEBUG: running command < true >
21:34:43 SUMMARY: OpenCV Runtime - target: Depends on failed component
21:17:02 INFO: command finished successfully
21:34:43 SUMMARY: VPI Runtime - target: Depends on failed component
21:34:43 SUMMARY: CuPVA Runtime - target: Depends on failed component
21:34:43 SUMMARY: NVIDIA Container Runtime with Docker integration (Beta) - target: Depends on failed component
21:34:43 SUMMARY: Gstreamer - target: Depends on failed component
21:34:43 SUMMARY: DLA Compiler - target: Depends on failed component
21:34:43 SUMMARY: DateTime Target Setup - target: Depends on failed component
21:34:43 SUMMARY: Flash Jetson Linux - flash: Depends on failed component
21:34:43 SUMMARY: File System and OS - target_image: Depends on failed component
21:34:44 ERROR: Nsight Perf SDK - host: unarchive the package failure: reading file in tar archive: /media/ruisdk/disk/sdk/JetPack_6.2.1_Linux/NVIDIA_Nsight_Perf_SDK/NvPerf/lib/a64/libnvperf_grfx_host.so: making symbolic link for: symlink libnvperf_grfx_host.so.2023.5.0 /media/ruisdk/disk/sdk/JetPack_6.2.1_Linux/NVIDIA_Nsight_Perf_SDK/NvPerf/lib/a64/libnvperf_grfx_host.so: operation not permitted
21:34:44 ERROR: Nsight Perf SDK - host: command error code: 82
21:34:44 ERROR: Nsight Perf SDK - host: command terminated with error
21:34:44 SUMMARY: Nsight Perf SDK - host: Installation failed.
21:37:55 ERROR: TensorRT Runtime - target: Download 'TensorRT Runtime' failure
21:37:55 ERROR: TensorRT Runtime - target: Download 'TensorRT Runtime' failure
21:37:55 ERROR: TensorRT Runtime - target: download failed
21:39:52 ERROR: Nsight Graphics - host: Download 'Nsight Graphics' failure
21:39:52 ERROR: Nsight Graphics - host: Download 'Nsight Graphics' failure
21:39:52 ERROR: Nsight Graphics - host: download failed
21:43:52 SUMMARY: Nsight Systems - host: Install completed successfully.
21:53:50 SUMMARY: Nsight Graphics - host: Install completed successfully.
21:54:42 ERROR: CUDA on Host - host: Download request timed out after 60 seconds. Closing connections
21:54:42 ERROR: CUDA on Host - host: Download 'CUDA on Host' failure
21:54:42 ERROR: CUDA on Host - host: Download request timed out after 60 seconds. Closing connections
21:54:42 ERROR: CUDA on Host - host: Download 'CUDA on Host' failure
21:54:42 ERROR: CUDA on Host - host: download failed
22:23:36 SUMMARY: CUDA on Host - host: Install completed successfully.
22:26:07 SUMMARY: CUDA Cross Compile Package on Host - host: Install completed successfully.
22:26:17 SUMMARY: NvSci - host: Install completed successfully.
22:27:30 SUMMARY: VPI on Host - host: Install completed successfully.
21:53:41 INFO: Nsight Graphics - host: Setting up libxcb-icccm4-dev:amd64 (0.4.1-1.1build2) ...
21:53:41 INFO: Nsight Graphics - host: Setting up nsight-graphics-for-embeddedlinux-2024.2.0.0 (2024.2.0.0) ...
21:53:41 INFO: Nsight Graphics - host: update-alternatives: using /opt/nvidia/nsight-graphics-for-embeddedlinux/nsight-graphics-for-embeddedlinux-2024.2.0.0/host/linux-desktop-nomad-x64/ngfx-ui to provide /usr/bin/ngfx-ui-for-embeddedlinux (ngfx-ui-for-embeddedlinux) in auto mode
21:53:50 INFO: Nsight Graphics - host: exec_command: dpkg-query -W -f='${db:Status-abbrev} ${Version}' nsight-graphics-for-embeddedlinux-2024.2.0.0
21:53:50 INFO: Nsight Graphics - host: Host Deb package NV_L4T_NVIDIA_NSIGHT_GRAPHICS_HOST_COMP NVIDIA_Nsight_Graphics_L4T_Public_2024.2.24327_x64.deb installed successfully.
21:53:50 INFO: Nsight Graphics - host: [ Package Install Finished Successfully ]
21:53:50 INFO: Nsight Graphics - host: [host] [ 661.82 MB used. Disk Avail on Partition /dev/sda4: 49.29 GB ]
21:53:50 INFO: Nsight Graphics - host: [ NV_L4T_NVIDIA_NSIGHT_GRAPHICS_HOST_COMP Install took 32s ]
21:54:42 : CUDA on Host - host: download https://developer.nvidia.com/assets/embedded/secure/tools/files/jetpack-sdks/Jetson_621_b41/./l4t-cuda-repo-ubuntu2204-12-6-local_12.6.11-560.35.03-1_amd64.deb failed, retrying 1...
21:54:42 INFO: wait for 4.745 sec before retry https://developer.nvidia.com/assets/embedded/secure/tools/files/jetpack-sdks/Jetson_621_b41/./l4t-cuda-repo-ubuntu2204-12-6-local_12.6.11-560.35.03-1_amd64.deb
21:54:46 INFO: CUDA on Host - host: start to download https://developer.nvidia.com/assets/embedded/secure/tools/files/jetpack-sdks/Jetson_621_b41/./l4t-cuda-repo-ubuntu2204-12-6-local_12.6.11-560.35.03-1_amd64.deb to /media/ruisdk/disk/l4t-cuda-repo-ubuntu2204-12-6-local_12.6.11-560.35.03-1_amd64.deb
22:27:12 INFO: VPI on Host - host: (Reading database ... 100%
22:27:12 INFO: VPI on Host - host: (Reading database ... 242248 files and directories currently installed.)
22:27:12 INFO: VPI on Host - host: Preparing to unpack .../vpi-python3.10-3.2.4-cuda12-x86_64-linux.deb ...
22:27:12 INFO: VPI on Host - host: Unpacking python3.10-vpi3 (3.2.4) ...
22:27:12 INFO: VPI on Host - host: Setting up python3.10-vpi3 (3.2.4) ...
22:27:20 INFO: VPI on Host - host: exec_command: dpkg-query -W -f='${db:Status-abbrev} ${Version}' python3.10-vpi3
22:27:20 INFO: VPI on Host - host: Host Deb package NV_VPI_HOST_COMP vpi-python3.10-3.2.4-cuda12-x86_64-linux.deb installed successfully.
22:27:20 INFO: VPI on Host - host: [ Package Install Finished Successfully ]
22:27:20 INFO: VPI on Host - host: [ Package Install Started ]
22:27:20 INFO: VPI on Host - host: deb installer start to install
22:27:20 INFO: VPI on Host - host: exec_command: dpkg -f /media/ruisdk/disk/vpi-samples-3.2.4-cuda12-x86_64-linux.deb Package | tr -d '\n'
22:27:20 INFO: VPI on Host - host: exec_command: dpkg -f /media/ruisdk/disk/vpi-samples-3.2.4-cuda12-x86_64-linux.deb Version | tr -d '\n'
22:27:20 INFO: VPI on Host - host: deb file /media/ruisdk/disk/vpi-samples-3.2.4-cuda12-x86_64-linux.deb, package name [vpi3-samples], package version [3.2.4]
22:27:20 INFO: VPI on Host - host: exec_command: sudo -E apt-get install -y --allow-downgrades '/media/ruisdk/disk/vpi-samples-3.2.4-cuda12-x86_64-linux.deb'
22:27:20 INFO: VPI on Host - host: Reading package lists...
22:27:20 INFO: VPI on Host - host: Building dependency tree...
22:27:20 INFO: VPI on Host - host: Reading state information...
22:27:21 INFO: VPI on Host - host: The following NEW packages will be installed:
22:27:21 INFO: VPI on Host - host: vpi3-samples
22:27:21 INFO: VPI on Host - host: 0 upgraded, 1 newly installed, 0 to remove and 294 not upgraded.
22:27:21 INFO: VPI on Host - host: Need to get 0 B/17.2 MB of archives.
22:27:21 INFO: VPI on Host - host: After this operation, 17.7 MB of additional disk space will be used.
INSTALLATION FAILED. Please review the terminal tab for details. Resolve and RETRY.
Host-side issue.
Root cause is on the host, not on the Jetson.
Two different things are happening in your log:
1. `Drivers for Jetson - target_image` fails with `tar: Exiting with failure status due to previous errors`, and all Jetson-side components then show "Depends on failed component".
2. Some host tools (Nsight Perf SDK etc.) fail with `operation not permitted` when trying to create symlinks.
Those two have the same underlying cause:
SDK Manager is unpacking JetPack into `/media/ruisdk/disk/...`, which is on a filesystem that doesn't allow Unix symlinks (typically exFAT/FAT32/NTFS, or wrong mount options), so `tar` cannot create the symlink entries and exits with an error.
NVIDIA engineers explicitly say that this error usually means either no root permission or that "your host PC is using a file system that is not ext4" for the SDK Manager working directory. (Launchpad)
Other SDK Manager threads also warn that the ~/nvidia / download folders must be on an ext4 filesystem, not VFAT/NTFS/exFAT. (PyPI)
Your path:
/media/ruisdk/disk/sdk/JetPack_6.2.1_Linux/...
is exactly the pattern you see when the downloads go to an external USB/SD drive formatted exFAT/FAT32. On those filesystems, creating symlinks returns operation not permitted, which is exactly what you see for Nsight Perf SDK, and itâs almost certainly what tar saw a few lines before the Exiting with failure status message.
Once Drivers for Jetson fails, every Jetson-side component (CUDA Runtime, cuDNN, TensorRT, Flash Jetson Linux, File System and OS, etc.) is marked as "Depends on failed component", so they all show as failed even though the core problem is just the tar extract step.
What to do
1. Confirm the filesystem type
On the host, run:
df -Th /media/ruisdk/disk
Look at the "Type" column:
- If it says `exfat`, `vfat`, or `ntfs`, that is the problem.
- For SDK Manager you want `ext4` (or another Linux filesystem that supports symlinks and Unix permissions).
If `/media/ruisdk/disk` is exFAT/NTFS/FAT, do not use it as:
- Download folder
- SDK / JetPack workspace
- Target HW image folder
SDK Manager is allowed to store things there, but the JetPack archives contain symlinks and must be unpacked on a Unix filesystem.
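You can also probe the failure mode directly: try to create a symlink in the directory you plan to hand to SDK Manager. A small sketch (the test path is illustrative — point `DIR` at your candidate workspace):

```shell
# Probe whether a directory's filesystem allows symlinks
# (ext4: yes; exFAT/FAT32: no).
DIR="$HOME/nvidia-symlink-test"
mkdir -p "$DIR"
if ln -s /dev/null "$DIR/.lntest" 2>/dev/null; then
  rm "$DIR/.lntest"
  echo "symlinks OK in $DIR"
else
  echo "symlinks NOT supported in $DIR - do not use this path for SDK Manager"
fi
```

If the probe fails on a path, `tar` will fail there too, exactly as in your log.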
2. Put SDK Managerâs work directories on an ext4 partition
Simplest approach:
1. Make a workspace on your normal home partition (which is ext4 by default):

```
mkdir -p ~/nvidia/sdk
mkdir -p ~/Downloads/nvidia
```

2. Start SDK Manager on the host (Ubuntu 22.04).
3. In SDK Manager, open the gear icon → Settings → Downloads and set:
   - Download folder = `~/Downloads/nvidia`
   - Local SDK Manager folder / Target HW image folder = `~/nvidia/sdk` (or whatever ext4 path you prefer)
   Make sure nothing in those fields points to `/media/ruisdk/disk/...`.
4. Optional but tidy: delete the old half-extracted content on the external drive:

```
rm -rf /media/ruisdk/disk/sdk/JetPack_6.2.1_Linux
```

5. Restart SDK Manager and re-run the JetPack 6.2.1 install/flash.
Now NV_L4T_DRIVERS_COMP.sh will unpack into an ext4 directory (~/nvidia/sdk/...), symlinks will work, and tar should no longer exit with an error. That in turn should let:
- Drivers for Jetson (target_image)
- File System and OS (target_image)
- Flash Jetson Linux (flash)
- CUDA/cuDNN/TensorRT Runtime on target
all complete successfully.
3. Nsight Perf SDK / other host-side errors
The Nsight Perf SDK error you pasted:
unarchive the package failure ... making symbolic link for:
symlink libnvperf_grfx_host.so.2023.5.0 ... operation not permitted
is the same pattern as an NVIDIA forum case where the SDK workspace was on a removable drive and symlinks were blocked. (Launchpad)
Once you move the SDK Manager workspace to an ext4 directory as above, this should also disappear.
If you don't care about the Nsight tools anyway, you can also:
- In Step 1, uncheck Host components (if the UI lets you) and only install/flash the Jetson side. NVIDIA staff explicitly suggest deselecting Host Machine when Nsight Perf SDK is the only thing failing. (Launchpad)
But fixing the filesystem is the real solution.
4. Download timeouts
Lines like:
Download request timed out after 60 seconds. Closing connections
... download failed
...
Install completed successfully.
mean the HTTP download to the host timed out once but succeeded on retry. That is just network speed or transient server issues. If it happens a lot:
- Use the "Download now, install later" option in SDK Manager so all packages are cached first, then flash.
- Or simply press RETRY when SDK Manager finishes and reports some downloads as failed; it will reuse already-downloaded files.
These download issues are annoying but not the cause of the tar / symlink errors.
Minimal checklist for you
- On the host, run `df -Th /media/ruisdk/disk` and confirm it is not ext4.
- Stop using `/media/ruisdk/disk` for SDK Manager paths.
- Create `~/nvidia/sdk` (or similar) on your ext4 root/home partition.
- In SDK Manager settings, point Download folder and Target HW image folder to that ext4 directory.
- Restart SDK Manager, select JetPack 6.2.1, and run the installation/flash again.
- If you don't need the host Nsight tools, uncheck them to reduce noise.
After this, you should no longer see:
- `Drivers for Jetson - target_image: tar: Exiting with failure status due to previous errors`
- `Nsight Perf SDK ... operation not permitted`
and the JetPack 6.2.1 flash to your microSD/Jetson should complete cleanly.
Sorry, I realise this is getting quite dragged out.
I managed to successfully flash and run JetPack on my Jetson using your help, but it is booting from its internal 64GB drive, and I wanted it to boot from the 1TB micro SD card that I'm using. I'm getting very confused about which machine should have the micro SD card connected to it, and what file paths I need to be entering to make sure that it only goes to the card. I also cannot change the HW image location, as the button is greyed out, saying that it is already installed in a file directory that, after checking and using online guides to try to uninstall, doesn't exist.
My problem is I can't change the install folder because it's stuck on something, but even if I could, I'm not certain (after watching and reading online guides) what directory I should be using.
Jetson is tough…
Short answer:
- To make the Jetson boot from the 1 TB microSD, you either:
  - Flash a JetPack SD-card image onto the 1 TB card on your host (Mac/Ubuntu), then put that card in the Jetson, or
  - Use SDK Manager to flash Jetson OS to the microSD while the card is inserted in the Jetson, and select `SDCard` as the Storage Device in the flashing dialog.
- The "Target HW image folder" is only a workspace on the host PC. It does not decide which drive on the Jetson is used; it just decides where the `Linux_for_Tegra` files live on the host. NVIDIA expects this to be on an ext4 path like `~/nvidia/nvidia_sdk`. (Medium)
- If that folder is greyed out and stuck on a dead path, you reset SDK Manager's internal database (`~/.nvsdkm/sdkm.db`, or use a CLI uninstall) so it "forgets" the old install and lets you pick a new folder. (NVIDIA Developer Forums)
Now step-by-step, with context.
1. Who owns what: host vs Jetson vs SD card
There are two completely separate things:
-
On the host PC (Ubuntu x86_64)
-
SDK Manager downloads archives and creates a Jetson OS image in a âTarget HW image folderâ.
-
This is a normal folder on your host disk like:
/home/<you>/nvidia/nvidia_sdk/JetPack_6.2_Linux_JETSON_.../Linux_for_Tegra -
This folder just holds
Linux_for_Tegra,rootfs, flash scripts, etc. It never runs on the Jetson itself. (NVIDIA Developer Forums)
- On the Jetson (the board):
  - It has one or more real storage devices: internal eMMC / NVMe (your "64 GB internal drive"), microSD slot, maybe USB drives.
  - SDK Manager flashes the Jetson over USB in Force Recovery mode, and in the pre-flash dialog you pick Storage Device = `EMMC` / `SDCard` / `NVMe` / `USB` / `Custom`. (NVIDIA Docs)
  - Whatever you pick there is where the real root filesystem goes.
So:
- "Target HW image folder" = host workspace directory.
- "Storage Device" in the flash dialog = where the OS actually lives (eMMC vs SD vs NVMe) on the Jetson.
These are easy to mix up. You are currently mixing them.
2. Where should the 1 TB microSD physically be?
There are two valid workflows. Pick one and stick to it.
Option A: "Default" SD-card method (no SDK Manager for OS)
This is what NVIDIA calls the "Default software setup method" for Jetson Orin Nano dev kits:
- Download the JetPack SD card image on your PC.
- Write it to the microSD with Balena Etcher or similar.
- Insert that microSD into the Jetson and power on. (NVIDIA Developer)
In this flow:
- The 1 TB microSD is inserted into the host PC while you flash the `.img` file.
- The Jetson is off during that step.
- After flashing, you move the card to the Jetson and boot.
You never touch "Target HW image folder" for this; SDK Manager is not required to put the OS on the SD card at all.
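If you prefer the command line to Balena Etcher, the same write can be done with `dd`. This is a minimal sketch, assuming the downloaded image has already been unzipped to `jetpack.img` (a placeholder name), and with `/dev/sdX` standing in for your card's device node; double-check the node with `lsblk` first, because writing to the wrong device destroys data.

```shell
# Find the microSD's device node first (e.g. /dev/sdb or /dev/mmcblk0)
lsblk -d -o NAME,SIZE,MODEL

# Write the unzipped JetPack image to the card.
# /dev/sdX and jetpack.img are placeholders -- substitute your real values.
sudo dd if=jetpack.img of=/dev/sdX bs=4M status=progress conv=fsync
```

`conv=fsync` makes `dd` flush to the device before exiting, so the card is safe to remove once the command returns.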
Option B: SDK Manager method (what you are using now)
This is what the "Optional flow with host Linux PC" describes. (NVIDIA Developer)
In this flow:
- You put the Jetson into Force Recovery and connect it to the Ubuntu host with USB-C. (NVIDIA Developer)
- The 1 TB microSD sits inside the Jetson, not in the host.
- SDK Manager creates image files in the host's "Target HW image folder".
- SDK Manager opens the pre-flash dialog, and there you select:
  - Setup type: Manual or Auto
  - OEM configuration: Pre-config or Runtime
  - Storage Device: `SDCard` (instead of eMMC / NVMe). (NVIDIA Developer)
When you click Flash, SDK Manager writes the OS image to the Jetson's microSD, not to the internal 64 GB device.
So:
- If you use SDK Manager, the microSD must be in the Jetson during flashing.
- If you use the plain SD image, the microSD must be in the host PC during flashing.
3. Why your Jetson is currently booting from the 64 GB device
Right now you have:
- Recently flashed JetPack onto "internal 64 GB" using SDK Manager. Storage Device was probably `EMMC` or `NVMe`.
- Your 1 TB microSD either:
  - has no OS image, or
  - has an old / incompatible image, or
  - is not first in the UEFI boot order.
On Orin-class boards, the UEFI firmware can boot from eMMC, SD, NVMe, USB, etc., with removable devices (SD/USB) usually having higher default priority than internal storage. (NVIDIA Docs)
If the SD card does not have a valid JetPack installation, or firmware is out of date, it will fall back to whatever is valid (your 64 GB device). Some users see UEFI shell or boot failures until firmware and SD image are aligned. (Reddit)
Once you properly flash Jetson OS to the 1 TB card, one of two things gets you booted from it:

- UEFI sees a valid OS on the SD and normally prefers it automatically.
- Or you can explicitly set SD as the first boot device via the UEFI menu: press `ESC` at boot, enter Boot Maintenance Manager → Boot Options → Change Boot Order, and move the SD entry to the top. (NVIDIA Docs)
4. Fixing the "Target HW image folder" being greyed out / stuck
This part is only about the host PC (Ubuntu x86_64). It is not about which Jetson drive is used.
SDK Manager marks components as "installed" in a small SQLite database, `~/.nvsdkm/sdkm.db`. If it thinks an OS image for that JetPack version + board is already created, it locks the Target HW image folder field and reuses the previous workspace path, even if that folder is gone. (NVIDIA Developer Forums)
You have exactly that situation.
You have three ways out; simplest first:
Method 1: Delete the SDK Manager DB (brute-force reset)
Effect: SDK Manager forgets what was installed; you log in again; all paths become editable again.
On the host PC:
cd ~/.nvsdkm
rm sdkm.db
NVIDIA forum users and NVIDIA staff recommend removing `sdkm.db` when you need a "fresh install" or when paths are stuck. (NVIDIA Developer Forums)
Then:
- Restart SDK Manager.
- Log in again.
- Go back to STEP 02. "Target HW image folder" will now be editable.
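If you would rather not destroy the database outright, renaming it has the same effect while keeping a copy you can restore. A small sketch; the `.bak` suffix is just a convention, not anything SDK Manager looks for:

```shell
# Rename instead of delete -- SDK Manager recreates sdkm.db on next start,
# and the backup can be moved back if something else goes wrong.
if [ -f ~/.nvsdkm/sdkm.db ]; then
    mv ~/.nvsdkm/sdkm.db ~/.nvsdkm/sdkm.db.bak
fi
```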
Method 2: CLI uninstall for the stuck target image
NVIDIA also added a CLI uninstall method for the "cannot change Target HW image folder" issue. (NVIDIA Developer Forums)
Even if the folder no longer exists physically, you can run:
sdkmanager --cli \
--action uninstall \
--target-image-folder /path/that/SDKM/thinks/is/used
Use the exact path SDK Manager shows in the UI. That command only clears the record in `sdkm.db`; it does not need the directory to still exist. After that, restart SDK Manager and the field becomes editable.
Method 3: Delete the entire `~/.nvsdkm` directory
If things are badly tangled, you can just:
rm -rf ~/.nvsdkm
This wipes SDKM state entirely (download cache info, login tokens). The next run behaves like a first install.
5. What directory should you actually use for "Target HW image folder"?
You want:
- On the Ubuntu host, an internal ext4 path with plenty of space.
- Avoid external USB disks with exFAT/NTFS; people hit symlink/permission errors there. Multiple guides explicitly warn against using external USB drives for "Target HW image folder". (ktnc.co.kr)
A safe choice:
/home/<your-username>/nvidia/nvidia_sdk
Then in SDK Manager STEP 02:
- Download folder: `/home/<you>/nvidia/nvidia_sdk/downloads` (for example)
- Target HW image folder: `/home/<you>/nvidia/nvidia_sdk`
SDK Manager will create subfolders like:
/home/<you>/nvidia/nvidia_sdk/
JetPack_6.2_Linux_JETSON_AGX_ORIN_TARGETS/
Linux_for_Tegra/
bootloader/
kernel/
rootfs/
...
This pattern is shown in NVIDIA examples and partner BSP guides. (NVIDIA Developer Forums)
Key point:
- You do not point "Target HW image folder" at the SD card device or mount point.
- You just point it at a normal host folder; SDKM handles writing to the Jetson's SD/eMMC/NVMe over USB during the flash step.
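Before pointing SDK Manager at the folder, you can create it up front and confirm it really sits on an internal ext4 filesystem. A quick check, assuming the standard path above:

```shell
# Create the workspace and download folders ahead of time
mkdir -p ~/nvidia/nvidia_sdk/downloads

# Confirm the filesystem type under it is ext4
# (not exFAT/NTFS on an external USB disk)
df -T ~/nvidia/nvidia_sdk
```

If the `Type` column shows anything other than `ext4` (e.g. `exfat`, `ntfs`, `fuseblk`), pick a path on your internal disk instead.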
6. Concrete recipe to end up booting from the 1 TB microSD
Path 1: Use the SD-image method (simplest, avoids SDKM confusion)
1. On your host (Mac or Ubuntu), download the JetPack 6.2.x SD card image for your exact board (e.g. "Jetson Orin Nano Developer Kit JetPack 6.2.1 SD image"). (Jetson AI Lab)
2. Insert the 1 TB microSD into the host.
3. Use Balena Etcher:
   - Source = the JetPack 6.2.x `.img` or `.zip`
   - Target = the 1 TB microSD
   - Flash.
4. Put the 1 TB microSD into the Jetson.
5. Power on the Jetson:
   - If the firmware is current and boot order is default, it should boot from SD automatically. (NVIDIA Docs)
   - If it still boots from the internal 64 GB drive, press `ESC` at the NVIDIA logo, go to Boot Manager → Boot Options → Change Boot Order, and move the SD card device to the top. (NVIDIA Docs)
At that point, your root filesystem is on the 1 TB card. Internal 64 GB can be left unused or repurposed later.
You can still use SDK Manager afterwards for Jetson SDK Components only, which does not touch the OS location.
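A corrupted download is a common cause of an SD card that "flashes fine" but will not boot. If a checksum is published alongside the image, you can verify the file before flashing in step 3; the filename and hash below are placeholders, so substitute the real values from the download page:

```shell
# Verify the downloaded image against a published SHA-256 checksum.
# Both the filename and the hash here are placeholders.
echo "<published-sha256>  jp6.2.x-sd-card-image.zip" | sha256sum --check
```

`sha256sum --check` prints `OK` on a match and exits non-zero on a mismatch, so it also works in scripts.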
Path 2: Use SDK Manager to flash directly to SD
If you want to keep everything in SDK Manager:
1. On the Ubuntu host, reset SDKM state:
   cd ~/.nvsdkm
   rm sdkm.db
   # or: sdkmanager --cli --action uninstall ...
2. Restart SDK Manager, log in.
3. STEP 01:
   - Product: Jetson
   - Target hardware: your exact dev kit
   - SDK version: JetPack 6.2.x
   - Deselect Host components if you do not need them. (NVIDIA Docs)
4. STEP 02:
   - Download folder: `/home/<you>/nvidia/nvidia_sdk/downloads`
   - Target HW image folder: `/home/<you>/nvidia/nvidia_sdk`
   - Accept licenses, Continue. (Medium)
5. STEP 03:
   - Jetson in Force Recovery.
   - 1 TB microSD inserted into the Jetson.
   - SDKM opens the pre-flash dialog:
     - Choose Manual or Auto setup.
     - OEM configuration: Pre-config or Runtime (your choice).
     - Storage Device: set to `SDCard`. (NVIDIA Developer)
   - Click Flash.
6. After the flash:
   - The Jetson reboots. If boot order prefers SD, you boot from the 1 TB microSD.
   - If not, use `ESC` → Boot Maintenance Manager → Change Boot Order to put SD first. (NVIDIA Docs)
Result is effectively the same as Path 1: OS on the 1 TB SD, internal 64 GB unused for rootfs.
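Before clicking Flash in STEP 03, you can confirm the host actually sees the Jetson in Force Recovery. A recovery-mode Jetson enumerates as an NVIDIA USB device with vendor ID 0955, so a quick check on the Ubuntu host is:

```shell
# A Jetson in Force Recovery shows up as an NVIDIA Corp. USB device
# (vendor ID 0955). No output means recovery mode was not entered.
lsusb | grep -i "0955.*nvidia"
```

If nothing is printed, redo the recovery-button sequence and check the USB-C cable before retrying the flash.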
7. How to double-check what you are booted from
On the Jetson:
df -h /
lsblk
- `df -h /` shows which device `/` (the root filesystem) is on. If it is something like `/dev/mmcblk1p1`, that is usually SD; `/dev/mmcblk0p1` or `/dev/nvme0n1p1` is typically internal eMMC / NVMe.
- `lsblk` shows sizes; 1 TB vs 64 GB is very obvious.
This is optional but removes doubt.
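The two checks above can be condensed into one small script. A sketch only; device-name conventions vary between Jetson models, so treat the SD/eMMC mapping here as a heuristic and fall back to the sizes:

```shell
#!/bin/sh
# Print the block device backing the root filesystem, then all disks with sizes.
ROOTDEV=$(findmnt -n -o SOURCE /)
echo "Root filesystem is on: $ROOTDEV"

# Heuristic: on many Jetsons mmcblk1* is the SD slot,
# while mmcblk0* / nvme0n1* are internal eMMC / NVMe.
case "$ROOTDEV" in
    /dev/mmcblk1*)               echo "Looks like the microSD card." ;;
    /dev/mmcblk0*|/dev/nvme0n1*) echo "Looks like internal eMMC / NVMe." ;;
    *)                           echo "Check the sizes below to identify it." ;;
esac

lsblk -d -o NAME,SIZE
```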
Summary
- "Target HW image folder" is on the Ubuntu host. It is just a workspace folder like `~/nvidia/nvidia_sdk`, not the SD card. It must be on an internal ext4 filesystem. (Medium)
- Which Jetson disk actually gets the OS is controlled by Storage Device in the SDK Manager flash dialog (`SDCard`, `EMMC`, `NVMe`, etc.). (NVIDIA Docs)
- To clear the greyed-out, stuck image folder, reset SDKM state (`rm ~/.nvsdkm/sdkm.db` or `sdkmanager --cli --action uninstall --target-image-folder ...`). (NVIDIA Developer Forums)
- To boot from the 1 TB microSD:
  - Either flash the official JetPack SD image to that card on your host and boot from it, or
  - Use SDKM to flash Jetson OS to SD by inserting the card into the Jetson and selecting `SDCard` as Storage Device. (NVIDIA Developer)
- If it still prefers the 64 GB drive, adjust boot order in the UEFI menu with `ESC` → Boot Maintenance Manager → Change Boot Order. (NVIDIA Docs)