# Installation
This guide provides a complete walkthrough for installing xTorch and its dependencies on Ubuntu.
The process involves three main stages:
- Install System Dependencies: Set up the required development tools and libraries.
- Build the Project: Compile the xTorch library from the source code.
- Install the Library: Make xTorch available system-wide for your C++ projects.
## 1. Prerequisites: System Dependencies
Before building xTorch, you must install several essential system-level packages.
### Step 1: Update Package Lists

First, ensure your package manager has the latest list of available software.

```bash
sudo apt-get update
```

### Step 2: Install Core Build Tools

Install `build-essential` (which includes the `g++` compiler) and `cmake`.

```bash
sudo apt-get install -y build-essential cmake
```

### Step 3: Install Required Libraries
Install Python and all other libraries required by xTorch with a single command.

```bash
sudo apt-get install -y \
    python3 \
    python3-venv \
    libopencv-dev \
    libhdf5-dev \
    libcurl4-openssl-dev \
    zlib1g-dev \
    libssl-dev \
    liblzma-dev \
    libarchive-dev \
    libtar-dev \
    libzip-dev \
    libglfw3-dev \
    libeigen3-dev \
    libsamplerate0-dev
```

## 2. Clone and Build xTorch
The build process is managed by CMake, which will automatically download and configure LibTorch and ONNX Runtime for you.
### Step 1: Clone the Repository

Get the latest source code from the official GitHub repository.

```bash
git clone https://github.com/kamisaberi/xtorch
cd xtorch
```

### Step 2: Create a Build Directory

It's standard practice to build the project in a separate directory to keep the source tree clean.

```bash
mkdir build
cd build
```

### Step 3: Configure with CMake

Run `cmake` from the build directory. This will check for dependencies and prepare the build files.

```bash
cmake ..
```

!!! note "Custom CUDA Path"
    If your NVIDIA CUDA Toolkit is installed in a non-standard location, you'll need to tell CMake where to find the compiler (`nvcc`). First, find the path with `which nvcc`, then run `cmake` with the following flag:

    ```bash
    cmake -DCMAKE_CUDA_COMPILER='/path/to/your/nvcc' ..
    ```

### Step 4: Compile the Library

Use `make` to compile the entire project. The `-j$(nproc)` flag uses all available CPU cores to significantly speed up the process.

```bash
make -j$(nproc)
```

Once complete, the compiled shared library (`libxTorch.so`) will be available in the build directory.
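If you are curious what `-j$(nproc)` actually expands to, the short POSIX shell sketch below (nothing xTorch-specific) shows it: `nproc` prints the number of CPU cores available to the process, and `make -j` launches that many compile jobs in parallel.

```shell
# nproc reports how many CPU cores this process may use
cores=$(nproc)
# make -j$(nproc) therefore runs this many parallel compile jobs
echo "make -j${cores}"
```

On a machine with fewer cores or limited RAM, you can substitute a smaller fixed number (e.g. `make -j2`) to reduce memory pressure during compilation.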
## 3. Install the Library
To make xTorch easily accessible to other C++ projects, install it system-wide. This command copies the header files, the compiled library, and CMake configuration files to standard system locations (like /usr/local/lib and /usr/local/include).
From within the build directory, run:

```bash
sudo make install
```

## 4. Verify the Installation
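Before running the test suites, a quick sanity check can confirm the files landed where expected. The commands below are a sketch that assumes the default `/usr/local` install prefix mentioned above; adjust the paths if you configured a different prefix.

```shell
# Confirm the shared library was installed under the default prefix
ls /usr/local/lib/libxTorch.so

# Refresh the dynamic linker cache so executables can find it at runtime,
# then confirm the linker now knows about the library
sudo ldconfig
ldconfig -p | grep -i xtorch
```

If the `grep` prints a matching entry, downstream executables linked against xTorch should resolve the library at runtime without setting `LD_LIBRARY_PATH`.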
You can verify that everything was built correctly by running the included unit tests and examples.
### Running Unit Tests

- Navigate to the `test` directory in the source folder:

    ```bash
    cd ../test
    ```

- Create a build directory and configure with CMake. This will download the Google Test framework:

    ```bash
    mkdir build
    cd build
    cmake ..
    ```

- Compile and run the tests:

    ```bash
    make
    ./run_test
    ```
### Building and Running Examples
The main repository includes a few key examples to get you started.
- Navigate to the main `examples` directory (create it if it doesn't exist, or use the one from the repo).
- Follow the same build process:

    ```bash
    mkdir build
    cd build
    cmake ..
    make
    ```

- You can now run any of the compiled examples, such as:

    ```bash
    # Run the LeNet MNIST example
    ./classifying_handwritten_digits_with_lenet_on_mnist

    # Run the DCGAN image generation example
    ./generating_images_with_dcgan
    ```
!!! success "Installation Complete!"
    xTorch is now successfully installed and ready to be used in your C++ applications. For a comprehensive collection of advanced examples, check out the dedicated xtorch-examples repository.
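To consume the installed library from your own project, you can let CMake locate it via `find_package`. The fragment below is a minimal sketch, not the project's documented configuration: the package name `xTorch` and the imported target `xTorch::xTorch` are assumptions, so inspect the CMake config files installed under `/usr/local` (e.g. in `/usr/local/lib/cmake`) and substitute the names actually exported there.

```cmake
cmake_minimum_required(VERSION 3.18)
project(my_xtorch_app LANGUAGES CXX)

# Locate the installed xTorch package.
# NOTE: package and target names below are assumptions; confirm them
# against the config files installed by `sudo make install`.
find_package(xTorch REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE xTorch::xTorch)
```

Configure and build such a consumer project exactly as above: `mkdir build && cd build && cmake .. && make`.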
