Hey there! We’re Aryorithm

XInfer: Maximum Performance, Minimum Effort.

Deploy TensorRT models natively in C++ or leverage our high-speed cloud inference API.


Effortless Power

01
Blazing Fast

Get the bare-metal speed of NVIDIA TensorRT without the complexity.

02
Simplified C++ API

Our clean, modern C++ API lets you focus on your application's logic, not boilerplate.

03
Custom CUDA

A streamlined interface to register and execute your own custom CUDA kernels within the inference pipeline.
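To give a sense of what this enables, here is a minimal sketch of a custom kernel and the launcher you would hand to the pipeline. The kernel and launcher are standard CUDA C++; the pipeline object and register_custom_kernel call named in the final comment are hypothetical placeholders, not the documented XInfer interface, so see the C++ docs for the real registration hook.

#include <cuda_runtime.h>

// A trivial elementwise CUDA kernel: scale every element of a buffer.
__global__ void scale_kernel(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// A launcher with the kind of shape an inference pipeline can call:
// it receives a raw device buffer and the CUDA stream the pipeline runs on,
// so the kernel executes in order with the rest of the inference work.
void launch_scale(float* device_buffer, int n, cudaStream_t stream) {
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_kernel<<<blocks, threads, 0, stream>>>(device_buffer, 0.5f, n);
}

// Hypothetical registration only -- the actual XInfer hook may differ:
//   pipeline.register_custom_kernel("scale_by_half", launch_scale);

Because the launcher takes the pipeline's CUDA stream, the kernel is serialized with the surrounding inference steps on that stream and needs no extra synchronization.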

Two Ways to Infer

FOR C++ DEVELOPERS: The Library

Get maximum control and performance by linking XInfer directly into your C++ applications.

View C++ Docs
FOR ANY APPLICATION: The Cloud API

Upload your TensorRT engine file and get a production-ready REST endpoint instantly. No setup, no servers to manage.

View API Docs

Go from Code to Inference in Minutes

The quickstart below loads a prebuilt TensorRT engine and runs object detection end to end:

#include <xinfer/zoo/vision/detector.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <stdexcept>
#include <vector>
 
int main() {
    try {
        // 1. Configure the detector to use our new engine and labels.
        xinfer::zoo::vision::DetectorConfig config;
        config.engine_path = "assets/yolov8n_fp16.engine";
        config.labels_path = "assets/coco.names";
        config.confidence_threshold = 0.5f;
 
        // 2. Initialize the detector.
        // This is a fast, one-time setup that loads the optimized engine.
        std::cout << "Loading object detector...\n";
        xinfer::zoo::vision::ObjectDetector detector(config);
 
        // 3. Load an image to run inference on.
        // (Create a simple dummy image for this test)
        cv::Mat image = cv::Mat(480, 640, CV_8UC3, cv::Scalar(114, 144, 154));
        cv::putText(image, "xInfer Quickstart!", cv::Point(50, 240), cv::FONT_HERSHEY_SIMPLEX, 1.5, cv::Scalar(255, 255, 255), 3);
        cv::imwrite("quickstart_input.jpg", image);
        std::cout << "Created a dummy image: quickstart_input.jpg\n";
 
        // 4. Predict in a single line of code.
        // xInfer handles all the pre-processing, inference, and NMS post-processing.
        std::cout << "Running prediction...\n";
        std::vector<xinfer::zoo::vision::BoundingBox> detections = detector.predict(image);
 
        // 5. Print and draw the results.
        std::cout << "\nFound " << detections.size() << " objects (this will be 0 on a dummy image).\n";
        for (const auto& box : detections) {
            std::cout << " - " << box.label << " (Confidence: " << box.confidence << ")\n";
            cv::rectangle(image, cv::Point(box.x1, box.y1), cv::Point(box.x2, box.y2), cv::Scalar(0, 255, 0), 2);
        }
 
        cv::imwrite("quickstart_output.jpg", image);
        std::cout << "Saved annotated image to quickstart_output.jpg\n";
 
    } catch (const std::exception& e) {
        std::cerr << "An error occurred: " << e.what() << std::endl;
        return 1;
    }
 
    return 0;
}
View Full Installation Guide

Call the API in 30 Seconds

First, compile your ONNX model into the TensorRT engine you'll upload, using the bundled CLI:

# Navigate to your build directory where the CLI tool was created
cd build/tools/xinfer-cli
 
# Run the build command
./xinfer-cli --build \
    --onnx ../../assets/yolov8n.onnx \
    --save_engine ../../assets/yolov8n_fp16.engine \
    --fp16
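Once the engine is uploaded, calling your endpoint is a plain HTTPS POST from any language. The C++ sketch below uses libcurl; the endpoint URL, auth header, and body format shown are illustrative placeholders rather than the documented contract, so check the API docs for the real request shape.

#include <curl/curl.h>
#include <iostream>
#include <string>

// Accumulate the HTTP response body into a std::string.
static size_t collect(char* ptr, size_t size, size_t nmemb, void* userdata) {
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Placeholder payload: a real request would send the raw image bytes.
    std::string payload = "<image bytes>";
    std::string response;

    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Authorization: Bearer YOUR_API_KEY");
    headers = curl_slist_append(headers, "Content-Type: application/octet-stream");

    // Placeholder URL: substitute the endpoint issued for your uploaded engine.
    curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/v1/infer/MODEL_ID");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, payload.data());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, static_cast<long>(payload.size()));
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode rc = curl_easy_perform(curl);
    if (rc == CURLE_OK)
        std::cout << response << "\n";  // e.g. a list of detections
    else
        std::cerr << "Request failed: " << curl_easy_strerror(rc) << "\n";

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}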
Get Your API Key

Speed Without Compromise

XInfer delivers the simplicity you want with the native C++ performance you need.
We provide a zero-overhead abstraction over TensorRT, giving you the best of both worlds.