Effortless Power
Two Ways to Infer
FOR C++ DEVELOPERS: The Library
Get maximum control and performance by linking xInfer directly into your C++ applications.
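As a sketch of what linking the library into a CMake project might look like (the package and target names below are assumptions for illustration, not taken from xInfer's documentation; check the C++ docs for the exact ones):

```cmake
# Hypothetical CMake usage: the package name "xinfer" and the imported
# target "xinfer::xinfer" are illustrative assumptions.
cmake_minimum_required(VERSION 3.16)
project(my_app CXX)

find_package(xinfer REQUIRED)

add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE xinfer::xinfer)
```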
View C++ Docs

FOR ANY APPLICATION: The Cloud API
Upload your TensorRT engine file and get a production-ready REST endpoint instantly. No setup, no servers to manage.
View API Docs

Go from Code to Inference in Minutes
#include <xinfer/zoo/vision/detector.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <stdexcept>

int main() {
    try {
        // 1. Configure the detector to use our new engine and labels.
        xinfer::zoo::vision::DetectorConfig config;
        config.engine_path = "assets/yolov8n_fp16.engine";
        config.labels_path = "assets/coco.names";
        config.confidence_threshold = 0.5f;

        // 2. Initialize the detector.
        // This is a fast, one-time setup that loads the optimized engine.
        std::cout << "Loading object detector...\n";
        xinfer::zoo::vision::ObjectDetector detector(config);

        // 3. Load an image to run inference on.
        // (Create a simple dummy image for this test.)
        cv::Mat image(480, 640, CV_8UC3, cv::Scalar(114, 144, 154));
        cv::putText(image, "xInfer Quickstart!", cv::Point(50, 240),
                    cv::FONT_HERSHEY_SIMPLEX, 1.5, cv::Scalar(255, 255, 255), 3);
        cv::imwrite("quickstart_input.jpg", image);
        std::cout << "Created a dummy image: quickstart_input.jpg\n";

        // 4. Predict in a single line of code.
        // xInfer handles all the pre-processing, inference, and NMS post-processing.
        std::cout << "Running prediction...\n";
        std::vector<xinfer::zoo::vision::BoundingBox> detections = detector.predict(image);

        // 5. Print and draw the results.
        std::cout << "\nFound " << detections.size() << " objects (this will be 0 on a dummy image).\n";
        for (const auto& box : detections) {
            std::cout << "  - " << box.label << " (Confidence: " << box.confidence << ")\n";
            cv::rectangle(image, cv::Point(box.x1, box.y1), cv::Point(box.x2, box.y2),
                          cv::Scalar(0, 255, 0), 2);
        }
        cv::imwrite("quickstart_output.jpg", image);
        std::cout << "Saved annotated image to quickstart_output.jpg\n";

    } catch (const std::exception& e) {
        std::cerr << "An error occurred: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}

Call the API in 30 Seconds
# Navigate to your build directory where the CLI tool was created
cd build/tools/xinfer-cli
# Run the build command
./xinfer-cli --build \
--onnx ../../assets/yolov8n.onnx \
--save_engine ../../assets/yolov8n_fp16.engine \
    --fp16

Speed Without Compromise
xInfer delivers the simplicity you want with the native C++ performance you need.
As a zero-overhead abstraction over TensorRT, it gives you the best of both worlds.