Developer Plan

For personal projects, students, and exploring the platform.

$0 / month

Choose Plan
  • 5,000 API Requests / month
  • 2 Hosted Models
  • 1 GB Model Storage
  • Community GPU Access (May experience cold starts)
  • Community Support (GitHub Discussions, Discord)
Pro Plan (Most Popular)

For startups, professionals, and applications in production.

$49 / month

$35.80 / month when billed annually (save up to $54.20)

Choose Plan
  • 100,000 API Requests / month
  • 10 Hosted Models
  • 10 GB Model Storage
  • Priority GPU Access (No cold starts)
  • Standard Email Support
  • Access to Detailed Usage Analytics
Enterprise Plan

For large-scale applications requiring dedicated infrastructure and support.

Custom

Choose Plan
  • Custom / Unlimited API Requests / month
  • Unlimited Hosted Models
  • Custom Model Storage
  • Dedicated GPU Infrastructure (Optional)
  • Dedicated Support Channel & SLAs
  • On-Premise Deployment Options
  • Security & Compliance Reviews
Questions & Answers

We answer your questions

What is XTorch?

XTorch is a command-line tool and Python library designed to streamline the conversion of PyTorch models into optimized TensorRT engines. It handles the conversion to ONNX and then to TensorRT, applying optimizations such as FP16 or INT8 quantization to maximize inference speed.
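The two-stage pipeline described above can be sketched as an ordered command plan. Note that the `xtorch` subcommands and flags below are illustrative assumptions, not XTorch's documented interface; consult the XTorch documentation for the real invocation.

```python
# Hypothetical sketch of the pipeline XTorch automates:
# PyTorch checkpoint -> ONNX graph -> optimized TensorRT engine.
# Command names and flags are assumptions for illustration only.

def build_conversion_plan(model_path: str, precision: str = "fp16") -> list:
    """Return the ordered commands for a PyTorch -> TensorRT conversion."""
    stem = model_path.rsplit(".", 1)[0]
    onnx_path = stem + ".onnx"
    engine_path = stem + ".engine"
    return [
        # Stage 1: export the PyTorch model to an ONNX graph.
        ["xtorch", "export", model_path, "--output", onnx_path],
        # Stage 2: build a TensorRT engine, applying quantization.
        ["xtorch", "build", onnx_path, "--precision", precision,
         "--output", engine_path],
    ]

for cmd in build_conversion_plan("resnet50.pt"):
    print(" ".join(cmd))
```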

Is XTorch required to use Ignition-Hub?

No, but it is highly recommended. Ignition-Hub accepts any valid TensorRT .engine file, so if you have your own conversion pipeline, you can absolutely use it. XTorch exists to make the process easier and more reliable for 90% of use cases.

Which versions of PyTorch does XTorch support?

XTorch is designed to work with models from PyTorch 1.8 and newer. We recommend using the latest stable release of PyTorch for the best results, as ONNX export support improves with each version.

What is the difference between FP16 and INT8 quantization?

  • FP16 (Half Precision): Halves your model's size and can significantly speed up inference with minimal loss in accuracy. It's a great default choice.
  • INT8 (8-bit Integer): Offers the highest performance boost and smallest model size, but requires a calibration step with a representative dataset and can sometimes cause a noticeable drop in accuracy, so use it carefully and validate the results. XTorch provides tools to help with the calibration process.
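The size trade-off between the precision modes is simple arithmetic: one parameter costs 4 bytes in FP32, 2 in FP16, and 1 in INT8. A back-of-envelope sketch for a 25M-parameter model (roughly ResNet-50 scale, an assumed example size):

```python
# Approximate on-disk weight size for a 25M-parameter model
# under each precision mode. Pure arithmetic, no framework needed.

PARAMS = 25_000_000
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

for precision, nbytes in BYTES_PER_PARAM.items():
    size_mb = PARAMS * nbytes / 1e6
    print(f"{precision}: {size_mb:.0f} MB")
# fp32: 100 MB, fp16: 50 MB, int8: 25 MB
```

Actual engine sizes also include layer metadata and fused-kernel choices, so treat these numbers as a lower bound.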

What is XInfer?

XInfer is our official client library (SDK) for interacting with models deployed on Ignition-Hub. It simplifies making API requests by handling authentication, data serialization, and response parsing for you, so you can focus on your application logic.

Which languages does XInfer support?

Currently, we offer official SDKs for Python and C++. We also provide full REST API documentation for developers who wish to make requests from other languages such as JavaScript, Go, or Rust.

Can I call my model's API without XInfer?

Yes. Every model deployed on Ignition-Hub exposes a standard REST API endpoint. You can use any HTTP client, such as curl or Python's requests library, to call it. XInfer is simply a convenience wrapper.
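A minimal sketch of such a raw HTTP call using only Python's standard library. The endpoint URL, bearer-token auth scheme, and `{"inputs": ...}` JSON shape are assumptions for illustration; check your model's page on Ignition-Hub for the actual endpoint and request format.

```python
# Build a plain HTTP inference request with the standard library,
# no SDK required. URL, auth header, and payload schema are assumed.
import json
import urllib.request

def build_inference_request(endpoint: str, api_key: str, inputs: list):
    """Prepare a POST request carrying the input tensor as JSON."""
    payload = json.dumps({"inputs": inputs}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",
    )

req = build_inference_request(
    "https://api.example.com/v1/models/my-model/infer",  # placeholder URL
    "YOUR_API_KEY",
    [[0.1, 0.2, 0.3]],
)
print(req.get_method(), req.full_url)
# Sending it would be: urllib.request.urlopen(req)
```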

Does XInfer handle pre-processing and post-processing?

No. XInfer is a pure inference client: it does not perform pre-processing (such as image resizing or normalization) or post-processing (such as non-maximum suppression). That logic should remain in your application code for maximum flexibility. You prepare your input tensor, pass it to the XInfer client, and receive the raw output tensor(s) back.
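The division of labor described above can be sketched as follows. `FakeClient` is a stand-in for an XInfer client (the real SDK's class and method names may differ), while the pre- and post-processing functions live entirely in application code:

```python
# Sketch of app-side pre/post-processing around a tensor-in,
# tensor-out inference client. FakeClient is a hypothetical stand-in
# for XInfer; it is NOT the real SDK API.

class FakeClient:
    """Stand-in client: raw tensor in, raw tensor out."""
    def infer(self, tensor):
        # Pretend the model returns one score per input value.
        return [v * 2.0 for v in tensor]

def preprocess(pixels):
    # App-side pre-processing: normalize raw 0-255 pixels to 0-1.
    return [p / 255.0 for p in pixels]

def postprocess(scores, threshold=0.5):
    # App-side post-processing: threshold the raw outputs.
    return [s > threshold for s in scores]

client = FakeClient()
raw_pixels = [0, 128, 255]
outputs = postprocess(client.infer(preprocess(raw_pixels)))
print(outputs)  # [False, True, True]
```

Keeping the transforms in your own code means you can swap models or tweak normalization without waiting for an SDK change.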
Testimonials

Trusted by 1000+ companies

Dr. Lena Petrova
Lead AI Engineer at SynthVision

Ignition-Hub is a game-changer. We spent weeks trying to optimize our TensorRT engines for deployment, and the performance was still inconsistent. With Aryorithm, we just upload our model, and the inference API is not only fast but incredibly stable. We cut our deployment time by 80% and saw a 2x speedup on our core models.

David Chen
Founder & CEO of InsightFlow Analytics

As a startup, we can't afford a massive DevOps team or expensive GPU infrastructure. Ignition-Hub gave us enterprise-grade inference capabilities on a startup budget. The pay-as-you-go pricing is transparent, and it scales effortlessly as our user base grows. It’s the serverless backend for AI we've been dreaming of.

Maria Garcia
Indie Developer

I was hitting a wall trying to deploy my custom computer vision project. Cloud GPU instances were too complex and expensive. I found Ignition-Hub, and within an hour, I had a live API endpoint for my TensorRT engine. The documentation is fantastic, and the user interface is so intuitive. It's the perfect platform for turning personal projects into real applications.

Our Clients

15,000+ Professionals & Teams Choose Ignition-Hub
