Tetra is a Python SDK that streamlines the development and deployment of AI workflows on RunPod's Serverless infrastructure. It provides an abstraction layer that lets you define, execute, and monitor sophisticated AI pipelines using nothing but Python code and your local terminal, eliminating the need to interact with the RunPod console GUI.
Learn how to code Tetra workflows in serial and parallel by following this step-by-step tutorial:
You can also start by cloning the Tetra repository and running the examples inside:
```sh
git clone https://github.com/runpod/tetra-examples.git
```
Tetra provides several advantages over vanilla Serverless:
Tetra lets you specify hardware requirements at the function level through the `LiveServerless` object. This gives you granular control over GPU allocation and worker scaling limits.
For example:
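The sketch below illustrates the function-level pattern: a `LiveServerless` object declaring hardware requirements, attached to a function with the `@remote` decorator. The specific field names (`name`, `workersMax`), the `dependencies` parameter, and the resource values are assumptions for illustration; consult the tetra-examples repository for the current API.

```python
# Illustrative sketch only -- field names and decorator parameters are
# assumptions; check the tetra-examples repo for the exact current API.
from tetra_rp import remote, LiveServerless

# Hardware requirements declared per function, not per project
# (hypothetical endpoint name and scaling limit).
gpu_config = LiveServerless(
    name="example-endpoint",
    workersMax=3,  # assumed worker scaling-limit field
)

@remote(resource_config=gpu_config, dependencies=["torch"])
def generate(prompt: str) -> str:
    # Imports inside the function resolve on the remote GPU worker,
    # so torch does not need to be installed locally.
    import torch
    return f"{prompt} -> ran on {torch.cuda.get_device_name(0)}"
```

Because the resource configuration lives next to the function, different functions in the same workflow can target different GPU types or scaling limits without touching the RunPod console.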