GitHub Actions
Ubicloud runners come with dedicated CPU, memory, local block storage, and a public IPv4 address. Our GitHub Managed Runner Application allocates virtual machines (VMs) across Germany and Finland regions to provide high availability. Every account gets a $1/month credit that’s equivalent to 1,250 minutes of Ubicloud runner time.
The ubicloud-standard-2 tag defaults to Linux x64 with 2 vCPUs. Given Ubicloud's price advantage, we recommend trying 4 vCPUs if your pipeline benefits from parallelism.
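To select a Ubicloud runner, point the `runs-on` key of a job at one of the tags below. A minimal sketch (the workflow, job, and step names are placeholders, not part of Ubicloud's API):

```yaml
# .github/workflows/ci.yml — illustrative only
name: CI
on: [push]
jobs:
  build:
    runs-on: ubicloud-standard-2  # swap in ubicloud-standard-4 for more parallelism
    steps:
      - uses: actions/checkout@v4
      - run: make test  # replace with your own build/test command
```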
Linux x64
| YAML runner tag | vCPU | Memory | Price |
|---|---|---|---|
| ubicloud-standard-2 | 2 | 8GB | $0.0008/min |
| ubicloud-standard-4 | 4 | 16GB | $0.0016/min |
| ubicloud-standard-8 | 8 | 32GB | $0.0032/min |
| ubicloud-standard-16 | 16 | 64GB | $0.0064/min |
| ubicloud-standard-30 | 30 | 120GB | $0.0120/min |
Linux arm64
| YAML runner tag | vCPU | Memory | Price |
|---|---|---|---|
| ubicloud-standard-2-arm | 2 | 6GB | $0.0008/min |
| ubicloud-standard-4-arm | 4 | 12GB | $0.0016/min |
| ubicloud-standard-8-arm | 8 | 24GB | $0.0032/min |
| ubicloud-standard-16-arm | 16 | 48GB | $0.0064/min |
| ubicloud-standard-30-arm | 30 | 90GB | $0.0120/min |
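Arm64 runners are selected with `runs-on` just like x64 ones, and a build matrix can target both architectures in a single workflow. A sketch, assuming your build commands are architecture-agnostic:

```yaml
jobs:
  build:
    strategy:
      matrix:
        runner: [ubicloud-standard-4, ubicloud-standard-4-arm]  # x64 and arm64
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: uname -m  # x86_64 on the x64 runner, aarch64 on the arm64 runner
```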
GPU runners (beta)
| YAML runner tag | vCPU | Memory | GPU | Price |
|---|---|---|---|---|
| ubicloud-gpu | 6 | 32GB | 1x RTX 4000 Ada (20GB VRAM) | $0.032/min |
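A GPU job is requested the same way as any other runner. A sketch; note that the `nvidia-smi` step assumes NVIDIA drivers and tooling are preinstalled on the GPU image, which is an assumption about the runner environment:

```yaml
jobs:
  gpu-job:
    runs-on: ubicloud-gpu
    steps:
      - uses: actions/checkout@v4
      - run: nvidia-smi  # assumption: driver tooling is available; verify on your image
```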
Concurrency Limits
By default, all Ubicloud for GitHub Actions accounts can run a total of 128 concurrent vCPUs. This limit caps the number of vCPUs that can be active in parallel across all of your runners; a higher limit lets queued workflow runs complete faster.
For example, with the default concurrency limit of 128 vCPUs, you can run 64 x ubicloud-standard-2 instances in parallel. Or, you can concurrently run 16 x ubicloud-standard-4 and 8 x ubicloud-standard-8 instances.
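The arithmetic above can be made explicit in a matrix workflow: 32 parallel jobs on ubicloud-standard-4 request 32 x 4 = 128 vCPUs, exactly the default limit, and any further jobs queue until capacity frees up. A sketch using GitHub Actions' `max-parallel` setting (the shard values and command are placeholders):

```yaml
jobs:
  test:
    strategy:
      max-parallel: 32           # 32 jobs x 4 vCPUs = 128 vCPUs, the default limit
      matrix:
        shard: [1, 2, 3, 4]      # placeholder shard list; extend as needed
    runs-on: ubicloud-standard-4
    steps:
      - run: ./run-shard.sh ${{ matrix.shard }}  # placeholder test command
```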
If your project requires additional concurrency beyond the default limits, you can add Concurrency Extensions to your account. Each extension increases your capacity by 64 vCPUs for an additional fee. To learn more or buy additional Concurrency Extensions, please contact us at [email protected].