Team Plans

Collaborate and deploy functions with your team.

Developer

$0

plus usage, per month.

Start building and deploying prediction functions in your apps.

What's included

  • Only pay for usage
  • Create cloud predictors
  • Unlimited secrets
  • Community support

Organization

$120

plus usage, per month.

Go from prototype to production with your team.

What's included

  • 4 seats included; extra seats $30/seat/mo
  • Create edge predictors
  • Warm predictors: no cold starts
  • Multi-GPU acceleration

Enterprise

Custom

Deploy Function across your enterprise, to both internal teams and customers.

What's included

  • Deploy Function on-prem
  • Bring your own cluster
  • Edge model encryption
  • Enhanced, private support

Below are features included in each plan:

                            Developer           Organization        Enterprise
Seats                       1                   4 + $30/seat/mo     Custom
Create Cloud Predictors¹    ✓                   ✓                   ✓
Create Edge Predictors²     —                   ✓                   ✓
Create Warm Predictors³     —                   ✓                   ✓
Multi-GPU Acceleration      —                   ✓                   ✓
Bring your own Cluster      —                   —                   ✓
Edge Model Encryption       —                   —                   ✓
Support                     Community Discord   Community Discord   Private Slack
  1. Cloud predictors run predictions server-side, in our GPU cloud.
  2. Edge predictors run predictions on the local device, drastically saving on compute spend.
  3. Warm predictors are cloud predictors that have at least one replica always available, eliminating prediction latency caused by cold starts.

Cloud Prediction Pricing

Pay for the amount of time that your prediction takes.

Function charges¹ per second of prediction time², depending on acceleration (hardware tier):

Acceleration          Price
CPU                   $0.0001 / second
Nvidia A40            $0.0006 / second
Nvidia A100 (40GB)    $0.0012 / second
  1. The user who makes the prediction gets charged, not the predictor owner.
  2. Function does not charge for time spent on cold starts.
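
As a worked example of per-second billing, here is a short sketch using the rates from the table above; the 2.5-second prediction time is an illustrative assumption, not a published figure:

```python
# Per-second cloud prediction rates from the pricing table above.
RATES_PER_SECOND = {
    "cpu": 0.0001,
    "a40": 0.0006,
    "a100_40gb": 0.0012,
}

def prediction_cost(seconds: float, acceleration: str) -> float:
    """Cost in USD for one cloud prediction (cold-start time is not billed)."""
    return seconds * RATES_PER_SECOND[acceleration]

# A hypothetical 2.5-second prediction on an Nvidia A40:
print(f"${prediction_cost(2.5, 'a40'):.4f}")  # $0.0015
```

Note that cold-start time is excluded from the billed duration, so only the time the prediction itself runs is multiplied by the rate.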

Edge Prediction Pricing

Pay once per download, then make unlimited on-device predictions at no extra cost.

Function charges¹ per predictor download², depending on whether auto-tune is enabled:

Tier                         Price
Edge Predictor               $0.01 / download
Auto-tune Edge Predictor³    $0.02 / download
  1. The user who makes the prediction gets charged, not the predictor owner.
  2. Edge predictors are cached on-device and power infinitely many predictions at zero cost.
  3. Auto-tune finds the best runtime configuration (CUDA, Metal, TensorRT, etc.) to maximize performance for each unique device. See the docs.
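
To see when a one-time edge download pays for itself against per-prediction cloud billing, here is a hypothetical break-even sketch; the 1-second CPU-tier prediction used for comparison is an assumption, not a published figure:

```python
# Prices in micro-dollars (1 USD = 1_000_000 µ$) so the arithmetic is exact.
EDGE_DOWNLOAD = 10_000        # $0.01: one edge predictor download
CLOUD_PER_PREDICTION = 100    # $0.0001: an assumed 1 s prediction on the CPU tier

# Ceiling division: the smallest prediction count at which the one-time
# download costs no more than making every prediction in the cloud.
break_even = -(-EDGE_DOWNLOAD // CLOUD_PER_PREDICTION)
print(break_even)  # 100
```

Under these assumptions, an edge predictor becomes cheaper than cloud after about 100 predictions per device; longer or GPU-accelerated cloud predictions would lower that threshold further.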

Frequently Asked Questions

Answering a few common questions.