Roboflow
Learn how to integrate Supabase with Roboflow, a tool for running fine-tuned and foundation vision models.
In this guide, we will walk through two examples of using Roboflow Inference to run fine-tuned and foundation models. We will run inference and save predictions using an object detection model and CLIP.
Project setup
Let's create a new Postgres database. This is as simple as starting a new Project in Supabase:
- Create a new project in the Supabase dashboard.
- Enter your project details. Remember to store your password somewhere safe.
Your database will be available in less than a minute.
Finding your credentials:

You can find your project credentials inside the project settings, including:

- Database credentials: connection strings and connection pooler details.
- API credentials: your serverless API URL and anon/service_role keys.
Save computer vision predictions
Once you have a trained vision model, you need to create business logic for your application. In many cases, you want to save inference results to a database.
The steps below show you how to run a vision model locally and save predictions to Supabase.
Preparation: Set up a model
Before you begin, you will need an object detection model trained on your data.
You can train a model on Roboflow, leveraging end-to-end tools from data management and annotation to deployment, or upload custom model weights for deployment.
All models are available through a scalable hosted API through which you can query your model, and can also be run locally.
For this guide, we will use a demo rock, paper, scissors model.
Step 1: Install and start Roboflow Inference
We will deploy the model locally using Roboflow Inference, a computer vision inference server.
To install and start Roboflow Inference, first install Docker on your machine.
Then, run:
```shell
pip install inference inference-cli inference-sdk && inference server start
```
An inference server will be available at http://localhost:9001.
Step 2: Run inference on an image
You can run inference on images and videos. Let's run inference on an image.
Create a new Python file and add the following code:
```python
from inference_sdk import InferenceHTTPClient

image = "example.jpg"
MODEL_ID = "rock-paper-scissors-sxsw/11"

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="ROBOFLOW_API_KEY"
)
with client.use_model(MODEL_ID):
    predictions = client.infer(image)

print(predictions)
```
Above, replace:

- example.jpg with the path to the image on which you want to run inference.
- ROBOFLOW_API_KEY with your Roboflow API key. Learn how to retrieve your Roboflow API key.
- MODEL_ID with your Roboflow model ID. Learn how to retrieve your model ID.
When you run the code above, the predictions will be printed to the console:
```python
{'time': 0.05402109300121083, 'image': {'width': 640, 'height': 480}, 'predictions': [{'x': 312.5, 'y': 392.0, 'width': 255.0, 'height': 110.0, 'confidence': 0.8620790839195251, 'class': 'Paper', 'class_id': 0}]}
```
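Each prediction reports a box center (x, y) plus a width and height, in pixels. As a minimal sketch (using a hard-coded copy of the sample output above; the confidence threshold and the corner field names are our own, not part of the Roboflow response), you can convert those boxes to corner coordinates and drop low-confidence detections:

```python
# Sample response in the same shape as the output above.
predictions = {
    "time": 0.054,
    "image": {"width": 640, "height": 480},
    "predictions": [
        {"x": 312.5, "y": 392.0, "width": 255.0, "height": 110.0,
         "confidence": 0.8620790839195251, "class": "Paper", "class_id": 0}
    ],
}

CONFIDENCE_THRESHOLD = 0.5  # arbitrary cutoff for this sketch

boxes = []
for p in predictions["predictions"]:
    if p["confidence"] < CONFIDENCE_THRESHOLD:
        continue
    # Convert center-based boxes to top-left / bottom-right corners.
    boxes.append({
        "class": p["class"],
        "x0": p["x"] - p["width"] / 2,   # left
        "y0": p["y"] - p["height"] / 2,  # top
        "x1": p["x"] + p["width"] / 2,   # right
        "y1": p["y"] + p["height"] / 2,  # bottom
    })

print(boxes)
```

Corner coordinates in this form are what most drawing and cropping utilities expect.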
Step 3: Save results in Supabase
To save results in Supabase, you need a predictions table with a text filename column and a JSON predictions column to hold the model output. Once the table exists, add the following code to your script:
```python
import os
from supabase import create_client, Client

url: str = os.environ.get("SUPABASE_URL")
key: str = os.environ.get("SUPABASE_KEY")
supabase: Client = create_client(url, key)

result = supabase.table('predictions') \
    .insert({"filename": image, "predictions": predictions}) \
    .execute()
```
You can then query your predictions using the following code:
```python
result = supabase.table('predictions') \
    .select("predictions") \
    .filter("filename", "eq", image) \
    .execute()

print(result)
```
Here is an example result:
```python
data=[{'predictions': {'time': 0.08492901099998562, 'image': {'width': 640, 'height': 480}, 'predictions': [{'x': 312.5, 'y': 392.0, 'width': 255.0, 'height': 110.0, 'confidence': 0.8620790839195251, 'class': 'Paper', 'class_id': 0}]}}, {'predictions': {'time': 0.08818970100037404, 'image': {'width': 640, 'height': 480}, 'predictions': [{'x': 312.5, 'y': 392.0, 'width': 255.0, 'height': 110.0, 'confidence': 0.8620790839195251, 'class': 'Paper', 'class_id': 0}]}}] count=None
```
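The query result's data attribute is a list of rows, each holding the JSON you inserted. As a minimal sketch (using a hard-coded copy of the example rows above, with trimmed timings), you can walk those rows to tally detections per class:

```python
from collections import Counter

# Hard-coded copy of the example query result's data list.
data = [
    {"predictions": {"time": 0.0849, "image": {"width": 640, "height": 480},
     "predictions": [{"x": 312.5, "y": 392.0, "width": 255.0, "height": 110.0,
                      "confidence": 0.862, "class": "Paper", "class_id": 0}]}},
    {"predictions": {"time": 0.0882, "image": {"width": 640, "height": 480},
     "predictions": [{"x": 312.5, "y": 392.0, "width": 255.0, "height": 110.0,
                      "confidence": 0.862, "class": "Paper", "class_id": 0}]}},
]

# Each row wraps the full inference response, so the detections live
# under row["predictions"]["predictions"].
counts = Counter(
    detection["class"]
    for row in data
    for detection in row["predictions"]["predictions"]
)
print(counts)  # Counter({'Paper': 2})
```

With a live client, the same loop runs over result.data instead of the hard-coded list.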
Calculate and save CLIP embeddings
You can use the Supabase vector database functionality to store and query CLIP embeddings.
Roboflow Inference provides an HTTP interface through which you can calculate image and text embeddings using CLIP.
Step 1: Install and start Roboflow Inference
Follow Step 1 above to install and start Roboflow Inference.
Step 2: Run CLIP on an image
Create a new Python file and add the following code:
```python
import base64
import os

import requests

IMAGE_DIR = "images/train/images/"
API_KEY = ""
SERVER_URL = "http://localhost:9001"

results = []

for i, image in enumerate(os.listdir(IMAGE_DIR)):
    print(f"Processing image {image}")
    infer_clip_payload = {
        "image": {
            "type": "base64",
            "value": base64.b64encode(open(IMAGE_DIR + image, "rb").read()).decode("utf-8"),
        },
    }

    res = requests.post(
        f"{SERVER_URL}/clip/embed_image?api_key={API_KEY}",
        json=infer_clip_payload,
    )

    embeddings = res.json()['embeddings']

    results.append({
        "filename": image,
        "embeddings": embeddings
    })
```
This code will calculate CLIP embeddings for each image in the directory and collect them in the results list.
Above, replace:

- IMAGE_DIR with the directory containing the images on which you want to run inference.
- API_KEY with your Roboflow API key. Learn how to retrieve your Roboflow API key.
You can also calculate CLIP embeddings in the cloud by setting SERVER_URL to https://infer.roboflow.com.
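CLIP maps images and text into the same embedding space, so a text embedding can be compared with image embeddings directly; the vector queries in the next step do exactly this at scale. As a minimal sketch with hypothetical low-dimensional vectors (real CLIP embeddings from these endpoints are 512-dimensional), cosine similarity ranks images against a text query:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings for illustration only.
text_embedding = [0.1, 0.3, 0.5, 0.2]
image_embeddings = {
    "cat.jpg": [0.1, 0.29, 0.52, 0.2],
    "car.jpg": [0.9, -0.2, 0.1, 0.0],
}

# Rank images by similarity to the text embedding, best match first.
ranked = sorted(
    image_embeddings.items(),
    key=lambda item: cosine_similarity(text_embedding, item[1]),
    reverse=True,
)
print(ranked[0][0])  # cat.jpg
```

A vector database performs the same nearest-neighbor ranking with an index instead of a linear scan.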
Step 3: Save embeddings in Supabase
You can store your image embeddings in Supabase using the Supabase vecs Python package.

First, install vecs:
```shell
pip install vecs
```
Next, add the following code to your script to create an index:
```python
import vecs

DB_CONNECTION = "postgresql://postgres:[password]@[host]:[port]/[database]"

vx = vecs.create_client(DB_CONNECTION)

# create a collection of 512-dimensional vectors (the size of a CLIP embedding)
images = vx.get_or_create_collection(name="image_vectors", dimension=512)

for result in results:
    image = result["filename"]
    embeddings = result["embeddings"][0]

    # insert a vector into the collection
    images.upsert(
        records=[
            (
                image,
                embeddings,
                {}  # metadata
            )
        ]
    )

images.create_index()
```
Replace DB_CONNECTION with the authentication information for your database. You can retrieve this from the Supabase dashboard in Project Settings > Database Settings.
You can then query your embeddings using the following code:
```python
infer_clip_payload = {
    "text": "cat",
}

res = requests.post(
    f"{SERVER_URL}/clip/embed_text?api_key={API_KEY}",
    json=infer_clip_payload,
)

embeddings = res.json()['embeddings']

result = images.query(
    data=embeddings[0],
    limit=1
)

print(result[0])
```