hf-to-open-api-wrapper
Does Hugging Face provide a free OpenAPI-compatible API key that can be used?
Hugging Face provides free API access, but not in the form of an OpenAPI-compatible API key by default. Instead, they offer:
- A free, rate-limited serverless Inference API, authenticated with a Hugging Face access token
- Paid Inference Endpoints for dedicated, production-grade deployments
- Client libraries such as huggingface_hub for calling hosted models
If you need an OpenAPI-compatible API key, you might need to use third-party services that wrap Hugging Face's APIs into OpenAPI-compliant interfaces or manually create an OpenAPI spec for your use case.
Would you like help with integrating Hugging Face's API in a project?
I am okay with rate limits. I want an API for use with AI CLI chat tools like aichat. Is there an open-source solution that lets me use the Hugging Face Inference API but give the other tools a wrapper API key?
Yes! You can create a simple API wrapper around the Hugging Face Inference API that provides an OpenAPI-compatible API key for AI CLI tools like aichat. Here's how to do it using FastAPI (which generates an OpenAPI spec automatically) and the Hugging Face Inference API.
First, get a free Hugging Face API token from: 🔗 https://huggingface.co/settings/tokens
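Before building the wrapper, you can sanity-check the token with a direct call to the Inference API (a minimal sketch; the facebook/opt-1.3b model is only an example, and it assumes HF_API_KEY is set in your environment):

# Quick sanity check of the Hugging Face token via the Inference API.
# Assumes HF_API_KEY is exported in your shell; the model name is an example.
import os
import requests

resp = requests.post(
    "https://api-inference.huggingface.co/models/facebook/opt-1.3b",
    headers={"Authorization": f"Bearer {os.environ['HF_API_KEY']}"},
    json={"inputs": "Hello, world!"},
)
print(resp.status_code, resp.json())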
Here’s a simple Python API wrapper:
from fastapi import FastAPI, Request, HTTPException, Depends, Header
import requests
import os

app = FastAPI()

# Configuration
HF_API_KEY = os.getenv("HF_API_KEY", "your_huggingface_api_key_here")
VALID_API_KEYS = {"your_custom_api_key_here"}  # Define your own API keys

# Hugging Face Inference API endpoint
HF_ENDPOINT = "https://api-inference.huggingface.co/models/facebook/opt-1.3b"  # Change model as needed

def authenticate(authorization: str = Header(...)) -> str:
    """Simple API key check against the Authorization header."""
    api_key = authorization.removeprefix("Bearer ").strip()
    if api_key not in VALID_API_KEYS:
        raise HTTPException(status_code=401, detail="Invalid API key")
    return api_key

@app.post("/chat")
async def chat(request: Request, api_key: str = Depends(authenticate)):
    """Forward the prompt to the Hugging Face model and return its response."""
    data = await request.json()
    input_text = data.get("text")
    if not input_text:
        raise HTTPException(status_code=400, detail="Text is required")
    headers = {"Authorization": f"Bearer {HF_API_KEY}"}
    response = requests.post(HF_ENDPOINT, json={"inputs": input_text}, headers=headers)
    if response.status_code != 200:
        raise HTTPException(status_code=response.status_code, detail=response.text)
    return response.json()
Save the above as app.py and install the dependencies:
pip install fastapi uvicorn requests
Run it locally:
uvicorn app:app --host 0.0.0.0 --port 8000
Now your wrapper API runs at http://localhost:8000/chat, and FastAPI serves the auto-generated OpenAPI docs at http://localhost:8000/docs.
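To check the wrapper end to end, call it the way any client would (a sketch; the bearer token must match an entry in VALID_API_KEYS):

# Test the running wrapper from another terminal.
import requests

resp = requests.post(
    "http://localhost:8000/chat",
    headers={"Authorization": "Bearer your_custom_api_key_here"},
    json={"text": "Tell me a joke."},
)
print(resp.status_code, resp.json())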
In aichat, point the tool at the wrapper, for example:
aichat --api-key your_custom_api_key_here --endpoint http://localhost:8000/chat
(The exact flag or config-file syntax varies between aichat versions, so check its documentation.)
This lets you use Hugging Face models through an OpenAPI-documented endpoint with an API key you control.
Possible next steps:
✅ Dockerize for easy deployment.
✅ Rate-limit requests using slowapi (see the sketch after this list).
✅ Deploy to Railway, Fly.io, or a small VPS.
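For the rate-limiting step, a minimal self-contained sketch with slowapi could look like this (the 10/minute quota and the stub handler body are assumptions; in practice you would merge the limiter into app.py):

# Minimal slowapi example: per-client-IP rate limiting on a FastAPI route.
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

app = FastAPI()
limiter = Limiter(key_func=get_remote_address)  # key requests by client IP
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/chat")
@limiter.limit("10/minute")  # slowapi requires the endpoint to accept a Request
async def chat(request: Request):
    # In app.py, this is where the Hugging Face call from above would go.
    return {"ok": True}

With this in place, the eleventh request from the same IP within a minute receives a 429 response.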
Would you like help with deployment? 🚀