Using Hugging Face Models via API
Hugging Face's Inference API provides a simple way to access and interact with a wide variety of pre-trained machine learning models over HTTP. You can use these models for tasks like text generation, sentiment analysis, and translation directly from your application.
1. Get an API Key
- Sign up or log in at huggingface.co.
- Go to your account settings and find the Access Tokens section.
- Create a new token and copy it. You will use this token as your API key; the snippet after this list shows one way to keep it out of your source code.
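A common pattern is to store the token in an environment variable and read it at runtime rather than hard-coding it. A minimal sketch in Python (the variable name HF_API_KEY is just an example, not something Hugging Face requires):

import os

# Expects the token to have been set beforehand, e.g. in your shell:
#   export HF_API_KEY="hf_..."
api_key = os.environ["HF_API_KEY"]
headers = {"Authorization": f"Bearer {api_key}"}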
2. Make an API Request
Example: Text Generation with Python
import requests

API_URL = "https://api-inference.huggingface.co/models/gpt2"
headers = {"Authorization": "Bearer YOUR_HF_API_KEY"}

def query(payload):
    # Send the input payload to the Inference API and return the parsed JSON response.
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

data = query({"inputs": "Once upon a time,"})
print(data)
Replace YOUR_HF_API_KEY with your actual key.
Example: cURL
curl -X POST \
-H "Authorization: Bearer YOUR_HF_API_KEY" \
-H "Content-Type: application/json" \
-d '{"inputs": "Once upon a time,"}' \
https://api-inference.huggingface.co/models/gpt2
3. Choosing a Model
Replace gpt2 in the API URL with any other model available on the Hugging Face Model Hub. For example, for sentiment analysis, use distilbert-base-uncased-finetuned-sst-2-english:
API_URL = "https://api-inference.huggingface.co/models/distilbert-base-uncased-finetuned-sst-2-english"
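With API_URL pointed at the sentiment model, the query helper from the Python example above can be reused as-is. A quick sketch (the printed response shape is illustrative and can vary by model):

result = query({"inputs": "I love this library!"})
print(result)
# Classification models typically return label/score pairs, e.g.
# [[{"label": "POSITIVE", "score": 0.99}, {"label": "NEGATIVE", "score": 0.01}]]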
4. Handling Responses
The API returns JSON containing the model's output. The exact structure depends on the task: text-generation models generally return a list of generated-text objects, while classification models return label/score pairs. Parse these in your application as needed.
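As a sketch of how that parsing might look for the text-generation example above (this reuses API_URL and headers from earlier, and assumes the API's usual behavior of returning an error field alongside a non-200 status, for instance while a model is still loading):

response = requests.post(API_URL, headers=headers, json={"inputs": "Once upon a time,"})
data = response.json()

if response.status_code != 200:
    # e.g. {"error": "Model gpt2 is currently loading", "estimated_time": ...}
    print("Request failed:", data)
else:
    # Text-generation models usually return a list of {"generated_text": ...} objects.
    print(data[0]["generated_text"])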
5. Notes
- The free tier has rate limits. For higher usage, consider a paid plan. A simple retry sketch for rate-limited requests is shown after these notes.
- Check each model’s documentation for specific input/output formats and supported features.
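If you hit a rate limit (HTTP 429) or a model that is still loading (HTTP 503), retrying with a short delay is usually enough for occasional use. A rough sketch building on the Python example above (the retry count and wait time are arbitrary choices):

import time

def query_with_retry(payload, retries=5, wait_seconds=10):
    # Retry when the API reports rate limiting (429) or a model still loading (503).
    for attempt in range(retries):
        response = requests.post(API_URL, headers=headers, json=payload)
        if response.status_code not in (429, 503):
            return response.json()
        time.sleep(wait_seconds)
    response.raise_for_status()

data = query_with_retry({"inputs": "Once upon a time,"})
print(data)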