Rate Limits
The Influship API implements rate limiting to ensure fair usage and system stability. This guide explains how to work with rate limits effectively.
Rate Limit Overview
Rate limits are applied per API key and vary by your billing plan. All responses include rate limit information in headers for easy monitoring.
Monitor Every Request - Check RateLimit-Limit, RateLimit-Remaining, and RateLimit-Reset headers to stay within limits.
Rate Limit Tiers
Free Tier
- Requests per hour: 1,000
- Burst capacity: 100 requests/minute
- Perfect for: Development, testing, small applications, proof of concepts
- Cost: Free forever
Pro Tier
- Requests per hour: 10,000
- Burst capacity: 500 requests/minute
- Perfect for: Production applications, agencies, growing businesses
- Cost: Contact sales
Enterprise
- Requests per hour: Custom (typically 50,000+)
- Burst capacity: Custom
- Perfect for: High-volume applications, platforms, white-label solutions
- Cost: Custom pricing
Rate Limit Headers
Every API response includes these headers:
HTTP/1.1 200 OK
RateLimit-Limit: 10000
RateLimit-Remaining: 9997
RateLimit-Reset: 1640995200
| Header | Type | Description | Example |
|---|---|---|---|
| RateLimit-Limit | Integer | Total requests allowed in current window | 10000 |
| RateLimit-Remaining | Integer | Requests remaining in current window | 9997 |
| RateLimit-Reset | Integer | Unix timestamp (seconds) when limit resets | 1640995200 |
Parse these headers on each response to track your usage:
function checkRateLimits(response) {
const limit = parseInt(response.headers.get('RateLimit-Limit'));
const remaining = parseInt(response.headers.get('RateLimit-Remaining'));
const reset = parseInt(response.headers.get('RateLimit-Reset'));
const used = limit - remaining;
const usagePercent = (used / limit * 100).toFixed(1);
const resetDate = new Date(reset * 1000);
console.log(`Rate Limit Status:
Used: ${used}/${limit} (${usagePercent}%)
Remaining: ${remaining}
Resets: ${resetDate.toLocaleString()}
`);
return { limit, remaining, reset, usagePercent: parseFloat(usagePercent) };
}
// Usage
const response = await fetch('https://api.influship.com/v1/search', options);
const rateLimit = checkRateLimits(response);
if (rateLimit.usagePercent > 80) {
console.warn('⚠️ Approaching rate limit!');
}
Handling Rate Limit Errors
429 Too Many Requests
When you exceed your rate limit, you'll receive a 429 response:
{
"error": {
"code": "rate_limit_exceeded",
"message": "Rate limit exceeded. Try again later.",
"details": {
"limit": 10000,
"remaining": 0,
"reset": 1640995200,
"retry_after": 3600
},
"request_id": "req_7a8b9c0d1e2f3g4h"
}
}
Additional Headers:
HTTP/1.1 429 Too Many Requests
RateLimit-Limit: 10000
RateLimit-Remaining: 0
RateLimit-Reset: 1640995200
Retry-After: 3600
Implementing Exponential Backoff
async function makeRequestWithBackoff(url, options, maxRetries = 3) {
for (let attempt = 0; attempt < maxRetries; attempt++) {
try {
const response = await fetch(url, options);
// Handle rate limiting
if (response.status === 429) {
const retryAfter = response.headers.get('Retry-After');
const delay = retryAfter
? parseInt(retryAfter) * 1000
: Math.pow(2, attempt) * 1000; // Exponential backoff
console.log(`Rate limited. Retrying in ${delay}ms... (attempt ${attempt + 1}/${maxRetries})`);
if (attempt < maxRetries - 1) {
await new Promise(resolve => setTimeout(resolve, delay));
continue;
}
}
return response;
} catch (error) {
if (attempt === maxRetries - 1) throw error;
const delay = Math.pow(2, attempt) * 1000;
console.log(`Request failed. Retrying in ${delay}ms...`);
await new Promise(resolve => setTimeout(resolve, delay));
}
}
}
// Usage
try {
const response = await makeRequestWithBackoff(
'https://api.influship.com/v1/search',
{
method: 'POST',
headers: { 'X-API-Key': API_KEY },
body: JSON.stringify({ query: 'fitness influencers' })
}
);
const data = await response.json();
console.log(data);
} catch (error) {
console.error('Request failed after retries:', error);
}
Python Implementation
import os
import time

import requests
def make_request_with_backoff(url, max_retries=3, **kwargs):
"""Make request with exponential backoff for rate limits"""
for attempt in range(max_retries):
response = requests.request(**kwargs, url=url)
if response.status_code == 429:
retry_after = response.headers.get('Retry-After')
delay = int(retry_after) if retry_after else 2 ** attempt
print(f'Rate limited. Retrying in {delay}s... (attempt {attempt + 1}/{max_retries})')
if attempt < max_retries - 1:
time.sleep(delay)
continue
response.raise_for_status()
return response.json()
raise Exception('Max retries exceeded')
# Usage
try:
data = make_request_with_backoff(
'https://api.influship.com/v1/search',
method='POST',
headers={'X-API-Key': os.environ['INFLUSHIP_API_KEY']},
json={'query': 'fitness influencers'}
)
print(data)
except Exception as e:
print(f'Request failed: {e}')
Best Practices
1. Monitor Rate Limit Usage
Track usage proactively to avoid hitting limits:
class RateLimitMonitor {
constructor(warningThreshold = 0.8, criticalThreshold = 0.95) {
this.warningThreshold = warningThreshold;
this.criticalThreshold = criticalThreshold;
this.history = [];
}
check(response) {
const limit = parseInt(response.headers.get('RateLimit-Limit'));
const remaining = parseInt(response.headers.get('RateLimit-Remaining'));
const reset = parseInt(response.headers.get('RateLimit-Reset'));
const used = limit - remaining;
const usage = used / limit;
// Log to history
this.history.push({
timestamp: Date.now(),
limit,
remaining,
reset,
usage
});
// Alert based on thresholds
if (usage > this.criticalThreshold) {
console.error(`🚨 CRITICAL: ${(usage * 100).toFixed(1)}% rate limit used!`);
this.sendAlert('critical', usage, remaining, reset);
} else if (usage > this.warningThreshold) {
console.warn(`⚠️ WARNING: ${(usage * 100).toFixed(1)}% rate limit used`);
this.sendAlert('warning', usage, remaining, reset);
}
return { limit, remaining, reset, usage };
}
sendAlert(level, usage, remaining, reset) {
// Send to monitoring service
// analytics.track('rate_limit_alert', { level, usage, remaining, reset });
}
getStats() {
if (this.history.length === 0) return null;
const latest = this.history[this.history.length - 1];
const avgUsage = this.history.reduce((sum, h) => sum + h.usage, 0) / this.history.length;
return {
current: latest,
averageUsage: (avgUsage * 100).toFixed(1) + '%',
totalRequests: this.history.length
};
}
}
// Usage
const monitor = new RateLimitMonitor(0.8, 0.95);
async function monitoredRequest(url, options) {
const response = await fetch(url, options);
monitor.check(response);
return response;
}
// Check stats periodically
setInterval(() => {
const stats = monitor.getStats();
console.log('Rate Limit Stats:', stats);
}, 60000); // Every minute
2. Implement Request Queuing
Control request rate to stay within limits:
class RateLimitQueue {
constructor(requestsPerSecond = 2) {
this.queue = [];
this.interval = 1000 / requestsPerSecond;
this.processing = false;
this.lastRequestTime = 0;
}
async add(requestFn) {
return new Promise((resolve, reject) => {
this.queue.push({ requestFn, resolve, reject });
this.process();
});
}
async process() {
if (this.processing || this.queue.length === 0) return;
this.processing = true;
while (this.queue.length > 0) {
const now = Date.now();
const timeSinceLastRequest = now - this.lastRequestTime;
// Wait if we're going too fast
if (timeSinceLastRequest < this.interval) {
await new Promise(resolve =>
setTimeout(resolve, this.interval - timeSinceLastRequest)
);
}
const { requestFn, resolve, reject } = this.queue.shift();
this.lastRequestTime = Date.now();
try {
const result = await requestFn();
resolve(result);
} catch (error) {
reject(error);
}
}
this.processing = false;
}
}
// Usage
const queue = new RateLimitQueue(2); // 2 requests per second
async function queuedSearch(query) {
return queue.add(async () => {
const response = await fetch('https://api.influship.com/v1/search', {
method: 'POST',
headers: { 'X-API-Key': API_KEY },
body: JSON.stringify({ query })
});
return response.json();
});
}
// Make multiple requests (will be queued)
const results = await Promise.all([
queuedSearch('fitness'),
queuedSearch('fashion'),
queuedSearch('tech')
]);
3. Batch Requests
Reduce API calls by using batch endpoints:
// ❌ Inefficient - 100 separate requests
for (const creatorId of creatorIds) {
await getCreator(creatorId); // 100 API calls
}
// ✅ Efficient - 1 batch request
const creators = await fetch('https://api.influship.com/v1/creators', {
method: 'POST',
headers: { 'X-API-Key': API_KEY },
body: JSON.stringify({ creator_ids: creatorIds })
}).then(r => r.json());
4. Cache Aggressively
Cache responses to reduce API calls:
class APICache {
constructor(ttl = 3600000) { // 1 hour default
this.cache = new Map();
this.ttl = ttl;
}
set(key, value) {
this.cache.set(key, {
value,
timestamp: Date.now()
});
}
get(key) {
const item = this.cache.get(key);
if (!item) return null;
// Check if expired
if (Date.now() - item.timestamp > this.ttl) {
this.cache.delete(key);
return null;
}
return item.value;
}
clear() {
this.cache.clear();
}
}
const cache = new APICache(3600000); // 1 hour TTL
async function getCachedCreator(creatorId) {
const cacheKey = `creator:${creatorId}`;
// Check cache first
const cached = cache.get(cacheKey);
if (cached) {
console.log('Cache hit for:', creatorId);
return cached;
}
// Fetch from API
const creator = await fetch(`https://api.influship.com/v1/creators`, {
method: 'POST',
headers: { 'X-API-Key': API_KEY },
body: JSON.stringify({ creator_ids: [creatorId] })
}).then(r => r.json());
// Cache result
cache.set(cacheKey, creator);
return creator;
}
Rate Limiting Strategies
Adaptive Rate Limiting
Automatically adjust request rate based on response headers:
class AdaptiveRateLimiter {
constructor(initialRate = 10) {
this.requestsPerSecond = initialRate;
this.requestTimes = [];
}
async throttle() {
const now = Date.now();
const oneSecondAgo = now - 1000;
// Remove old request times
this.requestTimes = this.requestTimes.filter(t => t > oneSecondAgo);
// Check if we need to wait
if (this.requestTimes.length >= this.requestsPerSecond) {
const oldestRequest = Math.min(...this.requestTimes);
const waitTime = 1000 - (now - oldestRequest);
if (waitTime > 0) {
await new Promise(resolve => setTimeout(resolve, waitTime));
}
}
this.requestTimes.push(Date.now());
}
adjust(response) {
const limit = parseInt(response.headers.get('RateLimit-Limit'));
const remaining = parseInt(response.headers.get('RateLimit-Remaining'));
const reset = parseInt(response.headers.get('RateLimit-Reset'));
const timeUntilReset = Math.max(1, reset * 1000 - Date.now());
const optimalRate = remaining / (timeUntilReset / 1000);
// Adjust rate conservatively (80% of optimal), never below 1 req/sec
this.requestsPerSecond = Math.max(1, Math.floor(optimalRate * 0.8));
console.log(`Adjusted rate to ${this.requestsPerSecond} req/sec`);
}
}
// Usage
const limiter = new AdaptiveRateLimiter();
async function adaptiveRequest(url, options) {
await limiter.throttle();
const response = await fetch(url, options);
limiter.adjust(response);
return response;
}
Common Scenarios
High-Volume Data Processing
- Use Batch Endpoints: Combine multiple lookups into single requests
- Implement Queuing: Queue requests to stay within rate limits
- Monitor Headers: Check rate limit headers and adjust request frequency
- Consider Upgrading: If you're consistently hitting limits, upgrade your plan
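The first two tips can be combined: split a long ID list into batch-sized chunks and issue one batch request per chunk. This is a sketch reusing the `/v1/creators` batch endpoint and the `API_KEY` placeholder from the earlier examples; the `{ creators: [...] }` response shape is an assumption.

```javascript
// Split a long ID list into batch-sized chunks, so 100 lookups
// become 2 batch requests instead of 100 individual calls.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Hypothetical: fetch all creators, batchSize IDs per request.
async function fetchAllCreators(creatorIds, batchSize = 50) {
  const results = [];
  for (const ids of chunk(creatorIds, batchSize)) {
    const response = await fetch('https://api.influship.com/v1/creators', {
      method: 'POST',
      headers: { 'X-API-Key': API_KEY },
      body: JSON.stringify({ creator_ids: ids })
    });
    const data = await response.json();
    results.push(...(data.creators ?? [])); // response shape is an assumption
  }
  return results;
}
```

Combine this with the `RateLimitQueue` shown earlier if you need to pace the batch requests themselves.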
Real-Time Applications
- Implement Debouncing: For autocomplete and search-as-you-type, use a 300ms+ debounce
- Cache Aggressively: Cache frequently accessed data for 5-10 minutes
- Optimize Queries: Use filters and pagination to reduce data transfer
- Monitor Usage: Track request patterns to identify optimization opportunities
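The debounce tip above can be sketched with a small helper; the search callback here is a hypothetical stand-in for your API call.

```javascript
// Debounce: delay invocation until `wait` ms have passed with no new call,
// so a burst of keystrokes produces a single API request.
function debounce(fn, wait = 300) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Hypothetical usage for search-as-you-type:
const debouncedSearch = debounce(query => {
  console.log('searching for', query);
}, 300);

debouncedSearch('f');
debouncedSearch('fi');
debouncedSearch('fit'); // only this call runs, ~300 ms after the last keystroke
```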
Troubleshooting
Getting 429 errors frequently
Solutions:
- Check your request frequency
- Implement exponential backoff
- Use batch endpoints
- Consider upgrading your plan
- Review caching strategy
Rate limits seem lower than expected
Check:
- Verify you're using the correct API key
- Confirm your billing plan in response headers
- Contact support for custom limits
- Review if batch endpoints could help
Hitting burst or per-minute limits
Solutions:
- Implement request queuing
- Add delays between rapid requests
- Use batch endpoints for bulk operations
- Review request patterns for spikes
Testing Rate Limits
Test your rate limit handling in development:
async function testRateLimitHandling() {
console.log('Testing rate limit handling...');
const requests = Array.from({ length: 15 }, (_, i) => i);
for (const i of requests) {
try {
const start = Date.now();
const response = await makeRequestWithBackoff(
'https://api.influship.com/v1/search',
{
method: 'POST',
headers: { 'X-API-Key': API_KEY },
body: JSON.stringify({ query: `test ${i}`, limit: 1 })
}
);
const latency = Date.now() - start;
const remaining = response.headers.get('RateLimit-Remaining');
console.log(`Request ${i + 1}: ${latency}ms, ${remaining} remaining`);
// Small delay to avoid burst limits
await new Promise(resolve => setTimeout(resolve, 100));
} catch (error) {
console.error(`Request ${i + 1} failed:`, error.message);
}
}
}
Next Steps