Resolvers provide a robust way to expand the capabilities of your Grafbase backend. They let you run custom business logic, make network requests, invoke dependencies, and more. In this tutorial, we'll create a new GraphQL API with Grafbase and use resolvers together with Upstash Redis rate limiting to safeguard costly ChatGPT requests.
To get started, create a new directory or navigate to an existing Grafbase project.
Install Grafbase by running the following command:
npx grafbase init
Once Grafbase is set up, we need to install the necessary packages for rate limiting and Redis integration. Run the following command:
npm install @upstash/redis @upstash/ratelimit
To use Upstash Redis as our rate limiter, we first need to create an Upstash Redis instance.
Once you have the Redis connection details, update the .env
file in your Grafbase project and add the following lines:
UPSTASH_REDIS_REST_URL=YOUR_REDIS_REST_URL
UPSTASH_REDIS_REST_TOKEN=YOUR_REDIS_REST_TOKEN
With that configured, Redis.fromEnv()
will automatically pick up the connection details from the .env
keys defined above.
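If you prefer not to rely on environment-variable autodetection, the Upstash Redis client also accepts an explicit configuration object. A minimal sketch, reading the same two keys yourself:

```javascript
// Explicit configuration, equivalent to Redis.fromEnv() when these two
// environment variables are set. The values themselves are placeholders
// supplied by your Upstash console.
const redisConfig = {
  url: process.env.UPSTASH_REDIS_REST_URL,
  token: process.env.UPSTASH_REDIS_REST_TOKEN,
}
// const redis = new Redis(redisConfig)
```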
In this example, we'll protect an expensive ChatGPT resolver using the Upstash rate-limiting library.
Create a new file grafbase/resolvers/chat-gpt.js
and add the following code:
import { Ratelimit } from '@upstash/ratelimit'
import { Redis } from '@upstash/redis'
// Create a new rate-limiter, allowing 10 requests per 10 seconds
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(10, '10 s'),
analytics: true,
prefix: '@upstash/ratelimit', // Optional prefix for Redis keys
})
export default async function Resolver(_, { question }) {
const identifier = 'api' // A constant applies one global limit; use a per-caller value (e.g. a user ID or IP) to limit each client separately
const { success } = await ratelimit.limit(identifier)
if (!success) {
throw new Error('Too many requests. Please try again later.')
}
// Execute the expensive Chat-GPT request here
const answer = await expensiveChatGPTRequest(question)
return answer
}
async function expensiveChatGPTRequest(question) {
// Make the expensive Chat-GPT request here
const response = await fetch('https://api.openai.com/v1/chat/completions', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
// You must define OPENAI_API_KEY in your .env file
Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
},
body: JSON.stringify({
// The chat completions endpoint requires a model to be specified
model: 'gpt-3.5-turbo',
messages: [
{ role: 'system', content: 'You are a helpful assistant.' },
{ role: 'user', content: question },
],
}),
})
const data = await response.json()
// Extract and return the generated response from the API
return data.choices[0].message.content
}
In the code above, we import the Ratelimit
and Redis
classes from the @upstash/ratelimit
and @upstash/redis
packages, respectively. We then create a new rate limiter instance, specifying the Redis connection details and the rate limit configuration (10 requests per 10 seconds in this example).
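Conceptually, a sliding window counts requests in the trailing window rather than in fixed buckets, so a burst can't slip through at a bucket boundary. The real library keeps these counters in Redis so the limit is shared across serverless invocations; the in-memory sketch below (our own illustration, not the library's code) shows the idea:

```javascript
// In-memory sketch of the sliding-window idea behind
// Ratelimit.slidingWindow(10, '10 s'). Illustration only: state lives in
// this process, whereas @upstash/ratelimit stores it in Redis.
function createSlidingWindowLimiter(maxRequests, windowMs) {
  const hits = new Map() // identifier -> timestamps of allowed requests

  return function limit(identifier, now = Date.now()) {
    const windowStart = now - windowMs
    // Keep only the requests that still fall inside the trailing window
    const recent = (hits.get(identifier) ?? []).filter((t) => t > windowStart)
    if (recent.length >= maxRequests) {
      hits.set(identifier, recent)
      return { success: false, remaining: 0 }
    }
    recent.push(now)
    hits.set(identifier, recent)
    return { success: true, remaining: maxRequests - recent.length }
  }
}
```

Once a request older than the window ages out, capacity frees up again, which is the behavior the resolver relies on.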
Inside the resolver function, we use the ratelimit.limit
method to check if the current request exceeds the rate limit. If the request is within the allowed limit, we proceed with the expensive ChatGPT request. Otherwise, we throw an error indicating that there are too many requests.
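Beyond success, the object returned by ratelimit.limit also includes limit, remaining, and reset (per the Upstash docs, the epoch-millisecond time at which the window resets). You could use these to make the error more actionable than a bare "too many requests". A sketch, where buildRateLimitError is a helper name of our own, not part of the library:

```javascript
// Turn a rate-limit result ({ success, limit, remaining, reset }) into a
// more informative error. `reset` is assumed to be an epoch timestamp in
// milliseconds, as documented for @upstash/ratelimit.
function buildRateLimitError(result, now = Date.now()) {
  const retryAfterSeconds = Math.max(0, Math.ceil((result.reset - now) / 1000))
  return new Error(
    `Too many requests (limit: ${result.limit} per window). Try again in ${retryAfterSeconds}s.`
  )
}
// In the resolver: if (!result.success) throw buildRateLimitError(result)
```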
Extend the Query type in grafbase/schema.graphql
to register the new resolver:
extend type Query {
chatGPT(question: String!): String! @resolver(name: "chat-gpt")
}
Start the Grafbase development server by running the following command in your project directory:
npx grafbase dev
You can now interact with the GraphQL API by visiting http://localhost:4000 in your browser.
To request answers from ChatGPT, use the following query:
query {
chatGPT(question: "What is the meaning of life?")
}
Replace "What is the meaning of life?"
with your own question.
The resolver will protect the expensive ChatGPT request using rate limiting, ensuring that the number of requests stays within the allowed limit.
That's it! You've successfully implemented rate limiting for expensive ChatGPT requests using Upstash Redis in Grafbase. Feel free to explore and adapt this setup to suit your specific use case.