A Complete Guide to Building GraphQL APIs Using Apollo Server and Node.js

1. Introduction

GraphQL allows clients to query only the data they need. Unlike REST, which relies on fixed endpoints, GraphQL provides a flexible approach, allowing clients to request specific fields, combine queries, and subscribe to real-time updates.

In this tutorial, we will build a GraphQL API using Apollo Server and Node.js. By the end, we will have a fully functional API with authentication, custom resolvers, middleware integration, and database connectivity.

1.1 Why Apollo Server for GraphQL?

Apollo Server is a production-ready GraphQL server that simplifies schema design, integrates seamlessly with Node.js, and supports advanced features like caching, subscriptions, and error handling. Unlike basic Express-GraphQL setups, Apollo provides built-in optimizations for performance-critical applications.

1.2 Prerequisites

Before we begin, make sure you have:

  • Node.js (v16 or later) installed
  • npm or yarn for package management
  • A basic understanding of JavaScript and Node.js
  • Familiarity with GraphQL concepts (queries, mutations, resolvers)
  • (Optional) A running MongoDB or PostgreSQL instance

2. Setting Up the Project

2.1 Initialize a Node.js Project

Run the following command to create a new Node.js project:

mkdir graphql-apollo-server && cd graphql-apollo-server
npm init -y

This will generate a package.json file, which manages dependencies and scripts for the project.

2.2 Install Required Dependencies

Apollo Server requires several packages. Install them using:

npm install @apollo/server graphql @graphql-tools/schema express lodash
npm install -D typescript ts-node nodemon @types/node @types/express

  • @apollo/server: Core Apollo Server library
  • graphql: GraphQL implementation
  • @graphql-tools/schema: Schema utilities
  • express: Web framework used to serve the GraphQL endpoint (see section 7)
  • lodash: Utility library (used for pagination, data transformation)
  • typescript, ts-node, nodemon, @types/node, @types/express: Development tools

If you plan to use MongoDB, install Mongoose:

npm install mongoose

For PostgreSQL, install Sequelize or Prisma:

npm install pg sequelize
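
2.3 Configure TypeScript

Since we installed TypeScript tooling, the project also needs build and dev scripts plus a tsconfig.json. Apollo does not prescribe these; the following is a minimal sketch (the script names and compiler options are assumptions, chosen so that npm run build emits compiled JavaScript to dist/, which the deployment section relies on). In package.json:

"scripts": {
  "dev": "nodemon --exec ts-node --esm src/server.ts",
  "build": "tsc",
  "start": "node dist/server.js"
}

And a matching tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2020",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "outDir": "dist",
    "rootDir": "src",
    "esModuleInterop": true,
    "strict": true
  },
  "include": ["src"]
}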

3. Defining GraphQL Schema & Database

The GraphQL schema is the blueprint of your API. It defines the types of data available, how the data is structured, and what operations (queries, mutations, subscriptions) clients can perform. In this tutorial, we’ll define a simple schema with two main entities: User and Post. We will also define input types for mutations and a subscription for real-time updates.

3.1 Create a schema.graphql File

Let’s create a file named schema.graphql inside a src folder. In this file, we define our data types and operations. For instance, the User type represents a user, including fields like id, name, and email.

The Post type represents a blog post with a title, content, and an associated author. We also include an input type called CreatePostInput for creating new posts. Lastly, we define our queries to fetch users and posts, a mutation to create a post, and a subscription to listen for new posts:

A schema defines your API’s data structure using the GraphQL Schema Definition Language (SDL). It includes:

  • Types (e.g., User, Post).
  • Queries (fetch data).
  • Mutations (modify data).
# src/schema.graphql

type User {
  id: ID!
  name: String!
  email: String!  # Sensitive field (we’ll secure this later)
  posts(limit: Int = 10, offset: Int = 0): [Post!]!
}

type Post {
  id: ID!
  title: String!
  content: String!
  author: User!
}

input CreatePostInput {
  title: String!
  content: String!
  authorId: ID!
}

type AuthPayload {
  token: String!
  user: User!
}

type Query {
  getUser(id: ID!): User
  getPosts(limit: Int = 10, offset: Int = 0): [Post!]!
}

type Mutation {
  createPost(input: CreatePostInput!): Post!
  login(email: String!, password: String!): AuthPayload!
}

type Subscription {
  postAdded: Post!
}

This schema defines the structure of our API:

  • Types:
    • User: Represents a user with fields like id, name, and email. Note how the User type includes a posts field that accepts pagination arguments (limit and offset).
    • Post: Represents a blog post with fields like title, content, and author.
  • Input Types:
    • CreatePostInput: Bundles arguments for the createPost mutation.
  • Operations:
    • Query: Fetches data (e.g., getUser, getPosts).
    • Mutation: Modifies data (e.g., createPost, login).
    • Subscription: Delivers real-time updates (e.g., postAdded).

Using an input type (CreatePostInput) makes the schema cleaner and more reusable. The subscription provides a way to receive real-time updates when a new post is added.
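
For example, once the server is running, a client could send the following query; limit and offset fall back to their defaults (10 and 0) when omitted:

query GetRecentPosts {
  getPosts(limit: 5, offset: 0) {
    id
    title
    author {
      name
    }
  }
}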

3.2 Connect to MongoDB

Create src/db.ts:

// src/db.ts
import dotenv from 'dotenv';
import mongoose from 'mongoose';

dotenv.config();

const connectDB = async () => {
  try {
    await mongoose.connect(process.env.MONGO_URI!, {
      serverSelectionTimeoutMS: 5000,
      retryWrites: true,
    });
    console.log('✅ MongoDB connected');
  } catch (err) {
    console.error('❌ Connection failed:', err);
    throw new Error(`❌ MongoDB connection failed: ${(err as Error).message}`);
  }
};

export default connectDB;

This file sets up the connection to MongoDB:

  1. dotenv.config(): Loads environment variables from .env.
  2. mongoose.connect(): Connects to the MongoDB instance using the MONGO_URI environment variable.

3.3 Define Data Models

Create a file for your User model (e.g., src/models/User.ts):

import { Schema, model } from 'mongoose';  

const UserSchema = new Schema({  
  name: { type: String, required: true },  
  email: { type: String, unique: true, required: true },  
  password: { type: String, required: true },  
  role: { type: String, enum: ['USER', 'ADMIN'], default: 'USER' }  
});  

export const User = model('User', UserSchema);

This defines a Mongoose schema for the User entity:

  • Fields:
    • name: Required string for the user’s name.
    • email: Unique string to prevent duplicate accounts.
    • password: Hashed string for secure storage.
    • role: Restricts values to USER or ADMIN (default: USER).

Ensure you also create and import a similar file for the Post model, for example, src/models/Post.ts:

// src/models/Post.ts
import { Schema, model } from 'mongoose';

const PostSchema = new Schema({
  title: { type: String, required: true },
  content: { type: String, required: true },
  author: { type: Schema.Types.ObjectId, ref: 'User', required: true }
});

export const Post = model('Post', PostSchema);

3.4 Export Models

This file aggregates and exports your models so you can import them easily.

// src/database.ts
import { User } from './models/User.js';
import { Post } from './models/Post.js';

export { User, Post };

4. Setting Up Subscriptions

Subscriptions allow clients to receive real-time updates when data changes. For example, if you want your client to update automatically when a new post is added, you can use GraphQL subscriptions.

4.1 Install Subscription Dependencies

To enable subscriptions, we use the graphql-ws package together with a WebSocket server (the ws package) and the PubSub utility from graphql-subscriptions. Install the necessary packages:

npm install graphql-ws ws graphql-subscriptions

Note that Apollo Server v4 no longer has built-in WebSocket support, so graphql-ws is necessary for handling subscriptions.

4.2 Create PubSub Instance

We need ONE instance of PubSub for our entire app. Here’s the proper setup:

// src/pubsub.ts
import { PubSub } from 'graphql-subscriptions';
export const pubsub = new PubSub(); // Single instance for entire app

5. Resolvers & the N+1 Query Problem

Resolvers are the functions that actually fetch the data for your GraphQL operations. They translate the schema definitions into data that can be returned to the client.

5.1 Creating Resolvers

Create a file named resolvers.ts in the src directory. In this file, we define resolvers for queries, mutations, and subscriptions:

// src/resolvers.ts

// Import necessary modules.
import jwt from 'jsonwebtoken';   // used by the auth resolvers in section 6
import bcrypt from 'bcryptjs';    // used by the auth resolvers in section 6
import { PubSub } from 'graphql-subscriptions';
import { User, Post } from './database.js';
import { pubsub } from './pubsub.js';

const SECRET_KEY = process.env.JWT_SECRET || 'your_secret_key';

// Resolvers for Query and Mutation, without DataLoader optimization yet.
const resolvers = {
  Query: {
    // Fetch a single user by ID.
    getUser: async (_: any, { id }: { id: string }) => {
      return User.findById(id);
    },
    // Fetch posts with pagination.
    getPosts: async (_: any, { limit, offset }: { limit: number; offset: number }) => {
      return Post.find().skip(offset).limit(limit);
    },
  },
  Mutation: {
    // Create a new post.
    createPost: async (_: any, { input }: { input: any }) => {
      const newPost = new Post({ ...input });
      const savedPost = await newPost.save();
      pubsub.publish('POST_ADDED', { postAdded: savedPost });
      return savedPost;
    },
    // Login mutation for user authentication.
    // ... covered in a later section
  },
  // Resolve the posts field for a User.
  User: {
    posts: async (user: any, { limit, offset }: { limit: number; offset: number }) => {
      return Post.find({ author: user.id }).skip(offset).limit(limit);
    },
  },
  // Subscription for new posts.
  Subscription: {
    postAdded: {
      subscribe: () => (pubsub as PubSub<any>).asyncIterableIterator('POST_ADDED'),
    },
  },
};

export { resolvers };

The asyncIterableIterator method also accepts an array of event names, so a single subscription can listen to multiple events if needed.

Resolvers are functions that fetch data for schema fields:

  • Query Resolvers:
    • getUser: Fetches a user by ID.
    • getPosts: Fetches posts with pagination.
  • Mutation Resolvers:
    • createPost: Creates a new post and publishes it to subscribers.
    • login: Validates credentials and returns a JWT token.
  • Field-Level Resolver for User:
    • The posts field under User fetches posts associated with that user.
  • Subscription Resolver:
    • postAdded: Delivers real-time updates when a new post is created (see the example operation below).
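
For example, a client can listen for new posts with the following operation; every time createPost publishes the POST_ADDED event, subscribers receive the new post:

subscription OnPostAdded {
  postAdded {
    id
    title
    author {
      name
    }
  }
}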

5.2 Understanding the N+1 Problem

The N+1 query problem occurs when fetching a list of users and then making separate queries for each user’s posts. For instance, if a User has many Posts, a naive resolver might query the database N times for N users (e.g., fetching posts for each user individually).

This can lead to performance bottlenecks. The solution is to batch queries using DataLoader, which coalesces individual database requests into a single query. Install it with:

npm install dataloader

5.2.1 Create a DataLoader Instance

To address this, we use DataLoader, a library that batches and caches database requests. We set up DataLoader instances for users and posts:

// src/loaders.ts
import DataLoader from 'dataloader';
import { User, Post } from './database.js';

// DataLoader for users.
export const createUserLoader = () =>
  new DataLoader(async (userIds: readonly string[]) => {
    console.log('Fetching users for:', userIds);
    const users = await User.find({ _id: { $in: userIds } });
    return userIds.map(id => users.find(u => u._id.toString() === id));
  });

// DataLoader for posts with pagination.
export const createPostLoader = () =>
  new DataLoader(async (keys: readonly { userId: string; limit: number; offset: number }[]) => {
    // Each key is an object: { userId, limit, offset }
    const userIds = keys.map(key => key.userId);
    const posts = await Post.find({ author: { $in: userIds } });
    return keys.map(({ userId, limit, offset }) =>
      posts.filter(post => post.author.toString() === userId).slice(offset, offset + limit)
    );
  });

DataLoader:

  • Batches multiple database calls into a single request, significantly reducing redundant queries.
  • The user loader takes an array of user IDs and fetches them in one go.
  • The post loader not only batches queries by user but also implements pagination by slicing the results (see the sketch below).
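
To see the batching in action, consider two load calls made in the same tick; DataLoader collects them and issues a single database query (a sketch, with hypothetical user IDs):

// Sketch: inside any async function, both load calls below are batched
// into ONE User.find({ _id: { $in: [...] } }) query.
const userLoader = createUserLoader();
const [alice, bob] = await Promise.all([
  userLoader.load('67db7b0335560bf35790a6fb'),  // hypothetical user IDs
  userLoader.load('67db7b0335560bf35790a6fc'),
]);
console.log(alice?.name, bob?.name);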

5.2.2 Update the Resolver

Within src/resolvers.ts, update the resolvers to use the DataLoader instances:

// Updated parts in src/resolvers.ts

const resolvers = {
  Query: {
    getUser: async (_, { id }, { loaders }) => {
      // Now uses the DataLoader to fetch the user.
      return loaders.userLoader.load(id);
    },
    // getPosts remains unchanged.
  },
  User: {
    posts: async (user, { limit, offset }, { loaders }) => {
      return loaders.postLoader.load({ userId: user.id, limit, offset });
    },
  },
  // Other parts remain the same.
};

In getUser, the resolver now uses loaders.userLoader.load(id) instead of directly querying the database. The getPosts resolver directly queries the database for a paginated list of posts.

For the posts field, we pass an object to the post DataLoader that includes the necessary parameters.

6. Authentication & Authorization

For a secure API, authentication (verifying who the user is) and authorization (determining what the user can do) are essential. In this tutorial, we implement authentication using JSON Web Tokens (JWT). Users log in, receive a token, and then include that token in subsequent requests.

6.1 Install Authentication Dependencies

npm install jsonwebtoken bcryptjs dotenv

  • jsonwebtoken: For signing and verifying JWTs.
  • bcryptjs: For hashing and comparing passwords securely.
  • dotenv: For loading environment variables to manage sensitive information like secrets.

6.2 Create a .env File

We store sensitive information, such as the JWT secret, in environment variables. Create a .env file at the root of your project:

# .env
JWT_SECRET=your_secure_secret_here
MONGO_URI=mongodb://localhost:27017/graphql_db

6.3 Create a Context

Next, we create a context function that will run for every request. This function extracts the JWT from the request headers, verifies it, and then attaches the user information to the context. This context is then available in all resolvers, so you can easily enforce authorization by checking if the user is logged in or if they have the right role.

Create a file named context.ts in your src directory with the following content:

// src/context.ts
// .. import

dotenv.config();

const { verify } = jwt;

export const createContext = async ({ req }) => {
  // Extract the token from the "Authorization" header.
  const token = req.headers.authorization ? req.headers.authorization.split(' ')[1] : '';
  let user;
  try {
    // Verify the token using the secret from the environment.
    const payload = verify(token, process.env.JWT_SECRET);
    user = payload;
  } catch (error) {
    // If token is missing or invalid, user remains undefined.
  }

  // Return an object that is available to all resolvers.
  return {
    user, // Contains userId and role if authenticated.
    db,   // Your database connection.
    loaders: {
      userLoader: createUserLoader(),
      postLoader: createPostLoader(),
    },
    pubsub, // Used for subscriptions.
  };
};

This function validates the JWT and attaches the user to the context:

  • Token Extraction & Validation:
    • The context extracts the JWT from the request header and verifies it.
    • If the token is valid, the payload (including userId and role) is attached to the context.
  • DataLoader Instances:
    • The context also provides DataLoader instances to optimize data fetching across resolvers.
  • PubSub Instance:
    • Enables real-time updates via subscriptions.

This context setup is critical because it ensures that every resolver can access the user information and DataLoader instances, making it easy to enforce security policies and optimize data fetching.

6.4 Register & Login Resolvers

In src/resolvers.ts the register and login resolvers are defined as:

// src/resolvers.ts
// .. import

const resolvers = {
  // existing logic ...
  Mutation: {
    // ... 
    register: async (_: any, { input }: { input: { name: string; email: string; password: string } }) => {
      const { name, email, password } = input;
      // Check if user already exists.
      const existingUser = await User.findOne({ email });
      if (existingUser) throw new Error('User already exists');
      
      // Hash the password before saving.
      const hashedPassword = await bcrypt.hash(password, 10);
      const newUser = new User({ name, email, password: hashedPassword });
      const savedUser = await newUser.save();
      
      // Sign a JWT.
      const token = jwt.sign({ userId: savedUser._id, role: savedUser.role }, SECRET_KEY, { expiresIn: '1h' });
      return { token, user: savedUser };
    },
    login: async (_: any, { email, password }: { email: string; password: string }) => {
      const user = await User.findOne({ email });
      if (!user) throw new Error('User not found');
      const valid = await bcrypt.compare(password, user.password);
      if (!valid) throw new Error('Invalid password');
      const token = jwt.sign({ userId: user._id, role: user.role }, SECRET_KEY, { expiresIn: '1h' });
      return { token, user };
    },
  },
};

Authentication Flow:

  • The resolver finds the user by email, compares passwords using bcrypt, and if successful, signs a JWT that includes the user’s ID and role.

Security:

  • JWT tokens have an expiration time (1h), which limits the window of token misuse.
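
Note that the schema from section 3.1 only declares the login mutation. To expose register as well, extend src/schema.graphql with an input type that mirrors the resolver’s argument shape (a sketch):

input RegisterInput {
  name: String!
  email: String!
  password: String!
}

type Mutation {
  createPost(input: CreatePostInput!): Post!
  login(email: String!, password: String!): AuthPayload!
  register(input: RegisterInput!): AuthPayload!
}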

6.5 Authorization with Middleware

Authorization controls what authenticated users can do. We’ll use Apollo’s context to validate tokens.

6.5.1 Protecting Resolvers

The resolver checks if user exists in the context. If not, it throws an “Unauthorized!” error:

// src/resolvers.ts
const resolvers = {
  Mutation: {
    createPost: async (_: any, { input }: { input: any }, { user }: { user: any }) => {
      // Ensure the request is authenticated.
      if (!user) throw new Error('Unauthorized!');

      // Create a new post with the authenticated user's ID as the author.
      const newPost = new Post({ ...input, author: user.userId });
      const savedPost = await newPost.save();
      pubsub.publish('POST_ADDED', { postAdded: savedPost });
      return savedPost;
    },
  },
};

Authorization Check:

  • The resolver first checks if a user exists in the context (i.e., is authenticated).

6.5.2 Role-Based Access (Admin-Only Endpoint)

Before deleting a post, the resolver verifies that the user is authenticated and has the ADMIN role:

// src/resolvers.ts
const resolvers = {
  // ...
  Mutation: {
    // ...
    deletePost: async (_: any, { id }: { id: string }, { user }: { user: any }) => {
      // Ensure the request is authenticated and the user has ADMIN role.
      if (!user || user.role !== 'ADMIN') throw new Error('Unauthorized!');
      return Post.deleteOne({ _id: id });
    },
  },
};

Admin Check:

  • This mutation is restricted so that only users with the ADMIN role can delete posts. The resolver immediately throws an error if the check fails, preventing unauthorized actions.
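
As with register, deletePost must also be declared in the schema. Since Post.deleteOne resolves to a result object rather than a Post, returning a Boolean is one reasonable shape (a sketch):

# Addition to the Mutation type in src/schema.graphql
deletePost(id: ID!): Boolean!

With this shape, the resolver should also return a boolean, for example (await Post.deleteOne({ _id: id })).deletedCount === 1.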

7. Setting Up the Server

We create a server file that sets up both HTTP and WebSocket servers so that our API can handle standard queries and subscriptions over a single endpoint.

Create a file named server.ts in your src directory:

// src/server.ts
// .. import

dotenv.config();

const typeDefs = readFileSync(path.join(process.cwd(), 'src', 'schema.graphql'), 'utf-8');
const schema = makeExecutableSchema({ typeDefs, resolvers });

// Connect to MongoDB
connectDB();

const app = express();
const httpServer = http.createServer(app);

// Set up WebSocket server for subscriptions
const wsServer = new WebSocketServer({
  server: httpServer,
  path: '/graphql'
});
useServer({ schema, context: createContext }, wsServer);

const server = new ApolloServer({ schema });

(async () => {
  await server.start();
  app.use(
    '/graphql',
    express.json(),
    expressMiddleware(server, { context: createContext })
  );
  const PORT = process.env.PORT || 4000;
  httpServer.listen(PORT, () => {
    console.log(`🚀 Server is running on http://localhost:${PORT}/graphql`);
    console.log(`🚀 Subscriptions ready at ws://localhost:${PORT}/graphql`);
  });
})();

In this file, we set up an Express app and create an HTTP server. We then integrate a WebSocket server using the ws library, configuring it to listen for GraphQL subscriptions at the /graphql endpoint.

The useServer function from graphql-ws bridges our GraphQL schema with the WebSocket server, ensuring that subscription messages are handled properly. Finally, we integrate Apollo Server with our Express app so that both HTTP queries and WebSocket subscriptions are served from the same endpoint.
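
One caveat: graphql-ws does not pass your context function an Express request. For subscriptions, it supplies its own context object whose connectionParams contain whatever the client sent during connection initialization. If WebSocket clients need authentication, one option is a dedicated context for useServer (a sketch; the Authorization connection param name is an assumption about the client setup):

// Sketch: a WebSocket-specific context for useServer.
// Tokens arrive via connectionParams rather than an HTTP header.
useServer(
  {
    schema,
    context: async (ctx) => {
      const authHeader = (ctx.connectionParams?.Authorization as string) || '';
      const token = authHeader.split(' ')[1] || '';
      // ...verify the token and build the same context shape as createContext.
    },
  },
  wsServer
);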

8. Testing

Launch the server (e.g., with your dev script) to verify that everything is set up correctly. You should see the two startup log lines with the HTTP and WebSocket endpoints.

8.1 Example Query with Authentication

mutation Login {
  login(email: "user@example.com", password: "secret123") {
    token
    user { id name }
  }
}

Tests the authentication flow. After logging in, the client receives a token and user details.

Next, fetch a user together with their posts:
query GetUserWithPosts {
  getUser(id: "67db7b0335560bf35790a6fb") {
    name
    posts {
      title
    }
  }
}

Demonstrates fetching user data along with paginated posts, which uses the DataLoader optimizations.


8.2 Testing Protected Endpoints

Include the token in the Authorization header:

curl -X POST http://localhost:4000/graphql \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"query": "mutation { createPost(input: { title: \"Hello\", content: \"World\", authorId: \"67db7b0335560bf35790a6fb\" }) { id } }"}'

(The JSON body must be a single line, since literal newlines are not allowed inside JSON strings.)

When testing endpoints that require authentication, include the JWT in the Authorization header. The server checks this token to ensure that only authenticated users can perform actions like creating a post.


9. Deployment

Below is an example of a Dockerfile designed to build and run your GraphQL API in production mode using a Node.js image:

# Stage 1: Build Stage
FROM node:18-alpine AS builder
WORKDIR /app

# Copy dependency files and install all dependencies (including dev dependencies)
COPY package*.json ./
RUN npm ci

# Copy the rest of the application source code and build the project
COPY . .
RUN npm run build

# Stage 2: Production Stage
FROM node:18-alpine
WORKDIR /app

# Accept an environment variable to control the deployment mode (default: production)
ARG ENV=production
ENV NODE_ENV=$ENV

# Copy only the dependency files for production installation
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the compiled output from the builder stage
COPY --from=builder /app/dist ./dist

# Optionally, pass environment-specific configuration
# Example: For production, you might use a separate .env file that you COPY into the image
# COPY .env.$ENV .env

# Start the application using the built code
CMD ["node", "dist/server.js"]

In the first (builder) stage, we install all dependencies (including dev dependencies) and compile the TypeScript source into JavaScript. This stage outputs the built files into the dist folder.

The second stage starts from a fresh lightweight Alpine image, installs only production dependencies, and copies the compiled files from the builder stage. This results in a much smaller and more secure final image.
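
To build and run the image locally (the tag name is arbitrary; adjust the port and env file to your setup):

docker build -t graphql-apollo-server .
docker run -p 4000:4000 --env-file .env graphql-apollo-server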

Conclusion

In this tutorial, we built a GraphQL API using Apollo Server and Node.js, covering key concepts like schema definition, resolvers, authentication, database integration, and real-time subscriptions. We explored how GraphQL improves API flexibility compared to REST and implemented a structured approach to querying and mutating data efficiently.

By following these steps, you now have a solid foundation to expand your API further—whether by adding authorization, optimizing performance with DataLoader, or integrating additional services like Redis caching and cloud deployments.

The full source code can be found on GitHub.
