WP Engine Smart Search for Headless WordPress with Next.js and MDX

If you’re using Headless WordPress and aiming to build a documentation site alongside a blog, you might consider integrating MDX or Markdown files. These formats make it simple for developers to create and edit content, which is especially handy if you want to open up your docs to contributions from the open-source community.

While WordPress offers powerful tools for indexing and retrieving blog content, challenges arise when you want to add search functionality that also includes MDX or Markdown-based documentation files. Without a unified search solution, users could be left searching in silos—missing out on relevant information across your content types.

In this guide, we’ll walk you through using the WP Engine Smart Search plugin to create a seamless search experience that indexes both your WordPress content and MDX/Markdown files. With this setup, you’ll be able to serve a Next.js application where users can search blog posts, documentation, and any additional content in a unified experience.

Prerequisites

Before reading this article, you should have the following prerequisites checked off:

  • Basic knowledge of Next.js and the App Router
  • A WordPress install
  • A basic knowledge of WPGraphQL and headless WordPress

In order to follow along step by step, you need the following:

  • A Next.js project that is connected with your WordPress backend with a basic home page and layout.
  • A dynamic route that grabs a single post by its URI to display a single post detail page in your Next.js frontend.

If you do not have that yet and do want to follow along step by step, you can clone down my demo here: 

https://github.com/Fran-A-Dev/smart-search-with-app-router

If you need a WordPress install, you can use a free sandbox account on WP Engine’s Headless Platform.

WP Engine Smart Search

WP Engine Smart Search is an Add-on for WP Engine customers that improves search for headless and traditional WordPress applications. It is designed to improve search result relevancy, support advanced search query operators, and add support for advanced WordPress data types.

We will use its public API for this article.

Smart Search is included in paid plans with WP Engine accounts.

Steps

Install and activate Smart Search

Since Smart Search is a WP Engine add-on, enable it for your environment from your WP Engine account, then install and activate the WP Engine Smart Search plugin in your WordPress admin.


Now that you have Smart Search activated in your WordPress backend, navigate to WP Engine Smart Search > Index data.

A content sync sends all pre-existing content on the WordPress site to WP Engine Smart Search for indexing. This ensures the plugin knows exactly what content exists so it can serve the best results to site users. Click the “Index Now” button to index your content.

Configure Smart Search

Now that we have Smart Search installed, let’s go ahead and configure it. For this article, we will use the default settings. In the WP admin sidebar, click WP Engine Smart Search > Configuration.

Smart Search defaults to “Full text” and “Stemming” for the search config. Stemming means a search for “running” will also match “run”, for example. We will stick with these defaults for this guide, but feel free to try the other options on your own.

Smart Search Environment Variables

Everything is installed and configured. Next, we need to grab the environment variables we will use to communicate with Smart Search from our frontend. Navigate to WP Engine Smart Search > Settings.

The Access Token and URL in WP Engine’s Smart Search plugin are essential credentials that enable communication between your WordPress site and the Smart Search service. 

The URL points to the Smart Search API endpoint, which ships with a GraphQL interface. This interface allows you to query and manage indexed data with GraphQL. 

The Access Token is used for secure authentication to ensure that only authorized requests interact with the search backend.
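To make these two values concrete, here is a minimal sketch of how they are used together. It follows the same fetch pattern the rest of this article builds on, and the find/total query shape matches the Smart Search GraphQL API we use later:

const response = await fetch(process.env.NEXT_PUBLIC_SEARCH_ENDPOINT, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // The Access Token authenticates every request to the search service.
    Authorization: `Bearer ${process.env.NEXT_SEARCH_ACCESS_TOKEN}`,
  },
  body: JSON.stringify({
    query: `query Find($query: String!) { find(query: $query) { total } }`,
    variables: { query: "hello" },
  }),
});

console.log(await response.json()); // e.g. { data: { find: { total: 3 } } }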

Save these two variables since we will need them in the next section.

Configure Next.js and App Router

If you followed along in the prerequisites section of this article, you should already have a Next.js frontend spun up with a dynamic route that displays a single post detail page from WordPress via its URI.  

Installing Dependencies

The first thing we will need to do is import the necessary package dependencies.  In your terminal, run the following command:

npm i downshift lodash.debounce json5 @next/mdx @shikijs/transformers next-secure-headers rehype-mdx-import-media rehype-pretty-code rehype-slug http-status-codes html-to-text

The packages we installed do the following:

  • downshift: Provides functionality for building autocomplete or dropdown experiences.
  • lodash.debounce: A utility for debouncing functions.
  • json5: A parser for JSON5, a more permissive superset of JSON.
  • @next/mdx: Allows you to use .mdx files in a Next.js project.
  • @shikijs/transformers: A collection of common transformers for Shiki, which we will use to add syntax highlighting to the code blocks on our documentation pages.
  • next-secure-headers: Helps secure Next.js apps by adding headers.
  • rehype-mdx-import-media: Turns media references in MDX into imports so Next.js can process them.
  • rehype-pretty-code: Formats and highlights code blocks in Markdown or MDX.
  • rehype-slug: Adds IDs to headings in Markdown or MDX for easier linking.
  • http-status-codes: Constants enumerating the HTTP status codes, based on the Java Apache HttpStatus API.
  • html-to-text: Converts HTML (or rendered MDX) into plain text, which we will use when indexing documents.

These packages give our MDX files nice, clean syntax highlighting and code blocks. They also support the search UI experience we will build.

Environment Variables

The next step is to create a .env.local file at the root of the project so that we can add the environment variables our project needs.

Once that is created, add these keys and values to it:

NEXT_PUBLIC_GRAPHQL_ENDPOINT=https://your-wordpress-site.com/graphql
NEXT_PUBLIC_WORDPRESS_HOSTNAME=your-wordpress-site.com
NEXT_PUBLIC_SEARCH_ENDPOINT=https://your-smartsearch-endpoint.uc.a.run.app/graphql
NEXT_SEARCH_ACCESS_TOKEN=your-access-token

The access token and endpoints allow our app to talk to the Smart Search and WordPress APIs. Note that NEXT_PUBLIC_WORDPRESS_HOSTNAME is a bare hostname with no protocol, because next.config.mjs passes it to images.remotePatterns.hostname, which expects a hostname rather than a full URL. Also note that variables prefixed with NEXT_PUBLIC_ are exposed to the browser, while NEXT_SEARCH_ACCESS_TOKEN has no prefix so it stays server-only.

The next.config.mjs file

We have our package dependencies installed. Now, let’s set up our Next.js project to support MDX and to register a Webpack plugin for search indexing.

In the root of the project, open the next.config.mjs file and update it to look like this:

// next.config.mjs

import { env } from "node:process";
import createMDX from "@next/mdx";
import { transformerNotationDiff } from "@shikijs/transformers";
import { createSecureHeaders } from "next-secure-headers";
import rehypeMdxImportMedia from "rehype-mdx-import-media";
import { rehypePrettyCode } from "rehype-pretty-code";
import rehypeSlug from "rehype-slug";
import smartSearchPlugin from "./lib/smart-search-plugin.mjs";

/**
 * @type {import('next').NextConfig}
 */
const nextConfig = {
  trailingSlash: true,
  reactStrictMode: true,
  pageExtensions: ["js", "jsx", "mdx", "ts", "tsx"],
  sassOptions: {
    includePaths: ["node_modules"],
  },
  eslint: {
    ignoreDuringBuilds: true,
  },
  redirects() {
    return [
      {
        source: "/discord",
        destination: "https://discord.gg/headless-wordpress-836253505944813629",
        permanent: false,
      },
    ];
  },
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: process.env.NEXT_PUBLIC_WORDPRESS_HOSTNAME,
        pathname: "/**",
      },
    ],
  },
  i18n: {
    locales: ["en"],
    defaultLocale: "en",
  },
 
  webpack: (config, { isServer }) => {
    if (isServer) {
      config.plugins.push(
        smartSearchPlugin({
          endpoint: env.NEXT_PUBLIC_SEARCH_ENDPOINT,
          accessToken: env.NEXT_SEARCH_ACCESS_TOKEN,
        })
      );
    }

    return config;
  },
};

const withMDX = createMDX({
  options: {
    // Add any remark plugins if needed
    rehypePlugins: [
      rehypeSlug,
      [
        rehypePrettyCode,
        {
          transformers: [transformerNotationDiff()],
          theme: "github-dark-dimmed",
          defaultLang: "plaintext",
          bypassInlineCode: false,
        },
      ],
    ],
  },
});

export default withMDX(nextConfig);


In this config, we have a few things set up to support .mdx files acting as pages, routes, or imports.

We import all the necessary dependencies at the top to make our Next.js app run and support what we need.

Toward the bottom of the file, we add a custom Webpack plugin to run during the server build process only.  The plugin is configured using environment variables for the search endpoint and access token:

webpack: (config, { isServer }) => {
    if (isServer) {
      config.plugins.push(
        smartSearchPlugin({
          endpoint: env.NEXT_PUBLIC_SEARCH_ENDPOINT,
          accessToken: env.NEXT_SEARCH_ACCESS_TOKEN,
        })
      );
    }
    return config;
  },


Create MDX Files And Paths

The next thing we need to do is create the MDX files that will represent our docs pages and routes. In the app folder, create a docs folder with subfolders, like this:

app/docs/hello-world
app/docs/example

Then, inside each subfolder, add a page.mdx file, which is what will be rendered in the browser. The tree should look like this:
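app
└── docs
    ├── example
    │   └── page.mdx
    └── hello-world
        └── page.mdx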

You can name these paths anything you like. 
Now, in your page.mdx file is where you can add your MDX content:

export const metadata = {
  title: "Test Page",
  date: "2023-10-30",
};

# Test Page

This is a test MDX file for verifying the search indexing process.

- Point one
- Point two

Here is some code:

```js
console.log("Hello, world!");
```

## Test Point 2

### Test Point 3

#### Test Point 4

Most of the content in this file is regular Markdown, but because we are using MDX, you will notice at the top of the file that we have an export syntax to define our metadata directly in the file instead of in the front matter. This allows MDX to be processed as JS, which adds flexibility to using variables or functions in metadata.

In a framework like Next.js, MDX is treated as a module and metadata can be consumed dynamically within React components or by the framework’s data fetching mechanisms. 
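As a quick illustration, because metadata is a named export, any other module can import it. This is a hypothetical snippet (it assumes the common @/ path alias is configured in your project):

// Read an MDX page's metadata from another module.
import { metadata } from "@/app/docs/hello-world/page.mdx";

console.log(metadata.title); // "Test Page"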

Feel free to add as many MDX files as you like. I only added two in this article. 

Global MDX Components

In order for the @next/mdx package to work and render MDX properly with our configuration in Next.js, we need an mdx-components.jsx (or .tsx) file.

This file lets us customize the component mapping for MDX files, replacing standard HTML tags with React components. In the root of our Next.js app, create an mdx-components.jsx file and copy and paste this code block in:

import DocsLayout from "@/components/docs-layout";
import Heading from "@/components/heading";
import Link from "next/link";

export function useMDXComponents(components) {
  return {
    a: (props) => <Link {...props} />,
    wrapper: DocsLayout,
    h1: (props) => <Heading level={1} {...props} />,
    h2: (props) => <Heading level={2} {...props} />,
    h3: (props) => <Heading level={3} {...props} />,
    h4: (props) => <Heading level={4} {...props} />,
    h5: (props) => <Heading level={5} {...props} />,
    h6: (props) => <Heading level={6} {...props} />,
    ...components,
  };
}

Let’s move on to the search part now.

Smart Search Plugin

To integrate Smart Search into our Next.js frontend, we need to create a plugin. This plugin automates the process of collecting, cleaning, and indexing MDX files from the app/docs directory into our search engine. In the root of the project, create a folder called lib. In this folder, create a file called smart-search-plugin.mjs and copy and paste this entire code block:

import { hash } from "node:crypto";
import fs from "node:fs/promises";
import path from "node:path";
import { cwd } from "node:process";
import { htmlToText } from "html-to-text";

const queryDocuments = `
query FindIndexedMdxDocs($query: String!) {
  find(query: $query) {
    documents {
      id
    }
  }
}
`;

const deleteMutation = `
mutation DeleteDocument($id: ID!) {
  delete(id: $id) {
    code
    message
    success
  }
}
`;

const bulkIndexQuery = `
mutation BulkIndex($input: BulkIndexInput!) {
  bulkIndex(input: $input) {
    code
    documents {
      id
    }
  }
}
`;

let isPluginExecuted = false;

function smartSearchPlugin({ endpoint, accessToken }) {
  return {
    apply: (compiler) => {
      compiler.hooks.done.tapPromise("SmartSearchPlugin", async () => {
        if (isPluginExecuted) return;
        isPluginExecuted = true;

        if (compiler.options.mode !== "production") {
          console.log("Skipping indexing in non-production mode.");
          return;
        }

        try {
          const pages = await collectPages(path.join(cwd(), "app/docs"));
          console.log("Docs Pages collected for indexing:", pages.length);

          await deleteOldDocs({ endpoint, accessToken }, pages);
          await sendPagesToEndpoint({ endpoint, accessToken }, pages);
        } catch (error) {
          console.error("Error in smartSearchPlugin:", error);
        }
      });
    },
  };
}

async function collectPages(directory) {
  const pages = [];
  const entries = await fs.readdir(directory, { withFileTypes: true });

  for (const entry of entries) {
    const entryPath = path.join(directory, entry.name);

    if (entry.isDirectory()) {
      const subPages = await collectPages(entryPath);
      pages.push(...subPages);
    } else if (entry.isFile() && entry.name.endsWith(".mdx")) {
      const content = await fs.readFile(entryPath, "utf8");

      const metadataMatch = content.match(
        /export\s+const\s+metadata\s*=\s*(?<metadata>{[\S\s]*?});/
      );

      if (!metadataMatch?.groups?.metadata) {
        console.warn(`No metadata found in ${entryPath}. Skipping.`);
        continue;
      }

      let metadata = {};
      try {
        // eslint-disable-next-line no-eval
        metadata = eval(`(${metadataMatch.groups.metadata})`);
      } catch (error) {
        console.error("Error parsing metadata:", error);
        continue;
      }

      if (!metadata.title) {
        console.warn(`No title found in metadata of ${entryPath}. Skipping.`);
        continue;
      }

      const textContent = htmlToText(content);
      const cleanedPath = cleanPath(entryPath);

      const id = hash("sha-1", `mdx:${cleanedPath}`);

      pages.push({
        id,
        data: {
          title: metadata.title,
          content: textContent,
          path: cleanedPath,
          content_type: "mdx_doc",
        },
      });
    }
  }

  return pages;
}

function cleanPath(filePath) {
  const relativePath = path.relative(cwd(), filePath);
  return (
    "/" +
    relativePath
      .replace(/^src\/pages\//, "")
      .replace(/^pages\//, "")
      .replace(/^app\//, "")
      .replace(/\/index\.mdx$/, "")
      .replace(/\.mdx$/, "")
      // Remove trailing "/page" segment if it appears
      .replace(/\/page$/, "")
  );
}

async function deleteOldDocs({ endpoint, accessToken }, pages) {
  const currentMdxDocuments = new Set(pages.map((page) => page.id));
  const variablesForQuery = { query: 'content_type:"mdx_doc"' };

  try {
    const response = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({
        query: queryDocuments,
        variables: variablesForQuery,
      }),
    });

    const result = await response.json();

    if (result.errors) {
      console.error("Error fetching existing documents:", result.errors);
      return;
    }

    const existingIndexedDocuments = new Set(
      result.data.find.documents.map((doc) => doc.id)
    );

    const documentsToDelete = [...existingIndexedDocuments].filter(
      (id) => !currentMdxDocuments.has(id)
    );

    if (documentsToDelete.length === 0) {
      console.log("No documents to delete.");
      return;
    }

    for (const docId of documentsToDelete) {
      const variablesForDelete = { id: docId };

      try {
        const deleteResponse = await fetch(endpoint, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            Authorization: `Bearer ${accessToken}`,
          },
          body: JSON.stringify({
            query: deleteMutation,
            variables: variablesForDelete,
          }),
        });

        const deleteResult = await deleteResponse.json();

        if (deleteResult.errors) {
          console.error(`Error deleting document ID ${docId}:`, deleteResult.errors);
        } else {
          console.log(`Deleted document ID ${docId}:`, deleteResult.data.delete);
        }
      } catch (error) {
        console.error(`Network error deleting document ID ${docId}:`, error);
      }
    }
  } catch (error) {
    console.error("Error during deletion process:", error);
  }
}

async function sendPagesToEndpoint({ endpoint, accessToken }, pages) {
  if (pages.length === 0) {
    console.warn("No documents found for indexing.");
    return;
  }

  const documents = pages.map((page) => ({
    id: page.id,
    data: page.data,
  }));

  const variables = { input: { documents } };

  try {
    const response = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({ query: bulkIndexQuery, variables }),
    });

    if (!response.ok) {
      console.error(
        `Error during bulk indexing: ${response.status} ${response.statusText}`
      );
      return;
    }

    const result = await response.json();

    if (result.errors) {
      console.error("GraphQL bulk indexing error:", result.errors);
    } else {
      console.log(`Indexed ${documents.length} documents successfully.`);
    }
  } catch (error) {
    console.error("Error during bulk indexing:", error);
  }
}

export default smartSearchPlugin;

This is a lot of code so let’s break this down into small chunks.

At the top of the file, we have our imports:

import { hash } from "node:crypto";
import fs from "node:fs/promises";
import path from "node:path";
import { cwd } from "node:process";
import { htmlToText } from "html-to-text";

These imports are responsible for the following:

  • hash: From Node’s built-in crypto module. We’ll use this to generate unique identifiers for our documents, ensuring that each piece of content can be tracked as it’s indexed.
  • fs/promises: This module lets us work with the file system using async/await. We’ll read directories and MDX files and collect their content asynchronously.
  • path: Used for handling and transforming file paths.
  • cwd: Retrieves the current working directory of the Node.js process.
  • html-to-text: This allows us to convert raw HTML (or MDX content that often renders to HTML) into plain text.

Then, we define a GraphQL query to retrieve already-indexed documents that match a specific query (in this case, all documents tagged as mdx_doc). This is part of the cleanup process, where we compare what’s currently indexed against what we are about to index:

const queryDocuments = `
query FindIndexedMdxDocs($query: String!) {
  find(query: $query) {
    documents {
      id
    }
  }
}
`;

Following that, we have a GraphQL mutation for deleting a document by its id. We’ll use this to remove outdated content from the index, ensuring the search results stay accurate:

const deleteMutation = `
mutation DeleteDocument($id: ID!) {
  delete(id: $id) {
    code
    message
    success
  }
}
`;

Next, we define a GraphQL mutation for bulk indexing. After collecting all MDX pages, we send them to this mutation in a single request, making the indexing process efficient and seamless.

The next line is a flag that ensures our indexing logic runs only once per build. Without it, we might accidentally trigger multiple indexing operations, leading to redundant work and inconsistent state:

let isPluginExecuted = false;

After that, we have our main plugin function:

function smartSearchPlugin({ endpoint, accessToken }) {
  return {
    apply: (compiler) => {
      compiler.hooks.done.tapPromise("SmartSearchPlugin", async () => {
        if (isPluginExecuted) return;
        isPluginExecuted = true;

        if (compiler.options.mode !== "production") {
          console.log("Skipping indexing in non-production mode.");
          return;
        }

        try {
          const pages = await collectPages(path.join(cwd(), "app/docs"));
          console.log("Docs Pages collected for indexing:", pages.length);

          await deleteOldDocs({ endpoint, accessToken }, pages);
          await sendPagesToEndpoint({ endpoint, accessToken }, pages);
        } catch (error) {
          console.error("Error in smartSearchPlugin:", error);
        }
      });
    },
  };
}

It receives the endpoint for the search API and an access token for authentication. The plugin hooks into the build process via the apply method. We use compiler.hooks.done.tapPromise to run our indexing logic once the build finishes. This ensures our search index updates after the latest version of the site is compiled.

If we’ve already run this plugin once, we skip running it again. Setting isPluginExecuted to true ensures that even if the build pipeline triggers multiple times, the indexing won’t repeat. Then we make sure we only index in production mode. In dev mode, we don’t want to flood our search index with unstable or temporary content, so we bail out if we are not in production.

We then call collectPages() to gather all MDX documents from app/docs. After this call, pages is an array of all the files we want to index. We log how many we find for transparency.

Then we remove any documents that are no longer relevant by calling deleteOldDocs(). That is followed by adding our current pages to the search index with sendPagesToEndpoint().

If anything goes wrong in our try block, we catch the error, log it out, and do not crash the build. This error handling gives us insight into indexing issues without breaking the build entirely.

The next thing we have is an async function that walks through a given directory, finds all MDX files, extracts their metadata, and preps them for indexing:

async function collectPages(directory) {
  const pages = [];
  const entries = await fs.readdir(directory, { withFileTypes: true });

  for (const entry of entries) {
    const entryPath = path.join(directory, entry.name);

    if (entry.isDirectory()) {
      const subPages = await collectPages(entryPath);
      pages.push(...subPages);
    } else if (entry.isFile() && entry.name.endsWith(".mdx")) {
      const content = await fs.readFile(entryPath, "utf8");

      const metadataMatch = content.match(
        /export\s+const\s+metadata\s*=\s*(?<metadata>{[\S\s]*?});/
      );

      if (!metadataMatch?.groups?.metadata) {
        console.warn(`No metadata found in ${entryPath}. Skipping.`);
        continue;
      }

If we don’t find any metadata in the file, we log a warning and skip it. Our indexing workflow requires metadata, specifically a title, to properly index the content.

Here, we parse the metadata using eval:

      let metadata = {};
      try {
        metadata = eval(`(${metadataMatch.groups.metadata})`);
      } catch (error) {
        console.error("Error parsing metadata:", error);
        continue;
      }
      
      if (!metadata.title) {
        console.warn(`No title found in metadata of ${entryPath}. Skipping.`);
        continue;
      }

If the metadata is invalid or can’t be evaluated, we log an error and skip the file. Then we ensure that the metadata includes a title. If it’s not a valid document for indexing, we skip it.

Next, we convert the MDX content to plain text for indexing. This gives our search engine clean text without HTML tags. Then we run cleanPath() to transform the file’s absolute path into a user-facing URL path, so that app/docs/my-page/page.mdx becomes a neat URL like /docs/my-page. After that, a unique ID for this document is generated based on the cleaned path. Using a hash ensures that the same path always yields the same ID, which is needed for identifying when documents change or need deletion:

      const textContent = htmlToText(content);
      const cleanedPath = cleanPath(entryPath);

      const id = hash("sha-1", `mdx:${cleanedPath}`);

The next lines add a new document to the pages array. It includes the ID plus the title, plain-text content, path, and content type that the search API needs:

      pages.push({
        id,
        data: {
          title: metadata.title,
          content: textContent,
          path: cleanedPath,
          content_type: "mdx_doc",
        },
      });
    }
  }

  return pages;
}

The cleanPath function takes a file’s absolute path, converts it to a project-relative path, and then strips out unwanted prefixes. It also removes the .mdx extension and drops any trailing /index or /page segments:

function cleanPath(filePath) {
  const relativePath = path.relative(cwd(), filePath);
  return (
    "/" +
    relativePath
      .replace(/^src\/pages\//, "")
      .replace(/^pages\//, "")
      .replace(/^app\//, "")
      .replace(/\/index\.mdx$/, "")
      .replace(/\.mdx$/, "")
      // Remove trailing "/page" segment if it appears
      .replace(/\/page$/, "")
  );
}
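To make the transformation concrete, here is what cleanPath produces for the docs pages we created earlier:

// app/docs/hello-world/page.mdx -> /docs/hello-world
// app/docs/example/page.mdx     -> /docs/example
console.log(cleanPath(path.join(cwd(), "app/docs/hello-world/page.mdx")));
// "/docs/hello-world"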

In the following async function, we take a snapshot of all the current document IDs we are about to index. We then prepare a GraphQL query to find all existing mdx_doc documents, some of which might no longer be relevant:

async function deleteOldDocs({ endpoint, accessToken }, pages) {
  const currentMdxDocuments = new Set(pages.map((page) => page.id));
  const variablesForQuery = { query: 'content_type:"mdx_doc"' };

Following that, we perform a network request to the search GraphQL endpoint to fetch the list of already-indexed documents. After sending the request, we await response.json() to parse the server’s JSON reply into a JS object.

Then we check for errors, and if something is wrong, we log it:

try {
    const response = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({
        query: queryDocuments,
        variables: variablesForQuery,
      }),
    });

    const result = await response.json();

    if (result.errors) {
      console.error("Error fetching existing documents:", result.errors);
      return;
    }

Here, we take the documents returned from our earlier GraphQL query (result.data.find.documents) and map them down to their IDs. We then wrap these IDs in a JavaScript Set. A Set gives us fast lookups, helping us easily check whether a given document ID is present.

    const existingIndexedDocuments = new Set(
      result.data.find.documents.map((doc) => doc.id)
    );

Then we identify the documents to delete: every ID that is currently indexed but absent from the set of documents we just collected. For example, if the index holds IDs {A, B, C} and the current build produced only {A, C}, then B gets deleted:

const documentsToDelete = [...existingIndexedDocuments].filter(
  (id) => !currentMdxDocuments.has(id)
);

If documentsToDelete is empty, it means our search index is perfectly aligned with our current documents, and there is nothing to remove. In that case, we log a message and return early:

if (documentsToDelete.length === 0) {
      console.log("No documents to delete.");
      return;
    }

Now we iterate over each document ID in documentsToDelete. For each one, we create a variablesForDelete object that will be passed into a GraphQL mutation. This mutation deletes the document with that specific id from the search index:

    for (const docId of documentsToDelete) {
      const variablesForDelete = { id: docId };

The next lines send a GraphQL mutation request to the search endpoint to remove a specific document from the index. It posts a deleteMutation query along with the necessary variables (including the document ID to delete). After the network call completes, it processes the JSON response to confirm whether the deletion was successful.


      try {
        const deleteResponse = await fetch(endpoint, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            Authorization: `Bearer ${accessToken}`,
          },
          body: JSON.stringify({
            query: deleteMutation,
            variables: variablesForDelete,
          }),
        });

        const deleteResult = await deleteResponse.json();

This section checks whether the deletion request was successful or not. If the GraphQL response includes any errors, it logs those errors along with the document ID that failed to delete. Otherwise, it confirms that the document was successfully removed by logging a success message. Should any network issues occur during the process, those are also caught and logged. After processing all documents set for deletion, if a broader error occurs, it’s logged to help diagnose issues in the overall deletion process:

        if (deleteResult.errors) {
          console.error(
            `Error deleting document ID ${docId}:`,
            deleteResult.errors
          );
        } else {
          console.log(
            `Deleted document ID ${docId}:`,
            deleteResult.data.delete
          );
        }
      } catch (error) {
        console.error(`Network error deleting document ID ${docId}:`, error);
      }
    }
  } catch (error) {
    console.error("Error during deletion process:", error);
  }
}

Following that, we set up the data needed to index newly collected pages. If there are no pages to index, it simply logs a warning and exits. Otherwise, it transforms the array of page objects into a format required by the GraphQL bulkIndex mutation, packaging each page’s ID and associated metadata into a documents array. It then wraps that array into a variables object, preparing the payload that will be sent to the search endpoint.

async function sendPagesToEndpoint({ endpoint, accessToken }, pages) {
  if (pages.length === 0) {
    console.warn("No documents found for indexing.");
    return;
  }

  const documents = pages.map((page) => ({
    id: page.id,
    data: page.data,
  }));

  const variables = { input: { documents } };

At the end of this code block, we send the prepared list of documents to the search service using a POST request. It includes the bulkIndexQuery and the variables containing all documents that need indexing. If the server responds with a non-OK status, it logs an error and stops. Once a successful response comes in, it checks for any GraphQL-level errors and logs them. If there are no errors, it confirms that the documents have been successfully indexed. If a network or unexpected error occurs at any point, it’s caught and reported, ensuring any issues are visible and can be addressed:

try {
    const response = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({ query: bulkIndexQuery, variables }),
    });

    if (!response.ok) {
      console.error(
        `Error during bulk indexing: ${response.status} ${response.statusText}`
      );
      return;
    }

    const result = await response.json();

    if (result.errors) {
      console.error("GraphQL bulk indexing error:", result.errors);
    } else {
      console.log(`Indexed ${documents.length} documents successfully.`);
    }
  } catch (error) {
    console.error("Error during bulk indexing:", error);
  }
}

export default smartSearchPlugin;

Components Directory

The next thing we have to do is go over the components needed in this project.  The components directory will be at the root of our Next.js 15 project. 

The search-bar.jsx file


The search bar in our project is a client-side search component with a keyboard-accessible modal. Since it is a large block of code, let’s break down the code line by line.

At the start of the file, we declare this as a client-side component with “use client”. This tells Next.js to run this component on the client, allowing the use of hooks like useState and useEffect.

All our necessary imports for this component follow that.
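Based on everything the component uses below, the top of search-bar.jsx should look something like this (a sketch; the exact import list is in the demo repo):

"use client";

import { useCallback, useEffect, useRef, useState } from "react";
import { useRouter } from "next/navigation";
import { useCombobox } from "downshift";
import debounce from "lodash.debounce";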

Next, we define and export our SearchBar component, which is our main component. It renders the search bar and handles all the logic.

Then we have the state and variable references:

export default function SearchBar() {
  const [items, setItems] = useState([]);
  const [inputValue, setInputValue] = useState("");
  const [isModalOpen, setIsModalOpen] = useState(false);
  const dialogReference = useRef(null);
  const router = useRouter();

Here is what the state and variable references do:

const [items, setItems] = useState([]); 

This initializes items state as an empty array. It stores the search results fetched from the API.

const [inputValue, setInputValue] = useState(""); 

This initializes the input value state as an empty string and tracks the current value of the search input field.

const [isModalOpen, setIsModalOpen] = useState(false); 

This sets up the isModalOpen state as false and controls the visibility of the search modal.

const dialogReference = useRef(null);

This creates a dialogReference ref initialized to null. It will reference the modal dialog DOM element so we can detect clicks outside the modal.

const router = useRouter();

This grabs the Next.js router instance, which allows us to navigate to different routes programmatically.

After the state and references, we have our modal control functions:

const openModal = useCallback(() => {
    setIsModalOpen(true);
    setInputValue("");
    setItems([]);
  }, []);
const closeModal = useCallback(() => setIsModalOpen(false), []);

This defines the openModal function using useCallback. It opens the search modal and resets the input and items. The dependency array is empty, so the function is memoized once and doesn’t change across renders.

Then we define the closeModal function using useCallback. This closes the search modal by setting isModalOpen to false. The dependency array is empty here too, ensuring a consistent reference.

The next block of code is event handlers:

const handleOutsideClick = useCallback(
  (event) => {
    const isClickOutsideOfModal = event.target === dialogReference.current;
    if (isClickOutsideOfModal) {
      closeModal();
      setInputValue("");
      setItems([]);
    }
  },
  [closeModal]
);

const handleKeyDown = useCallback(
  (event) => {
    if (event.metaKey && event.key === "k") {
      event.preventDefault();
      openModal();
    }

    if (event.key === "Escape") {
      closeModal();
    }
  },
  [openModal, closeModal]
);

The first event handler closes the modal when the user clicks outside of it.

The second handles keyboard shortcuts on a Mac. If ⌘K is pressed, it prevents the default behavior and opens the modal. If the Escape key is pressed, it closes the modal.

The following is an event listener:

useEffect(() => {
  document.addEventListener("keydown", handleKeyDown);
  return () => document.removeEventListener("keydown", handleKeyDown);
}, [handleKeyDown]);

This attaches the key-down event listener to the document when the component mounts and cleans up when it unmounts.

Next, we have our debounced fetch function:

 const debouncedFetchItems = useRef(
    debounce(async (value) => {
      if (!value) {
        setItems([]);
        return;
      }

      try {
        const response = await fetch(
          `/api/search?query=${encodeURIComponent(value)}`
        );

        if (!response.ok) {
          console.error(
            `Search API error: ${response.status} ${response.statusText}`
          );
          setItems([]);
          return;
        }

        const data = await response.json();

        if (Array.isArray(data)) {
          setItems(data);
        } else {
          console.error("Search API returned unexpected data:", data);
          setItems([]);
        }
      } catch (error) {
        console.error("Error fetching search results:", error);
        setItems([]);
      }
    }, 500)
  ).current;

This creates a debounced function that fetches search results, waiting 500 milliseconds after the last keystroke before calling the API. useRef stores the debounced function so it is not recreated on every render. debounce wraps the async function to delay its execution.

In the body of the function, we fetch search results from the API based on the user’s input value. If the value is empty or falsy, we clear items and exit.

The API call fetches data from /api/search, encoding the value to handle special characters. If the response is not OK, it logs an error and clears items. The response JSON is then parsed, and we check whether data is an array. If it is, we update items; otherwise, we log an error and clear them.

The error handling then catches any network or parsing errors, logs them and clears items.

Following that we have our cleanup effect:

useEffect(() => {
  return () => {
    debouncedFetchItems.cancel();
  };
}, [debouncedFetchItems]);

This useEffect cleanup hook cancels any pending debounced function calls when the component unmounts. The next block of code is our downshift combobox configuration:

const {
    isOpen,
    getMenuProps,
    getInputProps,
    getItemProps,
    highlightedIndex,
    openMenu,
    closeMenu,
  } = useCombobox({
    items,
    inputValue,
    defaultHighlightedIndex: 0,
    onInputValueChange: ({ inputValue: newValue }) => {
      setInputValue(newValue);
      debouncedFetchItems(newValue);
      if (newValue.trim() === "") {
        closeMenu();
      } else {
        openMenu();
      }
    },
    onSelectedItemChange: ({ selectedItem }) => {
      if (selectedItem) {
        closeModal();
        router.push(selectedItem.path);
      }
    },
    itemToString: (item) => (item ? item.title : ""),
  });

This configures the combobox behavior using the useCombobox hook from Downshift.

At the start of the const, we have our destructured values:

  • isOpen: Boolean indicating if the menu is open.
  • getMenuProps: Props to spread onto the menu (<ul>) element.
  • getInputProps: Props to spread onto the input element.
  • getItemProps: Props to spread onto each item (<li>) element.
  • highlightedIndex: The index of the currently highlighted item.
  • openMenu: Function to open the menu.
  • closeMenu: Function to close the menu.

After that, we have our config options:

  • items: The array of items to display (search results).
  • inputValue: The current value of the input field.
  • defaultHighlightedIndex: Sets the default highlighted item (index 0).
  • onInputValueChange: Called when the input value changes.
  • onSelectedItemChange: Called when an item is selected.
  • itemToString: Converts an item to a string representation for display.

Then, the onInputValueChange handler updates state and manages menu visibility when the input value changes. If the input is empty, the menu closes; otherwise, it opens.

The onSelectedItemChange handler handles actions when an item is selected from the search results.  If a selected item exists, it closes the modal and navigates to the selected item’s path with router.push.

The last thing in this code block is the itemToString function. This converts an item to a string for display purposes.  If there is an item, it will return its title and if not, it returns an empty string.

Lastly, we have our JSX return statement:

return (
  <>
    {/* Search Button */}
    <button
      className="inline-flex items-center rounded-md bg-gray-800 px-2 py-1.5 text-sm font-medium text-gray-400 hover:bg-gray-700"
      onClick={openModal}
      type="button"
    >
      <span className="hidden md:inline">
        <span className="pl-3">Search docs or posts...</span>
        <kbd className="ml-8 rounded bg-gray-700 px-2 py-1 text-gray-400">
          ⌘K
        </kbd>
      </span>
    </button>

    {/* Modal */}
    {isModalOpen && (
      <div
        className="bg-black fixed inset-0 z-50 flex items-start justify-center bg-opacity-50 backdrop-blur-sm"
        onClick={handleOutsideClick}
        onKeyDown={(event) => {
          if (event.key === "Enter" || event.key === " ") {
            handleOutsideClick(event);
          }
        }}
        role="button"
        tabIndex="0"
        ref={dialogReference}
      >
        {/* Modal Content */}
        <div
          className="relative mt-10 w-full max-w-4xl rounded-lg bg-gray-800 p-6 shadow-lg"
          role="dialog"
          tabIndex="-1"
        >
          {/* Combobox */}
          <div
            role="combobox"
            aria-expanded={isOpen}
            aria-haspopup="listbox"
            aria-controls="search-results"
          >
            {/* Input Field */}
            <div className="relative">
              <input
                autoFocus
                {...getInputProps({
                  placeholder: "What are you searching for?",
                  "aria-label": "Search input",
                  className:
                    "w-full pr-10 p-2 bg-gray-700 text-white placeholder-gray-400 border border-gray-700 rounded focus:outline-none focus:ring-2 focus:ring-blue-500",
                })}
              />
              {/* Close Button */}
              <button
                type="button"
                className="absolute right-2 top-1/2 -translate-y-1/2 transform text-xs text-gray-400 hover:text-white"
                onClick={closeModal}
              >
                Esc
              </button>
            </div>
            {/* Results List */}
            <ul
              {...getMenuProps({
                id: "search-results",
              })}
              className="mt-2 max-h-60 overflow-y-auto"
            >
              {isOpen &&
                items &&
                items.length > 0 &&
                items.map((item, index) => (
                  <li
                    key={item.id}
                    {...getItemProps({
                      item,
                      index,
                      onClick: () => {
                        closeModal();
                        router.push(item.path);
                      },
                      onKeyDown: (event) => {
                        if (event.key === "Enter") {
                          closeModal();
                          router.push(item.path);
                        }
                      },
                    })}
                    role="option"
                    aria-selected={highlightedIndex === index}
                    tabIndex={0}
                    className={`flex w-full cursor-pointer items-center justify-between px-4 py-4 ${
                      highlightedIndex === index
                        ? "bg-blue-600 text-white"
                        : "bg-gray-800 text-white"
                    }`}
                  >
                    <span className="text-left">{item.title}</span>
                    <span className="text-right text-sm text-gray-400">
                      {item.type === "mdx_doc" ? "Doc" : "Blog"}
                    </span>
                  </li>
                ))}
            </ul>
            {/* No Results Message */}
            {isOpen && items.length === 0 && (
              <div className="mt-2 text-gray-500">No results found.</div>
            )}
          </div>
        </div>
      </div>
    )}
  </>
);

This JSX return statement renders a dynamic and interactive search feature. It starts with a button styled to initiate the search modal and displays a keyboard shortcut (⌘K) for convenience. When clicked, the button opens a full-screen modal overlay with a slight blur effect, visually focusing the user on the search functionality while the rest of the page dims. 

The modal includes a responsive, styled search input field that is immediately focused and supports live autocomplete through the useCombobox hook. Below the input field, a scrollable list dynamically displays search results, with highlighted and selectable options that navigate the user to their respective pages when clicked or when the Enter key is pressed. If no results are found, a subtle message is shown. 

Additionally, the modal is fully accessible, with ARIA roles and attributes for the combobox and options, and it can be closed by clicking outside, pressing the Escape key, or using the close button within the modal. In this example code, we’re using  Tailwind CSS, but you can use whatever styling solution works best for your project.

Layout, Heading, and NavBar

There are 3 more components that we will go over in our components folder.


The docs-layout component is a wrapper that applies consistent styling to docs pages.  This ensures proper spacing and a clean readable layout for its child content.

The heading component creates custom-styled headings with clickable anchor links, making it easy for users to share or navigate to specific sections. It dynamically adjusts styles based on the heading level.

Lastly, the Navbar component provides a global navigation bar with a link to the home page on the left and a SearchBar component on the right for usability.
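As a rough, hypothetical sketch, the docs-layout wrapper could be as simple as this; the real implementations are in the repo linked below:

// components/docs-layout.jsx (sketch; see the repo for the actual version)
export default function DocsLayout({ children }) {
  // Constrain the content width and add spacing so MDX content stays readable.
  return <article className="mx-auto max-w-3xl px-4 py-10">{children}</article>;
}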

There is not much to the code in these components.  You can reference them in the repo here: 

https://github.com/Fran-A-Dev/smart-search-with-app-router/blob/main/components/nav-bar.jsx

https://github.com/Fran-A-Dev/smart-search-with-app-router/blob/main/components/heading.jsx

https://github.com/Fran-A-Dev/smart-search-with-app-router/blob/main/components/docs-layout.jsx

The route.js File

In Next.js 15 App Router, we need to set up a route handler that allows us to create a custom request handler for our search API route.  This API route handles GET requests for search functionality by querying a GraphQL endpoint for documents matching a user-provided query string.

In the app folder, create a subfolder called api. In that subfolder, create another subfolder called search.  Within that search folder, create a file called route.js.  Copy and paste this code block:

import { NextResponse } from "next/server";
import { ReasonPhrases, StatusCodes } from "http-status-codes";

function cleanPath(filePath) {
  return (
    filePath
      .replace(/^\/?src\/pages/, "")
      .replace(/^\/?pages/, "")
      .replace(/\/index\.mdx$/, "")
      .replace(/\.mdx$/, "") || "/"
  );
}

export async function GET(request) {
  const endpoint = process.env.NEXT_PUBLIC_SEARCH_ENDPOINT;
  const accessToken = process.env.NEXT_SEARCH_ACCESS_TOKEN;

  const { searchParams } = new URL(request.url);
  const query = searchParams.get("query");

  if (!query) {
    return NextResponse.json(
      { error: "Search query is required." },
      { status: StatusCodes.BAD_REQUEST }
    );
  }

  const graphqlQuery = `
    query FindDocuments($query: String!) {
      find(query: $query) {
        total
        documents {
          id
          data
        }
      }
    }
  `;

  try {
    const response = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({
        query: graphqlQuery,
        variables: { query },
      }),
    });

    if (!response.ok) {
      return NextResponse.json(
        { error: ReasonPhrases.SERVICE_UNAVAILABLE },
        { status: StatusCodes.SERVICE_UNAVAILABLE }
      );
    }

    const result = await response.json();

    if (result.errors) {
      return NextResponse.json(
        { errors: result.errors },
        { status: StatusCodes.INTERNAL_SERVER_ERROR }
      );
    }

    const seenIds = new Set();
    const formattedResults = [];

    for (const content of result.data.find.documents) {
      const contentType =
        content.data.content_type || content.data.post_type || "mdx_doc";
      let item = null;

      if (contentType === "mdx_doc" && content.data.title) {
        const path = content.data.path ? cleanPath(content.data.path) : "/";
        item = {
          id: content.id,
          title: content.data.title,
          path,
          type: "mdx_doc",
        };
      } else if (
        (contentType === "wp_post" || contentType === "post") &&
        content.data.post_title &&
        content.data.post_name
      ) {
        item = {
          id: content.id,
          title: content.data.post_title,
          path: `/blog/${content.data.post_name}`,
          type: "post",
        };
      }

      if (item && !seenIds.has(item.id)) {
        seenIds.add(item.id);
        formattedResults.push(item);
      }
    }

    return NextResponse.json(formattedResults, { status: StatusCodes.OK });
  } catch (error) {
    console.error("Error fetching search data:", error);
    return NextResponse.json(
      { error: ReasonPhrases.INTERNAL_SERVER_ERROR },
      { status: StatusCodes.INTERNAL_SERVER_ERROR }
    );
  }
}

First, we import NextResponse from Next.js, to return responses in App Router-based endpoints. We also bring in ReasonPhrases and StatusCodes from http-status-codes to ensure our responses are consistent and human-readable, making it easier to understand which HTTP statuses we’re returning and why.

The cleanPath function then takes an internal file path—like one found in a Next.js src/pages or app directory—and transforms it into a clean, user-friendly URL. It strips away any directory prefixes (/src/pages, /pages), and removes filename extensions or index placeholders (such as index.mdx).

If all content is removed, it defaults to /, ensuring every piece of content resolves to a neat, presentable URL.
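A couple of hypothetical inputs make the behavior clear:

console.log(cleanPath("/src/pages/docs/intro.mdx")); // "/docs/intro"
console.log(cleanPath("/pages/index.mdx")); // "/"

Paths that were already cleaned by the build plugin (like /docs/example) contain none of these prefixes, so they pass through unchanged.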

In the next few lines, we handle incoming search requests. The GET function is the main entry point for the endpoint. It begins by reading environment variables NEXT_PUBLIC_SEARCH_ENDPOINT and NEXT_SEARCH_ACCESS_TOKEN—these determine where we’ll send our search queries and how we authenticate with the search service.

Next, it extracts the query parameter from the request’s URL. If no query is provided, the function quickly returns a 400 Bad Request response, ensuring that the caller knows a query string is mandatory.

Then we have a GraphQL query that finds all documents matching the user’s query string:

export async function GET(request) {
  const endpoint = process.env.NEXT_PUBLIC_SEARCH_ENDPOINT;
  const accessToken = process.env.NEXT_SEARCH_ACCESS_TOKEN;

  const { searchParams } = new URL(request.url);
  const query = searchParams.get("query");

  if (!query) {
    return NextResponse.json(
      { error: "Search query is required." },
      { status: StatusCodes.BAD_REQUEST }
    );
  }

  const graphqlQuery = `
    query FindDocuments($query: String!) {
      find(query: $query) {
        total
        documents {
          id
          data
        }
      }
    }
  `;

In the next few lines, we send a POST request to the search endpoint, including the GraphQL query and the search term. This request includes authentication via a bearer token to ensure only authorized calls go through.

If the response indicates an issue—like the server being unreachable—it immediately returns a 503 Service Unavailable message to let the caller know the search service isn’t currently accessible.

After verifying the response is good, it converts the response to JSON. If the server replies with GraphQL errors, the code returns a 500 Internal Server Error response, signaling that something went wrong on the server side.

 try {
    const response = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${accessToken}`,
      },
      body: JSON.stringify({
        query: graphqlQuery,
        variables: { query },
      }),
    });

    if (!response.ok) {
      return NextResponse.json(
        { error: ReasonPhrases.SERVICE_UNAVAILABLE },
        { status: StatusCodes.SERVICE_UNAVAILABLE }
      );
    }

    const result = await response.json();

    if (result.errors) {
      return NextResponse.json(
        { errors: result.errors },
        { status: StatusCodes.INTERNAL_SERVER_ERROR }
      );
    }

Following that, the code loops over the documents returned by the search service, turning them into a more usable, standardized format. It uses a Set called seenIds to keep track of which document IDs have already been processed, ensuring that no duplicates end up in the final results.

For each content object, the code identifies the document’s type—whether it’s an mdx_doc or a WordPress post—so that it knows how to handle its title and path.

For mdx_doc types, it cleans the path using cleanPath() to create a user-friendly URL. For WordPress posts, it constructs a blog URL based on the post’s name. This ensures that each document is transformed into a consistent, meaningful shape: it has an id, a title, and a path that can be used directly in the frontend.

Finally, each document is only added to the results if it hasn’t been seen before. Once processed, formattedResults ends up with a clean, non-redundant list of items.

    const seenIds = new Set();
    const formattedResults = [];

    for (const content of result.data.find.documents) {
      const contentType =
        content.data.content_type || content.data.post_type || "mdx_doc";
      let item = null;

      if (contentType === "mdx_doc" && content.data.title) {
        const path = content.data.path ? cleanPath(content.data.path) : "/";
        item = {
          id: content.id,
          title: content.data.title,
          path,
          type: "mdx_doc",
        };
      } else if (
        (contentType === "wp_post" || contentType === "post") &&
        content.data.post_title &&
        content.data.post_name
      ) {
        item = {
          id: content.id,
          title: content.data.post_title,
          path: `/blog/${content.data.post_name}`,
          type: "post",
        };
      }

      if (item && !seenIds.has(item.id)) {
        seenIds.add(item.id);
        formattedResults.push(item);
      }
    }

Lastly, the code returns a 200 OK response containing the formattedResults array if all is good. This gives the user a clear set of neatly formatted search results.

If an exception occurs, the catch block logs the issue for diagnostic purposes and responds with a 500 Internal Server Error. This ensures that the client knows there is an error and logs a record of the error to troubleshoot.
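With the route in place, you can sanity-check it directly. The response shape below comes straight from the formattedResults array we just built; the exact items will depend on your content (this assumes the default localhost:3000):

// With the dev or production server running locally:
const res = await fetch("http://localhost:3000/api/search?query=test");
console.log(await res.json());
// e.g. [{ id: "...", title: "Test Page", path: "/docs/example", type: "mdx_doc" }]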

Test the Search Functionality

That is all the code we need to make this work. Stoked!!!

Now, let’s test in production mode.  

Run this command in your terminal to create a production build in your Next.js project:

npm run build

When you run a build, you will notice this output in your terminal:
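The exact lines depend on how many MDX files you created, but based on the plugin’s own log statements, it should look something like this for our two example docs pages:

Docs Pages collected for indexing: 2
No documents to delete.
Indexed 2 documents successfully.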

You should be even more stoked, because this output shows the plugin working through the indexing process!

Now, start the server to test the application in production:

npm run start

On the page, click on the search bar and type a title (or any associated content) from a WordPress blog post in your WP backend. Matching posts should appear in the results list.

Then, try typing a title or content associated with your MDX docs, and matching docs pages should appear as well.


Notice that the search results return both documents and blog posts if they contain similar data! When you click a search result, it takes you to the correct path of the MDX page or the WordPress blog post. (Remember, the WP posts are coming from the dynamic route file you already have set up.)

The good thing is that we have labeled them accordingly, so our users know which is which.

Conclusion 

The WP Engine Smart Search plugin is an enhanced search solution for both traditional and headless WordPress. We hope this tutorial helps you better understand how to set it up and use it in a headless app.

As always, we’re super stoked to hear your feedback and learn about the headless projects you’re working on, so hit us up in our Discord!



The post WP Engine Smart Search for Headless WordPress with Next.js and MDX appeared first on Builders.

]]>
https://wpengine.com/builders/wp-engine-smart-search-for-headless-wordpress-with-next-js-and-mdx/feed/ 0
Code Syntax Highlighting in Headless WordPress https://wpengine.com/builders/code-syntax-highlighting-in-headless-wordpress/ https://wpengine.com/builders/code-syntax-highlighting-in-headless-wordpress/#respond Wed, 09 Oct 2024 21:46:21 +0000 https://wpengine.com/builders/?p=31729 If you’re using Headless WordPress with Next.js or any other frontend framework, you might have run into a small issue when displaying code snippets: the native WordPress code block doesn’t […]

The post Code Syntax Highlighting in Headless WordPress appeared first on Builders.

]]>
If you’re using Headless WordPress with Next.js or any other frontend framework, you might have run into a small issue when displaying code snippets: the native WordPress code block doesn’t support syntax highlighting. This can be a problem for developer-focused sites or tutorials where reading and understanding code snippets is key.

This shortcoming of the native Code block often leads developers to look for other solutions that support code syntax highlighting. For a recent headless WordPress project our team worked on, we researched a few options including the Syntax-highlighting Code Block (with Server-side Rendering) and Code Block Pro. We found Code Block Pro to offer the highest quality syntax highlighting, provide tons of features and customization options, and even support a number of popular VS Code themes to choose from, giving code snippets a nice, professional look.

In this guide, we’ll show you how to install and use Code Block Pro with a Headless WordPress setup, ensuring your code snippets are presented properly on the front end.

If you prefer the video format of this article, you can access it here:

Prerequisites

Before reading this article, you should have the following prerequisites checked off:

  • Basic knowledge of Next.js 14
  • Next.js boilerplate project that uses the App Router
  • A WordPress install
  • A basic knowledge of WPGraphQL and headless WordPress

In order to follow along step by step, you need the following:

  • A Next.js project that is connected with your WordPress backend.
  • A dynamic route that grabs a single post by its URI to display a single post detail page

If you do not have that yet and do want to follow along step by step, you can clone down my demo here: https://github.com/Fran-A-Dev/Kevin-Bacon-Code-Syntax-Highlighting-HeadlessWP/tree/main

To gain a basic understanding of Next.js, please review the Next.js docs.

If you need a WordPress install, you can use a free sandbox account on WP Engine’s Headless Platform:

Steps

Install and activate Code Block Pro

Steps to install and activate the Code Block Pro plugin:

  • Go to your WordPress admin dashboard.
  • Navigate to Plugins > Add New.
  • Search for “Code Block Pro“.
  • Click Install Now, then Activate.

You should have this:

Now that it is activated, create a new post in WordPress.  In the block editor, click the plus icon to insert a new block and select the Code Pro block.  It looks like this:

When you select it, a syntax-highlighted code block will be added.  Go ahead and paste whatever code you want into that block and then save the post.  In this case, I am going to add some JSX that renders a page.  Note that the panel on the right contains many configuration options.

I chose the “Dracula Soft” theme for this article, and the header type is set to “none” to achieve a blank header.  The footer is set to “simple string start”, which displays the language the code is written in at the bottom.  I also highlighted lines 1 and 2 to show off the line-by-line highlight feature:

That is it!  Stoked!  If simple syntax highlighting and formatting are all you need, along with displaying the programming language in your post, you are done.  Now, let’s get this to render on your decoupled frontend.

Configure Next.js and App Router

In this article, we will use the App Router in Next.js 14. You should already have a boilerplate Next.js project spun up with the App Router. Go ahead and open your Next.js project in your code editor.

The first thing we need to do is add some CSS.  Navigate to your globals.css file in the app directory and add this CSS:

/* Line highlighting for Code Block Pro blocks */
pre.shiki {
  padding-inline: 0;
}
pre.shiki .line {
  padding-inline: 2rem;
}
pre.shiki .cbp-line-highlight {
  display: inline-block;
  width: 100%;
  background-color: rgba(255, 255, 255, 0.06);
}


This CSS ensures that line highlighting is properly formatted, creating a more readable and accessible presentation for longer code blocks.

Save that and run npm run dev to spin up your dev server, then visit the single post detail page where you added the code.  You should have something like this:

Out of the box, the plugin, with a little CSS, lets you easily display code, highlighting, and the programming language in a nice, readable way.

Taking it a Step Further – WPGraphQL & WPGraphQL Content Blocks

You can stop at just installing the plugin and adding the CSS to your frontend application to get the formatting and highlighting.  If you want to implement a copy-to-clipboard feature, whereby users can click a button to copy the code within the snippet to their clipboard, follow the additional steps below.

Install and activate WPGraphQL & WPGraphQL Content Blocks

WPGraphQL is a canonical WordPress plugin that provides an extendable GraphQL schema and API for any WordPress site.

WPGraphQL Content Blocks is a WordPress plugin that extends WPGraphQL to support querying (Gutenberg) block data. Let’s install both plugins.

Go to the plugins page in your WP admin and search for WPGraphQL. You can add and activate the plugin from there.

Go to the WPGraphQL Content Blocks repo and download the latest .zip version of the plugin.

Navigate to your WP install and upload the plugin .zip to your WordPress site.

Once that is done, activate the plugin.

Create an editor block query to get Code Block Pro data

Next, let’s query for the Code Block Pro data.  Head over to the GraphQL IDE and paste in this query:


query GetPostsWithCodeBlocks {
  posts {
    nodes {
      title
      content
      editorBlocks {
        name
        ... on KevinbatdorfCodeBlockPro {
          attributes {
            language
            lineNumbers
            code
          }
          renderedHtml
        }
      }
    }
  }
}

Now press play and you should get this response:


As shown in the IDE, this query returns all posts along with the Code Block Pro data, including the attributes we are asking for (programming language, HTML-rendered code snippet, code, line numbers, copy button, etc.).

Rendering the Code Block in Next.js

Now that we have the data, we need to render the code block in our Next.js frontend. Here’s how you can do it in Next.js 14.

Navigate to the app/post/[uri]/page.jsx file and paste this code block in:

"use client";
import { useState } from "react";
import "../../globals.css";

async function getPost(uri) {
  const query = `
  query GetPostByUri($uri: ID!) {
    post(id: $uri, idType: URI) {
      title
      editorBlocks {
        name
        ... on KevinbatdorfCodeBlockPro {
          attributes {
            language
            lineNumbers
            code
            copyButton
            copyButtonString
          }
          renderedHtml
        }
      }
    }
  }
  `;

  const variables = {
    uri,
  };

  const res = await fetch(process.env.NEXT_PUBLIC_GRAPHQL_ENDPOINT, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    next: {
      revalidate: 60,
    },
    body: JSON.stringify({ query, variables }),
  });

  const responseBody = await res.json();
  return responseBody.data.post;
}

export default async function PostDetails({ params }) {
  const post = await getPost(params.uri);

  return (
    <main>
      <h1>{post.title}</h1>
      <div>
        {/* Loop through the editor blocks to render CodeBlockPro if available */}
        {post.editorBlocks.map((block, index) => {
          if (block.name === "kevinbatdorf/code-block-pro") {
            return (
              <CodeBlockDisplay
                key={index}
                attributes={block.attributes}
                renderedHtml={block.renderedHtml}
              />
            );
          }
          return null;
        })}
      </div>
    </main>
  );
}

// CodeBlockDisplay inline function to display code block
function CodeBlockDisplay({ attributes, renderedHtml }) {
  const [copied, setCopied] = useState(false);

  // Handle copy button functionality
  const handleCopy = async () => {
    try {
      if (navigator && navigator.clipboard) {
        console.log("Copying the text:", attributes.code);
        await navigator.clipboard.writeText(attributes.code);
        setCopied(true);
        setTimeout(() => setCopied(false), 3000); // Reset after 3 seconds
      } else {
        console.error("Clipboard API not available.");
      }
    } catch (err) {
      console.error("Failed to copy: ", err);
    }
  };

  return (
    <div className="code-block-container">
      {/* Render the HTML of the code block */}
      <div dangerouslySetInnerHTML={{ __html: renderedHtml }} />

      {/* Show the copy button */}
      {attributes.copyButton && (
        <button onClick={handleCopy} className="copy-button" title="Copy">
          {copied ? <CheckMarkIcon /> : <CopyIcon />}
        </button>
      )}

      {/* Show the language at the bottom */}
      {attributes.language && (
        <div className="language-label-right">
          {attributes.language}
          <span className="language-label">{attributes.language}</span>
        </div>
      )}
    </div>
  );
}

function CheckMarkIcon() {
  return (
    <svg
      xmlns="http://www.w3.org/2000/svg"
      fill="none"
      viewBox="0 0 24 24"
      strokeWidth={1.5}
      stroke="currentColor"
      className="h-6 w-6 text-gray-400"
    >
      <path
        strokeLinecap="round"
        strokeLinejoin="round"
        d="M9 12.75 11.25 15 15 9.75M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0Z"
      />
    </svg>
  );
}

function CopyIcon() {
  return (
    <svg
      xmlns="http://www.w3.org/2000/svg"
      fill="none"
      viewBox="0 0 64 64"
      strokeWidth={5}
      stroke="currentColor"
      className="h-6 w-6 text-gray-500 hover:text-gray-400"
    >
      <rect x="11.13" y="17.72" width="33.92" height="36.85" rx="2.5" />
      <path d="M19.35,14.23V13.09a3.51,3.51,0,0,1,3.33-3.66H49.54a3.51,3.51,0,0,1,3.33,3.66V42.62a3.51,3.51,0,0,1-3.33,3.66H48.39" />
    </svg>
  );
}

This is a big code block, so let’s break it down into sections. 

Starting at the top, we have our “use client” directive, since we are importing and using React’s useState hook, which needs to run on the client.  Then we import our globals.css file for the styling:

"use client";
import { useState } from "react";
import "../../globals.css";

Following that, we have our WPGraphQL query to fetch the post and code block data:

async function getPost(uri) {
  const query = `
    query GetPostByUri($uri: ID!) {
      post(id: $uri, idType: URI) {
        title
        editorBlocks {
          name
          ... on KevinbatdorfCodeBlockPro {
            attributes {
              language
              lineNumbers
              code
              copyButton
              copyButtonString
            }
            renderedHtml
          }
        }
      }
    }
  `;

  const variables = { uri };
  const res = await fetch(process.env.NEXT_PUBLIC_GRAPHQL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    next: { revalidate: 60 },
    body: JSON.stringify({ query, variables }),
  });

  const responseBody = await res.json();
  return responseBody.data.post;
}



This function defines a GraphQL query to fetch post data, specifically targeting the KevinbatdorfCodeBlockPro block that contains attributes such as language, code, and the copyButton.

The uri (Uniform Resource Identifier) is passed into the GraphQL query to fetch the correct post.

The query is then sent to the GraphQL API using fetch as a POST request with a JSON body. The response is parsed as JSON, and the post data is returned for rendering.

Next, we have our PostDetails component:

export default async function PostDetails({ params }) {
  const post = await getPost(params.uri);

  return (
    <main>
      <h1>{post.title}</h1>
      <div>
        {post.editorBlocks.map((block, index) => {
          if (block.name === "kevinbatdorf/code-block-pro") {
            return (
              <CodeBlockDisplay
                key={index}
                attributes={block.attributes}
                renderedHtml={block.renderedHtml}
              />
            );
          }
          return null;
        })}
      </div>
    </main>
  );
}

The PostDetails component fetches the post data using the getPost function, passing the post uri as a parameter.  The post title is then displayed in a simple <h1> tag.

Then the editorBlocks field is mapped over, and the function checks whether each block’s name is "kevinbatdorf/code-block-pro". If it is, it renders the CodeBlockDisplay component, passing in the block’s attributes and rendered HTML.

The next part is the CodeBlockDisplay component:

function CodeBlockDisplay({ attributes, renderedHtml }) {
  const [copied, setCopied] = useState(false);

  const handleCopy = async () => {
    try {
      if (navigator && navigator.clipboard) {
        console.log("Copying the text:", attributes.code);
        await navigator.clipboard.writeText(attributes.code);
        setCopied(true);
        setTimeout(() => setCopied(false), 3000);
      } else {
        console.error("Clipboard API not available.");
      }
    } catch (err) {
      console.error("Failed to copy: ", err);
    }
  };

  return (
    <div className="code-block-container">
      <div dangerouslySetInnerHTML={{ __html: renderedHtml }} />
      {attributes.copyButton && (
        <button onClick={handleCopy} className="copy-button" title="Copy">
          {copied ? <CheckMarkIcon /> : <CopyIcon />}
        </button>
      )}
      {attributes.language && (
        <div className="language-label-right">
          {attributes.language}
          <span className="language-label">{attributes.language}</span>
        </div>
      )}
    </div>
  );
}



The component uses useState to track whether the copy button was clicked. After the copy action is triggered, the checkmark icon is displayed for 3 seconds before the button resets to its original copy icon.

After that, the handleCopy function uses the Clipboard API to copy the code (contained in attributes.code) to the user’s clipboard. It checks if the API is available and logs an error if not.

The rendered HTML of the code block is then injected into the DOM using dangerouslySetInnerHTML, which is necessary because the content comes from WordPress in HTML format. If the block has a copyButton attribute, the copy button is conditionally displayed. Additionally, the programming language is displayed at the bottom of the block if it’s available.

The last thing we need to do is include components for rendering the SVG icons:

function CheckMarkIcon() {
  return (
    <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" strokeWidth={1.5} stroke="currentColor" className="h-6 w-6 text-gray-400">
      <path strokeLinecap="round" strokeLinejoin="round" d="M9 12.75 11.25 15 15 9.75M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0Z" />
    </svg>
  );
}

function CopyIcon() {
  return (
    <svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 64 64" strokeWidth={5} stroke="currentColor" className="h-6 w-6 text-gray-500 hover:text-gray-400">
      <rect x="11.13" y="17.72" width="33.92" height="36.85" rx="2.5" />
      <path d="M19.35,14.23V13.09a3.51,3.51,0,0,1,3.33-3.66H49.54a3.51,3.51,0,0,1,3.33,3.66V42.62a3.51,3.51,0,0,1-3.33,3.66H48.39" />
    </svg>
  );
}


The CopyIcon here is displayed by default. After the user clicks on the button to copy the code, that SVG is hidden and swapped out with the CheckMarkIcon to indicate that the code snippet was successfully copied.

Update globals.css file

The last thing we need to do before testing our code block page is to update the CSS, collectively enhancing the presentation of the code block by ensuring the elements are properly spaced, visually appealing, and interactive (with the copy button).

This setup complements the functionality provided in the JavaScript code for copying code snippets.  You can style it however you would like, but I chose to do it this way.

.code-block-container {
  position: relative;
  padding: 16px;
  background-color: #282a36; /* Dark theme for the code block */
  border-radius: 8px;
  margin-bottom: 24px;
}

.copy-button {
  position: absolute;
  top: 10px;
  right: 10px;
  background: none;
  border: none;
  cursor: pointer;
  padding: 0;
  display: flex;
  align-items: center;
  justify-content: center;
}

.copy-button svg {
  height: 24px;
  width: 24px;
  color: #ccc;
  transition: color 0.3s ease;
}

.copy-button:hover svg {
  color: #fff;
}

.language-label {
  display: block;
  font-size: 12px;
  color: #bebebe;
  text-align: left;
  padding-top: 8px;
}



We are ready to test this page.  Navigate to your WordPress admin and grab the slug of the post where you embedded the code.  When you visit that page, you should have something that looks like this:

Now, to test the copy functionality, click on the clipboard icon and then paste the code into a document to confirm that it works:

Stoked!! This works!  Now let’s discuss more options and practices to use this feature.

Other Options

Here are more options you can get stoked about when adding code syntax highlighting to your headless WordPress app.

Create A Separate Code Block Pro Component

Instead of embedding the entire code block logic directly into a single page file, it’s a good practice to create a separate component specifically for handling Code Block Pro blocks.

This approach enhances code readability, reusability, and maintainability by isolating the code block’s functionality. You can easily import this component into any page that requires the code block, such as your page.jsx file, without cluttering the page’s primary logic.  For this example, our separate component would look like this:

"use client";
import { useState } from "react";

// This component renders the Code Block Pro block with copy-to-clipboard functionality
export default function KevinBatdorfCodeBlockPro({ attributes, renderedHtml }) {
  const [copied, setCopied] = useState(false);

  // Handle the copy functionality
  const handleCopy = async () => {
    try {
      if (navigator && navigator.clipboard) {
        await navigator.clipboard.writeText(attributes.code);
        setCopied(true);
        setTimeout(() => setCopied(false), 3000); // Reset after 3 seconds
      } else {
        console.error("Clipboard API not available.");
      }
    } catch (err) {
      console.error("Failed to copy: ", err);
    }
  };

  return (
    <div className="code-block-container">
      {/* Render the HTML of the code block */}
      <div dangerouslySetInnerHTML={{ __html: renderedHtml }} />

      {/* Show the copy button */}
      {attributes.copyButton && (
        <button onClick={handleCopy} className="copy-button" title="Copy">
          {copied ? <CheckMarkIcon /> : <CopyIcon />}
        </button>
      )}

      {/* Show the language at the bottom */}
      {attributes.language && (
        <div className="language-label-right">
          {attributes.language}
          <span className="language-label">{attributes.language}</span>
        </div>
      )}
    </div>
  );
}

// CheckMarkIcon component for when the text has been copied
function CheckMarkIcon() {
  return (
    <svg
      xmlns="http://www.w3.org/2000/svg"
      fill="none"
      viewBox="0 0 24 24"
      strokeWidth={1.5}
      stroke="currentColor"
      className="h-6 w-6 text-gray-400"
    >
      <path
        strokeLinecap="round"
        strokeLinejoin="round"
        d="M9 12.75 11.25 15 15 9.75M21 12a9 9 0 1 1-18 0 9 9 0 0 1 18 0Z"
      />
    </svg>
  );
}

// CopyIcon component for the default copy button
function CopyIcon() {
  return (
    <svg
      xmlns="http://www.w3.org/2000/svg"
      fill="none"
      viewBox="0 0 64 64"
      strokeWidth={5}
      stroke="currentColor"
      className="h-6 w-6 text-gray-500 hover:text-gray-400"
    >
      <rect x="11.13" y="17.72" width="33.92" height="36.85" rx="2.5" />
      <path d="M19.35,14.23V13.09a3.51,3.51,0,0,1,3.33-3.66H49.54a3.51,3.51,0,0,1,3.33,3.66V42.62a3.51,3.51,0,0,1-3.33,3.66H48.39" />
    </svg>
  );
}


Since the code’s function and logic were already explained in the previous section, you can refer back there for the details of what it does.

Conclusion

Syntax highlighting and copy-to-clipboard functionality are valuable enhancements to the code snippets on your headless WordPress sites.

By leveraging the Code Block Pro plugin and WPGraphQL, we were able to query and render code blocks with ease. This approach improves readability and user experience, allowing visitors to easily copy code snippets directly from your posts. The combination of server-side rendering with client-side interactivity using Next.js, along with a clean, simple styling approach, ensures that you can maintain a visually appealing and functional code block display.

As always, we’re stoked to hear your feedback and see what headless projects you’re building! Hit us up in our Discord!

The post Code Syntax Highlighting in Headless WordPress appeared first on Builders.

]]>
https://wpengine.com/builders/code-syntax-highlighting-in-headless-wordpress/feed/ 0
Enhanced Runtime Logs on WP Engine’s headless platform https://wpengine.com/builders/enhanced-runtime-logs-on-wp-engines-headless-platform/ https://wpengine.com/builders/enhanced-runtime-logs-on-wp-engines-headless-platform/#respond Sat, 21 Sep 2024 18:04:46 +0000 https://wpengine.com/builders/?p=31722 Diagnosing issues and optimizing performance is critical when building headless WordPress applications. WP engine’s new Enhanced Runtime Logs are designed to give developers deeper insights and make the debugging process […]

The post Enhanced Runtime Logs on WP Engine’s headless platform appeared first on Builders.

]]>
Diagnosing issues and optimizing performance is critical when building headless WordPress applications. WP Engine’s new Enhanced Runtime Logs are designed to give developers deeper insights and make the debugging process easier.

What Are Runtime Logs?

Runtime logs capture real-time information about your application’s activity, errors, and custom log messages. They provide visibility into how your app operates at any given moment, allowing developers to track performance, monitor for errors, and understand user behavior.

How Runtime Logs Aid Debugging

Logs are essential for pinpointing issues, especially when things don’t go as expected in production. With real-time visibility into the app’s operations, you can:

  • Detect Errors: Get immediate feedback on what went wrong.
  • Monitor Performance: Keep an eye on memory usage, API calls, and other performance-related metrics.
  • Custom Logging: Add custom log messages to gain insights specific to your application’s unique workflows.
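For example, since console output from your app is captured in the runtime logs, a structured custom log entry might look like this minimal sketch (the event name and fields are illustrative):

// Illustrative structured log entry; anything written to the console
// shows up in the platform's runtime logs
function logCheckoutEvent(userId, cartTotal) {
  console.log(
    JSON.stringify({
      level: "info",
      event: "checkout_started",
      userId,
      cartTotal,
      timestamp: new Date().toISOString(),
    })
  );
}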

Benefits of WP Engine’s Enhanced Runtime Logs

WP Engine’s enhancements to runtime logs improve the developer experience significantly:

  • Visual Log Activity Chart: Color-coded entries and deployment markers offer an intuitive way to understand log data. Developers can quickly correlate logs with code changes.

  • Advanced Filtering: Quickly filter error or info logs, and use full-text search to find specific issues or patterns within the logs.
  • Time-Based Filtering: Choose predefined or custom time windows to investigate issues and track performance. You can also view logs since the last build.
  • Improved Diagnostics: These features enable faster diagnosis, better optimization, and more effective debugging in production environments.

How to Access the Logs

Accessing these logs is easy:

  1. Navigate to the Logs tab in your WP Engine headless platform account.
  2. Select the time period you want to inspect under the Runtime section.

WP Engine’s runtime logs retain data for the past 24 hours, giving you up-to-date information to work with.

Why This Matters for Developers

These enhancements are all about efficiency. When debugging headless WordPress applications, the quicker you can identify and address issues, the better. WP Engine’s runtime logs allow developers to optimize performance, reduce downtime, and deploy with confidence.

With these tools, you’ll be better equipped to maintain smooth, reliable applications. WP Engine’s documentation provides more information on how to use these features.

As always, we’re stoked to hear your feedback and see what headless projects you’re building! Hit us up in our Discord!

The post Enhanced Runtime Logs on WP Engine’s headless platform appeared first on Builders.

]]>
https://wpengine.com/builders/enhanced-runtime-logs-on-wp-engines-headless-platform/feed/ 0
WP engine’s Node.js Edge Cache Purge Library for the headless WordPress platform https://wpengine.com/builders/wp-engines-node-js-edge-cache-purge-library-for-the-headless-wordpress-platform/ https://wpengine.com/builders/wp-engines-node-js-edge-cache-purge-library-for-the-headless-wordpress-platform/#respond Tue, 03 Sep 2024 22:50:47 +0000 https://wpengine.com/builders/?p=31705 Efficient caching is essential for maintaining performance in modern web applications, but keeping cached content up-to-date can be challenging.  In this article, I will discuss how WP Engine’s Edge Cache […]

The post WP engine’s Node.js Edge Cache Purge Library for the headless WordPress platform appeared first on Builders.

]]>
Efficient caching is essential for maintaining performance in modern web applications, but keeping cached content up-to-date can be challenging.  In this article, I will discuss how WP Engine’s Edge Cache Purge Library for Node.js addresses this by allowing targeted cache purging through specific paths or tags, rather than clearing the entire cache.

If you prefer the video format of this article, please access it here:

Prerequisites

Before reading this article, you should have the following prerequisites checked off:

  • Foundational knowledge of JavaScript frameworks
  • A WP Engine headless WordPress platform account
  • Node.js

If you’re looking for a headless platform to develop on, you can get started with a WP Engine Headless WordPress sandbox site for free:

Understanding Edge Cache and Its Role in Headless WordPress

In order to understand the benefit of this feature from WP Engine, let’s first define and discuss what edge cache is.

Edge caching refers to storing cached content at edge locations, which are servers distributed globally closer to users. This reduces latency by delivering content from the nearest server rather than the origin server, improving load times and overall user experience.

How It Differs from Other Caching Layers

You might ask yourself how it differs from other caching layers. In a headless WordPress setup using a JavaScript frontend framework and WPGraphQL, multiple caching layers exist:

  • Application-Level Caching: This involves caching GraphQL queries or API responses within the application, for example via Next.js’s server-side rendering or incremental static regeneration. It focuses on reducing the need to repeatedly fetch data from the WordPress backend.
  • CDN Caching: Content Delivery Networks (CDNs) cache static assets like images, CSS, and JavaScript files at edge locations. This is similar to edge caching but focused on static resources.
  • Database Caching: WordPress can use database-level caching with object caches like Redis to speed up database queries and reduce server load.

Edge Cache vs. Other Caches

The main differences between edge cache versus other caching layers are as follows:

Edge Cache: Specifically stores whole pages or HTML responses at edge locations. It can be dynamically purged by paths or tags using tools like WP Engine’s Edge Cache Purge Library. This makes it highly efficient for frequently changing content, allowing rapid updates without waiting for other cache layers to expire.

Application and Database Caches: These are closer to the backend and primarily reduce server load by avoiding redundant data processing.

Benefits of Edge Cache in Headless WordPress

Here are the main benefits you get when using the edge cache library from WP Engine:

  • Performance: Delivering cached content from locations near users significantly reduces latency.
  • Scalability: It handles high traffic efficiently without burdening the origin server.
  • Dynamic Purging: Allows for granular control over what gets purged and updated, ensuring content remains fresh without unnecessary full cache clears.

Setup and Installation With SvelteKit

In this example, let’s use SvelteKit, a great frontend framework similar to Nuxt and Next (you can use any framework you want and it will work).

  1. First, install and pull down the SvelteKit framework with npm create:
npm create svelte@latest

2. Once you have a SvelteKit app spun up, navigate into the directory of your project and install the edge cache library:

npm install @wpengine/edge-cache 

3. Now, ensure you have the necessary environment variables configured, such as authentication credentials and your WPGraphQL endpoint/WordPress endpoint.
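As a sketch, the environment variable that the example component in the next step reads might live in a .env file like this (the endpoint URL is a placeholder):

# .env -- illustrative; VITE_-prefixed variables are exposed to SvelteKit client code
VITE_GRAPHQL_ENDPOINT=https://your-wordpress-site.com/graphql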

4. In this example, let’s purge by path. Navigate to the src/routes folder and create a folder called blog/[uri]. Once that is done, create a +page.svelte file in the [uri] folder. You can drop in this code block, or something similar for whichever framework you decide to use:

<script>
  import { onMount } from 'svelte';
  
  let posts = [];
  let loading = true;

  async function fetchGraphQL(query = {}) {
    const queryParams = new URLSearchParams({ query }).toString();
    const response = await fetch(`${import.meta.env.VITE_GRAPHQL_ENDPOINT}?${queryParams}`, {
      method: 'GET',
      headers: {
        'Accept': 'application/json',
      },
    });

    if (!response.ok) {
      throw new Error(`Network error: ${response.statusText}`);
    }
    return response.json();
  }

  const GET_POSTS_QUERY = `
    query GetPosts {
      posts(first: 5) {
        nodes {
          id
          title
          slug
          uri
          date
        }
      }
    }
  `;

  onMount(async () => {
    try {
      const { data, errors } = await fetchGraphQL(GET_POSTS_QUERY);
      if (errors) {
        console.error('Errors returned from server:', errors);
      } else {
        posts = data.posts.nodes;
      }
    } catch (error) {
      console.error('An error occurred:', error);
    } finally {
      loading = false;
    }
  });
</script>

{#if loading}
  <div class="flex justify-center items-center min-h-screen">
    <p>Loading...</p>
  </div>
{:else}
  <div class="bg-custom-dark min-h-screen text-hot-pink p-8">
    <h1 class="text-4xl mb-8 font-bold text-center">Fran's Headless WP with SvelteKit Blog Example</h1>
    <ul class="space-y-4">
      {#each posts as post}
        <li class="text-center">
          <a href={`/blog${post.uri}`} class="text-2xl hover:text-pink-600">{post.title}</a>
          {#if post.date}
            <p class="text-gray-500 text-sm">{new Date(post.date).toLocaleDateString()}</p>
          {:else}
            <p class="text-gray-500 text-sm">No date available</p>
          {/if}
        </li>
      {/each}
    </ul>
  </div>
{/if}

This code block fetches the latest posts over WPGraphQL and renders a list of links, each pointing to a post’s dynamic /blog/[uri] path.

Just to be clear, this is meant to focus on the framework agnosticism of the headless WordPress platform and this edge cache library; it is not meant to be a SvelteKit tutorial. With that said, you should be able to follow along using whatever dynamic route file you have in whatever framework you choose.

5. Now that we created the path we want to purge, let’s create the API route we will use to execute the purge logic. Navigate to src/routes and create an api/purge/+server.js folder and file. Add this code block:

// src/routes/api/purge/+server.js
import { json } from '@sveltejs/kit';
import { purgePaths } from '@wpengine/edge-cache';

export async function GET({ url }) {
    const uri = url.searchParams.get('uri'); // Extracting 'uri' parameter from the query string

    try {
        if (uri) {
            // Purge the edge cache for the specific blog path
            await purgePaths(`/blog/${uri}`);
            return json({ success: true, message: `Cache purged for /blog/${uri}` });
        }

        // No URI provided: reject the request
        return json({ success: false, message: 'URI is required' }, { status: 400 });
    } catch (error) {
        // Surface purge failures as an internal server error
        return json({ success: false, message: error.message }, { status: 500 });
    }
}

At the top of this file, we import SvelteKit’s json helper for building responses, along with the purgePaths function from the WP Engine Edge Cache library, which is used to purge cache for specific paths.

Following that, we define an async function that handles GET requests to that endpoint, and a const that extracts the uri param from the request URL’s query string.

If the uri is provided, it calls the purgePaths function with the full path, then returns a JSON success response with status 200.

If uri is not present, it returns a 400 status with an error message indicating that the URI is required.

Following that, we catch any errors during the purge process and return a 500 status with the error message, indicating an internal server error occurred during the operation.

This version allows cache purging through a GET request by passing the uri parameter directly in the query string.

You would have a URL that looks something like this when you pass the URI parameter directly into the query string:

https://your-wpengine-site.com/api/purge?uri=some-blog-post

Once you have this endpoint, you can set it up in a webhook, for example, to automate the purge. This ensures that when a user visits a blog detail page, you can selectively purge the cache for that specific path, keeping your content up-to-date efficiently.
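As a quick manual test of that automation, a small script hitting the endpoint might look like this sketch (the host and slug are placeholders):

// purge-test.mjs -- illustrative; run with `node purge-test.mjs` (Node 18+ for global fetch)
const res = await fetch(
  "https://your-wpengine-site.com/api/purge?uri=" + encodeURIComponent("some-blog-post")
);
console.log(await res.json()); // e.g. { success: true, message: "Cache purged for /blog/some-blog-post" }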

Conclusion

The WP Engine Edge Cache Purge Library allows you to manage the edge cache dynamically and keep your content up-to-date with targeted purges. This setup offers a flexible, framework-agnostic solution that fits well with any Node.js-based application.

We hope you now have a better grasp of the edge cache library. As always, we’re stoked to hear your feedback and see what you are building in headless WordPress! Hit us up in our Discord!

The post WP engine’s Node.js Edge Cache Purge Library for the headless WordPress platform appeared first on Builders.

]]>
https://wpengine.com/builders/wp-engines-node-js-edge-cache-purge-library-for-the-headless-wordpress-platform/feed/ 0
How to Customize WPGraphQL Cache Keys 💾🔑 https://wpengine.com/builders/how-to-customize-wpgraphql-cache-keys/ https://wpengine.com/builders/how-to-customize-wpgraphql-cache-keys/#respond Tue, 27 Aug 2024 16:24:10 +0000 https://wpengine.com/builders/?p=31697 Caching is important in optimizing performance for headless WordPress setups. The WPGraphQL Smart Cache plugin helps manage caching for GraphQL queries, ensuring faster response times. In this guide, we’ll walk […]

The post How to Customize WPGraphQL Cache Keys 💾🔑 appeared first on Builders.

]]>
Caching is important in optimizing performance for headless WordPress setups. The WPGraphQL Smart Cache plugin helps manage caching for GraphQL queries, ensuring faster response times. In this guide, we’ll walk you through setting up your WordPress environment, installing the necessary plugins, and customizing GraphQL cache keys to better suit your specific needs.

Prerequisites

Before reading this article, you should have the following prerequisites checked off:

If you’re looking for a headless platform to develop on, you can get started with a WP Engine Headless WordPress sandbox site for free:

Understanding Default WPGraphQL Smart Cache Behavior

WPGraphQL Smart Cache automatically tags cached responses with keys derived from the GraphQL queries. These keys are linked to specific WordPress data (e.g., posts, pages, taxonomies). When relevant data is updated, the associated cache is invalidated. 

For example, a query that retrieves posts with specific categories and tags will generate cache keys like list:post, list:category, and list:tag. If any of these categories or tags are updated, the cached documents tagged with those keys are invalidated, ensuring the data stays current.

In addition to the list:$type_name keys, individual node IDs are also included. 

These individual IDs are used to purge cache when updates or deletes happen.

The list:$type_name key is used to purge when new content of that type is published.  For example, list:post is purged when a new post is published, whereas purge( "post:1" ) is triggered when post 1 is updated or deleted.

Let’s see this in action.  Navigate to your WP admin, then open your WPGraphQL IDE. Copy and paste this query:

query GetPosts {
  posts {
    nodes {
      title
      uri
    }
  }
}

When you press play in your IDE, this will make a query to your site’s WPGraphQL endpoint. 

Then WPGraphQL will return headers that caching clients can use to tag the cached document. Next, open your dev tools.  In this case, I am using Google Chrome.  When you open the dev tools and inspect the response headers, you should see this:

Here, we see the X-GraphQL-Keys header with a list of keys. If this query were made via a GET request to a site hosted on a host that supports it, the response would be cached and “tagged” with these keys.

For this particular query, we see the following keys:

  • Hash of the Query: This is a unique identifier for the specific query made, which in this example is 4382426a7bebd62479da59a06d90ceb12e02967d342afb7518632e26b95acc6f. It ensures that the exact same query returns the same cached response unless invalidated.

  • Operation Type (graphql:Query): Indicates that the operation is a GraphQL query, as opposed to a mutation or subscription.
  • Operation Name (operation:GetPosts): Identifies the specifically named query, in this case, GetPosts, which helps in targeting this operation for caching or invalidation.
  • List Key (list:post): This key identifies that the query is fetching a list of posts. Any changes to the list of posts would trigger cache invalidation.
  • Node ID (cG9zdDox): This represents the specific node (e.g., a post) that was resolved in the query. Changes to this node will invalidate the cache for this query.
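Putting those together, the raw X-GraphQL-Keys value for this query would look roughly like this (key order may vary):

X-GraphQL-Keys: 4382426a7bebd62479da59a06d90ceb12e02967d342afb7518632e26b95acc6f graphql:Query operation:GetPosts list:post cG9zdDox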

The cached document is tagged with these keys; if a purge event for one of those tags is triggered, the document is purged (deleted) from the cache.

Understanding Cache Invalidation with WPGraphQL Smart Cache

WPGraphQL Smart Cache optimizes caching by sending the keys in the headers, but the caching client (e.g., Varnish or Litespeed) needs to use those keys to tag the cache. WPGraphQL Smart Cache itself does not tag the cached document; it provides the caching client info (the keys) to tag the cached document with. A supported host like WP Engine works with WPGraphQL Smart Cache out of the box.

Let’s discuss how invalidation works:

WPGraphQL Smart Cache listens to various events in WordPress, such as publishing, updating, or deleting content, and triggers cache invalidation (or “purge”) based on these events.

Detailed Key Breakdown:

  • Publish Events (purge('list:$type_name')): When a new post or content type is published, the cache for the entire list associated with that content type (e.g., all posts) is purged. This ensures that any queries fetching this list will be up-to-date.

  • Update Events (purge('$nodeId')): When an existing post or content type is updated, the cache for that specific node (e.g., a single post) is purged. This allows the updated content to be fetched without affecting the entire list.

  • Delete Events (purge('$nodeId')): Similarly, when a post or content type is deleted, the cache for that specific node is purged, ensuring that the deleted content is no longer served from the cache.

Why This Matters:

These targeted cache invalidations help maintain the balance between performance and data freshness. By only purging the cache when necessary and only for the relevant data, WPGraphQL Smart Cache ensures that users receive up-to-date content without unnecessary cache purges, which can negatively impact performance.

This invalidation strategy is crucial for optimizing the performance of headless WordPress setups using WPGraphQL, especially in dynamic environments where content changes frequently.

How Cache Invalidation and Cache Tags Work Together

Now that we’ve explored how cached documents are tagged and how cache invalidation works in WPGraphQL Smart Cache, let’s see how these concepts interact.

When a GraphQL query is executed, specific cache keys (tags) are associated with the cached response. These tags correspond to the data queried, such as posts, categories, or specific node IDs. The cache invalidation strategy then ensures that when relevant data changes occur in WordPress, the associated cached documents are purged based on these tags.

Example: Invalidation Scenarios for a GetPosts Query

  1. Publishing a New Post (purge('list:post')):
    • When a new post is published, the entire list of posts in the cache (tagged with list:post) is invalidated. This ensures that the new post will appear in any subsequent queries that fetch this list.
  2. Updating or Deleting a Specific Node (purge('$nodeId')):
    • If the “Hello World” post (with the ID cG9zdDox) is updated or deleted, the cache for that specific node is purged. This allows the updated or deleted content to be accurately reflected in any future queries.
  3. Manually Purging Cache (purge('graphql:Query')):
    • Clicking “Purge Cache” in GraphQL > Settings > Cache page triggers a manual cache purge for all queries. This can be useful when you want to ensure that all cached data is refreshed, regardless of specific events.
  4. Operation Name or Query Hash-Based Purge:
    • Custom purge events can be manually triggered based on the operation name (e.g., GetPosts) or the hash of the query. This level of control allows you to finely tune when and how caches are invalidated.

These strategies work together to ensure that the cache is only invalidated when necessary, providing up-to-date data without unnecessary performance overhead. For instance, when the “Hello World” post is updated, it’s reasonable to expect that the cache for the GetPosts query should be purged so that any queries return the most current data. This fine-grained control over cache invalidation ensures that your headless WordPress site remains performant while delivering fresh content.

Why Would You Need to Customize WPGraphQL Cache Keys?

In some scenarios, the default caching behavior might be too broad, leading to frequent cache invalidations.  This is especially true for more complex queries. 

For instance, if your query includes categories and tags, any update to these taxonomies will invalidate the cache, even if those changes don’t affect the specific posts you’re querying. Customizing cache keys allows you to fine-tune this behavior, ensuring that only relevant updates trigger cache invalidation, thereby improving performance.

For example, consider the following query:

{
  posts {
    nodes {
      id
      title
      tags {
        nodes {
          id
          name
        }
      }
    }
  }
  categories {
    nodes {
      id
      name
    }
  }
  tags {
    nodes {
      id
      name
    }
  }
}

This query retrieves a list of posts, along with all categories and tags. When this query is executed, the response includes the posts, categories, and tags that match the query as shown here:

The X-GraphQL-Keys header shows that the cached document is tagged with list:post, list:category, and list:tag. This tagging means that the cache will be invalidated whenever there’s a change in any of these entities—whether it’s a new post, category, or tag.

While this behavior ensures that your cache is up-to-date, it can lead to excessive cache invalidation. For instance, if a new tag is created and assigned to a post not included in this query, it will still trigger a purge('list:tag'), invalidating the cache for this query.

This means the cache could be cleared more often than you want for your specific use case, which could negatively impact performance.

Just A Note

Consider this query from the original article on this subject:

query GetPostsWithCategoriesAndTags {
  posts {
    nodes {
      id
      title
      categories {
        nodes {
          id
          name
        }
      }
      tags {
        nodes {
          id
          name
        }
      }
    }
  }
}

The WPGraphQL team changed things to only track list: types from the root.  So, if you run this query, your list of categories won’t be tracked because it is not at the root.

The Problem

The problem is that the list:category and list:tag keys could cause this document to be purged more frequently than you might like. WPGraphQL tracks precisely, but it doesn’t know your specific intention and what you care about.

For example, you might simply not care if this particular query is “fresh” when terms change. OR you might ONLY care for this query to be fresh when terms change. 

WPGraphQL doesn’t know the intent of the query, only what the query is.

Fortunately, you can customize the cache keys to better suit your specific needs, reducing unnecessary cache invalidations and improving performance.

Customizing Cache Keys

By customizing the cache keys, you can ensure that the cache is only invalidated when changes you believe are relevant to your use case occur. This involves fine-tuning the tags associated with your queries, allowing you to maintain optimal performance without sacrificing data accuracy.

Let’s do this by navigating to our WP admin and modifying the functions.php file.  Go to Appearance > Theme File Editor and select the functions.php file from your active theme.

Insert this code snippet at the bottom of your functions.php file to customize the cache keys for a specific GraphQL operation.  In this case, let’s add an operation name to the query we used in the previous section.  We are calling our operation GetPostsWithCategoriesAndTags:

add_filter( 'graphql_query_analyzer_graphql_keys', function( $graphql_keys, $return_keys ) {
    $keys_array = explode( ' ', $return_keys );
    if ( ! in_array( 'operation:GetPostsWithCategoriesAndTags', $keys_array, true ) ) {
        return $graphql_keys;
    }
    $keys_array = array_diff($keys_array, ['list:tag', 'list:category']);
    $graphql_keys['keys'] = implode( ' ', $keys_array );
    return $graphql_keys;
}, 10, 5 );

You should have something that looks like this:

This snippet customizes the cache keys for the GetPostsWithCategoriesAndTags operation. It removes the list:tag and list:category keys from the cache, preventing their updates from invalidating the cache for this specific query. The array_diff() function is used to filter out the unwanted keys, and the modified keys are then reassembled into a string and returned.

Let’s test this now in WPGraphQL IDE and the browser dev tools:

Stoked!!!  Now, as you can see in the dev tools image, publishing new categories and tags (which triggers purge( 'list:category' ) and purge( 'list:tag' )) will not purge this document.

We’re getting the benefits of cached GraphQL documents: the document is invalidated when the post is updated or deleted, but we’re letting it remain cached when categories or tags are created.

Conclusion

We hope you have a better understanding of using filters, as demonstrated above, to customize your cache tagging and invalidation strategies to better suit your project’s specific needs. By taking control of how cache keys are managed, you can optimize performance and reduce unnecessary cache invalidations.

As always, we look forward to hearing your feedback, thoughts, and projects, so hit us up in our headless Discord!

The post How to Customize WPGraphQL Cache Keys 💾🔑 appeared first on Builders.

]]>
https://wpengine.com/builders/how-to-customize-wpgraphql-cache-keys/feed/ 0
On-Demand ISR Support for Next.js/Faust.js on WP engine’s headless WordPress Platform https://wpengine.com/builders/on-demand-isr-support-for-next-js-faust-js-on-wp-engines-headless-wordpress-platform/ https://wpengine.com/builders/on-demand-isr-support-for-next-js-faust-js-on-wp-engines-headless-wordpress-platform/#respond Mon, 12 Aug 2024 13:19:12 +0000 https://wpengine.com/builders/?p=31676 WP engine’s headless WordPress hosting platform is the go-to end-to-end solution for the headless approach. In this article, I will discuss and guide you through the easy implementation of the […]

The post On-Demand ISR Support for Next.js/Faust.js on WP engine’s headless WordPress Platform appeared first on Builders.

]]>
WP Engine’s headless WordPress hosting platform is the go-to end-to-end solution for the headless approach. In this article, I will discuss and guide you through the easy implementation of the latest feature on the platform: support for On-Demand ISR with Next.js/Faust.js. By the end of this article, you will have a better understanding of On-Demand ISR and of using the headless WP platform to support it with Next.js/Faust.js.

If you prefer the video format of this article, you can access it here:

Prerequisites

Before reading this article, you should have the following prerequisites checked off:

  • Basic knowledge of Next.js and Faust.js.
  • A WP Engine headless WordPress account and environment set up.
  • Node.js and npm installed on your local machine.

If you do not have that background and need a basic understanding of Next.js and Faust.js, please visit the docs:

https://nextjs.org/docs

https://faustjs.org/tutorial/get-started-with-faust

What is On-Demand ISR?

On-Demand Incremental Static Regeneration (ISR) is a feature that allows you to manually purge the Next.js cache for specific pages, enabling more dynamic and timely updates to your site. Typically, in regular ISR, when you set a revalidate time such as 60 seconds, all visitors will see the same generated version of your site for that duration. The cache is only invalidated when someone visits the page after the revalidation period has passed.
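For contrast, a minimal sketch of regular, time-based ISR in the pages router looks like this (the endpoint and query are placeholders):

// pages/index.js -- regular time-based ISR (illustrative)
export async function getStaticProps() {
  const res = await fetch("https://your-wordpress-site.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: "{ posts { nodes { title uri } } }" }),
  });
  const { data } = await res.json();

  return {
    props: { posts: data.posts.nodes },
    revalidate: 60, // every visitor sees this build for up to 60 seconds
  };
}

export default function Home({ posts }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.uri}>{post.title}</li>
      ))}
    </ul>
  );
}

With On-Demand ISR, you skip that fixed window and purge exactly when content changes.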

With the introduction of On-Demand ISR in Next.js version 12.2.0, you can now trigger cache invalidation manually, providing greater flexibility in updating your site. This is particularly useful when:

  • Content from your headless CMS is created or updated.
  • E-commerce metadata changes, such as price, description, category, or reviews, are made.

This feature streamlines the process of reflecting changes on your site in real-time, ensuring that your content is always fresh and up-to-date.

Why Use it in headless WordPress?

In headless WordPress, the front end is decoupled from the WordPress backend, often using Next.js and Faust.js to render the website. This architecture offers several advantages, such as potential for improved performance, enhanced security, and greater flexibility in choosing front-end technologies.

However, one challenge with headless WordPress is ensuring that content changes in WordPress are reflected on the front end without sacrificing performance. This is where On-Demand ISR becomes crucial. By leveraging On-Demand ISR, you can achieve the following benefits:

Up-to-date Content: On-Demand ISR allows your site to fetch the latest content updates from WordPress manually, as needed. Unlike regular ISR, which checks for updates at specified intervals, On-Demand ISR lets you trigger cache invalidation whenever content is created or updated in WordPress. This ensures that users see the most recent content without waiting for a revalidation period.

Enhanced Performance: Since On-Demand ISR updates only the specific pages that need regeneration at the moment they are triggered, your site remains fast and responsive. Initial load times are minimized, and only the changed content is updated, reducing server load and build times.

SEO Benefits: Static pages are highly favored by search engines due to their speed and reliability. With On-Demand ISR, you maintain the SEO advantages of static generation while ensuring that your content is always fresh and relevant, as updates are reflected immediately after they are triggered.

Scalability: On-Demand ISR enables your site to handle large volumes of content efficiently. Whether you’re running a blog with frequent updates or an e-commerce site with dynamic product listings, On-Demand ISR ensures that your site scales seamlessly.

All those benefits got me stoked! Let’s get it on our Next.js and Faust.js sites!

Configuring Next.js with the headless WP Platform for On-Demand ISR

Let’s configure our Next.js application to work with On-Demand ISR.

Here is the docs link to the headless WordPress platform support for On-Demand ISR.

Atlas-Next Package

In your Next.js project, go to your terminal and install the @wpengine/atlas-next package:

npm install --save @wpengine/atlas-next

This package provides improved support on the headless WP platform. Once you have installed it, confirm it is in your project by checking the package.json file at the root of your project:

{
  "name": "atlas-on-demand-isr",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "dependencies": {
    "@wpengine/atlas-next": "^1.3.0-beta.0",
    "next": "14.2.4",
    "react": "^18",
    "react-dom": "^18"
  },
  "devDependencies": {
    "eslint": "^8",
    "eslint-config-next": "14.2.4",
    "postcss": "^8",
    "tailwindcss": "^3.4.1"
  }
}

Now that you have verified the installation, stay at the root of your project and modify your next.config.js file like so:

const { withAtlasConfig } = require("@wpengine/atlas-next");

/** @type {import('next').NextConfig} */

const nextConfig = {

 // Your existing Next.js config

};

module.exports = withAtlasConfig(nextConfig);

Faust.js Wrapper and Next.js/React.js Versions

If you are using Faust.js, please note that you need to update Next.js to at least version 13.5.1 and React to 18.3.1 for this feature to work in Faust. The npx utility command that pulls down the Faust.js boilerplate from the Faust docs comes with Next.js 12 by default, so please update these versions if you are using that package.

Following the update to your versions, all you need to do is modify your next.config.js file using the withFaust wrapper:

const { withFaust } = require("@faustwp/core")
const { withAtlasConfig } = require("@wpengine/atlas-next")

/** @type {import('next').NextConfig} */
const nextConfig = {
  // Your existing Next.js config
}

module.exports = withFaust(withAtlasConfig(nextConfig))

Next, we need to verify that it works.  Run your app in dev mode via npm run dev and you should see this output in your terminal:

Stoked! It works!

Create an API route

The first thing we need to do is create an API route.  This will allow you to pass the path to be revalidated as a parameter.

Step 1. Create the API route file: Navigate to the pages/api directory in your Next.js project and create a new file named revalidate.js.

Step 2. Add the API Route code: Open revalidate.js and add the following code:

export default async function handler(req, res) {
  // Check for a secret token to authenticate the request
  if (req.query.secret !== process.env.REVALIDATION_SECRET) {
    return res.status(401).json({ message: 'Invalid token' });
  }

  const path = req.query.path;

  // Ensure the path parameter is provided
  if (!path) {
    return res.status(400).json({ message: 'Path query parameter is required' });
  }

  try {
    // Revalidate the specified path
    await res.revalidate(path);
    return res.json({ revalidated: true });
  } catch (err) {
    return res.status(500).json({ message: 'Error revalidating', error: err.message });
  }
}
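Note that res.revalidate() is the Pages Router API for on-demand revalidation, which is why this route lives under pages/api. The App Router equivalents, revalidatePath and revalidateTag, are not supported on the platform, as covered in the Limitations section below.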

Step 3.  Configure the environment variables: Create a .env.local file in the root of your project if it does not exist already.

Step 4. Next, create a secret token. The API route above checks this token for security, validates the presence of the path parameter, and triggers revalidation of the specified path.

Once you have Node.js installed, you can use it to generate a secret token.

Open Your Terminal: Start by opening your terminal or command prompt.

Generate a Secret Token: Run the following command in your terminal:

node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"

This command uses the crypto module to generate a 32-byte random string in hexadecimal format. The output will be your secret token.

Once the token is generated, copy and paste it into your .env.local file and give it an environment variable name, e.g. `REVALIDATION_SECRET`.

It should look like this: REVALIDATION_SECRET=your-secret-key

Use the API route to configure cache revalidation

To configure cache revalidation in the headless WordPress setup, you could follow one of the approaches below:

Use a webhook plugin: You can use plugins like WP Webhooks to enable webhook functionality and trigger the API endpoint you’ve just created when relevant events occur, such as when a post is published or updated.

Once your secret key is generated, build the endpoint URL by appending two query string parameters: secret (your secret token) and path (the route to revalidate). For instance:

https://your-nextjs-site.com/api/revalidate?secret=your-secret-key&path=/your-route

This is the endpoint you can paste into the relevant field when using a plugin like WP Webhooks. The field correlates to the WordPress event that should trigger the endpoint, such as updating a post.

Just a note: if you are developing locally and want to test this, you will have to hit the endpoint manually on your dev server (which typically runs on port 3000), since a remote WordPress webhook cannot reach your local machine.
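For example, assuming your dev server is running on the default port 3000 and you kept the parameter names above, a quick manual test from a second terminal could look like this (the secret and path values are placeholders):

curl "http://localhost:3000/api/revalidate?secret=your-secret-key&path=/your-route"

A successful call responds with {"revalidated":true}.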

Set up WordPress hooks: You can also add actions in your WordPress theme or plugin to send requests to your Next.js API route. Here's an example in PHP using the wp_remote_post function; it sends a POST request to the Next.js API route whenever a post is saved in WordPress, triggering revalidation of the corresponding path:

function trigger_revalidation($post_id) {
  // res.revalidate() expects a path (e.g. /my-post), not a full URL,
  // so extract the path portion of the permalink.
  $path = wp_parse_url(get_permalink($post_id), PHP_URL_PATH);
  $url  = 'https://your-nextjs-site.com/api/revalidate?secret=your-secret-token&path=' . rawurlencode($path);
  
  $response = wp_remote_post($url);

  if (is_wp_error($response)) {
    error_log('Error triggering revalidation: ' . $response->get_error_message());
  }
}
add_action('save_post', 'trigger_revalidation');
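One refinement worth considering: save_post also fires for revisions and autosaves. A minimal guard at the top of trigger_revalidation(), using standard WordPress helpers, keeps those from triggering revalidation:

// Skip autosaves and revisions so drafts don't trigger revalidation.
if ( wp_is_post_revision( $post_id ) || wp_is_post_autosave( $post_id ) ) {
  return;
}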

Headless WordPress Platform User Portal

We now have On-Demand ISR set up with the proper configuration. The last steps are to connect our remote repository to WP Engine's headless WP platform, git push any changes, and observe On-Demand ISR working in all its cache invalidation glory.

If you have not connected your local project to a remote repository, go ahead and do so. The WP Engine headless platform supports GitHub, Bitbucket, and GitLab.

Once you have connected your remote repository and added all your necessary environment variables, go ahead and build the app. If you have done this with an existing repo, you can git push the change, which will trigger a build.

When the application has finished the build step, you will land on the main page of the WP Engine headless WP portal. Navigate over to the Logs subpage:

In the Logs subpage, click the “Show logs” button on the Runtime option:

You should see the same output here as you saw in your terminal, confirming it's working properly:

Awesome!!! It is implemented and working at runtime. Now, when you edit or add new content in your WP backend and then visit the live URL of the WP Engine headless WP site you just deployed, ISR should work on demand like so:

Limitations

At the moment, the headless WP platform supports On-Demand ISR with the following limitations:

  1. Requires @wpengine/atlas-next package: To enable On-Demand ISR on Atlas, you must use the @wpengine/atlas-next package and follow the setup steps outlined in the previous sections of this document.
  2. On-demand ISR for App Router is not supported: The App Router is a new routing system in Next.js that enhances routing capabilities with features like server components and nested layouts. However, Atlas currently supports On-Demand ISR only in the context of the traditional Pages Router. This means that methods like revalidatePath and revalidateTag, which are used for revalidation in the App Router, are not compatible with the headless WP platform’s ISR mechanism. For more details on the App Router and its data fetching methods, you can refer to the Next.js documentation here.
  3. Rewrites are not supported: Rewrites in Next.js allow you to change the destination URL of a request without altering the URL visible to the user. However, On-Demand ISR on the headless WP platform does not support rewrites. This means that if your Next.js application relies on rewrites, the On-Demand ISR feature might not function as expected. You can learn more about rewrites here.
  4. Not compatible with Next.js I18N: Since Next.js uses rewrites for internationalization, this feature is not supported on the headless WP platform due to the rewrite limitation mentioned above.
  5. Next.js >=13.5 is required: To be able to use this feature, you need to update your application to Next.js version 13.5 or higher.

Note: Redirects (which, unlike rewrites, actually change the URL in the user's browser to the specified destination) are supported.

If you want to give us feedback on how we can make things better for this feature or anything else with the platform, please visit our feedback form.

Conclusion

Implementing On-Demand Incremental Static Regeneration (ISR) with Next.js and Faust.js on WP Engine's headless WP platform is a game-changer for maintaining performance and up-to-date content in a headless WordPress setup. By following the steps outlined in this guide, you can leverage On-Demand ISR to ensure your site remains both fast and current, without the need for full rebuilds.

The integration with the platform also simplifies the deployment and management process, providing a seamless workflow from development to production. 

As always, we look forward to hearing your feedback, thoughts, and projects so hit us up in our headless Discord!

The post On-Demand ISR Support for Next.js/Faust.js on WP engine’s headless WordPress Platform appeared first on Builders.

]]>
https://wpengine.com/builders/on-demand-isr-support-for-next-js-faust-js-on-wp-engines-headless-wordpress-platform/feed/ 0
Contributing to Open Source Projects https://wpengine.com/builders/contributing-to-open-source-projects/ https://wpengine.com/builders/contributing-to-open-source-projects/#respond Wed, 24 Jul 2024 21:55:02 +0000 https://wpengine.com/builders/?p=31667 This article aims to guide developers on best practices for contributing to Faust.js and other Open Source Software (OSS) and provide actionable steps for getting started with contributions. Whether you’re […]

The post Contributing to Open Source Projects appeared first on Builders.

]]>
This article aims to guide developers on best practices for contributing to Faust.js and other Open Source Software (OSS) and provide actionable steps for getting started with contributions. Whether you’re a seasoned developer or just starting, this guide will help you navigate the OSS ecosystem and make meaningful contributions.

What Defines an OSS?

Open Source Software (OSS) is software released with a license that allows anyone to view, modify, and distribute the source code. This openness fosters a collaborative environment where developers can contribute, innovate, and improve the software. OSS is the backbone of many critical technologies and has a profound impact on the software industry by promoting transparency, security, and community-driven development.

Understanding the Open Source Ecosystem

In order to understand the OSS ecosystem, let’s define the types of contributions to it.  There are four main types of contributions:

  1. Code Contributions: Adding new features, fixing bugs, and improving software performance and security are common ways to contribute code. These contributions directly impact the project’s functionality and reliability.
  2. Documentation Improvements: Clear and comprehensive documentation is crucial for any OSS project. Contributing to user guides, API references, and tutorials helps new users and contributors understand and utilize the software effectively.
  3. Community Support: Helping other users in forums, social media, and chat channels builds a supportive and inclusive community. Providing answers, sharing knowledge, and offering guidance are valuable contributions.
  4. Testing and Bug Reporting: Identifying and reporting bugs, performing quality assurance, and testing new features ensure the software remains robust and reliable. Thorough testing and detailed bug reports help maintainers address issues efficiently.

Now that we have listed out the types of contributions, let’s discuss the licensing and legal considerations in OSS.

In OSS, the three most common types of licenses are listed below:

  • MIT License: Permissive and simple, allowing almost unrestricted reuse. It’s one of the most popular OSS licenses due to its simplicity and flexibility.
  • GPL (GNU General Public License): This license requires derivative works to be open source. It ensures that software remains free and open for future generations.
  • Apache License: This license is similar to MIT but includes an explicit grant of patent rights from contributors. It is preferred for projects that want to protect against potential patent litigation.

Understanding Contributor License Agreements (CLAs):

CLAs are legal agreements that contributors must sign, giving the project permission to use their contributions. They clarify the rights and obligations of both contributors and project maintainers, ensuring that contributions can be used freely within the project.  The Faust.js project uses such an agreement.

Headless WordPress OSS

The three main OSS projects for headless WordPress that we use at WP Engine are Faust.js, WordPress, and WPGraphQL:

Faust.js: A toolkit for Next.js for building modern, headless WordPress sites. It simplifies the development of WordPress sites by leveraging the power of React and WPGraphQL.

WPGraphQL: WPGraphQL is a free, open-source WordPress plugin that provides an extendable GraphQL schema and API for any WordPress site.

WordPress: A content management system (CMS) powering a significant portion of the web. Its strong community and extensive ecosystem of plugins and themes make it a versatile and widely-used platform.

Contributions can include adding new features, fixing bugs, or enhancing the documentation. For example, you might implement a new WPGraphQL query, optimize performance, or write a tutorial.

Within the contribution areas, the focus might include improving core functionality, creating example projects, or enhancing developer tools. The Faust.js community is welcoming and always looking for enthusiastic contributors.

Getting Started with Contributing

The first thing you need to do is identify a project that matches your interests and skill set. In this case, we are focusing on JavaScript, WordPress, and WPGraphQL, which are related to web development.

Next, look for active projects with regular updates, responsive maintainers, and welcoming communities. Check for the frequency of commits, issue resolution, and community engagement on forums and social media.

Familiarizing Yourself with the Project

Here are some steps you can take to familiarize yourself with the project you choose.

A project's documentation tells you about its architecture, purpose, and processes. Read the docs and contribution guidelines to understand how to contribute effectively; good documentation also provides insight into the project's governance.

The next thing to understand is the project codebase. You can do this by exploring the repository, reading the README.md and CONTRIBUTING.md files, and diving into the directory structure and any key components.

Once you do that, you can review open issues to identify areas where you can contribute. Look for issues tagged with "good first issue" or "Need Help," as these are great entry points for new contributors.

Best Practices for Contributing to OSS

There are multiple ways to contribute effectively with best practices in OSS.  Here are some for you to consider.

Effective Communication

Communication is a key practice in OSS contribution. You can do this by participating in chat channels, forums, mailing lists, and social media to stay informed about project updates. This will help you stay on top of ongoing changes and decisions and where the project is heading.

When you ask for help or provide feedback, use clear and respectful language. Be as specific as possible about your issue and provide context to make your point clearer and easier to understand.

You can also help others by answering questions, sharing your knowledge, and providing guidance. Your expertise can benefit other community members and foster a collaborative environment.

Quality Code

Quality code is a crucial part of overall best practice when contributing to OSS. Follow the project’s standards and guidelines to write maintainable and readable code. Consistent coding practices ensure that your contributions are compatible with the existing codebase.

Test your code to ensure software reliability. Write unit tests, integration tests, and end-to-end tests as appropriate to validate your changes.

Writing quality code includes detailed commit messages and pull request descriptions. These will help the maintainers review your changes. Clearly explain your changes, why they are necessary, and how they were implemented.

Review and Feedback Process

Understanding the review and feedback process is important for contributions to OSS. This involves familiarizing yourself with code reviews, responding to feedback constructively, and learning from the review process to improve your skills.

Familiarize Yourself with the Code Review Process: Start by examining past reviews to understand the maintainers’ criteria and expectations. This will give you insights into what maintainers look for in contributions and common areas for improvement. Observing how other contributions are reviewed can also provide valuable lessons.

Responding to Feedback: When you receive feedback on your contributions, be receptive and open-minded. Address the reviewers’ comments thoughtfully and make the necessary changes to improve your work. Constructive feedback is an opportunity for growth, so approach it with a positive attitude and a willingness to learn.

Learning from Reviews: Use the feedback from code reviews to enhance your skills and contribute more effectively. Each review is a learning opportunity that can help you write better code and understand best practices. Embrace the review process as a valuable part of your development journey, and apply the lessons learned to future contributions.

Conclusion

I hope this guide has given you a clear understanding of how to contribute to OSS projects. By following the steps and best practices outlined here, you’ll be well-prepared and excited to start contributing to projects like Faust.js, WPGraphQL, and WordPress. Your contributions will not only positively impact these communities but also help advance the software you use and care about.  As always, we look forward to hearing your feedback, thoughts, and projects so hit us up in our headless Discord!

The post Contributing to Open Source Projects appeared first on Builders.

]]>
https://wpengine.com/builders/contributing-to-open-source-projects/feed/ 0
Using Composer to Manage Plugins and Deploy to WP Engine https://wpengine.com/builders/using-composer-manage-plugins-deploy/ https://wpengine.com/builders/using-composer-manage-plugins-deploy/#respond Thu, 11 Jul 2024 18:24:08 +0000 https://wpengine.com/builders/?p=4931 Manage your WordPress dependencies with Composer and deploy to WP Engine with GitHub Actions.

The post Using Composer to Manage Plugins and Deploy to WP Engine appeared first on Builders.

]]>
We recently covered how to do branched deploys to WP Engine with GitHub Actions. Today, let’s explore managing plugin dependencies with Composer when deploying to WP Engine.

It helps if you are familiar with Composer, WPackagist, and Git version control. However, if you are not, here is an excellent resource to get you started: Managing your WordPress site with Git and Composer. Also, these instructions assume you have an existing WordPress site hosted on WP Engine that you are retroactively putting under version control.

Here is what we’ll be covering:

  • Version control (Git) – you will only need the wp-content/ directory under version control. We’ll let WP Engine maintain WordPress core automatic updates.
  • Composer—You will use Composer and WPackagist.org to manage WordPress plugins. Note that it will be your responsibility to manage plugin updates with this Composer setup, and utilizing Smart Plugin Manager or WordPress auto-updates is not covered.
    • Bonus: You will learn to install and manage ACF PRO as a Composer dependency.

Overview of project organization

Below is an example of your final GitHub repository. We will explore these in greater detail. (Also, check out the demo codebase on GitHub to follow along with the proposed structure.)

  • .github/workflows/[dev|staging|production].yml: These represent our GitHub Action configuration files for deploying our codebase to their corresponding environments. Be sure to become familiar with the WP Engine GitHub Action for Site Deployment, which relies on using rsync to sync the repository files with the targeted server.
  • rsync-config: These files configure our WP Engine GitHub Action for Site Deployment. The action relies on running an rsync between the GitHub repository and the targeted environment.
    • excludes.txt: Referenced in the .github/workflows/[dev|staging|production].yml file as the explicit rsync FLAGS. These are any items we want to exclude from being deleted each time our GitHub Action runs a rsync.
      • Hint: these files likely exist on the final WP Engine server and we do not want to remove them every time an rsync is executed in our GitHub Action.
    • includes.txt: Referenced in the .github/workflows/[dev|staging|production].yml GitHub Action as the explicit rsync FLAGS. These are any items we want to include in our GitHub Action rsync.
      • Hint: these will likely represent the un-ignored items in your project .gitignore which we’ll cover below.
  • bin/post-deploy.sh: This is how you pass any SCRIPTS to the GitHub Action to run commands on the final destination server.
    • Tip: you can run WP-CLI and Composer install commands on the final WP Engine environment.
  • plugins/: You will rely on Composer and WPackagist to install standard and stable plugins. However, we will also show you how you might handle a custom plugin.
    • plugins/demo-plugin: Represents any custom plugins you may want to version control. You could have as many of these as you like. For example, you could organize your custom functionality as plugins/foo-plugin, plugins/bar-plugin.
  • themes/: Similar to our plugins, you will likely version control a single theme for the final destination.
    • themes/demo-theme: Represents a single, custom theme you would have under version control.
  • .gitignore: It is critical to tell Git what you want to ignore from being under version control, as well as what you do not want to ignore (yes, this sounds odd, but trust us).
  • composer.json: Lists your project’s direct dependencies, as well as each dependency’s dependencies, and allows you to pin relative semantic versions for your dependencies.
  • composer.lock: Allows you to control when and how to update your dependencies.

Start by organizing a copy of your WordPress site's wp-content/ directory to mirror the organization noted above. It is recommended to create this setup on your local computer; you can access a full site backup from WP Engine's User Portal. It is okay if there are other directories within your wp-content/ directory: you will tell Git what to ignore, or not ignore, in the next step.
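Putting that all together, the resulting wp-content/ repository might look roughly like this (demo-plugin and demo-theme stand in for your own custom code):

wp-content/
├── .github/
│   └── workflows/
│       ├── dev.yml
│       ├── staging.yml
│       └── production.yml
├── rsync-config/
│   ├── excludes.txt
│   └── includes.txt
├── bin/
│   └── post-deploy.sh
├── plugins/
│   └── demo-plugin/
├── themes/
│   └── demo-theme/
├── .gitignore
├── composer.json
└── composer.lock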

Create a .gitignore

Create a .gitignore file in your WordPress installation’s wp-content/ directory and place the code below:

.gitignore (full source)

#---------------------------
# WordPress general
#---------------------------
# Ignore the wp-content/index.php
/index.php

#---------------------------
# WordPress themes
#---------------------------
# Ignore all themes
/themes/*
# DO NOT ignore this theme!
!/themes/demo-theme

#---------------------------
# WordPress plugins
#---------------------------
# Ignore all plugins
/plugins/*
# DO NOT ignore this plugin!
!/plugins/demo-plugin

#---------------------------
# WP MU plugins: these are
# managed by the platform.
#---------------------------
/mu-plugins/

#---------------------------
# WP uploads directory
#---------------------------
/uploads/

#---------------------------
# WP upgrade files
#---------------------------
/upgrade/

#---------------------------
# Composer
#---------------------------
/vendor
auth.json
.env
.env.*
!.env.example

A few key things to note from the code above:

  • /plugins/*: this ignores any directories nested within the plugins/ directory.
    • !/plugins/demo-plugin: overrides the previous /plugins/* to allow the demo-plugin to not be ignored, and instead, it is version-controlled. 
  • /themes/*: this ignores any directories nested within the themes/ directory.
    • !/themes/demo-theme: overrides the previous /themes/* to allow the demo-theme to not be ignored, and instead it is version controlled. 

You can adjust the demo-plugin or demo-theme examples to work with your setup.

Set up Composer with WPackagist integration

Composer allows you to manage your PHP dependencies. WPackagist mirrors the WordPress plugin and theme directories as a Composer repository. 

Typically, you could consider utilizing Composer for PSR-4 / PSR-0 namespacing, linting, and unit testing. We’ll only focus on demonstrating how you might pull in some standard WordPress plugins. 

Here is a composer.json that installs a few example plugins from WPackagist: Two Factor, Gutenberg, and WordPress SEO. These are here for demonstration; feel free to replace them with plugins that are standard to your workflow.

composer.json (full source)

{
   "name": "wpe/composer-demo",
   "description": "Demo with Composer and deploy to WP Engine.",
   "license": "GPL-2.0+",
   "type": "project",
   "repositories": [
       {
           "type":"composer",
           "url":"https://wpackagist.org",
           "only": [
               "wpackagist-plugin/*",
               "wpackagist-theme/*"
           ]
       }
   ],
   "require": {
       "wpackagist-plugin/two-factor": "*",
       "wpackagist-plugin/gutenberg": "*",
       "wpackagist-plugin/wordpress-seo": "*"
   },
   "extra": {
       "installer-paths": {
           "plugins/{$name}/": [
               "type:wordpress-plugin"
           ],
           "themes/{$name}/": [
               "type:wordpress-theme"
           ]
       }
   },
   "config": {
       "allow-plugins": {
           "composer/installers": true
       }
   }
}

If you’re just integrating Composer into your project the first time then you’ll likely want to now run composer install after creating a composer.json like the one above. This will generate the corresponding composer.lock file for your project with these new dependencies (and their dependencies).

If this is your first time integrating WPackagist into an existing Composer project, the key things to note from the code above are:

  • Add the WPackagist repository under the repositories entry.
  • Add any plugins or themes you want to install from WPackagist under the require key. Be sure to use the wpackagist-plugin/ or wpackagist-theme/ prefixed vendor name to tell Composer that you intend these to be installed through WPackagist.
  • Set the installer-paths for your plugins and themes under the extra key to tell Composer where to install your WPackagist dependencies.

Run composer update to install the newly required dependencies.

How to install ACF PRO with Composer

ACF has some useful information on installing ACF PRO with Composer. We’ll use the composer.json code above as our starting point. Here are the steps you’ll need to set this up, which we’ll go over in further detail:

  1. Copy your ACF PRO license key from the Licenses tab. To activate your license via your wp-config.php file, add the following line to the file, replacing [key] with your license key: define( 'ACF_PRO_LICENSE', '[key]' );
  2. Add the ACF package repository to your composer.json.
  3. Install the plugin by running composer require wpengine/advanced-custom-fields-pro.

Here is what your final composer.json file will look like:

composer.json (full source)

{
    "name": "wpe/composer-demo",
    "description": "Demo with Composer and deploy to WP Engine.",
    "license": "GPL-2.0+",
    "type": "project",
    "repositories": [
        {
            "type": "composer",
            "url": "https://connect.advancedcustomfields.com"
        },
        {
            "type":"composer",
            "url":"https://wpackagist.org",
            "only": [
                "wpackagist-plugin/*",
                "wpackagist-theme/*"
            ]
        }
    ],
    "require": {
        "wpackagist-plugin/two-factor": "*",
        "wpackagist-plugin/gutenberg": "*",
        "wpackagist-plugin/wordpress-seo": "*",
        "wpengine/advanced-custom-fields-pro": "^6.3"
    },
    "extra": {
        "installer-paths": {
            "plugins/{$name}/": [
                "type:wordpress-plugin"
            ],
            "themes/{$name}/": [
                "type:wordpress-theme"
            ]
        }
    },
    "config": {
        "allow-plugins": {
            "composer/installers": true
        }
    }
}

There are other ways to install and activate your ACF PRO license, so be sure to check out the full documentation. If you encounter any issues along the way, then send a support message.

Set up WP Engine GitHub Action for Composer integration

WP Engine’s GitHub Action for Site Deployment relies on rsync to transfer and synchronize local GitHub repository files to the final WP Engine hosting environment. This is critical to be mindful of when you initially setup your GitHub workflows.

Additionally, since we’re organizing the root of our repository within the wp-content/ directory then we want to be sure that we configure some key deployment options in the final workflow.

production.yml (full source)

# Deploy to WP Engine Production environment
# https://wpengine.com/support/environments/#About_Environments
name: Deploy to production
on:
  push:
    branches:
     - main
jobs:
  Deploy-to-WP-Engine-Production:
    runs-on: ubuntu-latest
    steps:
    - run: echo "Preparing to deploy to WP Engine production"
    - uses: actions/checkout@v3
    - name: GitHub Action Deploy to WP Engine
      uses: wpengine/github-action-wpe-site-deploy@v3
      with:
        # Deploy vars
        # https://github.com/wpengine/github-action-wpe-site-deploy?tab=readme-ov-file#environment-variables--secrets

        # The private RSA key you will save in the Github Secrets
        WPE_SSHG_KEY_PRIVATE: ${{ secrets.WPE_SSHG_KEY_PRIVATE }}
        # Destination to deploy to WPE
        # Change to your environment name
        WPE_ENV: yourEnvironmentName

        # Deploy options

        # An optional destination directory to deploy
        # to other than the WordPress root.
        REMOTE_PATH: "wp-content/"
        # Optional flags for the deployment
        FLAGS: -azvr --inplace --delete --include-from rsync-config/includes.txt --exclude=".*" --exclude-from rsync-config/excludes.txt
        # File containing custom scripts run after the rsync
        SCRIPT: wp-content/bin/post-deploy.sh

In the code above you'll want to replace some of the deployment variables, like WPE_ENV, and be sure to set up your SSH keys (both the WP Engine SSH Gateway key and your GitHub repository's private SSH key secret: WPE_SSHG_KEY_PRIVATE). Again, the helpful WP Engine step-by-step guide can help you here. The key options you will want to pay close attention to are listed below.

  • REMOTE_PATH (string): Optional path to specify a directory destination to deploy to. Defaults to the WordPress root directory on WP Engine. You want this to be wp-content/.
  • FLAGS (string): Set optional rsync flags such as --delete or --exclude-from. Caution: setting custom rsync flags replaces the default flags provided by this action, so consider also adding the -azvr flags as needed (a preserves symbolic links, timestamps, user permissions, and ownership; z is for compression; v is for verbose output; r is for recursive directory scanning).
  • SCRIPT (string): Remote bash file to execute post-deploy. This can include WP-CLI commands, for example. The path is relative to the WP root and the file executes on the remote. This file can be included in your repo, or be a persistent file that lives on your server.

Deployment options for Deploy WordPress to WP Engine GitHub Action (see full list).

You will want to pass some rather specific FLAGS and a custom post-deploy SCRIPT to get our targeted setup deploying accurately.

Configure rsync flags

You’ll be running rsync with the  --delete flag, which is destructive and we need to be careful about what we tell it to delete. Below is what you’ll want to put in your rsync-config/excludes.txt and rsync-config/includex.txt files.

excludes.txt (full source)

# Excluding these items from being deleted each rsync

plugins/*
themes/* 
mu-plugins/
uploads/
blogs.dir/
upgrade/*
backup-db/*
advanced-cache.php
wp-cache-config.php
cache/*
cache/supercache/*
index.php
mysql.sql

.env
.env.*
auth.json
vendor

includes.txt (full source)

# Including plugins/themes that we check into
# Git so that the version in GitHub is deployed

/plugins/demo-plugin
/themes/demo-theme

# ...other plugins could go here...

Create a post-deploy script

After everything is deployed, you will want to run composer install on the WP Engine environment. This lets you update your dependencies with Composer locally, commit the changes, and push them to the Git remote; once the GitHub Action rsyncs the composer.json and composer.lock changes, the script installs the updated dependencies on the final environment. This is the SCRIPT: wp-content/bin/post-deploy.sh we set in our GitHub Action's YML file (above).

post-deploy.sh (full source)

#!/bin/sh

echo "Starting post deploy script..."
echo "Switch directory to wp-content/"
cd wp-content
echo "Installing Composer dependencies..."
composer install --optimize-autoloader --no-dev --no-progress
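With that script in place, a typical dependency update becomes a purely local, version-controlled loop. Roughly (the branch name and commit message are placeholders):

composer update
git add composer.json composer.lock
git commit -m "Update plugin dependencies"
git push origin main   # triggers the production deploy workflow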

Conclusion

Utilizing Composer with WPackagist to manage your WordPress plugin dependencies can help keep teams organized and facilitate consistent workflows.

Let us know how you’re maintaining your ideal workflow—tag us on X @wpengine.

The post Using Composer to Manage Plugins and Deploy to WP Engine appeared first on Builders.

]]>
https://wpengine.com/builders/using-composer-manage-plugins-deploy/feed/ 0
Request Headers in headless WordPress with Atlas https://wpengine.com/builders/request-headers-in-headless-wordpress-with-atlas/ https://wpengine.com/builders/request-headers-in-headless-wordpress-with-atlas/#respond Wed, 03 Jul 2024 19:04:05 +0000 https://wpengine.com/builders/?p=31655 Request headers are key-value pairs sent by the client to the server in an HTTP request. They provide additional information about the request, such as the client’s browser type, preferred […]

The post Request Headers in headless WordPress with Atlas appeared first on Builders.

]]>

Request headers are key-value pairs sent by the client to the server in an HTTP request. They provide additional information about the request, such as the client’s browser type, preferred language, and other relevant data. Request headers play an important role in enabling the server to understand and process the request appropriately.

In this article, I will guide you through how Atlas, WP Engine's headless hosting platform, automatically adds request headers to your site.

Prerequisites

Before reading this article, you should have the following prerequisites checked off:

  • Basic knowledge of Next.js and Faust.js.
  • An Atlas account and environment set up.
  • Node.js and npm installed on your local machine.

Request Headers in Atlas

The Atlas headless WordPress hosting platform automatically appends additional request headers to every request made to your site.  These headers are designed to provide valuable geographical and temporal information about the request origin, enabling you to tailor content and functionality based on the user’s location and local time.

The headers automatically appended are:

  • wpe-headless-country: The ISO alpha-2 country code of the request’s origin.
  • wpe-headless-region: The region within the country from where the request is made.
  • wpe-headless-timezone: The timezone of the request origin in TZ Database format.

Documentation on Atlas request headers is here.
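To make that concrete, a request from Texas, for example, might arrive with header values shaped like this (illustrative values; the exact strings depend on the request origin):

wpe-headless-country: US
wpe-headless-region: Texas
wpe-headless-timezone: America/Chicago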

Benefits of Geolocation Headers in Atlas

Geolocation headers offer several advantages:

  • Personalization: Tailor content and experiences based on the user’s location.
  • Localization: Display content in the user’s local language or relevant to their region.
  • Analytics: Gather insights into where your users are coming from to better understand your audience.
  • Compliance: Ensure compliance with regional regulations by adapting content accordingly.

This data has several use cases. You could display localized news and weather updates or run location-based promotions. You can also build custom greetings based on the user's local time (see the sketch below) and collect data on user distribution across regions and time zones for better insight and user experience.
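As a quick sketch of that greeting idea, a server component can combine the wpe-headless-timezone header with JavaScript's built-in Intl API to compute the visitor's local hour. This is an illustrative example, not part of the Atlas docs:

import { headers } from 'next/headers';

export default function Greeting() {
  // Fall back to UTC when the header is missing (e.g. during local dev).
  const tz = headers().get('wpe-headless-timezone') || 'UTC';

  // Current hour (0-23) in the visitor's timezone.
  const hour = Number(
    new Intl.DateTimeFormat('en-US', {
      hour: 'numeric',
      hour12: false,
      timeZone: tz,
    }).format(new Date())
  );

  const greeting =
    hour < 12 ? 'Good morning' : hour < 18 ? 'Good afternoon' : 'Good evening';

  return <p>{greeting}!</p>;
}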

Now that we understand request headers and how Atlas automates them, let’s implement an example of how to display an output of the request headers on a page.

Rendering Atlas Geolocation Headers in Next.js/Faust.js pages

For this example, we are going to use Next.js. The code and file/folder structure in Faust.js's experimental App Router package work exactly the same.

Since we are using Next.js 14, which defaults to React Server Components, we can read the request headers directly in the page component on the server side.

In the root of the app directory, create a folder called local and, within that folder, a file called page.jsx. The structure should be: app/local/page.jsx. Once you have created that, copy and paste this code block into the page.jsx file:

import { headers } from 'next/headers';

export default async function LocalPage() {
  const country = headers().get('wpe-headless-country') || 'No country data';
  const region = headers().get('wpe-headless-region') || 'No region data';
  const timezone = headers().get('wpe-headless-timezone') || 'No timezone data';

  return (
    <div>
      <h1>Geolocation Data</h1>
      <p>Country: {country}</p>
      <p>Region: {region}</p>
      <p>Timezone: {timezone}</p>
    </div>
  );
}


Let’s break down what the code is doing here.  At the top of the file, we import the headers function from the next/headers module. This will allow us to access the request headers in this server component.

Next, we define the component, calling it LocalPage. This is our async function that renders the page.

Following that, we retrieve the values of all the headers (country, region, timezone). If the headers are not present, the message defaults to stating that there is no data for that header.

Lastly, we render the component by returning the JSX. The elements on the page display the geolocation data.

Once you have added this page file, make sure you push the changes up to your remote repository before deployment. GitHub, Bitbucket, and GitLab are the supported providers.

Deploying to Atlas

The last step is to connect your remote repo to Atlas, git push any changes, and visit the /local route we made to see the geolocation data. If you have not connected to Atlas yet, please do so. Here are the docs for reference.

Once you are deployed and have a live URL, you should have a page called /local that looks like this:

Conclusion

By leveraging request headers provided by WP Engine’s Atlas, you can enhance your headless WordPress applications with geolocation data, offer personalized and localized content, gather valuable analytics, and ensure compliance with regional regulations.

As always, we look forward to hearing your feedback, thoughts, and projects so hit us up in our headless Discord!

The post Request Headers in headless WordPress with Atlas appeared first on Builders.

]]>
https://wpengine.com/builders/request-headers-in-headless-wordpress-with-atlas/feed/ 0
Beta Testing WordPress with Local Blueprints https://wpengine.com/builders/beta-testing-wordpress-local-blueprints/ https://wpengine.com/builders/beta-testing-wordpress-local-blueprints/#respond Wed, 26 Jun 2024 14:53:08 +0000 https://wpengine.com/builders/?p=4810 Start testing the latest WordPress beta quickly with Local Blueprints.

The post Beta Testing WordPress with Local Blueprints appeared first on Builders.

]]>
A new release is on the horizon! 🌅

As with each release, there are countless hours of testing to ensure the overall experience is bug-free and optimized. WordPress 6.6 is targeted to be released on July 16, 2024. Right now, you can help by testing.

Local is the go-to tool for creating a WordPress sandbox and effortlessly developing WordPress sites locally, and with it you can get testing in seconds. Here are a few options to get you started.

Check out this video or continue reading to learn about all the ways to get testing.

Option 1: WP-CLI + Local

If you already have an existing site in Local then you can just upgrade it to the latest beta with WP-CLI. Here is how:

  1. Right-click on your site and choose ‘Open site shell’, which will open your system’s terminal application and automatically launch WP-CLI.
  2. Once WP-CLI is launched then just run this command: wp core update --version=6.6-RC1
Open WP-CLI in Local

Option 2: Local + WordPress Beta Tester plugin

If you already have a Local site then you can install the WordPress Beta Tester plugin and get the latest beta.

  1. Visit your WordPress dashboard's Plugins screen and choose 'Add New'
  2. Search and install the WordPress Beta Tester plugin
  3. Once activated, visit the Tools > Beta Testing area and update the settings to get the latest beta (select the “Bleeding edge” channel and “Beta/RC Only” stream).
WordPress Beta Tester plugin settings screen

Option 3: Local Blueprint FTW!

Save a few clicks and just import our custom Local Blueprint, which comes with everything installed and ready for testing: the WordPress Beta Tester plugin with WP 6.6 RC 1 already installed and the default Twenty Twenty-Four theme activated.

Just click the download button below and drag and drop the downloaded WordPress-Beta-Tester_6.6-RC-1.zip into your Local app to spin up a new site and get testing!

Drag and drop Blueprint into Local

(Note: the super secret WordPress username and password for the Blueprint are both admin.)

Reach out to @WPEBuilders and let us know how you’re using Local and what you’re testing in the latest WordPress 6.6 beta release.

The post Beta Testing WordPress with Local Blueprints appeared first on Builders.

]]>
https://wpengine.com/builders/beta-testing-wordpress-local-blueprints/feed/ 0