Introducing TanStack Start Middleware
https://frontendmasters.com/blog/introducing-tanstack-start-middleware/
Fri, 24 Oct 2025

TanStack Start is one of the most exciting full-stack web development frameworks I’ve seen. I’ve written about it before.

In essence, TanStack Start takes TanStack Router, a superb, strongly-typed client-side JavaScript framework, and adds server-side support. This serves two purposes: it gives you a place to execute server-side code, like database access; and it enables server-side rendering, or SSR.

This post is all about one particular, especially powerful feature of TanStack Start: Middleware.

The elevator pitch for Middleware is that it lets you run code in conjunction with your server-side operations: on both the client and the server, both before and after your underlying server-side action, and even pass data between the client and server.

This post will be a gentle introduction to Middleware. We’ll build some very rudimentary observability for a toy app. Then, in a future post, we’ll really see what Middleware can do when we use it to achieve single-flight mutations.

Why SSR?

SSR will usually improve LCP (Largest Contentful Paint) render performance compared to a client-rendered SPA. With SPAs, the server usually sends down an empty shell of a page. The browser then parses the script files, and fetches your application components. Those components then render and, usually, request some data. Only then can you render actual content for your user.

These round trips are neither free nor cheap; SSR allows you to send the initial content down directly, via the initial request, which the user can see immediately, without needing those extra round trips. See the post above for some deeper details; this post is all about Middleware.

Prelude: Server Functions

Any full-stack web application will need a place to execute code on the server. It could be for a database query, to update data, or to validate a user against your authentication solution. Server functions are the main mechanism TanStack Start provides for this purpose, and are documented here. The quick introduction is that you can write code like this:

import { createServerFn } from "@tanstack/react-start";

export const getServerTime = createServerFn().handler(async () => {
  await new Promise(resolve => setTimeout(resolve, 1000));
  return new Date().toISOString();
});

Then you can call that function from anywhere (client or server), to get a value computed on the server. If you call it from the server, it will just execute the code. If you call that function from the browser, TanStack will handle making a network request to an internal URL containing that server function.
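Conceptually, that client/server split behaves something like the following hand-rolled sketch. This is illustrative only: TanStack generates this plumbing for you, and the internal URL below is invented.

```typescript
// Illustrative only: a stand-in for the plumbing that createServerFn
// generates. The internal URL below is made up.
const isServer = typeof (globalThis as any).window === "undefined";

async function runOnServer(): Promise<string> {
  // The real handler body: this only ever executes on the server.
  await new Promise(resolve => setTimeout(resolve, 10));
  return new Date().toISOString();
}

async function getServerTime(): Promise<string> {
  if (isServer) {
    // On the server, just execute the handler directly.
    return runOnServer();
  }
  // In the browser, the stub makes a network request to an internal
  // URL instead (again: this URL is invented for illustration).
  const res = await fetch("/_serverFn/getServerTime", { method: "POST" });
  return res.json();
}
```

Either way, callers just write `await getServerTime()` and never think about which environment they're in.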

Getting Started

All of my prior posts on TanStack Start and Router used the same contrived Jira clone, and this one will be no different. The repo is here, but the underlying code is the same. If you want to follow along, run npm i and then npm run dev, and open the relevant portion of the app at http://localhost:3000/app/epics?page=1.

The epics section of this app uses server functions for all data and updates. We have an overview showing:

  • A count of all tasks associated with each individual epic (for those that contain tasks).
  • A total count of all epics in the system.
  • A pageable list of individual epics which the user can view and edit.
[Image: A web application displaying an epics overview with a list of projects, their completion status, and navigation buttons.]
This is a contrived example. It’s just to give us a few different data sources along with mutations.

Our Middleware Use Case

We’ll explore middleware by building a rudimentary observability system for our Jira-like app.

What is observability? If you think of basic logging as a caterpillar, observability would be the beautiful butterfly it matures into. Observability is about setting up systems that allow you to holistically observe how your application is behaving. High-level actions are assigned a globally unique trace id, and the pieces of work that action performs are logged against that same trace id. Then your observability system will allow you to intelligently introspect that data, and discover where your problems or weaknesses are.
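As a toy illustration of that idea (all names here are invented, not part of any real observability SDK), a trace is just a shared id that lets you reassemble one high-level action from its logged sub-steps:

```typescript
// Toy illustration: grouping log entries under a shared trace id.
import { randomUUID } from "node:crypto";

type LogEntry = { traceId: string; step: string; at: number };

const entries: LogEntry[] = [];

function log(traceId: string, step: string): void {
  entries.push({ traceId, step, at: Date.now() });
}

// One high-level action ("save epic") fans out into sub-steps,
// all logged against the same trace id.
const traceId = randomUUID();
log(traceId, "validate input");
log(traceId, "update row");
log(traceId, "re-read row");

// Later, an observability UI can reassemble the whole action:
const steps = entries.filter(e => e.traceId === traceId).map(e => e.step);
console.log(steps); // ["validate input", "update row", "re-read row"]
```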

I’m no observability expert, so if you’d like to learn more, Charity Majors co-authored a superb book on this very topic. She’s the co-founder of Honeycomb, a mature observability platform.

We won’t be building a mature observability platform here; we’ll be putting together some rudimentary logging with trace IDs. What we’ll be building is not suitable for a production software system, but it will be a great way to explore TanStack Start’s Middleware.

Our First Server Function

This is a post about Middleware, which is applied to server functions. Let’s take a very quick look at a server function:

export const getEpicsList = createServerFn({ method: "GET" })
  .inputValidator((page: number) => page)
  .handler(async ({ data }) => {
    const epics = await db
      .select()
      .from(epicsTable)
      .offset((data - 1) * 4)
      .limit(4);
    return epics;
  });

This is a simple server function to query our epics. We configure it to use the GET http verb. We specify and potentially validate our input, and then the handler function runs our actual code, which is just a basic query against our SQLite database. This particular code uses Drizzle for the data access, but you can of course use whatever you want.

Server functions by definition always run on the server, so you can do things like connect to a database, access secrets, etc.

Our First Middleware

Let’s add some empty middleware so we can see what it looks like.

import { createMiddleware } from "@tanstack/react-start";

export const middlewareDemo = createMiddleware({ type: "function" })
  .client(async ({ next, context }) => {
    console.log("client before");

    const result = await next({
      sendContext: {
        hello: "world",
      },
    });

    console.log("client after", result.context);

    return result;
  })
  .server(async ({ next, context }) => {
    console.log("server before", context);

    await new Promise(resolve => setTimeout(resolve, 1000));

    const result = await next({
      sendContext: {
        value: 12,
      },
    });

    console.log("server after", context);

    return result;
  });

Let’s step through it.

export const middlewareDemo = createMiddleware({ type: "function" });

This declares the middleware. type: "function" means that this middleware is intended to run against server “functions” – there’s also “request” middleware, which can run against either server functions, or server routes (server routes are what other frameworks sometimes call “API routes”). But “function” middleware has some additional powers, which is why we’re using them here.

.client(async ({ next, context }) => {

This allows us to run code on the client. Note the arguments: next is how we tell TanStack to proceed with the rest of the middlewares in our chain, as well as the underlying server function this middleware is attached to. And context holds the mutable “context” of the middleware chain.

console.log("client before");

const result = await next({
  sendContext: {
    hello: "world",
  },
});

console.log("client after", result.context);

We do some logging, then tell TanStack to run the underlying server function (as well as any other middlewares we have in the chain), and then, after everything has run, we log again.

Note the sendContext we pass into the call to next:

sendContext: {
  hello: "world",
},

This allows us to pass data from the client, up to the server. Now this hello property will be in the context object on the server.

And of course don’t forget to return the actual result.

return result;

You can return next() directly, but separating the call to next from the return statement allows you to do additional work after the call chain is finished: modify context, perform logging, etc.

And now we essentially restart the same process on the server.

  .server(async ({ next, context }) => {
    console.log("server before", context);

    await new Promise(resolve => setTimeout(resolve, 1000));

    const result = await next({
      sendContext: {
        value: 12
      }
    });

    console.log("server after", context);

    return result;
  });
We do some logging and inject an artificial delay of one second to simulate work. Then, as before, we call next() which triggers the underlying server function (as well as any other Middleware in the chain), and then return the result.

Note again the sendContext.

const result = await next({
  sendContext: {
    value: 12,
  },
});

This allows us to send data from the server back down to the client.
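To build intuition for how next and context compose in both directions, here is a stripped-down, framework-free sketch of the pattern. This is not TanStack's implementation, just the general shape: each middleware can merge data into the context it passes "up" the chain, and whatever the final handler returns flows back "down."

```typescript
// A framework-free sketch of the next()/context pattern.
type Ctx = Record<string, unknown>;
type Middleware = (ctx: Ctx, next: (sent: Ctx) => Promise<Ctx>) => Promise<Ctx>;

async function runChain(
  middlewares: Middleware[],
  handler: (ctx: Ctx) => Promise<Ctx>
): Promise<Ctx> {
  let i = -1;
  const dispatch = async (ctx: Ctx): Promise<Ctx> => {
    i += 1;
    if (i === middlewares.length) return handler(ctx);
    const mw = middlewares[i];
    // Each middleware adds to the context via the object it passes to next.
    return mw(ctx, sent => dispatch({ ...ctx, ...sent }));
  };
  return dispatch({});
}

// Usage: the middleware sends { hello: "world" } "up" the chain,
// and the handler sends { value: 12 } back "down".
const result = await runChain(
  [
    async (ctx, next) => {
      const res = await next({ hello: "world" }); // like sendContext
      return res; // res now includes what the handler added
    },
  ],
  async ctx => ({ ...ctx, value: 12 })
);

console.log(result); // { hello: "world", value: 12 }
```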

Let’s Run It

We’ll add this middleware to the server function we just saw.

export const getEpicsList = createServerFn({ method: "GET" })
  .inputValidator((page: number) => page)
  .middleware([middlewareDemo])
  .handler(async ({ data }) => {
    const epics = await db
      .select()
      .from(epicsTable)
      .offset((data - 1) * 4)
      .limit(4);
    return epics;
  });

When we run it, this is what the browser’s console shows:

client before
client after {value: 12}

There’s a one-second delay before the final client log, since that’s the time execution spent on the server, due to the artificial delay we added.

Nothing too shocking. The client logs, then sends execution to the server, and then logs again with whatever context came back from the server. Note we use result.context to get what the server sent back, rather than the context argument that was passed to the client callback. This makes sense: that context was created before the server was ever invoked with the next() call, so there’s no way for it to magically, mutably update based on whatever happens to get returned from the server. So we just read result.context to get what the server sent back.

The Server

Now let’s see what the server console shows.

server before { hello: 'world' }
server after { hello: 'world' }

Nothing too interesting here, either. As we can see, the server’s context argument does in fact contain what was sent to it from the client.

When Client Middleware Runs on the Server

Don’t forget, TanStack Start will server render your initial path by default. So what happens when a server function executes as part of that process, with Middleware? How can the client middleware possibly run when there’s no client in existence yet, only a request currently being executed on the server?

During SSR, client Middleware will run on the server. This makes sense: whatever functionality you’re building will still work, but the client portion of it will run on the server. So be sure not to use any browser-only APIs like localStorage.

Let’s see this in action during the SSR run. The prior logs I showed were the result of browsing to the page via client-side navigation. Now I’ll just refresh that page and show the server logs.

client before
server before { hello: 'world' }
server after { hello: 'world' }
client after { value: 12 }

This is the same as before, but now the server and client logs appear together, since this code all runs during the server render phase. The server function is called from the server while it generates the HTML to send down for the initial render. And as before, there’s a one second delay while the server is working.

Building Real Middleware

Let’s build some actual logging Middleware with an observability flair. If you want to look at real observability solutions, please check out the book I mentioned above, or a real observability platform like Honeycomb. But our focus will be on TanStack Middleware, not a robust observability solution.

The Client

Let’s start our Middleware with our client section. It will record the local time that this Middleware began. This will allow us to measure the total end-to-end time that our action took, including server latency.

export const loggingMiddleware = (name: string) =>
  createMiddleware({ type: "function" })
    .client(async ({ next, context }) => {
      console.log("middleware for", name, "client", context);

      const clientStart = new Date().toISOString();

Now let’s call the rest of our Middleware chain and our server function.

const result = await next({
  sendContext: {
    clientStart,
  },
});

Once the await next completes, we know that everything has finished on the server, and we’re back on the client. Let’s grab the date and time that everything finished, as well as a logging id that was sent back from the server. With that in hand, we’ll call setClientEnd, which is just a simple server function to update the relevant row in our log table with the clientEnd time.

const clientEnd = new Date().toISOString();
const loggingId = result.context.loggingId;

await setClientEnd({ data: { id: loggingId, clientEnd } });

return result;

For completeness, that server function looks like this:

export const setClientEnd = createServerFn({ method: "POST" })
  .inputValidator((payload: { id: string; clientEnd: string }) => payload)
  .handler(async ({ data }) => {
    await db.update(actionLog).set({ clientEnd: data.clientEnd }).where(eq(actionLog.id, data.id));
  });

The Server

Let’s look at our server handler.

    .server(async ({ next, context }) => {
      const traceId = crypto.randomUUID();

      const start = +new Date();

      const result = await next({
        sendContext: {
          loggingId: "" as string
        }
      });

We start by creating a traceId. This is the single identifier that represents the entirety of the action the user is performing; it’s not a log id. In fact, for real observability systems, there will be many, many log entries against a single traceId, representing all the sub-steps involved in that action.

For now, there’ll just be a single log entry, but in a bit we’ll have some fun and go a little further.

Once we have the traceId, we note the start time, and then we call await next to finish our work on the server. We add a loggingId to the context we’ll be sending back down to the client. It’ll use this to update the log entry with the clientEnd time, so we can see the total end-to-end network time.

const end = +new Date();

const id = await addLog({
  data: { actionName: name, clientStart: context.clientStart, traceId: traceId, duration: end - start },
});
result.sendContext.loggingId = id;

return result;

Next we get the end time after the work has completed. We add a log entry, and then we update the context we’re sending back down to the client (the sendContext object) with the correct loggingId. Recall that the client callback used this to add the clientEnd time.

And then we return the result, which then finishes the processing on the server, and allows control to return to the client.

The addLog function is pretty boring; it just inserts a row in our log table with Drizzle.

export const addLog = createServerFn({ method: "POST" })
  .inputValidator((payload: AddLogPayload) => payload)
  .handler(async ({ data }) => {
    const { actionName, clientStart, traceId, duration } = data;

    const id = crypto.randomUUID();
    await db.insert(actionLog).values({
      id,
      traceId,
      clientStart,
      clientEnd: "",
      actionName,
      actionDuration: duration,
    });

    return id as string;
  });

The value of clientEnd is empty, initially, since the client callback will fill that in.

Let’s run our Middleware. We’ll add it to a serverFn that updates an epic.

export const updateEpic = createServerFn({ method: "POST" })
  .middleware([loggingMiddleware("update epic")])
  .inputValidator((obj: { id: number; name: string }) => obj)
  .handler(async ({ data }) => {
    await new Promise(resolve => setTimeout(resolve, 1000 * Math.random()));

    await db.update(epicsTable)
      .set({ name: data.name })
      .where(eq(epicsTable.id, data.id));
  });

And when this executes, we can see our logs!

[Image: A database logging table displaying columns for id, trace_id, client_start, client_end, action_name, and action_duration, with several entries showing recorded data.]

The Problem

There’s one small problem: we have a TypeScript error.

Here’s the entire middleware, with the TypeScript error pasted as a comment above the offending line:

import { createMiddleware } from "@tanstack/react-start";
import { addLog, setClientEnd } from "./logging";

export const loggingMiddleware = (name: string) =>
  createMiddleware({ type: "function" })
    .client(async ({ next, context }) => {
      console.log("middleware for", name, "client", context);

      const clientStart = new Date().toISOString();

      const result = await next({
        sendContext: {
          clientStart,
        },
      });

      const clientEnd = new Date().toISOString();
      // ERROR: 'result.context' is possibly 'undefined'
      const loggingId = result.context.loggingId;

      await setClientEnd({ data: { id: loggingId, clientEnd } });

      return result;
    })
    .server(async ({ next, context }) => {
      const traceId = crypto.randomUUID();

      const start = +new Date();

      const result = await next({
        sendContext: {
          loggingId: "" as string,
        },
      });

      const end = +new Date();

      const id = await addLog({
        data: { actionName: name, clientStart: context.clientStart, traceId: traceId, duration: end - start },
      });
      result.sendContext.loggingId = id;

      return result;
    });

Why does TypeScript dislike this line?

We read this value on the client, after we call await next. Our server does in fact add a loggingId to its sendContext object. And at runtime it’s there: the value is logged.

The problem is a technical one. Our server callback can see the things the client callback added to sendContext. But the client callback is not able to “look ahead” and see what the server callback added to its sendContext object. The solution is to split the Middleware up.

Here’s a version 2 of the same Middleware. I’ve added it to a new loggingMiddlewareV2.ts module.

I’ll post the entirety of it below, but it’s the same code as before, except all the stuff in the .client handler after the call to await next has been moved to a second Middleware. This new, second Middleware, which only contains the second half of the .client callback, then takes the other Middleware as its own Middleware input.

Here’s the code:

import { createMiddleware } from "@tanstack/react-start";
import { addLog, setClientEnd } from "./logging";

const loggingMiddlewarePre = (name: string) =>
  createMiddleware({ type: "function" })
    .client(async ({ next, context }) => {
      console.log("middleware for", name, "client", context);

      const clientStart = new Date().toISOString();

      const result = await next({
        sendContext: {
          clientStart,
        },
      });

      return result;
    })
    .server(async ({ next, context }) => {
      const traceId = crypto.randomUUID();

      const start = +new Date();

      const result = await next({
        sendContext: {
          loggingId: "" as string,
        },
      });

      const end = +new Date();

      const id = await addLog({
        data: { actionName: name, clientStart: context.clientStart, traceId: traceId, duration: end - start },
      });
      result.sendContext.loggingId = id;

      return result;
    });

export const loggingMiddleware = (name: string) =>
  createMiddleware({ type: "function" })
    .middleware([loggingMiddlewarePre(name)])
    .client(async ({ next }) => {
      const result = await next();

      const clientEnd = new Date().toISOString();
      const loggingId = result.context.loggingId;

      await setClientEnd({ data: { id: loggingId, clientEnd } });

      return result;
    });

We export that second Middleware. It takes the other one as its own middleware, which runs everything as before. But now when the .client callback calls await next, it knows what’s in the resulting context object: that other Middleware is an input to this one, so TypeScript can see its typings.

Going Deeper

We could end the post here. I don’t have anything new to show with respect to TanStack Start. But let’s make our observability system just a little bit more realistic, and in the process see a cool Node feature that’s not talked about enough, and that also has the distinction of being the worst-named API in software engineering history: AsyncLocalStorage.

You’d be forgiven for thinking AsyncLocalStorage was some kind of async version of your browser’s localStorage. But no: it’s a way to set and maintain context for the entirety of an async operation in Node.

When Server Functions Call Server Functions

Let’s imagine our updateEpic server function also wants to read the epic it just updated. It does this by calling the getEpic serverFn. So far so good, but if our getEpic serverFn also has logging Middleware configured, we really would want it to use the traceId we already created, rather than create its own.

Think about React context: it allows you to put arbitrary state onto an object that can be read by any component in the tree. Well, Node’s asyncLocalStorage allows this same kind of thing, except instead of being read anywhere inside of a component tree, the state we set can be read anywhere within the current async operation. This is exactly what we need.

Note that TanStack Start did have a getContext / setContext pair of APIs in an earlier beta version, which maintained state for the current, entire request, but they were removed. If they wind up being re-added at some point (possibly with a different name), you can just use them.

Let’s start by importing AsyncLocalStorage and creating a typed instance.

import { AsyncLocalStorage } from "node:async_hooks";

const asyncLocalStorage = new AsyncLocalStorage<{ traceId: string }>();

Now let’s create a function for reading the traceId that some middleware higher up in our call stack might have added:

function getExistingTraceId() {
  return asyncLocalStorage.getStore()?.traceId;
}

All that’s left is to read the traceId that was possibly set already, and if none was set, create one. And then, crucially, use asyncLocalStorage to set our traceId for any other Middleware that will be called during our operation.

    .server(async ({ next, context }) => {
      const priorTraceId = getExistingTraceId();
      const traceId = priorTraceId ?? crypto.randomUUID();

      const start = +new Date();

      const result = await asyncLocalStorage.run({ traceId }, async () => {
        return await next({
          sendContext: {
            loggingId: "" as string
          }
        });
      });

The magic line is this:

const result = await asyncLocalStorage.run({ traceId }, async () => {
  return await next({
    sendContext: {
      loggingId: "" as string,
    },
  });
});

Our call to next is wrapped in asyncLocalStorage.run, which means virtually anything called in there can see the traceId we set. There are a few exceptions at the margins, for things like worker threads. But any normal async operations that happen inside the run callback will see the traceId we set.
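Outside of any middleware, the reuse-or-create pattern looks like this self-contained sketch (the function names are mine, not TanStack's):

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

const als = new AsyncLocalStorage<{ traceId: string }>();

// Reuse the trace id set by a caller, or mint a new one.
function currentOrNewTraceId(): string {
  return als.getStore()?.traceId ?? randomUUID();
}

async function innerOperation(): Promise<string> {
  // Runs "deep" in the call stack, across an await boundary.
  await new Promise(resolve => setTimeout(resolve, 5));
  return currentOrNewTraceId();
}

async function outerOperation(): Promise<{ outer: string; inner: string }> {
  const traceId = currentOrNewTraceId(); // nothing set yet: a new id
  // Everything awaited inside run() sees the same store.
  const inner = await als.run({ traceId }, () => innerOperation());
  return { outer: traceId, inner };
}

const { outer, inner } = await outerOperation();
console.log(outer === inner); // true: the inner call reused the trace id
```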

The rest of the Middleware is the same, and I’ve saved it in a loggingMiddlewareV3 module. Let’s take it for a spin. First, we’ll add it to our getEpic serverFn.

export const getEpic = createServerFn({ method: "GET" })
  .middleware([loggingMiddlewareV3("get epic")])
  .inputValidator((id: string | number) => Number(id))
  .handler(async ({ data }) => {
    const epic = await db.select().from(epicsTable).where(eq(epicsTable.id, data));
    return epic[0];
  });

Now let’s add it to updateEpic, and update it to also call our getEpic server function.

export const updateEpic = createServerFn({ method: "POST" })
  .middleware([loggingMiddlewareV3("update epic")])
  .inputValidator((obj: { id: number; name: string }) => obj)
  .handler(async ({ data }) => {
    await new Promise(resolve => setTimeout(resolve, 1000 * Math.random()));
    await db.update(epicsTable).set({ name: data.name }).where(eq(epicsTable.id, data.id));

    const updatedEpic = await getEpic({ data: data.id });
    return updatedEpic;
  });

Our server function now updates our epic, and then calls the other serverFn to read the newly updated epic.

Let’s clear our logging table, then give it a run. I’ll edit and save an individual epic. Opening the log table now shows this:

[Image: A screenshot of a database table displaying log entries with columns for id, trace_id, client_start, client_end, action_name, and action_duration.]

Note there are three log entries. In order to edit the epic, the UI first reads it; that’s the first entry. Then the update happens, and then comes the second read, from the updateEpic serverFn. Crucially, notice how the last two rows, the update and the last read, share the same traceId!

Our “observability” system is pretty basic right now. The clientStart and clientEnd probably don’t make much sense for these secondary actions that are all fired off from the server, since there’s not really any end-to-end latency. A real observability system would likely have separate, isolated rows just for client-to-server latency measures. But combining everything together made it easier to put something simple together, and showing off TanStack Start Middleware was the goal, not creating a real observability system.

Besides, we’ve now seen all the pieces you’d need if you wanted to actually build this into something more realistic: TanStack’s Middleware gives you everything you need to do anything you can imagine.

Parting Thoughts

We’ve barely scratched the surface of Middleware. Stay tuned for a future post where we’ll push middleware to its limit and achieve single-flight mutations.

Introducing TanStack Start
https://frontendmasters.com/blog/introducing-tanstack-start/
Wed, 18 Dec 2024

The best way to think about TanStack Start is that it’s a thin server layer atop the TanStack Router we already know and love; that means we don’t lose a single thing from TanStack Router. Not only that, but the nature of this server layer allows it to side-step the pain points other web meta-frameworks suffer from.

This is a post I’ve been looking forward to writing for a long time; it’s also a difficult one to write.

The goal (and challenge) will be to show why a server layer on top of a JavaScript router is valuable, and why TanStack Start’s implementation is unique compared to the alternatives (in a good way). From there, showing how TanStack Start actually works will be relatively straightforward. Let’s go!

Please keep in mind that, while this post discusses a lot of generic web performance issues, TanStack Start is still a React-specific meta-framework. It’s not a framework-agnostic tool like Astro.

Why Server Rendering?

Client-rendered web applications, often called “Single Page Applications” or “SPAs,” have been popular for a long time. With this type of app, the server sends down a mostly empty HTML page, possibly with some sort of splash image, loading spinner, or maybe some navigation components. It also includes, very importantly, script tags that load your framework of choice (React, Vue, Svelte, etc.) and a bundle of your application code.

These apps were always fun to build, and in spite of the hate they often get, they (usually) worked just fine (any kind of software can be bad). Admittedly, they suffer a big disadvantage: initial render performance. Remember, the initial render of the page was just an empty shell of your app. This displayed while your script files loaded and executed, and once those scripts ran, your application code would most likely need to request data before your actual app could display. Under the covers, your app is doing something like this:

The initial render of the page, from the web server, renders only an empty shell of your application. Then some scripts are requested, and then parsed and executed. When those application scripts run, you (likely) send some other requests for data. Once that is done, your page displays.

To put it more succinctly, with client-rendered web apps, when the user first loads your app, they’ll just get a loading spinner. Maybe your company’s logo above it, if they’re lucky.

This is perhaps an overstatement. Users may not even notice the delay caused by these scripts loading (which are likely cached), or hydration, which is probably fast. Depending on the speed of their network, and the type of application, this stuff might not matter much.

Maybe.

But if our tools now make it easy to do better, why not do better?

Server Side Rendering

With SSR, the picture looks more like this:

The server sends down the complete, finished page that the user can see immediately. We do still need to load our scripts and hydrate, so our page can be interactive. But that’s usually fast, and the user will still have content to see while that happens.

Our hypothetical user now looks like this, since the server is responding with a full page the user can see.

Streaming

We made one implicit assumption above: that our data was fast. If our data was slow to load, our server would be slow to respond. It’s bad for the user to be stuck looking at a loading spinner, but it’s even worse for the user to be stuck looking at a blank screen while the server churns.

As a solution for this, we can use something called “streaming,” or more precisely “out-of-order streaming.” The user still requests all the data, as before, but we tell our server: “don’t wait for the slow data; render everything else now, and send the slow data to the browser when it’s ready.”

All modern meta-frameworks support this, and our picture now looks like this:

To put a finer point on it, the server does still initiate the request for our slow data immediately, on the server during our initial navigation. It just doesn’t block the initial render, and instead pushes down the data when ready. We’ll look at streaming with Start later in this post.
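A toy simulation of that flow can make it concrete. This is not a real HTTP response; the markup is invented and the client-side swap-in script is elided, but the shape is the same: the shell flushes immediately, and the slow data arrives as a later chunk on the same response.

```typescript
// Toy simulation of out-of-order streaming: the shell flushes
// immediately; the slow data is pushed as a later chunk.
const chunks: string[] = [];

function flush(chunk: string): void {
  chunks.push(chunk); // stand-in for writing to the HTTP response stream
}

async function renderWithStreaming(slowData: Promise<string>): Promise<void> {
  // 1. Send the shell and all fast content right away.
  flush("<html><body><h1>Epics</h1><div id='slow'>loading…</div>");
  // 2. When the slow data resolves, push it down; an inlined script
  //    (elided here) swaps it into the placeholder.
  const data = await slowData;
  flush(`<template>${data}</template></body></html>`);
}

const slow = new Promise<string>(resolve =>
  setTimeout(() => resolve("42 epics"), 20)
);
await renderWithStreaming(slow);

console.log(chunks.length); // 2: shell first, slow data second
```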

Why did we ever do client-rendering?

I’m not here to tear down client-rendered apps. They were, and frankly still are an incredible way to ship deeply interactive user experiences with JavaScript frameworks like React and Vue. The fact of the matter is, server rendering a web app built with React was tricky to get right. You not only needed to server render and send down the HTML for the page the user requested, but also send down the data for that page, and hydrate everything just right on the client.

It’s hard to get right. But here’s the thing: getting this right is one of the primary purposes of this new generation of meta-frameworks. Next, Nuxt, Remix, SvelteKit, and SolidStart are some of the more famous examples of these meta-frameworks. And now TanStack Start.

Why is TanStack Start different?

Why do we need a new meta-framework? There are many possible answers to that question, but I’ll give mine. Existing meta-frameworks suffer from some variation on the same issue. They’ll provide some mechanism to load data on the server. This mechanism is often called a “loader,” or in the case of Next, it’s just RSCs (React Server Components). In Next’s (older) pages directory, it’s the getServerSideProps function. The specifics don’t matter. What matters is that, for each route, whether on the initial load of the page or on client-side navigation via links, some server-side code will run, send down the data, and then render the new page.

Need to bone up on React in general? Brian Holt’s Complete Intro to React and Intermediate React will get you there.

An Impedance Mismatch is Born

Notice the two worlds that exist: the server, where data loading code will always run, and the client. It’s the difference and separation between these worlds that can cause issues.

For example, frameworks always provide some mechanism to mutate data, and then re-fetch to show updated state. Imagine your loader for a page loads some tasks, user settings, and announcements. When the user edits a task, and revalidates, these frameworks will almost always re-run the entire loader, and superfluously re-load the user’s announcements and user settings, in addition to tasks, even though tasks are the only thing that changed.
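To make the over-fetching concrete, here is a toy sketch. The loader and the three fetch helpers are hypothetical, not a real framework API; the counters stand in for actual network requests.

```typescript
// Hypothetical fetchers; each counter stands in for a real network request.
const fetchCount = { tasks: 0, settings: 0, announcements: 0 };

async function loadTasks() { fetchCount.tasks++; return [{ id: 1, title: "Ship it" }]; }
async function loadSettings() { fetchCount.settings++; return { theme: "dark" }; }
async function loadAnnouncements() { fetchCount.announcements++; return []; }

// The route's single loader fans out to all three sources.
async function pageLoader() {
  const [tasks, settings, announcements] = await Promise.all([
    loadTasks(), loadSettings(), loadAnnouncements(),
  ]);
  return { tasks, settings, announcements };
}

async function demo() {
  await pageLoader(); // initial page load
  await pageLoader(); // revalidation after editing one task re-runs everything
  return fetchCount;  // every counter is 2, not just tasks
}
```

Only tasks changed, but a whole-loader revalidation re-fetches the settings and announcements too.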

Are there fixes? Of course. Many frameworks will allow you to create extra loaders to spread the data loading across, and revalidate only some of them. Other frameworks encourage you to cache your data. These solutions all work, but come with their own tradeoffs. And remember, they’re solutions to a problem that meta-frameworks created, by having server-side loading code for every path in your app.

Or what about a loader that loads 5 different pieces of data? After the page loads, the user starts browsing around, occasionally coming back to that first page. These frameworks will usually cache that previously-displayed page, for a time. Or not. But it’s all or none. When the loader re-runs, all 5 pieces of data will re-fire, even if 4 of them can be cached safely.

You might think using a component-level data loading solution like react-query can help. react-query is great, but it doesn’t eliminate these problems. If you have two different pages that each have 5 data sources, of which 4 are shared in common, browsing from the first page to the second will cause the second page to re-request all 5 pieces of data, even though 4 of them are already present in client-side state from the first page. The server is unaware of what happens to exist on the client. The server is not keeping track of what state you have in your browser; in fact the “server” might just be a Lambda function that spins up, satisfies your request, and then dies off.
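A toy model of that mismatch shows the redundant requests. The cache and load functions here are illustrative only, not react-query’s or any framework’s actual API.

```typescript
// A client-side cache the server knows nothing about.
const clientCache = new Map<string, unknown>();
let serverHits = 0;

async function serverLoad(key: string) {
  serverHits++; // the server always does the work; it can't see clientCache
  return `data for ${key}`;
}

async function cacheAwareLoad(key: string) {
  if (clientCache.has(key)) return clientCache.get(key); // no network trip
  const data = await serverLoad(key);
  clientCache.set(key, data);
  return data;
}

const pageA = ["q1", "q2", "q3", "q4", "q5"];
const pageB = ["q1", "q2", "q3", "q4", "q6"]; // shares 4 of 5 sources with pageA

async function demo() {
  // Server-side loaders re-fetch everything for each page: 10 requests.
  serverHits = 0;
  for (const key of [...pageA, ...pageB]) await serverLoad(key);
  const naive = serverHits;

  // A client-cache-aware loader only fetches what's missing: 6 requests.
  serverHits = 0;
  clientCache.clear();
  for (const key of [...pageA, ...pageB]) await cacheAwareLoad(key);
  return { naive, cacheAware: serverHits };
}
```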

In the picture, we can see a loader from the server sending down data for queryB, which we already have in our TanStack cache.

Where to, from here?

The root problem is that these meta-frameworks inevitably have server-only code running on each path, integrating with long-running client-side state. This leads to conflicts and inefficiencies which need to be managed. There are ways of handling these things, which I touched on above. But it’s not a completely clean fit.

How much does it matter?

Let’s be clear right away: if this situation is killing performance of your site, you have bigger problems. If these extra calls are putting undue strain on your services, you have bigger problems.

That said, one of the first rules of distributed systems is to never trust your network. The more of these calls we’re firing off, the better the chances that some of them might randomly be slow for some reason beyond our control. Or fail.

We typically tolerate requesting more than we need in these scenarios because it’s hard to avoid with our current tooling. But I’m here to show you some new, better tooling that side-steps these issues altogether.

Isomorphic Loaders

In TanStack, we do have loaders. These are defined by TanStack Router. I wrote a three-part series on Router here. If you haven’t read that, and aren’t familiar with Router, give it a quick look.

Start takes what we already have with Router, and adds server handling to it. On the initial load, your loader will run on the server, load your data, and send it down. On all subsequent client-side navigations, your loader will run on the client, like it already does. That means all subsequent invocations of your loader will run on the client, and have access to any client-side state, cache, etc. If you like react-query, you’ll be happy to know that’s integrated too. Your react-query client can run on the server, to load, and send data down on the initial page load. On subsequent navigations, these loaders will run on the client, which means your react-query queryClient will have full access to the usual client-side cache react-query always uses. That means it will know what does, and does not need to be loaded.

It’s honestly such a refreshing, simple, and, most importantly, effective pattern that it’s hard not to be annoyed that none of the other frameworks thought of it first. Admittedly, SvelteKit does have universal loaders which are isomorphic in the same way, but without a component-level query library like react-query integrated with the server.

TanStack Start

Enough setup, let’s look at some code. TanStack Start is still in beta, so some of the setup is still a bit manual, for now.

The repo for this post is here.

If you’d like to set something up yourself, check out the getting started guide. If you’d like to use react-query, be sure to add the library for that. You can see an example here. Depending on when you read this, there might be a CLI to do all of this for you.

This post will continue to use the same code I used in my prior posts on TanStack Router. I set up a new Start project, copied over all the route code, and tweaked a few import paths since the default Start project has a slightly different folder structure. I also removed all of the artificial delays, unless otherwise noted. I want our data to be fast by default, and slow in a few places where we’ll use streaming to manage the slowness.

We’re not building anything new, here. We’re taking existing code, and moving the data loading up to the server in order to get it requested sooner, and improve our page load times. This means everything we already know and love about TanStack Router is still 100% valid.

Start does not replace Router; Start improves Router.

Loading Data

All of the routes and loaders we set up with Router are still valid. Start sits on top of Router and adds server processing. Our loaders will execute on the server for the first load of the page, and then on the client as the user browses. But there’s a small problem. While the server environment these loaders will execute in does indeed have a fetch function, there are differences between client-side fetch, and server-side fetch—for example, cookies, and fetching to relative paths.

To solve this, Start lets you define a server function. Server functions can be called from the client, or from the server; but the server function itself always executes on the server. You can define a server function in the same file as your route, or in a separate file; if you do the former, TanStack will do the work of ensuring that server-only code does not ever exist in your client bundle.

Let’s define a server function to load our tasks, and then call it from the tasks loader.

import { getCookie } from "vinxi/http";
import { createServerFn } from "@tanstack/start";
import { Task } from "../../types";

export const getTasksList = createServerFn({ method: "GET" }).handler(async () => {
  const result = getCookie("user");

  return fetch(`http://localhost:3000/api/tasks`, { method: "GET", headers: { Cookie: "user=" + result } })
    .then(resp => resp.json())
    .then(res => res as Task[]);
});

We have access to a getCookie utility from the vinxi library on which Start is built. Server functions actually provide a lot more functionality than this simple example shows. Be sure to check out the docs to learn more.

If you’re curious about this fetch call:

fetch(`http://localhost:3000/api/tasks`, { method: "GET", headers: { Cookie: "user=" + result } });

That’s how I’m loading data for this project, on the server. I have a separate project running a set of Express endpoints querying a simple SQLite database. You can fetch your data however you need from within these server functions, be it via an ORM like Drizzle, an external service endpoint like I have here, or you could connect right to a database and query what you need. But that latter option should probably be discouraged for production applications.

Now we can call our server function from our loader.

loader: async ({ context }) => {
  const now = +new Date();
  console.log(`/tasks/index path loader. Loading tasks at + ${now - context.timestarted}ms since start`);
  const tasks = await getTasksList();
  return { tasks };
},

That’s all there is to it. It’s almost anti-climactic. The page loads, as it did in the last post. Except now it server renders. You can shut JavaScript off, and the page will still load and display (and hyperlinks will still work).

Streaming

Let’s make the individual task loading purposefully slow (we’ll just keep the delay that was already in there), so we can see how to stream it in. Here’s our server function to load a single task.

export const getTask = createServerFn({ method: "GET" })
  .validator((id: string) => id)
  .handler(async ({ data }) => {
    return fetch(`http://localhost:3000/api/tasks/${data}`, { method: "GET" })
      .then(resp => resp.json())
      .then(res => res as Task);
  });

Note the validator function, which is how we strongly type our server function (and validate the inputs). But otherwise it’s more of the same.

Now let’s call it in our loader, and see about enabling streaming.

Here’s our loader:

loader: async ({ params, context }) => {
  const { taskId } = params;

  const now = +new Date();
  console.log(`/tasks/${taskId} path loader. Loading at + ${now - context.timestarted}ms since start`);
  const task = getTask({ data: taskId });

  return { task };
},

Did you catch it? We called getTask without awaiting it. That means task is a promise, which Start and Router allow us to return from our loader (you could name it taskPromise if you like that specificity in naming).

But how do we consume this promise, show loading state, and await the real value? There are two ways. TanStack Router defines an Await component for this. But if you’re using React 19, you can use the new use pseudo-hook.

import { use } from "react";

function TaskView() {
  const { task: taskPromise } = Route.useLoaderData();
  const { isFetching } = Route.useMatch();

  const task = use(taskPromise);

  return (
    <div>
      <Link to="/app/tasks">Back to tasks list</Link>
      <div className="flex flex-col gap-2">
        <div>
          Task {task.id} {isFetching ? "Loading ..." : null}
        </div>
        <h1>{task.title}</h1>
        <Link 
          params={{ taskId: task.id }}
          to="/app/tasks/$taskId/edit"
        >
          Edit
        </Link>
        <div />
      </div>
    </div>
  );
}

The use hook will cause the component to suspend, and show the nearest Suspense boundary in the tree. Fortunately, the pendingComponent you set up in Router also doubles as a Suspense boundary. TanStack is impressively well integrated with modern React features.

Now when we load an individual task’s page, we’ll first see the overview data, which loaded quickly and server rendered, above the Suspense boundary for the task data we’re streaming.

When the task comes in, the promise will resolve, the server will push the data down, and our use call will provide data for our component.

React Query

As before, let’s integrate react-query. And, as before, there’s not much to do. Since we added the @tanstack/react-router-with-query package when we got started, our queryClient will be available on the server, and will sync up with the queryClient on the client, and put data (or in-flight streamed promises) into cache.

Let’s start with our main epics page. Our loader looked like this before:

async loader({ context, deps }) {
  const queryClient = context.queryClient;

  queryClient.ensureQueryData(
    epicsQueryOptions(context.timestarted, deps.page)
  );
  queryClient.ensureQueryData(
    epicsCountQueryOptions(context.timestarted)
  );
}

That would kick off the requests on the server, but let the page render, and then suspend in the component that called useSuspenseQuery—what we’ve been calling streaming.

Let’s change it to actually load our data in our loader, and server render the page instead. The change couldn’t be simpler.

async loader({ context, deps }) {
  const queryClient = context.queryClient;

  await Promise.allSettled([
    queryClient.ensureQueryData(
      epicsQueryOptions(context.timestarted, deps.page)
    ),
    queryClient.ensureQueryData(
      epicsCountQueryOptions(context.timestarted)
    ),
  ]);
},

Note we’re awaiting a Promise.allSettled call here so the queries can run together. Make sure you don’t await each individual call sequentially, as that would create a waterfall. And don’t use Promise.all, as that will quit immediately if any of the promises error out.
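If the difference between Promise.all and Promise.allSettled is fuzzy, this standalone sketch shows it (the error message and delay are made up for illustration):

```typescript
// Promise.all rejects as soon as any input rejects; allSettled always
// waits for every input and reports each outcome.
async function demo() {
  const failing = () => Promise.reject(new Error("count query failed"));
  const succeeding = () =>
    new Promise<string>((resolve) => setTimeout(() => resolve("epics list"), 20));

  // Short-circuits on the first rejection; the slow success is abandoned.
  const all = await Promise.all([failing(), succeeding()])
    .catch((e: Error) => `rejected early: ${e.message}`);

  // Waits for both, reporting one rejection and one fulfillment.
  const settled = await Promise.allSettled([failing(), succeeding()]);
  return { all, statuses: settled.map((r) => r.status) };
}
// demo() resolves to:
// { all: "rejected early: count query failed", statuses: ["rejected", "fulfilled"] }
```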

Streaming with react-query

As I implied above, to stream data with react-query, do the exact same thing, but don’t await the promise. Let’s do that on the page for viewing an individual epic.

loader: ({ context, params }) => {
  const { queryClient, timestarted } = context;

  queryClient.ensureQueryData(
    epicQueryOptions(timestarted, params.epicId)
  );
},

Now if this page is loaded initially, the query for this data will start on the server and stream to the client. If the data are pending, our suspense boundary will show, triggered automatically by react-query’s useSuspenseQuery hook.

If the user browses to this page from a different page, the loader will instead run on the client, but still fetch those same data from the same server function, and trigger the same suspense boundary.

Parting Thoughts

I hope this post was useful to you. It wasn’t a deep dive into TanStack Start — the docs are a better venue for that. Instead, I hope I was able to show why server rendering can offer almost any web app a performance boost, and why TanStack Start is a superb tool for doing so. Not only does it simplify a great deal of things by running loaders isomorphically, but it even integrates wonderfully with react-query.

The react-query integration is especially exciting to me. It delivers component-level data fetching while still allowing for server fetching, and streaming—all without sacrificing one bit of convenience.

Loading Data with TanStack Router: react-query
https://frontendmasters.com/blog/tanstack-router-data-loading-2/
Thu, 21 Nov 2024 18:11:14 +0000

TanStack Query, commonly referred to as react-query, is an incredibly popular tool for managing client-side querying. You could create an entire course on react-query (and people have), but here we’re going to keep it brief so you can quickly get going.


Essentially, react-query allows us to write code like this:

const { data, isLoading } = useQuery({
  queryKey: ["task", taskId],
  queryFn: async () => {
    return fetchJson("/api/tasks/" + taskId);
  },
  staleTime: 1000 * 60 * 2,
  gcTime: 1000 * 60 * 5,
});

The queryKey does what it sounds like: it lets you identify any particular key for a query. As the key changes, react-query is smart enough to re-run the query, which is contained in the queryFn property. As these queries come in, TanStack tracks them in a client-side cache, along with properties like staleTime and gcTime, which mean the same thing as they do in TanStack Router. These tools are built by the same people, after all.

There’s also a useSuspenseQuery hook which is the same idea, except instead of giving you an isLoading value, it relies on Suspense, and lets you handle loading state via Suspense boundaries.

This barely scratches the surface of Query. If you’ve never used it before, be sure to check out the docs.

We’ll move on and cover the setup and integration with Router, but we’ll stay high level to keep this post a manageable length.

Setup

We need to wrap our entire app with a QueryClientProvider which injects a queryClient (and cache) into our application tree. Putting it around the RouterProvider we already have is as good a place as any.

const queryClient = new QueryClient();

const Main: FC = () => {
  return (
    <>
      <QueryClientProvider client={queryClient}>
        <RouterProvider router={router} context={{ queryClient }} />
      </QueryClientProvider>
      <TanStackRouterDevtools router={router} />
    </>
  );
};

Recall from before that we also passed our queryClient to our Router’s context like this:

const router = createRouter({ 
  routeTree, 
  context: { queryClient }
});

And:

type MyRouterContext = {
  queryClient: QueryClient;
};

export const Route = createRootRouteWithContext<MyRouterContext>()({
  component: Root,
});

This allows us access to the queryClient inside of our loader functions via the Router’s context. If you’re wondering why we need loaders at all, now that we’re using react-query, stay tuned.

Querying

We used Router’s built-in caching capabilities for our tasks. For epics, let’s use react-query. Moreover, let’s use the useSuspenseQuery hook, since managing loading state via Suspense boundaries is extremely ergonomic. Better still, Suspense boundaries are exactly how Router’s pendingComponent works. So you can use useSuspenseQuery, along with the same pendingComponent we looked at before!

Let’s add another (contrived) summary query in our epics layout (route) component.

export const Route = createFileRoute("/app/epics")({
  component: EpicLayout,
  pendingComponent: () => <div>Loading epics route ...</div>,
});

function EpicLayout() {
  const context = Route.useRouteContext();
  const { data } = useSuspenseQuery(epicsSummaryQueryOptions(context.timestarted));

  return (
    <div>
      <h2>Epics overview</h2>
      <div>
        {data.epicsOverview.map(epic => (
          <Fragment key={epic.name}>
            <div>{epic.name}</div>
            <div>{epic.count}</div>
          </Fragment>
        ))}
      </div>

      <div>
        <Outlet />
      </div>
    </div>
  );
}

To keep the code somewhat organized (and other reasons we’ll get to) I stuck the query options into a separate place.

export const epicsSummaryQueryOptions = (timestarted: number) => ({
  queryKey: ["epics", "summary"],
  queryFn: async () => {
    const timeDifference = +new Date() - timestarted;
    console.log("Running api/epics/overview query at", timeDifference);
    const epicsOverview = await fetchJson<EpicOverview[]>("api/epics/overview");
    return { epicsOverview };
  },
  staleTime: 1000 * 60 * 5,
  gcTime: 1000 * 60 * 5,
});

A query key, a query function, and some cache settings. I’m passing in the timestarted value from context, so we can see when these queries fire. This will help us detect waterfalls.

Let’s look at the root epics page (with a few details removed for space).

type SearchParams = {
  page: number;
};

export const Route = createFileRoute("/app/epics/")({
  validateSearch(search: Record<string, unknown>): SearchParams {
    return {
      page: parseInt(search.page as string, 10) || 1,
    };
  },
  loaderDeps: ({ search }) => {
    return { page: search.page };
  },
  component: Index,
  pendingComponent: () => <div>Loading epics ...</div>,
  pendingMinMs: 3000,
  pendingMs: 10,
});

function Index() {
  const context = Route.useRouteContext();
  const { page } = Route.useSearch();

  const { data: epicsData } = useSuspenseQuery(epicsQueryOptions(context.timestarted, page));
  const { data: epicsCount } = useSuspenseQuery(epicsCountQueryOptions(context.timestarted));

  return (
    <div className="p-3">
      <h3>Epics page!</h3>
      <h3>There are {epicsCount.count} epics</h3>
      <div>
        {epicsData.map((e, idx) => (
          <Fragment key={idx}>
            <div>{e.name}</div>
          </Fragment>
        ))}
        <div className="flex gap-3">
          <Link to="/app/epics" search={{ page: page - 1 }} disabled={page === 1}>
            Prev
          </Link>
          <Link to="/app/epics" search={{ page: page + 1 }} disabled={!epicsData.length}>
            Next
          </Link>
        </div>
      </div>
    </div>
  );
}

Two queries on this page: one to get the list of (paged) epics, another to get the total count of all the epics. Let’s run it.

It’s as silly as before, but it does show the three pieces of data we’ve fetched: the overview data we fetched in the epics layout; and then the count of epics, and the list of epics we loaded in the epics page beneath that.

What’s more, when we run this, we first see the pending component for our root route. That resolves quickly, and shows the main navigation, along with the pending component for our epics route. That resolves, showing the epics overview data, and then revealing the pending component for our epics page, which eventually resolves and shows the list and count of our epics.

Our component-level data fetching is working, and integrating, via Suspense, with the same Router pending components we already had. Very cool!

Let’s take a peek at our console, though, and look at all the various logging we’ve been doing to track when these fetches happen.

The results are… awful. Component-level data fetching with Suspense feels really good, but if you’re not careful, these waterfalls are extremely easy to create. The problem is, when a component suspends while waiting for data, it prevents its children from rendering. This is precisely what’s happening here. The route is suspending, which prevents the child component (the page, along with any other nested route components underneath it) from rendering, which in turn prevents those components’ fetches from starting.
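The cost of such a waterfall is easy to see with plain timers (the 50ms delays are illustrative stand-ins for real queries):

```typescript
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function demo() {
  // Waterfall: the page's query can't start until the route's query resolves.
  let start = Date.now();
  await delay(50); // parent route's query (the component suspends here)
  await delay(50); // child page's query, blocked behind the parent
  const waterfall = Date.now() - start;

  // If both queries are instead started up front, they overlap.
  start = Date.now();
  await Promise.all([delay(50), delay(50)]);
  const parallel = Date.now() - start;

  return { waterfall, parallel }; // roughly { waterfall: 100, parallel: 50 }
}
```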

There are two potential solutions here: we could dump Suspense and instead use the useQuery hook, which does not suspend. That would require us to manually track multiple isLoading states (one for each useQuery hook), and coordinate loading UX to go with that. For the epics page, we’d need to track both the count loading state and the epics list loading state, and not show our UI until both have returned. And so on, for every other page.

The other solution is to start pre-fetching these queries sooner.

We’ll go with option 2.

Prefetching

Remember previously we saw that loader functions all run in parallel. This is the perfect opportunity to start these queries off ahead of time, before the components even render. TanStack Query gives us an API to do just that.

To prefetch with Query, we take the queryClient object we saw before, call queryClient.prefetchQuery, and pass in the exact same query options. When the component loads and executes useSuspenseQuery, Query will be smart enough to see that the query is already in flight, and just latch onto that same request. That’s also a big reason why we put those query options into the epicsSummaryQueryOptions helper function: to make it easier to reuse in the loader, to prefetch.

Here’s the loader we’ll add to the epics route:

loader({ context }) {
  const queryClient = context.queryClient;
  queryClient.prefetchQuery(epicsSummaryQueryOptions(context.timestarted));
},

The loader receives the route tree’s context, from which it grabs the queryClient. From there, we call prefetchQuery and pass in the same options.

Let’s move on to the Epics page. To review, this is the relevant code from our Epics page:

function Index() {
  const context = Route.useRouteContext();
  const { page } = Route.useSearch();

  const { data: epicsData } = useSuspenseQuery(epicsQueryOptions(context.timestarted, page));
  const { data: epicsCount } = useSuspenseQuery(epicsCountQueryOptions(context.timestarted));
  
  // ..

We grab the current page from the URL, and the timestarted value from context. Now let’s do the same thing we just did, and repeat this code in the loader, to prefetch.

async loader({ context, deps }) {
  const queryClient = context.queryClient;

  queryClient.prefetchQuery(epicsQueryOptions(context.timestarted, deps.page));
  queryClient.prefetchQuery(epicsCountQueryOptions(context.timestarted));
},

Now when we check the console, we see something a lot nicer.

Fetching state

What happens when we page up? The page value will change in the URL, Router will send a new page value down into our loader, and our component. Then, our useSuspenseQuery will execute with new query values, and suspend again. That means our existing list of epics will disappear, replaced by the “Loading epics” pending component. That would be a terrible UX.

Fortunately, React offers us a nice solution, with the useDeferredValue hook. The docs are here. This allows us to “defer” a state change. If a state change causes our deferred value on the page to suspend, React will keep the existing UI in place, and the deferred value will simply hold the old value. Let’s see it in action.

function Index() {
  const { page } = Route.useSearch();
  const context = Route.useRouteContext();

  const deferredPage = useDeferredValue(page);
  const loading = page !== deferredPage;

  const { data: epicsData } = useSuspenseQuery(
    epicsQueryOptions(context.timestarted, deferredPage)
  );
  const { data: epicsCount } = useSuspenseQuery(
    epicsCountQueryOptions(context.timestarted)
  );
 
  // ...

We wrap the changing page value in useDeferredValue, and just like that, our page does not suspend when the new query is in flight. And to detect that a new query is running, we compare the real, correct page value with the deferredPage value. If they’re different, we know new data are loading, and we can display a loading spinner (or in this case, put an opacity overlay on the epics list).

Queries are re-used!

When using react-query for data management, we can now re-use the same query across different routes. Both the view epic and edit epic pages need to fetch info on the epic the user is about to view, or edit. Now we can define those options in one place, like we had before.

export const epicQueryOptions = (timestarted: number, id: string) => ({
  queryKey: ["epic", id],
  queryFn: async () => {
    const timeDifference = +new Date() - timestarted;

    console.log(`Loading api/epic/${id} data at`, timeDifference);
    const epic = await fetchJson<Epic>(`api/epics/${id}`);
    return epic;
  },
  staleTime: 1000 * 60 * 5,
  gcTime: 1000 * 60 * 5,
});

We can use them in both routes, and have them be cached in between (assuming we set the caching values to allow that). You can try it in the demo app: view an epic, go back to the list, then edit the same epic (or vice versa). Only the first of those pages you visit should cause the fetch to happen in your network tab.

Updating with react-query

Just like with tasks, epics have a page where we can edit an individual epic. Let’s see what the saving logic looks like with react-query.

Let’s quickly review the query keys for the epics queries we’ve seen so far. For an individual epic, it was:

export const epicQueryOptions = (timestarted: number, id: string) => ({
  queryKey: ["epic", id],

For the epics list, it was this:

export const epicsQueryOptions = (timestarted: number, page: number) => ({
  queryKey: ["epics", "list", page],

And the count:

export const epicsCountQueryOptions = (timestarted: number) => ({
  queryKey: ["epics", "count"],

Finally, the epics overview:

export const epicsSummaryQueryOptions = (timestarted: number) => ({
  queryKey: ["epics", "summary"],

Notice the pattern: epics followed by various things for the queries that affected multiple epics, and for an individual epic, we did ['epic', ${epicId}]. With that in mind, let’s see just how easy it is to invalidate these queries after a mutation:

const save = async () => {
  setSaving(true);
  await postToApi("api/epic/update", {
    id: epic.id,
    name: newName.current!.value,
  });

  queryClient.removeQueries({ queryKey: ["epics"] });
  queryClient.removeQueries({ queryKey: ["epic", epicId] });

  navigate({ to: "/app/epics", search: { page: 1 } });

  setSaving(false);
};

The magic is in the two queryClient.removeQueries calls.

In one fell swoop, we remove all cached entries for any query that started with epics, or started with ['epic', ${epicId}], and Query will handle the rest. Now, when we navigate back to the epics page (or any page that used these queries), we’ll see the suspense boundary show while fresh data are loaded. If you’d prefer to keep stale data on the screen while the fresh data load, that’s fine too: just use queryClient.invalidateQueries instead. If you’d like to detect if a query is re-fetching in the background, so you can display an inline spinner, use the isFetching property returned from useSuspenseQuery.

const { data: epicsData, isFetching } = useSuspenseQuery(
  epicsQueryOptions(context.timestarted, deferredPage)
);
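All of this invalidation hinges on react-query matching query keys by prefix. Here’s a toy matcher to make that concrete; it is illustrative only, and the real library’s matching is more involved than this.

```typescript
type QueryKey = readonly unknown[];

// Toy prefix matcher: a filter key matches any cached key it is a prefix of.
function keyMatches(filter: QueryKey, key: QueryKey): boolean {
  return filter.every((part, i) => JSON.stringify(part) === JSON.stringify(key[i]));
}

const cachedKeys: QueryKey[] = [
  ["epic", "42"],
  ["epics", "list", 1],
  ["epics", "count"],
  ["epics", "summary"],
];

// A filter of ["epics"] touches every multi-epic query...
const hits = cachedKeys.filter((key) => keyMatches(["epics"], key));
// ...but leaves the single-epic entry ["epic", "42"] alone.
console.log(hits.length); // 3
```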

Odds and ends

We’ve gone pretty deep on TanStack Router and Query. Let’s take a look at one last trick.

If you recall, we saw that pending components support a related pendingMinMs option that forces a pending component to stay on the page for a minimum amount of time, even if the data were ready. This was to avoid a jarring flash of a loading state. We also saw that TanStack Router uses Suspense to show those pending components, which means that react-query’s useSuspenseQuery will seamlessly integrate with it. Well, almost seamlessly. Router can only apply the pendingMinMs value to the promise we return from the Router’s loader. But now we don’t really return any promise from the loader; we prefetch some stuff, and rely on component-level data fetching to do the real work.

Well there’s nothing stopping you from doing both! Right now our loader looks like this:

async loader({ context, deps }) {
  const queryClient = context.queryClient;

  queryClient.prefetchQuery(epicsQueryOptions(context.timestarted, deps.page));
  queryClient.prefetchQuery(epicsCountQueryOptions(context.timestarted));
},

Query also ships with a queryClient.ensureQueryData method, which can load query data, and return a promise for that request. Let’s put it to good use so we can use pendingMinMs again.

One thing you do not want to do is this:

await queryClient.ensureQueryData(epicsQueryOptions(context.timestarted, deps.page));
await queryClient.ensureQueryData(epicsCountQueryOptions(context.timestarted));

That will block on each request, serially. In other words, a waterfall. Instead, to kick off both requests immediately and wait on them in the loader (without a waterfall), you can do this:

await Promise.allSettled([
  queryClient.ensureQueryData(epicsQueryOptions(context.timestarted, deps.page)),
  queryClient.ensureQueryData(epicsCountQueryOptions(context.timestarted)),
]);

Which works, and keeps the pending component on the screen for the duration of pendingMinMs.

You won’t always, or even usually need to do this. But it’s handy for when you do.

Wrapping up

This has been a whirlwind tour of TanStack Router and TanStack Query, but hopefully not an overwhelming one. These tools are incredibly powerful, and offer the ability to do just about anything. I hope this post will help some people put them to good use!


Loading Data with TanStack Router: Getting Going
https://frontendmasters.com/blog/tanstack-router-data-loading-1/
Wed, 20 Nov 2024 18:52:19 +0000

TanStack Router is one of the most exciting projects in the web development ecosystem right now, and it doesn’t get nearly enough attention. It’s a fully fledged client-side application framework that supports advanced routing, nested layouts, and hooks for loading data. Best of all, it does all of this with deep type safety.


This post is all about data loading. We’ll cover the built-in hooks TanStack Router ships with to load and invalidate data. Then we’ll cover how easily TanStack Query (also known as react-query) integrates and see what the tradeoffs of each are.

The code for everything we’re covering is in this GitHub repo. As before, I’m building an extremely austere, imaginary Jira knockoff. There’s nothing useful in that repo beyond the bare minimum needed for us to take a close look at how data loading works. If you’re building your own thing, be sure to check out the DevTools for TanStack Router. They’re outstanding.

The app does load actual data via SQLite, along with some forced delays, so we can more clearly see (and fix) network waterfalls. If you want to run the project, clone it, run npm i, and then open two terminals. In the first, run npm run server, which will create the SQLite database, seed it with data, and set up the API endpoints to fetch and update data. In the second, run npm run dev to start the main project, which will be on http://localhost:5173/. There are some (extremely basic) features to edit data. If at any point you want to reset the data, just restart the server task in your terminal.

The app is contrived. It exists to show Router’s capabilities. We’ll often have odd use cases, and frankly questionable design decisions. This was purposeful, in order to simulate real-world data loading scenarios, without needing a real-world application.

But what about SSR?

Router is essentially a client-side framework. There are hooks to get SSR working, but they’re very much DIY. If this disappoints you, I’d urge just a bit of patience. TanStack Start (now in Beta) is a new project that, for all intents and purposes, adds SSR capabilities to the very same TanStack Router we’ll be talking about. What makes me especially excited about TanStack Start is that it adds these server-side capabilities in a very non-intrusive way, which does not change or invalidate anything we’ll be talking about in this post (or talked about in my last post on Router, linked above). If that’s not entirely clear and you’d like to learn more, stay tuned for my future post on TanStack Start.

The plan

TanStack Router is an entire application framework. You could teach an entire course on it, and indeed there’s no shortage of YouTube videos out there. This post would turn into a book if we tried to cover each and every option in depth.

In this post we’ll cover the most relevant features and show code snippets where helpful. Refer to the docs for details. Also check out the repo for this post as all the examples we use in this post are fleshed out in their entirety there.

Don’t let the extremely wide range of features scare you. The vast majority of the time, some basic loaders will get you exactly what you need. We’ll cover some of the advanced features, too, so you know they’re there, if you ever do need them.

Starting at the top: context

When we create our router, we can give it “context.” This is global state. For our project, we’ll pass in our queryClient for react-query (which we’ll be using a little later). Passing the context in looks like this:

// main.tsx
import { createRouter } from "@tanstack/react-router";

import { QueryClient } from "@tanstack/react-query";

const queryClient = new QueryClient();

// Import the generated route tree
import { routeTree } from "./routeTree.gen";

const router = createRouter({ 
  routeTree, 
  context: { queryClient } 
});

Then we’ll make sure Router integrates what we put on context into the static types. We do this by creating our root route like this:

// routes/__root.tsx
export const Route = createRootRouteWithContext<MyRouterContext>()({
  component: Root,
});

This context will be available to all routes in the tree, and inside API hooks like loader, which we’ll get to shortly.
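The MyRouterContext type referenced above isn’t shown in the snippets; a minimal sketch might look like this. QueryClientLike is a stand-in interface so the example is self-contained; in the real app you’d use the QueryClient class from @tanstack/react-query directly.

```typescript
// Stand-in for the real QueryClient from @tanstack/react-query.
interface QueryClientLike {
  prefetchQuery(options: unknown): Promise<void>;
}

// A minimal sketch of the context type the root route is parameterized with.
interface MyRouterContext {
  queryClient: QueryClientLike;
}

// The object passed as `context` when creating the router must match this shape:
const routerContext: MyRouterContext = {
  queryClient: { prefetchQuery: async () => {} },
};
```

With this in place, every loader and beforeLoad in the tree sees a strongly-typed context.queryClient.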

Adding to context

Context can change. We set up truly global context when we start Router up at our application’s root, but different locations in the route tree can add new things to context, which will be visible from there, downward in the tree. There are two places for this: the beforeLoad function, and the context function. Yes: routes can take a context function which modifies the route tree’s context value.

beforeLoad

The beforeLoad method always runs, on each active route, anytime the URL changes in any way. This is a good place to check preconditions and redirect. If you return a value from here, that value will be merged into the router’s context, and visible from that route downward. This function blocks all loaders from running, so be extremely careful what you do in here. Data loading should generally be avoided here unless absolutely needed, since any loaders will wait until this function completes, potentially creating waterfalls.

Here’s a good example of what to avoid, with an opportunity to see why. This beforeLoad fetches the current user, places it into context, and does a redirect if there is no user.

// routes/index.tsx
export const Route = createFileRoute("/")({
  async beforeLoad() {
    const user = await getCurrentUser();
    if (!user) {
      throw redirect({
        to: "/login",
      });
    }
    document.cookie = `user=${user.id};path=/;max-age=31536000`;

    return { user };
  },

  // ...

We’ll be looking at some data loading in a bit, and measure what starts when. You can go into the getCurrentUser function and uncomment the artificial delay in there, and see it block everything. This is especially obvious if you’re running Router’s DevTools. You’ll see this path block, and only once ready, allow all loaders below to execute.

But this is a good enough example to show how this works. The user object is now in context, visible to routes beneath it.

A more realistic example would be to check for a logged-in cookie, optimistically assume the user is logged in, and rely on network calls we do in the loaders to detect a logged-out user, and redirect accordingly. To make things even more realistic, those loaders for the initial render would run on the server, and figure out if a user is actually logged out before we show the user anything; but that will wait for a future post on TanStack Start.

What we have is sufficient to show how the beforeLoad callback works.
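That optimistic cookie check could be sketched like this. This is not the post’s actual code; the cookie name "user" is an assumption, matching the document.cookie write in the earlier snippet.

```typescript
// Sketch of an optimistic logged-in check: does a "user" cookie exist at all?
// (The cookie name is hypothetical, mirroring the earlier example.)
function hasLoginCookie(cookieHeader: string): boolean {
  return cookieHeader
    .split(";")
    .some(pair => pair.trim().startsWith("user="));
}

// In beforeLoad you might consult document.cookie, redirect to /login when the
// cookie is absent, and let the loaders' real network calls catch stale sessions.
```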

Context (function)

There’s also a context function we can provide routes. This is a non-async function that also gives us an opportunity to add to context. But it runs much more conservatively: it only runs when the URL changes in a way that’s relevant to that route. So for a route of, say, app/epics/$epicId, the context function will re-run when the epicId param changes. This might seem strange, but it’s useful for modifying context only when the route has actually changed, especially when you need to put non-primitive values (objects and functions) onto context. These non-primitive values are always compared by reference, and are therefore always unique against the last value generated. As a result, they would cause render churning if added in beforeLoad, since React would (incorrectly) think it needed to re-render a route when nothing had changed.
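The reference-equality point is easy to demonstrate in isolation:

```typescript
// Two calls that build "the same" object are never reference-equal, so
// returning a fresh object (or function) on every URL change would look like
// brand-new context each time, and churn renders.
const makeHelpers = () => ({ log: (msg: string) => msg });

const first = makeHelpers();
const second = makeHelpers();
// first !== second, even though they're structurally identical
```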

For now, here’s some code in our root route to mark the time when the initial render happens, so we can compare it to the timestamps of when various queries run in our tree. This will help us see, and fix, network waterfalls.

// routes/__root.tsx
export const Route = createRootRouteWithContext<MyRouterContext>()({
  context({ location }) {
    const timeStarted = +new Date();
    console.log("");
    console.log("Fresh navigation to", location.href);
    console.log("-------------------");

    return { timestarted: timeStarted };
  },

  // ...

This code is in our root route, so it will never re-run, since there are no path parameters the root route depends on.

Now everywhere in our route tree will have a timestarted value that we can use to detect any delays from data fetches in our tree.

Loaders

Let’s actually load some data. Router provides a loader function for this. Any of our route configurations can accept a loader function, which we can use to load data. Loaders all run in parallel; it would be bad if a layout needed to finish loading its data before the path beneath it started. Loaders receive any path params on the route’s URL, any search params (querystring values) the route has subscribed to, the context, and a few other goodies, and load whatever data they need. Router will detect what you return, and allow components to retrieve that data via the useLoaderData hook — strongly typed.

Loader in a route

Let’s take a look at tasks.route.tsx.

This is a route that will run for any URL at all starting with /app/tasks. It will run for that path, for /app/tasks/$taskId, for /app/tasks/$taskId/edit, and so on.

export const Route = createFileRoute("/app/tasks")({
  component: TasksLayout,
  loader: async ({ context }) => {
    const now = +new Date();
    console.log(`/tasks route loader. Loading task layout info at + ${now - context.timestarted}ms since start`);

    const tasksOverview = await fetchJson<TaskOverview[]>("api/tasks/overview");
    return { tasksOverview };
  },
  gcTime: 1000 * 60 * 5,
  staleTime: 1000 * 60 * 2,
});

We receive the context, and grab the timestarted value from it. We request some overview data on our tasks, and send that data down.

The gcTime property controls how long old route data are kept in cache. So if we browse from tasks over to epics, and then come back in 5 minutes and 1 second, nothing will be there, and the page will load in fresh. staleTime controls how long a cached entry is considered “fresh.” This determines whether cached data are refetched in the background. Here it’s set to two minutes. This means if the user hits this page, then goes to the epics page, waits 3 minutes, then browses back to tasks, the cached data will show, while the tasks data is re-fetched in the background, and (if changed) update the UI.
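Those two timers can be modeled as a toy function (this is an illustration of the semantics just described, not Router's implementation), using this route's values:

```typescript
const STALE_TIME = 1000 * 60 * 2; // 2 minutes: background refetch after this
const GC_TIME = 1000 * 60 * 5;    // 5 minutes: evicted from cache after this

// What a cached route entry of a given age does on the next visit.
function cacheStatus(ageMs: number): "fresh" | "stale" | "evicted" {
  if (ageMs > GC_TIME) return "evicted";  // nothing cached; page loads in fresh
  if (ageMs > STALE_TIME) return "stale"; // cached data shows, refetch kicks off
  return "fresh";                         // cached data shows, no refetch
}
```

So the 3-minute round trip in the example above lands in the "stale" window, and the 5-minutes-and-1-second one lands in "evicted".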

You’re probably wondering if TanStack Router tells you this background re-fetch is happening, so you can show an inline spinner, and yes, you can detect this like so:

const { isFetching } = Route.useMatch();

Loader in a page

Now let’s take a look at the tasks page.

export const Route = createFileRoute("/app/tasks/")({
  component: Index,
  loader: async ({ context }) => {
    const now = +new Date();
    console.log(`/tasks/index path loader. Loading tasks at + ${now - context.timestarted}ms since start`);

    const tasks = await fetchJson<Task[]>("api/tasks");
    return { tasks };
  },
  gcTime: 1000 * 60 * 5,
  staleTime: 1000 * 60 * 2,
  pendingComponent: () => <div>Loading tasks list...</div>,
  pendingMs: 150,
  pendingMinMs: 200,
});

This is the route for the specific URL /app/tasks. If the user were to browse to /app/tasks/$taskId then this component would not run. This is a specific page, not a layout (which Router calls a “route”). Basically the same as before, except now we’re loading the list of tasks to display on this page.

We’ve added some new properties this time, though. The pendingComponent property allows us to render some content while the loader is working. We also specified pendingMs, which controls how long we wait before showing the pending component. Lastly, pendingMinMs allows us to force the pending component to stay on the screen for a specified amount of time, even if the data are ready. This can be useful to avoid a brief flash of a loading component, which can be jarring to the user.

If you’re wondering why we’d even want to use pendingMs to delay a loading screen, it’s for subsequent navigations. Rather than immediately transition from the current page to a new page’s loading component, this setting lets us stay on the current page for a moment, in the hopes that the new page will be ready quickly enough that we don’t have to show any pending component at all. Of course, on the initial load, when the web app first starts up, these pendingComponents do show immediately, as you’d expect.
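Here's a toy timeline for the pendingMs / pendingMinMs pair used above. These are assumed semantics for illustration, not Router's code: the pending component appears only if loading exceeds pendingMs, and once shown it stays up for at least pendingMinMs.

```typescript
// How long until real content appears, and whether the pending component
// ever flashed, for a loader that takes loadMs.
function pendingWindow(loadMs: number, pendingMs = 150, pendingMinMs = 200) {
  if (loadMs <= pendingMs) {
    return { shown: false, totalMs: loadMs }; // fast load: no flash at all
  }
  // Slow load: content can't appear until the pending component has had its
  // minimum time on screen (pendingMs delay + pendingMinMs hold).
  return { shown: true, totalMs: Math.max(loadMs, pendingMs + pendingMinMs) };
}
```

Under this model, a 100ms load never flashes a spinner, while a 300ms load shows the pending component and holds the screen until the 350ms mark.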

Let’s run our tasks page.

It’s ugly, and frankly useless, but it works. Now let’s take a closer look.

Loaders running in parallel

If we peek at our console, we should see something like this:

If you have DevTools open, you should see something like below. Note how the route and page load and finish in parallel.

As we can see, these requests started a mere millisecond apart from each other, since the loaders are running in parallel (since this isn’t the real Jira, I had to manually add a delay of 750ms to each of the API endpoints).

Different routes using the same data

If we look at the loader for the /app/tasks/$taskId route, and the loader for the /app/tasks/$taskId/edit route, we see the same fetch call:

const task = await fetchJson<Task>(`api/tasks/${taskId}`);

This is because we need to load the actual task in order to display it, or in order to display it in a form for the user to make changes. Unfortunately though, if you click the edit button for any task, then go back to the tasks list (without saving anything), then click the edit button for the same task, you should notice the same exact data being requested. This makes sense. Both loaders happen to make the same fetch() call, but there’s nothing in our client to cache the call. This is probably fine 99% of the time, but this is one of the many things react-query will improve for us, in a bit.

Updating data

If you click the edit button for any task, you should be brought to a page with an extremely basic form that will let you edit the task’s name. Once we click save, we want to navigate back to the tasks list, but most importantly, we need to tell Router that we’ve changed some data, and that it will need to invalidate some cached entries, and re-fetch when we go back to those routes.

This is where Router’s built-in capabilities start to get stretched, and where we might start to want react-query (discussed in part 2 of this post). Router will absolutely let you invalidate routes, to force re-fetches. But the API is fairly simple, and fine-grained. We basically have to describe each route we want invalidated (or removed). Let’s take a look:

import { useRouter } from "@tanstack/react-router";

// ...

const router = useRouter();
const save = async () => {
  await postToApi("api/task/update", {
    id: task.id,
    title: newTitleEl.current!.value,
  });

  router.invalidate({
    filter: route => {
      return (
route.routeId === "/app/tasks/" ||
        (route.routeId === "/app/tasks/$taskId/" && route.params.taskId === taskId) ||
        (route.routeId === "/app/tasks_/$taskId/edit" && route.params.taskId === taskId)
      );
    },
  });

  navigate({ to: "/app/tasks" });
};

Note the call to router.invalidate. This tells Router to mark any cached entries matching that filter as stale, causing us to re-fetch them the next time we browse to those paths. We could also pass absolutely nothing to that same invalidate method, which would tell Router to invalidate everything.

Here we invalidated the main tasks list, as well as the view and edit pages, for the individual task we just modified.

Now when we navigate back to the main tasks page we’ll immediately see the prior, now-stale data, but new data will fetch, and update the UI when present. Recall that we can use const { isFetching } = Route.useMatch(); to show an inline spinner while this fetch is happening.

If you’d prefer to completely remove the cache entries, and have the task page’s “Loading” component show, then you can use router.clearCache instead, with the same exact filter argument. That will remove those cache entries completely, forcing Router to completely re-fetch them, and show the pending component. This is because there is no longer any stale data left in the cache; clearCache removed it.

There is one small caveat though: Router will prevent you from clearing the cache for the page you’re on. That means we can’t clear the cache for the edit task page, since we’re sitting on it already. To be clear, when we call clearCache, the filter function won’t even look at the route you’re on; the ability to remove it simply does not exist.
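A toy model (not Router's implementation) makes the invalidate vs. clearCache distinction, including the "can't clear the current route" rule, concrete:

```typescript
type CacheEntry = { data: unknown; stale: boolean };
const routeCache = new Map<string, CacheEntry>();

// invalidate: mark the entry stale. Stale data still renders immediately
// while a background refetch runs.
function invalidateRoute(routeId: string) {
  const entry = routeCache.get(routeId);
  if (entry) entry.stale = true;
}

// clearCache: delete the entry entirely, so the pending component shows on
// the next visit. The route you're currently on is never touched.
function clearRoute(routeId: string, currentRouteId: string) {
  if (routeId === currentRouteId) return;
  routeCache.delete(routeId);
}
```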

Instead, you could do something like this:

router.clearCache({
  filter: route => {
return route.routeId === "/app/tasks/" || (route.routeId === "/app/tasks_/$taskId/edit" && route.params.taskId === taskId);
  },
});

router.invalidate({
  filter: route => {
    return route.routeId === "/app/tasks_/$taskId/edit" && route.params.taskId === taskId;
  },
});

But really, at this point you should probably be looking to use react-query, which we’ll cover in the next post.

Article Series

]]>
https://frontendmasters.com/blog/tanstack-router-data-loading-1/feed/ 0 4465
Introducing TanStack Router https://frontendmasters.com/blog/introducing-tanstack-router/ https://frontendmasters.com/blog/introducing-tanstack-router/#comments Fri, 13 Sep 2024 16:16:57 +0000 https://frontendmasters.com/blog/?p=3821 TanStack Router is an incredibly exciting project. It’s essentially a fully-featured client-side JavaScript application framework. It provides a mature routing and navigation system with nested layouts and efficient data loading capabilities at every point in the route tree. Best of all, it does all of this in a type-safe manner.

What’s especially exciting is that, as of this writing, there’s a TanStack Start in the works, which will add server-side capabilities to Router, enabling you to build full-stack web applications. Start promises to do this with a server layer applied directly on top of the same TanStack Router we’ll be covering here. That makes this a perfect time to get to know Router if you haven’t already.

TanStack Router is more than just a router — it’s a full-fledged client-side application framework. So to prevent this post from getting too long, we won’t even try to cover it all. We’ll limit ourselves to routing and navigation, which is a larger topic than you might think, especially considering the type-safe nature of Router.

Article Series

Getting started

There are official TanStack Router docs and a quickstart guide, which has a nice tool for scaffolding a fresh Router project. You can also clone the repo used for this post and follow along.

The Plan

In order to see what Router can do and how it works, we’ll pretend to build a task management system, like Jira. Like the real Jira, we won’t make any effort at making things look nice or be pleasant to use. Our goal is to see what Router can do, not build a useful web application.

We’ll cover: routing, layouts, paths, search parameters, and of course static typing all along the way.

Let’s start at the very top.

The Root Route

This is our root layout, which Router calls __root.tsx. If you’re following along on your own project, this will go directly under the routes folder.

import { createRootRoute, Link, Outlet } from "@tanstack/react-router";

export const Route = createRootRoute({
  component: () => {
    return (
      <>
        <div>
          <Link to="/">
            Home
          </Link>
          <Link to="/tasks">
            Tasks
          </Link>
          <Link to="/epics">
            Epics
          </Link>
        </div>
        <hr />
        <div>
          <Outlet />
        </div>
      </>
    );
  },
});

The createRootRoute function does what it says. The <Link /> component is also fairly self-explanatory (it makes links). Router is kind enough to add an active class to Links which are currently active, which makes it easy to style them accordingly (as well as adds an appropriate aria-current="page" attribute/value). Lastly, the <Outlet /> component is interesting: this is how we tell Router where to render the “content” for this layout.

Running the App

We run our app with npm run dev. Check your terminal for the port on localhost where it’s running.

More importantly, the dev watch process monitors the routes we’ll be adding, and maintains a routeTree.gen.ts file. This syncs metadata about our routes in order to help build static types, which will help us work with our routes safely. Speaking of, if you’re building this from scratch from our demo repo, you might have noticed some TypeScript errors on our Link tags, since those URLs don’t yet exist. That’s right: TanStack Router deeply integrates TypeScript into the route level, and will even validate that your Link tags are pointing somewhere valid.

To be clear, this is not because of any editor plugins. The TypeScript integration itself is producing errors, as it would in your CI/CD system.

src/routes/__root.tsx:8:17 - error TS2322: Type '"/"' is not assignable to type '"." | ".." | undefined'.
          <Link to="/" className="[&.active]:font-bold">

Building the App

Let’s get started by adding our root page. In Router, we use the file index.tsx to represent the root / path, wherever we are in the route tree (which we’ll explain shortly). We’ll create index.tsx, and, assuming you have the dev task running, it should scaffold some code for you that looks like this:

import { createFileRoute } from "@tanstack/react-router";

export const Route = createFileRoute("/")({
  component: () => <div>Hello /!</div>,
});

There’s a bit more boilerplate than you might be used to with metaframeworks like Next or SvelteKit. In those frameworks, you just export default a React component, or plop down a normal Svelte component and everything just works. In TanStack Router we have to call a function called createFileRoute, and pass in the route to where we are.

The route is necessary for the type safety Router has, but don’t worry, you don’t have to manage this yourself. The dev process not only scaffolds code like this for new files, it also keeps those path values in sync for you. Try it — change that path to something else, and save the file; it should change it right back, for you. Or create a folder called junk and drag it there: it should change the path to "/junk/".

Let’s add the following content (after moving it back out of the junk folder).

import { createFileRoute } from "@tanstack/react-router";

export const Route = createFileRoute("/")({
  component: Index,
});

function Index() {
  return (
    <div>
      <h3>Top level index page</h3>
    </div>
  );
}

Simple and humble — just a component telling us we’re in the top level index page.

Routes

Let’s start to create some actual routes. Our root layout indicated we want to have paths for dealing with tasks and epics. Router (by default) uses file-based routing, but provides you two ways to do so, which can be mixed and matched (we’ll look at both). You can stack your files into folders which match the path you’re browsing. Or you can use “flat routes” and indicate these route hierarchies in individual filenames, separating the paths with dots. If you’re thinking only the former is useful, stay tuned.

Just for fun, let’s start with the flat routes. Let’s create a tasks.index.tsx file. This is the same as creating an index.tsx inside of a hypothetical tasks folder. For content we’ll add some basic markup (we’re trying to see how Router works, not build an actual todo app).

import { createFileRoute, Link } from "@tanstack/react-router";

export const Route = createFileRoute("/tasks/")({
  component: Index,
});

function Index() {
  const tasks = [
    { id: "1", title: "Task 1" },
    { id: "2", title: "Task 2" },
    { id: "3", title: "Task 3" },
  ];

  return (
    <div>
      <h3>Tasks page!</h3>
      <div>
        {tasks.map((t, idx) => (
          <div key={idx}>
            <div>{t.title}</div>
            <Link to="/tasks/$taskId" params={{ taskId: t.id }}>
              View
            </Link>
            <Link to="/tasks/$taskId/edit" params={{ taskId: t.id }}>
              Edit
            </Link>
          </div>
        ))}
      </div>
    </div>
  );
}

Before we continue, let’s add a layout file for all of our tasks routes, housing some common content that will be present on all pages routed to under /tasks. If we had a tasks folder, we’d just throw a route.tsx file in there. Instead, we’ll add a tasks.route.tsx file. Since we’re using flat files here, we can also just name it tasks.tsx. But I like keeping things consistent with directory-based files (which we’ll see in a bit), so I prefer tasks.route.tsx.

import { createFileRoute, Outlet } from "@tanstack/react-router";

export const Route = createFileRoute("/tasks")({
  component: () => (
    <div>
      Tasks layout <Outlet />
    </div>
  ),
});

As always, don’t forget the <Outlet /> or else the actual content of that path will not render.

To repeat, xyz.route.tsx is a component that renders for the entire route, all the way down. It’s essentially a layout, but Router calls them routes. And xyz.index.tsx is the file for the individual path at xyz.

This renders. There’s not much to look at, but take a quick look before we make one interesting change.

Notice the navigation links from the root layout at the very top. Below that, we see Tasks layout, from the tasks route file (essentially a layout). Below that, we have the content for our tasks page.

Path Parameters

The <Link> tags in the tasks index file give away where we’re headed, but let’s build paths to view and edit tasks. We’ll create /tasks/123 and /tasks/123/edit paths, where of course 123 represents whatever the taskId is.

TanStack Router represents variables inside of a path as path parameters, and they’re represented as path segments that start with a dollar sign. So with that we’ll add tasks.$taskId.index.tsx and tasks.$taskId.edit.tsx. The former will route to /tasks/123 and the latter will route to /tasks/123/edit. Let’s take a look at tasks.$taskId.index.tsx and find out how we actually get the path parameter that’s passed in.

import { createFileRoute, Link } from "@tanstack/react-router";

export const Route = createFileRoute("/tasks/$taskId/")({
  component: () => {
    const { taskId } = Route.useParams();

    return (
      <div>
        <div>
          <Link to="/tasks">Back</Link>
        </div>
        <div>View task {taskId}</div>
      </div>
    );
  },
});

The Route.useParams() hook on our Route object returns our parameters. But this isn’t interesting on its own; every routing framework has something like this. What’s particularly compelling is that this one is statically typed. Router is smart enough to know which parameters exist for that route (including parameters from higher up in the route tree, which we’ll see in a moment). That means that not only do we get auto complete…

…but if you put an invalid path param in there, you’ll get a TypeScript error.

We also saw this with the Link tags we used to navigate to these routes.

<Link to="/tasks/$taskId" params={{ taskId: t.id }}>

If we’d left off the params here (or specified anything other than taskId), we would have gotten an error.
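Before moving on to the advanced routing rules, the flat-route naming we’ve used so far can be summed up with a toy function. This is an illustration of the convention only: it ignores route (layout) files and the pathless "_" tricks coming next.

```typescript
// Dots become path separators; a trailing "index" means "this exact path"
// (Router gives index routes a trailing slash, e.g. "/tasks/$taskId/").
function fileToRoutePath(file: string): string {
  const base = file.replace(/\.tsx$/, "");
  if (base === "index") return "/";
  if (base.endsWith(".index")) {
    return "/" + base.slice(0, -".index".length).split(".").join("/") + "/";
  }
  return "/" + base.split(".").join("/");
}
```

For example, tasks.$taskId.index.tsx maps to /tasks/$taskId/ and tasks.$taskId.edit.tsx maps to /tasks/$taskId/edit, matching the createFileRoute paths we’ve seen.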

Advanced Routing

Let’s start to lean on Router’s advanced routing rules (a little) and see some of the nice features it supports. I’ll stress, these are advanced features you won’t commonly use, but it’s nice to know they’re there.

The edit task route is essentially identical, except the path is different, and I put the text to say “Edit” instead of “View.” But let’s use this route to explore a TanStack Router feature we haven’t seen.

Conceptually we have two hierarchies: we have the URL path, and we have the component tree. So far, these things have lined up 1:1. The URL path:

/tasks/123/edit

Rendered:

root route -> tasks route layout -> edit task path

The URL hierarchy and the component hierarchy lined up perfectly. But they don’t have to.

Just for fun, let’s see how we can remove the main tasks layout file from the edit task route. So we want the /tasks/123/edit URL to render the same thing, but without the tasks.route.tsx route file being rendered. To do this, we just rename tasks.$taskId.edit.tsx to tasks_.$taskId.edit.tsx.

Note that tasks became tasks_. We do need tasks in there, where it is, so Router will know how to eventually find the edit.tsx file we’re rendering, based on the URL. But by naming it tasks_, we remove that component from the rendered component tree, even though tasks is still in the URL. Now when we render the edit task route, we get this:

Notice how Tasks layout is gone.

What if you wanted to do the opposite? That is, you want some layout to render in the edit task page, but you don’t want that layout to affect the URL. Well, just put the underscore on the opposite side: a leading underscore creates a pathless layout route. We already have tasks_.$taskId.edit.tsx, which renders the task edit page without putting the tasks layout route into the component hierarchy. Let’s say we have a special layout we want to use only for task editing. Let’s create a _taskEdit.tsx.

import { createFileRoute, Outlet } from "@tanstack/react-router";

export const Route = createFileRoute("/_taskEdit")({
  component: () => (
    <div>
      Special Task Edit Layout <Outlet />
    </div>
  ),
});

Then we change our task edit file to this _taskEdit.tasks_.$taskId.edit.tsx. And now when we browse to /tasks/1/edit we see the task edit page with our custom layout (which did not affect our URL).

Again, this is an advanced feature. Most of the time you’ll use simple, boring, predictable routing rules. But it’s nice to know these advanced features exist.

Directory-Based Routing

Instead of putting file hierarchies into file names with dots, you can also put them in directories. I usually prefer directories, but you can mix and match, and sometimes a judicious use of flat file names for things like pairs of $pathParam.index.tsx and $pathParam.edit.tsx feels natural inside of a directory. All the normal rules apply, so choose what feels best to you.

We won’t walk through everything for directories again. We’ll just take a peek at the finished product (which is also on GitHub). We have an epics path, which lists out, well, epics. For each, we can edit or view the epic. When viewing, we also show a (static) list of milestones in the epic, which we can also view or edit. Like before, for fun, when we edit a milestone, we’ll remove the milestones route layout.

So rather than epics.index.tsx and epics.route.tsx we have epics/index.tsx and epics/route.tsx. And so on. Again, they’re the same rules: replace the dots in the file names with slashes (and directories).

Before moving on, let’s briefly pause and look at the $milestoneId.index.tsx route. There’s a $milestoneId in the path, so we can find that path param. But look up, higher in the route tree. There’s also an $epicId param two layers higher. It should come as no surprise that Router is smart enough to realize this, and set the typings up such that both are present.

Type-Safe Querystrings

The cherry on top of this post tackles what is, in my opinion, one of the most obnoxious aspects of web development: dealing with search params (sometimes called querystrings). Basically the stuff that comes after the ? in a URL: /tasks?search=foo&status=open. The underlying platform primitive URLSearchParams can be tedious to work with, and frameworks don’t usually do much better, often providing you an un-typed bag of properties, and offering minimal help in constructing a new URL with new, updated querystring values.

TanStack Router provides a convenient, fully-featured mechanism for managing search params, which are also type-safe. Let’s dive in. We’ll take a high-level look, but the full docs are here.

We’ll add search param support for the /epics/$epicId/milestones route. We’ll allow various values in the search params that would allow the user to search milestones under a given epic. We’ve seen the createFileRoute function countless times. Typically we just pass a component to it.

export const Route = createFileRoute("/epics/$epicId/milestones/")({
  component: () => {
    // ...

There are lots of other options it supports. For search params we want validateSearch. This is our opportunity to tell Router which search params this route supports, and how to validate what’s currently in the URL. After all, the user is free to type whatever they want into a URL, regardless of the TypeScript typings you set up. It’s your job to take potentially invalid values and project them onto something valid.

First, let’s define a type for our search params.

type SearchParams = {
  page: number;
  search: string;
  tags: string[];
};

Now let’s implement our validateSearch method. This receives a Record<string, unknown> representing whatever the user has in the URL, and from that, we return something matching our type. Let’s take a look.

export const Route = createFileRoute("/epics/$epicId/milestones/")({
  validateSearch(search: Record<string, unknown>): SearchParams {
    return {
      page: Number(search.page ?? "1") || 1, // || (not ??) so a NaN result also falls back to 1
      search: (search.search as string) || "",
      tags: Array.isArray(search.tags) ? search.tags : [],
    };
  },
  component: () => {

Note that (unlike URLSearchParams) we are not limited to just string values. We can put objects or arrays in there, and TanStack will do the work of serializing and de-serializing them for us. Not only that, but you can even specify custom serialization mechanisms.
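To see what that serialization amounts to, here’s a rough standalone sketch (my own code, not Router’s actual implementation): non-string values get JSON-encoded into the querystring and decoded back out on the way in.

```typescript
// Sketch of JSON-in-the-querystring serialization, similar in spirit to
// what Router does for non-string values. Not Router's actual code.
function stringifySearch(search: Record<string, unknown>): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(search)) {
    params.set(key, typeof value === "string" ? value : JSON.stringify(value));
  }
  return "?" + params.toString();
}

function parseValue(raw: string): unknown {
  try {
    return JSON.parse(raw); // arrays, objects, and numbers round-trip
  } catch {
    return raw; // plain strings stay strings
  }
}

console.log(stringifySearch({ page: 1, tags: ["tag 1"] }));
// ?page=1&tags=%5B%22tag+1%22%5D
console.log(parseValue('["tag 1"]'));
```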

Moreover, for a production application, you’ll likely want to use a more serious validation mechanism, like Zod. In fact, Router has a number of adapters you can use out of the box, including Zod. Check out the docs on Search Params here.

Let’s manually browse to this path, without any search params, and see what happens. When we browse to

http://localhost:5173/epics/1/milestones

Router replaces (does not redirect) us to:

http://localhost:5173/epics/1/milestones?page=1&search=&tags=%5B%5D

TanStack ran our validation function, and then replaced our URL with the correct, valid search params. If you don’t like how it forces the URL to be “ugly” like that, stay tuned; there are workarounds. But first let’s work with what we have.
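That projection step is easy to sandbox outside of Router. Here’s essentially the same validation logic as a plain function (a standalone sketch, using || 1 so a NaN page also falls back), run against a few raw inputs:

```typescript
type SearchParams = { page: number; search: string; tags: string[] };

// The same projection logic as our validateSearch, as a plain function.
function validateSearch(search: Record<string, unknown>): SearchParams {
  return {
    page: Number(search.page ?? "1") || 1, // NaN and 0 fall back to 1
    search: (search.search as string) || "",
    tags: Array.isArray(search.tags) ? search.tags : [],
  };
}

console.log(validateSearch({}));
// { page: 1, search: '', tags: [] }
console.log(validateSearch({ page: "banana", tags: "not-an-array" }));
// { page: 1, search: '', tags: [] } -- invalid values projected to defaults
```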

We’ve been using the Route.useParams method multiple times. There’s also a Route.useSearch that does the same thing for search params. But let’s do something a little different. We’ve previously been putting everything in the same route file, so we could just directly reference the Route object from the same lexical scope. Let’s build a separate component to read and update these search params.

I’ve added a MilestoneSearch.tsx component. You might think you could just import the Route object from the route file. But that’s dangerous. You’re likely to create a circular dependency, which might or might not work, depending on your bundler. Even if it “works” you might have some hidden issues lurking.

Fortunately Router gives you a direct API to handle this, getRouteApi, which is exported from @tanstack/react-router. We pass it a (statically typed) route, and it gives us back the correct route object.

const route = getRouteApi("/epics/$epicId/milestones/");

Now we can call useSearch on that route object and get our statically typed result.

We won’t belabor the form elements and click handlers to sync and gather new values for these search parameters. Let’s just assume we have some new values, and see how we set them. For this, we can use the useNavigate hook.

const navigate = useNavigate({ 
  from: "/epics/$epicId/milestones/"
});

We call it and tell it where we’re navigating from. Now we use the result to tell it where we want to go (the same place we are), and we’re given a search function from which we return the new search params. Naturally, TypeScript will yell at us if we leave anything off. As a convenience, Router passes this search function the current values, making it easy to add or override a single value. So to page up, we can do:

navigate({
  to: ".",
  search: prev => {
    return { ...prev, page: prev.page + 1 };
  },
});

Naturally, there’s also a params prop on this function, for when you’re browsing to a route with path parameters you have to specify (or else TypeScript will yell at you, like always). We don’t need an $epicId path param here: since we’re going to the same place we already are (as indicated by the from value in useNavigate and the to: "." value in the navigate call), Router knows to keep the param that’s already there.

If we want to set a search value and tags, we could do:

const newSearch = "Hello World";
const tags = ["tag 1", "tag 2"];

navigate({
  to: ".",
  search: () => {
    return { page: 1, search: newSearch, tags };
  },
});

Which will make our URL look like this:

/epics/1/milestones?page=1&search=Hello%20World&tags=%5B"tag%201"%2C"tag%202"%5D

Again, the search string and the array of tags were serialized and URL-encoded for us.

If we want to link to a page with search params, we specify those search params on the Link tag:

<Link 
  to="/epics/$epicId/milestones" 
  params={{ epicId }} 
  search={{ search: "", page: 1, tags: [] }}>
  View milestones
</Link>

And as always, TypeScript will yell at us if we leave anything off. Strong typing is a good thing.

Making Our URL Prettier

As we saw, currently, browsing to:

http://localhost:5173/epics/1/milestones

Will replace the URL with this:

http://localhost:5173/epics/1/milestones?page=1&search=&tags=%5B%5D

It will have all those query params since we specifically told Router that our page will always have a page, search, and tags value. If you care about having a minimal, clean URL and want that transformation not to happen, you have some options. We can make all of these values optional. In JavaScript (and TypeScript), a property is treated as absent if it holds the value undefined. So we could change our type to this:

type SearchParams = {
  page: number | undefined;
  search: string | undefined;
  tags: string[] | undefined;
};

Or this, which is equivalent:

type SearchParams = Partial<{
  page: number;
  search: string;
  tags: string[];
}>;

Then do the extra work to put undefined values in place of default values:

validateSearch(search: Record<string, unknown>): SearchParams {
  const page = Number(search.page ?? "1") || 1; // || so a NaN result falls back to 1
  const searchVal = (search.search as string) || "";
  const tags = Array.isArray(search.tags) ? search.tags : [];

  return {
    page: page === 1 ? undefined : page,
    search: searchVal || undefined,
    tags: tags.length ? tags : undefined,
  };
},
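Sandboxed the same way (again, a standalone sketch rather than anything Router-specific), this version maps an empty URL to an all-undefined object. Since serializers skip undefined properties, nothing ends up in the URL:

```typescript
type SearchParams = Partial<{ page: number; search: string; tags: string[] }>;

// Same projection as before, standalone: default values become undefined.
function validateSearch(search: Record<string, unknown>): SearchParams {
  const page = Number(search.page ?? "1") || 1;
  const searchVal = (search.search as string) || "";
  const tags = Array.isArray(search.tags) ? search.tags : [];
  return {
    page: page === 1 ? undefined : page,
    search: searchVal || undefined,
    tags: tags.length ? tags : undefined,
  };
}

// Properties holding undefined are skipped during serialization:
console.log(JSON.stringify(validateSearch({})));            // {}
console.log(JSON.stringify(validateSearch({ page: "3" }))); // {"page":3}
```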

This will complicate places where you use these values, since now they might be undefined. Our nice, simple page-up call now looks like this:

navigate({
  to: ".",
  search: prev => {
    return { ...prev, page: (prev.page || 1) + 1 };
  },
});

On the plus side, our URL will now omit search params with default values, and for that matter, our <Link> tags to this page now don’t have to specify any search values, since they’re all optional.

Another Option

Router actually provides you another way to do this. Currently validateSearch accepts just an untyped Record<string, unknown> since the URL can contain anything. The “true” type of our search params is what we return from this function. Tweaking the return type is how we’ve been changing things.

But Router allows you to opt into another mode, where you can specify both a structure of incoming search params, with optional values, as well as the return type, which represents the validated, finalized type for the search params that will be used by your application code. Let’s see how.

First let’s specify two types for these search params

type SearchParams = {
  page: number;
  search: string;
  tags: string[];
};

type SearchParamsInput = Partial<{
  page: number;
  search: string;
  tags: string[];
}>;

Now let’s pull in SearchSchemaInput:

import { SearchSchemaInput } from "@tanstack/react-router";

SearchSchemaInput is how we signal to Router that we want to specify different search params for what we’ll receive compared to what we’ll produce. We do it by intersecting our desired input type with this type, like this:

validateSearch(search: SearchParamsInput & SearchSchemaInput): SearchParams {

Now we perform the same original validation we had before, to produce real values, and that’s that. We can now browse to our page with a <Link> tag, and specify no search params at all, and it’ll accept it and not modify the URL, while still producing the same strongly-typed search param values as before.

That said, when we update our URL, we can’t just “splat” all previous values plus the value we’re setting: once spread back in, those params hold concrete values again, and will therefore be written into the URL. The GitHub repo has a branch called feature/optional-search-params-v2 showing this second approach.
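One way to keep the splat ergonomics (a sketch; the helper name and defaults are mine, not from the article’s repo) is to strip default values before handing the object back to navigate:

```typescript
type SearchParams = { page: number; search: string; tags: string[] };

// Hypothetical defaults mirroring our validateSearch fallbacks.
const defaults: SearchParams = { page: 1, search: "", tags: [] };

// Drop any param still holding its default so it never reaches the URL.
// (Illustrative helper; not from the article's repo.)
function stripDefaults(search: SearchParams): Partial<SearchParams> {
  return {
    page: search.page === defaults.page ? undefined : search.page,
    search: search.search === defaults.search ? undefined : search.search,
    tags: search.tags.length ? search.tags : undefined,
  };
}

console.log(JSON.stringify(stripDefaults({ page: 2, search: "", tags: [] })));
// {"page":2}
```

A navigate call could then do search: prev => stripDefaults({ ...defaults, ...prev, page: (prev.page ?? 1) + 1 }).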

Experiment and choose what works best for you and your use case.

Wrapping up

TanStack Router is an incredibly exciting project. It’s a superbly-made, flexible client-side framework that promises fantastic server-side integration in the near future.

We’ve barely scratched the surface. We just covered the absolute basics of type-safe navigation, layouts, path params, and search params, but there is much more to learn, particularly around data loading and the upcoming server integration.
