Revolutionizing GitHub Issues: How We Made Navigation Instant
When you're deep in a backlog, every millisecond counts. Even tiny delays can break your flow, turning a quick context switch into a frustrating wait. That's exactly what GitHub Issues users experienced—not because the system was slow in isolation, but because common navigation paths required redundant data fetches. Our team set out to fix this by rethinking how issue pages load from start to finish. Below, we break down the problem, our approach, and the real-world impact in a Q&A format.
Why did GitHub Issues feel slow despite being technically fast?
Even though each individual request might have been efficient, the cumulative cost of fetching data repeatedly caused noticeable delays. When you open an issue, jump to a linked thread, then return to the list, you're paying the full price of server rendering, network fetches, and client boot each time. This isn't about raw server performance—it's about perceived latency. Developers benchmark against the fastest experience they've had, not against old web apps. In 2026, "fast enough" doesn't cut it. For developer tools, latency directly impacts product quality. That's why we focused on reducing the feeling of waiting, not just shaving milliseconds off individual requests.

What was the overarching strategy to modernize performance?
Rather than chasing marginal backend wins, we shifted work to the client and optimized for perceived latency. The core idea: render instantly from locally available data, then revalidate in the background. To achieve this, we built a client-side caching layer backed by IndexedDB, added a preheating strategy to improve cache hit rates without spamming requests, and introduced a service worker so cached data remains usable even on hard navigations. This approach makes navigation feel instant because the UI doesn't wait for the network: it paints from cached data first. It's a pattern directly transferable to any data-heavy web app.
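In code, the core pattern looks roughly like the sketch below. The names (`Issue`, `issueCache`, `fetchIssue`, `loadIssue`) and the in-memory `Map` are illustrative stand-ins rather than our actual implementation; the real cache is backed by IndexedDB, as described in the next answer.

```typescript
// Sketch of "render from cache, revalidate in the background".
// All names and the /api/issues endpoint are hypothetical.

interface Issue {
  id: number;
  title: string;
  updatedAt: string;
}

// Stand-in for the IndexedDB-backed cache described below.
const issueCache = new Map<number, Issue>();

async function fetchIssue(id: number): Promise<Issue> {
  const res = await fetch(`/api/issues/${id}`);
  if (!res.ok) throw new Error(`Failed to fetch issue ${id}`);
  return res.json();
}

async function loadIssue(
  id: number,
  render: (issue: Issue, source: "cache" | "network") => void
): Promise<void> {
  // 1. Paint immediately from local data if we have it.
  const cached = issueCache.get(id);
  if (cached) render(cached, "cache");

  // 2. Revalidate in the background; repaint only if something changed.
  try {
    const fresh = await fetchIssue(id);
    issueCache.set(id, fresh);
    if (!cached || fresh.updatedAt !== cached.updatedAt) {
      render(fresh, "network");
    }
  } catch {
    // Offline or failed request: the cached render stays on screen.
  }
}
```

The key design choice is that the network call never blocks the first paint; it only decides whether a repaint is needed afterward.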
How does the client-side caching and preheating system work?
The caching layer uses IndexedDB to store issue data locally on the user's device. When you navigate to an issue, the page loads immediately from this cache, not from the network. But a cache is only useful if it has the right data. That's where preheating comes in: instead of blindly caching everything, we predict which issues you're likely to open next (based on your current listing view, for example) and pre-fetch their data into the cache. This improves cache hit rates without making extra requests for every single item. The result is that most navigations feel instant because the data is already there, waiting. Background revalidation ensures the data stays fresh without blocking the user.
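As a rough illustration, here is what an IndexedDB-backed store plus preheating can look like. The database and store names, the "first few visible rows" heuristic, and the preheat limit are assumptions for the sketch, not our actual schema or prediction logic; `Issue` and `fetchIssue` are reused from the previous sketch.

```typescript
// Sketch of an IndexedDB-backed issue cache with list-driven preheating.
// Names ("issue-cache", "issues") and the limit of 10 are illustrative.

const DB_NAME = "issue-cache";
const STORE = "issues";

function openDb(): Promise<IDBDatabase> {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open(DB_NAME, 1);
    req.onupgradeneeded = () =>
      req.result.createObjectStore(STORE, { keyPath: "id" });
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

async function putIssue(issue: Issue): Promise<void> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction(STORE, "readwrite");
    tx.objectStore(STORE).put(issue);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

async function getIssue(id: number): Promise<Issue | undefined> {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction(STORE, "readonly").objectStore(STORE).get(id);
    req.onsuccess = () => resolve(req.result as Issue | undefined);
    req.onerror = () => reject(req.error);
  });
}

// Preheating: when a listing view renders, prefetch the issues the user is
// most likely to open next (here, simply the first visible rows) and store
// them locally so the subsequent navigation is a cache hit.
async function preheat(visibleIssueIds: number[], limit = 10): Promise<void> {
  const candidates = visibleIssueIds.slice(0, limit);
  await Promise.allSettled(
    candidates.map(async (id) => {
      if (await getIssue(id)) return; // already cached, skip the request
      await putIssue(await fetchIssue(id));
    })
  );
}
```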
How does the service worker make navigations even faster?
A service worker acts as a programmable network proxy in the browser. In our system, it intercepts navigation requests to issue pages and serves cached responses directly, even on hard navigations (like hitting refresh or typing a URL). This means that even if the user navigates away and comes back, the cached data is still available instantly. The service worker also handles offline scenarios gracefully, showing cached content instead of an error. By cutting out the network round-trip, the service worker dramatically reduces the time to first paint. Combined with the client-side caching and preheating, it ensures that the only delay users experience is the time it takes to render cached data, which is near-instantaneous.
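Here is a rough sketch of the service-worker side, assuming a Cache Storage bucket named `issue-pages` and issue URLs of the form `/owner/repo/issues/<number>`; the routing test and cache naming are illustrative choices, not our production setup.

```typescript
/// <reference lib="webworker" />
// Sketch of a service worker that serves issue-page navigations from cache.
// Assumes this file is compiled as a service worker (webworker lib, no DOM lib).
declare const self: ServiceWorkerGlobalScope;

const PAGE_CACHE = "issue-pages";

self.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);
  const isIssueNavigation =
    event.request.mode === "navigate" &&
    /^\/[^/]+\/[^/]+\/issues\/\d+$/.test(url.pathname);
  if (!isIssueNavigation) return; // everything else hits the network as usual

  event.respondWith(
    (async () => {
      const cache = await caches.open(PAGE_CACHE);
      const cached = await cache.match(event.request);
      if (cached) {
        // Serve the cached page immediately; refresh the copy in the background.
        event.waitUntil(
          fetch(event.request)
            .then((res) => {
              if (res.ok) return cache.put(event.request, res);
            })
            .catch(() => {
              /* offline: keep the cached copy */
            })
        );
        return cached;
      }
      // Cache miss: go to the network and store the response for next time.
      const res = await fetch(event.request);
      if (res.ok) await cache.put(event.request, res.clone());
      return res;
    })()
  );
});
```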

What specific metric did the team target and why?
We targeted perceived latency—the time from a user action (like clicking an issue) until the page appears usable. Traditional metrics like Time to First Byte or server response time don't capture the user's experience of waiting. In developer flows, even a 200ms delay can feel like a context switch. Our goal was to make navigation feel instant: under 100ms from click to visible content. By prioritizing perceived latency, we aligned our optimization with what matters most—keeping developers in flow. The metric guided our architecture choices, from client-side caching to service worker interception, all designed to minimize the time the user spends staring at a blank screen.
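One lightweight way to track this metric in the client is the User Timing API; the mark and measure names and the 100ms budget below are illustrative, not our exact instrumentation.

```typescript
// Sketch of measuring click-to-content latency with performance marks.

function markNavigationStart(issueId: number): void {
  // Called when the user clicks an issue row.
  performance.mark(`issue-nav-start:${issueId}`);
}

function markContentVisible(issueId: number): void {
  // Called once the issue content has rendered and is usable.
  const start = `issue-nav-start:${issueId}`;
  const end = `issue-nav-end:${issueId}`;
  performance.mark(end);
  performance.measure(`issue-nav:${issueId}`, start, end);

  const [entry] = performance
    .getEntriesByName(`issue-nav:${issueId}`, "measure")
    .slice(-1);
  if (entry && entry.duration > 100) {
    // Over budget: report to whatever analytics pipeline is in use.
    console.warn(`Issue ${issueId}: ${entry.duration.toFixed(1)}ms from click to content`);
  }
}
```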
What were the measurable results and tradeoffs?
After rolling out these changes, we saw a significant reduction in perceived latency across common navigation paths. For example, opening an issue from a list went from hundreds of milliseconds to near-instant (under 50ms in many cases). The cache hit rate exceeded 80% for most listing views. However, this approach isn't free. Tradeoffs include increased memory usage for caching, complexity in cache invalidation, and the need to handle stale data gracefully. There's also the initial cold-start cost when there's no cached data. We continue to refine the preheating logic and plan to extend caching to other parts of Issues. The key takeaway: by investing in client-side architecture, we made "fast" the default without a full rewrite—and you can apply the same model to your own apps.
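On the stale-data point, one simple approach (illustrative, not our exact invalidation policy) is to stamp each cached entry with a fetch time and treat anything older than a budget as stale: stale entries still render immediately, which beats a blank page, but they always trigger a background revalidation.

```typescript
// Sketch of a freshness check; the fetchedAt field and the 5-minute
// budget are assumptions for illustration. Issue is from the earlier sketch.

interface CachedIssue extends Issue {
  fetchedAt: number; // ms since epoch, recorded when the entry was written
}

const MAX_AGE_MS = 5 * 60 * 1000;

function isFresh(entry: CachedIssue, now = Date.now()): boolean {
  return now - entry.fetchedAt < MAX_AGE_MS;
}

// Stale or missing entries always revalidate; fresh entries may skip it.
function shouldRevalidate(entry: CachedIssue | undefined): boolean {
  return !entry || !isFresh(entry);
}
```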