Cheatsheet
Overview

High Level Designs

HLD questions: Google Docs, YouTube, Twitter Feed, Netflix, Chat App

8 questions

Must Revise Hard

Design Google Docs (Collaborative Editor)

OT CRDT WebSocket Collaboration Presence
Asked at: Google Notion Figma Dropbox

What is this?

Google Docs lets multiple people edit the same document at the same time — and everyone sees each other's changes in real time. Building this from scratch is one of the hardest frontend system design problems because you must handle conflicts when two users type at the same position simultaneously.

The Core Problem: Concurrent Edits

Imagine Alice types 'Hello' and Bob types 'World' at the same time in the same document. Without conflict resolution, one of their changes gets lost. The two main algorithms that solve this are Operational Transformation (OT) and CRDTs.

ELI5: Operational Transformation

Think of two people editing a shopping list. Alice crosses off item #3. Bob adds a new item at position #3 — pushing the old #3 to #4. If Alice's 'delete #3' arrives after Bob's insert, it should now delete #4. OT transforms Alice's operation based on what Bob did in between.

High-Level Architecture

                            
┌─────────────────────────────────────────────────────────┐
│                  GOOGLE DOCS ARCHITECTURE               │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   Browser (Alice)          Browser (Bob)                │
│   ┌──────────────┐         ┌──────────────┐             │
│   │  ProseMirror │         │  ProseMirror │             │
│   │  Editor      │         │  Editor      │             │
│   │  + OT Client │         │  + OT Client │             │
│   └──────┬───────┘         └──────┬───────┘             │
│          │  WebSocket             │  WebSocket          │
│          ▼                        ▼                     │
│   ┌──────────────────────────────────────┐              │
│   │           WebSocket Gateway          │              │
│   │    (handles all live connections)    │              │
│   └──────────────────┬───────────────────┘              │
│                      │                                  │
│          ┌───────────┴──────────┐                       │
│          ▼                      ▼                       │
│   ┌─────────────┐      ┌──────────────┐                 │
│   │  OT Server  │      │  Presence    │                 │
│   │  (conflict  │      │  Service     │                 │
│   │  resolution)│      │  (cursors)   │                 │
│   └──────┬──────┘      └──────────────┘                 │
│          │                                              │
│          ▼                                              │
│   ┌─────────────┐      ┌──────────────┐                 │
│   │  Document   │      │  Redis Pub/  │                 │
│   │  Store (DB) │      │  Sub         │                 │
│   └─────────────┘      └──────────────┘                 │
│                                                         │
└─────────────────────────────────────────────────────────┘

                          

Step-by-Step: What Happens When You Type

  1. User types a character → editor creates an Operation object (e.g., { type: 'insert', position: 5, text: 'A', revision: 12 })
  2. Operation is optimistically applied locally (instant feedback — no waiting)
  3. Operation is sent to the server via WebSocket
  4. Server receives the op and checks its revision number against the server revision
  5. If revisions differ, the server transforms the incoming op against every op applied since that revision
  6. Server broadcasts the transformed op to all other connected clients
  7. Other clients apply the transformed op to their local documents
  8. Server acknowledges back to the sender (so they can clear their pending queue)

The Hard Part: Revision Numbers

Every operation has a revision number. If your local revision is 5 and server is at revision 8, your op must be transformed against revisions 6, 7, and 8 before it can be applied. This is why OT is stateful and complex to implement correctly.
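A tiny sketch of that catch-up transform, using an insert-only rule (the shapes and names here are illustrative, not a real OT library API):

```typescript
// Illustrative shapes — not a real OT library API
interface InsertOp {
  type: "insert";
  position: number;
  text: string;
  revision: number;
}

// Insert-vs-insert rule: if the other op landed at or before ours,
// our position shifts right by the inserted text's length.
function transform(op: InsertOp, against: InsertOp): InsertOp {
  if (against.position <= op.position) {
    return { ...op, position: op.position + against.text.length };
  }
  return op;
}

// Bring a stale op (client was at revision 5) up to date against the
// server ops for revisions 6, 7 and 8, in order.
function catchUp(op: InsertOp, missed: InsertOp[]): InsertOp {
  return missed.reduce(transform, op);
}

const clientOp: InsertOp = { type: "insert", position: 10, text: "X", revision: 5 };
const missed: InsertOp[] = [
  { type: "insert", position: 0, text: "ab", revision: 6 },  // before us: shift +2
  { type: "insert", position: 50, text: "zz", revision: 7 }, // after us: no shift
  { type: "insert", position: 5, text: "c", revision: 8 },   // before us: shift +1
];
// catchUp(clientOp, missed).position === 13
```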

OT vs CRDT: Which to Use?

  Aspect           | OT (Operational Transformation)          | CRDT
  ─────────────────┼──────────────────────────────────────────┼─────────────────────────────────────
  Complexity       | High — needs central server for ordering | Medium — can work peer-to-peer
  Server required? | Yes — server transforms and broadcasts   | No — peers can sync directly
  Consistency      | Strong with central server               | Eventual consistency
  Used by          | Google Docs, Etherpad                    | Figma, Notion (newer versions)
  Performance      | Can have latency under high concurrency  | Generally faster, no transform cost
  History/undo     | Easy — operation log                     | Harder — tombstones grow forever

Presence: Showing Other Cursors

ELI5: Presence

Presence is the feature where you see Bob's blue cursor blinking on line 3. It's separate from document sync — cursor positions change far more frequently than text content, so they travel over a different, lighter channel.

                            
  Presence Data Flow
  ──────────────────

  Alice moves cursor
        │
        ▼
  Throttle: max 50ms    ← Don't flood the server
        │
        ▼
  WebSocket message:
  { type: "cursor", userId: "alice",
    pos: { line: 3, col: 12 },
    color: "#4285F4" }
        │
        ▼
  Server → Redis Pub/Sub → All other clients
        │
        ▼
  Bob's browser renders Alice's cursor overlay
  on top of the editor canvas
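The throttle step above can be sketched as a small wrapper around the send function (the names and the injectable clock are illustrative, not a real API):

```typescript
interface CursorMsg {
  type: "cursor";
  userId: string;
  pos: { line: number; col: number };
}

// Drop cursor updates that arrive within `intervalMs` of the last send.
// The clock is injectable so the logic can be tested without real timers.
function makeCursorSender(
  send: (msg: CursorMsg) => void,   // e.g. msg => ws.send(JSON.stringify(msg))
  intervalMs = 50,
  now: () => number = Date.now
) {
  let lastSent = -Infinity;
  return (msg: CursorMsg) => {
    const t = now();
    if (t - lastSent >= intervalMs) {
      lastSent = t;
      send(msg);
    }
    // A fuller version would remember the latest dropped position and
    // flush it after the interval, so the final cursor spot is never lost.
  };
}
```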

                          

Client-Side Implementation

// ot-client.ts — Simplified OT client

// Assumed shapes — the original snippet references these without defining them:
interface Operation {
  type: "insert" | "delete";
  position: number;
  text: string;
  revision?: number;
}

interface Editor {
  apply(op: Operation): void;
}
class OTClient {
  private pendingOps: Operation[] = [];  // ops sent but not yet acked
  private revision: number = 0;          // last known server revision

  constructor(
    private docId: string,
    private ws: WebSocket,
    private editor: Editor
  ) {
    this.ws.onmessage = this.handleServerMessage.bind(this);
  }

  // Called when user types
  applyLocalOp(op: Operation): void {
    // 1. Apply immediately to local editor (optimistic update)
    this.editor.apply(op);

    // 2. Tag with current revision so server knows what state we were at
    op.revision = this.revision;

    // 3. Add to pending queue
    this.pendingOps.push(op);

    // 4. Send to server
    this.ws.send(JSON.stringify({ type: "op", op }));
  }

  // Called when server broadcasts an op from another user
  handleServerMessage(event: MessageEvent): void {
    const msg = JSON.parse(event.data);

    if (msg.type === "op") {
      // Transform the incoming op against our pending ops
      // (because pending ops haven't been acked yet — they happened "concurrently").
      // Note: a full implementation also transforms each pending op against the
      // incoming op, so the queue stays valid when those ops are finally acked.
      let transformedOp = msg.op;
      for (const pending of this.pendingOps) {
        transformedOp = transform(transformedOp, pending);
      }

      // Apply the transformed op
      this.editor.apply(transformedOp);
      this.revision = msg.revision;
    }

    if (msg.type === "ack") {
      // Server acknowledged our op — remove from pending queue
      this.pendingOps.shift();
      this.revision = msg.revision;
    }
  }
}

// Simple transform function for insert operations
function transform(op1: Operation, op2: Operation): Operation {
  if (op1.type === "insert" && op2.type === "insert") {
    // If op2 inserted at or before op1's position, shift op1 right.
    // (On an exact tie, real implementations break it deterministically —
    // e.g. by comparing client IDs — so all replicas converge.)
    if (op2.position <= op1.position) {
      return { ...op1, position: op1.position + op2.text.length };
    }
  }
  return op1; // no conflict
}

Handling Offline and Reconnection

                            
  Reconnection Strategy
  ─────────────────────

  Client goes offline
        │
        ▼
  Continue editing locally
  Queue all ops in IndexedDB
        │
        ▼
  Connection restored
        │
        ▼
  Send: { type: "catch-up", fromRevision: 42 }
        │
        ▼
  Server sends all ops since revision 42
        │
        ▼
  Client transforms queued ops against
  the received ops (reconciliation)
        │
        ▼
  Send queued ops to server
  Document is back in sync
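
The reconciliation step can be sketched like this (the message shape and the insert-only rebase rule are assumptions — a full client also handles deletes and transforms in both directions):

```typescript
interface QueuedOp {
  type: "insert";
  position: number;
  text: string;
}

// The catch-up request from the flow above.
function buildCatchUpRequest(lastKnownRevision: number) {
  return { type: "catch-up", fromRevision: lastKnownRevision };
}

// Shift each queued offline op past the ops the server applied meanwhile.
function rebaseQueued(queued: QueuedOp[], serverOps: QueuedOp[]): QueuedOp[] {
  return queued.map((op) =>
    serverOps.reduce(
      (acc, s) =>
        s.position <= acc.position
          ? { ...acc, position: acc.position + s.text.length }
          : acc,
      op
    )
  );
}
```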

                          

Interview Tip

When asked about Google Docs, always mention: (1) OT vs CRDT tradeoff, (2) optimistic local application, (3) WebSocket for real-time, (4) presence as a separate concern throttled to 50ms, (5) offline queue with IndexedDB. These 5 points cover 90% of what interviewers want to hear.

Common Interview Follow-Up Questions
5 questions
Q

How do you handle undo/redo in a collaborative editor?

A

Each user has their own undo stack. When you undo, you don't undo the last operation applied — you undo the last operation YOU made, transformed against what everyone else did since.

Q

What if the WebSocket connection drops mid-edit?

A

Buffer ops locally (IndexedDB), reconnect with exponential backoff, and send a 'catch-up' request with your last revision to get the missed ops.

Q

How do you handle rich text (bold, italic) vs plain text?

A

Rich text uses a different operation schema (Delta format in Quill, or Steps in ProseMirror) but OT principles are the same.

Q

How does the server scale?

A

Each document gets pinned to one server instance (sticky sessions). Redis Pub/Sub broadcasts to multiple server instances if clients of the same doc connect to different servers.

Q

How would you implement comment threads?

A

Comments are metadata on document ranges. They have their own OT — when the range's text moves, the comment anchor must be transformed too.

Must Revise Hard

Design YouTube Frontend

HLS ABR CDN Video Streaming Prefetching
Asked at: Google Netflix Meta ByteDance

What is this?

Design the YouTube frontend: the homepage feed, the video player with adaptive streaming, the recommendation sidebar, and everything that makes billions of people watch videos smoothly on any network speed.

System Overview

                            
┌──────────────────────────────────────────────────────────────┐
│                   YOUTUBE FRONTEND ARCHITECTURE              │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌─────────────────────────────────────────────────────┐     │
│  │                    Browser                          │     │
│  │  ┌──────────────┐   ┌─────────────┐  ┌──────────┐   │     │
│  │  │  Feed Page   │   │  Video Page │  │ Search   │   │     │
│  │  │  (SSR/ISR)   │   │  (CSR)      │  │ (CSR)    │   │     │
│  │  └──────┬───────┘   └──────┬──────┘  └──────────┘   │     │
│  │         │                  │                        │     │
│  │         │           ┌──────▼───────────────────┐    │     │
│  │         │           │      Video Player        │    │     │
│  │         │           │  ┌────────────────────┐  │    │     │
│  │         │           │  │  HLS.js / dash.js  │  │    │     │
│  │         │           │  │  ABR Algorithm     │  │    │     │
│  │         │           │  └────────────────────┘  │    │     │
│  │         │           └──────────────────────────┘    │     │
│  └─────────┼──────────────────┼────────────────────────┘     │
│            │                  │                              │
│            ▼                  ▼                              │
│  ┌──────────────────┐  ┌───────────────────────────────┐     │
│  │   API Gateway    │  │          CDN Edge             │     │
│  │   (Feed, Auth,   │  │  ┌──────────┐ ┌────────────┐  │     │
│  │    Metadata)     │  │  │  Video   │ │ Thumbnail  │  │     │
│  └──────────────────┘  │  │  Chunks  │ │  Images    │  │     │
│                        │  └──────────┘ └────────────┘  │     │
│                        └───────────────────────────────┘     │
└──────────────────────────────────────────────────────────────┘

                          

How Video Streaming Works: HLS and ABR

ELI5: HLS (HTTP Live Streaming)

Imagine a book cut into tiny chapters (2-10 second chunks). A table of contents (manifest file) tells you where each chapter is. Your player downloads chapters ahead of time. If your internet slows down, it switches to a version with smaller pictures (lower quality). That's HLS + Adaptive Bitrate (ABR).

                            
  HLS Manifest Structure (.m3u8)
  ──────────────────────────────

  Master Manifest (index.m3u8)
  │
  ├── 240p playlist  → segment0_240.ts, segment1_240.ts ...
  ├── 480p playlist  → segment0_480.ts, segment1_480.ts ...
  ├── 720p playlist  → segment0_720.ts, segment1_720.ts ...
  └── 1080p playlist → segment0_1080.ts, segment1_1080.ts ...

  ABR Decision Loop (runs every segment):
  ┌───────────────────────────────────────┐
  │  Measure download speed of last chunk │
  │            │                          │
  │            ▼                          │
  │  Buffer level > 15s?                  │
  │   YES → try higher quality            │
  │   NO  → buffer level < 5s?            │
  │         YES → drop to lower quality   │
  │         NO  → stay at current         │
  └───────────────────────────────────────┘
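
  The decision loop above, as a toy function (the 15s/5s thresholds come
  from the diagram; real players such as hls.js also weigh measured
  bandwidth):

```typescript
type AbrDecision = "up" | "down" | "stay";

// Buffer-based heuristic from the diagram: plenty buffered → go up,
// nearly empty → go down, otherwise hold.
function abrDecision(bufferSeconds: number): AbrDecision {
  if (bufferSeconds > 15) return "up";
  if (bufferSeconds < 5) return "down";
  return "stay";
}

// Clamp the quality level between 0 (240p) and maxLevel (1080p).
function nextLevel(current: number, maxLevel: number, bufferSeconds: number): number {
  switch (abrDecision(bufferSeconds)) {
    case "up":
      return Math.min(current + 1, maxLevel);
    case "down":
      return Math.max(current - 1, 0);
    default:
      return current;
  }
}
```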

                          

The Feed: Rendering Strategy

  Page              | Rendering               | Why
  ──────────────────┼─────────────────────────┼─────────────────────────────────────────────
  Homepage Feed     | SSR or ISR (Next.js)    | SEO + fast first paint for logged-out users
  Video Player Page | CSR after initial shell | Dynamic, personalized — SEO not critical
  Search Results    | SSR for first page      | Crawler indexable, subsequent pages CSR
  Watch History     | CSR only                | Private data, no SEO needed

Prefetching Strategy

ELI5: Prefetching

When you hover over a video thumbnail, YouTube secretly starts downloading the first few seconds of that video. By the time you click, playback is near instant. This is prefetching — loading data before the user explicitly requests it.

                            
  Prefetch Strategy
  ─────────────────

  User hovers thumbnail (300ms delay threshold)
           │
           ▼
  Prefetch video metadata + first chunk
  Link rel="prefetch" for manifest URL
           │
           ▼
  User clicks → video starts instantly
  from the prefetched buffer

  Viewport-based prefetch (IntersectionObserver):
  ┌─────────────────────────────┐
  │   Viewport                  │
  │   ┌──────┐ ┌──────┐         │
  │   │ Vid1 │ │ Vid2 │         │
  │   └──────┘ └──────┘         │
  │                             │
  │   ┌──────┐ ┌──────┐         │
  │   │ Vid3 │ │ Vid4 │         │
  │   └──────┘ └──────┘         │
  └─────────────────────────────┘
      ┌──────┐ ┌──────┐      ← just below the viewport:
      │ Vid5 │ │ Vid6 │        prefetched early via the
      └──────┘ └──────┘        observer's rootMargin
                          

Video Player Implementation

// video-player.tsx — Core player setup with HLS
import Hls from 'hls.js';
import { useEffect, useRef } from 'react';

interface VideoPlayerProps {
  manifestUrl: string;      // The .m3u8 URL
  thumbnailUrl: string;
  autoplay?: boolean;
}

export function VideoPlayer({ manifestUrl, thumbnailUrl, autoplay }: VideoPlayerProps) {
  const videoRef = useRef<HTMLVideoElement>(null);
  const hlsRef = useRef<Hls | null>(null);

  useEffect(() => {
    const video = videoRef.current;
    if (!video) return;

    if (Hls.isSupported()) {
      // HLS.js handles ABR automatically
      const hls = new Hls({
        // Start with low quality, let ABR ramp up
        startLevel: -1,          // auto
        // Buffer settings
        maxBufferLength: 30,      // buffer 30s ahead
        maxMaxBufferLength: 60,   // absolute max
        // Enable low latency mode for live streams
        lowLatencyMode: false,
      });

      hls.loadSource(manifestUrl);
      hls.attachMedia(video);

      // Monitor quality switches for analytics
      hls.on(Hls.Events.LEVEL_SWITCHED, (event, data) => {
        console.log('Quality switched to level', data.level);
        // Send to analytics: { event: 'quality_change', level: data.level }
      });

      // Handle errors gracefully
      hls.on(Hls.Events.ERROR, (event, data) => {
        if (data.fatal) {
          switch (data.type) {
            case Hls.ErrorTypes.NETWORK_ERROR:
              // Try to recover from network error
              hls.startLoad();
              break;
            case Hls.ErrorTypes.MEDIA_ERROR:
              hls.recoverMediaError();
              break;
            default:
              // Cannot recover
              hls.destroy();
              break;
          }
        }
      });

      hlsRef.current = hls;
    } else if (video.canPlayType('application/vnd.apple.mpegurl')) {
      // Safari has native HLS support
      video.src = manifestUrl;
    }

    return () => {
      hlsRef.current?.destroy();
    };
  }, [manifestUrl]);

  return (
    <div className="video-container">
      <video
        ref={videoRef}
        poster={thumbnailUrl}   // Show thumbnail until video loads
        controls
        autoPlay={autoplay}
        className="w-full aspect-video bg-black"
      />
    </div>
  );
}

Feed Pagination and Infinite Scroll

// feed-hook.ts — Infinite scroll with IntersectionObserver
import { useEffect, useRef, useCallback, useState } from 'react';

interface FeedItem {
  id: string;
  title: string;
  thumbnailUrl: string;
  channelName: string;
  viewCount: number;
}

interface FeedPage {
  items: FeedItem[];
  nextPageToken: string | null;  // null means no more pages
}

export function useFeed() {
  const [pages, setPages] = useState<FeedPage[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const pageTokenRef = useRef<string | null>('');  // '' = first page
  const sentinelRef = useRef<HTMLDivElement>(null); // bottom of feed

  const loadNextPage = useCallback(async () => {
    // pageTokenRef.current is null when there are no more pages
    if (isLoading || pageTokenRef.current === null) return;

    setIsLoading(true);
    try {
      const params = new URLSearchParams();
      if (pageTokenRef.current) {
        params.set('pageToken', pageTokenRef.current);
      }

      const res = await fetch('/api/feed?' + params.toString());
      const data: FeedPage = await res.json();

      setPages(prev => [...prev, data]);
      pageTokenRef.current = data.nextPageToken;
    } finally {
      setIsLoading(false);
    }
  }, [isLoading]);

  // IntersectionObserver watches the sentinel div at bottom of feed
  useEffect(() => {
    const observer = new IntersectionObserver(
      ([entry]) => {
        if (entry.isIntersecting) loadNextPage();
      },
      { rootMargin: '400px' }  // trigger 400px before bottom — smooth UX
    );

    if (sentinelRef.current) observer.observe(sentinelRef.current);
    return () => observer.disconnect();
  }, [loadNextPage]);

  const allItems = pages.flatMap(page => page.items);

  return { allItems, isLoading, sentinelRef };
}

CDN Strategy for Video

                            
  CDN Delivery Architecture
  ─────────────────────────

  User in Mumbai
       │
       ▼
  DNS resolves to nearest CDN PoP
       │
       ▼
  ┌──────────────────────────┐
  │  CDN Edge (Mumbai)       │
  │  ┌─────────────────────┐ │
  │  │  Cache HIT?         │ │
  │  │  YES → serve chunk  │ │
  │  │  NO  → fetch from   │ │
  │  │        origin       │ │
  │  └─────────────────────┘ │
  └──────────────────────────┘
       │  (on cache miss)
       ▼
  ┌──────────────────────────┐
  │  Origin (Google servers) │
  │  Video storage + encoder │
  └──────────────────────────┘

  Cache TTL Strategy:
  ─────────────────
  Thumbnails:       Cache-Control: max-age=86400   (1 day)
  Video segments:   Cache-Control: max-age=31536000 (1 year, immutable)
  HLS manifests:    Cache-Control: max-age=5        (5 seconds, for live)
  Feed API:         Cache-Control: s-maxage=10      (10s on CDN, fresh for user)

                          

Interview Tip

For YouTube design, the interviewer wants to hear: (1) HLS + ABR for adaptive streaming, (2) CDN for video chunks with long TTLs, (3) SSR for feed SEO + CSR for player, (4) IntersectionObserver for infinite scroll, (5) prefetching on hover/viewport proximity, (6) skeleton screens for loading states.

Common Interview Follow-Up Questions
5 questions
Q

How do you measure video quality from the frontend?

A

Track buffer events, bitrate switches, startup time, and rebuffering ratio. Send these as analytics events to measure QoE (Quality of Experience).

Q

How does YouTube handle the 'first frame' instant load?

A

They use a poster image (thumbnail) until the first video frame loads. They also prefetch the first segment on hover.

Q

What is the difference between HLS and DASH?

A

HLS uses .m3u8 manifests and .ts segments (Apple's standard). DASH uses .mpd manifests and .mp4 segments (open standard). YouTube uses DASH; Apple uses HLS. Most players support both.

Q

How do you handle autoplay without sound?

A

Browsers block autoplay with audio. YouTube autoplays muted (video.muted = true), then shows a 'click to unmute' UI.

Q

How does YouTube's recommendation algorithm affect frontend design?

A

The feed is personalized server-side. The frontend just renders a generic feed component. A/B tests run by changing the API response, not the UI.

Must Revise Hard

Design Twitter/X Feed

Cursor Pagination SSE WebSocket Virtualization Real-time
Asked at: Twitter Meta LinkedIn Reddit

What is this?

Design the Twitter feed: a real-time, infinite-scrolling list of tweets that updates as new tweets arrive, handles millions of posts efficiently, and stays fast even with thousands of items in the DOM.

Architecture Overview

                            
┌──────────────────────────────────────────────────────────────┐
│                    TWITTER FEED ARCHITECTURE                 │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│   Browser                                                    │
│   ┌──────────────────────────────────────────────────┐       │
│   │  Virtual List (react-window)                     │       │
│   │  Renders only ~20 visible tweets at a time       │       │
│   │  ┌────────────┐                                  │       │
│   │  │  Tweet #1  │  ← In DOM                        │       │
│   │  │  Tweet #2  │  ← In DOM                        │       │
│   │  │  Tweet #3  │  ← In DOM                        │       │
│   │  └────────────┘                                  │       │
│   │  [Tweet #4-100 exist in memory, not in DOM]      │       │
│   │                                                  │       │
│   │  "New tweets (12)" banner at top                 │       │
│   └────────────────────┬─────────────────────────────┘       │
│                        │                                     │
│          ┌─────────────┴──────────────┐                      │
│          │                            │                      │
│          ▼                            ▼                      │
│   ┌─────────────┐           ┌──────────────────┐             │
│   │  REST API   │           │  SSE / WebSocket │             │
│   │  (initial   │           │  (new tweet      │             │
│   │   load +    │           │   notifications) │             │
│   │  pagination)│           └──────────────────┘             │
│   └─────────────┘                                            │
│                                                              │
└──────────────────────────────────────────────────────────────┘

                          

Cursor-Based Pagination

ELI5: Cursor vs Offset Pagination

Offset pagination is like saying 'give me tweets 100-120'. But if someone posts a new tweet, everything shifts — tweet 101 becomes 100. You'd skip or duplicate tweets. Cursor pagination says 'give me tweets before tweet ID abc123'. The cursor is a stable anchor point that doesn't shift when new tweets arrive.

                            
  Cursor Pagination Flow
  ──────────────────────

  Initial load:
  GET /api/feed
  Response: { tweets: [...20 tweets], nextCursor: "tweet_id_xyz" }

  Load more (scroll to bottom):
  GET /api/feed?cursor=tweet_id_xyz
  Response: { tweets: [...20 older tweets], nextCursor: "tweet_id_abc" }

  Check for new tweets (polling or SSE):
  GET /api/feed?since=tweet_id_first
  Response: { tweets: [...new tweets since first tweet], count: 5 }

  Timeline visualization:
  ─────────────────────────────────────────────
  [New]  Tweet Z (just posted)
  [New]  Tweet Y (just posted)
  ── "Show 2 new tweets" banner ──────────────
         Tweet A (first loaded)  ← sinceId anchor
         Tweet B
         Tweet C
         ...
         Tweet T                 ← cursor anchor
  ── [Load more...] ─────────────────────────
         Tweet U (older)

                          

Real-Time Updates: SSE vs WebSocket

  Feature              | SSE (EventSource)                 | WebSocket
  ─────────────────────┼───────────────────────────────────┼─────────────────────────
  Direction            | Server → Client only              | Bidirectional
  Protocol             | HTTP/1.1 or HTTP/2                | Separate WS protocol
  Auto-reconnect       | Built-in                          | Must implement manually
  Proxy friendly       | Yes (standard HTTP)               | Sometimes blocked
  Use case for Twitter | Receiving new tweet notifications | DMs (bidirectional chat)
  Complexity           | Low                               | Medium
  Browser support      | All modern browsers               | All modern browsers

Twitter's Real Choice

Twitter uses SSE for the feed (new tweet notifications) because it is one-directional: server pushes notifications to client. WebSocket is used for DMs where both sides send messages. Don't over-engineer — use the right tool for the job.

Virtual List: The Key to Performance

ELI5: Why Virtual Lists?

If you've scrolled 500 tweets, all 500 are in the DOM. Each tweet has images, buttons, spans — maybe 50 DOM nodes each. That's 25,000 DOM nodes! The browser has to lay them all out even if you can't see them. A virtual list keeps only ~20 tweets in the DOM at a time, recycling nodes as you scroll.

// virtual-feed.tsx — Virtual list for Twitter feed
import { VariableSizeList } from 'react-window';
import { useRef, useCallback, useState } from 'react';
import AutoSizer from 'react-virtualized-auto-sizer';

interface Tweet {
  id: string;
  text: string;
  author: string;
  imageUrl?: string;    // tweets with images are taller
  likeCount: number;
  timestamp: string;
}

// Tweets have variable heights — text-only = 120px, with image = 400px
function getItemSize(tweet: Tweet): number {
  if (tweet.imageUrl) return 400;
  if (tweet.text.length > 200) return 180;
  return 120;
}

export function VirtualFeed({ tweets }: { tweets: Tweet[] }) {
  const listRef = useRef<VariableSizeList>(null);
  const [newTweetCount, setNewTweetCount] = useState(0);

  // When new tweets arrive via SSE, show banner instead of auto-scrolling
  // (auto-scroll is jarring UX — Twitter's approach)
  const handleNewTweets = useCallback((count: number) => {
    setNewTweetCount(prev => prev + count);
  }, []);

  const scrollToTop = () => {
    listRef.current?.scrollToItem(0, 'start');
    setNewTweetCount(0);
  };

  const Row = ({ index, style }: { index: number; style: React.CSSProperties }) => (
    // style must be applied — it contains the absolute position from react-window
    <div style={style}>
      <TweetCard tweet={tweets[index]} />
    </div>
  );

  return (
    <div className="relative h-screen">
      {/* New tweets banner */}
      {newTweetCount > 0 && (
        <button
          onClick={scrollToTop}
          className="fixed top-16 left-1/2 -translate-x-1/2 z-50
                     bg-blue-500 text-white px-4 py-2 rounded-full"
        >
          Show {newTweetCount} new tweets
        </button>
      )}

      {/* Auto-sizing virtual list — fills available height */}
      <AutoSizer>
        {({ height, width }) => (
          <VariableSizeList
            ref={listRef}
            height={height}
            width={width}
            itemCount={tweets.length}
            itemSize={(index) => getItemSize(tweets[index])}
            overscanCount={3}  // render 3 extra items outside viewport
          >
            {Row}
          </VariableSizeList>
        )}
      </AutoSizer>
    </div>
  );
}

SSE Implementation for New Tweets

// use-feed-sse.ts — Real-time feed updates via SSE
import { useEffect, useRef, useCallback } from 'react';

interface NewTweetEvent {
  id: string;
  authorId: string;
  preview: string;
}

export function useFeedSSE(
  onNewTweets: (tweets: NewTweetEvent[]) => void
) {
  const esRef = useRef<EventSource | null>(null);
  const reconnectTimeoutRef = useRef<number>(0);

  const connect = useCallback(() => {
    // EventSource is the browser's built-in SSE client
    const es = new EventSource('/api/feed/stream', {
      withCredentials: true,  // send auth cookies
    });

    es.addEventListener('new_tweets', (event) => {
      // Custom-named SSE events arrive typed as Event — cast to read .data
      const tweets: NewTweetEvent[] = JSON.parse((event as MessageEvent).data);
      onNewTweets(tweets);
    });

    es.addEventListener('ping', () => {
      // Server sends periodic pings to keep connection alive
      // and detect dead connections
    });

    es.onerror = () => {
      es.close();
      esRef.current = null;
      // Exponential backoff reconnection
      reconnectTimeoutRef.current = Math.min(
        reconnectTimeoutRef.current * 2 || 1000,
        30000  // max 30s
      );
      setTimeout(connect, reconnectTimeoutRef.current);
    };

    es.onopen = () => {
      // Reset backoff on successful connection
      reconnectTimeoutRef.current = 0;
    };

    esRef.current = es;
  }, [onNewTweets]);

  useEffect(() => {
    connect();
    return () => {
      esRef.current?.close();
    };
  }, [connect]);
}

Interview Tip

The three pillars of Twitter feed design: (1) Cursor pagination — stable, no duplicates when new tweets arrive. (2) SSE for new tweet notifications — simpler than WebSocket for one-directional push. (3) Virtual list — O(1) DOM nodes regardless of feed size. Mention all three with the reasoning behind each choice.

Common Interview Follow-Up Questions
5 questions
Q

How do you handle tweet updates (likes, retweets changing) in real time?

A

Use a separate SSE channel or WebSocket for mutation events. Update the tweet in your local state by ID without re-fetching.

Q

How do you prevent the feed from jumping when images load?

A

Reserve height for images before they load (aspect ratio boxes). Use the tweet's known image dimensions from the API response.

Q

How does Twitter handle the 'while you were away' section?

A

It's a separate API call that returns important tweets from the time window when you were offline, rendered as a distinct section at the top.

Q

What about tweet threads?

A

A thread is a list of tweet IDs. When you expand a thread, you fetch the full thread by its root ID. Threads can be virtualized just like the main feed.

Q

How do you handle very fast scrolling (scroll momentum)?

A

react-window handles this with overscanCount. You can also add a placeholder skeleton while items outside the overscan window render.

Medium

Design Flipkart Product Listing Page

URL-driven state Faceted Search SEO Cart Filters
Asked at: Flipkart Amazon Meesho Myntra

What is this?

Design the Product Listing Page (PLP) of an e-commerce site like Flipkart or Amazon. Users can filter by brand, price, rating, color, size — and results update instantly. The URL must reflect all active filters so users can share/bookmark filtered views, and Google must be able to index the pages.

Architecture Overview

                            
┌──────────────────────────────────────────────────────────────┐
│                   E-COMMERCE PLP ARCHITECTURE                │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  URL: /laptops?brand=dell,hp&price=20000-60000&sort=rating   │
│                      │                                       │
│                      ▼                                       │
│  ┌───────────────────────────────────────────────────┐       │
│  │            PLP Page (Next.js SSR)                 │       │
│  │  ┌────────────────┐   ┌──────────────────────┐    │       │
│  │  │ Filter Sidebar │   │ Product Grid         │    │       │
│  │  │ ────────────── │   │ ──────────────────── │    │       │
│  │  │ Brand: [x] Dell│   │ ┌─────┐ ┌─────┐      │    │       │
│  │  │        [x] HP  │   │ │Prod1│ │Prod2│      │    │       │
│  │  │ Price: ─────── │   │ └─────┘ └─────┘      │    │       │
│  │  │ Rating: ⭐4+   │   │ ┌─────┐ ┌─────┐      │    │       │
│  │  │ Color: ...     │   │ │Prod3│ │Prod4│      │    │       │
│  │  └────────────────┘   │ └─────┘ └─────┘      │    │       │
│  │                       └──────────────────────┘    │       │
│  └───────────────────────────────────────────────────┘       │
│                      │                                       │
│                      ▼                                       │
│  ┌───────────────────────────────────────────────────┐       │
│  │ Search/Filter API (Elasticsearch or Solr)         │       │
│  │ Supports: full-text + faceted filters + sorting   │       │
│  └───────────────────────────────────────────────────┘       │
└──────────────────────────────────────────────────────────────┘

                          

URL-Driven Filter State

ELI5: Why URL-Driven State?

If your filter state lives only in React useState, refreshing the page loses all filters. If you want to share 'Red Nike shoes under ₹3000 sorted by rating' with a friend, you need those filters in the URL. The URL is the universal state store for shareable, bookmarkable, indexable pages.

// use-filters.ts — URL-driven filter state
import { useRouter, useSearchParams } from 'next/navigation';
import { useCallback, useMemo } from 'react';

interface Filters {
  brands: string[];
  priceMin: number;
  priceMax: number;
  minRating: number;
  colors: string[];
  sort: 'price_asc' | 'price_desc' | 'rating' | 'newest';
}

export function useFilters() {
  const router = useRouter();
  const searchParams = useSearchParams();

  // Parse filters FROM URL
  const filters = useMemo((): Filters => ({
    brands: searchParams.get('brand')?.split(',').filter(Boolean) ?? [],
    priceMin: Number(searchParams.get('priceMin')) || 0,
    priceMax: Number(searchParams.get('priceMax')) || Infinity,
    minRating: Number(searchParams.get('rating')) || 0,
    colors: searchParams.get('color')?.split(',').filter(Boolean) ?? [],
    sort: (searchParams.get('sort') as Filters['sort']) ?? 'rating',
  }), [searchParams]);

  // Update filters → pushes to URL (triggers re-render + SSR re-fetch)
  const updateFilter = useCallback(<K extends keyof Filters>(
    key: K,
    value: Filters[K]
  ) => {
    const params = new URLSearchParams(searchParams.toString());

    if (key === 'brands' || key === 'colors') {
      const arr = value as string[];
      if (arr.length === 0) params.delete(key === 'brands' ? 'brand' : 'color');
      else params.set(key === 'brands' ? 'brand' : 'color', arr.join(','));
    } else if (key === 'priceMin') {
      params.set('priceMin', String(value));
    } else if (key === 'priceMax') {
      params.set('priceMax', String(value));
    } else if (key === 'minRating') {
      params.set('rating', String(value));
    } else if (key === 'sort') {
      params.set('sort', value as string);
    }

    // Reset to page 1 when filters change
    params.delete('page');

    // Use router.push for browser history (back button works!)
    router.push('?' + params.toString());
  }, [router, searchParams]);

  const clearAllFilters = useCallback(() => {
    router.push('?');  // Clear all params
  }, [router]);

  return { filters, updateFilter, clearAllFilters };
}

Faceted Search: What the API Returns

ELI5: Faceted Search

Facets are the filter options in the sidebar. But crucially, the COUNT next to each option (e.g., 'Dell (234)') must reflect how many results you'd get if you added that filter. This count is computed server-side by Elasticsearch — the frontend just renders the numbers. This is called faceted search.

// types.ts — API response structure
interface PLPResponse {
  products: Product[];
  totalCount: number;
  facets: Facets;         // filter options with counts
  appliedFilters: Filter[]; // for the breadcrumb strip
}

interface Facets {
  brands: FacetOption[];
  colors: FacetOption[];
  ratings: FacetOption[];
  priceRange: { min: number; max: number; histogram: number[] };
}

interface FacetOption {
  value: string;          // e.g., "Dell"
  count: number;          // e.g., 234 — how many products match
  isSelected: boolean;    // true if currently filtered
}

// server/api/products.ts — SSR data fetching
export async function getProducts(searchParams: URLSearchParams) {
  const query = {
    category: searchParams.get('category'),
    brands: searchParams.get('brand')?.split(','),
    priceMin: Number(searchParams.get('priceMin')) || undefined,
    priceMax: Number(searchParams.get('priceMax')) || undefined,
    sort: searchParams.get('sort') || 'rating',
    page: Number(searchParams.get('page')) || 1,
  };

  // Elasticsearch query with aggregations for facets
  const esQuery = {
    query: buildESQuery(query),
    aggs: {
      brands: { terms: { field: 'brand.keyword', size: 20 } },
      colors: { terms: { field: 'color.keyword', size: 20 } },
      price_range: { stats: { field: 'price' } },
    },
    from: (query.page - 1) * 20,
    size: 20,
  };

  const result = await elasticsearch.search(esQuery);
  return transformESResponse(result);
}
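The `transformESResponse` helper above is not shown; here is a minimal sketch of its facet-mapping part, assuming Elasticsearch's documented terms-aggregation bucket shape (`key`, `doc_count`). The function name is illustrative.

```typescript
// Map Elasticsearch terms-aggregation buckets to the FacetOption shape
// used by the sidebar. `selected` holds the currently applied values.
interface FacetOption {
  value: string;
  count: number;
  isSelected: boolean;
}

function bucketsToFacets(
  buckets: { key: string; doc_count: number }[],
  selected: string[],
): FacetOption[] {
  return buckets.map(b => ({
    value: b.key,
    count: b.doc_count,          // e.g. "Dell (234)" in the sidebar
    isSelected: selected.includes(b.key),
  }));
}
```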

SEO Considerations

Filter Type                          | SEO Strategy                 | Reason
Category page (/laptops)             | SSR with canonical URL       | High-value page, must be indexed
Single filter (/laptops?brand=dell)  | SSR with canonical           | 'Dell laptops' is a valid SEO page
Multiple filters (brand+color+price) | noindex or canonical to base | Too many combinations — thin content
Sort order (?sort=price)             | canonical to unsorted page   | Same content, different order
Pagination (?page=2)                 | rel=next/prev or canonical   | Avoid duplicate content
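The strategy table above can be expressed as a small helper that picks index/canonical behavior from the active filter params. This is a hedged sketch — the param list and the single-brand heuristic are illustrative assumptions, not Flipkart's actual rules:

```typescript
// Decide the SEO strategy for a PLP URL from its filter params.
type SeoStrategy = { index: boolean; canonical: string };

// Params that change the result set (sort/page never change content identity)
const FILTER_PARAMS = ['brand', 'color', 'priceMin', 'priceMax', 'rating'];

function seoStrategy(path: string, params: URLSearchParams): SeoStrategy {
  const active = FILTER_PARAMS.filter(p => params.has(p));

  if (active.length === 0) {
    // Category page (sort/pagination only): index, canonical to base
    return { index: true, canonical: path };
  }
  if (active.length === 1 && params.has('brand') && !params.get('brand')!.includes(',')) {
    // Single-brand page like /laptops?brand=dell is a valid landing page
    return { index: true, canonical: `${path}?brand=${params.get('brand')!}` };
  }
  // Multi-filter combinations: thin content → noindex, canonical to base
  return { index: false, canonical: path };
}
```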

Cart Implementation

                            
  Cart State Architecture
  ───────────────────────

  Add to Cart clicked
          │
          ▼
  ┌───────────────────────────────────┐
  │  Optimistic Update                │
  │  cartStore.addItem(product)       │
  │  Show "Added to Cart" toast       │
  └───────────────────────────────────┘
                  │ (async)
                  ▼
  ┌───────────────────────────────────┐
  │  POST /api/cart                   │
  │  { productId, quantity: 1 }       │
  └───────────────────────────────────┘
                  │
          ┌───────┴───────┐
          ▼               ▼
      Success          Error (network/OOS)
          │               │
    Confirm local    Rollback optimistic
    state (no-op     update, show error
    if already       toast, undo button
    matches)

  Cart Persistence:
  ─────────────────
  Logged-in user:  Cart stored in backend DB
  Guest user:      Cart stored in localStorage
  On login:        Merge guest cart with backend cart
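The optimistic-update-with-rollback flow above can be sketched framework-free. `postCart` and `onError` are stand-ins for the real POST /api/cart call and the toast system:

```typescript
interface CartItem { productId: string; quantity: number; }

class CartStore {
  items: CartItem[] = [];

  async addItem(
    item: CartItem,
    postCart: (item: CartItem) => Promise<void>,
    onError: (msg: string) => void,
  ): Promise<void> {
    // 1. Optimistic update — the UI shows the item immediately
    this.items.push(item);
    try {
      // 2. Persist in the background
      await postCart(item);
      // 3. Success: local state already matches the server — no-op
    } catch {
      // 4. Rollback the optimistic update and surface an error toast
      this.items = this.items.filter(i => i.productId !== item.productId);
      onError('Could not add to cart');
    }
  }
}
```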

                          

Interview Tip

The key insight for e-commerce PLP: all filter state lives in the URL. This gives you: shareable URLs, SEO indexability, back-button support, and server-side rendering of filtered results for free. Mention URL-driven state + faceted search from Elasticsearch + optimistic cart updates with rollback.

Common Interview Follow-Up Questions
5 questions
Q

How do you handle price range sliders with URL state?

A

Debounce the slider (wait 300ms after user stops dragging) before updating the URL to avoid spamming navigation history.
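A minimal debounce sketch for this. Using `router.replace` instead of `push` for intermediate drag positions is a common extra refinement (keeps them out of back-button history), noted here as an option:

```typescript
// Generic trailing-edge debounce: only the last call within `ms` fires.
function debounce<A extends unknown[]>(fn: (...args: A) => void, ms: number) {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return (...args: A) => {
    if (timer !== null) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Usage (illustrative):
// const pushPriceToUrl = debounce(
//   (min: number, max: number) => router.replace(`?priceMin=${min}&priceMax=${max}`),
//   300,
// );
```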

Q

How does the filter sidebar show accurate counts without re-fetching?

A

The facet counts come from the initial API response. When you select a filter, you re-fetch with the new filter — the API returns updated counts reflecting the new constraint.

Q

How do you handle out-of-stock products?

A

Show them at the bottom of the list with an 'Out of Stock' badge. The API can return an 'inStock' field. The add-to-cart button is disabled with a 'Notify Me' option instead.

Q

How does image lazy loading work in the product grid?

A

Use loading='lazy' on img tags. For the first ~6 products (above the fold), use loading='eager' and fetchpriority='high' to avoid an LCP penalty.

Q

How do you implement 'Compare' feature?

A

Keep a local compare state (max 3-4 products). When the user clicks Compare, add the product to a compare store; a sticky compare bar appears at the bottom. Navigating to /compare?ids=a,b,c renders the comparison page.

Must Revise Hard

Design Google Sheets

Canvas Rendering Formula Engine Collaboration Virtualization
Asked at: Google Microsoft Notion Airtable

What is this?

Design Google Sheets: a spreadsheet with potentially millions of cells, real-time collaboration, formula evaluation (=SUM(A1:A100)), and performance that can handle large datasets without freezing the browser.

Why Canvas Instead of DOM?

ELI5: The DOM vs Canvas Problem

A 1000-row by 50-column spreadsheet has 50,000 cells. If each cell is a <td> with a <span> inside, that's 100,000 DOM nodes. Browsers start struggling around 5,000-10,000 nodes. The solution: draw the entire grid on an HTML5 Canvas element as pixels. You only track what's visible and handle clicks by calculating which cell was hit using math.

                            
  DOM vs Canvas Rendering
  ───────────────────────

  DOM approach (BAD for large grids):
  ┌──────────────────────────────────────┐
  │ <table>                              │
  │   <tr> × 1000 rows                   │
  │     <td> × 50 cols = 50,000 nodes    │
  │   </tr>                              │
  │ </table>                             │
  │ Browser layout: SLOW (100ms+ paint)  │
  └──────────────────────────────────────┘

  Canvas approach (Google Sheets):
  ┌──────────────────────────────────────┐
  │ <canvas width="1200" height="800">   │
  │  drawGrid()  → 2ms to paint          │
  │  Only visible cells are drawn        │
  │  Click → hitTest(x,y) → cell coords  │
  │  Edit → overlay <input> on cell      │
  └──────────────────────────────────────┘

  Viewport virtualization:
  ─────────────────────────────────────────
  Columns A-Z visible, but AA-ZZZ exist
  Rows 1-40 visible, but 41-1,000,000 exist
  ─────────────────────────────────────────
  Only visible range is rendered on canvas
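The visible-range computation implied above can be sketched as pure math, assuming fixed row/column sizes (a real sheet also handles resized rows and columns):

```typescript
const ROW_HEIGHT = 25;  // pixels per row
const COL_WIDTH = 100;  // pixels per column

// Derive which cells to draw from the scroll offsets.
function visibleRange(
  scrollX: number, scrollY: number,
  viewportWidth: number, viewportHeight: number,
  overscan = 2, // draw a couple of extra rows/cols to hide scroll pop-in
) {
  const startRow = Math.max(0, Math.floor(scrollY / ROW_HEIGHT) - overscan);
  const startCol = Math.max(0, Math.floor(scrollX / COL_WIDTH) - overscan);
  const endRow = Math.ceil((scrollY + viewportHeight) / ROW_HEIGHT) + overscan;
  const endCol = Math.ceil((scrollX + viewportWidth) / COL_WIDTH) + overscan;
  return { startRow, startCol, endRow, endCol };
}
```

On scroll, recompute the range and repaint the canvas — rows 41 to 1,000,000 cost nothing until they enter this window.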

                          

Architecture Overview

                            
┌──────────────────────────────────────────────────────────────┐
│                  GOOGLE SHEETS ARCHITECTURE                  │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  ┌─────────────────────────────────────────────────────┐     │
│  │                     Browser                         │     │
│  │                                                     │     │
│  │  ┌──────────────────────────────────────────────┐   │     │
│  │  │            Canvas Renderer                   │   │     │
│  │  │  drawCells(), drawGrid(), drawSelection()    │   │     │
│  │  └───────────────────────┬──────────────────────┘   │     │
│  │                          │ reads from               │     │
│  │  ┌───────────────────────▼──────────────────────┐   │     │
│  │  │            Cell Store (in-memory)            │   │     │
│  │  │  Map<CellRef, CellData>                      │   │     │
│  │  │  { raw: "=SUM(A1:A3)", computed: "6" }       │   │     │
│  │  └───────────────────────┬──────────────────────┘   │     │
│  │                          │ formula evaluation       │     │
│  │  ┌───────────────────────▼──────────────────────┐   │     │
│  │  │            Formula Engine                    │   │     │
│  │  │  Parser → AST → Evaluator → Result           │   │     │
│  │  │  Dependency graph for recalculation          │   │     │
│  │  └───────────────────────┬──────────────────────┘   │     │
│  │                          │                          │     │
│  └──────────────────────────┼──────────────────────────┘     │
│                             │ WebSocket (OT/CRDT)            │
│  ┌──────────────────────────▼───────────────────────────┐    │
│  │              Collaboration Server                    │    │
│  └──────────────────────────────────────────────────────┘    │
└──────────────────────────────────────────────────────────────┘

                          

Canvas Rendering Implementation

// canvas-renderer.ts — Core canvas drawing
const COL_WIDTH = 100;   // pixels per column
const ROW_HEIGHT = 25;   // pixels per row
const HEADER_WIDTH = 50; // row number column
const HEADER_HEIGHT = 25; // column letter row

interface Viewport {
  startRow: number;
  startCol: number;
  endRow: number;
  endCol: number;
  scrollX: number;
  scrollY: number;
}

class SheetRenderer {
  private ctx: CanvasRenderingContext2D;
  private viewport: Viewport;
  private cellStore: CellStore;

  constructor(canvas: HTMLCanvasElement, cellStore: CellStore) {
    this.ctx = canvas.getContext('2d')!;
    this.cellStore = cellStore;
    this.viewport = { startRow: 0, startCol: 0, endRow: 40, endCol: 26, scrollX: 0, scrollY: 0 };
  }

  render(): void {
    const ctx = this.ctx;
    // Clear the entire canvas
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);

    this.drawBackground();
    this.drawCells();      // cell contents and backgrounds
    this.drawGridLines();  // the grid overlay
    this.drawHeaders();    // row numbers + column letters
    this.drawSelection();  // blue selection border
  }

  private drawCells(): void {
    const { startRow, endRow, startCol, endCol } = this.viewport;

    for (let row = startRow; row <= endRow; row++) {
      for (let col = startCol; col <= endCol; col++) {
        const cell = this.cellStore.getCell(row, col);
        if (!cell) continue;

        const x = HEADER_WIDTH + (col - startCol) * COL_WIDTH;
        const y = HEADER_HEIGHT + (row - startRow) * ROW_HEIGHT;

        // Draw cell background
        this.ctx.fillStyle = cell.bgColor ?? '#ffffff';
        this.ctx.fillRect(x, y, COL_WIDTH, ROW_HEIGHT);

        // Draw cell text (computed value, not formula)
        this.ctx.fillStyle = cell.textColor ?? '#000000';
        this.ctx.font = `${cell.bold ? 'bold ' : ''}13px Arial`;
        this.ctx.textAlign = cell.align ?? 'left';

        // Clip text to cell boundaries
        this.ctx.save();
        this.ctx.rect(x + 2, y, COL_WIDTH - 4, ROW_HEIGHT);
        this.ctx.clip();
        this.ctx.fillText(cell.computed ?? '', x + 4, y + 17);
        this.ctx.restore();
      }
    }
  }

  // Hit test: given mouse x,y — which cell is it?
  hitTest(mouseX: number, mouseY: number): { row: number; col: number } | null {
    const col = Math.floor((mouseX - HEADER_WIDTH) / COL_WIDTH) + this.viewport.startCol;
    const row = Math.floor((mouseY - HEADER_HEIGHT) / ROW_HEIGHT) + this.viewport.startRow;
    if (col < 0 || row < 0) return null;
    return { row, col };
  }

  // When user starts typing in a cell — show real <input>
  showCellEditor(row: number, col: number): void {
    const x = HEADER_WIDTH + (col - this.viewport.startCol) * COL_WIDTH;
    const y = HEADER_HEIGHT + (row - this.viewport.startRow) * ROW_HEIGHT;

    const input = document.createElement('input');
    input.style.position = 'absolute';
    input.style.left = x + 'px';
    input.style.top = y + 'px';
    input.style.width = COL_WIDTH + 'px';
    input.style.height = ROW_HEIGHT + 'px';
    input.style.border = '2px solid #1a73e8';
    input.value = this.cellStore.getCell(row, col)?.raw ?? '';

    // On Enter or blur — commit the edit
    const commit = () => {
      this.cellStore.setCell(row, col, input.value);
      input.remove();
      this.render(); // re-draw canvas
    };
    input.addEventListener('keydown', (e) => {
      if (e.key === 'Enter') commit();
      if (e.key === 'Escape') input.remove();
    });
    input.addEventListener('blur', commit);

    document.querySelector('.sheet-container')!.appendChild(input);
    input.focus();
  }
}

Formula Engine

ELI5: How Formulas Work

When you type =SUM(A1:A3), the formula engine: (1) Parses the text into a tree (AST), (2) Looks up values of A1, A2, A3 from the cell store, (3) Evaluates the tree to get a number, (4) Stores the result as the cell's 'computed' value. When A1 changes, all cells that reference A1 are recalculated — this is tracked by a dependency graph.

                            
  Formula Evaluation Pipeline
  ─────────────────────────────────────────

  Input: "=SUM(A1:A3)+5"
         │
         ▼
  Tokenizer:
  ["=", "SUM", "(", "A1:A3", ")", "+", "5"]
         │
         ▼
  Parser → AST:
  BinaryOp(+)
  ├── FunctionCall(SUM)
  │   └── Range(A1:A3)
  └── Literal(5)
         │
         ▼
  Evaluator:
  Range(A1:A3) → [2, 3, 4]
  SUM([2,3,4]) → 9
  9 + 5 → 14
         │
         ▼
  Cell.computed = "14"
  Cell re-rendered on canvas

  Dependency Graph (for recalculation):
  ─────────────────────────────────────────
  A1 ──► B1 (=A1*2)
  A2 ──► B1
  A3 ──► B1
  B1 ──► C1 (=B1+D1)
  D1 ──► C1

  When A1 changes → recalculate B1 → recalculate C1
  (topological sort of affected cells)
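The recalculation step above can be sketched as a DFS that yields the affected cells in topological order. Here `deps` maps a cell to its dependents (the reverse edges of the formulas), matching the diagram:

```typescript
// When `changed` is edited, return its transitive dependents in the
// order they must be recomputed (assumes the graph is already acyclic).
function recalcOrder(deps: Map<string, string[]>, changed: string): string[] {
  const order: string[] = [];
  const visited = new Set<string>();

  // Post-order DFS, then reverse → topological order
  function visit(cell: string) {
    if (visited.has(cell)) return;
    visited.add(cell);
    for (const dependent of deps.get(cell) ?? []) visit(dependent);
    order.push(cell);
  }
  visit(changed);
  return order.reverse().slice(1); // drop the changed cell itself
}
```

For the graph in the diagram, editing A1 yields [B1, C1]: B1 is recomputed before C1 because C1 reads B1.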

                          

Collaboration in Sheets

Google Sheets uses OT (same as Google Docs) but the operations are different. Instead of text insertions/deletions, operations are cell updates: { type: 'setCellValue', ref: 'B3', value: '=SUM(A1:A3)' }. Conflicts are rarer since cell-level granularity means two users editing different cells never conflict.
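A minimal sketch of such a cell operation with a last-writer-wins apply, assuming the server stamps each op with a monotonically increasing revision — a deliberate simplification of real OT, kept here to show why cell-level granularity makes conflicts easy:

```typescript
interface CellOp {
  type: 'setCellValue';
  ref: string;       // e.g. "B3"
  value: string;     // raw input, e.g. "=SUM(A1:A3)"
  revision: number;  // server-assigned order
}

type CellState = Map<string, { raw: string; revision: number }>;

function applyOps(cells: CellState, ops: CellOp[]): CellState {
  for (const op of ops) {
    const current = cells.get(op.ref);
    // Different cells never conflict; for the same cell, the op with
    // the higher server revision wins, even if ops arrive out of order.
    if (!current || op.revision > current.revision) {
      cells.set(op.ref, { raw: op.value, revision: op.revision });
    }
  }
  return cells;
}
```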

Interview Tip

Three unique aspects of Sheets vs Docs: (1) Canvas rendering instead of DOM — explain why (50,000 cell DOM is too slow). (2) Formula engine with dependency graph for recalculation propagation. (3) Cell-level OT operations are simpler than text OT because each cell is independent.

Common Interview Follow-Up Questions
5 questions
Q

How do you handle merged cells?

A

Merged cells are stored as a special cell with a span attribute. The canvas renderer skips drawing the covered cells and draws the merged cell spanning the combined area.

Q

How do you implement copy/paste across sheets?

A

Clipboard stores cell data as JSON (cell values, formats, formulas). Paste adjusts formula references relatively (A1 becomes A3 if pasted 2 rows down) — this is called relative reference adjustment.
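A hedged sketch of that relative reference adjustment; absolute refs ($A$1), references that shift off the sheet, and function names containing digits are all ignored for brevity:

```typescript
// Shift every A1-style reference in a formula by the paste offset.
function adjustFormula(formula: string, rowDelta: number, colDelta: number): string {
  return formula.replace(/([A-Z]+)(\d+)/g, (_, colLetters: string, rowDigits: string) => {
    // Column letters → 0-based index (A=0, B=1, ..., AA=26)
    let col = 0;
    for (const ch of colLetters) col = col * 26 + (ch.charCodeAt(0) - 64);
    col = col - 1 + colDelta;
    const row = Number(rowDigits) + rowDelta;
    // Index → column letters
    let letters = '';
    for (let c = col; c >= 0; c = Math.floor(c / 26) - 1) {
      letters = String.fromCharCode(65 + (c % 26)) + letters;
    }
    return letters + row;
  });
}
```

Note `SUM` in `=SUM(A1:A3)` is untouched because the regex requires trailing digits.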

Q

How do you handle circular references?

A

Track the dependency graph. Before evaluating, detect cycles with DFS. Show a #CIRCULAR_REF error for cells in the cycle.
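That cycle check can be sketched as a DFS with an "in current path" set; here `deps` maps each cell to the cells its formula reads:

```typescript
// Returns true if evaluating `start` would revisit a cell already on
// the current evaluation path (a back edge = a circular reference).
function hasCircularRef(deps: Map<string, string[]>, start: string): boolean {
  const inStack = new Set<string>(); // cells on the current DFS path
  const done = new Set<string>();    // cells fully explored, cycle-free

  function dfs(cell: string): boolean {
    if (inStack.has(cell)) return true;  // back edge → cycle
    if (done.has(cell)) return false;
    inStack.add(cell);
    for (const ref of deps.get(cell) ?? []) {
      if (dfs(ref)) return true;
    }
    inStack.delete(cell);
    done.add(cell);
    return false;
  }
  return dfs(start);
}
```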

Q

How do frozen rows/columns work?

A

The canvas draws two separate regions: frozen area (always at top/left) and scrollable area. They share the same row heights and column widths but scroll independently.

Q

How do you handle 1 million rows?

A

Only the visible range is in memory as a JavaScript object. The rest is stored server-side. When you scroll near the edge of the loaded range, the next batch is fetched.

Must Revise Hard

Design Netflix UI

CDN ABR Personalization Hover Preview Performance
Asked at: Netflix Disney+ Hotstar Amazon Prime

What is this?

Design the Netflix frontend: the home page with personalized rows of content, smooth hover previews that play video clips, adaptive video streaming that adjusts to your internet speed, and CDN delivery that serves content from servers near you.

High-Level Architecture

                            
┌───────────────────────────────────────────────────────────────┐
│                     NETFLIX ARCHITECTURE                      │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│  ┌─────────────────────────────────────────────────────┐      │
│  │                    Browser                          │      │
│  │                                                     │      │
│  │  ┌──────────────────────────────────────────────┐   │      │
│  │  │              Home Page                       │   │      │
│  │  │  ┌────────────────────────────────────────┐  │   │      │
│  │  │  │  Hero Banner (autoplay muted clip)     │  │   │      │
│  │  │  └────────────────────────────────────────┘  │   │      │
│  │  │  ┌────────────────────────────────────────┐  │   │      │
│  │  │  │  Row: "Top Picks for You"              │  │   │      │
│  │  │  │  [Card][Card][Card][Card]→ scroll →    │  │   │      │
│  │  │  └────────────────────────────────────────┘  │   │      │
│  │  │  ┌────────────────────────────────────────┐  │   │      │
│  │  │  │  Row: "Continue Watching"              │  │   │      │
│  │  │  └────────────────────────────────────────┘  │   │      │
│  │  └──────────────────────────────────────────────┘   │      │
│  └─────────────────────────────────────────────────────┘      │
│       │ API calls                    │ Video requests         │
│       ▼                              ▼                        │
│  ┌───────────────┐          ┌───────────────────┐             │
│  │  Netflix API  │          │   Open Connect    │             │
│  │  (metadata,   │          │   CDN (Netflix's  │             │
│  │   personalize)│          │   own CDN ISP     │             │
│  └───────────────┘          │   appliances)     │             │
│                             └───────────────────┘             │
└───────────────────────────────────────────────────────────────┘

                          

Open Connect: Netflix's Custom CDN

ELI5: Open Connect

Netflix built their own CDN called Open Connect. They place physical servers (appliances) directly inside ISPs (like Airtel, Jio, Comcast). When you stream a show, the video comes from a server literally in your ISP's data center — maybe 5ms away. This is how Netflix handles 15% of all internet traffic.

Hover Preview: The Engineering Behind It

                            
  Hover Preview Flow
  ──────────────────

  User hovers over card
          │
          ▼ (400ms delay — don't trigger on accidental hover)
  Timer fires → fetch preview video URL
          │
          ▼
  Create <video> element (hidden)
  Load first 30s preview clip from CDN
          │
          ▼ (video buffered enough)
  Scale up card (CSS transform scale(1.4))
  Fade in video overlay
  Play video (muted, loop)
          │
          ▼ (user moves away)
  Cancel pending fetch (AbortController)
  Reverse animation
  Destroy <video> to free memory

  Optimization: Pre-create video element pool
  ──────────────────────────────────────────
  Instead of creating new <video> for each hover:
  Maintain pool of 3 <video> elements
  Reuse → avoids DOM creation overhead
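The pool optimization above, sketched with an injected factory/reset so it stays DOM-agnostic. In the browser, `createVideo` would be `() => document.createElement('video')` and `resetVideo` would pause, clear `src`, and call `load()`:

```typescript
// Fixed-size pool of reusable players — avoids creating a <video>
// element (and its decoder resources) on every hover.
class VideoPool<V> {
  private free: V[] = [];

  constructor(createVideo: () => V, size = 3) {
    for (let i = 0; i < size; i++) this.free.push(createVideo());
  }

  // null means the pool is exhausted → fall back to the static thumbnail
  acquire(): V | null {
    return this.free.pop() ?? null;
  }

  release(video: V, resetVideo: (v: V) => void): void {
    resetVideo(video); // pause, drop src, free buffered media
    this.free.push(video);
  }
}
```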

                          
// hover-preview.tsx — Netflix-style hover preview
import { useRef, useCallback, useState } from 'react';

interface ContentCard {
  id: string;
  title: string;
  thumbnailUrl: string;
  previewVideoUrl: string;
}

export function ContentCard({ content }: { content: ContentCard }) {
  const [isPreviewActive, setIsPreviewActive] = useState(false);
  const videoRef = useRef<HTMLVideoElement>(null);
  const hoverTimerRef = useRef<number | null>(null);
  const abortRef = useRef<AbortController | null>(null);

  const handleMouseEnter = useCallback(() => {
    // 400ms delay — avoid triggering on quick mouse pass-through
    hoverTimerRef.current = window.setTimeout(async () => {
      try {
        abortRef.current = new AbortController();

        // Fetch the preview URL (could be a signed CDN URL)
        const res = await fetch(
          '/api/preview/' + content.id,
          { signal: abortRef.current.signal }
        );
        const { videoUrl } = await res.json();

        if (videoRef.current) {
          videoRef.current.src = videoUrl;
          videoRef.current.load();
          // Play when enough is buffered
          videoRef.current.addEventListener('canplay', () => {
            videoRef.current?.play();
            setIsPreviewActive(true);
          }, { once: true });
        }
      } catch (err) {
        // AbortError is expected when user moves away — ignore it
        if ((err as Error).name !== 'AbortError') {
          console.error('Preview failed:', err);
        }
      }
    }, 400);
  }, [content.id]);

  const handleMouseLeave = useCallback(() => {
    // Cancel the 400ms timer if user left before it fired
    if (hoverTimerRef.current !== null) {
      clearTimeout(hoverTimerRef.current);
      hoverTimerRef.current = null;
    }
    // Cancel in-flight fetch
    abortRef.current?.abort();

    // Clean up video — removing src + load() aborts buffering and frees memory
    if (videoRef.current) {
      videoRef.current.pause();
      videoRef.current.removeAttribute('src');
      videoRef.current.load();
    }
    setIsPreviewActive(false);
  }, []);

  return (
    <div
      className={`content-card ${isPreviewActive ? 'preview-active' : ''}`}
      onMouseEnter={handleMouseEnter}
      onMouseLeave={handleMouseLeave}
    >
      <img src={content.thumbnailUrl} alt={content.title} />

      {/* Hidden video element — shown when preview is active */}
      <video
        ref={videoRef}
        className={`preview-video ${isPreviewActive ? 'visible' : ''}`}
        muted
        loop
        playsInline
      />

      {isPreviewActive && (
        <div className="preview-overlay">
          <h3>{content.title}</h3>
          <div className="action-buttons">
            <button>Play</button>
            <button>Add to List</button>
            <button>More Info</button>
          </div>
        </div>
      )}
    </div>
  );
}

Personalization: How Rows Are Ordered

ELI5: Netflix Personalization

Netflix has 1,500+ different homepage row configurations. Which rows you see, in what order, and which titles appear in each row — all determined by ML models trained on your watch history. The frontend just renders a list of rows from the API. The personalization is entirely server-side.

// home-page.tsx — Rendering personalized rows
import { useRef } from 'react';

interface ContentRow {
  id: string;
  title: string;        // "Top Picks for You", "Action Movies", etc.
  algorithm: string;    // for analytics
  items: ContentItem[];
  totalCount: number;
}

interface HomePageData {
  rows: ContentRow[];
  heroContent: ContentItem;
}

// Server-side fetch — personalized per user
async function getHomePageData(userId: string): Promise<HomePageData> {
  // Netflix's Falcor or GraphQL API — fetches all rows in one request
  const response = await fetch('/api/home', {
    headers: { 'X-User-Id': userId },
    // Cache for 5 minutes — balance freshness vs performance
    next: { revalidate: 300 },
  });
  return response.json();
}

// The row component with horizontal scroll
function ContentRow({ row }: { row: ContentRow }) {
  const scrollRef = useRef<HTMLDivElement>(null);

  const scroll = (direction: 'left' | 'right') => {
    const el = scrollRef.current;
    if (!el) return;
    el.scrollBy({
      left: direction === 'right' ? 600 : -600,
      behavior: 'smooth',
    });
  };

  return (
    <section className="content-row">
      <h2>{row.title}</h2>
      <div className="row-container">
        <button className="scroll-btn left" onClick={() => scroll('left')}>
          ‹
        </button>
        <div ref={scrollRef} className="cards-container">
          {row.items.map(item => (
            <ContentCard key={item.id} content={item} />
          ))}
        </div>
        <button className="scroll-btn right" onClick={() => scroll('right')}>
          ›
        </button>
      </div>
    </section>
  );
}

Performance Optimizations

Optimization         | Technique                                          | Impact
Image loading        | loading='lazy' + srcSet for different screen sizes | Reduces initial page load by ~60%
Row virtualization   | Only load rows near viewport                       | Home page has 20+ rows — don't load all
Prefetch on hover    | Link rel='prefetch' for watch page                 | Instant navigation to video player
Hero autoplay        | Start muted clip on page load                      | Engagement — users stay longer
Image CDN            | Resize/optimize images at CDN edge                 | Serve WebP to modern browsers
API response caching | SWR with 5-min stale-while-revalidate              | Instant render on revisit
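For the image CDN row, a tiny sketch of building a responsive srcset against an edge-resizing CDN. The `w=` query parameter is an assumption — real image CDNs each have their own resize syntax:

```typescript
// Build a srcset where each candidate asks the CDN to resize at the edge.
// The browser picks the best candidate for the device's viewport and DPR.
function buildSrcSet(baseUrl: string, widths: number[]): string {
  return widths.map(w => `${baseUrl}?w=${w} ${w}w`).join(', ');
}

// Usage (illustrative URL):
// <img srcSet={buildSrcSet('https://img.cdn/poster.jpg', [320, 640, 1280])}
//      sizes="(max-width: 600px) 320px, 640px" loading="lazy" />
```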

Interview Tip

Netflix key points: (1) Open Connect CDN — physical servers in ISPs, (2) DASH/HLS ABR for adaptive streaming, (3) hover preview with 400ms delay + AbortController, (4) personalization is server-side ML — frontend just renders rows, (5) muted autoplay for hero banner, (6) lazy loading rows and images.

Common Interview Follow-Up Questions
5 questions
Q

How does Netflix decide which thumbnail to show you?

A

A/B testing of thumbnails. Netflix personalizes thumbnails — different users see different artwork for the same title based on what images they historically click on.

Q

How does Netflix handle slow internet mid-stream?

A

ABR drops to a lower quality (e.g., 1080p → 480p). Netflix also prebuffers aggressively (30-60 seconds ahead). If the buffer runs out completely, show a 'Buffering...' spinner.

Q

How do you implement continue watching?

A

The server tracks playback position. The client reports position periodically during playback and sends a final position on player unload (pagehide event) — navigator.sendBeacon is reliable here, since a regular fetch can be cancelled mid-unload. On the next load, the player seeks to the saved position.

Q

How does Netflix handle multiple profiles?

A

Profile selection at login sets a profile token. All API calls include this token. The recommendation model is per-profile. Frontend just renders whatever API returns.

Q

How does Netflix's homepage load so fast?

A

SSR the initial page with first 2-3 rows. Lazy load remaining rows as user scrolls. Cache API responses in browser with SWR. Preconnect to CDN domain in HTML head.

Must Revise Medium

Design a Real-Time Chat App

WebSocket Message States Offline Queue Reconnection
Asked at: Meta Slack Microsoft Telegram

What is this?

Design a real-time chat application like WhatsApp or Slack. Features include: instant message delivery, message status (sent/delivered/read), offline message queue, reconnection handling, and typing indicators.

Architecture Overview

                            
┌──────────────────────────────────────────────────────────────┐
│                     CHAT APP ARCHITECTURE                    │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│   Alice's Browser              Bob's Browser                 │
│   ┌──────────────────┐         ┌──────────────────┐          │
│   │  Chat UI         │         │  Chat UI         │          │
│   │  Message List    │         │  Message List    │          │
│   │  Input Box       │         │  Input Box       │          │
│   └────────┬─────────┘         └────────┬─────────┘          │
│            │ WebSocket                  │ WebSocket          │
│            ▼                            ▼                    │
│   ┌──────────────────────────────────────────────────┐       │
│   │              WebSocket Server                    │       │
│   │  (handles connections, rooms, routing)           │       │
│   └──────────────────────────┬───────────────────────┘       │
│                              │                               │
│             ┌────────────────┼──────────────────┐            │
│             ▼                ▼                  ▼            │
│   ┌──────────────┐  ┌──────────────┐  ┌──────────────┐       │
│   │  Message DB  │  │  Redis       │  │  Push Notif  │       │
│   │  (Cassandra/ │  │  Pub/Sub     │  │  (FCM/APNs)  │       │
│   │   DynamoDB)  │  │  (fan-out)   │  │  for offline │       │
│   └──────────────┘  └──────────────┘  └──────────────┘       │
└──────────────────────────────────────────────────────────────┘

                          

Message Status: The Three Ticks

ELI5: Message Status Flow

WhatsApp's single tick (sent), double tick (delivered), blue tick (read) — each represents a distinct server-side event. The sender's client listens for status updates via WebSocket and updates the message in local state.

                            
  Message Status State Machine
  ────────────────────────────

  Alice types and sends message
          │
          ▼
  Status: PENDING (clock icon)
  Message optimistically added to UI
  Stored in IndexedDB offline queue
          │ (WebSocket send)
          ▼
  Status: SENT (single tick ✓)
  Server ACK received
  Message persisted in DB
          │ (Bob's WS receives message)
          ▼
  Status: DELIVERED (double tick ✓✓)
  Bob's client sends delivery receipt
          │ (Bob opens conversation)
          ▼
  Status: READ (blue tick ✓✓)
  Bob's client sends read receipt

  Status transitions are sent back to Alice via WebSocket:
  Server → Alice: { type: "receipt", msgId: "123", status: "delivered" }

                          
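On the sender's side, these receipt events can be folded into local message state with a small reducer. A sketch, assuming this `Receipt` shape (the real wire format will differ) — note the rank check, since receipts can arrive out of order:

```typescript
type MessageStatus = 'pending' | 'sent' | 'delivered' | 'read' | 'failed';

interface Message {
  id: string;
  text: string;
  status: MessageStatus;
}

interface Receipt {
  msgId: string;
  status: 'sent' | 'delivered' | 'read';
}

// Status only ever moves forward. If a "read" receipt beats a
// "delivered" receipt over the wire, the later "delivered" must
// not downgrade the message.
const RANK: Record<MessageStatus, number> = {
  pending: 0, failed: 0, sent: 1, delivered: 2, read: 3,
};

function applyReceipt(messages: Message[], receipt: Receipt): Message[] {
  return messages.map(m =>
    m.id === receipt.msgId && RANK[receipt.status] > RANK[m.status]
      ? { ...m, status: receipt.status }
      : m
  );
}
```

Wired up, this is roughly `socket.on('receipt', r => setMessages(ms => applyReceipt(ms, r)))`.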

WebSocket Client with Reconnection

// chat-socket.ts — Robust WebSocket client
type MessageHandler = (msg: ChatMessage) => void;

class ChatSocket {
  private ws: WebSocket | null = null;
  private reconnectAttempts = 0;
  private maxReconnectAttempts = 10;
  private offlineQueue: ChatMessage[] = [];  // messages sent while disconnected
  private handlers: Map<string, MessageHandler[]> = new Map();
  private pingInterval: number | null = null;

  constructor(private userId: string, private serverUrl: string) {}

  connect(): void {
    this.ws = new WebSocket(this.serverUrl + '?userId=' + this.userId);

    this.ws.onopen = () => {
      console.log('WS connected');
      this.reconnectAttempts = 0;

      // Send any messages that were queued while offline
      this.flushOfflineQueue();

      // Start ping to detect dead connections (proxy timeout prevention)
      this.pingInterval = window.setInterval(() => {
        if (this.ws?.readyState === WebSocket.OPEN) {
          this.ws.send(JSON.stringify({ type: 'ping' }));
        }
      }, 30000);  // every 30 seconds
    };

    this.ws.onmessage = (event) => {
      const msg = JSON.parse(event.data);

      if (msg.type === 'pong') return;  // ignore ping responses

      // Route to registered handlers
      const handlers = this.handlers.get(msg.type) ?? [];
      handlers.forEach(h => h(msg));
    };

    this.ws.onclose = (event) => {
      if (this.pingInterval !== null) {
        clearInterval(this.pingInterval);
        this.pingInterval = null;
      }

      if (event.code === 1000) {
        // Normal closure — don't reconnect
        return;
      }

      this.scheduleReconnect();
    };

    this.ws.onerror = () => {
      // onclose will fire after onerror — handle reconnect there
    };
  }

  private scheduleReconnect(): void {
    if (this.reconnectAttempts >= this.maxReconnectAttempts) {
      // Show "Connection lost" UI
      this.emit('connection_failed', null);
      return;
    }

    // Exponential backoff: 1s, 2s, 4s, 8s... max 30s
    const delay = Math.min(1000 * Math.pow(2, this.reconnectAttempts), 30000);
    this.reconnectAttempts++;

    setTimeout(() => this.connect(), delay);
  }

  send(message: ChatMessage): void {
    if (this.ws?.readyState === WebSocket.OPEN) {
      this.ws.send(JSON.stringify(message));
    } else {
      // Offline — queue the message
      this.offlineQueue.push(message);
      // Persist to IndexedDB for page refresh survival
      this.persistToIndexedDB(message);
    }
  }

  private flushOfflineQueue(): void {
    const queue = [...this.offlineQueue];
    this.offlineQueue = [];

    for (const msg of queue) {
      this.ws!.send(JSON.stringify(msg));
    }
  }

  on(type: string, handler: MessageHandler): () => void {
    if (!this.handlers.has(type)) this.handlers.set(type, []);
    this.handlers.get(type)!.push(handler);
    // Return unsubscribe function
    return () => {
      const list = this.handlers.get(type) ?? [];
      this.handlers.set(type, list.filter(h => h !== handler));
    };
  }

  private emit(type: string, data: any): void {
    const handlers = this.handlers.get(type) ?? [];
    handlers.forEach(h => h(data));
  }

  private async persistToIndexedDB(message: ChatMessage): Promise<void> {
    // Store in IndexedDB so queued messages survive a page refresh.
    // openDB is from the 'idb' wrapper library (import { openDB } from 'idb');
    // creating the 'pending' store in the upgrade callback is omitted here.
    const db = await openDB('chat-queue', 1);
    await db.put('pending', message);
  }

  disconnect(): void {
    this.ws?.close(1000, 'User logged out');
  }
}

Typing Indicators

ELI5: Typing Indicators

When Alice starts typing, her client sends a 'typing_start' event to the server, which forwards it to Bob. If Alice then goes 3 seconds without typing, her client sends 'typing_stop'. The stop is driven by a trailing debounce timer, so events fire only on state changes — not on every keystroke.

// typing-indicator.ts — Efficient typing status
class TypingIndicator {
  private typingTimer: number | null = null;
  private isCurrentlyTyping = false;

  constructor(
    private chatSocket: ChatSocket,
    private conversationId: string
  ) {}

  // Called on every keystroke in the message input
  onKeyPress(): void {
    if (!this.isCurrentlyTyping) {
      // First keystroke — send typing start
      this.isCurrentlyTyping = true;
      this.chatSocket.send({
        type: 'typing_start',
        conversationId: this.conversationId,
      });
    }

    // Reset the stop timer on every keystroke
    if (this.typingTimer !== null) {
      clearTimeout(this.typingTimer);
    }

    // Stop typing indicator after 3s of inactivity
    this.typingTimer = window.setTimeout(() => {
      this.isCurrentlyTyping = false;
      this.chatSocket.send({
        type: 'typing_stop',
        conversationId: this.conversationId,
      });
    }, 3000);
  }

  // Called when message is sent
  onMessageSent(): void {
    if (this.typingTimer !== null) {
      clearTimeout(this.typingTimer);
    }
    if (this.isCurrentlyTyping) {
      this.isCurrentlyTyping = false;
      this.chatSocket.send({
        type: 'typing_stop',
        conversationId: this.conversationId,
      });
    }
  }
}

Message Rendering: Chat Bubbles

                            
  Chat Message States in UI
  ─────────────────────────

  Alice's view:                    Bob's view:
  ─────────────────────────────────────────────────
  "Hello Bob!"          PENDING   (No message yet)
  └── clock icon                  (offline queue)
         │
         ▼
  "Hello Bob!"          SENT      "Hello Bob!"
  └── ✓                           (appears instantly)
         │
         ▼
  "Hello Bob!"          DELIVERED "Hello Bob!"
  └── ✓✓ (grey)         (Bob's app is open)
         │
         ▼
  "Hello Bob!"          READ      "Hello Bob!"
  └── ✓✓ (blue)         (Bob opened convo)

  Optimistic UI principle:
  ────────────────────────
  Message appears in Alice's chat immediately
  PENDING → SENT → DELIVERED → READ
  Never remove message on network failure —
  instead show retry button with FAILED status

                          

Interview Tip

Chat app essentials: (1) WebSocket with exponential backoff reconnection, (2) offline queue in IndexedDB, (3) optimistic UI — show the message immediately as PENDING, (4) message status receipts (sent/delivered/read), (5) typing indicators debounced so the server isn't flooded with per-keystroke events, (6) Redis Pub/Sub for multi-server fan-out.

Common Interview Follow-Up Questions
5 questions
Q

How do you handle message ordering?

A

Messages have a server-assigned sequence number per conversation. Client sorts by sequence number. Optimistic messages use a temporary local ID and are re-sorted when the server ACK arrives.
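One way to sketch that sort (the field names `seq` and `sentAt` are assumptions): server-sequenced messages come first in sequence order, and optimistic messages that haven't been ACKed yet trail in local send order.

```typescript
interface ChatMsg {
  id: string;         // server ID once ACKed, temporary local ID before
  seq: number | null; // server-assigned sequence number, null while pending
  sentAt: number;     // local timestamp — tiebreak for pending messages
}

function sortMessages(messages: ChatMsg[]): ChatMsg[] {
  return [...messages].sort((a, b) => {
    if (a.seq !== null && b.seq !== null) return a.seq - b.seq;
    if (a.seq !== null) return -1; // sequenced before pending
    if (b.seq !== null) return 1;
    return a.sentAt - b.sentAt;    // both pending: local send order
  });
}
```

When the ACK arrives, the client swaps the temporary ID for the server ID, fills in `seq`, and re-sorts.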

Q

How do you implement group chats?

A

A conversation has N participants. Server fans out each message to all N connections. Delivery receipt requires ACK from all N clients. Read receipt is per-user.
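The tick the sender sees for a group message can be derived from the per-user receipts. A sketch, assuming a receipt map keyed by participant (assumes at least one other participant):

```typescript
type UserStatus = 'sent' | 'delivered' | 'read';

// The group-level status is the *minimum* across all other
// participants: blue ticks only appear once everyone has read it.
function groupStatus(receipts: Record<string, UserStatus>): UserStatus {
  const rank: Record<UserStatus, number> = { sent: 0, delivered: 1, read: 2 };
  let min: UserStatus = 'read';
  for (const status of Object.values(receipts)) {
    if (rank[status] < rank[min]) min = status;
  }
  return min;
}
```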

Q

What about media messages (images, videos)?

A

Client uploads file directly to S3 using presigned URL. Then sends a message with the S3 URL. Thumbnail is generated server-side asynchronously. UI shows upload progress bar.

Q

How do you handle message search?

A

Index messages in Elasticsearch. Search is a REST API call, not WebSocket. Only search within your conversations for privacy.

Q

How do end-to-end encrypted messages work?

A

Key exchange (Diffie-Hellman) happens client-side on the first message. Messages are encrypted in the browser before sending. The server stores and forwards only ciphertext — it cannot read the contents.

Must Revise Medium

Design Search Autocomplete

Debounce AbortController Cache Keyboard Navigation Accessibility
Asked at: Google Amazon Flipkart Swiggy

What is this?

Design a search autocomplete widget — the dropdown that appears as you type in a search box, showing suggestions in real time. The challenge is doing this efficiently without hammering the API on every keystroke, handling race conditions where old requests arrive after newer ones, and making it accessible with keyboard navigation.

The Problems to Solve

  • Too many API calls: Without debounce, typing 'react' fires 5 requests (r, re, rea, reac, react)
  • Race conditions: If 're' response arrives after 'react' response, old results flash on screen
  • No caching: Same query typed twice hits the server twice
  • Accessibility: Keyboard users need arrow key navigation + screen reader support
  • Network errors: What to show if the API fails?

Architecture Overview

                            
┌──────────────────────────────────────────────────────────────┐
│                  AUTOCOMPLETE ARCHITECTURE                   │
├──────────────────────────────────────────────────────────────┤
│                                                              │
│  User types "react"                                          │
│       │                                                      │
│       ▼                                                      │
│  ┌─────────────────────────────────────────────┐             │
│  │  Debounce (300ms)                           │             │
│  │  Wait for user to stop typing before firing │             │
│  └──────────────────────┬──────────────────────┘             │
│                         │                                    │
│                         ▼                                    │
│  ┌─────────────────────────────────────────────┐             │
│  │  Client-side Cache (Map<query, results>)    │             │
│  │  HIT  → return cached results instantly     │             │
│  │  MISS → proceed to API call                 │             │
│  └──────────────────────┬──────────────────────┘             │
│                         │ (cache miss)                       │
│                         ▼                                    │
│  ┌─────────────────────────────────────────────┐             │
│  │  AbortController                            │             │
│  │  Cancel previous in-flight request          │             │
│  │  before sending new one                     │             │
│  └──────────────────────┬──────────────────────┘             │
│                         │                                    │
│                         ▼                                    │
│  ┌─────────────────────────────────────────────┐             │
│  │  GET /api/search/suggest?q=react            │             │
│  │  Response: ["react", "react native", ...]   │             │
│  └─────────────────────────────────────────────┘             │
└──────────────────────────────────────────────────────────────┘

                          

Debounce + AbortController: The Core Solution

ELI5: Debounce

Debounce is like a patient waiter. Instead of taking your order every time you open your mouth, they wait until you've stopped talking for 300ms. If you say 'I want the...' — they wait. 'Soup' — they wait. 'Please' — they wait 300ms, see you're done, then go to the kitchen. This saves the kitchen from being overwhelmed.

ELI5: AbortController

AbortController is like being able to call back a delivery driver. You ordered pizza, then changed your mind and ordered sushi. Without AbortController, both orders arrive. With it, you can cancel the pizza order if it hasn't been made yet. This prevents stale responses from flashing old results.

// use-autocomplete.ts — Complete autocomplete hook
import { useState, useEffect, useRef, useCallback } from 'react';

interface UseAutocompleteOptions {
  debounceMs?: number;    // default 300
  minChars?: number;      // don't search for 1-char queries
  maxCacheSize?: number;  // LRU cache size
}

export function useAutocomplete(options: UseAutocompleteOptions = {}) {
  const {
    debounceMs = 300,
    minChars = 2,
    maxCacheSize = 100,
  } = options;

  const [query, setQuery] = useState('');
  const [suggestions, setSuggestions] = useState<string[]>([]);
  const [isLoading, setIsLoading] = useState(false);
  const [error, setError] = useState<string | null>(null);
  const [activeIndex, setActiveIndex] = useState(-1);  // for keyboard nav

  // LRU cache: Map preserves insertion order
  const cacheRef = useRef<Map<string, string[]>>(new Map());

  // AbortController ref — cancel in-flight requests
  const abortRef = useRef<AbortController | null>(null);

  // Debounce timer ref
  const timerRef = useRef<number | null>(null);

  const fetchSuggestions = useCallback(async (q: string) => {
    // Check cache first
    const cached = cacheRef.current.get(q);
    if (cached) {
      // Re-insert on hit so eviction targets the least recently used entry
      cacheRef.current.delete(q);
      cacheRef.current.set(q, cached);
      setSuggestions(cached);
      return;
    }

    // Cancel any previous request. Keep a local reference so we can
    // check later whether this request is still the latest one.
    abortRef.current?.abort();
    const controller = new AbortController();
    abortRef.current = controller;

    setIsLoading(true);
    setError(null);

    try {
      const res = await fetch(
        '/api/search/suggest?q=' + encodeURIComponent(q),
        { signal: controller.signal }
      );

      if (!res.ok) throw new Error('Search failed');

      const data: string[] = await res.json();

      // Update cache (evict least recently used if over limit)
      if (cacheRef.current.size >= maxCacheSize) {
        const oldestKey = cacheRef.current.keys().next().value;
        if (oldestKey !== undefined) cacheRef.current.delete(oldestKey);
      }
      cacheRef.current.set(q, data);

      setSuggestions(data);
    } catch (err) {
      if ((err as Error).name === 'AbortError') {
        // Expected — a newer query superseded this one
        return;
      }
      setError('Could not load suggestions');
      setSuggestions([]);
    } finally {
      // Only clear the spinner if no newer request has taken over —
      // otherwise an aborted request would hide the newer one's loading state
      if (abortRef.current === controller) setIsLoading(false);
    }
  }, [maxCacheSize]);

  // Effect: debounce the fetch on query change
  useEffect(() => {
    if (timerRef.current !== null) {
      clearTimeout(timerRef.current);
    }

    if (query.length < minChars) {
      setSuggestions([]);
      setActiveIndex(-1);
      return;
    }

    timerRef.current = window.setTimeout(() => {
      fetchSuggestions(query);
    }, debounceMs);

    return () => {
      if (timerRef.current !== null) clearTimeout(timerRef.current);
    };
  }, [query, fetchSuggestions, debounceMs, minChars]);

  // Keyboard navigation handler
  const handleKeyDown = useCallback((e: React.KeyboardEvent) => {
    switch (e.key) {
      case 'ArrowDown':
        e.preventDefault();  // prevent scroll
        setActiveIndex(i => Math.min(i + 1, suggestions.length - 1));
        break;
      case 'ArrowUp':
        e.preventDefault();
        setActiveIndex(i => Math.max(i - 1, -1));
        break;
      case 'Enter':
        if (activeIndex >= 0) {
          // Select the highlighted suggestion
          setQuery(suggestions[activeIndex]);
          setSuggestions([]);
        }
        break;
      case 'Escape':
        setSuggestions([]);
        setActiveIndex(-1);
        break;
    }
  }, [suggestions, activeIndex]);

  return {
    query,
    setQuery,
    suggestions,
    isLoading,
    error,
    activeIndex,
    handleKeyDown,
  };
}

Accessible UI Component

// search-autocomplete.tsx — Accessible dropdown
import { useId } from 'react';
import { useAutocomplete } from './use-autocomplete';

export function SearchAutocomplete() {
  const listboxId = useId();  // stable ID for ARIA
  const {
    query,
    setQuery,
    suggestions,
    isLoading,
    error,
    activeIndex,
    handleKeyDown,
  } = useAutocomplete({ debounceMs: 300, minChars: 2 });

  const isOpen = suggestions.length > 0 || isLoading || !!error;

  return (
    <div className="autocomplete-container">
      <input
        type="search"
        value={query}
        onChange={e => setQuery(e.target.value)}
        onKeyDown={handleKeyDown}
        placeholder="Search..."
        // ARIA 1.2 combobox pattern: the combobox role belongs on the
        // input itself, not a wrapper (aria-owns is no longer needed)
        role="combobox"
        aria-expanded={isOpen}
        aria-haspopup="listbox"
        aria-autocomplete="list"
        aria-controls={listboxId}
        // Tell screen readers which item is highlighted
        aria-activedescendant={
          activeIndex >= 0 ? 'suggestion-' + activeIndex : undefined
        }
      />

      {isOpen && (
        <ul
          id={listboxId}
          role="listbox"
          className="suggestions-dropdown"
        >
          {isLoading && (
            <li role="status" aria-live="polite">Loading...</li>
          )}
          {error && (
            <li role="alert">{error}</li>
          )}
          {suggestions.map((suggestion, index) => (
            <li
              key={suggestion}
              id={'suggestion-' + index}
              role="option"
              aria-selected={index === activeIndex}
              className={index === activeIndex ? 'highlighted' : ''}
              onMouseDown={(e) => {
                // mousedown instead of click — fires before input blur
                e.preventDefault();
                setQuery(suggestion);
              }}
            >
              {/* Highlight matching portion */}
              <HighlightMatch text={suggestion} query={query} />
            </li>
          ))}
        </ul>
      )}
    </div>
  );
}

// Highlight the matching part of the suggestion
function HighlightMatch({ text, query }: { text: string; query: string }) {
  const idx = text.toLowerCase().indexOf(query.toLowerCase());
  if (idx === -1) return <span>{text}</span>;

  return (
    <span>
      {text.slice(0, idx)}
      <strong>{text.slice(idx, idx + query.length)}</strong>
      {text.slice(idx + query.length)}
    </span>
  );
}

Caching Strategy

  Cache Level       Storage                      TTL                Benefit
  ───────────────   ──────────────────────────   ────────────────   ─────────────────────
  In-memory (Map)   JS Map in component          Session lifetime   Instant — no network
  Browser cache     HTTP Cache-Control header    5-10 minutes       Survives page refresh
  Service Worker    Cache Storage API            Hours              Works offline
  Server cache      Redis                        5-15 minutes       Reduces DB load
                            
  Cache Lookup Flow
  ─────────────────

  User types "react"
        │
        ▼
  Check in-memory Map
  HIT? → return immediately (0ms)
  MISS?
        │
        ▼
  Check browser HTTP cache
  HIT? → return from cache (~5ms)
  MISS?
        │
        ▼
  Fetch from API (~100-300ms)
  Store in both caches
  Return results

  Cache key: exact query string (lowercase, trimmed)
  "React" and "react" → same cache entry

                          
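The first level of this flow is effectively a tiny LRU cache. A standalone sketch, built on Map's insertion-order iteration (reads re-insert the entry to refresh its recency):

```typescript
class LruCache<V> {
  private map = new Map<string, V>();

  constructor(private maxSize: number) {}

  get(key: string): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Re-insert so this key becomes the most recently used
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key: string, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Map iterates in insertion order — first key is least recently used
      const oldest = this.map.keys().next().value;
      if (oldest !== undefined) this.map.delete(oldest);
    }
  }
}
```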

Interview Tip

Autocomplete has 4 key engineering challenges, each with a specific solution: (1) Too many requests → Debounce 300ms. (2) Race conditions → AbortController cancels stale requests. (3) Repeated queries → Client-side LRU cache. (4) Keyboard accessibility → ARIA combobox pattern with activeDescendant. Memorize these 4 pairs.

Common Interview Follow-Up Questions
5 questions
Q

How do you handle the case where 'react' and 'React' should be the same cache key?

A

Normalize before cache lookup: query.toLowerCase().trim().

Q

What if the user types very fast and the debounce still fires multiple requests?

A

AbortController handles this. Each new request cancels the previous one. Only the latest request's response is used.

Q

How do you implement 'recent searches' shown before the user types?

A

Store recent searches in localStorage (max 5). Show them when input is focused but empty. Clear button removes an entry.
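A pure sketch of that list update (the function name is an assumption; persisting to localStorage is left to the caller):

```typescript
// Most recent first, no duplicates, capped at `max` entries.
function addRecentSearch(recent: string[], query: string, max = 5): string[] {
  const q = query.trim();
  if (q === '') return recent;
  return [q, ...recent.filter(r => r !== q)].slice(0, max);
}
```

Each update is written back with `localStorage.setItem('recent-searches', JSON.stringify(next))` and read when the empty input gains focus.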

Q

How would you handle search with categories (like Amazon search in 'Electronics')?

A

Add a category selector. Include the category in the API request params and the cache key.
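One way to sketch such a composite cache key (the exact format is an assumption):

```typescript
// Same normalization as the plain query key (lowercase, trimmed),
// plus the category, so "TV" in Electronics and "TV" in Books
// are cached as separate entries.
function suggestCacheKey(query: string, category = 'all'): string {
  return category.toLowerCase() + ':' + query.toLowerCase().trim();
}
```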

Q

How do you measure autocomplete performance?

A

Track: Time to First Suggestion (TTFS), cache hit rate, click-through rate on suggestions, abandonment rate. Send these as analytics events.