Shreyansh Singh
  • Paper Summary #14 - Physics of Language Models: Part 3.1, Knowledge Storage and Extraction

    My notes from the Physics of Language Models series of papers.

    22 min read   ·   January 17, 2026

    2026   ·   transformer   knowledge   paper-summaries   ·   LLMs

  • Understanding Multi-Head Latent Attention (MLA)

    A mathematical and code deep-dive on one of DeepSeek's key innovations: Multi-Head Latent Attention (MLA).

    14 min read   ·   November 08, 2025

    2025   ·   attention   mla   ·   LLMs

  • Deriving the Gradient for the Backward Pass of Layer Normalization

    Understanding the math behind Layer Normalization and deriving the gradients for the backward pass.

    8 min read   ·   June 04, 2025

    2025   ·   ml   math   ·   ML

  • Notes from GTC'25: CUDA Techniques to Maximize Compute and Instruction Throughput

    My notes from the talk on maximizing compute and instruction throughput at NVIDIA GTC 2025.

    32 min read   ·   April 04, 2025

    2025   ·   cuda   mlsys   ·   MLSys

  • Notes from GTC'25: CUDA Techniques to Maximize Memory Bandwidth and Hide Latency - Part 2

    Second part of my notes from the talk on maximizing memory bandwidth at NVIDIA GTC 2025.

    33 min read   ·   March 23, 2025

    2025   ·   cuda   mlsys   ·   MLSys

  • Notes from GTC'25: CUDA Techniques to Maximize Memory Bandwidth and Hide Latency - Part 1

    First part of my notes from the talk on maximizing memory bandwidth at NVIDIA GTC 2025.

    33 min read   ·   March 23, 2025

    2025   ·   cuda   mlsys   ·   MLSys

  • Faster Cross-Encoder Inference: Unleashing torch.compile for speed

    A quick write-up on accelerating a Jina Cross-Encoder using torch.compile.

    20 min read   ·   March 02, 2025

    2025   ·   inference-optimization   efficiency   mlsys   ·   MLSys

  • Paper Summary #13 - Physics of Language Models: Part 2.1, Grade-School Math and the Hidden Reasoning Process

    My notes from the Physics of Language Models series of papers.

    27 min read   ·   September 21, 2024

    2024   ·   transformer   reasoning   paper-summaries   ·   LLMs

  • Paper Summary #12 - Image Recaptioning in DALL-E 3

    The image recaptioning technique used in DALL-E 3 was extended to videos in Sora.

    12 min read   ·   February 18, 2024

    2024   ·   image-captioning   generative-ai   ·   Computer Vision

  • Paper Summary #11 - Sora

    OpenAI announced a groundbreaking text-to-video diffusion model capable of generating high-definition videos up to 60 seconds long.

    7 min read   ·   February 18, 2024

    2024   ·   diffusion   image-generation   video-generation   generative-ai   ·   Computer Vision

© Copyright 2026 Shreyansh Singh.