Exporting large querysets leads to 504 timeout errors #49

Open
opened 2026-04-05 16:21:19 +02:00 by MrUnknownDE · 0 comments
Originally created by @Nick-Gatti on 3/30/2026

NetBox Version

v4.4.0

Python Version

3.12

Area(s) of Concern

  • [x] User Interface
  • [ ] REST API
  • [ ] GraphQL API
  • [x] Python ORM
  • [ ] Other

Details

For context, our environment has over 1 million device objects. When exporting objects as CSV (via the "Export" button on any list view), the response blocks entirely until all matching objects have been fetched, serialized, and buffered in memory before a single byte is sent to the client. For large tables (tens of thousands of objects), this means the browser appears to hang with no download activity, timeout errors occur before the response completes, and peak memory consumption on the server spikes. From a UX perspective, the end user also cannot navigate away from the page while the export runs, and once the timeout is hit, the web server returns a 504 error even though the query is still running on the server.

<img width="773" height="112" alt="Image" src="https://github.com/user-attachments/assets/dc031d33-4bc2-4574-b58e-57cde4722e93" />
<img width="642" height="123" alt="Image" src="https://github.com/user-attachments/assets/6a027705-9e06-4a19-b280-efeba1b5759a" />

As one potential solution, I switched the export functionality locally to use a `StreamingHttpResponse` so that the HTTP server isn't waiting on all rows to be loaded into memory. Instead, the export function fetches results in batches of 200 rows and streams the response to the client, yielding each row to the CSV writer as it's processed. This means the first CSV row reaches the client within milliseconds of the request, the end user is able to navigate away from the page as the file downloads, memory stays bounded to ~200 rows at a time, and timeouts are no longer a concern regardless of table size.
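For reference, a minimal sketch of the pattern described above, assuming a Django view (the `iter_csv_rows` helper, the `Device` model, and the field names in the commented wiring are hypothetical, not NetBox's actual export code):

```python
import csv

class Echo:
    """Pseudo-buffer whose write() simply returns the value handed to it,
    so csv.writer can format one row at a time without accumulating output."""
    def write(self, value):
        return value

def iter_csv_rows(rows, header):
    """Lazily yield CSV-formatted lines for streaming.

    Memory stays bounded because `rows` is consumed one row at a time;
    nothing is buffered beyond the current row.
    """
    writer = csv.writer(Echo())
    yield writer.writerow(header)  # header reaches the client immediately
    for row in rows:
        yield writer.writerow(row)

# In a Django view, the generator plugs straight into StreamingHttpResponse,
# and iterator(chunk_size=200) bounds server-side fetching to 200-row batches:
#
#   from django.http import StreamingHttpResponse
#
#   rows = Device.objects.values_list("name", "status").iterator(chunk_size=200)
#   response = StreamingHttpResponse(
#       iter_csv_rows(rows, ["name", "status"]),
#       content_type="text/csv",
#   )
#   response["Content-Disposition"] = 'attachment; filename="devices.csv"'
#   return response
```

The `Echo` pseudo-buffer trick is needed because `csv.writer` wants a file-like object with `write()`; returning the formatted line instead of storing it lets each row pass straight through to the response generator.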

Reference: github/netbox#49