HACKER Q&A
📣 supai

Supabase PG upgrade wiped production DB, PITR backups failing


We are currently experiencing a total production outage and severe data loss on Supabase, and we cannot get a response from support. We are hoping someone from their team sees this here.

The Timeline of Failure:

1. We performed a Postgres version upgrade on our instance.
2. For reasons unknown, the upgrade triggered an unexpected downgrade of our provisioned disk size.
3. We ran a standard REINDEX (REINDEX DATABASE postgres;). Because disk space was already severely limited by the bug in step 2, the disk filled up entirely.
4. The out-of-space event wiped the entire database.
5. We immediately attempted a Point-in-Time Recovery (PITR), but the restore process is failing on Supabase's end.
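
For anyone hitting something similar: a REINDEX builds the replacement index on disk before dropping the old one, so it transiently needs free space roughly equal to the size of the indexes being rebuilt. A rough pre-flight check looks like this (standard Postgres catalog queries, nothing Supabase-specific; a sketch, not a guarantee):

```sql
-- Total on-disk size of all user indexes in the current database.
-- REINDEX DATABASE will need roughly this much free space transiently.
SELECT pg_size_pretty(sum(pg_relation_size(indexrelid)))
FROM pg_stat_user_indexes;

-- Largest individual indexes, in case it is safer to REINDEX
-- them one at a time instead of the whole database at once.
SELECT indexrelid::regclass AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
ORDER BY pg_relation_size(indexrelid) DESC
LIMIT 10;
```

Had we compared that total against the (silently shrunken) disk before running the REINDEX, we likely would have caught the problem before it became fatal.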

Our project is now completely inaccessible.

We have an open critical support ticket (#SU-342355), posted on GitHub discussions, and reached out on X, but have received zero response from a human.

If @kiwicopple, @antwilson, or any Supabase infra engineers are reading this: please do not delete the underlying AWS EBS volume. We need an engineer to manually mount the volume and extract the WAL or raw data pages before the blocks are overwritten.

Any advice from the community on escalating this further is appreciated.


  👤 supai Accepted Answer ✓
Update: Supabase support came through and successfully recovered the database. Our site is back online and no data was lost.

The Supabase dashboard is still experiencing errors and won't load properly, but the underlying data and API are intact.

Thank you to the HN mods for un-flagging this, and to everyone who took a look. Hopefully we'll receive a post-mortem from Supabase on the exact cause of the disk downgrade during the PG upgrade.