
The bug is closer than you think

The Year 2038 problem has a reputation for being comfortably distant. 2038 is 12 years away as of this writing. Surely your team will have re-platformed by then. Surely the code with the 32-bit column will have been retired.

You'd be surprised.

The real bug isn't on January 19, 2038. The real bug is the day your code tries to compute a timestamp for any moment after that. A 25-year mortgage with an origination date today has a maturity in 2051. A 30-year archival retention policy starting today expires in 2056. A long-lived certificate, a long-baseline scientific experiment, a backup schedule, a multi-decade subscription — all of these have already started hitting the Y2038 boundary in databases that store time as a 32-bit signed integer.
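You can watch the overflow happen with a few lines of arithmetic. A minimal sketch in Python, reusing the 25-year mortgage term from above (2**31 - 1 is the largest value a 32-bit signed integer can hold):

import datetime

INT32_MAX = 2**31 - 1  # 2147483647, i.e. 2038-01-19T03:14:07Z in epoch seconds

# A 25-year term starting today, as in the mortgage example.
origination = datetime.datetime.now(datetime.timezone.utc)
maturity = origination + datetime.timedelta(days=25 * 365)

maturity_ts = int(maturity.timestamp())
print(maturity_ts > INT32_MAX)  # True: the maturity no longer fits in 32 bits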

If you maintain code that does any kind of "now + N years" arithmetic, you should audit it. Here's how.

Step 1: Find every column that stores a timestamp as a 32-bit integer

The fastest way is to query your schema directly. Here's the SQL for the most common databases:

PostgreSQL

SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE data_type = 'integer'  -- int4 reports as 'integer' in information_schema
  AND (column_name ILIKE '%time%'
    OR column_name ILIKE '%date%'
    OR column_name ILIKE '%_at'
    OR column_name ILIKE '%_ts'
    OR column_name ILIKE '%epoch%'
    OR column_name ILIKE '%timestamp%');

MySQL

SELECT table_schema, table_name, column_name, column_type
FROM information_schema.columns
WHERE (column_type LIKE 'int%'        -- matches int, int(11), int unsigned
    OR column_type LIKE 'mediumint%')
  AND (column_name LIKE '%time%'
    OR column_name LIKE '%date%'
    OR column_name LIKE '%_at'
    OR column_name LIKE '%_ts'
    OR column_name LIKE '%epoch%');

Look at the results carefully. Some of those columns won't actually be timestamps — they might be durations in seconds, or row sequence numbers with "time" in the name. But every column whose values look like Unix epoch seconds (currently 10 digits, around 1.8 billion) needs attention.
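A quick way to triage a sampled value is to decode it and eyeball the date, as in this Python sketch (the one-to-two-billion window is a coarse plausibility check, not the overflow boundary):

import datetime

def looks_like_epoch_seconds(value: int) -> bool:
    # Coarse plausibility window: 2001-09-09 (1e9) through 2033-05-18 (2e9).
    return 1_000_000_000 <= value <= 2_000_000_000

def as_utc(value: int) -> str:
    return datetime.datetime.fromtimestamp(value, datetime.timezone.utc).isoformat()

# Sample a value from a suspect column and check whether the date is sane.
print(looks_like_epoch_seconds(1_778_000_000), as_utc(1_778_000_000))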

Step 2: Find every code path that produces those values

Your database is one side of the contract. The application code that writes to it is the other. Search your codebase for patterns like:

# Python: int() of a float seconds value
grep -rn "int(time.time())" .
grep -rn "int(datetime" .

# JavaScript: Math.floor of Date.now() / 1000
grep -rn "Date.now()" .

# Java: integer division of currentTimeMillis
grep -rn "/ 1000" .

# C / C++: time_t to int casts
grep -rn "(int)" . | grep -i time

Every cast from a wide type to a narrow type is a place where Y2038 silently corrupts your data. The C cast (int)time(NULL) on a 64-bit platform compiles cleanly and works correctly for current dates — and silently truncates for any timestamp above 2^31-1.
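You don't need a C compiler to reproduce the failure mode. Python's struct module enforces the same 32-bit limit, but loudly; a small sketch (2,208,988,800 is 2040-01-01T00:00:00Z):

import struct

ts_2026 = 1_778_000_000   # a 2026-era timestamp: fits in 32 bits
ts_2040 = 2_208_988_800   # 2040-01-01T00:00:00Z: does not

struct.pack("<i", ts_2026)   # fine as a 32-bit signed int
struct.pack("<q", ts_2040)   # fine as a 64-bit signed int

try:
    struct.pack("<i", ts_2040)   # 32-bit signed: raises struct.error
except struct.error as err:
    print("refused:", err)

# A C-style cast doesn't refuse; it typically wraps into the past:
print(ts_2040 - 2**32)  # -2085978496, a timestamp in late 1903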

Step 3: Audit your serialization formats

If your data passes through any of the following, check the schema:

- Schema-driven serializers with timestamp fields declared 32-bit: Protocol Buffers int32 or sfixed32, Thrift i32, Avro int
- Hand-packed binary: C structs written to disk or the wire, Python struct.pack with "i" or "I", Java DataOutputStream.writeInt
- Any file format or index you control that reserves 4 bytes for a time field

Step 4: Plan the migration

Migrating a column has four parts: add the new column, double-write, switch reads, drop the old column. Here's the safest pattern:

Phase 1: Add a new BIGINT column

ALTER TABLE events ADD COLUMN ts_64 BIGINT;
CREATE INDEX events_ts_64_idx ON events (ts_64);
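-- On a live PostgreSQL table, consider CREATE INDEX CONCURRENTLY to avoid blocking writes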

-- Backfill from existing column
UPDATE events SET ts_64 = ts_32 WHERE ts_64 IS NULL;

Backfilling a large table in chunks rather than one giant UPDATE is wise. Most production migrations use a loop that updates 10,000 rows at a time with a brief sleep between batches to keep replication lag manageable.
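Here's one shape that loop can take: a sketch assuming PostgreSQL via psycopg2 and an integer primary key named id (both assumptions), with the batch size and sleep as knobs to tune against your own replication lag:

import time
import psycopg2

BATCH_SIZE = 10_000

conn = psycopg2.connect("dbname=app")  # hypothetical connection string
try:
    while True:
        with conn.cursor() as cur:
            # Backfill one batch of rows that haven't been copied yet.
            cur.execute(
                """
                UPDATE events SET ts_64 = ts_32
                WHERE id IN (
                    SELECT id FROM events
                    WHERE ts_64 IS NULL
                    LIMIT %s
                )
                """,
                (BATCH_SIZE,),
            )
            updated = cur.rowcount
        conn.commit()        # commit per batch so locks stay short
        if updated == 0:
            break            # nothing left to backfill
        time.sleep(0.5)      # brief pause to keep replication lag manageable
finally:
    conn.close()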

Phase 2: Double-write

Update application code to write both columns. Keep this running for at least one full release cycle so you can roll back if anything breaks. Make sure all your tests and staging environments are doing the double-write before you flip production.
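On the application side, the double-write is usually one extra value per statement. A hypothetical sketch against the Phase 1 schema (the name column and the DB-API cursor are stand-ins for your own code):

def insert_event(cur, name: str, ts: int) -> None:
    # Phase 2: populate both columns so reads can switch over later.
    cur.execute(
        "INSERT INTO events (name, ts_32, ts_64) VALUES (%s, %s, %s)",
        (name, ts, ts),
    )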

Phase 3: Switch reads

Change application reads to use the new column. Keep the old column intact. Run for another cycle to verify nothing's reading the old column anymore.

Phase 4: Drop the old column

Wait at least 30 days after the last read of the old column. Then drop it. Many teams keep the old column for a year as a paranoia hedge — disk is cheap, regret is expensive.

The harder part: serialized data in flight

The schema migration is the easy half. The harder half is data that's already serialized and sitting somewhere:

- Message-queue backlogs and dead-letter queues holding old payloads
- Caches full of serialized objects
- Signed tokens, cookies, and sessions that embed an expiry
- Long-term backups, exports, and archived logs

An ALTER TABLE fixes none of these; bytes that are already written need a versioned format or a re-serialization pass.

The reality check

Y2038 isn't going to be a single midnight crisis. It will be — and already is — a long tail of small bugs that surface as soon as someone does now() + 13 years in code that runs through a 32-bit pipe somewhere. The first reports surfaced around 2020, when 18-year mortgages originated that year began computing maturities past 2038. They will keep coming.

The good news: most modern infrastructure is already fine. 64-bit Linux, modern databases with native timestamp types, JavaScript (which uses 53-bit-safe doubles), Python (arbitrary precision integers), and Java (always 64-bit longs) all handle current dates correctly without you doing anything. The bug is almost entirely in legacy systems and explicit casts in user code.
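You can demonstrate that directly. A one-liner in Python, which crosses the boundary without complaint because its integers are arbitrary precision:

import datetime

# One second past the 32-bit signed maximum: no special handling needed.
print(datetime.datetime.fromtimestamp(2**31, datetime.timezone.utc))
# 2038-01-19 03:14:08+00:00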

You can check any timestamp's proximity to the Y2038 boundary in the main converter, which warns whenever an input lands near the overflow. For background, the Y2038 explainer covers what's at risk and what's safe in detail.


Published May 4, 2026. Tagged: y2038, database, migration.
