
The question I get asked weekly

"Should I store timestamps as Unix seconds or milliseconds?" It's a fair question. Both work. JavaScript uses milliseconds; databases often default to seconds; APIs vary. There's no universal right answer, but there is a pragmatic one.

Here's my take, with the reasoning, the trade-offs, and the cases where you'd choose differently.

The short answer

Use milliseconds. Unless you have a specific reason not to.

The reasons:

  1. JavaScript uses milliseconds natively. Date.now(), new Date(ms), .getTime() — all millisecond-based. If any part of your stack ever touches a browser, you save a conversion at every boundary.
  2. The storage cost difference is negligible. A 64-bit integer holds either with room to spare.
  3. Millisecond precision is often the bare minimum for ordering events in distributed systems. Multiple events can occur in the same second; few occur in the same millisecond on a single machine.
  4. Most logging frameworks (Java's System.currentTimeMillis, Python's time.time() * 1000, etc.) already produce millisecond timestamps.
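Point 1 is easy to verify in a REPL. A minimal sketch showing that the built-in Date APIs round-trip milliseconds with no unit conversion at any step:

```javascript
// JavaScript's Date APIs are millisecond-native end to end.
const ms = Date.now();            // integer milliseconds since the Unix epoch
const d = new Date(ms);           // the Date constructor accepts milliseconds
console.log(d.getTime() === ms);  // true — round-trips with no conversion
console.log(d.toISOString());     // ISO 8601 with millisecond precision
```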

The case for seconds

Seconds win in these situations:

  - Interop with tools and formats defined in seconds: date +%s, cron-adjacent tooling, JWT exp/iat claims (RFC 7519 specifies seconds).
  - TTLs and expiry durations, where sub-second precision buys nothing.
  - URL query parameters, where shorter values win (more on this below).

The case for microseconds or nanoseconds

Higher precision earns its place when:

  - You're profiling or tracing, measuring durations well under a millisecond.
  - You're ordering high-frequency events on a single machine, where several can land in the same millisecond.
  - A system-level API already hands you nanoseconds (clock_gettime, Go's time package, Java's System.nanoTime for elapsed time).

For most application code, microseconds and nanoseconds are overkill and introduce annoying conversion steps. If you don't have a specific reason, don't.

The storage question, definitively

Pick a consistent format for your application and stick with it. Mixed-unit timestamps across your codebase are a constant source of bugs. Here's what to do:

Layer               Recommended type             Notes
Database column     BIGINT (64-bit), ms          Y2038-safe, JavaScript-native
JSON API field      Number (ms)                  Or ISO 8601 string for readability
Log files           ISO 8601 string              Human-readable, sortable
Protocol buffers    google.protobuf.Timestamp    Nanosecond resolution, stored efficiently
URL query params    Seconds                      Shorter is better in URLs

Notice URLs are different. You want short URLs. Using milliseconds in a URL adds 3 characters per timestamp, which compounds if you have multiple. A 10-digit seconds value is fine.
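Trimming a millisecond value to seconds for a URL, and widening it back on read, is a one-liner each way. A sketch (the truncation discards sub-second precision by design):

```javascript
const ms = 1737386400123;
const urlValue = Math.floor(ms / 1000);   // 1737386400 — 10 digits, URL-friendly
const restoredMs = urlValue * 1000;       // 1737386400000 — sub-second part is gone
console.log(String(urlValue).length);     // 10
```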

The detection problem

If you receive a timestamp value from an external source and don't know whether it's seconds or ms, count the digits:

  - 10 digits: seconds (e.g. 1737386400)
  - 13 digits: milliseconds (e.g. 1737386400000)
  - 16 digits: microseconds
  - 19 digits: nanoseconds

The boundaries are based on the magnitude required to express current dates. A "current" timestamp in seconds is between 10^9 and 10^10. In milliseconds, between 10^12 and 10^13. The gap is wide enough that there's no ambiguity for any plausible value.

The smart-parse logic in the main converter does exactly this digit-count check, and you can paste any value into it to see what gets detected. The batch converter does the same on each line independently, so a mixed-format file is fine.
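The heuristic fits in a few lines. A sketch under my own function names (not the converter's actual code), assuming non-zero numeric input:

```javascript
// Guess the unit of a Unix timestamp from its digit count.
function detectUnit(value) {
  const digits = String(Math.trunc(Math.abs(value))).length;
  if (digits <= 10) return "seconds";       // current dates: 10 digits
  if (digits <= 13) return "milliseconds";  // current dates: 13 digits
  if (digits <= 16) return "microseconds";
  return "nanoseconds";
}

// Normalize any detected unit to canonical integer milliseconds.
function toMillis(value) {
  switch (detectUnit(value)) {
    case "seconds":      return Math.round(value * 1000);
    case "milliseconds": return Math.round(value);
    case "microseconds": return Math.round(value / 1000);
    default:             return Math.round(value / 1e6);
  }
}
```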

The float trap

Some languages and tools represent Unix timestamps as floating-point seconds — that is, seconds with a fractional component. Python's time.time() returns a float. PostgreSQL's extract(epoch from now()) returns a numeric (a double precision float before PostgreSQL 14).

This sounds nice (sub-second precision without changing units), but it has a real precision problem. A 64-bit double can only represent integers exactly up to 2^53 (about 9 quadrillion). For current Unix seconds this is fine, and even integer milliseconds stay exact for hundreds of millennia — but store microseconds in a double and exactness runs out around the year 2255.

More immediately: floats don't compare cleanly. 1737386400.0 == 1737386400 is true in JavaScript, but 1737386400.000001 == 1737386400 is false. If you're using floats for timestamps in a hash key, equality comparison, or database join, you can get phantom mismatches.

The rule: store as integer milliseconds. Convert to float only when you need to display sub-second precision. Never store the float.
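A sketch of the rule in practice — the integer is the source of truth, and the float exists only at the display boundary:

```javascript
const storedMs = 1737386400123;                       // canonical: integer milliseconds
const display = (storedMs / 1000).toFixed(3) + " s";  // fractional seconds, display only
// Float equality is the trap the integer avoids:
console.log(1737386400.0 === 1737386400);       // true
console.log(1737386400.000001 === 1737386400);  // false
```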

The "ISO 8601 string is better" argument

Some teams prefer storing timestamps as ISO 8601 strings (2026-05-15T14:30:00.123Z) instead of integers. The arguments:

  - Human-readable in raw query results, logs, and debugging sessions.
  - Self-describing: the unit and timezone are part of the value.
  - Lexicographically sortable, as long as everything is UTC at a fixed precision.

The arguments against:

  - Larger: roughly 24 bytes versus 8 for a BIGINT, which matters at scale.
  - Slower to compare and index than integers in most databases.
  - Every read requires parsing, and equality depends on consistent formatting (an offset of +00:00 versus Z breaks naive string comparison).

For large transactional tables with billions of rows: integer milliseconds. For audit logs, debug tables, and anything you'd read manually: ISO 8601 string. For everything else: pick one and be consistent.
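Whichever side you land on, the two representations round-trip losslessly at millisecond precision, so converting at a storage boundary is cheap. A sketch:

```javascript
const iso = "2026-05-15T14:30:00.123Z";
const ms = new Date(iso).getTime();      // parse ISO 8601 to integer milliseconds
const back = new Date(ms).toISOString(); // and back — identical for UTC + ms precision
console.log(back === iso);               // true
```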

The final answer

If you're starting fresh and have no constraints: 64-bit integer milliseconds. Y2038-safe, JavaScript-native, big enough for any practical precision need, small enough for high-volume tables.

If you have an existing system using seconds, don't change it just for the sake of it. Conversion costs aren't zero, and consistency matters more than theoretical purity.

If anyone tells you that you need nanosecond precision, ask them what event-ordering problem they're actually trying to solve. The answer is usually "we don't need it; ms is fine."


Published April 9, 2026. Tagged: design, database, conventions.

← Back to blog  ·  Try the converter