The question I get asked weekly
"Should I store timestamps as Unix seconds or milliseconds?" It's a fair question. Both work. JavaScript uses milliseconds; databases often default to seconds; APIs vary. There's no universal right answer, but there is a pragmatic one.
Here's my take, with the reasoning, the trade-offs, and the cases where you'd choose differently.
The short answer
Use milliseconds. Unless you have a specific reason not to.
The reasons:
- JavaScript uses milliseconds natively. `Date.now()`, `new Date(ms)`, and `.getTime()` are all millisecond-based (see the sketch after this list). If any part of your stack ever touches a browser, you save a conversion at every boundary.
- The storage cost difference is negligible. A 64-bit integer holds either with room to spare.
- Millisecond precision is often the bare minimum for ordering events in distributed systems. Multiple events can occur in the same second; few occur in the same millisecond on a single machine.
- Most logging frameworks and standard clocks (Java's `System.currentTimeMillis()`, Python's `time.time() * 1000`, etc.) already produce millisecond timestamps.
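To make the first point concrete, here's a minimal sketch of a millisecond value flowing through JavaScript with no unit conversion anywhere (the values are illustrative):

```ts
// Milliseconds pass through untouched: no unit conversion at any step.
const now: number = Date.now();    // integer ms since the Unix epoch
const d = new Date(now);           // the Date constructor takes ms directly

console.log(d.getTime() === now);  // true: .getTime() hands back the same ms
console.log(d.toISOString());      // format for humans only at the edges
```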
The case for seconds
Seconds win in these situations:
- Database columns where space matters at scale. On a 10TB table where the timestamp column is a hot index, the 4 bytes per row you save with a 32-bit int (seconds only, and Y2038-vulnerable) over an 8-byte bigint isn't trivial. But at 64-bit width, seconds and milliseconds both fit in 8 bytes, so the space argument only favors seconds if you're willing to accept the 32-bit Y2038 risk.
- HTTP/cookie expirations are specified in seconds (RFC 6265). If you're writing infrastructure that interacts with cookies or HTTP cache headers, working in seconds avoids confusion (see the sketch after this list).
- Cron-style schedulers almost always work in seconds. Sub-second scheduling is rare.
- Human-readable logs. A 10-digit Unix timestamp (seconds) is recognizable at a glance. A 13-digit millisecond timestamp looks like an unrelated ID.
- External APIs you don't control. If you're integrating with an API that returns seconds, store seconds. Don't convert at every read.
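For the cookie case, one way to keep the application's canonical milliseconds while speaking seconds at the HTTP boundary might look like this (the cookie name and TTL are made-up placeholders):

```ts
// RFC 6265 cookie lifetimes are in seconds: convert once, at the boundary.
const sessionTtlMs = 7 * 24 * 60 * 60 * 1000;         // app config stays in ms
const maxAgeSeconds = Math.floor(sessionTtlMs / 1000);

// "session" and its value are hypothetical.
const setCookie = `session=abc123; Max-Age=${maxAgeSeconds}; Path=/; HttpOnly`;
console.log(setCookie); // session=abc123; Max-Age=604800; Path=/; HttpOnly
```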
The case for microseconds or nanoseconds
Higher precision earns its place when:
- You're building a database engine, queue, or distributed system where event ordering at sub-millisecond granularity matters. Cassandra's write timestamps are microseconds, and many modern OLTP databases use microsecond or nanosecond precision internally.
- You're doing high-frequency trading or scientific measurement. Real sub-millisecond events need real sub-millisecond timestamps.
- You're working with a system that demands it. Core Data on Apple platforms stores dates as fractional seconds since 2001 (a double, so roughly sub-microsecond resolution for current dates). PostgreSQL's `timestamptz` stores microseconds (see the sketch below).
For most application code, microseconds and nanoseconds are overkill and introduce annoying conversion steps. If you don't have a specific reason, don't.
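Here's a sketch of that conversion friction: carrying a microsecond timestamp, say from a PostgreSQL `timestamptz`, into JavaScript, where `Date` only holds milliseconds (the value itself is illustrative):

```ts
// A 16-digit microsecond timestamp: BigInt keeps every digit exact.
const micros = 1778855400123456n;

const ms = Number(micros / 1000n);             // truncated to what Date can hold
const leftoverMicros = Number(micros % 1000n); // the 456 µs that Date must drop

console.log(new Date(ms).toISOString(), `+${leftoverMicros}µs`);
// 2026-05-15T14:30:00.123Z +456µs
```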
The storage question, definitively
Pick a consistent format for your application and stick with it. Mixed-unit timestamps across your codebase are a constant source of bugs. Here's what to do:
| Layer | Recommended type | Notes |
|---|---|---|
| Database column | BIGINT (64-bit) ms | Y2038-safe, JavaScript-native |
| JSON API field | Number (ms) | Or ISO 8601 string for readability |
| Log files | ISO 8601 string | Human-readable, sortable |
| Protocol buffers | google.protobuf.Timestamp | Resolution is nanos, stored efficiently |
| URL query params | Seconds | Shorter is better in URLs |
Notice URLs are different. You want short URLs. Using milliseconds in a URL adds 3 characters per timestamp, which compounds if you have multiple. A 10-digit seconds value is fine.
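As a sketch of the table in code, here's one canonical millisecond value rendered for each layer (variable names are illustrative):

```ts
// One canonical value, formatted per layer.
const ms = 1778855400123;                        // canonical: integer ms

const dbValue = BigInt(ms);                      // BIGINT column, 64-bit
const jsonField = ms;                            // JSON number, still ms
const logLine = new Date(ms).toISOString();      // ISO 8601: readable, sortable
const urlParam = String(Math.floor(ms / 1000));  // seconds: 3 fewer digits

console.log(dbValue, jsonField, logLine, urlParam);
```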
The detection problem
If you receive a timestamp value from an external source and don't know whether it's seconds or ms, count the digits:
- 10 digits (1.7 billion-ish): Unix seconds, current era
- 13 digits (1.7 trillion-ish): Unix milliseconds, current era
- 16 digits: microseconds
- 19 digits: nanoseconds
The boundaries are based on the magnitude required to express current dates. A "current" timestamp in seconds is between 10^9 and 10^10. In milliseconds, between 10^12 and 10^13. The gap is wide enough that there's no ambiguity for any plausible value.
The smart-parse logic in the main converter does exactly this digit-count check, and you can paste any value into it to see what gets detected. The batch converter does the same on each line independently, so a mixed-format file is fine.
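Here's a minimal sketch of that digit-count heuristic; it's not the converter's actual source, just the idea, and it assumes positive, current-era values:

```ts
type Unit = "seconds" | "milliseconds" | "microseconds" | "nanoseconds";

// Digit-count detection: only meaningful for positive, current-era timestamps.
function detectUnit(value: number | bigint): Unit | null {
  switch (value.toString().length) {
    case 10: return "seconds";
    case 13: return "milliseconds";
    case 16: return "microseconds";
    case 19: return "nanoseconds";
    default: return null; // out of range: not a current-era epoch value
  }
}

detectUnit(1737386400);           // "seconds"
detectUnit(1737386400000);        // "milliseconds"
detectUnit(1737386400000000000n); // "nanoseconds" (BigInt: 19 digits exceed 2^53)
```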
The float trap
Some languages and tools represent Unix timestamps as floating-point seconds, i.e. seconds with a fractional component. Python's `time.time()` returns a float. PostgreSQL's `extract(epoch from now())` returns a numeric.
This sounds nice (sub-second precision without changing units), but it has a real precision limit. A 64-bit double can only represent integers exactly up to 2^53 (about 9 quadrillion). Current Unix seconds and milliseconds fit comfortably: integer milliseconds stay exact until roughly the year 287,000. But a float-seconds value near today's epoch can only resolve a few hundred nanoseconds, and integer microseconds exhaust the exactness around the year 2255.
More immediately: floats don't compare cleanly. `1737386400.0 == 1737386400` is true in JavaScript, but `1737386400.000001 == 1737386400` is false. If you're using floats for timestamps in a hash key, equality comparison, or database join, you can get phantom mismatches.
The rule: store as integer milliseconds. Convert to float only when you need to display sub-second precision. Never store the float.
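A quick sketch of both failure modes, plus the rule, in JavaScript (values are illustrative):

```ts
// Floats only compare equal when they're exactly equal.
console.log(1737386400.0 === 1737386400);      // true
console.log(1737386400.000001 === 1737386400); // false: phantom mismatch

// Doubles run out of integer exactness at 2^53.
console.log(2 ** 53 === 2 ** 53 + 1);          // true: the +1 is silently lost

// The rule in code: keep the integer, derive a float only for display.
const ms = 1737386400123;
console.log((ms / 1000).toFixed(3));           // "1737386400.123"
```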
The "ISO 8601 string is better" argument
Some teams prefer storing timestamps as ISO 8601 strings (`2026-05-15T14:30:00.123Z`) instead of integers. The arguments:
- Self-describing — anyone reading the database knows what it is
- Lexicographic sort = chronological sort (as long as every value uses the same UTC offset and a fixed-width format)
- No precision-loss risks
The arguments against:
- Bigger column size (~25 bytes vs 8)
- Slower to parse on read
- Slower to compare than integer math
- Locale-dependent parsing if you're sloppy about the format
For large transactional tables with billions of rows: integer milliseconds. For audit logs, debug tables, and anything you'd read manually: ISO 8601 string. For everything else: pick one and be consistent.
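Whichever you pick, converting between the two is lossless at millisecond precision, as in this sketch (illustrative value):

```ts
// Integer ms <-> ISO 8601 round-trips exactly at millisecond precision.
const ms = 1778855400123;
const iso = new Date(ms).toISOString(); // "2026-05-15T14:30:00.123Z"
const back = Date.parse(iso);           // parse straight back to integer ms

console.log(back === ms);               // true: nothing lost in either direction
```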
The final answer
If you're starting fresh and have no constraints: 64-bit integer milliseconds. Y2038-safe, JavaScript-native, big enough for any practical precision need, small enough for high-volume tables.
If you have an existing system using seconds, don't change it just for the sake of it. Conversion costs aren't zero, and consistency matters more than theoretical purity.
If anyone tells you you need nanosecond precision, ask them what event-ordering problem they're actually trying to solve. The answer is usually "we don't need it; ms is fine."
Published April 9, 2026. Tagged: design, database, conventions.