Almost all computers count time as seconds from the epoch (midnight, January 1, 1970). That count then gets converted into a readable time for display, sometimes by way of UTC, but the raw count is what's actually stored.
You’re referring to UNIX time. And you’re correct.
It's a count of how many seconds have elapsed since midnight, January 1st, 1970, UTC.
Local computers sync that clock, still in UTC, from time servers (usually over NTP), then translate the Unix time in UTC into a human-readable format in the local time zone.
All computers are still keeping track of time from the epoch in UTC.
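In C, that round trip looks roughly like this; gmtime vs. localtime is the UTC-versus-local-zone step (a minimal sketch, output format is my choice):

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);   /* seconds since 1970-01-01 00:00:00 UTC */
    char buf[64];

    /* Same epoch count rendered two ways. */
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&now));
    printf("UTC:   %s\n", buf);

    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", localtime(&now));
    printf("Local: %s\n", buf); /* uses the system/TZ time zone */

    printf("Raw:   %lld\n", (long long)now);
    return 0;
}
```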
Unix time is far less universal in computing than you might hope. A few exceptions I’m aware of:
Most real-time clock hardware stores the datetime as separate binary-coded decimal fields, one byte each for the month, day, hours, minutes, and seconds, and often the year too (resulting in a year 2100 limit; see the decoding sketch just after this list).
Python's datetime, Win32's SYSTEMTIME, Java's LocalDateTime, and MySQL's DATETIME similarly have separate attributes for year, month, day, etc.
NTFS stores a 64-bit number representing time elapsed since the year 1601 in 100-nanosecond resolution for things like file creation time.
NTP uses an epoch of midnight 1900-01-01, with unsigned seconds elapsed and an unusual base-2 fractional part.
GPS uses an epoch of midnight 1980-01-06 with a week number and time within the week as separate values.
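As promised, a minimal sketch of the BCD decoding. The register order and sample values here are hypothetical (loosely modeled on DS1307-style chips; real layouts vary by part):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode one binary-coded-decimal byte: high nibble = tens, low nibble = ones. */
static unsigned bcd_to_bin(uint8_t b) {
    return (b >> 4) * 10 + (b & 0x0F);
}

int main(void) {
    /* Hypothetical raw RTC register dump: sec, min, hour, day, month, year. */
    uint8_t regs[6] = {0x59, 0x34, 0x12, 0x28, 0x09, 0x25};

    printf("20%02u-%02u-%02u %02u:%02u:%02u\n",
           bcd_to_bin(regs[5]), bcd_to_bin(regs[4]), bcd_to_bin(regs[3]),
           bcd_to_bin(regs[2]), bcd_to_bin(regs[1]), bcd_to_bin(regs[0]));
    /* The two-digit year is why these clocks hit a wall in 2100. */
    return 0;
}
```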
Converting between time formats is a common source of bugs, and each format overflows differently: depending on the representation, a time value might overflow in the year 2036, 2038, 2070, 2100, 2156, or 9999.
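To make the conversion hazards concrete, here's a sketch of converting NTFS and NTP timestamps to Unix time. The epoch offsets are the well-known constants from the published formats, but the sample inputs are made up:

```c
#include <stdint.h>
#include <stdio.h>

/* NTFS FILETIME: 100 ns ticks since 1601-01-01 UTC.
 * The Unix and NTFS epochs are 11644473600 seconds apart. */
#define EPOCH_DIFF_1601_1970  11644473600LL
#define TICKS_PER_SEC         10000000LL

static int64_t filetime_to_unix(uint64_t ft) {
    return (int64_t)(ft / TICKS_PER_SEC) - EPOCH_DIFF_1601_1970;
}

/* NTP: 32-bit unsigned seconds since 1900-01-01 plus a 32-bit
 * base-2 fraction (each unit is 1/2^32 of a second).
 * The 1900 and 1970 epochs are 2208988800 seconds apart. */
#define EPOCH_DIFF_1900_1970  2208988800ULL

static double ntp_to_unix(uint32_t sec, uint32_t frac) {
    /* Caveat: the unsigned 32-bit seconds field rolls over in 2036. */
    return (double)(sec - EPOCH_DIFF_1900_1970) + frac / 4294967296.0;
}

int main(void) {
    /* Both input values are illustrative, not real captures. */
    printf("unix from filetime: %lld\n",
           (long long)filetime_to_unix(133890048000000000ULL));
    printf("unix from ntp:      %.6f\n",
           ntp_to_unix(3958416000U, 2147483648U)); /* fraction = 0.5 s */
    return 0;
}
```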
Also, Unix time is often paired with a separate nanoseconds component for increased resolution, as in C's struct timespec and modern *nix filesystems (ext4/xfs/btrfs/zfs, etc.).
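A minimal sketch of reading that split representation on a POSIX system:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;

    /* Whole seconds since the Unix epoch plus a 0..999999999 ns remainder. */
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }
    printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}
```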