API:
- PyTime_t type
- PyTime_MIN constant (PyTime_t type)
- PyTime_MAX constant (PyTime_t type)
- double PyTime_AsSecondsDouble(PyTime_t t): convert a timestamp to a number of seconds.
- PyTime_t PyTime_Monotonic(void): similar to time.monotonic_ns().
- PyTime_t PyTime_PerfCounter(void): similar to time.perf_counter_ns().
- PyTime_t PyTime_Time(void): similar to time.time_ns().
PyTime_Monotonic(), PyTime_PerfCounter() and PyTime_Time() return 0 on error (the error is silently ignored), and clamp the clock value to the [PyTime_MIN; PyTime_MAX] range on integer overflow.
These functions have been used internally in Python since around Python 3.5 to avoid rounding issues (floating point <=> integer) at nanosecond resolution. PyTime_t is just a 64-bit signed integer.
The "nanosecond" unit is not explicit in the API: the unit is "arbitrary", even if it is nanoseconds in practice. The internal C API has functions to create PyTime_t values from seconds and from nanoseconds; I didn't add them to this initial public C API.
Cython started to use this API after the private API was removed in Python 3.13 alpha 1. Cython needs:
- the PyTime_t type
- PyTime_Time()
- PyTime_AsSecondsDouble()
The Python API reports errors as regular exceptions, whereas the C API silently ignores errors. When I designed and implemented PEP 418 (Add monotonic time, performance counter, and process time functions), I was worried about errors while reading time. I added code at Python startup to read the 3 clocks (time, perf_counter, monotonic) and fail with a fatal error if any of them failed. Many years later, I removed the check since it never failed.
The C API is designed to be convenient to use, not to be "perfect" (reporting unlikely errors). Over 10 years, I saw a single failure: a custom sandbox that blocked the syscalls used to read time. It affected a single user with a very specific setup, and the issue was in the sandbox configuration, not in Python. IMO returning 0 in the C API is the sane behavior in this case. Bothering all users with error checks just for that would be overkill.
Example of usage:
double benchmark(void)
{
    PyTime_t t1 = PyTime_PerfCounter();
    // ... code to benchmark ...
    PyTime_t t2 = PyTime_PerfCounter();
    return PyTime_AsSecondsDouble(t2 - t1);
}
The PyTime internal C API is way more complete, but I chose to start with the bare minimum for the public C API.
Pull request: python/cpython#112135