Where does time go?

Programming is all about time. A good program should be correct, which requires things to happen in the right order. It should also be efficient, which requires things to happen as fast as possible. These two objectives are often at odds: for instance, distributing computations over several threads of execution can help with efficiency, but at the cost of making it harder to maintain correctness, as the order of operations becomes non-deterministic.

Programming is all about time, but there are no mainstream programming languages (and only a handful of niche ones) that have any notion of time at all. At best, there is some support for synchronization and concurrency (e.g., waiting for a list of promises to all be resolved), but while it is generally trivial to express an idea such as “repeat something 3 times”, there is no equivalent for “repeat something for 3 seconds”.
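
As a rough TypeScript sketch of that asymmetry: the count-based version is a one-liner, while the duration-based one has to be improvised from explicit wall-clock reads. Both helpers below (repeatTimes, repeatForMillis) are hypothetical names used only for illustration:

```typescript
// Count-based repetition: trivial to express.
function repeatTimes(n: number, action: () => void): void {
  for (let i = 0; i < n; i++) {
    action();
  }
}

// Duration-based repetition: assembled by hand from clock reads.
// The duration is only approximate, since the clock is checked between
// iterations and the last call of `action` may overshoot the deadline.
async function repeatForMillis(durationMs: number, action: () => Promise<void>): Promise<void> {
  const deadline = Date.now() + durationMs;
  while (Date.now() < deadline) {
    await action();
  }
}
```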

I believe that there would be many uses for a more explicit representation of time. A simple example is timing out to avoid waiting forever for a network request or a user interaction to finish. Another simple example is recognizing a double click or double tap gesture from a user: the clicks or taps have to happen within a set time limit to be recognized as a single gesture, as opposed to two distinct events. Other examples where timing is integral: retrying a failed request with a different timeout, debouncing or throttling events, showing flipbook animations at a set frame rate that may not match the frame rate of the user’s display, &c.
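
To make the first example concrete, here is a minimal sketch of a timeout wrapper around a promise; withTimeout is a hypothetical helper, not part of any standard library, and the URL in the usage comment is a placeholder:

```typescript
// Reject a pending promise if it has not settled within the given deadline.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms);
    work.then(
      value => { clearTimeout(timer); resolve(value); },
      error => { clearTimeout(timer); reject(error); },
    );
  });
}

// Usage: give up on the request if no response arrives within 3 seconds.
// withTimeout(fetch("https://example.com/api"), 3000)
//   .then(response => console.log(response.status))
//   .catch(err => console.error(err));
```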

[Figure: Gang of Four]

Now, thinking about this for a minute should raise quite a few questions about what “repeat something for 3 seconds” would actually mean. What if we are in the middle of something when 3 seconds have passed? Then what happens to the rest of the computation? How can the runtime ensure that exactly 3 seconds have passed before interrupting the computation? What if we want to debug this program and insert a breakpoint in the middle of something? Then how does time move forward when we step through something? What if we want to test the timeout mechanism and run many different tests simulating different situations? Then do we have to actually sit there for many seconds or minutes every time the test suite runs while it is waiting for nothing to happen?
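
The last question is easy to make concrete: with the hypothetical withTimeout helper sketched above, and no way to control the clock, a test that checks the 3-second timeout really does spend 3 seconds of wall-clock time doing nothing:

```typescript
async function testTimeoutFires(): Promise<void> {
  // A promise that never settles, standing in for a request that hangs forever.
  const never = new Promise<string>(() => {});
  const start = Date.now();
  await withTimeout(never, 3000).catch(() => {
    console.log(`timed out after ~${Date.now() - start} ms of real waiting`);
  });
}
```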

Synchronization, which was mentioned above, also becomes crucial now. Our hypothetical language now has a concept of time, and in the examples seen so far, durations are mostly “empty”: making a network request or measuring the time between two user events is just waiting; these could be long-running processes, but they are very different from, say, a SAT solver that just performs a huge number of computations and keeps busy for a while until it produces a result. So, since our time is mostly spent waiting, we should be able to do something else and come back to that request when it finishes, or to that event when it actually occurs. We are now in more familiar territory, since this hints at classical concepts like threads, callbacks, promises, futures, or async/await mechanisms.
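
For instance, with async/await the “do something else and come back to it” pattern might look like the following sketch (the URL and doOtherWork are placeholders for illustration):

```typescript
async function fetchWhileWorking(): Promise<void> {
  // Start the request, but do not wait for it yet.
  const pending = fetch("https://example.com/api/data");

  // Carry on with other work while the request is in flight.
  doOtherWork();

  // Come back to the request once its result is actually needed.
  const response = await pending;
  console.log(`request finished with status ${response.status}`);
}

function doOtherWork(): void {
  console.log("doing something else in the meantime");
}
```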

All of these tools allow us to perform various long-running tasks concurrently, e.g., a network request does not have to block the whole program. We can get a promise of a future result, carry on with whatever else we were doing, and handle that result when it finally arrives. But since both time and the program have moved forward, how can we tell what state the rest of the program is in now that the result of the request is available? Is that result still relevant? Even worse, at any time during execution, how can we know whether there are requests in progress that are expected to finish at some point? This can be important for testing, since test expectations can verify that the program is in some expected state after a series of actions, but can we be sure that there is not another callback right around the corner that will screw everything up? This usually necessitates a lot of bookkeeping, with complex applications requiring the maintenance of a large amount of state. And inevitably, higher complexity means more bugs, which are harder to debug and test for because they can happen at unpredictable times.
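
One common form of that bookkeeping, sketched here for illustration, is to tag every request with a sequence number and ignore any response that arrives after a newer request has started; the names (latestRequestId, search, showResults) and the URL are placeholders:

```typescript
let latestRequestId = 0;

async function search(query: string): Promise<void> {
  const requestId = ++latestRequestId;
  const response = await fetch(`https://example.com/search?q=${encodeURIComponent(query)}`);
  const results: string[] = await response.json();

  // By the time the response arrives, the program has moved on; only apply
  // the result if no newer request has been issued in the meantime.
  if (requestId === latestRequestId) {
    showResults(results);
  }
}

function showResults(results: string[]): void {
  console.log(results);
}
```

Even this tiny example needs a piece of mutable state (latestRequestId) whose only purpose is to keep track of the fact that time has passed.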

So where does time go? Early programming language design focused on bringing structure to the control flow of a program. But anything that happens asynchronously now falls back into spaghetti code where callbacks, event handlers, promises or threads are scattered all over with no explicit structure, making maintenance, debugging and testing increasingly difficult. We will try to think of some solutions to this problem in forthcoming articles. ⚄⚄

And I forgot to even mention error handling, which may require its own future article!