Recent optimizations to Relay allow us to cache Docker containers, eliminating startup overhead for long pipelines and pipelines with many iterations.
To maximize the effect of these optimizations, we should dispatch successive invocations of commands in a given bundle to the same Relay instance, ensuring we hit a "hot cache".
Currently, we select a Relay at random on each invocation, meaning we could theoretically incur container startup costs once for each Relay instance hosting a given bundle.
We can solve this by remembering which Relay is chosen at random for a given bundle in a pipeline, and dispatching to that same Relay on successive invocations of commands in that bundle for the remainder of the pipeline, instead of choosing randomly each time.
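The scheme above — random selection on first use, then sticky routing for the rest of the pipeline — can be sketched roughly as follows. This is a minimal illustration, not Relay's actual dispatch code; the names (`StickyRelayRouter`, `relay_for`) are hypothetical.

```python
import random


class StickyRelayRouter:
    """Pins each bundle to one Relay for the life of a pipeline run.

    The first invocation for a bundle picks a Relay at random; every
    later invocation in the same pipeline reuses that choice, so the
    bundle's cached Docker container stays "hot". (Illustrative sketch;
    not the real Relay dispatcher.)
    """

    def __init__(self, relays):
        self.relays = list(relays)  # available Relay instances
        self.pinned = {}            # bundle id -> Relay chosen for it

    def relay_for(self, bundle_id):
        # Choose randomly only on the first invocation for this bundle;
        # all subsequent invocations reuse the pinned Relay.
        if bundle_id not in self.pinned:
            self.pinned[bundle_id] = random.choice(self.relays)
        return self.pinned[bundle_id]


# One router per pipeline run scopes the pinning to that pipeline,
# so a new pipeline may land the bundle on a different Relay.
router = StickyRelayRouter(["relay-a", "relay-b", "relay-c"])
first = router.relay_for("bundle-1")
assert all(router.relay_for("bundle-1") == first for _ in range(10))
```

Scoping the map to a single pipeline run keeps the behavior close to today's: load still spreads randomly across Relays between pipelines, but within one pipeline each bundle pays the container startup cost at most once.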