1. Introduction and context of the problem
In [1], the author introduced the Biker-hiker problem, which entails finding optimal schemes for transporting n travellers, who collectively have k bicycles, to their common destination in the minimum possible time. Optimal schemes were those in which each traveller rode ${k}/{n}$ of their journey by bicycle. Schemes were represented by $n\times n$ binary matrices and an algorithm was devised to determine when a matrix represented an optimal scheme, based on the Dyck language [4] of well-formed strings of parentheses. A pair of mutually transpose matrices provided two optimal scheme types, the first of which minimized the number of cycle handovers, while the transpose scheme kept the number and separation of the travelling cohorts to a minimum.
Here we re-interpret this as a problem of n objects manufactured by n agents who have access to k machines that work faster than the agents. A modified version of the problem in which the machines are more versatile in nature is the basis of this paper. We re-imagine the scenario by saying that each of the n agents has an identical task to complete. For convenience of description, we take this task to be the manufacture of an object. There are $k\leq n$ identical machines available that can execute the task faster than an unsupported agent. If a machine is not in use, an agent may continue to manufacture the object they are constructing by passing it to that machine, but only if the machine is configured to continue the build from this point. This constraint corresponds to the fact that in the Biker-hiker problem, a traveller may only mount a bicycle if they and a bicycle are at the same point in the journey. In contrast, if an agent takes over the manufacture of an object, they are capable of recognizing what point in the build has been reached for this object and continue its construction from that point. This corresponds to the ability of a traveller in the Biker-hiker context to continue at any point of the journey on foot.
To formulate the subject of this paper, we alter the nature of the machines by allowing them to share with agents the capacity to continue the build of an object from any given point in its construction. In the original travelling setting, this would correspond to bicycles that could instantaneously move from their current position to any other to be used by a traveller.
Since the machines are faster than the agents, in any optimal scheme (one that minimizes the total production time of the order), no machine will ever be idle. In view of this, there is no loss in adjusting the setting of the problem to view the k machines as agents in their own right, and at all times, some set of k objects will be undergoing construction by the machines. There is then no need to have any more than $n-k$ other agents available to contribute to the building of the n objects.
Taken all together, these observations allow us to settle on the final make-up of the problem. There are two types of agent, identical in all respects except that one type works faster than the other. Let us say they number $k_{1}$ and $k_{2}$ , with $k_{1}+k_{2}=n$ , the total number of objects to be made.
Indeed, we shall widen the setting by allowing for an arbitrary number m of agent types and for any number $p \geq n$ objects to be manufactured. This general problem is formally stated in Section 2, where we present one optimal scheme type under the assumption that any time lost in halting the process to pass partially made objects between agents is negligible compared with the overall manufacturing time. In Section 3, however, we return to the $m=2$ case and introduce methods that, in general, greatly reduce the number of stoppages involved in executing an optimal scheme.
2. The general problem
2.1. Definition and principal features of an optimal scheme
We have an order to manufacture $p\geq n$ identical objects using n agents, and we wish to do this in the minimum possible time. We shall refer to this challenge as a p-object problem. Each of the n agents at our disposal produces one object at a time, but with varying speeds. There are $k_{i}$ agents of type i and m types are available, so that $k_{1}+k_{2}+\cdots +k_{m}=n$ . Agent type i takes $t_{i}$ time units (which we shall call hours) to complete the manufacture of one object. A stipulated process for completing the order will be known as a p-scheme, or simply a scheme. At any point during the process, an agent may be halted and its partially constructed object replaced by another partially built object. This second object may be at a different stage of construction, but all agents are capable of recognizing this and can continue the build from the current state. Our initial analysis will assume that the time to pass a partially built object from one agent to another is negligible compared with the overall build time.
Observe that if a scheme S has the property that no agent is ever idle and all of them simultaneously complete the build of the object in hand (in time H say), then S is optimal, for such a scheme is working at maximum capacity throughout the execution of the order. We exhibit the existence of such a scheme S in the general case in Section 2.2. Since it is not possible to exceed the production of the equivalent of n objects in time H (even allowing for partially completed builds), it follows that in any optimal scheme, all agents must complete their build at time H for there to be n completed objects at this minimum time. We illustrate this principle through a simple example.
Example 2.1. Take $k_{1}=k_{2}=1$ (so that $n=m=2$ ), and let $t_{1}=1$ , $t_{2}=2$ . For both agents to work continuously until the order is completed, we exchange objects at just the right moment, which in this case is the $40$ minute mark, and then both objects are fully built after $80$ minutes.
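As an illustration, the arithmetic of Example 2.1 may be re-derived with a few lines of exact rational arithmetic. The sketch below is our own check (the variable names are ours and do not appear elsewhere in the paper): it recomputes the harmonic optimum, the exchange time and the build state of both objects.

```python
from fractions import Fraction

t1, t2 = Fraction(60), Fraction(120)      # minutes per object for the two agents
H = 2 / (1 / t1 + 1 / t2)                 # harmonic optimum for n = 2 objects, in minutes
swap = H / 2                              # one atomic unit = H / n = 40 minutes

# completed fraction of each object at time H (built by one agent, finished by the other)
obj1 = swap / t1 + (H - swap) / t2
obj2 = swap / t2 + (H - swap) / t1
print(H, swap, obj1, obj2)                # expected: 80, 40, 1, 1
```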
Proposition 2.2. Given the existence of an optimal solution scheme S for the n-object problem (featuring n agents), an optimal solution exists for the p-object problem for any $p\geq n$ .
Proof. Write $p = an+b$ for a positive integer a and a nonnegative integer b, with b satisfying $0\leq b\leq n-1$ . Act the given optimal scheme S for the n-object problem to produce the first set of n manufactured objects. Then repeat S a further $a - 1$ times, yielding an output of $an$ objects, which have been manufactured in the least possible time as no agent has had an idle moment. There remain b objects still to be produced, and so we act an optimal b-object scheme on a set of b of our fastest agents to complete the process in minimum time, thereby solving our optimization problem.
It follows that to find examples of optimal schemes, we may henceforth restrict attention to the case where $p = n$ , and so the numbers of agents and objects match.
Proposition 2.3
(a) In any optimal scheme S for the n-object problem:
(i) the time H required to complete the scheme is the harmonic mean of the completion times of the individual agents,
(2.1) $$ \begin{align} H=n\bigg(\sum_{i=1}^{m}k_{i}t_{i}^{-1}\bigg)^{-1}; \end{align} $$
(ii) the proportion $p_{j}$ of the build constructed by the set of type j agents is
(2.2) $$ \begin{align} p_{j}=\frac{k_{j}}{t_{j}}\bigg(\sum_{i=1}^{m}k_{i}t_{i}^{-1}\bigg)^{-1}. \end{align} $$
(b) Conversely, if all agents of a scheme begin simultaneously and work without interruption for the duration H specified by (2.1), then in doing so, they have completed an optimal scheme.
Proof. (a) (i) In an optimal solution, all agents work at full capacity for some common time length, H. Since an agent of type i takes $t_{i}$ hours to make an object, its production rate is $t_{i}^{-1}$ objects/hour. The combined rate of production, R, of all the agents in objects/hour is therefore given by the sum
$$ \begin{align*} R=\sum_{i=1}^{m}k_{i}t_{i}^{-1}. \end{align*} $$
The time taken for the agents to produce the equivalent of $1$ object is therefore $R^{-1}$ , and so the total time H to manufacture the order of n objects is given by $nR^{-1}$ , which is the harmonic mean of the individual times, as stated in (2.1).
(ii) The production rate of any agent of type j is $t_{j}^{-1}$ objects/hour. Collectively, while executing an optimal scheme, the $k_{j}$ agents of type j produce
$$ \begin{align*} \frac{k_{j}}{t_{j}}H=\frac{k_{j}}{t_{j}}\,nR^{-1} \end{align*} $$
objects, and so the proportion $p_j$ of the n objects produced by the type j agents is as stated in (2.2).
(b) The combined work rate of the set of agents is R, and since $H=nR^{-1}$ , it follows that after time H, the agents have produced the equivalent of $HR = n$ objects, none of which could have been completed before time H as that would leave some agent idle. Therefore, their action represents an optimal scheme.
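For readers who wish to experiment, the formulas (2.1) and (2.2) are easy to evaluate exactly. The sketch below is our own illustration; the function name harmonic_optimum and its interface are chosen for this illustration only.

```python
from fractions import Fraction

def harmonic_optimum(k, t):
    """Return (H, [p_j]) from (2.1) and (2.2) for agent counts k[i] and times t[i]."""
    n = sum(k)
    R = sum(Fraction(ki) / Fraction(ti) for ki, ti in zip(k, t))  # combined rate, objects/hour
    H = n / R                                                     # harmonic mean of the times
    p = [Fraction(ki) / Fraction(ti) / R for ki, ti in zip(k, t)] # proportion built by type j
    return H, p

print(harmonic_optimum([1, 1], [1, 2]))        # Example 2.1: H = 4/3 hours
print(harmonic_optimum([1, 1, 1], [3, 6, 4]))  # Example 2.6: H = 4 hours
```

Working with exact rationals avoids rounding artefacts when H is used to define the atomic time unit below.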
The partition P of the time interval I of duration H into n equal intervals is the basis of the fundamental optimal scheme we shall introduce, as each of these subintervals is the length of time taken by the agents of an optimal scheme to collectively build the equivalent of one object.
Definition 2.4
(a) We shall refer to H as defined in (2.1) as the harmonic optimum time for an optimal n-object scheme.
(b) The partition P divides I into n intervals each of length ${H}/{n}=R^{-1}$ . We call $R^{-1}$ the atomic time unit (a.u.) for an optimal scheme.
(c) A scheme is uniform if, for all $i (1\leq i\leq m)$ , each object is worked on continuously and is worked on by type i agents for exactly $k_{i}$ a.u. We shall refer to $k_{i}$ as the type i quota for objects in a uniform scheme.
Proposition 2.5
(a) Let S be an optimal scheme for the n-object problem. If, for all i, the time each object is worked on by type i agents is the same for all objects, then S is a uniform scheme.
(b) All uniform schemes are optimal.
(c) For $m=2$ , a scheme S is optimal if and only if S is uniform.
Proof. (a) Since S is optimal, all agents work for H hours, which equals n a.u., and so the sum of the total times worked by type i agents is $nk_{i}$ a.u. If each of the n objects were worked on by type i agents for $K_{i}$ a.u., then the sum total work time by the type i agents would also equal $nK_{i}$ , whence $K_{i}=k_{i}$ , and so S is uniform.
(b) Suppose that S is a uniform scheme for the n-object problem. Then each object is worked on for $\sum _{i=1}^m k_i = n$ a.u. Since every object is worked on continuously for H hours, with the same amount of time assigned to each agent type, all objects have reached the same point in their build, which must be full completion as all agents have worked to full capacity throughout. Therefore, S is optimal.
(c) If S is uniform, then S is optimal by item (b), so it only remains to check that if $m=2$ and S is an optimal scheme, then S is uniform. For any pair of objects O and $O'$, let $K_{1}$ and $K_{1}'$ be the respective times, measured in hours, that each object is worked on by type-$1$ agents. Then, since any two objects O and $O'$ are each fully built in the common time H of an optimal scheme,
$$ \begin{align*} \frac{K_{1}}{t_{1}}+\frac{H-K_{1}}{t_{2}}=1=\frac{K_{1}'}{t_{1}}+\frac{H-K_{1}'}{t_{2}}. \end{align*} $$
Since $t_1\neq t_2$ , we get $K_1=K_{1}'$ , whence it follows from item (a) that S is uniform.
Example 2.6. For $m\geq 3$ , it is not necessarily the case that all optimal schemes are uniform. To see this, take $k_{1}=k_{2}=k_{3}=1,$ so that $n=m=3$ . Let the completion times of agents $A_{1},A_{2}$ and $A_{3}$ be respectively $3,6$ and $4$ .
There is a nonuniform optimal scheme S constructed as follows. Agents $A_{1}$ and $A_{2}$ exchange objects $O_{1}$ and $O_{2}$ after $2$ hours, while agent $A_{3}$ works $O_{3}$ for $4$ hours until completion. After $4$ hours, the proportion of each of the objects $O_{1}$ and $O_{2}$ that has been completed will be $2(1/3+1/6)=1$ , and so all three objects are completed in $H=4$ hours. Therefore, S is optimal but S is not uniform as it is not the case that each object is worked on by each type of agent for the same length of time. We shall expand on this in Section 2.2.
2.2. Optimal scheme construction
Definition 2.7 (The n-cyclic scheme).
We label the n agents $A_{1},A_{2},\ldots ,A_{n}$ in a freely chosen order. We label the n objects $O_{1},O_{2},\ldots ,O_{n}$ , where $O_{i}$ is the object that begins to be made by agent $A_{i}$ in the first interval $I_{1}$ . The objects are then cycled around through the agents: picture all the agents arranged in a circle. At the end of each time interval $I_{j}$ , a whistle blows. Each agent then takes the object it is currently working on and passes it to the agent on its right to continue the build. In symbols, at the end of interval $I_{j}$ , the object currently being worked on by $A_{i}$ is handed to $A_{i+1}$ , where indices are read modulo n, so that $A_{n+1}=A_{1}$ .
Theorem 2.8. The n-cyclic scheme, $C_{n}$ , is optimal.
Proof. By examining the evolution of $O_{i}$ in $C_{n}$ , we see that in the successive intervals $I_{1},I_{2},\ldots ,I_{j},\ldots ,I_{n}$ , object $O_{i}$ is worked on by the respective agents
$$ \begin{align*} A_{i},A_{i+1},\ldots ,A_{n},A_{1},\ldots ,A_{i-1}. \end{align*} $$
Therefore, $O_{i}$ is worked on by each of the n agents exactly once, and for the same length of time, which is $1$ a.u. Since $H = n$ a.u., it follows from Proposition 2.3(b) that $C_{n}$ is optimal.
Example 2.9. Let us take $k_{1}=3,k_{2}=4$ and $k_{3}=1$ (so that $n=8$ ), with $t_{1}=1,t_{2}=2$ and $t_{3}=4$ . Then, in objects/hour,
$$ \begin{align*} R=\frac{3}{1}+\frac{4}{2}+\frac{1}{4}=\frac{21}{4}. \end{align*} $$
Each of the $8$ intervals equals
$$ \begin{align*} R^{-1}=\frac{4}{21}\ \text{hours}. \end{align*} $$
The proportion of the objects built by each agent type is
$$ \begin{align*} p_{1}=\frac{3}{1}\cdot \frac{4}{21}=\frac{4}{7},\qquad p_{2}=\frac{4}{2}\cdot \frac{4}{21}=\frac{8}{21},\qquad p_{3}=\frac{1}{4}\cdot \frac{4}{21}=\frac{1}{21}. \end{align*} $$
Let us choose to order the agents in increasing execution time. We may track the build of any of the objects. For instance, at the end of the sixth stage, $O_{5}$ will have been worked on by type-2 agents for the first three stages ( $A_{5},A_{6}$ and $A_{7}$ , representing ${3}/{4}$ of the type-$2$ quota); for the fourth stage, by the type-3 agent $A_{8}$ ; and for the fifth and sixth stages, by the type-1 agents $A_{1}$ and $A_{2}$ (representing ${2}/{3}$ of the type-1 quota). Hence, the proportion of the build of $O_{5}$ completed at the end of Stage 6 is
$$ \begin{align*} 3\cdot \frac{2}{21}+1\cdot \frac{1}{21}+2\cdot \frac{4}{21}=\frac{15}{21}=\frac{5}{7}. \end{align*} $$
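The bookkeeping of Example 2.9 can also be reproduced by simulating the cyclic handover directly. The code below is our own minimal sketch of $C_{8}$ (the variable names are ours); it recovers the ${5}/{7}$ figure above and confirms that every object is complete after $8$ a.u.

```python
from fractions import Fraction

# A_1..A_3 are type 1 (rate 1), A_4..A_7 are type 2 (rate 1/2), A_8 is type 3 (rate 1/4)
rates = [Fraction(1)] * 3 + [Fraction(1, 2)] * 4 + [Fraction(1, 4)]
n = len(rates)
au = 1 / sum(rates)                     # atomic unit = R^{-1} = 4/21 hours

built = [Fraction(0)] * n               # built[i] = completed fraction of O_{i+1}
for stage in range(1, n + 1):           # n stages of one a.u. each
    for i in range(n):
        agent = (i + stage - 1) % n     # in Stage j, O_{i+1} is with A_{i+j-1} (mod n)
        built[i] += rates[agent] * au
    if stage == 6:
        print(built[4])                 # proportion of O_5 built after Stage 6: 5/7

print(built)                            # every entry equals 1 after n = 8 stages
```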
2.3. Schemes that run uniform sub-schemes in parallel
Example 2.6 was a simple instance of two uniform schemes running in parallel. We explore this idea further using properties of harmonic means.
Lemma 2.10
(a) Let T be a finite list of positive numbers (so that repeats are allowed) with harmonic mean $H(T)=H$ . Let U be a sub-list of T strictly contained in T. If $H(U)=H$ , then $H(T\setminus U)=H(U)=H$ .
(b) If $U,V$ are finite lists of positive numbers with $H(U)=H(V)=h$ , then $H(U\cup V)=h$ , where $U\cup V$ denotes the union of U and V as lists, so there is a distinct element of $U\cup V$ for each member of U and of V.
Proof. (a) By re-indexing as necessary, we may write $U=\{a_{1},a_{2},\ldots ,a_{m}\}$ and $T\setminus U=\{a_{m+1},a_{m+2},\ldots ,a_{m+n}\}$ for some $m,n\geq 1$ . Let
$$ \begin{align*} S_{1}=\sum_{i=1}^{m}a_{i}^{-1}\quad \text{and}\quad S_{2}=\sum_{i=m+1}^{m+n}a_{i}^{-1}. \end{align*} $$
We are given that
$$ \begin{align*} H(U)=\frac{m}{S_{1}}=\frac{m+n}{S_{1}+S_{2}}=H(T)=H. \end{align*} $$
Cross-multiplying gives $m(S_{1}+S_{2})=(m+n)S_{1}$ , so that $mS_{2}=nS_{1}$ , and hence $H(T\setminus U)={n}/{S_{2}}={m}/{S_{1}}=H$ .
(b) We may take $U=\{a_{1},a_{2},\ldots ,a_{m}\}$ and $V=\{a_{m+1},a_{m+2},\ldots ,a_{m+k}\}$ for some $k\geq 1$ , and let $S_{1}$ and $S_{2}$ denote the respective sums of the reciprocals of the members of U and of V. We are given
$$ \begin{align*} \frac{m}{S_{1}}=\frac{k}{S_{2}}=h, \end{align*} $$
so that $S_{1}={m}/{h}$ and $S_{2}={k}/{h}$ , whence $H(U\cup V)=\dfrac{m+k}{S_{1}+S_{2}}=\dfrac{m+k}{(m+k)/h}=h$ .
Remark 2.11. The conclusion of Lemma 2.10(b) is not true if we take the set union of U and V when they have elements in common. For example, take $U=\{1,5,20\},\,V=\{1,6,12\}$ so that, as sets, $U\cup V=\{1,5,6,12,20\}$ . Then $H(U)=H(V)={12}/{5}$ , but $H(U\cup V)={10}/{3}$ . However, if we treat the collections as lists, allowing repeated members when taking the union (in this example this would yield the list $(1,1,5,6,12,20)$ ), then the argument of Lemma 2.10 applies and all three harmonic means coincide.
Let $T=\{t_{1},t_{2},\ldots ,t_{n}\}$ , a list of job completion times of our n agents in the n-object problem, and let H be the harmonic mean of T. If T contains a proper sub-list U with $H(U)=H$ , then by Lemma 2.10(a), we have $H(T\setminus U)=H$ also, allowing us to split T into two complementary sub-lists $U,V$ with $H(U)=H(V)=H$ . This process may be repeated on the lists U and V until we have partitioned T into a disjoint union of r lists say, which we write as
$$ \begin{align*} T=T_{1}\cup T_{2}\cup \cdots \cup T_{r}, \end{align*} $$
where each sub-list $T_{i}$ is irreducible, meaning that $T_{i}$ has no proper sub-list U with $H(U)=H$ . We call this an irreducible representation of T.
Given an irreducible representation of T, we may construct an optimal scheme S for the n-object problem from any collection of optimal schemes $\{S_{i}\}_{1\leq i\leq r}$ , where $S_{i}$ is an optimal scheme for the $n_{i}$ -object problem, with $n_{i}=s_{i}$ , where $t_{i,1},t_{i,2},\ldots ,t_{i,s_{i}}$ are the members of the list T that belong to $T_{i}$ . Since the harmonic means of all these r sub-problems equal H, we may run these r schemes $S_{i}$ in parallel to provide an optimal solution S of the original n-object problem. In recognition, we denote this scheme as
$$ \begin{align*} S=S_{1}\oplus S_{2}\oplus \cdots \oplus S_{r}. \end{align*} $$
In our Example 2.6, we have $T=\{3,6,4\}$ , so that $T_{1}=\{3,6\}, T_{2}=\{4\}$ and $H=4$ . The sub-schemes $S_{1}$ and $S_{2}$ are respectively the $2$ -cyclic and $1$ -cyclic schemes. The overall optimal scheme S is then the sum of two irreducible uniform schemes, $S=S_{1}\bigoplus S_{2}$ , but S is not itself uniform.
A given finite list T of positive numbers may always be presented as a sum of irreducible harmonic components. However, as our next example shows, T may have more than one irreducible representation.
Example 2.12. The set $T=\{2,3,4,5,6,7,9,10,12,14,15\}$ can be split into two subset pairs preserving the harmonic mean either as $U_1=\{2,7,9,10,15\}$ and $U_2 = T\setminus U_1 = \{3,4,5,6,12,14\}$ , or as $V_1 = \{2,5,6,10,14,15\}$ and $V_2=T\setminus V_1 = \{3,4,7,9,12\}$ , but not in any other ways. All five harmonic means come to ${315}/{58}\approx 5.4310$ .
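A brute-force search confirms this example. The following sketch is our own check (the helper name hm is ours): it examines every split of T into two complementary parts and reports those that preserve the harmonic mean.

```python
from fractions import Fraction
from itertools import combinations

T = [2, 3, 4, 5, 6, 7, 9, 10, 12, 14, 15]
hm = lambda xs: Fraction(len(xs)) / sum(Fraction(1, x) for x in xs)
H = hm(T)                                              # 315/58

splits = []
for size in range(1, len(T) // 2 + 1):
    for part in combinations(T, size):
        rest = [x for x in T if x not in part]
        if hm(list(part)) == H and hm(rest) == H:
            splits.append((sorted(part), rest))
print(H)          # 315/58
print(splits)     # expected: exactly the two splits (U1, U2) and (V2, V1) of Example 2.12
```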
We would expect that the problem of determining whether a given list admits a harmonic partition is hard, as the simpler problem of determining whether such a list has an additive partition is NP-complete [8]. However, we show below how to generate any number of such examples of lists with multiple harmonic partitions. This allows us to construct an optimal nonuniform scheme that is not a parallel sum of irreducible schemes.
Proposition 2.13. The harmonic mean $H=H(x,y)$ of two positive real numbers x and y satisfies $H(x,y)=2m$ if and only if $(x-m)(y-m)=m^{2}$ .
Proof. Since
$$ \begin{align*} H(x,y)=\frac{2xy}{x+y}, \end{align*} $$
we have $H(x,y)=2m$ if and only if $xy=m(x+y)$ , which rearranges as $xy-mx-my+m^{2}=m^{2}$ , that is, $(x-m)(y-m)=m^{2}$ , and the proof is complete.
This result lets us produce any number of distinct pairs of integers, x and y, with a common harmonic mean through choosing m so that $m^{2}$ has the requisite number of pairs of distinct factors. The structure of our next example requires three such pairs, so we take $m = 6$ and employ the factorizations $36=2\times 18=3\times 12=4\times 9$ .
Example 2.14. We construct a scheme S with $n=6$ agents and objects, and with respective job completion times of these agents $A_1,\ldots ,A_6$ given by $t_1 = 2 + 6, t_2 = 18+6$ , so that $t_1 = 8,t_2 = 24$ ; similarly, $t_3 = 9,t_4 = 18$ , and $t_5 = 10,t_6 = 15$ . By Proposition 2.13, each of these pairs has harmonic mean $H=2\times 6=12$ . Next, it follows by Lemma 2.10(b) that the harmonic mean of the union of any two of the pairs of agent times, $(8,24),(9,18)$ and $(10,15)$ , is also $12$ , whence it follows again by Lemma 2.10(b) that $12$ is likewise the harmonic mean of the full set of six agent production times. Consider the scheme S represented by Table 1.
Each object $O_i$ is worked on successively by the agents in its column when passing from top to bottom. For example, $O_4$ is worked on in turn by agents $A_4,A_3,A_6$ and $A_5$ . Since each object is worked on by the agents of two pairs, each pair having harmonic mean $12$ hours, it follows that the duration of the common interval between exchanges is ${12}/{4}=3$ hours. Each agent appears exactly once in each of the $3$ -hour windows, which are represented by the rows. It follows that S is indeed an optimal scheme. However, each pair of objects shares common agents and so S is not a sum of irreducible schemes, nor is S uniform as each object is worked on by only $4$ of the $6$ agents.
As a bonus, we note that since it consists of three pairs with an equal harmonic mean of $12$ , the set $\{8,9,10,15,18,24\}$ can be split into three harmonic partitions, one for each pair. We can indeed create an example of order five by taking the set consisting of two of these pairs and their common harmonic mean of $12$ : $T=\{8,9,12,18,24\}$ , the two partitions being
$$ \begin{align*} \{8,24\}\cup \{9,12,18\}\quad \text{and}\quad \{9,18\}\cup \{8,12,24\}. \end{align*} $$
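The harmonic means quoted in Example 2.14 and in the preceding paragraph can be verified in a few lines; the code below is our own check (the helper name hm is ours) and assumes nothing beyond the stated completion times.

```python
from fractions import Fraction

hm = lambda xs: Fraction(len(xs)) / sum(Fraction(1, x) for x in xs)

print([hm([8, 24]), hm([9, 18]), hm([10, 15])])   # the three pairs: all equal 12
print(hm([8, 24, 9, 18, 10, 15]))                 # the full six-agent list: 12
print(hm([8, 9, 12, 18, 24]))                     # the order-five set T: 12
print(hm([8, 24]), hm([9, 12, 18]))               # one harmonic partition of T
print(hm([9, 18]), hm([8, 12, 24]))               # the other partition of T
```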
2.4. Removing unnecessary exchanges
It is always the case that we may take $m\leq n$ . Indeed, in most real-world examples, we will have $m\ll n$ , for typically n might be of the order of hundreds or thousands, while m, the number of different manufacturing speeds of our agents, might be in single digits. Therefore, even though we may be operating only a handful of different agent types, the cyclic scheme algorithm could involve thousands of stoppages of the manufacturing process. We then look to see if we can adjust our scheme to lower the number of stoppages. For a given scheme of production S, we shall call the number of times the whistle blows to halt production the halting number, and denote it by $h = h(S)$ . For the n-cyclic scheme $C_n$ , we have $h(C_n)=n-1$ .
Remark 2.15. Let $n'={n}/{d}$ , where d is the greatest common divisor (gcd) of the list of integers $k_1,k_2,\ldots ,k_m$ . Note that d is also a divisor of n. Collect the n objects into $n'$ groups of d objects, regarding each such d-set as a single object. In a similar way, group the $k_{i}$ agents of type i into $k_{i}'={k_{i}}/{d}$ sets of d agents, with each set regarded as a single agent. We may now view this n-object problem as an $n'$ -object problem with a list of $k_{1}',k_{2}',\ldots ,k_{m}'$ agents. In particular, this procedure would allow replacement of a $C_n$ -scheme by a $C_{n'}$ -scheme, thereby reducing the halt number by a factor of d. A typical object of the reduced scheme will consist of d objects, all worked on to the same point in their manufacture, which is then acted on in parallel by a collection of d agents of the same type to continue the scheme.
Henceforth, we will assume that the reduction process described has been carried out so that the gcd d of $k_1,k_2,\ldots ,k_m$ is $1$ .
3. The $m = 2$ case: the Euclidean scheme
The first interesting value for m is therefore $m = 2$ , which is the focus of the remainder of the paper. By Proposition 2.5(c), the only optimal schemes are uniform. For $n=2$ , $C_{2}$ is the unique optimal scheme with the fewest halts: $h(C_{2})=1$ is the least possible halt number, as both objects must be worked on by both agents; moreover, the exchange must occur at the halfway point of the execution of the scheme for the scheme to be optimal. We may continue then under the assumption that $n\geq 3$ . Due to the role they will play as remainders in the Euclidean algorithm, we shall denote the number of type-1 and type-2 agents by $r_1$ and $r_2$ , respectively, and without loss, we take $r_1> r_2$ with $r_1 = ar_2 + r$ for positive integers a and r ( $1\leq r\leq r_2 -1$ ). (We assume that the speeds of the two agent types differ, but do not specify which is the faster.) An optimal (uniform) scheme is characterized by the condition that each object is worked on by agents of type 1 and type 2 respectively for $r_1 = ar_2 + r$ a.u. and $r_2$ a.u. It follows that $n = (a+1)r_2 + r$ .
Definition 3.1 (The Euclidean scheme).
We consider the following refined algorithm, which has stages corresponding to the lines of the Euclidean algorithm applied to the pair $(r_1,r_2)$ . The associated optimal scheme will be known as the Euclidean $(r_{1},r_{2})$ -scheme, denoted by $E=E(r_1,r_2)$ .
Let the opening line of the Euclidean algorithm be $r_1 = a_1r_2 + r_3$ . We suppress subscripts and write a and r for $a_1$ and $r_3$ , respectively. Partition the objects into $a+2$ sets as follows. There are $a+1$ sets $S_{0},S_{1},\ldots ,S_{a}$ , each of order $r_2$ , and a remainder set, $S_{a+1}$ , of order r. Denote by $S^{(p)}$ the union
$$ \begin{align*} S^{(p)}=S_{0}\cup S_{1}\cup \cdots \cup S_{p}. \end{align*} $$
At the beginning of Stage 1, each object of $S_0$ is assigned to an agent of type 2, while all other objects are assigned to type-1 agents.
In Stage 1, all agents first work on their initial object for a time of $r_2$ a.u.
The objects of $S_0$ are then exchanged by their current agents with the objects of $S_1$ so that each agent that was working on an object of $S_0$ changes to an object from $S_1$ and vice versa.
After a further $r_2$ a.u., the objects of $S_1$ exchange agents with those of $S_2$ . This action of object exchange between agents is carried out in this fashion so that after the $(i+1)$ st halt, the objects of $S_i$ are exchanged with those of $S_{i+1}$ , until on the ath exchange, the objects of $S_{a-1}$ and $S_a$ are exchanged between the corresponding agent sets. This entails a halts in all. Stage 1 ends with the ath exchange.
At this transition point between Stages 1 and 2, all objects in $S^{(a-1)}$ have been worked on by type-2 agents for $r_2$ a.u. and by type-1 agents for $(a-1)r_2$ a.u. In the subsequent stages, which collectively represent a time interval of
$$ \begin{align*} n-ar_{2}=r_{2}+r\ \text{a.u.}, \end{align*} $$
each of the members of $S^{(a-1)}$ will continue to be worked on by their current type-1 agent. This in effect removes these $ar_2$ type-1 agents from further consideration, leaving an $(r_2 + r_3)$ -problem with the type-2 agents now in the majority. Since the objects in $S^{(a-1)}$ undergo no further agent exchange, we shall say they have become passive objects, while the other objects, which now enter the second stage, are labelled active. Similarly, an agent is described as passive or active according to whether the agent is working on a passive or active object.
We now repeat the preceding process recursively, mirroring the action of the Euclidean algorithm itself. Stage 2 acts the process of Stage 1 for the objects of $S_{a}\cup S_{a+1}$ using the second line of the Euclidean algorithm for $(r_1,r_2)$ , but with the roles of the type-1 and type-2 agents reversed. This role alternation of the two types is a feature that persists as the process is acted throughout subsequent stages.
Acting the Euclidean algorithm on $(r_1,r_2)$ , where $r_1$ and $r_2$ have no common factor, yields for some $t\geq 2$ ,
$$ \begin{align*} r_{1}&=a_{1}r_{2}+r_{3},\\ r_{2}&=a_{2}r_{3}+r_{4},\\ &\;\;\vdots \\ r_{t-1}&=a_{t-1}r_{t}+r_{t+1},\\ r_{t}&=a_{t}r_{t+1}+r_{t+2}. \end{align*} $$
Stage i will correspond to line i of the Euclidean algorithm. In particular, we have $(r_t,a_t,r_{t+1},r_{t+2})=(r_t,r_t,1,0)$ . Stage t then has $a_t = r_t$ exchanges, at the end of which all objects are worked on by their current agent for the final a.u., taking all builds to completion and marking the conclusion of what we deem to be Stage t. This concludes the specifications of the Euclidean scheme.
The sets $S_i$ and the coefficient a change as we pass from one stage of the algorithm to the next. To compare these sets between stages calls for notation with double subscripts.
To compare the sets of objects in question, consider Stage $i\ (1\leq i\leq t)$ of $E(r_1,r_2)$ . Let $S_{i,j}$ denote the set $S_j$ at the ith stage of the scheme $(0\leq j\leq a_{i}+1)$ . By construction, the active objects of Stage i all begin that stage with identical work records with respect to both agent types. Moreover, during the ith stage, each object in $S_{i,j}$ has been worked on by each type of agent for the same length of time as each other member of $S_{i,j}$ ; indeed, we can say something more precise.
Definition 3.2. The work record for the objects in $S_{i,j}$ is $(t_{1},t_{2})_{i,j}$ , where $t_{p} (p=1,2)$ denotes the number of a.u. for which the members of $S_{i,j}$ have been worked on by agents of type p at the completion of Stage i.
Remark 3.3. It follows from the recursive construction of the stages that for any i, for all active objects, there are only two distinct work record pairs determined by whether, at the end of Stage i, $S_{i,j}$ represents a set of passive objects $(0\leq j\leq a_i-1)$ or active objects $(a_i \leq j\leq a_i+1)$ . We therefore simplify notation by denoting the two respective pairs by $(t_1,t_2)_i$ and $(T_1,T_2)_i$ . In particular, all objects active in Stage $i + 1$ begin that stage having been worked on for $T_1$ a.u. by type-1 agents and $T_2$ a.u. by type-2 agents.
In the following theorem, the proof of optimality of $E(r_1,r_2)$ includes the precise work record of all objects that become passive at the conclusion of each stage.
Theorem 3.4. The Euclidean scheme $E(r_1,r_2)$ is optimal.
Proof. We show that in $E(r_1,r_2)$ , each object is worked on continuously and uniformly for a total of $r_1 + r_2 = n$ a.u., whence the result follows from Proposition 2.3(b). The proof is by induction on the stage number i.
An object that becomes passive at the end of Stage 1 has the work record:
(3.1) $$ \begin{align} (t_{1},t_{2})_{1}=((a_{1}-1)r_{2},\,r_{2})=(r_{1}-r_{2}-r_{3},\,r_{2}). \end{align} $$
Similarly, for active objects, at the close of Stage 1,
(3.2) $$ \begin{align} (T_{1},T_{2})_{1}=(a_{1}r_{2},\,0)=(r_{1}-r_{3},\,0). \end{align} $$
The length of Stage 1 is $a_1r_2 = r_1 - r_3$ . The remaining number of a.u. before H expires matches the number of active objects and agents as all three are equal to
$$ \begin{align*} n-a_{1}r_{2}=r_{2}+r_{3}. \end{align*} $$
Therefore, in moving from Stage 1 to Stage 2, the framework for the scheme is repeated with $(r_1,r_2)$ replaced by $(r_2,r_3)$ , where $r_2 = a_2r_3 + r_4$ (so that $r_2> r_3)$ . However, the roles of type-1 and type-2 agents are reversed, as it is type 2 that now form the majority.
An object that becomes passive at the end of Stage 2 has the work record:
(3.3) $$ \begin{align} (t_{1},t_{2})_{2}=(r_{1}-r_{3},\,0)+(r_{3},\,r_{2}-r_{3}-r_{4})=(r_{1},\,r_{2}-r_{3}-r_{4}). \end{align} $$
Similarly, at the close of Stage 2, we have for the active objects,
(3.4) $$ \begin{align} (T_{1},T_{2})_{2}=(r_{1}-r_{3},\,0)+(0,\,r_{2}-r_{4})=(r_{1}-r_{3},\,r_{2}-r_{4}). \end{align} $$
The length of Stage 2 is $a_2r_3 = r_2 - r_4$ a.u., and hence the number of a.u. before H expires is
$$ \begin{align*} n-(r_{1}-r_{3})-(r_{2}-r_{4})=r_{3}+r_{4}, \end{align*} $$
which matches the number of active objects and active agents in Stage 3.
We next show by induction on i that for the objects that become passive at the end of Stage i, if i is odd, then
(3.5) $$ \begin{align} (t_{1},t_{2})_{i}=(r_{1}-r_{i+1}-r_{i+2},\,r_{2}), \end{align} $$
while for the active objects,
(3.6) $$ \begin{align} (T_{1},T_{2})_{i}=(r_{1}-r_{i+2},\,r_{2}-r_{i+1}). \end{align} $$
If i is even, then
(3.7) $$ \begin{align} (t_{1},t_{2})_{i}=(r_{1},\,r_{2}-r_{i+1}-r_{i+2}) \end{align} $$
and
(3.8) $$ \begin{align} (T_{1},T_{2})_{i}=(r_{1}-r_{i+1},\,r_{2}-r_{i+2}). \end{align} $$
Moreover, at the conclusion of Stage i, the number of active objects matches both the number of active agents and the time remaining until the harmonic optimum expires, which is $r_{i+1} + r_{i+2}$ . The induction is anchored on the $i = 1$ and $i = 2$ cases given in (3.1)–(3.4).
Let $i+1 \geq 3$ be odd. Stage $i+1$ is based on
$$ \begin{align*} r_{i+1}=a_{i+1}r_{i+2}+r_{i+3}. \end{align*} $$
Since i is even, we apply (3.8) to give $(T_1,T_2)_i = (r_1 - r_{i+1}, r_2 - r_{i+2})$ for the first term in the next sum. For the second term, we apply the form of $(t_1,t_2)_1$ given in (3.1), but increment the subscripts by i. This yields
$$ \begin{align*} (t_{1},t_{2})_{i+1}=(r_{1}-r_{i+1},\,r_{2}-r_{i+2})+(r_{i+1}-r_{i+2}-r_{i+3},\,r_{i+2})=(r_{1}-r_{i+2}-r_{i+3},\,r_{2}), \end{align*} $$
in accord with (3.5) for $i+1$ ; similarly, adjusting (3.2) for line $i+1$ yields
$$ \begin{align*} (T_{1},T_{2})_{i+1}=(r_{1}-r_{i+1},\,r_{2}-r_{i+2})+(r_{i+1}-r_{i+3},\,0)=(r_{1}-r_{i+3},\,r_{2}-r_{i+2}), \end{align*} $$
in accord with (3.6) for $i+1$ .
Now let $i+1\geq 4$ be even. Since i is odd, we apply (3.6) to give the first term of the next sum, which is $(T_1,T_2)_i = (r_1 - r_{i+2}, r_2 - r_{i+1})$ , while for the second term, we apply the form of $(t_1,t_2)_2$ in (3.3), but increment the subscripts of $(r_3,r_2 - r_3 - r_4)$ by $i - 1$ (as $2 + (i - 1) = i + 1$ ). This yields
$$ \begin{align*} (t_{1},t_{2})_{i+1}=(r_{1}-r_{i+2},\,r_{2}-r_{i+1})+(r_{i+2},\,r_{i+1}-r_{i+2}-r_{i+3})=(r_{1},\,r_{2}-r_{i+2}-r_{i+3}), \end{align*} $$
in accord with (3.7) for $i+1$ ; similarly, by incrementing the second term in (3.4), we infer that
$$ \begin{align*} (T_{1},T_{2})_{i+1}=(r_{1}-r_{i+2},\,r_{2}-r_{i+1})+(0,\,r_{i+1}-r_{i+3})=(r_{1}-r_{i+2},\,r_{2}-r_{i+3}), \end{align*} $$
in accord with (3.8) for $i+1$ , and so the induction continues in both cases.
At the transition from Stage $i+1$ to Stage $i+2$ , for both the odd and even case, the common number of active objects and agents is
$$ \begin{align*} r_{i+2}+r_{i+3}, \end{align*} $$
which equals the remaining time in the harmonic optimum: $r_{i+1} + r_{i+2} - a_{i+1}r_{i+2}$ , and so the induction continues.
If t is odd, putting $i = t$ in (3.5) gives that at the conclusion of the final stage,
$$ \begin{align*} (t_{1},t_{2})_{t}=(r_{1}-r_{t+1}-r_{t+2},\,r_{2})=(r_{1}-1,\,r_{2}). \end{align*} $$
The final a.u. then adds $(1,0)$ to this pair for all passive objects, giving the required final pair of $(r_1,r_2)$ representing the numbers of a.u. worked by type-1 and type-2 agents, respectively. On the other hand, by (3.6),
$$ \begin{align*} (T_{1},T_{2})_{t}=(r_{1}-r_{t+2},\,r_{2}-r_{t+1})=(r_{1},\,r_{2}-1), \end{align*} $$
and this time it is $(0,1)$ added to the work record of the final active object, giving the required pair, $(r_1,r_2)$ .
If t is even, then by (3.7),
$$ \begin{align*} (t_{1},t_{2})_{t}=(r_{1},\,r_{2}-r_{t+1}-r_{t+2})=(r_{1},\,r_{2}-1). \end{align*} $$
The final a.u. adds $(0,1)$ to this pair for all passive objects, giving the required final pair of $(r_1,r_2)$ . On the other hand, by (3.8),
$$ \begin{align*} (T_{1},T_{2})_{t}=(r_{1}-r_{t+1},\,r_{2}-r_{t+2})=(r_{1}-1,\,r_{2}). \end{align*} $$
The single active object this applies to then has $(1,0)$ added to complete the scheme with the required pair of $(r_1,r_2)$ .
In all cases then, each object is worked on for $r_1 + r_2 = n$ a.u. $= H$ , and so $E(r_1,r_2)$ is optimal, and this completes the proof.
We record some useful observations that emerged in the proof of Theorem 3.4.
Corollary 3.5. For the Euclidean scheme $E=E(r_{1},r_{2})$ :
(a) there are t stages in E. The halt number $h=h(E)$ is $h=\sum _{i=1}^{t}a_{i}$ ;
(b) the length of Stage i is $a_ir_{i+1}=r_{i} -r_{i+2}$ a.u. $(1\leq i\leq t-1)$ , and Stage t has length $a_t+1$ a.u.;
(c) in any stage, the members of $S_0\cup S_a$ undergo a single exchange, those of $\bigcup _{i=1}^{a-1}S_i$ undergo two exchanges, while those of $S_{a+1}$ are not exchanged.
Proof. (a) Each Stage i of the t stages, including Stage t, contains $a_i$ halts. Summing the $a_i$ thus gives $h(E)$ .
(b) For Stage i $(1\leq i\leq t)$ , prior to each of the $a_i$ halts, all agents work continuously for $r_{i+1}$ a.u. Hence, the length of Stage i is $a_{i}r_{i+1}$ a.u. for $1\leq i\leq t-1$ , while Stage t, with its final a.u. after the last exchange, has length $a_tr_{t+1} + 1 = a_t + 1$ a.u.
(c) These are observations of the exchange process.
Example 3.6. For $r_1 = 180, r_2 = 53,n = r_1 +r_2 = 233,$ the Euclidean algorithm reads
$$ \begin{align*} 180&=3\times 53+21,\\ 53&=2\times 21+11,\\ 21&=1\times 11+10,\\ 11&=1\times 10+1,\\ 10&=10\times 1+0. \end{align*} $$
We shall initially label our parameters $r_1, r_2, a,r$ where $r_1 = ar_2 + r$ , updating values for these parameters in accord with our scheme as we pass from one stage to the next.
Set the parameters for Stage 1 based on the first line of the Euclidean algorithm for $(r_1, r_2)$ :
$$ \begin{align*} r_{1}=180,\quad r_{2}=53,\quad a=3,\quad r=21. \end{align*} $$
Since $a=3$ , we partition the objects into $a+2=5$ sets: $S_{0},S_{1},S_{2},S_{3}$ , each of order $r_{2}=53$ , together with the remainder set $S_{4}$ of order $r=21$ .
All agents now work on their objects for $r_2 = 53$ a.u. and then halt to allow the first exchange. There are $a=3$ such halts and exchanges in Stage 1.
At the end of Stage $1$ ,
$$ \begin{align*} (t_{1},t_{2})_{1}=((a-1)r_{2},\,r_{2})=(106,53),\qquad (T_{1},T_{2})_{1}=(ar_{2},\,0)=(159,0). \end{align*} $$
Each object in $S^{(a-1)}=S^{(2)}$ will next be worked on to completion by the type-1 agent that now possesses it, making $ar_2 = 3r_2$ type-1 agents unavailable for the objects of $S_3\cup S_4$ . The number of a.u. until completion is given by $r_2 + r.$ Therefore, at the end of the entire process, the members of the sets in $S^{(2)}$ will have the required final assignment pair:
$$ \begin{align*} ((a-1)r_{2}+(r_{2}+r),\,r_{2})=(r_{1},r_{2}), \end{align*} $$
which in this case gives
$$ \begin{align*} (106+74,\,53)=(180,53). \end{align*} $$
There remain r type-1 and $r_2$ type-2 agents available to $S_3\cup S_4$ at the beginning of the second stage.
We now analyse Stage 2 for the objects in $S_3\cup S_4$ . As we pass from one stage to the next, the roles of the agent types are reversed. Our parameters have their values updated. Note that the entries of $(T_1,T_2)_1$ are the respective inherited starting values of type 1 a.u. and type 2 a.u. for the remaining $r_2 + r$ active objects of Stage 2.
Stage 2:
$$ \begin{align*} r_{1}=53,\quad r_{2}=21,\quad a=2,\quad r=11;\qquad (T_{1},T_{2})_{1}=(159,0). \end{align*} $$
Note that the form of the pair $((a-1)r_2,r_2)$ of Stage 1 is reversed in Stage 2 to become $(r_2,(a-1)r_2)$ ; this is due to the exchange of roles of type-1 and type-2 agents, a feature of each passage to a new stage.
Since $r_2 + r= 21 + 11 = 32,$ at the end of the entire process, the objects in the current $S^{(a-1)}=S^{(1)}$ will have final assignment pair $(180,21+32)=(180,53)$ , as required.
Stage 3:
$$ \begin{align*} r_{1}=21,\quad r_{2}=11,\quad a=1,\quad r=10;\qquad (T_{1},T_{2})_{2}=(159,42). \end{align*} $$
Since $r_2 + r = 11 + 10 = 21$ , at the end of the entire process, the objects in the current $S^{(a-1)}=S^{(0)}$ will have final assignment pair $(159+21,53)=(180,53)$ , as required.
Stage 4:
$$ \begin{align*} r_{1}=11,\quad r_{2}=10,\quad a=1,\quad r=1;\qquad (T_{1},T_{2})_{3}=(170,42). \end{align*} $$
Since $r_2 + r = 10 + 1 = 11$ , at the end of the entire process, the objects in the current $S^{(0)}$ will have final assignment pair $(180,42+11)=(180,53)$ , as required.
Stage 5 (final stage):
$$ \begin{align*} r_{1}=10,\quad r_{2}=1,\quad a=10,\quad r=0;\qquad (T_{1},T_{2})_{4}=(170,52). \end{align*} $$
By Corollary 3.5(a), the total number of halts h of the agents is the sum of values of the parameter a over the five stages:
$$ \begin{align*} h=3+2+1+1+10=17. \end{align*} $$
This compares with the simple cyclic method $C_{233}$ , which has $n - 1 = 232$ such pauses in production.
Corollary 3.5(b) gives the list of lengths of the five stages as $159,42,11,10$ and $11$ , which sum to $n = 233$ . This completes the illustration of our example.
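The data of this example follow mechanically from the Euclidean algorithm, and a short routine reproduces them. The helper below is our own sketch (its name and interface are not from the paper) and assumes, as in the example, that $\gcd (r_1,r_2)=1$.

```python
def euclid_stage_data(r1, r2):
    """Return the quotients a_i, the halt number and the stage lengths of E(r1, r2)."""
    a, r = [], [r1, r2]
    while r[-1] != 0:                      # r_1 = a_1 r_2 + r_3, r_2 = a_2 r_3 + r_4, ...
        a.append(r[-2] // r[-1])
        r.append(r[-2] % r[-1])
    t = len(a)                             # number of stages
    lengths = [a[i] * r[i + 1] for i in range(t - 1)] + [a[-1] + 1]
    return a, sum(a), lengths

a, halts, lengths = euclid_stage_data(180, 53)
print(a)        # [3, 2, 1, 1, 10]
print(halts)    # 17 halts, against n - 1 = 232 for the cyclic scheme C_233
print(lengths)  # [159, 42, 11, 10, 11], summing to n = 233
```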
4. Euclidean schemes and their halt numbers
4.1. Matrix of an optimal scheme
With each optimal n-scheme S of the $m=2$ problem, we may associate an $n\times n$ binary matrix $M=M(S)$ whereby the $(i,j)$ th entry of M is $1$ or $0$ according to whether object $O_i$ in the jth time interval $I_j$ is worked on by a type-1 or a type-2 agent.
The labelling of the set of objects is arbitrary: a permutation $\pi $ of the rows of M gives a new matrix $M_{\pi }$ that corresponds to a permutation of the object set ${\cal O}$ , which is to say a re-numbering of the members of ${\cal O}$ (by $\pi ^{-1}$ ). Hence, the corresponding schemes, $S(M)$ and $S(M_{\pi })$ , are equivalent up to the labelling of the objects.
At the beginning of each interval in the execution of an optimal scheme S, the type-1 and type-2 agents are re-assigned their objects, forming two sets $K_1$ and $K_2$ of objects, respectively. For a given ordering of ${\cal O}$ , these sets are equal at every stage for a pair of schemes S and $S'$ exactly when $M(S)=M(S')$ . Therefore, we may declare that two schemes S and $S'$ are equivalent if $M(S)=M(S')$ , and then extend this notion of equivalence to the case where $M(S)=M_{\pi }(S')$ for some permutation $\pi $ of the rows of $M(S')$ .
Clearly, the columns of $M(S)$ will contain $k = r_1$ entries of $1$ and $r_2$ entries of $0$ . Proposition 2.5(c) is equivalent to the statement that S is optimal if and only if each row also contains exactly k instances of $1$ .
In summary, S is optimal if and only if $M(S)$ is k-uniform, meaning that each row and column of M has exactly k ones. Up to equivalence, there is a one-to-one correspondence between optimal n-schemes S with k type-1 agents and k-uniform $n\times n$ binary matrices.
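The k-uniformity test is immediate to code. The sketch below is our own (the function name is hypothetical): it checks the row and column sums of a candidate $n\times n$ binary matrix $M(S)$.

```python
def is_k_uniform(M, k):
    """Return True when every row and every column of the square matrix M sums to k."""
    n = len(M)
    rows_ok = all(sum(row) == k for row in M)
    cols_ok = all(sum(M[i][j] for i in range(n)) == k for j in range(n))
    return rows_ok and cols_ok

# The 2-cyclic scheme C_2 with one type-1 and one type-2 agent (k = 1):
# O_1 is with the type-1 agent in interval I_1 and with the type-2 agent in I_2.
print(is_k_uniform([[1, 0], [0, 1]], 1))   # True: an optimal scheme
print(is_k_uniform([[1, 1], [0, 1]], 1))   # False: not an optimal scheme
```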
4.2. Successive Fibonacci numbers
The number of halts in a Euclidean scheme is determined by the sum of the coefficients $a_i$ , which corresponds to the interpretation of the Euclidean algorithm whereby each step involves simply subtracting the smaller of the two integers in hand from the larger. In the case of a pair of successive Fibonacci integers, $f_{p+1},f_p$ say, all the $a_i$ are equal to $1$ (apart from the final coefficient where the remainder is $0$ ). This leads to a halt number of order $\log n$ , where $n=f_{p+1}+f_p = f_{p+2}$ .
To see this, put $r_1=f_{p+1}$ , $r_2 = f_p$ , two consecutive and distinct Fibonacci numbers (so that $p\geq 2$ ), whence $n=f_{p+1}+f_p=f_{p+2}$ . There are then $p-1$ lines in the Euclidean algorithm:
$$ \begin{align*} f_{p+1}&=1\times f_{p}+f_{p-1},\\ f_{p}&=1\times f_{p-1}+f_{p-2},\\ &\;\;\vdots \\ f_{4}&=1\times f_{3}+f_{2},\\ f_{3}&=2\times f_{2}+0. \end{align*} $$
Now, $a_i = 1$ for all but the final line, where the coefficient of $f_2$ equals $2$ . By Corollary 3.5(a), $h(E(f_{p+1},f_p))=(p-1-1)+2=p.$ The respective lengths of the $p-1$ stages are $f_p,f_{p-1},\ldots ,f_3 = 2,2f_2 + 1 = 3$ . Since $f_2 = f_1$ , the sum of these lengths is equal to
$$ \begin{align*} f_{p}+f_{p-1}+\cdots +f_{3}+2f_{2}+1=\sum_{j=1}^{p}f_{j}+1=f_{p+2}=n, \end{align*} $$
all in accord with Corollary 3.5(b). Since $f_n=\lfloor \phi ^{n}/\sqrt {5}\rceil $ , the nearest integer to $\phi ^{n}/\sqrt {5}$ , where $\phi $ denotes the Golden ratio $(1+\sqrt {5})/{2}$ (see, for example, [6]), it follows that for all sufficiently large p,
$$ \begin{align*} n=f_{p+2}\approx \frac{\phi ^{p+2}}{\sqrt {5}},\quad \text{so that}\quad p\approx \log _{\phi }(\sqrt {5}\,n)-2. \end{align*} $$
Hence, since $h=p$ ,
$$ \begin{align*} h\approx \frac{\ln (\sqrt {5}\,n)}{\ln \phi }-2=O(\ln n). \end{align*} $$
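The logarithmic growth of the halt number for Fibonacci pairs is easy to check numerically. The following is our own sketch (all names are ours): it counts the subtractions in the Euclidean algorithm and compares the result with the estimate above.

```python
from math import log, sqrt

phi = (1 + sqrt(5)) / 2

def halts(r1, r2):                     # sum of the quotients in Euclid's algorithm
    h = 0
    while r2:
        h += r1 // r2
        r1, r2 = r2, r1 % r2
    return h

f = [1, 1]                             # f[0] = f_1, f[1] = f_2, ...
while len(f) < 25:
    f.append(f[-1] + f[-2])
for p in (5, 10, 15, 20):
    n = f[p + 1]                       # n = f_{p+2}
    print(p, halts(f[p], f[p - 1]), round(log(sqrt(5) * n) / log(phi) - 2, 2))
    # the second entry is always p, and the third is close to it
```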
Example 4.1. We represent the Euclidean Fibonacci scheme $F_p$ for $p=5$ in matrix form. Since $f_5=5$ and $f_6=8$ , there are $5$ agents labelled type 2 ( $A_1$ – $A_5$ ) and $8$ of type 1 ( $A_6$ – $A_{13}$ ), with $p = 5$ stages and $f_7 = 13$ objects in all. By Corollary 3.5(b), the halts, which number $p=5$ , occur at the end of the intervals $f_5 = 5$ , $5+f_4 = 8$ , $8+f_3 = 10$ , $10 + f_2 = 11$ and $11+f_1 = 12$ .
The matrix $M(F_5)$ of the Euclidean Fibonacci $5$ -scheme is shown in Table 2. In all but the final stage, the active objects form three sets, $S_0,S_1$ and $S_2$ , which feature only one set of exchanges, which occur between $S_0$ and $S_1$ . Initially, object $O_i$ is worked on by agent $A_i$ . The columns within each stage are identical. As we pass from one stage to the next, the objects of $S_0$ become passive, the “old” $S_2$ becomes the “new” $S_0$ , while the “old” $S_1$ splits to form the “new” $S_1,S_2$ pair. The subscript on the $(i,j)$ th entry of $M(F_5)$ gives the number of the agent that is working on object $O_i$ in the jth a.u. of the scheme $F_5$ (and so, in respect to $M(F_5)$ , the objects label the rows while the agents, recorded as subscripts, may “move” from one column to the next).
Stage 1:
At the conclusion of Stage 1, the objects in $S_0$ are with their set of final agents, $\{A_6,A_7,A_8,A_9,A_{10}\}$ (see Table 2).
Stage 2:
At the end of Stage 2, the objects in $S_{0}$ are with their final agents, $\{A_3,A_4,A_5\}$ (see Table 2).
Stage 3:
At the end of Stage 3, the objects in $S_0$ are with their final agents, $\{A_{11},A_{12}\}$ .
Stage 4:
At the end of Stage 4, the object in $S_0$ is with its final agent, $A_2$ .
Stage 5:
After the final exchange of Stage 5, the objects in $S_0$ and $S_1$ are with their respective final agents, $A_{13}$ and $A_1$ . This completes the example.
Remark 4.2 (Halting number of generic Euclidean examples).
The halting number of the Euclidean scheme based on two successive Fibonacci numbers that sum to n is $O(\ln n)$ , while at the other extreme, the only optimal scheme S when $r_1 = n-1$ and $r_2 = 1$ has a halting number of $h(S) = n-1$ .
For a given positive integer n, the expected halting number is the mean number of subtractions in the Euclidean algorithm for two relatively prime integers $r_2 < r_1$ such that $r_1 + r_2 = n.$ The mean number of lines in the Euclidean algorithm in this case is certainly $O(\ln n)$ , and indeed much more precise statements are known (see [5], which cites [3]). General considerations suggest that the mean halt number for such a pair $(r_1,r_2)$ is then $O((\ln n)^{2})$ [2], but this has not been proved.
The question may be formulated as asking for the expected sum of the coefficients of the continued fraction expansion of $a/(n-a)$ for a drawn at random from $1,\ldots , n/2$ (and restricted to be coprime to n). From the Gauss–Kuzmin distribution [7] and the St Petersburg paradox analysis, Tao comments in [2] that it is reasonable to expect the answer to be $O((\ln n)^{2})$ , though rigorous analysis may be difficult.
4.3. Production times with allowance for handovers
Consider again the $m = 2$ case with timings $t_1 = 1$ for a set of $r_1$ agents, and $t_2 = T \neq 1$ for a set of $r_2$ agents. The optimal production time is
(4.1) $$ \begin{align} H=n\bigg(r_{1}+\frac{r_{2}}{T}\bigg)^{-1}=\frac{nT}{r_{1}T+r_{2}}. \end{align} $$
We now make allowance for a time interval, the halting time, of length $\varepsilon>0$ for each halt during a scheme and an initial loading time of $\varepsilon $ also. For $C_{n}$ , the total production time C is then
(4.2) $$ \begin{align} C=H+n\varepsilon , \end{align} $$
there being one loading period together with one halting period for each of the $n-1$ halts of $C_{n}$ .
Applying (4.2) to Example 4.1, taking $T = 2$ and $\varepsilon =0.005$ , we have to five significant figures that the harmonic optimum time is
$$ \begin{align*} H=\frac{13\times 2}{8\times 2+5}=\frac{26}{21}=1.2381\ \text{hours}. \end{align*} $$
This compares with the production times C of $C_{13}$ and E of the Euclidean scheme:
$$ \begin{align*} C=1.2381+13(0.005)=1.3031,\qquad E=1.2381+6(0.005)=1.2681. \end{align*} $$
The excess percentages over the harmonic optimum are respectively $5.3\%$ and $2.4\%$ .
Applying (4.2) to Example 3.6, however, again taking $T=2$ and $\varepsilon =0.005$ , to five significant figures,
$$ \begin{align*} H=\frac{233\times 2}{180\times 2+53}=\frac{466}{413}=1.1283\ \text{hours},\qquad C=1.1283+233(0.005)=2.2933. \end{align*} $$
Hence, with an exchange period $\varepsilon $ equal to 18 seconds, we have for $C_{233}$ an increase in production time above the harmonic optimum of $103$ %. In contrast, with the Euclidean scheme, we calculated $h(E(53,180))=17$ , so that the build time E of $E(53,180)$ is
$$ \begin{align*} E=1.1283+18(0.005)=1.2183, \end{align*} $$
which represents an increase of only 8% above the harmonic optimum. This suggests that with a generic example involving nonzero exchange times, the Euclidean scheme is much more efficient than the n-cyclic scheme.
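These comparisons can be re-derived with a few lines of exact arithmetic. The code below is our own sketch (the parameter names are ours); it assumes, as in (4.2), one loading period of $\varepsilon$ plus one halting period per halt.

```python
from fractions import Fraction

def times_with_handovers(r1, r2, T, eps, h_euclid):
    """Harmonic optimum and production times of C_n and E(r1, r2) with handover time eps."""
    n = r1 + r2
    H = Fraction(n) / (r1 + r2 / T)          # harmonic optimum, as in (4.1), with t1 = 1
    C = H + n * eps                          # cyclic scheme C_n: n - 1 halts plus loading
    E = H + (h_euclid + 1) * eps             # Euclidean scheme: h halts plus loading
    return H, C, E

eps, T = Fraction(5, 1000), Fraction(2)
for r1, r2, h in [(8, 5, 5), (180, 53, 17)]:     # Examples 4.1 and 3.6 respectively
    H, C, E = times_with_handovers(r1, r2, T, eps, h)
    print(float(H), float(C), float(E),
          f"{float(C / H - 1):.2%}", f"{float(E / H - 1):.2%}")
# Expected excesses: about 5.25% and 2.42% for n = 13; about 103% and 8% for n = 233.
```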
4.4. Comparison with the Biker-hiker problem
As mentioned in Section 1, in the $m = 2$ case, the n-object problem with k machines corresponds to the Biker-hiker problem with k bicycles and n travellers where the bicycles may move instantaneously from their current staging post to any other. It follows that the time of an optimal scheme where this superpower may be exploited must be less than the least time possible in the original setting.
It transpires that for the Biker-hiker problem, in any optimal solution, each traveller covers exactly $k/n$ of the distance of the journey on the faster mode of transport (the bicycle), while in our n-object problem with k faster agents, each object spends exactly $k/n$ of the total journey time with faster agents (remembering that when $m = 2$ , all optimal schemes are uniform). In the Biker-hiker case, the time taken cycling will be less than $k/n$ of the total scheme time just because cycling beats walking, and so will cover the $k/n$ distance in less than that proportion of the total time.
This is confirmed by comparing the formulae for the optimal times. From (4.1), we see that if there are $r_1$ agents of one completion time, which we take to be 1, and $r_2$ of another time T, then the optimal time is
$$ \begin{align*} \frac{nT}{r_{1}T+r_{2}}=\frac{(r_{1}+r_{2})T}{r_{1}T+r_{2}}. \end{align*} $$
This holds whether or not the first agent type is the faster of the two. The time required to execute an optimal scheme for the Biker-hiker problem is
$$ \begin{align*} \frac{r_{1}+r_{2}T}{n}=\frac{r_{1}+r_{2}T}{r_{1}+r_{2}}. \end{align*} $$
(It may be noted that the lesser time of $1$ and T must apply to bicycle travel. The opposite scenario corresponds to broken bicycles that impede the progress of a traveller obliged to walk them along. In this case, the optimal completion time is simply the time taken to complete the journey with a broken bicycle, and this is the same for any positive value of k, the number of bikes.)
We then expect the first of these two expressions to be less than the second. Simplifying this inequality shows it to be equivalent to $(T-1)^2> 0$ , which is true as $T \neq 1$ .
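A symbolic computation confirms this simplification. The following sketch is our own (using sympy; all variable names are ours): it shows that the difference of the two optimal times is a positive multiple of $(T-1)^{2}$.

```python
import sympy as sp

r1, r2, T = sp.symbols('r1 r2 T', positive=True)
n = r1 + r2
H_objects = n * T / (r1 * T + r2)        # optimal time for the n-object problem, (4.1)
H_bikers = (r1 + r2 * T) / n             # optimal time for the Biker-hiker problem
diff = sp.factor(sp.together(H_bikers - H_objects))
print(diff)   # expected (up to ordering): r1*r2*(T - 1)**2/((r1 + r2)*(T*r1 + r2))
```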
4.5. Euclidean scheme is greedy
The Euclidean scheme represents a greedy algorithm in that the process is never halted unless continuation would result in some object exceeding its quota for one of the agent types in an (optimal) uniform scheme.
We show that if the greedy principle is adopted to give a scheme S, we are effectively forced into the Euclidean scheme. After $r_2$ a.u., the objects in the set $S_0$ , which are being worked on by the type-$2$ agents, reach their type-2 quota, and so the process must halt. The members of $S_0$ are then exchanged with a subset $T_0$ of objects being worked on by type-1 agents. The Euclidean directive, however, is not the only possible continuation of our greedy scheme S.
Let ${\cal O}$ denote the n-set of objects. In general, an exchange of a greedy scheme S acts a permutation $\pi :{\cal O}\rightarrow {\cal O}$ subject to the constraint that $S_{0}\pi \cap S_0=\emptyset $ . For $E(r_1,r_2)$ , this permutation $\pi $ has the additional property that $S_0\pi ^{2} = S_0$ , while leaving all other objects fixed. However, in both $E(r_1,r_2)$ and S, there is a set of $r_2$ objects that have completed their type-2 quota and which are now assigned to type-1 agents; there is a set of $r_2$ objects that have been worked on for $r_2$ a.u. by type-1 agents and which are now assigned to type-2 agents; and the remainder of the objects have been worked on by type-1 agents for $r_2$ a.u. and remain assigned to type-1 agents. The outcome from both schemes is therefore identical up to the numbering of the agents, in that there is a permutation $O\mapsto O'$ of ${\cal O}$ such that at any time point in the execution of the schemes E and S, the object O in scheme E and the object $O'$ in scheme S have been worked on for the same length of time by type-2 agents (and hence also by type-1 agents). Moreover, this equivalence will persist as we pass to subsequent exchanges and stages provided we adhere to the greedy principle of never halting until forced to do so in order not to exceed the type quotas of objects. Therefore, we may identify any scheme based on the greedy principle with the Euclidean scheme $E(r_1,r_2).$
It remains to be determined, however, whether an approach based on the greedy principle is effective for the general n-object problem when $m\geq 3$ .
Acknowledgements
The thoughtful and astute suggestions of the referees were helpful and appreciated.