Abstract
In this paper, we focus on the design of binary constant weight codes that admit low-complexity encoding and decoding algorithms, and that have size \(M=2^k\) so that codewords can conveniently be labeled with binary vectors of length k. For every integer \(\ell \ge 3\), we construct an \((n=2^\ell , M=2^{k_{\ell }}, d=2)\) constant weight code \({{{\mathcal {C}}}}[\ell ]\) of weight \(\ell \) by encoding information in the gaps between successive 1’s of a vector, and we call these codes cyclic-gap constant weight codes. The code is associated with a finite integer sequence of length \(\ell \) satisfying a constraint defined as anchor-decodability that is pivotal to ensuring low complexity for encoding and decoding. The time complexity of the encoding algorithm is linear in the input size k, and that of the decoding algorithm is poly-logarithmic in the input size n, discounting the linear time spent on parsing the input. Neither algorithm requires the expensive computation of binomial coefficients, unlike many existing schemes. Among codes generated by all anchor-decodable sequences, we show that \({{{\mathcal {C}}}}[\ell ]\) has the maximum size with \(k_{\ell } \ge \ell ^2-\ell \log _2\ell + \log _2\ell - 0.279\ell - 0.721\). As k is upper bounded by \(\ell ^2-\ell \log _2\ell +O(\ell )\) information-theoretically, the code \({{{\mathcal {C}}}}[\ell ]\) is optimal in its size with respect to the two higher-order terms in \(\ell \). In particular, \(k_\ell \) meets the upper bound for \(\ell =3\) and is one bit away from it for \(\ell =4\). On the other hand, we show that \({{{\mathcal {C}}}}[\ell ]\) is not unique in attaining \(k_{\ell }\) by constructing an alternate code \(\mathcal{{\hat{C}}}[\ell ]\), again parameterized by an integer \(\ell \ge 3\), with a different low-complexity decoder, yet having the same size \(2^{k_{\ell }}\) when \(3 \le \ell \le 7\). Finally, we also derive new codes by modifying \({{{\mathcal {C}}}}[\ell ]\) that offer a wider range of blocklength and weight while retaining low complexity for encoding and decoding. For certain selected values of parameters, these modified codes too have an optimal k.
1 Introduction
Let n and \(w\le n\) be positive integers. A constant weight binary (n, M, d) code \({{{\mathcal {C}}}}\) of blocklength n and weight w is defined as a subset of \(\{0,1\}^n\) of size M such that every element has the same Hamming weight w. The parameter d is the minimum distance of the code defined as
$$\begin{aligned} d \ = \ \min _{\begin{array}{c} \textbf{c}_1, \textbf{c}_2 \in {{{\mathcal {C}}}} \\ \textbf{c}_1 \ne \textbf{c}_2 \end{array}} d_H(\textbf{c}_1, \textbf{c}_2), \end{aligned}$$
where \(d_H(\textbf{c}_1, \textbf{c}_2)\) denotes the Hamming distance between the binary vectors \(\textbf{c}_1, \textbf{c}_2\). The function A(n, d, w) is the maximum possible size M of a binary constant weight code of blocklength n, weight w and minimum distance d. When \(d=2\), there is no additional constraint on the codebook and therefore it is clear that
$$\begin{aligned} A(n,2,w) \ = \ \binom{n}{w}. \end{aligned}$$
While there is a rich body of literature attempting to characterize A(n, d, w) for \(d \ge 4\) [1,2,3,4, 14, 15, 22], the problem still remains open in the general setting.
Along with the characterization of A(n, d, w), another pertinent problem in the field of constant weight codes is the design of such codes admitting fast encoding and decoding. Considering the ease of implementation using digital hardware, it is desirable that the encoding algorithm takes fixed-length binary vectors as input. In many systems employing a binary constant weight code, only a subset of the codebook whose size is a power of 2 is used to enable efficient implementation, and the rest of the codebook is ignored (e.g., see [23]). Therefore we constrain the size of the codebook to \(M=2^k\) for some positive integer k. We refer to k as the combinatorial dimension of the code. The design of low-complexity algorithms for encoding and decoding constant weight codes has been posed as a problem (Research Problem 17.3) in the widely recognized textbook by MacWilliams and Sloane [21]. In the present paper, we focus on this problem for the simplest case of \(d=2\) assuming a codebook size of \(M=2^k\), with an aim to achieve the largest possible k.
Since \(d=2\), any binary vector of weight w can be included in the codebook and therefore our problem of interest aligns with the problem considered by Schalkwijk [27] to enumerate all binary n-sequences of weight w. In [6], Cover generalized Schalkwijk’s indexing scheme to make it applicable to an arbitrary subset of n-sequences. Prior to the works of Schalkwijk and Cover, the indexing of constant weight n-sequences of weight w was studied in the combinatorial literature; for example, the Lehmer code [20] produces an indexing different from that of Schalkwijk’s scheme. In the combinatorial literature, an n-sequence of weight w is identified with a w-subset (or w-combination) of \(\{0,1,\ldots , n-1\}\) and the set of all w-combinations is assigned an order, for instance the lexicographic order. The rank of a w-subset S is the number of w-subsets that are strictly less than S with respect to the lexicographic order, and the set S is indexed using its rank. A procedure to compute the rank of a w-subset is referred to as a ranking algorithm and conversely, a procedure to recover the w-subset associated with a given rank is an unranking algorithm. The study of ranking/unranking algorithms and their complexity dates back to [24]. There are many unranking algorithms [9, 13, 16,17,18, 26] proposed in the literature aimed primarily at reducing the time complexity. However, all these algorithms require costly computation of binomial coefficients, which incurs either a large time complexity if done online or a large space complexity if the coefficients are precomputed and stored in lookup tables. The first attempt to avoid the computation of binomial coefficients was made by Sendrier in [28], but the resulting code is of variable blocklength. Given this background, our paper makes the following contributions.
1.
We present a family of binary \((n,M=2^k,d=2)\) cyclic-gap constant weight codes \({{{\mathcal {C}}}}[\ell ]\) parameterized by an integer \(\ell \ge 3\). The cyclic-gap code has blocklength \(n=2^{\ell }\), weight \(w = \ell \) and combinatorial dimension \(k=k_{\ell }\) as defined in (5). The code admits an encoding algorithm (Algorithm 1) that is of linear complexity in the input size \(k_{\ell }\). Except for the linear time spent on parsing the input, its decoding algorithm (Algorithm 2) has a time-complexity that is poly-logarithmic in the input size n. Neither the encoding nor the decoding requires the computation of binomial coefficients.
2.
The code \({{{\mathcal {C}}}}[\ell ]\) is associated with a finite integer sequence \(f_{\ell }\) of length \(\ell \), defined in Definition 1, that satisfies a constraint referred to as anchor-decodability, which is instrumental in realizing encoding and decoding algorithms of very low complexity. Among all the codes generated by anchor-decodable sequences, we prove that \({{{\mathcal {C}}}}[\ell ]\) maximizes the combinatorial dimension. At the same time, we also show that \({{{\mathcal {C}}}}[\ell ]\) is not the unique code that maximizes the combinatorial dimension. This is done by providing a second code construction \(\mathcal{{\hat{C}}}[\ell ]\) with an alternate low-complexity decoder, but with the same combinatorial dimension as that of \({{{\mathcal {C}}}}[\ell ]\) when \(3 \le \ell \le 7\).
3.
While the cyclic-gap code \({{{\mathcal {C}}}}[\ell ]\) naturally pays a price in its combinatorial dimension k, it performs fairly well against the information-theoretic upper bound \(\lfloor \log _2 A(n,2,w) \rfloor \). When \(\ell =3\), it in fact achieves the upper bound, and when \(\ell =4\), it is one bit away from the upper bound. In general, while both \(k_{\ell }\) and \(\lfloor \log _2 A(2^{\ell },2,\ell ) \rfloor \) grow quadratically with \(\ell \), the difference \(\Delta (\ell ) = \lfloor \log _2 A(2^{\ell },2,\ell ) \rfloor - k_{\ell }\) is upper bounded by \({(1+\tfrac{1}{2}\log _2 e)} \ell - {\tfrac{3}{2}} \log _2 \ell \), i.e., it grows only linearly with \(\ell \).
4.
Without compromising on complexity, we derive new codes permitting a larger range of parameters by modifying \({{{\mathcal {C}}}}[\ell ]\) in three different ways. In the first approach, the derived code \({{{\mathcal {C}}}}_t[\ell ]\) has blocklength \(n=2^\ell \), weight \(w=t\) and combinatorial dimension k as defined in (50) for \(\log _2 t < \ell -1\). In the second approach, the derived code \({{{\mathcal {D}}}}_t[\ell ]\) has blocklength \(n=2^\ell \), weight \(w=t\) and combinatorial dimension k as defined in (51) for \(1 \le t \le \ell -1\). In the third approach, the derived code \({{{\mathcal {B}}}}_t[\ell ]\) has blocklength \(n=2^\ell -2^t +1\), weight \(w=\ell \) and combinatorial dimension \(k=k_{\ell } - 2t\). For certain selected values of parameters, these codes also achieve the corresponding upper bound on k.
2 The construction of cyclic-gap codes
Let \(|\textbf{x}|\) denote the length of a vector (or a finite sequence) \(\textbf{x}\). We use \(\textbf{x}_1 \Vert \textbf{x}_2\) to denote the concatenation of two vectors \(\textbf{x}_1, \textbf{x}_2\). Entries in a vector \(\textbf{x}\) of length \(|\textbf{x}|=\textsf{len}\) are denoted by \(x[0],x[1],\ldots , x[\textsf{len}-1]\). We use \(\textbf{x}[a,m]\) to denote the sub-vector \([x[a], \ x[(a+1) \bmod \textsf{len}], \ \ldots , \ x[(a+m-1) \bmod \textsf{len}]]\), where the \(1\le m \le \textsf{len}\) elements are accessed in a cyclic manner starting from x[a]. A complementary sub-vector of length \((\textsf{len}-m)\) can be obtained by deleting \(\textbf{x}[a,m]\) from \(\textbf{x}\) and it is denoted by \(\bar{\textbf{x}}[a,m]\). We use \(\text {dec}(\textbf{x})\) to denote the decimal equivalent of the binary vector \(\textbf{x}\) assuming big-endian format (least significant bit at the far end on the right). The Hamming weight of a vector \(\textbf{x}\) is denoted by \(w_H(\textbf{x})\). For integers a, b, we use [a] to denote \(\{1,2,\ldots , a\}\) and \([a \ b]\) to denote \(\{a, a+1, \ldots , b\}\). We use \(1^m\) to denote a vector of m 1’s and \(0^m\) to denote a vector of m 0’s. We use \(\text {Im}(f)\) to denote the image of a function f.
Our main idea behind the construction is to divide the message vector \(\mathbf{{x}}\) into \(\ell \) blocks of non-decreasing lengths, and then use the decimal value of each block to determine the position of the next 1-entry in the codeword of length \(2^\ell \). Following this rule, the gaps among the \(\ell \) 1-entries in a codeword will also allow us to recover the message uniquely. We first start with a simple warm-up construction in Sect. 2.1, which provides the intuition behind our approach, before developing the general construction and related theorems in Sects. 2.2, 2.3, and 2.4.
2.1 A warm-up construction
Let us first assume that \(\ell \) is a power of 2. The encoding works as follows. First, we divide the binary message vector \(\mathbf{{x}}\) into \(\ell \) blocks \(\mathbf{{x}_{\ell }},\mathbf{{x}_{\ell -1}},\mathbf{{x}_{\ell -2}},\ldots ,\mathbf{{x}_2},\mathbf{{x}_1}\) of lengths \(\ell ,\ell -\log _2 \ell ,\ell -\log _2 \ell ,\ldots ,\ell -\log _2 \ell ,\ell -\log _2 \ell -1\), respectively, without altering the order of bits, i.e., \(\mathbf{{x}}=\mathbf{{x}_{\ell }}||\mathbf{{x}_{\ell -1}}||\mathbf{{x}_{\ell -2}}||\ldots ||\mathbf{{x}_2}||\mathbf{{x}_1}\). For instance, with \(\ell =4\), we will have the sequence 1, 2, 2, 4 such that the i-th element of the sequence is the length of \(\textbf{x}_i\) for \(i=1,2,3,4\). With \(\ell =8\), we have the sequence 4, 5, 5, 5, 5, 5, 5, 8. Note that the length of \(\mathbf{{x}}\) is \(|\mathbf{{x}}|=\ell +{(\ell -2)}(\ell -\log _2\ell )+(\ell -\log _2\ell -1)=\ell ^2 - \ell \log _2\ell + (\log _2\ell -1)\).
Next, we encode this message into a binary codeword \(\mathbf{{c}}\) of length \(2^\ell \) and Hamming weight \(\ell \) as follows. We set \(\mathbf{{c}}=(c[0],c[1],\ldots ,c[2^\ell -1])\) to the all-zero codeword and index its bits from 0 to \(2^\ell -1\). Let \(\textsf {pos}_{\ell }\triangleq \textsf {dec}(\mathbf{{x}_{\ell }})\) be the decimal value of the block \(\mathbf{{x}_{\ell }}\). Leave the first \(\textsf {pos}_{\ell }\) bits unchanged as 0’s, but set the \((\textsf {pos}_{\ell }+1)\)-th bit of \(\mathbf{{c}}\) to one, i.e. \(c[\textsf {pos}_{\ell }] \triangleq 1\). Now, we move to \(\mathbf{{x}_{\ell -1}}\) and again let \(\textsf {pos}_{\ell -1} \triangleq \textsf {dec}(\mathbf{{x}_{\ell -1}})\). We skip \(\textsf {pos}_{\ell -1}\) 0’s after the first 1, and set the next bit to 1, i.e. \(c[(\textsf {pos}_\ell +\textsf {pos}_{\ell -1}+1)\mod 2^\ell ]\triangleq 1\). Note that here we move from the left to the right cyclically along the codeword indices, wrapping around at the end. We continue the process until the last block \(\mathbf{{x}_1}\) is read and the last 1 is added to \(\mathbf{{c}}\).
Fig. 1: Illustration of the encoding process when \(\ell =4\) and the message vector \(\mathbf{{x}}=(1,0,1,0,1,1,1,0,0)\) is encoded into the codeword \(\mathbf{{c}}\) of length \(16=2^4\) (represented by the circle) with \(c[1]=c[2]=c[10]=c[14]=1\). For decoding, one first determines the anchor (the underlined 1), which is the 1 that has the largest number of consecutive zeros on its left (cyclically), or equivalently, has the largest gap to the nearest 1 on its left. Once the anchor is found, each message block can be recovered by counting the number of 0’s between the current 1 and the next one
For the example illustrated in Fig. 1, when \(\ell =4\), the message vector \(\mathbf{{x}}= (1,0,1,0,1,1,1,0,0)\) is divided into \(\mathbf{{x}}_4=(1,0,1,0)\), \(\mathbf{{x}}_3=(1,1)\), \(\mathbf{{x}}_2=(1,0)\), and \(\mathbf{{x}}_1=(0)\), which are of lengths 4, 2, 2, 1 as described earlier. Since \(\textsf {dec}(\mathbf{{x}}_4)=10\), we set \(c[10]=1\), noting that the bits of \(\mathbf{{c}}\) are indexed from 0 to 15. Next, since \(\textsf {dec}(\mathbf{{x}}_3)=3\), we set \(c[14]=c[(10+3+1)]=1\). Similarly, as \(\textsf {dec}(\mathbf{{x}}_2)=2\) and \(\textsf {dec}(\mathbf{{x}}_1)=0\), we set \(c[1] = c[(14+2+1) \mod 16] = 1\) and \(c[2] = c[(1+0+1)] = 1\). As a result, \(\mathbf{{c}}= (0,1,1,0,0,0,0,0,0,0,{\underline{1}},0,0,0,1,0)\). To decode, given such a codeword \(\mathbf{{c}}\), we need to reconstruct \(\mathbf{{x}}\). If the position of the “first” 1 (called the anchor), which corresponds to the block \(\mathbf{{x}_{\ell }}\), is known, then \(\mathbf{{x}_{\ell }}\) can be recovered right away. Moreover, the gap (that is, the number of 0’s) between this 1 and the next 1 on its right (cyclically, wrapping around if necessary) will be the decimal value of the block \(\mathbf{{x}_{\ell -1}}\). For example, if we know the 1 at index 10 of \(\mathbf{{c}}\) (the underlined one) is the anchor, then we can derive immediately that \(\mathbf{{x}}_4=(1,0,1,0)\). Moreover, we can simply count the number of 0’s between this 1 and the next, which is 3, and recover \(\mathbf{{x}}_3 = (1,1)\). All the \(\ell \) blocks of \(\mathbf{{x}}\) can be recovered in this way. Thus, the key step is to determine the anchor.
We claim that, thanks to the way we split \(\mathbf{{x}}\), the 1 with the largest number of 0’s on its left (wrapping around if necessary) in \(\mathbf{{c}}\) is the anchor, created by \(\mathbf{{x}_{\ell }}\). Note that for the 1’s created by \(\mathbf{{x}_1},\ldots ,\mathbf{{x}_{\ell -1}}\), the numbers of 0’s on their left are at most \(\max _{\mathbf{{x}_{\ell -1}}}\textsf {dec}(\mathbf{{x}_{\ell -1}}) = 2^{\ell -\log _2\ell }-1 = \frac{2^{\ell }}{\ell } - 1\). On the other hand, since the total number of 0’s in \(\mathbf{{c}}\) is \(2^{\ell } - \ell \) and the remaining gaps are bounded as above (with the finer bound \(2^{\ell -\log _2\ell -1}-1\) for the gap created by \(\mathbf{{x}_1}\)), for every \(\ell \ge 3\) the number of 0’s on the left of the anchor is at least
$$\begin{aligned} 2^{\ell } - \ell - (\ell -2)\Big (\frac{2^{\ell }}{\ell } - 1\Big ) - \Big (\frac{2^{\ell }}{2\ell } - 1\Big ) \ = \ \frac{3}{2}\cdot \frac{2^{\ell }}{\ell } - 1 \ > \ \frac{2^{\ell }}{\ell } - 1, \end{aligned}$$
which proves our claim.
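To make the warm-up concrete, here is a small Python check of the \(\ell =4\) example above; the snippet is our own illustration (the names and the bit-list representation are ours, not the paper’s).

```python
n = 16                                           # ell = 4, n = 2^4
x4, x3, x2, x1 = 0b1010, 0b11, 0b10, 0b0         # decimal values of the four blocks
c, pos = [0] * n, 0b1010
c[pos] = 1                                       # the anchor: c[10] = 1
for gap in (x3, x2, x1):                         # skip `gap` zeros, set the next bit
    pos = (pos + gap + 1) % n
    c[pos] = 1
ones = [i for i, b in enumerate(c) if b]
assert ones == [1, 2, 10, 14]
# the anchor is the 1 with the largest cyclic run of 0's to its left
left_gap = {p: (p - q - 1) % n for p, q in zip(ones, ones[-1:] + ones[:-1])}
assert max(left_gap, key=left_gap.get) == 10
```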
Finally, note that this warm-up construction assumes that \(\ell \) is a power of 2. This can be generalized for any \(\ell \ge 3\).
2.2 A finite integer sequence
In this subsection, we generalize the sequence used in the warm-up construction for every \(\ell \ge 3\).
Definition 1
Let \(\ell \ge 3\). Then \(f_{\ell }(i), i = 1,2,\ldots , \ell \) is a finite integer sequence of length \(\ell \) defined as follows. If \(\ell \) is not a power of 2, then
$$\begin{aligned} f_{\ell }(i) = {\left\{ \begin{array}{ll} \ell - \lceil \log _2 \ell \rceil , &{} 1 \le i \le \ell - \mu , \\ \ell - \lfloor \log _2 \ell \rfloor , &{} \ell - \mu < i \le \ell -1, \\ \ell , &{} i = \ell , \end{array}\right. } \end{aligned}$$
where \(\mu = 2^{\lceil \log _2 \ell \rceil } - \ell \). If \(\ell \) is a power of 2, then
$$\begin{aligned} f_{\ell }(i) = {\left\{ \begin{array}{ll} \ell - \log _2 \ell - 1, &{} i = 1, \\ \ell - \log _2 \ell , &{} 2 \le i \le \ell -1, \\ \ell , &{} i = \ell . \end{array}\right. } \end{aligned}$$
Next we define
$$\begin{aligned} k_{\ell } \ \triangleq \ \sum _{i=1}^{\ell } f_{\ell }(i), \end{aligned}$$(5)
which evaluates to
$$\begin{aligned} k_{\ell } \ = \ {\left\{ \begin{array}{ll} \ell + (\mu -1)(\ell - \lfloor \log _2 \ell \rfloor ) + (\ell -\mu )(\ell - \lceil \log _2 \ell \rceil ), &{} \ell \text { not a power of } 2, \\ \ell ^2 - \ell \log _2 \ell + \log _2 \ell - 1, &{} \ell \text { a power of } 2. \end{array}\right. } \end{aligned}$$(6)
The lower bound on \(k_{\ell }\) obtained in the following proposition gives a lucid estimate of how it grows with \(\ell \).
Proposition 2.1
Let \(\ell \ge 3\) be an integer. Suppose \(\ell = 2^a+b\) where \(2^a\) is the largest power of 2 satisfying \(2^a \le \ell \), and \(b\ge 0\). Then
As a corollary, \(k_{\ell } \ \ge \ \ell ^2 - \ell \log _2\ell + \log _2 \ell - \ell (1-\tfrac{1}{2\ln 2}) - \tfrac{1}{2\ln 2} \) for every \(\ell \ge 3\).
Proof
The bound in (7) is trivially true with equality when \(b=0\) and hence it is tight. When \(\ell \) is not a power of 2, i.e., \(b \ne 0\), we substitute the value of \(\mu \) in (6) to obtain
In (8), we use an upper bound for \(\lfloor \log _2 \ell \rfloor \) in terms of \(\log _2 \ell \) obtained by invoking the inequality \(\ln (1+x) \ge \frac{x}{1+x}\). Observe that \(b < \tfrac{\ell }{2}\). We substitute it in (7) and observe that \({\ell (1-\tfrac{1}{2\ln 2}) + \tfrac{1}{2\ln 2} \ge 1}\) for every \(\ell \ge 3\). This proves the corollary. \(\square \)
2.3 Encoding information in gaps
In this section, we present an encoding algorithm (see Algorithm 1) that encodes information in gaps between successive 1’s of a binary vector of length \(n=2^{\ell }\), using the sequence \(s_{\ell }=s_{\ell }(1),s_{\ell }(2),\ldots ,s_{\ell }(\ell )\) where \(s_{\ell }(\ell )\) is fixed to be \(\ell \). More specifically, the message vector \(\mathbf{{x}}\) will be divided into \(\ell \) blocks \(\mathbf{{x}_{\ell }},\ldots ,\mathbf{{x}_2}, \mathbf{{x}_1}\), which are of lengths \(s_{\ell }(\ell ),\ldots ,s_{\ell }(2), s_{\ell }(1)\), and gaps between successive 1’s of the codewords depend on the decimal value of each of these blocks. The function gap defined below formalizes the notion of gap as the first step.
Definition 2
Let \(a, b \in {\mathbb {Z}}_n\). Then the gap from a to b is a natural number taking values in \([0 \ (n-1)]\) given by
$$\begin{aligned} \textsf{gap}(a,b) \ = \ (b - a - 1) \bmod n. \end{aligned}$$
The encoding algorithm given in Algorithm 1 is invoked taking the sequence \(s_{\ell }\) as an auxiliary input. The input \(\textbf{x}\) is the message vector that gets encoded, and its length must be
$$\begin{aligned} k(s_{\ell }) \ = \ \sum _{i=1}^{\ell } s_{\ell }(i). \end{aligned}$$
The encoded vector is the output \(\textbf{c}\) of length n. The input vector \(\textbf{x}\) is partitioned as \(\textbf{x}_{\ell }\Vert \textbf{x}_{\ell -1} \Vert \cdots \Vert \textbf{x}_{1}\) such that \(|\textbf{x}_{i}|=s_{\ell }(i)\) for \(i \in [\ell ]\). The vector \(\textbf{c}\) is initialized as the all-zero vector and \(\ell \) locations of \(\textbf{c}\) are subsequently set to 1. The first block \(\textbf{x}_{\ell }\) determines the location of the first 1. Thereafter, the input bits are read in blocks \(\textbf{x}_{\ell -1}, \textbf{x}_{\ell -2},\ldots , \textbf{x}_{1}\) and every time a block \(\textbf{x}_i, \ell -1 \ge i \ge 1\), is read, a bit in \(\textbf{c}\) is set to 1 in such a manner that the gap from the previously set 1 is equal to \(\text {dec}(\textbf{x}_i)\). The gap is always computed modulo n so that the position pointer \(\textsf{pos}\) can wrap around cyclically. The algorithm has a linear time-complexity in the input size \(k(s_{\ell })\), and it defines the encoding map \(\phi :\{0,1\}^{k(s_{\ell })} \rightarrow \{0,1\}^{n}\).
Algorithm 1 The encoding algorithm
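For concreteness, the following Python sketch mirrors the description of Algorithm 1 above; it is our own rendition, and the list representation of \(s_{\ell }\) and the function name encode are our choices, not the paper’s.

```python
def encode(x, s):
    """Sketch of Algorithm 1: map a bit-list x of length k(s) = sum(s) to a
    binary vector of length n = 2^ell by writing information into cyclic gaps.
    Here s = [s(1), ..., s(ell)] is the auxiliary sequence with s(ell) = ell."""
    ell, n = len(s), 2 ** len(s)
    dec = lambda b: int(''.join(map(str, b)), 2)   # big-endian decimal value
    blocks, start = [], 0
    for width in reversed(s):                      # x_ell first, then x_{ell-1}, ...
        blocks.append(x[start:start + width])
        start += width
    c = [0] * n
    pos = dec(blocks[0])                           # x_ell fixes the anchor position
    c[pos] = 1
    for b in blocks[1:]:                           # x_{ell-1}, ..., x_1
        pos = (pos + dec(b) + 1) % n               # leave a gap of dec(b) zeros
        c[pos] = 1
    return c
```

For instance, encode([1, 0, 1, 0, 1, 1, 1, 0, 0], [1, 2, 2, 4]) reproduces the codeword of Fig. 1 with 1’s at positions 1, 2, 10 and 14.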
Choosing the auxiliary input \(s_{\ell }\) as \(f_{\ell }\) defined in Sect. 2.2 and fixing \(\ell =4\) recovers the warm-up construction presented in Sect. 2.1. Apart from the fact that \(s_{\ell }(\ell )=\ell \) always, there is room to vary \(s_{\ell }(i),i=1,2,\ldots , \ell -1\). Thus Algorithm 1 provides a generic method to encode information as gaps in a vector of length \(n=2^{\ell }\). What it requires is to identify a “good” sequence so as to produce a code that is easily decodable and at the same time has a high combinatorial dimension.
2.4 A decodability criterion and a decoding algorithm
In this subsection, we first establish a criterion for unique decodability of a vector \(\textbf{c}\) obtained as the output of the encoding algorithm \(\phi \). The criterion solely depends on the auxiliary input \(s_{\ell }\) and is stated in Definition 4.
Definition 3
Let \(\textbf{g} = (g[0], g[1],\ldots , g[\ell -1])\) be a vector of length \(\ell \). Then the circular shift of \(\textbf{g}\) by \(\ell _0 \in {\mathbb {Z}}_{\ell }\) is defined as
$$\begin{aligned} \textsf{cshift}(\textbf{g},\ell _0) \ = \ \big ( g[\ell _0], \ g[(\ell _0+1) \bmod \ell ], \ \ldots , \ g[(\ell _0+\ell -1) \bmod \ell ] \big ). \end{aligned}$$(10)
For any \(\ell _0 \in {\mathbb {Z}}\), the definition still holds true by replacing \(\ell _0\) by \(\ell _0 \mod \ell \) in (10).
Definition 4
Let \(\ell \ge 3\) be an integer. A non-decreasing sequence \(s_{\ell }\) of length \(\ell \) is said to be anchor-decodable if \(s_{\ell }(\ell )=\ell \) and the following two conditions hold:
1.
$$\begin{aligned} 2^{\ell } - \sum _{i=1}^{\ell -1} 2^{s_{\ell }(i)} \ \ge \ 2^{s_{\ell }(\ell -1)}. \end{aligned}$$(11)
2.
The vector \(\varvec{\gamma } = (2^{\ell } - 1 - \sum _{i=1}^{\ell -1} 2^{s_{\ell }(i)}, 2^{s_{\ell }(\ell -1)}-1,2^{s_{\ell }(\ell -2)}-1,\ldots ,2^{s_{\ell }(1)}-1)\) is distinguishable from any of its cyclic shifts, i.e., \(\textsf{cshift}(\varvec{\gamma },\ell _0) \ne \varvec{\gamma }\) for every integer \(0< \ell _0 < \ell \).
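Read operationally, Definition 4 is a simple test on the sequence; the following Python sketch (our own, with hypothetical names) checks both conditions.

```python
def is_anchor_decodable(s):
    """Test Definition 4 on s = [s(1), ..., s(ell)]: s must be non-decreasing
    with s(ell) = ell, satisfy condition (11), and have a vector gamma that
    differs from every non-trivial cyclic shift of itself."""
    ell = len(s)
    if s[-1] != ell or any(s[i] > s[i + 1] for i in range(ell - 1)):
        return False
    rest = sum(2 ** v for v in s[:-1])
    if 2 ** ell - rest < 2 ** s[-2]:               # condition 1, inequality (11)
        return False
    gamma = [2 ** ell - 1 - rest] + [2 ** v - 1 for v in reversed(s[:-1])]
    return all(gamma != gamma[k:] + gamma[:k] for k in range(1, ell))  # condition 2
```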
In what follows in this subsection, we will describe why the conditions in Definition 4 are important and how they naturally lead to a fast decoding algorithm as presented in Algorithm 2. As the first step, we show that the Hamming weight of \(\phi (\textbf{x})\) is always \(\ell \) for every input \(\textbf{x}\) to the encoder in Algorithm 1 if the sequence \(s_{\ell }\) is anchor-decodable.
Lemma 2.2
Let \(\ell \ge 3\) and \(n = 2^\ell \). If \(s_{\ell }=\big (s_{\ell }(1),s_{\ell }(2),\ldots ,s_{\ell }(\ell )\big )\) is an anchor-decodable sequence, then \(w_H(\phi (\textbf{x})) = \ell \) for every \(\textbf{x} \in \{0,1\}^{k(s_{\ell })}\), where \(\phi (\cdot )\) is determined by Algorithm 1.
Proof
Let \(\textbf{c} = \phi (\textbf{x})\). After completing the first iteration of the loop in Line 4 of Algorithm 1, the position pointer \(\textsf{pos}\) takes a value \(p_0=\text {dec}(\textbf{x}_{\ell })\) lying between 0 and \(2^\ell -1\), and \(\textbf{c}\) has Hamming weight 1 with \(c[p_0]=1\). The loop has \((\ell -1)\) remaining iterations indexed by \(j=\ell -1,\ldots , 1\). In each of these \((\ell -1)\) iterations, \(\textsf{pos}\) is incremented modulo n at least by 1 and at most by \(2^{|\textbf{x}_j|}, j \in [\ell -1]\). Therefore, the maximum cumulative increment p in \(\textsf{pos}\) from \(p_0\) by the end of these \((\ell -1)\) iterations is given by:
$$\begin{aligned} p \ = \ \sum _{j=1}^{\ell -1} 2^{s_{\ell }(j)}. \end{aligned}$$(12)
If \(s_{\ell }\) is anchor-decodable, then from (11) we obtain that
$$\begin{aligned} p \ = \ \sum _{j=1}^{\ell -1} 2^{s_{\ell }(j)} \ \le \ 2^{\ell } - 2^{s_{\ell }(\ell -1)} \ < \ n. \end{aligned}$$
Since \(p < n\), a distinct bit of \(\textbf{c}\) is flipped from 0 to 1 in every iteration and therefore \(w_H(\textbf{c})=\ell \). \(\square \)
Let us view the input \(\textbf{x}\) as a concatenation of \(\ell \) binary strings as \(\textbf{x} = \textbf{x}_{\ell } \Vert \textbf{x}_{\ell -1} \Vert \cdots \Vert \textbf{x}_{1}\) where \(|\textbf{x}_i|=s_{\ell }(i)\). Suppose that \(\textbf{c} = \phi (\textbf{x})\) is the output of Algorithm 1. By Lemma 2.2, \(\textbf{c}\) has \(\ell \) 1’s. Let \(j[m], m=0,1,\ldots , \ell -1\) denote the locations of 1’s in \(\textbf{c}\) counting from left to right and let
$$\begin{aligned} \textbf{g}[m] \ = \ \textsf{gap}\big ( j[(m-1) \bmod \ell ], \ j[m] \big ), \quad m = 0,1,\ldots , \ell -1, \end{aligned}$$
denote the array of the number of zeros between two successive 1’s, cyclically wrapping around \(\textbf{c}\) if required. The principle of the decoding algorithm in Algorithm 2 is to uniquely identify the anchor bit of \(\textbf{c}\) assuming that the sequence \(s_{\ell }\) is anchor-decodable. Recall (Sect. 2.1) that the anchor bit in a codeword \(\textbf{c}\) is the first bit flipped to 1 while running the encoding algorithm to generate \(\textbf{c}\). To be precise,
$$\begin{aligned} j[\mathsf{anchor\_index}] \ = \ \text {dec}(\textbf{x}_{\ell }), \end{aligned}$$(15)
and we call \(j[\mathsf{anchor\_index}]\) the anchor and \(\textbf{c}[j[\mathsf{anchor\_index}]]\) the anchor bit. The procedure FindAnchor (Algorithm 3) invoked at Line 3 of Algorithm 2 returns \(\mathsf{anchor\_index}\), and its correctness will be analyzed shortly. If the index \(\mathsf{anchor\_index}\) is uniquely identified by an input vector \(\textbf{c}\), then it is straightforward to observe that \(\textbf{x}_{\ell }, \textbf{x}_{\ell -1}, \ldots , \textbf{x}_{1}\) are uniquely determined. The procedure to recover \(\textbf{x}\) given the knowledge of \(j[\mathsf{anchor\_index}]\) is laid down in Lines 4–8 of Algorithm 2.
Algorithm 2 The decoding algorithm
Algorithm 3 The procedure FindAnchor
Let us proceed to check the correctness of Algorithm 3 FindAnchor. It is straightforward to see that:
$$\begin{aligned} \textbf{g}[(\mathsf{anchor\_index} + i) \bmod \ell ] \ = \ \text {dec}(\textbf{x}_{\ell -i}) \ \le \ 2^{s_{\ell }(\ell -i)} - 1, \quad i \in [\ell -1]. \end{aligned}$$
Therefore we have
$$\begin{aligned} \textbf{g}[\mathsf{anchor\_index}] \ = \ 2^{\ell } - \ell - \sum _{i=1}^{\ell -1} \text {dec}(\textbf{x}_{\ell -i}) \ \ge \ 2^{\ell } - 1 - \sum _{i=1}^{\ell -1} 2^{s_{\ell }(i)}. \end{aligned}$$(16)
The inequality in (16) follows from the way \(\textbf{x}_{\ell -i}\) is encoded by Algorithm 1. It is straightforward to check that equality holds in (16) if and only if the message vector is of the type
$$\begin{aligned} \textbf{x} \ = \ \textbf{x}_{\ell } \Vert 1^{k(s_{\ell }) - \ell }, \end{aligned}$$(17)
i.e., every block other than \(\textbf{x}_{\ell }\) is an all-one vector.
When the message vector satisfies (17), every gap except \(\mathbf{{g}}[\mathsf{anchor\_index}]\) becomes maximal in length, and therefore we refer to this special case as the maximal-gap case. The Lines 2–3 in Algorithm 3 check for the maximal-gap case by comparing every circular shift of the vector \(\textbf{g}\) with a fixed vector \(\mathsf{gaps\_allone}\). The vector \(\mathsf{gaps\_allone}\) corresponds to a message vector of the type
$$\begin{aligned} \textbf{x} \ = \ 0^{\ell } \Vert 1^{k(s_{\ell }) - \ell } \end{aligned}$$(18)
for which \(\mathsf{anchor\_index} = 0\). If \(\textsf{cshift}(\textbf{g},n_0)\) becomes equal to \(\mathsf{gaps\_allone}\) for some \(0 \le n_0 \le (\ell -1)\), then by the second condition in Definition 4, \(n_0\) is unique and is equal to \(\mathsf{anchor\_index}\).
If (17) is false, then clearly (16) is satisfied with strict inequality, and in that case
$$\begin{aligned} \textbf{g}[\mathsf{anchor\_index}] \ > \ 2^{\ell } - 1 - \sum _{i=1}^{\ell -1} 2^{s_{\ell }(i)} \ \ge \ 2^{s_{\ell }(\ell -1)} - 1 \ \ge \ \textbf{g}[m] \quad \text {for every } m \ne \mathsf{anchor\_index}, \end{aligned}$$
by the first condition of Definition 4 and the fact that \(s_{\ell }\) is non-decreasing. Thus Line 5 of Algorithm 3 correctly identifies the \(\mathsf{anchor\_index}\) and therefore it is correct if the sequence \(s_{\ell }\) is anchor-decodable. Thus Algorithm 2 provides an explicit decoder that maps \(\textbf{c}\) uniquely to \(\textbf{x}\) leading to the following theorem.
Theorem 2.3
Let \(\ell \ge 3\) and \(n=2^\ell \). For every anchor-decodable sequence \(s_{\ell }\) as defined in Definition 4, the map \(\phi \) defined by Algorithm 1 with \(s_{\ell }\) as auxiliary input is one-to-one. Furthermore, for every \(\textbf{x} \in \{0,1\}^{k(s_{\ell })}\) with \(k(s_{\ell })=\sum _i s_{\ell }(i)\), Algorithm 2 outputs \(\textbf{x}\) when \(\phi (\textbf{x}) \in \{0,1\}^{2^{\ell }}\) is passed as its input.
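Under the same representational assumptions as the encoder sketch above, Algorithms 2 and 3 can be rendered roughly as follows; this is our own sketch of the decoding principle, not the paper’s pseudocode.

```python
def decode(c, s):
    """Sketch of Algorithms 2-3: recover x from c = encode(x, s), assuming
    the auxiliary sequence s is anchor-decodable."""
    ell, n = len(s), len(c)
    j = [i for i, bit in enumerate(c) if bit]               # locations of the ell 1's
    g = [(j[m] - j[m - 1] - 1) % n for m in range(ell)]     # g[m]: zeros before j[m]
    head = 2 ** ell - 1 - sum(2 ** v for v in s[:-1])
    gaps_allone = [head] + [2 ** v - 1 for v in reversed(s[:-1])]
    # maximal-gap case (Lines 2-3): some cyclic shift of g equals gaps_allone
    anchor = next((k for k in range(ell) if g[k:] + g[:k] == gaps_allone), None)
    if anchor is None:                                      # otherwise (Line 5) the
        anchor = max(range(ell), key=lambda m: g[m])        # anchor gap is the unique max
    bits = lambda v, w: [int(b) for b in format(v, '0' + str(w) + 'b')]
    x = bits(j[anchor], ell)                                # j[anchor] yields x_ell
    for i in range(1, ell):                                 # following gaps yield
        x += bits(g[(anchor + i) % ell], s[ell - 1 - i])    # x_{ell-1}, ..., x_1
    return x
```

Round-tripping decode(encode(x, s), s) over all messages for small \(\ell \) is an easy sanity check of the anchor-decodability conditions.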
2.5 Cyclic-gap constant weight codes
By Theorem 2.3 and Lemma 2.2, every anchor-decodable sequence \(s_{\ell }\) has an associated binary constant weight code \(\phi (\{0,1\}^{k(s_{\ell })})\). We define
$$\begin{aligned} {{{\mathcal {C}}}}[s_{\ell }] \ \triangleq \ \phi \big ( \{0,1\}^{k(s_{\ell })} \big ) \end{aligned}$$
as the cyclic-gap constant weight code associated with its characteristic sequence \(s_{\ell }\). The codewords of \({{{\mathcal {C}}}}[s_{\ell }]\) can be obtained as \(2^{k(s_{\ell })}\) distinct permutations of \(1^{\ell }\Vert 0^{n-\ell }\). This is a subcode of the Type I permutation modulation code of size \(\binom{2^{\ell }}{\ell }\) with initial vector \(1^{\ell }\Vert 0^{n-\ell }\) introduced in [29]. Therefore the encoder \(\phi \) gives an elegant method to map binary vectors of length \(k(s_{\ell })\) to a subset of the permutation code, which is otherwise usually carried out by picking vectors in lexicographic order [23].
In the following, we verify that \(f_{\ell }\) defined in Definition 1 is an anchor-decodable sequence. Clearly \(f_{\ell }\) is non-decreasing and \(f_\ell (\ell )=\ell \). When \(\ell \) is not a power of 2,
$$\begin{aligned} 2^{\ell } - \sum _{i=1}^{\ell -1} 2^{f_{\ell }(i)} \ = \ 2^{\ell - \lfloor \log _2 \ell \rfloor } \end{aligned}$$(22)
$$\begin{aligned} \ \ge \ 2^{f_{\ell }(\ell -1)}, \end{aligned}$$(23)
and (23) holds with equality if and only if \(\mu > 1\). In the above, (22) follows by substituting the value of \(\mu \) and calculating that:
$$\begin{aligned} \sum _{i=1}^{\ell -1} 2^{f_{\ell }(i)} \ = \ (\ell - \mu ) 2^{\ell - \lceil \log _2 \ell \rceil } + (\mu - 1) 2^{\ell - \lfloor \log _2 \ell \rfloor } \ = \ 2^{\ell } - 2^{\ell - \lfloor \log _2 \ell \rfloor }. \end{aligned}$$(24)
On the other hand, when \(\ell \) is a power of 2,
$$\begin{aligned} 2^{\ell } - \sum _{i=1}^{\ell -1} 2^{f_{\ell }(i)} \ = \ 2^{\ell } - 2^{\ell - \log _2 \ell - 1} - (\ell -2) 2^{\ell - \log _2 \ell } \ = \ 3 \cdot 2^{\ell - \log _2 \ell - 1} \ > \ 2^{f_{\ell }(\ell -1)}. \end{aligned}$$(25)
By (23) and (25), the first condition of anchor-decodability is satisfied. In order to check for the second condition in Definition 4, let us first compute \(\varvec{\gamma }\) when \(\ell \) is not a power of 2:
$$\begin{aligned} \varvec{\gamma } \ = \ \big ( \underbrace{2^{\ell - \lfloor \log _2 \ell \rfloor } - 1, \ldots , 2^{\ell - \lfloor \log _2 \ell \rfloor } - 1}_{\mu \text { entries}}, \ \underbrace{2^{\ell - \lceil \log _2 \ell \rceil } - 1, \ldots , 2^{\ell - \lceil \log _2 \ell \rceil } - 1}_{\ell - \mu \text { entries}} \big ). \end{aligned}$$
Clearly, \(\varvec{\gamma }\) is distinguishable from any of its \((\ell -1)\) non-trivial cyclic shifts as \(1 \le \mu \le \ell -2\); when \(\ell \) is a power of 2, the first entry \(3 \cdot 2^{\ell - \log _2 \ell - 1} - 1\) of \(\varvec{\gamma }\) is strictly larger than every other entry, so the same conclusion holds. This establishes that \(f_{\ell }\) is anchor-decodable. As will be shown in the next subsection, \(f_{\ell }\) is in fact an optimal anchor-decodable sequence producing the largest possible code \({{{\mathcal {C}}}}[\ell ]\) as defined below. We call \({{{\mathcal {C}}}}[\ell ]\) the cyclic-gap constant weight code without any specific reference to the characteristic sequence.
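Combining the closed form of Definition 1 with the checker sketched after Definition 4, anchor-decodability of \(f_{\ell }\) can also be confirmed numerically; the sketch below is again our own illustration.

```python
def f(ell):
    """The sequence f_ell of Definition 1 (as stated above)."""
    lg = ell.bit_length() - 1                      # floor(log2(ell))
    if ell & (ell - 1) == 0:                       # ell is a power of 2
        return [ell - lg - 1] + [ell - lg] * (ell - 2) + [ell]
    mu = 2 ** (lg + 1) - ell
    return ([ell - lg - 1] * (ell - mu)            # entries ell - ceil(log2 ell)
            + [ell - lg] * (mu - 1) + [ell])       # entries ell - floor(log2 ell)

assert f(4) == [1, 2, 2, 4] and f(8) == [4, 5, 5, 5, 5, 5, 5, 8]
assert all(is_anchor_decodable(f(ell)) for ell in range(3, 16))
```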
Definition 5
Let \(\ell \ge 3\). We define the code \({{{\mathcal {C}}}}[\ell ] = {{{\mathcal {C}}}}[f_{\ell }]\) where the characteristic sequence \(s_{\ell }\) is chosen as \(f_{\ell }\). The code has blocklength \(n=2^\ell \), weight \(w=\ell \), and combinatorial dimension \(k=k(f_{\ell })=k_{\ell }\) where \(k_{\ell }\) is given in (5).
2.6 On the optimality of \({{{\mathcal {C}}}}[\ell ]\)
In this section, our interest is to identify an anchor-decodable sequence \(s_{\ell }\) that attains the maximum combinatorial dimension for its associated code \({{{\mathcal {C}}}}[s_{\ell }]\). In the following theorem, we establish that \(f_{\ell }\) maximizes \(k(s_{\ell })\).
Theorem 2.4
Let \(\ell \ge 3\). Among all anchor-decodable sequences \(\{s_{\ell } \}\) as defined in Definition 4, the sequence \(f_{\ell }\) as defined in Definition 1 maximizes \(k(s_{\ell })=\sum _{i}s_{\ell }(i)\).
Proof
Suppose that there exists an anchor-decodable sequence \(s_{\ell }\) with \(k(s_{\ell }) = k_{\ell } + 1 = \ell + \mu (\ell - \lfloor \log _2 \ell \rfloor ) + (\ell - \mu - 1) (\ell - \lceil \log _2 \ell \rceil )\) where \(\mu = 2^{\lceil \log _2 \ell \rceil } - \ell \).
Consider the case that \(\mu = 0\). Clearly, it is not possible to have \(s_{\ell }(\ell -1) < \ell - \log _2\ell \) because then it is impossible to have \(k(s_{\ell }) = \ell + (\ell -1)(\ell -\log _2\ell )\) as \(s_{\ell }\) is non-decreasing. Suppose that \(s_{\ell }(\ell -1)=\ell - \log _2\ell \). Then \(s_{\ell }(i)\) must be \(\ell - \log _2\ell \) for every \(1 \le i \le \ell -1\) again for the same reason. Then we have
$$\begin{aligned} \varvec{\gamma } \ = \ \Big ( \frac{2^{\ell }}{\ell } - 1, \ \frac{2^{\ell }}{\ell } - 1, \ \ldots , \ \frac{2^{\ell }}{\ell } - 1 \Big ). \end{aligned}$$
Therefore, the second condition in Definition 4 is violated because the vector \({\varvec{\gamma }}\) is indistinguishable from its cyclic shifts. Next, suppose that \(s_{\ell }(\ell -1) > \ell - \log _2\ell \). Then we have
In the above, (26) follows from the convexity of the function \(2^x\). The assertion in (27) violates the first condition of Definition 4 leading to a contradiction when \(\mu =0\).
Consider the case that \(\mu \ge 1\). Since \(s_{\ell }(\ell - 1)\) is maximum among \(s_{\ell }(i), i \in [\ell -1]\), we have
$$\begin{aligned} s_{\ell }(\ell -1) \ \ge \ \Big \lceil \frac{k_{\ell } + 1 - \ell }{\ell - 1} \Big \rceil \ = \ \ell - \lfloor \log _2 \ell \rfloor . \end{aligned}$$(28)
Next, we claim that
$$\begin{aligned} \sum _{i=1}^{\ell -1} 2^{s_{\ell }(i)} \ \ge \ \sum _{i=1}^{\ell -1} 2^{f_{\ell }'(i)} \end{aligned}$$(29)
for every non-decreasing sequence \(s_{\ell }\) satisfying \(s_{\ell }(\ell )=\ell \) and \(\sum _{i=1}^{\ell } s_{\ell }(i) = k_{\ell }+1\). If
$$\begin{aligned} s_{\ell } \ = \ f_{\ell }' \ \triangleq \ \big ( \underbrace{\ell - \lceil \log _2 \ell \rceil , \ldots , \ell - \lceil \log _2 \ell \rceil }_{\ell - \mu - 1 \text { entries}}, \ \underbrace{\ell - \lfloor \log _2 \ell \rfloor , \ldots , \ell - \lfloor \log _2 \ell \rfloor }_{\mu \text { entries}}, \ \ell \big ), \end{aligned}$$
then it is obvious that (29) holds with equality. The sequence \(f_{\ell }'\) has the additional feature that it is balanced. A non-decreasing integer sequence \(s_{\ell }\) with \(s_{\ell }(\ell )=\ell \) is said to be balanced if \(s_{\ell }(\ell -1) - s_{\ell }(1)\) takes the least possible value. Under the constraint that \(\sum _{i=1}^{\ell } s_{\ell }(i) = k_{\ell }+1\), it is straightforward to check that \(f_{\ell }'\) is the unique balanced sequence that \(s_{\ell }\) can become when \(\mu \ge 1\). Furthermore, \(f_{\ell }'(\ell -1)- f_{\ell }'(1)=1\). Let \(s_{\ell } \ne f_{\ell }'\) be an arbitrary non-decreasing sequence such that \(s_{\ell }(\ell )=\ell \) and \(k(s_{\ell }) = k_{\ell }+1\). Clearly, \(s_{\ell }(\ell -1)-s_{\ell }(1) \ge 2\) and \(s_{\ell }\) is not balanced. It is possible to transform \(s_{\ell }\) to the balanced sequence \(f_{\ell }'\) by invoking the following algorithm.
1.
Determine \(s_{\max } = \max _{i\in [\ell -1]} s_{\ell }(i)\), \(s_{\min } = \min _{i\in [\ell -1]} s_{\ell }(i)\). Identify the smallest \(i_{\max }\) and the largest \(i_{\min }\) such that \(s_{\ell }(i_{\max }) = s_{\max }\) and \(s_{\ell }(i_{\min }) = s_{\min }\).
2.
Update \(s_{\ell }(i_{\max }) \leftarrow s_{\max }-1\) and \(s_{\ell }(i_{\min }) \leftarrow s_{\min }+1\).
3.
Keep iterating Steps 1–2 until \(s_{\ell }(\ell -1)-s_{\ell }(1) = 1\).
Since \(s_{\ell }(\ell -1)-s_{\ell }(1)\) must decrease in a finite number of iterations and \(s_{\ell }\) always remains non-decreasing, it is clear that the algorithm is correct. Observe that \(k(s_{\ell })\) remains fixed at \(k_{\ell }+1\) after every iteration. On the other hand, we have
$$\begin{aligned} 2^{s_{\max } - 1} + 2^{s_{\min } + 1} \ < \ 2^{s_{\max }} + 2^{s_{\min }} \end{aligned}$$
in every iteration because \(s_{\max }-s_{\min } \ge 2\). Therefore the value of \(\sum _{i=1}^{\ell -1} 2^{s_{\ell }(i)}\) decreases after every iteration until \(s_{\ell }\) becomes equal to \(f_{\ell }'\) in the last iteration. Since we have equality in (29) when \(s_{\ell } = f_{\ell }'\), it follows that (29) is true for the general \(s_{\ell }\).
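The transformation in Steps 1–3 above is easily expressed in code; the sketch below is our own illustration, and one can verify numerically that \(\sum _{i=1}^{\ell -1} 2^{s_{\ell }(i)}\) strictly decreases at every iteration while \(\sum _i s_{\ell }(i)\) stays fixed.

```python
def rebalance(s):
    """Steps 1-3 above: repeatedly move one unit from a largest entry among
    s(1), ..., s(ell-1) to a smallest one until s(ell-1) - s(1) <= 1. The sum
    of the entries is invariant, while sum_i 2^{s(i)} strictly decreases
    whenever s_max - s_min >= 2."""
    s = list(s)
    while s[-2] - s[0] > 1:
        s_max, s_min = max(s[:-1]), min(s[:-1])
        i_max = s.index(s_max)                          # smallest index attaining s_max
        i_min = (len(s) - 2) - s[-2::-1].index(s_min)   # largest index attaining s_min
        s[i_max] -= 1
        s[i_min] += 1
    return s
```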
By (29), we have
$$\begin{aligned} 2^{\ell } - \sum _{i=1}^{\ell -1} 2^{s_{\ell }(i)} \ \le \ 2^{\ell } - \sum _{i=1}^{\ell -1} 2^{f_{\ell }'(i)} \ = \ 2^{\ell - \lfloor \log _2 \ell \rfloor - 1} \ < \ 2^{s_{\ell }(\ell -1)}, \end{aligned}$$(30)
where (30) follows from (24) and (28). This violates the first condition of Definition 4 leading to a contradiction when \(\mu \ge 1\). This completes the proof. \(\square \)
It turns out that the optimal sequence \(f_{\ell }\) has connections with optimal lengths of a Huffman source code. This establishes a link between our constant weight binary code and Huffman code. The details of this connection with a self-contained review of Huffman source code, and an alternate proof of Theorem 2.4 resulting from this connection are described in Appendix A.
3 A second code construction
As established by Theorem 2.4, the cyclic-gap code \({{{\mathcal {C}}}}[\ell ]\) has the maximum combinatorial dimension among the family of codes \(\{{{{\mathcal {C}}}}[s_\ell ] \mid s_{\ell } \text { is anchor-decodable}\}\) that are encoded by Algorithm 1 and are decodable by Algorithm 2. Both these algorithms are of very low complexity. Two questions that arise at this point are:
1.
Can Algorithm 1 generate fast decodable codes when \(s_{\ell }\) is not necessarily anchor-decodable?
2.
Is \(f_{\ell }\) a unique sequence and hence \({{{\mathcal {C}}}}[\ell ]\) a unique code that achieves the maximum combinatorial dimension \(k_{\ell }\)?
In this section, we answer the first question in the affirmative by presenting a code generated by Algorithm 1 with an auxiliary input sequence that is not anchor-decodable, yet admits an alternate decoder of the same order of complexity as Algorithm 2. As evident in the proof of Theorem 2.3, the crux of the decoding algorithm in Algorithm 2 lies in the fact that the maximum gap between two successive 1’s in a codeword of \({{{\mathcal {C}}}}[s_{\ell }]\) uniquely identifies the anchor when \(s_{\ell }\) is anchor-decodable. It turns out that we can come up with an alternate fast decoding algorithm that relies not just on the maximum gap, but on a subset of gaps containing the maximum one. Interestingly, the combinatorial dimension of such a new code matches \(k_{\ell }\) for certain values of \(\ell \), thereby establishing that the sequence \(f_{\ell }\) is not unique in that sense. This answers the second question in the negative.
3.1 An alternate integer sequence
Let \(\ell \) and r be two integer parameters satisfying \(\ell \ge 3\) and \(1 \le r \le \lfloor \frac{\ell +3}{4} \rfloor \). In this subsection, we define a sequence \(f_{\ell ,r}\) that is not anchor-decodable.
Definition 6
Let \(\ell \ge 3\) and \(1 \le r \le \lfloor \frac{\ell +3}{4} \rfloor \). Then
where
Observe that \(\delta (\ell ,r)\) equals 1 for every permitted value of \(\ell ,r\) except for a limited set of parameters \((r=1,\ell =3,4)\) and \((r=2,\ell =5,6)\). Next we define
$$\begin{aligned} k_{\ell ,r} \ \triangleq \ \sum _{i=1}^{\ell } f_{\ell ,r}(i). \end{aligned}$$
We compile certain useful numerical identities pertaining to the sequence in the following proposition.
Proposition 3.1
The following identities hold:
1.
\(f_{\ell , r}(i) > \ell - r -1, \ \ i = \ell -1,\ell -2,\ldots , \ell -2r+2\).
2.
\(f_{\ell , r}(i) = \ell - r -1, \ \ i = \ell -2r+1,\ell -2r\).
3.
\(f_{\ell , r}(i) < \ell - r -1, \ \ i = \ell -2r-1,\ell -2r-2,\ldots , 1\).
4.
When \(r_1 < r_2 \le \left\lfloor \frac{\ell + 3}{4} \right\rfloor \), \(f_{\ell ,r_1}(i) \le f_{\ell ,r_2}(i)\) for every \(i \in [\ell ]\).
5.
When \(r_1 < r_2 \le \left\lfloor \frac{\ell + 3}{4} \right\rfloor \), \(k_{\ell , r_1} < k_{\ell , r_2}\).
Proof
They all follow from definitions in a straightforward manner. It is necessary to have \(\delta (\ell ,r)=0\) when \(\ell \le 2r+2\) for the first three identities to hold. \(\square \)
By the fifth property of Proposition 3.1, \(k_{\ell ,r}\) is maximized at \(r_{\max } = \left\lfloor \frac{\ell + 3}{4} \right\rfloor \), and we define \({\hat{f}}_{\ell } = f_{\ell , r_{\max }}\). We also define
$$\begin{aligned} {\hat{k}}_{\ell } \ \triangleq \ k_{\ell , r_{\max }} \ = \ \sum _{i=1}^{\ell } {\hat{f}}_{\ell }(i). \end{aligned}$$(34)
A compilation of \({\hat{f}}_{\ell }\) and \(f_{\ell }\) along with corresponding values of \({\hat{k}}_{\ell }\) and \(k_{\ell }\) is provided in Table 1.
3.2 The encoding algorithm
As described in Sect. 2.3, Algorithm 1 provides a generic encoding method because it can be invoked with any \(\ell \)-length auxiliary input sequence \(s_{\ell }\) such that \(s_{\ell }(\ell )=\ell \). We invoke it with the choice \(s_{\ell } = f_{\ell ,r}\). In the following Lemma 3.2, we show that the Hamming weight of the encoder output is always \(\ell \) despite the fact that \(f_{\ell , r}\) is not anchor-decodable.
Lemma 3.2
With \(s_{\ell }= f_{\ell , r}\) as defined in Definition 6, \(w_H(\phi (\textbf{x})) = \ell \) for every \(\textbf{x} \in \{0,1\}^{k_{\ell ,r}}\) where \(\phi (\cdot )\) is determined by Algorithm 1.
Proof
We follow the same line of arguments as in the proof of Lemma 2.2. The maximum cumulative increment p in the variable \(\textsf{pos}\) over the last \((\ell -1)\) iterations of the loop in Line 4 is given by:
Since \(p < 2^{\ell }\) by (36), a distinct bit of \(\textbf{c}\) is set from 0 to 1 in each of these \((\ell -1)\) iterations and therefore \(w_H(\textbf{c})=\ell \). \(\square \)
3.3 A decoding algorithm and a constant weight code
Let \(\textbf{c}\) be an output of the encoder. In order to decode the input \(\textbf{x}\) uniquely, it is necessary and sufficient to identify the anchor. However, the sequence \(f_{\ell , r}\) is not anchor-decodable, and therefore the procedure FindAnchor in Algorithm 3 will not work. Nevertheless, we illustrate with an example of \(\ell =7, r=2\) that it is possible to determine the anchor bit based on the pattern of gaps in \(\textbf{c}\) (See Fig. 2 for a pictorial illustration.). Continuing the approach taken in the description of warm-up construction (see Fig. 1 in Sect. 2.1), the codeword of length \(n=128\) is represented as a circle with 128 points indexed from 0 to 127. The codeword \(\textbf{c}\) picked in the example has \(c[j]=1\) for \(j =10, 26, 32, 37, 64, 96, 127\) and zero everywhere else. To avoid clutter in Fig. 2, we indicate the starting point 0 and mark only those points at which \(c[j]=1\), instead of all the 128 points.
First, we identify the gaps between successive 1’s as \(\textbf{g}[m], m=0,1,\ldots , 6\) in order starting from the first gap \(\textbf{g}[0] = \textsf{gap}(127,10) = 10\). Other gaps are \(\textbf{g}[1] = 15, \textbf{g}[2] = 5, \textbf{g}[3]= 4, \textbf{g}[4]= 26, \textbf{g}[5]= 31, \textbf{g}[6]= 30\). The principle is to look for a stretch of \((2r-1)=3\) consecutive gaps in a clockwise direction such that the last gap in each of these stretches is \(\ge 2^{\ell -r-1} = 16\). The gap that is on or above the threshold \(2^{\ell -r-1}\) is referred to as a candidate gap. There are three such stretches marked in this example, marked as \(\textcircled {a}\), \(\textcircled {b}\) and \(\textcircled {c}\) in Fig. 2. Among these three, the stretch \(\textcircled {c}\) containing \((\textbf{g}[2], \textbf{g}[3], \textbf{g}[4])= (5,4,26)\) is unique in the sense that every gap in that stretch apart from the last gap \(\textbf{g}[4]\) does not qualify as a candidate gap. The bit c[64] at the end of \(\textcircled {c}\) is therefore picked as the anchor bit. Once the anchor is identified as c[64], the binary equivalent of 64 gives rise to \(\textbf{x}_{7}\), and that of following six gaps \((\textbf{g}[5], \textbf{g}[6],\textbf{g}[0],\textbf{g}[1], \textbf{g}[2], \textbf{g}[3])= (31, 30, 10, 15, 5, 4)\) yield \(\textbf{x}_{6},\textbf{x}_{5},\textbf{x}_{4},\textbf{x}_{3},\textbf{x}_{2}\) and \(\textbf{x}_{1}\). Except when \(\delta (\ell , r)=1\) and a specific type of message vector appears, the above procedure for finding the anchor bit works. The correctness of the above procedure and the way to handle special cases constitute the following theorem.
Fig. 2: Illustration of the principle of the decoding algorithm for \(\ell =7, r=2\) when the codeword \(\textbf{c}\) has 1’s at \(c[j], j =10, 26, 32, 37, 64, 96, 127\) (marked with dots) and 0’s everywhere else. There are three clockwise stretches of gaps marked as \(\textcircled {a}\), \(\textcircled {b}\) and \(\textcircled {c}\) that end in a candidate gap, i.e., with value on or above 16. The stretch \(\textcircled {c}\) given by (5, 4, 26) is unique among these three because every gap in \(\textcircled {c}\) apart from the last one does not qualify as a candidate. The bit c[64] at the end of the stretch \(\textcircled {c}\) is therefore picked as the anchor bit
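The stretch rule just illustrated can be distilled into a few lines of code; the snippet below is our own sketch of the principle behind FindAnchor2 (it omits the maximal-gap special case of Lines 2–3), reproducing the \(\ell =7, r=2\) example.

```python
ell, r = 7, 2
g = [10, 15, 5, 4, 26, 31, 30]            # cyclic gaps of the example codeword
threshold = 2 ** (ell - r - 1)            # = 16
cand = [v >= threshold for v in g]        # candidate gaps: g[4], g[5], g[6]
# the anchor gap is the candidate preceded (cyclically) by at least
# 2r - 2 consecutive non-candidate gaps
anchor = next(m for m in range(ell)
              if cand[m] and not any(cand[(m - i) % ell]
                                     for i in range(1, 2 * r - 1)))
assert anchor == 4                        # g[4] = 26 ends the stretch (5, 4, 26)
```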
Theorem 3.3
When the auxiliary input is chosen as \(f_{\ell , r}\) given in Definition 6, the map \(\phi \) defined by Algorithm 1 is one-to-one.
Proof
Algorithm 4 The decoding algorithm for \({{{\mathcal {C}}}}[f_{\ell ,r}]\)
Algorithm 5 The procedure FindAnchor2
Let \(\textbf{c}\) be an arbitrary output of the encoder when the input is \(\textbf{x} = \textbf{x}_{\ell } \Vert \textbf{x}_{\ell -1} \Vert \cdots \Vert \textbf{x}_{1} \in \{0,1\}^{k_{\ell ,r}}\) where \(|\textbf{x}_i|=f_{\ell ,r}(i)\). Along the lines of the proof of Theorem 2.3, we provide an explicit decoder that maps \(\textbf{c}\) uniquely to \(\textbf{x}\). The decoder as given in Algorithm 4 is exactly the same as Algorithm 2 except for the fact that anchor_index is determined by invoking a different procedure FindAnchor2 presented in Algorithm 5. The part of the proof that argues the correctness of Algorithm 4 once anchor_index is correctly determined remains the same as that of Theorem 2.3 and we do not repeat it here. The notations \(j[m], m=0,1,\ldots , \ell -1\), \(j[\mathsf{anchor\_index}]\) and \(\mathbf{{g}}\in {\mathbb {Z}}_n^{\ell }\) also remain the same.
It is sufficient to argue that the procedure FindAnchor2 is correct. Following the same line of arguments after (15), we have
$$\begin{aligned} \textbf{g}[\mathsf{anchor\_index}] \ \ge \ 2^{\ell } - 1 - \sum _{i=1}^{\ell -1} 2^{f_{\ell ,r}(i)} \end{aligned}$$(37)
$$\begin{aligned} \ = \ 2^{\ell - r - 1} - 1. \end{aligned}$$(38)
The inequality in (37) follows from the way \(\textbf{x}_{\ell -i}\) is encoded by Algorithm 1. It is straightforward to check that equality holds in (37) if and only if the message vector is of the type
$$\begin{aligned} \textbf{x} \ = \ \textbf{x}_{\ell } \Vert 1^{k_{\ell ,r} - \ell }. \end{aligned}$$(39)
When the message vector satisfies (39), every gap except \(\mathbf{{g}}[\mathsf{anchor\_index}]\) becomes maximal in length, and therefore we refer to this special case as the maximal-gap case. When \(\delta (\ell ,r)=1\), Lines 2–3 in Algorithm 5 check for the maximal-gap case by comparing every circular shift of the vector \(\textbf{g}\) with a fixed vector \(\mathsf{gaps\_allone}\). The vector \(\mathsf{gaps\_allone}\) corresponds to a message vector of the type
$$\begin{aligned} \textbf{x} \ = \ 0^{\ell } \Vert 1^{k_{\ell ,r} - \ell } \end{aligned}$$
for which \(\mathsf{anchor\_index} = 0\). If \(\textsf{cshift}(\textbf{g},n_0)\) becomes equal to \(\mathsf{gaps\_allone}\) for some \(0 \le n_0 \le (\ell -1)\), then by the first three identities of Proposition 3.1, \(n_0\) is unique and is equal to \(\mathsf{anchor\_index}\).
If (39) is false, then clearly (37) is satisfied with strict inequality, and in that case \(\mathbf{{g}}[\mathsf{anchor\_index}] \ge 2^{\ell -r-1}\) for every \(\ell ,r\) by (38). The binary array \(\textsf{candidates}\) generated after the execution of the loop in Line 8 is such that \(\textsf{candidates}[m]=1, m \in [0 \ \ell -1]\) if and only if \(\mathbf{{g}}[m] \ge 2^{\ell -r-1}\), and therefore the binary array \(\textsf{candidates}\) keeps a record of all gaps that can be a candidate for \(\textbf{g}[\mathsf{anchor\_index}]\). As already made clear, \(\mathsf{anchor\_index}\) is indeed picked as a candidate. As a result, it becomes possible to execute Line 9 as there is always an \(m_0\) such that \(\textsf{candidates}[m_0] =1\). If there are no other candidates, \(\mathsf{anchor\_index}\) is indeed \(m_0\). This is exactly what the procedure returns, as the value of \(\mathsf{anchor\_index}\) is not changed after executing Line 10.
Let us investigate how the procedure works when there is more than one candidate for \(\mathsf{anchor\_index}\). If \(r=1\), we observe that
$$\begin{aligned} \textbf{g}[(\mathsf{anchor\_index}+i) \bmod \ell ] \ \le \ 2^{f_{\ell ,1}(\ell - i)} - 1 \ \le \ 2^{\ell -2} - 1 \ < \ 2^{\ell -r-1}, \quad i \in [\ell -1], \end{aligned}$$(40)
and therefore there shall be exactly one candidate for \(\mathsf{anchor\_index}\) and we fall back to the previous case. So in the discussion on having multiple candidates, we assume that \(r \ge 2\). By the second and third identities in Proposition 3.1, we have
$$\begin{aligned} 2^{f_{\ell , r}(i)} - 1 \ < \ 2^{\ell - r - 1}, \quad i \in \{\ell -2r+1, \ell -2r, \ldots , 1\}. \end{aligned}$$(41)
Since \(\ell \ge 4r-3\), we have \(\ell -2r+1 \ge 2r-2\). In addition, since \(\ell \ge 3\) and \(\ell \ge 4r-3\), we have \(\ell > 2r\). Therefore, the set \(\{\ell -2r+1,\ell -2r, \ldots , 1\}\) contains the subset \(\{1,2,\ldots , 2r-2\}\) which is non-empty as \(r \ge 2\). Thus (41) implies the non-vacuous statement
$$\begin{aligned} \textbf{g}[(\mathsf{anchor\_index} - i) \bmod \ell ] \ < \ 2^{\ell - r - 1}, \quad i = 1, 2, \ldots , 2r-2. \end{aligned}$$(42)
The loop at Line 12 begins its iterations starting with \(m=(m_0+1) \mod \ell \) where \(m_0\) corresponds to a candidate gap. As a consequence, in every iteration of the loop indexed by m, the variable \(\mathsf{non\_cand\_cnt\_bkwd}\) acts as a counter for the number of gaps to the left of \(\mathbf{{g}}[m]\) (counting cyclically) that do not qualify as candidates until a candidate is met. It follows from (42) that when \(m=\mathsf{anchor\_index}\), \(\mathsf{non\_cand\_cnt\_bkwd} \ge 2r-2\), and therefore Lines 17–18 get executed if the loop prolongs enough to witness \(m=\mathsf{anchor\_index}\). Suppose \(m' \ne \mathsf{anchor\_index}\) corresponds to a candidate gap, i.e., \(\textsf{candidates}[m'] =1\); then it means that \(\mathbf{{g}}[m'] \ge 2^{\ell - r - 1}\). But we know that \(m' = (\mathsf{anchor\_index} + i') \mod \ell \) for some \(i' \in [\ell -1]\) and \(\mathbf{{g}}[m'] < 2^{f_{\ell , r}(\ell -i')}\). Since \(m'\) is a candidate, \(i'\) must satisfy that
$$\begin{aligned} 2^{f_{\ell , r}(\ell - i')} \ > \ 2^{\ell - r - 1}, \end{aligned}$$
and it follows from the first identity in Proposition 3.1 that
$$\begin{aligned} i' \ \in \ \{1, 2, \ldots , 2r-2\}. \end{aligned}$$(43)
It follows from (43) that for any candidate \(m' \ne \mathsf{anchor\_index}\), the number of non-candidate gaps to the left of \(m'\) (cyclically) is strictly less than \(2r-2\). In other words, it must be that \(\mathsf{non\_cand\_cnt\_bkwd} < 2r-2\), and therefore Lines 17–18 will not get executed for \(m'\). Therefore the loop keeps iterating until \(m = \mathsf{anchor\_index}\) occurs. Thus we have shown that Lines 17–18 get executed if and only if \(m = \mathsf{anchor\_index}\). Therefore the procedure FindAnchor2 determines anchor_index uniquely and is indeed correct when there are multiple candidates. This completes the proof. \(\square \)
The decoding algorithm as presented in Algorithms 4 and 5 illustrates the principle of operation, but it can be implemented as a single-pass loop over the n bits using a circular buffer. Therefore it has the same order of complexity as that of Algorithm 2. By Lemma 3.2 and Theorem 3.3, \({{{\mathcal {C}}}}[f_{\ell ,r}]\) is a binary constant weight code even if \(f_{\ell ,r}\) is not anchor-decodable. The sequence \({\hat{f}}_{\ell }\) obtained by specializing \(f_{\ell ,r}\) with \(r=r_{\max }\) produces a code with the maximum combinatorial dimension among \(\{ {{{\mathcal {C}}}}[f_{\ell ,r}] \mid 1 \le r \le r_{\max } \}\), leading to the following definition.
Definition 7
Let \(\ell \ge 3\). We define the code \(\mathcal{{\hat{C}}}[\ell ] = {{{\mathcal {C}}}}[{\hat{f}}_{\ell }]\). The code \(\mathcal{{\hat{C}}}[\ell ]\) has blocklength \(n=2^\ell \), weight \(w=\ell \), and combinatorial dimension \(k={{\hat{k}}_{\ell }}\) as defined in (34).
4 Properties of the codes
4.1 On codebook size
The following straightforward lemma gives an information-theoretic upper bound on the size of any binary constant weight code.
Lemma 4.1
Let \({{{\mathcal {C}}}}\) be a constant weight binary code of blocklength n, weight w and combinatorial dimension k. Then
$$\begin{aligned} k \ \le \ \lfloor \log _2 A(n,d,w) \rfloor . \end{aligned}$$(44)
It is easy to see that both \({{{\mathcal {C}}}}[\ell ]\) and \(\mathcal{{\hat{C}}}[\ell ]\) have minimum distance \(d=2\) because \(1^\ell \Vert 0^{n-\ell }\) and \(0\Vert 1^{\ell }\Vert 0^{n-\ell -1}\), which are at Hamming distance 2 from each other, are codewords of both codes. Therefore, it is meaningful to compare their combinatorial dimensions against the bound in (44). If we substitute \(n=2^{\ell }, w = \ell \) in (44), we obtain
$$\begin{aligned} \binom{2^{\ell }}{\ell } \ \le \ \frac{2^{\ell ^2}}{\ell !} \ \le \ \frac{1}{\sqrt{2 \pi \ell }} \Big ( \frac{2^{\ell } e}{\ell } \Big )^{\ell } \end{aligned}$$(45)
$$\begin{aligned} k \ \le \ \ell ^2 - \ell \log _2 \ell + \ell \log _2 e - \frac{1}{2} \log _2 (2 \pi \ell ). \end{aligned}$$(46)
The inequality (45) follows from Stirling’s approximation. Along with (46), another upper bound can be obtained owing to the cyclic-like structure that our construction brings along.
Lemma 4.2
If \(\textbf{c} \in {{{\mathcal {C}}}}[\ell ]\) (or \(\mathcal{{\hat{C}}}[\ell ]\)), then \(\textsf{cshift}(\textbf{c},n_0) \in {{{\mathcal {C}}}}[\ell ]\) (or \(\mathcal{{\hat{C}}}[\ell ]\)) for every \(n_0\). When \(n_0 \in {\mathbb {Z}}_{2^\ell }\), \(\textsf{cshift}(\textbf{c},n_0)\) must be distinct for every distinct \(n_0\).
Proof
Let \(\textbf{x} = \phi ^{-1}(\textbf{c}) = \textbf{x}_{\ell } \Vert {\hat{\textbf{x}}}_{\ell }\) where \(|\textbf{x}_{\ell }|=\ell \) and \(\phi \) is invoked with auxiliary input \(f_{\ell }\) or \({\hat{f}}_{\ell }\) as the case may be. For every \(n_0\), \(\textsf{cshift}(\textbf{c},n_0)\) is a codeword corresponding to a message obtained by updating \(\textbf{x}_{\ell }\) (if required), but keeping \({\hat{\textbf{x}}}_{\ell }\) fixed. This proves the first claim. Consider all codewords obtained by varying \(\textbf{x}_{\ell }\) over all \(2^\ell \) possibilities, but keeping \({{\hat{\textbf{x}}}}_{\ell }\) fixed. They must all be distinct from one another because \(\phi \) is one-to-one. Since there can at most be \(2^{\ell }\) distinct cyclic shifts possible for \(\textbf{c}\), \(\textsf{cshift}(\textbf{c},n_0)\) must all be distinct for every \(n_0 \in {\mathbb {Z}}_{2^\ell }\). \(\square \)
Let \(C_n\) denote the cyclic group of order n. By Lemma 4.2, the action of \(C_{2^\ell }\) on \({{{\mathcal {C}}}}[\ell ]\) (or \(\mathcal{{\hat{C}}}[\ell ]\)) results in orbits of size \(2^\ell \). This implies that \({{{\mathcal {C}}}}[\ell ]/C_{2^\ell }\) contains only primitive binary necklaces of length \(2^\ell \) and weight \(\ell \). Recall that a binary necklace of length n is an equivalence class of vectors in \(\{0,1\}^n\) considering all the n rotations of a vector as equivalent. A binary necklace is said to be primitive if the size of the equivalence class is n. The count of primitive binary necklaces of length n and weight w is known to be [25]
where q(d, wd/n) is the coefficient of \(x^{wd/n}y^{(n-w)d/n}\) in the polynomial
Here \(\mu (\cdot )\) and \(\phi _E(\cdot )\) are the Möbius function and Euler’s totient function, respectively. By Lemma 4.2 and (47), both \(k_{\ell }\) and \({\hat{k}}_{\ell }\) are upper bounded by
It is not clear when the bound in (49) is strictly better than the one in (44) for an arbitrary value of \(\ell \). In any case, the sizes of both \(\mathcal{{\hat{C}}}[\ell ]\) and \({{{\mathcal {C}}}}[\ell ]\) must respect both the upper bounds (46) and (49).
Comparing the lower bound in Proposition 2.1 and the upper bound in (46), it is worthwhile to make the following inferences on the performance of \({{{\mathcal {C}}}}[\ell ]\) and \(\mathcal{{\hat{C}}}[\ell ]\). When \(\ell =3\), \(\mathcal{{\hat{C}}}[3] = {{{\mathcal {C}}}}[3]\) and the code is optimal as \(k_3=5\) matches the information-theoretic upper bound. The code \({{{\mathcal {C}}}}[4]\) (same as \(\mathcal{{\hat{C}}}[4]\)) has \(k_4=9\), which is one bit away from the bound. While both \({{{\mathcal {C}}}}[\ell ]\) and \(\mathcal{{\hat{C}}}[\ell ]\) have the same combinatorial dimension for \(3 \le \ell \le 7\), \({{{\mathcal {C}}}}[\ell ]\) clearly outperforms \(\mathcal{{\hat{C}}}[\ell ]\) for \(\ell \ge 8\). The gap \(\Delta (\ell )\) between the achievable combinatorial dimension \(k_{\ell }\) of \({{{\mathcal {C}}}}[\ell ]\) and the information-theoretic limit, i.e.,
$$\begin{aligned} \Delta (\ell ) \ = \ \lfloor \log _2 A(2^{\ell },2,\ell ) \rfloor \ - \ k_{\ell }, \end{aligned}$$
is bounded by \(\Delta (\ell ) \le \bigl (1+\tfrac{1}{2\ln 2}\bigr ) \ell - \tfrac{3}{2}\log _2 \ell - \tfrac{\ln (2\pi /e)}{2\ln 2}\) by (46) and Proposition 2.1. We observe that \(\Delta (\ell )\) grows strictly slower than the quadratic growth of both \(k_{\ell }\) and the upper bound with respect to \(\ell \) (See Fig. 3).
4.2 Encoding and decoding complexities
The encoding algorithm (Algorithm 1) clearly has linear time-complexity in the input size. Both the decoding algorithms (Algorithms 2 and 4) involve three important steps: (a) parsing the input of length \(n=2^\ell \) to identify the gap vector of length \(\ell \), (b) parsing the gap vector to identify the starting point, and finally (c) converting \(\ell \) gap values to their binary representations. These steps have time complexities O(n), \(O(\ell )=O(\log n)\) and \(O(\ell ^2)=O(\log ^2 n)\), respectively. Except for the first round of parsing the input to obtain the gaps, which is linear in the input size n, the remaining part has a poly-logarithmic time-complexity in the input size. Whereas Algorithm 2 computes only the maximum value of the gap vector, Algorithm 4 needs to compute all gaps above a particular threshold. Therefore, although both have the same order of complexity, Algorithm 4 has a larger time-complexity if we account for constant factors.
The encoding/decoding algorithms of most constant weight codes involve the computation of binomial coefficients. One way to circumvent this problem is to store these coefficients in lookup tables, but this results in a large space complexity. For example, a classic encoding (unranking) algorithm based on combinadics [13] requires the storage of around \(w\binom{n}{w}\) binomial coefficients. Our algorithms fully eliminate the need to compute binomial coefficients.
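For contrast, a standard combinadic-style unranking loop looks as follows; this is our own rendition of the classic scheme, not the exact algorithm of [13]. Note that every position inspected costs one binomial-coefficient evaluation (or one lookup in a precomputed table), which is precisely what the algorithms above avoid.

```python
from math import comb

def unrank(rank, n, w):
    """Map rank in [0, C(n, w)) to a length-n, weight-w binary vector by
    scanning positions left to right and spending one binomial-coefficient
    evaluation per position."""
    c = [0] * n
    for i in range(n):
        if w == 0:
            break
        head = comb(n - i - 1, w - 1)     # count of vectors with a 1 at position i
        if rank < head:
            c[i], w = 1, w - 1
        else:
            rank -= head
    return c
```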
5 Derived codes
In this section, we derive new codes from the codes described in Sects. 2 and 3 by suitable transformations that help enlarge the parameter space. In a certain range of parameters, they also achieve the information-theoretic upper bound on their size. Though we describe these new codes taking \({{{\mathcal {C}}}}[\ell ]\) as the base code, similar transformations are applicable to \({{{\mathcal {C}}}}[s_{\ell }]\) and \(\mathcal{{\hat{C}}}[\ell ]\) as well.
5.1 Enlarging the range of weight
We present two different ways to enlarge the range of weight parameter.
5.1.1 \({{{\mathcal {C}}}}_{t}[\ell ]\): by modifying the sequence
Let \(\ell \ge 3\) and t be positive integers such that \(\log _2 t < \ell -1\). Then we define a sequence \(f_{\ell }^{(t)}\) of length t as follows. If t is not a power of 2,
where \(\mu _t = 2^{\lceil \log _2 t \rceil } - t\). If t is a power of 2, then
The construction of \({{{\mathcal {C}}}}[\ell ]\) and the related theorems developed in Sect. 2 hold true with respect to \(f_{\ell }^{(t)}\) as well, provided we suitably modify the encoding and decoding algorithms to take into account the change in the length of the sequence. To be precise, the necessary changes are the following:
1.
Algorithm 1 will be invoked with \(f_{\ell }^{(t)}\) as the auxiliary input. The input \(\textbf{x}\) will be split as a concatenation of t binary strings \(\textbf{x}=\textbf{x}_t\Vert \textbf{x}_{t-1}\Vert \cdots \Vert \textbf{x}_1\) where \(|\textbf{x}_i|=f_{\ell }^{(t)}(i)\). Furthermore, the loop in Line 4 will have t iterations.
2.
Along similar lines, the decoding algorithm Algorithm 2 will be invoked with \(f_{\ell }^{(t)}\) as the second input. The algorithm will identify t locations of 1’s in the input \(\textbf{c}\) at Line 1 and correspondingly the gap vector \(\mathbf{{g}}\) will have t entries. The loop at Line 5 will have \(t-1\) iterations. The FindAnchor procedure will be modified to take a t-length vector as input. The computation of gaps_allone will be modified to include \(f_{\ell }^{(t)}(i), i=t-1,t-2,\ldots , 1\) at its tail end.
It is straightforward to see that both the conditions of anchor-decodability can be translated for \(f_{\ell }^{(t)}\) since
and the sequence \((2^{\ell } - 1 - \sum _{i=1}^{t-1} 2^{f_{\ell }^{(t)}(i)}, 2^{f_{\ell }^{(t)}(t-1)}-1, \ldots , 2^{f_{\ell }^{(t)}(1)}-1)\) is distinguishable from its cyclic shifts. For this reason, it turns out that the output of the encoder will always lead to a vector of weight t and furthermore, the decoding algorithm with the above modifications will always be correct. Thus we have a new code \({{{\mathcal {C}}}}_t[\ell ]\) with parameters
It can be checked that \({{{\mathcal {C}}}}_2[\ell ]\) has \(k \ = \ 2\ell -2 \ = \ \lfloor \log _2 A(2^{\ell },2,2) \rfloor \) and therefore the code \({{{\mathcal {C}}}}_2[\ell ]\) is optimal for every \(\ell \ge 3\).
5.1.2 \({{{\mathcal {D}}}}_t[\ell ]\): by shortening the message
Let \(\ell \ge 3\) and \(t < \ell \) be positive integers. Clearly, the encoding and decoding of \({{{\mathcal {C}}}}[\ell ]\) work correctly even if the message vector \(\textbf{x}=\textbf{x}_\ell \Vert \textbf{x}_{\ell -1}\Vert \cdots \Vert \textbf{x}_1\) is shortened by setting \(\textbf{x}_1=\textbf{0}, \textbf{x}_2=\textbf{0}, \ldots , \textbf{x}_{\ell -t}=\textbf{0}\). This simple observation leads to a new constant weight code with weight \(w=t\), obtained by applying suitable modifications to Algorithms 1 and 2. The necessary modifications are the following.
1. Set the last \((\ell -t)\) blocks \(\textbf{x}_{\ell -t}, \textbf{x}_{\ell -t-1}, \ldots , \textbf{x}_{1}\) to all-zero vectors. Reset to 0 those bits that are set to 1 in the last \(\ell -t\) iterations of the loop (corresponding to \(\textbf{x}_{\ell -t}, \textbf{x}_{\ell -t-1}, \ldots , \textbf{x}_{1}\)) in the encoding algorithm.
2. In the decoding algorithm, identify the t locations of 1's in the input \(\textbf{c}\) at Line 1; correspondingly, the gap vector \(\mathbf{{g}}\) will have t entries. The loop at Line 5 will have \(t-1\) iterations. The FindAnchor procedure will be modified to take a t-length vector as input. Compute the gaps_allone vector as \(\mathsf{gaps\_allone} = (2^{\ell } - \sum _{i=1}^{t-1} 2^{s_{\ell }(i)}, 2^{s_{\ell }(t-1)}, \ldots , 2^{s_{\ell }(1)})\), a vector of length t.
It is clear that the output of the modified encoder will always be a vector of weight t. The new decoding algorithm is correct for the following reason. Suppose the encoding is carried out by Algorithm 1 without any of the modifications mentioned above. Since \(\textbf{x}_i=\textbf{0}\) for \(1 \le i \le \ell -t\), there will be a run of \(\ell -t\) consecutive 1's in the output of the encoder appearing to the left (cyclically) of the gap \(\mathbf{{g}}[\mathsf{anchor\_index}]\). In the modified encoder, these 1's are flipped to zero, so \(\mathbf{{g}}[\mathsf{anchor\_index}]\) increases by \(\ell -t\) while all the remaining gaps retain the same values as in the output of Algorithm 1. Thus the anchor-decodability criterion is not violated, and the modified decoding algorithm is therefore correct.
The resultant code obtained by the modified encoder is denoted by \({{{\mathcal {D}}}}_t[\ell ]\) and has parameters given by:
It is easy to check that \({{{\mathcal {D}}}}_2[\ell ]\) is exactly the same as the optimal code \({{{\mathcal {C}}}}_2[\ell ]\).
5.2 Enlarging the range of blocklength
Let \(\ell \ge 3\) and \(t < f_{\ell }(1)\) be positive integers. Unlike in the construction of \({{{\mathcal {D}}}}_t[\ell ]\), it is possible to shorten the message vector \(\textbf{x}=\textbf{x}_\ell \Vert \textbf{x}_{\ell -1}\Vert \cdots \Vert \textbf{x}_1\) by setting the first (most significant) t bits of \(\textbf{x}_{\ell }\) and the last (least significant) t bits of \(\textbf{x}_{1}\) to zero before passing it to the encoding algorithm (Algorithm 1). This leads to a constant weight code \({{{\mathcal {B}}}}_t[\ell ]\) with a smaller size and a reduced blocklength, but with the same weight \(\ell \), provided the encoding algorithm is suitably modified. The code \({{{\mathcal {B}}}}_t[\ell ]\) has parameters
The modified encoding algorithm is presented in Algorithm 6. In spite of the reduction in blocklength, the weight remains \(\ell \), as shown in Lemma 5.1.

Lemma 5.1
For every output \(\textbf{c}\) of Algorithm 6, \(w_H(\textbf{c}) = \ell \).
Proof
Consider \(\textbf{c}\) in Algorithm 6 before Line 8 is executed. Recall the proof of Lemma 2.2, in particular (12), which estimates the maximum cumulative increment p in the variable \(\textsf{pos}\). Applied to the context of Algorithm 6, we observe that p by the end of the \((\ell -1)\) iterations of the loop at Line 5 satisfies
So the truncation of \(\textbf{c}\) by \(2^t-1\) bits effected by the execution of Line 8 does not remove any bit with value 1. Therefore the output \(\textbf{c}\) has Hamming weight \(\ell \). \(\square \)


The decoding algorithm (presented in Algorithm 7) is exactly in line with Algorithm 2, with the necessary modifications to handle the reduced length. The correctness of the decoder is established in Lemma 5.2.
Lemma 5.2
For every output \(\textbf{c}\) of Algorithm 6, \(\textbf{c}\) is correctly decoded by Algorithm 7.
Proof
The encoding algorithm of \({{{\mathcal {B}}}}_t[\ell ]\) differs from Algorithm 1 in two aspects. First, the location \(j[\mathsf{anchor\_index}]\) [recall the definition in (14)] is the product of \(2^t\) and \(\textsf{dec}(\textbf{x}_{\ell })\). Second, \((2^t-1)\) bits are deleted by Line 8 of the encoding algorithm, reducing the length of the codeword to \(2^\ell - 2^t+1\). By Lemma 5.1, all the deleted bits are zeros. Therefore, the deletion affects only \(\textbf{g}[\mathsf{anchor\_index}]\), which does not carry any information regarding \(\textbf{x}_{\ell -1}, \textbf{x}_{\ell -2}, \ldots , \textbf{x}_{1}\). As a consequence, if \(j[\mathsf{anchor\_index}]\) is identified correctly, then \(\textbf{x}_{\ell -1}, \textbf{x}_{\ell -2}, \ldots , \textbf{x}_{1}\) will be decoded correctly.
Because of the relative decrease in \(\textbf{g}[\mathsf{anchor\_index}]\) due to the deletion of bits, the value of \(j[\mathsf{anchor\_index}]\) can be less than the corresponding value in Algorithm 2. Let \(\Delta j\) be this difference. If the set of indices of the deleted bits, as a subset of \([0,2^{\ell }-1]\), does not intersect \([0,2^t\textsf{dec}(\textbf{x}_{\ell })-1]\), then \(\Delta j = 0\); if it does intersect, then \(\Delta j \le 2^t-1\). In either case, \(\Delta j \le 2^t-1\). By (53), in spite of a reduction of up to \(2^t-1\) in its value, \(\textbf{g}[\mathsf{anchor\_index}]\), and hence \(\mathsf{anchor\_index}\), will be correctly identified by the \(\textsc {FindAnchorB}\) procedure. However, as noted above, the value of \(j[\mathsf{anchor\_index}]\) will be less by \(\Delta j\). At the same time, by Line 3 of the encoder (Algorithm 6), \(\textsf{dec}(\textbf{x}_{\ell })\) is multiplied by \(2^t\) while identifying the anchor bit. More precisely, we must have \(2^t\textsf{dec}(\textbf{x}_{\ell })-\Delta j= j[\mathsf{anchor\_index}]\). Therefore \(\lceil j[\mathsf{anchor\_index}]/2^t \rceil \) recovers the value of \(\textsf{dec}(\textbf{x}_{\ell })\) correctly despite the shift in \(j[\mathsf{anchor\_index}]\). Thus \(\textbf{x}_{\ell }\) is decoded correctly, establishing that Algorithm 7 is correct. \(\square \)
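The recovery step rests on the integer identity \(\lceil (2^t d - \Delta )/2^t \rceil = d\) for \(0 \le \Delta \le 2^t-1\) (with \(\Delta = 0\) forced when \(d = 0\)). The following small Python check, using integer ceiling division to avoid floating point, confirms it exhaustively for a representative t.

```python
# Exhaustive check of ceil((2**t * d - dj) / 2**t) == d for 0 <= dj < 2**t.
def ceil_div(a, b):
    return -((-a) // b)                  # integer ceiling division

t = 4
assert all(ceil_div((1 << t) * d - dj, 1 << t) == d
           for d in range(1, 200) for dj in range(1 << t))
assert ceil_div(0, 1 << t) == 0          # the d = 0 case, where dj must be 0
```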
A compilation of the parameter sets of all three derived codes for small values of \(\ell , t\) is provided in Table 2. The table also compares k with the respective information-theoretic bound.
6 Conclusion and future work
Binary constant weight codes find extensive applications in many engineering problems such as on-chip and inter-chip interconnections [30, 31], source compression [7], data storage [19], the design of spherical codes for communication over Gaussian channels [10], optical communication [5], spread-spectrum communication [8], and cryptography [11]. Therefore the design of such codes with low-complexity encoding and decoding algorithms is quite relevant in practice. In this paper, we present several families of binary constant weight codes supporting a wide range of parameters while permitting linear encoding complexity and poly-logarithmic decoding complexity (discounting the linear time spent on parsing the input). The present work opens up new directions for exploration, such as: (a) enlarging the codebook further by a controlled compromise on complexity, (b) achieving a larger minimum distance by reducing the codebook size, and (c) the study of correlation properties of the codes.
Data availability
This article does not have any associated data.
References
Agrell E., Vardy A., Zeger K.: Upper bounds for constant-weight codes. IEEE Trans. Inf. Theory 46(7), 2373–2395 (2000). https://doi.org/10.1109/18.887851.
Bitan S., Etzion T.: Constructions for optimal constant weight cyclically permutable codes and difference families. IEEE Trans. Inf. Theory 41(1), 77–87 (1995). https://doi.org/10.1109/18.370117.
Brouwer A.: Bounds for binary constant weight codes. https://www.win.tue.nl/ (Online; Accessed 23 Oct 2023) (2023).
Brouwer A.E., Shearer J.B., Sloane N.J.A., Smith W.D.: A new table of constant weight codes. IEEE Trans. Inf. Theory 36(6), 1334–1380 (1990). https://doi.org/10.1109/18.59932.
Chung H., Kumar P.V.: Optical orthogonal codes—new bounds and an optimal construction. IEEE Trans. Inf. Theory 36(4), 866–873 (1990). https://doi.org/10.1109/18.53748.
Cover T.: Enumerative source encoding. IEEE Trans. Inf. Theory 19(1), 73–77 (1973).
Dai V., Zakhor A.: Binary combinatorial coding. In: Data Compression Conference, 2003. Proceedings. DCC 2003, p. 420 (2003). https://doi.org/10.1109/DCC.2003.1194039.
Ding C., Fuji-Hara R., Fujiwara Y., Jimbo M., Mishima M.: Sets of frequency hopping sequences: bounds and optimal constructions. IEEE Trans. Inf. Theory 55(7), 3297–3304 (2009). https://doi.org/10.1109/TIT.2009.2021366.
Er M.C.: Lexicographic ordering, ranking and unranking of combinations. Int. J. Comput. Math. 17(1), 277–283 (1985).
Ericson T., Zinoviev V.: Chapter 6–non-symmetric alphabets. In: Ericson T., Zinoviev V. (eds.) Codes on Euclidean Spheres. North-Holland Mathematical Library, vol. 63, pp. 179–194. Elsevier, New York (2001). https://doi.org/10.1016/S0924-6509(01)80051-9.
Finiasz M., Gaborit P., Sendrier N.: Improved fast syndrome based cryptographic hash functions. In: ECRYPT Hash Workshop 2007, Proceedings, p. 155 (2011).
Gallager R.G.: Principles of Digital Communication. Cambridge University Press, New York (2008).
Genitrini A., Pépin M.: Lexicographic unranking of combinations revisited. Algorithms 14(3), 97 (2021).
Graham R., Sloane N.: Lower bounds for constant weight codes. IEEE Trans. Inf. Theory 26(1), 37–43 (1980). https://doi.org/10.1109/TIT.1980.1056141.
Johnson S.M.: A new upper bound for error-correcting codes. IRE Trans. Inf. Theory 8(3), 203–207 (1962).
Knott G.D.: A numbering system for combinations. Commun. ACM 17(1), 45–46 (1974).
Kokosinski Z.: Algorithms for unranking combinations and their applications. In: Hamza M.H. (ed.) Proceedings of the Seventh IASTED/ISMM International Conference on Parallel and Distributed Computing and Systems, Washington, D.C., USA, October 19-21, 1995, pp. 216–224 (1995).
Kruchinin V.V., Shablya Y.V., Kruchinin D.V., Rulevskiy V.: Unranking small combinations of a large set in co-lexicographic order. Algorithms 15(2), 36 (2022).
Kurmaev O.F.: Constant-weight and constant-charge binary run-length limited codes. IEEE Trans. Inf. Theory 57(7), 4497–4515 (2011). https://doi.org/10.1109/TIT.2011.2145490.
Lehmer D.H.: Teaching combinatorial tricks to a computer. In: Proceedings of Symposium in Applied Mathematics, vol. 10, pp. 179–193. American Mathematical Society, Providence, RI/New York (1960).
MacWilliams F.J., Sloane N.J.A.: The Theory of Error-Correcting Codes. Mathematical Library. North-Holland Publishing Company, New York (1977).
Moreno O., Zhang Z., Kumar P.V., Zinoviev V.A.: New constructions of optimal cyclically permutable constant weight codes. IEEE Trans. Inf. Theory 41(2), 448–455 (1995). https://doi.org/10.1109/18.370146.
Nordio A., Viterbo E.: Permutation modulation for fading channels. In: 10th International Conference on Telecommunications, 2003. ICT 2003, vol. 2, pp. 1177–1183 (2003). https://doi.org/10.1109/ICTEL.2003.1191603.
Pascal E.: Sopra una formula numerica. G. Di Mat. 25, 45–49 (1887).
Riordan J.: An Introduction to Combinatorial Analysis. Princeton Legacy Library. Princeton University Press, Princeton (1978).
Ruskey F., Williams A.: The coolest way to generate combinations. Discret. Math. 309(17), 5305–5320 (2009).
Schalkwijk J.: An algorithm for source coding. IEEE Trans. Inf. Theory 18(3), 395–399 (1972).
Sendrier N.: Encoding information into constant weight words. In: Proceedings. International Symposium on Information Theory, 2005. ISIT 2005, pp. 435–438 (2005).
Slepian D.: Permutation modulation. Proc. IEEE 53(3), 228–236 (1965). https://doi.org/10.1109/PROC.1965.3680.
Tabor J.: Noise reduction using low weight and constant weight coding techniques. Masters Thesis, MIT, Cambridge, MA, USA. https://dspace.mit.edu/handle/1721.1/14030 (1990).
Tallini L.: Design of some new efficient balanced codes. Masters Thesis, Oregon State University, Corvallis, OR, USA. https://ir.library.oregonstate.edu/downloads/qb98mh689 (1994).
Acknowledgements
We would like to thank both the anonymous reviewers for their useful comments. Two insightful suggestions from one of the reviewers were instrumental in shaping up the initial construction and the proof of its optimality into their current form. This work is supported by the Australian Research Council through the Discovery Project under Grant DP200100731.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Additional information
Communicated by G. Ge.
Appendix A. Connection of \(f_{\ell }\) with Huffman code
In this section, we draw connections between the optimal anchor-decodable sequence and the optimal codeword lengths of a Huffman code. We also present an alternate proof of Theorem 2.4 that follows from this connection. Our approach is to transform the maximization problem into an equivalent problem related to minimizing the average length of a source code for a discrete source with alphabet size \(\ell \). It is well-known that the Huffman algorithm yields an optimal source code having the minimum average length. After establishing the necessary equivalences, the optimal codeword lengths of the Huffman code can be used to construct a sequence that maximizes \(k(s_{\ell }) = \sum _{i}s_{\ell }(i)\). It turns out that the resultant sequence is indeed \(f_{\ell }\).
In the first step, we give a brief review of concepts pertaining to a discrete source and the optimal Huffman code for such a source. A discrete source has a finite alphabet and a probability mass function defined on that alphabet. Let us consider a discrete source with an alphabet \({{{\mathcal {A}}}}=\{a_1, a_2,\ldots , a_{\ell }\}\) and a uniform probability mass function, i.e., \(\Pr (a_i) = 1/\ell \) for every i. By slight abuse of notation, we use \({{{\mathcal {A}}}}\) to denote the source as well. A binary source code is a mapping \(s:{{{\mathcal {A}}}} \rightarrow \{0,1\}^*\) and we say \(a_i\) has codeword length \(L(a_i) \triangleq |s(a_i)|\). The average length of the source code is defined as
$$\bar{L}({{{\mathcal {A}}}}) = \sum _{i=1}^{\ell } \Pr (a_i) L(a_i) = \frac{1}{\ell }\sum _{i=1}^{\ell } L(a_i).$$
A source code that minimizes \(\bar{L}({{{\mathcal {A}}}})\) over all possible source codes is called an optimal code, and it is well-known that the Huffman encoding algorithm produces an optimal source code [12]. The Huffman algorithm constructs a rooted binary tree with \(\ell \) leaf nodes in which each symbol \(a_i\) uniquely corresponds to a leaf node. We call it a Huffman code tree. Let \(T_H = (V_H, E_H)\) be the Huffman code tree associated to the source \({{{\mathcal {A}}}}\) with root node \(v_{r}\) and \(\ell \) leaf nodes \(v(a_i), i=1,2,\ldots , \ell \). The binary codeword associated to \(a_i\) can be read off the tree as follows. Of the two possible children of a node v, the edge from v to the left child is marked 0 and that to the right child is marked 1. Let P(v) denote the unique path from \(v_{r}\) to an arbitrary node v. Then the unique path from \(v_{r}\) to the leaf node \(v(a_i) \in V_H\) identifies a binary string, and this forms the codeword \(s(a_i)\). The depth of a node v in a binary tree is the length of the unique path P(v) and is denoted by \(L_T(v)\). Therefore \(L_T(v(a_i))=L(a_i)\). A source code that can be represented as a rooted binary tree with leaves representing codewords as described above is called a prefix-free code. Hence the Huffman code is an optimal code that is prefix-free as well. The following two lemmas are relevant for our proof.
Lemma A.1
[12] Consider a discrete source with alphabet \({{{\mathcal {A}}}} = \{a_i, i=1,2,\ldots , \ell \}\). Let \(T_H\) denote the Huffman code tree of the source. Then
1. \(T_H\) is full.
2. If the source has a uniform distribution, then for every leaf node \(v(a_i)\), \(L_{T_H}(v(a_i))\) is either \(\lceil \log _2\ell \rceil \) or \(\lfloor \log _2\ell \rfloor \).
3. If \(\ell \) is a power of 2 and a prefix-free source code has average length \(\log _2\ell \), then \(L_{T_p}(v(a_i)) = \log _2 \ell \) for every i, where \(T_p\) is the code tree of the code.
Proof
All these are discussed in Gallager's textbook [12]. The first assertion is presented as Lemma 2.5.2 in [12]. The second follows from Exercise 2.14(a) and the third from Exercise 2.9(a) in [12]. \(\square \)
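The second assertion of Lemma A.1 is easy to verify computationally. The sketch below (our own helper, assuming nothing beyond the standard Huffman merge) computes Huffman codeword lengths for a uniform source and checks that they all equal \(\lfloor \log _2\ell \rfloor \) or \(\lceil \log _2\ell \rceil \).

```python
# Huffman codeword lengths for a uniform l-ary source; by Lemma A.1(2) every
# length is floor(log2 l) or ceil(log2 l).
import heapq
import math

def huffman_lengths(probs):
    """Return the codeword length of each symbol under a Huffman code."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    depth = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:                # one more edge above all merged leaves
            depth[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return depth

l = 11
lengths = huffman_lengths([1.0 / l] * l)
assert set(lengths) <= {math.floor(math.log2(l)), math.ceil(math.log2(l))}
```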
Lemma A.2
(Kraft Inequality [12]) Consider a prefix-free source code for a discrete source with alphabet \({{{\mathcal {A}}}} = \{a_i, i=1,2,\ldots , \ell \}\). Let \(L(a_1), L(a_2),\ldots , L(a_{\ell })\) denote the lengths of the codewords. Then
$$\sum _{i=1}^{\ell } 2^{-L(a_i)} \le 1. \quad \quad \mathrm{(A1)}$$
Conversely, if \(L(a_1), L(a_2),\ldots , L(a_{\ell })\) are positive integers satisfying (A1), then there exists a prefix-free source code with these as codeword lengths.
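Continuing the sketch above, (A1) can be verified numerically for the Huffman lengths just computed; the sum meets 1 with equality because \(T_H\) is full.

```python
# Kraft sum of the Huffman lengths computed in the previous sketch: it is
# exactly 1 here because the Huffman code tree is full.
assert abs(sum(2.0 ** -L for L in lengths) - 1.0) < 1e-12
```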
In the second step, we extend the Huffman code tree \(T_H\) to form a perfect binary tree \(T=(V,E)\) of depth \(\ell \). For any node \(v \in V\), let N(v) be the set of leaf nodes of T for which the unique path from \(v_r\) to the leaf node passes through v; we call N(v) the canopy of v. By the first property of Lemma A.1, the canopies \(N(v(a_i))\) in T are pairwise disjoint and, furthermore, their union is the set of all leaf nodes of T. A collection of nodes \(U \subset V\) is said to be prefix-free if for any \(u_1, u_2 \in U\), the path from \(v_r\) to one of these nodes does not pass through the other. Next, let us consider the problem of maximizing
$$k'(U) = \sum _{u \in U} (\ell - L_T(u)) \quad \quad \mathrm{(A2)}$$
over all prefix-free subsets U of V. It is straightforward to see that the minimization of \(\bar{L}({{{\mathcal {A}}}})\) over all prefix-free codes is equivalent to the maximization of \(k'(U)\) over all prefix-free \(U\subset V\) of size \(\ell \). Thus the Huffman algorithm turns out to be an algorithm that identifies a prefix-free set of nodes
$$U^* = \arg \max \{ k'(U) \,:\, U \subset V \text { prefix-free},\ |U|=\ell \}$$
in a perfect binary tree of depth \(\ell \). By the second assertion of Lemma A.1, every node \(u \in U^*\) satisfies \(L_T(u) = \log _2\ell \) when \(\ell \) is a power of 2. Let us next consider the case when \(\ell \) is not a power of 2. Again by Lemma A.1, \(L_T(u) = \lfloor \log _2\ell \rfloor \) or \(L_T(u) = \lceil \log _2 \ell \rceil \). Let \(M = \{ u \in U^* \mid L_T(u)=\lfloor \log _2 \ell \rfloor \}\) and \(m = |M|\). For every node \(u \in U^* \setminus M\), \(L_T(u)= \lceil \log _2 \ell \rceil \). Since the canopies of the nodes in \(U^*\) form a partition of the set of leaves of T, we count all the leaves of T in two different ways to obtain:
$$m \, 2^{\ell - \lfloor \log _2 \ell \rfloor } + (\ell - m) \, 2^{\ell - \lceil \log _2 \ell \rceil } = 2^{\ell }.$$
We observe that \(m = \mu \), where \(\mu \) is defined as part of Definition 1. Thus the sequence \((\ell -L_T(u), u \in U^*)\) in non-decreasing order is given by:
$$(\underbrace{\ell - \lceil \log _2 \ell \rceil , \ldots , \ell - \lceil \log _2 \ell \rceil }_{\ell - \mu },\ \underbrace{\ell - \lfloor \log _2 \ell \rfloor , \ldots , \ell - \lfloor \log _2 \ell \rfloor }_{\mu }). \quad \quad \mathrm{(A3)}$$
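A quick numeric confirmation of the two-way leaf count and of \(m = 2^{\lceil \log _2\ell \rceil } - \ell \) (our own check, under the reading of the count given above):

```python
# Check m * 2**(l - floor(log2 l)) + (l - m) * 2**(l - ceil(log2 l)) == 2**l
# with m = 2**ceil(log2 l) - l, for every non-power-of-2 l in a small range.
for l in range(3, 64):
    if l & (l - 1):                      # l is not a power of 2
        fl = l.bit_length() - 1          # floor(log2 l), in exact integer math
        cl = fl + 1                      # ceil(log2 l), as l is not a power of 2
        m = 2 ** cl - l
        assert m * 2 ** (l - fl) + (l - m) * 2 ** (l - cl) == 2 ** l
```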
In the third step, we consider a variant of the maximisation problem in (A2) which aligns with our problem of identifying an anchor-decodable sequence with maximum \(k(s_{\ell })\). Let us define
In order to establish an equivalence between finding \(U_1^*\) and finding our desired anchor-decodable sequence, let us consider any prefix-free set \(U = \{u_1,u_2,\ldots , u_\ell \}\) such that \(L_T(u_1) \le L_T(u_2) \le \cdots \le L_T(u_{\ell })\). Then define
where the first \((\ell -1)\) entries are determined by U. Since U is prefix-free and \(|U|=\ell \), U defines a prefix-free code for a source with alphabet size \(\ell \). By Lemma A.2 and the fact that \(\ell -L_T(u_1)\) is maximum in the set \(\{\ell -L_T(u), u \in U\}\), we have
This means that \(2^{\ell } - \sum _{i=1}^{\ell -1}2^{s_{\ell }(i)} \ge 2^{s_{\ell }(\ell -1)}\) and hence the non-decreasing sequence \(s_{\ell }\) satisfies the first condition in Definition 4. In addition, we observe that
Therefore finding \(U_1^*\) is equivalent to finding an \(s_{\ell }\) that maximizes \(k(s_{\ell })\) while satisfying the first condition of Definition 4.
In the fourth step, we argue that \(U_1^*=U^*\). Suppose \(U_1^* = \{ u_1^*, u_2^*,\ldots , u_\ell ^*\}\) with \(L_T(u_1^*) \le L_T(u_2^*) \le \cdots \le L_T(u_{\ell }^*)\) and \(U^* = \{u_{h1}, u_{h2},\ldots , u_{h\ell }\}\) with \(L_T(u_{h1}) \le L_T(u_{h2}) \le \cdots \le L_T(u_{h\ell })\). At the outset, we clarify a subtle point regarding the definition of both \(U^*\) and \(U_1^*\). If there are multiple candidates for \(U_1^*\) or \(U^*\), then we pick a common element from the intersection of the candidate sets as the choice for both \(U_1^*\) and \(U^*\). Thus whenever these candidate sets have a non-trivial intersection, \(U_1^*=U^*\). Suppose now that \(U_1^*\ne U^*\). This implies that there is no single U that is simultaneously a maximizer for both problems (A2) and (A4). Hence it must also be true that
Suppose that \(\max _{u\in U_1^*} (\ell - L_T(u) ) < \max _{u\in U^*} (\ell - L_T(u) )\). By the second assertion in Lemma A.1, \(\ell -L_T(u_{hi})\) is either equal to or one less than \(\max _{u\in U^*} (\ell - L_T(u) )\) for every \(u_{hi} \in U^*\). This implies that \(\max _{u\in U_1^*} (\ell - L_T(u) ) \le \min _{u\in U^*} (\ell - L_T(u) )\), and therefore (A6) cannot be true, leading to a contradiction. So let us assume that \(\max _{u\in U_1^*} (\ell - L_T(u) ) \ge \max _{u\in U^*} (\ell - L_T(u) )\). In that case, (A6) implies that
This is a contradiction to the fact that \(U^*\) is a maximizer for the problem in (A2). Hence we have proved that \(U_1^* = U^*= \{ u_1^*, u_2^*,\ldots , u_\ell ^*\}\). Therefore, within the set of all sequences constrained by the first condition in Definition 4, the sequence
where \(\ell -L_T(u_i^*)\) is as given in (A3) in the same order, maximises \(k(s_{\ell })\).
As the final step to complete the proof, we proceed to check if \(s_{\ell }^*\) satisfies the second condition in Definition 4. Let us define
Let us consider the case when \(\ell \) is not a power of 2. Recall that m is the number of \(u\in U^*\) satisfying \(L_T(u)=\lfloor \log _2 \ell \rfloor \). Then it can be computed that
Since \(1 \le m \le \ell - 2\), \(\varvec{\gamma }^*\) cannot be equal to \(\textsf{cshift}(\varvec{\gamma }^*,\ell _0)\) for any \(1 \le \ell _0 < \ell \). Since \(s_{\ell }^* = f_{\ell }\) when \(\ell \) is not a power of 2, this completes the proof for that case.
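This distinguishability can be sanity-checked computationally: a length-\(\ell \) vector consisting of one run of m equal entries followed by a run of \(\ell -m\) entries of a different value (the block structure we assume here for \(\varvec{\gamma }^*\), up to the two values involved) differs from every nontrivial cyclic shift of itself whenever \(1 \le m \le \ell -2\).

```python
# A vector with one run of m copies of a and l - m copies of b (a != b) is
# distinct from all of its nontrivial cyclic shifts (illustrative check).
def cshift(v, s):
    return v[s:] + v[:s]

l, a, b = 11, 3, 7                       # any two distinct values work
for m in range(1, l - 1):                # 1 <= m <= l - 2
    gamma = [a] * m + [b] * (l - m)
    assert all(gamma != cshift(gamma, s) for s in range(1, l))
```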
What remains is the case when \(\ell \) is a power of 2. In this case, \(\varvec{\gamma }^* = (\frac{2^{\ell }}{\ell }-1, \frac{2^{\ell }}{\ell }-1, \ldots , \frac{2^{\ell }}{\ell }-1)\) and clearly \(\textsf{cshift}(\varvec{\gamma }^*,\ell _0) = \varvec{\gamma }^*\) for every \(\ell _0\). Thus \(s_{\ell }^*\) violates the second condition of Definition 4 and is therefore not anchor-decodable. Since \(s_{\ell }^*\) maximizes \(k(s_{\ell })\) among all sequences satisfying the first condition, and \(k(s_{\ell }^*) = \ell + (\ell -1)(\ell - \log _2 \ell ) = \ell ^2 - \ell \log _2 \ell + \log _2 \ell \), we obtain \(\max k(s_{\ell }) \le \ell ^2 - \ell \log _2 \ell + \log _2 \ell \), where the maximization is over the set of all anchor-decodable sequences. On the other hand, we have
and \(f_\ell \) is anchor-decodable. Therefore the optimality of \(f_{\ell }\) follows if we prove that \(k(s_{\ell }) \ne \ell ^2 - \ell \log _2 \ell + \log _2 \ell \) for any anchor-decodable \(s_{\ell }\). Suppose on the contrary that \(k({\hat{s}}_{\ell })= \ell ^2 - \ell \log _2 \ell + \log _2 \ell \) for some anchor-decodable sequence \({\hat{s}}_{\ell }\). The vector of lengths \((\log _2\ell ,\ell -{\hat{s}}_{\ell }(\ell -1),\ell -{\hat{s}}_{\ell }(\ell -2),\ldots , \ell -{\hat{s}}_{\ell }(1))\) has average length \(\log _2\ell \), noting that \({\hat{s}}_\ell (\ell ) = \ell \) by the definition of an anchor-decodable sequence. Since \({\hat{s}}_{\ell }\) respects the first condition of Definition 4, we must have
$$\frac{1}{\ell } + \sum _{i=1}^{\ell -1} 2^{-(\ell - {\hat{s}}_{\ell }(i))} \le 1. \quad \quad \mathrm{(A8)}$$
The inequality in (A8) holds because the average of the \((\ell -1)\) numbers \(\ell -{\hat{s}}_{\ell }(\ell -1),\ell -{\hat{s}}_{\ell }(\ell -2),\ldots , \ell -{\hat{s}}_{\ell }(1)\) works out to \(\log _2\ell \), and hence \(\min _{i=1,\ldots ,\ell -1} (\ell - {\hat{s}}_{\ell }(i)) \le \log _2\ell \). By (A8) and Lemma A.2, the length vector \((\log _2\ell ,\ell -{\hat{s}}_{\ell }(\ell -1),\ell -{\hat{s}}_{\ell }(\ell -2),\ldots , \ell -{\hat{s}}_{\ell }(1))\) corresponds to a prefix-free code of average length \(\log _2\ell \). If \(T_p\) is the code tree of this code, then by the third statement of Lemma A.1, \(L_{T_p}(v_i) = \log _2\ell \) for every i. Therefore \({\hat{s}}_{\ell }(i)=\ell -\log _2\ell \) for every \(i=1,2,\ldots ,\ell -1\). Thus \({\hat{s}}_\ell \) equals \(s_{\ell }^*\), contradicting the assumption that \({\hat{s}}_{\ell }\) is anchor-decodable. It follows that \(k(s_\ell )\) is maximized by choosing \(s_{\ell } = f_{\ell }\) when \(\ell \) is a power of 2. This completes the alternate proof of Theorem 2.4, built on its connection with the Huffman code.