Recent zbMATH articles in MSC 94https://zbmath.org/atom/cc/942024-02-28T19:32:02.718555ZUnknown authorWerkzeugNew bound for affine resolvable designs and its application to authentication codeshttps://zbmath.org/1527.050322024-02-28T19:32:02.718555Z"Kurosawa, Kaoru"https://zbmath.org/authors/?q=ai:kurosawa.kaoru"Kageyama, Sanpei"https://zbmath.org/authors/?q=ai:kageyama.sanpeiSummary: In this paper, we first prove that \(b \leq v+t-1\) for an ``affine'' \(( \alpha_1,\dots, \alpha_t)\)-resolvable design, where \(b\) denotes the number of blocks, \(v\) denotes the number of symbols and \(t\) denotes the number of classes. Our inequality is the opposite of the well-known inequality \(b \geq v+t -1\) for a ``balanced'' \(( \alpha_1,\dots, \alpha_t)\)-resolvable design. Next, by applying our inequality, we present a tighter lower bound than before on the key size for authentication codes with arbitration. Although this model of authentication codes is very important in practice, it has been too complicated to be analyzed. We show that the receiver's key has the structure of an affine \(\alpha \)-resolvable design \(( \alpha_1=\cdots= \alpha_t= \alpha )\) and \(v\) corresponds to the number of keys under a proper assumption. (Note that our inequality is a lower bound on \(v\).)
For the entire collection see [Zbl 0847.00053].Weighted subspace designs from \(q\)-polymatroidshttps://zbmath.org/1527.050332024-02-28T19:32:02.718555Z"Byrne, Eimear"https://zbmath.org/authors/?q=ai:byrne.eimear"Ceria, Michela"https://zbmath.org/authors/?q=ai:ceria.michela"Ionica, Sorina"https://zbmath.org/authors/?q=ai:ionica.sorina"Jurrius, Relinde"https://zbmath.org/authors/?q=ai:jurrius.relinde-p-m-jSummary: The Assmus-Mattson Theorem [\textit{E. F. Assmus jun.} and \textit{H. F. Mattson jun.}, J. Comb. Theory 6, 122--151 (1969; Zbl 0179.02901)] gives a way to identify block designs arising from codes. This result was broadened to matroids and weighted designs by \textit{T. Britz} et al. [SIAM J. Discrete Math. 23, No. 2, 1082--1099 (2009; Zbl 1219.05033)]. In this work we present a further two-fold generalisation: first from matroids to polymatroids and also from sets to vector spaces. To achieve this, we study the characteristic polynomial of a \(q\)-polymatroid and outline several of its properties. We also derive a MacWilliams duality result and apply this to establish criteria on the weight enumerator of a \(q\)-polymatroid for which dependent spaces of the \(q\)-polymatroid form the blocks of a weighted subspace design.On the coset graph construction of distance-regular graphshttps://zbmath.org/1527.051812024-02-28T19:32:02.718555Z"Shi, Minjia"https://zbmath.org/authors/?q=ai:shi.minjia"Krotov, Denis S."https://zbmath.org/authors/?q=ai:krotov.denis-s"Solé, Patrick"https://zbmath.org/authors/?q=ai:sole.patrickSummary: We show that no more new distance-regular graphs in the tables of the book of \textit{A. E. Brouwer} et al. [Distance-regular graphs. 
Berlin etc.: Springer-Verlag (1989; Zbl 0747.05073)] can be produced by using the coset graph of additive completely regular codes over finite fields.Whitney numbers of combinatorial geometries and higher-weight Dowling latticeshttps://zbmath.org/1527.060012024-02-28T19:32:02.718555Z"Ravagnani, Alberto"https://zbmath.org/authors/?q=ai:ravagnani.albertoSummary: We study the Whitney numbers of the first kind of combinatorial geometries, in connection with the theory of error-correcting codes. The first part of the paper is devoted to general results relating the Möbius functions of nested atomistic lattices, extending some classical theorems in combinatorics. We then specialize our results to restriction geometries, i.e., to sublattices \(\mathcal{A}\) of the lattice of subspaces of an \(\mathbb{F}_q\)-linear space, say, \(X\), generated by a set of projective points \(A\subseteq X\). In this context, we introduce the notion of \textit{subspace distribution} and show that partial knowledge of the latter is equivalent to partial knowledge of the Whitney numbers of \(\mathcal{A}\). This refines a classical result by Dowling. The most interesting applications of our results are to be seen in the theory of \textit{higher-weight Dowling lattices} (HWDLs), to which we devote the second and most substantive part of the paper. These combinatorial geometries were introduced by Dowling in 1971 in connection with fundamental problems in coding theory, most notably the famous MDS conjecture. They were further studied by, among others, Zaslavsky, Bonin, Kung, Brini, and Games. To date, still very little is known about these lattices and the techniques to compute their Whitney numbers have not been discovered yet. In this paper, we bring forward the theory of HWDLs, computing their Whitney numbers for new infinite families of parameters. 
We also show that the second Whitney numbers of HWDLs are polynomials in the underlying field size \(q\), whose coefficients are curious expressions involving the Bernoulli numbers. In passing, we obtain new results intersecting coding theory and enumerative combinatorics.On the number of factorizations of \(t\bmod N\) and the probability distribution of Diffie-Hellman secret keys for many usershttps://zbmath.org/1527.110032024-02-28T19:32:02.718555Z"Leibak, Alar"https://zbmath.org/authors/?q=ai:leibak.alarAuthor's abstract: We study the number \(R_n(t,N)\) of tuplets \((x_1,\ldots, x_n)\) of congruence classes modulo \(N\) such that \(x_1\cdots x_n \equiv t\pmod N\). As a result, we derive a recurrence for \(R_n(t,N)\) and prove some multiplicative properties of \(R_n(t,N)\). Furthermore, we apply the result to study the probability distribution of Diffie-Hellman keys used in multiparty communication. We show that this probability distribution is not uniform.
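The counting function \(R_n(t,N)\) in the abstract above is easy to experiment with by brute force; a minimal sketch (the function name `R` and the toy moduli are ours), illustrating the claimed non-uniformity of the induced key distribution:

```python
from itertools import product
from math import prod

def R(n, t, N):
    """Count tuples (x_1, ..., x_n) of residues mod N with x_1*...*x_n = t (mod N)."""
    return sum(prod(xs) % N == t % N for xs in product(range(N), repeat=n))

# For N = 4, n = 2, the product x_1*x_2 mod 4 is far from uniformly distributed:
counts = [R(2, t, 4) for t in range(4)]   # -> [8, 2, 4, 2]

# R_n(t, .) behaves multiplicatively over coprime moduli, e.g. 12 = 4 * 3:
assert R(2, 1, 12) == R(2, 1, 4) * R(2, 1, 3)
```

The skew toward non-invertible targets (here \(t=0\) receives half the mass) is the kind of bias behind the non-uniformity result.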
Reviewer: László Tóth (Pécs)Introducing \(S\)-index into factoring RSA modulus via Lucas sequenceshttps://zbmath.org/1527.110932024-02-28T19:32:02.718555Z"Abu, N. A."https://zbmath.org/authors/?q=ai:abu.nur-azman"Salim, F."https://zbmath.org/authors/?q=ai:salim.flora-d|salim.farzad"Ariffin, M. R. K."https://zbmath.org/authors/?q=ai:ariffin.mahammad-rezal-kamel|ariffin.muhamad-rezal-kamelSummary: At any instance in the factoring algorithm, the accumulated result stands independently; in effect, there is no clear direction in which to manoeuvre, whether to go left or right. General Lucas sequences are practically useful in cryptography. Over the past quarter century, factoring a large RSA modulus into its primes has been one of the most important and most challenging problems in computational number theory. A factoring technique on an RSA modulus is mainly hindered by the strong prime properties. The success in factoring a few large RSA moduli within the last few decades has been due to computing prowess overcoming one strong prime of the RSA modulus. In this paper, some useful properties of Lucas sequences are explored for factoring an RSA modulus. This paper also introduces the \(S\)-index formation for solving quadratic equations modulo \(N\). The \(S\)-index pattern is very useful in designing an algorithm to factor an RSA modulus, and it adds another comparative tool for manoeuvring in a factoring process. On the one hand, it remains a theoretical challenge to overcome the strong prime properties; on the other hand, it remains a computational challenge to factor an RSA modulus within polynomial time.
This paper proposes an avenue to pursue both using general Lucas sequences.Reduction-free multiplication for finite fields and polynomial ringshttps://zbmath.org/1527.110992024-02-28T19:32:02.718555Z"Oliva Madrigal, Samira Carolina"https://zbmath.org/authors/?q=ai:oliva-madrigal.samira-carolina"Saldamlı, Gökay"https://zbmath.org/authors/?q=ai:saldamli.gokay"Li, Chen"https://zbmath.org/authors/?q=ai:li.chen.1|li.chen.3"Geng, Yue"https://zbmath.org/authors/?q=ai:geng.yue"Tian, Jing"https://zbmath.org/authors/?q=ai:tian.jing"Wang, Zhongfeng"https://zbmath.org/authors/?q=ai:wang.zhongfeng"Koç, Çetin Kaya"https://zbmath.org/authors/?q=ai:koc.cetin-kayaSummary: The complexity of the multiplication operation over polynomial rings and finite fields drastically changes with the selection of the defining polynomial of the respective mathematical structure. Trinomials and pentanomials are the most natural choices for the best arithmetic. In this paper, we first present a study in which a special type of trinomial does not require any reduction steps. We then introduce two new algorithms, FIKO and RF-FIKO, fully interleaved bit-parallel Karatsuba-Ofman multipliers, where the latter is only concerned with the three Karatsuba-Ofman terms and is free from the bipartite reduction circuits. All algorithms are implemented on FPGA and ASIC, and detailed implementation results are presented, showing significant improvements over existing methods.
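The Karatsuba-Ofman split that FIKO and RF-FIKO build on can be sketched over \(\mathrm{GF}(2)[x]\), where addition is XOR and no carries occur; the sketch below is only the textbook recursion with polynomials encoded as integer bitmasks (the function names and the cutoff are ours), not the interleaved multipliers of the paper:

```python
def clmul(a: int, b: int) -> int:
    """Schoolbook carry-less multiplication in GF(2)[x] (ints as bitmasks)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a: int, b: int, cutoff: int = 8) -> int:
    """Karatsuba-Ofman over GF(2)[x]: three half-size products per level."""
    n = max(a.bit_length(), b.bit_length())
    if n <= cutoff:
        return clmul(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    lo = karatsuba_gf2(a0, b0)
    hi = karatsuba_gf2(a1, b1)
    # In GF(2), subtraction is XOR, so the middle term needs no sign handling.
    mid = karatsuba_gf2(a0 ^ a1, b0 ^ b1) ^ lo ^ hi
    return (hi << (2 * h)) ^ (mid << h) ^ lo
```

For example, `karatsuba_gf2(0b1011, 0b110)` gives `0b111010`, i.e. \((x^3+x+1)(x^2+x) = x^5+x^4+x^3+x\).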
For the entire collection see [Zbl 1516.11002].How to construct CSIDH on Edwards curveshttps://zbmath.org/1527.140632024-02-28T19:32:02.718555Z"Moriya, Tomoki"https://zbmath.org/authors/?q=ai:moriya.tomoki"Onuki, Hiroshi"https://zbmath.org/authors/?q=ai:onuki.hiroshi"Takagi, Tsuyoshi"https://zbmath.org/authors/?q=ai:takagi.tsuyoshiIsogeny-based cryptography relies on the complexity of the Isogeny Problem, which involves computing isogenies between elliptic curves. This cryptographic branch is a promising candidate for post-quantum cryptography.
\textit{D. Jao} and \textit{L. De Feo} [Lect. Notes Comput. Sci. 7071, 19--34 (2011; Zbl 1290.94094)] introduced SIDH (Supersingular Isogeny Diffie-Hellman), a key exchange protocol based on isogenies, later resulting in SIKE (Supersingular Isogeny Key Encapsulation) becoming a 4th round candidate in the NIST post-quantum cryptography standardisation. However, \textit{W. Castryck} and \textit{T. Decru} [Lect. Notes Comput. Sci. 14008, 423--447 (2023; Zbl 07774575)] demonstrated a vulnerability in SIDH.
Subsequently, \textit{W. Castryck} et al. proposed CSIDH (Commutative Supersingular Isogeny Diffie-Hellman) in [Lect. Notes Comput. Sci. 11274, 395--427 (2018; Zbl 1407.81084)], operating on supersingular elliptic curves over \(\mathbb{F}_p\). CSIDH relies on a commutative group action on \(\mathbb{F}_p\)-isomorphism classes of supersingular Montgomery curves over \(\mathbb{F}_p\), utilising \(\pi_p\), the \(p\)-Frobenius map, to determine the action's points.
Their work demonstrates that selecting a random element from \(\mathbb{F}_p\) as an \(x\)-coordinate of a Montgomery curve yields a point in \(\ker(\pi_p -1)\) or \(\ker(\pi_p +1)\), essential for CSIDH's computations. Furthermore, they establish the uniqueness of a Montgomery coefficient up to \(\mathbb{F}_p\)-isomorphism, enabling CSIDH group action computations solely through \(\mathbb{F}_p\)-arithmetic, simplifying operations.
\textit{M. Meyer} and \textit{S. Reith} [Lect. Notes Comput. Sci. 11356, 137--152 (2018; Zbl 1407.81087)] introduced a faster CSIDH algorithm using isogenies over Edwards curves instead of Montgomery curves. Edwards curves possess cryptographic significance due to their complete group law on \(E(\mathbb{F}_p)\), enabling efficient addition formulae in certain cases.
The paper focuses on extending the CSIDH algorithm to work purely with Edwards curves over \(\mathbb{F}_p\). Its core introduces four key results that provide the groundwork for an algorithm evaluating the class group action on Edwards curves. These results include a method to compute \(w\)-coordinates of points on \(E_d\), unbiased point generation, success probabilities comparable to those on Montgomery curves, and the uniqueness of the \(d\)-coefficient as a shared key.
Emphasising the significance of the class group action in CSIDH, the authors explore its implications in the setting of Edwards curves, presenting Vélu formulae tailored to this context. To enhance the algorithm's efficiency, an extended Elligator construction for Edwards curves is proposed; Elligator, originally designed to make elliptic-curve points indistinguishable from uniform random strings, here serves to sample suitable points efficiently.
Finally, the paper concludes with an in-depth analysis and implementation, complemented by detailed appendices.
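The Montgomery-to-Edwards translation underlying the review can be checked numerically via the standard birational map \(a=(A+2)/B\), \(d=(A-2)/B\), \(x=u/v\), \(y=(u-1)/(u+1)\); a toy sketch over a small prime field (the parameters `p`, `A`, `B` are illustrative, not CSIDH-sized, and the point search is ours):

```python
p = 103            # toy prime with p % 4 == 3, so square roots cost one exponentiation
A, B = 6, 1        # illustrative Montgomery curve B*v^2 = u^3 + A*u^2 + u

def inv(x: int) -> int:
    return pow(x, p - 2, p)        # Fermat inversion mod p

# Find a point (u, v) on the Montgomery curve with v != 0 and u + 1 != 0.
u = v = None
for cand in range(2, p - 1):
    w = (cand**3 + A * cand**2 + cand) % p
    if w and pow(w, (p - 1) // 2, p) == 1:   # Euler criterion: w is a square
        u, v = cand, pow(w, (p + 1) // 4, p)
        break

# Standard birational map to the twisted Edwards curve a*x^2 + y^2 = 1 + d*x^2*y^2.
a = (A + 2) * inv(B) % p
d = (A - 2) * inv(B) % p
x = u * inv(v) % p
y = (u - 1) * inv(u + 1) % p
assert (a * x * x + y * y) % p == (1 + d * x * x * y * y) % p
```

The final assertion holds identically on the curve, since \(a(u+1)^2 - d(u-1)^2 = \tfrac{4}{B}(u^2+Au+1)\) reduces the Edwards relation to the Montgomery equation.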
Reviewer: Ramsès Fernàndez-València (Barcelona)Intermittent control for synchronization of hybrid multi-weighted complex networks with reaction-diffusion effectshttps://zbmath.org/1527.340882024-02-28T19:32:02.718555Z"Guo, Beibei"https://zbmath.org/authors/?q=ai:guo.beibei"Xiao, Yu"https://zbmath.org/authors/?q=ai:xiao.yu(no abstract)Estimates of entropy for multiplier operators of systems of orthonormal functionshttps://zbmath.org/1527.410082024-02-28T19:32:02.718555Z"Milaré, J."https://zbmath.org/authors/?q=ai:milare.jessica"Kushpel, A. K."https://zbmath.org/authors/?q=ai:kushpel.alexander-k"Tozoni, S. A."https://zbmath.org/authors/?q=ai:tozoni.sergio-antonioThe authors calculate upper and lower estimates for \(\varepsilon\)-entropy and entropy numbers of multiplier operators of systems of orthonormal functions. Upper estimates require that a Marcinkiewicz-type multiplier theorem is available for the system. Among the applications, the following function systems are considered: the trigonometric system, the Haar system, the Walsh system and the Vilenkin system. Some related results for \(\varepsilon\)-entropy of sets of functions on compact smooth manifolds can be found in [\textit{M. Ehler} and \textit{F. Filbir}, J. Approx. Theory 225, 41--57 (2018; Zbl 1388.41017); \textit{A. Kushpel} et al., Turk. J. Math. 45, No. 1, 167--184 (2021; Zbl 1522.41012)].
Reviewer: Yuri A. Farkov (Moskva)Uncertainty principles for doubly periodic functionshttps://zbmath.org/1527.420022024-02-28T19:32:02.718555Z"Wei, Xin"https://zbmath.org/authors/?q=ai:wei.xin"Qu, Feifei"https://zbmath.org/authors/?q=ai:qu.feifei"Liu, Hua"https://zbmath.org/authors/?q=ai:liu.hua.2"Bian, Xiaoli"https://zbmath.org/authors/?q=ai:bian.xiaoli(no abstract)Uncertainty principle for free metaplectic transformationhttps://zbmath.org/1527.420112024-02-28T19:32:02.718555Z"Zhang, Zhichao"https://zbmath.org/authors/?q=ai:zhang.zhichaoSummary: This study devotes to Heisenberg's uncertainty inequalities of complex-valued functions in two free metaplectic transformation (FMT) domains without the assumption of orthogonality. In our latest work [the author, ibid. 27, No. 4, Paper No. 68, 32 p. (2021; Zbl 1471.42024)], it is crucial that the FMT needs to be orthogonal for a decoupling of the cross terms. Instead of applying the orthogonality assumption, our current work uses the trace inequality for the product of symmetric matrices and positive semidefinite matrices to address the problem of coupling between cross terms. It formulates two types of lower bounds on the uncertainty product of complex-valued functions for two FMTs. The first one relies on the minimum eigenvalues of \(\mathfrak{A}_j^{\mathrm{T}}\mathfrak{A}_j- \mathfrak{B}_j^{\mathrm{T}}\mathfrak{A}_j\), \(\mathfrak{B}_j^{\mathrm{T}}\mathfrak{A}_j\), \(\mathfrak{B}_j^{\mathrm{T}}\mathfrak{B}_j - \mathfrak{B}_j^{\mathrm{T}}\mathfrak{A}_j\), while the other one relies on the minimum eigenvalues of \(\mathfrak{A}_j^{\mathrm{T}}\mathfrak{A}_j + \mathfrak{B}_j^{\mathrm{T}}\mathfrak{A}_j\), \(\mathfrak{B}_j^{\mathrm{T}}\mathfrak{B}_j + \mathfrak{B}_j^{\mathrm{T}}\mathfrak{A}_j\) and the maximum eigenvalues of \(\mathfrak{B}_j^{\mathrm{T}}\mathfrak{A}_j\), where \(\mathfrak{A}_j\), \(\mathfrak{B}_j\), \(j = 1, 2\) are the blocks found in symplectic matrices. 
All of them also rely on the covariance and absolute covariance. Sufficient conditions that truly give rise to the lower bounds are obtained. The theoretical results are verified by examples and experiments.Hyperbent functions from hyperovalshttps://zbmath.org/1527.510022024-02-28T19:32:02.718555Z"Abdukhalikov, Kanat"https://zbmath.org/authors/?q=ai:abdukhalikov.kanat-s"Ho, Duy"https://zbmath.org/authors/?q=ai:ho.duy|ho.duy-binhThe paper under review is organized as follows. In Section 2, the authors recall preliminary results about the polar presentation of hyperovals and Kloosterman sums. In Section 3, they consider the connection between Kloosterman sums and suitable \(g\)-functions from translation, Subiaco, and Adelaide hyperovals. In Section 4, the authors describe a construction of hyperbent functions using suitable \(g\)-functions. Examples of hyperbent functions from the authors' construction are provided in Section 5. In Subsection 5.3 the authors provide examples of hyperbent functions from translation hyperovals in Table 5.
Reviewer: Jan Kurek (Lublin)On a new generalized Tsallis relative operator entropyhttps://zbmath.org/1527.540052024-02-28T19:32:02.718555Z"Tarik, Lahcen"https://zbmath.org/authors/?q=ai:tarik.lahcen"Chergui, Mohamed"https://zbmath.org/authors/?q=ai:chergui.mohamed-el-amine"El Wahbi, Bouazza"https://zbmath.org/authors/?q=ai:el-wahbi.bouazzaSummary: In this paper, we present a generalization of Tsallis relative operator entropy defined for positive operators and we investigate some related properties. Some inequalities involving the generalized Tsallis relative operator entropy are pointed out as well.Efficient location-based tracking for IoT devices using compressive sensing and machine learning techniqueshttps://zbmath.org/1527.620532024-02-28T19:32:02.718555Z"Aboushelbaya, Ramy"https://zbmath.org/authors/?q=ai:aboushelbaya.ramy"Aguacil, Taimir"https://zbmath.org/authors/?q=ai:aguacil.taimir"Huang, Qiuting"https://zbmath.org/authors/?q=ai:huang.qiuting"Norreys, Peter A."https://zbmath.org/authors/?q=ai:norreys.peter-aSummary: In this chapter, a scheme based on compressive sensing (CS) for the sparse reconstruction of down-sampled location data is presented for the first time. The underlying sparsity properties of the location data are explored and two algorithms based on LASSO regression and neural networks are shown to be able to efficiently reconstruct paths with only \(\sim 20\)\% sampling of the GPS receiver. An implementation for iOS devices is discussed and results from it are shown as proof of concept of the applicability of CS in location-based tracking for Internet of Things (IoT) devices.
For the entire collection see [Zbl 1495.90002].A tensor regularized nuclear norm method for image and video completionhttps://zbmath.org/1527.650472024-02-28T19:32:02.718555Z"Bentbib, A. H."https://zbmath.org/authors/?q=ai:bentbib.abdeslem-hafid"El Hachimi, A."https://zbmath.org/authors/?q=ai:el-hachimi.anas"Jbilou, K."https://zbmath.org/authors/?q=ai:jbilou.khalide"Ratnani, A."https://zbmath.org/authors/?q=ai:ratnani.ahmedSummary: In the present paper, we propose two new methods for tensor completion of third-order tensors. The proposed methods consist in minimizing the average rank of the underlying tensor using its approximation, namely the tensor nuclear norm. The recovered data are obtained by combining the minimization process with the total variation regularization technique. We adopt the alternating direction method of multipliers, using the tensor T-product, to solve the main optimization problems associated with the two proposed algorithms. In the last section, we present some numerical experiments and comparisons with the best-known image and video completion methods.A first-order Rician denoising and deblurring modelhttps://zbmath.org/1527.650902024-02-28T19:32:02.718555Z"Yang, Wenli"https://zbmath.org/authors/?q=ai:yang.wenli.1"Huang, Zhongyi"https://zbmath.org/authors/?q=ai:huang.zhongyi"Zhu, Wei"https://zbmath.org/authors/?q=ai:zhu.wei.1Summary: In this paper, we propose a first-order variational model for Rician denoising and deblurring. The model employs a recently developed regularizer that has proven to be effective in image restoration [\textit{W. Zhu}, J. Sci. Comput. 88, No. 2, Paper No. 46, 23 p. (2021; Zbl 1471.94006)]. Due to this regularizer, our model is able to suppress the staircase effect, and more importantly, it helps preserve image contrast, which considerably reduces the blurring effect in restored images.
We present the maximum-minimum principle of the model and give a more accurate convex approximation than the conventional one for the fidelity term. The augmented Lagrangian method is utilized to minimize the associated functional. Synthetic and real gray and color images are tested to demonstrate the features of our model.Advances in information and computer security. 18th international workshop on security, IWSEC 2023, Yokohama, Japan, August 29--31, 2023. Proceedingshttps://zbmath.org/1527.680132024-02-28T19:32:02.718555ZThe articles of this volume will not be indexed individually. For the preceding workshop see [Zbl 1503.68013].Efficient private set intersection cardinality protocol in the reverse unbalanced settinghttps://zbmath.org/1527.680602024-02-28T19:32:02.718555Z"Li, Hanyu"https://zbmath.org/authors/?q=ai:li.hanyu"Gao, Ying"https://zbmath.org/authors/?q=ai:gao.ying.1Summary: Private set intersection cardinality (PSI-CA) is a variant of private set intersection (PSI) that allows two parties, the sender and the receiver, to compute the cardinality of the intersection without leaking anything more to the other party. It is one of the best-studied applications of secure computation, and many PSI-CA protocols in balanced or unbalanced scenarios have been proposed. Generally, the unbalanced scenario means that the private set size of the receiver is significantly smaller than that of the sender. This paper mainly focuses on a new scenario, called the reverse unbalanced scenario, in which the receiver's (client's) set size is much larger than the sender's (server's). We study PSI-CA protocols that are secure against semi-honest adversaries, using the Hash-Prefix filter to effectively reduce the computation and communication overhead. We greatly optimize the previous unbalanced PSI-CA protocol and construct a reverse unbalanced PSI-CA protocol. In addition, we introduce private information retrieval (PIR) to resist the privacy leakage of the Hash-Prefix filter.
By implementing all protocols on the same platform, we compare their performance theoretically and experimentally. Combined with the Cuckoo filter, elliptic curves and multi-threading, the computational and communication efficiency of our protocol is \(26.87\times\) and \(8.48\times\) higher, respectively, than that of existing unbalanced PSI-CA protocols. Using sets of significantly different sizes, we also demonstrate the feasibility of our protocol for anonymous identity authentication.
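The Hash-Prefix filter idea can be illustrated in miniature; a toy sketch (the element names, prefix length, and helper `H` are ours; a real protocol combines this with PIR and encrypted comparisons, as the summary describes):

```python
import hashlib

def H(elem: str) -> str:
    """Hash an element to a hex digest (stand-in for the protocol's hash)."""
    return hashlib.sha256(elem.encode()).hexdigest()

PREFIX = 5   # prefix length trades bandwidth against false-positive matches

server_set = {"alice", "bob", "carol"}                  # small sender set
client_set = {"bob", "carol", "dave", "erin", "frank"}  # large receiver set

# The small-set server transmits only short hash prefixes (the "filter").
filter_prefixes = {H(e)[:PREFIX] for e in server_set}

# The large-set client counts its elements whose prefix hits the filter;
# true intersection elements always match, so collisions can only overcount.
estimate = sum(H(e)[:PREFIX] in filter_prefixes for e in client_set)
assert estimate >= 2   # "bob" and "carol" are guaranteed hits
```

Sending prefixes instead of full digests is what shrinks the communication, at the price of the leakage that the summary's PIR step is introduced to contain.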
For the entire collection see [Zbl 1516.68028].On separating proofs of knowledge from proofs of membership of languages and its application to secure identification schemes (extended abstract of COCOON'95)https://zbmath.org/1527.680732024-02-28T19:32:02.718555Z"Sakurai, Kouichi"https://zbmath.org/authors/?q=ai:sakurai.kouichiSummary: A four-move protocol for quadratic residuosity is proposed and the security is discussed. An application of the proposed protocol to a cryptographic identification scheme introduces a new notion of practical soundness. Our basic approach is to separate proofs of knowledge from proofs of membership of languages. Previous works deal with proofs of knowledge as an additional property of proofs of membership.
For the entire collection see [Zbl 0847.00053].On information, quanta, and contexthttps://zbmath.org/1527.680922024-02-28T19:32:02.718555Z"Acacio de Barros, J."https://zbmath.org/authors/?q=ai:de-barros.jose-acacioSummary: Classical information theory relies on the probabilities that signals appear in a source. As such, information depends on Kolmogorov's probability theory. However, it is well known that some quantum sources are context-dependent, and therefore may not conform to classical probability theory. For such sources, information can be defined through von Neumann's entropy. However, there are possible contextual sources that are neither quantum nor classical. In this chapter, we explore a simple example of one such source, and show that, similarly to the quantum case, the presence of contextuality increases the amount of information. We also show that, by extending Shannon's entropy to accommodate negative quasi-probability distributions, we are able to provide a measure of informational content that is neither described by Shannon nor von Neumann's entropies.
For the entire collection see [Zbl 1448.68036].Quantitative verification of masked arithmetic programs against side-channel attackshttps://zbmath.org/1527.681292024-02-28T19:32:02.718555Z"Gao, Pengfei"https://zbmath.org/authors/?q=ai:gao.pengfei"Xie, Hongyi"https://zbmath.org/authors/?q=ai:xie.hongyi"Zhang, Jun"https://zbmath.org/authors/?q=ai:zhang.jun.2|zhang.jun|zhang.jun.29|zhang.jun.8|zhang.jun.17|zhang.jun.34|zhang.jun.7|zhang.jun.6|zhang.jun.26|zhang.jun.42|zhang.jun.1|zhang.jun.12|zhang.jun.31|zhang.jun.16|zhang.jun.9|zhang.jun.3|zhang.jun.11|zhang.jun.5|zhang.jun.10|zhang.jun.4|zhang.jun.15|zhang.jun.23|zhang.jun.27"Song, Fu"https://zbmath.org/authors/?q=ai:song.fu"Chen, Taolue"https://zbmath.org/authors/?q=ai:chen.taolueSummary: Power side-channel attacks, which can deduce secret data via statistical analysis, have become a serious threat. Masking is an effective countermeasure for reducing the statistical dependence between secret data and side-channel information. However, designing masking algorithms is an error-prone process. In this paper, we propose a hybrid approach combining type inference and model-counting to verify masked arithmetic programs against side-channel attacks. The type inference allows an efficient, lightweight procedure to determine most observable variables whereas model-counting accounts for completeness. In case the program is not perfectly masked, we also provide a method to quantify the security level of the program. We implement our methods in a tool \textsc{QMVerif} and evaluate it on cryptographic benchmarks. The experimental results show the effectiveness and efficiency of our approach.
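The masking countermeasure being verified can be shown in miniature; a first-order Boolean masking sketch (ours, not the \textsc{QMVerif} tool):

```python
import secrets

def mask(x: int) -> tuple:
    """Split an 8-bit secret into two shares; each share alone is uniform."""
    r = secrets.randbelow(256)
    return x ^ r, r

def unmask(s0: int, s1: int) -> int:
    return s0 ^ s1

key = 0xAB
s0, s1 = mask(key)
assert unmask(s0, s1) == key

# Linear (XOR) operations can be computed share-wise, so no intermediate
# value depends statistically on the unmasked secret:
c = 0x5A
assert unmask(s0 ^ c, s1) == key ^ c
```

Nonlinear operations (e.g. AND, field multiplication) do not split this cleanly, which is exactly where masking schemes tend to go wrong and where the paper's verification effort is aimed.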
For the entire collection see [Zbl 1408.68025].Class-specific information measures and attribute reducts for hierarchy and systematicnesshttps://zbmath.org/1527.682402024-02-28T19:32:02.718555Z"Zhang, Xianyong"https://zbmath.org/authors/?q=ai:zhang.xianyong"Yao, Hong"https://zbmath.org/authors/?q=ai:yao.hong"Lv, Zhiying"https://zbmath.org/authors/?q=ai:lv.zhiying"Miao, Duoqian"https://zbmath.org/authors/?q=ai:miao.duoqianSummary: Attribute reduction of rough set theory underlies knowledge acquisition and has two hierarchical types (classification-based and class-specific attribute reducts) and two perspectives from algebra and information theory; thus, there are four combined modes in total. Informational class-specific reducts are fundamental but lacking and are thus investigated by correspondingly constructing class-specific information measures. First, three types of information measures (i.e., information entropy, conditional entropy, and mutual information) are novelly established at the class level by hierarchical decomposition to acquire their hierarchical connection, systematical relationship, uncertainty semantics, and granulation monotonicity. Second, three types of informational class-specific reducts are correspondingly proposed to acquire their internal relationship, basic properties, and heuristic algorithm. Third, the informational class-specific reducts achieve their transverse connections, including the strength feature and consistency degeneration, with the algebraic class-specific reducts and their hierarchical connections, including the hierarchical strength and balance, with the informational classification-based reducts. Finally, relevant information measures and attribute reducts are effectively verified by decision tables and data experiments. 
Class-specific information measures deepen existing classification-based information measures by a hierarchical isomorphism, while the informational class-specific reducts systematically perfect attribute reduction by level and viewpoint isomorphisms; these results facilitate uncertainty measurement and information processing, especially at the class level.Quantum Brascamp-Lieb dualitieshttps://zbmath.org/1527.810102024-02-28T19:32:02.718555Z"Berta, Mario"https://zbmath.org/authors/?q=ai:berta.mario"Sutter, David"https://zbmath.org/authors/?q=ai:sutter.david"Walter, Michael"https://zbmath.org/authors/?q=ai:walter.michaelSummary: Brascamp-Lieb inequalities are entropy inequalities which have a dual formulation as generalized Young inequalities. In this work, we introduce a fully quantum version of this duality, relating quantum relative entropy inequalities to matrix exponential inequalities of Young type. We demonstrate this novel duality by means of examples from quantum information theory -- including entropic uncertainty relations, strong data-processing inequalities, super-additivity inequalities, and many more. As an application we find novel uncertainty relations for Gaussian quantum operations that can be interpreted as quantum duals of the well-known family of `geometric' Brascamp-Lieb inequalities.Unifying nonclassical correlation via Tsallis \(\alpha\)-entropy and its application in bilocal scenariohttps://zbmath.org/1527.810162024-02-28T19:32:02.718555Z"Muthuganesan, R."https://zbmath.org/authors/?q=ai:muthuganesan.rSummary: This article introduces a new variant of a family of measurement-induced nonlocality (MIN) measures using Tsallis \(\alpha\)-relative entropy (T\(\alpha\)E). We show that the proposed MIN measure satisfies all the axioms of a faithful measure of quantum correlation and resolves the local ancilla issue of Hilbert-Schmidt MIN. Analytically, we evaluate the MIN for an arbitrary pure state and \(2 \times n\)-dimensional mixed state. 
We also establish simple relations between the Tsallis MIN and the other MINs available in the literature. Extending the notion of the Tsallis entropy-based MIN, we characterize nonbilocal correlations in a bilocal scenario and identify a close connection between the nonlocal and nonbilocal correlation measures.A quantum dialogue reduced by half unitary operationshttps://zbmath.org/1527.810222024-02-28T19:32:02.718555Z"Lang, Yan-Feng"https://zbmath.org/authors/?q=ai:lang.yan-fengSummary: Quantum dialogue (QD) can enable two communicants to exchange their secrets in an unconditionally secure way without pre-shared keys. However, existing QD protocols generally require both communicants to perform unitary operations, and their information-theoretical efficiencies (ITE) are basically 50\%. This work proposes the new concept of a unilateral unitary operation to realize QD: only one communicant needs to perform unitary operations, which halves the number of unitary operations required. The proposed method thus both reduces cost and increases efficiency, without damaging QD security or leaking information to attackers. Moreover, it raises the ITE to a rare 66.67\%. Only single photons are needed, whose preparation and measurement can be easily implemented with current techniques. Therefore, the proposed QD protocol is highly practicable and can be regarded as a potent alternative for QD.
In particular, the evolution of fidelity, relative entropy and quantum entanglement (measured by negativity) is considered for a family of Gaussian states (one- and two-mode squeezed vacuum and three-mode basset-hound states). It is shown how to enhance detection of memory effects by adding an additional (ancillary) mode such that only a single mode (a system) is subjected to the evolution. It is observed that even a small amount of squeezing allows detection of non-Markovian memory effects.On the impossibility of key agreements from quantum random oracleshttps://zbmath.org/1527.810372024-02-28T19:32:02.718555Z"Austrin, Per"https://zbmath.org/authors/?q=ai:austrin.per"Chung, Hao"https://zbmath.org/authors/?q=ai:chung.hao"Chung, Kai-Min"https://zbmath.org/authors/?q=ai:chung.kai-min"Fu, Shiuan"https://zbmath.org/authors/?q=ai:fu.shiuan"Lin, Yao-Ting"https://zbmath.org/authors/?q=ai:lin.yaoting"Mahmoody, Mohammad"https://zbmath.org/authors/?q=ai:mahmoody.mohammadSummary: We study the following question, first publicly posed by \textit{A. Hosoyamada} and \textit{T. Yamakawa} in [Lect. Notes Comput. Sci. 12491, 3--32 (2020; Zbl 07666628)]. Can parties \textsf{A}, \textsf{B} with quantum computing power and classical communication rely only on a random oracle (that can be queried in quantum superposition) to agree on a key that is private from eavesdroppers?
We make the first progress on the question above and prove the following.
\begin{itemize}
\item When only one of the parties \textsf{A} is classical and the other party \textsf{B} is quantum powered, if they ask a total of \(d\) oracle queries and agree on a key with probability 1, then there is always a way to break the key agreement by asking \(O(d^2)\) classical oracle queries.
\item When both parties can make quantum queries to the random oracle, we introduce a natural conjecture, which, if true, would imply attacks with \({\operatorname{poly}}(d)\) classical queries to the random oracle. Our conjecture, roughly speaking, states that the product of any two nonzero degree-\(d\) real-valued polynomials over the Boolean hypercube, each of influence at most \(\delta =1/{\operatorname{poly}}(d)\), is nonzero. We then prove our conjecture for exponentially small influences, which leads to an (unconditional) classical \(2^{O(md)}\)-query attack on any such key agreement protocol, where \(m\) is the oracle's output length.
\item Since our attacks are classical, we then ask whether it is always possible to find classical attacks on key agreements with imperfect completeness in the quantum random oracle model. We prove a barrier for this approach, by showing that if the folklore ``Simulation Conjecture'' (first formally stated by \textit{S. Aaronson} and \textit{A. Ambainis} in [``The need for structure in quantum speedups'', Preprint, \url{arXiv:0911.0996}]) about the possibility of simulating efficient-query quantum algorithms using efficient-query classical algorithms is false, then there is in fact such a secure key agreement in the quantum random oracle model that cannot be broken classically.
\end{itemize}
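For background on the first item, the quadratic gap between honest work and attack cost is exactly the one exhibited by classical Merkle puzzles in the random-oracle model. The sketch below simulates that well-known baseline, not the authors' protocols; the domain size, the 32-bit oracle output, and the public announcement of the matched index are illustrative assumptions:

```python
import random

class RandomOracle:
    """Lazily sampled random function {0,...,N-1} -> 32-bit values, with a query counter."""
    def __init__(self, rng):
        self.table, self.rng, self.queries = {}, rng, 0
    def __call__(self, x):
        self.queries += 1
        if x not in self.table:
            self.table[x] = self.rng.getrandbits(32)
        return self.table[x]

rng = random.Random(0)
d = 8              # honest parties' query budget
N = d * d          # puzzle domain of size d^2
H = RandomOracle(rng)

# Alice: publish the hashes of d random puzzle ids (d oracle queries).
alice_ids = rng.sample(range(N), d)
published = [H(x) for x in alice_ids]

# Bob: query random ids until one hashes into the published list
# (about N/d = d queries in expectation), then announce the matched index j.
mark = H.queries
for x in rng.sample(range(N), N):
    h = H(x)
    if h in published:
        j, key_bob = published.index(h), x
        break
bob_queries = H.queries - mark

# Eve sees only `published` and `j`; inverting H needs ~N = d^2 queries.
mark = H.queries
for x in range(N):
    if H(x) == published[j]:
        key_eve = x
        break
eve_queries = H.queries - mark

key_alice = alice_ids[j]
assert key_alice == key_bob == key_eve   # all three recover the same key
```

With honest cost \(O(d)\) and eavesdropping cost \(O(d^2)\), this matches the attack complexity in the first item; the point of the paper's first result is that (for perfect completeness, with one classical party) no protocol can do better.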
For the entire collection see [Zbl 1514.94002].Constructive post-quantum reductionshttps://zbmath.org/1527.810382024-02-28T19:32:02.718555Z"Bitansky, Nir"https://zbmath.org/authors/?q=ai:bitansky.nir"Brakerski, Zvika"https://zbmath.org/authors/?q=ai:brakerski.zvika"Kalai, Yael Tauman"https://zbmath.org/authors/?q=ai:tauman-kalai.yaelSummary: Is it possible to convert classical reductions into post-quantum ones? It is customary to argue that while this is problematic in the interactive setting, non-interactive reductions do carry over. However, when considering quantum auxiliary input, this conversion results in a \textit{non-constructive} post-quantum reduction that requires duplicating the quantum auxiliary input, which is in general inefficient or even impossible. This violates the win-win premise of provable cryptography: an attack against a cryptographic primitive should lead to an algorithmic advantage.
We initiate the study of constructive quantum reductions and present positive and negative results for converting large classes of classical reductions to the post-quantum setting in a constructive manner. We show that any non-interactive non-adaptive reduction from assumptions with a polynomial solution space (such as decision assumptions) can be made post-quantum constructive. In contrast, assumptions with super-polynomial solution space (such as general search assumptions) cannot be generally converted.
Along the way, we make several additional contributions:
\begin{enumerate}
\item We put forth a framework for reductions (or general interaction) with \textit{stateful} solvers for a computational problem, that may change their internal state between consecutive calls. We show that such solvers can still be utilized. This framework and our results are meaningful even in the classical setting.
\item A consequence of our negative result is that quantum auxiliary input that is useful against a problem with a super-polynomial solution space cannot be generically ``restored'' post-measurement. This shows that the novel rewinding technique of \textit{A. Chiesa} et al. [in: IEEE 62nd Annual Symposium on Foundations of Computer Science, FOCS '21. Denver, Colorado: IEEE, 49--58 (2021; \url{doi:10.1109/FOCS52979.2021.00014})] is tight in the sense that it cannot be extended beyond a polynomial measurement space.
\end{enumerate}
For the entire collection see [Zbl 1514.94003].On the security of proofs of sequential work in a post-quantum worldhttps://zbmath.org/1527.810392024-02-28T19:32:02.718555Z"Blocki, Jeremiah"https://zbmath.org/authors/?q=ai:blocki.jeremiah"Lee, Seunghoon"https://zbmath.org/authors/?q=ai:lee.seunghoon"Zhou, Samson"https://zbmath.org/authors/?q=ai:zhou.samsonSummary: A Proof of Sequential Work (PoSW) allows a prover to convince a resource-bounded verifier that the prover invested a substantial amount of sequential time to perform some underlying computation. PoSWs have many applications including time-stamping, blockchain design, and universally verifiable CPU benchmarks. \textit{M. Mahmoody} et al. [in: Proceedings of the 4th conference on innovations in theoretical computer science, ITCS'13, Berkeley, CA, USA, January 9--12, 2013. New York, NY: Association for Computing Machinery (ACM). 373--388 (2013; Zbl 1362.94041)] gave the first construction of a PoSW in the random oracle model, though the construction relied on expensive depth-robust graphs. In a recent breakthrough, \textit{B. Cohen} and \textit{K. Pietrzak} [Lect. Notes Comput. Sci. 10821, 451--467 (2018; Zbl 1428.94067)] gave an efficient PoSW construction that does not require expensive depth-robust graphs.\par In the classical parallel random oracle model, it is straightforward to argue that any successful PoSW attacker must produce a long \(\mathcal{H}\)-sequence and that any malicious party running in sequential time \(T-1\) will fail to produce an \(\mathcal{H}\)-sequence of length \(T\) except with negligible probability. In this paper, we prove that any quantum attacker running in sequential time \(T-1\) will fail to produce an \(\mathcal{H}\)-sequence except with negligible probability -- even if the attacker submits a large batch of quantum queries in each round. The proof is substantially more challenging and highlights the power of Zhandry's recent compressed oracle technique [\textit{M.
Zhandry}, Lect. Notes Comput. Sci. 11693, 239--268 (2019; Zbl 1509.81445)]. We further extend this result to establish post-quantum security of a non-interactive PoSW obtained by applying the Fiat-Shamir transform to Cohen and Pietrzak's [loc. cit.] efficient construction.
For the entire collection see [Zbl 1465.94005].The gap is sensitive to size of preimages: collapsing property doesn't go beyond quantum collision-resistance for preimages bounded hash functionshttps://zbmath.org/1527.810402024-02-28T19:32:02.718555Z"Cao, Shujiao"https://zbmath.org/authors/?q=ai:cao.shujiao"Xue, Rui"https://zbmath.org/authors/?q=ai:xue.ruiSummary: As an enhancement of quantum collision-resistance, the collapsing property of hash functions proposed by \textit{D. Unruh} [Lect. Notes Comput. Sci. 10032, 166--195 (2016; Zbl 1407.94156)] emphasizes the hardness of distinguishing a superposition state of a hash value from a collapsed one. The collapsing property trivially implies quantum collision-resistance. However, it remains unknown whether there is a reduction from collapsing hash functions to quantum collision-resistant hash functions. In this paper, we further study the relations between these two properties and derive two intriguing results as follows:
\begin{itemize}
\item Firstly, when the size of preimages of each hash value is bounded by some polynomial, we demonstrate that the collapsing property and the collision-resistance must hold simultaneously. This result is proved in a semi-black-box manner by taking advantage of the invertibility of a unitary quantum circuit.
\item Next, we further consider the relations between these two properties in the exponential-sized preimages case. By giving a construction of polynomial bounded hash functions, which preserves the quantum collision-resistance, we show that the existence of collapsing hash functions is implied by quantum collision-resistant hash functions when the size of preimages is not too large relative to the expected value.
\end{itemize}
Our results indicate that the gap between these two properties is sensitive to the size of preimages. As a corollary, our results also reveal the non-existence of polynomial bounded equivocal collision-resistant hash functions.
For the entire collection see [Zbl 1514.94003].Communication complexity of private simultaneous quantum messages protocolshttps://zbmath.org/1527.810412024-02-28T19:32:02.718555Z"Kawachi, Akinori"https://zbmath.org/authors/?q=ai:kawachi.akinori"Nishimura, Harumichi"https://zbmath.org/authors/?q=ai:nishimura.harumichiSummary: The private simultaneous messages (PSM) model is a non-interactive version of the multiparty secure computation (MPC), which has been intensively studied to examine the communication cost of the secure computation. We consider its quantum counterpart, the private simultaneous quantum messages (PSQM) model, and examine the advantages of quantum communication and prior entanglement of this model.\par In the PSQM model, \(k\) parties \(P_1,\dots,P_k\) initially share a common random string (or entangled states in a stronger setting), and they have private classical inputs \(x_1,\dots, x_k\). Every \(P_i\) generates a quantum message from the private input \(x_i\) and the shared random string (entangled states), and then sends it to the referee \(R\). Receiving the messages from the \(k\) parties, \(R\) computes \(F(x_1,\dots,x_k)\) from the messages. The privacy condition requires that \(R\) learn nothing except \(F(x_1,\dots,x_k)\).\par We obtain the following results for this PSQM model. (i) We demonstrate that the privacy condition inevitably increases the communication cost in the two-party PSQM model as well as in the classical case presented by \textit{B. Applebaum} et al. [J. Cryptology 33, No. 3, 917--953 (2020; Zbl 1457.94003)]. In particular, we prove a lower bound of \((3-o(1))n\) on the communication complexity in PSQM protocols with a shared random string for random Boolean functions of \(2n\)-bit input, which is larger than the trivial upper bound \(2n\) of the communication complexity without the privacy condition.
(ii) We demonstrate a factor two gap between the communication complexity of PSQM protocols with shared entangled states and with shared random strings by designing a multiparty PSQM protocol with shared entangled states for a total function that extends the two-party equality function. (iii) We demonstrate an exponential gap between the communication complexity of PSQM protocols with shared entangled states and with shared random strings for a two-party partial function.
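For contrast with the quantum protocols above, the classical two-party PSM protocol for the equality function with a shared random string is very simple: with shared randomness \((a,b)\), \(a \neq 0\) over a prime field, each party sends \(m_i = a x_i + b\), and the referee outputs ``equal'' iff \(m_1 = m_2\). When \(x_1 \neq x_2\), the difference \(m_1 - m_2 = a(x_1 - x_2)\) is a uniformly random nonzero element, so the referee learns nothing beyond equality. A minimal sketch (the field size and interface are illustrative assumptions, not taken from the paper):

```python
import random

P = 2**61 - 1  # Mersenne prime; inputs are field elements in {0, ..., P-1}

def psm_equality(x1, x2, shared_rng):
    """One run of the classical PSM protocol for EQ with a shared random string."""
    a = shared_rng.randrange(1, P)   # shared uniform nonzero multiplier
    b = shared_rng.randrange(P)      # shared uniform additive mask
    m1 = (a * x1 + b) % P            # P1's single message to the referee
    m2 = (a * x2 + b) % P            # P2's single message to the referee
    return m1 == m2                  # referee's output

rng = random.Random(2024)
assert psm_equality(42, 42, rng) is True
assert psm_equality(42, 43, rng) is False  # a*(x1-x2) is nonzero mod a prime
```

Point (ii) of the abstract concerns a total function extending this equality function, for which shared entanglement halves the quantum communication relative to shared random strings.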
For the entire collection see [Zbl 1465.94005].Quantum-access security of the Winternitz one-time signature schemehttps://zbmath.org/1527.810422024-02-28T19:32:02.718555Z"Majenz, Christian"https://zbmath.org/authors/?q=ai:majenz.christian"Manfouo, Chanelle Matadah"https://zbmath.org/authors/?q=ai:manfouo.chanelle-matadah"Ozols, Maris"https://zbmath.org/authors/?q=ai:ozols.maris-aSummary: Quantum-access security, where an attacker is granted superposition access to secret-keyed functionalities, is a fundamental security model and its study has inspired results in post-quantum security. We revisit, and fill a gap in, the quantum-access security analysis of the Lamport one-time signature scheme (OTS) in the quantum random oracle model (QROM) by \textit{G. Alagic} et al. [Lect. Notes Comput. Sci. 12107, 788--817 (2020; Zbl 1480.81027)]. We then go on to generalize the technique to the Winternitz OTS. Along the way, we develop a tool for the analysis of hash chains in the QROM based on the superposition oracle technique by \textit{M. Zhandry} [Lect. Notes Comput. Sci. 11693, 239--268 (2019; Zbl 1509.81445)], which might be of independent interest.
For the entire collection see [Zbl 1465.94005].New constructions of collapsing hasheshttps://zbmath.org/1527.810432024-02-28T19:32:02.718555Z"Zhandry, Mark"https://zbmath.org/authors/?q=ai:zhandry.markSummary: Collapsing is a post-quantum strengthening of collision resistance, needed to lift many classical results to the quantum setting. Unfortunately, the only existing standard-model proofs of collapsing hashes require LWE. We construct the first collapsing hashes from the quantum hardness of any one of the following problems:
\begin{itemize}
\item LPN in a variety of low noise or high-hardness regimes, essentially matching what is known for collision resistance from LPN.
\item Finding cycles on exponentially-large expander graphs, such as those arising from isogenies on elliptic curves.
\item The ``optimal'' hardness of finding collisions in \textit{any} hash function.
\item The \textit{polynomial} hardness of finding collisions, assuming a certain plausible regularity condition on the hash.
\end{itemize}
As an immediate corollary, we obtain the first statistically hiding post-quantum commitments and post-quantum succinct arguments (of knowledge) under the same assumptions. Our results are obtained by a general theorem which shows how to construct a collapsing hash \(H'\) from a post-quantum collision-resistant hash function \(H\), regardless of whether or not \(H\) itself is collapsing, assuming \(H\) satisfies a certain regularity condition we call ``semi-regularity''.
For the entire collection see [Zbl 1514.94003].Dynamical properties of quasiparticles in a tunable Kekulé graphene superlatticehttps://zbmath.org/1527.811112024-02-28T19:32:02.718555Z"Xiong, Xiao-Yu"https://zbmath.org/authors/?q=ai:xiong.xiaoyu"Hu, Xi-Dan"https://zbmath.org/authors/?q=ai:hu.xi-dan"Zhu, Qizhong"https://zbmath.org/authors/?q=ai:zhu.qizhong"Li, Zhi"https://zbmath.org/authors/?q=ai:li.zhi.4Summary: We investigate the dynamical properties of quasiparticles in graphene superlattices with three typical Kekulé distortions (i.e., Kekulé-O, Kekulé-Y and Kekulé-M). On the one hand, we numerically show the visualized evolution process of Kekulé quasiparticles; while on the other hand, we analytically obtain the centroid trajectory of the quasiparticles, and both of them agree well with each other. The results reveal that the relativistic \textit{Zitterbewegung} (ZB) phenomenon occurs in the Kekulé systems. Furthermore, through analyzing the frequency of ZB, we unveil the one-to-one relationship between ZB and Kekulé textures, i.e., the ZB frequencies of Kekulé-O, Kekulé-Y and Kekulé-M quasiparticles feature single, double and six frequencies, respectively. Finally, we propose a scheme to distinguish among different Kekulé textures from the dynamical perspective. The predictions in this paper are expected to be experimentally verified in the near future, so as to facilitate further research of Kekulé structures in solid materials or artificial systems.Characterising heavy-tailed networks using q-generalised entropy and q-adjacency kernelshttps://zbmath.org/1527.820042024-02-28T19:32:02.718555Z"Koponen, Ismo T."https://zbmath.org/authors/?q=ai:koponen.ismo-t"Palmgren, Elina"https://zbmath.org/authors/?q=ai:palmgren.elina"Keski-Vakkuri, Esko"https://zbmath.org/authors/?q=ai:keski-vakkuri.eskoSummary: Heavy-tailed networks, which have degree distributions characterised by slower than exponentially bounded tails, are common in many different situations. 
Some interesting cases, where heavy tails are characterised by inverse powers \(\lambda\) in the range \(1<\lambda<2\), arise for associative knowledge networks, and semantic and linguistic networks. In these cases, the differences between the networks are often delicate, calling for robust methods to characterise the differences. Here, we introduce a method for comparing networks using a density matrix based on q-generalised adjacency matrix kernels. It is shown that comparison of networks can then be performed using the q-generalised Kullback-Leibler divergence. In addition, the q-generalised divergence can be interpreted as a q-generalised free energy, which enables the thermodynamic-like macroscopic description of the heavy-tailed networks. The viability of the q-generalised adjacency kernels and the thermodynamic-like description in characterisation of complex networks is demonstrated using a simulated set of networks, which are modular and heavy-tailed with a degree distribution of inverse power law in the range \(1<\lambda<2\).Statistical ensembles and logarithmic corrections to black hole entropyhttps://zbmath.org/1527.830522024-02-28T19:32:02.718555Z"Ghosh, Aritra"https://zbmath.org/authors/?q=ai:ghosh.aritraSummary: In this paper, we consider general statistical ensembles and compute logarithmic corrections to the microcanonical entropy resulting due to thermodynamic fluctuations which are controlled by the boundary conditions, i.e. due to choice of ensemble. The framework is applied to the case of non-extremal black holes to give certain logarithmic corrections to the Bekenstein-Hawking entropy. We argue that within the framework of black hole chemistry, where the cosmological constant is identified with bulk pressure, the isoenthalpic-isobaric entropy rather than microcanonical entropy carries a more natural and consistent thermodynamic interpretation as black hole entropy. 
Logarithmic corrections to both microcanonical and isoenthalpic-isobaric entropies of black holes are computed, and we show that the latter set of corrections in black hole chemistry is of the same form as corrections to the microcanonical entropy in theories where the cosmological constant is not interpreted as a thermodynamic pressure. Finally, we compute logarithmic corrections to entropy in the framework of holographic black hole chemistry. We emphasize the choice of statistical ensemble, both in the bulk and on the boundary, in order to have a consistent comparison between them. The corrections studied in this paper are distinct from those obtained from Euclidean quantum gravity.Stable recovery of weighted sparse signals from phaseless measurements via weighted \(l_1\) minimizationhttps://zbmath.org/1527.902642024-02-28T19:32:02.718555Z"Huo, Haiye"https://zbmath.org/authors/?q=ai:huo.haiye(no abstract)Rational inattention with multiple attributeshttps://zbmath.org/1527.910542024-02-28T19:32:02.718555Z"Walker-Jones, David"https://zbmath.org/authors/?q=ai:walker-jones.davidSummary: This paper studies a new measure for the cost of learning that allows the different attributes of the options faced by an agent to differ in their associated learning costs. The new measure maintains the tractability of Shannon's classic measure but produces richer choice predictions and identifies a new form of informational bias significant for welfare and counterfactual analysis that is conducted with the multinomial logit model.
Necessary and sufficient conditions are provided for optimal agent behavior under the new measure for the cost of learning.Information-theoretic cryptographyhttps://zbmath.org/1527.940012024-02-28T19:32:02.718555Z"Tyagi, Himanshu"https://zbmath.org/authors/?q=ai:tyagi.himanshu"Watanabe, Shun"https://zbmath.org/authors/?q=ai:watanabe.shunIn cryptology, proofs of security, robustness, and efficiency are essential, and information-theoretic methods are certainly very useful in obtaining such proofs. Cryptography is based on the assumption of the existence of one-way functions. In practice, the hardness of computationally inverting those maps entails the robustness of cryptographic protocols. Information-theoretic cryptography may additionally require that certain correlated random variables be available for participants in a cryptographic environment. Different kinds of correlation make cryptographic primitives for encryption, authentication, and secure computation feasible. When modeling the notion of security from the computational point of view, adversaries are restricted to polynomial-time procedures; there is no such restriction in the information-theoretic setting.
Many discussions have been published about the ``uneasy relation between cryptographers and mathematicians''. It is interesting how the authors remark, in the initial part of the book, on the relation between researchers in cryptography and researchers in information theory. The reviewed book may serve to close the gap between researchers in information theory and the practitioners and developers of cryptography in industry and academia. As a textbook, it has the level of graduate programs in mathematics and computer science, and as a reference book, it is a valuable tool for consultation.
The book consists of two parts. The first part is devoted to encryption, authentication, and secret keys, explaining information-theoretically secure cryptography with a general audience in mind. Through the notion of mutual information, it is analyzed how much information about plaintexts may be gained from the corresponding ciphertexts; after the Shannon theorem (to attain perfect secrecy, the secret key must be as long as the message), the notion of approximate security is discussed; hash functions are treated in detail, as are information reconciliation and privacy amplification for secure key agreement. Authentication is dealt with using universal hash families. The second part is devoted to secure computation, where several parties aim to compute a function, each party using its private data without revealing it to any other party. The cases of two parties with passive and active adversaries and the cases of more than two parties are studied. Broadcast primitives and security conditions for secure multiparty computation are analyzed in detail. For the design of secure multiparty computation, zero-knowledge proofs and verifiable secret-sharing schemes are presented. I want to remark that the authors suggest three possible courses at the graduate level with their book as a textbook: basic cryptography, cryptography from correlated randomness, and secure computation.
In summary, the reviewed book is of great utility as a textbook and as a reference for consultation. We may congratulate the authors on joining both approaches of computational cryptography and information-theoretical secure cryptography.
Reviewer: Guillermo Morales Luna (Ciudad de México)On the papers of O. M. Kasim-zade in the field of complexity theory and theory of multivalued logicshttps://zbmath.org/1527.940022024-02-28T19:32:02.718555Z"Kochergin, Vadim Vasil'evich"https://zbmath.org/authors/?q=ai:kochergin.vadim-vasilevichSummary: The paper is an attempt both to give an overview of the results of O. M. Kasim-zade, the well-known specialist in discrete mathematics and mathematical cybernetics, and to understand his scientific legacy in fields such as the study of measures of the circuit complexity of Boolean functions related to the operation of the circuits, the problems of implicit and parametric expressibility in finite-valued logics, and the questions of the depth and the complexity of Boolean functions and functions of multivalued logics over infinite bases.Handbook of mathematical models and algorithms in computer vision and imaging. Mathematical imaging and visionhttps://zbmath.org/1527.940032024-02-28T19:32:02.718555ZPublisher's description: This handbook gathers together the state of the art on mathematical models and algorithms for imaging and vision. Its emphasis lies on rigorous mathematical methods, which represent the optimal solutions to a class of imaging and vision problems, and on effective algorithms, which are necessary for the methods to be translated to practical use in various applications. Viewing discrete images as data sampled from functional surfaces enables the use of advanced tools from calculus, functions and calculus of variations, and nonlinear optimization, and provides the basis of high-resolution imaging through geometry and variational models. Besides, optimization naturally connects traditional model-driven approaches to the emerging data-driven approaches of machine and deep learning. No other framework can provide comparable accuracy and precision to imaging and vision.
Written by leading researchers in imaging and vision, the chapters in this handbook all start with gentle introductions, which make this work accessible to graduate students. For newcomers to the field, the book provides a comprehensive and fast-track introduction to the content, to save time and get on with tackling new and emerging challenges. For researchers, exposure to the state of the art of research works leads to an overall view of the entire field so as to guide new research directions and avoid pitfalls in moving the field forward and looking into the next decades of imaging and information services. This work can greatly benefit graduate students, researchers, and practitioners in imaging and vision; applied mathematicians; medical imagers; engineers; and computer scientists.
The articles of mathematical interest will be reviewed individually.Uncertainty of the virtual image correlation methodhttps://zbmath.org/1527.940042024-02-28T19:32:02.718555Z"François, Marc Louis Maurice"https://zbmath.org/authors/?q=ai:francois.marc-louis-mauriceSummary: The virtual image correlation method applies for the measurement of silhouettes boundaries with sub-pixel uncertainty. It consists in a correlation between an image of interest and a virtual image based on a parameterized curve. It is shown that the method is exact in 1D, insensitive to contrast variation, and that the bias induced by brightness variation can be easily corrected. Optimal value of the virtual image width, the sole parameter of the method, and optimal numerical settings are established. An estimator is proposed to assess the relevance of the user-chosen parameterized curve family to match the contour. An analytical formula and a diagram are given for the measurement uncertainty in both cases of noiseless and noisy images. The results obtained are validated by tests carried out on synthetic images and on real images.
{{\copyright} 2022 The Author. \textit{International Journal for Numerical Methods in Engineering} published by John Wiley \& Sons Ltd.}Fourth-order anisotropic diffusion for inpainting and image compressionhttps://zbmath.org/1527.940052024-02-28T19:32:02.718555Z"Jumakulyyev, Ikram"https://zbmath.org/authors/?q=ai:jumakulyyev.ikram"Schultz, Thomas"https://zbmath.org/authors/?q=ai:schultz.thomasSummary: Edge-enhancing diffusion (EED) can reconstruct a close approximation of an original image from a small subset of its pixels. This makes it an attractive foundation for PDE based image compression. In this work, we generalize second-order EED to a fourth-order counterpart. It involves a fourth-order diffusion tensor that is constructed from the regularized image gradient in a similar way as in traditional second-order EED, permitting diffusion along edges, while applying a non-linear diffusivity function across them. We show that our fourth-order diffusion tensor formalism provides a unifying framework for all previous anisotropic fourth-order diffusion based methods, and that it provides additional flexibility. We achieve an efficient implementation using a fast semi-iterative scheme. Experimental results on natural and medical images suggest that our novel fourth-order method produces more accurate reconstructions compared to the existing second-order EED.
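For orientation, the second-order EED tensor that the paper generalizes to fourth order is standardly built from the regularized image gradient \(\nabla u_\sigma\); a sketch of the well-known construction follows, where the diffusivity \(g\) shown is one common Perona-Malik-type choice, not necessarily the authors':

```latex
D(\nabla u_\sigma) \;=\; g\!\left(|\nabla u_\sigma|^2\right) n n^{\top} + t t^{\top},
\qquad
n = \frac{\nabla u_\sigma}{|\nabla u_\sigma|},\quad t \perp n,\ |t|=1,
\qquad
g(s) = \frac{1}{1 + s/\lambda^2},
```

so that diffusion is inhibited across edges (direction \(n\)) and unimpeded along them (direction \(t\)); the fourth-order method replaces \(D\) by a fourth-order tensor built from the same regularized gradient information.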
For the entire collection see [Zbl 1515.94008].Multisource uncertain dynamic load identification fitted by Legendre polynomial based on precise integration and the Savitzky-Golay filtershttps://zbmath.org/1527.940062024-02-28T19:32:02.718555Z"Wang, Lei"https://zbmath.org/authors/?q=ai:wang.lei.17"Xu, Hanying"https://zbmath.org/authors/?q=ai:xu.hanying"Wang, Xiaojun"https://zbmath.org/authors/?q=ai:wang.xiaojun.2"Ding, Xuyun"https://zbmath.org/authors/?q=ai:ding.xuyunSummary: In this article, a precise integration method (PIM) for the load identification of continuous structures based on the polynomial nonlinear hypothesis is presented. Essentially, load identification can be regarded as solving a dynamic equation in an inverse process. PIM is known for its high precision in forward engineering problems. It has already been used in load reconstruction, but mostly for discrete structures. In each time step, the dynamic responses of the measured points are used to identify the load without establishing a recursive chain, so PIM is not sensitive to the initial value and has no accumulated error. The Legendre-polynomial variant (LPIM) is a computational scheme in which the dynamic load is assumed to be a polynomially fitted nonlinear function in each integration time step. With just the first mode, LPIM can reliably identify the concentrated load. In addition, as uncertainty is receiving more and more attention in engineering, it also needs to be considered in load identification. In linear systems, the interval vertex method is reliable for obtaining the load bounds in the presence of multisource uncertainties. The identified load is rather rough because of the polynomial fitting. To improve the precision, Savitzky-Golay (S-G) filters, which fit the curves by polynomials over a frame length, are introduced to smooth the load with high fidelity. Eight numerical examples are investigated to demonstrate the efficiency and precision of the developed method.
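The S-G smoothing step described above admits a compact implementation: fit a low-degree polynomial to each sliding window by least squares and keep its value at the window center. A minimal numpy sketch (the window length and polynomial degree are illustrative, not the paper's settings):

```python
import numpy as np

def savgol_smooth(y, window=11, deg=3):
    """Savitzky-Golay smoothing: least-squares polynomial fit per sliding window."""
    assert window % 2 == 1 and window > deg
    half = window // 2
    x = np.arange(-half, half + 1)
    # Design matrix with columns 1, x, ..., x^deg; the fitted value at the
    # window center x=0 is just the constant coefficient of the LS fit.
    A = np.vander(x, deg + 1, increasing=True)
    center_row = np.linalg.pinv(A)[0]   # linear functional: window -> smoothed value
    ypad = np.pad(np.asarray(y, float), half, mode='edge')
    return np.array([center_row @ ypad[i:i + window] for i in range(len(y))])

# An S-G filter reproduces any polynomial of degree <= deg exactly (away from edges),
# which is why it preserves peaks better than a plain moving average.
t = np.linspace(0.0, 1.0, 60)
sig = 2.0 * t**3 - t
assert np.allclose(savgol_smooth(sig)[5:-5], sig[5:-5])
```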
{{\copyright} 2022 John Wiley \& Sons Ltd.}Equivalent relation between normalized spatial entropy and fractal dimensionhttps://zbmath.org/1527.940072024-02-28T19:32:02.718555Z"Chen, Yanguang"https://zbmath.org/authors/?q=ai:chen.yanguangSummary: Fractal dimension is defined on the basis of entropy, including macro state entropy and information entropy. The generalized correlation dimension of multifractals is based on Rényi entropy. However, the mathematical transform from entropy to fractal dimension is not clear in either theory or practice. This paper is devoted to revealing the new equivalence relation between spatial entropy and fractal dimension using the box-counting method. By means of a set of regular fractals, the numerical relationship between spatial entropy and fractal dimension is examined. The results show that the ratio of actual entropy (\(M_q\)) to the maximum entropy (\(M_{\mathrm{max}}\)) equals the ratio of actual dimension (\(D_q\)) to the maximum dimension (\(D_{\mathrm{max}}\)). The spatial entropy and fractal dimension of complex spatial systems can be converted into one another by using the functional box-counting method. The theoretical inference is verified by observational data of urban form. A conclusion is that the normalized spatial entropy is equal to the normalized fractal dimension. Fractal dimensions prove to be the characteristic values of entropies. In empirical studies, if the linear size of spatial measurement is small enough, a normalized entropy value comes arbitrarily close to the corresponding normalized fractal dimension value.
Based on the theoretical result, new spatial indexes of urban space filling can be defined, and multifractal parameters can be generalized to describe both simple systems and complex systems.Nonparametric log kernel estimator of extropy functionhttps://zbmath.org/1527.940082024-02-28T19:32:02.718555Z"Irshad, Muhammed Rasheed"https://zbmath.org/authors/?q=ai:irshad.muhammed-rasheed"Maya, Radhakumari"https://zbmath.org/authors/?q=ai:maya.radhakumariFollowing [\textit{F. Lad} et al., Stat. Sci. 30, No. 1, 40--58 (2015; Zbl 1332.62027)], the extropy \(J(X)\) of a non-negative random variable \(X\) with absolutely continuous cumulative distribution function \({F_X}(x)\) and probability density function \({f_X}(x)\) is defined by \(J(X) = - \frac{1}{2}\int\limits_0^\infty {f_X^2(x)dx} \). Following [\textit{H. D. Nguyen} et al., ``Positive data kernel density estimation via the logKDE package for R'', in: R. Islam (ed.) et al., Data Mining. Proceedings of the 16th Australasian conference, AusDM 2018, Bathurst, NSW, Australia, November 28--30, 2018. Singapore: Springer. 269--280 (2019; \url{doi:10.1007/978-981-13-6661-1_21})], the log-kernel density estimator of \({f_X}(x)\) is defined by \({\hat f_{\log }}(x) = \frac{1}{n}\sum\limits_{i = 1}^n {L(x,{X_i},h)} \), where \(L(x,z,h)\) is the log-kernel function with bandwidth \(h > 0\) at the location parameter \(z\). Here, a non-parametric log-kernel estimator for the extropy function, \({\hat J_n}(X) = - \frac{1}{2}\int\limits_0^\infty {\hat f_{\log }^2} (x)dx\), is proposed, and it is proved that under suitable conditions \({\hat J_n}(X) \to J(X)\) when \(n \to \infty \). The proposed estimator is illustrated on a real data set from [\textit{T. Bjerkedal}, Am. J. Epidemiol. 72, No. 1, 130--148 (1960; \url{doi:10.1093/oxfordjournals.aje.a120129})] containing 79 elements.
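Under the definitions above, the estimator is straightforward to realize numerically: build \(\hat f_{\log}\) from a log-kernel and integrate \(-\tfrac{1}{2}\hat f_{\log}^2\) on a grid. A sketch with a log-normal kernel, which is one admissible choice of \(L\); the bandwidth, grid, and test distribution are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def log_kernel(x, z, h):
    """Log-normal kernel L(x, z, h): a Gaussian in log-space with bandwidth h,
    located at z; each kernel is itself a density on (0, inf)."""
    return np.exp(-(np.log(x) - np.log(z))**2 / (2.0 * h**2)) / (x * h * np.sqrt(2.0 * np.pi))

def extropy_logkde(sample, h=0.3, grid=2000):
    """Plug-in estimator J_n = -1/2 * integral of fhat_log(x)^2 dx (trapezoid rule)."""
    sample = np.asarray(sample, float)
    xs = np.linspace(1e-6, 3.0 * sample.max(), grid)
    # fhat(x) = (1/n) * sum_i L(x, X_i, h), evaluated on the whole grid at once
    fhat = log_kernel(xs[:, None], sample[None, :], h).mean(axis=1)
    sq = fhat**2
    return -0.5 * float(np.sum((sq[1:] + sq[:-1]) * np.diff(xs)) / 2.0)

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.5, size=200)
J_hat = extropy_logkde(data)
assert J_hat < 0.0   # extropy of a continuous density is always negative
```

For this log-normal test distribution the true extropy is about \(-0.30\), and the plug-in estimate lands in that neighborhood up to smoothing bias and sampling noise.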
Reviewer: Jaak Henno (Tallinn)The \(q\)-exponentials do not maximize the Rényi entropyhttps://zbmath.org/1527.940092024-02-28T19:32:02.718555Z"Oikonomou, Thomas"https://zbmath.org/authors/?q=ai:oikonomou.thomas"Kaloudis, Konstantinos"https://zbmath.org/authors/?q=ai:kaloudis.konstantinos"Bagci, G. Baris"https://zbmath.org/authors/?q=ai:bagci.g-barisSummary: It is generally assumed that the Rényi entropy is maximized by the \(q\)-exponentials and is hence useful to construct a generalized statistical mechanics. However, to the best of our knowledge, this assumption has never been explicitly checked. In this work, we consider the Rényi entropy with the linear and escort mean value constraints and check whether it is indeed maximized by \(q\)-exponentials. We show, both theoretically and numerically, that the Rényi entropy yields erroneous inferences concerning the optimum distributions of the \(q\)-exponential form and moreover exhibits high estimation errors in the regime of long range correlations. Finally, we note that the Shannon entropy successfully detects the power law distributions when the logarithmic mean value constraint is used.Pilot pattern design scheme with branch and bound in PSA-OFDM systemhttps://zbmath.org/1527.940102024-02-28T19:32:02.718555Z"Wang, Shuchen"https://zbmath.org/authors/?q=ai:wang.shuchen"Gao, Suixiang"https://zbmath.org/authors/?q=ai:gao.suixiang"Yang, Wenguo"https://zbmath.org/authors/?q=ai:yang.wenguoSummary: Pilot symbol assisted (PSA) channel estimation is an important means to improve the communication quality of orthogonal frequency division multiplexing (OFDM) systems. The insertion position of the pilot in the frequency domain and time domain of the OFDM symbol is called the pilot pattern. The appropriate pilot pattern can greatly reduce channel estimation error and enhance communication quality. In this paper, the branch and bound (BnB) method is adopted to design the pilot pattern BnB-PP for the first time. 
Specifically, the result of the linear minimum mean square error method is taken as the target value of channel estimation in PSA-OFDM systems. For branching, pilot positions are randomly selected one by one in the form of a binary tree. For the bound, after the pilots are filled randomly, a correction term is subtracted from the resulting channel estimate to obtain the expected bound. The results show that BnB-PP is better than common pilot patterns. When the signal-to-noise ratio is 36, the average MSE of channel estimation for 32 and 64 pilots in 1344 data signals is reduced by \(93.24\%\) and \(62.33\%\) respectively compared with the lattice-type pilot pattern.Individual discrete logarithm with sublattice reductionhttps://zbmath.org/1527.940112024-02-28T19:32:02.718555Z"Al Aswad, Haetham"https://zbmath.org/authors/?q=ai:al-aswad.haetham"Pierrot, Cécile"https://zbmath.org/authors/?q=ai:pierrot.cecileSummary: The Number Field Sieve and its numerous variants are the best algorithms to compute discrete logarithms in medium and large characteristic finite fields. When the extension degree \(n\) is composite and the characteristic \(p\) is of medium size, the Tower variant (TNFS) is asymptotically the most efficient one. Our work deals with the last main step, namely the individual logarithm step, which computes a smooth decomposition of a given target \(T\) in the finite field thanks to two distinct phases: an initial splitting step, and a descent tree. In this article, we improve on Guillevic's algorithm, the current state of the art for the initial splitting step for composite \(n\). While still exploiting the proper subfields of the target finite field, we modify the lattice reduction subroutine that lifts the target \(T\) into a number field. Our algorithm returns lifted elements with lower degrees and coefficients, resulting in lower norms in the number field.
Not only are the lifted elements more likely to be smooth because of their smaller norms, but the smaller norms also permit setting a smaller smoothness bound for the descent tree. Asymptotically, our algorithm is faster and works for a larger range of finite fields than Guillevic's algorithm, being now relevant even when the medium characteristic \(p\) is such that \(L_{p^n}(1/3) \leq p < L_{p^n}(1/2)\). In practice, we conduct experiments on 500-bit to 2048-bit composite finite fields: our method becomes more efficient as the largest non-trivial divisor of \(n\) grows, and is thus particularly well adapted to even extension degrees.A new alternative to NTRU cryptosystem based on highly dimensional algebra with dense lattice structurehttps://zbmath.org/1527.940122024-02-28T19:32:02.718555Z"Al-Saidi, N. M. G."https://zbmath.org/authors/?q=ai:al-saidi.nadia-m-g"Yassein, H. R."https://zbmath.org/authors/?q=ai:yassein.hassan-rashedSummary: NTRU is a public-key cryptosystem with a hard mathematical structure and a short key size, whose security relies on the hardness of lattice problems. Its small key size makes it a highly performant system and gives it an advantage over other number-theoretic cryptosystems. By Shamir's conclusion, non-commutative computations in the encryption and decryption processes of a cryptosystem turn it into a highly lattice-attack-resistant system. In this paper, a new non-associative, non-commutative multidimensional alternative is proposed under the same principle as the NTRU cryptosystem. It is called HXDTRU, and its operations take place in a specially designed high-dimensional algebra called the hexadecnion algebra.
The proposed system is implemented, and its security and efficiency are analyzed.Ligero: lightweight sublinear arguments without a trusted setuphttps://zbmath.org/1527.940132024-02-28T19:32:02.718555Z"Ames, Scott"https://zbmath.org/authors/?q=ai:ames.scott"Hazay, Carmit"https://zbmath.org/authors/?q=ai:hazay.carmit"Ishai, Yuval"https://zbmath.org/authors/?q=ai:ishai.yuval"Venkitasubramaniam, Muthuramakrishnan"https://zbmath.org/authors/?q=ai:venkitasubramaniam.muthuramakrishnanSummary: We design and implement a simple zero-knowledge argument protocol for \textsf{NP} whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography. Our protocol is obtained by applying an optimized version of the general transformation of \textit{Y. Ishai} et al. [STOC 2007, 21--30 (2007; Zbl 1232.68044)] to a variant of the protocol for secure multiparty computation of \textit{I. Damgård} and \textit{Y. Ishai} [Lect. Notes Comput. Sci. 4117, 501--520 (2006; Zbl 1161.94394)]. It can be viewed as a simple zero-knowledge interactive PCP based on ``interleaved'' Reed-Solomon codes. This paper is an extended version of the authors' paper [in: Proceedings of the 2017 ACM SIGSAC conference on computer and communications security, CCS '17, Dallas, TX, USA, October 30 -- November 3, 2017. New York, NY: Association for Computing Machinery (ACM). 2087--2104 (2017; \url{doi:10.1145/3133956.3134104})] and features a tighter analysis, a better implementation, and formal proofs.
For large verification circuits, the Ligero prover remains competitive against subsequent works with respect to the prover's running time; our efficiency advantage becomes even larger in an amortized setting where several instances need to be proven simultaneously. Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage with \(2^{-40}\) soundness error, the communication complexity is roughly 35KB. The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For \(2^{-40}\) soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. With our refined analysis, the Ligero system's proof lengths and prover's running times are better than those of subsequent post-quantum ZK-SNARKs for small to moderately large circuits.On the primitivity of the AES-128 key-schedulehttps://zbmath.org/1527.940142024-02-28T19:32:02.718555Z"Aragona, Riccardo"https://zbmath.org/authors/?q=ai:aragona.riccardo"Civino, Roberto"https://zbmath.org/authors/?q=ai:civino.roberto"Volta, Francesca Dalla"https://zbmath.org/authors/?q=ai:dalla-volta.francescaSummary: The key-scheduling algorithm in the AES is the component responsible for selecting from the master key the sequence of round keys to be xor-ed to the partially encrypted state at each iteration. We consider here the group \(\Gamma\) generated by the action of the AES-128 key-scheduling operation, and we prove that the smallest group containing \(\Gamma\) and all the translations of the message space is primitive.
As a consequence, we obtain that no linear partition of the message space can be invariant under its action.Efficient proofs of knowledge for threshold relationshttps://zbmath.org/1527.940152024-02-28T19:32:02.718555Z"Avitabile, Gennaro"https://zbmath.org/authors/?q=ai:avitabile.gennaro.1"Botta, Vincenzo"https://zbmath.org/authors/?q=ai:botta.vincenzo"Friolo, Daniele"https://zbmath.org/authors/?q=ai:friolo.daniele"Visconti, Ivan"https://zbmath.org/authors/?q=ai:visconti.ivanSummary: Recently, there has been great interest towards constructing efficient zero-knowledge proofs for practical languages. In this work, we focus on proofs for threshold relations, in which the prover is required to prove knowledge of witnesses for \(k\) out of \(\ell\) statements.
The main contribution of our work is an efficient and modular transformation that starting from a large class of \(\varSigma\)-protocols and a corresponding threshold relation \(\mathcal{R}_\mathsf{k,\ell}\), provides an efficient \(\varSigma\)-protocol for \(\mathcal{R}_\mathsf{k,\ell}\) with improved communication complexity w.r.t. prior results. Our transformation preserves statistical/perfect honest-verifier zero knowledge.
For the entire collection see [Zbl 1515.68025].A composable security treatment of ECVRF and batch verificationshttps://zbmath.org/1527.940162024-02-28T19:32:02.718555Z"Badertscher, Christian"https://zbmath.org/authors/?q=ai:badertscher.christian"Gaži, Peter"https://zbmath.org/authors/?q=ai:gazi.peter"Querejeta-Azurmendi, Iñigo"https://zbmath.org/authors/?q=ai:querejeta-azurmendi.inigo"Russell, Alexander"https://zbmath.org/authors/?q=ai:russell.alexander-cSummary: Verifiable random functions allow a key-pair holder to verifiably evaluate a pseudorandom function under that particular key pair. These primitives enable fair and verifiable pseudorandom lotteries, essential in proof-of-stake blockchains such as Algorand and Cardano, and are being used to secure billions of dollars of capital. As a result, there is an ongoing IRTF effort to standardize VRFs, with a proposed ECVRF based on elliptic-curve cryptography appearing as the most promising candidate.
In this paper, towards understanding the general security of VRFs and in particular the ECVRF construction, we provide an ideal functionality in the Universal Composability (UC) framework [\textit{R. Canetti}, J. ACM 67, No. 5, Article No. 28, 94 p. (2020; Zbl 1491.68036)] that captures VRF security, and show that ECVRF UC-realizes it.
Additionally, we study batch verification in the context of VRFs. We provide a UC-functionality capturing a VRF with batch-verification capability, and propose modifications to ECVRF that allow for this feature. We again prove that our proposal UC-realizes the desired functionality. Finally, we provide a performance analysis showing that verification can yield a factor-two speedup for batches with 1024 proofs, at the cost of increasing the proof size from 80 to 128 bytes.
For the entire collection see [Zbl 1515.68025].Round-optimal oblivious transfer and MPC from computational CSIDHhttps://zbmath.org/1527.940172024-02-28T19:32:02.718555Z"Badrinarayanan, Saikrishna"https://zbmath.org/authors/?q=ai:badrinarayanan.saikrishna"Masny, Daniel"https://zbmath.org/authors/?q=ai:masny.daniel"Mukherjee, Pratyay"https://zbmath.org/authors/?q=ai:mukherjee.pratyay"Patranabis, Sikhar"https://zbmath.org/authors/?q=ai:patranabis.sikhar"Raghuraman, Srinivasan"https://zbmath.org/authors/?q=ai:raghuraman.srinivasan"Sarkar, Pratik"https://zbmath.org/authors/?q=ai:sarkar.pratikSummary: We present the first round-optimal and plausibly quantum-safe oblivious transfer (OT) and multi-party computation (MPC) protocols from the computational CSIDH assumption -- the weakest and most widely studied assumption in the CSIDH family of isogeny-based assumptions. We obtain the following results:
\begin{itemize}
\item[--] The first round-optimal maliciously secure OT and MPC protocols in the plain model that achieve (black-box) simulation-based security while relying on the computational CSIDH assumption.
\item[--] The first round-optimal maliciously secure OT and MPC protocols that achieve Universal Composability (UC) security in the presence of a trusted setup (common reference string plus random oracle) while relying on the computational CSIDH assumption.
\end{itemize}
Prior plausibly quantum-safe isogeny-based OT protocols (with/without setup assumptions) are either not round-optimal, or rely on potentially stronger assumptions.
We also build a 3-round maliciously-secure OT extension protocol where each base OT protocol requires only 4 isogeny computations. In comparison, the most efficient isogeny-based OT extension protocol to date, due to \textit{Y.-F. Lai} et al. [Lect. Notes Comput. Sci. 12696, 213--241 (2021; Zbl 1479.94207)], requires 12 isogeny computations and 4 rounds of communication, while relying on the same assumption as our construction, namely the reciprocal CSIDH assumption.
For the entire collection see [Zbl 1524.94001].Updatable NIZKs from non-interactive zapshttps://zbmath.org/1527.940182024-02-28T19:32:02.718555Z"Baghery, Karim"https://zbmath.org/authors/?q=ai:baghery.karim"Bardeh, Navid Ghaedi"https://zbmath.org/authors/?q=ai:bardeh.navid-ghaediSummary: \textit{M. Bellare} et al. [Lect. Notes Comput. Sci. 10032, 777--804 (2016; Zbl 1407.94082)] studied the security of NIZK arguments under a subverted Structured Reference String (SRS) and presented some positive and negative results. In their best positive result, they showed that by defining an SRS as a tuple satisfying a knowledge assumption in bilinear groups (e.g. \(g^a, g^b, g^{ab}\)), and then using a Non-Interactive (NI) zap to prove that either there is a witness for the statement \(\mathsf{x}\) or one knows the trapdoor of the SRS (e.g. \(a\) or \(b\)), one can build NIZK arguments that can achieve soundness and subversion zero-knowledge (zero-knowledge without trusting a third party; Sub-ZK). In this paper, we expand their idea and use NI zaps (of knowledge) to build NIZK arguments (of knowledge) with updatable, universal, and succinct SRS. To this end, we first show that their proposed sound and Sub-ZK NIZK argument can also achieve updatable soundness, which is a more desirable notion than plain soundness. Updatable soundness allows the verifier to update the SRS one time and bypass the need for a trusted third party. Then, we show that using a similar OR language, given a NI zap (of knowledge) and a key-updatable signature scheme, one can build NIZK arguments that can achieve Sub-ZK and updatable simulation soundness (resp. updatable simulation extractability). The proposed constructions are the first NIZK arguments that have updatable and succinct SRS, and do not require a random oracle. Our instantiations show that in the resulting NIZK arguments the computational cost for the parties to verify/update the SRS is negligible, namely, a few exponentiations and pairing checks.
The run times of the prover and verifier, as well as the size of the proof, are asymptotically the same as those of the underlying NI zap.
For the entire collection see [Zbl 1515.68028].CRAFT: \underline{C}omposable \underline{R}andomness beacons and output-independent \underline{A}bort MPC \underline{F}rom \underline{T}imehttps://zbmath.org/1527.940192024-02-28T19:32:02.718555Z"Baum, Carsten"https://zbmath.org/authors/?q=ai:baum.carsten"David, Bernardo"https://zbmath.org/authors/?q=ai:david.bernardo-machado"Dowsley, Rafael"https://zbmath.org/authors/?q=ai:dowsley.rafael"Kishore, Ravi"https://zbmath.org/authors/?q=ai:kishore.ravi"Nielsen, Jesper Buus"https://zbmath.org/authors/?q=ai:nielsen.jesper-buus"Oechsner, Sabine"https://zbmath.org/authors/?q=ai:oechsner.sabineSummary: Recently, time-based primitives such as time-lock puzzles (TLPs) and verifiable delay functions (VDFs) have received a lot of attention due to their power as building blocks for cryptographic protocols. However, even though exciting improvements on their efficiency and security (e.g. achieving non-malleability) have been made, most of the existing constructions do not offer general composability guarantees and thus have limited applicability. \textit{C. Baum} et al. [Lect. Notes Comput. Sci. 12698, 429--459 (2021; Zbl 1479.94124)] presented in TARDIS the first (im)possibility results on constructing TLPs with Universally Composable (UC) security and an application to secure two-party computation with output-independent abort (OIA-2PC), where an adversary has to decide to abort before learning the output. While these results establish the feasibility of UC-secure TLPs and applications, they are limited to the two-party scenario and suffer from complexity overheads. In this paper, we introduce the first UC constructions of VDFs and of the related notion of publicly verifiable TLPs (PV-TLPs). We use our new UC VDF to prove a folklore result on VDF-based randomness beacons used in industry and build an improved randomness beacon from our new UC PV-TLPs. 
We moreover construct the first multiparty computation protocol with punishable output-independent aborts (POIA-MPC), i.e. MPC with OIA and financial punishment for cheating. Our novel POIA-MPC both establishes the feasibility of (non-punishable) OIA-MPC and significantly improves on the efficiency of state-of-the-art OIA-2PC and (non-OIA) MPC with punishable aborts.
For the entire collection see [Zbl 1524.94001].Sender-binding key encapsulationhttps://zbmath.org/1527.940202024-02-28T19:32:02.718555Z"Benz, Laurin"https://zbmath.org/authors/?q=ai:benz.laurin"Beskorovajnov, Wasilij"https://zbmath.org/authors/?q=ai:beskorovajnov.wasilij"Eilebrecht, Sarai"https://zbmath.org/authors/?q=ai:eilebrecht.sarai"Müller-Quade, Jörn"https://zbmath.org/authors/?q=ai:muller-quade.jorn"Ottenhues, Astrid"https://zbmath.org/authors/?q=ai:ottenhues.astrid"Schwerdt, Rebecca"https://zbmath.org/authors/?q=ai:schwerdt.rebeccaSummary: Secure communication is gained by combining encryption with authentication. In real-world applications encryption commonly takes the form of KEM-DEM hybrid encryption, which is combined with ideal authentication. The pivotal question is how weak the employed key encapsulation mechanism (KEM) is allowed to be to still yield universally composable (UC) secure communication when paired with symmetric encryption and ideal authentication. This question has so far been addressed for public-key encryption (PKE) only, showing that encryption does not need to be stronger than sender-binding CPA, which binds the CPA secure ciphertext non-malleably to the sender ID. For hybrid encryption, prior research unanimously reaches for CCA2 secure encryption which is unnecessarily strong. Answering this research question is vital to develop more efficient and feasible protocols for real-world secure communication and thus enable more communication to be conducted securely.
In this paper we use ideas from the PKE setting to develop new answers for hybrid encryption. We develop a new and significantly weaker security notion -- sender-binding CPA for KEMs -- which is still strong enough for secure communication. By using game-based notions as building blocks, we attain secure communication in the form of ideal functionalities with proofs in the UC-framework. Secure communication is reached in both the classic as well as session context by adding authentication and one-time/replayable CCA secure symmetric encryption respectively. We furthermore provide an efficient post-quantum secure LWE-based construction in the standard model giving an indication of the real-world benefit resulting from our new security notion. Overall we manage to make significant progress on discovering the minimal security requirements for hybrid encryption components to facilitate secure communication.
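The KEM-DEM hybrid paradigm this item analyzes can be illustrated with a toy sketch (entirely our own construction, assumed for illustration: a classroom-scale Diffie-Hellman-style KEM over a toy Mersenne-prime group and a hash-derived stream DEM; not the paper's scheme and not secure for real use):

```python
import hashlib
import secrets

# Toy sketch of KEM-DEM hybrid encryption: encapsulate a fresh symmetric key
# with the KEM, then encrypt the payload under the DEM.  The group and all
# names here are our own assumptions; this offers no real-world security.
P = 2**127 - 1  # toy Mersenne-prime modulus, far too small for real use
G = 3

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def kem_encaps(pk):
    # encapsulation ct = g^r; shared key derived by hashing pk^r
    r = secrets.randbelow(P - 2) + 1
    ct = pow(G, r, P)
    key = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
    return ct, key

def kem_decaps(sk, ct):
    # receiver recovers the same key from ct^sk = g^(r*sk)
    return hashlib.sha256(str(pow(ct, sk, P)).encode()).digest()

def dem(key, data):
    # one-time XOR stream DEM keyed by a hash chain; encryption == decryption
    stream = hashlib.sha256(key + b"stream").digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# hybrid flow: KEM produces the session key, DEM carries the message
sk, pk = kem_keygen()
ct_kem, k = kem_encaps(pk)
ct_dem = dem(k, b"hello hybrid world")
assert dem(kem_decaps(sk, ct_kem), ct_dem) == b"hello hybrid world"
```

The paper's question is precisely how weak the `kem_encaps`/`kem_decaps` component may be (sender-binding CPA rather than CCA2) while the composed scheme, together with authentication, still yields UC-secure communication.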
For the entire collection see [Zbl 1524.94001].Hybrid dual and meet-LWE attackhttps://zbmath.org/1527.940212024-02-28T19:32:02.718555Z"Bi, Lei"https://zbmath.org/authors/?q=ai:bi.lei"Lu, Xianhui"https://zbmath.org/authors/?q=ai:lu.xianhui"Luo, Junjie"https://zbmath.org/authors/?q=ai:luo.junjie"Wang, Kunpeng"https://zbmath.org/authors/?q=ai:wang.kunpengSummary: The Learning with Errors (LWE) problem is one of the most prominent problems in lattice-based cryptography. Many practical LWE-based schemes, including Fully Homomorphic Encryption (FHE), use sparse ternary secrets for the sake of efficiency. Several (hybrid) attacks have been proposed that benefit from such sparseness, so researchers believe that the security of schemes with sparse ternary secrets is not yet well understood. Recently, \textit{A. May} [Lect. Notes Comput. Sci. 12826, 701--731 (2021; Zbl 1486.94131)] proposed an efficient meet-in-the-middle attack named Meet-LWE for LWE with ternary secrets, which significantly improves Odlyzko's algorithm. In this work, we generalize May's Meet-LWE and then introduce a new hybrid attack which combines Meet-LWE with the lattice dual attack. We apply our algorithm to FHE-type parameters of the LWE problem and compare it with previous hybrid dual attacks. The results show that our attack outperforms other attacks over a large range of parameters. We note that our attack has no impact on the LWE-based schemes in the PQC Standardization held by NIST, as their secrets are not sparse and/or ternary.
For the entire collection see [Zbl 1515.68044].Pattern matching in encrypted stream from inner product encryptionhttps://zbmath.org/1527.940222024-02-28T19:32:02.718555Z"Bouscatié, Élie"https://zbmath.org/authors/?q=ai:bouscatie.elie"Castagnos, Guilhem"https://zbmath.org/authors/?q=ai:castagnos.guilhem"Sanders, Olivier"https://zbmath.org/authors/?q=ai:sanders.olivierSummary: Functional encryption features secret keys, each associated with a key function \(f\), which allow to directly recover \(f(x)\) from an encryption of \(x\), without learning anything more about \(x\). This property is particularly useful when delegating data processing to a third party as it allows the latter to perform its task while ensuring minimum data leakage. However, this generic term conceals a great diversity in the cryptographic constructions that strongly differ according to the functions \(f\) they support.
A recent series of works has focused on the ability to search a pattern within a data stream, which can be expressed as a function \(f\). One of the conclusions of these works was that this function \(f\) was not supported by the current state-of-the-art, which incited their authors to propose a new primitive called Stream Encryption supporting Pattern Matching (SEPM). Some concrete constructions were proposed but with some limitations such as selective security or reliance on non-standard assumptions.
In this paper, we revisit the relations between this primitive and two major subclasses of functional encryption, namely Hidden Vector Encryption (HVE) and Inner Product Encryption (IPE). We first exhibit a generic transformation from HVE to SEPM, which immediately yields new efficient SEPM constructions with better features than existing ones. We then revisit the relations between HVE and IPE and show that we can actually do better than the transformation proposed by Katz, Sahai and Waters in their seminal paper on predicate encryption [\textit{J. Katz} et al., Lect. Notes Comput. Sci. 4965, 146--162 (2008; Zbl 1149.94323)]. This allows us to fully leverage the vast state of the art on IPE, which contains adaptively secure constructions proven under standard assumptions. This results in countless new SEPM constructions, with all the features one can wish for. Beyond that, we believe that our work sheds new light on the relations between IPE schemes and HVE schemes and in particular shows that some of the former are more suitable for constructing the latter.
For the entire collection see [Zbl 1524.94001].Breaking the \(O(\sqrt{n})\)-bit barrier: Byzantine agreement with polylog bits per partyhttps://zbmath.org/1527.940232024-02-28T19:32:02.718555Z"Boyle, Elette"https://zbmath.org/authors/?q=ai:boyle.elette"Cohen, Ran"https://zbmath.org/authors/?q=ai:cohen.ran"Goel, Aarushi"https://zbmath.org/authors/?q=ai:goel.aarushiSummary: Byzantine agreement (BA), the task of \(n\) parties to agree on one of their input bits in the face of malicious agents, is a powerful primitive that lies at the core of a vast range of distributed protocols. Interestingly, in BA protocols with the best overall communication, the demands of the parties are highly unbalanced: the amortized cost is \(\tilde{O}(1)\) bits per party, but some parties must send \(\Omega (n)\) bits. In the best known balanced protocols, the overall communication is sub-optimal, with each party communicating \(\tilde{O}(\sqrt{n})\) bits. In this work, we ask whether asymmetry is inherent for optimizing total communication. In particular, is BA possible where each party communicates only \(\tilde{O}(1)\) bits? Our contributions in this line are as follows:
\begin{itemize}
\item We define a cryptographic primitive -- succinctly reconstructed distributed signatures (SRDS) -- that suffices for constructing \(\tilde{O}(1)\) balanced BA. We provide two constructions of SRDS from different cryptographic and public-key infrastructure (PKI) assumptions.
\item The SRDS-based BA follows a paradigm of boosting from ``almost-everywhere'' agreement to full agreement, and does so in a single round. Complementarily, we prove that PKI setup and cryptographic assumptions are necessary for such protocols in which every party sends \(o(n)\) messages.
\item We further explore connections between a natural approach toward attaining SRDS and average-case succinct non-interactive argument systems (SNARGs) for a particular type of NP-Complete problems (generalizing Subset-Sum and Subset-Product).
\end{itemize}
Our results provide new approaches forward, as well as limitations and barriers, toward minimizing per-party communication of BA. In particular, we construct the first two BA protocols with \(\tilde{O}(1)\) balanced communication, offering a trade-off between setup and cryptographic assumptions and answering an open question presented by \textit{V. King} and \textit{J. Saia} [Lect. Notes Comput. Sci. 5805, 464--478 (2009; Zbl 1261.68167)].Multi-instance secure public-key encryptionhttps://zbmath.org/1527.940242024-02-28T19:32:02.718555Z"Brunetta, Carlo"https://zbmath.org/authors/?q=ai:brunetta.carlo"Heum, Hans"https://zbmath.org/authors/?q=ai:heum.hans"Stam, Martijn"https://zbmath.org/authors/?q=ai:stam.martijnSummary: Mass surveillance targets many users at the same time with the goal of learning as much as possible. Intuitively, breaking many users' cryptography simultaneously should be at least as hard as that of only breaking a single one, but ideally security degradation is gradual: an adversary ought to work harder to break more. \textit{M. Bellare} et al. [Lect. Notes Comput. Sci. 7417, 312--329 (2012; Zbl 1296.94080)] introduced the notion of multi-instance security to capture the related concept for password hashing with salts. \textit{B. Auerbach} et al. [ibid. 12107, 475--506 (2020; Zbl 1479.94117)] motivated the study of public key encryption (PKE) in the multi-instance setting, yet their technical results are exclusively stated in terms of key encapsulation mechanisms (KEMs), leaving a considerable gap.
We investigate the multi-instance security of public key encryption. Our contributions are twofold. Firstly, we define and compare possible security notions for multi-instance PKE, where we include PKE schemes whose correctness is not perfect. Secondly, we observe that, in general, a hybrid encryption scheme of a multi-instance secure KEM and an arbitrary data encapsulation mechanism (DEM) is unlikely to inherit the KEM's multi-instance security. Yet, we show how with a suitable information-theoretic DEM, and a computationally secure key derivation function if need be, inheritance is possible. As far as we are aware, ours is the first inheritance result in the challenging multi-bit scenario.
For the entire collection see [Zbl 1524.94002].A new attack on the RSA cryptosystem based on continued fractionshttps://zbmath.org/1527.940252024-02-28T19:32:02.718555Z"Bunder, M."https://zbmath.org/authors/?q=ai:bunder.martin-w"Tonien, J."https://zbmath.org/authors/?q=ai:tonien.josephSummary: This paper presents a new improved attack on RSA based on Wiener's technique using continued fractions. In the RSA cryptosystem with public modulus \(N=pq\), public key \(e\) and secret key \(d\), if \(d < \frac{1}{3} N^{\frac{1}{4}}\), Wiener's original attack recovers the secret key \(d\) using the convergents of the continued fraction of \(\frac{e}{N}\). Our new method uses the convergents of the continued fraction of \(\frac{e}{N'}\) instead, where \(N'\) is a number depending on \(N\). We will show that our method can recover the secret key if \(d^2e < 8N^{\frac{3}{2}}\), so if either \(d\) or \(e\) is relatively small the RSA encryption can be broken. For \(e \approx {N^t}\), our method can recover the secret key if \(d < 2 \sqrt 2 N^{\frac{3}{4}-\frac{t}{2}}\) and certainly for \(d < 2\sqrt 2 N^{\frac{1}{4}}\). Our experiments demonstrate that for a 1024-bit modulus RSA, our method works for values of \(d\) of up to 270 bits compared to 255 bits for Wiener.Handle the traces: revisiting the attack on ECDSA with EHNPhttps://zbmath.org/1527.940262024-02-28T19:32:02.718555Z"Cao, Jinzheng"https://zbmath.org/authors/?q=ai:cao.jinzheng"Pan, Yanbin"https://zbmath.org/authors/?q=ai:pan.yanbin"Cheng, Qingfeng"https://zbmath.org/authors/?q=ai:cheng.qingfeng"Li, Xinghua"https://zbmath.org/authors/?q=ai:li.xinghuaSummary: The Elliptic Curve Digital Signature Algorithm (ECDSA) is a standard public-key signature protocol. It has become essential to the security of blockchain, digital currency, and many more Internet applications. Currently, it is still one of the most popular algorithms used for digital signing and SSL/TLS transport.
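The continued-fraction technique underlying the RSA item above can be sketched as follows. This is the classical Wiener attack (our own illustration; the paper's variant instead expands \(\frac{e}{N'}\) for a modified \(N'\)), run on an assumed toy key with a deliberately small private exponent:

```python
from math import isqrt

# Sketch of the classical Wiener continued-fraction attack (our illustration;
# the paper's variant expands e/N' for a modified N' instead of e/N).
def convergents(a, b):
    # successive convergents h/k of the continued fraction expansion of a/b
    h0, h1, k0, k1 = 0, 1, 1, 0
    while b:
        q = a // b
        a, b = b, a - q * b
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener_attack(e, N):
    # each convergent k/d of e/N is a candidate for e*d - k*phi = 1
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        # p and q would be the roots of x^2 - (N - phi + 1)x + N = 0
        s = N - phi + 1
        disc = s * s - 4 * N
        if disc >= 0:
            r = isqrt(disc)
            if r * r == disc and (s + r) % 2 == 0:
                p, q = (s + r) // 2, (s - r) // 2
                if p * q == N:
                    return d, p, q
    return None

# toy key (assumed example): p = 9103, q = 8807, d = 5, e = d^-1 mod phi
print(wiener_attack(32060885, 80170121))  # -> (5, 9103, 8807)
```

Since \(d = 5 < \frac{1}{3} N^{\frac{1}{4}} \approx 31\) for this toy modulus, the convergent \(\frac{k}{d} = \frac{2}{5}\) of \(\frac{e}{N}\) reveals \(\varphi(N)\) and hence the factorization.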
Among the possible attacks on ECDSA, side-channel attacks pose a serious threat to hardware and software implementations. In particular, the Extended Hidden Number Problem (EHNP) can be used when ECDSA leaks side-channel information about the double-and-add chains. The problem is then converted to the shortest vector problem and solved with lattice algorithms. In this paper, we analyze the Extended Hidden Number Problem and present an improved EHNP lattice attack on ECDSA implementations that adopt leaky scalar multiplication. Our attack requires information on double-and-add chains or traces extracted from side-channel results. In addition to previously introduced methods such as elimination and merging, we make further improvements according to the specific structure of the lattice basis. In fact, a specific property of EHNP allows us to find a sublattice that contains the target vector. We simulate the attack on secp256k1, and the results show that three signatures are enough to achieve a success rate greater than 0.8. When 4 or 5 traces are known, the success rate is close to 1. The new algorithm significantly improves the performance of attacks using EHNP methods.
For the entire collection see [Zbl 1515.68044].Attribute-based anonymous credential: optimization for single-use and multi-usehttps://zbmath.org/1527.940272024-02-28T19:32:02.718555Z"Chan, Kwan Yin"https://zbmath.org/authors/?q=ai:chan.kwan-yin"Yuen, Tsz Hon"https://zbmath.org/authors/?q=ai:yuen.tsz-honSummary: An attribute-based anonymous credential authenticates user attributes while preserving the anonymity of the user. Most attribute-based anonymous credential schemes are designed specifically for either multi-use or single-use. In this paper, we propose a unified attribute-based anonymous credential system, in which users always obtain the same format of credential from the issuer. The user can choose to use it for an efficient multi-use or single-use show proof. It is a more user-centric approach than existing schemes.
Technically, we propose an interactive credential issuance protocol using two-party computation with additively homomorphic encryption, while keeping the security properties of impersonation resilience, anonymity, and unlinkability. Apart from the interactive protocol, we further design show proofs for efficient single-use credentials which maintain user anonymity.
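The additively homomorphic building block mentioned above can be instantiated, for example, with Paillier encryption. The sketch below is one standard choice made for illustration (the abstract does not name a scheme), with toy primes far too small for security: multiplying two ciphertexts yields an encryption of the sum of the plaintexts.

```python
import random
from math import gcd

# Toy Paillier parameters; real deployments use primes of >= 1536 bits.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1)        # phi(n), usable here in place of Carmichael's lambda
mu = pow(lam, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    # with generator g = n + 1 we have g^m = 1 + m*n (mod n^2)
    return (1 + m * n) % n2 * pow(r, n, n2) % n2

def decrypt(c):
    # L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) // n
    return (pow(c, lam, n2) - 1) // n * mu % n

# additive homomorphism: product of ciphertexts decrypts to sum of plaintexts
a, b = encrypt(12), encrypt(30)
print(decrypt(a * b % n2))
```

This is exactly the kind of operation an issuer can perform on an encrypted user contribution without learning it.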
For the entire collection see [Zbl 1515.68028].Simple, fast, efficient, and tightly-secure non-malleable non-interactive timed commitmentshttps://zbmath.org/1527.940282024-02-28T19:32:02.718555Z"Chvojka, Peter"https://zbmath.org/authors/?q=ai:chvojka.peter"Jager, Tibor"https://zbmath.org/authors/?q=ai:jager.tiborSummary: Timed commitment schemes, introduced by \textit{D. Boneh} and \textit{M. Naor} [Lect. Notes Comput. Sci. 1880, 236--254 (2000; Zbl 0989.94517)], can be used to achieve fairness in secure computation protocols in a simple and elegant way. The only known non-malleable construction in the standard model is due to \textit{J. Katz} et al. [ibid. 12552, 390--413 (2020; Zbl 1485.94098)]. This construction requires general-purpose zero knowledge proofs with specific properties, and it suffers from an inefficient commitment protocol, which requires the committing party to solve a computationally expensive puzzle.
We propose new constructions of non-malleable non-interactive timed commitments, which combine (an extension of) the Naor-Yung paradigm used to construct IND-CCA secure encryption with a non-interactive ZK proof for a simple algebraic language. This yields much simpler and more efficient non-malleable timed commitments in the standard model.
Furthermore, our constructions also compare favourably to known constructions of timed commitments in the random oracle model, as they achieve several further interesting properties that make the schemes very practical. This includes the possibility of using a homomorphism for the forced opening of multiple commitments in the sense of \textit{G. Malavolta} and \textit{S. A. K. Thyagarajan} [ibid. 11692, 620--649 (2019; Zbl 1456.94099)], and they are the first constructions to achieve public verifiability, which seems particularly useful to apply the homomorphism in practical applications.
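The forced-opening mechanism central to timed commitments can be illustrated with the classic repeated-squaring time-lock puzzle of Rivest, Shamir and Wagner, which also underlies the Boneh-Naor construction cited above. The sketch below uses hypothetical toy parameters: a party holding the factorization shortcut computes \(x^{2^T} \bmod n\) instantly, while anyone else must perform \(T\) sequential squarings.

```python
# Toy RSW-style time-lock puzzle (parameters are illustrative only).
p, q = 1009, 1013
n = p * q
phi = (p - 1) * (q - 1)   # the trapdoor; secret in a real deployment
T = 10_000                # number of sequential squarings ("time" parameter)
x = 5

# fast path (knows phi): reduce the exponent 2^T modulo phi
fast = pow(x, pow(2, T, phi), n)

# slow path (forced opening): T inherently sequential squarings
slow = x
for _ in range(T):
    slow = slow * slow % n

print(fast == slow)
```

Both paths agree because \(x^{2^T} \equiv x^{2^T \bmod \varphi(n)} \pmod{n}\) for \(\gcd(x, n) = 1\); without \(\varphi(n)\), no known shortcut beats the sequential squarings.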
For the entire collection see [Zbl 1524.94001].Efficient NIZK arguments with straight-line simulation and extractionhttps://zbmath.org/1527.940292024-02-28T19:32:02.718555Z"Ciampi, Michele"https://zbmath.org/authors/?q=ai:ciampi.michele"Visconti, Ivan"https://zbmath.org/authors/?q=ai:visconti.ivanSummary: Non-interactive zero-knowledge (NIZK) arguments allow a prover to convince a verifier of the truthfulness of an NP-statement by sending just one message, without disclosing any additional information. In several practical scenarios, the Fiat-Shamir transform is used to convert an efficient constant-round public-coin honest-verifier zero-knowledge proof system into an efficient NIZK argument system. This approach is provably secure in the random oracle model, but it crucially requires a programmable random oracle, and extraction works through rewinding. The works of \textit{Y. Lindell} [Lect. Notes Comput. Sci. 9014, 93--109 (2015; Zbl 1354.94038)] and \textit{M. Ciampi} et al. [ibid. 9563, 112--141 (2016; Zbl 1377.94044)] proposed efficient NIZK arguments with non-programmable random oracles along with a programmable common reference string.
In this work we show an efficient NIZK argument with straight-line simulation and extraction that relies on features that alone are insufficient to construct NIZK arguments (regardless of efficiency). More specifically, we consider the notion of quasi-polynomial time simulation proposed by \textit{R. Pass} [ibid. 2656, 160--176 (2003; Zbl 1037.68536)] and combine it with simulation and extraction with non-programmable random oracles, thus obtaining a NIZK argument of knowledge where neither the zero-knowledge simulator nor the argument-of-knowledge extractor needs to program the random oracle. Still, both the simulator and the extractor are straight-line. Our construction uses as a building block a modification of Fischlin's transform [\textit{M. Fischlin}, Lect. Notes Comput. Sci. 3621, 152--168 (2005; Zbl 1145.94467)] and combines it with the concept of dense puzzles introduced by \textit{F. Baldimtsi} et al. [ibid. 10032, 902--933 (2016; Zbl 1407.94079)]. We also argue that our NIZK argument system inherits the efficiency features of Fischlin's transform, which represents the main advantage of Fischlin's protocol over existing schemes.
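The Fiat-Shamir transform discussed above can be made concrete with its textbook application: turning the Schnorr identification protocol into a non-interactive proof of knowledge of a discrete logarithm by deriving the challenge from a hash. The subgroup parameters below are toy values chosen for illustration; any real instantiation uses cryptographically sized groups.

```python
import hashlib
import random

# p = 2q + 1 is a (toy) safe prime; g = 4 generates the order-q subgroup
p, q, g = 1019, 509, 4

def H(*parts):
    # random-oracle stand-in: hash the transcript down to a challenge in Z_q
    data = "|".join(str(x) for x in parts).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % q

def prove(x, y):
    # Schnorr proof of knowledge of x with y = g^x mod p,
    # made non-interactive via the Fiat-Shamir challenge c = H(g, y, t)
    k = random.randrange(1, q)
    t = pow(g, k, p)
    c = H(g, y, t)
    s = (k + c * x) % q
    return t, s

def verify(y, proof):
    t, s = proof
    c = H(g, y, t)
    return pow(g, s, p) == t * pow(y, c, p) % p

x = random.randrange(1, q)
y = pow(g, x, p)
proof = prove(x, y)
print(verify(y, proof))
```

Note that proving security of this one-message argument in the ROM uses exactly the programmability and rewinding that the paper above works to avoid.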
For the entire collection see [Zbl 1515.68028].Pseudorandom correlation functions from variable-density LPN, revisitedhttps://zbmath.org/1527.940302024-02-28T19:32:02.718555Z"Couteau, Geoffroy"https://zbmath.org/authors/?q=ai:couteau.geoffroy"Ducros, Clément"https://zbmath.org/authors/?q=ai:ducros.clementSummary: Pseudorandom correlation functions (PCFs), introduced in the work of \textit{E. Boyle} et al. [``Correlated pseudorandom functions from variable-density'', Preprint, \url{https://ia.cr/2020/1417}], allow two parties to locally generate, from short correlated keys, a near-unbounded amount of pseudorandom samples from a target correlation. PCFs are extremely appealing primitives in secure computation, as they confine all preprocessing phases of all future computations two parties could want to execute to a single short interaction with low communication and computation, followed solely by offline computations. Beyond introducing the notion, Boyle et al. [loc. cit.] gave a candidate construction, using a new variable-density variant of the learning parity with noise (LPN) assumption. Then, to provide support for this new assumption, the authors showed that it provably resists a large class of linear attacks, which captures in particular all known attacks on LPN.
In this work, we revisit the analysis of the VDLPN assumption. We make two key contributions:
\begin{itemize}
\item First, we observe that the analysis of Boyle et al. [loc. cit.] is purely asymptotic: their bounds do not lead to any concrete and efficient PCF instantiation that offers security guarantees. To improve this state of affairs, we combine a new variant of the VDLPN assumption with an entirely new, much tighter security analysis, which we further tighten using extensive computer simulations to optimize parameters. This way, we manage to obtain for the first time a set of provably usable parameters (under a simple combinatorial conjecture which is easy to verify experimentally), leading to a concretely efficient PCF resisting all linear tests.
\item Second, we identify a flaw in the security analysis of Boyle et al. [loc. cit.], which invalidates their proof that VDLPN resists linear attacks. Using several new non-trivial arguments, we repair the proof and fully demonstrate that VDLPN resists linear attacks; our new analysis is more involved than the original (flawed) analysis.
\end{itemize}
Our parameter set leads to PCFs with keys of around 3 MB allowing \(\sim 500\) evaluations per second on one core of a standard laptop for 110 bits of security; these numbers can be improved to 350 kB keys and \(\sim 3950\) evaluations/s using a more aggressive all-prefix variant. All numbers are quite tight: only within a factor 3 of the best bounds one could heuristically hope for.
For the entire collection see [Zbl 1524.94002].Decentralized multi-authority attribute-based inner-product FE: large universe and unboundedhttps://zbmath.org/1527.940312024-02-28T19:32:02.718555Z"Datta, Pratish"https://zbmath.org/authors/?q=ai:datta.pratish"Pal, Tapas"https://zbmath.org/authors/?q=ai:pal.tapasSummary: This paper presents the first decentralized multi-authority attribute-based inner product functional encryption \((\mathsf{MA}\text{-}\mathsf{ABIPFE})\) schemes supporting vectors of a priori unbounded lengths. The notion of \(\mathsf{AB}\text{-}\mathsf{IPFE}\), introduced by \textit{M. Abdalla} et al., [Lect. Notes Comput. Sci. 12493, 467--497 (2020; Zbl 1511.94033)], combines the access control functionality of attribute-based encryption \((\mathsf{ABE})\) with the possibility of evaluating linear functions on encrypted data. A decentralized \(\mathsf{MA}\text{-}\mathsf{ABIPFE}\) defined by \textit{S. Agrawal} et al. [ibid. 12828, 208--238 (2021; Zbl 1489.94082)] essentially enhances the \(\mathsf{ABE}\) component of \(\mathsf{AB}\text{-}\mathsf{IPFE}\) to the decentralized multi-authority setting where several authorities can independently issue user keys involving attributes under their control. In \(\mathsf{MA}\text{-}\mathsf{ABIPFE}\) for unbounded vectors \((\mathsf{MA}\text{-}\mathsf{ABUIPFE})\), encryptors can encrypt vectors of arbitrary length under access policies of their choice whereas authorities can issue secret keys to users involving attributes under their control and vectors of arbitrary lengths. Decryption works in the same way as for \(\mathsf{MA}\text{-}\mathsf{ABIPFE}\) provided the lengths of the vectors within the ciphertext and secret keys match.
We present two \(\mathsf{MA}\text{-}\mathsf{ABUIPFE}\) schemes supporting access policies realizable by linear secret sharing schemes \((\mathsf{LSSS})\), in significantly faster prime-order bilinear groups, under decisional assumptions in the target group, which are known to be weaker than their source-group counterparts. The proposed schemes demonstrate different trade-offs between versatility and underlying assumptions. The first scheme allows each authority to control a bounded number of attributes and is proven secure under the well-studied decisional bilinear Diffie-Hellman \((\mathsf{DBDH})\) assumption. On the other hand, the second scheme allows authorities to control exponentially many attributes and attributes are not required to be enumerated at setup, that is, it supports a large attribute universe, and is proven secure under a non-interactive \(q\)-type variant of the \(\mathsf{DBDH}\) assumption called \(L\)-\(\mathsf{DBDH}\), similar to what was used in prior large-universe multi-authority \(\mathsf{ABE}\) \((\mathsf{MA}\text{-}\mathsf{ABE})\) constructions.
When compared with the only known \(\mathsf{MA}\text{-}\mathsf{ABIPFE}\) scheme due to Agrawal et al. [loc. cit.], our schemes offer significantly higher efficiency together with greater flexibility and security under weaker assumptions. Moreover, unlike Agrawal et al., our schemes can support the appearance of the same attributes within an access policy arbitrarily many times. Since efficiency and practicality are the prime focus of this work, we prove the security of our constructions in the random oracle model against static adversaries, similar to prior works on \(\mathsf{MA}\text{-}\mathsf{ABE}\) with similar motivations and assumptions. On the technical side, we extend the unbounded \(\mathsf{IPFE}\) techniques of \textit{E. Dufour-Sans} and \textit{D. Pointcheval} [ibid. 11464, 426--441 (2019; Zbl 1458.94231)] to the context of \(\mathsf{MA}\text{-}\mathsf{ABUIPFE}\) by introducing a novel hash-decomposition technique.
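The inner-product functionality at the heart of \(\mathsf{IPFE}\) can be sketched with the basic single-authority DDH-based scheme in the style of Abdalla et al.: a key for a vector \(y\) lets its holder learn only \(\langle x, y\rangle\) from an encryption of \(x\). This is purely illustrative (a toy group, no access control, not the decentralized unbounded scheme of the paper).

```python
import random

# order-q subgroup of Z_p^* (toy sizes; g = 4 has order q)
p, q, g = 1019, 509, 4
n = 3                                            # vector length

s = [random.randrange(q) for _ in range(n)]      # master secret key
h = [pow(g, si, p) for si in s]                  # master public key

def encrypt(x):
    r = random.randrange(1, q)
    ct0 = pow(g, r, p)
    return ct0, [pow(h[i], r, p) * pow(g, x[i], p) % p for i in range(n)]

def keygen(y):
    # functional key for y is just <s, y> mod q
    return sum(s[i] * y[i] for i in range(n)) % q

def decrypt(ct, y, sk_y, bound=100):
    ct0, cts = ct
    num = 1
    for i in range(n):
        num = num * pow(cts[i], y[i], p) % p
    val = num * pow(pow(ct0, sk_y, p), -1, p) % p   # equals g^<x, y>
    for m in range(bound):                          # brute-force small dlog
        if pow(g, m, p) == val:
            return m
    return None

x, y = [1, 2, 3], [3, 2, 1]
print(decrypt(encrypt(x), y, keygen(y)))
```

Decryption cancels the masking term \(g^{r\langle s,y\rangle}\), leaving \(g^{\langle x,y\rangle}\), which is solvable by brute force only because the result is small.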
For the entire collection see [Zbl 1524.94001].BLEACH: cleaning errors in discrete computations over CKKShttps://zbmath.org/1527.940322024-02-28T19:32:02.718555Z"Drucker, Nir"https://zbmath.org/authors/?q=ai:drucker.nir"Moshkowich, Guy"https://zbmath.org/authors/?q=ai:moshkowich.guy"Pelleg, Tomer"https://zbmath.org/authors/?q=ai:pelleg.tomer"Shaul, Hayim"https://zbmath.org/authors/?q=ai:shaul.hayimSummary: Approximated homomorphic encryption (HE) schemes such as CKKS are commonly used to perform computations over encrypted real numbers. It is commonly assumed that these schemes are not ``exact'' and thus they cannot execute circuits with unbounded depth over discrete sets, such as binary or integer numbers, without error overflows. These circuits are usually executed using BGV and B/FV for integers and TFHE for binary numbers. This artificial separation can cause users to favor one scheme over another for a given computation, without even exploring other, perhaps better, options. We show that by treating step functions as ``clean-up'' utilities and by leveraging the SIMD capabilities of CKKS, we can extend the homomorphic encryption toolbox with efficient tools. These tools use CKKS to run unbounded circuits that operate over binary and small-integer elements and even combine these circuits with fixed-point real numbers circuits. We demonstrate the results using the Turing-complete Conway's Game of Life. In our evaluation, for boards of size \(256 \times 256\), these tools achieved orders of magnitude faster latency than previous implementations using other HE schemes. 
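The "clean-up" idea for noisy encrypted bits can be illustrated with a low-degree polynomial whose attracting fixed points are 0 and 1, such as \(h(x) = 3x^2 - 2x^3\); this particular polynomial is one standard choice in the HE literature and is assumed here for illustration. The sketch evaluates it on plaintext floats to show how a few iterations squash CKKS-style approximation noise on binary values (no actual homomorphic evaluation is performed).

```python
def cleanup(x, rounds=5):
    # iterate h(x) = 3x^2 - 2x^3: fixed points 0 and 1 attract,
    # so values near a bit are pushed back onto that bit
    for _ in range(rounds):
        x = 3 * x * x - 2 * x * x * x
    return x

noisy_zero, noisy_one = 0.07, 0.93   # bits perturbed by accumulated noise
print(cleanup(noisy_zero), cleanup(noisy_one))
```

Near a bit \(b\) the error contracts quadratically (for \(x = 1 - e\), \(h(x) \approx 1 - 3e^2\)), which is why a handful of cheap low-degree evaluations suffice to keep a long discrete computation from overflowing with error.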
We argue and demonstrate that for large enough real-world inputs, performing binary circuits over CKKS, while considering it as an ``exact'' scheme, results in comparable or even better performance than using other schemes tailored for similar inputs.Hull attacks on the lattice isomorphism problemhttps://zbmath.org/1527.940332024-02-28T19:32:02.718555Z"Ducas, Léo"https://zbmath.org/authors/?q=ai:ducas.leo"Gibbons, Shane"https://zbmath.org/authors/?q=ai:gibbons.shaneSummary: The lattice isomorphism problem (LIP) asks one to find an isometry between two lattices. It has recently been proposed as a foundation for cryptography in two independent works [\textit{L. Ducas} and \textit{W. van Woerden}, Lect. Notes Comput. Sci. 13277, 643--673 (2022; Zbl 1502.94033); \textit{H. Bennett} et al., Lect. Notes Comput. Sci. 14008, 252--281 (2023; Zbl 07774569)]. This problem is the lattice variant of the code equivalence problem, on which the notion of the hull of a code can lead to devastating attacks.
In this work we study the cryptanalytic role of an adaptation of the hull to the lattice setting, namely, the \(s\)-hull. We first show that the \(s\)-hull is not helpful for creating an arithmetic distinguisher. More specifically, the genus of the \(s\)-hull can be efficiently predicted from \(s\) and the original genus and therefore carries no extra information.
However, we also show that the hull can be helpful for geometric attacks: for certain lattices the minimal distance of the hull is relatively smaller than that of the original lattice, and this can be exploited. The attack cost remains exponential, but the constant in the exponent is halved. This second result gives a counterexample to the general hardness conjecture of LIP proposed by Ducas and van Woerden [loc. cit.].
Our results suggest that one should be very careful about the geometry of hulls when instantiating LIP for cryptography. They also point to unimodular lattices as attractive options, as such lattices are equal to their duals and their hulls, leaving only the original lattice to an attacker. Remarkably, this is already the case in proposed instantiations, namely the trivial lattice \(\mathbb{Z}^n\) and the Barnes-Wall lattices.
For the entire collection see [Zbl 1524.94001].Generic models for group actionshttps://zbmath.org/1527.940342024-02-28T19:32:02.718555Z"Duman, Julien"https://zbmath.org/authors/?q=ai:duman.julien"Hartmann, Dominik"https://zbmath.org/authors/?q=ai:hartmann.dominik"Kiltz, Eike"https://zbmath.org/authors/?q=ai:kiltz.eike"Kunzweiler, Sabrina"https://zbmath.org/authors/?q=ai:kunzweiler.sabrina"Lehmann, Jonas"https://zbmath.org/authors/?q=ai:lehmann.jonas"Riepel, Doreen"https://zbmath.org/authors/?q=ai:riepel.doreenSummary: We define the Generic Group Action Model (GGAM), an adaptation of the Generic Group Model to the setting of group actions (such as CSIDH). Compared to a previously proposed definition by \textit{H. Montgomery} and \textit{M. Zhandry} [Lect. Notes Comput. Sci. 13791, 3--32 (2023; Zbl 1519.94175)], our GGAM more accurately abstracts the security properties of group actions.
We are able to prove information-theoretic lower bounds in the GGAM for the discrete logarithm assumption, as well as for non-standard assumptions recently introduced in the setting of threshold and identification schemes on group actions. Unfortunately, in a natural quantum version of the GGAM, the discrete logarithm assumption does not hold.
For this reason, we also introduce the weaker Quantum Algebraic Group Action Model (QAGAM), where every set element (in superposition) output by an adversary is required to have an explicit representation relative to known elements. In contrast to the Quantum Generic Group Action Model, in the QAGAM we are able to analyze the hardness of group action assumptions: we prove (among other things) the equivalence between the discrete logarithm assumption and non-standard assumptions recently introduced in the setting of QROM security for Password-Authenticated Key Exchange, Non-Interactive Key Exchange, and Public-Key Encryption.
For the entire collection see [Zbl 1524.94001].A thorough treatment of highly-efficient NTRU instantiationshttps://zbmath.org/1527.940352024-02-28T19:32:02.718555Z"Duman, Julien"https://zbmath.org/authors/?q=ai:duman.julien"Hövelmanns, Kathrin"https://zbmath.org/authors/?q=ai:hovelmanns.kathrin"Kiltz, Eike"https://zbmath.org/authors/?q=ai:kiltz.eike"Lyubashevsky, Vadim"https://zbmath.org/authors/?q=ai:lyubashevsky.vadim"Seiler, Gregor"https://zbmath.org/authors/?q=ai:seiler.gregor"Unruh, Dominique"https://zbmath.org/authors/?q=ai:unruh.dominiqueSummary: Cryptography based on the hardness of lattice problems over polynomial rings currently provides the most practical solution for public key encryption in the quantum era. Indeed, three of the four schemes chosen by NIST in the recently-concluded post-quantum standardization effort for encryption and signature schemes are based on the hardness of these problems. While the first encryption scheme utilizing properties of polynomial rings was NTRU [\textit{J. Hoffstein} et al., Lect. Notes Comput. Sci. 1423, 267--288 (1998; Zbl 1067.94538)], the scheme that NIST chose for public key encryption (CRYSTALS-Kyber) is based on the hardness of the somewhat-related Module-LWE problem. One of the reasons for Kyber's selection was the fact that it is noticeably faster than NTRU and a little more compact. And indeed, the practical NTRU encryption schemes in the literature generally lag their Ring/Module-LWE counterparts in either compactness or speed, or both.
In this paper, we put the efficiency of NTRU-based schemes on equal (even slightly better, actually) footing with their Ring/Module-LWE counterparts. We provide several instantiations and transformations, with security given in the ROM and the QROM, that are on par, compactness-wise, with their counterparts based on Ring/Module-LWE. Performance-wise, the NTRU schemes instantiated in this paper over NTT-friendly rings of the form \(\mathbb{Z}_q[X]/(X^d-X^{d/2}+1)\) are the fastest of all public key encryption schemes, whether quantum-safe or not. When compared to the NIST finalist NTRU-HRSS-701, our scheme is \(15\%\) more compact and has a 15X improvement in the round-trip time of ephemeral key exchange, with key generation being 35X faster, encapsulation being 6X faster, and decapsulation enjoying a 9X speedup.
For the entire collection see [Zbl 1524.94001].A lightweight identification protocol based on latticeshttps://zbmath.org/1527.940362024-02-28T19:32:02.718555Z"Düzlü, Samed"https://zbmath.org/authors/?q=ai:duzlu.samed"Krämer, Juliane"https://zbmath.org/authors/?q=ai:kramer.juliane"Pöppelmann, Thomas"https://zbmath.org/authors/?q=ai:poppelmann.thomas"Struck, Patrick"https://zbmath.org/authors/?q=ai:struck.patrickSummary: In this work we present a lightweight lattice-based identification protocol based on the CPA-secure public key encryption scheme \textsf{Kyber}. It is designed as a replacement for existing classical ECC- or RSA-based identification protocols in IoT, smart card applications, or for device authentication. The proposed protocol is simple and efficient, and implementations are intended to be easy to harden against side-channel attacks. Compared to standard constructions for identification protocols based on lattice-based KEMs, our construction achieves this by avoiding the Fujisaki-Okamoto transform and its impact on implementation security.
Moreover, contrary to prior lattice-based identification protocols or standard constructions using signatures, our work does not require rejection sampling and can use more efficient parameters than signature schemes.
We provide a generic construction from CPA-secure public key encryption schemes to identification protocols and give a security proof of the protocol in the ROM. Moreover, we instantiate the generic construction with \textsf{Kyber}, for which we use the proposed parameter sets for NIST security levels I, III, and V. To show that the protocol is suitable for constrained devices, we implemented one selected parameter set on an ARM Cortex-M4 microcontroller. As the protocol is based on existing algorithms for \textsf{Kyber}, we make use of existing software components (e.g., fast NTT implementations) for our implementation.
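The folklore idea behind building identification from public key encryption is a challenge-response exchange: the verifier encrypts a random challenge under the prover's public key, and only the holder of the secret key can decrypt and echo it. The sketch below illustrates this idea with ElGamal over a toy group; it is emphatically not the paper's construction (which is built from \textsf{Kyber} and proven secure in the ROM), and the naive echo protocol shown here is vulnerable to active attackers who use the prover as a decryption oracle.

```python
import random

# toy order-q subgroup of Z_p^*
p, q, g = 1019, 509, 4

# prover's long-term key pair
sk = random.randrange(1, q)
pk = pow(g, sk, p)

def elgamal_enc(pk, m):
    k = random.randrange(1, q)
    return pow(g, k, p), m * pow(pk, k, p) % p

def elgamal_dec(sk, ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, sk, p), -1, p) % p

# verifier: encrypt a fresh random challenge to the prover
challenge = random.randrange(1, p)
ct = elgamal_enc(pk, challenge)

# prover: decrypt and echo; a party without sk cannot answer
response = elgamal_dec(sk, ct)
print(response == challenge)
```

Hardening this idea against active adversaries is precisely where constructions usually reach for the Fujisaki-Okamoto transform, whose avoidance is the design goal of the paper above.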
For the entire collection see [Zbl 1524.94001].Rectangular, range, and restricted AONTs: three generalizations of all-or-nothing transformshttps://zbmath.org/1527.940372024-02-28T19:32:02.718555Z"Esfahani, Navid Nasr"https://zbmath.org/authors/?q=ai:esfahani.navid-nasr"Stinson, Douglas R."https://zbmath.org/authors/?q=ai:stinson.douglas-rSummary: All-or-nothing transforms (AONTs) were originally defined by \textit{R. L. Rivest} [Lect. Notes Comput. Sci. 1267, 210--218 (1997; Zbl 1385.94067)] as bijections from \(s\) input blocks to \(s\) output blocks such that no information can be obtained about any input block in the absence of any output block. Numerous generalizations and extensions of all-or-nothing transforms have been discussed in recent years, many of which are motivated by diverse applications in cryptography, information security, secure distributed storage, etc. In particular, \( t \)-AONTs, in which no information can be obtained about any \(t\) input blocks in the absence of any \(t\) output blocks, have received considerable study. In this paper, we study three generalizations of AONTs that are motivated by applications due to \textit{H. Pham} et al. [``Integrating classical preprocessing into an optical encryption scheme'', Entropy 21, No. 9, Article No. 872, 14 p. (2019; \url{doi:10.3390/e21090872})] and \textit{P. F. Oliveira} et al. [``Coding for trusted storage in untrusted networks'', IEEE Trans. Inf. Forensics Secur. 7, No. 6, 1890--1899 (2012; \url{doi:10.1109/TIFS.2012.2217331.12})]. We term these generalizations rectangular, range, and restricted AONTs. Briefly, in a rectangular AONT, the number of outputs is greater than the number of inputs. A range AONT satisfies the \(t \)-AONT property for a range of consecutive values of \(t \). Finally, in a restricted AONT, the unknown outputs are assumed to occur within a specified set of ``secure'' output blocks. 
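The defining AONT property can be checked exhaustively in a toy setting: for a linear transform \(y = Mx\) over a small field, withholding any single output block must leave every input block completely undetermined. The \(2 \times 2\) matrix over \(\mathrm{GF}(7)\) below is a hypothetical example chosen for illustration.

```python
# Brute-force check of the all-or-nothing property for a linear map
# y = M x over GF(p) with s = 2 blocks (toy field; no real security).
p = 7

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) % p for i in range(2)]

def is_aont(M):
    for missing in range(2):            # which output block is withheld
        for i in range(2):              # which input block we try to learn
            seen = {}                   # visible output value -> possible x_i
            for x0 in range(p):
                for x1 in range(p):
                    y = apply(M, [x0, x1])
                    visible = y[1 - missing]
                    seen.setdefault(visible, set()).add((x0, x1)[i])
            # every value of x_i must remain possible for each visible view
            if any(len(vals) != p for vals in seen.values()):
                return False
    return True

# M is invertible over GF(7) and its inverse has no zero entry -> AONT
print(is_aont([[1, 1], [1, 2]]))
# the identity map trivially leaks x_1 through y_1 -> not an AONT
print(is_aont([[1, 0], [0, 1]]))
```

The same exhaustive check generalizes directly to the \(t\)-AONT condition by withholding \(t\) outputs and testing \(t\)-tuples of inputs.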
We study existence and non-existence and provide examples and constructions for these generalizations. We also demonstrate interesting connections with combinatorial structures such as orthogonal arrays, split orthogonal arrays, MDS codes and difference matrices.SCALLOP: scaling the CSI-FiShhttps://zbmath.org/1527.940382024-02-28T19:32:02.718555Z"Feo, Luca De"https://zbmath.org/authors/?q=ai:de-feo.luca"Fouotsa, Tako Boris"https://zbmath.org/authors/?q=ai:fouotsa.tako-boris"Kutas, Péter"https://zbmath.org/authors/?q=ai:kutas.peter"Leroux, Antonin"https://zbmath.org/authors/?q=ai:leroux.antonin"Merz, Simon-Philipp"https://zbmath.org/authors/?q=ai:merz.simon-philipp"Panny, Lorenz"https://zbmath.org/authors/?q=ai:panny.lorenz"Wesolowski, Benjamin"https://zbmath.org/authors/?q=ai:wesolowski.benjaminSummary: We present SCALLOP: SCALable isogeny action based on oriented supersingular curves with prime conductor, a new group action based on isogenies of supersingular curves. Similarly to CSIDH and OSIDH, we use the group action of an imaginary quadratic order's class group on the set of oriented supersingular curves. Compared to CSIDH, the main benefit of our construction is that it is easy to compute the class-group structure; this data is required to uniquely represent -- and efficiently act by -- arbitrary group elements, which is a requirement in, e.g., the CSI-FiSh signature scheme by \textit{W. Beullens} et al. [Lect. Notes Comput. Sci. 11921, 227--247 (2019; Zbl 1456.94050)]. The index-calculus algorithm used in CSI-FiSh to compute the class-group structure has complexity L(1/2), ruling out class groups much larger than CSIDH-512, a limitation that is particularly problematic in light of the ongoing debate regarding the quantum security of cryptographic group actions.
Hoping to solve this issue, we consider the class group of a quadratic order of large prime conductor inside an imaginary quadratic field of small discriminant. This family of quadratic orders lets us easily determine the size of the class group and, by carefully choosing the conductor, even exercise significant control over it -- in particular supporting highly smooth choices. Although evaluating the resulting group action still has subexponential asymptotic complexity, a careful choice of parameters leads to a practical speedup that we demonstrate in practice for a security level equivalent to CSIDH-1024, a parameter currently firmly out of reach of index-calculus-based methods. However, our implementation takes 35 s (resp. 12.5 min) for a single group-action evaluation at a CSIDH-512-equivalent (resp. CSIDH-1024-equivalent) security level, showing that, while feasible, the SCALLOP group action does not yet achieve realistically usable performance.
For the entire collection see [Zbl 1524.94001].Rinocchio: SNARKs for ring arithmetichttps://zbmath.org/1527.940392024-02-28T19:32:02.718555Z"Ganesh, Chaya"https://zbmath.org/authors/?q=ai:ganesh.chaya"Nitulescu, Anca"https://zbmath.org/authors/?q=ai:nitulescu.anca"Soria-Vazquez, Eduardo"https://zbmath.org/authors/?q=ai:soria-vazquez.eduardoSummary: Succinct non-interactive arguments of knowledge (SNARKs) enable non-interactive efficient verification of NP computations and admit short proofs. However, all current SNARK constructions assume that the statements to be proven can be efficiently represented as either Boolean or arithmetic circuits over finite fields. For most constructions, the choice of the prime field \(\mathbb{F}_p\) is limited by the existence of groups of matching order for which secure bilinear maps exist. In this work, we overcome such restrictions and enable verifying computations over rings. We construct the first designated-verifier SNARK for statements which are represented as circuits over a broader kind of commutative rings. Our contribution is threefold:
\begin{itemize}
\item[1.] We first introduce Quadratic Ring Programs (QRPs) as a characterization of NP where the arithmetic is over a ring.
\item[2.] Second, inspired by the framework in [\textit{R. Gennaro} et al., Lect. Notes Comput. Sci. 7881, 626--645 (2013; Zbl 1300.94056)], we design SNARKs over rings in a modular way. We generalize pre-existing assumptions employed in field-restricted SNARKs to encoding schemes over rings. As our encoding notion is generic in the choice of the ring, it is amenable to different settings.
\item[3.] Finally, we propose two applications for our SNARKs.
\item[--] Our first application is verifiable computation over encrypted data, specifically for evaluations of Ring-LWE-based homomorphic encryption schemes.
\item[--] In the second one, we use Rinocchio to naturally prove statements about circuits over, e.g., \(\mathbb{Z}_{2^{64}}\), which closely matches real-life computer architectures such as standard CPUs.
\end{itemize}Provable security of HADES structurehttps://zbmath.org/1527.940402024-02-28T19:32:02.718555Z"Gao, Yuan"https://zbmath.org/authors/?q=ai:gao.yuan|gao.yuan.1"Guo, Chun"https://zbmath.org/authors/?q=ai:guo.chunSummary: The HADES design strategy combines the classical SPN construction with the Partial SPN (PSPN) construction, in which at every encryption round the non-linear layer is applied to only a part of the state. In a HADES design, a middle layer that consists of PSPN rounds is surrounded by outer layers of SPN rounds. The security arguments of HADES with respect to statistical attacks usually use only the SPN rounds, disregarding the PSPN rounds. There are few results on the provable security of HADES; research has instead focused on its design (the MDS matrix) and on attacks (differential and linear). In this paper, we show that four-round HADES, in which two middle rounds consisting of partial \(S\)-box layers are surrounded by two outer SPN rounds, achieves birthday-bound security, provided that the linear layer is an MDS matrix and the \(S\)-boxes and round keys are independent.
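The four-round full/partial/partial/full shape analysed above can be sketched as a toy permutation on four 4-bit words. Every component here is a placeholder for illustration: the S-box is borrowed from PRESENT, and the XOR-based linear layer is merely invertible, not the MDS matrix a real HADES instance requires.

```python
from itertools import product

# 4-bit S-box (the PRESENT S-box, used here purely as a stand-in)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def linear_layer(state):
    # invertible circulant mix over GF(2); NOT MDS, illustrative only
    return [state[i] ^ state[(i + 1) % 4] ^ state[(i + 2) % 4] for i in range(4)]

def full_round(state, rk):
    state = [SBOX[x] for x in state]           # S-box on every word (SPN)
    return [s ^ k for s, k in zip(linear_layer(state), rk)]

def partial_round(state, rk):
    state = [SBOX[state[0]]] + state[1:]       # S-box on one word only (PSPN)
    return [s ^ k for s, k in zip(linear_layer(state), rk)]

RK = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 0]]

def hades_permutation(state, round_keys=RK):
    # two middle partial rounds sandwiched by two outer full rounds
    state = full_round(state, round_keys[0])
    state = partial_round(state, round_keys[1])
    state = partial_round(state, round_keys[2])
    return full_round(state, round_keys[3])

# sanity check: the construction is a bijection on all 16^4 states
outputs = {tuple(hades_permutation(list(s))) for s in product(range(16), repeat=4)}
print(len(outputs) == 16 ** 4)
```

Since the S-box, the linear layer and the key additions are each invertible, the composition is a permutation, which the exhaustive check confirms.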
For the entire collection see [Zbl 1515.68028].Verifiable decryption in the headhttps://zbmath.org/1527.940412024-02-28T19:32:02.718555Z"Gjøsteen, Kristian"https://zbmath.org/authors/?q=ai:gjosteen.kristian"Haines, Thomas"https://zbmath.org/authors/?q=ai:haines.thomas-j"Müller, Johannes"https://zbmath.org/authors/?q=ai:muller.johannes|muller.johannes.2|muller.johannes-c"Rønne, Peter"https://zbmath.org/authors/?q=ai:ronne.peter-b"Silde, Tjerand"https://zbmath.org/authors/?q=ai:silde.tjerandSummary: In this work we present a new approach to verifiable decryption which converts a 2-party passively secure distributed decryption protocol into a 1-party proof of correct decryption. This leads to an efficient and simple verifiable decryption scheme for lattice-based cryptography, especially for large sets of ciphertexts; it has small size and lightweight computations, as we reduce the need for zero-knowledge proofs for each ciphertext. We believe the flexibility of the general technique is interesting and provides attractive trade-offs between complexity and security, in particular for the interactive variant with smaller soundness.
Finally, the protocol requires only very simple operations, making it easy to correctly and securely implement in practice. We suggest concrete parameters for our protocol and give a proof of concept implementation, showing that it is highly practical.
For the entire collection see [Zbl 1515.68044].Cryptanalysis of Ushakov-Shpilrain's authentication protocol based on the twisted conjugacy problemhttps://zbmath.org/1527.940422024-02-28T19:32:02.718555Z"Gornova, M. N."https://zbmath.org/authors/?q=ai:gornova.m-n"Kukina, E. G."https://zbmath.org/authors/?q=ai:kukina.e-g"Romankov, V. A."https://zbmath.org/authors/?q=ai:romankov.vitaly-aSummary: We give a cryptanalysis of Ushakov-Shpilrain's authentication protocol based on the twisted conjugacy problem for a pair of endomorphisms on the semigroup of all \(2\times2\) matrices over the ring of truncated one-variable polynomials over the field \(\mathbb{F}_2\). It is shown that the private key of the protocol can be computed by solving a system of linear equations over \(\mathbb{F}_2\). We present a theoretical estimate of the complexity of this cryptanalysis and describe practical results obtained in a computer experiment. It is shown that the protocol is theoretically and practically vulnerable.Truncated differential properties of the diagonal set of inputs for 5-round AEShttps://zbmath.org/1527.940432024-02-28T19:32:02.718555Z"Grassi, Lorenzo"https://zbmath.org/authors/?q=ai:grassi.lorenzo"Rechberger, Christian"https://zbmath.org/authors/?q=ai:rechberger.christianSummary: In the last couple of years, a new wave of results appeared, proposing and exploiting new properties of round-reduced AES. In this paper we survey and combine some of these results (namely, the multiple-of-\(n\) property and the mixture differential cryptanalysis) in a systematic way in order to answer more general questions regarding the probability distribution of encrypted diagonal sets. This allows us to analyze this special set of inputs and to report new properties regarding the probability distribution of the number of different pairs of corresponding ciphertexts that are equal in certain anti-diagonal(s) after 5 rounds.
An immediate corollary of the multiple-of-8 property is that the variance of such a distribution can be shown to be higher than for a random permutation. Surprisingly, the mean of the distribution also differs significantly from that of a random permutation, something that cannot be explained by the multiple-of-8 property. We propose a theoretical explanation of this under an APN-like assumption on an S-box that closely resembles the AES S-box. By combining the multiple-of-8 property, the mixture differential approach, and the results just mentioned about the mean and the variance, we are finally able to formulate the probability distribution of the diagonal set after 5-round AES as a sum of independent binomial distributions.
For the entire collection see [Zbl 1515.68044].Key structures: improved related-key boomerang attack against the full AES-256https://zbmath.org/1527.940442024-02-28T19:32:02.718555Z"Guo, Jian"https://zbmath.org/authors/?q=ai:guo.jian"Song, Ling"https://zbmath.org/authors/?q=ai:song.ling"Wang, Haoyang"https://zbmath.org/authors/?q=ai:wang.haoyangSummary: This paper introduces structures of keys in the related-key attack setting. While the idea of structures has long been used in key-recovery attacks against block ciphers to enjoy the birthday effect, it had not been applied to key material, because a key structure results in uncontrolled differences in the key and hence affects the validity or probabilities of the differential trails. We apply this simple idea to improve the related-key boomerang attack against \textsf{AES}-256 by \textit{A. Biryukov} et al. [Lect. Notes Comput. Sci. 5677, 231--249 (2009; Zbl 1252.94051)]. Surprisingly, it turns out to be effective, i.e., both data and time complexities are reduced by a factor of about \(2^8\), to \(2^{92}\) and \(2^{91}\) respectively, at the cost of the number of required keys increasing from 4 to \(2^{19}\). There exist some tradeoffs between the data/time complexity and the number of keys. To the best of our knowledge, this is the first essential improvement of the attack against the full \textsf{AES}-256 since 2009. It will be interesting to see if the structure technique can be applied to other \textsf{AES}-like block ciphers, and to tweaks rather than keys of tweakable block ciphers, so that the number of keys required by the attack is not affected.
For the entire collection see [Zbl 1515.68044].Cryptanalysis and repair of a Gabidulin code based cryptosystem from ACISP 2018https://zbmath.org/1527.940452024-02-28T19:32:02.718555Z"Guo, Wenshuo"https://zbmath.org/authors/?q=ai:guo.wenshuo"Fu, Fang-Wei"https://zbmath.org/authors/?q=ai:fu.fangweiSummary: This paper presents a key recovery attack on a rank metric based cryptosystem proposed by \textit{T. S. C. Lau} and \textit{C. H. Tan} [Lect. Notes Comput. Sci. 10946, 750--758 (2018; Zbl 1444.94082)], which uses Gabidulin codes as the underlying decodable code. This attack is shown to cost polynomial time and therefore completely breaks the cryptosystem. Specifically, we convert the problem of recovering the private key into solving a multivariate linear system over the base field. We then present a simple repair for this scheme, which is shown to require exponential complexity for the proposed attack. Additionally, we apply this attack to cryptanalyze another Gabidulin code based cryptosystem proposed by \textit{P. Loidreau} [Lect. Notes Comput. Sci. 10346, 3--17 (2017; Zbl 1437.94070)], and improve \textit{P. Loidreau}'s result in a talk at the CBCrypto 2021 international workshop [``Analysis of a rank metric codes based encryption scheme'', talk held at 9th international workshop CBCrypto 2021, Munich, Germany, June 21--22, 2021].
For the entire collection see [Zbl 1515.68044].\texttt{POLKA}: towards leakage-resistant post-quantum CCA-secure public key encryptionhttps://zbmath.org/1527.940462024-02-28T19:32:02.718555Z"Hoffmann, Clément"https://zbmath.org/authors/?q=ai:hoffmann.clement"Libert, Benoît"https://zbmath.org/authors/?q=ai:libert.benoit"Momin, Charles"https://zbmath.org/authors/?q=ai:momin.charles"Peters, Thomas"https://zbmath.org/authors/?q=ai:peters.thomas-j|peters.thomas-d"Standaert, François-Xavier"https://zbmath.org/authors/?q=ai:standaert.francois-xavierSummary: As for any cryptographic algorithm, the deployment of post-quantum CCA-secure public key encryption schemes may come with the need to be protected against side-channel attacks. For existing post-quantum schemes that have not been developed with leakage in mind, recent results showed that the cost of these protections can make their implementations more expensive by orders of magnitude. In this paper, we describe a new design, coined \texttt{POLKA}, that is specifically tailored to reduce this cost. It leverages various ingredients in order to enable efficient side-channel protected implementations such as: (i) the rigidity property (which intuitively means that the de-randomized encryption and decryption are injective functions) to avoid the very leaky re-encryption step of the Fujisaki-Okamoto transform, (ii) the randomization of the decryption thanks to the incorporation of a dummy ciphertext, removing the adversary's control of its intermediate computations and making these computations ephemeral, (iii) key-homomorphic computations that can be masked against side-channel attacks with overheads that scale linearly in the number of shares, (iv) hard physical learning problems to argue about the security of some critical unmasked operations. Furthermore, we use an explicit rejection mechanism (returning an error symbol for invalid ciphertexts) to avoid the additional leakage caused by implicit rejection. 
As a result, the operations of \texttt{POLKA} can be protected against leakage in a cheaper way than state-of-the-art designs, opening the way towards schemes that are both quantum-safe and leakage-resistant.
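Ingredient (iii) above, masked computation with overheads linear in the number of shares, can be illustrated by plain additive secret sharing. A toy sketch with an arbitrary modulus and share count (illustrative only, not the POLKA implementation):

```python
import secrets

Q = 3329  # toy modulus, chosen arbitrarily for illustration

def share(x, d):
    """Split x into d additive shares mod Q; any d-1 shares look uniform."""
    shares = [secrets.randbelow(Q) for _ in range(d - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def unshare(shares):
    """Recombine shares into the masked value."""
    return sum(shares) % Q

def masked_add(xs, ys):
    """Key-homomorphic addition: share-wise, cost linear in the share count."""
    return [(a + b) % Q for a, b in zip(xs, ys)]

def masked_scale(xs, c):
    """Multiplication by a public constant, again share-wise."""
    return [(c * a) % Q for a in xs]
```

Recombining `masked_add(share(x, d), share(y, d))` yields \((x + y) \bmod Q\) without any single share ever holding \(x\) or \(y\) in the clear, which is why the masking overhead scales linearly in the number of shares for such linear operations.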
For the entire collection see [Zbl 1524.94001].Beyond the Csiszár-Körner bound: best-possible wiretap coding via obfuscationhttps://zbmath.org/1527.940472024-02-28T19:32:02.718555Z"Ishai, Yuval"https://zbmath.org/authors/?q=ai:ishai.yuval"Korb, Alexis"https://zbmath.org/authors/?q=ai:korb.alexis"Lou, Paul"https://zbmath.org/authors/?q=ai:lou.paul"Sahai, Amit"https://zbmath.org/authors/?q=ai:sahai.amitSummary: A wiretap coding scheme enables Alice to reliably communicate a message \(m\) to an honest Bob by sending an encoding \(c\) over a noisy channel \textsf{ChB}, while at the same time hiding \(m\) from Eve who receives \(c\) over another noisy channel \textsf{ChE}. Wiretap coding is clearly impossible when \textsf{ChB} is a degraded version of \textsf{ChE}, in the sense that the output of \textsf{ChB} can be simulated using only the output of \textsf{ChE}. A classic work of \textit{I. Csiszar} and \textit{J. Körner} [IEEE Trans. Inf. Theory 24, 339--348 (1978; Zbl 0382.94017)] shows that the converse does not hold. This follows from their full characterization of the channel pairs (\textsf{ChB},\textsf{ChE}) that enable information-theoretic wiretap coding. In this work, we show that in fact the converse does hold when considering computational security; that is, wiretap coding against a computationally bounded Eve is possible if and only if \textsf{ChB} is not a degraded version of \textsf{ChE}. Our construction assumes the existence of virtual black-box obfuscation of specific classes of ``evasive'' functions that generalize fuzzy point functions and can be heuristically instantiated using indistinguishability obfuscation. Finally, our solution has the appealing feature of being universal in the sense that Alice's algorithm depends only on \textsf{ChB} and not on \textsf{ChE}.Discrete logarithm problem in finite dimensional algebras over fieldhttps://zbmath.org/1527.940482024-02-28T19:32:02.718555Z"Katyshev, S. 
Yu."https://zbmath.org/authors/?q=ai:katyshev.sergey-yuSummary: The public key distribution procedure based on the Diffie-Hellman algorithm over a non-associative groupoid is studied. It is proved that the discrete logarithm problem in finite dimensional algebras is polynomially equivalent to the discrete logarithm problem in finite fields.Practical attacks on small private exponent RSA: new records and new insightshttps://zbmath.org/1527.940492024-02-28T19:32:02.718555Z"Li, Qiang"https://zbmath.org/authors/?q=ai:li.qiang.2|li.qiang.6|li.qiang.10|li.qiang.3|li.qiang.4|li.qiang.1|li.qiang.5"Zheng, Qun-xiong"https://zbmath.org/authors/?q=ai:zheng.qunxiong"Qi, Wen-feng"https://zbmath.org/authors/?q=ai:qi.wenfengSummary: As a typical representative of public key cryptosystems, RSA has attracted a great deal of cryptanalysis since its invention, among which a famous attack is the small private exponent attack. It is well-known that the best theoretical upper bound for the private exponent \(d\) that can be attacked is \(d \leq N^{0.292}\), where \(N\) is an RSA modulus. However, this bound may not be achieved in practical attacks, since the lattice constructed by the Coppersmith method may have too large a dimension and lattice-based reduction algorithms cannot work well in both efficiency and quality. In this paper, we propose a new practical attack based on the binary search for the most significant bits (MSBs) of prime divisors of \(N\) and the Herrmann-May attack [\textit{M. Herrmann} and \textit{A. May}, Lect. Notes Comput. Sci. 5912, 487--504 (2009; Zbl 1267.94065)]. The idea of binary search is inspired by the discovery of phenomena called ``multivalued-continuous phenomena'', which can significantly accelerate our attack. Together with several parameters carefully selected according to our exact and effective numerical estimations, we can improve the upper bound of \(d\) that can be practically achieved.
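The classical predecessor of these lattice attacks is Wiener's continued-fraction attack, which already recovers \(d\) whenever \(d < N^{1/4}/3\) and \(q < p < 2q\). A self-contained toy sketch (tiny illustrative parameters, not the paper's lattice-based method):

```python
from math import isqrt

def convergents(num, den):
    """Yield the convergents h/k of the continued fraction of num/den."""
    a = []
    n, d = num, den
    while d:
        a.append(n // d)
        n, d = d, n % d
    h0, h1, k0, k1 = 0, 1, 1, 0
    for q in a:
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener(e, N):
    """Try each convergent k/d of e/N as a candidate for ed - k*phi(N) = 1."""
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi_cand = (e * d - 1) // k
        s = N - phi_cand + 1          # equals p + q if the guess is right
        disc = s * s - 4 * N          # equals (p - q)^2 if the guess is right
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d
    return None

# toy key: close primes and a private exponent below N**0.25 / 3
p, q = 104729, 104723
N, phi = p * q, (p - 1) * (q - 1)
d = 101
e = pow(d, -1, phi)                   # public exponent matching the small d
```

Running `wiener(e, N)` on this toy key recovers `d` from the public key alone, which is exactly the weakness that the lattice attacks discussed above push beyond the \(N^{1/4}\) barrier.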
More specifically, without the binary search method, we successfully attack RSA with a 1024-bit modulus \(N\) when \(d \leq N^{0.285}\). Moreover, by our new method, we can implement a successful attack for a 1024-bit-modulus RSA when \(d \leq N^{0.292}\) and for a 2048-bit-modulus RSA when \(d \leq N^{0.287}\) in about a month. We believe our method can provide some inspiration for practical attacks on RSA with mainstream-size moduli.TIDE: a novel approach to constructing timed-release encryptionhttps://zbmath.org/1527.940502024-02-28T19:32:02.718555Z"Loe, Angelique Faye"https://zbmath.org/authors/?q=ai:loe.angelique-faye"Medley, Liam"https://zbmath.org/authors/?q=ai:medley.liam"O'Connell, Christian"https://zbmath.org/authors/?q=ai:oconnell.christian"Quaglia, Elizabeth A."https://zbmath.org/authors/?q=ai:quaglia.elizabeth-aSummary: \textit{P. Chvojka} et al. [Lect. Notes Comput. Sci. 12973, 64--85 (2021; Zbl 1498.94058)] introduced the idea of taking a time-lock puzzle and using its solution to generate the keys of a public key encryption (PKE) scheme. They use this to define a timed-release encryption (TRE) scheme, in which the secret key is encrypted `to the future' using a time-lock puzzle, whilst the public key is published. This allows multiple parties to encrypt a message to the public key of the PKE scheme. Then, once a solver has spent a prescribed length of time evaluating the time-lock puzzle, they obtain the secret key and hence can decrypt all of the messages.
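The time-lock puzzle in such constructions is typically repeated modular squaring in the style of Rivest, Shamir, and Wagner: the solver must perform \(t\) sequential squarings, while the puzzle creator, who knows \(\phi(N)\), can take a shortcut. A toy sketch with illustrative parameters (not TIDE itself):

```python
def solve_slowly(N, t):
    """The solver's path: t inherently sequential modular squarings of 2,
    i.e. computing 2**(2**t) mod N one squaring at a time."""
    x = 2
    for _ in range(t):
        x = x * x % N
    return x

def solve_with_trapdoor(N, phi, t):
    """The creator's shortcut: reduce the exponent 2**t modulo phi(N) first,
    which costs only O(log t) squarings instead of t."""
    return pow(2, pow(2, t, phi), N)

# toy RSA modulus; real schemes use a ~2048-bit N of unknown factorisation
p, q = 10007, 10009
N, phi = p * q, (p - 1) * (q - 1)
t = 5000
```

The gap between the two paths is what enforces the delay: without \(\phi(N)\) there is no known way to parallelise or shortcut the \(t\) sequential squarings.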
In this work we introduce TIDE (TIme Delayed Encryption), a novel approach to constructing timed-release encryption based upon the RSA cryptosystem, where instead of directly encrypting the secret key to the future, we utilise number-theoretic techniques to allow the solver to factor the RSA modulus and hence derive the decryption key. We implement TIDE on a desktop PC and on Raspberry Pi devices, validating that TIDE is both efficient and practically implementable. We provide evidence of practicality with an extensive implementation study detailing the source code and practical performance of TIDE.
For the entire collection see [Zbl 1515.68044].Security analysis of RSA-BSSAhttps://zbmath.org/1527.940512024-02-28T19:32:02.718555Z"Lysyanskaya, Anna"https://zbmath.org/authors/?q=ai:lysyanskaya.annaSummary: In a blind signature scheme, a user can obtain a digital signature on a message of her choice without revealing anything about the message or the resulting signature to the signer. Blind signature schemes have recently found applications for privacy-preserving web browsing and ad ecosystems, and as such, are ripe for standardization. In this paper, we show that the recently proposed standard of \textit{F. Denis} et al. [``RSA blind signatures'', Preprint, \url{https://datatracker.ietf.org/doc/rfc9474/}] constitutes a strongly one-more-unforgeable blind signature scheme in the random-oracle model under the one-more-RSA assumption. Further, we show that the blind version of RSA-FDH proposed and analyzed by \textit{M. Bellare} et al. [J. Cryptology 16, No. 3, 185--215 (2003; Zbl 1045.94012)] does not satisfy blindness when the public key is chosen maliciously, but satisfies a weaker notion of a blind token.
For the entire collection see [Zbl 1524.94001].Chosen ciphertext secure keyed two-level homomorphic encryptionhttps://zbmath.org/1527.940522024-02-28T19:32:02.718555Z"Maeda, Yusaku"https://zbmath.org/authors/?q=ai:maeda.yusaku"Nuida, Koji"https://zbmath.org/authors/?q=ai:nuida.kojiSummary: Homomorphic encryption (HE) is a useful variant of public key encryption (PKE), but it has a drawback that HE cannot fully achieve IND-CCA2 security, a standard security notion for PKE.
\textit{K. Emura} et al. [Lect. Notes Comput. Sci. 7778, 32--50 (2013; Zbl 1314.94066)] proposed a ``keyed'' version of HE, called KH-PKE, which introduces a separate key for homomorphic evaluation and then achieves security close to IND-CCA2. Current KH-PKE schemes are classified into ones supporting only a single kind of homomorphic operation (addition or multiplication) and others that are fully homomorphic but are consequently not very efficient; no intermediate schemes with both efficiency and richer functionality are known so far. In this paper, we propose a ``two-level'' KH-PKE scheme for evaluating degree-two polynomials, by cleverly combining Emura et al.'s generic framework with a recent efficient two-level HE by \textit{N. Attrapadung} et al. [in: Proceedings of the 2018 on Asia conference on computer and communications security, ASIACCS '18. New York, NY: Association for Computing Machinery (ACM). 685--697 (2018; \url{doi:10.1145/3196494.3196552})].
For the entire collection see [Zbl 1515.68044].Efficient methods to overcome Rabin cryptosystem decryption failurehttps://zbmath.org/1527.940532024-02-28T19:32:02.718555Z"Mahad, Z."https://zbmath.org/authors/?q=ai:mahad.z"Asbullah, M. A."https://zbmath.org/authors/?q=ai:asbullah.muhammad-asyraf"Ariffin, M. R. K."https://zbmath.org/authors/?q=ai:ariffin.muhamad-rezal-kamel|ariffin.mahammad-rezal-kamelSummary: The Rabin cryptosystem is an efficient factoring-based scheme; however, its decryption produces a 4-to-1 output, which leads to decryption failure. In this work, in order to overcome the 4-to-1 decryption problem for the Rabin cryptosystem, we propose two distinct methods using a modulus of the type \(N=p^2q\) coupled with a restriction on the plaintext space \(M\). In the first method, the plaintext space is limited to \(M \in \mathbb Z_{pq}\). In the second method, we restrict the plaintext to the range \(M \in (0,2^{2n-2})\). Importantly, we prove that the decryption output of the proposed methods is unique and without decryption failure. The results in this work indicate that the decryption problem of the Rabin cryptosystem is overcome.Improved division property for ciphers with complex linear layershttps://zbmath.org/1527.940542024-02-28T19:32:02.718555Z"Mao, Yongxia"https://zbmath.org/authors/?q=ai:mao.yongxia"Wu, Wenling"https://zbmath.org/authors/?q=ai:wu.wenling"Wang, Bolin"https://zbmath.org/authors/?q=ai:wang.bolin"Zhang, Li"https://zbmath.org/authors/?q=ai:zhang.li.14|zhang.li.10|zhang.li.57|zhang.li.27|zhang.li.3|zhang.li.17|zhang.li.5|zhang.li.40|zhang.li.11|zhang.li.69|zhang.li.8|zhang.li.35|zhang.li.20|zhang.li.28|zhang.li.15|zhang.li.56|zhang.li.9|zhang.li.1|zhang.li.16|zhang.li.6|zhang.li.7|zhang.li.13Summary: The division property proposed by \textit{Y. Todo} [Lect. Notes Comput. Sci. 9056, 287--314 (2015; Zbl 1370.94545)] as a generalized integral property has been applied to many symmetric ciphers.
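For context on the 4-to-1 ambiguity that the Rabin repair above addresses: with a classical Rabin modulus \(N = pq\), \(p \equiv q \equiv 3 \pmod 4\), every valid ciphertext has four square roots, recoverable by the Chinese Remainder Theorem. A toy sketch (illustrative parameters, not the \(N = p^2q\) methods of the paper):

```python
def rabin_decrypt(c, p, q):
    """Return all four square roots of c modulo N = p*q,
    assuming p ≡ q ≡ 3 (mod 4) so that c**((p+1)//4) is a root mod p."""
    N = p * q
    rp = pow(c, (p + 1) // 4, p)   # square root of c mod p
    rq = pow(c, (q + 1) // 4, q)   # square root of c mod q
    yp = pow(p, -1, q)             # p^{-1} mod q, for CRT recombination
    yq = pow(q, -1, p)             # q^{-1} mod p
    roots = set()
    for sp in (rp, p - rp):        # combine the ± choices mod p and mod q
        for sq in (rq, q - rq):
            roots.add((sp * q * yq + sq * p * yp) % N)
    return roots

p, q = 1019, 1031                  # toy primes, both ≡ 3 (mod 4)
m = 123456                         # toy plaintext, coprime to N
c = m * m % (p * q)                # Rabin encryption is just squaring
```

Since the receiver cannot tell which of the four roots is the intended plaintext, unmodified Rabin decryption fails with probability 3/4; the restrictions on \(M\) described above are what remove this ambiguity.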
Automatic search methods for the division property assisted by modeling techniques, such as Mixed Integer Linear Programming (MILP) and the Boolean Satisfiability Problem (SAT), have become the most popular approach to searching for integral distinguishers. The accuracy of the model used in the search algorithm affects the search results for integral distinguishers. For a block cipher, constructing an accurate and efficient model of division property propagation on complex linear layers remains hard. This paper observes that the non-independent propagations of the bit-based division property (BDP) on complex linear layers can generate redundant division trails, which affect the accuracy of the model if they are not taken into account in modeling. Based on this, we propose a method that builds a more accurate model by handling matrices containing non-independent propagations in the linear layer. To verify the effectiveness of our method, we apply it to two block ciphers, uBlock-128 and MIBS. For uBlock-128, our results improve the previous 8-round integral distinguisher with more balanced bits. For MIBS, a 9-round integral distinguisher is given for the first time, which is 4 rounds longer than the previous best.
For the entire collection see [Zbl 1515.68044].Post-quantum anonymity of Kyberhttps://zbmath.org/1527.940552024-02-28T19:32:02.718555Z"Maram, Varun"https://zbmath.org/authors/?q=ai:maram.varun"Xagawa, Keita"https://zbmath.org/authors/?q=ai:xagawa.keitaSummary: Kyber is a key-encapsulation mechanism (KEM) that was recently selected by NIST in its PQC standardization process; it is also the only scheme to be selected in the context of public-key encryption (PKE) and key establishment. The main security target for KEMs, and their associated PKE schemes, in the NIST PQC context has been IND-CCA security. However, some important modern applications also require their underlying KEMs/PKE schemes to provide anonymity. Examples of such applications include anonymous credential systems, cryptocurrencies, broadcast encryption schemes, authenticated key exchange, and auction protocols. It is hence important to analyze the compatibility of NIST's new PQC standard in such ``beyond IND-CCA'' applications.
Some starting steps were taken by \textit{P. Grubbs} et al. [Lect. Notes Comput. Sci. 13277, 402--432 (2022; Zbl 1502.81027)] and \textit{K. Xagawa} [ibid. 13277, 551--581 (2022; Zbl 1513.81040)] wherein they studied the anonymity properties of most NIST PQC third round candidate KEMs. Unfortunately, they were unable to show the anonymity of Kyber because of certain technical barriers.
In this paper, we overcome said barriers and resolve the open problems posed by Grubbs [loc. cit.] and Xagawa [loc. cit.] by establishing the anonymity of Kyber, and the (hybrid) PKE schemes derived from it, in a post-quantum setting. Along the way, we also provide an approach to obtain tight IND-CCA security proofs for Kyber with concrete bounds; this resolves another issue identified by the aforementioned works related to the post-quantum IND-CCA security claims of Kyber from a provable security point-of-view. Our results also extend to Saber, a NIST PQC third round finalist, in a similar fashion.
For the entire collection see [Zbl 1524.94001].PNB-focused differential cryptanalysis of ChaCha stream cipherhttps://zbmath.org/1527.940562024-02-28T19:32:02.718555Z"Miyashita, Shotaro"https://zbmath.org/authors/?q=ai:miyashita.shotaro"Ito, Ryoma"https://zbmath.org/authors/?q=ai:ito.ryoma"Miyaji, Atsuko"https://zbmath.org/authors/?q=ai:miyaji.atsukoSummary: This study focuses on differential cryptanalysis of the ChaCha stream cipher. In the conventional approach, an adversary first searches for an input/output differential pair with the highest differential bias and then analyzes the probabilistic neutral bits (PNB) based on the obtained input/output differential pair. However, although the time and data complexities for the attack can be estimated by the differential bias and PNB obtained by this approach, the combination of the differential bias and PNB is not always optimal. In addition, the existing studies have not performed a comprehensive analysis of the PNB; thus, they have not provided an upper bound on the number of rounds required for a differential attack that uses a single-bit truncated differential to be successful. To address these limitations, we propose a PNB-focused differential attack on reduced-round ChaCha by first comprehensively analyzing the PNB for all possible single-bit truncated output differences and then searching for the input/output differential pair with the highest differential bias based on the obtained PNB. The best existing attack on ChaCha, proposed by \textit{C. Beierle} et al. [Lect. Notes Comput. Sci. 12172, 329--358 (2020; Zbl 1504.94104)], works on up to 7 rounds, whereas the most extended attack we observed works on up to 7.25 rounds using the proposed PNB-focused approach. The time complexity, data complexity, and success probability of the proposed attack are \(2^{255.62}\), \(2^{48.36}\), and 0.5, respectively. 
Although the proposed attack is less efficient than a brute force attack, it is the first dedicated attack on the target and provides both a baseline and useful components (i.e., differential bias and PNB) for improved attacks.
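For reference, the ChaCha quarter-round over which such single-bit differentials and PNB are defined can be sketched as follows (per RFC 8439; an illustration of the primitive, not the authors' attack code):

```python
MASK = 0xFFFFFFFF  # all arithmetic is on 32-bit words

def rotl(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """One ChaCha quarter-round: four add-rotate-xor steps on 32-bit words."""
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d
```

A ChaCha round applies this function to each column (or diagonal) of the 4x4 state; differential biases and probabilistic neutral bits are traced through exactly these add, rotate, and xor operations.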
For the entire collection see [Zbl 1515.68044].Post quantum cryptographyhttps://zbmath.org/1527.940572024-02-28T19:32:02.718555Z"Nitaj, A."https://zbmath.org/authors/?q=ai:nitaj.abderrahmaneSummary: Public key cryptography is widely used for many applications such as signing contracts, electronic voting, encryption, securing transactions over the Internet and storing sensitive data. The discovery of an efficient algorithm based on quantum mechanics for factoring large integers and computing discrete logarithms by Peter Shor in 1994 undermined the security assumptions upon which currently used public key cryptographic algorithms are based, like RSA, El Gamal and ECC. However, some cryptosystems, called post quantum cryptosystems, while not currently in widespread use, are believed to be resistant to quantum-computing-based attacks. In this paper, we provide a survey of quantum and post quantum cryptography. We review the principle of a quantum computer as well as Shor's algorithm and quantum key distribution. Then, we review some cryptosystems undermined by Shor's algorithm as well as some post quantum cryptosystems that are believed to resist classical and quantum computers.On public-key cryptosystem based on Church-Rosser string-rewriting systems (extended abstract)https://zbmath.org/1527.940582024-02-28T19:32:02.718555Z"Oleshchuk, Vladimir A."https://zbmath.org/authors/?q=ai:oleshchuk.vladimir-aSummary: We propose an approach toward public-key cryptosystems based on finite string-rewriting systems with the Church-Rosser property. The approach utilizes the existence of a unique normal form for any congruence class modulo such a system and the possibility of finding it in linear time. Such cryptosystems can be used when dealing with a large network of communicating parties, where it is impractical to use a distinct secret signing method for every pair of users and a unified secret method for all senders to a given receiver is desirable.
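The unique-normal-form property underpinning this approach can be illustrated with a toy Church-Rosser system: free-group-style cancellation over \(\{a, A, b, B\}\) with rules \(aA \to \varepsilon\), \(Aa \to \varepsilon\), \(bB \to \varepsilon\), \(Bb \to \varepsilon\), whose normal forms are computable in linear time with a stack (a textbook example, not the system proposed in the paper):

```python
INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

def normal_form(word):
    """Linear-time reduction for a confluent cancellation system:
    scan once, cancelling each letter against an adjacent inverse."""
    stack = []
    for ch in word:
        if stack and stack[-1] == INVERSE[ch]:
            stack.pop()          # ch cancels the previous letter
        else:
            stack.append(ch)
    return "".join(stack)
```

Because the system is confluent, two words are congruent exactly when their normal forms coincide, e.g. `normal_form("aAb")` and `normal_form("b")` both yield `"b"`; this decidable equality test in linear time is what the cryptosystem exploits.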
For the entire collection see [Zbl 0847.00053].Speeding-up parallel computation of large smooth-degree isogeny using precedence-constrained schedulinghttps://zbmath.org/1527.940592024-02-28T19:32:02.718555Z"Phalakarn, Kittiphon"https://zbmath.org/authors/?q=ai:phalakarn.kittiphon"Suppakitpaisarn, Vorapong"https://zbmath.org/authors/?q=ai:suppakitpaisarn.vorapong"Hasan, M. Anwar"https://zbmath.org/authors/?q=ai:hasan.m-anwarSummary: Although the supersingular isogeny Diffie-Hellman (SIDH) protocol is one of the most promising post-quantum cryptosystems, it is significantly slower than its main counterparts due to the underlying large smooth-degree isogeny computation. In this work, we address the problem of evaluating and constructing a strategy for computing the large smooth-degree isogeny in the multi-processor setting by formulating them as scheduling problems with dependencies. The contribution of this work is two-fold. For the strategy evaluation, we transform strategies into task dependency graphs and apply precedence-constrained scheduling algorithms to them in order to find their costs. For the strategy construction, we construct strategies from smaller parts that are optimal solutions of integer programming representing the problem. We show via experiments that the proposed two techniques together offer more than 13\% reduction in the strategy costs compared to the best current results by \textit{A. Hutchinson} and \textit{K. Karabina} [Lect. Notes Comput. Sci. 11356, 169--189 (2018; Zbl 1407.94122)].
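The precedence-constrained scheduling primitive used for strategy evaluation above can be sketched generically as greedy list scheduling on a task dependency graph (a textbook illustration with assumed task names and durations, not the authors' isogeny-specific cost model):

```python
from graphlib import TopologicalSorter

def list_schedule(tasks, deps, m):
    """Greedy list scheduling on m processors.
    tasks: {name: duration}; deps: {name: set of predecessor names}.
    Returns (makespan, per-task finish times)."""
    # static_order() lists every task after all of its predecessors
    order = TopologicalSorter({t: deps.get(t, set()) for t in tasks}).static_order()
    free = [0] * m                               # when each processor is next idle
    finish = {}
    for t in order:
        ready = max((finish[p] for p in deps.get(t, ())), default=0)
        i = min(range(m), key=lambda j: free[j])  # least-loaded processor
        start = max(free[i], ready)               # wait for processor and data
        free[i] = finish[t] = start + tasks[t]
    return max(finish.values()), finish
```

On the diamond-shaped DAG `{"C": {"A", "B"}, "D": {"C"}}` with unit durations, two processors finish in 3 time units while one processor needs 4, which is the kind of cost comparison the strategy evaluation performs on its task dependency graphs.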
For the entire collection see [Zbl 1515.68044].A key-recovery attack against Mitaka in the \(t\)-probing modelhttps://zbmath.org/1527.940602024-02-28T19:32:02.718555Z"Prest, Thomas"https://zbmath.org/authors/?q=ai:prest.thomasSummary: \textsc{Mitaka} is a lattice-based signature proposed in [\textit{T. Espitau} et al., Lect. Notes Comput. Sci. 13277, 222--253 (2022; Zbl 1496.94042)]. A key advertised feature of \textsc{Mitaka} is that it can be masked at high orders efficiently, making it attractive in scenarios where side-channel attacks are a concern. \textsc{Mitaka} comes with a claimed security proof in the \(t\)-probing model.
We uncover a flaw in the security proof of \textsc{Mitaka}, and subsequently show that it is not secure in the \(t\)-probing model. For any number of shares \(d \ge 4\), probing \(t < d\) variables per execution allows an attacker to recover the private key efficiently with approximately \(2^{21}\) executions. Our analysis shows that even a constant number of probes suffices \((t = 3)\), as long as the attacker has access to a number of executions that is linear in \(d/t\).
For the entire collection see [Zbl 1524.94001].Bounds for the number of rounds with impossible differences in generalized Feistel schemeshttps://zbmath.org/1527.940612024-02-28T19:32:02.718555Z"Pudovkina, M. A."https://zbmath.org/authors/?q=ai:pudovkina.marina-aleksandrovna"Toktarev, A. V."https://zbmath.org/authors/?q=ai:toktarev.a-vSummary: The class of ciphers described by a generalized Feistel scheme is considered. Some upper and lower bounds for the maximum number of rounds with impossible differences are provided. They do not depend on the type of Feistel scheme and on the number of nonlinear functions or blocks in the register.New vulnerability of RSA modulus type \(N = p^2 q\)https://zbmath.org/1527.940622024-02-28T19:32:02.718555Z"Rahman, N. N. A. R."https://zbmath.org/authors/?q=ai:rahman.n-n-a-r"Arifin, M. R. K."https://zbmath.org/authors/?q=ai:arifin.m-r-kSummary: This paper proposes new attacks on modulus of type \(N=p^2q\). Given \(k\) moduli of the form \(N_i=p^2_iq_i\) for \(k \geq 2\) and \(i= 1,\ldots, k\), the attack works when \(k\) public keys \((N_i, e_i)\) are such that there exist \(k\) relations of the shape \(e_ix - N_iy_i=z_i - (ap^2_i+bq_i^2)y_i\) or of the shape \(e_ix_i - N_iy=z_i -(ap^2_i+bq^2_i)y \) where the parameters \(x,x_i,y,y_i\) and \(z_i\) are suitably small in terms of the prime factors of the moduli. The proposed attacks utilizing the LLL algorithm enables one to factor the \(k\) moduli \(N_i\) simultaneously.A universally composable PAKE with zero communication cost. (And why it shouldn't be considered UC-secure)https://zbmath.org/1527.940632024-02-28T19:32:02.718555Z"Roy, Lawrence"https://zbmath.org/authors/?q=ai:roy.lawrence.1"Xu, Jiayu"https://zbmath.org/authors/?q=ai:xu.jiayuSummary: A Password-Authenticated Key Exchange (PAKE) protocol allows two parties to agree upon a cryptographic key, when the only information shared in advance is a low-entropy password. The standard security notion for PAKE [\textit{R. 
Canetti} et al., Lect. Notes Comput. Sci. 3494, 404--421 (2005; Zbl 1137.94367)] is in the Universally Composable (UC) framework. We show that, unlike most UC security notions, UC PAKE does not imply correctness. While Canetti et al. [loc. cit.] have briefly noted this issue, we present the first comprehensive study of correctness in UC PAKE:
\begin{itemize}
\item[1.] We show that TrivialPAKE, a no-message protocol that does not satisfy correctness, is a UC PAKE;
\item[2.] We propose nine approaches to guaranteeing correctness in the UC security notion of PAKE, and show that seven of them are equivalent, whereas the other two are unachievable;
\item[3.] We prove that a direct solution, namely changing the UC PAKE functionality to incorporate correctness, is impossible;
\item[4.] Finally, we show how to naturally incorporate correctness by changing the model -- we view PAKE as a three-party protocol, with the man-in-the-middle adversary as the third party.
\end{itemize}
In this way, we hope to shed some light on the very nature of UC-security in the man-in-the-middle setting.
For the entire collection see [Zbl 1524.94001].Another look at key randomisation hypotheseshttps://zbmath.org/1527.940642024-02-28T19:32:02.718555Z"Samajder, Subhabrata"https://zbmath.org/authors/?q=ai:samajder.subhabrata"Sarkar, Palash"https://zbmath.org/authors/?q=ai:sarkar.palashSummary: In the context of linear cryptanalysis of block ciphers, let \(p_0\) (resp. \(p_1)\) be the probability that a particular linear approximation holds for the right (resp. a wrong) key choice. The standard right key randomisation hypothesis states that \(p_0\) is a constant \(p \neq 1/2\) and the standard wrong key randomisation hypothesis states that \(p_1 = 1/2\). Using these hypotheses, the success probability \(P_S\) of the attack can be expressed in terms of the data complexity \(N\). The resulting expression for \(P_S\) is a monotone increasing function of \(N\). Building on earlier work by \textit{L. O'Connor} [Lect. Notes Comput. Sci. 1008, 131--136 (1995; Zbl 0939.94547)] and \textit{J. Daemen} and \textit{V. Rijmen} [J. Math. Cryptol. 1, No. 3, 221--242 (2007; Zbl 1211.94028)], \textit{A. Bogdanov} and \textit{E. Tischhauser} [Lect. Notes Comput. Sci. 8424, 19--38 (2014; Zbl 1321.94043)] argued that \(p_1\) should be considered to be a random variable. They postulated the adjusted wrong key randomisation hypothesis which states that \(p_1\) follows a normal distribution. A non-intuitive consequence is that the resulting expression for \(P_S\) is no longer a monotone increasing function of \(N\). A later work by \textit{C. Blondeau} and \textit{K. Nyberg} [Des. Codes Cryptography 82, No. 1--2, 319--349 (2017; Zbl 1402.94052)] argued that \(p_0\) should also be considered to be a random variable and they postulated the adjusted right key randomisation hypothesis which states that \(p_0\) follows a normal distribution. In this work, we revisit the key randomisation hypotheses. 
While the argument that \(p_0\) and \(p_1\) should be considered to be random variables is indeed valid, we show that if \(p_0\) and \(p_1\) follow any distributions with supports which are subsets of \([0, 1]\), and \(\mathcal{E}[p_0] = p\) and \(\mathcal{E}[p_1] = 1/2\), then the expression for \(P_S\) that is obtained is exactly the same as the one obtained using the standard key randomisation hypotheses. Consequently, \(P_S\) is a monotone increasing function of \(N\) even when \(p_0\) and \(p_1\) are considered to be random variables.QCCA-secure generic transformations in the quantum random oracle modelhttps://zbmath.org/1527.940652024-02-28T19:32:02.718555Z"Shan, Tianshu"https://zbmath.org/authors/?q=ai:shan.tianshu"Ge, Jiangxia"https://zbmath.org/authors/?q=ai:ge.jiangxia"Xue, Rui"https://zbmath.org/authors/?q=ai:xue.rui|xue.rui.1Summary: The post-quantum security of cryptographic schemes assumes that the quantum adversary only receives the classical result of computations with the secret key. Further, it is unknown whether the post-quantum secure schemes still remain secure if the adversary can obtain a superposition state of the results.
In this paper, we formalize a class of public-key encryption schemes called oracle-masked schemes. We then define a plaintext extraction procedure for these schemes; this procedure simulates the quantum-accessible decryption oracle with a certain loss.
The construction of the plaintext extraction procedure does not need to take the secret key as input. Based on this property, we prove the IND-qCCA security of the Fujisaki-Okamoto (FO) transformation in the quantum random oracle model (QROM), and our security proof is tighter than the proof given by \textit{M. Zhandry} [Lect. Notes Comput. Sci. 11693, 239--268 (2019; Zbl 1509.81445)]. We also give the first IND-qCCA security proof of the REACT transformation in the QROM. Furthermore, our formalization can be applied to prove the IND-qCCA security of key encapsulation mechanisms with explicit rejection. As an example, we present the IND-qCCA security proof of the \(\mathsf{T}_{\mathsf{CH}}\) transformation, proposed by \textit{L. Huguenin-Dumittan} and \textit{S. Vaudenay} [ibid. 13277, 613--642 (2022; Zbl 1496.94049)], in the QROM.
For the entire collection see [Zbl 1524.94001].New attacks on prime power \(N = p^r q\) using good approximation of \(\phi (N)\)https://zbmath.org/1527.940662024-02-28T19:32:02.718555Z"Shehu, S."https://zbmath.org/authors/?q=ai:shehu.sadiq"Ariffin, M. R. K."https://zbmath.org/authors/?q=ai:ariffin.muhamad-rezal-kamel|ariffin.mahammad-rezal-kamelSummary: This paper proposes three new attacks. Our first attack is based on the RSA key equation \(ed - k\phi (N) = 1\) where \(\phi (N) = p^{r-1}(p-1)(q-1)\). Let \(q < p <2q\) and \(2p^{\frac{3r+2}{r+1}}\vert p^{\frac{r-1}{r+1}}-q^{\frac{r-1}{r+1}}\vert < \frac{1}{6} N^{\gamma}\) with \(d=N^{\delta}\). If \(\delta < \frac{1- \gamma}{2}\), we show that \(\frac{k}{d}\) can be recovered among the convergents of the continued fraction expansion of \(\frac{e}{N-2N^{\frac{r}{r+1}}+N^{\frac{r-1}{r+1}}}\). We furthered our analysis on \(j\) prime power moduli \(N_i=p^r_iq_i\) satisfying a variant of the above-mentioned condition. We utilized the LLL algorithm on \(j\) prime power public keys \((N_i, e_i)\) with \(N_i=p^r_iq_i\) and we were able to factorize the \(j\) prime power moduli \(N_i=p^r_iq_i\) simultaneously in polynomial time.Multi-client inner product encryption: function-hiding instantiations without random oracleshttps://zbmath.org/1527.940672024-02-28T19:32:02.718555Z"Shi, Elaine"https://zbmath.org/authors/?q=ai:shi.elaine"Vanjani, Nikhil"https://zbmath.org/authors/?q=ai:vanjani.nikhilSummary: In a Multi-Client Functional Encryption (MCFE) scheme, \(n\) clients each obtain a secret encryption key from a trusted authority. During each time step \(t\), each client \(i\) can encrypt its data using its secret key. The authority can use its master secret key to compute a functional key given a function \(f\), and the functional key can be applied to a collection of \(n\) clients' ciphertexts encrypted to the same time step, resulting in the outcome of \(f\) on the clients' data.
In this paper, we focus on MCFE for inner-product computations.
If an MCFE scheme hides not only the clients' data, but also the function \(f\), we say it is function hiding. Although MCFE for inner-product computation has been extensively studied, how to achieve function privacy is still poorly understood. The very recent work of \textit{S. Agrawal} et al. [Lect. Notes Comput. Sci. 12828, 208--238 (2021; Zbl 1489.94082)] showed how to construct a function-hiding MCFE scheme for inner-product assuming standard bilinear group assumptions; however, they assume the existence of a random oracle and prove only a relaxed, selective security notion. An intriguing open question is whether we can achieve function-hiding MCFE for inner-product without random oracles.
In this work, we are the first to show a function-hiding MCFE scheme for inner products, relying on standard bilinear group assumptions. Further, we prove adaptive security without the use of a random oracle. Our scheme also achieves succinct ciphertexts, that is, each coordinate in the plaintext vector encrypts to only \(O(1)\) group elements.
Our main technical contribution is a new upgrade from single-input functional encryption for inner-products to a multi-client one. Our upgrade preserves function privacy, that is, if the original single-input scheme is function-hiding, so is the resulting multi-client construction. Further, this new upgrade allows us to obtain a conceptually simple construction.
For the entire collection see [Zbl 1524.94001].Private simultaneous messages based on quadratic residueshttps://zbmath.org/1527.940682024-02-28T19:32:02.718555Z"Shinagawa, Kazumasa"https://zbmath.org/authors/?q=ai:shinagawa.kazumasa"Eriguchi, Reo"https://zbmath.org/authors/?q=ai:eriguchi.reo"Satake, Shohei"https://zbmath.org/authors/?q=ai:satake.shohei"Nuida, Koji"https://zbmath.org/authors/?q=ai:nuida.kojiSummary: The Private Simultaneous Messages (PSM) model is a minimal model for secure multiparty computation. \textit{U. Feige} et al. [STOC 1994, 554--563 (1994; Zbl 1344.68030)] and \textit{Y. Ishai} [in: Secure multi-party computation. Amsterdam: IOS Press. 222--248 (2013; \url{doi:10.3233/978-1-61499-169-4-222})] (for a review of the entire collection see [Zbl 1270.94003]) constructed PSM protocols based on quadratic residues. In this paper, we define QR-PSM protocols as a generalization of these protocols. A QR-PSM protocol is a PSM protocol whose decoding function outputs the quadratic residuosity modulo \(p\) of what is computed from messages. We design a QR-PSM protocol for any symmetric function \(f : \{0,1\}^n \rightarrow \{0,1\}\) of communication complexity \(O(n^2)\). As far as we know, it is the most efficient PSM protocol for symmetric functions since the previously known best PSM protocol was of \(O(n^2\log n)\) [\textit{A. Beimel} et al., Lect. Notes Comput. Sci. 8349, 317--342 (2014; Zbl 1326.94072)]. We also study the sizes of the underlying finite fields \(\mathbb{F}_p\) in the protocols since the communication complexity of a QR-PSM protocol is proportional to the bit length of the prime \(p\). We show that there is a prime \(p \leq (1 + o(1))N^22^{2N-2}\) such that any length-\(N\) pattern of quadratic (non)residues appears modulo \(p\) (and hence it can be used for general QR-PSM protocols), which improves \textit{R. Peralta}'s known result [Math. Comput. 58, No.
197, 433--440 (1992; Zbl 0745.11057)] by a constant factor \((1+\sqrt{2})^2\).Cryptanalysis of a group key transfer protocol: generalization and countermeasureshttps://zbmath.org/1527.940692024-02-28T19:32:02.718555Z"Tentu, Appala Naidu"https://zbmath.org/authors/?q=ai:tentu.appala-naidu"Raju, Kallepu"https://zbmath.org/authors/?q=ai:raju.kallepu"Venkaiah, V. Ch."https://zbmath.org/authors/?q=ai:venkaiah.vadlamudi-chSummary: In cryptography, a group key distribution protocol is a mechanism in which a group key is generated and distributed by a Key Generation Centre (KGC) to a set of communicating parties in a group. This group key generally ensures secure communication among the communicating parties on an insecure channel. Key establishment protocols allow two or more communicating parties to establish their common secret key, called a session key. The Harn and Lin (HL) protocol is one such protocol; it is based on Shamir's threshold secret sharing setting. \textit{J. Nam} et al. [Lect. Notes Comput. Sci. 7105, 309--315 (2011; \url{doi:10.1007/978-3-642-27142-7_36})] exposed a vulnerability in the HL protocol through their replay attack and proposed a countermeasure using a nonce mechanism. In this paper, we generalize the replay attack presented by Nam et al. [loc. cit.] and propose an alternative countermeasure that does not use a nonce mechanism. The novelty of our countermeasure is that the KGC is not required to detect replay messages, and hence each user does not need to compute authentication messages as in Nam et al. [loc. cit.]. The presented countermeasure thereby brings down the computational complexity of the scheme.
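For orientation, the HL protocol builds on Shamir's threshold scheme; a minimal, self-contained sketch of \((t, n)\) Shamir sharing and Lagrange reconstruction over a prime field (toy background code only, not the HL or Nam et al. protocols):

```python
# Toy (t, n) Shamir secret sharing over GF(p); illustrative only.
import random

P = 2**61 - 1  # a Mersenne prime, large enough for a toy field

def share(secret, t, n, p=P):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, p=P):
    """Lagrange interpolation at x = 0 from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[2:]) == 123456789  # any 3 of 5 work
```

Any subset of fewer than \(t\) shares reveals nothing about the secret, which is the property the group key transfer protocols above exploit.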
Also, we propose an improved version of the HL protocol that uses short signatures for authentication, overcomes the drawbacks, and is resistant to existing attacks.GLUE: generalizing unbounded attribute-based encryption for flexible efficiency trade-offshttps://zbmath.org/1527.940702024-02-28T19:32:02.718555Z"Venema, Marloes"https://zbmath.org/authors/?q=ai:venema.marloes"Alpár, Greg"https://zbmath.org/authors/?q=ai:alpar.gregSummary: Ciphertext-policy attribute-based encryption is a versatile primitive that has been considered extensively to securely manage data in practice. Especially completely unbounded schemes are attractive, because they do not restrict the sets of attributes and policies. So far, any such schemes that support negations in the access policy or that have online/offline extensions have an inefficient decryption algorithm.
In this work, we propose GLUE (Generalized, Large-universe, Unbounded and Expressive), which is a novel scheme that allows for an efficient implementation of decryption while supporting both negations and online/offline extensions. We achieve these properties simultaneously by uncovering an underlying dependency between encryption and decryption, which allows for a flexible trade-off in their efficiency. For the security proof, we devise a new technique that enables us to generalize multiple existing schemes. As a result, we obtain a completely unbounded scheme supporting negations that, to the best of our knowledge, outperforms all existing such schemes in the decryption algorithm.
For the entire collection see [Zbl 1524.94001].Lattice-based cryptography: a surveyhttps://zbmath.org/1527.940712024-02-28T19:32:02.718555Z"Wang, Xiaoyun"https://zbmath.org/authors/?q=ai:wang.xiaoyun"Xu, Guangwu"https://zbmath.org/authors/?q=ai:xu.guangwu"Yu, Yang"https://zbmath.org/authors/?q=ai:yu.yangSummary: Most current public key cryptosystems would be vulnerable to attacks by future quantum computers. Post-quantum cryptography offers mathematical methods to secure information and communications against such attacks, and therefore has been receiving a significant amount of attention in recent years. Lattice-based cryptography, built on mathematical hard problems in (high-dimensional) lattice theory, is a promising post-quantum cryptography family due to its excellent efficiency, moderate size and strong security. This survey aims to give a general overview of lattice-based cryptography. To this end, the authors begin with the introduction of the underlying mathematical lattice problems. Then they introduce the fundamental cryptanalytic algorithms and the design theory of lattice-based cryptography.Resumable zero-knowledge for circuits from symmetric key primitiveshttps://zbmath.org/1527.940722024-02-28T19:32:02.718555Z"Zhang, Handong"https://zbmath.org/authors/?q=ai:zhang.handong"Wei, Puwen"https://zbmath.org/authors/?q=ai:wei.puwen"Xue, Haiyang"https://zbmath.org/authors/?q=ai:xue.haiyang"Deng, Yi"https://zbmath.org/authors/?q=ai:deng.yi"Li, Jinsong"https://zbmath.org/authors/?q=ai:li.jinsong"Wang, Wei"https://zbmath.org/authors/?q=ai:wang.wei.23"Liu, Guoxiao"https://zbmath.org/authors/?q=ai:liu.guoxiaoSummary: Consider the scenario in which the prover and the verifier perform a zero-knowledge (ZK) proof protocol for the same statement multiple times sequentially, where each proof is modeled as a session. We focus on the problem of how to resume a ZK proof efficiently in such a scenario.
We introduce a new primitive called resumable honest verifier zero-knowledge proof of knowledge (resumable HVZKPoK) and propose a general construction of the resumable HVZKPoK for circuits based on the ``MPC-in-the-head'' paradigm, where the complexity of the resumed session is less than that of the original ZK proofs. To ensure the knowledge soundness for the resumed session, we identify a property called extractable decomposition. Interestingly, most block ciphers satisfy this property, and the cost of resuming a session can be reduced dramatically when the underlying circuits are implemented with block ciphers. As a direct application of our resumable HVZKPoK, we construct a post-quantum secure stateful signature scheme, which makes \textsf{Picnic3} suitable for blockchain protocols. Using the same parameter setting of \textsf{Picnic3}, the sign/verify time of our subsequent signatures can be reduced to 3.1\%/3.3\% of \textsf{Picnic3} and the corresponding signature size can be reduced to 36\%. Moreover, by applying a parallel version of our method to the well-known Cramer, Damgård and Schoenmakers (CDS) transformation [\textit{R. Cramer} et al., Lect. Notes Comput. Sci. 839, 174--187 (1994; Zbl 0939.94546)], we get a compressed one-out-of-\(N\) proof for circuits, which can be further used to construct a ring signature from symmetric key primitives only. When the ring size is less than \(2^4\), the size of our ring signature scheme is only about 1/3 of \textit{J. Katz} et al.'s construction [in: Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, CCS '18. New York, NY: Association for Computing Machinery (ACM). 525--537 (2018; \url{doi:10.1145/3243734.3243805})].
For the entire collection see [Zbl 1515.68044].Improved differential attack on round-reduced LEAhttps://zbmath.org/1527.940732024-02-28T19:32:02.718555Z"Zhang, Yuhan"https://zbmath.org/authors/?q=ai:zhang.yuhan"Wu, Wenling"https://zbmath.org/authors/?q=ai:wu.wenling"Zhang, Lei"https://zbmath.org/authors/?q=ai:zhang.lei.18Summary: LEA is both the national standard of the Republic of Korea and an ISO/IEC standard. In this paper, we focus on differential attacks on LEA with the automatic analysis technique and improve the previous search strategy for ARX ciphers. For the new strategy, we no longer pay attention only to internal differences with a single active bit. By studying the differential property of modular addition, we choose a proper difference in the middle round. We construct a 13-round differential characteristic whose probability is better than the best previous one by a factor of about 2. Furthermore, we take the differential effect into consideration and obtain a 13-round differential whose probability is better than the best previous one by a factor of about 4. Moreover, we mount key-recovery attacks on 14-round LEA-128, 14-round LEA-192 and 15-round LEA-256. Utilizing a property of the key schedule, we obtain a lower time complexity than that evaluated by \textit{I. Dinur}'s method [Lect. Notes Comput. Sci. 8781, 147--164 (2014; Zbl 1382.94089)].
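For background, the differential property of modular addition that such automated searches build on can be evaluated exactly with the standard Lipmaa-Moriai formula; a short sketch (illustrative only, not the authors' search tool):

```python
# XOR-differential probability of addition mod 2^n (Lipmaa-Moriai formula).
def xdp_add(a, b, c, n=32):
    """Probability that input XOR differences (a, b) give output difference c
    through z = x + y mod 2^n, over uniformly random x, y."""
    mask = (1 << n) - 1

    def eq(x, y, z):  # bits where x, y and z all agree
        return (~(x ^ y) & ~(x ^ z)) & mask

    # validity condition on the differential (Lipmaa-Moriai)
    if eq(a << 1, b << 1, c << 1) & (a ^ b ^ c ^ (b << 1)) & mask:
        return 0.0
    # every non-MSB position where a, b, c disagree halves the probability
    w = bin(~eq(a, b, c) & (mask >> 1)).count("1")
    return 2.0 ** -w

assert xdp_add(0, 0, 0) == 1.0                    # zero difference always holds
assert xdp_add(0x80000000, 0, 0x80000000) == 1.0  # MSB difference propagates freely
assert xdp_add(1, 0, 1) == 0.5                    # LSB difference: carry with prob 1/2
```

The MSB behaves linearly under addition (no carry out), which is why differences concentrated in high bits give the high-probability characteristics exploited in ARX attacks.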
For the entire collection see [Zbl 1515.68044].Extendable threshold ring signatures with enhanced anonymityhttps://zbmath.org/1527.940742024-02-28T19:32:02.718555Z"Avitabile, Gennaro"https://zbmath.org/authors/?q=ai:avitabile.gennaro.1"Botta, Vincenzo"https://zbmath.org/authors/?q=ai:botta.vincenzo"Fiore, Dario"https://zbmath.org/authors/?q=ai:fiore.darioSummary: Threshold ring signatures are digital signatures that allow \(t\) parties to sign a message while hiding their identity in a larger set of \(n\) users called ``ring''. Recently, \textit{D. F. Aranha} et al. [Lect. Notes Comput. Sci. 13178, 379--406 (2022; Zbl 1519.94211)] introduced the notion of extendable threshold ring signatures (\(\mathsf{ETRS}\)). \(\mathsf{ETRS}\) allow one to update, in a non-interactive manner, a threshold ring signature on a certain message so that the updated signature has a greater threshold, and/or an augmented set of potential signers. An application of this primitive is anonymous count me in. A first signer creates a ring signature with a sufficiently large ring announcing a proposition in the signed message. After such cause becomes public, other parties can anonymously decide to support that proposal by producing an updated signature. Crucially, such applications rely on partial signatures being posted on a publicly accessible bulletin board since users may not know/trust each other.
In this paper, we first point out that although anonymous count me in was suggested as an application of \(\mathsf{ETRS}\), the anonymity notion proposed in the previous work is insufficient in many application scenarios. Indeed, the existing notion guarantees anonymity only against adversaries who just see the last signature, and are not allowed to access the ``full evolution'' of an \(\mathsf{ETRS}\). This is in stark contrast with applications where partial signatures are posted on a public bulletin board. We therefore propose stronger anonymity definitions and construct a new \(\mathsf{ETRS}\) that satisfies such definitions. Interestingly, while satisfying stronger anonymity properties, our \(\mathsf{ETRS}\) asymptotically improves on the two \(\mathsf{ETRS}\) presented in prior work [loc. cit.] in terms of both time complexity and signature size. Our \(\mathsf{ETRS}\) relies on extendable non-interactive witness-indistinguishable proof of knowledge (\(\mathsf{ENIWI}\) PoK), a novel technical tool that we formalize and construct, and that may be of independent interest. We build our constructions from pairing groups under the SXDH assumption.
For the entire collection see [Zbl 1524.94001].Fiat-Shamir signatures based on module-NTRUhttps://zbmath.org/1527.940752024-02-28T19:32:02.718555Z"Bai, Shi"https://zbmath.org/authors/?q=ai:bai.shi"Beard, Austin"https://zbmath.org/authors/?q=ai:beard.austin"Johnson, Floyd"https://zbmath.org/authors/?q=ai:johnson.floyd"K. B. Vidhanalage, Sulani"https://zbmath.org/authors/?q=ai:k-b-vidhanalage.sulani"Ngo, Tran"https://zbmath.org/authors/?q=ai:ngo.tran-vuSummary: Module-NTRU lattices, as a generalization of versatile NTRU lattices, were introduced by \textit{J. H. Cheon} et al. [``A new trapdoor over module-NTRU lattice and its application to ID-based encryption'', Preprint, \url{https://ia.cr/2019/1468}], and \textit{C. Chuengsatiansup} et al. [``ModFalcon: compact signatures based on module-NTRU lattices'', Preprint, \url{https://ia.cr/2019/1456}]. The Module-NTRU lattices possess the benefit of being more flexible on the underlying ring dimension. They also show how to efficiently construct trapdoors based on Module-NTRU lattices and apply them to trapdoor-based signatures and identity-based encryption. In this paper, we construct Fiat-Shamir signatures based on variant Module-NTRU lattices. Further generalizing Module-NTRU, we introduce the inhomogeneous Module-NTRU problem. Under the assumption that a variation of the search and decisional problems associated with Module-NTRU and inhomogeneous Module-NTRU are hard, we construct two signature schemes. The first scheme is obtained from a lossy identification scheme via the Fiat-Shamir transform that admits tight security in the quantum random oracle model (QROM), following the framework of \textit{E. Kiltz} et al. [Lect. Notes Comput. Sci. 10822, 552--586 (2018; Zbl 1415.94448)]. The second scheme is a BLISS-like signature scheme based on the search Module-NTRU problem using the bimodal Gaussian for the rejection sampling. 
Finally, we analyze known attacks and propose concrete parameters for the lossy signature scheme. In particular, the signature size is about 4400 bytes, which appears to make it the smallest provably secure signature scheme in the QROM achieving 128-bit security.
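For background, the Fiat-Shamir transform used here replaces the verifier's challenge with a hash of the commitment and the message; a classical Schnorr-style sketch over a toy discrete-log group (illustrative only; the paper's schemes work over Module-NTRU lattices, not discrete-log groups, and the parameters below are insecure):

```python
# Fiat-Shamir transform applied to Schnorr identification (toy parameters).
import hashlib
import random

p = 1019  # safe prime: p = 2q + 1
q = 509   # prime order of the subgroup
g = 4     # generator of the order-q subgroup (4 = 2^2 mod p)

def H(*parts):
    """Fiat-Shamir challenge: hash of commitment and message, reduced mod q."""
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)   # secret key
    return x, pow(g, x, p)       # (sk, pk = g^x)

def sign(x, msg):
    k = random.randrange(1, q)   # commitment randomness
    r = pow(g, k, p)             # commitment
    e = H(r, msg)                # challenge derived by hashing, not by a verifier
    s = (k + e * x) % q          # response
    return e, s

def verify(y, msg, sig):
    e, s = sig
    r = pow(g, s, p) * pow(y, -e, p) % p  # recompute commitment: g^s * y^{-e} = g^k
    return H(r, msg) == e

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
```

A forged signature would have to hit the hash value `e` by chance, which is what the (Q)ROM analyses of such transforms quantify.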
For the entire collection see [Zbl 1515.68044].Hardening signature schemes via derive-then-derandomize: stronger security proofs for EdDSAhttps://zbmath.org/1527.940762024-02-28T19:32:02.718555Z"Bellare, Mihir"https://zbmath.org/authors/?q=ai:bellare.mihir"Davis, Hannah"https://zbmath.org/authors/?q=ai:davis.hannah"Di, Zijing"https://zbmath.org/authors/?q=ai:di.zijingSummary: We consider a transform, called Derive-then-Derandomize, that hardens a given signature scheme against randomness failure and implementation error. We prove that it works. We then give a general lemma showing indifferentiability of a class of constructions that apply a shrinking output transform to an MD-style hash function. Armed with these tools, we give new proofs for the widely standardized and used \(\mathsf{EdDSA}\) signature scheme, improving prior work in two ways: (1) we give proofs for the case that the hash function is an MD-style one, reflecting the use of SHA512 in the NIST standard, and (2) we improve the tightness of the reduction so that one has guarantees for group sizes in actual use.
For the entire collection see [Zbl 1524.94001].Efficient and universally composable single secret leader election from pairingshttps://zbmath.org/1527.940772024-02-28T19:32:02.718555Z"Catalano, Dario"https://zbmath.org/authors/?q=ai:catalano.dario"Fiore, Dario"https://zbmath.org/authors/?q=ai:fiore.dario"Giunta, Emanuele"https://zbmath.org/authors/?q=ai:giunta.emanueleSummary: Single Secret Leader Election (SSLE) protocols allow a set of users to elect a leader among them so that the identity of the winner remains secret until she decides to reveal herself. This notion was formalized and implemented in a recent result by \textit{D. Boneh} et al. [in: Proceedings of the 2nd ACM conference on advances in financial technologies, AFT '20. New York, NY: Association for Computing Machinery (ACM). 12--24 (2020; \url{doi:10.1145/3419614.3423258})] and finds important applications in the area of Proof of Stake blockchains.
In this paper we put forward new SSLE solutions that advance the state of the art on both the theoretical and the practical front. On the theoretical side we propose a new definition of SSLE in the universal composability framework. We believe this to be the right way to model security in highly concurrent contexts such as those of many blockchain-related applications. Next, we propose a UC-realization of SSLE from public key encryption with keyword search (PEKS), based on the ability to distribute the PEKS key generation and encryption algorithms. Finally, we give a concrete PEKS scheme with efficient distributed algorithms for key generation and encryption that allows us to efficiently instantiate our abstract SSLE construction.
Our resulting SSLE protocol is very efficient, does not require participants to store any state information besides their secret keys, and guarantees so-called on-chain efficiency: the information to verify an election in the new block should be of size at most logarithmic in the number of participants. To the best of our knowledge, this is the first efficient SSLE scheme achieving this property.
For the entire collection see [Zbl 1524.94001].Auditable asymmetric password authenticated public key establishmenthttps://zbmath.org/1527.940782024-02-28T19:32:02.718555Z"Faonio, Antonio"https://zbmath.org/authors/?q=ai:faonio.antonio"Gonzalez Vasco, Maria Isabel"https://zbmath.org/authors/?q=ai:gonzalez-vasco.maria-isabel"Soriente, Claudio"https://zbmath.org/authors/?q=ai:soriente.claudio"Truong, Hien Thi Thu"https://zbmath.org/authors/?q=ai:truong.hien-thi-thuSummary: Non-repudiation of user messages is a desirable feature in a number of online applications, but it requires digital signatures and certified cryptographic keys. Unfortunately, the adoption of cryptographic keys often results in poor usability, as users must either carry around their private keys (e.g., in a smart-card) or store them in all of their devices. A user-friendly alternative, adopted by several companies and national administrations, is based on so-called ``cloud-based PKI certificates''. In a nutshell, each user has a certified key-pair stored at a server in the cloud; users authenticate to the server -- via passwords or one-time codes -- and ask it to sign messages on their behalf. However, moving the key-pair from user-private storage to the cloud impairs non-repudiation. In fact, users can always deny having signed a message by claiming that the signature was produced by the allegedly malicious server without their consent. In this paper we present Auditable Asymmetric Password Authenticated Public Key Establishment (\textsf{A}\textsuperscript{2}\textsf{PAKE}), a cloud-based solution to allow users to manage their signing key-pairs that (i) has the same usability as cloud-based PKI certificates, and (ii) guarantees non-repudiation of signatures. We do so by introducing a new ideal functionality in the Universal Composability framework named \(\mathcal{F}_{\mathsf{A}^2\mathsf{PAKE}} \).
The functionality is password-based and allows the generation of asymmetric key-pairs, where the public key is output to all the parties, but the secret key is the private output of a single one (e.g., the user). Further, the functionality is auditable: given a public key output by the functionality, a server can prove to a third party (i.e., a judge) that the corresponding secret key is held by a specific user. Thus, if a user signs messages with the secret key obtained via \textsf{A}\textsuperscript{2}\textsf{PAKE}, then signatures are non-repudiable. We provide an efficient instantiation based on distributed oblivious pseudo-random functions for signature schemes based on DLOG. We also develop a prototype implementation of our instantiation and use it to evaluate its performance in realistic settings.
For the entire collection see [Zbl 1515.68028].Tracing a linear subspace: application to linearly-homomorphic group signatureshttps://zbmath.org/1527.940792024-02-28T19:32:02.718555Z"Hébant, Chloé"https://zbmath.org/authors/?q=ai:hebant.chloe"Pointcheval, David"https://zbmath.org/authors/?q=ai:pointcheval.david"Schädlich, Robert"https://zbmath.org/authors/?q=ai:schadlich.robertSummary: When multiple users have power or rights, there is always the risk of corruption or abuse. While there is no way to prevent such malicious behavior, whether from the users themselves or from external adversaries, one can strongly deter it with tracing capabilities that later help to revoke rights or negatively impact reputations. On the other hand, privacy is an important issue in many applications, which seems to be in contradiction with traceability.
In this paper, we first extend usual tracing techniques based on codes so that not just one contributor can be traced but the full collusion. In a second step, we embed suitable codes into a set \(\mathcal V\) of vectors in such a way that, given a vector \(\mathfrak{U}\in \mathsf{span}(\mathcal V)\), the underlying code can be used to efficiently find a minimal subset \(\mathcal X \subseteq \mathcal V\) such that \(\mathfrak{U}\in \mathsf{span}(\mathcal X)\).
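The second step can be stated concretely: given \(\mathfrak{U} \in \mathsf{span}(\mathcal V)\), find an inclusion-minimal \(\mathcal X \subseteq \mathcal V\) with \(\mathfrak{U} \in \mathsf{span}(\mathcal X)\). A brute-force baseline over a small prime field (the paper achieves this efficiently via the embedded codes; this sketch only illustrates the goal):

```python
# Greedy search for an inclusion-minimal subset X of V with u in span(X), over GF(p).
P = 101  # small prime field, illustrative

def in_span(u, vecs, p=P):
    """Decide whether sum_i c_i * vecs[i] = u is solvable over GF(p)."""
    if not vecs:
        return all(x % p == 0 for x in u)
    rows = [list(col) for col in zip(*vecs)]       # matrix with vecs as columns
    aug = [row + [u[i]] for i, row in enumerate(rows)]
    m, n = len(aug), len(vecs)
    r = 0
    for col in range(n):                            # Gaussian elimination
        piv = next((i for i in range(r, m) if aug[i][col] % p), None)
        if piv is None:
            continue
        aug[r], aug[piv] = aug[piv], aug[r]
        inv = pow(aug[r][col], -1, p)
        aug[r] = [x * inv % p for x in aug[r]]
        for i in range(m):
            if i != r and aug[i][col] % p:
                f = aug[i][col]
                aug[i] = [(a - f * b) % p for a, b in zip(aug[i], aug[r])]
        r += 1
    # inconsistent iff some all-zero row has a nonzero augmented entry
    return not any(all(x % p == 0 for x in row[:-1]) and row[-1] % p for row in aug)

def minimal_spanning_subset(u, V, p=P):
    """Drop vectors one by one while u stays in the span of the rest."""
    X = list(V)
    i = 0
    while i < len(X):
        trial = X[:i] + X[i + 1:]
        if in_span(u, trial, p):
            X = trial
        else:
            i += 1
    return X

V = [(1, 0), (0, 1), (1, 1)]
assert minimal_spanning_subset((1, 1), V) == [(1, 1)]
```

This greedy baseline needs one rank computation per vector of \(\mathcal V\); the point of the code-based construction above is to trace the full subset far more efficiently.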
To meet privacy requirements, we then make the vectors of \(\mathsf{span}(\mathcal V)\) anonymous while keeping the efficient tracing mechanism. As an interesting application, we formally define the notion of linearly-homomorphic group signatures and propose a construction from our codes: multiple signatures can be combined to sign any linear subspace in an anonymous way, but a tracing authority is able to trace back all the contributors involved in the signatures of that subspace.
For the entire collection see [Zbl 1524.94001].Structure-preserving linearly homomorphic signature with designated combiner for subspacehttps://zbmath.org/1527.940802024-02-28T19:32:02.718555Z"Li, Yumei"https://zbmath.org/authors/?q=ai:li.yumei"Zhang, Mingwu"https://zbmath.org/authors/?q=ai:zhang.mingwu"Zhang, Futai"https://zbmath.org/authors/?q=ai:zhang.futaiSummary: A linearly homomorphic signature allows signature holders to perform arbitrary linear computations on signed vectors. This special ``function'' makes linearly homomorphic signatures suitable for many applications. However, public combinability is not advisable in some specific scenarios. Although some schemes with designated combiners have been proposed, they break the homomorphism of the combined signature: the combined vectors cannot be combined again. In this paper, we put forth the notion of structure-preserving linearly homomorphic signatures with a designated combiner. The combined signature is indistinguishable from signatures generated by the signer. Only the signer and the designated entity can generate a valid signature for any combined vector. Finally, we prove our scheme is secure under the CDH assumption and show it is efficient.
For the entire collection see [Zbl 1515.68044].Multi-signatures for ECDSA and its applications in blockchainhttps://zbmath.org/1527.940812024-02-28T19:32:02.718555Z"Pan, Shimin"https://zbmath.org/authors/?q=ai:pan.shimin"Chan, Kwan Yin"https://zbmath.org/authors/?q=ai:chan.kwan-yin"Cui, Handong"https://zbmath.org/authors/?q=ai:cui.handong"Yuen, Tsz Hon"https://zbmath.org/authors/?q=ai:yuen.tsz-honSummary: Multi-signatures enable a group of \(t\) signers to sign a message jointly and obtain a single signature. Multi-signatures help validate blockchain transactions, such as transactions with multiple inputs or transactions from multisig addresses. However, multi-signature schemes are realised naively in most blockchain systems by directly concatenating \(t\) ECDSA signatures.
In this paper, we give the first multi-signature scheme for ECDSA. Technically, we design a new ephemeral group public key for the set of signers and introduce an interactive signing protocol to output a single ECDSA signature. The signature can be validated by the ephemeral group public key. Then, we instantiate the ECDSA multi-signature scheme with class groups, for which we design a secret-exchanging mechanism that ensures the hidden content is well-constructed. Moreover, our scheme is able to identify a malicious party in the signing phase and helps to minimize unnecessary resource consumption. These ECDSA multi-signatures can be used in blockchains to reduce the transaction cost and provide accountability for signers as well as backward compatibility with existing ECDSA addresses.
For the entire collection see [Zbl 1515.68044].Cyclotomic cosets, codes and secret sharinghttps://zbmath.org/1527.940822024-02-28T19:32:02.718555Z"Wong, D. C. K."https://zbmath.org/authors/?q=ai:wong.denis-c-kSummary: In this paper, all 2-cyclotomic cosets modulo \(p^n\) are constructed when 2 is a primitive root modulo \(p^n\). When the order of 2 is \(\frac{p-1}{2}\) modulo \(p\) and the order of 2 is \(\frac{p(p-1)}{2}\) modulo \(p^2\), we construct all 2-cyclotomic cosets modulo \(p^2\). Also, when 2 has order \(\frac{p^2(p-1)}{2}\) modulo \(p^3\), we derive all 2-cyclotomic cosets modulo \(p^3\). Furthermore, four results on all \(s\)-cyclotomic cosets modulo \(pq\) are obtained by considering three different possible orders of \(s\) modulo \(p\) and \(q\), for distinct odd primes \(p, q\). Finally, we use the 2-cyclotomic cosets modulo 9, 25 and 49 to construct binary codes of length 9, 15 and 49, respectively, and hence the access sets for the secret sharing scheme based on some of these families of binary codes are discussed.List-decoding homomorphism codes with arbitrary codomainshttps://zbmath.org/1527.940832024-02-28T19:32:02.718555Z"Babai, László"https://zbmath.org/authors/?q=ai:babai.laszlo"Black, Timothy J. F."https://zbmath.org/authors/?q=ai:black.timothy-j-f"Wuu, Angela"https://zbmath.org/authors/?q=ai:wuu.angela.1Summary: The codewords of the homomorphism code \(\mathrm{aHom}(G,H)\) are the affine homomorphisms between two finite groups, \(G\) and \(H\), generalizing Hadamard codes. Following the work by \textit{O. Goldreich} and \textit{L. A. Levin} [STOC 1989, 2--32 (1989; \url{doi:10.1145/73007.73010})], \textit{E. Grigorescu} et al. [Lect. Notes Comput. Sci. 4110, 375--385 (2006; Zbl 1155.94408)], \textit{I. Dinur} et al. [STOC 2008, 275--284 (2008; Zbl 1231.94061)], and \textit{A. Guo} and \textit{M. Sudan} [LIPIcs -- Leibniz Int. Proc. Inform.
28, 737--747 (2014; Zbl 1359.94851)], we further expand the range of groups for which local list-decoding is possible up to mindist, the minimum distance of the code. In particular, for the first time, we do not require either \(G\) or \(H\) to be solvable. Specifically, we demonstrate a poly\((1/\varepsilon)\) bound on the list size, i.e., on the number of codewords within distance \((\text{mindist}-\varepsilon)\) from any received word, when \(G\) is either abelian or an alternating group, and \(H\) is an arbitrary (finite or infinite) group. We conjecture that a similar bound holds for all finite simple groups as domains; the alternating groups serve as the first test case.\par The abelian vs. arbitrary result permits us to adapt previous techniques to obtain efficient local list-decoding for this case. We also obtain efficient local list-decoding for the permutation representations of alternating groups (the codomain is a symmetric group) under the restriction that the domain \(G=A_n\) is paired with codomain \(H= S_m\) satisfying \(m < 2^{n-1}/\sqrt{n}\).\par
The limitations on the codomain in the latter case arise from severe technical difficulties stemming from the need to solve the homomorphism extension (HomExt) problem in certain cases; these are addressed in a separate paper [\textit{A. Wuu}, ``Homomorphism extension'', Preprint, \url{arXiv:1802.08656}].\par We introduce an intermediate ``semi-algorithmic'' model we call Certificate List-Decoding that bypasses the HomExt bottleneck and works in the alternating vs. arbitrary setting. A certificate list-decoder produces partial homomorphisms that uniquely extend to the homomorphisms in the list. A homomorphism extender applied to a list of certificates yields the desired list.
For the entire collection see [Zbl 1393.68012].Polar codes with exponentially small error at finite block lengthhttps://zbmath.org/1527.940842024-02-28T19:32:02.718555Z"Błasiok, Jarosław"https://zbmath.org/authors/?q=ai:blasiok.jaroslaw"Guruswami, Venkatesan"https://zbmath.org/authors/?q=ai:guruswami.venkatesan"Sudan, Madhu"https://zbmath.org/authors/?q=ai:sudan.madhuSummary: We show that the entire class of polar codes (up to a natural necessary condition) converge to capacity at block lengths polynomial in the gap to capacity, while simultaneously achieving failure probabilities that are exponentially small in the block length (i.e., decoding fails with probability \(\exp(-N^{\Omega(1)})\) for codes of length \(N\)). Previously this combination was known only for one specific family within the class of polar codes, whereas we establish this whenever the polar code exhibits a condition necessary for any polarization.\par
Our results adapt and strengthen a local analysis of polar codes due to the authors [\textit{J. Błasiok} et al., STOC 2018, 485--492 (2018; Zbl 1427.94089)]. That analysis related the time-local behavior of a martingale to its global convergence, and this allowed them to prove that the broad class of polar codes converges to capacity at polynomial block lengths. The analysis easily adapts to show exponentially small failure probabilities, provided the associated martingale, the ``Arıkan martingale'', exhibits a correspondingly strong local effect. The main contribution of this work is a much stronger local analysis of the Arıkan martingale. This leads to the general result claimed above.\par
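The polarization effect driving the Arıkan martingale can be illustrated concretely for the binary erasure channel, where the channel transform acts on erasure probabilities in closed form (a standard special case used here purely as illustration; the paper's analysis covers general polar codes):

```python
# Polarization of the binary erasure channel BEC(z0): Arikan's transform
# sends an erasure probability z to the pair (2z - z^2, z^2).
def polarize(z0, depth):
    zs = [z0]
    for _ in range(depth):
        zs = [w for z in zs for w in (2 * z - z * z, z * z)]
    return zs

zs = polarize(0.5, 10)        # 2^10 = 1024 synthetic channels
mean = sum(zs) / len(zs)      # martingale property: the mean stays at z0
polarized = sum(1 for z in zs if z < 0.1 or z > 0.9) / len(zs)
```

The mean is preserved (the sequence of erasure probabilities is a martingale) while the mass concentrates near 0 and 1; the paper's contribution is a quantitative local version of this convergence.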
In addition to our general result, we also show, for the first time, polar codes that achieve failure probability \(\exp(-N^\beta)\) for any \(\beta<1\) while converging to capacity at block length polynomial in the gap to capacity. Finally we also show that the ``local'' approach can be combined with any analysis of failure probability of an arbitrary polar code to get essentially the same failure probability while achieving block length polynomial in the gap to capacity.
For the entire collection see [Zbl 1393.68012].Codeshttps://zbmath.org/1527.940852024-02-28T19:32:02.718555Z"Britz, Thomas"https://zbmath.org/authors/?q=ai:britz.thomas"Cameron, Peter J."https://zbmath.org/authors/?q=ai:cameron.peter-jSummary: This chapter introduces linear codes and some of their properties, surveys how the Tutte polynomial determines these properties, and provides an overview of more general results as well as applications to coding theory.
\begin{itemize}
\item Linear code, codeword weight, support and weight enumerator, dual code.
\item The MacWilliams identity.
\item The coboundary polynomial, Greene's theorem and the critical theorem.
\item Ordered tuples and subcodes.
\item Equivalence between the Tutte polynomial and generalized weights.
\item Orbital polynomials.
\item Generalizations and applications of Greene's theorem.
\item Wei's duality theorem.
\end{itemize}
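The MacWilliams identity listed above can be checked by brute force on a toy example; the sketch below (our illustration, not from the chapter) recovers the weight distribution of the dual of the binary \([3,1]\) repetition code via the Krawtchouk-polynomial form of the identity:

```python
from itertools import product
from math import comb

G = [(1, 1, 1)]   # generator matrix of the binary [3,1] repetition code
n = len(G[0])

def span(rows):
    # all F_2-linear combinations of the given rows
    return {tuple(sum(c * r[i] for c, r in zip(cs, rows)) % 2 for i in range(n))
            for cs in product((0, 1), repeat=len(rows))}

C = span(G)
# the dual code: all vectors orthogonal to every codeword of C
dual = {v for v in product((0, 1), repeat=n)
        if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in C)}

def weight_dist(code):
    A = [0] * (n + 1)
    for c in code:
        A[sum(c)] += 1
    return A

A = weight_dist(C)
# MacWilliams transform: A'_j = (1/|C|) sum_i A_i K_j(i), with the binary
# Krawtchouk polynomial K_j(i) = sum_k (-1)^k C(i,k) C(n-i,j-k)
B = [sum(A[i] * sum((-1) ** k * comb(i, k) * comb(n - i, j - k)
                    for k in range(j + 1))
         for i in range(n + 1)) // len(C)
     for j in range(n + 1)]
```

Here \(W_C(x,y)=x^3+y^3\), and the transform returns the weight distribution \([1,0,3,0]\) of the even-weight \([3,2]\) dual code.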
For the entire collection see [Zbl 1495.05001].Jacobi polynomials and design theory. Ihttps://zbmath.org/1527.940862024-02-28T19:32:02.718555Z"Chakraborty, Himadri Shekhar"https://zbmath.org/authors/?q=ai:chakraborty.himadri-shekhar"Miezaki, Tsuyoshi"https://zbmath.org/authors/?q=ai:miezaki.tsuyoshi"Oura, Manabu"https://zbmath.org/authors/?q=ai:oura.manabu"Tanaka, Yuuho"https://zbmath.org/authors/?q=ai:tanaka.yuuho\textit{M. Ozeki} [Math. Proc. Camb. Philos. Soc. 121, No. 1, 15--30 (1997; Zbl 0888.94028)] introduced the notion of Jacobi polynomials for codes -- a generalization of weight enumerators for codes -- as an analogue of Jacobi forms [\textit{E. Bannai} and \textit{M. Ozeki}, Proc. Japan Acad., Ser. A 72, No. 1, 12--15 (1996; Zbl 0860.11026); \textit{M. Eichler} and \textit{D. Zagier}, The theory of Jacobi forms. Boston-Basel-Stuttgart: Birkhäuser (1985; Zbl 0554.10018)], which are a powerful generalization of the modular forms of lattices. ``Roughly speaking,'' in the words of \textit{A. Bonnecaze} et al. [Des. Codes Cryptography 16, No. 3, 215--234 (1999; Zbl 1049.94018)], ``Jacobi polynomials are to binary codes what Jacobi forms are to lattices.'' Later, \textit{A. Bonnecaze} et al. [Des. Codes Cryptography 16, No. 3, 215--234 (1999; Zbl 1049.94018)] provided a formula to compute the Jacobi polynomials of binary codes, and applied them to construct several combinatorial objects like group divisible designs, packing designs, and covering designs, to name a few. In the paper under review, the authors give generalizations and analogues of some results in [loc. cit.].
Let \(C\) be a linear code of length \(n\) over \(\mathbb{F}_q.\) Then the \textit{Jacobi polynomial} attached to a set \(T\subseteq [1,n]_{\mathbb{Z}}\) of coordinate places of the code \(C\) is defined as follows:
\[
J_{C,T}(w,z,x,y) :=\sum_{\mathbf{u} \in C}w^{m_0(\mathbf{u})} z^{m_1(\mathbf{u})} x^{n_0(\mathbf{u})} y^{n_1(\mathbf{u})},
\]
where
\begin{align*}
m_0(\mathbf{u}) &:= \sharp\{i \in T:~ u_i = 0\},\\
m_1(\mathbf{u}) &:= \sharp\{i \in T:~ u_i \neq 0\},\\
n_0(\mathbf{u}) &:= \sharp\{i \in [1,n]_{\mathbb{Z}}\backslash T:~ u_i = 0\},\\
n_1(\mathbf{u}) &:= \sharp\{i \in [1,n]_{\mathbb{Z}}\backslash T:~ u_i \neq 0\}.
\end{align*}
If \(T\subseteq[1,n]_{\mathbb{Z}}\) is empty, then the Jacobi polynomial is equal to the (Hamming) weight enumerator of \(C\):
\[
J_{C,\emptyset}(w,z,x,y)=W_C(x,y):=\sum_{\mathbf{u} \in C}x^{n-wt(\mathbf{u})} y^{wt(\mathbf{u})}=\sum_{i=0}^{n}A_ix^{n-i}y^i.
\]
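A brute-force evaluation of \(J_{C,T}\) on a toy code may help fix the notation (our illustration, not from the paper); monomials \(w^{m_0}z^{m_1}x^{n_0}y^{n_1}\) are stored by their exponent tuples:

```python
from collections import Counter

# even-weight binary [3,2] code, an illustrative toy example
C = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
n = 3

def jacobi(C, T):
    # J_{C,T}: each monomial w^m0 z^m1 x^n0 y^n1 stored as (m0, m1, n0, n1)
    terms = Counter()
    for u in C:
        m0 = sum(1 for i in T if u[i] == 0)
        m1 = sum(1 for i in T if u[i] != 0)
        n0 = sum(1 for i in range(n) if i not in T and u[i] == 0)
        n1 = sum(1 for i in range(n) if i not in T and u[i] != 0)
        terms[(m0, m1, n0, n1)] += 1
    return dict(terms)
```

For \(T=\emptyset\) this returns \(x^3+3xy^2=W_C(x,y)\), and for \(T\) consisting of the first coordinate it returns \(wx^2+2zxy+wy^2\).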
The Jacobi polynomial with respect to \(\ell\) reference vectors \(\mathbf{w}_1,\mathbf{w}_2,\ldots,\mathbf{w}_\ell \in \mathbb{F}_q^n\) is defined as
\[
J_{C,\mathbf{w}_1,\mathbf{w}_2,\ldots,\mathbf{w}_{\ell}}(\{x_{\mathbf{a}}\}_{\mathbf{a}\in \mathbb{F}_2^{\ell+1}}):=\sum_{\mathbf{u} \in C} \prod_{\mathbf{a} \in \mathbb{F}_2^{\ell+1}} x_{\mathbf{a}}^{N_{\mathbf{a}}(\mathbf{u},\mathbf{w}_1,\ldots,\mathbf{w}_{\ell})}.
\]
Here, for \(\mathbf{a} \in\mathbb{F}_2^{\ell+1},\) we denote
\[
N_{\mathbf{a}}(\mathbf{u},\mathbf{w}_1,\ldots,\mathbf{w}_\ell):=\sharp \{i \in [1,n]_{\mathbb{Z}}:~\mathbf{a}=(\phi(u_i),\phi(w_{1i}),\ldots,\phi(w_{\ell i}))\},
\]
\]
where for any \(j \in [1,\ell]_{\mathbb{Z}},\) \(\mathbf{w}_j=(w_{j1},\ldots,w_{jn}),\) \(\phi(w_{ji})= \begin{cases}1, \text{ for } w_{ji} \neq 0,\\
0, \text{ for }w_{ji}=0. \end{cases}\)
Note that the (usual) Jacobi polynomial coincides with the Jacobi polynomial with one reference vector, namely the case \(\ell=1.\)
The main results of this paper are as follows.
\begin{itemize}
\item[(1)] MacWilliams identity for the Jacobi polynomials of linear codes.
Theorem. [MacWilliams identity (Theorem 3.1)] Let \(C\) be an \(\mathbb{F}_q\)-linear code of length \(n.\) Again let \(\chi\) be a non-trivial character of \(\mathbb{F}_q.\) Let \(J_{C,\mathbf{w}_1,\mathbf{w}_2,\ldots,\mathbf{w}_{\ell}}(\{x_{\mathbf{a}}\}_{\mathbf{a}\in\mathbb{F}_2^{\ell+1}})\) be the Jacobi polynomial of \(C\) with respect to the reference vectors \(\mathbf{w}_1,\mathbf{w}_2,\ldots,\mathbf{w}_\ell \in\mathbb{F}_q^n.\) Then
\begin{multline*}
J_{C^{\perp},\mathbf{w}_1,\mathbf{w}_2,\ldots,\mathbf{w}_{\ell}}(\{x_{\mathbf{a}}\}_{\mathbf{a}\in\mathbb{F}_2^{\ell+1}})\\
=\frac{1}{|C|} J_{C,\mathbf{w}_1,\mathbf{w}_2,\ldots,\mathbf{w}_{\ell}} \left(\left\{\sum_{b \in \mathbb{F}_q} \chi(a_1b)x_{(\phi(b),\phi(a_2),\ldots,\phi(a_{\ell+1}))} \right\}_{\mathbf{a} \in \mathbb{F}_2^{\ell+1}}\right).
\end{multline*}
\item[(2)] Jacobi polynomials in terms of polarization operator.
The authors give a generalized form of the polarization operator (generalizing the notion introduced by \textit{A. Bonnecaze} et al. [Des. Codes Cryptography 16, No. 3, 215--234 (1999; Zbl 1049.94018)]), and provide, using this operator, an evaluation of the Jacobi polynomial of a non-binary code associated to multiple reference vectors.
Let \(P(x_0, x_1)\) be a homogeneous polynomial with indeterminates \(x_0\) and \(x_1.\) Again let \(P^\prime_{x_0}\) (resp. \(P^\prime_{x_1}\)) denote the partial derivative with respect to variable \(x_0\) (resp. \(x_1\)). Define the \textit{polarization operator} \(A_j\) for any integer \(1 \leq j \leq \ell\) as follows:
\[
A_j \cdot P(x_0,x_1)=w_j P^\prime_{x_0}(x_0,x_1)+z_j P^\prime_{x_1}(x_0,x_1).
\]
Theorem 4.2. A code \(C\) is generalized \(1\)-homogeneous if and only if, for any \(\ell\)-tuple \((0,\ldots, 0, 1, 0, \ldots, 0)\) having a single non-zero coordinate, say a \(1\) in the \(j\)-th coordinate, we have
\[
J_{ C,0,\ldots,0,1,0,\ldots,0} = \frac{1}{n}A_j \cdot W_C.
\]
Theorem 4.3. If \(C\) is a generalized \(t\)-homogeneous code and contains no codeword of weight \(\leq t\) then for \(\mathbf{t} =(t_1,t_2, \ldots, t_\ell)\) such that \(t_1+t_2+\cdots+t_\ell=t\) we have
\[
J_{C,t_1,t_2,\ldots,t_\ell}=\frac{1}{n(n-1)\cdots(n-t+1)}A_{\ell}^{t_\ell} A_{\ell-1}^{t_{\ell-1}}\cdots A_1^{t_1} \cdot W_C.
\]
In the above two theorems, a code \(C\) is called \textit{generalized \(t\)-homogeneous} if the codewords of every given weight \(k\) form a generalized \(t\)-design in the sense of \textit{P. J. Cameron} [Discrete Math. 309, No. 14, 4835--4842 (2009; Zbl 1186.05024)].
\item[(3)] Continuing the study of \textit{A. Bonnecaze} et al. [Des. Codes Cryptography 16, No. 3, 215--234 (1999; Zbl 1049.94018)] where they studied a connection between designs from Type II codes and Jacobi polynomials, the authors also establish the connection between Jacobi polynomials with one reference vector and designs for some Type III and Type IV codes over \(\mathbb{F}_q.\)
\end{itemize}
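Theorem 4.2 can be sanity-checked on the binary repetition code of length \(n\) (our toy example, which is generalized \(1\)-homogeneous): here \(W_C=x^n+y^n\), so \(\frac{1}{n}A\cdot W_C=wx^{n-1}+zy^{n-1}\), which agrees with the Jacobi polynomial with respect to a single weight-one reference vector. A brute-force sketch:

```python
# Check (1/n) A . W_C = J_{C, e_1} for the binary repetition code of length n.
n = 5
C = [(0,) * n, (1,) * n]

# weight enumerator W_C as {(i, j): coeff} for the monomial x^i y^j
W = {}
for u in C:
    wt = sum(u)
    W[(n - wt, wt)] = W.get((n - wt, wt), 0) + 1

# polarization operator: A . P = w dP/dx + z dP/dy, then divide by n;
# monomials are keyed by exponent tuples (pow_w, pow_z, pow_x, pow_y)
AW = {}
for (i, j), c in W.items():
    if i > 0:
        AW[(1, 0, i - 1, j)] = AW.get((1, 0, i - 1, j), 0) + c * i
    if j > 0:
        AW[(0, 1, i, j - 1)] = AW.get((0, 1, i, j - 1), 0) + c * j
rhs = {k: v // n for k, v in AW.items()}

# Jacobi polynomial with T = {first coordinate}, exponents (m0, m1, n0, n1)
lhs = {}
for u in C:
    key = (int(u[0] == 0), int(u[0] != 0),
           sum(1 for a in u[1:] if a == 0), sum(1 for a in u[1:] if a != 0))
    lhs[key] = lhs.get(key, 0) + 1
```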
Reviewer: Djoko Suprijanto (Bandung)Quasi-self-dual codes over a non-unital ring of order \(4\)https://zbmath.org/1527.940872024-02-28T19:32:02.718555Z"Dougherty, Steven T."https://zbmath.org/authors/?q=ai:dougherty.steven-t"Şahinkaya, Serap"https://zbmath.org/authors/?q=ai:sahinkaya.serapThis paper investigates self-dual codes over the non-commutative and non-unital ring \(E= \left\langle a,b \mid 2a=2b=0, a^2=a, b^2=b, ab=a, ba=b \right\rangle\). It is shown that additive self-dual codes over \(E\) exist for all symmetric dualities and all lengths, and that Type IV additive self-dual codes over \(E\) exist for all symmetric dualities and all even lengths. This allows the authors to prove that the Hamming weight enumerator of a quasi-self-dual code (respectively, of a quasi-self-dual code with only even weights) belongs to the same ring of invariants as that of self-dual codes (respectively, Type IV codes) over the field of order 4.
For the entire collection see [Zbl 1518.16001].
Reviewer: Joël Kabore (Ouagadougou)Minimal PD-sets for codes associated with the graphs \(Q^m_2\), \(m\) evenhttps://zbmath.org/1527.940882024-02-28T19:32:02.718555Z"Key, J. D."https://zbmath.org/authors/?q=ai:key.jennifer-d"Rodrigues, B. G."https://zbmath.org/authors/?q=ai:rodrigues.bernardo-gabrielThis is yet another important contribution by the authors to the growing literature on the determination of PD-sets for permutation decoding for codes obtained from graphs and designs. It follows up on their earlier work on binary codes from adjacency matrices of \(m\)-ary \(n\)-cubes \(Q_n^m\), where adjacency is defined by the Lee metric. In it, \(s\)-PD-sets for \(s\leq t-1\), where \(t\) is the error-correcting capacity of the code, were determined.
This paper is a natural extension of the earlier work. It focuses on a subcode \(D\) of the dual of the \(p\)-ary codes, for any prime \(p\), from the adjacency matrices of the \(m\)-ary 2-cube \(Q_2^m\), where \(m\geq 4\) is even. Various properties of the codes have been determined. For instance, the results on the codes begin with a determination of some codewords of weight \(m\) in the dual. This implies that the minimum weight of \(D\) is at most \(m\). Building on this, it is shown that the code and its dual have parameters \([m^2, 2m-2, m]_p\) and \([m^2, m^2-2m+2, 2]_p\), respectively. Notably, if \(p=2\) the code is shown to be self-orthogonal, while if \(p\) is odd and \(p\not\vert m\) then the code is LCD. The case of \(p\) odd and \(p\vert m\) has also been discussed.
The main result of the paper is the determination of a PD-set of minimal size \(s+1\) for full error-correction in Proposition 2, as summarized in Theorem 1. This PD-set is determined with respect to an information set \(\mathcal{I}\) for the code given in Proposition 2. It is standard practice for work of this nature to first determine a convenient information set, fix an ordering of the coordinate positions and then exhibit a PD-set for permutation decoding.
The result is significant because PD-sets for full error-correction are generally scarce. Typically, most PD-sets obtained in similar work have tended to be much larger than the Gordon bound and yet permutation decoding is more efficient the closer the size of a PD-set to the Gordon bound. The determination of these minimal PD-sets is therefore an important addition to the existing literature.
Reviewer: Khumbo Kumwenda (Mzuzu)Small weight bases for Hamming codeshttps://zbmath.org/1527.940892024-02-28T19:32:02.718555Z"Tromp, John"https://zbmath.org/authors/?q=ai:tromp.john-t"Zhang, Louxin"https://zbmath.org/authors/?q=ai:zhang.louxin"Zhao, Ying"https://zbmath.org/authors/?q=ai:zhao.yingSummary: We present constructions of bases for a Hamming code having small \textit{width} and \textit{height}, i.e. the number of 1s in each row and column in the corresponding matrix. Apart from being combinatorially interesting in their own right, these bases also lead to improved embeddings of a hypercube of cliques into a same-sized hypercube.
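The width and height of a basis can be made concrete with the systematic generator matrix of the binary \([7,4]\) Hamming code (an illustrative basis of our choosing, not one of the optimized bases of the paper):

```python
from itertools import product

# systematic generator matrix [I | P] of the binary [7,4] Hamming code
G = [
    (1, 0, 0, 0, 0, 1, 1),
    (0, 1, 0, 0, 1, 0, 1),
    (0, 0, 1, 0, 1, 1, 0),
    (0, 0, 0, 1, 1, 1, 1),
]
width = max(sum(row) for row in G)          # largest number of 1s in a row
height = max(sum(col) for col in zip(*G))   # largest number of 1s in a column

# sanity check: every nonzero codeword has weight >= 3 (Hamming code, d = 3)
codewords = {tuple(sum(c * g[i] for c, g in zip(cs, G)) % 2 for i in range(7))
             for cs in product((0, 1), repeat=4)}
min_wt = min(sum(c) for c in codewords if any(c))
```

This particular basis has width 4 and height 3; the constructions of the paper aim to make both parameters small simultaneously.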
For the entire collection see [Zbl 0847.00053].On the idempotents of cyclic codes over \(\mathbb{F}_{2^t}\)https://zbmath.org/1527.940902024-02-28T19:32:02.718555Z"Han, Sunghyu"https://zbmath.org/authors/?q=ai:han.sunghyuSummary: We study cyclic codes of length \(n\) over \(\mathbb{F}_{2^t} \). Cyclic codes can be viewed as ideals in \(\mathcal{R}_n = \mathbb{F}_{2^t}[x]/(x^n - 1)\). It is known that there is a unique generating idempotent for each ideal. Let \(e(x) \in \mathcal{R}_n\). If \(t = 1\) or \(t = 2\), then there is a necessary and sufficient condition that \(e(x)\) is an idempotent. But there is no known similar result for \(t \geq 3\). In this paper we give an answer for this problem.Higher Grassmann codeshttps://zbmath.org/1527.940912024-02-28T19:32:02.718555Z"Can, Mahir Bilen"https://zbmath.org/authors/?q=ai:can.mahir-bilen"Joshua, Roy"https://zbmath.org/authors/?q=ai:joshua.roy"Ravindra, G. V."https://zbmath.org/authors/?q=ai:ravindra.g-vThe paper investigates a natural generalization of the well-known Reed-Muller codes, known as Grassmann codes. These codes are obtained by evaluating sections of the line bundle on the Grassmannian variety, which is a generalization of the projective spaces. The authors compute the parameters for a general Grassmann code over \(\mathbb{F}_q\) and extend these calculations to some other important subvarieties of Grassmannians and partial flag varieties. They also explicitly determine the parameters of all such codes obtained from the Grassmannian. This work is part of a larger effort to compute parameters of codes produced from the large class of algebraic varieties called projective spherical varieties.
Reviewer: Matteo Bonini (Aalborg)A comparison between Suzuki invariant code and one-point Hermitian codehttps://zbmath.org/1527.940922024-02-28T19:32:02.718555Z"Eid, Abdulla"https://zbmath.org/authors/?q=ai:eid.abdullaSummary: In this paper we compare the performance of two algebraic geometry codes (Suzuki and Hermitian codes) constructed using maximal algebraic curves over \(\mathbb{F}_{q^4}\) with large automorphism groups by choosing specific divisors. We discuss their parameters, compare the rate of these codes as well as their relative minimum distances, and we show that both codes are asymptotically good in terms of the rate which is in contrast to their behavior in terms of the relative minimum distance.Improved list-decodability of random linear binary codeshttps://zbmath.org/1527.940932024-02-28T19:32:02.718555Z"Li, Ray"https://zbmath.org/authors/?q=ai:li.ray"Wootters, Mary"https://zbmath.org/authors/?q=ai:wootters.mary.5Summary: There has been a great deal of work establishing that random linear codes are as list-decodable as uniformly random codes, in the sense that a random linear binary code of rate \(1-H(p)-\varepsilon\) is \((p,O(1/\varepsilon))\)-list-decodable with high probability. In this work, we show that such codes are \((p,H(p)/\varepsilon+2)\)-list-decodable with high probability, for any \(p\in(0,1/2)\) and \(\varepsilon>0\). In addition to improving the constant in known list-size bounds, our argument -- which is quite simple -- works simultaneously for all values of \(p\), while previous works obtaining \(L= O(1/\varepsilon)\) patched together different arguments to cover different parameter regimes.\par Our approach is to strengthen an existential argument by \textit{V. Guruswami} et al. [IEEE Trans. Inf. Theory 48, No. 5, 1021--1034 (2002; Zbl 1061.94074)] to hold with high probability. To complement our upper bound for random linear binary codes, we also improve an argument by \textit{V. Guruswami} and \textit{S. 
Narayanan} [IEEE Trans. Inf. Theory 60, No. 10, 5827--5842 (2014; Zbl 1360.94436)] to obtain a tight lower bound of \(1/\varepsilon\) on the list size of uniformly random binary codes; this implies that random linear binary codes are in fact more list-decodable than uniformly random binary codes, in the sense that the list sizes are strictly smaller. To demonstrate the applicability of these techniques, we use them to (a) obtain more information about the distribution of list sizes of random linear binary codes and (b) to prove a similar result for random linear rank-metric codes.
The journal version appeared in [IEEE Trans. Inf. Theory 67, No. 3, 1522--1536 (2021; Zbl 1473.94156)].
For the entire collection see [Zbl 1393.68012].How long can optimal locally repairable codes be?https://zbmath.org/1527.940942024-02-28T19:32:02.718555Z"Guruswami, Venkatesan"https://zbmath.org/authors/?q=ai:guruswami.venkatesan"Xing, Chaoping"https://zbmath.org/authors/?q=ai:xing.chaoping"Yuan, Chen"https://zbmath.org/authors/?q=ai:yuan.chenSummary: A locally repairable code (LRC) with locality \(r\) allows for the recovery of any erased codeword symbol using only \(r\) other codeword symbols. A Singleton-type bound dictates the best possible trade-off between the dimension and distance of LRCs -- an LRC attaining this trade-off is deemed optimal. Such optimal LRCs have been constructed over alphabets growing linearly in the block length. Unlike the classical Singleton bound, however, it was not known if such a linear growth in the alphabet size is necessary, or for that matter even if the alphabet needs to grow at all with the block length. Indeed, for small code distances 3, 4, arbitrarily long optimal LRCs were known over fixed alphabets.\par
Here, we prove that for distances \(d\ge 5\), the code length \(n\) of an optimal LRC over an alphabet of size \(q\) must be at most roughly \(O(dq^3)\). For the case \(d=5\), our upper bound is \(O(q^2)\). We complement these bounds by showing the existence of optimal LRCs of length \(\Omega_{d,r}(q^{1+1/\lfloor (d-3)/2\rfloor})\) when \(d\le r+2\). Our bounds match when \(d=5\), pinning down \(n=\Theta(q^2)\) as the asymptotically largest length of an optimal LRC for this case.
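The Singleton-type bound in question reads \(d \le n-k-\lceil k/r\rceil+2\). A tiny locality-\(1\) toy code (our example, not from the paper) that meets it can be checked by brute force:

```python
from itertools import product
from math import ceil

# binary [4,2] code with locality r = 1: each symbol equals its partner
G = [(1, 1, 0, 0), (0, 0, 1, 1)]
n, k, r = 4, 2, 1

C = {tuple(sum(c * g[i] for c, g in zip(cs, G)) % 2 for i in range(n))
     for cs in product((0, 1), repeat=k)}
d = min(sum(c) for c in C if any(c))       # minimum distance
bound = n - k - ceil(k / r) + 2            # Singleton-type bound for LRCs
locality_ok = all(c[0] == c[1] and c[2] == c[3] for c in C)
```

The code \(\{0000,1100,0011,1111\}\) has \(d=2\), matching the bound, so it is optimal in this degenerate case.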
The journal version appeared in [IEEE Trans. Inf. Theory 65, No. 6, 3662--3670 (2019; Zbl 1432.94213)].
For the entire collection see [Zbl 1393.68012].Grassl-Rötteler cyclic and constacyclic MDS codes are generalised Reed-Solomon codeshttps://zbmath.org/1527.940952024-02-28T19:32:02.718555Z"Ball, Simeon"https://zbmath.org/authors/?q=ai:ball.simeonSummary: We prove that the cyclic and constacyclic codes constructed by Grassl and Rötteler [International Symposium on Information Theory (ISIT), pp. 1104--1108 (2015)] are generalised Reed-Solomon codes. This note can be considered as an addendum to that paper. It can also be considered as an appendix to Ball and Vilar [IEEE Trans. Inf. Theory 68, 3796--3805 (2022)], where Conjecture 11 of the ISIT paper, which was stated for Grassl-Rötteler codes, is proven for generalised Reed-Solomon codes. The content of this note, together with [IEEE Trans. Inf. Theory 68, 3796--3805 (2022)], therefore implies that Conjecture 11 from the ISIT paper is true.On lower bounds for complexity over infinite bases for functions of multi-valued logichttps://zbmath.org/1527.940962024-02-28T19:32:02.718555Z"Andreev, A. A."https://zbmath.org/authors/?q=ai:andreev.aleksandr-anatolevichSummary: The complexity and depth of the realization of multi-valued logic functions by formulas and by circuits of functional gates over infinite incomplete bases are estimated. Some examples of infinite bases allowing high (including overexponential) lower bounds for complexity are presented.Improvement of nonmonotone complexity estimates of \(k\)-valued logic functionshttps://zbmath.org/1527.940972024-02-28T19:32:02.718555Z"Kochergin, V. V."https://zbmath.org/authors/?q=ai:kochergin.vadim-vasilevich"Mikhailovich, A.
V."https://zbmath.org/authors/?q=ai:mikhailovich.anna-vitalevnaSummary: The problem of determining the nonmonotone complexity of the implementation of \(k\)-valued logic functions by logic circuits in bases consisting of all monotone (with respect to the standard order) functions and finitely many nonmonotone functions is investigated. In calculating the complexity measure under examination only those elements of the circuit which are assigned nonmonotone basis functions are taken into account. The nonmonotone complexity of an arbitrary \(k\)-valued logic function is determined with high accuracy, namely, upper and lower bounds which differ by a constant not exceeding \(3 \log_2 k+4\) are found.On the complexity of circuits in bases containing monotone elements with zero weightshttps://zbmath.org/1527.940982024-02-28T19:32:02.718555Z"Kochergin, V. V."https://zbmath.org/authors/?q=ai:kochergin.vadim-vasilevich"Mikhaĭlovich, A. V."https://zbmath.org/authors/?q=ai:mikhailovich.anna-vitalevnaSummary: Complexity of Boolean functions and Boolean function systems realization in a base consisting of all monotone functions and a finite number of non-monotone functions is researched. The weight of any monotone function in the base is supposed to be 0, the weight of non-monotone function in it is positive. A. A. Markov studied special case of this base, where the non-monotone part consisted of the negation function. He showed that the minimum number of negation elements which are needed to realize an arbitrary function \(f\) equals \(\lceil\log_2(d(f)+1)\rceil \). Here \(d(f)\) is the maximum number of value changes from 1 to 0 over all increasing chains of arguments tuples. The aim of this paper is to prove that the complexity of a Boolean function \(f\) realization equals \(\rho\lceil\log_2(d(f)+1)\rceil-\mathrm{O}(1)\), where \(\rho\) is the minimum weight of non-monotone elements in the base. 
A similar generalization of Markov's classical result concerning the realization of systems of Boolean functions is obtained as well.Synthesis of combinational circuits by means of bi-decomposition of Boolean functionshttps://zbmath.org/1527.940992024-02-28T19:32:02.718555Z"Pottosin, Yu. V."https://zbmath.org/authors/?q=ai:pottosin.yu-vSummary: The problem of combinational circuit synthesis in a basis of two-input gates is considered. Those gates are AND, OR, NAND and NOR. A method for solving this problem by means of the bi-decomposition of Boolean functions is suggested. The method reduces the problem to the search for a weighted two-block cover, by complete bipartite subgraphs (bi-cliques), of the orthogonality graph of the rows of a ternary matrix representing the given Boolean function. Each bi-clique in the obtained cover is assigned, in a certain way, a set of variables that are arguments of the function. This set is the weight of the bi-clique. Each of those bi-cliques defines a Boolean function whose arguments are the variables assigned to it. The functions obtained in this way constitute the required decomposition. The process of combinational circuit synthesis consists in successively applying bi-decomposition to the functions obtained. The method for two-block covering of the orthogonality graph of the rows of a ternary matrix is described.Functional decomposability criteria for quadratic threshold Boolean functionshttps://zbmath.org/1527.941002024-02-28T19:32:02.718555Z"Shurupov, A. N."https://zbmath.org/authors/?q=ai:shurupov.a-nSummary: Threshold functions provide a simple but fundamental model for many questions investigated in image recognition, artificial neural networks and many other areas. In this paper, results on Boolean threshold function decomposition are extended to Boolean functions represented by a single quadratic inequality. Quadratic polynomials are the most compact non-linear polynomials, and this property is sometimes quite important.
We prove three criteria for the non-trivial decomposition of quadratic Boolean threshold functions. One of them can be applied without analysis of the truth table and uses only the threshold structure parameters.Constructing more quadratic APN functions with the QAM methodhttps://zbmath.org/1527.941012024-02-28T19:32:02.718555Z"Yu, Yuyin"https://zbmath.org/authors/?q=ai:yu.yuyin"Perrin, Léo"https://zbmath.org/authors/?q=ai:perrin.leoBased on the QAM method [\textit{Y. Yu} et al., Des. Codes Cryptography 73, No. 2, 587--600 (2014; Zbl 1320.11122)], by modifying small parts of the corresponding QAM, the authors generate 6794 quadratic APN functions in 8 variables. Investigating EA-equivalence by determining the differential and extended Walsh spectra of the corresponding ortho-derivatives [\textit{A. Canteaut} et al., IEEE Trans. Inf. Theory 68, No. 9, 6187--6206 (2022; Zbl 1515.94120)], they show that (at least) 5412 of them represent new EA-equivalence classes. This increases the number of known CCZ-equivalence classes of APN functions in \(8\) variables to 26525. The authors point out that none of their APN functions is CCZ-equivalent to a permutation.
Reviewer: Wilfried Meidl (Klagenfurt)