Recent zbMATH articles in MSC 94A https://zbmath.org/atom/cc/94A 2022-06-24T15:10:38.853281Z Unknown author Werkzeug Cyclic arrangements with minimum modulo $$m$$ winding numbers https://zbmath.org/1485.05019 2022-06-24T15:10:38.853281Z "Qian, Chengyang" https://zbmath.org/authors/?q=ai:qian.chengyang "Wu, Yaokun" https://zbmath.org/authors/?q=ai:wu.yaokun "Xiong, Yanzhen" https://zbmath.org/authors/?q=ai:xiong.yanzhen Summary: Let $$m$$ and $$n$$ be two positive integers. We use $$[m]$$ for the set $$\{1,\ldots ,m\}$$. For any integer $$i$$, $${\left\langle i \right\rangle}_m$$ designates the minimum nonnegative integer that is congruent to $$i$$ modulo $$m$$, and $${\mathcal{S}}_{m,i}^n$$ stands for the set of those $$x\in [m]^{[n]}$$ satisfying $$\sum_{t=1}^nx(t)$$ is congruent to $$i$$ modulo $$m$$. An enumeration of $${\mathcal{S}}_{m,i}^n$$ as $$x_1,\ldots, x_{m^{n-1}}$$ is called a tight balanced cyclic sequence if $$\sum_{p\in [m^{n-1}]} {\left\langle x_{p+1}(t)- x_p(t) \right\rangle}_m= \frac{m^n}{n}$$ holds for each $$t\in [n]$$, where the subscript $$m^{n-1}+1$$ should be understood as 1. Assuming that $$m^n$$ is a multiple of $$n$$, is there always a tight balanced cyclic sequence over $${\mathcal{S}}_{m,i}^n$$? We provide a positive answer to this question when $$n$$ is a power of $$m$$. Mosaics of combinatorial designs for information-theoretic security https://zbmath.org/1485.05021 2022-06-24T15:10:38.853281Z "Wiese, Moritz" https://zbmath.org/authors/?q=ai:wiese.moritz "Boche, Holger" https://zbmath.org/authors/?q=ai:boche.holger Summary: We study security functions which can serve to establish semantic security for the two central problems of information-theoretic security: the wiretap channel, and privacy amplification for secret key generation. The security functions are functional forms of mosaics of combinatorial designs, more precisely, of group divisible designs and balanced incomplete block designs. 
Every member of a mosaic is associated with a unique color, and each color corresponds to a unique message or key value. Every block index of the mosaic corresponds to a public seed shared between the two trusted communicating parties. The seed set should be as small as possible. We give explicit examples which have an optimal or nearly optimal trade-off of seed length versus color (i.e., message or key) rate. We also derive bounds for the security performance of security functions given by functional forms of mosaics of designs. Monomial generalized almost perfect nonlinear functions https://zbmath.org/1485.11169 2022-06-24T15:10:38.853281Z "Kuroda, Masamichi" https://zbmath.org/authors/?q=ai:kuroda.masamichi Decomposing signals from dynamical systems using shadow manifold interpolation https://zbmath.org/1485.37070 2022-06-24T15:10:38.853281Z "George, Erin" https://zbmath.org/authors/?q=ai:george.erin "Chan, Colleen E." https://zbmath.org/authors/?q=ai:chan.colleen-e "Dimand, Gal" https://zbmath.org/authors/?q=ai:dimand.gal "Chakmak, Ryan M." https://zbmath.org/authors/?q=ai:chakmak.ryan-m "Falcon, Claudia" https://zbmath.org/authors/?q=ai:falcon.claudia "Eckhardt, Daniel" https://zbmath.org/authors/?q=ai:eckhardt.daniel-q "Martin, Robert" https://zbmath.org/authors/?q=ai:martin.robert-scott|martin.robert-l|martin.robert-t-w|martin.robert-h-jun|martin.robert-paul|martin.robert-f|martin.robert-j.1|martin.robert-j On the stability of Fourier phase retrieval https://zbmath.org/1485.42009 2022-06-24T15:10:38.853281Z "Steinerberger, Stefan" https://zbmath.org/authors/?q=ai:steinerberger.stefan Summary: Phase retrieval is concerned with recovering a function $$f$$ from the absolute value of its Fourier transform $$|\widehat{f}|$$. We study the stability properties of this problem in Lebesgue spaces. 
Our main result shows that $$\|f-g\|_{L^2(\mathbb{R}^n)} \le 2\cdot \| |\widehat{f}| - |\widehat{g}| \|_{L^2(\mathbb{R}^n)} + h_f\Bigl(\|f-g\|_{L^p(\mathbb{R}^n)}\Bigr) + J(\widehat{f}, \widehat{g}),$$ where $$1 \le p < 2$$, $$h_f$$ is an explicit nonlinear function depending on the smoothness of $$f$$ and $$J$$ is an explicit term capturing the invariance under translations. A noteworthy aspect is that the stability is phrased in terms of $$L^p$$ for $$1 \le p < 2$$: in this region $$L^p$$ cannot be used to control $$L^2$$, so our stability estimate has the flavor of an inverse Hölder inequality. It seems conceivable that the estimate is optimal up to constants. An inertial iterative scheme for solving variational inclusion with application to Nash-Cournot equilibrium and image restoration problems https://zbmath.org/1485.47087 2022-06-24T15:10:38.853281Z "Abubakar, Jamilu" https://zbmath.org/authors/?q=ai:abubakar.jamilu "Kumam, Poom" https://zbmath.org/authors/?q=ai:kumam.poom "Garba, Abor Isa" https://zbmath.org/authors/?q=ai:garba.abor-isa "Abdullahi, Muhammad Sirajo" https://zbmath.org/authors/?q=ai:abdullahi.muhammad-sirajo "Ibrahim, Abdulkarim Hassan" https://zbmath.org/authors/?q=ai:ibrahim.abdulkarim-hassan "Sitthithakerngkiet, Kanokwan" https://zbmath.org/authors/?q=ai:sitthithakerngkiet.kanokwan Summary: Variational inclusion is an important general problem consisting of many useful problems like variational inequality, minimization problem and nonlinear monotone equations. In this article, a new scheme for solving the variational inclusion problem is proposed and the scheme uses inertial and relaxation techniques. Moreover, the scheme is self-adaptive, that is, the stepsize does not depend on the factorial constants of the underlying operator; instead, it can be computed using a simple updating rule. Weak convergence analysis of the iterates generated by the new scheme is presented under mild conditions. 
In addition, schemes for solving the variational inequality problem and the split feasibility problem are derived from the proposed scheme and applied to solving the Nash-Cournot equilibrium problem and image restoration. Experiments to illustrate the implementation and potential applicability of the proposed schemes in comparison with some existing schemes in the literature are presented. A fast viscosity forward-backward algorithm for convex minimization problems with an application in image recovery https://zbmath.org/1485.47097 2022-06-24T15:10:38.853281Z "Jailoka, Pachara" https://zbmath.org/authors/?q=ai:jailoka.pachara "Suantai, Suthep" https://zbmath.org/authors/?q=ai:suantai.suthep "Hanjing, Adisak" https://zbmath.org/authors/?q=ai:hanjing.adisak Summary: The purpose of this paper is to invent an accelerated algorithm for the convex minimization problem which can be applied to the image restoration problem. Theoretically, we first introduce an algorithm based on the viscosity approximation method with the inertial technique for finding a common fixed point of a countable family of nonexpansive operators. Under some suitable assumptions, a strong convergence theorem for the proposed algorithm is established. Subsequently, we utilize our proposed algorithm to solve a convex minimization problem of the sum of two convex functions. As an application, we apply and analyze our algorithm to image restoration problems. Moreover, we compare the convergence behavior and efficiency of our algorithm with other well-known methods such as the forward-backward splitting algorithm and the fast iterative shrinkage-thresholding algorithm. By using image quality metrics, numerical experiments show that our algorithm has a higher efficiency than the mentioned algorithms. 
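The forward-backward splitting and FISTA-type methods that this summary benchmarks against admit a compact sketch. The following minimal Python illustration targets the model problem $$\min_x \frac{1}{2}\|Ax-b\|^2+\lambda\|x\|_1$$; the matrix $$A$$, the $$\ell_1$$ regularizer, the function names and the step-size rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (the "backward" step for an l1 term).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, step, n_iter=200):
    """Minimal FISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    `step` should not exceed 1 / ||A^T A|| (reciprocal Lipschitz constant)."""
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                              # forward (gradient) step
        x_new = soft_threshold(y - step * grad, step * lam)   # backward (proximal) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)         # inertial extrapolation
        x, t = x_new, t_new
    return x
```

Dropping the extrapolation line (taking `y = x_new`) recovers the plain forward-backward splitting iteration, i.e. the unaccelerated baseline.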
Iterative methods for optimization problems and image restoration https://zbmath.org/1485.47104 2022-06-24T15:10:38.853281Z "Padcharoen, Anantachai" https://zbmath.org/authors/?q=ai:padcharoen.anantachai "Kitkuan, Duangkamon" https://zbmath.org/authors/?q=ai:kitkuan.duangkamon Summary: In this paper, we introduce a new accelerated iterative method for finding a common fixed point of a countable family of nonexpansive mappings in the Hilbert space framework. Using our main result, we obtain a new accelerated iterative method for image restoration that solves a minimization problem in the form of the sum of two proper lower semi-continuous and convex functions. As applications, we apply our algorithm to solving image restoration problems. Linear transformations in signal and optical systems https://zbmath.org/1485.47126 2022-06-24T15:10:38.853281Z "Zayed, Ahmed I." https://zbmath.org/authors/?q=ai:zayed.ahmed-i Summary: In this survey article some linear transformations that play a fundamental role in signal processing and optical systems are reviewed. After a brief discussion of the general theory of linear systems, specific linear transformations are introduced. An important class of signals to which most of these linear transformations are applied is the class of bandlimited signals and some of its generalizations. The article begins with an introduction to this class of signals and some of its properties, in particular, the property that a bandlimited signal can be perfectly reconstructed from its samples on a discrete set of points. The main tool for the reconstruction is known as the sampling theorem. Some of the transformations presented, such as the windowed Fourier transform, the continuous wavelet transform, the Wigner distribution function, the radar ambiguity function, and the ambiguity transformation, fall into the category of time-frequency, scale-translation, or phase-space representations. 
Such transformations make it possible to study physical systems from two different perspectives simultaneously. Another group of transformations presented is closely related to the Fourier transform, such as the fractional Fourier transform. Generalizations of the fractional Fourier transform, including the special affine Fourier transformation, and their applications in optical systems are introduced, together with sampling theorems for signals bandlimited in the domains of the aforementioned transformations. For the entire collection see [Zbl 1325.47001]. A generative variational model for inverse problems in imaging https://zbmath.org/1485.49042 2022-06-24T15:10:38.853281Z "Habring, Andreas" https://zbmath.org/authors/?q=ai:habring.andreas "Holler, Martin" https://zbmath.org/authors/?q=ai:holler.martin Axial preferred solutions for multiobjective optimal control problems: an application to chemical processes https://zbmath.org/1485.49054 2022-06-24T15:10:38.853281Z "Askarirobati, G. H." https://zbmath.org/authors/?q=ai:askarirobati.gholam-hosein "Borzabadi, A. H." https://zbmath.org/authors/?q=ai:borzabadi.akbar-hashemi "Heydari, Aghileh" https://zbmath.org/authors/?q=ai:heydari.aghileh Summary: Detecting the Pareto optimal solutions on the Pareto frontier is one of the most important topics in multiobjective optimal control problems. In real-world control systems, the decision-maker needs to apply their own opinion to find the preferred solution from a large list of Pareto optimal solutions. This paper presents a class of axial preferred solutions for multiobjective optimal control problems in contexts in which partial information on preference weights of objectives is available. These solutions combine the idea of an improvement axis with Pareto optimality with respect to preference information. 
The axial preferred solution, in addition to taking into account the considerations of decision-makers, provides continuous functions for controlling chemical processes. Numerical results are presented for two problems of chemical processes with two different preferential situations. A discrete framework to find the optimal matching between manifold-valued curves https://zbmath.org/1485.58013 2022-06-24T15:10:38.853281Z "Le Brigant, Alice" https://zbmath.org/authors/?q=ai:le-brigant.alice Summary: The aim of this paper is to find an optimal matching between manifold-valued curves, and thereby adequately compare their shapes, seen as equivalence classes with respect to the action of reparameterization. Using a canonical decomposition of a path in a principal bundle, we introduce a simple algorithm that finds an optimal matching between two curves by computing the geodesic of the infinite-dimensional manifold of curves that is at all times horizontal to the fibers of the shape bundle. We focus on the elastic metric studied in [\textit{A. Le Brigant}, J. Geom. Mech. 9, No. 2, 131--156 (2017; Zbl 1366.58005)] using the so-called square root velocity framework. The quotient structure of the shape bundle is examined, and in particular horizontality with respect to the fibers. These results are more generally given for any elastic metric. We then introduce a comprehensive discrete framework which correctly approximates the smooth setting when the base manifold has constant sectional curvature. It is itself a Riemannian structure on the product manifold $$M^n$$ of ``discrete curves'' given by $$n$$ points, and we show its convergence to the continuous model as the size $$n$$ of the discretization goes to $$\infty$$. Illustrations of geodesics and optimal matchings between discrete curves are given in the hyperbolic plane, the plane and the sphere, for synthetic and real data, and comparison with dynamic programming [\textit{A. Srivastava} and \textit{E. P. 
Klassen}, Functional and shape data analysis. New York, NY: Springer (2016; Zbl 1376.62003)] is established. PAS-TA-U: password-based threshold authentication with password update https://zbmath.org/1485.68026 2022-06-24T15:10:38.853281Z "Rawat, Rachit" https://zbmath.org/authors/?q=ai:rawat.rachit "Jhanwar, Mahabir Prasad" https://zbmath.org/authors/?q=ai:jhanwar.mahabir-prasad Summary: A single-sign-on (SSO) is an authentication system that allows a user to log in with a single identity and password to any of several related, yet independent, server applications. SSO solutions eliminate the need for users to repeatedly prove their identities to different applications and hold different credentials for each application. \textit{Token-based authentication} is commonly used to enable an SSO experience on the web, and on enterprise networks. A large body of work considers distributed token generation, which can protect the long-term keys against a subset of breached servers. A recent work [\textit{S. Agrawal} et al., ``PASTA: password-based threshold authentication'', in: Proceedings of the 2018 ACM SIGSAC conference on computer and communications security, CCS'18. New York, NY: Association for Computing Machinery (ACM). 2042--2059 (2018; \url{doi:10.1145/3243734.3243839})] introduced the notion of Password-based Threshold Authentication (PbTA) with the goal of making password-based token generation for SSO secure against server breaches that could compromise both long-term keys and user credentials. They also introduced a generic framework called PASTA that can instantiate a PbTA system. The existing SSO systems built on distributed token generation techniques, including the PASTA framework, do not admit password-update functionality. In this work, we address this issue by introducing a password-update functionality into the PASTA framework. We call the modified framework PAS-TA-U. 
As a concrete application, we instantiate PAS-TA-U to implement in Python a distributed SSH key manager for enterprise networks (ESKM) that also admits a password-update functionality for its clients. Our experiments show that the overhead of protecting secrets and credentials against breaches in our system compared to a traditional single server setup is low (on average 119 ms in a 10-out-of-10 server setting on the Internet with 80 ms round trip latency). For the entire collection see [Zbl 1475.68018]. Controlling and restoring the integrity of multi-dimensional data arrays through cryptocode constructs https://zbmath.org/1485.68090 2022-06-24T15:10:38.853281Z "Dichenko, S. A." https://zbmath.org/authors/?q=ai:dichenko.s-a "Finko, O. A." https://zbmath.org/authors/?q=ai:finko.o-a Summary: A method for controlling and restoring data integrity in multidimensional storage systems is developed. The proposed constructs for controlling and restoring the integrity of multi-dimensional data arrays are based on the aggregation of cryptographic methods and error correction codes. Based on the cryptocode structures, we demonstrate a specific feature of combining these approaches, namely, an increased probability of preserving information integrity under the damaging effects induced by a malefactor and the environment in multi-dimensional data storage systems. For the developed method, we estimate the probability of detection and correction of errors that occur in multi-dimensional data arrays and violate their integrity. 
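The aggregation of cryptographic integrity control with error-correcting redundancy described in the preceding summary can be sketched in a deliberately simplified one-dimensional form. The SHA-256 digests (detection) and the single XOR parity block (restoration of one damaged block) below are illustrative stand-ins for the paper's cryptocode constructs over multi-dimensional arrays, not the authors' method.

```python
import hashlib
from functools import reduce

def xor_bytes(a, b):
    # Bytewise XOR of two equal-length byte strings.
    return bytes(x ^ y for x, y in zip(a, b))

def protect(blocks):
    """Attach per-block SHA-256 digests (integrity control) and one XOR
    parity block (single-erasure restoration). Blocks must be equal length."""
    digests = [hashlib.sha256(b).digest() for b in blocks]
    parity = reduce(xor_bytes, blocks)
    return digests, parity

def verify_and_restore(blocks, digests, parity):
    """Detect a corrupted block via its digest and rebuild it from parity."""
    bad = [i for i, b in enumerate(blocks)
           if hashlib.sha256(b).digest() != digests[i]]
    if not bad:
        return blocks
    if len(bad) > 1:
        raise ValueError("more damaged blocks than the parity can restore")
    i = bad[0]
    others = (b for j, b in enumerate(blocks) if j != i)
    blocks[i] = reduce(xor_bytes, others, parity)
    return blocks
```

A real construction would use keyed primitives and a proper error-correcting code; a lone XOR parity block can restore at most one damaged block.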
A secure cloud-based IDPS using cryptographic traces and revocation protocol https://zbmath.org/1485.68091 2022-06-24T15:10:38.853281Z "Idrissi, Hind" https://zbmath.org/authors/?q=ai:idrissi.hind "Ennahbaoui, Mohammed" https://zbmath.org/authors/?q=ai:ennahbaoui.mohammed "El Hajji, Said" https://zbmath.org/authors/?q=ai:elhajji.said "Souidi, El Mamoun" https://zbmath.org/authors/?q=ai:souidi.el-mamoun Summary: Cloud computing is a revolutionary information technology that aims to provide reliable, customized and quality-of-service-guaranteed environments, where virtualized and dynamic data are stored and shared among cloud users. Thanks to its significant benefits, such as on-demand resources and low maintenance costs, cloud computing has become a trend in the area of new technologies that facilitates communication and access to information. Despite the aforementioned facts, the distributed and open nature of this paradigm makes privacy and security of the stored resources a major challenge that limits the use and adoption of cloud computing in practice. Among the strong security policies adopted to address this problem are Intrusion Detection and Prevention Systems (IDPS), which enable the cloud architecture to detect anomalies through monitoring the usage of stored resources, and then to react to prevent their expansion. In this paper, we propose a secure, reliable and flexible IDPS mainly based on autonomous mobile agents that are associated with a tracing and revocation protocol. While roaming among multiple cloud servers, our mobile agent is charged with executing requested tasks and collecting needed information. Thus, on each cloud server a ``cryptographic trace'' is produced in which all behaviors, results and data involved in the execution are recorded, which allows any possible intrusions to be identified and a response to be prepared that prevents them or ends their processing, through a server revocation technique based on a trust threshold. 
For the entire collection see [Zbl 1357.68009]. Integrated encryption in dynamic arithmetic compression https://zbmath.org/1485.68092 2022-06-24T15:10:38.853281Z "Klein, Shmuel T." https://zbmath.org/authors/?q=ai:klein.shmuel-tomi "Shapira, Dana" https://zbmath.org/authors/?q=ai:shapira.dana Summary: A compression cryptosystem based on adaptive arithmetic coding is proposed, in which the updates of the frequency tables for the underlying alphabet are done selectively, according to some secret key $$K$$. We give empirical evidence that the compression performance is not hurt, and discuss also aspects of the system being used as an encryption method. For the entire collection see [Zbl 1358.68010]. Linearly homomorphic authenticated encryption with provable correctness and public verifiability https://zbmath.org/1485.68093 2022-06-24T15:10:38.853281Z "Struck, Patrick" https://zbmath.org/authors/?q=ai:struck.patrick "Schabhüser, Lucas" https://zbmath.org/authors/?q=ai:schabhuser.lucas "Demirel, Denise" https://zbmath.org/authors/?q=ai:demirel.denise "Buchmann, Johannes" https://zbmath.org/authors/?q=ai:buchmann.johannes-a Summary: In this work the first linearly homomorphic authenticated encryption scheme with public verifiability and provable correctness, called \textsf{LEPCoV}, is presented. It improves the initial proposal by avoiding false negatives during the verification algorithm. This work provides a detailed description of \textsf{LEPCoV}, a comparison with the original scheme, a security and correctness proof, and a performance analysis showing that all algorithms run in reasonable time for parameters that are currently considered secure. The scheme presented here allows a user to outsource computations on encrypted data to the cloud, such that any third party can verify the correctness of the computations without having access to the original data. 
This makes this work an important contribution to cloud computing and applications where operations on sensitive data have to be performed, such as statistics on medical records and tallying of electronically cast votes. For the entire collection see [Zbl 1357.68009]. Multi-party protocols, information complexity and privacy https://zbmath.org/1485.68097 2022-06-24T15:10:38.853281Z "Kerenidis, Iordanis" https://zbmath.org/authors/?q=ai:kerenidis.iordanis "Rosén, Adi" https://zbmath.org/authors/?q=ai:rosen.adi "Urrutia, Florent" https://zbmath.org/authors/?q=ai:urrutia.florent Quantum algorithms for computing general discrete logarithms and orders with tradeoffs https://zbmath.org/1485.68100 2022-06-24T15:10:38.853281Z "Ekerå, Martin" https://zbmath.org/authors/?q=ai:ekera.martin The author generalizes earlier work on a modified version of Shor's algorithm for computing short discrete logarithms with tradeoffs [\textit{M. Ekerå} and \textit{J. Håstad}, Lect. Notes Comput. Sci. 10346, 347--363 (2017; Zbl 1437.94058)]. The modified algorithm does not need the order of the group to be known because it computes both the order and the logarithm. However, it requires the logarithm to be small in comparison with the group order (a ``short discrete logarithm''). The paper also includes a discussion of results of partial classical simulations of the suggested quantum algorithm. 
Reviewer: Alexander Yurevich Vlasov (Sankt-Peterburg) Global-affine and local-specific generative adversarial network for semantic-guided image generation https://zbmath.org/1485.68228 2022-06-24T15:10:38.853281Z "Zhang, Susu" https://zbmath.org/authors/?q=ai:zhang.susu "Ni, Jiancheng" https://zbmath.org/authors/?q=ai:ni.jiancheng "Hou, Lijun" https://zbmath.org/authors/?q=ai:hou.lijun "Zhou, Zili" https://zbmath.org/authors/?q=ai:zhou.zili "Hou, Jie" https://zbmath.org/authors/?q=ai:hou.jie "Gao, Feng" https://zbmath.org/authors/?q=ai:gao.feng.2|gao.feng.1|gao.feng.3|gao.feng|gao.feng.6 Summary: The recent progress in learning image feature representations has opened the way for tasks such as label-to-image or text-to-image synthesis. However, one particular challenge widely observed in existing methods is the difficulty of synthesizing fine-grained textures and small-scale instances. In this paper, we propose a novel Global-Affine and Local-Specific Generative Adversarial Network (GALS-GAN) to explicitly construct global semantic layouts and learn distinct instance-level features. To achieve this, we adopt the graph convolutional network to calculate the instance locations and spatial relationships from scene graphs, which allows our model to obtain high-fidelity semantic layouts. Also, a local-specific generator, where we introduce the feature filtering mechanism to separately learn semantic maps for different categories, is utilized to disentangle and generate specific visual features. Moreover, we especially apply a weight map predictor to better combine the global and local pathways, considering the high complementarity between these two generation sub-networks. Extensive experiments on the COCO-Stuff and Visual Genome datasets demonstrate the superior generation performance of our model against previous methods; our approach is more capable of capturing photo-realistic local characteristics and rendering small-sized entities with more detail. 
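The weight-map combination of the global and local generation pathways mentioned in the preceding summary can be illustrated with a toy sketch; the per-pixel sigmoid gating, the array shapes and the function names are assumptions made for illustration, not the published GALS-GAN architecture.

```python
import numpy as np

def fuse_pathways(global_feat, local_feat, weight_logits):
    """Blend a global-affine feature map with a local-specific one via a
    predicted per-pixel weight map; all arrays share the same shape."""
    w = 1.0 / (1.0 + np.exp(-weight_logits))   # sigmoid: weights in (0, 1)
    return w * global_feat + (1.0 - w) * local_feat
```

Zero logits average the two pathways; strongly positive logits favor the global semantic layout, strongly negative ones the instance-level local features.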
Inpainting via sparse recovery with directional constraints https://zbmath.org/1485.68273 2022-06-24T15:10:38.853281Z "Chen, Xuemei" https://zbmath.org/authors/?q=ai:chen.xuemei "Dobrosotskaya, Julia" https://zbmath.org/authors/?q=ai:dobrosotskaya.julia-a Summary: Image inpainting is a particular case of the image completion problem. We describe a novel method that amends the general scenario of using sparse or TV-based recovery for inpainting purposes by an efficient use of adaptive one-dimensional ``directional sensing'' into the unknown domain. We analyze the smoothness of the image near each pixel on the boundary of the unknown domain and formulate linear constraints designed to promote smooth transitions from the known domain in the directions where smooth behavior has been detected. We include a theoretical result relaxing the widely known sufficient condition of sparse recovery based on coherence, as well as observations on how adding the directional constraints can improve the well-posedness of sparse inpainting. The numerical implementation of our method is based on ADMM. Examples of inpainting of natural images and binary images with edges crossing the unknown domain demonstrate significant improvement of recovery quality in the presence of adaptive directional constraints. We conclude that the introduced framework is general enough to offer a lot of flexibility and be successfully utilized in a multitude of image recovery scenarios. Morphological decomposition and compression of binary images via a minimum set cover algorithm https://zbmath.org/1485.68274 2022-06-24T15:10:38.853281Z "Farazmanesh, Davod" https://zbmath.org/authors/?q=ai:farazmanesh.davod "Tavakoli, Ali" https://zbmath.org/authors/?q=ai:tavakoli.ali|tavakoli.ali-reza Summary: In this paper, by a novel morphological decomposition method, we propose an efficient compression approach. 
Our presented decomposition arises from solving a minimum set cover problem (MSCP) obtained from the image skeleton data. We first use the skeleton pixels to create a collection of blocks which cover the foreground. The resulting blocks both overlap and are too numerous. Hence, in order to find the minimum number of blocks that cover the foreground, we form an MSCP. To solve this problem, we present a new algorithm whose accuracy is better than those of the known methods in the literature. Also, its error analysis is studied to obtain an error bound. In the sequel, we present a fast algorithm which includes an extra parameter by which the relation between the accuracy and CPU time can be controlled. Finally, several examples are given to confirm the efficiency of our approach. Semi-supervised classification of hyperspectral images using discrete nonlocal variation Potts model https://zbmath.org/1485.68275 2022-06-24T15:10:38.853281Z "Ge, Linyao" https://zbmath.org/authors/?q=ai:ge.linyao "Huang, Baoxiang" https://zbmath.org/authors/?q=ai:huang.baoxiang "Wei, Weibo" https://zbmath.org/authors/?q=ai:wei.weibo "Pan, Zhenkuan" https://zbmath.org/authors/?q=ai:pan.zhenkuan Summary: The classification of Hyperspectral Image (HSI) plays an important role in various fields. To achieve more precise multi-target classification in a short time, a method for combining discrete non-local theory with traditional variable fraction Potts models is presented in this paper. The nonlocal operator makes better use of the information in a certain region centered on each pixel. Meanwhile, adding the constraint in the model can ensure that every pixel in the HSI has only one class. The proposed model is non-convex, nonlinear, and non-smooth, so it is difficult to achieve a globally optimal result. 
By introducing a series of auxiliary variables and using the alternating direction method of multipliers, the proposed classification model is transformed into a series of convex subproblems. Finally, we conducted comparison experiments with support vector machine (SVM), K-nearest neighbor (KNN), and convolutional neural network (CNN) on five different dimensional HSI data sets. The numerical results further illustrate that the proposed method is stable and efficient, and that our algorithm can obtain more accurate predictions in a shorter time, especially when classifying data sets with more spectral layers. Nonconvex total generalized variation model for image inpainting https://zbmath.org/1485.68276 2022-06-24T15:10:38.853281Z "Liu, Xinwu" https://zbmath.org/authors/?q=ai:liu.xinwu Summary: It is a challenging task to prevent the staircase effect and simultaneously preserve sharp edges in image inpainting. For this purpose, we present a novel nonconvex extension model that closely incorporates the advantages of total generalized variation and edge-enhancing nonconvex penalties. This improvement contributes to achieving a more natural restoration that exhibits smooth transitions without penalizing fine details. To efficiently seek the optimal solution of the resulting variational model, we develop a fast primal-dual method combined with the iteratively reweighted algorithm. Several experimental results, with respect to visual effects and restoration accuracy, show the excellent image inpainting performance of our proposed strategy over the existing powerful competitors. Distortion matrix approach for ultrasound imaging of random scattering media https://zbmath.org/1485.74048 2022-06-24T15:10:38.853281Z "Lambert, William" https://zbmath.org/authors/?q=ai:lambert.william.1 "Cobus, Laura A." 
https://zbmath.org/authors/?q=ai:cobus.laura-a "Aubry, Alexandre" https://zbmath.org/authors/?q=ai:aubry.alexandre Summary: Focusing waves inside inhomogeneous media is a fundamental problem for imaging. Spatial variations of wave velocity can strongly distort propagating wave fronts and degrade image quality. Adaptive focusing can compensate for such aberration but is only effective over a restricted field of view. Here, we introduce a full-field approach to wave imaging based on the concept of the distortion matrix. This operator essentially connects any focal point inside the medium with the distortion that a wave front, emitted from that point, experiences due to heterogeneities. A time-reversal analysis of the distortion matrix enables the estimation of the transmission matrix that links each sensor and image voxel. Phase aberrations can then be unscrambled for any point, providing a full-field image of the medium with diffraction-limited resolution. Importantly, this process is particularly efficient in random scattering media, where traditional approaches such as adaptive focusing fail. Here, we first present an experimental proof of concept on a tissue-mimicking phantom and then apply the method to in vivo imaging of human soft tissues. While introduced here in the context of acoustics, this approach can also be extended to optical microscopy, radar, or seismic imaging. Large-to-small scale frequency modulation analysis in wall-bounded turbulence via visibility networks https://zbmath.org/1485.76054 2022-06-24T15:10:38.853281Z "Iacobello, Giovanni" https://zbmath.org/authors/?q=ai:iacobello.giovanni "Ridolfi, Luca" https://zbmath.org/authors/?q=ai:ridolfi.luca "Scarsoglio, Stefania" https://zbmath.org/authors/?q=ai:scarsoglio.stefania Summary: Scale interaction is studied in wall-bounded turbulence by focusing on the frequency modulation (FM) mechanism of the large scales on small-scale velocity fluctuations. 
Different from the analysis of amplitude modulation (AM), FM has been less investigated owing to the difficulty of developing robust tools for broadband signals. To tackle this issue, the natural visibility graph approach is proposed in this work to map the full velocity signals into complex networks. We show that the network degree centrality is able to capture the signal structure at local scales directly from the full signal, thereby quantifying FM. Velocity signals from numerically-simulated turbulent channel flows and an experimental turbulent boundary layer are investigated at different Reynolds numbers. A correction of Taylor's hypothesis for time series is proposed to overcome the overprediction of near-wall FM obtained when the local mean velocity is used as the convective velocity. Results provide network-based evidence of the large-to-small FM features for all three velocity components in the near-wall region, with a reversal mechanism emerging far from the wall. Additionally, scaling arguments, in view of the quasi-steady quasi-homogeneous hypothesis, are discussed, and a delay time between the large and small scales is detected that is very close to the characteristic time of the near-wall cycle. Results show that the visibility graph is a parameter-free tool that turns out to be effective and robust in detecting FM in different configurations of wall-bounded turbulent flows. Based on the present findings, the visibility network-based approach can represent a reliable tool to systematically investigate scale interaction mechanisms in wall-bounded turbulence. Modulus of continuity of the quantum $$f$$-entropy with respect to the trace distance https://zbmath.org/1485.81014 2022-06-24T15:10:38.853281Z "Pinelis, Iosif" https://zbmath.org/authors/?q=ai:pinelis.iosif-froimovich Summary: A well-known result due to \textit{M. Fannes} [Commun. Math. Phys. 
31, 291--294 (1973; Zbl 1125.82310)] is a certain upper bound on the modulus of continuity of the von Neumann entropy with respect to the trace distance between density matrices; this distance is the maximum probability of distinguishing between the corresponding quantum states. Much more recently, \textit{K. M. R. Audenaert} [J. Phys. A, Math. Theor. 40, No. 28, 8127--8136 (2007; Zbl 1119.81017)] obtained an exact expression of this modulus of continuity. In the present note, Audenaert's result is extended to a broad class of entropy functions indexed by arbitrary continuous convex functions $$f$$ in place of the Shannon-von Neumann function $$x\mapsto x\log_2x$$. The proof is based on the Schur majorization. Machine-learning iterative calculation of entropy for physical systems https://zbmath.org/1485.81015 2022-06-24T15:10:38.853281Z "Nir, Amit" https://zbmath.org/authors/?q=ai:nir.amit "Sela, Eran" https://zbmath.org/authors/?q=ai:sela.eran "Bar-Sinai, Yohai" https://zbmath.org/authors/?q=ai:bar-sinai.yohai Summary: Characterizing the entropy of a system is a crucial, and often computationally costly, step in understanding its thermodynamics. It plays a key role in the study of phase transitions, pattern formation, protein folding, and more. Current methods for entropy estimation suffer from a high computational cost, lack of generality, or inaccuracy and inability to treat complex, strongly interacting systems. In this paper, we present a method, termed machine-learning iterative calculation of entropy (MICE), for calculating the entropy by iteratively dividing the system into smaller subsystems and estimating the mutual information between each pair of halves. The estimation is performed with a recently proposed machine-learning algorithm which works with arbitrary network architectures that can be chosen to fit the structure and symmetries of the system at hand. 
We show that our method can calculate the entropy of various systems, both thermal and athermal, with state-of-the-art accuracy. Specifically, we study various classical spin systems and identify the jamming point of a bidisperse mixture of soft disks. Finally, we suggest that besides its role in estimating the entropy, the mutual information itself can provide an insightful diagnostic tool in the study of physical systems. Improving data-driven model-independent reconstructions and updated constraints on dark energy models from Horndeski cosmology https://zbmath.org/1485.83056 2022-06-24T15:10:38.853281Z "Reyes, Mauricio" https://zbmath.org/authors/?q=ai:reyes.mauricio "Escamilla-Rivera, Celia" https://zbmath.org/authors/?q=ai:escamilla-rivera.celia Central charges for AdS black holes https://zbmath.org/1485.83088 2022-06-24T15:10:38.853281Z "Perry, Malcolm" https://zbmath.org/authors/?q=ai:perry.malcolm-j "Rodriguez, Maria J." https://zbmath.org/authors/?q=ai:rodriguez.maria-j-g Is cosmological tuning fine or coarse? https://zbmath.org/1485.83133 2022-06-24T15:10:38.853281Z "Díaz-Pachón, Daniel Andrés" https://zbmath.org/authors/?q=ai:diaz-pachon.daniel-andres "Hössjer, Ola" https://zbmath.org/authors/?q=ai:hossjer.ola-g "Marks, Robert J. 
II" https://zbmath.org/authors/?q=ai:marks.robert-j-ii Sourcing curvature modes with entropy perturbations in non-singular bouncing cosmologies https://zbmath.org/1485.83142 2022-06-24T15:10:38.853281Z "Ijjas, Anna" https://zbmath.org/authors/?q=ai:ijjas.anna "Kolevatov, Roman" https://zbmath.org/authors/?q=ai:kolevatov.roman Multifield inflation beyond $$N_{\mathrm{field}}=2$$: non-Gaussianities and single-field effective theory https://zbmath.org/1485.83172 2022-06-24T15:10:38.853281Z "Pinol, Lucas" https://zbmath.org/authors/?q=ai:pinol.lucas A necessary and sufficient condition for sparse vector recovery via $$\ell_1-\ell_2$$ minimization https://zbmath.org/1485.90095 2022-06-24T15:10:38.853281Z "Bi, Ning" https://zbmath.org/authors/?q=ai:bi.ning "Tang, Wai-Shing" https://zbmath.org/authors/?q=ai:tang.wai-shing Summary: In this paper, we focus on the $$\ell_1-\ell_2$$ minimization model, i.e., we investigate the nonconvex model: $\min \|x\|_1-\|x\|_2\text{ s.t. }Ax=y$ and provide a null space property of the measurement matrix $$A$$ such that a vector $$x$$ can be recovered from $$Ax$$ via $$\ell_1-\ell_2$$ minimization. The $$\ell_1-\ell_2$$ minimization model was first proposed by \textit{E. Esser} et al. [SIAM J. Imaging Sci. 6, No. 4, 2010--2046 (2013; Zbl 1282.90239)]. For a nonconvex model, it is well known that global and local minimizers are usually inconsistent. In this paper, we present a necessary and sufficient condition for the measurement matrix $$A$$ such that (1) a vector $$x$$ can be recovered from $$Ax$$ via $$\ell_1-\ell_2$$ local minimization (Theorem 4); (2) any $$k$$-sparse vector $$x$$ can be recovered from $$Ax$$ via $$\ell_1-\ell_2$$ local minimization (Theorem 5); (3) any $$k$$-sparse vector $$x$$ can be recovered from $$Ax$$ via $$\ell_1-\ell_2$$ global minimization (Theorem 6).
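A standard way to attack the nonconvex $$\ell_1-\ell_2$$ program above is the difference-of-convex algorithm (DCA): linearize the concave part $$-\|x\|_2$$ at the current iterate and solve the resulting weighted $$\ell_1$$ linear program. The sketch below illustrates this under the assumption that \texttt{scipy} is available; it is not the authors' algorithm, and the function names are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def objective(x):
    """The nonconvex objective ||x||_1 - ||x||_2."""
    return np.abs(x).sum() - np.linalg.norm(x)

def l1_minus_l2_dca(A, y, iters=10):
    """DCA sketch for  min ||x||_1 - ||x||_2  s.t.  Ax = y.

    Each step replaces -||x||_2 by its linearization -<g, x> with
    g = x_k / ||x_k||_2 (g = 0 at the origin) and solves the LP
      min sum(u) + sum(v) - g^T (u - v)  s.t.  A(u - v) = y, u, v >= 0.
    """
    m, n = A.shape
    A_eq = np.hstack([A, -A])          # split x = u - v, u, v >= 0
    x = np.zeros(n)
    g = np.zeros(n)                    # first pass is plain l1 minimization
    for _ in range(iters):
        c = np.concatenate([1.0 - g, 1.0 + g])   # c >= 0 since ||g||_2 <= 1
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
        x = res.x[:n] - res.x[n:]
        nrm = np.linalg.norm(x)
        g = x / nrm if nrm > 0 else np.zeros(n)
    return x
```

Because $$\|x\|_2 \ge \langle g, x\rangle$$ whenever $$\|g\|_2 \le 1$$, each surrogate majorizes the objective, so the DCA iterates have non-increasing objective values while staying feasible.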
Non-extensive minimal entropy martingale measures and semi-Markov regime switching interest rate modeling https://zbmath.org/1485.91235 2022-06-24T15:10:38.853281Z "Sheraz, Muhammad" https://zbmath.org/authors/?q=ai:sheraz.muhammad "Preda, Vasile" https://zbmath.org/authors/?q=ai:preda.vasile-c "Dedu, Silvia" https://zbmath.org/authors/?q=ai:dedu.silvia-cristina Summary: A minimal entropy martingale measure problem is studied to investigate risk-neutral densities and interest rate modelling. Hunt \& Devolder focused on the method of the Shannon minimal entropy martingale measure to select the best measure among all the equivalent martingale measures and proposed a generalization of the Ho \& Lee model in the semi-Markov regime-switching framework [\textit{J. Hunt} and \textit{P. Devolder}, ``Semi-Markov regime switching interest rate models and minimal entropy measure'', Physica A 390, No. 21--22, 3767--3781 (2011; \url{doi:10.1016/j.physa.2011.04.036})]. We formulate and solve the optimization problem of Hunt \& Devolder for deriving risk-neutral densities using a new non-extensive entropy measure [\textit{F. Shafee}, IMA J. Appl. Math. 72, No. 6, 785--800 (2007; Zbl 1180.82007)]. We use the Lambert function and a new type of approach to obtain results without depending on stochastic calculus techniques. ECG signals recruitment to implement a new technique for medical image encryption https://zbmath.org/1485.92058 2022-06-24T15:10:38.853281Z "Abdulbaqi, Azmi Shawkat" https://zbmath.org/authors/?q=ai:abdulbaqi.azmi-shawkat "Obaid, Ahmed J." https://zbmath.org/authors/?q=ai:obaid.ahmed-j "Mohammed, Alyaa Hashem" https://zbmath.org/authors/?q=ai:mohammed.alyaa-hashem Summary: In this manuscript, a medical image encryption (ImgEnc) algorithm utilizing the electrocardiogram (ECG) signal is proposed. It employs the chaotic logistic map and the generalized Arnold map (cat map).
To generate the initial conditions of the chaotic maps (ChMp), the ECG signal and the Wolf algorithm are utilized. Autoblocking diffusion (AutBlk) is performed only during encryption. A control parameter derived from the plain image (Plainimg) generates the key stream, making the scheme secure against chosen-plaintext (Plaintxt) and known-plaintext attacks. Experimental findings show that the proposed algorithm achieves high security with good efficiency. A joint medical image compression and encryption using AMBTC and hybrid chaotic system https://zbmath.org/1485.92065 2022-06-24T15:10:38.853281Z "Yadav, Anju" https://zbmath.org/authors/?q=ai:yadav.anju "Saini, Bhavna" https://zbmath.org/authors/?q=ai:saini.bhavna "Verma, Vivek K." https://zbmath.org/authors/?q=ai:verma.vivek-kumar "Pal, Vipin" https://zbmath.org/authors/?q=ai:pal.vipin-chandra Summary: Several imaging methods have been developed for wireless communications. During the worldwide COVID-19 pandemic, telemedicine plays a vital role in maintaining social distancing. In telemedicine, there is demand for high-compression and encryption approaches to overcome bandwidth and security issues. In this work, a joint compression and encryption (JCE) approach for images is proposed. The proposed model is more secure and computationally faster. For compression, the absolute moment block truncation coding (AMBTC) approach is used; for encryption, two chaotic maps, the Arnold cat map and the Hénon map, are applied to the compressed image. Further, the experimental results were studied to measure the performance of the model under brute-force, differential, and statistical attacks. The proposed scheme shows good results for both compression and security, i.e., NPCR = 99.86, UACI = 30.29, MSE = 108.91, PSNR = 28.34, correlation coefficient = $$-0.0054$$, and entropy = 7.97, compared with various other approaches.
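Both encryption schemes above use an Arnold (cat) map as a pixel-permutation stage. The classical, non-generalized map on an $$N\times N$$ image sends $$(x,y)\mapsto (x+y,\,x+2y) \bmod N$$; since its matrix has determinant 1, the scramble is exactly invertible. The papers use generalized/keyed variants, so the following is only a minimal illustration with made-up function names:

```python
import numpy as np

def arnold_cat(img, steps=1):
    """Scramble a square N x N image with the Arnold cat map
    (x, y) -> (x + y, x + 2y) mod N, applied `steps` times."""
    n = img.shape[0]
    assert img.shape == (n, n), "cat map needs a square image"
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(steps):
        nxt = np.empty_like(out)
        nxt[(xs + ys) % n, (xs + 2 * ys) % n] = out[xs, ys]  # scatter
        out = nxt
    return out

def arnold_cat_inv(img, steps=1):
    """Invert the scramble by gathering: I[x, y] = S[x + y, x + 2y]."""
    n = img.shape[0]
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(steps):
        out = out[(xs + ys) % n, (xs + 2 * ys) % n]
    return out
```

In an encryption pipeline the permutation stage is followed by a diffusion stage (e.g., XOR with a logistic- or Hénon-map keystream), since a permutation alone preserves the pixel histogram.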
Synchronization of a memristor chaotic system and image encryption https://zbmath.org/1485.93109 2022-06-24T15:10:38.853281Z "Li, Haoyu" https://zbmath.org/authors/?q=ai:li.haoyu "Wang, Leimin" https://zbmath.org/authors/?q=ai:wang.leimin "Lai, Qiang" https://zbmath.org/authors/?q=ai:lai.qiang Cepstral identification of autoregressive systems https://zbmath.org/1485.93129 2022-06-24T15:10:38.853281Z "Lauwers, Oliver" https://zbmath.org/authors/?q=ai:lauwers.oliver "Vermeersch, Christof" https://zbmath.org/authors/?q=ai:vermeersch.christof "De Moor, Bart" https://zbmath.org/authors/?q=ai:de-moor.bart-l-r Summary: Recent research has provided a better understanding of the power cepstrum, which has led to several applications in time series clustering, classification, and anomaly detection. It has also provided a deeper understanding of the theoretical framework that relates the power cepstrum with some system theoretic properties of the underlying dynamics. In this paper, we pursue the intricate connections between the power cepstrum of a signal and the pole polynomial of the underlying generative model. In this way, we develop a simple and extremely efficient method to identify an autoregressive (AR) system, starting from the power cepstrum of its output signal. This general framework uses Newton's identities to set up a system of elementary symmetric polynomials over the cepstral coefficients and results in an identification algorithm that is independent of the length of the power cepstrum, with computational complexity only linearly dependent on the order of the model. We provide several numerical examples, first on synthetic time series, then on the classical Yule sunspot numbers modeling problem, and finally on a contemporary application involving structural health monitoring. 
Subsequently, the novel system identification algorithm is employed to provide insight into the results of weighted cepstral clustering, showing that the model estimated from the center of a cluster provides a good estimator for the dynamics in that cluster. Inverse variational principles for control systems https://zbmath.org/1485.93248 2022-06-24T15:10:38.853281Z "Wang, Tao" https://zbmath.org/authors/?q=ai:wang.tao.2|wang.tao.10|wang.tao.7|wang.tao|wang.tao.11|wang.tao.12|wang.tao.1|wang.tao.4|wang.tao.6|wang.tao.9|wang.tao.8|wang.tao.3|wang.tao.5 "Huang, Yu" https://zbmath.org/authors/?q=ai:huang.yu Advances in signal processing. Theories, algorithms, and system control https://zbmath.org/1485.94001 2022-06-24T15:10:38.853281Z Publisher's description: This book attempts to improve algorithms by novel theories and complex data analysis in different scopes, including object detection, remote sensing, data transmission, data fusion, gesture recognition, and medical image processing and analysis. The book is directed to Ph.D. students, professors, researchers, and software developers working in the areas of digital video processing and computer vision technologies. The articles of this volume will be reviewed individually. Secure communication scheme based on synchronization of non-identical hyperchaotic systems https://zbmath.org/1485.94002 2022-06-24T15:10:38.853281Z "Olusola, O. I." https://zbmath.org/authors/?q=ai:olusola.olasunkanmi-i "Oyeleke, K. S." https://zbmath.org/authors/?q=ai:oyeleke.k-s "Vincent, U. E." https://zbmath.org/authors/?q=ai:vincent.uchechukwu-e "Njah, A. N." https://zbmath.org/authors/?q=ai:njah.abdulahi-n Summary: In this work, a secure communication scheme based on the synchronization of non-identical hyperchaotic systems is considered. Using the hyperchaotic Lorenz and Lü systems as prototype oscillators, a secure communication scheme based on synchronization of different hyperchaotic systems with unknown parameters is presented.
The communication scheme consists of a transmitter, which comprises the hyperchaotic carrier and modulator, and a receiver, which comprises the hyperchaotic response and a demodulator. Appropriate controllers and parameter update laws are designed to achieve synchronization between the hyperchaotic drive and hyperchaotic response systems and to simultaneously estimate the unknown parameters. The message signal is recovered using the identified parameters and the corresponding demodulation method. Numerical simulations were performed to show the validity and feasibility of the designed secure communication scheme. Multiscale structural complexity of natural patterns https://zbmath.org/1485.94003 2022-06-24T15:10:38.853281Z "Bagrov, Andrey A." https://zbmath.org/authors/?q=ai:bagrov.andrey-a "Iakovlev, Ilia A." https://zbmath.org/authors/?q=ai:iakovlev.ilia-a "Mazurenko, Vladimir V." https://zbmath.org/authors/?q=ai:mazurenko.vladimir-v Summary: The complexity of a pattern is key information enabling the human brain to distinguish objects of about the same size and shape. Like other innate human senses, complexity perception cannot be easily quantified. We propose a transparent and universal machine method for estimating the structural (effective) complexity of two-dimensional and three-dimensional patterns that can be straightforwardly generalized onto other classes of objects. It is based on multistep renormalization of the pattern of interest and computing the overlap between neighboring renormalized layers. This way, we can define a single number characterizing the structural complexity of an object. We apply this definition to quantify the complexity of various magnetic patterns and demonstrate that not only does it reflect the intuitive feeling of what is ``complex'' and what is ``simple'', but it can also be used to accurately detect different phase transitions and gain information about the dynamics of nonequilibrium systems.
When employed for these tasks, the proposed scheme is much simpler and numerically cheaper than standard methods based on computing correlation functions or using machine-learning techniques. Image segmentation using level set driven by generalized divergence https://zbmath.org/1485.94004 2022-06-24T15:10:38.853281Z "Dai, Ming" https://zbmath.org/authors/?q=ai:dai.ming "Zhou, Zhiheng" https://zbmath.org/authors/?q=ai:zhou.zhiheng "Wang, Tianlei" https://zbmath.org/authors/?q=ai:wang.tianlei "Guo, Yongfan" https://zbmath.org/authors/?q=ai:guo.yongfan Summary: Image segmentation is an important analysis tool in the field of computer vision. In this paper, on the basis of the traditional level set method, a novel segmentation model using generalized divergences is proposed. The main advantage of generalized divergences is that they smoothly connect various well-known and frequently used fundamental divergences within a single formula. Therefore, the discrepancy between two probability distributions of segmented image parts can be measured by generalized divergences. We also provide a way to determine the optimal divergence automatically for different images. Experimental results on a variety of synthetic and natural images are presented, which demonstrate the potential of the proposed method. Compared with previous active contour models formulated to solve the same nonparametric statistical segmentation problem, our method performs better both qualitatively and quantitatively.
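The abstract does not specify which generalized divergence family the segmentation model uses; a standard example with exactly this ``one formula connecting many divergences'' property is the Amari $$\alpha$$-divergence, sketched below under that assumption (illustrative code only, not the authors' implementation):

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between discrete probability vectors:

        D_a(p || q) = (sum_i p_i^a * q_i^(1-a) - 1) / (a * (a - 1)),

    for a not in {0, 1}.  As a -> 1 it tends to KL(p || q) and as
    a -> 0 to KL(q || p), so one formula smoothly interpolates
    between classical divergences.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if alpha == 1.0:                      # KL(p || q) limit
        return float(np.sum(p * np.log(p / q)))
    if alpha == 0.0:                      # KL(q || p) limit
        return float(np.sum(q * np.log(q / p)))
    return float((np.sum(p**alpha * q**(1.0 - alpha)) - 1.0)
                 / (alpha * (alpha - 1.0)))
```

In a level-set segmentation energy, $$p$$ and $$q$$ would be the intensity histograms of the regions inside and outside the contour; tuning $$\alpha$$ then moves continuously between divergence choices.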
Metamorphic image registration using a semi-Lagrangian scheme https://zbmath.org/1485.94005 2022-06-24T15:10:38.853281Z "François, Anton" https://zbmath.org/authors/?q=ai:francois.anton "Gori, Pietro" https://zbmath.org/authors/?q=ai:gori.pietro "Glaunès, Joan" https://zbmath.org/authors/?q=ai:glaunes.joan-alexis Summary: In this paper, we propose an implementation of both Large Deformation Diffeomorphic Metric Mapping (LDDMM) and Metamorphosis image registration using a semi-Lagrangian scheme for geodesic shooting. We propose to solve both problems as an inexact matching providing a single and unifying cost function. We demonstrate that for image registration the use of a semi-Lagrangian scheme is more stable than a standard Eulerian scheme. Our GPU implementation is based on \texttt{PyTorch}, which greatly simplifies and accelerates the computations thanks to its powerful automatic differentiation engine. It will be freely available at \url{https://github.com/antonfrancois/Demeter_metamorphosis}. For the entire collection see [Zbl 1482.94007]. Currents and $$K$$-functions for fiber point processes https://zbmath.org/1485.94006 2022-06-24T15:10:38.853281Z "Hansen, Pernille E. H." https://zbmath.org/authors/?q=ai:hansen.pernille-e-h "Waagepetersen, Rasmus" https://zbmath.org/authors/?q=ai:waagepetersen.rasmus-plenge "Svane, Anne Marie" https://zbmath.org/authors/?q=ai:svane.anne-marie "Sporring, Jon" https://zbmath.org/authors/?q=ai:sporring.jon "Stephensen, Hans J. T." https://zbmath.org/authors/?q=ai:stephensen.hans-j-t "Hasselholt, Stine" https://zbmath.org/authors/?q=ai:hasselholt.stine "Sommer, Stefan" https://zbmath.org/authors/?q=ai:sommer.stefan Summary: Analysis of images of sets of fibers such as myelin sheaths or skeletal muscles must account for both the spatial distribution of fibers and differences in fiber shape. This necessitates a combination of point process and shape analysis methodology. 
In this paper, we develop a $$K$$-function for fiber-valued point processes by embedding shapes as currents, thus equipping the point process domain with metric structure inherited from a reproducing kernel Hilbert space. We extend Ripley's $$K$$-function, which measures deviations from complete spatial randomness of point processes, to fiber data. The paper provides a theoretical account of the statistical foundation of the $$K$$-function, and we apply the $$K$$-function to simulated data and a data set of myelin sheaths. This includes a fiber data set consisting of myelin sheath configurations at different depths. For the entire collection see [Zbl 1482.94007]. Bidimensional empirical mode decomposition-based diffusion filtering for image denoising https://zbmath.org/1485.94007 2022-06-24T15:10:38.853281Z "Kommuri, Sethu Venkata Raghavendra" https://zbmath.org/authors/?q=ai:kommuri.sethu-venkata-raghavendra "Singh, Himanshu" https://zbmath.org/authors/?q=ai:singh.himanshu "Kumar, Anil" https://zbmath.org/authors/?q=ai:kumar.anil "Bajaj, Varun" https://zbmath.org/authors/?q=ai:bajaj.varun Summary: In this paper, a new method is proposed for image denoising inspired by the bidimensional empirical mode decomposition algorithm and diffusion-based filtering. In the bidimensional empirical mode decomposition technique, the noisy image is decomposed into its respective intrinsic mode functions, and the high-frequency and low-frequency noises are removed with the help of the proposed diffusion filtering method using its parameters such as connectivity, conductance function, gradient threshold, and the number of iterations. The image is reconstructed with the help of these denoised intrinsic mode functions. Performance of the proposed technique is evaluated in terms of the peak signal-to-noise ratio, the structural similarity index, and the mean square error, and is then compared with existing image-denoising techniques.
The experimental results show that the bidimensional empirical mode decomposition method combined with the diffusion filtering algorithm is effective for image denoising, both qualitatively and quantitatively. An efficient total variation minimization method for image restoration https://zbmath.org/1485.94008 2022-06-24T15:10:38.853281Z "Pham, Cong Thang" https://zbmath.org/authors/?q=ai:pham.cong-thang "Tran, Thi Thu Thao" https://zbmath.org/authors/?q=ai:tran.thi-thu-thao "Gamard, Guilhem" https://zbmath.org/authors/?q=ai:gamard.guilhem Summary: In this paper, we present an effective algorithm for solving the Poisson-Gaussian total variation model. The existence and uniqueness of the solution for the mixed Poisson-Gaussian model are proved. Due to the strict convexity of the model, the split-Bregman method is employed to solve the minimization problem. Experimental results show the effectiveness of the proposed method for mixed Poisson-Gaussian noise removal. Comparison with other existing and well-known methods is provided as well. Local reliability weighting explains identification of partially masked objects in natural images https://zbmath.org/1485.94009 2022-06-24T15:10:38.853281Z "Sebastian, Stephen" https://zbmath.org/authors/?q=ai:sebastian.stephen "Seemiller, Eric S." https://zbmath.org/authors/?q=ai:seemiller.eric-s "Geisler, Wilson S." https://zbmath.org/authors/?q=ai:geisler.wilson-s.1 Summary: A fundamental natural visual task is the identification of specific target objects in the environments that surround us. It has long been known that some properties of the background have strong effects on target visibility. The most well-known properties are the luminance, contrast, and similarity of the background to the target. In previous studies, we found that these properties have highly lawful effects on detection in natural backgrounds.
However, there is another important factor affecting detection in natural backgrounds that has received little or no attention in the masking literature, which has been concerned with detection in simpler backgrounds. Namely, in natural backgrounds the properties of the background often vary under the target, and hence some parts of the target are masked more than others. We began studying this factor, which we call the ``partial masking factor,'' by measuring detection thresholds in backgrounds of contrast-modulated white noise that was constructed so that the standard template-matching (TM) observer performs equally well whether or not the noise contrast modulates in the target region. If noise contrast is uniform in the target region, then this TM observer is the Bayesian optimal observer. However, when the noise contrast modulates, the Bayesian optimal observer weights the template at each pixel location by the estimated reliability at that location. We find that human performance for modulated noise backgrounds is predicted by this reliability-weighted TM (RTM) observer. More surprisingly, we find that human performance for natural backgrounds is also predicted by the RTM observer. Differential pulse code modulation and motion aligned optimal reconstruction for block-based compressive video sensing using conditional autoregressive-salp swarm algorithm https://zbmath.org/1485.94010 2022-06-24T15:10:38.853281Z "Sekar, R." https://zbmath.org/authors/?q=ai:sekar.r-chandra-guru|sekar.ramamurthy|sekar.rajam "Ravi, G."
https://zbmath.org/authors/?q=ai:ravi.g Research on image signal identification based on adaptive array stochastic resonance https://zbmath.org/1485.94011 2022-06-24T15:10:38.853281Z "Zhao, Jingjing" https://zbmath.org/authors/?q=ai:zhao.jingjing "Ma, Yumei" https://zbmath.org/authors/?q=ai:ma.yumei "Pan, Zhenkuan" https://zbmath.org/authors/?q=ai:pan.zhenkuan "Zhang, Huage" https://zbmath.org/authors/?q=ai:zhang.huage Summary: To address the low accuracy of image signal identification and poor robustness to noise interference in strong-noise environments, this paper proposes a signal identification method for correlated noisy images based on adaptive array stochastic resonance (SR). Firstly, the two-dimensional grayscale image is transformed into a periodic one-dimensional binary pulse amplitude modulation (BPAM) signal by row- or column-wise scanning, encoding, and modulation. Then, the low signal-to-noise-ratio BPAM signal is fed into the saturating-nonlinearity array SR module for image signal identification, converting part of the noise energy into signal energy. Finally, the one-dimensional image signal processed by the nonlinearities is demodulated, decoded, and reverse-scanned to obtain the restored grayscale image. The simulation results show that the image signal identification method proposed in this paper is highly efficient and accurate for the identification of noisy image signals of different sizes, and the bit error rate (BER) is also significantly reduced. A grid-based nonlinear approach to noise reduction and deconvolution for coupled systems https://zbmath.org/1485.94012 2022-06-24T15:10:38.853281Z "Araki, Samuel J." https://zbmath.org/authors/?q=ai:araki.samuel-j "Koo, Justin W." https://zbmath.org/authors/?q=ai:koo.justin-w "Martin, Robert S."
https://zbmath.org/authors/?q=ai:martin.robert-scott "Dankongkakul, Ben" https://zbmath.org/authors/?q=ai:dankongkakul.ben Summary: To varying degrees, all experimental measurements are corrupted by real-world noise sources including electronic noise in the acquisition system, far-field perturbations in the surrounding environment, and local physical phenomena. This paper presents a grid-based nonlinear analysis technique for causally coupled systems which leverages the availability of a high-fidelity reference signal to effectively denoise a target measurement signal. The foundation of this approach, essentially an ensemble-averaging procedure in multidimensional phase space, is strongly motivated by work from dynamical systems theory. Furthermore, a straightforward extension of this technique allows for recovery of a time-resolved representation from the underresolved measurement signal. Both the nonlinear noise reduction and this temporal deconvolution extension are applied to signals from three different coupled dynamic systems (sine wave system, Lorenz system, and Hall-effect thruster) to demonstrate effectiveness on periodic, chaotic, and experimental systems. Geometry of the phase retrieval problem. Graveyard of algorithms https://zbmath.org/1485.94013 2022-06-24T15:10:38.853281Z "Barnett, Alexander H." https://zbmath.org/authors/?q=ai:barnett.alexander-h "Epstein, Charles L." https://zbmath.org/authors/?q=ai:epstein.charles-l "Greengard, Leslie" https://zbmath.org/authors/?q=ai:greengard.leslie-f "Magland, Jeremy" https://zbmath.org/authors/?q=ai:magland.jeremy-f Publisher's description: Recovering the phase of the Fourier transform is a ubiquitous problem in imaging applications from astronomy to nanoscale X-ray diffraction imaging. Despite the efforts of a multitude of scientists, from astronomers to mathematicians, there is, as yet, no satisfactory theoretical or algorithmic solution to this class of problems. 
Written for mathematicians, physicists and engineers working in image analysis and reconstruction, this book introduces a conceptual, geometric framework for the analysis of these problems, leading to a deeper understanding of the essential, algorithmically independent, difficulty of their solutions. Using this framework, the book studies standard algorithms and a range of theoretical issues in phase retrieval and provides several new algorithms and approaches to this problem with the potential to improve the reconstructed images. The book is lavishly illustrated with the results of numerous numerical experiments that motivate the theoretical development and place it in the context of practical applications. Two-dimensional robust source localization under non-Gaussian noise https://zbmath.org/1485.94014 2022-06-24T15:10:38.853281Z "Bouiba, A." https://zbmath.org/authors/?q=ai:bouiba.a "Korso, M. N. El" https://zbmath.org/authors/?q=ai:korso.m-n-el "Breloy, A." https://zbmath.org/authors/?q=ai:breloy.arnaud "Forster, P." https://zbmath.org/authors/?q=ai:forster.philippe "Hamadouche, M." https://zbmath.org/authors/?q=ai:hamadouche.mohand-ameziane|hamadouche.mhamed "Lagha, M." https://zbmath.org/authors/?q=ai:lagha.mohand|lagha.maher Summary: Various methods have been proposed to estimate the direction of arrival (DOA) of sources under the assumption of Gaussian noise. This assumption, based on the central limit theorem, has been mainly used because it offers an appropriate model in a homogeneous environment. Nevertheless, under certain conditions, the Gaussian hypothesis cannot fully represent the noisy environment. Consequently, these classical estimation methods are no longer suitable, and the use of non-Gaussian noise model is necessary. In this paper, we therefore treat the DOA estimation problem in a non-Gaussian framework based on the maximum likelihood approach using a compound-Gaussian (CG) noise model. 
We then propose the use of the expectation maximization (EM) algorithm to reduce the computational cost of the proposed algorithm. Several simulations are carried out to illustrate the interest of our approach compared to the state of the art. Solving phase retrieval with random initial guess is nearly as good as by spectral initialization https://zbmath.org/1485.94015 2022-06-24T15:10:38.853281Z "Cai, Jian-Feng" https://zbmath.org/authors/?q=ai:cai.jian-feng "Huang, Meng" https://zbmath.org/authors/?q=ai:huang.meng "Li, Dong" https://zbmath.org/authors/?q=ai:li.dong|li.dong.4|li.dong.1|li.dong.2|li.dong.3 "Wang, Yang" https://zbmath.org/authors/?q=ai:wang.yang.1|wang.yang Summary: The problem of recovering a signal $$\mathfrak{x} \in \mathbb{R}^n$$ from a set of magnitude measurements $$y_i = | \langle \mathfrak{a}_i, \mathfrak{x} \rangle |$$, $$i = 1, \ldots, m$$ is referred to as phase retrieval, which has many applications in the physical sciences and engineering. In this paper we show that the smoothed amplitude flow based model for phase retrieval has benign geometric structure under the optimal sampling complexity. In particular, we show that when the measurements $$\mathfrak{a}_i \in \mathbb{R}^n$$ are Gaussian random vectors and the number of measurements $$m \geq C n$$, our smoothed amplitude flow based model has no spurious local minimizers with high probability, i.e., the target solution $$\mathfrak{x}$$ is the unique global minimizer (up to a global phase) and the loss function has a negative directional curvature around each saddle point. Due to this benign geometric landscape, the phase retrieval problem can be solved by gradient descent algorithms without spectral initialization. Numerical experiments show that the gradient descent algorithm with random initialization performs well even compared with state-of-the-art algorithms with spectral initialization, in terms of empirical success rate and convergence speed.
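The ``random initialization suffices'' phenomenon is easy to reproduce with plain gradient descent on the (unsmoothed) intensity loss. The sketch below is a generic Wirtinger-flow-style illustration, not the authors' smoothed amplitude flow model; the step-size rule and function names are assumptions chosen for this toy scale.

```python
import numpy as np

def phase_retrieval_gd(A, y, iters=5000, step=0.05, seed=1):
    """Minimize L(z) = (1/4m) * sum(((a_i^T z)^2 - y_i^2)^2) by
    gradient descent from a RANDOM initial guess (no spectral init)."""
    m, n = A.shape
    z = np.random.default_rng(seed).standard_normal(n)  # random start
    lr = step / np.mean(y**2)       # mean(y^2) estimates ||x||^2
    for _ in range(iters):
        r = A @ z
        z = z - lr * (A.T @ ((r**2 - y**2) * r)) / m    # gradient step
    return z

def dist_up_to_sign(z, x):
    """Real signals are recoverable only up to a global sign."""
    return min(np.linalg.norm(z - x), np.linalg.norm(z + x))
```

On a toy instance with Gaussian measurements and $$m \gg n$$ (the regime $$m \geq Cn$$ of the abstract), the iterates typically converge to $$\pm\mathfrak{x}$$ despite the random start.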
Communication-reducing algorithm of distributed least mean square algorithm with neighbor-partial diffusion https://zbmath.org/1485.94016 2022-06-24T15:10:38.853281Z "Chen, Feng" https://zbmath.org/authors/?q=ai:chen.feng|chen.feng.1 "Deng, Shuwei" https://zbmath.org/authors/?q=ai:deng.shuwei "Hua, Yi" https://zbmath.org/authors/?q=ai:hua.yi "Duan, Shukai" https://zbmath.org/authors/?q=ai:duan.shukai "Wang, Lidan" https://zbmath.org/authors/?q=ai:wang.lidan "Wu, Jiagui" https://zbmath.org/authors/?q=ai:wu.jiagui Summary: With the development of distributed algorithms, many researchers are committed to the goal of maintaining the long-term stability of the network by reducing the communication cost. However, many algorithms that lessen communication costs often result in a significant decrease in estimation accuracy. In order to reduce the communication cost with less performance degradation, the distributed neighbor-partial diffusion least-mean-square algorithm (NPDLMS) is proposed in this paper. In addition, considering the data redundancy in the network, we propose the distributed data-selection NPDLMS algorithm, which further improves the estimation accuracy and reduces the communication cost. In the performance analysis, the stability and the communication cost of the algorithms are given. Wigner analysis of operators. I: Pseudodifferential operators and wave fronts https://zbmath.org/1485.94017 2022-06-24T15:10:38.853281Z "Cordero, Elena" https://zbmath.org/authors/?q=ai:cordero.elena "Rodino, Luigi" https://zbmath.org/authors/?q=ai:rodino.luigi Summary: We perform Wigner analysis of linear operators. Namely, the standard time-frequency representation, the short-time Fourier transform (STFT), is replaced by the $$\mathcal{A}$$-Wigner distribution defined by $$W_{\mathcal{A}}(f) = \mu(\mathcal{A})(f \otimes \bar{f})$$, where $$\mathcal{A}$$ is a $$4 d \times 4 d$$ symplectic matrix and $$\mu(\mathcal{A})$$ is an associated metaplectic operator.
Basic examples are given by the so-called $$\tau$$-Wigner distributions. Such representations provide a new characterization of modulation spaces when $$\tau \in(0, 1)$$. Furthermore, they can be efficiently employed in the study of the off-diagonal decay for pseudodifferential operators with symbols in the Sjöstrand class (in particular, in the Hörmander class $$S_{0 , 0}^0)$$. The novelty lies in defining time-frequency representations via metaplectic operators, developing a conceptual framework and paving the way for a new understanding of quantization procedures. We deduce microlocal properties for pseudodifferential operators in terms of the Wigner wave front set. Finally, we compare the Wigner wave front set with the global Hörmander wave front set and identify the possible presence of a ghost region in the Wigner wave front. In the second part of the paper, applications to Fourier integral operators and Schrödinger equations will be given. Hierarchical isometry properties of hierarchical measurements https://zbmath.org/1485.94018 2022-06-24T15:10:38.853281Z "Flinth, Axel" https://zbmath.org/authors/?q=ai:flinth.axel "Groß, Benedikt" https://zbmath.org/authors/?q=ai:gross.benedict-h "Roth, Ingo" https://zbmath.org/authors/?q=ai:roth.ingo "Eisert, Jens" https://zbmath.org/authors/?q=ai:eisert.jens "Wunder, Gerhard" https://zbmath.org/authors/?q=ai:wunder.gerhard Summary: Compressed sensing studies linear recovery problems under structure assumptions. We introduce a new class of measurement operators, coined hierarchical measurement operators, and prove results guaranteeing the efficient, stable and robust recovery of hierarchically structured signals from such measurements. We derive bounds on their hierarchical restricted isometry properties based on the restricted isometry constants of their constituent matrices, generalizing and extending prior work on Kronecker-product measurements. As an exemplary application, we apply the theory to two communication scenarios.
The fast and scalable HiHTP algorithm is shown to be suitable for solving these types of problems and its performance is evaluated numerically in terms of sparse signal recovery and block detection capability. Tell me about \dots\ compressive sensing https://zbmath.org/1485.94019 2022-06-24T15:10:38.853281Z "Foucart, Simon" https://zbmath.org/authors/?q=ai:foucart.simon Summary: Around 2005, the scientific and technological community became passionate about a new subject called compressive sensing, a term interchangeable with compressed sensing and compressive sampling. At the beginning of 2021, a combined Google Scholar search for these three terms returns roughly seventeen thousand results. While this profusion is explained by interest coming from many disciplines, it is worth noting that the subject does have a mathematical origin, in the foundational works of \textit{E. J. Candès} et al. [IEEE Trans. Inf. Theory 52, No. 2, 489--509 (2006; Zbl 1231.94017)] and \textit{D. L. Donoho} [Commun. Pure Appl. Math. 59, No. 6, 797--829 (2006; Zbl 1113.15004)]. Some fifteen years after these works, it is timely to take stock of the theory developed since. Many readers have heard vaguely of compressive sensing, no doubt associating with it the keywords of sparsity, $$\ell_1$$-minimization, and random matrices, perhaps reducing the subject to (a subset of) these three notions. I will endeavor to give a broader, though necessarily incomplete, description. Readers wishing to go further will find more details in the books [\textit{D. Chafaï} et al., Interactions between compressed sensing random matrices and high dimensional geometry. Paris: Société Mathématique de France (SMF) (2012; Zbl 1396.94015); \textit{Y. Eldar} and \textit{G. Kutyniok}, Compressed sensing: theory and applications. Cambridge: Cambridge University Press (2012); \textit{S. Foucart} and \textit{H. Rauhut}, A mathematical introduction to compressive sensing. New York, NY: Birkhäuser/Springer (2013; Zbl 1315.94002)] or in the article by \textit{S. Foucart} [Springer Proc. Math. Stat. 201, 61--104 (2017; Zbl 1391.94211)]. Diffusion-probabilistic least mean square algorithm https://zbmath.org/1485.94020 2022-06-24T15:10:38.853281Z "Guan, Sihai" https://zbmath.org/authors/?q=ai:guan.sihai "Meng, Chun" https://zbmath.org/authors/?q=ai:meng.chun "Biswal, Bharat" https://zbmath.org/authors/?q=ai:biswal.bharat-b Summary: In this paper, a novel diffusion estimation algorithm is proposed from a probabilistic perspective by combining the diffusion strategy and the probabilistic least mean square (LMS) at all distributed network nodes. The proposed method, namely diffusion-probabilistic LMS (DPLMS), is more robust to the input signal and impulsive noise than previous algorithms such as the diffusion sign-error LMS (DSE-LMS), diffusion robust variable step-size LMS (DRVSSLMS), and diffusion least logarithmic absolute difference (DLLAD) algorithms. Instead of minimizing the estimation error, the DPLMS algorithm is based on approximating the posterior distribution with an isotropic Gaussian distribution. In this paper, the stability of the mean estimation error and the computational complexity of the DPLMS algorithm are analyzed theoretically. Simulation experiments are conducted to explore the mean estimation error of the DPLMS algorithm under varied input-signal and impulsive-interference conditions, in comparison with the DSE-LMS, DRVSSLMS, and DLLAD algorithms. Results from both the theoretical analysis and the simulations suggest that the DPLMS algorithm outperforms the DSE-LMS, DRVSSLMS, and DLLAD algorithms when estimating an unknown linear system in changing impulsive-noise environments.
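As background for the diffusion strategy that the DPLMS abstract builds on, here is a minimal sketch of standard adapt-then-combine (ATC) diffusion LMS over a small ring network; it is not the DPLMS (or NPDLMS) algorithm itself, and the network topology, step size, and noise level are illustrative assumptions of ours.

```python
import numpy as np

def atc_diffusion_lms(w0, neighbors, mu=0.01, iters=3000, seed=0):
    """Adapt-then-combine diffusion LMS: every node runs a local LMS
    step on its own streaming data (adapt), then averages estimates
    with its neighbors (combine); full diffusion, no communication
    reduction."""
    rng = np.random.default_rng(seed)
    N, M = len(neighbors), len(w0)
    W = np.zeros((N, M))                   # per-node estimates
    for _ in range(iters):
        psi = np.empty_like(W)
        for k in range(N):                 # adapt: local LMS update
            u = rng.standard_normal(M)     # regressor
            d = u @ w0 + 0.01 * rng.standard_normal()  # noisy observation
            psi[k] = W[k] + mu * (d - u @ W[k]) * u
        for k in range(N):                 # combine: neighborhood average
            W[k] = psi[neighbors[k]].mean(axis=0)
    return W

w0 = np.array([0.5, -1.0, 2.0])            # unknown system to estimate
ring = [np.array([(k - 1) % 6, k, (k + 1) % 6]) for k in range(6)]  # 6-node ring
W = atc_diffusion_lms(w0, ring)
```

Cooperation through the combine step lets every node of the ring converge to the common unknown vector, which is the baseline behavior the probabilistic and partial-diffusion variants above modify.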
An efficient carrier frequency offset tracking for OFDMA systems using normalized least-mean-square algorithm https://zbmath.org/1485.94021 2022-06-24T15:10:38.853281Z "Ilaiyaraja, S." https://zbmath.org/authors/?q=ai:ilaiyaraja.s "Balasubadra, K." https://zbmath.org/authors/?q=ai:balasubadra.k "Senthil, B." https://zbmath.org/authors/?q=ai:senthil.balakrishnan Summary: In orthogonal frequency-division multiple access (OFDMA), carrier frequency offset (CFO) destroys the orthogonality between subcarriers due to the Doppler effect of the channel, causing inter-carrier interference (ICI) as well as multiple access interference (MAI). In this paper, a CFO tracking algorithm for OFDMA systems is presented to overcome the effects of ICI and MAI. The proposed algorithm employs the normalized least-mean-square (NLMS) algorithm, which tracks the residual carrier frequency offset (RCFO) in an OFDMA system. The proposed NLMS-based tracking algorithm uses the average of all RCFOs to improve the mean square error (MSE) performance. The proposed NLMS-based frequency tracking, together with a frequency offset compensator, improves the error performance by reducing the effects of ICI and MAI. Simulation results indicate that the proposed algorithm achieves better tracking performance in the presence of frequency offset. From the results, it is observed that the MSE of the proposed estimator is improved by about 3.8 dB at SNR = 20 dB with frequency offset. An adaptive algorithm for calculating crosstalk error for blind source separation https://zbmath.org/1485.94022 2022-06-24T15:10:38.853281Z "Lang, Rongling" https://zbmath.org/authors/?q=ai:lang.rongling "Ye, Wanyang" https://zbmath.org/authors/?q=ai:ye.wanyang "Zhao, Fei" https://zbmath.org/authors/?q=ai:zhao.fei "Li, Zi" https://zbmath.org/authors/?q=ai:li.zi Summary: The crosstalk error is widely used to evaluate the performance of blind source separation.
However, it requires knowing the global separation matrix in advance, and it is not robust. To address these problems, a new adaptive algorithm for calculating the crosstalk error is presented: it computes the crosstalk error from a least-squares cost function, and the robustness of the crosstalk error is improved by incorporating the position of the maximum value in the global separation matrix. Finally, the method is compared with conventional RLS algorithms in terms of performance, robustness and convergence rate, and its validity is verified by simulation experiments and experiments on real-world signals. Efficient clustering of non-coherent and coherent components regardless of sources' powers for 2D DOA estimation https://zbmath.org/1485.94023 2022-06-24T15:10:38.853281Z "Molaei, Amir Masoud" https://zbmath.org/authors/?q=ai:molaei.amir-masoud "Zakeri, Bijan" https://zbmath.org/authors/?q=ai:zakeri.bijan "Hosseini Andargoli, Seyed Mehdi" https://zbmath.org/authors/?q=ai:andargoli.seyed-mehdi-hosseini Summary: Conventional decorrelation techniques that resolve all signals simultaneously are not efficient in mixed scenarios of non-coherent and coherent signals. In newer methods for one-dimensional arrays, non-coherent signals and coherent groups are resolved separately. However, these methods employ an unreliable, non-adaptive threshold, which is their most significant disadvantage, and they cannot be implemented for two-dimensional arrays. To deal with these issues, the signals separation using $$k$$-medoids clustering (SSKMC) algorithm was presented. Although the SSKMC algorithm has none of the shortcomings mentioned above, it relies on a basic limiting assumption that the sources be equi-power. The practical application of the SSKMC algorithm therefore faces a serious problem.
In this paper, the SSKMC algorithm is extended so that it can be used even if the sources' powers are not the same. First, the two-dimensional array is divided into several parallel linear sub-arrays. Then, by defining a component-separation matrix and employing its eigenvalues, the non-coherent and coherent components are identified. The effectiveness of the proposed solution is established mathematically. Simulation results verify the proofs and the benefit of the proposed solution. A note on reconstruction of bandlimited signals of several variables sampled at Nyquist rate https://zbmath.org/1485.94024 2022-06-24T15:10:38.853281Z "Norvidas, Saulius" https://zbmath.org/authors/?q=ai:norvidas.saulius Summary: A standard problem in certain applications requires one to find a reconstruction of an analogue signal $$f$$ from a sequence of its samples $$f(t_k)_k$$. The success of such a reconstruction lies in the fact that, under additional assumptions, an analogue signal $$f$$ of a real variable $$t \in \mathbb{R}$$ can be represented equivalently by a sequence of complex numbers $$f(t_k)_k$$, i.e. by a digital signal. This digital signal can then be processed and filtered very efficiently, for example, on digital computers. Sampling theory is one of the theoretical foundations of the conversion from analog to digital signals. There is a long list of impressive research results in this area, starting with the classical work of Shannon. Note that the well-known Shannon sampling theory is mainly for signals of one variable. In this paper, we are concerned with bandlimited signals of several variables whose restriction to Euclidean space $$\mathbb{R}^n$$ has finite $$p$$-energy. We present sampling series in which signals are sampled at the Nyquist rate. These series involve digital samples of the signals and also samples of their partial derivatives.
It is important that our reconstruction is stable in the sense that the sampling series converge absolutely and uniformly on the whole of $$\mathbb R^n$$. With such a stable reconstruction process, it is possible to bound the approximation error incurred by using only a partial sum with finitely many samples. Generalized fractional ambiguity function and its applications https://zbmath.org/1485.94025 2022-06-24T15:10:38.853281Z "Sahay, Peeyush" https://zbmath.org/authors/?q=ai:sahay.peeyush "Shaik Rasheed, Izaz Ahamed" https://zbmath.org/authors/?q=ai:shaik-rasheed.izaz-ahamed "Kulkarni, Pranav" https://zbmath.org/authors/?q=ai:kulkarni.pranav "Jain, Shubham Anand" https://zbmath.org/authors/?q=ai:jain.shubham-anand "Anjarlekar, Ameya" https://zbmath.org/authors/?q=ai:anjarlekar.ameya "Radhakrishna, P." https://zbmath.org/authors/?q=ai:radhakrishna.p "Gadre, Vikram M." https://zbmath.org/authors/?q=ai:gadre.vikram-m Summary: The ambiguity function (AF) is an essential time-frequency analysis tool for analyzing radar waveform properties in radar applications. It can be used effectively and reliably to analyze properties such as the peak-to-side-lobe ratio, time-delay resolution, Doppler resolution and tolerance characteristics. However, it fails to analyze higher-order chirp waveforms and is unable to estimate their parameters. To solve this problem, a generalized time-frequency transform-based generalized fractional AF (GFAF) and a generalized fractional Wigner-Ville distribution (GFWVD) are proposed. GFAF is also a generalization of the Fourier transform-based ambiguity function and the fractional Fourier transform-based ambiguity function. The uncertainty principle for GFAF and GFWVD is derived. Examples are presented to demonstrate the effectiveness of GFAF in analyzing cubic chirp waveforms and estimating parameters of multicomponent cubic chirps.
The superiority of GFAF is demonstrated by comparing its mean square error with the Cramér-Rao lower bound and with the high-order ambiguity function under different input signal-to-noise ratio conditions. Its robustness is demonstrated by comparing its signal-to-noise ratio gain with that of time-domain matched filtering and other ambiguity functions. Finally, fourth-order parameters of a real bat echolocation signal are estimated. Non-convex total variation regularization for convex denoising of signals https://zbmath.org/1485.94026 2022-06-24T15:10:38.853281Z "Selesnick, Ivan" https://zbmath.org/authors/?q=ai:selesnick.ivan-w "Lanza, Alessandro" https://zbmath.org/authors/?q=ai:lanza.alessandro "Morigi, Serena" https://zbmath.org/authors/?q=ai:morigi.serena "Sgallari, Fiorella" https://zbmath.org/authors/?q=ai:sgallari.fiorella Summary: Total variation (TV) signal denoising is a popular nonlinear filtering method to estimate piecewise constant signals corrupted by additive white Gaussian noise. Following a ``convex non-convex'' strategy, recent papers have introduced non-convex regularizers for signal denoising that preserve the convexity of the cost function to be minimized. In this paper, we propose a non-convex TV regularizer, defined using concepts from convex analysis, that unifies, generalizes, and improves upon these regularizers. In particular, we use the generalized Moreau envelope which, unlike the usual Moreau envelope, incorporates a matrix parameter. We describe a novel approach to set the matrix parameter which is essential for realizing the improvement we demonstrate. Additionally, we describe a new set of algorithms for non-convex TV denoising that elucidate the relationship among them and which build upon fast exact algorithms for classical TV denoising.
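For reference, the classical convex 1-D TV denoising problem that the non-convex regularizers above improve upon can be solved by projected gradient ascent on its dual; this is a generic textbook method sketched under our own parameter choices, not one of the authors' algorithms.

```python
import numpy as np

def tv_denoise_1d(y, lam, iters=1000):
    """Classical convex 1-D TV denoising
       min_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    via projected gradient ascent on the dual variable p (|p_i| <= lam),
    with the primal recovered as x = y - D^T p, D = finite differences."""
    p = np.zeros(len(y) - 1)
    tau = 0.25                          # step size, 1/||D D^T||
    for _ in range(iters):
        x = y.copy()
        x[:-1] += p                     # x = y - D^T p
        x[1:] -= p
        p = np.clip(p + tau * np.diff(x), -lam, lam)   # ascent + projection
    x = y.copy()
    x[:-1] += p
    x[1:] -= p
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])    # piecewise constant
y = clean + 0.3 * rng.standard_normal(100)             # noisy observation
x = tv_denoise_1d(y, lam=0.5)
```

The output is nearly piecewise constant with a smaller total variation than the noisy input, which is exactly the behavior (along with its contrast-reducing bias) that motivates the non-convex regularizers discussed above.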
Two-dimensional off-grid DOA estimation with improved three-parallel coprime arrays on moving platform https://zbmath.org/1485.94027 2022-06-24T15:10:38.853281Z "Si, Weijian" https://zbmath.org/authors/?q=ai:si.weijian "Zeng, Fuhong" https://zbmath.org/authors/?q=ai:zeng.fuhong "Zhang, Chunjie" https://zbmath.org/authors/?q=ai:zhang.chunjie "Peng, Zhanli" https://zbmath.org/authors/?q=ai:peng.zhanli Summary: In this paper, the two-dimensional (2-D) direction-of-arrival (DOA) estimation problem is investigated by constructing an array model on a moving platform. Based on the moving array model, we propose two improved three-parallel coprime arrays (TPCPAs), which exploit the redundancy in the physical structure of the moving TPCPA (MTPCPA) and are able to generate the same virtual array as the MTPCPA using far fewer sensors. Accordingly, the proposed arrays achieve higher sensor utilization, which contributes to an increased number of degrees of freedom. Moreover, with the proposed arrays, we present a 2-D off-grid DOA estimation algorithm in which the estimated 2-D angles are automatically paired without an extra pairing procedure. In particular, by jointly utilizing the $$\ell_p$$ $$(0<p<1)$$ norm and the majorization-minimization method, the proposed algorithm solves the grid mismatch problem effectively and accordingly enhances the 2-D DOA estimation performance. Finally, numerical simulations demonstrate the effectiveness and superiority of the proposed arrays and the 2-D off-grid DOA estimation algorithm. Direct symbol decoding using GA-SVM in chaotic baseband wireless communication system https://zbmath.org/1485.94028 2022-06-24T15:10:38.853281Z "Yin, Hui-Ping" https://zbmath.org/authors/?q=ai:yin.hui-ping "Ren, Hai-Peng" https://zbmath.org/authors/?q=ai:ren.haipeng Summary: Retrieving information from a seriously distorted received signal is the key challenge of communication signal processing.
Chaotic baseband communication theoretically promises to eliminate inter-symbol interference (ISI); however, doing so requires complicated, if not infeasible, calculations. In this paper, a genetic algorithm support vector machine (GA-SVM) based symbol detection method is proposed for the chaotic baseband wireless communication system (CBWCS). Treating the problem from a different viewpoint, the symbol decoding process is converted into a binary classification through the GA-SVM model. A trained GA-SVM model is used to decode the symbols directly at the receiver, which improves the bit error rate (BER) performance of the CBWCS and simplifies symbol detection by removing the channel identification and threshold calculation required by the traditional threshold-based decoding methods. The simulation results show that the proposed method has better BER performance in both static and time-varying wireless channels. The experimental results, based on the wireless open-access research platform, indicate that the BER of the proposed GA-SVM based symbol detection approach is superior to that of the other counterparts under a practical wireless multipath channel. Steering vector optimization using subspace-based constraints for robust adaptive beamforming https://zbmath.org/1485.94029 2022-06-24T15:10:38.853281Z "Zhang, Pan" https://zbmath.org/authors/?q=ai:zhang.pan Summary: To address the issue of steering vector mismatch, a robust adaptive beamforming design via steering vector optimization is proposed in this paper. Different from conventional studies, this paper resolves the exact desired signal (DS) steering vector by formulating an array output power maximization problem subject to noise subspace (NS) based and interference subspace (IS) based constraints.
Since the NS is readily obtained while the IS is difficult to acquire, two efficient interference-plus-noise covariance matrix (INCM) reconstruction methods, namely direct DS matrix removal from the sample covariance matrix and indirect DS blocking from training data with matrix transition, are derived to estimate the IS with high accuracy. After resolving the DS steering vector, the weight vectors are extracted with an orthogonal projection (OP) criterion. Numerical simulations verify that the devised methods outperform existing ones and obtain nearly optimal performance across a wide range of DS powers. Gridless super-resolution sparse recovery for non-sidelooking STAP using reweighted atomic norm minimization https://zbmath.org/1485.94030 2022-06-24T15:10:38.853281Z "Zhang, Tao" https://zbmath.org/authors/?q=ai:zhang.tao.2|zhang.tao.4|zhang.tao.1|zhang.tao.5|zhang.tao.6 "Hu, Yongsheng" https://zbmath.org/authors/?q=ai:hu.yongsheng "Lai, Ran" https://zbmath.org/authors/?q=ai:lai.ran Summary: Sparse recovery space-time adaptive processing (SR-STAP) can reduce the required number of clutter samples and suppress clutter effectively using limited training samples in airborne radar. Commonly, in SR-STAP methods the whole angle-Doppler plane is uniformly discretized into small grid points. However, in a non-sidelooking SR-STAP radar the clutter patches deviate from the pre-discretized grid points. This off-grid effect degrades the SR-STAP performance significantly. In this paper, a gridless SR-STAP method based on reweighted atomic norm minimization is proposed, in which the clutter spectrum is precisely estimated in the continuous angle-Doppler domain without a resolution limit. Numerical simulations are conducted, and the results show that the proposed method achieves better performance than SR-STAP methods with discretized dictionaries and SR-STAP methods utilizing atomic norm minimization.
An off-grid block-sparse Bayesian method for direction of arrival and polarization estimation https://zbmath.org/1485.94031 2022-06-24T15:10:38.853281Z "Zhao, Pinjiao" https://zbmath.org/authors/?q=ai:zhao.pinjiao "Hu, Guobing" https://zbmath.org/authors/?q=ai:hu.guobing "Zhou, Hongcheng" https://zbmath.org/authors/?q=ai:zhou.hongcheng Summary: The problem of DOA and polarization parameter estimation is considered in this paper from the perspective of sparse reconstruction. We present a novel off-grid hierarchical block-sparse Bayesian method for DOA and polarization parameter estimation to improve the estimation accuracy. First, an off-grid model is formulated via the first-order Taylor expansion of the source steering vector. Then, a block-sparse vector is constructed based on sparse Bayesian inference, on which a two-layer hierarchical prior is imposed to promote block sparsity and internal sparsity simultaneously. Finally, the variables and model parameters are updated alternately by adopting the variational Bayesian approximation. In addition, the Cramér-Rao bound for DOA and polarization estimation, the convergence property and the computational complexity analysis of the proposed method are derived. Compared with existing sparse reconstruction methods and traditional subspace-based methods, the proposed method achieves higher estimation accuracy. Simulation results demonstrate the effectiveness and notable performance of the proposed method. Uncertainty principles for the two-sided quaternion linear canonical transform https://zbmath.org/1485.94032 2022-06-24T15:10:38.853281Z "Zhu, Xiaoyu" https://zbmath.org/authors/?q=ai:zhu.xiaoyu "Zheng, Shenzhou" https://zbmath.org/authors/?q=ai:zheng.shenzhou Summary: The quaternion linear canonical transform (QLCT), as a generalized form of the quaternion Fourier transform, is a powerful analyzing tool in image and signal processing.
In this paper, we propose three different forms of uncertainty principles for the two-sided QLCT: Hardy's uncertainty principle, Beurling's uncertainty principle and Donoho-Stark's uncertainty principle. These results describe the quantitative relationships of a quaternion-valued signal in two arbitrary QLCT domains, which have many applications in signal recovery and color image analysis. In addition, in order to analyze non-stationary signals and time-varying systems, we present Lieb's uncertainty principle for the two-sided short-time quaternion linear canonical transform (SQLCT) based on the Hausdorff-Young inequality. By adding a nonzero quaternion-valued window function, the two-sided SQLCT has significant applications in the study of the local frequency spectrum of a signal. Finally, we also give a lower bound for the essential support of the two-sided SQLCT. Calibration method of anchor position in indoor environment based on two-step extended Kalman filter https://zbmath.org/1485.94033 2022-06-24T15:10:38.853281Z "Tian, Xin" https://zbmath.org/authors/?q=ai:tian.xin "Wei, Guoliang" https://zbmath.org/authors/?q=ai:wei.guoliang "Zhou, Jun" https://zbmath.org/authors/?q=ai:zhou.jun.1|zhou.jun.3|zhou.jun.2 Summary: In this paper, a calibration method based on a two-step extended Kalman filter (EKF) is proposed. First, the four vertex positions of a rectangle in the environment are calibrated. Specifically, in the first stage the initial position states of all anchor nodes are obtained using a rough localization method. In the second stage, the first- and second-step EKFs are used to obtain the real-time state of the measured target and all anchor nodes. The state estimation of all anchor nodes is achieved by employing the iterative process of the two-step EKF. The effectiveness and stability of the proposed algorithm are verified by simulations and experiments.
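To illustrate the kind of filtering involved in such anchor calibration, here is a minimal single EKF estimating one static 2-D anchor position from noisy range measurements to known tag positions; it is our own hypothetical sketch (with a small random-walk process noise so the filter keeps adapting), not the paper's two-step scheme, and every geometry and noise parameter is an assumption.

```python
import numpy as np

def ekf_anchor(tags, ranges, x0, P0, q_var, r_var):
    """EKF for one static 2-D anchor: state = anchor position,
    identity motion model with random-walk noise q_var, and range
    measurement h(x) = ||x - tag|| with variance r_var."""
    x, P = x0.astype(float), P0.astype(float)
    I = np.eye(2)
    for t, z in zip(tags, ranges):
        P = P + q_var * I                  # predict (static state)
        d = np.linalg.norm(x - t)
        H = ((x - t) / d).reshape(1, 2)    # Jacobian of the range model
        S = float(H @ P @ H.T) + r_var     # innovation variance
        K = (P @ H.T) / S                  # 2x1 Kalman gain
        x = x + K.ravel() * (z - d)        # update with range innovation
        P = (I - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
anchor = np.array([3.0, 1.5])                          # unknown anchor
th = np.linspace(0.0, 4 * np.pi, 400)                  # two loops of a tag track
tags = np.c_[5 * np.cos(th), 5 * np.sin(th)]           # known tag positions
ranges = np.linalg.norm(tags - anchor, axis=1) + 0.05 * rng.standard_normal(400)

est, P = ekf_anchor(tags, ranges, np.zeros(2), np.eye(2), 1e-4, 0.05**2)
```

Because the tag track surrounds the anchor, the range measurements constrain both coordinates, and the estimate settles close to the true anchor position.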
On information links https://zbmath.org/1485.94034 2022-06-24T15:10:38.853281Z "Baudot, Pierre" https://zbmath.org/authors/?q=ai:baudot.pierre Summary: In [\textit{P. Baudot} and \textit{D. Bennequin}, ``The homological nature of entropy'', Entropy 17, No. 5, 3253--3318 (2015; \url{doi:10.3390/e17053253})], we suggested that the (negative) minima of the 3-way multivariate mutual information correspond to Borromean links, paving the way for probabilistic analogs of linking numbers. This short note generalizes the correspondence of the minima of $$k$$-way multivariate interaction information with $$k$$ Brunnian links in the binary variable case. Following [\textit{A. Jakulin} and \textit{I. Bratko}, ``Quantifying and visualizing attribute interactions'', Preprint, \url{arXiv:cs/0308002}], the negativity of the associated K-L divergence of the joint probability law with respect to its Kirkwood approximation implies an obstruction to local decomposition into interactions of order lower than $$k$$, defining a local decomposition inconsistency that reverses Abramsky's contextuality local-global relation [\textit{S. Abramsky} and \textit{A. Brandenburger}, New J. Phys. 13, No. 11, Article ID 113036, 39 p. (2011; Zbl 1448.81028)]. Those negative $$k$$-links provide a straightforward definition of collective emergence in complex $$k$$-body interacting systems or datasets. For the entire collection see [Zbl 1482.94007]. A primer on alpha-information theory with application to leakage in secrecy systems https://zbmath.org/1485.94035 2022-06-24T15:10:38.853281Z "Rioul, Olivier" https://zbmath.org/authors/?q=ai:rioul.olivier Summary: We give an informative review of the notions of Rényi's $$\alpha$$-entropy and $$\alpha$$-divergence, Arimoto's conditional $$\alpha$$-entropy, and Sibson's $$\alpha$$-information, with emphasis on the various relations between them. All these generalize Shannon's classical information measures corresponding to $$\alpha =1$$.
We present results on data processing inequalities and provide some new generalizations of the classical Fano inequality for any $$\alpha >0$$. This enables one to use $$\alpha$$-information as an information-theoretic metric of leakage in secrecy systems. Such a metric can bound the gain of an adversary in guessing some secret (any potentially random function of some sensitive dataset) from disclosed measurements, compared with the adversary's prior belief (without access to the measurements). For the entire collection see [Zbl 1482.94007]. End-to-end similarity learning and hierarchical clustering for unfixed size datasets https://zbmath.org/1485.94036 2022-06-24T15:10:38.853281Z "Gigli, Leonardo" https://zbmath.org/authors/?q=ai:gigli.leonardo "Marcotegui, Beatriz" https://zbmath.org/authors/?q=ai:marcotegui.beatriz "Velasco-Forero, Santiago" https://zbmath.org/authors/?q=ai:velasco-forero.santiago Summary: Hierarchical clustering (HC) is a powerful tool in data analysis since it allows one to discover patterns in the observed data at different scales. Similarity-based HC methods take as input a fixed number of points and the matrix of pairwise similarities and output the dendrogram representing the nested partition. However, in some cases the entire dataset cannot be known in advance, and thus neither can the relations between the points. In this paper, we consider the case in which we have a collection of realizations of a random distribution and want to extract a hierarchical clustering for each sample. The number of elements varies at each draw. Based on a continuous relaxation of Dasgupta's cost function, we propose to integrate a triplet loss function into Chami's formulation in order to learn an optimal similarity function between the points with which to compute the optimal hierarchy. Two architectures are tested on four datasets as approximators of the similarity function.
The results obtained are promising: in many cases the proposed method showed good robustness to noise and greater adaptability to different datasets than the classical approaches. For the entire collection see [Zbl 1482.94007]. Transport information Hessian distances https://zbmath.org/1485.94037 2022-06-24T15:10:38.853281Z "Li, Wuchen" https://zbmath.org/authors/?q=ai:li.wuchen Summary: We formulate closed-form Hessian distances of information entropies in one-dimensional probability density space embedded with the $$L^2$$-Wasserstein metric. Some analytical examples are provided. For the entire collection see [Zbl 1482.94007]. Towards a functorial description of quantum relative entropy https://zbmath.org/1485.94038 2022-06-24T15:10:38.853281Z "Parzygnat, Arthur J." https://zbmath.org/authors/?q=ai:parzygnat.arthur-j Summary: A Bayesian functorial characterization of the classical relative entropy (KL divergence) of finite probabilities was recently obtained by \textit{J. C. Baez} and \textit{T. Fritz} [Theory Appl. Categ. 29, 422--456 (2014; Zbl 1321.94023)]. This was then generalized to standard Borel spaces by \textit{N. Gagné} and \textit{P. Panangaden} [Electron. Notes Theor. Comput. Sci. 336, 135--153 (2018; Zbl 07513459)]. Here, we provide preliminary calculations suggesting that the finite-dimensional quantum (Umegaki) relative entropy might be characterized in a similar way. Namely, we explicitly prove that it defines an affine functor in the special case where the relative entropy is finite. A recent non-commutative disintegration theorem provides a key ingredient in this proof. For the entire collection see [Zbl 1482.94007].
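As a concrete anchor for the quantity characterized above, the finite-dimensional Umegaki relative entropy is $$S(\rho\|\sigma)=\operatorname{tr}[\rho(\log\rho-\log\sigma)]$$; the following small numerical sketch (our own illustration, unrelated to the paper's categorical construction) checks that it reduces to the classical KL divergence on commuting (diagonal) density matrices.

```python
import numpy as np

def mat_log(a):
    """Logarithm of a positive-definite Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(a)
    return (V * np.log(w)) @ V.conj().T

def umegaki(rho, sigma):
    """Umegaki relative entropy S(rho || sigma) = tr[rho (log rho - log sigma)]."""
    return float(np.trace(rho @ (mat_log(rho) - mat_log(sigma))).real)

rho = np.diag([0.7, 0.3])      # commuting (diagonal) density matrices
sigma = np.diag([0.5, 0.5])
d_q = umegaki(rho, sigma)
d_cl = 0.7 * np.log(0.7 / 0.5) + 0.3 * np.log(0.3 / 0.5)  # classical KL
```

On diagonal states the two quantities coincide exactly, and `umegaki(rho, rho)` vanishes, matching the classical behavior that the functorial characterization generalizes.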
Entropy under disintegrations https://zbmath.org/1485.94039 2022-06-24T15:10:38.853281Z "Vigneaux, Juan Pablo" https://zbmath.org/authors/?q=ai:vigneaux.juan-pablo Summary: We consider the differential entropy of probability measures absolutely continuous with respect to a given $$\sigma$$-finite ``reference'' measure on an arbitrary measure space. We state the asymptotic equipartition property in this general case; the result is part of the folklore, but our presentation is to some extent novel. Then we study a general framework under which such entropies satisfy a chain rule: disintegrations of measures. We give an asymptotic interpretation for conditional entropies in this case. Finally, we apply our result to Haar measures in canonical relation. For the entire collection see [Zbl 1482.94007]. Information cohomology of classical vector-valued observables https://zbmath.org/1485.94040 2022-06-24T15:10:38.853281Z "Vigneaux, Juan Pablo" https://zbmath.org/authors/?q=ai:vigneaux.juan-pablo Summary: We provide here a novel algebraic characterization of two information measures associated with a vector-valued random variable, its differential entropy and the dimension of the underlying space, purely based on their recursive properties (the chain rule and the nullity-rank theorem, respectively). More precisely, we compute the information cohomology of \textit{P. Baudot} and \textit{D. Bennequin} [``The homological nature of entropy'', Entropy 17, No. 5, 3253--3318 (2015), \url{https://mdpi-res.com/d_attachment/entropy/entropy-17-03253/article_deploy/entropy-17-03253.pdf}] with coefficients in a module of continuous probabilistic functionals over a category that mixes discrete observables and continuous vector-valued observables, characterizing the 1-cocycles completely; evaluated on continuous laws, these cocycles are linear combinations of the differential entropy and the dimension. For the entire collection see [Zbl 1482.94007].
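The chain rule that both abstracts above revolve around has an elementary discrete counterpart, $$H(X,Y)=H(Y)+H(X\mid Y)$$, where conditioning corresponds to disintegrating the joint law along $$Y$$; a quick numerical check on a toy joint distribution of our own choosing:

```python
import numpy as np

def H(p):
    """Shannon entropy (in nats) of a probability vector, ignoring zeros."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# a toy joint law p(x, y) on a 3 x 2 grid (rows: x, columns: y)
P = np.array([[0.15, 0.05],
              [0.30, 0.10],
              [0.20, 0.20]])

H_joint = H(P.ravel())
p_y = P.sum(axis=0)                                  # marginal of Y
# disintegrating along Y gives the conditional laws p(x | y)
H_cond = sum(p_y[j] * H(P[:, j] / p_y[j]) for j in range(P.shape[1]))
```

Here `H_joint` equals `H(p_y) + H_cond` up to floating-point error, the identity whose measure-theoretic generalization is the subject of the first abstract.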
Correcting the side effects of ADC filtering in MR image reconstruction https://zbmath.org/1485.94041 2022-06-24T15:10:38.853281Z "Lazarus, Carole" https://zbmath.org/authors/?q=ai:lazarus.carole "März, Maximilian" https://zbmath.org/authors/?q=ai:marz.maximilian "Weiss, Pierre" https://zbmath.org/authors/?q=ai:weiss.pierre|weiss.pierre-elie Summary: This work investigates the role of the filters implemented on analog-to-digital converters for the reconstruction of magnetic resonance images. We analyze the effects of these filters both from a theoretical and an experimental point of view and demonstrate how they may lead to severe degradation of the reconstructed images when the distance between consecutive samples is larger than Shannon's limit. Based on these findings, we propose a mathematical model and a numerical algorithm that make it possible to mitigate such filtering effects both for linear and nonlinear reconstructions. Experiments on simulated and real data on a 7 Tesla scanner show that the proposed ideas significantly improve the overall image quality. These findings are particularly relevant for high resolution imaging and for recent sampling schemes saturating the maximum gradient amplitude. They also open new challenges in sampling theory.
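The degradation beyond Shannon's limit mentioned above is a sub-Nyquist sampling effect. The toy sketch below (unrelated to the paper's MRI model; the frequencies are assumptions for illustration) shows the classical aliasing phenomenon: a 9 Hz tone sampled at 10 Hz is sample-for-sample indistinguishable from a folded 1 Hz tone.

```python
import math

fs = 10.0      # sampling rate (Hz), below the Nyquist rate 2*9 = 18 Hz
f_high = 9.0   # true frequency
f_alias = 1.0  # frequency it folds onto: fs - f_high = 1 Hz

# Samples of the 9 Hz tone coincide with those of a (sign-flipped) 1 Hz tone
high = [math.sin(2 * math.pi * f_high * k / fs) for k in range(20)]
alias = [-math.sin(2 * math.pi * f_alias * k / fs) for k in range(20)]
assert all(abs(a - b) < 1e-9 for a, b in zip(high, alias))
```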
The noise collector for sparse recovery in high dimensions https://zbmath.org/1485.94042 2022-06-24T15:10:38.853281Z "Moscoso, Miguel" https://zbmath.org/authors/?q=ai:moscoso.miguel "Novikov, Alexei" https://zbmath.org/authors/?q=ai:novikov.alexei "Tsogka, Chrysoula" https://zbmath.org/authors/?q=ai:tsogka.chrysoula (no abstract) On pseudorandom encodings https://zbmath.org/1485.94043 2022-06-24T15:10:38.853281Z "Agrikola, Thomas" https://zbmath.org/authors/?q=ai:agrikola.thomas "Couteau, Geoffroy" https://zbmath.org/authors/?q=ai:couteau.geoffroy "Ishai, Yuval" https://zbmath.org/authors/?q=ai:ishai.yuval "Jarecki, Stanisław" https://zbmath.org/authors/?q=ai:jarecki.stanislaw "Sahai, Amit" https://zbmath.org/authors/?q=ai:sahai.amit Summary: We initiate a study of pseudorandom encodings: efficiently computable and decodable encoding functions that map messages from a given distribution to a random-looking distribution. For instance, every distribution that can be perfectly and efficiently compressed admits such a pseudorandom encoding. Pseudorandom encodings are motivated by a variety of cryptographic applications, including password-authenticated key exchange, ``honey encryption'' and steganography. The main question we ask is whether every efficiently samplable distribution admits a pseudorandom encoding. Under different cryptographic assumptions, we obtain positive and negative answers for different flavors of pseudorandom encodings, and relate this question to problems in other areas of cryptography. In particular, by establishing a two-way relation between pseudorandom encoding schemes and efficient invertible sampling algorithms, we reveal a connection between adaptively secure multiparty computation for randomized functionalities and questions in the domain of steganography. For the entire collection see [Zbl 1482.94009].
Three-phase Z-complementary triads and almost complementary triads https://zbmath.org/1485.94044 2022-06-24T15:10:38.853281Z "Liu, Chen" https://zbmath.org/authors/?q=ai:liu.chen "Liu, Shuaijun" https://zbmath.org/authors/?q=ai:liu.shuaijun "Lei, Xianfu" https://zbmath.org/authors/?q=ai:lei.xianfu "Adhikary, Avik. R." https://zbmath.org/authors/?q=ai:adhikary.avik-ranjan "Zhou, Zhengchun" https://zbmath.org/authors/?q=ai:zhou.zhengchun Summary: A 3-phase Golay complementary triad (GCT) is a set of three sequences over the 3-phase alphabet $$\{1, \omega, \omega^2\}$$, where $$\omega =e^{\frac{2\pi \sqrt{-1}}{3}}$$ is a 3rd root of unity, whose aperiodic autocorrelations sum up to zero at every non-zero time shift. Recent results by \textit{A. A. Avis} and \textit{J. Jedwab} [J. Comb. Theory, Ser. A 180, Article ID 105422, 23 p. (2021; Zbl 1461.94069)] proved the non-existence of 3-phase GCTs of length $$N \equiv 4\pmod{6}$$. In this paper, we introduce 3-phase Z-complementary triads (ZCTs) and almost-complementary triads (ACTs). We present systematic constructions of 3-phase ZCTs for various lengths including the case when $$N \equiv 4\pmod{6}$$. We also analyse the peak-to-mean envelope power ratio (PMEPR) upper bounds of the proposed ZCTs and ACTs. A generic method for investigating nonsingular Galois NFSRs https://zbmath.org/1485.94045 2022-06-24T15:10:38.853281Z "Wang, Xiao-Juan" https://zbmath.org/authors/?q=ai:wang.xiaojuan "Tian, Tian" https://zbmath.org/authors/?q=ai:tian.tian "Qi, Wen-Feng" https://zbmath.org/authors/?q=ai:qi.wenfeng Summary: Let $$n$$ be a positive integer. An $$n$$-stage Galois NFSR has $$n$$ registers and each register is updated by a feedback function. A Galois NFSR is called nonsingular if every register generates (strictly) periodic sequences, i.e., it has no branch points. In this paper, a generic method for investigating nonsingular Galois NFSRs is provided.
Two fundamental concepts are proposed: standard Galois NFSRs, and the simplified feedback function of a standard Galois NFSR. Based on the new concepts, a sufficient condition is given for nonsingular Galois NFSRs. In particular, for the class of Galois NFSRs with linear simplified feedback functions, a necessary and sufficient condition is presented. 2-adic complexity of two constructions of binary sequences with period $$4N$$ and optimal autocorrelation magnitude https://zbmath.org/1485.94046 2022-06-24T15:10:38.853281Z "Xiao, Zibi" https://zbmath.org/authors/?q=ai:xiao.zibi "Zeng, Xiangyong" https://zbmath.org/authors/?q=ai:zeng.xiangyong Summary: Three constructions of binary sequences with period $$4N$$ and optimal autocorrelation value or optimal autocorrelation magnitude have been presented by \textit{X. Tang} and \textit{G. Gong} [IEEE Trans. Inf. Theory 56, No. 3, 1278--1286 (2010; Zbl 1366.94458)] based on the interleaving technique. In this paper, the 2-adic complexity of the sequences with optimal autocorrelation magnitude constructed from the Legendre sequence pair or the twin-prime sequence pair is investigated. With the method proposed by \textit{H. Hu} [IEEE Trans. Inf. Theory 60, No. 9, 5803--5804 (2014; Zbl 1360.94277)], we completely determine the 2-adic complexity of the sequences by calculating the exact autocorrelation distribution of the sequences and discussing the greatest common divisors. Results show that the 2-adic complexity of these sequences is either maximum or very close to maximum. Cryptanalysis of the class of maximum period Galois NLFSR-based stream ciphers https://zbmath.org/1485.94047 2022-06-24T15:10:38.853281Z "Yao, Ge" https://zbmath.org/authors/?q=ai:yao.ge "Parampalli, Udaya" https://zbmath.org/authors/?q=ai:parampalli.udaya Summary: The Espresso cipher is designed targeting 5G wireless communication systems. To achieve high efficiency, a maximum period Galois NLFSR is used as the only building block.
The Galois NLFSR is constructed by a scalable method which converts a maximum LFSR to a Galois NLFSR. Based on this method, a new class of stream ciphers, namely maximum period Galois NLFSR-based stream ciphers, can be designed. However, we identify a conditional equivalence problem in the design method and adopt the Type-II-to-Fibonacci transformation algorithm. We apply the algorithm to the Espresso cipher and successfully transform the Galois NLFSR to a Fibonacci LFSR with a nonlinear output function. The Espresso cipher is thus transformed into an LFSR filter generator. We break it by the fast algebraic attack and the Rønjom-Helleseth attack with complexities of $$2^{68.50}$$ and $$2^{48.59}$$ logical operations, respectively. Moreover, we show that the entire class of maximum period Galois NLFSR-based stream ciphers can be transformed to LFSRs. Therefore, this kind of cipher is always equivalent to an LFSR filter generator. We discuss other related attacks and give suggestions for future design. Nearly optimal balanced quaternary sequence pairs of prime period $$N\equiv 5\pmod 8$$ https://zbmath.org/1485.94048 2022-06-24T15:10:38.853281Z "Zhao, Mengzhen" https://zbmath.org/authors/?q=ai:zhao.mengzhen "Fan, Cuiling" https://zbmath.org/authors/?q=ai:fan.cuiling "Tian, Zihong" https://zbmath.org/authors/?q=ai:tian.zihong Summary: In this paper, for balanced quaternary sequence pairs (BQSPs for short) of period $$N\equiv 1\pmod 4$$, the optimal cross-correlation is determined and a lower bound on the maximum autocorrelation magnitude is derived. Based on cyclotomic classes of order 4, we obtain two classes of nearly optimal BQSPs of period $$N$$, where $$N\equiv 5\pmod 8$$ is a prime. These pairs have optimal cross-correlation $$\{1,-1\}$$ and maximum out-of-phase autocorrelation magnitude no more than $$\sqrt{N-4}+4$$, whose difference from a lower bound on the maximum autocorrelation magnitude is less than 4.
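The 2-adic complexity studied in one of the entries above has a standard closed form: for one period $$s_0,\ldots,s_{N-1}$$ of a binary sequence it equals $$\log_2\big((2^N-1)/\gcd(S(2),2^N-1)\big)$$, where $$S(2)=\sum_i s_i 2^i$$. A minimal sketch, with a hypothetical example sequence (not one of the paper's constructions):

```python
import math

def two_adic_complexity(s):
    """2-adic complexity log2((2^N - 1)/gcd(S(2), 2^N - 1)) of one period
    s = (s_0, ..., s_{N-1}) of a binary sequence, where S(2) = sum s_i 2^i."""
    N = len(s)
    S2 = sum(bit << i for i, bit in enumerate(s))
    return math.log2((2**N - 1) // math.gcd(S2, 2**N - 1))

# A hypothetical period-7 sequence; here gcd(S(2), 2^7 - 1) = 1,
# so the complexity attains its maximum log2(127) for this period
s = [1, 0, 0, 1, 0, 1, 1]
print(two_adic_complexity(s))
```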
Standard model leakage-resilient authenticated key exchange using inner-product extractors https://zbmath.org/1485.94049 2022-06-24T15:10:38.853281Z "Alawatugoda, Janaka" https://zbmath.org/authors/?q=ai:alawatugoda.janaka "Okamoto, Tatsuaki" https://zbmath.org/authors/?q=ai:okamoto.tatsuaki Summary: With the development of side-channel attacks, it has become necessary to design authenticated key exchange protocols in a leakage-resilient manner. Constructing authenticated key exchange protocols from existing cryptographic schemes is an effective method, as such a construction can be instantiated with any appropriate scheme in a way that keeps the formal security argument valid. In parallel, authenticated key exchange protocols that are proven secure in the standard model are preferable, as they rely on real-world assumptions. In this paper, we present a Diffie-Hellman-style construction of a leakage-resilient authenticated key exchange protocol that can be instantiated with any CCLA2-secure public-key encryption scheme and a function from the pseudo-random function family. Our protocol is proven secure in the standard model assuming the hardness of the decisional Diffie-Hellman problem. Furthermore, it is resilient to continuous partial leakage of long-term secret keys, which may happen even after the session key is established, while satisfying the security features defined by the eCK security model.
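The Diffie-Hellman-style core of such key exchange protocols is the usual exponent agreement. The sketch below is a bare, unauthenticated toy with deliberately undersized parameters, not the leakage-resilient protocol of the entry; real deployments use standardized groups of at least 2048 bits.

```python
import secrets

# Toy multiplicative group parameters (illustration only)
p = 2**64 - 59   # a small prime, far too small for actual security
g = 5

a = secrets.randbelow(p - 2) + 1      # Alice's ephemeral secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's ephemeral secret exponent
A = pow(g, a, p)                      # Alice -> Bob
B = pow(g, b, p)                      # Bob -> Alice

# Both sides derive the same key material g^(ab) mod p
assert pow(B, a, p) == pow(A, b, p)
```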
The resiliency of MPC with low interaction: the benefit of making errors (extended abstract) https://zbmath.org/1485.94050 2022-06-24T15:10:38.853281Z "Applebaum, Benny" https://zbmath.org/authors/?q=ai:applebaum.benny "Kachlon, Eliran" https://zbmath.org/authors/?q=ai:kachlon.eliran "Patra, Arpita" https://zbmath.org/authors/?q=ai:patra.arpita Summary: We study information-theoretic secure multiparty protocols that achieve full security, including guaranteed output delivery, in the presence of an active adversary that corrupts a constant fraction of the parties. It is known that 2 rounds are insufficient for such protocols even when the adversary corrupts only two parties [\textit{R. Gennaro} et al., Lect. Notes Comput. Sci. 2442, 178--193 (2002; Zbl 1026.94527)], and that perfect protocols can be implemented in 3 rounds as long as the adversary corrupts less than a quarter of the parties [\textit{B. Applebaum} et al., ibid. 11477, 504--531 (2019; Zbl 07164007)]. Furthermore, it was recently shown that the quarter threshold is tight for any 3-round perfectly-secure protocol [\textit{B. Applebaum} et al., ``The round complexity of perfect mpc with active security and optimal resiliency'', Preprint, \url{https://eprint.iacr.org/2020/581.pdf}]. Nevertheless, one may still hope to achieve a better-than-quarter threshold at the expense of allowing some negligible correctness errors and/or statistical deviations in the security. Our main results show that this is indeed the case. Every function can be computed by 3-round protocols with statistical security as long as the adversary corrupts less than a third of the parties. Moreover, we show that any better resiliency threshold requires 4 rounds. Our protocol is computationally inefficient and has an exponential dependency in the circuit's depth $$d$$ and in the number of parties $$n$$.
We show that this overhead can be avoided by relaxing security to computational, assuming the existence of a non-interactive commitment (NICOM). Previous 3-round computational protocols were based on stronger public-key assumptions. When instantiated with statistically-hiding NICOM, our protocol provides everlasting statistical security, i.e., it is secure against adversaries that are computationally unlimited after the protocol execution. To prove these results, we introduce a new hybrid model that allows for 2-round protocols with linear resiliency threshold. Here too we prove that, for perfect protocols, the best achievable resiliency is $$n/4$$, whereas statistical protocols can achieve a threshold of $$n/3$$. In the plain model, we also construct the first 2-round $$n/3$$-statistical verifiable secret sharing that supports second-level sharing and prove a matching lower-bound, extending the results of \textit{A. Patra} et al. [Lect. Notes Comput. Sci. 5677, 487--504 (2009; Zbl 1252.94110)]. Overall, our results refine the differences between statistical and perfect models of security, and show that there are efficiency gaps even for thresholds that are realizable in both models. For the entire collection see [Zbl 1482.94008]. Universal composition with global subroutines: capturing global setup within plain UC https://zbmath.org/1485.94051 2022-06-24T15:10:38.853281Z "Badertscher, Christian" https://zbmath.org/authors/?q=ai:badertscher.christian "Canetti, Ran" https://zbmath.org/authors/?q=ai:canetti.ran "Hesse, Julia" https://zbmath.org/authors/?q=ai:hesse.julia "Tackmann, Björn" https://zbmath.org/authors/?q=ai:tackmann.bjorn "Zikas, Vassilis" https://zbmath.org/authors/?q=ai:zikas.vassilis Summary: The Global and Externalized UC frameworks [\textit{R. Canetti} et al., Lect. Notes Comput. Sci. 
4392, 61--85 (2007; Zbl 1129.94014)] extend the plain UC framework to additionally handle protocols that use a ``global setup'', namely a mechanism that is also used by entities outside the protocol. These frameworks have broad applicability: Examples include public-key infrastructures, common reference strings, shared synchronization mechanisms, global blockchains, or even abstractions such as the random oracle. However, the need to work in a specialized framework has been a source of confusion, incompatibility, and an impediment to broader use. We show how security in the presence of a global setup can be captured within the plain UC framework, thus significantly simplifying the treatment. This is done as follows: \par i) We extend UC-emulation to the case where both the emulating protocol $$\pi$$ and the emulated protocol $$\phi$$ make subroutine calls to protocol $$\gamma$$ that is accessible also outside $$\pi$$ and $$\phi$$. As usual, this notion considers only a single instance of $$\phi$$ or $$\pi$$ (alongside $$\gamma )$$. \par ii) We extend the UC theorem to hold even with respect to the new notion of UC emulation. That is, we show that if $$\pi$$ UC-emulates $$\phi$$ in the presence of $$\gamma$$, then $$\rho^{\phi \rightarrow \pi }$$ UC-emulates $$\rho$$ for any protocol $$\rho$$, even when $$\rho$$ uses $$\gamma$$ directly, and in addition calls many instances of $$\phi$$, all of which use the same instance of $$\gamma$$. We prove this extension using the existing UC theorem as a black box, thus further simplifying the treatment. We also exemplify how our treatment can be used to streamline, within the plain UC model, proofs of security of systems that involve global setup, thus providing greater simplicity and flexibility. For the entire collection see [Zbl 1482.94009].
WARP: revisiting GFN for lightweight 128-bit block cipher https://zbmath.org/1485.94052 2022-06-24T15:10:38.853281Z "Banik, Subhadeep" https://zbmath.org/authors/?q=ai:banik.subhadeep "Bao, Zhenzhen" https://zbmath.org/authors/?q=ai:bao.zhenzhen "Isobe, Takanori" https://zbmath.org/authors/?q=ai:isobe.takanori "Kubo, Hiroyasu" https://zbmath.org/authors/?q=ai:kubo.hiroyasu "Liu, Fukang" https://zbmath.org/authors/?q=ai:liu.fukang "Minematsu, Kazuhiko" https://zbmath.org/authors/?q=ai:minematsu.kazuhiko "Sakamoto, Kosei" https://zbmath.org/authors/?q=ai:sakamoto.kosei "Shibata, Nao" https://zbmath.org/authors/?q=ai:shibata.nao "Shigeri, Maki" https://zbmath.org/authors/?q=ai:shigeri.maki Summary: In this article, we present \texttt{WARP}, a lightweight 128-bit block cipher with a 128-bit key. It aims at a small-footprint circuit in the field of 128-bit block ciphers, possibly for a unified encryption and decryption functionality. The overall structure of \texttt{WARP} is a variant of the 32-nibble Type-2 Generalized Feistel Network (GFN), with a permutation over nibbles designed to optimize the security and efficiency. We conduct a thorough security analysis and report comprehensive hardware and software implementation results. Our hardware results show that \texttt{WARP} is the smallest 128-bit block cipher for most of the typical hardware implementation strategies. A serialized circuit of \texttt{WARP} achieves around 800 Gate Equivalents (GEs), which is much smaller than previous state-of-the-art implementations of lightweight 128-bit ciphers (they need more than 1,000 GEs). While our primary metric is hardware size, \texttt{WARP} also enjoys several other features, most notably low energy consumption. This is somewhat surprising, since GFN generally needs more rounds than a substitution permutation network (SPN), and thus GFN has been considered to be less advantageous in this regard. We show that a multi-round implementation of \texttt{WARP} is quite low-energy.
Moreover, \texttt{WARP} also performs well in software: our SIMD implementation is quite competitive to known hardware-oriented 128-bit lightweight ciphers for long input, and even much better for small inputs due to the small number of parallel blocks. On 8-bit microcontrollers, the results of our assembly implementations show that \texttt{WARP} is flexible enough to achieve various performance characteristics. For the entire collection see [Zbl 1482.94005]. An algebraic approach to the rank support learning problem https://zbmath.org/1485.94053 2022-06-24T15:10:38.853281Z "Bardet, Magali" https://zbmath.org/authors/?q=ai:bardet.magali "Briaud, Pierre" https://zbmath.org/authors/?q=ai:briaud.pierre Summary: Rank-metric code-based cryptography relies on the hardness of decoding a random linear code in the rank metric. The Rank Support Learning problem (RSL) is a variant where an attacker has access to $$N$$ decoding instances whose errors have the same support and wants to solve one of them. This problem is for instance used in the Durandal signature scheme [\textit{N. Aragon} et al., Lect. Notes Comput. Sci. 11478, 728--758 (2019; Zbl 07162746)]. In this paper, we propose an algebraic attack on RSL which often outperforms the previous attacks to solve this problem when the number of instances is of the order of the one used in Durandal. We build upon [\textit{M. Bardet}, ``Improvements of algebraic attacks for solving the rank decoding and MinRank problems'', ibid. 12491, 507--536 (2020; \url{doi:10.1007/978-3-030-64837-4_17})], where similar techniques are used to solve MinRank and RD. However, our analysis is simpler and overall our attack relies on very elementary assumptions compared to standard Gröbner bases attacks. In particular, our results show that key recovery attacks on Durandal are more efficient than was previously thought, especially in a parameter range where the RD attack from Bardet [loc. cit.] is not the most powerful.
For the entire collection see [Zbl 1482.94004]. On the round complexity of the shuffle model https://zbmath.org/1485.94054 2022-06-24T15:10:38.853281Z "Beimel, Amos" https://zbmath.org/authors/?q=ai:beimel.amos "Haitner, Iftach" https://zbmath.org/authors/?q=ai:haitner.iftach "Nissim, Kobbi" https://zbmath.org/authors/?q=ai:nissim.kobbi "Stemmer, Uri" https://zbmath.org/authors/?q=ai:stemmer.uri Summary: The shuffle model of differential privacy [\textit{A. Bittau} et al. ``Prochlo: strong privacy for analytics in the crowd'', in: Proceedings of the 26th symposium on operating systems principles, SOSP '17, Shanghai, China, October 28--31, 2017. New York, NY: Association for Computing Machinery (ACM). 441--459 (2017; \url{doi:10.1145/3132747.3132769}); \textit{Ú. Erlingsson} et al., in: Proceedings of the 30th annual ACM-SIAM symposium on discrete algorithms, SODA 2019, San Diego, CA, USA, January 6--9, 2019. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM); New York, NY: Association for Computing Machinery (ACM). 2468--2479 (2019; Zbl 1432.68133); \textit{A. Cheu} et al., Lect. Notes Comput. Sci. 11476, 375--403 (2019; Zbl 1470.94081)] was proposed as a viable model for performing distributed differentially private computations. Informally, the model consists of an untrusted analyzer that receives messages sent by participating parties via a shuffle functionality, which potentially disassociates messages from their senders. Prior work focused on one-round differentially private shuffle model protocols, demonstrating that functionalities such as addition and histograms can be performed in this model with accuracy levels similar to that of the curator model of differential privacy, where the computation is performed by a fully trusted party. A model closely related to the shuffle model was presented in the seminal work of \textit{Y. Ishai} et al.
[``Cryptography from anonymity'', in: Proceedings of the 47th annual IEEE symposium on foundations of computer science, FOCS 2006, Berkeley, CA, October 21--24, 2006. Los Alamitos, CA: IEEE Computer Society. 239--248 (2006; \url{doi:10.1109/FOCS.2006.25})] on establishing cryptography from anonymous communication. Focusing on the round complexity of the shuffle model, we ask in this work what can be computed in the shuffle model of differential privacy with two rounds. Ishai et al. [loc. cit.] showed how to use one round of the shuffle to establish secret keys between every two parties. Using this primitive to simulate a general secure multi-party protocol increases its round complexity by one. We show how two parties can use one round of the shuffle to send secret messages without having to first establish a secret key, hence retaining round complexity. Combining this primitive with the two-round semi-honest protocol of Applebaum, Brakerski, and Tsabary [\textit{B. Applebaum} et al., Lect. Notes Comput. Sci. 11239, 152--174 (2018; Zbl 1443.94042)], we obtain that every randomized functionality can be computed in the shuffle model with an honest majority, in merely two rounds. This includes any differentially private computation. We hence move to examine differentially private computations in the shuffle model that (i) do not require the assumption of an honest majority, or (ii) do not admit one-round protocols, even with an honest majority. For that, we introduce two computational tasks: common element, and nested common element with parameter $$\alpha$$. For the common element problem we show that for large enough input domains, no one-round differentially private shuffle protocol exists with constant message complexity and negligible $$\delta$$, whereas a two-round protocol exists where every party sends a single message in every round.
For the nested common element we show that no one-round differentially private protocol exists for this problem with adversarial coalition size $$\alpha n$$. However, we show that it can be privately computed in two rounds against coalitions of size $$cn$$ for every $$c<1$$. This yields a separation between one-round and two-round protocols. We further show a one-round protocol for the nested common element problem that is differentially private with coalitions of size smaller than $$cn$$ for all $$0<c<\alpha <1/2$$. For the entire collection see [Zbl 1482.94008]. Unintended features of APIs: cryptanalysis of incremental HMAC https://zbmath.org/1485.94055 2022-06-24T15:10:38.853281Z "Benmocha, Gal" https://zbmath.org/authors/?q=ai:benmocha.gal "Biham, Eli" https://zbmath.org/authors/?q=ai:biham.eli "Perle, Stav" https://zbmath.org/authors/?q=ai:perle.stav Summary: Many cryptographic APIs provide extra functionality that was not intended by the designers. In this paper we discuss such an unintended functionality in the API of HMAC, and study the security implications of its use by applications. HMAC authenticates a single message at a time with a single authentication tag. However, most HMAC implementations do not complain when extra data is added to the stream after the tag is computed, nor do they undo the side effects of the tag computation. Think of it as an API of a new authentication primitive that provides tags for prefixes, rather than just for the full message. We call such primitives Incremental MACs (IncMACs). IncMACs may be used by applications to efficiently authenticate long messages, broken into fragments, which need their own individual authentication tag for performing an early abort or for retransmitting only bad fragments, while each tag (strongly) authenticates the message prefix so far, and the last tag fully authenticates the full message.
It appears that some applications (e.g., the Siemens S7 protocol) use the standard HMAC API to provide an incremental MAC, allowing transmission errors to be identified as soon as the first error occurs, while also directly authenticating the full message. We discuss two common implementations, used by cryptographic libraries and programs, whose APIs do not forbid using them incrementally, continuing with extra data after computing the tag. The most common one, used by Siemens, is a naive implementation (coded directly from the RFCs). The other is the implementation of the OpenSSL library. We discuss these implementations, and show that they are not as secure as HMAC. Moreover, some of them may even be highly insecure when used incrementally, where in the particular case of OpenSSL it is possible to instantly find collisions and multi-collisions, which also collide under any key. We also discuss the fine details of the definition of IncMACs, and propose secure versions of such a primitive. For the entire collection see [Zbl 1482.94005]. Implementation of lattice trapdoors on modules and applications https://zbmath.org/1485.94056 2022-06-24T15:10:38.853281Z "Bert, Pauline" https://zbmath.org/authors/?q=ai:bert.pauline "Eberhart, Gautier" https://zbmath.org/authors/?q=ai:eberhart.gautier "Prabel, Lucas" https://zbmath.org/authors/?q=ai:prabel.lucas "Roux-Langlois, Adeline" https://zbmath.org/authors/?q=ai:roux-langlois.adeline "Sabt, Mohamed" https://zbmath.org/authors/?q=ai:sabt.mohamed Summary: We develop and implement efficient Gaussian preimage sampling techniques on module lattices, which rely on the works of \textit{D. Micciancio} and \textit{C. Peikert} [Lect. Notes Comput. Sci. 7237, 700--718 (2012; Zbl 1297.94090)], and \textit{N. Genise} and \textit{D. Micciancio} [ibid. 10820, 174--203 (2018; Zbl 1423.94073)].
The main advantage of our implementation is its modularity, which makes it practical to use for signature schemes, but also for more advanced constructions using trapdoors such as identity-based encryption. In particular, it is easy to use in the ring or module setting, and to modify the arithmetic on $$\mathcal R_q$$ (as different schemes have different conditions on $$q$$). Relying on these tools, we also present two instantiations and implementations of proven trapdoor-based signature schemes in the module setting: GPV in the random oracle model and a variant of it in the standard model presented in [\textit{P. Bert} et al., ibid. 10786, 271--291 (2018; Zbl 1425.94048)]. For that last scheme, we address a security issue and correct obsolescence problems in their implementation by building ours from scratch. To the best of our knowledge, this is the first efficient implementation of a lattice-based signature scheme in the standard model. Relying on that last signature, we also present the implementation of a standard model IBE in the module setting. We show that while the resulting schemes may not be competitive with the most efficient NIST candidates, they are practical and run on a standard laptop in acceptable time, which paves the way for practical advanced trapdoor-based constructions. For the entire collection see [Zbl 1482.94004]. \textsf{CSI-RAShi}: distributed key generation for CSIDH https://zbmath.org/1485.94057 2022-06-24T15:10:38.853281Z "Beullens, Ward" https://zbmath.org/authors/?q=ai:beullens.ward "Disson, Lucas" https://zbmath.org/authors/?q=ai:disson.lucas "Pedersen, Robi" https://zbmath.org/authors/?q=ai:pedersen.robi "Vercauteren, Frederik" https://zbmath.org/authors/?q=ai:vercauteren.frederik Summary: We present an honest-majority Distributed Key Generation protocol (DKG) based on Shamir's $$(k, n)$$-threshold secret sharing in the setting of Very Hard Homogeneous Spaces (VHHS).
DKGs in the discrete logarithm setting use Pedersen commitments, for which there is no known analogue in the VHHS setting. As a replacement, we introduce a new primitive called piecewise verifiable proofs, which allow a prover to prove that a list of NP-statements is valid with respect to a common witness, and such that the different statements can be verified individually. Our protocol is robust and actively secure in the Quantum Random Oracle Model. For $$n$$ participants, the total runtime of our protocol is $$2+\lambda +n(1+4\lambda )$$ group action evaluations, where $$\lambda$$ is the underlying security parameter, and is thus independent of the threshold $$k$$. When instantiated with CSIDH-512, this amounts to approximately $$4.5+18n$$ seconds. For the entire collection see [Zbl 1482.94004]. Linear cryptanalysis of FF3-1 and FEA https://zbmath.org/1485.94058 2022-06-24T15:10:38.853281Z "Beyne, Tim" https://zbmath.org/authors/?q=ai:beyne.tim Summary: Improved attacks on generic small-domain Feistel ciphers with alternating round tweaks are obtained using linear cryptanalysis. This results in practical distinguishing and message-recovery attacks on the United States format-preserving encryption standard FF3-1 and the South-Korean standards FEA-1 and FEA-2. The data complexity of the proposed attacks on FF3-1 and FEA-1 is $$\widetilde{\mathcal{O}}(N^{r/2 - 1.5})$$, where $$N^2$$ is the domain size and $$r$$ is the number of rounds. For example, FF3-1 with $$N = 10^3$$ can be distinguished from an ideal tweakable block cipher with advantage $$\ge 1/10$$ using $$2^{23}$$ encryption queries. Recovering the left half of a message with similar advantage requires $$2^{24}$$ data. The analysis of FF3-1 serves as an interesting real-world application of (generalized) linear cryptanalysis over the group $$\mathbb{Z}/N\mathbb{Z}$$. For the entire collection see [Zbl 1483.94004].
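The Shamir $$(k,n)$$-threshold sharing underlying the DKG above can be sketched in a few lines: the secret is the constant term of a random degree-$$(k-1)$$ polynomial over a finite field, and any $$k$$ shares reconstruct it by Lagrange interpolation at 0. This is a toy illustration over a demo prime field, not the VHHS instantiation of the entry.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime, used here only as a demo field modulus

def share(secret, k, n):
    """Split `secret` into n Shamir shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

Note that `pow(den, -1, P)` (modular inverse via the three-argument `pow`) requires Python 3.8 or later.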
Generating cryptographically-strong random lattice bases and recognizing rotations of $$\mathbb{Z}^n$$ https://zbmath.org/1485.94059 2022-06-24T15:10:38.853281Z "Blanks, Tamar Lichter" https://zbmath.org/authors/?q=ai:blanks.tamar-lichter "Miller, Stephen D." https://zbmath.org/authors/?q=ai:miller.stephen-d Summary: Lattice-based cryptography relies on generating random bases which are difficult to fully reduce. Given a lattice basis (such as the private basis for a cryptosystem), all other bases are related by multiplication by matrices in $$\mathrm{GL}(n,\mathbb{Z})$$. We compare the strengths of various methods to sample random elements of $$\mathrm{GL}(n,\mathbb{Z})$$, finding some are stronger than others with respect to the problem of recognizing rotations of the $$\mathbb{Z}^n$$ lattice. In particular, the standard algorithm of multiplying unipotent generators together (as implemented in Magma's \textsf{RandomSLnZ} command) generates instances of this last problem which can be efficiently broken, even in dimensions nearing 1,500. Likewise, we find that the random basis generation method in one of the NIST Post-Quantum Cryptography competition submissions (DRS) generates instances which can be efficiently broken, even at its 256-bit security settings. Other random basis generation algorithms (some older, some newer) are described which appear to be much stronger. For the entire collection see [Zbl 1482.94004]. Public-coin zero-knowledge arguments with (almost) minimal time and space overheads https://zbmath.org/1485.94060 2022-06-24T15:10:38.853281Z "Block, Alexander R." https://zbmath.org/authors/?q=ai:block.alexander-r "Holmgren, Justin" https://zbmath.org/authors/?q=ai:holmgren.justin "Rosen, Alon" https://zbmath.org/authors/?q=ai:rosen.alon "Rothblum, Ron D." 
https://zbmath.org/authors/?q=ai:rothblum.ron-d "Soni, Pratik" https://zbmath.org/authors/?q=ai:soni.pratik Summary: Zero-knowledge protocols enable the truth of a mathematical statement to be certified by a verifier without revealing any other information. Such protocols are a cornerstone of modern cryptography and are becoming increasingly practical. However, a major bottleneck in deployment is the efficiency of the prover and, in particular, the space-efficiency of the protocol. For every NP relation that can be verified in time $$T$$ and space $$S$$, we construct a public-coin zero-knowledge argument in which the prover runs in time $$T \cdot \operatorname{polylog}(T)$$ and space $$S \cdot \operatorname{polylog}(T)$$. Our proofs have length $$\operatorname{polylog}(T)$$ and the verifier runs in time $$T \cdot \operatorname{polylog}(T)$$ (and space $$\operatorname{polylog}(T)$$). Our scheme is in the random oracle model and relies on the hardness of discrete log in prime-order groups. Our main technical contribution is a new space-efficient polynomial commitment scheme for multi-linear polynomials. Recall that in such a scheme, a sender commits to a given multi-linear polynomial $$P:{{\mathbb{F}}}^n \rightarrow{{\mathbb{F}}}$$ so that later on it can prove to a receiver statements of the form ``$$P(x)=y$$''. In our scheme, which builds on the commitment schemes of \textit{J. Bootle} et al. [Lect. Notes Comput. Sci. 9666, 327--357 (2016; Zbl 1369.94520)] and \textit{B. Bünz} et al. [``Bulletproofs: Short proofs for confidential transactions and more'', in: Proceedings of the 2018 IEEE symposium on security and privacy, San Francisco, CA, USA, 2018. Los Alamitos, CA: IEEE Computer Society.
315--334 (2018; \url{doi:10.1109/SP.2018.00020})], we assume that the sender is given multi-pass streaming access to the evaluations of $$P$$ on the Boolean hypercube, and we show how to implement both the sender and receiver in time roughly $$2^n$$ and space roughly $$n$$, with communication complexity roughly $$n$$. For the entire collection see [Zbl 1482.94008]. \textsf{Halo Infinite}: proof-carrying data from additive polynomial commitments https://zbmath.org/1485.94061 2022-06-24T15:10:38.853281Z "Boneh, Dan" https://zbmath.org/authors/?q=ai:boneh.dan "Drake, Justin" https://zbmath.org/authors/?q=ai:drake.justin "Fisch, Ben" https://zbmath.org/authors/?q=ai:fisch.ben-a "Gabizon, Ariel" https://zbmath.org/authors/?q=ai:gabizon.ariel Summary: Polynomial commitment schemes (PCS) have recently been in the spotlight for their key role in building SNARKs. A PCS provides the ability to commit to a polynomial over a finite field and prove its evaluation at points. A succinct PCS has commitment and evaluation proof size sublinear in the degree of the polynomial. An efficient PCS has sublinear proof verification. Any efficient and succinct PCS can be used to construct a SNARK with similar security and efficiency characteristics (in the random oracle model). Proof-carrying data (PCD) enables a set of parties to carry out an indefinitely long distributed computation where every step along the way is accompanied by a proof of correctness. It generalizes incrementally verifiable computation and can even be used to construct SNARKs. Until recently, however, the only known method for constructing PCD required expensive SNARK recursion. A system called \textsf{Halo} first demonstrated a new methodology for building PCD without SNARKs, exploiting an aggregation property of the Bulletproofs inner-product argument. The construction was heuristic because it makes non-black-box use of a concrete instantiation of the Fiat-Shamir transform.
We expand upon this methodology to show that PCD can be (heuristically) built from any homomorphic polynomial commitment scheme (PCS), even if the PCS evaluation proofs are neither succinct nor efficient. In fact, the Halo methodology extends to any PCS that has an even more general property, namely the ability to aggregate linear combinations of commitments into a new succinct commitment that can later be opened to this linear combination. Our results thus imply new constructions of SNARKs and PCD that were not previously described in the literature and serve as a blueprint for future constructions as well. For the entire collection see [Zbl 1483.94004]. Sumcheck arguments and their applications https://zbmath.org/1485.94062 2022-06-24T15:10:38.853281Z "Bootle, Jonathan" https://zbmath.org/authors/?q=ai:bootle.jonathan "Chiesa, Alessandro" https://zbmath.org/authors/?q=ai:chiesa.alessandro "Sotiraki, Katerina" https://zbmath.org/authors/?q=ai:sotiraki.katerina Summary: We introduce a class of interactive protocols, which we call sumcheck arguments, that establishes a novel connection between the sumcheck protocol [\textit{C. Lund} et al., J. Assoc. Comput. Mach. 39, No. 4, 859--868 (1992; Zbl 0799.68097)] and folding techniques for Pedersen commitments [\textit{J. Bootle}, ``A non-PCP approach to succinct quantum-safe zero-knowledge'', Lect. Notes Comput. Sci. 12171, 441--469 (2020; \url{doi:10.1007/978-3-030-56880-1_16})]. We define a class of sumcheck-friendly commitment schemes over modules that captures many examples of interest, and show that the sumcheck protocol applied to a polynomial associated with the commitment scheme yields a succinct argument of knowledge for openings of the commitment. Building on this, we additionally obtain succinct arguments for the NP-complete language R1CS over certain rings.
Sumcheck arguments enable us to recover as a special case numerous prior works in disparate cryptographic settings (discrete logarithms, pairings, groups of unknown order, lattices), providing one framework to understand them all. Further, we answer open questions raised in prior works, such as obtaining a lattice-based succinct argument from the SIS assumption for satisfiability problems over rings. For the entire collection see [Zbl 1483.94004]. Secure hybrid encryption in the standard model from hard learning problems https://zbmath.org/1485.94063 2022-06-24T15:10:38.853281Z "Boyen, Xavier" https://zbmath.org/authors/?q=ai:boyen.xavier "Izabachène, Malika" https://zbmath.org/authors/?q=ai:izabachene.malika "Li, Qinyi" https://zbmath.org/authors/?q=ai:li.qinyi Summary: We present chosen-ciphertext secure hybrid encryption systems in the standard model from the learning with errors problem and the low-noise learning parity with noise problem. The systems consist of public-key key encapsulation mechanisms that are not themselves chosen-ciphertext secure. The systems are more efficient than the existing chosen-ciphertext secure hybrid encryption systems in the standard model based on the same hard learning problems. For the entire collection see [Zbl 1482.94004]. PRINCEv2.
More security for (almost) no overhead https://zbmath.org/1485.94064 2022-06-24T15:10:38.853281Z "Božilov, Dušan" https://zbmath.org/authors/?q=ai:bozilov.dusan "Eichlseder, Maria" https://zbmath.org/authors/?q=ai:eichlseder.maria "Knežević, Miroslav" https://zbmath.org/authors/?q=ai:knezevic.miroslav "Lambin, Baptiste" https://zbmath.org/authors/?q=ai:lambin.baptiste "Leander, Gregor" https://zbmath.org/authors/?q=ai:leander.gregor "Moos, Thorben" https://zbmath.org/authors/?q=ai:moos.thorben "Nikov, Ventzislav" https://zbmath.org/authors/?q=ai:nikov.ventzislav "Rasoolzadeh, Shahram" https://zbmath.org/authors/?q=ai:rasoolzadeh.shahram "Todo, Yosuke" https://zbmath.org/authors/?q=ai:todo.yosuke "Wiemer, Friedrich" https://zbmath.org/authors/?q=ai:wiemer.friedrich Summary: In this work, we propose tweaks to the \texttt{PRINCE} block cipher that help us to increase its security without changing the number of rounds or round operations. We get substantially higher security for the same complexity. From an implementation perspective, \texttt{PRINCEv2} comes at an extremely low overhead compared to \texttt{PRINCE} in all key categories, such as area, latency and energy. We expect, as it is already the case for \texttt{PRINCE}, that the new cipher \texttt{PRINCEv2} will be deployed in various settings. For the entire collection see [Zbl 1482.94005]. Towards post-quantum security for Signal's X3DH handshake https://zbmath.org/1485.94065 2022-06-24T15:10:38.853281Z "Brendel, Jacqueline" https://zbmath.org/authors/?q=ai:brendel.jacqueline "Fischlin, Marc" https://zbmath.org/authors/?q=ai:fischlin.marc "Günther, Felix" https://zbmath.org/authors/?q=ai:gunther.felix.1 "Janson, Christian" https://zbmath.org/authors/?q=ai:janson.christian "Stebila, Douglas" https://zbmath.org/authors/?q=ai:stebila.douglas Summary: Modern key exchange protocols are usually based on the Diffie-Hellman (DH) primitive.
The beauty of this primitive, among other things, is the potential reuse of key shares: DH shares can be used either a single time or across multiple runs. Since DH-based protocols are insecure against quantum adversaries, alternative solutions have to be found when moving to the post-quantum setting. However, most post-quantum candidates, including schemes based on lattices and even supersingular isogeny DH, are not known to be secure under key reuse. In particular, this means that they cannot necessarily be deployed as an immediate DH substitute in protocols. In this paper, we introduce the notion of a split key encapsulation mechanism (split KEM) to translate the desired key-reusability of a DH-based protocol to a KEM-based flow. We provide the relevant security notions of split KEMs and show how the formalism lends itself to lifting Signal's $$\mathsf{X3DH}$$ handshake to the post-quantum KEM setting without additional message flows. Although the proposed framework conceptually solves the raised issues, instantiating it securely from post-quantum assumptions proved to be non-trivial. We give passively secure instantiations from $$\mathsf{(R)LWE}$$, yet overcoming the above-mentioned insecurities under key reuse in the presence of active adversaries remains an open problem. Approaching one-sided key reuse, we provide a split KEM instantiation that allows such reuse based on the KEM introduced by \textit{E. Kiltz} [Lect. Notes Comput. Sci. 4450, 282--297 (2007; Zbl 1127.94016)], which may serve as a post-quantum blueprint if the underlying hardness assumption (gap hashed Diffie-Hellman) holds for the commutative group action of CSIDH [\textit{H. Xue} et al., ibid. 11273, 158--189 (2018; Zbl 1446.94162)]. The intention of this paper is hence to raise awareness of the challenges arising when moving to KEM-based key exchange protocols with key-reusability, and to propose split KEMs as a specific target for instantiation in future research.
For the entire collection see [Zbl 1482.94005]. Proof-carrying data without succinct arguments https://zbmath.org/1485.94066 2022-06-24T15:10:38.853281Z "Bünz, Benedikt" https://zbmath.org/authors/?q=ai:bunz.benedikt "Chiesa, Alessandro" https://zbmath.org/authors/?q=ai:chiesa.alessandro "Lin, William" https://zbmath.org/authors/?q=ai:lin.william "Mishra, Pratyush" https://zbmath.org/authors/?q=ai:mishra.pratyush "Spooner, Nicholas" https://zbmath.org/authors/?q=ai:spooner.nicholas Summary: Proof-carrying data (PCD) is a powerful cryptographic primitive that enables mutually distrustful parties to perform distributed computations that run indefinitely. Known approaches to construct PCD are based on succinct non-interactive arguments of knowledge (SNARKs) that have a succinct verifier or a succinct accumulation scheme. In this paper we show how to obtain PCD without relying on SNARKs. We construct a PCD scheme given any non-interactive argument of knowledge (e.g., with linear-size arguments) that has a split accumulation scheme, which is a weak form of accumulation that we introduce. Moreover, we construct a transparent non-interactive argument of knowledge for R1CS whose split accumulation is verifiable via a (small) constant number of group and field operations. Our construction is proved secure in the random oracle model based on the hardness of discrete logarithms, and it leads, via the random oracle heuristic and our result above, to concrete efficiency improvements for PCD. Along the way, we construct a split accumulation scheme for Hadamard products under Pedersen commitments and for a simple polynomial commitment scheme based on Pedersen commitments. Our results are supported by a modular and efficient implementation. For the entire collection see [Zbl 1483.94004]. 
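Several of the accumulation and folding constructions summarized above rest on the additive homomorphism of Pedersen commitments. The following Python toy sketch illustrates that property over a tiny order-$$q$$ subgroup; the parameters are illustrative only and far below cryptographic size, and the helper names are not taken from any of the papers.

```python
# Toy Pedersen commitment Com(m, r) = g^m * h^r (mod P) in the order-q
# subgroup of Z_P^*, with P = 2q + 1. Parameters are tiny and purely
# illustrative -- a real instantiation needs a cryptographically large group.
P, q = 23, 11          # P = 2q + 1, so the squares mod P form a subgroup of order q
g, h = 4, 9            # two squares; log_g(h) must be unknown to the committer

def commit(m, r):
    """Commit to message m with randomness r (both reduced mod q)."""
    return (pow(g, m % q, P) * pow(h, r % q, P)) % P

# Additive homomorphism: Com(m1, r1) * Com(m2, r2) = Com(m1 + m2, r1 + r2).
# This is the property that folding/accumulation arguments exploit.
assert (commit(3, 5) * commit(4, 2)) % P == commit(7, 7)
```

The homomorphism is what allows a linear combination of many commitments to be folded into a single succinct commitment that can later be opened to that combination, as in the split accumulation schemes described above.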
A note on the insecurity of cryptosystems based on Chebyshev polynomials https://zbmath.org/1485.94067 2022-06-24T15:10:38.853281Z "Cao, Zhengjun" https://zbmath.org/authors/?q=ai:cao.zhengjun "Liu, Lihua" https://zbmath.org/authors/?q=ai:liu.lihua The Chebyshev polynomials are defined by $$T_n(x) = \cos(n \arccos x)$$, where $$x \in [-1, 1]$$ and $$n\in\mathbb{N}$$, and they can also be defined by the following recursive relation: $$T_0(x)=1$$, $$T_1(x)=x$$ and $$T_n(x)=2x\cdot T_{n-1}(x)-T_{n-2}(x)$$, for $$n\geq 2$$. There are many encryption schemes based on the Chebyshev recursive relation, and their security relies on the hardness of finding a large degree $$n$$ of Chebyshev polynomials for given parameters. In this paper, the authors focus on the problem of evaluating modular Chebyshev polynomials. After describing two public key encryption schemes based on the Chebyshev recursive relation, one defined over the real number field [\textit{L. Kocarev} et al., Circuits Syst. Signal Process. 24, No. 5, 497--517 (2005; Zbl 1103.94018)] and the other one defined on finite fields [\textit{J. Li} et al., ``Verifiable Chebyshev maps-based chaotic encryption schemes with outsourcing computations in the cloud/fog scenarios'', Concurr. Comput. Pract. Exp. 31, No. 22, Article ID e4523, 10 p. (2019; \url{doi:10.1002/cpe.4523})], they investigate three methods for evaluating Chebyshev polynomials. First, the authors explain that the expression $$T_n(x) = \cos(n \arccos x)$$ can be used for the evaluation, but this method is only suitable for small $$n$$, in particular for $$n\leq 2^{20}$$. They then describe the three methods, i.e. matrix-multiplication-based evaluation, halve-and-square evaluation and root-extraction-based evaluation, and they prove that all these techniques, if executed over a finite field $$\mathbb{F}_p$$, where $$p$$ is a prime, have computational complexity $$O(\log n \log^2 p)$$.
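The matrix-multiplication-based evaluation mentioned in the review can be made concrete: the recurrence $$T_n(x)=2x\cdot T_{n-1}(x)-T_{n-2}(x)$$ gives the companion matrix $$M=\begin{pmatrix}2x & -1\\ 1 & 0\end{pmatrix}$$, so $$(T_n,T_{n-1})^\top = M^{n-1}(T_1,T_0)^\top$$, and square-and-multiply on $$M$$ needs $$O(\log n)$$ matrix products. The following Python sketch illustrates this; the function names are ours, not the paper's.

```python
# Evaluate T_n(x) mod p via the companion matrix of the Chebyshev recurrence
# T_n = 2x*T_{n-1} - T_{n-2}, using square-and-multiply (O(log n) products).

def mat_mul(A, B, p):
    """Multiply two 2x2 matrices modulo p."""
    return [
        [(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % p, (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % p],
        [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % p, (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % p],
    ]

def chebyshev_mod(n, x, p):
    """Evaluate T_n(x) mod p with O(log n) 2x2 matrix multiplications."""
    if n == 0:
        return 1 % p
    M = [[2*x % p, p - 1], [1, 0]]   # p - 1 represents -1 mod p
    R = [[1, 0], [0, 1]]             # identity
    e = n - 1
    while e:                          # square-and-multiply on the matrix
        if e & 1:
            R = mat_mul(R, M, p)
        M = mat_mul(M, M, p)
        e >>= 1
    # (T_n, T_{n-1}) = R * (T_1, T_0) = R * (x, 1)
    return (R[0][0]*x + R[0][1]) % p
```

The semigroup property $$T_m(T_n(x))=T_{mn}(x)$$, which underlies the Diffie-Hellman-style schemes discussed in the review, can be checked directly with this routine, e.g. `chebyshev_mod(6, chebyshev_mod(7, 3, p), p) == chebyshev_mod(42, 3, p)`.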
Finally, the authors conclude that there is no significant performance difference between these methods for moderate degrees. However, in theory, the root-extraction-based method could be more efficient. If one directly picks $$a\in\mathbb{F}_p$$ and considers the explicit expression $$T_n(\frac{a+a^{-1}}{2} \bmod p)$$, this method runs almost as fast as the general modular exponentiation. Reviewer: Riccardo Aragona (L'Aquila) Classical and quantum algorithms for generic syndrome decoding problems and applications to the Lee metric https://zbmath.org/1485.94068 2022-06-24T15:10:38.853281Z "Chailloux, André" https://zbmath.org/authors/?q=ai:chailloux.andre "Debris-Alazard, Thomas" https://zbmath.org/authors/?q=ai:debris-alazard.thomas "Etinski, Simona" https://zbmath.org/authors/?q=ai:etinski.simona Summary: The security of code-based cryptography usually relies on the hardness of the Syndrome Decoding (SD) problem for the Hamming weight. The best generic algorithms are all improvements of an old algorithm by \textit{E. Prange} [``The use of information sets in decoding cyclic codes'', IRE Trans. Inf. Theory 8, No. 5, 5--9 (1962; \url{doi:10.1109/TIT.1962.1057777})], and they are known under the name of Information Set Decoding (ISD) algorithms. This work aims to extend ISD algorithms' scope by changing the underlying weight function and alphabet size of SD. More precisely, we show how to use Wagner's algorithm in the ISD framework to solve SD for a wide range of weight functions. We also calculate the asymptotic complexities of ISD algorithms, both for the classical and quantum case. We then apply our results to the Lee metric, which is currently receiving a significant amount of attention. By providing the parameters of SD for the Lee weight for which decoding seems to be the hardest, our study could have several applications for designing code-based cryptosystems and their security analysis, especially against quantum adversaries.
For the entire collection see [Zbl 1482.94004]. Kim-type APN functions are affine equivalent to Gold functions https://zbmath.org/1485.94069 2022-06-24T15:10:38.853281Z "Chase, Benjamin" https://zbmath.org/authors/?q=ai:chase.benjamin "Lisoněk, Petr" https://zbmath.org/authors/?q=ai:lisonek.petr Summary: The problem of finding APN permutations of $${\mathbb{F}}_{2^n}$$ where $$n$$ is even and $$n > 6$$ has been called the Big APN Problem. \textit{K. Li} et al. [IEEE Trans. Inf. Theory 67, No. 11, 7535--7549 (2021; Zbl 07475556)] recently characterized APN functions defined on $${\mathbb{F}}_{q^2}$$ of the form $$f(x) = x^{3q} + a_1x^{2q+1} + a_2x^{q+2} + a_3x^3$$, where $$q = 2^m$$ and $$m \geq 4$$. We will call functions of this form Kim-type functions because they generalize the form of the Kim function that was used to construct an APN permutation of $$\mathbb{F}_{2^6}$$. We prove that Kim-type APN functions with $$m \geq 4$$ (previously characterized by Li et al. [loc. cit.]) are affine equivalent to one of two Gold functions $$G_1(x) = x^3$$ or $$G_2(x)=x^{2^{m-1}+1}$$. Combined with the recent result of \textit{F. Göloğlu} and \textit{P. Langevin} [Finite Fields Appl. 67, Article ID 101707, 21 p. (2020; Zbl 1459.94112)] who proved that, for even $$n$$, Gold APN functions are never CCZ equivalent to permutations, it follows that for $$m \geq 4$$ Kim-type APN functions on $$\mathbb{F}_{2^{2m}}$$ are never CCZ equivalent to permutations.
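For small fields the APN property featuring in this entry can be verified exhaustively: $$f$$ is APN when $$f(x+a)+f(x)=b$$ has at most two solutions $$x$$ for every $$a\neq 0$$ and every $$b$$. The Python check below does this for the Gold function $$G_1(x)=x^3$$ over $$\mathbb{F}_{2^4}$$, modeling the field with the reduction polynomial $$x^4+x+1$$; it is an illustrative sketch, not the authors' code.

```python
# Brute-force APN check over GF(2^4) with reduction polynomial x^4 + x + 1.
# Addition in GF(2^n) is XOR; f is APN iff every nonzero derivative
# x -> f(x + a) + f(x) hits each value at most twice.

MOD = 0b10011          # x^4 + x + 1, irreducible over GF(2)
N = 16                 # field size 2^4

def gf_mul(a, b):
    """Carry-less multiplication with interleaved reduction mod x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def is_apn(f):
    for a in range(1, N):
        counts = {}
        for x in range(N):
            d = f(x ^ a) ^ f(x)          # derivative of f in direction a
            counts[d] = counts.get(d, 0) + 1
        if max(counts.values()) > 2:
            return False
    return True

gold = lambda x: gf_mul(x, gf_mul(x, x))   # G_1(x) = x^3
```

Here `is_apn(gold)` returns `True`, consistent with the classical fact that the Gold function $$x^{2^k+1}$$ is APN over $$\mathbb{F}_{2^n}$$ whenever $$\gcd(k,n)=1$$ (here $$k=1$$, $$n=4$$); a linear map such as the identity fails the check, since its derivatives are constant.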
Lower bounds on the time/memory tradeoff of function inversion https://zbmath.org/1485.94070 2022-06-24T15:10:38.853281Z "Chawin, Dror" https://zbmath.org/authors/?q=ai:chawin.dror "Haitner, Iftach" https://zbmath.org/authors/?q=ai:haitner.iftach "Mazor, Noam" https://zbmath.org/authors/?q=ai:mazor.noam Summary: We study time/memory tradeoffs of function inversion: an algorithm, i.e., an inverter, equipped with an $$s$$-bit advice on a randomly chosen function $$f:[n] \mapsto [n]$$ and using $$q$$ oracle queries to $$f$$, tries to invert a randomly chosen output $$y$$ of $$f$$, i.e., to find $$x\in f^{-1}(y)$$. Much progress has been made regarding adaptive function inversion -- the inverter is allowed to make adaptive oracle queries. \textit{M. E. Hellman} [IEEE Trans. Inf. Theory 26, 401--406 (1980; Zbl 0436.94016)] presented an adaptive inverter that inverts with high probability a random $$f$$. \textit{A. Fiat} and \textit{M. Naor} [SIAM J. Comput. 29, No. 3, 790--803 (2000; Zbl 0941.68002)] proved that for any $$s$$, $$q$$ with $$s^3 q = n^3$$ (ignoring low-order terms), an $$s$$-advice, $$q$$-query variant of Hellman's algorithm inverts a constant fraction of the image points of any function. \textit{A. C.-C. Yao} [``Coherent functions and program checkers'', in: Proceedings of the 22nd annual ACM symposium on theory of computing, STOC '90, Baltimore, MA, USA, May 13--17, 1990. New York, NY: Association for Computing Machinery (ACM), 84--94 (1990; \url{doi:10.1145/100216.100226})] proved a lower bound of $$sq\ge n$$ for this problem. Closing the gap between the above lower and upper bounds is a long-standing open question. Very little is known of the non-adaptive variant of the question -- the inverter chooses its queries in advance. The only known upper bounds, i.e., inverters, are the trivial ones (with $$s+q= n$$), and the only lower bound is the above bound of Yao. In a recent work, \textit{H. Corrigan-Gibbs} and \textit{D. Kogan} [Lect. Notes Comput. Sci.
11891, 393--421 (2019; Zbl 1455.94145)] partially justified the difficulty of finding lower bounds on non-adaptive inverters, showing that a lower bound on the time/memory tradeoff of non-adaptive inverters implies a lower bound on low-depth Boolean circuits. Bounds that, for a strong enough choice of parameters, are notoriously hard to prove. We make progress on the above intriguing question, both for the adaptive and the non-adaptive case, proving the following lower bounds on restricted families of inverters: \par i) Linear-advice (adaptive inverter). If the advice string is a linear function of $$f$$ (e.g., $$A\times f$$, for some matrix $$A$$, viewing $$f$$ as a vector in $$[n]^n$$), then $$s+q \in \varOmega (n)$$. The bound generalizes to the case where the advice string of $$f_1 + f_2$$, i.e., the coordinate-wise addition of the truth tables of $$f_1$$ and $$f_2$$, can be computed from the description of $$f_1$$ and $$f_2$$ by a low communication protocol. \par ii) Affine non-adaptive decoders. If the non-adaptive inverter has an affine decoder -- it outputs a linear function, determined by the advice string and the element to invert, of the query answers -- then $$s \in \varOmega (n)$$ (regardless of $$q$$). \par iii) Affine non-adaptive decision trees. If the non-adaptive inversion algorithm is a $$d$$-depth affine decision tree -- it outputs the evaluation of a decision tree whose nodes compute a linear function of the answers to the queries -- and $$q < cn$$ for some universal $$c>0$$, then $$s\in \varOmega (n/d \log n)$$. For the entire collection see [Zbl 1482.94009]. High-throughput elliptic curve cryptography using AVX2 vector instructions https://zbmath.org/1485.94071 2022-06-24T15:10:38.853281Z "Cheng, Hao" https://zbmath.org/authors/?q=ai:cheng.hao "Großschädl, Johann" https://zbmath.org/authors/?q=ai:grossschadl.johann "Tian, Jiaqi" https://zbmath.org/authors/?q=ai:tian.jiaqi "Rønne, Peter B." 
https://zbmath.org/authors/?q=ai:ronne.peter-b "Ryan, Peter Y. A." https://zbmath.org/authors/?q=ai:ryan.peter-y-a Summary: Single Instruction Multiple Data (SIMD) execution engines like Intel's Advanced Vector Extensions 2 (AVX2) offer a great potential to accelerate elliptic curve cryptography compared to implementations using only basic x64 instructions. All existing AVX2 implementations of scalar multiplication on e.g. Curve25519 (and alternative curves) are optimized for low latency. We argue in this paper that many real-world applications, such as server-side SSL/TLS handshake processing, would benefit more from throughput-optimized implementations than latency-optimized ones. To support this argument, we introduce a throughput-optimized AVX2 implementation of variable-base scalar multiplication on Curve25519 and fixed-base scalar multiplication on Ed25519. Both implementations perform four scalar multiplications in parallel, where each uses a 64-bit element of a 256-bit vector. The field arithmetic is based on a radix-$$2^{29}$$ representation of the field elements, which makes it possible to carry out four parallel multiplications modulo a multiple of $$p = 2^{255} - 19$$ in just 88 cycles on a Skylake CPU. Four variable-base scalar multiplications on Curve25519 require less than 250,000 Skylake cycles, which translates to a throughput of 32,318 scalar multiplications per second at a clock frequency of 2 GHz. For comparison, the to-date best latency-optimized AVX2 implementation has a throughput of some 21,000 scalar multiplications per second on the same Skylake CPU. For the entire collection see [Zbl 1482.94005]. Barriers for succinct arguments in the random oracle model https://zbmath.org/1485.94072 2022-06-24T15:10:38.853281Z "Chiesa, Alessandro" https://zbmath.org/authors/?q=ai:chiesa.alessandro "Yogev, Eylon" https://zbmath.org/authors/?q=ai:yogev.eylon Summary: We establish barriers on the efficiency of succinct arguments in the random oracle model. 
We give evidence that, under standard complexity assumptions, there do not exist succinct arguments where the argument verifier makes a small number of queries to the random oracle. The new barriers follow from new insights into how probabilistic proofs play a fundamental role in constructing succinct arguments in the random oracle model. \par i) IOPs are necessary for succinctness. We prove that any succinct argument in the random oracle model can be transformed into a corresponding interactive oracle proof (IOP). The query complexity of the IOP is related to the succinctness of the argument. \par ii) Algorithms for IOPs. We prove that if a language has an IOP with good soundness relative to query complexity, then it can be decided via a fast algorithm with small space complexity. By combining these results we obtain barriers for a large class of deterministic and non-deterministic languages. For example, a succinct argument for 3SAT with few verifier queries implies an IOP with good parameters, which in turn implies a fast algorithm for 3SAT that contradicts the Exponential-Time Hypothesis. We additionally present results that shed light on the necessity of several features of probabilistic proofs that are typically used to construct succinct arguments, such as holography and state restoration soundness. Our results collectively provide an explanation for ``why'' known constructions of succinct arguments have a certain structure. For the entire collection see [Zbl 1482.94008]. Subquadratic SNARGs in the random oracle model https://zbmath.org/1485.94073 2022-06-24T15:10:38.853281Z "Chiesa, Alessandro" https://zbmath.org/authors/?q=ai:chiesa.alessandro "Yogev, Eylon" https://zbmath.org/authors/?q=ai:yogev.eylon Summary: In a seminal work, \textit{S. Micali} [SIAM J. Comput. 30, No. 4, 1253--1298 (2000; Zbl 1009.68053)] gave the first succinct non-interactive argument (SNARG) in the random oracle model (ROM).
The construction combines a PCP and a cryptographic commitment, and has several attractive features: it is plausibly post-quantum; it can be heuristically instantiated via lightweight cryptography; and it has a transparent (public-coin) parameter setup. However, it also has a significant drawback: a large argument size. In this work, we provide a new construction that achieves a smaller argument size. This is the first progress on the Micali construction since it was introduced over 25 years ago. A SNARG in the ROM is $$(t,\mathsf{\epsilon })$$-secure if every $$t$$-query malicious prover can convince the verifier of a false statement with probability at most $$\mathsf{\epsilon }$$. For $$(t,\mathsf{\epsilon })$$-security, the argument size of all known SNARGs in the ROM (including Micali's) is $$\tilde{O}((\log (t/\mathsf{\epsilon }))^2)$$ bits, even if one were to rely on conjectured probabilistic proofs well beyond current techniques. In practice, these costs lead to SNARGs that are much larger than constructions based on other (pre-quantum and costly) tools. This has led many to believe that SNARGs in the ROM are inherently quadratic. We show that this is not the case. We present a SNARG in the ROM with a sub-quadratic argument size: $$\tilde{O}(\log (t/\mathsf{\epsilon }) \cdot \log t)$$. Our construction relies on a strong soundness notion for PCPs and a weak binding notion for commitments. We hope that our work paves the way for understanding if a linear argument size, that is $$O(\log (t/\mathsf{\epsilon }))$$, is achievable in the ROM. For the entire collection see [Zbl 1483.94004]. Algebraic key-recovery attacks on reduced-round Xoofff https://zbmath.org/1485.94074 2022-06-24T15:10:38.853281Z "Cui, Tingting" https://zbmath.org/authors/?q=ai:cui.tingting "Grassi, Lorenzo" https://zbmath.org/authors/?q=ai:grassi.lorenzo Summary: Farfalle, a permutation-based construction for building a pseudorandom function (PRF), is really versatile. 
It can be used for message authentication codes, stream ciphers, key derivation functions, authenticated encryption and so on. The Farfalle construction relies on a set of permutations and on so-called rolling functions: it can be split into a compression layer followed by a two-step expansion layer. As one instance of Farfalle, Xoofff is very efficient on a wide range of platforms from low-end devices to high-end processors by combining the narrow permutation Xoodoo and the inherent parallelism of Farfalle. In this paper, we present key-recovery attacks on reduced-round Xoofff. After identifying a weakness in the expanding rolling function, we first propose practical attacks on Xoofff instantiated with 1-/2-round Xoodoo in the expansion layer. We next extend such attacks to Xoofff instantiated with 3-/4-round Xoodoo in the expansion layer by making use of Meet-in-the-Middle algebraic attacks and the linearization technique. All attacks proposed here -- which are independent of the details of the compression and/or middle layer -- have been practically verified (either on the ``real'' Xoofff or on a toy version of Xoofff with block-size of 96 bits). As a countermeasure, we discuss how to slightly modify the rolling function for free to reduce the number of attackable rounds. For the entire collection see [Zbl 1482.94005]. Revisiting fairness in MPC: polynomial number of parties and general adversarial structures https://zbmath.org/1485.94075 2022-06-24T15:10:38.853281Z "Dachman-Soled, Dana" https://zbmath.org/authors/?q=ai:dachman-soled.dana Summary: We investigate fairness in secure multiparty computation when the number of parties $$n = \operatorname{poly}(\lambda )$$ grows polynomially in the security parameter, $$\lambda$$. Prior to this work, efficient protocols achieving fairness with no honest majority and polynomial number of parties were known only for the AND and OR functionalities [\textit{S. D. Gordon} and \textit{J. Katz}, Lect. Notes Comput. Sci.
5444, 19--35 (2009; Zbl 1213.94104)]. We show the following: \par i) We first consider symmetric Boolean functions $$F : \{0,1\}^n \rightarrow \{0,1\}$$, where the underlying function $$f_{n/2,n/2}: \{0, \ldots , n/2\} \times \{0, \ldots , n/2\} \rightarrow \{0,1\}$$ can be computed fairly and efficiently in the 2-party setting. We present an efficient protocol for any such $$F$$ tolerating $$n/2$$ or fewer corruptions, for $$n = \operatorname{poly}(\lambda )$$ number of parties. \par ii) We present an efficient protocol for $$n$$-party majority tolerating $$n/2+1$$ or fewer corruptions, for $$n = \operatorname{poly}(\lambda )$$ number of parties. The construction extends to $$n/2+c$$ or fewer corruptions, for constant $$c$$. \par iii) We extend both of the above results to more general types of adversarial structures and present instantiations of non-threshold adversarial structures of these types. These instantiations are obtained via constructions of projective planes and combinatorial designs. For the entire collection see [Zbl 1482.94008]. Super-linear time-memory trade-offs for symmetric encryption https://zbmath.org/1485.94076 2022-06-24T15:10:38.853281Z "Dai, Wei" https://zbmath.org/authors/?q=ai:dai.wei.7 "Tessaro, Stefano" https://zbmath.org/authors/?q=ai:tessaro.stefano "Zhang, Xihu" https://zbmath.org/authors/?q=ai:zhang.xihu Summary: We build symmetric encryption schemes from a pseudorandom function/permutation with domain size $$N$$ which have very high security -- in terms of the amount of messages $$q$$ they can securely encrypt -- assuming the adversary has $$S < N$$ bits of memory. We aim to minimize the number of calls $$k$$ we make to the underlying primitive to achieve a certain $$q$$, or equivalently, to maximize the achievable $$q$$ for a given $$k$$. We target in particular $$q \gg N$$, in contrast to recent works [\textit{J. Jaeger} and \textit{S. Tessaro}, Lect. Notes Comput. Sci. 11476, 467--497 (2019; Zbl 1470.94089); \textit{I. 
Dinur}, ibid. 12106, 433--460 (2020; Zbl 07496558)] which aim to beat the birthday barrier with one call when $$S < \sqrt{N}$$. Our first result gives new and explicit bounds for the Sample-then-Extract paradigm by \textit{S. Tessaro} and \textit{A. Thiruvengadam} [ibid. 11239, 3--32 (2018; Zbl 1443.94080)]. We show instantiations for which $$q =\varOmega \left( (N/S)^k \right)$$. If $$S < N^{1- \alpha }$$, Tessaro and Thiruvengadam's weaker bounds only guarantee $$q > N$$ when $$k = \varOmega (\log N)$$. In contrast, here, we show this is true already for $$k = \varTheta (1/\alpha )$$. We also consider a scheme by \textit{M. Bellare} et al. [ibid. 1666, 270--287 (1999; Zbl 0940.94019)], which evaluates the primitive on $$k$$ independent random inputs, and masks the message with the XOR of the outputs. Here, we show $$q= \varOmega \left( (N/S)^{k/2} \right)$$, using new combinatorial bounds on the list-decodability of XOR codes which are of independent interest. We also study best-possible attacks against this construction. For the entire collection see [Zbl 1482.94009]. A state bit recovery algorithm with TMDTO attack on Lizard and Grain-128a https://zbmath.org/1485.94077 2022-06-24T15:10:38.853281Z "Dalai, Deepak Kumar" https://zbmath.org/authors/?q=ai:dalai.deepak-kumar "Pal, Santu" https://zbmath.org/authors/?q=ai:pal.santu "Sarkar, Santanu" https://zbmath.org/authors/?q=ai:sarkar.santanu Summary: We propose a deterministic algorithm to recover some state bits of any FSR-based stream cipher knowing some keystream bits by fixing some state bits. The algorithm keeps the number of fixed bits as small as possible. Applying the algorithm, we could recover $$10,11, \dots, 24$$ state bits by fixing 10, 12, 14, 16, 18, 20, 22, 24, 38, 40, 42, 44, 46, 48, 50 state bits respectively for Lizard and 35, 48 state bits by fixing 34, 54 state bits respectively for Grain-128a.
The result on Lizard beats the previous one, which recovered 14 state bits by fixing 30 state bits, and the result on Grain-128a is the first of its kind. Further, we present the Time-Memory-Data Trade-Off (TMDTO) curve in terms of the numbers of recovered and fixed state bits. We then use the obtained numbers of recovered and fixed state bits of Lizard and Grain-128a to mount the TMDTO attack and recover the remaining state bits of these two ciphers. Our results supersede the previous result by \textit{S. Maitra} et al. [IEEE Trans. Comput. 67, No. 5, 733--739 (2018; Zbl 1395.94302)] (i.e., $$T = M = D = 2^{54}$$) on the TMDTO attack on Lizard. The best results for Lizard are \par 1.) $$T = M = 2^{54}$$, $$D = 2^{48}$$, which requires 64 times less data than in [Maitra et al., loc. cit.]; \par 2.) $$T = 2^{52}$$, $$M = D = 2^{53}$$ or $$D = 2^{52}$$, $$M = T = 2^{53}$$, which improve the minimization of $$\max \{T, M, D\}$$; \par 3.) $$T = 2^{50}$$, $$M = D = 2^{54}$$, which reduces the time complexity 16-fold compared with [Maitra et al., loc. cit.]; \par 4.) $$T = 2^{42}$$, $$M = D = 2^{60}$$, which reduces the time complexity by a factor of $$2^{18}$$ with respect to the overall complexity of Lizard claimed by \textit{M. Hamann} et al. [``LIZARD -- a lightweight stream cipher for power-constrained devices'', IACR Trans. Symmetric Cryptol. 2017, No. 1, 45--79 (2017; \url{doi:10.13154/tosc.v2017.i1.45-79})]. Improvements to quantum search techniques for block-ciphers, with applications to AES https://zbmath.org/1485.94078 2022-06-24T15:10:38.853281Z "Davenport, James H." https://zbmath.org/authors/?q=ai:davenport.james-harold "Pring, Benjamin" https://zbmath.org/authors/?q=ai:pring.benjamin Summary: In this paper we demonstrate that the overheads (ancillae qubits/time/number of gates) involved with implementing quantum oracles for a generic key-recovery attack against block-ciphers using quantum search techniques can be reduced.
In particular, if we require $$r \ge 1$$ plaintext-ciphertext pairs to uniquely identify a user's key, then using Grover's quantum search algorithm for cryptanalysis of block-ciphers as in [\textit{S. Jaques} et al., Lect. Notes Comput. Sci. 12106, 280--310 (2020; Zbl 07496553)] would require a quantum circuit whose cost (either Time $$\times$$ Space product or number of quantum gates) is proportional to $$r$$. We demonstrate how we can reduce this by a fine-grained approach to quantum amplitude amplification [\textit{S. Kimmel} et al., LIPIcs -- Leibniz Int. Proc. Inform. 44, 1--26 (2015; Zbl 1366.68057); \textit{G. Brassard} et al., Contemp. Math. 305, 53--74 (2002; Zbl 1063.81024)] and design of the required quantum oracles. We furthermore demonstrate that this effort can be reduced to $$< r$$ with respect to cryptanalysis of AES-128/192/256 and provide full quantum resource estimations for AES-128/192/256 with our methods, and code in the Q\# quantum programming language that extends the work of Jaques et al. [loc. cit.]. For the entire collection see [Zbl 1482.94005]. Practical isogeny-based key-exchange with optimal tightness https://zbmath.org/1485.94079 2022-06-24T15:10:38.853281Z "de Kock, Bor" https://zbmath.org/authors/?q=ai:de-kock.bor "Gjøsteen, Kristian" https://zbmath.org/authors/?q=ai:gjosteen.kristian "Veroni, Mattia" https://zbmath.org/authors/?q=ai:veroni.mattia Summary: We exploit the Diffie-Hellman-like structure of CSIDH to build a quantum-resistant authenticated key-exchange algorithm. Our security proof has optimal tightness, which means that the protocol is efficient even when instantiated with theoretically-sound security parameters. Compared to previous isogeny-based authenticated key-exchange protocols, our scheme is extremely simple: its security relies only on the underlying CSIDH-problem, and it has optimal communication complexity for CSIDH-based protocols.
Our security proof relies heavily on the re-randomizability of CSIDH-like problems and carries on in the ROM. For the entire collection see [Zbl 1482.94005]. Towards defeating backdoored random oracles: indifferentiability with bounded adaptivity https://zbmath.org/1485.94080 2022-06-24T15:10:38.853281Z "Dodis, Yevgeniy" https://zbmath.org/authors/?q=ai:dodis.yevgeniy "Farshim, Pooya" https://zbmath.org/authors/?q=ai:farshim.pooya "Mazaheri, Sogol" https://zbmath.org/authors/?q=ai:mazaheri.sogol "Tessaro, Stefano" https://zbmath.org/authors/?q=ai:tessaro.stefano Summary: In the backdoored random-oracle (BRO) model, besides access to a random function $$\mathsf{H}$$, adversaries are provided with a backdoor oracle that can compute arbitrary leakage functions $$f$$ of the function table of $$\mathsf{H}$$. Thus, an adversary would be able to invert points, find collisions, test for membership in certain sets, and more. This model was introduced in the work of \textit{B. Bauer} et al. [Lect. Notes Comput. Sci. 10992, 272--302 (2018; Zbl 1436.94039)] and extends the auxiliary-input idealized models of \textit{D. Unruh} [ibid. 4622, 205--223 (2007; Zbl 1215.94071)], \textit{Y. Dodis} et al. [ibid. 10211, 473--495 (2017; Zbl 1415.94424)], \textit{S. Coretti} et al. [ibid. 10820, 227--258 (2018; Zbl 1423.94063)], and \textit{S. Coretti} et al. [ibid. 10991, 693--721 (2018; Zbl 1444.94058)]. It was shown that certain security properties, such as one-wayness, pseudorandomness, and collision resistance can be re-established by combining two independent BROs, even if the adversary has access to both backdoor oracles. In this work we further develop the technique of combining two or more independent BROs to render their backdoors useless in a more general sense. More precisely, we study the question of building an indifferentiable and backdoor-free random function by combining multiple BROs. Achieving full indifferentiability in this model seems very challenging at the moment. 
We however make progress by showing that the xor combiner goes well beyond security against preprocessing attacks and offers indifferentiability as long as the adaptivity of queries to different backdoor oracles remains logarithmic in the input size of the BROs. We even show that an extractor-based combiner of three BROs can achieve indifferentiability with respect to a linear adaptivity of backdoor queries. Furthermore, a natural restriction of our definition gives rise to a notion of indifferentiability with auxiliary input, for which we give two positive feasibility results. To prove these results we build on and refine techniques by \textit{M. Göös} et al. [in: Proceedings of the 47th annual ACM symposium on theory of computing, STOC '15, Portland, OR, USA, June 14--17, 2015. New York, NY: Association for Computing Machinery (ACM). 257--266 (2015; Zbl 1321.68313)] and \textit{P. K. Kothari} et al. [in: Proceedings of the 49th annual ACM SIGACT symposium on theory of computing, STOC '17, Montreal, QC, Canada, June 19--23, 2017. New York, NY: Association for Computing Machinery (ACM). 590--603 (2017; Zbl 1370.68133)] for decomposing distributions with high entropy into distributions with more structure and show how they can be applied in the more involved adaptive settings. For the entire collection see [Zbl 1482.94009]. 
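The xor combiner at the heart of the result above is simple to state. The sketch below is a toy model, not the paper's construction: domain-separated SHA-256 calls stand in for the two independent backdoored random oracles, and the function names are ours.

```python
import hashlib

def bro(label: bytes, x: bytes) -> bytes:
    """Toy stand-in for one independent random oracle (domain-separated SHA-256)."""
    return hashlib.sha256(label + b"|" + x).digest()

def xor_combiner(x: bytes) -> bytes:
    # Combine two independent oracles by XOR; intuitively, a backdoor into
    # either single oracle reveals nothing about the XOR of both outputs.
    h1 = bro(b"oracle-1", x)
    h2 = bro(b"oracle-2", x)
    return bytes(a ^ b for a, b in zip(h1, h2))
```

The paper's contribution is the indifferentiability analysis of this combiner under bounded cross-oracle adaptivity, which the toy code does not capture.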
Practical key recovery attacks on FlexAEAD https://zbmath.org/1485.94081 2022-06-24T15:10:38.853281Z "Dunkelman, Orr" https://zbmath.org/authors/?q=ai:dunkelman.orr "Eichlseder, Maria" https://zbmath.org/authors/?q=ai:eichlseder.maria "Kales, Daniel" https://zbmath.org/authors/?q=ai:kales.daniel "Keller, Nathan" https://zbmath.org/authors/?q=ai:keller.nathan "Leurent, Gaëtan" https://zbmath.org/authors/?q=ai:leurent.gaetan "Schofnegger, Markus" https://zbmath.org/authors/?q=ai:schofnegger.markus Summary: FlexAEAD is a block cipher candidate submitted to the NIST Lightweight Cryptography standardization project, based on repeated application of an Even-Mansour construction. In order to optimize performance, the designers chose a relatively small number of rounds, using properties of the mode and bounds on differential and linear characteristics to substantiate their security claims. Due to a forgery attack with complexity of $$2^{46}$$, FlexAEAD was not selected for the second round of evaluation in the NIST project. In this paper we present a practical key recovery attack on FlexAEAD, using clusters of differentials for the internal permutation and the interplay between different parts of the mode. Our attack, which was fully verified in practice, allows recovering the secret subkeys of FlexAEAD-64 with time complexity of less than $$2^{31}$$ encryptions (with an experimental success rate of 75\%). This is the first practical key recovery attack on a candidate of the NIST standardization project. A practical adaptive key recovery attack on the LGM (GSW-like) cryptosystem https://zbmath.org/1485.94082 2022-06-24T15:10:38.853281Z "Fauzi, Prastudy" https://zbmath.org/authors/?q=ai:fauzi.prastudy-mungkas "Hovd, Martha Norberg" https://zbmath.org/authors/?q=ai:hovd.martha-norberg "Raddum, Håvard" https://zbmath.org/authors/?q=ai:raddum.havard Summary: We present an adaptive key recovery attack on the leveled homomorphic encryption scheme suggested by \textit{Z. Li} et al.
[``Preventing adaptive key recovery attacks on the Gentry-Sahai-Waters leveled homomorphic encryption scheme'', Preprint, \url{https://eprint.iacr.org/2016/1146.pdf}], which itself is a modification of the GSW cryptosystem designed to resist key recovery attacks by using a different linear combination of secret keys for each decryption. We were able to efficiently recover the secret key for a realistic choice of parameters using a statistical attack. In particular, this means that the Li et al. strategy does not prevent adaptive key recovery attacks. For the entire collection see [Zbl 1482.94004]. SimS: a simplification of SiGamal https://zbmath.org/1485.94083 2022-06-24T15:10:38.853281Z "Fouotsa, Tako Boris" https://zbmath.org/authors/?q=ai:fouotsa.tako-boris "Petit, Christophe" https://zbmath.org/authors/?q=ai:petit.christophe.2|petit.christophe.1 Summary: \textit{T. Moriya} et al. [``SiGamal: a supersingular isogeny-based PKE and its application to a PRF'', Preprint, \url{https://eprint.iacr.org/2020/613.pdf}] introduced two new IND-CPA secure supersingular isogeny based Public Key Encryption (PKE) protocols: SiGamal and C-SiGamal. Unlike the PKEs canonically derived from SIDH and CSIDH, the new protocols provide IND-CPA security without the use of hash functions. SiGamal and C-SiGamal are however not IND-CCA secure. Moriya et al. suggested a variant of SiGamal that could be IND-CCA secure, but left its study as an open problem. In this paper, we revisit the protocols introduced by Moriya et al. First, we show that the SiGamal variant suggested by Moriya et al. for IND-CCA security is, in fact, not IND-CCA secure. Secondly, we propose a new isogeny-based PKE protocol named SimS, obtained by simplifying SiGamal. SimS has smaller public keys and ciphertexts than (C-)SiGamal and it is more efficient. We prove that SimS is IND-CCA secure under CSIDH security assumptions and one Knowledge of Exponent-type assumption we introduce.
Interestingly, SimS is also much closer to the CSIDH protocol, facilitating a comparison between SiGamal and CSIDH. For the entire collection see [Zbl 1482.94004]. A fast chaos-based colour image encryption algorithm using a hash function https://zbmath.org/1485.94084 2022-06-24T15:10:38.853281Z "Fu, Chong" https://zbmath.org/authors/?q=ai:fu.chong "Zhang, Gao-Yuan" https://zbmath.org/authors/?q=ai:zhang.gaoyuan "Zhu, Mai" https://zbmath.org/authors/?q=ai:zhu.mai "Chen, Jun-Xin" https://zbmath.org/authors/?q=ai:chen.junxin "Lei, Wei-Min" https://zbmath.org/authors/?q=ai:lei.wei-min Summary: This paper suggests a new fast colour image cipher to meet the increasing demand for secure online image communication applications. Unlike most other existing approaches using a permutation-substitution network, the proposed algorithm consists of only a single substitution part. The keystream sequence is generated from a 4-D hyperchaotic system, whose initial conditions are determined by both the secret key and the SHA-224 cryptographic hash value of the plain-image. Thanks to the avalanche effect of hash functions, totally different keystream sequences are generated for different images. Consequently, the desired diffusion effect can be achieved after only a single round of substitution, whereas at least two encryption rounds are required by state-of-the-art permutation-substitution type image ciphers. We also demonstrate the computational efficiency of the proposed algorithm by comparing it with the AES encryption algorithm. A thorough security analysis is carried out, demonstrating the satisfactory security of the proposed algorithm.
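The data flow of the single-substitution cipher above can be sketched in a few lines. This is a hedged illustration only: iterated SHA-256 stands in for the paper's 4-D hyperchaotic keystream generator, and the function names are ours. As in such schemes, the plain-image digest that seeds the keystream must accompany the cipher-image so the receiver can decrypt.

```python
import hashlib

def keystream(key: bytes, digest: bytes, n: int) -> bytes:
    # Stand-in PRG: iterated SHA-256 seeded by the secret key and the
    # SHA-224 digest of the plain-image.  The paper uses a 4-D hyperchaotic
    # system here; this substitute only mirrors the data flow.
    out, state = b"", key + digest
    while len(out) < n:
        state = hashlib.sha256(state).digest()
        out += state
    return out[:n]

def encrypt(key: bytes, image: bytes) -> tuple[bytes, bytes]:
    digest = hashlib.sha224(image).digest()  # avalanche: any change to the
    ks = keystream(key, digest, len(image))  # image yields a fresh keystream
    cipher = bytes(p ^ k for p, k in zip(image, ks))
    return cipher, digest                    # digest travels with the cipher-image

def decrypt(key: bytes, cipher: bytes, digest: bytes) -> bytes:
    ks = keystream(key, digest, len(cipher))
    return bytes(c ^ k for c, k in zip(cipher, ks))
```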
Quantum indistinguishability for public key encryption https://zbmath.org/1485.94085 2022-06-24T15:10:38.853281Z "Gagliardoni, Tommaso" https://zbmath.org/authors/?q=ai:gagliardoni.tommaso "Krämer, Juliane" https://zbmath.org/authors/?q=ai:kramer.juliane "Struck, Patrick" https://zbmath.org/authors/?q=ai:struck.patrick Summary: In this work we study the quantum security of public key encryption schemes (PKE). \textit{D. Boneh} et al. [Lect. Notes Comput. Sci. 7073, 41--69 (2011; Zbl 1227.94033)] initiated this research area for PKE and symmetric key encryption (SKE), albeit restricted to a classical indistinguishability phase. \textit{T. Gagliardoni} et al. [ibid. 9816, 60--89 (2016; Zbl 1406.81021)] advanced the study of quantum security by giving, for SKE, the first definition with a quantum indistinguishability phase. For PKE, on the other hand, no notion of quantum security with a quantum indistinguishability phase exists. Our main result is a novel quantum security notion (qIND-qCPA) for PKE with a quantum indistinguishability phase, which closes the aforementioned gap. We show a distinguishing attack against code-based schemes and against LWE-based schemes with certain parameters. We also show that the canonical hybrid PKE-SKE encryption construction is qIND-qCPA-secure, even if the underlying PKE scheme by itself is not. Finally, we classify quantum-resistant PKE schemes based on the applicability of our security notion. Our core idea follows the approach of Gagliardoni et al. [loc. cit.] by using so-called type-2 operators for encrypting the challenge message. At first glance, type-2 operators appear unnatural for PKE, as the canonical way of building them requires both the secret and the public key. However, we identify a class of PKE schemes -- which we call recoverable -- and show that for this class type-2 operators require merely the public key. 
Moreover, recoverable schemes make it possible to realise type-2 operators even if they suffer from decryption failures, which in general thwarts the reversibility mandated by type-2 operators. Our work reveals that many real-world quantum-resistant PKE schemes, including most NIST PQC candidates and the canonical hybrid construction, are indeed recoverable. For the entire collection see [Zbl 1482.94004]. On index calculus algorithms for subfield curves https://zbmath.org/1485.94086 2022-06-24T15:10:38.853281Z "Galbraith, Steven D." https://zbmath.org/authors/?q=ai:galbraith.steven-d "Granger, Robert" https://zbmath.org/authors/?q=ai:granger.robert-a "Merz, Simon-Philipp" https://zbmath.org/authors/?q=ai:merz.simon-philipp "Petit, Christophe" https://zbmath.org/authors/?q=ai:petit.christophe.1 Summary: In this paper we further the study of index calculus methods for solving the elliptic curve discrete logarithm problem (ECDLP). We focus on the index calculus for subfield curves, also called Koblitz curves, defined over $$\mathbb{F}_q$$ with ECDLP in $$\mathbb{F}_{q^n}$$. Instead of accelerating the solution of polynomial systems during index calculus as was predominantly done in previous work, we define factor bases that are invariant under the $$q$$-power Frobenius automorphism of the field $$\mathbb{F}_{q^n}$$, reducing the number of polynomial systems that need to be solved. A reduction by a factor of $$1/n$$ is the best one could hope for. We show how to choose factor bases to achieve this, while simultaneously accelerating the linear algebra step of the index calculus method for Koblitz curves by a factor $$n^2$$. Furthermore, we show how to use the Frobenius endomorphism to improve symmetry breaking for Koblitz curves. We provide constructions of factor bases with the desired properties, and we study their impact on the polynomial system solving costs experimentally. For the entire collection see [Zbl 1482.94005].
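The factor-of-$$1/n$$ saving from Frobenius-invariant factor bases comes from grouping elements into Frobenius orbits whose sizes divide $$n$$. The following toy exponent-space model (ours, not the authors' factor-base construction) illustrates this: the $$q$$-power Frobenius acts on discrete logarithms as multiplication by $$q$$ modulo $$q^n - 1$$, so the orbits are the cyclotomic cosets.

```python
def frobenius_orbits(q: int, n: int):
    # Toy model: orbits of a -> q*a mod (q**n - 1), i.e. cyclotomic cosets.
    # Orbit sizes divide n, which is why a Frobenius-invariant factor base
    # can cut the number of polynomial systems by up to a factor of n.
    m = q**n - 1
    seen, orbits = set(), []
    for a in range(m):
        if a in seen:
            continue
        orbit, b = [], a
        while b not in orbit:
            orbit.append(b)
            b = (b * q) % m
        seen.update(orbit)
        orbits.append(orbit)
    return orbits
```

For $$q = 2$$, $$n = 4$$ the 15 exponents fall into 5 orbits, close to the ideal $$15/4$$: the reduction is slightly less than $$1/n$$ because some orbits are shorter than $$n$$.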
Obfuscating finite automata https://zbmath.org/1485.94087 2022-06-24T15:10:38.853281Z "Galbraith, Steven D." https://zbmath.org/authors/?q=ai:galbraith.steven-d "Zobernig, Lukas" https://zbmath.org/authors/?q=ai:zobernig.lukas Summary: We construct a virtual black box and perfect circuit-hiding obfuscator for evasive deterministic finite automata using a matrix encoding scheme with a limited zero-testing algorithm. We construct the matrix encoding scheme by extending an existing matrix fully homomorphic encryption scheme. Using obfuscated deterministic finite automata we can for example evaluate secret regular expressions or disjunctive normal forms on public inputs. In particular, the possibility of evaluating regular expressions solves the open problem of obfuscated substring matching. For the entire collection see [Zbl 1482.94005]. New techniques in replica encodings with client setup https://zbmath.org/1485.94088 2022-06-24T15:10:38.853281Z "Garg, Rachit" https://zbmath.org/authors/?q=ai:garg.rachit "Lu, George" https://zbmath.org/authors/?q=ai:lu.george-jingfeng "Waters, Brent" https://zbmath.org/authors/?q=ai:waters.brent Summary: A proof of replication system is a cryptographic primitive that allows a server (or group of servers) to prove to a client that it is dedicated to storing multiple copies or replicas of a file. Until recently, all such protocols required fine-grained timing assumptions on the amount of time it takes for a server to produce such replicas. \textit{I. Damgård} et al. [Lect. Notes Comput. Sci. 11692, 355--380 (2019; Zbl 1452.94064)] proposed a novel notion that we will call proof of replication with client setup. Here, a client first operates with secret coins to generate the replicas for a file. Such systems do not inherently have to require fine-grained timing assumptions. At the core of their solution to building proofs of replication with client setup is an abstraction called replica encodings. 
Briefly, these comprise a private coin scheme where a client algorithm, given a file $$m$$, can produce an encoding $$\sigma$$. The encodings have the property that, given any encoding $$\sigma$$, one can decode and retrieve the original file $$m$$. Secondly, if a server has significantly less than $$n \cdot |m|$$ bits of storage, it cannot reproduce $$n$$ encodings. The authors give a construction of encodings from ideal permutations and trapdoor functions. In this work, we make three central contributions. \par i) Our first contribution is that we discover and demonstrate that the security argument put forth by Damgård et al. [loc. cit.] is fundamentally flawed. Briefly, the security argument makes assumptions on the attacker's storage behavior that do not capture general attacker strategies. We demonstrate this issue by constructing a trapdoor permutation which is secure assuming indistinguishability obfuscation and serves as a counterexample to their claim (for the parameterization stated). \par ii) In our second contribution we show that the DGO construction is actually secure in the ideal permutation model (or ideal cipher model) and the random oracle (or random function) model from any trapdoor permutation when parameterized correctly, namely when the number of rounds in the construction is equal to $$\lambda \cdot n \cdot b$$, where $$\lambda$$ is the security parameter, $$n$$ is the number of replicas and $$b$$ is the number of blocks. To do so we build a proof approach from the ground up that accounts for general attacker storage behavior, introducing an analysis technique that we call ``sequence-then-switch''. \par iii) Finally, we show a new construction that is provably secure in the random oracle model, thus requiring less structure on the ideal function. For the entire collection see [Zbl 1482.94009].
Weak-key distinguishers for AES https://zbmath.org/1485.94089 2022-06-24T15:10:38.853281Z "Grassi, Lorenzo" https://zbmath.org/authors/?q=ai:grassi.lorenzo "Leander, Gregor" https://zbmath.org/authors/?q=ai:leander.gregor "Rechberger, Christian" https://zbmath.org/authors/?q=ai:rechberger.christian "Tezcan, Cihangir" https://zbmath.org/authors/?q=ai:tezcan.cihangir "Wiemer, Friedrich" https://zbmath.org/authors/?q=ai:wiemer.friedrich Summary: In this paper, we analyze the security of AES in the case in which the whitening key is a weak key. After a systematization of the classes of weak-keys of AES, we perform an extensive analysis of weak-key distinguishers (in the single-key setting) for AES instantiated with the original key-schedule and with the new key-schedule proposed in [\textit{K. Khoo} et al., ``Human-readable proof of the related-key security of AES-128'', IACR Trans. Symmetric Cryptol. 2017, No. 2, 59--83 (2017; \url{doi:10.13154/tosc.v2017.i2.59-83})]. As one of the main results, we show that (almost) all the secret-key distinguishers for round-reduced AES currently present in the literature can be set up for a higher number of rounds of AES if the whitening key is a weak-key. Using these results as starting point, we describe a property for 9-round AES-128 and 12-round AES-256 in the chosen-key setting with complexity $$2^{64}$$ without requiring related keys. These new chosen-key distinguishers -- set up by exploiting a variant of the multiple-of-8 property introduced in [\textit{L. Grassi} et al., Lect. Notes Comput. Sci. 10211, 289--317 (2017; Zbl 1415.94433)] -- improve all the AES chosen-key distinguishers in the single-key setting. The entire analysis has been performed using a new framework that we introduce here -- called ``weak-key subspace trails'', which is obtained by combining invariant subspaces [\textit{G. Leander} et al., ibid. 6841, 206--221 (2011; Zbl 1287.94080)] and subspace trails into a new, more powerful, attack.
For the entire collection see [Zbl 1482.94005]. Attacks on beyond-birthday-bound MACs in the quantum setting https://zbmath.org/1485.94090 2022-06-24T15:10:38.853281Z "Guo, Tingting" https://zbmath.org/authors/?q=ai:guo.tingting "Wang, Peng" https://zbmath.org/authors/?q=ai:wang.peng.2|wang.peng.1|wang.peng "Hu, Lei" https://zbmath.org/authors/?q=ai:hu.lei "Ye, Dingfeng" https://zbmath.org/authors/?q=ai:ye.dingfeng Summary: We systematically study the security of twelve Beyond-Birthday-Bound Message Authentication Codes (BBB MACs) in the Q2 model where attackers have quantum-query access to MACs. Assuming the block size of the underlying (tweakable) block cipher is $$n$$ bits, the security proofs show that they are secure at least up to $$\mathcal{O}(2^{2n/3})$$ queries in the classical setting. The best classical attacks need $$\mathcal{O}(2^{3n/4})$$ queries. We consider secret state recovery against SUM-ECBC-like and PMAC\_Plus-like MACs and key recovery against PMAC\_Plus-like MACs. Both attacks lead to successful forgeries. The first attack costs $$\mathcal{O}(2^{n/2}n)$$ quantum queries by applying Grover-meet-Simon algorithm. The second attack costs $$\mathcal{O}(2^{m/2})$$ quantum queries by applying Grover's algorithm, assuming the key size of (tweakable) block cipher is $$m$$ bits. As far as we know, these are the first quantum attacks against BBB MACs. It is remarkable that our attacks are suitable even for some optimally secure MACs, such as mPMAC+-f, mPMAC+-p1, and mPMAC+-p2. For the entire collection see [Zbl 1482.94004]. 
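The Grover costs quoted in the abstract above follow the standard query-count estimate for unstructured search. A minimal helper (ours, for illustration only) makes the classical/quantum gap concrete:

```python
import math

def grover_iterations(bits: int) -> int:
    # Optimal Grover iteration count for an unstructured search over
    # N = 2**bits items: about (pi/4) * sqrt(N), versus ~N/2 classical
    # trials on average.  This is why recovering an m-bit key costs on the
    # order of 2**(m/2) quantum queries.
    return math.floor((math.pi / 4) * math.sqrt(2 ** bits))
```

For a 128-bit key this gives roughly $$2^{63.65}$$ iterations, consistent with the $$\mathcal{O}(2^{m/2})$$ cost cited for the Grover-based key-recovery attack.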
Generic transformation from broadcast encryption to round-optimal deniable ring authentication https://zbmath.org/1485.94091 2022-06-24T15:10:38.853281Z "Hara, Keisuke" https://zbmath.org/authors/?q=ai:hara.keisuke "Matsuda, Takahiro" https://zbmath.org/authors/?q=ai:matsuda.takahiro "Hanaoka, Goichiro" https://zbmath.org/authors/?q=ai:hanaoka.goichiro "Tanaka, Keisuke" https://zbmath.org/authors/?q=ai:tanaka.keisuke Summary: Deniable ring authentication enables a prover in some group (called a ring) to authenticate a message to a verifier using its secret key while at the same time allowing the prover to deny ever having interacted with the verifier. This primitive furthermore guarantees the anonymity of the prover in the sense that the verifier will learn nothing about the prover's identity except that it is included in the ring. In this work, we propose a new generic construction of two-round concurrently deniable ring authentication in the random oracle model. Our generic construction is based on any IND-CPA secure broadcast encryption (BE) scheme. Instantiating the underlying IND-CPA secure BE scheme with the schemes proposed by \textit{S. Agrawal} and \textit{S. Yamada} [Lect. Notes Comput. Sci. 12105, 13--43 (2020; Zbl 1479.94105)], we obtain the first two-round concurrently deniable ring authentication scheme with optimal efficiency in an asymptotic sense. Here, by optimal efficiency, we mean that all of the sizes of a public parameter and secret keys, the communication costs, and the number of pairing operations are independent of $$n$$, where $$n$$ is the number of users in a ring. In addition to these main instantiations, through our generic construction, we further obtain various two-round concurrently deniable ring authentication schemes. 
On average-case hardness in \textsf{TFNP} from one-way functions https://zbmath.org/1485.94092 2022-06-24T15:10:38.853281Z "Hubáček, Pavel" https://zbmath.org/authors/?q=ai:hubacek.pavel "Kamath, Chethan" https://zbmath.org/authors/?q=ai:kamath.chethan "Král, Karel" https://zbmath.org/authors/?q=ai:kral.karel "Slívová, Veronika" https://zbmath.org/authors/?q=ai:slivova.veronika Summary: The complexity class $$\mathsf{TFNP}$$ consists of all NP search problems that are total in the sense that a solution is guaranteed to exist for all instances. Over the years, this class has proved to illuminate surprising connections among several diverse subfields of mathematics like combinatorics, computational topology, and algorithmic game theory. More recently, we are starting to better understand its interplay with cryptography. We know that certain cryptographic primitives (e.g. one-way permutations, collision-resistant hash functions, or indistinguishability obfuscation) imply average-case hardness in $$\mathsf{TFNP}$$ and its important subclasses. However, its relationship with the most basic cryptographic primitive -- i.e., one-way functions (OWFs) -- still remains unresolved. Under an additional complexity theoretic assumption, OWFs imply hardness in $$\mathsf{TFNP}$$. It is also known that average-case hardness in most structured subclasses of $$\mathsf{TFNP}$$ does not imply any form of cryptographic hardness in a black-box way and, thus, one-way functions might be sufficient. Specifically, no negative result which would rule out basing average-case hardness in $$\mathsf{TFNP}$$ solely on OWFs is currently known. In this work, we further explore the interplay between $$\mathsf{TFNP}$$ and OWFs and give the first negative results. As our main result, we show that there cannot exist constructions of average-case (and, in fact, even worst-case) hard $$\mathsf{TFNP}$$ problem from OWFs with a certain type of simple black-box security reductions. 
The class of reductions we rule out is, however, rich enough to capture many of the currently known cryptographic hardness results for $$\mathsf{TFNP}$$. Our results are established using the framework of black-box separations and involve a novel application of the reconstruction paradigm. For the entire collection see [Zbl 1482.94009]. Memory optimization techniques for computing discrete logarithms in compressed SIKE https://zbmath.org/1485.94093 2022-06-24T15:10:38.853281Z "Hutchinson, Aaron" https://zbmath.org/authors/?q=ai:hutchinson.aaron "Karabina, Koray" https://zbmath.org/authors/?q=ai:karabina.koray "Pereira, Geovandro" https://zbmath.org/authors/?q=ai:pereira.geovandro-c-c-f Summary: The supersingular isogeny-based key encapsulation (SIKE) suite stands as an attractive post-quantum cryptosystem thanks to its relatively small public keys. Public key sizes in SIKE can further be compressed by computing pairings and solving discrete logarithms in certain subgroups of finite fields. This comes at the cost of precomputing and storing large discrete logarithm tables. In this paper, we propose several techniques to optimize memory requirements in computing discrete logarithms in SIKE, and manage to reduce table sizes by a factor of 4. We implement our techniques and verify our theoretical findings. For the entire collection see [Zbl 1482.94004]. Boolean polynomials, BDDs and CRHS equations -- connecting the dots with CryptaPath https://zbmath.org/1485.94094 2022-06-24T15:10:38.853281Z "Indrøy, John Petter" https://zbmath.org/authors/?q=ai:indroy.john-petter "Costes, Nicolas" https://zbmath.org/authors/?q=ai:costes.nicolas "Raddum, Håvard" https://zbmath.org/authors/?q=ai:raddum.havard Summary: When new symmetric-key ciphers and hash functions are proposed, they are expected to document resilience against a number of known attacks. Good, easy-to-use tools may help designers in this process and give improved cryptanalysis.
In this paper we introduce CryptaPath, a tool for algebraic cryptanalysis which utilizes Compressed Right-Hand Side (CRHS) equations to attack SPN ciphers and sponge constructions. Using it requires no prior knowledge of CRHS equations, only a reference implementation of a primitive. The connections between CRHS equations, binary decision diagrams and Boolean polynomials have not been described earlier in the literature. A comprehensive treatment of these relationships is given before we explain how CryptaPath works. We then describe the process of solving CRHS equation systems while introducing a new operation, dropping variables. For the entire collection see [Zbl 1482.94005]. Expected-time cryptography: generic techniques and applications to concrete soundness https://zbmath.org/1485.94095 2022-06-24T15:10:38.853281Z "Jaeger, Joseph" https://zbmath.org/authors/?q=ai:jaeger.joseph "Tessaro, Stefano" https://zbmath.org/authors/?q=ai:tessaro.stefano Summary: This paper studies concrete security with respect to expected-time adversaries. Our first contribution is a set of generic tools to obtain tight bounds on the advantage of an adversary with expected-time guarantees. We apply these tools to derive bounds in the random-oracle and generic-group models, which we show to be tight. As our second contribution, we use these results to derive concrete bounds on the soundness of public-coin proofs and arguments of knowledge. Under the lens of concrete security, we revisit a paradigm by \textit{J. Bootle} et al. [Lect. Notes Comput. Sci. 9666, 327--357 (2016; Zbl 1369.94520)] that proposes a general Forking Lemma for multi-round protocols which implements a rewinding strategy with expected-time guarantees. We give a tighter analysis, as well as a modular statement. We adapt this to obtain the first quantitative bounds on the soundness of Bulletproofs [\textit{B.
Bünz} et al., ``Bulletproofs: Short proofs for confidential transactions and more'', in: Proceedings of the 2018 IEEE symposium on security and privacy, San Francisco, CA, USA, 2018. Los Alamitos, CA: IEEE Computer Society. 315--334 (2018; \url{doi:10.1109/SP.2018.00020})], which we instantiate with our expected-time generic-group analysis to surface the inherent dependence between the concrete security and the statement to be proved. For the entire collection see [Zbl 1482.94009]. Improved (related-key) differential cryptanalysis on GIFT https://zbmath.org/1485.94096 2022-06-24T15:10:38.853281Z "Ji, Fulei" https://zbmath.org/authors/?q=ai:ji.fulei "Zhang, Wentao" https://zbmath.org/authors/?q=ai:zhang.wentao "Zhou, Chunning" https://zbmath.org/authors/?q=ai:zhou.chunning "Ding, Tianyou" https://zbmath.org/authors/?q=ai:ding.tianyou Summary: In this paper, we reevaluate the security of GIFT against differential cryptanalysis under both the single-key and related-key scenarios. Firstly, we apply Matsui's algorithm to search for related-key differential trails of GIFT. We add three constraints to limit the search space and search for the optimal related-key differential trails within it. We obtain related-key differential trails of GIFT-64/128 for up to 15/14 rounds, which are the best results on related-key differential trails of GIFT so far. Secondly, we propose an automatic algorithm to increase the probability of the related-key boomerang distinguisher of GIFT by searching the clustering of the related-key differential trails utilized in the boomerang distinguisher. We find a 20-round related-key boomerang distinguisher of GIFT-64 with probability $$2^{-58.557}$$. A 25-round related-key rectangle attack on GIFT-64 is constructed based on it. This is the longest attack on GIFT-64. We also find a 19-round related-key boomerang distinguisher of GIFT-128 with probability $$2^{-109.626}$$.
We propose a 23-round related-key rectangle attack on GIFT-128 utilizing the 19-round distinguisher, which is the longest related-key attack on GIFT-128. The 24-round related-key rectangle attack on GIFT-64 and 22-round related-key boomerang attack on GIFT-128 are also presented. Thirdly, we search the clustering of the single-key differential trails. We increase the probability of a 20-round single-key differential distinguisher of GIFT-128 from $$2^{-121.415}$$ to $$2^{-120.245}$$. The time complexity of the 26-round single-key differential attack on GIFT-128 is improved from $$2^{124.415}$$ to $$2^{123.245}$$. For the entire collection see [Zbl 1482.94005]. Efficient image encryption scheme based on 4-dimensional chaotic maps https://zbmath.org/1485.94097 2022-06-24T15:10:38.853281Z "Kanso, Ali" https://zbmath.org/authors/?q=ai:kanso.ali "Ghebleh, Mohammad" https://zbmath.org/authors/?q=ai:ghebleh.mohammad "Alazemi, Abdullah" https://zbmath.org/authors/?q=ai:alazemi.abdullah Summary: This paper proposes a new family of 4-dimensional chaotic cat maps. This family is then used in the design of a novel block-based image encryption scheme. This scheme is composed of two independent phases, a robust light shuffling phase and a masking phase which operate on image-blocks. It utilizes measures of central tendency to mix blocks of the image at hand to enhance security against a number of cryptanalytic attacks. The mixing is designed so that while encryption is highly sensitive to the secret key and the input image, decryption is robust against noise and cropping of the cipher-image. Empirical results show high performance of the suggested scheme and its robustness against well-known cryptanalytic attacks. Furthermore, comparisons with existing image encryption methods are presented which demonstrate the superiority of the proposed scheme. 
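An illustrative aside on the entry above (not the authors' construction): the paper builds its shuffling phase on 4-dimensional chaotic cat maps, a family whose 2-dimensional prototype is the classical Arnold cat map. The sketch below, a hypothetical minimal example in Python, shows how iterating such a map permutes the pixel positions of a square image.

```python
# Illustrative sketch only: the classical 2-D Arnold cat map, the simplest
# member of the cat-map family (the reviewed paper uses 4-D generalizations).
# The map (x, y) -> (x + y, x + 2y) mod n is a bijection on the n x n grid
# (the matrix [[1, 1], [1, 2]] has determinant 1), so iterating it shuffles
# pixel positions without losing any pixel.

def arnold_cat(x, y, n):
    """One application of the Arnold cat map on an n x n grid."""
    return (x + y) % n, (x + 2 * y) % n

def shuffle_image(pixels, rounds):
    """Apply the cat map `rounds` times to every pixel position."""
    n = len(pixels)
    for _ in range(rounds):
        out = [[0] * n for _ in range(n)]
        for x in range(n):
            for y in range(n):
                nx, ny = arnold_cat(x, y, n)
                out[nx][ny] = pixels[x][y]
        pixels = out
    return pixels
```

Because the map is a bijection, shuffling rearranges but never duplicates or drops pixel values; a real scheme such as the one reviewed above combines such a shuffle with a masking phase.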
On the security of time-lock puzzles and timed commitments https://zbmath.org/1485.94098 2022-06-24T15:10:38.853281Z "Katz, Jonathan" https://zbmath.org/authors/?q=ai:katz.jonathan-i|katz.jonathan-n "Loss, Julian" https://zbmath.org/authors/?q=ai:loss.julian "Xu, Jiayu" https://zbmath.org/authors/?q=ai:xu.jiayu Summary: Time-lock puzzles -- problems whose solution requires some amount of sequential effort -- have recently received increased interest (e.g., in the context of verifiable delay functions). Most constructions rely on the sequential-squaring conjecture that computing $$g^{2^T} \bmod N$$ for a uniform $$g$$ requires at least $$T$$ (sequential) steps. We study the security of time-lock primitives from two perspectives: \par i) We give the first hardness result about the sequential-squaring conjecture in a non-generic model of computation. Namely, in a quantitative version of the algebraic group model (AGM) that we call the strong AGM, we show that any speed up of sequential squaring is as hard as factoring $$N$$. \par ii) We then focus on timed commitments, one of the most important primitives that can be obtained from time-lock puzzles. We extend existing security definitions to settings that may arise when using timed commitments in higher-level protocols, and give the first construction of non-malleable timed commitments. As a building block of independent interest, we also define (and give constructions for) a related primitive called timed public-key encryption. For the entire collection see [Zbl 1482.94009]. 
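To make the sequential-squaring conjecture above concrete, here is a minimal Python sketch (illustrative only, with toy parameters): without the factorization of $$N$$, the only known way to compute $$g^{2^T} \bmod N$$ is $$T$$ sequential squarings, whereas knowledge of the factors (the trapdoor) allows a shortcut by reducing the exponent modulo $$\varphi(N)$$, valid when $$\gcd(g, N) = 1$$.

```python
def slow_squarings(g, T, N):
    """T sequential modular squarings: g -> g^(2^T) mod N.
    Without the factorization of N, no known shortcut exists."""
    for _ in range(T):
        g = g * g % N
    return g

def trapdoor_eval(g, T, p, q):
    """With the trapdoor (factors p, q of N), reduce the exponent 2^T
    modulo phi(N) = (p-1)(q-1) and finish with one fast exponentiation.
    Valid when gcd(g, N) = 1."""
    N = p * q
    e = pow(2, T, (p - 1) * (q - 1))
    return pow(g, e, N)
```

With real parameters $$N$$ is a ~2048-bit RSA modulus and $$T$$ is chosen so that the sequential computation takes the desired wall-clock delay; the tiny primes here are purely for illustration.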
On statistical security in two-party computation https://zbmath.org/1485.94099 2022-06-24T15:10:38.853281Z "Khurana, Dakshita" https://zbmath.org/authors/?q=ai:khurana.dakshita "Mughees, Muhammad Haris" https://zbmath.org/authors/?q=ai:mughees.muhammad-haris Summary: There has been a large body of work characterizing the round complexity of general-purpose maliciously secure two-party computation $$(\mathsf{2PC})$$ against probabilistic polynomial-time adversaries. This is particularly true for zero-knowledge, which is a special case of $$\mathsf{2PC}$$. In fact, for zero knowledge, optimal protocols with unconditional security against one of the two players have also been meticulously studied and constructed. On the other hand, general-purpose maliciously secure $$\mathsf{2PC}$$ with statistical or unconditional security against one of the two participants has remained largely unexplored so far. In this work, we initiate the study of such protocols, which we refer to as $$\mathsf{2PC}$$ with one-sided statistical security. We settle the round complexity of $$\mathsf{2PC}$$ with one-sided statistical security with respect to black-box simulation by obtaining the following tight results: \par i) In a setting where only one party obtains an output, we design $$\mathsf{2PC}$$ in 4 rounds with statistical security against receivers and computational security against senders. \par ii) In a setting where both parties obtain outputs, we design $$\mathsf{2PC}$$ in 5 rounds with computational security against the party that obtains output first and statistical security against the party that obtains output last. \textit{J. Katz} and \textit{R. Ostrovsky} [Lect. Notes Comput. Sci. 3152, 335--354 (2004; Zbl 1104.94027)] showed that $$\mathsf{2PC}$$ with black-box simulation requires at least 4 rounds when one party obtains an output and 5 rounds when both parties obtain outputs, even when only computational security is desired against both parties.
Thus in these settings, not only are our results tight, but they also show that statistical security is achievable at no extra cost to round complexity. This still leaves open the question of whether $$\mathsf{2PC}$$ can be achieved with black-box simulation in 4 rounds with statistical security against senders and computational security against receivers. Based on a lower bound on computational zero-knowledge proofs due to \textit{J. Katz} [J. Cryptology 25, No. 1, 41--56 (2012; Zbl 1276.94016)], we observe that the answer is negative unless the polynomial hierarchy collapses. For the entire collection see [Zbl 1482.94008]. Cryptanalysis of the BBCRS system on Reed-Muller binary code https://zbmath.org/1485.94100 2022-06-24T15:10:38.853281Z "Kosolapov, Yuriĭ Vladimirovich" https://zbmath.org/authors/?q=ai:kosolapov.yurii-vladimirovich "Lelyuk, Anastasiya Andreevna" https://zbmath.org/authors/?q=ai:lelyuk.anastasiya-andreevna Summary: The paper considers the BBCRS system, which is a modification of the McEliece cryptosystem proposed by \textit{M. Baldi} et al. [J. Cryptology 29, No. 1, 1--27 (2016; Zbl 1351.94024)]. In this modification, the public-key matrix $$G_{pub}$$ is the product of three matrices: a non-singular $$(k\times k)$$-matrix $$S$$, a generator matrix $$G$$ of a secret $$[n,k]_q$$-code $$C_{sec}$$, and a non-singular $$(n\times n)$$-matrix $$Q$$. The difference between the modified system and the original system is that the permutation matrix used in the McEliece system is replaced by a non-singular matrix $$Q$$. The matrix $$Q$$ is obtained as the sum of a permutation matrix $$P$$ and a matrix $$R$$ of small rank $$r(R)$$. Later, \textit{V. Gauthier} et al. [``Distinguisher-based attack on a variant of McEliece's cryptosystem based on Reed-Solomon codes'', Preprint, \url{arXiv:1204.6459}] constructed an attack that allows decrypting messages in the case when $$C_{sec}$$ is a generalized Reed-Solomon code (GRS code) and $$r(R)=1$$.
The key stages of the constructed attack are, firstly, finding the intersection of the linear spans $$\mathcal{L}(G_{pub})=C_{pub}$$ and $$\mathcal{L}(G P)=C$$ spanned by the rows of the matrices $$G_{pub}$$ and $$G P$$, respectively, and secondly, recovering the code $$C$$ from the subcode $$C_{pub}\cap C$$. In this paper we present an attack in the case when $$C_{sec}$$ is the Reed-Muller binary code of order $$r$$, length $$2^m$$ and $$r(R)=1$$. The stages of finding the codes $$C_{pub}\cap C$$ and $$C$$ in this paper are completely different from the corresponding steps in the attack by Gauthier et al. [loc. cit.], while the other steps adapt known cryptanalytic results applied in the case of GRS codes. Efficient lattice-based polynomial evaluation and batch ZK arguments https://zbmath.org/1485.94101 2022-06-24T15:10:38.853281Z "Kuchta, Veronika" https://zbmath.org/authors/?q=ai:kuchta.veronika "Sakzad, Amin" https://zbmath.org/authors/?q=ai:sakzad.amin "Steinfeld, Ron" https://zbmath.org/authors/?q=ai:steinfeld.ron "Liu, Joseph K." https://zbmath.org/authors/?q=ai:liu.joseph-k-k Summary: In this paper we provide an efficient construction of a lattice-based polynomial argument and a polynomial batch-protocol, where the latter contains the polynomial argument as a building block. Our contribution is motivated by the discrete log based construction [\textit{J. Bootle} et al., Lect. Notes Comput. Sci. 9666, 327--357 (2016; Zbl 1369.94520)], where in our case we employ different techniques to obtain a communication efficient lattice-based scheme. In the zero-knowledge polynomial batch-protocol, we prove the knowledge of an easy relation between two polynomials which also allows batching of several instances of the same relation. Our batch-protocol is applicable to an efficient lattice-based range proof construction which represents a useful application in cryptocurrencies.
In contrast to the existing range proof, our proof is more efficient for a large number of batched instances. For the entire collection see [Zbl 1482.94005]. Trapdoor DDH groups from pairings and isogenies https://zbmath.org/1485.94102 2022-06-24T15:10:38.853281Z "Kutas, Péter" https://zbmath.org/authors/?q=ai:kutas.peter "Petit, Christophe" https://zbmath.org/authors/?q=ai:petit.christophe.1 "Silva, Javier" https://zbmath.org/authors/?q=ai:silva.javier Summary: Trapdoor DDH groups are an appealing cryptographic primitive introduced by \textit{A. W. Dent} and \textit{S. D. Galbraith} [Lect. Notes Comput. Sci. 4076, 436--451 (2006; Zbl 1143.94344)], where DDH instances are hard to solve unless provided with additional information (i.e., a trapdoor). In this paper, we introduce a new trapdoor DDH group construction using pairings and isogenies of supersingular elliptic curves, and present two instantiations of it. The construction solves all shortcomings of previous constructions as identified by \textit{Y. Seurin} [ibid. 7778, 443--460 (2013; Zbl 1314.94093)]. We also present partial attacks on a previous construction due to Dent-Galbraith, and we provide a formal security definition of the related notion of ``trapdoor pairings''. For the entire collection see [Zbl 1482.94005]. Monomial evaluation of polynomial functions protected by threshold implementations -- with an illustration on AES -- extended version https://zbmath.org/1485.94103 2022-06-24T15:10:38.853281Z "Landry, Simon" https://zbmath.org/authors/?q=ai:landry.simon "Linge, Yanis" https://zbmath.org/authors/?q=ai:linge.yanis "Prouff, Emmanuel" https://zbmath.org/authors/?q=ai:prouff.emmanuel Summary: In the context of side-channel countermeasures, threshold implementations (TI) have been introduced in [\textit{S. Nikova} et al., J. Cryptology 24, No. 2, 292--321 (2011; Zbl 1239.94060)] to defeat attacks which exploit hardware effects called glitches.
On several aspects, TI may be seen as an extension of another classical side-channel countermeasure, called masking, which is essentially based on the sharing of any internal state of the processing into independent parts (also called shares). To achieve side-channel security, a TI scheme operates on shared data and comes with additional properties to get robustness to glitches. When specifying such a scheme to secure a cryptographic implementation, as e.g. the AES block cipher, the challenging part is to minimise both the number of steps (or cycles) and the consumption of randomness. In this paper, we combine the ``changing of the guards'' technique published by \textit{J. Daemen} [Lect. Notes Comput. Sci. 10529, 137--153 (2017; Zbl 1440.94043)] (which reduces the need for fresh randomness) with the work of \textit{L. Genelle} et al. [``Thwarting higher-order side channel analysis with additive and multiplicative maskings'', Lect. Notes Comput. Sci. 6917, 240--255 (2011; \url{doi:10.1007/978-3-642-23951-9_16})] (which combines additive and multiplicative masking) to propose a new TI which does not consume fresh randomness and which is efficient (in terms of cycles) for classical block ciphers. As an illustration, we develop our proposal for the AES, and more specifically its S-box, implemented via a finite-field exponentiation. In this particular context, we argue that our proposal is a valuable alternative to the state-of-the-art solutions. More generally, it has the advantage of being easily applicable to the evaluation of any polynomial function, which was usually not the case for previous solutions. Coupling of random systems https://zbmath.org/1485.94104 2022-06-24T15:10:38.853281Z "Lanzenberger, David" https://zbmath.org/authors/?q=ai:lanzenberger.david "Maurer, Ueli" https://zbmath.org/authors/?q=ai:maurer.ueli-m Summary: This paper makes three contributions. First, we present a simple theory of random systems.
The main idea is to think of a probabilistic system as an equivalence class of distributions over deterministic systems. Second, we demonstrate how in this new theory, the optimal information-theoretic distinguishing advantage between two systems can be characterized merely in terms of the statistical distance of probability distributions, providing a more elementary understanding of the distance of systems. In particular, two systems that are $$\epsilon$$-close in terms of the best distinguishing advantage can be understood as being equal with probability $$1-\epsilon$$, a property that holds statically, without even considering a distinguisher, let alone its interaction with the systems. Finally, we exploit this new characterization of the distinguishing advantage to prove that any threshold combiner is an amplifier for indistinguishability in the information-theoretic setting, generalizing and simplifying results from \textit{U. Maurer} et al. [Lect. Notes Comput. Sci. 4622, 130--149 (2007; Zbl 1215.94062)]. For the entire collection see [Zbl 1482.94009]. On the design and security of Lee metric McEliece cryptosystems https://zbmath.org/1485.94105 2022-06-24T15:10:38.853281Z "Lau, Terry Shue Chien" https://zbmath.org/authors/?q=ai:lau.terry-shue-chien "Tan, Chik How" https://zbmath.org/authors/?q=ai:tan.chik-how Summary: Recently, \textit{A.-L. Horlemann-Trautmann} and \textit{V. Weger} [Adv. Math. Commun. 15, No. 4, 677--699 (2021; Zbl 07449473)] proposed a general framework for a Quaternary McEliece cryptosystem over the ring $$\mathbb{Z}_4$$, and generalized the construction over the ring $$\mathbb{Z}_{p^m}$$ where $$p$$ is a prime integer. By considering the Lee metric, the public key size for the McEliece cryptosystems can be substantially smaller than their counterparts in the Hamming metric. Furthermore, the hardness of the McEliece cryptosystems over $$\mathbb{Z}_{p^m}$$ is based on the Lee Syndrome Decoding problem, which was shown to be NP-complete. 
This paper aims to analyze the design and security of the Lee metric McEliece cryptosystem over $$\mathbb{Z}_{p^m}$$. We derive some necessary conditions for the quaternary codes used in the Lee metric McEliece cryptosystem, and show that the theoretical quaternary codes proposed in [loc. cit.] do not exist. Furthermore, we propose a plaintext recovery attack on the Lee metric McEliece cryptosystem over $$\mathbb{Z}_{p^m}$$, when the minimum Lee distance of the underlying codes with certain structures is greater than twice the Lee weight of the error. Our plaintext recovery attack is applicable to the cryptosystems over $$\mathbb{Z}_{p^m}$$ when $$m>1$$. We apply our plaintext recovery attack on the Quaternary McEliece cryptosystem, and manage to recover partial plaintext of length $$k_1$$ within $$O \left( 2^{\min \{ k_1, 0.0473 n\}} \right)$$ operations, where $$n$$ is the length of ciphertext. Hence, the proposed theoretical parameters for the Quaternary McEliece over $$\mathbb{Z}_4$$ in [loc. cit.] do not achieve 128-bit security, as we can recover the partial plaintext within 0.591 s. This also implies that the use of quaternary codes in the McEliece cryptosystem may not reduce the public key size significantly. Furthermore, for some parameters of the Lee metric McEliece cryptosystems over $$\mathbb{Z}_4$$, our plaintext recovery attack can be more efficient than Lee-Brickell's and Stern's information set decoding algorithms over $$\mathbb{Z}_4$$.
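For readers unfamiliar with the Lee metric used in the entry above, a minimal illustrative sketch in Python: the Lee weight of a symbol $$a \in \mathbb{Z}_m$$ is $$\min(a, m-a)$$, extended coordinate-wise to vectors, and the Lee distance is the Lee weight of the difference.

```python
def lee_weight(v, m):
    """Lee weight of a vector over Z_m: each coordinate a contributes
    min(a mod m, m - (a mod m))."""
    return sum(min(a % m, m - a % m) for a in v)

def lee_distance(u, v, m):
    """Lee distance between two vectors over Z_m."""
    return lee_weight([a - b for a, b in zip(u, v)], m)
```

Note that over $$\mathbb{Z}_4$$ the symbol 2 has Lee weight 2, so a single nonzero coordinate can weigh more than it would in the Hamming metric; this is the basic reason the two metrics lead to different code designs and key-size trade-offs.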
Zero-knowledge proofs for committed symmetric Boolean functions https://zbmath.org/1485.94106 2022-06-24T15:10:38.853281Z "Ling, San" https://zbmath.org/authors/?q=ai:ling.san "Nguyen, Khoa" https://zbmath.org/authors/?q=ai:nguyen.khoa "Phan, Duong Hieu" https://zbmath.org/authors/?q=ai:phan.duong-hieu "Tang, Hanh" https://zbmath.org/authors/?q=ai:tang.hanh "Wang, Huaxiong" https://zbmath.org/authors/?q=ai:wang.huaxiong Summary: Zero-knowledge proofs (ZKP) are a fundamental notion in modern cryptography and an essential building block for countless privacy-preserving constructions. Recent years have witnessed a rapid development in the design of ZKP for general statements, in which, for a publicly given Boolean function $$f: \{0,1\}^n \rightarrow \{0,1\}$$, one's goal is to prove knowledge of a secret input $$\mathfrak{x} \in \{0,1\}^n$$ satisfying $$f(\mathfrak{x}) = b$$, for a given bit $$b$$. Nevertheless, in many interesting application scenarios, not only the input $$\mathfrak{x}$$ but also the underlying function $$f$$ should be kept private. The problem of designing ZKP for the setting where both $$\mathfrak{x}$$ and $$f$$ are hidden, however, has not received much attention. This work addresses the above-mentioned problem for the class of symmetric Boolean functions, namely, Boolean functions $$f$$ whose output value is determined solely by the Hamming weight of the $$n$$-bit input $$\mathfrak{x}$$. Although this class might sound restrictive, it has exponential cardinality $$2^{n+1}$$ and captures a number of well-known Boolean functions, such as threshold, sorting and counting functions. Specifically, with respect to a commitment scheme secure under the Learning-Parity-with-Noise (LPN) assumption, we show how to prove in zero-knowledge that $$f(\mathfrak{x}) = b$$, for a committed symmetric function $$f$$, a committed input $$\mathfrak{x}$$ and a bit $$b$$.
The security of our protocol relies on that of an auxiliary commitment scheme which can be instantiated under quantum-resistant assumptions (including LPN). The protocol also achieves reasonable communication cost: the variant with soundness error $$2^{-\lambda }$$ has proof size $$c\cdot \lambda \cdot n$$, where $$c$$ is a relatively small constant. The protocol can potentially find appealing privacy-preserving applications in the area of post-quantum cryptography, and particularly in code-based cryptography. For the entire collection see [Zbl 1482.94004]. Information-theoretic 2-round MPC without round collapsing: adaptive security, and more https://zbmath.org/1485.94107 2022-06-24T15:10:38.853281Z "Lin, Huijia" https://zbmath.org/authors/?q=ai:lin.huijia "Liu, Tianren" https://zbmath.org/authors/?q=ai:liu.tianren "Wee, Hoeteck" https://zbmath.org/authors/?q=ai:wee.hoeteck Summary: We present simpler and improved constructions of 2-round protocols for secure multi-party computation (MPC) in the semi-honest setting. Our main results are new information-theoretically secure protocols for arithmetic NC1 in two settings: \par (i) the plain model tolerating up to $$t < n/2$$ corruptions; and \par (ii) in the OLE-correlation model tolerating any number of corruptions. Our protocols achieve adaptive security and require only black-box access to the underlying field, whereas previous results only achieve static security and require non-black-box field access. Moreover, both results extend to polynomial-size circuits with computational and adaptive security, while relying on black-box access to a pseudorandom generator. In the OLE correlation model, the extended protocols for circuits tolerate up to $$n-1$$ corruptions. Along the way, we introduce a conceptually novel framework for 2-round MPC that does not rely on the round collapsing framework underlying all of the recent advances in 2-round MPC. For the entire collection see [Zbl 1482.94008]. 
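An illustrative aside on the entry ``Zero-knowledge proofs for committed symmetric Boolean functions'' above (not the authors' protocol): a symmetric Boolean function on $$n$$ bits is fully described by its value at each of the $$n+1$$ possible Hamming weights, which is exactly why the class has cardinality $$2^{n+1}$$. A hypothetical minimal sketch in Python:

```python
def make_symmetric(value_by_weight):
    """Build a symmetric Boolean function from its weight table: a list
    of n+1 bits giving the output at each possible Hamming weight.
    There are exactly 2^(n+1) such tables, hence 2^(n+1) such functions."""
    def f(x):  # x is a tuple of n bits
        return value_by_weight[sum(x)]
    return f

n = 4
# Two well-known members of the class on n = 4 bits:
threshold2 = make_symmetric([1 if w >= 2 else 0 for w in range(n + 1)])  # "at least 2 ones"
parity = make_symmetric([w % 2 for w in range(n + 1)])                   # XOR of all bits
```

By construction the output depends only on the Hamming weight, so it is invariant under any permutation of the input bits.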
Synchronous constructive cryptography https://zbmath.org/1485.94108 2022-06-24T15:10:38.853281Z "Liu-Zhang, Chen-Da" https://zbmath.org/authors/?q=ai:liu-zhang.chen-da "Maurer, Ueli" https://zbmath.org/authors/?q=ai:maurer.ueli-m Summary: This paper proposes a simple synchronous composable security framework as an instantiation of the Constructive Cryptography framework, aiming to capture minimally, without unnecessary artefacts, exactly what is needed to state synchronous security guarantees. The objects of study are specifications (i.e., sets) of systems, and traditional security properties like consistency and validity can naturally be understood as specifications, thus unifying composable and property-based definitions. The framework's simplicity is in contrast to current composable frameworks for synchronous computation which are built on top of an asynchronous framework (e.g. the UC framework), thus not only inheriting artefacts and complex features used to handle asynchronous communication, but adding additional overhead to capture synchronous communication. As a second, independent contribution we demonstrate how secure (synchronous) multi-party computation protocols can be understood as constructing a computer that allows a set of parties to perform an arbitrary, on-going computation. An interesting aspect is that the instructions of the computation need not be fixed before the protocol starts but can also be determined during an on-going computation, possibly depending on previous outputs. For the entire collection see [Zbl 1482.94008]. Problem of learning with errors and modern cryptosystems https://zbmath.org/1485.94109 2022-06-24T15:10:38.853281Z "Marc, Tilen" https://zbmath.org/authors/?q=ai:marc.tilen Summary: Modern cryptosystems are based on mathematical problems and their security is guaranteed only as long as there are no efficient algorithms solving these problems. 
We present a recently introduced algorithmic problem of learning with errors (LWE). The problem is crafted for use in cryptography and allows one to construct new cryptosystems with interesting and useful properties. Such cryptosystems are considered safe even against adversaries with access to quantum computers, which does not hold for most other systems. We explain how to construct two cryptosystems: a quantum-secure public-key encryption scheme, and a scheme that enables computation on encrypted data, known as homomorphic encryption. The construction of the latter was a long-standing open problem and was solved only recently with the help of the LWE problem. Boolean ring cryptographic equation solving https://zbmath.org/1485.94110 2022-06-24T15:10:38.853281Z "Murphy, Sean" https://zbmath.org/authors/?q=ai:murphy.sean "Paterson, Maura" https://zbmath.org/authors/?q=ai:paterson.maura-beth "Swart, Christine" https://zbmath.org/authors/?q=ai:swart.christine-s Summary: This paper considers multivariate polynomial equation systems over $$\operatorname{GF}(2)$$ that have a small number of solutions. It gives a new method \texttt{EGHAM2} for solving such systems of equations that uses the properties of the Boolean quotient ring to potentially reduce memory and time complexity relative to existing \texttt{XL}-type or Gröbner basis algorithms applied in this setting. It also establishes a direct connection between solving such a multivariate polynomial equation system over $$\operatorname{GF}(2)$$, an \texttt{MQ} problem, and an instance of the \texttt{LPN} problem. For the entire collection see [Zbl 1482.94005].
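As a toy illustration of the setting in the Boolean-ring entry above (not the \texttt{EGHAM2} method itself): in the Boolean quotient ring every variable satisfies $$x^2 = x$$, so monomials are square-free and solutions are 0/1 vectors; for small systems with few solutions one can simply enumerate them. The representation below (a polynomial as a set of monomials, a monomial as a frozenset of variable indices) is a hypothetical choice for this sketch.

```python
from itertools import product

# A polynomial over GF(2) in the Boolean quotient ring (x^2 = x) is a set
# of square-free monomials; a monomial is a frozenset of variable indices,
# with frozenset() denoting the constant monomial 1.

def eval_poly(poly, point):
    """Evaluate a GF(2) polynomial (set of monomials) at a 0/1 point."""
    return sum(all(point[i] for i in mono) for mono in poly) % 2

def solve(system, n):
    """Enumerate all 0/1 solutions of the system -- exponential in n, so
    feasible only for tiny n, but enough to show what 'a small number of
    solutions' means."""
    return [pt for pt in product((0, 1), repeat=n)
            if all(eval_poly(p, pt) == 0 for p in system)]
```

For example, the system $$x_0 x_1 + x_2 = 0$$, $$x_0 + x_1 + 1 = 0$$ forces $$x_0 \ne x_1$$ and hence $$x_2 = 0$$, leaving exactly two solutions; methods like \texttt{XL}, Gröbner bases, or the \texttt{EGHAM2} approach above aim to find such solutions without exhaustive search.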
Dispelling myths on superposition attacks: formal security model and attack analyses https://zbmath.org/1485.94111 2022-06-24T15:10:38.853281Z "Music, Luka" https://zbmath.org/authors/?q=ai:music.luka "Chevalier, Céline" https://zbmath.org/authors/?q=ai:chevalier.celine "Kashefi, Elham" https://zbmath.org/authors/?q=ai:kashefi.elham Summary: With the emergence of quantum communication, it is a folklore belief that the security of classical cryptographic protocols is automatically broken if the Adversary is allowed to perform superposition queries and the honest players are forced to perform actions coherently on quantum states. Another widely held intuition is that enforcing measurements on the exchanged messages is enough to protect protocols from these attacks. However, the reality is much more complex. Security models dealing with superposition attacks only consider unconditional security. Conversely, security models considering computational security assume that all supposedly classical messages are measured, which forbids by construction the analysis of superposition attacks. To fill in the gap between those models, \textit{D. Boneh} and \textit{M. Zhandry} started to study the quantum computational security of classical primitives in their seminal work [Lect. Notes Comput. Sci. 8043, 361--379 (2013; Zbl 1317.81074)], but only in the single-party setting. To the best of our knowledge, an equivalent model in the multiparty setting is still missing. In this work, we propose the first computational security model considering superposition attacks for multiparty protocols. We show that our new security model is satisfiable by proving the security of the well-known One-Time-Pad protocol and give an attack on a variant of the equally reputable Yao Protocol for Secure Two-Party Computation.
The post-mortem of this attack reveals the precise points of failure, yielding highly counter-intuitive results: Adding extra classical communication, which is harmless for classical security, can make the protocol subject to superposition attacks. We use this newly imparted knowledge to construct the first concrete protocol for Secure Two-Party Computation that is resistant to superposition attacks. Our results show that there is no straightforward answer to provide for either the vulnerabilities of classical protocols to superposition attacks or the adapted countermeasures. Zero-communication reductions https://zbmath.org/1485.94112 2022-06-24T15:10:38.853281Z "Narayanan, Varun" https://zbmath.org/authors/?q=ai:narayanan.varun "Prabhakaran, Manoj" https://zbmath.org/authors/?q=ai:prabhakaran.manoj-m "Prabhakaran, Vinod M." https://zbmath.org/authors/?q=ai:prabhakaran.vinod-m Summary: We introduce a new primitive in information-theoretic cryptography, namely zero-communication reductions (\textsc{zcr}), with different levels of security. We relate \textsc{zcr} to several other important primitives, and obtain new results on upper and lower bounds. In particular, we obtain new upper bounds for PSM, CDS and OT complexity of functions, which are exponential in the information complexity of the functions. These upper bounds complement the results of \textit{A. Beimel} et al. [Lect. Notes Comput. Sci. 8349, 317--342 (2014; Zbl 1326.94072)] which broke the circuit-complexity barrier for ``high complexity'' functions; our results break the barrier of input size for ``low complexity'' functions. We also show that lower bounds on secure \textsc{zcr} can be used to establish lower bounds for OT-complexity. We recover the known (linear) lower bounds on OT-complexity [\textit{A. Beimel} and \textit{T. Malkin}, ibid. 2951, 238--257 (2004; Zbl 1197.94178)] via this new route.
We also formulate the lower bound problem for secure \textsc{zcr} in purely linear-algebraic terms, by defining the invertible rank of a matrix. We present an Invertible Rank Conjecture, whose proof would establish super-linear lower bounds for OT-complexity (and, if accompanied by an explicit construction, would provide explicit functions with super-linear circuit lower bounds). For the entire collection see [Zbl 1482.94009]. On collisions related to an ideal class of order 3 in CSIDH https://zbmath.org/1485.94113 2022-06-24T15:10:38.853281Z "Onuki, Hiroshi" https://zbmath.org/authors/?q=ai:onuki.hiroshi "Takagi, Tsuyoshi" https://zbmath.org/authors/?q=ai:takagi.tsuyoshi The authors of this paper study an isogeny-based key exchange for post-quantum cryptography. It is based on the action of an ideal class group on isomorphism classes of supersingular elliptic curves. In the key exchange, the classes are represented by vectors with integer coefficients, and the number of ideal classes is related to the security level of the system. They study the correspondence between the integer vectors and the ideal classes and prove that the all-ones vector corresponds to an ideal class of order 3; therefore, distinct vectors can belong to the same ideal class. This means that distinct secret keys correspond to the same public key. In the final part of the paper, they show a new ideal representation that avoids these collisions. For the entire collection see [Zbl 1454.68007]. Reviewer: Edgar Martinez-Moro (Soria) Efficient generation of quadratic cyclotomic classes for shortest quadratic decompositions of polynomials https://zbmath.org/1485.94114 2022-06-24T15:10:38.853281Z "Otal, Kamil" https://zbmath.org/authors/?q=ai:otal.kamil "Tekin, Eda" https://zbmath.org/authors/?q=ai:tekin.eda Summary: \textit{S. Nikova} et al. [Cryptogr. Commun. 11, No.
3, 379--384 (2019; Zbl 1412.94196)] investigated the decomposition problem of power permutations over finite fields $$\mathbb{F}_{2^n}$$. In particular, they provided an algorithm to give a decomposition of a power permutation into quadratic power permutations. Their algorithm has a precomputation step that finds all cyclotomic classes of $$\mathbb{F}_{2^n}$$ and then uses the quadratic ones. In this paper, we provide an efficient and systematic method to generate the representatives of quadratic cyclotomic classes and hence reduce the complexity of the precomputation step drastically. We then apply our method to extend their results on shortest quadratic decompositions of $$x^{2^n-2}$$ from $$3 \leq n \leq 16$$ to $$3 \leq n \leq 24$$ and correct a typo (for $$n = 11$$). We also give two explicit formulas for the time complexity of the adaptive search to understand its efficiency with respect to the parameters. On the effect of projection on rank attacks in multivariate cryptography https://zbmath.org/1485.94115 2022-06-24T15:10:38.853281Z "Øygarden, Morten" https://zbmath.org/authors/?q=ai:oygarden.morten "Smith-Tone, Daniel" https://zbmath.org/authors/?q=ai:smith-tone.daniel "Verbel, Javier" https://zbmath.org/authors/?q=ai:verbel.javier-a Summary: The multivariate scheme HFEv- used to be considered a promising candidate for a post-quantum signature system. First suggested in the early 2000s, a version of the scheme made it to the third round of the ongoing NIST post-quantum standardization process. In late 2020, the system suffered from an efficient rank attack due to \textit{C. Tao} et al. [``Improved key recovery of the HFEv-signature scheme'', Preprint, \url{https://eprint.iacr.org/2020/1424.pdf}]. In this paper, we inspect how this recent rank attack is affected by the projection modification. This modification was introduced to secure the signature scheme PFLASH against its predecessor's attacks.
We prove upper bounds for the rank of projected HFEv- (pHFEv-) and PFLASH under the new attack, which are tight for the experiments we have performed. We conclude that projection could be a useful tool in protecting against this recent cryptanalysis. For the entire collection see [Zbl 1482.94004]. An algebraic framework for universal and updatable SNARKs https://zbmath.org/1485.94116 2022-06-24T15:10:38.853281Z "Ràfols, Carla" https://zbmath.org/authors/?q=ai:rafols.carla "Zapico, Arantxa" https://zbmath.org/authors/?q=ai:zapico.arantxa Summary: We introduce Checkable Subspace Sampling Arguments, a new information theoretic interactive proof system in which the prover shows that a vector has been sampled in a subspace according to the verifier's coins. We show that this primitive provides a unifying view that explains the technical core of most of the constructions of universal and updatable pairing-based (zk)SNARKs. This characterization is extended to a fully algebraic framework for designing such SNARKs in a modular way. We propose new constructions of CSS arguments that lead to SNARKs with different performance trade-offs. For the entire collection see [Zbl 1483.94004]. Algebraic distinguishers: from discrete logarithms to decisional Uber assumptions https://zbmath.org/1485.94117 2022-06-24T15:10:38.853281Z "Rotem, Lior" https://zbmath.org/authors/?q=ai:rotem.lior "Segev, Gil" https://zbmath.org/authors/?q=ai:segev.gil Summary: The algebraic group model, introduced by \textit{G. Fuchsbauer} et al. [Lect. Notes Comput. Sci. 10992, 33--62 (2018; Zbl 1430.94068)], is a substantial relaxation of the generic group model capturing algorithms that may exploit the representation of the underlying group. This idealized yet realistic model was shown useful for reasoning about cryptographic assumptions and security properties defined via computational problems. However, it does not generally capture assumptions and properties defined via decisional problems. 
As such problems play a key role in the foundations and applications of cryptography, this leaves a significant gap between the restrictive generic group model and the standard model. We put forward the notion of algebraic distinguishers, strengthening the algebraic group model by enabling it to capture decisional problems. Within our framework we then reveal new insights on the algebraic interplay between a wide variety of decisional assumptions. These include the decisional Diffie-Hellman assumption, the family of Linear assumptions in multilinear groups, and the family of Uber assumptions in bilinear groups. Our main technical results establish that, from an algebraic perspective, these decisional assumptions are in fact all polynomially equivalent to either the most basic discrete logarithm assumption or to its higher-order variant, the $$q$$-discrete logarithm assumption. On the one hand, these results increase the confidence in these strong decisional assumptions, while on the other hand, they make it possible to direct cryptanalytic efforts towards either extracting discrete logarithms or significantly deviating from standard algebraic techniques. For the entire collection see [Zbl 1482.94009]. Tighter security for Schnorr identification and signatures: a high-moment forking lemma for $${\varSigma }$$-protocols https://zbmath.org/1485.94118 2022-06-24T15:10:38.853281Z "Rotem, Lior" https://zbmath.org/authors/?q=ai:rotem.lior "Segev, Gil" https://zbmath.org/authors/?q=ai:segev.gil Summary: The Schnorr identification and signature schemes have been amongst the most influential cryptographic protocols of the past three decades. Unfortunately, although the best-known attacks on these two schemes are via discrete-logarithm computation, the known approaches for basing their security on the hardness of the discrete logarithm problem encounter the ``square-root barrier''.
In particular, in any group of order $$p$$ where Shoup's generic hardness result for the discrete logarithm problem is believed to hold (and is thus used for setting concrete security parameters), the best-known $$t$$-time attacks on the Schnorr identification and signature schemes have success probability $$t^2/p$$, whereas existing proofs of security only rule out attacks with success probabilities $$(t^2/p)^{1/2}$$ and $$(q_{\mathsf{H}} \cdot t^2/p)^{1/2}$$, respectively, where $$q_{\mathsf{H}}$$ denotes the number of random-oracle queries issued by the attacker. We establish tighter security guarantees for identification and signature schemes which result from $$\varSigma$$-protocols with special soundness based on the hardness of their underlying relation, and in particular for Schnorr's schemes based on the hardness of the discrete logarithm problem. We circumvent the square-root barrier by introducing a high-moment generalization of the classic forking lemma, relying on the assumption that the underlying relation is ``$$d$$-moment hard'': The success probability of any algorithm in the task of producing a witness for a random instance is dominated by the $$d$$-th moment of the algorithm's running time. In the concrete context of the discrete logarithm problem, already \textit{V. Shoup}'s original proof [``Lower bounds for discrete logarithms and related problems'', Lect. Notes Comput. Sci. 1233, 256--266 (1997; \url{doi:10.1007/3-540-69053-0_18})] shows that the discrete logarithm problem is 2-moment hard in the generic-group model, and thus our assumption can be viewed as a highly-plausible strengthening of the discrete logarithm assumption in any group where no better-than-generic algorithms are currently known.
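For context, the Schnorr identification protocol at the center of this analysis is compact enough to sketch in full; the following toy transcript uses illustrative parameters that are far too small to be secure:

```python
import secrets

# Toy Schnorr identification transcript (illustrative parameters, NOT secure).
# g = 2 generates the subgroup of prime order q = 11 in Z_p^* with p = 23.
p, q, g = 23, 11, 2

x = secrets.randbelow(q - 1) + 1      # prover's secret key
y = pow(g, x, p)                      # public key y = g^x mod p

r = secrets.randbelow(q)              # prover's ephemeral randomness
R = pow(g, r, p)                      # commitment
c = secrets.randbelow(q)              # verifier's random challenge
z = (r + c * x) % q                   # prover's response

# Verification equation: g^z == R * y^c (mod p)
assert pow(g, z, p) == (R * pow(y, c, p)) % p
```

A forking-lemma extractor rewinds the prover to obtain two accepting transcripts $$(R, c, z)$$ and $$(R, c', z')$$ with the same commitment and recovers $$x = (z - z')(c - c')^{-1} \bmod q$$; the high-moment generalization in the abstract sharpens the probability analysis of exactly this rewinding step.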
Applying our high-moment forking lemma in this context shows that, assuming the 2-moment hardness of the discrete logarithm problem, any $$t$$-time attacker breaks the security of the Schnorr identification and signature schemes with probabilities at most $$(t^2/p)^{2/3}$$ and $$(q_\mathsf{H}\cdot t^2/p)^{2/3}$$, respectively. For the entire collection see [Zbl 1483.94004]. Interpolation cryptanalysis of unbalanced Feistel networks with low degree round functions https://zbmath.org/1485.94119 2022-06-24T15:10:38.853281Z "Roy, Arnab" https://zbmath.org/authors/?q=ai:roy.arnab "Andreeva, Elena" https://zbmath.org/authors/?q=ai:andreeva.elena-anatolevna "Sauer, Jan Ferdinand" https://zbmath.org/authors/?q=ai:sauer.jan-ferdinand Summary: In recent years a new type of block ciphers and hash functions over a (large) field, such as MiMC and GMiMC, has been designed. Their security, particularly over a prime field, is mainly determined by algebraic cryptanalysis techniques, such as Gröbner basis and interpolation attacks. \textit{C. Li} and \textit{B. Preneel} [Lect. Notes Comput. Sci. 11959, 171--193 (2020; Zbl 1453.94098)] presented a low-memory interpolation attack against the MiMC and Feistel-MiMC designs. In this work we answer the open question posed in their work and show that low memory interpolation attacks can be extended to unbalanced Feistel networks (UFN) with low degree functions, and in particular to the GMiMC design. Our attack applies to UFNs with expanding and contracting round functions keyed either via identical (univariate) or distinct round keys (multivariate). Since interpolation attacks do not necessarily yield the best possible attacks over a binary extension field, we focus our analysis on prime fields $$\mathbb{F}_{p}$$. Our next contribution is to develop an improved technique for a more efficient key recovery against UFNs with expanding round function.
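The core principle of interpolation cryptanalysis, namely that a primitive acting as a low-degree polynomial over $$\mathbb{F}_p$$ is determined by a handful of input/output pairs, can be illustrated with a textbook Lagrange sketch (this is the generic idea only, not the low-memory technique of the papers above):

```python
P = 101  # a small prime field for illustration

def eval_at(points, x):
    """Lagrange-evaluate at x the unique polynomial of degree < len(points)
    passing through the given (xi, yi) pairs over F_P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # divide via Fermat inverse: den^(P-2) = den^(-1) mod P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

# A "cipher" that is secretly the degree-3 polynomial 7x^3 + 2x + 5 over F_P:
cipher = lambda x: (7 * x ** 3 + 2 * x + 5) % P

# Four queries suffice to predict every other output of a degree-3 map.
queries = [(x, cipher(x)) for x in range(4)]
assert all(eval_at(queries, x) == cipher(x) for x in range(P))
```

A cipher whose algebraic degree stays low after several rounds is therefore fully predicted (and distinguished from random) from very few known plaintext/ciphertext pairs.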
We show that the final key recovery step can be reduced not only to the gcd computation but also to the root finding problem. Despite its higher theoretical complexity, we show that our approach has a particularly interesting application to Sponge hash functions based on UFNs, such as GMiMCHash. We illustrate for the first time how our root finding technique can be used to mount collision, second preimage and preimage attacks on (reduced round) members of the GMiMCHash family. In addition, we support our theoretical analysis with small-scale experimental results. For the entire collection see [Zbl 1482.94005]. Accumulators in (and beyond) generic groups: non-trivial batch verification requires interaction https://zbmath.org/1485.94120 2022-06-24T15:10:38.853281Z "Schul-Ganz, Gili" https://zbmath.org/authors/?q=ai:schul-ganz.gili "Segev, Gil" https://zbmath.org/authors/?q=ai:segev.gil Summary: We prove a tight lower bound on the number of group operations required for batch verification by any generic-group accumulator that stores a less-than-trivial amount of information. Specifically, we show that $$\varOmega (t \cdot (\lambda / \log \lambda ))$$ group operations are required for the batch verification of any subset of $$t \ge 1$$ elements, where $$\lambda \in \mathbb{N}$$ is the security parameter, thus ruling out non-trivial batch verification in the standard non-interactive manner. Our lower bound applies already to the most basic form of accumulators (i.e., static accumulators that support membership proofs), and holds both for known-order (and even multilinear) groups and for unknown-order groups, where it matches the asymptotic performance of the known bilinear and RSA accumulators, respectively. In addition, it complements the techniques underlying the generic-group accumulators of \textit{D. Boneh} et al. [Lect. Notes Comput. Sci. 11692, 561--586 (2019; Zbl 1456.94051)] and \textit{S.
Thakur} [``Batching non-membership proofs with bilinear accumulators'', Preprint, \url{https://eprint.iacr.org/2019/1147.pdf}] by justifying their application of the Fiat-Shamir heuristic for transforming their interactive batch-verification protocols into non-interactive procedures. Moreover, motivated by a fundamental challenge introduced by \textit{D. Aggarwal} and \textit{U. Maurer} [Lect. Notes Comput. Sci. 5479, 36--53 (2009; Zbl 1239.94031)], we propose an extension of the generic-group model that enables us to capture a bounded amount of arbitrary non-generic information (e.g., least-significant bits or Jacobi symbols that are hard to compute generically but are easy to compute non-generically). We prove our lower bound within this extended model, which may be of independent interest for strengthening the implications of impossibility results in idealized models. For the entire collection see [Zbl 1482.94008]. Multi-theorem designated-verifier NIZK for QMA https://zbmath.org/1485.94121 2022-06-24T15:10:38.853281Z "Shmueli, Omri" https://zbmath.org/authors/?q=ai:shmueli.omri Summary: We present a designated-verifier non-interactive zero-knowledge argument system for QMA with multi-theorem security under the Learning with Errors Assumption. All previous such protocols for QMA are only single-theorem secure. We also relax the setup assumption required in previous works. We prove security in the malicious designated-verifier (MDV-NIZK) model [\textit{W. Quach} et al., Lect. Notes Comput. Sci. 11477, 593--621 (2019; Zbl 07164010)], where the setup consists of a mutually trusted random string and an untrusted verifier public key. Our main technical contribution is a general compiler that, given a NIZK for NP and a quantum sigma protocol for QMA, generates an MDV-NIZK protocol for QMA. For the entire collection see [Zbl 1483.94004].
Algorithmic acceleration of B/FV-like somewhat homomorphic encryption for compute-enabled RAM https://zbmath.org/1485.94122 2022-06-24T15:10:38.853281Z "Takeshita, Jonathan" https://zbmath.org/authors/?q=ai:takeshita.jonathan "Reis, Dayane" https://zbmath.org/authors/?q=ai:reis.dayane "Gong, Ting" https://zbmath.org/authors/?q=ai:gong.ting "Niemier, Michael" https://zbmath.org/authors/?q=ai:niemier.michael-t "Hu, X. Sharon" https://zbmath.org/authors/?q=ai:hu.xiaobo-sharon "Jung, Taeho" https://zbmath.org/authors/?q=ai:jung.taeho Summary: Somewhat Homomorphic Encryption (SHE) allows arbitrary computation with finite multiplicative depths to be performed on encrypted data, but its overhead is high due to memory transfer incurred by large ciphertexts. Recent research has recognized the shortcomings of general-purpose computing for high-performance SHE, and has begun to pioneer the use of hardware-based SHE acceleration with hardware including FPGAs, GPUs, and Compute-Enabled RAM (CE-RAM). CE-RAM is well-suited for SHE, as it is not limited by the separation between memory and processing that bottlenecks other hardware. Further, CE-RAM does not move data between different processing elements. Recent research has shown the high effectiveness of CE-RAM for SHE as compared to highly-optimized CPU and FPGA implementations. However, algorithmic optimization for the implementation on CE-RAM is underexplored. In this work, we examine the effect of existing algorithmic optimizations upon a CE-RAM implementation of the B/FV scheme [\textit{J. Fan} and \textit{F. Vercauteren}, ``Somewhat practical fully homomorphic encryption'', Preprint, \url{https://eprint.iacr.org/2012/144.pdf}], and further introduce novel optimization techniques for the Full RNS Variant of B/FV [\textit{J.-C. Bajard} et al., Lect. Notes Comput. Sci. 10532, 423--442 (2017; Zbl 1418.94029)].
Our experiments show speedups of up to 784x for homomorphic multiplication, 143x for decryption, and 330x for encryption against a CPU implementation. We also compare our approach to similar work in CE-RAM, FPGA, and GPU acceleration, and note general improvement over existing work. In particular, for homomorphic multiplication we see speedups of 506.5x against CE-RAM [\textit{D. Reis}, ``Computing in memory for performance and energy efficient homomorphic encryption'', IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 28, No. 11, 2300--2313 (2020; \url{doi:10.1109/TVLSI.2020.3017595})], 66.85x against FPGA [\textit{S. S. Roy} et al., ``FPGA-based high-performance parallel architecture for homomorphic computing on encrypted data'', in: Proceedings of the 2019 IEEE international symposium on high performance computer architecture (HPCA), Washington, DC, USA, February 16--20, 2019. 387--398 (2019; \url{doi:10.1109/HPCA.2019.00052})], and 30.8x against GPU [\textit{W. Q. A. Al Badawi} et al., ``Implementation and performance evaluation of RNS variants of the BFV homomorphic encryption scheme'', IEEE Trans. Emerging Topics Comput. 9, No. 2, 941--956 (2021; \url{doi:10.1109/TETC.2019.2902799})] as compared to existing work in hardware acceleration of B/FV. For the entire collection see [Zbl 1482.94005]. Quantum Demiric-Selcuk meet-in-the-middle attacks on reduced-round AES https://zbmath.org/1485.94123 2022-06-24T15:10:38.853281Z "Wang, Ping" https://zbmath.org/authors/?q=ai:wang.ping|wang.ping.2|wang.ping.1 "Chen, Xiaomei" https://zbmath.org/authors/?q=ai:chen.xiaomei "Jiang, Guohao" https://zbmath.org/authors/?q=ai:jiang.guohao Summary: In this paper, we employ quantum algorithms to attack the reduced-round AES based on the meet-in-the-middle attack proposed by Demirci and Selçuk (DS-MITM).
Technically, we adopt Grover's algorithm and the quantum claw-finding algorithm respectively to implement the attack based on the $$Q1$$ model (quantum computers perform offline computations, and classical methods perform online queries). Then we propose a data/memory/time trade-off to optimize the new quantum algorithm, which can further reduce the time complexity of the attack. Compared to the DS-MITM attack, the new algorithm can significantly reduce the complexities, and it can also be applied to AES-128. Leakage-resilient revocable identity-based signature with cloud revocation authority https://zbmath.org/1485.94124 2022-06-24T15:10:38.853281Z "Wu, Jui-Di" https://zbmath.org/authors/?q=ai:wu.jui-di "Tseng, Yuh-Min" https://zbmath.org/authors/?q=ai:tseng.yuh-min "Huang, Sen-Shan" https://zbmath.org/authors/?q=ai:huang.sen-shan "Tsai, Tung-Tso" https://zbmath.org/authors/?q=ai:tsai.tung-tso Summary: Very recently, side-channel attacks have threatened all traditional cryptographic schemes. Typically, in traditional cryptography, private/secret keys are assumed to be completely hidden from adversaries. However, by side-channel attacks, an adversary may extract fractional content of these private/secret keys. To resist side-channel attacks, leakage-resilient cryptography is a countermeasure. The identity-based public-key system (ID-PKS) is an attractive public-key setting. ID-PKS settings not only discard the certificate requirement, but also remove the construction of the public-key infrastructure. For solving the user revocation problem in ID-PKS settings, the revocable ID-PKS (RID-PKS) setting has attracted significant attention. Numerous cryptographic schemes based on RID-PKS settings have been proposed. However, under RID-PKS settings, no leakage-resilient signature or encryption scheme has been proposed. In this article, we present the first leakage-resilient revocable ID-based signature (LR-RIBS) scheme with cloud revocation authority (CRA) under the continual leakage model.
Also, a new adversary model of LR-RIBS schemes with CRA is defined. Under this new adversary model, security analysis is made to demonstrate that our LR-RIBS scheme with CRA is provably secure in the generic bilinear group (GBG) model. Finally, performance analysis is made to demonstrate that our scheme is suitable for mobile devices. DualRing: generic construction of ring signatures with efficient instantiations https://zbmath.org/1485.94125 2022-06-24T15:10:38.853281Z "Yuen, Tsz Hon" https://zbmath.org/authors/?q=ai:yuen.tsz-hon "Esgin, Muhammed F." https://zbmath.org/authors/?q=ai:esgin.muhammed-f "Liu, Joseph K." https://zbmath.org/authors/?q=ai:liu.joseph-k-k "Au, Man Ho" https://zbmath.org/authors/?q=ai:au.man-ho "Ding, Zhimin" https://zbmath.org/authors/?q=ai:ding.zhimin Summary: We introduce a novel generic ring signature construction, called DualRing, which can be built from several canonical identification schemes (such as Schnorr identification). DualRing differs from the classical ring signatures by its formation of two rings: a ring of commitments and a ring of challenges. It has a structural difference from the common ring signature approaches based on accumulators or zero-knowledge proofs of the signer index. Comparatively, DualRing has a number of unique advantages. Considering the DL-based setting by using the Schnorr identification scheme, our DualRing structure allows the signature size to be compressed into logarithmic size via an argument of knowledge system such as Bulletproofs. We further improve on the Bulletproofs argument system to eliminate about half of the computation while maintaining the same proof size. We call this Sum Argument and it can be of independent interest. This DL-based construction, named DualRing-EC, using Schnorr identification with Sum Argument has the shortest ring signature size in the literature without using trusted setup.
Considering the lattice-based setting, we instantiate DualRing by a canonical identification based on M-LWE and M-SIS. In practice, we achieve the shortest lattice-based ring signature, named DualRing-LB, when the ring size is between 4 and 2000. DualRing-LB is also $$5 \times$$ faster in signing and verification than the fastest lattice-based scheme by \textit{M. F. Esgin} et al. [Lect. Notes Comput. Sci. 11692, 115--146 (2019; Zbl 1456.94075)]. For the entire collection see [Zbl 1483.94004]. The \textsf{CiliPadi} family of lightweight authenticated encryption, v1.2 https://zbmath.org/1485.94126 2022-06-24T15:10:38.853281Z "Z'aba, M. R." https://zbmath.org/authors/?q=ai:zaba.muhammad-reza "Jamil, N." https://zbmath.org/authors/?q=ai:jamil.noreen|jamil.nur-shifa-farah-ain "Rohmad, M. S." https://zbmath.org/authors/?q=ai:rohmad.m-s "Rani, H. A." https://zbmath.org/authors/?q=ai:rani.h-a "Shamsuddin, S." https://zbmath.org/authors/?q=ai:shamsuddin.siti-mariyam-hj Summary: This article describes the specification and analysis of the \textsf{CiliPadi} family of lightweight authenticated encryption v1.2. An earlier version, dubbed v1.0, was accepted as one of the Round 1 candidates in the US NIST lightweight cryptography project. \textsf{CiliPadi} is designed based on the Sponge construction, which is also used in the SHA-3 hash function. \textsf{CiliPadi} supports 128- and 256-bit keys and is offered in four variants or flavours. The flavours differ in the tag length, the message-block length, and the number of rounds of the internal permutation.
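Since \textsf{CiliPadi} follows the Sponge construction, a minimal absorb/squeeze skeleton may help fix ideas; the rate, capacity, padding and permutation below are toy placeholders chosen for illustration, not \textsf{CiliPadi}'s:

```python
RATE, CAPACITY = 8, 8               # toy sizes in bytes; state = rate + capacity
STATE = RATE + CAPACITY

def toy_permutation(state):
    # Stand-in for the internal permutation: an invertible multiply/xor-shift
    # mix on the state viewed as a 128-bit integer. NOT cryptographic.
    x = int.from_bytes(state, "big")
    mask = (1 << (8 * STATE)) - 1
    for _ in range(8):
        x = (x * 0x9E3779B97F4A7C15) & mask   # multiply by an odd constant
        x ^= x >> 29                          # propagate high bits downward
    return x.to_bytes(STATE, "big")

def sponge_hash(msg, outlen=16):
    msg += b"\x80" + b"\x00" * ((-len(msg) - 1) % RATE)   # simple 10* padding
    state = bytes(STATE)
    for i in range(0, len(msg), RATE):                    # absorbing phase
        block = msg[i:i + RATE] + bytes(CAPACITY)
        state = toy_permutation(bytes(a ^ b for a, b in zip(state, block)))
    out = b""
    while len(out) < outlen:                              # squeezing phase
        out += state[:RATE]
        state = toy_permutation(state)
    return out[:outlen]
```

The flavours described above correspond to different choices of exactly these knobs: the tag and message-block lengths and the round count of the permutation.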
Nonce-misuse security of the SAEF authenticated encryption mode https://zbmath.org/1485.94127 2022-06-24T15:10:38.853281Z "Andreeva, Elena" https://zbmath.org/authors/?q=ai:andreeva.elena-anatolevna "Bhati, Amit Singh" https://zbmath.org/authors/?q=ai:bhati.amit-singh "Vizár, Damian" https://zbmath.org/authors/?q=ai:vizar.damian Summary: ForkAE is a NIST lightweight cryptography candidate that uses the forkcipher primitive in two modes of operation -- SAEF and PAEF -- optimized for authenticated encryption of the shortest messages. SAEF is a sequential and online AEAD that minimizes the memory footprint compared to its alternative parallel mode PAEF, catering to the most constrained devices. SAEF was proven AE secure against nonce-respecting adversaries. Due to their more acute and direct exposure to device misuse and mishandling, in most use cases of lightweight cryptography, nonce reuse presents a very realistic attack vector. Furthermore, many lightweight applications mandate security for their online AEAD schemes against block-wise adversaries. Surprisingly, only a few NIST lightweight AEAD candidates come with provable guarantees against these security threats. In this work we investigate the provable security guarantees of SAEF when nonces are repeated under a refined version of the notion of online authenticated encryption (OAE) given by \textit{E. Fleischmann} et al. [Lect. Notes Comput. Sci. 7549, 196--215 (2012; Zbl 1312.94113)]. Using the coefficient H technique we show that, with no modifications, SAEF is OAE secure up to the birthday security bound, i.e., up to $$2^{n/2}$$ processed blocks of data, where $$n$$ is the block size of the forkcipher. The implication of our work is that SAEF is safe to use in a block-wise fashion and that, if nonces get repeated, ciphertext integrity is unaffected and confidentiality only degrades by a limited extent, up to repetitions of common message prefixes. For the entire collection see [Zbl 1482.94005].
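The $$2^{n/2}$$ birthday bound in the SAEF result is the generic collision bound; a quick numeric sketch of the standard approximation (generic, not specific to SAEF):

```python
def birthday_collision_prob(k, n):
    """Approximate probability that k uniformly random n-bit values
    contain a collision: k(k-1)/2^(n+1) (accurate while the result is small)."""
    return k * (k - 1) / 2 ** (n + 1)

# At k = 2^(n/2) processed blocks the bound reaches a constant (~1/2),
# which is why provable security stops at the birthday bound.
assert abs(birthday_collision_prob(2 ** 16, 32) - 0.5) < 0.001
assert birthday_collision_prob(2 ** 8, 32) < 1e-4
```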
LESS-FM: fine-tuning signatures from the code equivalence problem https://zbmath.org/1485.94128 2022-06-24T15:10:38.853281Z "Barenghi, Alessandro" https://zbmath.org/authors/?q=ai:barenghi.alessandro "Biasse, Jean-François" https://zbmath.org/authors/?q=ai:biasse.jean-francois "Persichetti, Edoardo" https://zbmath.org/authors/?q=ai:persichetti.edoardo "Santini, Paolo" https://zbmath.org/authors/?q=ai:santini.paolo-maria Summary: Code-based cryptographic schemes are highly regarded among the quantum-safe alternatives to current standards. Yet, designing code-based signatures using traditional methods has always been a challenging task, and current proposals are still far from the target set by other post-quantum primitives (e.g. lattice-based). In this paper, we revisit a recent work using an innovative approach for signing, based on the hardness of the code equivalence problem. We introduce some optimizations and provide a security analysis for all variants considered. We then show that the new parameters produce instances of practical interest. For the entire collection see [Zbl 1482.94004]. On removing rejection conditions in practical lattice-based signatures https://zbmath.org/1485.94129 2022-06-24T15:10:38.853281Z "Behnia, Rouzbeh" https://zbmath.org/authors/?q=ai:behnia.rouzbeh "Chen, Yilei" https://zbmath.org/authors/?q=ai:chen.yilei "Masny, Daniel" https://zbmath.org/authors/?q=ai:masny.daniel Summary: Digital signatures following the methodology of ``Fiat-Shamir with Aborts'', proposed by \textit{V. Lyubashevsky} [Lect. Notes Comput. Sci. 5912, 598--616 (2009; Zbl 1267.94125)], are capable of achieving the smallest public-key and signature sizes among all the existing lattice signature schemes based on the hardness of the Ring-SIS and Ring-LWE problems. Since its introduction, several variants and optimizations have been proposed, and two of them (i.e., Dilithium and qTESLA) entered the second round of the NIST post-quantum cryptography standardization.
This method of designing signatures relies on rejection sampling during the signing process. Rejection sampling is crucial for ensuring both the correctness and security of these signature schemes. In this paper, we investigate the possibility of removing the two rejection conditions used both in Dilithium and qTESLA. First, we show that removing one of the rejection conditions is possible, and provide a variant of Lyubashevsky's signature with parameters comparable to those of Dilithium and qTESLA. Second, we give evidence of the difficulty of removing the other rejection condition, by showing that two very general approaches do not yield a signature scheme with correctness or security. For the entire collection see [Zbl 1482.94004]. The share size of secret-sharing schemes for almost all access structures and graphs https://zbmath.org/1485.94130 2022-06-24T15:10:38.853281Z "Beimel, Amos" https://zbmath.org/authors/?q=ai:beimel.amos "Farràs, Oriol" https://zbmath.org/authors/?q=ai:farras.oriol Summary: The share size of general secret-sharing schemes is poorly understood. The gap between the best known upper bound on the total share size per party of $$2^{0.64n}$$ [\textit{B. Applebaum} et al., in: Proceedings of the 52nd annual ACM SIGACT symposium on theory of computing, STOC '20, Chicago, IL, USA, June 22--26, 2020. New York, NY: Association for Computing Machinery (ACM). 280--293 (2020; Zbl 07298248)] and the best known lower bound of $$\varOmega (n/\log n)$$ [\textit{L. Csirmaz}, J. Cryptology 10, No. 4, 223--231 (1997; Zbl 0897.94012)] is huge (where $$n$$ is the number of parties in the scheme). To gain some understanding of this problem, we study the share size of secret-sharing schemes of almost all access structures, i.e., of almost all collections of authorized sets. This is motivated by the fact that, in complexity theory, almost all objects are often the hardest (e.g., most Boolean functions require exponential size circuits).
All previous constructions of secret-sharing schemes were for the worst access structures (i.e., all access structures) or for specific families of access structures. We prove upper bounds on the share size for almost all access structures. We combine results on almost all monotone Boolean functions [\textit{A. D. Korshunov}, Probl. Kibern. 38, 5--108 (1981; Zbl 0521.94018)] and a construction of \textit{T. Liu} et al. [Lect. Notes Comput. Sci. 10820, 567--596 (2018; Zbl 1423.94131)] and conclude that almost all access structures have a secret-sharing scheme with share size $$2^{\tilde{O}(\sqrt{n})}$$. We also study graph secret-sharing schemes. In these schemes, the parties are vertices of a graph and a set can reconstruct the secret if and only if it contains an edge. Again, for this family there is a huge gap between the upper bounds -- $$O(n/\log n)$$ [\textit{P. Erdős} and \textit{L. Pyber}, Discrete Math. 170, No. 1--3, 249--251 (1997; Zbl 0876.05080)] -- and the lower bounds -- $$\varOmega (\log n)$$ [\textit{M. van Dijk}, Des. Codes Cryptography 6, No. 2, 143--169 (1995; Zbl 0829.94006)]. We show that for almost all graphs, the share size of each party is $$n^{o(1)}$$. This result is achieved by using robust 2-server conditional disclosure of secrets protocols, a new primitive introduced and constructed in [Applebaum et al., loc. cit.], and the fact that the size of the maximal independent set in a random graph is small. Finally, using robust conditional disclosure of secrets protocols, we improve the total share size for all very dense graphs. For the entire collection see [Zbl 1482.94009]. 
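For the graph setting, the trivial baseline that the bounds above improve on is an independent additive sharing per edge, which gives each party a share of size proportional to its degree. A sketch of this baseline (illustrative only, not a construction from the paper):

```python
import secrets

def graph_share(secret, nbits, edges):
    """Trivial graph secret-sharing: a fresh additive 2-out-of-2 sharing of the
    secret per edge. A set of parties can reconstruct iff it contains an edge.
    Per-party share size = degree * nbits, far above the n^{o(1)} of the paper."""
    shares = {u: {} for e in edges for u in e}
    for (u, v) in edges:
        r = secrets.randbits(nbits)      # one-time pad for this edge
        shares[u][(u, v)] = r
        shares[v][(u, v)] = secret ^ r   # the two halves XOR to the secret
    return shares

def reconstruct(shares, u, v):
    """Two adjacent parties XOR their shares for their common edge."""
    return shares[u][(u, v)] ^ shares[v][(u, v)]
```

Each individual share is a uniformly random string, so a non-adjacent set learns nothing; the schemes discussed above achieve the same access structure with dramatically shorter shares.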
Not enough less: an improved algorithm for solving code equivalence problems over $$\mathbb{F}_q$$ https://zbmath.org/1485.94131 2022-06-24T15:10:38.853281Z "Beullens, Ward" https://zbmath.org/authors/?q=ai:beullens.ward Summary: Recently, a new code-based signature scheme, called LESS, was proposed with three concrete instantiations, each aiming to provide 128 bits of classical security [\textit{J.-F. Biasse}, ``LESS is more: code-based signatures without syndromes'', Preprint, \url{https://eprint.iacr.org/2020/594.pdf}]. Two instantiations (LESS-I and LESS-II) are based on the conjectured hardness of the linear code equivalence problem, while a third instantiation, LESS-III, is based on the conjectured hardness of the permutation code equivalence problem for weakly self-dual codes. We give an improved algorithm for solving both these problems over sufficiently large finite fields. Our implementation breaks LESS-I and LESS-III in approximately 25 s and 2 s respectively on an Intel i5-8400H CPU. Since the field size for LESS-II is relatively small $$(\mathbb{F}_7)$$, our algorithm does not improve on existing methods. Nonetheless, we estimate that LESS-II can be broken with approximately $$2^{44}$$ row operations. For the entire collection see [Zbl 1482.94005]. Recursive proof composition from accumulation schemes https://zbmath.org/1485.94132 2022-06-24T15:10:38.853281Z "Bünz, Benedikt" https://zbmath.org/authors/?q=ai:bunz.benedikt "Chiesa, Alessandro" https://zbmath.org/authors/?q=ai:chiesa.alessandro "Mishra, Pratyush" https://zbmath.org/authors/?q=ai:mishra.pratyush "Spooner, Nicholas" https://zbmath.org/authors/?q=ai:spooner.nicholas Summary: Recursive proof composition has been shown to lead to powerful primitives such as incrementally-verifiable computation (IVC) and proof-carrying data (PCD). All existing approaches to recursive composition take a succinct non-interactive argument of knowledge (SNARK) and use it to prove a statement about its own verifier.
This technique requires that the verifier run in time sublinear in the size of the statement it is checking, a strong requirement that restricts the class of SNARKs from which PCD can be built. This in turn restricts the efficiency and security properties of the resulting scheme. \textit{S. Bowe} et al. [``Recursive proof composition without a trusted setup'', Preprint, \url{https://eprint.iacr.org/2019/1021.pdf}] outlined a novel approach to recursive composition, and applied it to a particular SNARK construction which does not have a sublinear-time verifier. However, they omit details about this approach and do not prove that it satisfies any security property. Nonetheless, schemes based on their ideas have already been implemented in software. In this work we present a collection of results that establish the theoretical foundations for a generalization of the above approach. We define an accumulation scheme for a non-interactive argument, and show that this suffices to construct PCD, even if the argument itself does not have a sublinear-time verifier. Moreover, we give constructions of accumulation schemes for SNARKs, which yield PCD schemes with novel efficiency and security features. For the entire collection see [Zbl 1482.94008]. A secret-sharing based MPC protocol for Boolean circuits with good amortized complexity https://zbmath.org/1485.94133 2022-06-24T15:10:38.853281Z "Cascudo, Ignacio" https://zbmath.org/authors/?q=ai:cascudo.ignacio "Gundersen, Jaron Skovsted" https://zbmath.org/authors/?q=ai:gundersen.jaron-skovsted Summary: We present a new secure multiparty computation protocol in the preprocessing model that allows for the evaluation of a number of instances of a boolean circuit in parallel, with a small online communication complexity per instance of 10 bits per party and multiplication gate.
Our protocol is secure against an active dishonest majority, and can also be transformed, via existing techniques, into a protocol for the evaluation of a single ``well-formed'' boolean circuit with the same complexity per multiplication gate at the cost of some overhead that depends on the topology of the circuit. Our protocol uses an approach introduced recently in the setting of honest majority and information-theoretical security which, using an algebraic notion called reverse multiplication friendly embeddings, essentially transforms a batch of evaluations of an arithmetic circuit over a small field into one evaluation of another arithmetic circuit over a larger field. To obtain security against a dishonest majority we combine this approach with the well-known SPDZ protocol that operates over a large field. Structurally our protocol is most similar to MiniMAC, a protocol which bases its security on the use of error-correcting codes, but our protocol has a communication complexity which is half of that of MiniMAC when the best available binary codes are used. With respect to a certain variant of MiniMAC that utilizes codes over larger fields, our communication complexity is slightly worse; however, that variant of MiniMAC needs a much larger preprocessing than ours. We also show that our protocol has smaller amortized communication complexity than Committed MPC, a protocol for general fields based on homomorphic commitments, if we use the best available constructions for those commitments. Finally, we construct a preprocessing phase from oblivious transfer based on ideas from MASCOT and Committed MPC. For the entire collection see [Zbl 1482.94008].
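The role of preprocessed multiplication data in such protocols can be illustrated with the classic Beaver-triple evaluation of a single AND gate between two parties; this is a generic textbook sketch, not the amortized protocol above:

```python
import secrets

def share2(bit):
    """Additive 2-party sharing of a bit over F_2."""
    r = secrets.randbits(1)
    return r, bit ^ r

def open2(s0, s1):
    return s0 ^ s1

def beaver_and(x_sh, y_sh, triple):
    """Evaluate one AND gate on shared bits using a preprocessed triple:
    shares of random a, b and of c = a & b."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    (x0, x1), (y0, y1) = x_sh, y_sh
    d = open2(x0 ^ a0, x1 ^ a1)   # d = x ^ a is published; a masks x
    e = open2(y0 ^ b0, y1 ^ b1)   # e = y ^ b is published; b masks y
    z0 = c0 ^ (d & b0) ^ (e & a0) ^ (d & e)   # party 0 also adds public d&e
    z1 = c1 ^ (d & b1) ^ (e & a1)
    return z0, z1                 # shares of x & y

# Exhaustive correctness check over all inputs and all triple values:
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                triple = (share2(a), share2(b), share2(a & b))
                assert open2(*beaver_and(share2(x), share2(y), triple)) == (x & y)
```

Only the masked values d and e are exchanged online, which is why the online cost per multiplication gate is so small once the triples have been produced in preprocessing.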
Compact ring signatures from learning with errors https://zbmath.org/1485.94134 2022-06-24T15:10:38.853281Z "Chatterjee, Rohit" https://zbmath.org/authors/?q=ai:chatterjee.rohit "Garg, Sanjam" https://zbmath.org/authors/?q=ai:garg.sanjam "Hajiabadi, Mohammad" https://zbmath.org/authors/?q=ai:hajiabadi.mohammad "Khurana, Dakshita" https://zbmath.org/authors/?q=ai:khurana.dakshita "Liang, Xiao" https://zbmath.org/authors/?q=ai:liang.xiao "Malavolta, Giulio" https://zbmath.org/authors/?q=ai:malavolta.giulio "Pandey, Omkant" https://zbmath.org/authors/?q=ai:pandey.omkant "Shiehian, Sina" https://zbmath.org/authors/?q=ai:shiehian.sina Summary: Ring signatures allow a user to sign a message on behalf of a ``ring'' of signers, while hiding the true identity of the signer. As the degree of anonymity guaranteed by a ring signature is directly proportional to the size of the ring, an important goal in cryptography is to study constructions that minimize the size of the signature as a function of the number of ring members. In this work, we present the first compact ring signature scheme (i.e., where the size of the signature grows logarithmically with the size of the ring) from the (plain) learning with errors (LWE) problem. The construction is in the standard model and it does not rely on a common random string or on the random oracle heuristic. In contrast with the prior work of \textit{M. Backes} et al. [Lect. Notes Comput. Sci. 11478, 281--311 (2019; Zbl 07162731)], our scheme does not rely on bilinear pairings, which allows us to show that the scheme is post-quantum secure assuming the quantum hardness of LWE. At the heart of our scheme is a new construction of compact and statistically witness indistinguishable ZAP arguments for $$\mathrm{NP} \cap \mathrm{coNP}$$, that we show to be sound based on the plain LWE assumption. Prior to our work, statistical ZAPs (for all of NP) were known to exist only assuming sub-exponential LWE.
We believe that this scheme might find further applications in the future. For the entire collection see [Zbl 1483.94004]. On the complexity of arithmetic secret sharing https://zbmath.org/1485.94135 2022-06-24T15:10:38.853281Z "Cramer, Ronald" https://zbmath.org/authors/?q=ai:cramer.ronald "Xing, Chaoping" https://zbmath.org/authors/?q=ai:xing.chaoping "Yuan, Chen" https://zbmath.org/authors/?q=ai:yuan.chen Summary: Since the mid 2000s, asymptotically-good strongly-multiplicative linear (ramp) secret sharing schemes over a fixed finite field have turned out to be a central theoretical primitive in numerous constant-communication-rate results in multi-party cryptographic scenarios, and, surprisingly, in two-party cryptography as well. Known constructions of this most powerful class of arithmetic secret sharing schemes all rely heavily on algebraic geometry (AG), i.e., on dedicated AG codes based on asymptotically good towers of algebraic function fields defined over finite fields. It has been a well-known open question since the first (explicit) constructions of such schemes appeared in [\textit{H. Chen} and \textit{R. Cramer}, Lect. Notes Comput. Sci. 4117, 521--536 (2006; Zbl 1129.94016)] whether the use of ``heavy machinery'' can be avoided here, i.e., whether the mere existence of such schemes can also be proved by ``elementary'' techniques only (say, from classical algebraic coding theory), even disregarding effective construction. So far, there has been no progress. In this paper we show the theoretical result that, (1) no matter whether this open question has an affirmative answer or not, these schemes can be constructed explicitly by elementary algorithms defined in terms of basic algebraic coding theory. This pertains to all relevant operations associated to such schemes, including, notably, the generation of an instance for a given number of players $$n$$, as well as error correction in the presence of corrupt shares.
We further show that (2) the algorithms are quasi-linear time (in $$n$$); this is (asymptotically) significantly more efficient than the known constructions. That said, the analysis of the mere termination of these algorithms does still rely on algebraic geometry, in the sense that it requires ``blackbox application'' of suitable existence results for these schemes. Our method employs a nontrivial, novel adaptation of a classical (and ubiquitous) paradigm from coding theory that enables transformation of existence results on asymptotically good codes into explicit construction of such codes via concatenation, at some constant loss in the parameters achieved. In a nutshell, our generating idea is to combine a cascade of explicit but ``asymptotically-bad-yet-good-enough'' schemes with an asymptotically good one in such a judicious way that the latter can be selected with a number of players exponentially small in that of the compound scheme. This opens the door to efficient, elementary exhaustive search. In order to make this work, we overcome a number of nontrivial technical hurdles. Our main handles include a novel application of the recently introduced notion of Reverse Multiplication-Friendly Embeddings (RMFE) from [\textit{I. Cascudo} et al., ibid. 10993, 395--426 (2018; Zbl 1457.94201)], as well as a novel application of a natural variant in arithmetic secret sharing from [\textit{H. Chen} et al., ibid. 4965, 451--470 (2008; Zbl 1149.94309)]. For the entire collection see [Zbl 1482.94009].
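Strong multiplicativity generalizes a property already visible in Shamir's scheme: players can multiply their shares locally, and the resulting values are shares of the product secret on a degree-$$2t$$ polynomial, reconstructible by $$2t+1$$ players. A minimal sketch under toy assumptions (prime field $$\mathbb{F}_{101}$$, no corrupted shares, parameters chosen purely for illustration):

```python
import random

P = 101  # toy prime field; the schemes in the paper work over a fixed small field

def share(secret, t, n):
    """Shamir shares: pick f with f(0) = secret, deg f <= t; share_i = f(i)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 from (i, f(i)) pairs."""
    secret = 0
    for i, yi in points:
        num, den = 1, 1
        for j, _ in points:
            if j != i:
                num = (num * -j) % P
                den = (den * (i - j)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

t, n = 2, 5                       # n = 2t + 1 players
a = share(7, t, n)
b = share(9, t, n)
prod = [(ai * bi) % P for ai, bi in zip(a, b)]  # purely local share products
# the products lie on a degree-2t polynomial whose value at 0 is 7 * 9
assert reconstruct(list(zip(range(1, n + 1), prod))) == 63
```

The "ramp" and strongly-multiplicative schemes of the paper keep this multiplication-friendliness while fixing the field size as $$n$$ grows, which is exactly what the Shamir toy above cannot do.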
Stronger security and constructions of multi-designated verifier signatures https://zbmath.org/1485.94136 2022-06-24T15:10:38.853281Z "Damgård, Ivan" https://zbmath.org/authors/?q=ai:damgard.ivan-bjerre "Haagh, Helene" https://zbmath.org/authors/?q=ai:haagh.helene "Mercer, Rebekah" https://zbmath.org/authors/?q=ai:mercer.rebekah "Nitulescu, Anca" https://zbmath.org/authors/?q=ai:nitulescu.anca "Orlandi, Claudio" https://zbmath.org/authors/?q=ai:orlandi.claudio "Yakoubov, Sophia" https://zbmath.org/authors/?q=ai:yakoubov.sophia Summary: Off-the-Record (OTR) messaging is a two-party message authentication protocol that also provides plausible deniability: there is no record that can later convince a third party what messages were actually sent. The challenge in group OTR is to enable the sender to sign his messages so that group members can verify who sent a message (signatures should be unforgeable, even by group members). Also, we want the off-the-record property: even if some verifiers are corrupt and collude, they should not be able to prove the authenticity of a message to any outsider. Finally, we need consistency, meaning that if any group member accepts a signature, then all of them do. To achieve these properties it is natural to consider Multi-Designated Verifier Signatures (MDVS). However, existing literature defines and builds only limited notions of MDVS, where (a) the off-the-record property (source hiding) only holds when all verifiers could conceivably collude, and (b) the consistency property is not considered. The contributions of this paper are two-fold: stronger definitions for MDVS, and new constructions meeting those definitions. We strengthen source-hiding to support any subset of corrupt verifiers, and give the first formal definition of consistency. We build three new MDVS: one from generic standard primitives (PRF, key agreement, NIZK), one with concrete efficiency and one from functional encryption.
For the entire collection see [Zbl 1482.94008]. Robust secret sharing with almost optimal share size and security against rushing adversaries https://zbmath.org/1485.94137 2022-06-24T15:10:38.853281Z "Fehr, Serge" https://zbmath.org/authors/?q=ai:fehr.serge "Yuan, Chen" https://zbmath.org/authors/?q=ai:yuan.chen Summary: We show a robust secret sharing scheme for a maximal threshold $$t < n/2$$ that features an optimal overhead in share size, offers security against a rushing adversary, and runs in polynomial time. Previous robust secret sharing schemes for $$t < n/2$$ either suffered from a suboptimal overhead, offered no (provable) security against a rushing adversary, or ran in superpolynomial time. For the entire collection see [Zbl 1482.94009]. Threshold Schnorr with stateless deterministic signing from standard assumptions https://zbmath.org/1485.94138 2022-06-24T15:10:38.853281Z "Garillot, François" https://zbmath.org/authors/?q=ai:garillot.francois "Kondi, Yashvanth" https://zbmath.org/authors/?q=ai:kondi.yashvanth "Mohassel, Payman" https://zbmath.org/authors/?q=ai:mohassel.payman "Nikolaenko, Valeria" https://zbmath.org/authors/?q=ai:nikolaenko.valeria Summary: Schnorr's signature scheme permits an elegant threshold signing protocol due to its linear signing equation. However, each new signature consumes fresh randomness, which can be a major attack vector in practice. Sources of randomness in deployments are frequently either unreliable, or require state continuity, i.e. reliable fresh state resilient to rollbacks. State continuity is a notoriously difficult guarantee to achieve in practice, due to system crashes caused by software errors, malicious actors, or power supply interruptions [\textit{B. Parno} et al., ``Memoir: practical state continuity for protected modules'', in: Proceedings of the IEEE symposium on security and privacy, Oakland, CA, USA, May 22--25, 2011. Los Alamitos, CA: IEEE Computer Society.
379--394 (2011; \url{doi:10.1109/SP.2011.38})]. This is a non-issue for Schnorr variants such as EdDSA, which is specified to derive nonces deterministically as a function of the message and the secret key. However, it is challenging to translate these benefits to the threshold setting, specifically to construct a threshold Schnorr scheme where signing neither requires parties to consume fresh randomness nor update long-term secret state. In this work, we construct a dishonest majority threshold Schnorr protocol that enables such stateless deterministic nonce derivation using standardized block ciphers. Our core technical ingredients are new tools for the zero-knowledge from garbled circuits (ZKGC) paradigm to aid in verifying correct nonce derivation: \par i) A mechanism based on UC Commitments that allows a prover to commit once to a witness, and prove an unbounded number of statements online with only cheap symmetric key operations. \par ii) A garbling gadget to translate intermediate garbled circuit wire labels to arithmetic encodings. A proof in our scheme requires only a small constant number of exponentiations. For the entire collection see [Zbl 1483.94004]. Differential power analysis of the Picnic signature scheme https://zbmath.org/1485.94139 2022-06-24T15:10:38.853281Z "Gellersen, Tim" https://zbmath.org/authors/?q=ai:gellersen.tim "Seker, Okan" https://zbmath.org/authors/?q=ai:seker.okan "Eisenbarth, Thomas" https://zbmath.org/authors/?q=ai:eisenbarth.thomas Summary: This work introduces the first differential side-channel analysis of the Picnic signature scheme, an alternate candidate in the ongoing competition for post-quantum cryptography by the National Institute of Standards and Technology (NIST).
We present a successful side-channel analysis of the underlying multiparty implementation of the \textsf{LowMC} block cipher (\textsf{MPC-LowMC}) and show how side-channel information can be used to recover the entire secret key by exploiting two different parts of the algorithm. \textsf{LowMC} key recovery then allows forging signatures for the calling \textsf{Picnic} post-quantum signature scheme. We target the NIST reference implementation executed on a FRDM-K66F development board. Key recovery succeeds with fewer than 1000 \textsf{LowMC} traces, which can be obtained from fewer than 30 observed \textsf{Picnic} signatures. For the entire collection see [Zbl 1482.94004]. A lossless linear algebraic secret image sharing scheme https://zbmath.org/1485.94140 2022-06-24T15:10:38.853281Z "Kanso, Ali" https://zbmath.org/authors/?q=ai:kanso.ali "Ghebleh, Mohammad" https://zbmath.org/authors/?q=ai:ghebleh.mohammad "Alazemi, Abdullah" https://zbmath.org/authors/?q=ai:alazemi.abdullah Summary: A $$(k, n)$$-threshold secret image sharing scheme is any method of distributing a secret image amongst $$n$$ participants in such a way that any $$k$$ participants are able to use their shares collectively to reconstruct the secret image, while fewer than $$k$$ shares do not reveal any information about the secret image. In this work, we propose a lossless linear algebraic $$(k, n)$$-threshold secret image sharing scheme. The scheme associates a vector $$\mathfrak{v}_i$$ to the $$i$$th participant in the vector space $$\mathbb F^k_{2^\alpha}$$, where the vectors $$\mathfrak{v}_i$$ satisfy some admissibility conditions. The $$i$$th share is simply a linear combination of the vectors $$\mathfrak{v}_i$$ with coefficients from the secret image. Simulation results demonstrate the effectiveness of the proposed scheme and its robustness against standard statistical attacks on secret image sharing schemes.
Furthermore, the proposed scheme offers a high level of security and error-resilient capability, and the size of each share is $$1 / k$$ the size of the secret image. In comparison with existing work, the scheme is shown to be very competitive. Authenticated key agreement protocol based on provable secure cryptographic functions https://zbmath.org/1485.94141 2022-06-24T15:10:38.853281Z "Kilciauskas, Ausrys" https://zbmath.org/authors/?q=ai:kilciauskas.ausrys "Butkus, Gintaras" https://zbmath.org/authors/?q=ai:butkus.gintaras "Sakalauskas, Eligijus" https://zbmath.org/authors/?q=ai:sakalauskas.eligijus Summary: The vulnerable part of communication between user and server is the poor authentication level on the user's side. For example, e-banking systems authenticate users with passwords, which can be lost or swindled by a party maliciously impersonating the bank. To increase the security of e-banking systems, users should be supplied with elements of a public key infrastructure (PKI), though not necessarily to the extent of standard requirements, which are too complicated for ordinary users. In this paper, we propose two versions of an authenticated key agreement protocol (AKAP) which can be simply realized on the user's side. AKAP is a collection of cryptographic functions having provable security properties. It is proved that AKAP1 is secure against an active adversary under the discrete logarithm assumption when certain formulated conditions hold. AKAP2 provides user anonymity against an eavesdropping adversary. The partial security of AKAP2, which relies on the security of the asymmetric encryption function, is also investigated.
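The linear algebraic secret image sharing scheme summarized above derives each share from a block of secret values and a participant's admissible vector, and reconstructs by solving a linear system. A hedged toy sketch of that mechanism, using a prime field and Vandermonde vectors as one admissible choice (the paper itself works over $$\mathbb{F}^k_{2^\alpha}$$ with its own admissibility conditions, and the names here are ours):

```python
P = 257  # toy prime field standing in for the paper's F_{2^alpha}

def vand(i, k):
    """Admissible vector for participant i: a Vandermonde row (any k are independent)."""
    return [pow(i, e, P) for e in range(k)]

def make_share(secret_block, i):
    """Share of participant i: inner product of the secret block with v_i."""
    v = vand(i, len(secret_block))
    return sum(s * c for s, c in zip(secret_block, v)) % P

def reconstruct(pairs, k):
    """Solve the k x k system V s = shares by Gauss-Jordan elimination mod P."""
    rows = [vand(i, k) + [y] for i, y in pairs[:k]]  # augmented matrix
    for col in range(k):
        piv = next(r for r in range(col, k) if rows[r][col])
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)
        rows[col] = [c * inv % P for c in rows[col]]
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

block = [12, 200, 7]  # one block of k = 3 secret values
shares = [(i, make_share(block, i)) for i in range(1, 6)]  # n = 5 participants
assert reconstruct(shares[2:], 3) == block  # any 3 shares recover the block
```

Because any $$k$$ of the Vandermonde rows are linearly independent, any $$k$$ participants can solve for the block, while each individual share is a single field element, matching the $$1/k$$ share-size ratio noted above.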
Two-round trip Schnorr multi-signatures via delinearized witnesses https://zbmath.org/1485.94142 2022-06-24T15:10:38.853281Z "Kılınç Alper, Handan" https://zbmath.org/authors/?q=ai:kilinc-alper.handan "Burdges, Jeffrey" https://zbmath.org/authors/?q=ai:burdges.jeffrey Summary: We construct a two-round Schnorr-based signature scheme (DWMS) by delinearizing two pre-commitments supplied by each signer. DWMS is a secure signature scheme in the algebraic group model (AGM) and the random oracle model (ROM) under the assumption of the hardness of the one-more discrete logarithm problem and the 2-entwined sum problem that we introduce in this paper. Our new $$m$$-entwined sum problem tweaks the $$k$$-sum problem in a scalar field using the associated group. We prove the hardness of our new problem in the AGM assuming the hardness of the discrete logarithm problem in the associated group. We believe that our new problem simplifies the security proofs of multi-signature schemes that use the delinearization of commitments. For the entire collection see [Zbl 1483.94004]. FROST: Flexible round-optimized Schnorr threshold signatures https://zbmath.org/1485.94143 2022-06-24T15:10:38.853281Z "Komlo, Chelsea" https://zbmath.org/authors/?q=ai:komlo.chelsea "Goldberg, Ian" https://zbmath.org/authors/?q=ai:goldberg.ian Summary: Unlike signatures in a single-party setting, threshold signatures require cooperation among a threshold number of signers each holding a share of a common private key. Consequently, generating signatures in a threshold setting imposes overhead due to network rounds among signers, proving costly when secret shares are stored on network-limited devices or when coordination occurs over unreliable networks. 
In this work, we present FROST, a Flexible Round-Optimized Schnorr Threshold signature scheme that reduces network overhead during signing operations while employing a novel technique to protect against forgery attacks applicable to similar schemes in the literature. FROST improves upon the state of the art in Schnorr threshold signature protocols, as it can safely perform signing operations in a single round without limiting concurrency of signing operations, yet allows for true threshold signing, as only a threshold $$t$$ out of $$n$$ possible participants are required for signing operations, such that $$t\le n$$. FROST can be used as either a two-round protocol, or optimized to a single-round signing protocol with a pre-processing stage. FROST achieves its efficiency improvements in part by allowing the protocol to abort in the presence of a misbehaving participant (who is then identified and excluded from future operations) -- a reasonable model for practical deployment scenarios. We present proofs of security demonstrating that FROST is secure against chosen-message attacks assuming the discrete logarithm problem is hard and the adversary controls fewer participants than the threshold. For the entire collection see [Zbl 1482.94005]. A new provably secure certificateless signature with revocation in the standard model https://zbmath.org/1485.94144 2022-06-24T15:10:38.853281Z "Mei, Qian" https://zbmath.org/authors/?q=ai:mei.qian "Zhao, Yanan" https://zbmath.org/authors/?q=ai:zhao.yanan "Xiong, Hu" https://zbmath.org/authors/?q=ai:xiong.hu Summary: The primitive of certificateless signature, since its invention, has become a widely studied paradigm, as it avoids both the key escrow problem and the certificate management problem. However, this primitive cannot resist catastrophic damage caused by key exposure. Therefore, it is necessary to integrate a revocation mechanism into certificateless signatures.
In this paper, we propose a new certificateless signature scheme with revocation (RCLS) and prove its security in the standard model. Meanwhile, our scheme can resist malicious-but-passive Key Generation Center (KGC) attacks, a resistance that previous solutions did not achieve. The theoretical analysis shows that our scheme has high efficiency and practicality. Short identity-based signatures with tight security from lattices https://zbmath.org/1485.94145 2022-06-24T15:10:38.853281Z "Pan, Jiaxin" https://zbmath.org/authors/?q=ai:pan.jiaxin "Wagner, Benedikt" https://zbmath.org/authors/?q=ai:wagner.benedikt Summary: We construct a short and adaptively secure identity-based signature scheme tightly based on the well-known Short Integer Solution (SIS) assumption. Although identity-based signature schemes can be tightly constructed either from standard signature schemes secure against adaptive corruptions in the multi-user setting or from a two-level hierarchical identity-based encryption scheme, neither is known with short signature size and tight security based on the SIS assumption. Here ``short'' means the signature size is independent of the message length, which is in contrast to the tree-based (tight) signatures. Our approach consists of two steps: Firstly, we give two tight generic transformations (one with random oracles and the other without) from non-adaptively secure identity-based signature schemes to adaptively secure ones. Our idea extends the similar transformation for digital signature schemes. Secondly, we construct a non-adaptively secure identity-based signature scheme based on the SIS assumption in the random oracle model. For the entire collection see [Zbl 1482.94004]. Security analysis of \textit{SPAKE2+} https://zbmath.org/1485.94146 2022-06-24T15:10:38.853281Z "Shoup, Victor" https://zbmath.org/authors/?q=ai:shoup.victor Summary: We show that a slight variant of Protocol \textit{SPAKE2+}, which was presented but not analyzed in [\textit{D.
Cash} et al., J. Cryptology 22, No. 4, 470--504 (2009; Zbl 1220.94034)], is a secure asymmetric password-authenticated key exchange protocol (PAKE), meaning that the protocol still provides good security guarantees even if a server is compromised and the password file stored on the server is leaked to an adversary. The analysis is done in the UC framework (i.e., a simulation-based security model), under the computational Diffie-Hellman (CDH) assumption, and modeling certain hash functions as random oracles. The main difference between our variant and the original Protocol \textit{SPAKE2+} is that our variant includes standard key confirmation flows; also, adding these flows allows some slight simplification to the remainder of the protocol. Along the way, we also (i) provide the first proof (under the same assumptions) that a slight variant of Protocol \textit{SPAKE2} from [\textit{M. Abdalla} and \textit{D. Pointcheval}, Lect. Notes Comput. Sci. 3376, 191--208 (2005; Zbl 1079.94529)] is a secure symmetric PAKE in the UC framework (previous security proofs were all in the weaker BPR framework [\textit{M. Bellare} et al., ibid. 1807, 139--155 (2000; Zbl 1082.94533)]); (ii) provide a proof (under very similar assumptions) that a variant of Protocol \textit{SPAKE2+} that is currently being standardized is also a secure asymmetric PAKE; (iii) repair several problems in earlier UC formulations of secure symmetric and asymmetric PAKE. For the entire collection see [Zbl 1482.94009]. New practical multivariate signatures from a nonlinear modifier https://zbmath.org/1485.94147 2022-06-24T15:10:38.853281Z "Smith-Tone, Daniel" https://zbmath.org/authors/?q=ai:smith-tone.daniel Summary: Multivariate cryptography is dominated by schemes supporting various tweaks, or ``modifiers,'' designed to patch certain algebraic weaknesses they would otherwise exhibit.
Typically these modifiers are linear in nature -- either requiring an extra composition with an affine map, or being evaluated by a legitimate user via an affine projection. This description applies to the minus, plus, vinegar and internal perturbation modifiers, to name a few. Though it is well-known that combinations of various modifiers can offer security against certain classes of attacks, cryptanalysts have produced ever more sophisticated attacks against various combinations of these linear modifiers. In this article, we introduce a more fundamentally nonlinear modifier, called Q, that is inspired by relinearization. The effect of the Q modifier on multivariate digital signature schemes is to maintain inversion efficiency at the cost of slightly slower verification and larger public keys, while altering the algebraic properties of the public key. Thus the Q modifier is ideal for applications of digital signature schemes requiring very fast signing and verification without key transport. As an application of this modifier, we propose new multivariate digital signature schemes with fast signing and verification that are resistant to all known attacks. For the entire collection see [Zbl 1482.94004]. Efficient key recovery for all HFE signature variants https://zbmath.org/1485.94148 2022-06-24T15:10:38.853281Z "Tao, Chengdong" https://zbmath.org/authors/?q=ai:tao.chengdong "Petzoldt, Albrecht" https://zbmath.org/authors/?q=ai:petzoldt.albrecht "Ding, Jintai" https://zbmath.org/authors/?q=ai:ding.jintai Summary: The HFE cryptosystem is one of the most popular multivariate schemes. Especially in the area of digital signatures, the HFEv- variant offers short signatures and high performance. Recently, an instance of the HFEv- signature scheme called GeMSS was selected as one of the alternative candidates for signature schemes in the third round of the NIST Post-Quantum Crypto (PQC) Standardization Project.
In this paper, we propose a new key recovery attack on the HFEv- signature scheme. Our attack shows that neither the Minus nor the Vinegar modification enhances the security of the basic HFE scheme significantly. This shows that it is very difficult to build a secure and efficient signature scheme on the basis of HFE. In particular, we use our attack to show that the proposed parameters of the GeMSS scheme are not as secure as claimed. For the entire collection see [Zbl 1483.94004]. Efficient certificate-based signature with short key and signature sizes from lattices https://zbmath.org/1485.94149 2022-06-24T15:10:38.853281Z "Tseng, Yuh-Min" https://zbmath.org/authors/?q=ai:tseng.yuh-min "Tsai, Tung-Tso" https://zbmath.org/authors/?q=ai:tsai.tung-tso "Wu, Jui-Di" https://zbmath.org/authors/?q=ai:wu.jui-di "Huang, Sen-Shan" https://zbmath.org/authors/?q=ai:huang.sen-shan Summary: Certificate-based cryptography (CB-PKC) is an attractive public key setting, which reduces the complexity of public key infrastructure in traditional public key settings and resolves the key escrow problem in ID-based public key settings. In the past, a large number of certificate-based signature and encryption schemes were proposed. Nevertheless, the security of these schemes relies mainly on the difficulty of the discrete logarithm and factorization problems. Unfortunately, both problems will become solvable once quantum computers become a reality. Public key cryptography from lattices is one of the important candidates for post-quantum cryptography. However, there is little work on certificate-based cryptography from lattices. In this paper, we propose a new and efficient certificate-based signature (CBS) scheme from lattices. Under the short integer solution (SIS) assumption from lattices, the proposed CBS scheme is shown to be existentially unforgeable against adaptive chosen-message attacks.
Performance comparisons are made to demonstrate that the proposed CBS scheme from lattices is better than the previous lattice-based CBS scheme in terms of private key size and signature size. A chronological review of key establishment models and protocols https://zbmath.org/1485.94150 2022-06-24T15:10:38.853281Z "Yap, E. Y. Y." https://zbmath.org/authors/?q=ai:yap.e-y-y "Chin, J. J." https://zbmath.org/authors/?q=ai:chin.ji-jian "Goh, A." https://zbmath.org/authors/?q=ai:goh.angela|goh.a-t-c|goh.alvina|goh.alwyn Summary: This work is a review of existing authenticated key exchange (AKE) security models and protocols, mainly based on Diffie-Hellman Key Exchange (DHKE). We provide a discussion of the various security models for AKEs, such as the Bellare-Rogaway (BR) model, the Canetti-Krawczyk (CK) model and their variants. Then we provide a review covering over ninety protocols in chronological order. The security features of these models, together with examples of protocols that fit them, are exhibited. On the security of a non-interactive authenticated key agreement over mobile communication networks https://zbmath.org/1485.94151 2022-06-24T15:10:38.853281Z "Yau, W. C." https://zbmath.org/authors/?q=ai:yau.wei-chuen "Yap, W. S." https://zbmath.org/authors/?q=ai:yap.wun-she "Chin, J. J." https://zbmath.org/authors/?q=ai:chin.ji-jian Summary: Setting up a common secret key for communications between two parties over insecure mobile communication networks is important for many network applications. Previously, Wu and Lin proposed a non-interactive authenticated key agreement over mobile communication networks with security proofs assuming the Bilinear Diffie-Hellman problem is hard. The Wu and Lin scheme [\textit{T.-S. Wu} and \textit{H.-Y. Lin}, ``Non-interactive authenticated key agreement over the mobile communication network'', Mobile Netw. Appl. 18, 594--599 (2013)] is unique in that the users do not need to interact at all in sharing a secret key.
Moreover, their scheme achieves at least trust level 2: the system authority does not learn the users' secret keys, since self-certified cryptography is used. In this paper, we demonstrate that any malicious outsider can break the security of Wu and Lin's scheme by impersonating either party using a public key replacement attack. Besides, we show that the system authority can easily recover all user secret keys, which contradicts the concept of self-certified cryptography. Lastly, if the secret key shared between two parties, or one party's private key, is compromised, the same two users can no longer communicate securely in the future, since the same secret key will be derived and shared forever. This violates forward secrecy, a property that must be provided by a key agreement scheme. On subset-resilient hash function families https://zbmath.org/1485.94152 2022-06-24T15:10:38.853281Z "Yuan, Quan" https://zbmath.org/authors/?q=ai:yuan.quan "Tibouchi, Mehdi" https://zbmath.org/authors/?q=ai:tibouchi.mehdi "Abe, Masayuki" https://zbmath.org/authors/?q=ai:abe.masayuki Summary: In this paper, we analyze the security of subset-resilient hash function families, which were first proposed as a requirement of a hash-based signature scheme called HORS. Let $$\mathcal{H}$$ be a family of functions mapping an element to a subset of size at most $$k$$. $$(r, k)$$-subset resilience guarantees that given a random function $$H$$ from $$\mathcal{H}$$, it is hard to find an $$(r+1)$$-tuple $$(x,x_1,\dots,x_r)$$ such that (1) $$H(x)$$ is covered by the union of the $$H(x_i)$$ and (2) $$x$$ is not equal to any $$x_i$$. Subset resilience and its variants are related to nearly all existing stateless hash-based signature schemes, but the power of this security notion has received little study. We present three results on subset resilience.
First, we show a generic quantum attack against subset resilience whose time complexity is smaller than that of simply implementing Grover's search. Second, we show that subset-resilient hash function families imply the existence of distributional collision-resistant hash function families. Informally, distributional collision resistance is a relaxation of collision resistance, which guarantees that it is hard to find a uniform collision for a hash function. This result yields a comparison of the power of subset resilience, collision resistance, and distributional collision resistance. Third, we prove a fully black-box separation from one-way permutations. The least-squares solutions in linear codes based multisecret-sharing approach https://zbmath.org/1485.94153 2022-06-24T15:10:38.853281Z "Çalkavur, Selda" https://zbmath.org/authors/?q=ai:calkavur.selda "Nauman, Syed Khalid" https://zbmath.org/authors/?q=ai:nauman.syed-khalid "Özel, Cenap" https://zbmath.org/authors/?q=ai:ozel.cenap "Zekraoui, Hanifa" https://zbmath.org/authors/?q=ai:zekraoui.hanifa Summary: The theory of generalised inverses of matrices has been applied in many areas since 1950, among them Markov chains, robotics, and differential equations. In particular, the applications of this theory over finite fields play an important role in cryptography and coding theory. In this work, we study a new multisecret-sharing scheme using the generalised inverses of matrices and least-squares solutions in linear codes. We determine the access structure. Its security improves on that of existing multisecret-sharing schemes. Improved analysis of the reduction from BDD to uSVP https://zbmath.org/1485.94171 2022-06-24T15:10:38.853281Z "Zhao, Chunhuan" https://zbmath.org/authors/?q=ai:zhao.chunhuan "Zheng, Zhongxiang" https://zbmath.org/authors/?q=ai:zheng.zhongxiang Summary: \textit{S. Bai} et al. [LIPIcs -- Leibniz Int. Proc. Inform. 55, Article 76, 12 p.
(2016; Zbl 1391.94869)] showed that for polynomially bounded parameter $$\gamma$$, the Bounded Distance Decoding (BDD) problem with parameter $$1/(\sqrt{2}\gamma)$$ can be probabilistically reduced to the unique Shortest Vector Problem (uSVP) with parameter $$\gamma$$. Their reduction is based on the lattice sparsification and embedding techniques. In this paper, we improve their analysis for specific values of the parameter and prove that $$\mathrm{BDD}_{1/(\sqrt{2}\gamma)}$$ can be reduced to $$\mathrm{uSVP}_{\gamma_1}$$ with a slightly larger gap. The improvement follows from a careful study of the radius of a sphere which contains at most a polynomial number of vectors; a proper choice of the parameter in the embedding lattice is then used to obtain a unique lattice with a larger gap. Our result shows that the equivalence between $$\mathrm{BDD}_{1/(\sqrt{2}\gamma)}$$ and $$\mathrm{uSVP}_\gamma$$ does not hold under the assumption that $$\mathrm{uSVP}_\gamma$$ does not reduce to $$\mathrm{uSVP}_{\gamma^\prime}$$ for any two constants $$\gamma<\gamma^\prime$$. A note on ``Cryptographically strong permutations from the butterfly structure'' https://zbmath.org/1485.94175 2022-06-24T15:10:38.853281Z "Li, Nian" https://zbmath.org/authors/?q=ai:li.nian "Hu, Zhao" https://zbmath.org/authors/?q=ai:hu.zhao "Xiong, Maosheng" https://zbmath.org/authors/?q=ai:xiong.maosheng "Zeng, Xiangyong" https://zbmath.org/authors/?q=ai:zeng.xiangyong Summary: Very recently, a class of cryptographically strong permutations with boomerang uniformity 4 and the best known nonlinearity was constructed from the closed butterfly structure in [\textit{K. Li} et al., ibid. 89, No. 4, 737--761 (2021; Zbl 1469.94254)]. In this note, we provide two additional results concerning these permutations. We first present the conditions on these permutations obtained in Li et al. (loc. cit.) in a much simpler form, and then show that they are linearly equivalent to Gold functions.
We also prove a criterion for solving a new type of equation over finite fields, which is useful and may be of independent interest.
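The Gold functions mentioned in the last summary are the monomials $$x^{2^i+1}$$ over $$\mathbb{F}_{2^n}$$; when $$\gcd(i,n)=1$$ they are almost perfect nonlinear (APN), i.e. they have differential uniformity 2. A quick brute-force check for $$x^3$$ over $$\mathbb{F}_{2^5}$$, where the field representation via the irreducible polynomial $$x^5+x^2+1$$ is an arbitrary choice made for this sketch:

```python
N = 5
MOD = 0b100101  # x^5 + x^2 + 1, irreducible over F_2

def gf_mul(a, b):
    """Carry-less multiplication in F_{2^5}, reducing modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:       # degree reached 5: reduce
            a ^= MOD
        b >>= 1
    return r

def gold(x):
    """x^3 = x^{2^1 + 1}, the simplest Gold function."""
    return gf_mul(x, gf_mul(x, x))

# differential uniformity: max over a != 0, b of #{x : F(x + a) + F(x) = b}
du = 0
for a in range(1, 1 << N):
    counts = {}
    for x in range(1 << N):
        b = gold(x ^ a) ^ gold(x)       # "+" is XOR in characteristic 2
        counts[b] = counts.get(b, 0) + 1
    du = max(du, max(counts.values()))
assert du == 2  # APN, since gcd(1, 5) = 1
```

Since $$x$$ and $$x+a$$ always give the same derivative value, the counts come in pairs, so 2 is the smallest possible value over a binary field; the check confirms that $$x^3$$ attains it.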