Recent zbMATH articles in MSC 68https://zbmath.org/atom/cc/682022-11-17T18:59:28.764376ZConsciousness and information, classical, quantum or algorithmic?https://zbmath.org/1496.000242022-11-17T18:59:28.764376Z"Chaitin, Gregory"https://zbmath.org/authors/?q=ai:chaitin.gregory-jFor the entire collection see [Zbl 1496.00078].Digital form finding using Voronoi patternhttps://zbmath.org/1496.000482022-11-17T18:59:28.764376Z"Capone, Mara"https://zbmath.org/authors/?q=ai:capone.mara"Lanzara, Emanuela"https://zbmath.org/authors/?q=ai:lanzara.emanuela"Portioli, Francesco Paolo Antonio"https://zbmath.org/authors/?q=ai:portioli.francesco-paolo-antonio"Flore, Francesco"https://zbmath.org/authors/?q=ai:flore.francescoSummary: Starting from funicular models, chain models and hanging membranes, the role of 3D physical models in optimized shape research is the basis of form-finding strategies. Advances in structural optimized shape design derive from the widespread availability of specialized digital form-finding tools. The goal of this paper is to test and evaluate interdisciplinary approaches based on computational tools useful in the form finding of efficient structural systems. This work is aimed at designing an inverse hanging shape subdivided into polygonal voussoirs (Voronoi patterns) by relaxing a planar discrete and elastic system, loaded at each point and anchored along its boundary.
The workflow involves shaping, discretization (from pre-shaped paneling to digital stereotomy) and structural analysis carried out using two modeling approaches, finite element and rigid block modeling, using an in-house software tool, LiABlock\_3D (MATLAB\textsuperscript{\circledR}), to check the stress state and to evaluate the equilibrium stability of the final shell.Triangulation algorithms for generating as-is floor planshttps://zbmath.org/1496.000492022-11-17T18:59:28.764376Z"da Silva Brandão, Filipe Jorge"https://zbmath.org/authors/?q=ai:da-silva-brandao.filipe-jorge"Paio, Alexandra"https://zbmath.org/authors/?q=ai:paio.alexandra"Lopes, Adriano"https://zbmath.org/authors/?q=ai:lopes.adrianoSummary: Precisely capturing context is a fundamental first step in dealing with built environments. Previous research has demonstrated that existing methods for generating as-is floor plans of non-orthogonal rooms by non-expert users do not produce geometrically accurate results. The present paper proposes the adaptation of empirical triangulation methods, traditionally used by architects and other building professionals in surveying building interiors, to the development of a semi-automated room-survey workflow. A set of triangulation algorithms that automate the plan drawing stage is presented.Special issue of invited papers in honor of the Boris Trakhtenbrot centenary.
Prefacehttps://zbmath.org/1496.000582022-11-17T18:59:28.764376ZFrom the text: This issue of Fundamenta Informaticae is dedicated to the memory of the late Boris (Boaz) Abramovich Trakhtenbrot, and commemorates the centennial of his birth on 20 February 1921 (Gregorian).Preface: CALDAM 2017https://zbmath.org/1496.000612022-11-17T18:59:28.764376ZFrom the text: This special issue of Discrete Applied Mathematics (DAM) is a collection of 12 papers received in response to a general call for papers after the Third International Conference on Algorithms and Discrete Applied Mathematics, CALDAM 2017, held in Goa, India, in February 2017. Nine of the papers appeared in preliminary form in the proceedings of CALDAM 2017. A total of 29 manuscripts were received in response to the call for papers. Nearly half (14) of the submissions were results that were not presented at the CALDAM conference.Special issue: Selected papers of the 14th international conference on language and automata theory and applications, LATA 2020https://zbmath.org/1496.000722022-11-17T18:59:28.764376ZFrom the text: This special issue of the journal Information and Computation contains extended versions of some papers that were accepted for presentation at the 14th International Conference on Language and Automata Theory and Applications (LATA 2020), which was planned to take place in Milan, Italy, on March 4--6, 2020.Logic and argumentation. Third international conference, CLAR 2020, Hangzhou, China, April 6--9, 2020. Proceedingshttps://zbmath.org/1496.030072022-11-17T18:59:28.764376ZThe articles of this volume will be reviewed individually. For the preceding conference see [Zbl 1428.03006].
Indexed articles:
\textit{Ågotnes, Thomas; Wáng, Yì N.}, Group belief, 3-21 [Zbl 07578357]
\textit{Arisaka, Ryuta; Ito, Takayuki}, Broadening label-based argumentation semantics with may-must scales, 22-41 [Zbl 07578358]
\textit{Baur, Michael; Studer, Thomas}, Semirings of evidence, 42-57 [Zbl 07578359]
\textit{Cramer, Marcos; Dietz Saldanha, Emmanuelle-Anna}, Logic programming, argumentation and human reasoning, 58-79 [Zbl 07578360]
\textit{Dautović, Šejla; Doder, Dragan; Ognjanović, Zoran}, Reasoning about degrees of confirmation, 80-95 [Zbl 07578361]
\textit{Düntsch, Ivo; Dzik, Wojciech}, Ideal related algebras and their logics (extended abstract), 96-103 [Zbl 07578362]
\textit{Fuenmayor, David; Benzmüller, Christoph}, Computer-supported analysis of arguments in climate engineering, 104-115 [Zbl 07578363]
\textit{Li, Xu; Wáng, Yì N.}, A logic of knowledge and belief based on abstract arguments, 116-130 [Zbl 07578364]
\textit{Libal, Tomer}, A meta-level annotation language for legal texts, 131-150 [Zbl 07578365]
\textit{Libal, Tomer; Steen, Alexander}, Towards an executable methodology for the formalization of legal texts, 151-165 [Zbl 07578366]
\textit{Oliveira, Tiago; Dauphin, Jérémie; Satoh, Ken; Tsumoto, Shusaku; Novais, Paulo}, Goal-driven structured argumentation for patient management in a multimorbidity setting, 166-183 [Zbl 07578367]
\textit{Suzuki, Satoru}, Intuitionistic-Bayesian semantics of first-order logic for generics, 184-200 [Zbl 07578368]
\textit{Tang, Liping}, Ambiguity preference and context learning in uncertain signaling, 201-218 [Zbl 07578369]
\textit{van Berkel, Kees; Lyon, Tim; Olivieri, Francesco}, A decidable multi-agent logic for reasoning about actions, instruments, and norms, 219-241 [Zbl 07578370]
\textit{Chen, Weiwei}, Preservation of admissibility with rationality and feasibility constraints, 245-258 [Zbl 07578371]
\textit{Liga, Davide; Palmirani, Monica}, Uncertainty in argumentation schemes: negative consequences and basic slippery slope, 259-278 [Zbl 07578372]
\textit{Su, Chinghui; Rong, Liwu; Liang, Fei}, Reasoning as speech acts, 279-286 [Zbl 07578373]
\textit{Wang, Zongshun; Wu, Jiachao}, Dynamics of fuzzy argumentation frameworks, 287-307 [Zbl 07578374]
\textit{Wu, Jiachao; Li, Hengfei}, Probabilistic three-valued argumentation frameworks, 308-323 [Zbl 07578375]
\textit{Pedersen, Mina Young; Smets, Sonja; Ågotnes, Thomas}, Further steps towards a logic of polarization in social networks, 324-345 [Zbl 07578376]
\textit{Yu, Zhe}, A formalization of the slippery slope argument, 346-361 [Zbl 07578377]Calculability anthology. Birth and development of the theory of computability from the 1920s to the 1970s. Historical introduction by Serge Grigorieffhttps://zbmath.org/1496.030092022-11-17T18:59:28.764376ZPublisher's description: At a time when everyone is talking about algorithms, this anthology of computability comes at just the right moment. It aims to retrace, in direct contact with the sources, the decisive initial stages of a theory of computation. It offers a collection of twenty-four texts, from Babbage's on his difference engine, through Behmann, Skolem, Hilbert, Ackermann, Gödel, Church, Kleene, Turing, Post, Rosser, Markov, Kolmogorov-Uspenski, Howard and others, up to Matiyasevich in 1970.
Each text is accompanied by a presentation and by notes intended to ease its reading, written by a team of international specialists. The whole is preceded by an introduction by Serge Grigorieff, who brings to this history the viewpoint of the contemporary logician and computer scientist.
The articles of this volume will not be indexed individually.Unification properties of commutative theories: a categorical treatmenthttps://zbmath.org/1496.030362022-11-17T18:59:28.764376Z"Baader, Franz"https://zbmath.org/authors/?q=ai:baader.franzSummary: A general framework for unification in ``commutative'' theories is investigated, which is based on a categorical reformulation of theory unification. This yields algebraic characterizations of unification type unitary (resp. finitary for unification with constants). We thus obtain the well-known results for abelian groups, abelian monoids and idempotent abelian monoids as well as some new results as corollaries to a general theorem. In addition, it is shown that constant-free unification problems in ``commutative'' theories are either unitary or of unification type zero and we give an example of a ``commutative'' theory of type zero.
For the entire collection see [Zbl 0712.68006].Decision procedures for theories of sets with measureshttps://zbmath.org/1496.030382022-11-17T18:59:28.764376Z"Bender, Markus"https://zbmath.org/authors/?q=ai:bender.markus"Sofronie-Stokkermans, Viorica"https://zbmath.org/authors/?q=ai:sofronie-stokkermans.vioricaSummary: In this paper we introduce a decision procedure for checking satisfiability of quantifier-free formulae in the combined theory of sets, measures and arithmetic. Such theories are important in mathematics (e.g. probability theory and measure theory) and in applications. We indicate how these ideas can be used for obtaining a decision procedure for a fragment of the duration calculus.
For the entire collection see [Zbl 1369.68037].Translating between implicit and explicit versions of proofhttps://zbmath.org/1496.030392022-11-17T18:59:28.764376Z"Blanco, Roberto"https://zbmath.org/authors/?q=ai:blanco.roberto"Chihani, Zakaria"https://zbmath.org/authors/?q=ai:chihani.zakaria"Miller, Dale"https://zbmath.org/authors/?q=ai:miller.dale-aSummary: The Foundational Proof Certificate (FPC) framework can be used to define the semantics of a wide range of proof evidence. For example, such definitions exist for a number of textbook proof systems as well as for the proof evidence output from some existing theorem proving systems. An important decision in designing a proof certificate format is the choice of how many details are to be placed within certificates. Formats with fewer details are smaller and easier for theorem provers to output but they require more sophistication from checkers since checking will involve some proof reconstruction. Conversely, certificate formats containing many details are larger but are checkable by less sophisticated checkers. Since the FPC framework is based on well-established proof theory principles, proof certificates can be manipulated in meaningful ways. In this paper, we illustrate how it is possible to automate moving from implicit to explicit (elaboration) and from explicit to implicit (distillation) proof evidence via the proof checking of a pair of proof certificates. Performing elaboration makes it possible to transform a proof certificate with details missing into a certificate packed with enough details so that a simple kernel (without support for proof reconstruction) can check the elaborated certificate. We illustrate how trust in only a single, simple checker of explicitly described proofs can be used to provide trust in a range of theorem provers employing a range of proof structures.
For the entire collection see [Zbl 1369.68037].Negation elimination in equational formulae (extended abstract)https://zbmath.org/1496.030402022-11-17T18:59:28.764376Z"Comon, Hubert"https://zbmath.org/authors/?q=ai:comon.hubert"Fernández, Maribel"https://zbmath.org/authors/?q=ai:fernandez.maribelSummary: An equational formula is a first order formula over an alphabet \(\mathcal{F}\) of function symbols and the equality predicate symbol. Such formulae are interpreted in the algebra \(T(\mathcal{F})\) of finite trees. An equational formula is w.e.d. (without equations in disjunctions) if its solved forms do not contain any subformula \(s = t \vee u \neq v\). A unification problem is any equational problem which does not contain any negation (in particular, it should not contain any disequation). We give a terminating set of transformation rules such that a w.e.d. formula \(\phi\) is (semantically) equivalent to a unification problem iff its irreducible form is a unification problem. This result can be formulated in another way: our set of transformation rules computes a finite complete set of ``most general unifiers'' for a w.e.d. formula each time such a finite set exists. Such results extend the results of \textit{J. L. Lassez} and \textit{K. Marriott} on ``explicit representation of terms defined by counter-examples'' [J. Autom. Reasoning 3, 301--317 (1987; Zbl 0641.68124)].
The above results are extended to quotients of the free algebra by a congruence \(=_E\) which can be generated by a set of shallow permutative equations \(E\).
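For readers unfamiliar with the notion used above, a ``most general unifier'' over the free term algebra can be computed by classical Robinson-style syntactic unification. The following minimal Python sketch illustrates that standard algorithm only (it is not the paper's transformation-rule system, and the term encoding is this example's own convention): variables are strings starting with `?`, and compound terms are tuples `(fname, arg1, ..., argn)`.

```python
def is_var(x):
    return isinstance(x, str) and x.startswith('?')

def walk(x, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while is_var(x) and x in subst:
        x = subst[x]
    return x

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t under subst?"""
    t = walk(t, subst)
    if t == v:
        return True
    return not is_var(t) and any(occurs(v, a, subst) for a in t[1:])

def bind(v, t, subst):
    if occurs(v, t, subst):
        return None  # no finite-tree solution (e.g. ?x = f(?x))
    return {**subst, v: t}

def unify(s, t, subst=None):
    """Return a most general unifier as a dict, or None if none exists."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return bind(s, t, subst)
    if is_var(t):
        return bind(t, s, subst)
    if s[0] != t[0] or len(s) != len(t):
        return None  # clash of function symbols or arities
    for a, b in zip(s[1:], t[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst
```

For instance, unifying \(f(x, a)\) with \(f(b, y)\) yields the substitution \(\{x \mapsto b,\; y \mapsto a\}\), while \(x\) and \(f(x)\) fail the occurs check.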
For the entire collection see [Zbl 1415.68030].A decision procedure for restricted intensional setshttps://zbmath.org/1496.030412022-11-17T18:59:28.764376Z"Cristiá, Maximiliano"https://zbmath.org/authors/?q=ai:cristia.maximiliano"Rossi, Gianfranco"https://zbmath.org/authors/?q=ai:rossi.gianfrancoSummary: In this paper we present a decision procedure for restricted intensional sets (RIS), i.e. sets given by a property rather than by enumerating their elements, similar to set comprehensions available in specification languages such as B and Z. The proposed procedure is parametric with respect to a first-order language and theory \(\mathcal {X}\), providing at least equality and a decision procedure to check for satisfiability of \(\mathcal {X}\)-formulas. We show how this framework can be applied when \(\mathcal {X}\) is the theory of hereditarily finite sets as is supported by the language CLP(\(\mathcal {SET}\)). We also present a working implementation of RIS as part of the \(\{log\}\) tool and we show how it compares with a mainstream solver and how it helps in the automatic verification of code fragments.
For the entire collection see [Zbl 1369.68037].Splitting proofs for interpolationhttps://zbmath.org/1496.030442022-11-17T18:59:28.764376Z"Gleiss, Bernhard"https://zbmath.org/authors/?q=ai:gleiss.bernhard"Kovács, Laura"https://zbmath.org/authors/?q=ai:kovacs.laura-ildiko"Suda, Martin"https://zbmath.org/authors/?q=ai:suda.martinSummary: We study interpolant extraction from local first-order refutations. We present a new theoretical perspective on interpolation based on clearly separating the condition on logical strength of the formula from the requirement on the common signature. This allows us to highlight the space of all interpolants that can be extracted from a refutation as a space of simple choices on how to split the refutation into two parts. We use this new insight to develop an algorithm for extracting interpolants which are linear in the size of the input refutation and can be further optimized using metrics such as number of non-logical symbols or quantifiers. We implemented the new algorithm in the first-order theorem prover \textsc{Vampire} and evaluated it on a large number of examples coming from the first-order proving community. Our experiments give practical evidence that our work improves the state-of-the-art in first-order interpolation.
For the entire collection see [Zbl 1369.68037].Proving and rewritinghttps://zbmath.org/1496.030452022-11-17T18:59:28.764376Z"Goguen, Joseph A."https://zbmath.org/authors/?q=ai:goguen.joseph-amadeeSummary: This paper presents some ways to prove theorems in first and second order logic, such that rewriting does the routine work automatically, and partially successful proofs often return information that suggests what to try next. The theoretical framework makes extensive use of general algebra, and main results include an extension of many-sorted equational logic to universal quantification over functions, some techniques for handling first order logic, and some structural induction principles. The OBJ language is used for illustration, and initiality is a recurrent theme.
For the entire collection see [Zbl 0763.68011].Theorem proving for metric temporal logic over the naturalshttps://zbmath.org/1496.030472022-11-17T18:59:28.764376Z"Hustadt, Ullrich"https://zbmath.org/authors/?q=ai:hustadt.ullrich"Ozaki, Ana"https://zbmath.org/authors/?q=ai:ozaki.ana"Dixon, Clare"https://zbmath.org/authors/?q=ai:dixon.clareSummary: We study translations from metric temporal logic (MTL) over the natural numbers to linear temporal logic (LTL). In particular, we present two approaches for translating from MTL to LTL which preserve the \textsc{ExpSpace} complexity of the satisfiability problem for MTL. In each of these approaches we consider the case where the mapping between states and time points is given by (1) a strict monotonic function and by (2) a non-strict monotonic function (which allows multiple states to be mapped to the same time point). Our translations allow us to utilise LTL solvers to solve satisfiability and we empirically compare the translations, showing in which cases one performs better than the other.
For the entire collection see [Zbl 1369.68037].Non-clausal connection calculi for non-classical logicshttps://zbmath.org/1496.030492022-11-17T18:59:28.764376Z"Otten, Jens"https://zbmath.org/authors/?q=ai:otten.jensSummary: The paper introduces non-clausal connection calculi for first-order intuitionistic and several first-order modal logics. The notion of a non-clausal matrix together with the non-clausal connection calculus for classical logic are extended to intuitionistic and modal logics by adding prefixes that encode the Kripke semantics of these logics. Details of the required prefix unification and some optimization techniques are described. Furthermore, compact Prolog implementations of the introduced non-classical calculi are presented. An experimental evaluation shows that non-clausal connection calculi are a solid basis for proof search in these logics, in terms of time complexity and proof size.
For the entire collection see [Zbl 1371.68015].A resolution-style proof system for DQBFhttps://zbmath.org/1496.030502022-11-17T18:59:28.764376Z"Rabe, Markus N."https://zbmath.org/authors/?q=ai:rabe.markus-nSummary: This paper presents a sound and complete proof system for dependency quantified Boolean formulas (DQBF) using resolution, universal reduction, and a new proof rule that we call fork extension. This opens new avenues for the development of efficient algorithms for DQBF.
For the entire collection see [Zbl 1368.68008].Cyclic proofs with ordering constraintshttps://zbmath.org/1496.030512022-11-17T18:59:28.764376Z"Stratulat, Sorin"https://zbmath.org/authors/?q=ai:stratulat.sorinSummary: \(\mathrm {CLKID}^{\omega}\) is a sequent-based cyclic inference system able to reason on first-order logic with inductive definitions. The current approach for verifying the soundness of \(\mathrm {CLKID}^{\omega}\) proofs is based on expensive model-checking techniques leading to an explosion in the number of states.
We propose proof strategies that guarantee the soundness of a class of \(\mathrm {CLKID}^{\omega}\) proofs if some ordering and derivability constraints are satisfied. They are inspired by previous work on cyclic well-founded induction reasoning, known to provide effective sets of ordering constraints. A derivability constraint can be checked in linear time. Under certain conditions, one can build proofs that implicitly satisfy the ordering constraints.
For the entire collection see [Zbl 1371.68015].Categorical structures for type theory in univalent foundationshttps://zbmath.org/1496.030532022-11-17T18:59:28.764376Z"Ahrens, Benedikt"https://zbmath.org/authors/?q=ai:ahrens.benedikt"Lumsdaine, Peter Lefanu"https://zbmath.org/authors/?q=ai:lumsdaine.peter-lefanu"Voevodsky, Vladimir"https://zbmath.org/authors/?q=ai:voevodsky.vladimir-aleksandrovichSummary: In this paper, we analyze and compare three of the many algebraic structures that have been used for modeling dependent type theories: \textit{categories with families}, \textit{split type-categories}, and \textit{representable maps of presheaves}. We study these in univalent type theory, where the comparisons between them can be given more elementarily than in set-theoretic foundations. Specifically, we construct maps between the various types of structures, and show that assuming the Univalence axiom, some of the comparisons are equivalences.
We then analyze how these structures transfer along (weak and strong) equivalences of categories, and, in particular, show how they descend from a category (not assumed univalent/saturated) to its Rezk completion. To this end, we introduce \textit{relative universes}, generalizing the preceding notions, and study the transfer of such relative universes along suitable structure.
We work throughout in (intensional) dependent type theory; some results, but not all, assume the univalence axiom. All the material of this paper has been formalized in Coq, over the \texttt{UniMath} library.A computational treatment of anaphora and its algorithmic implementationhttps://zbmath.org/1496.030572022-11-17T18:59:28.764376Z"Bernardy, Jean-Philippe"https://zbmath.org/authors/?q=ai:bernardy.jean-philippe"Chatzikyriakidis, Stergios"https://zbmath.org/authors/?q=ai:chatzikyriakidis.stergios"Maskharashvili, Aleksandre"https://zbmath.org/authors/?q=ai:maskharashvili.aleksandreSummary: In this paper, we propose a framework capable of dealing with anaphora and ellipsis which is both general and algorithmic. This generality is ensured by the combination of two general ideas. First, we use a dynamic semantics which represents effects using a monad structure. Second, we treat scopes flexibly, extending them as needed. We additionally implement this framework as an algorithm which translates abstract syntax to logical formulas. We argue that this framework can provide a unified account of a large number of anaphoric phenomena. Specifically, we show its effectiveness in dealing with pronominal and VP-anaphora, strict and lazy pronouns, lazy identity, bound variable anaphora, e-type pronouns, and cataphora. This means that in particular we can handle complex cases like Bach-Peters sentences, which require an account dealing simultaneously with several phenomena. We use Haskell as a meta-language to present the theory, which also constitutes an implementation of all the phenomena discussed in the paper.
To demonstrate coverage, we propose a test suite that can be used to evaluate computational approaches to anaphora.An extension of system \(F\) with subtypinghttps://zbmath.org/1496.030582022-11-17T18:59:28.764376Z"Cardelli, Luca"https://zbmath.org/authors/?q=ai:cardelli.luca"Martini, Simone"https://zbmath.org/authors/?q=ai:martini.simone"Mitchell, John C."https://zbmath.org/authors/?q=ai:mitchell.john-c"Scedrov, Andre"https://zbmath.org/authors/?q=ai:scedrov.andreSummary: System \(F\) is a well-known typed \(\lambda\)-calculus with polymorphic types, which provides a basis for polymorphic programming languages. We study an extension of \(F\), called \(F_{<:}\), that combines parametric polymorphism with subtyping.
The main focus of the paper is the equational theory of \(F_{<:}\), which is related to PER models and the notion of parametricity. We study some categorical properties of the theory when restricted to closed terms, including interesting categorical isomorphisms. We also investigate proof-theoretical properties, such as the conservativity of typing judgments with respect to \(F\).
We demonstrate by a set of examples how a range of constructs may be encoded in \(F_{<:}\). These include record operations and subtyping hierarchies that are related to features of object-oriented languages.
For the entire collection see [Zbl 0875.00067].From term models to domainshttps://zbmath.org/1496.030632022-11-17T18:59:28.764376Z"Phoa, Wesley"https://zbmath.org/authors/?q=ai:phoa.wesleySummary: Let B be the closed term model of the \(\lambda\)-calculus in which terms with the same Böhm tree are identified. We investigate which partial equivalence relations (PERs) on B can be regarded as predomains or domains. Working inside the realizability topos on B, such PERs can be regarded simply as sets in a particular model of constructive set theory.
No well-behaved partial order has been identified for any class of PERs; but it is still possible to isolate those PERs which have `suprema of chains' in a certain sense, and all maps between such PERs in the model preserve such suprema of chains. One can also define what it means for such a PER to have a `bottom'; partial function spaces provide an example. For these PERs, fixed points of arbitrary endofunctions exist and are computed by the fixed point combinator. There is also a notion of meet-closure for which all maps are stable.
The categories of predomains are closed under the formation of total and partial function spaces, polymorphic types and convex powerdomains. (Subtyping and bounded quantification can also be modelled.) They in fact form reflective subcategories of the realizability topos; and in this set-theoretic context, these constructions are very simple to describe.
For the entire collection see [Zbl 0875.00067].Verification and strategy synthesis for coalition announcement logichttps://zbmath.org/1496.030662022-11-17T18:59:28.764376Z"Alechina, Natasha"https://zbmath.org/authors/?q=ai:alechina.natasha"van Ditmarsch, Hans"https://zbmath.org/authors/?q=ai:van-ditmarsch.hans-pieter"Galimullin, Rustam"https://zbmath.org/authors/?q=ai:galimullin.rustam"Wang, Tuo"https://zbmath.org/authors/?q=ai:wang.tuo.1|wang.tuoSummary: Coalition announcement logic (CAL) is one of the family of the logics of quantified announcements. It allows us to reason about what a coalition of agents can achieve by making announcements in the setting where the anti-coalition may have an announcement of their own to preclude the former from reaching its epistemic goals. In this paper, we describe a PSPACE-complete model checking algorithm for CAL that produces winning strategies for coalitions. The algorithm is implemented in a proof-of-concept model checker.Decomposability helps for deciding logics of knowledge and beliefhttps://zbmath.org/1496.030672022-11-17T18:59:28.764376Z"Arnborg, Stefan"https://zbmath.org/authors/?q=ai:arnborg.stefanSummary: We show that decision problems in modal logics (logics of knowledge and belief) are easy for decomposable formulas. Satisfiability of a formula of size \(n\) and treewidth \(k\) can be decided in time \(O(n f (k))\), where \(f\) is a double exponential function. This result holds not only for the logics S5 and KD45 with NP-complete decision problems, but also for extensions to multiple agents as in the standard logics \(\mathrm{K}_{n}\), \(\mathrm{T}_{n}\), \(\mathrm{S}4_{n}\), \(\mathrm{S}5_{n}\) and \(\mathrm{KD45}_{n}\), whose decision problems are PSPACE complete for arbitrary formulas. Moreover, the method works for these logics extended with operators for distributed and common knowledge, which otherwise cause a complexity increase to exponential time for the satisfiability problem.
For the entire collection see [Zbl 0825.00054].A dynamic epistemic logic analysis of the equality negation taskhttps://zbmath.org/1496.030722022-11-17T18:59:28.764376Z"Goubault, Éric"https://zbmath.org/authors/?q=ai:goubault.eric"Lazić, Marijana"https://zbmath.org/authors/?q=ai:lazic.marijana"Ledent, Jérémy"https://zbmath.org/authors/?q=ai:ledent.jeremy"Rajsbaum, Sergio"https://zbmath.org/authors/?q=ai:rajsbaum.sergioSummary: In this paper we study the solvability of the equality negation task in a simple wait-free model where processes communicate by reading and writing shared variables or exchanging messages. In this task, two processes start with a private input value in the set \(\{0,1,2\}\), and after communicating, each one must decide a binary output value, so that the outputs of the processes are the same if and only if the input values of the processes are different. This task is already known to be unsolvable; our goal here is to prove this result using the dynamic epistemic logic (DEL) approach introduced by \textit{É. Goubault} et al. in [Electron. Proc. Theor. Comput. Sci. (EPTCS) 277, 73--87 (2018; Zbl 07447735)]. We show that in fact, there is no epistemic logic formula that explains why the task is unsolvable. We fix this issue by extending the language of our DEL framework, which allows us to construct such a formula, and discuss its utility.
For the entire collection see [Zbl 1430.03006].Stit semantics for epistemic notions based on information disclosure in interactive settingshttps://zbmath.org/1496.030762022-11-17T18:59:28.764376Z"Ramírez Abarca, Aldo Iván"https://zbmath.org/authors/?q=ai:ramirez-abarca.aldo-ivan"Broersen, Jan"https://zbmath.org/authors/?q=ai:broersen.jan-mSummary: We characterize four types of agentive knowledge using a stit semantics over branching discrete-time structures. These are ex ante knowledge, ex interim knowledge, ex post knowledge, and know-how. The first three are notions that arose from game-theoretical analyses on the stages of information disclosure across the decision making process, and the fourth has gained prominence both in logics of action and in deontic logic as a means to formalize ability. In recent years, logicians in AI have argued that any comprehensive study of responsibility attribution and blameworthiness should include proper treatment of these kinds of knowledge. This paper intends to clarify previous attempts to formalize them in stit logic and to propose alternative interpretations that in our opinion are more akin to the study of responsibility in the stit tradition. The logic we present uses an extension with knowledge operators of the Xstit language, and formulas are evaluated with respect to branching discrete-time models. We also present an axiomatic system for this logic, and address its soundness and completeness.
For the entire collection see [Zbl 1430.03006].Independence-friendly logic without Henkin quantificationhttps://zbmath.org/1496.031112022-11-17T18:59:28.764376Z"Barbero, Fausto"https://zbmath.org/authors/?q=ai:barbero.fausto"Hella, Lauri"https://zbmath.org/authors/?q=ai:hella.lauri-t"Rönnholm, Raine"https://zbmath.org/authors/?q=ai:ronnholm.raineSummary: We analyze from a global point of view the expressive resources of IF logic that do not stem from Henkin (partially-ordered) quantification. When one restricts attention to regular IF sentences, this amounts to the study of the fragment of IF logic which is individuated by the game-theoretical property of Action Recall. We prove that the fragment of Action Recall can express all existential second-order (ESO) properties. This can be accomplished already by the prenex fragment of Action Recall, whose only second-order source of expressiveness is the so-called signalling patterns. The proof shows that a complete set of Henkin prefixes is explicitly definable in the fragment of Action Recall. In the more general case, in which also irregular IF sentences are allowed, we show that full ESO expressive power can be achieved using neither Henkin nor signalling patterns.
For the entire collection see [Zbl 1369.03021].Independence-friendly logic without Henkin quantificationhttps://zbmath.org/1496.031122022-11-17T18:59:28.764376Z"Barbero, Fausto"https://zbmath.org/authors/?q=ai:barbero.fausto"Hella, Lauri"https://zbmath.org/authors/?q=ai:hella.lauri-t"Rönnholm, Raine"https://zbmath.org/authors/?q=ai:ronnholm.raineSummary: We analyze the expressive resources of \(\text{IF}\) logic that do not stem from Henkin (partially-ordered) quantification. When one restricts attention to regular \(\text{IF}\) sentences, this amounts to the study of the fragment of \(\text{IF}\) logic which is individuated by the game-theoretical property of action recall (AR). We prove that the fragment of prenex AR sentences can express all existential second-order properties. We then show that the same can be achieved in the non-prenex fragment of AR, by using ``signalling by disjunction'' instead of Henkin or signalling patterns. We also study irregular IF logic (in which requantification of variables is allowed) and analyze its correspondence to regular IF logic. By using new methods, we prove that the game-theoretical property of knowledge memory is a first-order syntactical constraint also for irregular sentences, and we identify another new first-order fragment. Finally we discover that irregular prefixes behave quite differently in finite and infinite models. 
In particular, we show that, over infinite structures, every irregular prefix is equivalent to a regular one; and we present an irregular prefix which is second-order on finite models but collapses to a first-order prefix on infinite models.The logic of AGM learning from partial observationshttps://zbmath.org/1496.031232022-11-17T18:59:28.764376Z"Baltag, Alexandru"https://zbmath.org/authors/?q=ai:baltag.alexandru"Özgün, Aybüke"https://zbmath.org/authors/?q=ai:ozgun.aybuke"Vargas-Sandoval, Ana Lucia"https://zbmath.org/authors/?q=ai:vargas-sandoval.ana-luciaSummary: We present a dynamic logic for inductive learning from partial observations by a ``rational'' learner that obeys the AGM postulates for belief revision. We apply our logic to an example, showing how various concrete properties can be learnt with certainty or inductively by such an AGM learner. We present a sound and complete axiomatization, based on a combination of relational and neighbourhood versions of the canonical model method.
For the entire collection see [Zbl 1430.03006].Resource separation in dynamic logic of propositional assignmentshttps://zbmath.org/1496.031242022-11-17T18:59:28.764376Z"Boudou, Joseph"https://zbmath.org/authors/?q=ai:boudou.joseph"Herzig, Andreas"https://zbmath.org/authors/?q=ai:herzig.andreas"Troquard, Nicolas"https://zbmath.org/authors/?q=ai:troquard.nicolasSummary: We extend dynamic logic of propositional assignments by adding an operator of parallel composition that is inspired by separation logics. We provide an axiomatisation via reduction axioms, thereby establishing decidability. We also prove that the complexity of both the model checking and the satisfiability problems stays in PSPACE.
For the entire collection see [Zbl 1430.03006].Biabduction (and related problems) in array separation logichttps://zbmath.org/1496.031252022-11-17T18:59:28.764376Z"Brotherston, James"https://zbmath.org/authors/?q=ai:brotherston.james"Gorogiannis, Nikos"https://zbmath.org/authors/?q=ai:gorogiannis.nikos"Kanovich, Max"https://zbmath.org/authors/?q=ai:kanovich.max-iSummary: We investigate array separation logic \((\mathsf {ASL})\), a variant of symbolic-heap separation logic in which the data structures are either pointers or arrays, i.e., contiguous blocks of memory. This logic provides a language for compositional memory safety proofs of array programs. We focus on the Biabduction problem for this logic, which has been established as the key to automatic specification inference at the industrial scale. We present an \(\mathsf {NP}\) decision procedure for biabduction in \(\mathsf {ASL}\), and we also show that the problem of finding a consistent solution is \(\mathsf {NP}\)-hard. Along the way, we study satisfiability and entailment in \(\mathsf {ASL}\), giving decision procedures and complexity bounds for both problems. We show satisfiability to be \(\mathsf {NP}\)-complete, and entailment to be decidable with high complexity. The surprising fact that biabduction is simpler than entailment is due to the fact that, as we show, the element of choice over biabduction solutions enables us to dramatically reduce the search space.
For the entire collection see [Zbl 1369.68037].Interpreting sequent calculi as client-server gameshttps://zbmath.org/1496.031272022-11-17T18:59:28.764376Z"Fermüller, Christian G."https://zbmath.org/authors/?q=ai:fermuller.christian-g"Lang, Timo"https://zbmath.org/authors/?q=ai:lang.timoSummary: Motivated by the interpretation of substructural logics as resource-conscious reasoning, we introduce a client-server game characterizing provability in single-conclusion sequent calculi. The set up is modular and allows to capture multiple logics, including intuitionistic and (affine) linear intuitionistic logic. We also provide a straightforward interpretation of subexponentials, and moreover introduce a game where the information provided by the server is organized as a stack, rather than as a multiset or list.
For the entire collection see [Zbl 1371.68015].Introducing interval differential dynamic logichttps://zbmath.org/1496.031282022-11-17T18:59:28.764376Z"Figueiredo, Daniel"https://zbmath.org/authors/?q=ai:figueiredo.daniel-rattonSummary: Differential dynamic logic \((d\mathcal{L})\) is a dynamic logic with first-order features which allows us to describe and reason about hybrid systems. We have already used this logic to reason about biological models. Here we explore some variants of its semantics in order to obtain a simplified and more intuitive way of describing errors/perturbations, unavoidable in real-case scenarios. More specifically, we introduce interval differential dynamic logic which takes \(d\mathcal{L}\) as its base and adapts its semantics for the interval setting.
For the entire collection see [Zbl 1489.68021].Logic in times of big datahttps://zbmath.org/1496.031292022-11-17T18:59:28.764376Z"Finger, Marcelo"https://zbmath.org/authors/?q=ai:finger.marceloFor the entire collection see [Zbl 1496.00078].An introduction to category-based equational logichttps://zbmath.org/1496.031302022-11-17T18:59:28.764376Z"Goguen, Joseph A."https://zbmath.org/authors/?q=ai:goguen.joseph-amadee"Diaconescu, Răzvan"https://zbmath.org/authors/?q=ai:diaconescu.razvanSummary: This paper surveys \textit{category-based equational logic}, which generalises both the theoretical and computational aspects of equational logic and its model theory (general algebra) far beyond terms, so as to include: Horn clause logic, with and without equality; all variants of order- and many-sorted equational logic, including working modulo a set of axioms; constraint logic programming over arbitrary user-defined data types; and any combination of the above. This unifies several important computational paradigms, and opens the door to still further generalisations. Results include completeness of deduction, a Herbrand theorem, completeness of paramodulation, generic modularisation techniques, and a model-theoretic semantics for extensible constraint logic programming.
For the entire collection see [Zbl 1492.68008].Complexity thresholds in inclusion logichttps://zbmath.org/1496.031312022-11-17T18:59:28.764376Z"Hannula, Miika"https://zbmath.org/authors/?q=ai:hannula.miika"Hella, Lauri"https://zbmath.org/authors/?q=ai:hella.lauri-tSummary: Logics with team semantics provide alternative means for logical characterization of complexity classes. Both dependence and independence logic are known to capture non-deterministic polynomial time, and the frontiers of tractability in these logics are relatively well understood. Inclusion logic is similar to these team-based logical formalisms with the exception that it corresponds to deterministic polynomial time in ordered models. In this article we examine connections between syntactical fragments of inclusion logic and different complexity classes in terms of two computational problems: maximal subteam membership and the model checking problem for a fixed inclusion logic formula. We show that very simple quantifier-free formulae with one or two inclusion atoms generate instances of these problems that are complete for (non-deterministic) logarithmic space and polynomial time. Furthermore, we present a fragment of inclusion logic that captures non-deterministic logarithmic space in ordered models.
For the entire collection see [Zbl 1418.03008].Complexity thresholds in inclusion logichttps://zbmath.org/1496.031322022-11-17T18:59:28.764376Z"Hannula, Miika"https://zbmath.org/authors/?q=ai:hannula.miika"Hella, Lauri"https://zbmath.org/authors/?q=ai:hella.lauri-tSummary: Inclusion logic differs from many other logics of dependence and independence in that it can only describe polynomial-time properties. In this article we examine more closely connections between syntactic fragments of inclusion logic and different complexity classes. Our focus is on two computational problems: maximal subteam membership and the model checking problem for a fixed inclusion logic formula. We show that very simple quantifier-free formulae with one or two inclusion atoms generate instances of these problems that are complete for (non-deterministic) logarithmic space and polynomial time. We also present a safety game for the maximal subteam membership problem and use it to investigate this problem over teams in which one variable is a key. Furthermore, we relate our findings to consistent query answering over inclusion dependencies, and present a fragment of inclusion logic that captures non-deterministic logarithmic space in ordered models.Behavioural and abstractor specifications for a dynamic logic with binders and silent transitionshttps://zbmath.org/1496.031332022-11-17T18:59:28.764376Z"Hennicker, Rolf"https://zbmath.org/authors/?q=ai:hennicker.rolf"Knapp, Alexander"https://zbmath.org/authors/?q=ai:knapp.alexander"Madeira, Alexandre"https://zbmath.org/authors/?q=ai:madeira.alexandre"Mindt, Felix"https://zbmath.org/authors/?q=ai:mindt.felixSummary: We extend dynamic logic with binders (for state variables) by distinguishing between observable and silent transitions. This differentiation gives rise to two kinds of observational interpretations of the logic: abstractor and behavioural specifications. 
Abstractor specifications relax the standard model class semantics of a specification by considering its closure under weak bisimulation. Behavioural specifications, however, rely on a behavioural satisfaction relation which relaxes the interpretation of state variables and the satisfaction of modal formulas \(\langle\alpha\rangle\varphi\) and \([\alpha]\varphi\) by abstracting from silent transitions. A formal relation between abstractor and behavioural specifications is provided which shows that both coincide semantically under mild conditions. For the proof we instantiate the previously introduced concept of a behaviour-abstractor framework to the case of dynamic logic with binders and silent transitions.
For the entire collection see [Zbl 1430.03006].Coherence and valid isomorphism in closed categories. Applications of proof theory to category theory in a computer scientist perspective (notes for an invited lecture)https://zbmath.org/1496.031342022-11-17T18:59:28.764376Z"Longo, Giuseppe"https://zbmath.org/authors/?q=ai:longo.giuseppeFor the entire collection see [Zbl 0712.68006].A dynamic logic for QASM programshttps://zbmath.org/1496.031362022-11-17T18:59:28.764376Z"Tavares, Carlos"https://zbmath.org/authors/?q=ai:tavares.carlosSummary: We define a dynamic logic for the QASM (Quantum Assembly) programming language, a language that requires the handling of quantum and probabilistic information. We provide a syntax and a model for this logic, providing a probabilistic semantics for the classical part. We exercise it with the quantum coin toss program.
For the entire collection see [Zbl 1430.03006].An algebraic approach to temporal logichttps://zbmath.org/1496.031372022-11-17T18:59:28.764376Z"von Karger, Burghard"https://zbmath.org/authors/?q=ai:von-karger.burghardSummary: The sequential calculus is an algebraic calculus, intended for reasoning about phenomena with a duration and their sequencing. It can be specialized to various domains used for reasoning about programs and systems, including Tarski's calculus of binary relations, Kleene's regular expressions, Hoare's CSP and Dijkstra's regularity calculus.
In this paper we use the sequential calculus as a tool for algebraizing temporal logics. We show that temporal operators are definable in terms of sequencing and we show how a specific logic may be selected by introducing additional axioms. All axioms of the complete proof system for discrete linear temporal logic (given in [\textit{Z. Manna} and \textit{A. Pnueli}, The temporal logic of reactive and concurrent systems. Specification. Berlin etc.: Springer-Verlag (1991; Zbl 0753.68003)]) are obtained as theorems of sequential algebra.
Our work embeds temporal logic into an algebra naturally equipped with sequencing constructs, and in which recursion is definable. This could be a first step towards a design calculus for transforming temporal specifications by stepwise refinement into executable programs.
For the entire collection see [Zbl 0835.68002].Satisfiability of compositional separation logic with tree predicates and data constraintshttps://zbmath.org/1496.031382022-11-17T18:59:28.764376Z"Xu, Zhaowei"https://zbmath.org/authors/?q=ai:xu.zhaowei"Chen, Taolue"https://zbmath.org/authors/?q=ai:chen.taolue"Wu, Zhilin"https://zbmath.org/authors/?q=ai:wu.zhilinSummary: In this paper, we propose compositional separation logic with tree predicates (CSLTP), where properties such as sortedness and height-balancedness of complex data structures (for instance, AVL trees and red-black trees) can be fully specified. We show that the satisfiability problem of CSLTP is decidable. The main technical ingredient of the decision procedure is to compute the least fixed point of a class of inductively defined predicates that are non-linear and involve dense-order and difference-bound constraints, which are of independent interest.
For the entire collection see [Zbl 1369.68037].On the existence of hidden machines in computational time hierarchieshttps://zbmath.org/1496.031642022-11-17T18:59:28.764376Z"Abrahão, Felipe S."https://zbmath.org/authors/?q=ai:abrahao.felipe-s"Wehmuth, Klaus"https://zbmath.org/authors/?q=ai:wehmuth.klaus"Ziviani, Artur"https://zbmath.org/authors/?q=ai:ziviani.arturFor the entire collection see [Zbl 1496.00078].The degree structure of 1-L reductionshttps://zbmath.org/1496.031652022-11-17T18:59:28.764376Z"Burtschick, Hans-Jörg"https://zbmath.org/authors/?q=ai:burtschick.hans-jorg"Hoene, Albrecht"https://zbmath.org/authors/?q=ai:hoene.albrechtSummary: A 1-L function is one that is computable by a logspace Turing machine that moves its input head only in one direction. We show that there exist 1-L complete sets for PSPACE that are not 1-L isomorphic. In other words, the 1-L complete degree for PSPACE does not collapse. This contrasts with a result of Allender, who showed that all 1-L complete sets for PSPACE are polynomial-time isomorphic. Since all 1-L complete sets for PSPACE are equivalent under 1-L reductions that are one-one and quadratically length-increasing, this also provides an example of a \(\leq^{1- L}_{1, qli}\)-degree that does not collapse to a single 1-L isomorphism type.
Relatedly, we prove that there exist two sets \(A\) and \(B\) that are \(\leq^{1- L}_1\) equivalent but not \(\leq^{1- L}_{1, \mathrm{honest}}\) equivalent. That is, there is a one-one 1-L degree that is not an honest one-one 1-L degree.
For the entire collection see [Zbl 1415.68030].Capturing complexity classes with Lindström quantifiershttps://zbmath.org/1496.031662022-11-17T18:59:28.764376Z"Makowsky, J. A."https://zbmath.org/authors/?q=ai:makowsky.johann-andreasSummary: We report on our efforts of unifying Descriptive Complexity Theory and Logic in the framework of axiomatic definitions of both Logics and Complexity Classes. (Joint work with Y. B. Pnueli.)
For the entire collection see [Zbl 0825.68120].Dimension spectra of lineshttps://zbmath.org/1496.031692022-11-17T18:59:28.764376Z"Lutz, Neil"https://zbmath.org/authors/?q=ai:lutz.neil"Stull, D. M."https://zbmath.org/authors/?q=ai:stull.donald-mSummary: This paper investigates the algorithmic dimension spectra of lines in the Euclidean plane. Given any line \(L\) with slope \(a\) and vertical intercept \(b\), the dimension spectrum \({\operatorname{sp}}(L)\) is the set of all effective Hausdorff dimensions of individual points on \(L\). We draw on Kolmogorov complexity and geometrical arguments to show that if the effective Hausdorff dimension \(\dim (a, b)\) is equal to the effective packing dimension \({\mathrm{Dim}}(a, b)\), then \({\operatorname{sp}}(L)\) contains a unit interval. We also show that, if the dimension \(\dim (a,b)\) is at least one, then \({\operatorname{sp}}(L)\) is infinite. Together with previous work, this implies that the dimension spectrum of any line is infinite.
For the entire collection see [Zbl 1362.68012].Randomness deficiencieshttps://zbmath.org/1496.031702022-11-17T18:59:28.764376Z"Novikov, Gleb"https://zbmath.org/authors/?q=ai:novikov.glebSummary: The notion of random sequence was introduced by
\textit{P. Martin-Löf} [Inf. Control 9, 602--619 (1966; Zbl 0244.62008)].
In the same article he defined the so-called randomness deficiency function, which shows how close random sequences are to non-random ones (in some natural sense). Other deficiency functions can be obtained from the Levin-Schnorr theorem, which describes randomness in terms of Kolmogorov complexity. The difference between all of these deficiencies is bounded by a logarithmic term (Proposition 1). In this paper we show (Theorems 1 and 2) that the difference between some deficiencies can be as large as possible.
For the entire collection see [Zbl 1362.68012].The theory of the polynomial many-one degrees of recursive sets is undecidablehttps://zbmath.org/1496.031712022-11-17T18:59:28.764376Z"Ambos-Spies, Klaus"https://zbmath.org/authors/?q=ai:ambos-spies.klaus"Nies, André"https://zbmath.org/authors/?q=ai:nies.andre-otfridFor the entire collection see [Zbl 0904.68001].A cut-free cyclic proof system for Kleene algebrahttps://zbmath.org/1496.032292022-11-17T18:59:28.764376Z"Das, Anupam"https://zbmath.org/authors/?q=ai:das.anupam"Pous, Damien"https://zbmath.org/authors/?q=ai:pous.damienSummary: We introduce a sound non-wellfounded proof system whose regular (or ``cyclic'') proofs are complete for (in)equations between regular expressions. We achieve regularity by using hypersequents rather than usual sequents, with more structure in the succedent, and relying on the discreteness of rational languages to drive proof search. By inspection of the proof search space we extract a \textsc{PSpace} bound for the system, which is optimal for deciding such (in)equations.
For the entire collection see [Zbl 1371.68015].Realizability in cyclic proof: extracting ordering information for infinite descenthttps://zbmath.org/1496.032332022-11-17T18:59:28.764376Z"Rowe, Reuben N. S."https://zbmath.org/authors/?q=ai:rowe.reuben-n-s"Brotherston, James"https://zbmath.org/authors/?q=ai:brotherston.jamesSummary: In program verification, measures for proving the termination of programs are typically constructed using (notions of size for) the data manipulated by the program. Such data are often described by means of logical formulas. For example, the cyclic proof technique makes use of semantic approximations of inductively defined predicates to construct Fermat-style infinite descent arguments. However, logical formulas must often incorporate explicit size information (e.g. a list length parameter) in order to support inter-procedural analysis.
In this paper, we show that information relating the sizes of inductively defined data can be automatically extracted from cyclic proofs of logical entailments. We characterise this information in terms of a graph-theoretic condition on proofs, and show that this condition can be encoded as a containment between weighted automata. We also show that under certain conditions this containment falls within known decidability results. Our results can be viewed as a form of realizability for cyclic proof theory.
For the entire collection see [Zbl 1371.68015].On regular expression proof complexityhttps://zbmath.org/1496.032362022-11-17T18:59:28.764376Z"Beier, Simon"https://zbmath.org/authors/?q=ai:beier.simon"Holzer, Markus"https://zbmath.org/authors/?q=ai:holzer.markusSummary: We investigate the proof complexity of Salomaa's axiom system \(F_1\) for regular expression equivalence. We show that for two regular expression \(E\) and \(F\) over the alphabet \(\varSigma\) with \(L(E)=L(F)\) an equivalence proof of length \(O\left(|\varSigma|^4\cdot \operatorname{Tower}(\max \{h(E),h(F)\}+4)\right)\) can be derived within \(F_1\), where \(h(E)\) (\(h(F)\), respectively) refers to the height of \(E\) (\(F\), respectively) and the tower function is defined as \(\operatorname{Tower}(1)=2\) and \(\operatorname{Tower}(k+1)=2^{\operatorname{Tower}(k)}\), for \(k\geq 1\). In other words
\[
\operatorname{Tower}(k)=\underbrace{2^{2^{\cdot^{\cdot^{\cdot^{2}}}}}}_{k\text{ twos}}.
\]
This is in sharp contrast to the fact that regular expression equivalence admits proofs of exponential length if not restricted to the axiom system \(F_1\). From the theoretical point of view the exponential proof length seems to be best possible, because we show that regular expression equivalence admits polynomially bounded proofs if and only if \(\mathrm{NP}=\mathrm{PSPACE}\).
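To give a feel for the bound above, the tower function defined by \(\operatorname{Tower}(1)=2\) and \(\operatorname{Tower}(k+1)=2^{\operatorname{Tower}(k)}\) can be computed directly; a minimal sketch (the function name is illustrative, not from the paper):

```python
def tower(k):
    """Tower(1) = 2 and Tower(k+1) = 2**Tower(k): a height-k tower of 2s."""
    value = 2
    for _ in range(k - 1):
        value = 2 ** value
    return value

# The first few values grow extremely fast:
# tower(1) = 2, tower(2) = 4, tower(3) = 16, tower(4) = 65536.
```

Already \(\operatorname{Tower}(5)=2^{65536}\) has nearly 20,000 decimal digits, which illustrates how far the \(F_1\) upper bound lies above the exponential length available in unrestricted proof systems.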
For the entire collection see [Zbl 1369.68016].On the length of medial-switch-mix derivationshttps://zbmath.org/1496.032372022-11-17T18:59:28.764376Z"Bruscoli, Paola"https://zbmath.org/authors/?q=ai:bruscoli.paola"Straßburger, Lutz"https://zbmath.org/authors/?q=ai:strassburger.lutzSummary: Switch and medial are two inference rules that play a central role in many deep inference proof systems. In specific proof systems, the mix rule may also be present. In this paper we show that the maximal length of a derivation using only the inference rules for switch, medial, and mix, modulo associativity and commutativity of the two binary connectives involved, is quadratic in the size of the formula at the conclusion of the derivation. This shows, at the same time, the termination of the rewrite system.
For the entire collection see [Zbl 1369.03021].Typical forcings, NP search problems and an extension of a theorem of Riishttps://zbmath.org/1496.032392022-11-17T18:59:28.764376Z"Müller, Moritz"https://zbmath.org/authors/?q=ai:muller.moritzOne of the central results in proof complexity is Ajtai's theorem. In model-theoretic terms it states that there is a model of \(I\Delta_0\) plus a predicate that codes an injective map from \(n\) into \(n\) that is not surjective (for some nonstandard n). This has later been quantitatively improved to full bounded arithmetic \(T_2\). Besides the pigeonhole principle, only a few other principles are known to be, in this sense, independent of \(T_2\). A celebrated theorem of Riis is proved by a forcing-type argument and gives a general criterion for independence from the weaker theory \(T^1_2\). The criterion is simply that the principle fails in some infinite model (like the pigeonhole principle). The current paper extends Riis' theorem and shows that strong principles are independent from \(T^1_2\) extended with a weak principle. Being weak or strong are simple combinatorial properties. This is proved as an application of a general forcing method. The provability or independence of principles in bounded arithmetics is tightly connected to the computational complexity of associated total NP search problems. The paper covers this connection in some detail and gives some improvements of earlier results.
Reviewer: Mihai Prunescu (Bucharest)Truth definition for \(\Delta_0\) formulas and PSPACE computationshttps://zbmath.org/1496.032402022-11-17T18:59:28.764376Z"Zdanowski, Konrad"https://zbmath.org/authors/?q=ai:zdanowski.konradAn important open problem concerning weak theories of first-order arithmetic is whether it is possible to have a truth definition for \(\Delta_0\) (i.e. bounded) formulas that makes no use of the exponential function. It is well-known that \(\mathrm{I}\Delta_0 + \exp\) supports a truth definition for \(\Delta_0\) formulas, and moreover, that there is a \(\Delta_0\) formula with an additional parameter that correctly determines the truth value of a bounded formula (with Gödel number) below \(k\) on arguments below \(a\) once the additional parameter is greater than roughly \(2^{a^k}\). There are some conjectures to the effect that this bound is approximately optimal.
In this paper, the author shows that the availability of a reasonably well-behaved truth definition for bounded formulas is closely connected to the formalizability of polynomial-space computations. More precisely, the author considers the theory \(S^1_2(\mathrm{Tr})\), an extension of Buss' theory \(S^1_2\) (which corresponds to polynomial-time computation) that has an extra predicate symbol \(\mathrm{Tr}\) in its language and contains additional axioms stating that \(\mathrm{Tr}\) is a truth definition for \(\Delta_0\) formulas (of the original language of \(\mathrm{PA}\)) with the following properties: \(\mathrm{Tr}\) commutes with connectives and quantifiers, and moreover, the so-called polynomial induction scheme \(\mathrm{PIND}\) holds for \(\Sigma^b_1(\mathrm{Tr})\) formulas (i.e., roughly, for bounded existential formulas involving \(\mathrm{Tr}\)).
The theory \(S^1_2(\mathrm{Tr})\) is then related to \(U^1_2\), a well-known two-sorted theory capturing PSPACE. It is shown that there is a formula \(\Psi\) such that \(U^1_2\) proves the axioms of \(S^1_2(\mathrm{Tr})\) with \(\Psi\) substituted for \(\mathrm{Tr}\); thus, \(S^1_2(\mathrm{Tr})\) is a definitional extension of a subtheory of \(U^1_2\). In the other direction, it is proved that \(S^1_2(\mathrm{Tr})\) can define the output of any PSPACE computation in a well-behaved way. Combining this with a cut elimination argument shows that \(U^1_2\) is conservative over \(S^1_2(\mathrm{Tr})\) for (a strict superclass of) bounded formulas in the language of first-order arithmetic, and it also leads to a polynomial-space witnessing theorem for \(S^1_2(\mathrm{Tr})\).
Thus, the question whether a weak theory of arithmetic can support a truth definition for \(\Delta_0\) formulas is connected to its ability to represent PSPACE computation rather than to the availability of the exponential function as such. To the reader with some background in computational complexity, this will not be very surprising: the intuition is that modulo some syntactical issues, evaluating a given bounded formula on a given tuple of arguments is very similar to evaluating a given first-order formula in a given finite structure, which is a well-known PSPACE-complete problem. The contribution of the paper lies in stating the connection between truth definitions and polynomial space in the form of precise theorems, and in overcoming various technical challenges involved in proving those theorems.
Reviewer: Leszek Aleksander Kołodziejczyk (Warszawa)On the general position number of two classes of graphshttps://zbmath.org/1496.050442022-11-17T18:59:28.764376Z"Yao, Yan"https://zbmath.org/authors/?q=ai:yao.yan"He, Mengya"https://zbmath.org/authors/?q=ai:he.mengya"Ji, Shengjin"https://zbmath.org/authors/?q=ai:ji.shengjin(no abstract)Complexity dichotomy for list-5-coloring with a forbidden induced subgraphhttps://zbmath.org/1496.050512022-11-17T18:59:28.764376Z"Hajebi, Sepehr"https://zbmath.org/authors/?q=ai:hajebi.sepehr"Li, Yanjia"https://zbmath.org/authors/?q=ai:li.yanjia"Spirkl, Sophie"https://zbmath.org/authors/?q=ai:spirkl.sophie-theresaEfficient solvability of the weighted vertex coloring problem for some two hereditary graph classeshttps://zbmath.org/1496.050562022-11-17T18:59:28.764376Z"Razvenskaya, Ol'ga Olegovna"https://zbmath.org/authors/?q=ai:razvenskaya.olga-olegovna"Malyshev, Dmitriĭ Sergeevich"https://zbmath.org/authors/?q=ai:malyshev.dmitrii-sergeevichSummary: The weighted vertex coloring problem for a given weighted graph is to minimize the number of used colors so that for each vertex the number of the colors that are assigned to this vertex is equal to its weight and the assigned sets of vertices are disjoint for any adjacent vertices. For all but four hereditary classes that are defined by two connected 5-vertex induced prohibitions, the computational complexity is known of the weighted vertex coloring problem with unit weights. For four of the six pairwise intersections of these four classes, the solvability was proved earlier of the weighted vertex coloring problem in time polynomial in the sum of the vertex weights. 
Here we justify this fact for the remaining two intersections.Structural matrices for signed Petri nethttps://zbmath.org/1496.050652022-11-17T18:59:28.764376Z"Payal"https://zbmath.org/authors/?q=ai:payal.hiranwar"Kansal, Sangita"https://zbmath.org/authors/?q=ai:kansal.sangitaSummary: The developments in the field of Graph Theory and Petri net Theory in the form of balanceness and negative tokens respectively motivated the authors to bridge the gap between Petri net and Signed graph and introduce a new concept of Signed Petri net (SPN). In Petri net theory, matrices have been used to describe the structural behavior of the Petri net. Such matrices have been introduced for SPN which help in identifying relationships among the transitions and places of an SPN. Various subclasses of SPN are given along with characterizations of these subclasses using the matrices introduced. We consider ordinary SPNs (i.e. SPNs without multiple arcs) in the paper.A note on shortest sign-circuit cover of signed 3-edge-colorable cubic graphshttps://zbmath.org/1496.050672022-11-17T18:59:28.764376Z"Xu, Ronggui"https://zbmath.org/authors/?q=ai:xu.ronggui"Li, Jiaao"https://zbmath.org/authors/?q=ai:li.jiaao"Hou, Xinmin"https://zbmath.org/authors/?q=ai:hou.xinminA sign-circuit cover \(F\) of a signed graph (G; \(\sigma\)) is a family of sign-circuits which covers all edges of (G; \(\sigma\)). The shortest sign-circuit cover problem was initiated by \textit{E. Máčajová} et al. [J. Graph Theory 81, No. 2, 120--133 (2016; Zbl 1332.05066)] and received much attention in recent years.
A well-known conjecture, the shortest cycle cover conjecture, was proposed by \textit{N. Alon} and \textit{M. Tarsi} [SIAM J. Algebraic Discrete Methods 6, 345--350 (1985; Zbl 0581.05046)].
Here, the authors show that every flow-admissible signed 3-edge-colorable cubic graph (G; \(\sigma\)) has a sign-circuit cover with length at most \(\frac{20}{9}|E(G)|\).
The authors pose the open problem of determining the optimal upper bound on the length of a shortest sign-circuit cover of a signed 3-edge-colorable cubic graph.
This paper contains many interesting results and poses many problems for future work, which will help young researchers working in the area of signed graphs; I have advised younger researchers to read it.
Reviewer: V. Lokesha (Bangalore)Eigenvector phase retrieval: recovering eigenvectors from the absolute value of their entrieshttps://zbmath.org/1496.051002022-11-17T18:59:28.764376Z"Steinerberger, Stefan"https://zbmath.org/authors/?q=ai:steinerberger.stefan"Wu, Hau-tieng"https://zbmath.org/authors/?q=ai:wu.hau-tiengSummary: We consider the eigenvalue problem \(A x = \lambda x\) where \(A \in \mathbb{R}^{n \times n}\) and the eigenvalue is also real, \(\lambda \in \mathbb{R} \). If we are given \(A, \lambda\) and, additionally, the absolute value of the entries of \(x\) (the vector \((|x_i|)_{i = 1}^n\)), is there a fast way to recover \(x\)? In particular, can this be done quicker than computing \(x\) from scratch? This may be understood as a special case of the phase retrieval problem. We present a randomized algorithm which provably converges in expectation whenever \(\lambda\) is a simple eigenvalue. The problem should become easier when \(| \lambda |\) is large and we discuss another algorithm for that case as well.A note on digital sequence hypergraphs and 2-graph congruence arithmetichttps://zbmath.org/1496.051182022-11-17T18:59:28.764376Z"Rahman, Saifur"https://zbmath.org/authors/?q=ai:rahman.saifur"Chowdhury, Maitrayee"https://zbmath.org/authors/?q=ai:chowdhury.maitrayee(no abstract)On dominating set of some subclasses of string graphshttps://zbmath.org/1496.051232022-11-17T18:59:28.764376Z"Chakraborty, Dibyayan"https://zbmath.org/authors/?q=ai:chakraborty.dibyayan"Das, Sandip"https://zbmath.org/authors/?q=ai:das.sandip-kr|das.sandip-kumar"Mukherjee, Joydeep"https://zbmath.org/authors/?q=ai:mukherjee.joydeepAn intersection representation \(\mathcal{R}\) of a graph \( G = (V , E)\) is a family of sets \(\{\mathcal{R}_u\}_{u\in V}\) such that \(uv \in E\) if and only if \(\mathcal{R}_u\cap\mathcal{R}_v\neq \emptyset\). When \(\mathcal{R}\) is a collection of geometric objects, it is said to be a geometric intersection representation of \(G\).
When \(\mathcal{R}\) is a collection of simple unbounded curves in the plane, it is called a string representation. A graph \(G\) is a string graph if \(G\) has a string representation. String graphs are important because they contain all intersection graphs of connected sets in \(\mathbb{R}^2\). String graphs have been intensively studied both for practical applications and theoretical interest. \textit{S. Benzer} [``On the topology of the genetic fine structure'', Proc. Natl. Acad. Sci. 45, No. 11, 1607--1620 (1959; \url{doi:10.1073/pnas.45.11.1607})] introduced string graphs while exploring the topology of genetic structures. \textit{F. W. Sinden} [Bell Syst. Tech. J. 45, 1639--1662 (1966; Zbl 0144.45601)] considered the same constructs at Bell Labs. In 1976, Graham introduced string graphs to the mathematics community at the open problem session of a conference in Keszthely. Since then, many researchers have carried out extensive research on string graphs.
Graph classes such as planar graphs, chordal graphs, cocomparability graphs, disk graphs, rectangle intersection graphs, segment graphs, and circular arc graphs are subclasses of string graphs. Any intersection graph of arc-connected sets in the plane is a string graph. However, not all graphs are string graphs, which is why the computational complexities of various optimisation problems are studied on string graphs and their subclasses.
\par In this paper, the authors propose constant factor approximation algorithms for the Minimum Dominating Set (MDS) problem on string graphs.
A dominating set of a graph \(G = (V , E)\) is a subset \(D\) of vertices \(V\) such that each vertex in \(V\backslash D\) is adjacent to some vertex in \(D\). The Minimum Dominating Set (MDS) problem is to find a minimum cardinality dominating set of a graph \(G\).
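As a concrete illustration of this definition (not part of the paper under review), a minimum dominating set of a tiny graph can be found by brute force; the adjacency-dict encoding and function names below are chosen only for this sketch:

```python
from itertools import combinations

def is_dominating(graph, subset):
    """Check that every vertex of the graph is in `subset` or adjacent to it."""
    dominated = set(subset)
    for v in subset:
        dominated.update(graph[v])
    return dominated == set(graph)

def minimum_dominating_set(graph):
    """Brute-force MDS: try all vertex subsets in order of increasing size.

    `graph` is an adjacency dict {vertex: set_of_neighbours}.  Exponential
    time, so only suitable for illustrating the definition on toy instances.
    """
    vertices = list(graph)
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if is_dominating(graph, subset):
                return set(subset)
    return set(vertices)

# The path 0-1-2-3-4: no single vertex dominates it, but two vertices
# (e.g. {0, 3}) do, so the minimum dominating set has cardinality 2.
path5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(len(minimum_dominating_set(path5)))  # 2
```

This exhaustive search is of course nothing like the approximation algorithms of the paper; it only makes the objective of the MDS problem concrete.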
The readers can consult the paper by \textit{M. Chlebík} and \textit{J. Chlebíková} [Inf. Comput. 206, No. 11, 1264--1275 (2008; Zbl 1169.68037)] to see that the MDS problem is hard to approximate on string graphs.
Thus, researchers are compelled to develop approximation algorithms for the MDS problem on various subclasses of string graphs; examples include planar graphs, chordal graphs, disk graphs, unit disk graphs, rectangle intersection graphs, and intersection graphs of homothets of convex objects. \textit{M. de Berg} et al. [Theor. Comput. Sci. 769, 18--31 (2019; Zbl 1421.68071)] studied the fixed-parameter tractability of the MDS problem on various classes of geometric intersection graphs. \textit{T. Erlebach} and \textit{E. J. van Leeuwen} [Lect. Notes Comput. Sci. 4957, 747--758 (2008; Zbl 1136.68568)] gave constant-factor approximation algorithms for intersection graphs of \(r\)-regular polygons, where \(r\) is an arbitrary constant, for pairwise homothetic triangles, and for rectangles with bounded aspect ratio.
\textit{A. Asinowski} et al. [J. Graph Algorithms Appl. 16, No. 2, 129--150 (2012; Zbl 1254.68184)] introduced the concept of \(B_k\)-VPG graphs in 2012 to initiate a systematic study of string graphs and their subclasses. A path is a simple rectilinear curve made of axis-parallel line segments, and a \(k\)-bend path is a path having \(k\) bends. The \(B_k\)-VPG graphs are intersection graphs of \(k\)-bend paths. Asinowski et al. have shown that any string graph has a \(B_k\)-VPG representation for some \(k\). \textit{M. J. Katz} et al. [Comput. Geom. 30, No. 2, 197--205 (2005; Zbl 1162.68751)] proved the NP-hardness of the MDS problem on \(B_0\)-VPG graphs. An interesting fact is that a sublogarithmic approximation algorithm for the MDS problem on \(B_0\)-VPG graphs is still unknown. It is to be noted that the intersection graphs of orthogonal segments of unit length, i.e. unit \(B_0\)-VPG graphs, form a subclass of \(B_0\)-VPG graphs.
In this paper, the authors show that the MDS problem is NP-hard on unit \(B_0\)-VPG graphs. This result strengthens a result of Katz et al. [loc. cit.]. They also propose the first constant-factor approximation algorithm for the MDS problem on unit \(B_0\)-VPG graphs. Specifically, the authors prove the following theorems.
Theorem 1.
It is NP-hard to solve the MDS problem on unit \(B_k\)-VPG graphs with \(k \geq 0\).
Theorem 2.
Given a unit \(B_0\)-VPG representation of a graph \(G\) with \(n\) vertices, there is an \(O(n^5)\)-time 18-approximation algorithm to solve the MDS problem on \(G\).
Theorem 3.
Given a unit \(B_k\)-VPG representation of a graph \(G\) with \(n\) vertices, there is an \(O(k^2n^5)\)-time \(O(k^4)\)-approximation algorithm to solve the MDS problem on \(G\).
Theorem 4.
Given a vertically-stabbed L-representation of a graph \(G\) with \(n\) vertices, there is an \(O(n^5)\)-time 8-approximation algorithm to solve the MDS problem on \(G\).
Theorem 5.
Assuming the Unique Games Conjecture to be true, it is not possible to have a polynomial time \((2 -\epsilon)\)-approximation algorithm for the MDS problem on rectangle overlap graphs for any \(\epsilon > 0\).
Theorem 6.
Given a stabbed rectangle overlap representation of a graph \(G\) with \(n\) vertices, there is an \(O(n^5)\)-time 656-approximation algorithm for the MDS problem on \(G\).
The interval overlap graphs and the intersection graphs of diagonally anchored rectangles are strict subclasses of stabbed rectangle overlap graphs.
Note that approximation algorithms for optimization problems like Maximum Independent Set and Minimum Hitting Set on intersection graphs of ``stabbed'' geometric objects have been studied by different authors.
Proofs of Theorems 2, 3, 4, and 6 use two crucial lemmas. The first one concerns the Stabbing Segments with Rays (SSR) problem and the second one the Stabbing Rays with Segments (SRS) problem, both introduced by Katz et al. [loc. cit.]. The definitions of both problems are given below.
Stabbing Segments with Rays (SSR).
Input: A set \(R\) of disjoint leftward-directed horizontal semi-infinite rays and a set \(V\) of disjoint vertical segments.
Output: A minimum cardinality subset of \(R\) that intersects all segments in \(V\).
Stabbing Rays with Segments (SRS).
Input: A set \(R\) of disjoint leftward-directed horizontal semi-infinite rays and a set \(V\) of disjoint vertical segments.
Output: A minimum cardinality subset of \(V\) that intersects all rays in \(R\).
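To make the SSR setting concrete, here is a small brute-force sketch (not the authors' dynamic-programming algorithm): a leftward-directed ray is encoded by the coordinates \((x_0, y)\) of its tip and a vertical segment by \((x, y_{\mathrm{lo}}, y_{\mathrm{hi}})\); the names and the toy instance are illustrative only.

```python
from itertools import combinations

def ray_hits_segment(ray, seg):
    """A leftward ray with tip (x0, y) meets a vertical segment
    (x, ylo, yhi) iff the segment lies left of the tip and y is in range."""
    x0, y = ray
    x, ylo, yhi = seg
    return x <= x0 and ylo <= y <= yhi

def min_ssr(rays, segments):
    """Brute-force SSR: a smallest subset of rays stabbing every segment."""
    for k in range(len(rays) + 1):
        for subset in combinations(rays, k):
            if all(any(ray_hits_segment(r, s) for r in subset)
                   for s in segments):
                return list(subset)
    return None  # some segment is stabbed by no ray at all

rays = [(5.0, 1.0), (5.0, 2.0), (5.0, 3.0)]    # tips at x = 5, heights 1..3
segments = [(2.0, 0.5, 1.5), (3.0, 1.5, 3.5)]  # disjoint vertical segments
print(len(min_ssr(rays, segments)))  # 2: no single ray stabs both segments
```

The SRS problem is symmetric: swap the roles of \(R\) and \(V\) in the search.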
Let \(\mathrm{SSR}(R, V)\) (resp. \(\mathrm{SRS}(R,V)\)) denote an SSR instance (resp. an SRS instance) where \(R\) is a given set of disjoint leftward-directed horizontal semi-infinite rays and \(V\) is a given set of disjoint vertical segments. Katz et al. [loc. cit.] gave
dynamic programming-based polynomial-time algorithms for both the SSR problem and the SRS problem. Here, to prove Theorems 2, 3, 4, and 6, the authors develop an upper bound on the ratio of the cardinality of an optimal solution of an SSR instance (respectively, an SRS instance) to the optimal cost of the corresponding relaxed LP formulation. To this end, they prove the following lemmas.
Lemma 1.
Let \(C\) be an ILP formulation of an \(\mathrm{SSR}(R,V)\) instance. There is an \(O((n+m)\log(n+m))\)-time algorithm to compute a set \(D\subseteq R\) which gives a feasible solution of \(C\) and \(|D|\leq 2\cdot \mathrm{OPT}(C_l)\), where \(n = |R|\), \(m = |V|\), and \(C_l\) is the relaxed LP formulation of \(C\).
Lemma 2.
Let \(C\) be an ILP formulation of an \(\mathrm{SRS}(R,V)\) instance. There is an \(O(n \log n)\)-time algorithm to compute a set \(D \subseteq V\) which gives a feasible solution of \(C\) and \(|D|\leq 2\cdot \mathrm{OPT}(C_l)\), where \(n = |V|\) and \(C_l\) is the relaxed LP formulation of \(C\).
To prove both of the above lemmas, the authors do not explicitly solve the LPs. Moreover, since \(\mathrm{OPT}(C_l) \leq \mathrm{OPT}(C)\), the algorithm of Lemma 1 provides an approximate solution to the \(\mathrm{SSR}(R,V)\) instance with approximation ratio 2. Thus, Theorem 7 follows as a consequence of Lemma 1.
Theorem 7.
There is an \(O((n+m)\log(n+m))\)-time 2-approximation algorithm for the SSR problem, where \(n\) and \(m\) are the number of rays and segments, respectively.
In Section 2.1 and Section 2.2, the authors prove the hardness results (Theorem 1 and Theorem 5). In Section 3 and Section 4, they prove Lemma 1 and Lemma 2, respectively. In Section 5, they apply both Lemma 1 and Lemma 2 to prove Theorem 4. Then, in Sections 6, 7, and 8, they prove Theorem 2, Theorem 3, and Theorem 6, respectively.
The authors end the paper with the following four questions.
Question 1. What are the integrality gaps of the SSR and the SRS problems?
Question 2. Is there a \(c\)-approximation algorithm for the MDS problem on unit \(B_0\)-VPG graphs with \(c < 18\)?
Question 3. Is there a constant-factor approximation algorithm for the MDS problem on \(B_0\)-VPG graphs? Is there an
\(O(\log k)\)-approximation algorithm for the MDS problem on \(B_k\)-VPG graphs?
Question 4. Is there a \(c\)-approximation algorithm for the MDS problem on stabbed rectangle overlap graphs with \(c < 656\)?
We see in this paper an amalgamation of areas such as linear programming, algorithms, NP-hardness, and graph theory. It has an exhaustive bibliography of 49 papers, all of which are used in the writing of this paper. The standard of the paper is high, and researchers will learn a lot by reading it. There are four questions over which researchers can break their heads to find answers. Overall, the paper is a classic and contains a lot of treasure.
Reviewer: A. Lourdusamy (Palayamkottai)Perfect Italian domination in graphs: complexity and algorithmshttps://zbmath.org/1496.051342022-11-17T18:59:28.764376Z"Pradhan, D."https://zbmath.org/authors/?q=ai:pradhan.debasish|pradhan.dinabandhu|pradhan.dhiraj-k|pradhan.deepak-kumar|pradhan.dillip-kumar|pradhan.dina"Banerjee, S."https://zbmath.org/authors/?q=ai:banerjee.subhasis|banerjee.sudipto|banerjee.subho-sankar|banerjee.subhashis|banerjee.sayantan|banerjee.snehamay|banerjee.sayan|banerjee.shakti|banerjee.subrato|banerjee.subrata|banerjee.sreoshi|banerjee.samiran|banerjee.subhashish|banerjee.shamik|banerjee.swarnali|banerjee.satanjeev|banerjee.salil-k|banerjee.swapna|banerjee.subarsha|banerjee.sourabh|banerjee.sanjoy|banerjee.sanhita|banerjee.shrabani|banerjee.sujogya|banerjee.sanjibani|banerjee.shubho|banerjee.shantanu|banerjee.souvik|banerjee.supriya|banerjee.subhajit|banerjee.swarnendu|banerjee.sandip|banerjee.siddhartha|banerjee.sayanti|banerjee.santo|banerjee.subhasish|banerjee.samprit|banerjee.sanchayan|banerjee.suryapratim|banerjee.s-p|banerjee.simul|mukhopadhyay.santwana|banerjee.srimanta|banerjee.subhashish.1|banerjee.supratik|banerjee.sankha|banerjee.shuvadeep|banerjee.sunirmal|banerjee.sudarshan|banerjee.shohan|banerjee.snigdha|banerjee.sauvik|banerjee.soumyarup|banerjee.swapnendu|banerjee.sailendra-nath|banerjee.soumitro|banerjee.sibasish|banerjee.sumanta|banerjee.sumita|banerjee.s-r|banerjee.shilpak|banerjee.soumik|banerjee.sourav|banerjee.soumen|banerjee.s-b|banerjee.satarupa|banerjee.sourayan|banerjee.sharmila|banerjee.subhankar|banerjee.soumya-d|banerjee.suman|banerjee.satyajit|banerjee.soumyajyoti|banerjee.shreya|banerjee.saibal-kumar|banerjee.sayak|banerjee.sitansu|banerjee.subhashree"Liu, Jia-Bao"https://zbmath.org/authors/?q=ai:liu.jia-baoLet \(G\) be a graph. A map \(f:V(G)\rightarrow\{0,1,2\}\) is called a perfect Italian dominating function if \(\sum_{u\in N_G(v)}{f(u)}=2\) for every vertex \(v\) with \(f(v)=0\). 
The minimum weight \(\sum_{v\in V(G)}{f(v)}\) among all perfect Italian dominating functions is the perfect Italian domination number and the problem to obtain it is denoted by MIN-PIDF.
This work considers MIN-PIDF from an algorithmic perspective. It is shown that the problem is polynomially solvable for block graphs and series-parallel graphs. On the other hand, the MIN-PIDF problem is NP-hard for chordal graphs, and approximation hardness is discussed in the general case. The complexity difference between MIN-PIDF and the ordinary Italian domination problem (where the condition \(=2\) is replaced by \(\geq 2\)) is also considered.
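For illustration (this example is not taken from the paper), the perfect Italian condition can be verified directly on a small graph; the encoding below is an ad hoc sketch:

```python
def is_pid(graph, f):
    """Check the perfect Italian condition: every vertex labelled 0
    must have neighbour labels summing to exactly 2."""
    return all(sum(f[u] for u in graph[v]) == 2
               for v in graph if f[v] == 0)

def weight(f):
    """The weight of a labelling f: V -> {0, 1, 2}."""
    return sum(f.values())

# The 4-cycle 0-1-2-3-0: labelling opposite vertices with 1 gives every
# 0-vertex exactly two neighbours labelled 1, so f is a PID function.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
f = {0: 1, 1: 0, 2: 1, 3: 0}
print(is_pid(c4, f), weight(f))  # True 2
```

Replacing `== 2` by `>= 2` in `is_pid` yields the ordinary Italian domination condition mentioned above.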
Reviewer: Iztok Peterin (Maribor)Signed cycle domination in planar graphshttps://zbmath.org/1496.051382022-11-17T18:59:28.764376Z"Sundarakannan, M."https://zbmath.org/authors/?q=ai:sundarakannan.m"Arumugam, S."https://zbmath.org/authors/?q=ai:arumugam.subramanianSummary: Let \(G=(V,E)\) be a graph. A function \(f:E\rightarrow\{-1,1\}\) is called a signed cycle dominating function (SCDF) if \(\sum\limits_{e\in E(C)}f(e)\geq 1\) for every induced cycle \(C\) in \(G\). The signed cycle domination number \(\sigma(G)\) is defined as \(\sigma(G)=\min\left\{\sum\limits_{e\in E}f(e):f\right\}\) is an SCDF of \(G\). In this paper, we prove that for any positive integer \(\ell\) with \(n-2\leq\ell\leq 2n-6\), there exists a maximal planar graph \(G\) of order \(n\) such that \(\sigma(G)=\ell\). We also prove that the problem of determining the signed cycle domination number is NP-complete.
For the entire collection see [Zbl 1369.68008].A weighted perfect matching with constraints on weights of its partshttps://zbmath.org/1496.051412022-11-17T18:59:28.764376Z"Duginov, Oleg Ivanovich"https://zbmath.org/authors/?q=ai:duginov.oleg-ivanovichSummary: We consider the following strongly NP-hard problem. Given an edge-weighted balanced complete bipartite graph with a partition of its part into non-empty and pairwise disjoint subsets, the problem is to find a perfect matching of this graph such that maximum sum of weights of edges from the matching incident to vertices of a subset of the partition is minimum. We present a characterization of solutions of a special case of this problem, in which weights of graph edges take values from the set \(\{0, 1, \Delta\},\) where \(\Delta\) is an integer that is greater than the number of edges of the unit weight and there is a perfect matching of the graph that consists of edges with weights 0 and 1. Besides, we identify polynomially solvable and strongly NP-hard special cases of this problem. Finally, we show that if the number of subsets forming the partition is fixed then the considered problem is equivalent to the problem of finding a perfect matching of a given weight in an edge-weighted bipartite graph.Combinatorial algorithms for binary operations on LR-tableaux with entries equal to 1 with applications to nilpotent linear operatorshttps://zbmath.org/1496.051862022-11-17T18:59:28.764376Z"Kaniecki, Mariusz"https://zbmath.org/authors/?q=ai:kaniecki.mariusz"Kosakowska, Justyna"https://zbmath.org/authors/?q=ai:kosakowska.justynaSummary: In the paper we investigate an algorithmic associative binary operation \(\ast\) on the set \(\mathcal{LR}_1\) of Littlewood-Richardson tableaux with entries equal to one. 
We extend \(\ast\) to an algorithmic nonassociative binary operation on the set \(\mathcal{LR}_1 \times \mathbb{N}\) and show that it is equivalent to the operation of taking the generic extensions of objects in the category of homomorphisms from semisimple nilpotent linear operators to nilpotent linear operators. Thus we get a combinatorial algorithm computing generic extensions in this category.Milnor invariants of sorting networkshttps://zbmath.org/1496.051882022-11-17T18:59:28.764376Z"Arnold, Maxim"https://zbmath.org/authors/?q=ai:arnold.maxim"Kondor, Christian"https://zbmath.org/authors/?q=ai:kondor.christianThe authors investigate Milnor invariants of various braids arising from the signed sorting networks.
Starting from a given signed sorting network, the authors relate it to a set of sorting braids. For two sorting networks \(S\) and \(T\), the word \(ST\) corresponds to a closed loop on the permutahedron, so that if one assigns signatures to each crossing in the wiring diagram for \(ST\), one gets a pure braid on \(n\) strands. It is then possible to study the Milnor invariants of these braids.
The authors proceed to discuss the asymptotic invariants of two interesting special cases of sorting braids.
Reviewer: Stefano Serpente (Roma)Solving polynomial fixed point equationshttps://zbmath.org/1496.080052022-11-17T18:59:28.764376Z"Bloom, Stephen L."https://zbmath.org/authors/?q=ai:bloom.stephen-l"Ésik, Zoltán"https://zbmath.org/authors/?q=ai:esik.zoltanFor the entire collection see [Zbl 0825.68120].On congruence schemes for constant terms and their applicationshttps://zbmath.org/1496.110052022-11-17T18:59:28.764376Z"Straub, Armin"https://zbmath.org/authors/?q=ai:straub.arminAuthor's abstract: \textit{E. Rowland} and \textit{D. Zeilberger} [J. Difference Equ. Appl. 20, No. 7, 973--988 (2014; Zbl 1358.68175)] devised an approach to algorithmically determine the modulo \(p^r\) reductions of values of combinatorial sequences representable as constant terms (building on work of \textit{E. Rowland} and \textit{R. Yassawi} [J. Théor. Nombres Bordx. 27, No. 1, 245--288 (2015; Zbl 1384.11003)]). The resulting \(p\)-schemes are systems of recurrences and, depending on their shape, are classified as automatic or linear. We revisit this approach, provide some additional details such as bounding the number of states, and suggest a third natural type of scheme that combines benefits of automatic and linear ones. We illustrate the utility of these ``scaling'' schemes by confirming and extending a conjecture of Rowland and Yassawi on Motzkin numbers.
Reviewer: Michel Rigo (Liège)On strongly non-singular polynomial matriceshttps://zbmath.org/1496.150162022-11-17T18:59:28.764376Z"Abramov, Sergei A."https://zbmath.org/authors/?q=ai:abramov.sergei-a"Barkatou, Moulay A."https://zbmath.org/authors/?q=ai:barkatou.moulay-aSummary: We consider matrices with infinite power series as entries and suppose that those matrices are represented in an ``approximate'' form, namely, in a truncated form. Thus, it is supposed that a polynomial matrix \(P\) which is the \(l\)-truncation (\(l\) is a non-negative integer, \(\deg P=l\)) of a power series matrix \(M\) is given, and \(P\) is non-singular, i.e., \(\det P\neq 0\). We prove that the strong non-singularity testing, i.e., the testing whether \(P\) is not a truncation of a singular matrix having power series entries, is algorithmically decidable. Supposing that a non-singular power series matrix \(M\) (which is not known to us) is represented by a strongly non-singular polynomial matrix \(P\), we give a tight lower bound for the number of initial terms of \(M^{-1}\) which can be determined from \(P^{-1}\). In addition, we report on possibility of applying the proposed approach to ``approximate'' linear differential systems.
For the entire collection see [Zbl 1391.33001].On the condition number of the shifted real Ginibre ensemblehttps://zbmath.org/1496.150252022-11-17T18:59:28.764376Z"Cipolloni, Giorgio"https://zbmath.org/authors/?q=ai:cipolloni.giorgio"Erdös, László"https://zbmath.org/authors/?q=ai:erdos.laszlo"Schröder, Dominik"https://zbmath.org/authors/?q=ai:schroder.dominikSix model categories for directed homotopyhttps://zbmath.org/1496.180122022-11-17T18:59:28.764376Z"Gaucher, Philippe"https://zbmath.org/authors/?q=ai:gaucher.philippeTwo categories are considered for modeling directed homotopy. The first of them, the category of multipointed \(d\)-spaces, is described in [\textit{P. Gaucher}, Theory Appl. Categ. 22, 588--621 (2009; Zbl 1191.55013)]. The second, the category of flows introduced in [\textit{P. Gaucher}, Homology Homotopy Appl. 5, No. 1, 549--599 (2003; Zbl 1069.55008)]. In the paper under review, three model structures are constructed for each of these categories and a comparison of the six model categories obtained is carried out.
The model category \(\mathcal{K}\) is given by the model structure \((\mathcal{C}, \mathcal{W}, \mathcal{F})\), which consists of cofibrations \(\mathcal{C}\), weak equivalences \(\mathcal{W}\), and fibrations \(\mathcal{F}\) satisfying the axioms described in [\textit{M. Hovey}, Model categories. Providence, RI: American Mathematical Society (AMS) (1999; Zbl 0909.55001), Definition 1.1.4].
Let \(Top\) be the category of \(\Delta\)-generated spaces. The category of general topological spaces is denoted by \(\mathcal{TOP}\). There are the following three model structures on the category \(Top\).
\begin{itemize}
\item The \(q\)-model structure \((\mathcal{C}_q, \mathcal{W}_q, \mathcal{F}_q)\): the cofibrations are the retracts of the transfinite compositions of the inclusions \(S^{n-1}\subset D^n\) for \(n\geq 0\), the weak equivalences are the weak homotopy equivalences, and the fibrations are the maps satisfying the RLP with respect to the inclusions \(D^n \subset D^{n+1}\) for \(n\geq 0\). The existence of this model structure goes back to [\textit{D. G. Quillen}, Homotopical algebra. Lecture Notes in Mathematics. 43. Berlin-Heidelberg-New York: Springer-Verlag. (1967; Zbl 0168.20903)].
\item The \(h\)-model structure \((\mathcal{C}_{\bar h}, \mathcal{W}_h, \mathcal{F}_h)\): the fibrations are the maps satisfying the RLP with respect to the inclusions \(X\times \{0\} \subset X\times [0,1]\) for all topological spaces \(X\), and the weak equivalences are the homotopy equivalences. The \(h\)-model structure is described in [\textit{A. Strøm}, Arch. Math. 23, 435--441 (1972; Zbl 0261.18015)] and [\textit{T. Barthel} and \textit{E. Riehl}, Algebr. Geom. Topol. 13, No. 2, 1089--1124 (2013; Zbl 1268.18001)].
\item The \(m\)-model structure \((\mathcal{C}_m, \mathcal{W}_m, \mathcal{F}_m) = (\mathcal{C}_m, \mathcal{W}_q, \mathcal{F}_h)\): the cofibrations are constructed using the LLP with respect to \(\mathcal{W}_q\cap \mathcal{F}_h\). Its existence is a consequence of [\textit{M. Cole}, Topology Appl. 153, No. 7, 1016--1032 (2006; Zbl 1094.55015), Theorem 2.1].
\end{itemize}
Section 6 is devoted to the category of multipointed \(d\)-spaces \({\mathcal{G}}dTop\).
A multipointed space \(X\) is a pair \((|X|, X^0)\), where \(|X|\) is a topological space and \(X^0\) is a subset of \(|X|\). A morphism \(f: X=(|X|, X^0)\to Y=(|Y|, Y^0)\) consists of a continuous map \(|f|: |X|\to |Y|\) and a map \(f^0: X^0\to Y^0\) such that \((\forall s\in X^0)\; f^0(s)= |f|(s)\).
Let \(\mathcal{G}\) be the topological group of nondecreasing homeomorphisms of \([0, 1]\).
A multipointed \(d\)-space \(X\) (Definition 6.3) is a triple \((|X|, X^0, \mathbb{P}^{\mathcal{G}}X)\), where \((|X|, X^0)\) is a multipointed space and \(\mathbb{P}^{\mathcal{G}}X\) is a set of continuous maps from \([0,1]\) to \(|X|\), called the execution paths, satisfying the following axioms:
\begin{itemize}
\item For any execution path \(\gamma\), one has \(\gamma(0), \gamma(1)\in X^0\).
\item For any execution path \(\gamma\) of \(X\), any composite \(\gamma.\phi\) with \(\phi\in \mathcal{G}\) is an execution path of \(X\).
\item If \(\gamma_1\) and \(\gamma_2\) are composable execution paths of \(X\), then the normalized composition \(\gamma_1*_N\gamma_2\) is an execution path of \(X\).
\end{itemize}
A morphism \(f: X\to Y\) of multipointed \(d\)-spaces is a map of multipointed spaces from \((|X|,X^0)\) to \((|Y|, Y^0)\) such that for any execution path \(\gamma\) of \(X\), the map \(f.\gamma\) is an execution path of \(Y\). The subset of execution paths from \(\alpha\) to \(\beta\) is the set of \(\gamma\in \mathbb{P}^{\mathcal{G}}X\) such that \(\gamma(0)=\alpha\) and \(\gamma(1)=\beta\), and is denoted by \(\mathbb{P}^{\mathcal{G}}_{\alpha, \beta}X\). It is equipped with the kelleyfication of the initial topology making the inclusion \(\mathbb{P}^{\mathcal{G}}_{\alpha,\beta}X \subset \mathcal{TOP}([0,1], |X|)\) continuous.
Theorem 6.14. Let \((\mathcal{C}, \mathcal{W}, \mathcal{F})\) be one of the three model structures \[ (\mathcal{C}_q, \mathcal{W}_q, \mathcal{F}_q), (\mathcal{C}_{\bar h}, \mathcal{W}_h, \mathcal{F}_h), (\mathcal{C}_m, \mathcal{W}_m, \mathcal{F}_m) \] of \(Top\). Then there exists a unique model structure on \({\mathcal{G}}dTop\) such that:
\begin{itemize}
\item A map of multipointed \(d\)-spaces \(f: X \to Y\) is a weak equivalence if and only if \(f^0: X^0\to Y^0\) is a bijection and for all \((\alpha,\beta)\in X^0\times X^0\), the continuous map \(\mathbb{P}^{\mathcal{G}}_{\alpha, \beta}X \to \mathbb{P}^{\mathcal{G}}_{f(\alpha), f(\beta)}Y\) belongs to \(\mathcal{W}\).
\item A map of multipointed \(d\)-spaces \(f: X \to Y\) is a fibration if and only if for all \((\alpha,\beta)\in X^0\times X^0\), the continuous map \(\mathbb{P}^{\mathcal{G}}_{\alpha, \beta}X \to \mathbb{P}^{\mathcal{G}}_{f(\alpha), f(\beta)}Y\) belongs to \(\mathcal{F}\).
\end{itemize}
Moreover, this model structure is accessible and all objects are fibrant.
Section 7 is devoted to the category \(Flow\).
Definition 7.1. [\textit{P. Gaucher}, Homology Homotopy Appl. 5, No. 1, 549--599 (2003; Zbl 1069.55008)] A flow \(X\) consists of a topological space \(\mathbb{P} X\) of execution paths, a discrete space \(X^0\) of states, two continuous maps \(s\) and \(t\) from \(\mathbb{P} X\) to \(X^0\) called the source and target map, respectively, and a continuous and associative map
\[
*: \{(x,y)\in \mathbb{P} X \times \mathbb{P} X; t(x)= s(y)\}\to \mathbb{P} X
\]
such that \(s(x*y)=s(x)\) and \(t(x*y)= t(y)\). A morphism of flows \(f: X\to Y\) consists of a set map \(f^0: X^0\to Y^0\) together with a continuous map \(\mathbb{P} f: \mathbb{P} X \to \mathbb{P} Y\) such that \(f(s(x))= s(f(x))\), \(f(t(x))= t(f(x))\) and \(f(x*y)= f(x)*f(y)\). The corresponding category is denoted by \(Flow\). Let \(\mathbb{P}_{\alpha,\beta}X= \{x\in \mathbb{P} X | s(x)= \alpha \text{ and } t(x)=\beta\}\).
Theorem 7.4. Let \((\mathcal{C}, \mathcal{W}, \mathcal{F})\) be one of the three model structures
\[
(\mathcal{C}_q, \mathcal{W}_q, \mathcal{F}_q), (\mathcal{C}_{\bar h}, \mathcal{W}_h, \mathcal{F}_h), (\mathcal{C}_m, \mathcal{W}_m, \mathcal{F}_m)
\]
of \(Top\). Then there exists a unique model structure on \(Flow\) such that:
\begin{itemize}
\item A map of flows \(f: X \to Y\) is a weak equivalence if and only if \(f^0: X^0\to Y^0\) is a bijection and for all \((\alpha,\beta)\in X^0\times X^0\), the continuous map \(\mathbb{P}_{\alpha, \beta}X \to \mathbb{P}_{f(\alpha), f(\beta)}Y\) belongs to \(\mathcal{W}\).
\item A map of flows \(f: X \to Y\) is a fibration if and only if for all \((\alpha,\beta)\in X^0\times X^0\), the continuous map \(\mathbb{P}_{\alpha, \beta}X \to \mathbb{P}_{f(\alpha), f(\beta)}Y\) belongs to \(\mathcal{F}\).
\end{itemize}
Moreover, this model structure is accessible and all objects are fibrant.
From the text: ``We obtain the following results:
\begin{itemize}
\item a \(q\)-model structure, an \(h\)-model structure and an \(m\)-model structure on multipointed \(d\)-spaces and on flows in one step (!)
\item the identity functor induces a Quillen equivalence between the \(q\)-model structure and the \(m\)-model structure on multipointed \(d\)-spaces (on flows, respectively)
\item the two \(q\)-model structures are combinatorial and left determined and they coincide with that of \textit{P. Gaucher} [Homology Homotopy Appl. 5, No. 1, 549--599 (2003; Zbl 1069.55008); Theory Appl. Categ. 22, 588--621 (2009; Zbl 1191.55013); Cah. Topol. Géom. Différ. Catég. 61, No. 2, 208--226 (2020; Zbl 1452.18010)], respectively
\item the four other model structures (the two \(m\)-model structures and the two \(h\)-model structures) are accessible
\item all objects are fibrant in these six model structures
\item there are the implications \(q\)-cofibrant \(\Rightarrow\) \(m\)-cofibrant \(\Rightarrow\) \(h\)-cofibrant for multipointed \(d\)-spaces and flows
\item there exist multipointed \(d\)-spaces and flows which are not \(q\)-cofibrant, not \(h\)-cofibrant and not \(m\)-cofibrant.''
\end{itemize}
Reviewer: Ahmet A. Khusainov (Komsomolsk-on-Amur)Groupoids and the algebra of rewriting in group presentationshttps://zbmath.org/1496.200552022-11-17T18:59:28.764376Z"Gilbert, N. D."https://zbmath.org/authors/?q=ai:gilbert.nick-d"McDougall, E. A."https://zbmath.org/authors/?q=ai:mcdougall.e-aSummary: Presentations of groups by rewriting systems (that is, by monoid presentations) have been fruitfully studied by encoding the rewriting system in a 2-complex -- the Squier complex -- whose fundamental groupoid then describes the derivation of consequences of the rewrite rules. We describe a reduced form of the Squier complex, investigate the structure of its fundamental groupoid, and show that key properties of the presentation are still encoded in the reduced form.The word and order problems for self-similar and automata groupshttps://zbmath.org/1496.200562022-11-17T18:59:28.764376Z"Bartholdi, Laurent"https://zbmath.org/authors/?q=ai:bartholdi.laurent"Mitrofanov, Ivan"https://zbmath.org/authors/?q=ai:mitrofanov.ivan-viktorovichThe article concerns undecidability results for self-similar groups (Theorem A), automata groups (Theorem B), and contracting groups (Theorem C), and also offers different specific versions of the core theorems for the different classes of groups.
Recall that \textit{self-similar} groups (also called \textit{functionally-recursive} groups) are those groups \(G\) so that there is a faithful action of \(G\) on a set \(\mathcal{A}^*\) of finite words over some finite alphabet \(\mathcal{A}\), where this action is induced by some map \(\overline{\Phi}:\mathcal{A}\times G\to G\times\mathcal{A}\) and a rule
\[
(a_1a_2\dots a_m)^g=a_1'(a_2a_3\dots a_m)^{g'}
\]
with \((g',a_1')=\overline{\Phi}(a_1,g)\).
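A standard example of such data (not taken from the paper) is the binary adding machine, with alphabet \(\mathcal{A}=\{0,1\}\) and a single generator \(a\) satisfying \(\overline{\Phi}(0,a)=(e,1)\) and \(\overline{\Phi}(1,a)=(a,0)\), where \(e\) denotes the identity; acting on words read least-significant-bit first, \(a\) implements addition of 1 modulo \(2^m\). A sketch in Python:

```python
# The binary adding machine as a self-similar action: the generator a
# rewrites the first letter and recurses with the state prescribed by
#   Phi(0, a) = (e, 1)   (carry absorbed, rest of the word untouched)
#   Phi(1, a) = (a, 0)   (carry propagates to the remaining letters)

def act_a(word):
    """Apply the generator a to a tuple of bits, least significant first."""
    if not word:
        return word
    head, tail = word[0], word[1:]
    if head == 0:
        return (1,) + tail        # Phi(0, a) = (e, 1): stop carrying
    return (0,) + act_a(tail)     # Phi(1, a) = (a, 0): keep carrying

def to_int(word):
    """Interpret a bit tuple (LSB first) as an integer."""
    return sum(b << i for i, b in enumerate(word))

w = (1, 1, 0)  # the integer 3 on three binary digits
print(to_int(act_a(w)))  # 4
```

The finite table `Phi` in the comments is exactly the kind of combinatorial presentation \(\Phi\) referred to below; the word problem asks, for such a table, whether a given generator acts as the identity on all of \(\mathcal{A}^*\).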
The class of self-similar groups is large, e.g., containing \textit{V. Nekrashevych}'s iterated monodromy groups [Lond. Math. Soc. Lect. Note Ser. 387, 41--93 (2011; Zbl 1235.37016)].
In the further case that \(G\) is a self-similar group generated as a semigroup by a finite set \(S\) of generators (e.g., if \(S\) is inverse closed), we obtain a lift \(\Phi\) of \(\overline{\Phi}\) by restricting to the generators of \(G\): that is, a map \(\Phi:\mathcal{A}\times S\to F_S\times\mathcal{A}\), where \(F_S\) is the free monoid on \(S\). The map \(\Phi\) is a finite set of combinatorial data that determines the action of \(G\) on \(\mathcal{A}^*\), and hence determines \(G\), which opens the door to investigating decidability properties for these groups. In this case, we say that \(G\) is presented by \(\Phi\) and write \(G=\langle\Phi\rangle\).
The first theorem is in this context:
Theorem A. There is no algorithm that, given \(\Phi:\mathcal{A}\times S\to F_S\times\mathcal{A}\) and \(s\in S\), determines whether \(s=1\) in \(\langle \Phi\rangle\).
If we now assume that the length of \(g'\) is at most the length of \(g\) in the \(S\) generating set, and if we ensure that the identity \(1\) is in the set \(S\) by adding it in if necessary, then \(G=\langle \Phi\rangle\) becomes an \textit{automata group}. In this case, the map \(\Phi\) takes the form \(\Phi:\mathcal{A}\times S\to S\times\mathcal{A}\). The class of automata groups remains quite mysterious while containing many groups of abiding research interest (e.g., finitely generated linear groups [\textit{A. M. Brunner} and \textit{S. Sidki}, Int. J. Algebra Comput. 8, No. 1, 127--139 (1998; Zbl 0923.20023)] as well as groups of intermediate growth [\textit{R. I. Grigorchuk}, Sov. Math., Dokl. 28, 23--26 (1983; Zbl 0547.20025); translation from Dokl. Akad. Nauk SSSR 271, 30--33 (1983)]). In this context we have:
Theorem B. There is no algorithm that, given \(\Phi:\mathcal{A}\times S\to S\times\mathcal{A}\) and \(s \in S\), determines the order of \(s\) in \(\langle \Phi\rangle\), namely the cardinality of \(\langle s\rangle\).
This gives a (new) solution to Question 7.2.1(a) of [\textit{R. I. Grigorchuk} et al., in: Dynamical systems, automata, and infinite groups. Transl. from the Russian. Moscow: MAIK Nauka/Interperiodica Publishing. 128--203 (2000; Zbl 1155.37311); translation from Tr. Mat. Inst. Steklova 231, 134--214 (2000)] (\textit{P. Gillibert} announced a solution to this question in July 2017 for automata groups, which appears in [J. Algebra 497, 363--392 (2018; Zbl 1427.20040)], while \textit{J. Belk} and \textit{C. Bleak} show the problem is undecidable for groups generated by initial asynchronous transducers in [Trans. Am. Math. Soc. 369, No. 5, 3157--3172 (2017; Zbl 1364.20015)] (the original context of the question)). In fact, the article under current review shows the further undecidability results: given \(a\in A\), \(s\in S\) and the induced action of \(\langle \Phi\rangle\) on the Cantor space \(A^\omega\), in the context of Theorem A one cannot determine if \(a^\omega\) is fixed by \(s\), and in the context of Theorem B one cannot determine the cardinality of the orbit of \(a^\omega\) under the action of \(\langle s\rangle\).
Finally, if we assume \(G\) is finitely generated, self-similar, and that there are constants \(\lambda<1\) and \(C\) with \(|g'|\leq \lambda\cdot|g| + C\) for all \(g\in G\), then we say that \(G\) is a \textit{contracting group}. In this context, if we replace \(S\) by all words of length less than \(C/(1-\lambda)\) over the original \(S\), then we have \(|g'|\leq |g|\), so these groups are again automata groups. The final result (Theorem C) is that the order and orbit-order problems which are undecidable for automata groups (the two versions of Theorem B mentioned above) remain undecidable even in this more restricted subclass of groups.
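The arithmetic behind this replacement is a one-line check (spelled out here for the reader; it is implicit in the review): whenever \(|g|\geq C/(1-\lambda)\),
\[
|g'| \;\leq\; \lambda |g| + C \;\leq\; \lambda |g| + (1-\lambda)|g| \;=\; |g|,
\]
while elements of length below \(C/(1-\lambda)\) already belong to the enlarged generating set.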
Overall, the writing and proofs are clear, and the results are very strong. The main method is to encode Minsky machines in self-similar groups, but the authors also give an explanation of Theorem B using tilings.
Reviewer: Collin Bleak (St. Andrews)Algorithmic theory of solvable groupshttps://zbmath.org/1496.200602022-11-17T18:59:28.764376Z"Roman'kov, V. A."https://zbmath.org/authors/?q=ai:romankov.vitaly-aSummary: The purpose of this survey is to give some picture of what is known about algorithmic and decision problems in the theory of solvable groups. We will provide a number of references to various results, which are presented without proof. Naturally, the choice of the material reported on reflects the author's interests, and many worthy contributions to the field will unfortunately go unmentioned. In addition to achievements in solving classical algorithmic problems, the survey presents results on other issues. Attention is paid to various aspects of modern theory related to the complexity of algorithms, their practical implementation, random choice, and asymptotic properties. Results are given on various issues related to mathematical logic and model theory. In particular, a special section of the survey is devoted to elementary and universal theories of solvable groups. Special attention is paid to algorithmic questions regarding rational subsets of groups. Results on algorithmic problems related to homomorphisms, automorphisms, and endomorphisms of groups are presented in sufficient detail.Some trace monoids where both the star problem and the finite power property problem are decidable (extended abstract)https://zbmath.org/1496.200892022-11-17T18:59:28.764376Z"Richomme, Gwénaël"https://zbmath.org/authors/?q=ai:richomme.gwenaelSummary: We consider here the decidability of the Star Problem in trace monoids (``assuming \(X\) is a recognizable set of traces, is \(X^*\) recognizable?'') and the decidability of the Finite Power Property Problem (``assuming \(X\) is a recognizable trace set, does there exist an integer \(n\) such that \(X^* = \bigcup_{i\leq n} X^i\)?'').
We define a family \(\mathcal{F}\) of free partially commutative monoids where both the Star Problem and the Finite Power Property Problem are decidable. The family \(\mathcal{F}\) strictly contains all the already known cases of decidability of the two problems.
For the entire collection see [Zbl 0825.68120].Several special functions in fractals and applications of the fractal in machine learninghttps://zbmath.org/1496.280122022-11-17T18:59:28.764376Z"Wang, Jun"https://zbmath.org/authors/?q=ai:wang.jun.24"Cao, Lei"https://zbmath.org/authors/?q=ai:cao.lei.1|cao.lei"Chen, Xiliang"https://zbmath.org/authors/?q=ai:chen.xiliang"Tang, Wei"https://zbmath.org/authors/?q=ai:tang.wei"Xu, Zhixiong"https://zbmath.org/authors/?q=ai:xu.zhixiongSymbolic computation on the long gravity water waves: scaling transformations, bilinear forms, \(N\)-soliton solutions and auto-Bäcklund transformation for a variable-coefficient variant Boussinesq systemhttps://zbmath.org/1496.351492022-11-17T18:59:28.764376Z"Gao, Xin-Yi"https://zbmath.org/authors/?q=ai:gao.xinyi"Guo, Yong-Jiang"https://zbmath.org/authors/?q=ai:guo.yongjiang"Shan, Wen-Rui"https://zbmath.org/authors/?q=ai:shan.wenruiSummary: Water waves are one of the most common phenomena in nature. Hereby, on a variable-coefficient variant Boussinesq system for the nonlinear and dispersive long gravity waves travelling in two horizontal directions in the shallow water with varying depth, with respect to the horizontal velocity of the water and height deviating from the equilibrium position of the water, our symbolic computation leads to the scaling transformations, bilinear forms, \(N\)-soliton solutions and auto-Bäcklund transformation with the sample solitons, where \(N\) is a positive integer. Our results are dependent on the water-wave variable coefficients and under the relevant variable-coefficient constraints.Attosecond soliton switching through the interactions of two and three solitons in an inhomogeneous fiberhttps://zbmath.org/1496.353692022-11-17T18:59:28.764376Z"Veni, S. Saravana"https://zbmath.org/authors/?q=ai:veni.s-saravana"Rajan, M. S. 
Mani"https://zbmath.org/authors/?q=ai:rajan.m-s-maniSummary: We obtain the exact two and three soliton solutions for the higher-order NLS equation with variable coefficients using the Darboux transformation method with some algebraic manipulations based on the constructed Lax pair. For the first time, switching characteristics of two and three solitons in the attosecond regime via inelastic soliton interactions are discussed through some graphical illustrations. Additionally, effects of the inhomogeneous coefficients on propagation features of solitons are analyzed graphically. Our results have certain applications in the construction of optical switching and soliton management in optical communication systems.Predicting the dynamic process and model parameters of the vector optical solitons in birefringent fibers \textit{via} the modified PINNhttps://zbmath.org/1496.353722022-11-17T18:59:28.764376Z"Wu, Gang-Zhou"https://zbmath.org/authors/?q=ai:wu.gangzhou"Fang, Yin"https://zbmath.org/authors/?q=ai:fang.yin"Wang, Yue-Yue"https://zbmath.org/authors/?q=ai:wang.yueyue"Wu, Guo-Cheng"https://zbmath.org/authors/?q=ai:wu.guocheng"Dai, Chao-Qing"https://zbmath.org/authors/?q=ai:dai.chaoqingSummary: A modified physics-informed neural network is used to predict the dynamics of optical pulses including one-soliton, two-soliton, and rogue wave based on the coupled nonlinear Schrödinger equation in birefringent fibers. At the same time, the elastic collision process of the mixed bright-dark soliton is predicted. Comparing the predicted results with the exact solution, the modified physics-informed neural network method is proven to be effective in solving the coupled nonlinear Schrödinger equation. Moreover, the dispersion coefficients and nonlinearity coefficients of the coupled nonlinear Schrödinger equation can be learned by the modified physics-informed neural network.
This provides a reference for us to use deep learning methods to study the dynamic characteristics of solitons in optical fibers.Data-driven rogue waves and parameters discovery in nearly integrable \(\mathcal{PT}\)-symmetric Gross-Pitaevskii equations via PINNs deep learninghttps://zbmath.org/1496.353732022-11-17T18:59:28.764376Z"Zhong, Ming"https://zbmath.org/authors/?q=ai:zhong.ming"Gong, Shibo"https://zbmath.org/authors/?q=ai:gong.shibo"Tian, Shou-Fu"https://zbmath.org/authors/?q=ai:tian.shoufu"Yan, Zhenya"https://zbmath.org/authors/?q=ai:yan.zhenyaSummary: In this paper, we explore the forward and inverse problems for the generalized Gross-Pitaevskii (GP) equation with complex \(\mathcal{PT}\)-symmetric potentials via the deep physics-informed neural networks (PINNs). The data-driven rogue waves (RWs) are mainly studied in the forward problem, where the comparisons between the data-driven RWs and numerical ones via the spectral method are used to present the PINNs solution accuracies. Besides, we also focus on the influences of several critical factors (e.g., the depths of neural networks, numbers of training points) on the performance of the PINNs algorithm. Finally, the inverse problem is also investigated such that the system parameters can be identified from the training data. 
The results obtained in this paper can be useful to further understand the neural networks on rogue wave structures in the nearly integrable \(\mathcal{PT}\)-symmetric nonlinear wave systems.Deep neural network surrogates for nonsmooth quantities of interest in shape uncertainty quantificationhttps://zbmath.org/1496.354672022-11-17T18:59:28.764376Z"Scarabosio, Laura"https://zbmath.org/authors/?q=ai:scarabosio.lauraA new fractional one dimensional chaotic map and its application in high-speed image encryptionhttps://zbmath.org/1496.370992022-11-17T18:59:28.764376Z"Talhaoui, Mohamed Zakariya"https://zbmath.org/authors/?q=ai:talhaoui.mohamed-zakariya"Wang, Xingyuan"https://zbmath.org/authors/?q=ai:wang.xingyuanSummary: Chaos theory has been widely used in the design of image encryption schemes. Some low-dimensional chaotic maps have been proved to be easily predictable because of their small chaotic space. On the other hand, high-dimensional chaotic maps have a larger chaotic space. However, their structures are too complicated, and consequently, they are not suitable for real-time image encryption. Motivated by this, we propose a new fractional one-dimensional chaotic map with a large chaotic space. The proposed map has a simple structure and a high chaotic behavior in an extensive range of its control parameters values. Several chaos theoretical tools and tests have been carried out to analyze and prove the proposed map's high chaotic behavior. Moreover, we use the proposed map in the design of a novel real-time image encryption scheme. In this new scheme, we combine the substitution and permutation stages to simultaneously modify both of the pixels' positions and values. The merge of these two stages and the use of the new simple one-dimensional chaotic map significantly increase the proposed scheme's security and speed. 
Moreover, simulation and experimental analysis show that the proposed scheme achieves high performance.Problems of the algorithmization of algebraic systemshttps://zbmath.org/1496.460482022-11-17T18:59:28.764376Z"Ayupov, Sh. A."https://zbmath.org/authors/?q=ai:ayupov.shabkat-abdullaevich|ayupov.sh-a"Kabulov, V. K."https://zbmath.org/authors/?q=ai:kabulov.vasil-kabulovich(no abstract)Poncelet triangles: a theory for locus ellipticityhttps://zbmath.org/1496.510062022-11-17T18:59:28.764376Z"Helman, Mark"https://zbmath.org/authors/?q=ai:helman.mark"Laurain, Dominique"https://zbmath.org/authors/?q=ai:laurain.dominique"Garcia, Ronaldo"https://zbmath.org/authors/?q=ai:garcia.ronaldo-a"Reznik, Dan"https://zbmath.org/authors/?q=ai:reznik.dan-sSummary: We present a theory which predicts whether the locus of a triangle center over certain Poncelet triangle families is a conic. We consider families interscribed in (i) the confocal pair and (ii) an outer ellipse and an inner concentric circular caustic. Previously, determining if a locus was a conic was done on a case-by-case basis. In the confocal case, we also derive conditions under which a locus degenerates to a segment or a circle.
We show the locus' turning number is \(\pm 3\), while predicting its monotonicity with respect to the motion of a vertex of the triangle family.Geometry and generalization: eigenvalues as predictors of where a network will fail to generalizehttps://zbmath.org/1496.531072022-11-17T18:59:28.764376Z"Agarwala, Susama"https://zbmath.org/authors/?q=ai:agarwala.susama"Dees, Ben"https://zbmath.org/authors/?q=ai:dees.ben-k"Gearheart, Andrew"https://zbmath.org/authors/?q=ai:gearheart.andrew"Lowman, Corey"https://zbmath.org/authors/?q=ai:lowman.corey(no abstract)\(A_\infty\) persistent homology estimates detailed topology from pointcloud datasetshttps://zbmath.org/1496.550052022-11-17T18:59:28.764376Z"Belchí, Francisco"https://zbmath.org/authors/?q=ai:belchi.francisco"Stefanou, Anastasios"https://zbmath.org/authors/?q=ai:stefanou.anastasiosIn the study of pointcloud datasets, describing topological properties of the underlying spaces \(X\) has proven to be beneficial. To date, there are many techniques that study and compute the Betti numbers of \(X\) from a finite set \(P\) of points approximating \(X\).
In this paper, much more detailed topological properties of \(X\) are studied using the techniques of \(A_\infty\)-persistent homology. As a consequence, the stability of cup products and generalised Massey products in persistent homology is proved.
Reviewer: Jelena Grbić (Southampton)On the treewidth of triangulated 3-manifoldshttps://zbmath.org/1496.570272022-11-17T18:59:28.764376Z"Huszár, Kristóf"https://zbmath.org/authors/?q=ai:huszar.kristof"Spreer, Jonathan"https://zbmath.org/authors/?q=ai:spreer.jonathan"Wagner, Uli"https://zbmath.org/authors/?q=ai:wagner.uliSummary: In graph theory, as well as in 3-manifold topology, there exist several width-type parameters to describe how ``simple'' or ``thin'' a given graph or 3-manifold is. These parameters, such as pathwidth or treewidth for graphs, or the concept of thin position for 3-manifolds, play an important role when studying algorithmic problems; in particular, there is a variety of problems in computational 3-manifold topology -- some of them known to be computationally hard in general -- that become solvable in polynomial time as soon as the dual graph of the input triangulation has bounded treewidth.
In view of these algorithmic results, it is natural to ask whether every 3-manifold admits a triangulation of bounded treewidth. We show that this is not the case, i.e., that there exists an infinite family of closed 3-manifolds not admitting triangulations of bounded pathwidth or treewidth (the latter implies the former, but we present two separate proofs).
We derive these results from work of \textit{I. Agol} [Geom. Dedicata 102, 53--64 (2003; Zbl 1039.57008)] and of \textit{M. Scharlemann} and \textit{A. Thompson} [Contemp. Math. 164, 231--238 (1994; Zbl 0818.57013)], by exhibiting explicit connections between the topology of a 3-manifold \(\mathcal{M}\) on the one hand and width-type parameters of the dual graphs of triangulations of \(\mathcal{M}\) on the other hand, answering a question that had been raised repeatedly by researchers in computational 3-manifold topology. In particular, we show that if a closed, orientable, irreducible, non-Haken 3-manifold \(\mathcal{M}\) has a triangulation of treewidth (resp. pathwidth) \(k\) then the Heegaard genus of \(\mathcal{M}\) is at most \(48(k+1)\) (resp. \(4(3k+1)\)).
Editorial remark: The full version of this paper has been published in [\textit{K. Huszár} et al., J. Comput. Geom. 10, No. 2, 70--98 (2019; Zbl 07150581)]; see the review there.
For the entire collection see [Zbl 1390.68027].Queueing models for cognitive wireless networks with sensing time of secondary usershttps://zbmath.org/1496.601142022-11-17T18:59:28.764376Z"Phung-Duc, Tuan"https://zbmath.org/authors/?q=ai:phung-duc.tuan"Akutsu, Kohei"https://zbmath.org/authors/?q=ai:akutsu.kohei"Kawanishi, Ken'ichi"https://zbmath.org/authors/?q=ai:kawanishi.kenichi"Salameh, Osama"https://zbmath.org/authors/?q=ai:salameh.osama"Wittevrongel, Sabine"https://zbmath.org/authors/?q=ai:wittevrongel.sabineSummary: This paper considers queueing models for cognitive radio networks that account for the sensing time of secondary users (SUs). In cognitive radio networks, secondary users are allowed to opportunistically use idle channels originally allocated to primary users (PUs). To this end, SUs must sense the state of the channels before transmission. After sensing, if an idle channel is available, the SU can start transmission immediately; otherwise, the SU must carry out another channel sensing. In this paper, we study two retrial queueing models with an unlimited number of sensing SUs, where PUs have preemptive priority over SUs. The two models differ in whether or not an arriving PU can interrupt a SU transmission also in case there are still idle channels available. 
We show that both models have the same stability condition and that the model without interruptions in case of available idle channels has a slightly lower number of sensing SUs than the model with interruptions.Default risk prediction and feature extraction using a penalized deep neural networkhttps://zbmath.org/1496.620152022-11-17T18:59:28.764376Z"Lin, Cunjie"https://zbmath.org/authors/?q=ai:lin.cunjie"Qiao, Nan"https://zbmath.org/authors/?q=ai:qiao.nan"Zhang, Wenli"https://zbmath.org/authors/?q=ai:zhang.wenli"Li, Yang"https://zbmath.org/authors/?q=ai:li.yang.5"Ma, Shuangge"https://zbmath.org/authors/?q=ai:ma.shuanggeSummary: Online peer-to-peer lending platforms provide loans directly from lenders to borrowers without passing through traditional financial institutions. For lenders on these platforms to avoid loss, it is crucial that they accurately assess default risk so that they can make appropriate decisions. In this study, we develop a penalized deep learning model to predict default risk based on survival data. As opposed to simply predicting whether default will occur, we focus on predicting the probability of default over time. Moreover, by adding an additional one-to-one layer in the neural network, we achieve feature selection and estimation simultaneously by incorporating an \(L_1\)-penalty into the objective function. The minibatch gradient descent algorithm makes it possible to handle massive data. 
An analysis of real-world loan data and simulations demonstrate the model's competitive practical performance, which suggests favorable potential applications in peer-to-peer lending platforms.Conditional density estimation and simulation through optimal transporthttps://zbmath.org/1496.620682022-11-17T18:59:28.764376Z"Tabak, Esteban G."https://zbmath.org/authors/?q=ai:tabak.esteban-g"Trigila, Giulio"https://zbmath.org/authors/?q=ai:trigila.giulio"Zhao, Wenjun"https://zbmath.org/authors/?q=ai:zhao.wenyunSummary: A methodology to estimate from samples the probability density of a random variable \(x\) conditional on the values of a set of covariates \(\{z_l\}\) is proposed. The methodology relies on a data-driven formulation of the Wasserstein barycenter, posed as a minimax problem in terms of the conditional map carrying each sample point to the barycenter and a potential characterizing the inverse of this map. This minimax problem is solved through the alternation of a flow developing the map in time and the maximization of the potential through an alternate projection procedure. The dependence on the covariates \(\{z_l\}\) is formulated in terms of convex combinations, so that it can be applied to variables of nearly any type, including real, categorical and distributional. The methodology is illustrated through numerical examples on synthetic and real data.
The real-world example chosen is meteorological, forecasting the temperature distribution at a given location as a function of time, and estimating the joint distribution at a location of the highest and lowest daily temperatures as a function of the date.Model-based kernel sum rule: kernel Bayesian inference with probabilistic modelshttps://zbmath.org/1496.621012022-11-17T18:59:28.764376Z"Nishiyama, Yu"https://zbmath.org/authors/?q=ai:nishiyama.yu"Kanagawa, Motonobu"https://zbmath.org/authors/?q=ai:kanagawa.motonobu"Gretton, Arthur"https://zbmath.org/authors/?q=ai:gretton.arthur"Fukumizu, Kenji"https://zbmath.org/authors/?q=ai:fukumizu.kenjiSummary: Kernel Bayesian inference is a principled approach to nonparametric inference in probabilistic graphical models, where probabilistic relationships between variables are learned from data in a nonparametric manner. Various algorithms of kernel Bayesian inference have been developed by combining kernelized basic probabilistic operations such as the kernel sum rule and kernel Bayes' rule. However, the current framework is fully nonparametric, and it does not allow a user to flexibly combine nonparametric and model-based inferences. This is inefficient when there are good probabilistic models (or simulation models) available for some parts of a graphical model; this is in particular true in scientific fields where ``models'' are the central topic of study. Our contribution in this paper is to introduce a novel approach, termed the \textit{model-based kernel sum rule} (Mb-KSR), to combine a probabilistic model and kernel Bayesian inference. By combining the Mb-KSR with the existing kernelized probabilistic rules, one can develop various algorithms for hybrid (i.e., nonparametric and model-based) inferences. As an illustrative example, we consider Bayesian filtering in a state space model, where typically there exists an accurate probabilistic model for the state transition process. 
We propose a novel filtering method that combines model-based inference for the state transition process and data-driven, nonparametric inference for the observation generating process. We empirically validate our approach with synthetic and real-data experiments, the latter being the problem of vision-based mobile robot localization in robotics, which illustrates the effectiveness of the proposed hybrid approach.Robust estimation of Gaussian linear structural equation models with equal error varianceshttps://zbmath.org/1496.621022022-11-17T18:59:28.764376Z"Park, Sion"https://zbmath.org/authors/?q=ai:park.sion"Park, Gunwoong"https://zbmath.org/authors/?q=ai:park.gunwoongSummary: This study develops a new approach to learning Gaussian linear structural equation models (SEMs) with equal error variances from possibly corrupted observations by outliers. More precisely, we consider the two types of corrupted Gaussian linear SEMs depending on the outlier type and develop a structure learning algorithm for the models. The proposed algorithm consists of two steps in which the effect of outliers is eliminated: Step (1) infers the ordering using conditional variances, and Step (2) estimates the presence of edges using conditional independence relationships. Various numerical experiments verify that the proposed algorithm is empirically consistent even when corrupted samples exist. It is further confirmed that the proposed algorithm performs better than the state-of-the-art US, GDS, PC, and GES algorithms in noisy data settings. 
Using the corrupted real-world examination marks data, we also demonstrate that the proposed algorithm is well-suited to capturing the interpretable relationships between subjects.High-efficiency chaotic time series prediction based on time convolution neural networkhttps://zbmath.org/1496.621502022-11-17T18:59:28.764376Z"Cheng, Wei"https://zbmath.org/authors/?q=ai:cheng.wei"Wang, Yan"https://zbmath.org/authors/?q=ai:wang.yan.6|wang.yan.5|wang.yan.3"Peng, Zheng"https://zbmath.org/authors/?q=ai:peng.zheng"Ren, Xiaodong"https://zbmath.org/authors/?q=ai:ren.xiaodong"Shuai, Yubei"https://zbmath.org/authors/?q=ai:shuai.yubei"Zang, Shengyin"https://zbmath.org/authors/?q=ai:zang.shengyin"Liu, Hao"https://zbmath.org/authors/?q=ai:liu.hao|liu.hao.2|liu.hao.1"Cheng, Hao"https://zbmath.org/authors/?q=ai:cheng.hao"Wu, Jiagui"https://zbmath.org/authors/?q=ai:wu.jiaguiSummary: The prediction of chaotic time series is important for both science and technology. In recent years, this type of prediction has improved significantly with the development of deep learning. Here, we propose a temporal convolutional network (TCN) model for the prediction of chaotic time series. Our TCN model offers highly stable training, high parallelism, and a flexible receptive field. Comparative experiments with the classic long short-term memory (LSTM) network and hybrid (CNN-LSTM) neural network show that the TCN model can reduce the training time by a factor of more than two. Furthermore, the network can focus on more important information because of the attention mechanism. By embedding the convolutional block attention module (CBAM), which combines the spatial and channel attention mechanisms, we obtain a new model, TCN-CBAM. This model is comprehensively better than the LSTM, CNN-LSTM, and TCN models in the prediction of classical systems (Chen system, Lorenz system, and sunspots).
In terms of prediction accuracy, the TCN-CBAM model obtains better results for the four main evaluation indicators: root mean square error, mean absolute error, coefficient of determination, and Spearman's correlation coefficient, with a maximum increase of 41.4\%. The TCN-CBAM model also has the shortest training time among the four classic models.Empirical Bayesian learning in AR graphical modelshttps://zbmath.org/1496.621602022-11-17T18:59:28.764376Z"Zorzi, Mattia"https://zbmath.org/authors/?q=ai:zorzi.mattiaSummary: We address the problem of learning graphical models which correspond to high dimensional autoregressive stationary stochastic processes. A graphical model describes the conditional dependence relations among the components of a stochastic process and represents an important tool in many fields. We propose an empirical Bayes estimator of sparse autoregressive graphical models and latent-variable autoregressive graphical models. Numerical experiments show the benefit of taking this Bayesian perspective for learning these types of graphical models.Multivariate deep learning model with ensemble pruning for time series forecastinghttps://zbmath.org/1496.621612022-11-17T18:59:28.764376Z"Kosuri, Mohit"https://zbmath.org/authors/?q=ai:kosuri.mohit"Tandu, Cherry"https://zbmath.org/authors/?q=ai:tandu.cherry"Sarkar, Sobhan"https://zbmath.org/authors/?q=ai:sarkar.sobhan"Maiti, J."https://zbmath.org/authors/?q=ai:maiti.jyotirmoySummary: To predict future events using historical data, Time Series Forecasting (TSF) should be used to get precise and accurate predictions. It has been a challenging issue to deal with the errors and value loss while predicting the future; hence, a dynamic error correction is proposed to overcome the errors. Additionally, it is important to find a fast optimization technique to avoid this difficulty.
Therefore, this study proposes an improved stacking-based ensemble pruning method, namely Genetic Algorithm (GA)-II, to produce high accuracy and strong stability in time series forecasting. A meta-predictor, Kernel Ridge Regression (KRR), is proposed for stacking the ensemble models owing to its improved forecasting performance. The main goal of this study is to attain reliable and precise time-series forecasting. Particular types of deep neural networks are effective at extracting particular types of data features, so the proposed approach combines deep learning and ensemble learning techniques: different deep neural networks serve as the base learners and are combined through boosting and stacking, which yields better results with fewer calculations, even though the neural networks take more time under these methods. In time-series data, the values change dynamically, which may increase or decrease the prediction accuracy; to overcome this, error correction methods such as Dynamic Error Correction (DEC), together with the Non-Dominated Sorting Genetic Algorithm (NSGA) and the Multi-Populated Non-Dominated Sorting Genetic Algorithm-II (GA-II), are used to obtain solutions that are optimal in terms of accuracy.
For the entire collection see [Zbl 1491.65006].A two-fold multi-objective multi-verse optimization-based time series forecastinghttps://zbmath.org/1496.621622022-11-17T18:59:28.764376Z"Tandu, Cherry"https://zbmath.org/authors/?q=ai:tandu.cherry"Kosuri, Mohit"https://zbmath.org/authors/?q=ai:kosuri.mohit"Sarkar, Sobhan"https://zbmath.org/authors/?q=ai:sarkar.sobhan"Maiti, J."https://zbmath.org/authors/?q=ai:maiti.jyotirmoySummary: In this study, to overcome errors due to high-dimensional data and to obtain the best forecasting predictions for time series data, we employ a feature selection method that balances exploitation and exploration. Due to the large number of irrelevant factors within the data, it is imperative to classify the tasks by using a feature selection method. Therefore, a two-fold multi-objective multi-verse optimization has been proposed as a feature selection method to obtain a trade-off between minimizing the loss and minimizing the number of features selected. A Convolutional Neural Network (CNN) has been used as the basic predictor. A dynamic error correction is also proposed to further reduce the error of the deep learning models and obtain the best time series forecasting. Although many multi-objective optimization techniques have been used to deal with high-dimensional data, the proposed method showed the best trade-off for feature selection.
For the entire collection see [Zbl 1491.65006].Nearest neighbor forecasting using sparse data representationhttps://zbmath.org/1496.621632022-11-17T18:59:28.764376Z"Vlachos, Dimitrios"https://zbmath.org/authors/?q=ai:vlachos.dimitrios"Thomakos, Dimitrios"https://zbmath.org/authors/?q=ai:thomakos.dimitrios-dSummary: The method of nearest neighbors, as well as its variants, has proven to be a very powerful tool in the non-parametric prediction and categorization of experimental measurements. On the other hand, the amount of data available today, as well as its dimensionality and complexity, is growing rapidly in many scientific fields, such as economics, biology, chemistry, medicine, and others. Usually, the data and their characteristics have semantic dependence and a lot of noise. At this point, sparse data representation, which deals with these problems with great success, comes into play. In this paper, we present the application of these two tried and tested techniques for prediction in various fields related to economics. New techniques are presented, as well as exhaustive tests for the evaluation of the proposed methods. The results are encouraging for continuing research into the possibilities of sparse representation combined with well-proven machine learning techniques.
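As a concrete illustration of the baseline method this summary builds on (the paper's sparse-representation layer is not reproduced here), a minimal nearest-neighbor forecaster embeds the series into delay windows and averages the successors of the closest historical patterns; all names and default values below are illustrative.

```python
# Minimal sketch of nearest-neighbour forecasting on a scalar series:
# delay embedding of dimension m, Euclidean distance, and the forecast is the
# mean of the successors of the k closest historical patterns.
import numpy as np

def knn_forecast(series, m=3, k=2):
    """One-step-ahead forecast by averaging successors of nearest patterns."""
    x = np.asarray(series, dtype=float)
    # all historical windows of length m, each paired with the value that follows it
    patterns = np.stack([x[i:i + m] for i in range(len(x) - m)])
    successors = x[m:]
    query = x[-m:]                      # the most recent window
    dist = np.linalg.norm(patterns - query, axis=1)
    nearest = np.argsort(dist)[:k]      # indices of the k closest windows
    return float(successors[nearest].mean())
```

On an exactly periodic series such as `[0, 1, 2] * 4`, every nearest neighbor of the final window `[0, 1, 2]` is an earlier copy of the same window, so the forecast is the value that follows it in each period, namely `0.0`.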
For the entire collection see [Zbl 1483.00042].Stock market predictions using FastRNN-based modelhttps://zbmath.org/1496.621762022-11-17T18:59:28.764376Z"Yadav, Konark"https://zbmath.org/authors/?q=ai:yadav.konark"Yadav, Milind"https://zbmath.org/authors/?q=ai:yadav.milind"Saini, Sandeep"https://zbmath.org/authors/?q=ai:saini.sandeepSummary: Predicting the correct values of stocks in fast fluctuating high-frequency financial data is always a challenging task. Existing state-of-the-art models are very efficient in terms of accuracy but lag in prediction speed. In this work, we aim to develop a deep-learning-based fast model for live predictions of stock values with minimum errors. The proposed model is based on fast recurrent neural networks (FastRNNs), which provide both of the desired features. We have considered the 1-min time interval stock data of four companies for a period of one day. The model is also designed to have low computational complexity so that it can run live predictions. The model's performance is measured by root mean square error (RMSE) along with computation time. The model outperforms LSTM, CNN, and other deep learning models for live predictions of stock values.
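The FastRNN cell behind this model class admits a compact sketch. The update rule below follows the published FastRNN formulation of Kusupati et al. (a residual mix of a cheap candidate state and the previous state, gated by two trainable scalars); the parameter values are illustrative and this is not the authors' code.

```python
# Sketch of a FastRNN cell: h_t = alpha * tanh(W x_t + U h_{t-1} + b) + beta * h_{t-1}.
# The scalar gates alpha and beta (learned during training) stabilize gradients,
# which is what makes the cell cheap and fast compared to an LSTM.
import numpy as np

def fastrnn_step(h, x, W, U, b, alpha=0.1, beta=0.9):
    """One FastRNN update on hidden state h given input x."""
    return alpha * np.tanh(W @ x + U @ h + b) + beta * h

def fastrnn_run(xs, W, U, b, alpha=0.1, beta=0.9):
    """Run the cell over a sequence of inputs and return the final hidden state."""
    h = np.zeros(W.shape[0])
    for x in xs:
        h = fastrnn_step(h, x, W, U, b, alpha, beta)
    return h
```

A prediction head (e.g., a linear readout of the final state) would sit on top of `fastrnn_run`; that part, like all names here, is left out as it varies by task.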
For the entire collection see [Zbl 1491.65006].Regressive class modelling for predicting trajectories of COVID-19 fatalities using statistical and machine learning modelshttps://zbmath.org/1496.621792022-11-17T18:59:28.764376Z"Chowdhury, Rafiqul I."https://zbmath.org/authors/?q=ai:chowdhury.rafiqul-islam"Hasan, M. Tariqul"https://zbmath.org/authors/?q=ai:hasan.m-tariqul"Sneddon, Gary"https://zbmath.org/authors/?q=ai:sneddon.garySummary: The COVID-19 (SARS-CoV-2 virus) pandemic has led to a substantial loss of human life worldwide, posing an unparalleled challenge to the public health system. The economic, psychological, and social disarray generated by the COVID-19 pandemic is devastating. Public health experts and epidemiologists worldwide are struggling to formulate policies on how to control this pandemic, as there is no effective vaccine or treatment available that provides long-term immunity against different variants of COVID-19 or eradicates the virus completely. As the new cases and fatalities are recorded daily or weekly, the responses are likely to be repeated or longitudinally correlated. Thus, studying the impact of available covariates and new cases on deaths from COVID-19 repeatedly would provide significant insights into this pandemic's dynamics. For a better understanding of the dynamics of spread, in this paper, we study the impact of various risk factors on the new cases and deaths over time. To do that, we propose a marginal-conditional based joint modelling approach to predict trajectories, which is crucial to the health policy planners for taking necessary measures. The conditional model is a natural choice to study the underlying property of dependence in consecutive new cases and deaths. Using this model, one can examine the relationship between outcomes and predictors, and it is possible to calculate risks of the sequence of events repeatedly.
The advantage of repeated measures is that one can see how individual responses change over time. The predictive accuracy of the proposed model is also compared with various machine learning techniques. The machine learning algorithms used in this paper are extended to accommodate repeated responses. The performance of the proposed model is illustrated using COVID-19 data collected from the Texas Health and Human Services.Learning large \(Q\)-matrix by restricted Boltzmann machineshttps://zbmath.org/1496.622082022-11-17T18:59:28.764376Z"Li, Chengcheng"https://zbmath.org/authors/?q=ai:li.chengcheng"Ma, Chenchen"https://zbmath.org/authors/?q=ai:ma.chenchen"Xu, Gongjun"https://zbmath.org/authors/?q=ai:xu.gongjunSummary: Estimation of the large \(Q\)-matrix in cognitive diagnosis models (CDMs) with many items and latent attributes from observational data has been a huge challenge due to its high computational cost. Borrowing ideas from deep learning literature, we propose to learn the large \(Q\)-matrix by restricted Boltzmann machines (RBMs) to overcome the computational difficulties. In this paper, key relationships between RBMs and CDMs are identified. Consistent and robust learning of the \(Q\)-matrix in various CDMs is shown to be valid under certain conditions. Our simulation studies under different CDM settings show that RBMs not only outperform the existing methods in terms of learning speed, but also maintain good recovery accuracy of the \(Q\)-matrix. In the end, we illustrate the applicability and effectiveness of our method through a TIMSS mathematics data set.MATLAB programming for numerical analysishttps://zbmath.org/1496.650022022-11-17T18:59:28.764376Z"Pérez López, César"https://zbmath.org/authors/?q=ai:perez-lopez.cesarPublisher's description: MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. 
The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java.
This book introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. You will first become familiar with the MATLAB environment, and then you will begin to harness the power of MATLAB. You will learn the MATLAB language, starting with an introduction to variables, and how to manipulate numbers, vectors, matrices, arrays and character strings. You will learn about MATLAB's high-precision capabilities, and how you can use MATLAB to solve problems, making use of arithmetic, relational and logical operators in combination with the common functions and operations of real and complex analysis and linear algebra.
You will learn to implement various numerical methods for optimization, interpolation and solving non-linear equations. You will discover how MATLAB can solve problems in differential and integral calculus, both numerically and symbolically, including techniques for solving ordinary and partial differential equations, and how to graph the solutions in brilliant high resolution. You will then expand your knowledge of the MATLAB language by learning how to use commands which enable you to investigate the convergence of sequences and series, and explore continuity and other analytical features of functions in one and several variables.Stochastic partial differential equations for computer vision with uncertain datahttps://zbmath.org/1496.650032022-11-17T18:59:28.764376Z"Preusser, Tobias"https://zbmath.org/authors/?q=ai:preusser.tobias"Kirby, Robert M."https://zbmath.org/authors/?q=ai:kirby.robert-m-ii"Pätz, Torben"https://zbmath.org/authors/?q=ai:patz.torbenPublisher's description: In image processing and computer vision applications such as medical or scientific image data analysis, as well as in industrial scenarios, images are used as input measurement data. It is good scientific practice that proper measurements must be equipped with error and uncertainty estimates. For many applications, not only the measured values but also their errors and uncertainties, should be -- and more and more frequently are -- taken into account for further processing. This error and uncertainty propagation must be done for every processing step such that the final result comes with a reliable precision estimate.
The goal of this book is to introduce the reader to the recent advances from the field of uncertainty quantification and error propagation for computer vision, image processing, and image analysis that are based on partial differential equations (PDEs). It presents a concept with which error propagation and sensitivity analysis can be formulated with a set of basic operations. The approach discussed in this book has the potential for application in all areas of quantitative computer vision, image processing, and image analysis. In particular, it might help medical imaging finally become a scientific discipline that is characterized by the classical paradigms of observation, measurement, and error awareness.
This book is comprised of eight chapters. After an introduction to the goals of the book (Chapter 1), we present a brief review of PDEs and their numerical treatment (Chapter 2), PDE-based image processing (Chapter 3), and the numerics of stochastic PDEs (Chapter 4). We then proceed to define the concept of stochastic images (Chapter 5), describe how to accomplish image processing and computer vision with stochastic images (Chapter 6), and demonstrate the use of these principles for accomplishing sensitivity analysis (Chapter 7). Chapter 8 concludes the book and highlights new research topics for the future.Efficient reparametrization into standard form and algorithmic characterization of rational ruled surfaceshttps://zbmath.org/1496.650272022-11-17T18:59:28.764376Z"Alcázar, Juan Gerardo"https://zbmath.org/authors/?q=ai:alcazar.juan-gerardo"Hermoso, Carlos"https://zbmath.org/authors/?q=ai:hermoso.carlosSummary: We provide a simple and efficient algorithm for recognizing whether or not a given rational surface is ruled, and, in the affirmative case, for computing a rational \textit{standard} parametrization, i.e. a rational parametrization of the form \(\boldsymbol{x}(t, s) = \boldsymbol{u}(t) + s \boldsymbol{v}(t)\), where \(\boldsymbol{u}(t)\), \(\boldsymbol{v}(t)\) are rational vector functions. 
The results are based on the fact, proved in the paper, that the asymptotic directions of a ruled rational surface are rational.Multiscale approach for three-dimensional conformal image registrationhttps://zbmath.org/1496.650282022-11-17T18:59:28.764376Z"Han, Huan"https://zbmath.org/authors/?q=ai:han.huan"Wang, Zhengping"https://zbmath.org/authors/?q=ai:wang.zhengping"Zhang, Yimin"https://zbmath.org/authors/?q=ai:zhang.yimin|zhang.yimin.1Discovering faster matrix multiplication algorithms with reinforcement learninghttps://zbmath.org/1496.650602022-11-17T18:59:28.764376Z"Fawzi, Alhussein"https://zbmath.org/authors/?q=ai:fawzi.alhussein"Balog, Matej"https://zbmath.org/authors/?q=ai:balog.matej"Huang, Aja"https://zbmath.org/authors/?q=ai:huang.aja"Hubert, Thomas"https://zbmath.org/authors/?q=ai:hubert.thomas"Romera-Paredes, Bernardino"https://zbmath.org/authors/?q=ai:romera-paredes.bernardino"Barekatain, Mohammadamin"https://zbmath.org/authors/?q=ai:barekatain.mohammadamin"Novikov, Alexander"https://zbmath.org/authors/?q=ai:novikov.aleksandr-konstantinovich|novikov.alexander-a"Ruiz, Francisco J. R."https://zbmath.org/authors/?q=ai:ruiz.francisco-j-r"Schrittwieser, Julian"https://zbmath.org/authors/?q=ai:schrittwieser.julian"Swirszcz, Grzegorz"https://zbmath.org/authors/?q=ai:swirszcz.grzegorz-m"Silver, David"https://zbmath.org/authors/?q=ai:silver.david-m"Hassabis, Demis"https://zbmath.org/authors/?q=ai:hassabis.demis"Kohli, Pushmeet"https://zbmath.org/authors/?q=ai:kohli.pushmeetSummary: Improving the efficiency of algorithms for fundamental computations can have a widespread impact, as it can affect the overall speed of a large amount of computations. Matrix multiplication is one such primitive task, occurring in many systems -- from neural networks to scientific computing routines. The automatic discovery of algorithms using machine learning offers the prospect of reaching beyond human intuition and outperforming the current best human-designed algorithms. 
However, automating the algorithm discovery procedure is intricate, as the space of possible algorithms is enormous. Here we report a deep reinforcement learning approach based on AlphaZero for discovering efficient and provably correct algorithms for the multiplication of arbitrary matrices. Our agent, AlphaTensor, is trained to play a single-player game where the objective is finding tensor decompositions within a finite factor space. AlphaTensor discovered algorithms that outperform the state-of-the-art complexity for many matrix sizes. Particularly relevant is the case of \(4\times 4\) matrices in a finite field, where AlphaTensor's algorithm improves on Strassen's two-level algorithm for the first time, to our knowledge, since its discovery 50 years ago. We further showcase the flexibility of AlphaTensor through different use-cases: algorithms with state-of-the-art complexity for structured matrix multiplication and improved practical efficiency by optimizing matrix multiplication for runtime on specific hardware. Our results highlight AlphaTensor's ability to accelerate the process of algorithmic discovery on a range of problems, and to optimize for different criteria.Optimization and learning with nonlocal calculushttps://zbmath.org/1496.650722022-11-17T18:59:28.764376Z"Nagaraj, Sriram"https://zbmath.org/authors/?q=ai:nagaraj.sriramSummary: Nonlocal models have recently had a major impact in nonlinear continuum mechanics and are used to describe physical systems/processes which cannot be accurately described by classical, calculus based ``local'' approaches. In part, this is due to their multiscale nature that enables aggregation of micro-level behavior to obtain a macro-level description of singular/irregular phenomena such as peridynamics, crack propagation, anomalous diffusion and transport phenomena. At the core of these models are \textit{nonlocal} differential operators, including nonlocal analogs of the gradient/Hessian. 
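The nonlocal analogs of the gradient mentioned above can be illustrated with a small discrete stand-in: a weighted average of symmetric difference quotients over a set of interaction radii. This is an illustrative 1D analogue of our own devising, not the paper's exact integral operator; as the radii shrink it recovers the classical derivative for smooth functions, yet it remains well defined at points where the classical derivative does not exist.

```python
import numpy as np

def nonlocal_gradient(u, x, radii, weights):
    """Illustrative 1D nonlocal gradient: a weighted average of symmetric
    difference quotients (u(x+h) - u(x-h)) / (2h) over interaction radii h.
    The weights play the role of a (normalized) interaction kernel."""
    radii = np.asarray(radii, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize the interaction kernel
    quotients = (u(x + radii) - u(x - radii)) / (2.0 * radii)
    return float(weights @ quotients)
```

For a smooth function such as \(u(x) = x^2\) this reproduces the classical derivative, while for \(u(x) = |x|\) at \(x = 0\), where the classical derivative fails, it still returns a finite value.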
This paper initiates the use of such nonlocal operators in the context of optimization and learning. We define and analyze the convergence properties of nonlocal analogs of (stochastic) gradient descent and Newton's method on Euclidean spaces. Our results indicate that as the nonlocal interactions become less noticeable, the optima corresponding to nonlocal optimization converge to the ``usual'' optima. At the same time, we argue that nonlocal learning is possible in situations where standard calculus fails. As a stylized numerical example of this, we consider the problem of non-differentiable parameter estimation on a non-smooth translation manifold and show that our \textit{nonlocal} gradient descent recovers the unknown translation parameter from a non-differentiable objective function.WARPd: a linearly convergent first-order primal-dual algorithm for inverse problems with approximate sharpness conditionshttps://zbmath.org/1496.650742022-11-17T18:59:28.764376Z"Colbrook, Matthew J."https://zbmath.org/authors/?q=ai:colbrook.matthew-jA new family of hybrid three-term conjugate gradient methods with applications in image restorationhttps://zbmath.org/1496.650762022-11-17T18:59:28.764376Z"Jiang, Xianzhen"https://zbmath.org/authors/?q=ai:jiang.xianzhen"Liao, Wei"https://zbmath.org/authors/?q=ai:liao.wei"Yin, Jianghua"https://zbmath.org/authors/?q=ai:yin.jianghua"Jian, Jinbao"https://zbmath.org/authors/?q=ai:jian.jinbaoSummary: In this paper, based on the hybrid conjugate gradient method and the convex combination technique, a new family of hybrid three-term conjugate gradient methods are proposed for solving unconstrained optimization. The conjugate parameter in the search direction is a hybrid of Dai-Yuan conjugate parameter and any one. The search direction then is the sum of the negative gradient direction and a convex combination in relation to the last search direction and the gradient at the previous iteration. 
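One plausible instantiation of the three-term direction just described can be sketched as follows. The Dai-Yuan parameter, the simplified Armijo backtracking (standing in for the weak Wolfe search of the paper), and the steepest-descent safeguard are our own assumptions for a runnable illustration, not the authors' exact scheme:

```python
import numpy as np

def hybrid_three_term_cg(f, grad, x0, lam=0.5, tol=1e-8, max_iter=500):
    """Sketch of a hybrid three-term CG iteration: the direction is the
    negative gradient plus a convex combination (weight lam) of the last
    direction and the previous negative gradient, scaled by a Dai-Yuan-type
    conjugate parameter."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking line search (simplified stand-in for weak Wolfe)
        t, fx, gTd = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * gTd:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y
        beta = (g_new @ g_new) / denom if abs(denom) > 1e-12 else 0.0  # Dai-Yuan
        d_new = -g_new + beta * (lam * d + (1.0 - lam) * (-g))
        if g_new @ d_new >= 0:  # safeguard: fall back to steepest descent
            d_new = -g_new
        x, g, d = x_new, g_new, d_new
    return x
```

On a strongly convex quadratic this safeguarded iteration drives the gradient norm to zero, matching the descent property the abstract emphasizes.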
Without choosing any specific conjugate parameters, we show that the search direction generated by the family always possesses the descent property independent of line search technique, and that it is globally convergent under usual assumptions and the weak Wolfe line search. To verify the effectiveness of the presented family, we further design a specific conjugate parameter, and perform medium-large-scale numerical experiments for smooth unconstrained optimization and image restoration problems. The numerical results show the encouraging efficiency and applicability of the proposed methods even compared with the state-of-the-art methods.A spatial color compensation model using saturation-value total variationhttps://zbmath.org/1496.650772022-11-17T18:59:28.764376Z"Wang, Wei"https://zbmath.org/authors/?q=ai:wang.wei.39|wang.wei.15|wang.wei.36|wang.wei.41|wang.wei.12|wang.wei.23|wang.wei.24|wang.wei.16|wang.wei.9|wang.wei.38|wang.wei.30|wang.wei.21|wang.wei.20|wang.wei.25|wang.wei.46|wang.wei.27|wang.wei.13|wang.wei.45|wang.wei.47|wang.wei.8|wang.wei.28|wang.wei.3|wang.wei.2|wang.wei.34|wang.wei.50|wang.wei.49|wang.wei.1|wang.wei.40|wang.wei.19|wang.wei.31|wang.wei.32|wang.wei.29"Yang, Yuming"https://zbmath.org/authors/?q=ai:yang.yuming"Ng, Michael K."https://zbmath.org/authors/?q=ai:ng.michael-k|ng.michael-ka-shingDeep learning solver for solving advection-diffusion equation in comparison to finite difference methodshttps://zbmath.org/1496.651272022-11-17T18:59:28.764376Z"Salman, Ahmed Khan"https://zbmath.org/authors/?q=ai:salman.ahmed-khan"Pouyaei, Arman"https://zbmath.org/authors/?q=ai:pouyaei.arman"Choi, Yunsoo"https://zbmath.org/authors/?q=ai:choi.yunsoo"Lops, Yannic"https://zbmath.org/authors/?q=ai:lops.yannic"Sayeed, Alqamah"https://zbmath.org/authors/?q=ai:sayeed.alqamahSummary: In numerical modeling, the advection-diffusion equation describes the long-range transport of atmospheric pollutants. 
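In its standard two-dimensional form the advection-diffusion equation referenced above reads as follows, where \(c\) is the pollutant concentration, \(u, v\) are generic velocity components, and \(D\) is a diffusivity (the paper's exact notation may differ):

```latex
\frac{\partial c}{\partial t}
  + u \frac{\partial c}{\partial x}
  + v \frac{\partial c}{\partial y}
  = D \left( \frac{\partial^2 c}{\partial x^2}
           + \frac{\partial^2 c}{\partial y^2} \right)
```

The advective terms transport \(c\) with the flow while the diffusive term spreads it, which is the balance the FDM and DNN solvers compared below must both resolve.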
Most numerical models in the atmospheric science community are based on finite difference methods (FDM). In this study, we conduct a comprehensive comparative analysis of standard FDM-based numerical solvers with a deep learning-based solver, the objective of which is to solve the 2D unsteady advection-diffusion equation. Performance is compared on three key aspects: accuracy, stability, and interpolation. In the analysis, we find that despite being trained with a coarse resolution, the DNN solver is the most accurate among all the solvers. For the DNN solver, the mean absolute error and maximum absolute error of fluid concentration are up to 2 orders of magnitude lower than those of the FDM-based method, corresponding to 95\% and 97\% relative error reductions, respectively. The analysis also shows that the DNN solver is more stable in coarse spatial-temporal domains. Owing to its continuous nature, the DNN can interpolate a solution with consistent accuracy in a resampled spatial and temporal domain magnified up to 5 and 16 times, respectively. This study highlights the fundamental differences between the partial differential equation solving methods by comparing the DNN and FDM-based solvers and presents the DNN solver as a potential alternative to the FDM-based solvers in atmospheric numerical modeling.An optimized, parallel computation of the ghost layer for adaptive hybrid forest mesheshttps://zbmath.org/1496.651502022-11-17T18:59:28.764376Z"Holke, Johannes"https://zbmath.org/authors/?q=ai:holke.johannes"Knapp, David"https://zbmath.org/authors/?q=ai:knapp.david.1"Burstedde, Carsten"https://zbmath.org/authors/?q=ai:burstedde.carstenThe authors develop and discuss a parallel algorithm to compute the ghost layer for adaptive, nonconforming forest-of-trees meshes of mixed element types including cubes, prisms, and tetrahedra. The algorithm is available in the \texttt{t8code} library and restricted to face neighbors. 
It extends algorithms developed for forests of octrees available for example in the \texttt{p4est} library and is both flexible and efficient due to its recursive nature, as demonstrated in some numerical experiments.
Reviewer: Hendrik Ranocha (Hamburg)Solving optical tomography with deep learninghttps://zbmath.org/1496.652002022-11-17T18:59:28.764376Z"Fan, Yuwei"https://zbmath.org/authors/?q=ai:fan.yuwei"Ying, Lexing"https://zbmath.org/authors/?q=ai:ying.lexingSummary: This paper presents a neural network approach for solving two-dimensional optical tomography (OT) problems based on the radiative transfer equation. The mathematical problem of OT is to recover the optical properties of an object based on the albedo operator that is accessible from boundary measurements. Both the forward map from the optical properties to the albedo operator and the inverse map are high-dimensional and nonlinear. For the circular tomography geometry, a perturbative analysis shows that the forward map can be approximated by a vectorized convolution operator in the angular direction. Motivated by this, we propose effective neural network architectures for the forward and inverse maps based on convolution layers, with weights learned from training datasets. Numerical results demonstrate the efficiency of the proposed neural networks.Eigen-convergence of Gaussian kernelized graph Laplacian by manifold heat interpolationhttps://zbmath.org/1496.652032022-11-17T18:59:28.764376Z"Cheng, Xiuyuan"https://zbmath.org/authors/?q=ai:cheng.xiuyuan"Wu, Nan"https://zbmath.org/authors/?q=ai:wu.nanSummary: We study the spectral convergence of graph Laplacians to the Laplace-Beltrami operator when the kernelized graph affinity matrix is constructed from \(N\) random samples on a \(d\)-dimensional manifold in an ambient Euclidean space. By analyzing Dirichlet form convergence and constructing candidate approximate eigenfunctions via convolution with manifold heat kernel, we prove eigen-convergence with rates as \(N\) increases. 
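The construction studied above, a graph Laplacian built from a Gaussian-kernelized affinity matrix on random samples, can be sketched as follows. The constant rescaling required for convergence to the Laplace-Beltrami operator is omitted here since it does not change the eigenvectors; this is a generic construction, not the paper's code:

```python
import numpy as np

def graph_laplacian(X, eps, normalized=False):
    """Gaussian-kernelized graph Laplacian from samples X (n x d):
    affinity W_ij = exp(-|x_i - x_j|^2 / eps), degree D_ii = sum_j W_ij,
    and either L = D - W (un-normalized) or I - D^{-1} W (random-walk)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / eps)
    deg = W.sum(axis=1)
    if normalized:
        return np.eye(len(X)) - np.diag(1.0 / deg) @ W
    return np.diag(deg) - W

# samples on the unit circle, a 1-dimensional manifold in ambient R^2
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
X = np.column_stack([np.cos(theta), np.sin(theta)])
L = graph_laplacian(X, eps=0.1)
```

The un-normalized Laplacian is symmetric positive semi-definite with zero row sums, so its low-lying spectrum is the object whose convergence rates the summary quantifies.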
The best eigenvalue convergence rate is \(N^{-1/(d/2 +2)}\) (when the kernel bandwidth parameter \(\epsilon \sim (\log N/N)^{1/(d/2+2)}\)) and the best eigenvector 2-norm convergence rate is \(N^{-1/(d/2+3)}\) (when \(\epsilon \sim (\log N / N )^{1/(d/2+3)}\)). These rates hold up to a \(\log N\)-factor for finitely many low-lying eigenvalues of both un-normalized and normalized graph Laplacians. When data density is non-uniform, we prove the same rates for the density-corrected graph Laplacian, and we also establish new operator point-wise convergence rate and Dirichlet form convergence rate as intermediate results. Numerical results are provided to support the theory.Multiscale high-dimensional sparse Fourier algorithms for noisy datahttps://zbmath.org/1496.652422022-11-17T18:59:28.764376Z"Choi, Bosu"https://zbmath.org/authors/?q=ai:choi.bosu"Christlieb, Andrew"https://zbmath.org/authors/?q=ai:christlieb.andrew-j"Wang, Yang"https://zbmath.org/authors/?q=ai:wang.yang.10Summary: We develop an efficient and robust high-dimensional sparse Fourier algorithm for noisy samples. Earlier in the paper [the authors, ``High-dimensional sparse Fourier algorithms'', Preprint, \url{arXiv:1606.07407}], an efficient sparse Fourier algorithm with \(\Theta (ds \log s)\) average-case runtime and \(\Theta (ds)\) sampling complexity under certain assumptions was developed for signals that are \(s\)-sparse and bandlimited in the \(d\)-dimensional Fourier domain, i.e. there are at most \(s\) energetic frequencies and they are in \([- N/2, N/2)^d \cap \mathbb{Z}^d\). However, in practice the measurements of signals often contain noise, and in some cases may only be nearly sparse in the sense that they are well approximated by the best \(s\) Fourier modes. 
In this paper, we propose a multiscale sparse Fourier algorithm for noisy samples that proves to be both robust against noise and efficient.Learning in high-dimensional feature spaces using ANOVA-based fast matrix-vector multiplicationhttps://zbmath.org/1496.652432022-11-17T18:59:28.764376Z"Nestler, Franziska"https://zbmath.org/authors/?q=ai:nestler.franziska"Stoll, Martin"https://zbmath.org/authors/?q=ai:stoll.martin"Wagner, Theresa"https://zbmath.org/authors/?q=ai:wagner.theresaSummary: Kernel matrices are crucial in many learning tasks such as support vector machines or kernel ridge regression. The kernel matrix is typically dense and large-scale. Depending on the dimension of the feature space even the computation of all of its entries in reasonable time becomes a challenging task. For such dense matrices the cost of a matrix-vector product scales quadratically with the dimensionality \(N\), if no customized methods are applied. We propose the use of an ANOVA kernel, where we construct several kernels based on lower-dimensional feature spaces for which we provide fast algorithms realizing the matrix-vector products. We employ the non-equispaced fast Fourier transform (NFFT), which is of linear complexity for fixed accuracy. Based on a feature grouping approach, we then show how the fast matrix-vector products can be embedded into a learning method choosing kernel ridge regression and the conjugate gradient solver. 
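The embedding of fast matrix-vector products into kernel ridge regression with a conjugate gradient solver, as described above, can be sketched as follows. A dense \(O(N^2)\) Gaussian-kernel matvec stands in for the paper's fast ANOVA/NFFT-based product, and the regularization convention \((K + \lambda N I)\alpha = y\) is our assumption for the illustration:

```python
import numpy as np

def gaussian_kernel_matvec(X, v, sigma):
    """Dense Gaussian-kernel matrix-vector product K @ v. The fast
    ANOVA/NFFT method would replace exactly this routine."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma**2)) @ v

def krr_cg(X, y, sigma=1.0, lam=1e-2, tol=1e-10, max_iter=200):
    """Kernel ridge regression: solve (K + lam*N*I) alpha = y by conjugate
    gradients, touching K only through matrix-vector products."""
    n = len(y)
    matvec = lambda v: gaussian_kernel_matvec(X, v, sigma) + lam * n * v
    alpha = np.zeros(n)
    r = y - matvec(alpha)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        step = rs / (p @ Ap)
        alpha += step * p
        r -= step * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha
```

Because CG only ever calls `matvec`, swapping the dense product for a linear-complexity NFFT-based one accelerates the whole learning method without changing the solver.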
We illustrate the performance of our approach on several data sets.The science of quantitative information flowhttps://zbmath.org/1496.680012022-11-17T18:59:28.764376Z"Alvim, Mário S."https://zbmath.org/authors/?q=ai:alvim.mario-s"Chatzikokolakis, Konstantinos"https://zbmath.org/authors/?q=ai:chatzikokolakis.konstantinos"McIver, Annabelle"https://zbmath.org/authors/?q=ai:mciver.annabelle-k"Morgan, Carroll"https://zbmath.org/authors/?q=ai:morgan.carroll-c"Palamidessi, Catuscia"https://zbmath.org/authors/?q=ai:palamidessi.catuscia"Smith, Geoffrey"https://zbmath.org/authors/?q=ai:smith.geoffrey-b|smith.geoffrey-d|smith.geoffrey|smith.geoffrey-howard|smith.geoffrey-s|smith.geoffrey-lPublisher's description: This book presents a comprehensive mathematical theory that explains precisely what information flow is, how it can be assessed quantitatively -- so bringing precise meaning to the intuition that certain information leaks are small enough to be tolerated -- and how systems can be constructed that achieve rigorous, quantitative information-flow guarantees in those terms. It addresses the fundamental challenge that functional and practical requirements frequently conflict with the goal of preserving confidentiality, making perfect security unattainable.
Topics include: a systematic presentation of how unwanted information flow, i.e., ``leaks'', can be quantified in operationally significant ways and then bounded, both with respect to estimated benefit for an attacking adversary and by comparisons between alternative implementations; a detailed study of capacity, refinement, and Dalenius leakage, supporting robust leakage assessments; a unification of information-theoretic channels and information-leaking sequential programs within the same framework; and a collection of case studies, showing how the theory can be applied to interesting realistic scenarios.
The text is unified, self-contained, and comprehensive, accessible to students and researchers with some knowledge of discrete probability and undergraduate mathematics, and contains exercises to facilitate its use as a course textbook.Neural networks and numerical analysishttps://zbmath.org/1496.680022022-11-17T18:59:28.764376Z"Després, Bruno"https://zbmath.org/authors/?q=ai:despres.brunoPublisher's description: This book uses numerical analysis as the main tool to investigate methods in machine learning and neural networks. The efficiency of neural network representations for general functions and for polynomial functions is studied in detail, together with an original description of the Latin hypercube method and of the ADAM algorithm for training. Furthermore, unique features include the use of TensorFlow in implementation sessions, and the description of ongoing research on the construction of new optimized numerical schemes.
This timely volume uses numerical analysis as the main tool to study methods in machine learning and artificial intelligence. It explains mathematical notions, such as approximation and optimization, which are the roots of neural networks.A guide to graph algorithmshttps://zbmath.org/1496.680032022-11-17T18:59:28.764376Z"Kloks, Ton"https://zbmath.org/authors/?q=ai:kloks.ton"Xiao, Mingyu"https://zbmath.org/authors/?q=ai:xiao.mingyuThis book provides a guided tour through the research area of graph algorithms. The authors describe many advanced techniques in the design of graph algorithms, especially the well-known treewidth. Almost one third of the book is devoted to the application of graph decompositions, such as tree decomposition in solving NP-hard problems in a parameterized fashion. Moreover, the authors give a good survey on recent topics in graph algorithms, which are supported by results from theory.
The book is divided into four chapters. In the first chapter, the authors provide a basic introduction to graph theory and set their notation. Most of the proofs for this chapter are given as exercises. The second chapter gives a taste of algorithms in graph theory by illustrating some basic algorithms, including the Bron-Kerbosch algorithm and the Blossom algorithm, as well as basic techniques used in the analysis and design of algorithms, such as the Lovász local lemma or Szemerédi's regularity lemma. The second chapter also contains a short review of NP-hardness and complexity analysis. The third chapter is a very short review of graph algebras and monadic second-order logic, which has applications in showing fixed-parameter tractability of problems. The last chapter is a very long summary of recent trends in graph algorithms with various topics such as graph decompositions and related parameters (e.g. treewidth, rankwidth, and so on). Graph coloring, immersions, and domination are the next topics discussed in this chapter. It also discusses homomorphisms, but only gives pointers to isomorphism (isomorphism is not discussed in this book). Finally, graph products are discussed.
One of the main advantages of this book is its exercises, which serve as a source for further research. In summary, this book is a good candidate for a course on graph algorithms intended for last-year undergraduates or early graduate students in computer science.
Reviewer: Ali Shakiba (Rafsanǧān)Data science and machine learning. Mathematical and statistical methodshttps://zbmath.org/1496.680042022-11-17T18:59:28.764376Z"Kroese, Dirk P."https://zbmath.org/authors/?q=ai:kroese.dirk-p"Botev, Zdravko I."https://zbmath.org/authors/?q=ai:botev.zdravko-i"Taimre, Thomas"https://zbmath.org/authors/?q=ai:taimre.thomas"Vaisman, Radislav"https://zbmath.org/authors/?q=ai:vaisman.radislavPublisher's description: The purpose of Data Science and Machine Learning: Mathematical and Statistical Methods is to provide an accessible, yet comprehensive textbook intended for students interested in gaining a better understanding of the mathematics and statistics that underpin the rich variety of ideas and machine learning algorithms in data science.
Key features:
\begin {itemize}
\item Focuses on mathematical understanding.
\item Presentation is self-contained, accessible, and comprehensive.
\item Extensive list of exercises and worked-out examples.
\item Many concrete algorithms with Python code.
\item Full color throughout.
\end {itemize}Lie group machine learninghttps://zbmath.org/1496.680052022-11-17T18:59:28.764376Z"Li, Fanzhang"https://zbmath.org/authors/?q=ai:li.fanzhang"Zhang, Li"https://zbmath.org/authors/?q=ai:zhang.li.6"Zhang, Zhao"https://zbmath.org/authors/?q=ai:zhang.zhao.1Publisher's description: This book explains deep learning concepts and derives semi-supervised learning and nuclear learning frameworks based on cognition mechanism and Lie group theory. Lie group machine learning is a theoretical basis for brain intelligence, neuromorphic learning (NL), advanced machine learning, and advanced artificial intelligence. The book further discusses algorithms and applications in tensor learning, spectrum estimation learning, Finsler geometry learning, homology boundary learning, and prototype theory. With abundant case studies, this book can be used as a reference book for senior college students and graduate students as well as college teachers and scientific and technical personnel involved in computer science, artificial intelligence, machine learning, automation, mathematics, management science, cognitive science, financial management, and data analysis. In addition, this text can be used as the basis for teaching the principles of machine learning.Exploring formalisation. A primer in human-readable mathematics in Lean 3 with examples from simplicial topologyhttps://zbmath.org/1496.680062022-11-17T18:59:28.764376Z"Löh, Clara"https://zbmath.org/authors/?q=ai:loh.claraPublisher's description: This primer on mathematics formalisation provides a rapid, hands-on introduction to proof verification in Lean.
After a quick introduction to Lean, the basic techniques of human-readable formalisation are introduced, illustrated by simple examples on maps, induction and real numbers. Subsequently, typical design options are discussed and brought to life through worked examples in the setting of simplicial complexes (a higher-dimensional generalisation of graph theory). Finally, the book demonstrates how current research in algebraic and geometric topology can be formalised by means of suitable abstraction layers.
Informed by the author's recent teaching and research experience, this book allows students and researchers to quickly get started with formalising and checking their proofs. The core material of the book is accessible to mathematics students with basic programming skills. For the final chapter, familiarity with elementary category theory and algebraic topology is recommended.Machine and deep learning algorithms and applicationshttps://zbmath.org/1496.680072022-11-17T18:59:28.764376Z"Shanthamallu, Uday Shankar"https://zbmath.org/authors/?q=ai:shanthamallu.uday-shankar"Spanias, Andreas"https://zbmath.org/authors/?q=ai:spanias.andreas-sPublisher's description: This book introduces basic machine learning concepts and applications for a broad audience that includes students, faculty, and industry practitioners. We begin by describing how machine learning provides capabilities to computers and embedded systems to learn from data. A typical machine learning algorithm involves training, and generally the performance of a machine learning model improves with more training data. Deep learning is a sub-area of machine learning that involves extensive use of layers of artificial neural networks typically trained on massive amounts of data. Machine and deep learning methods are often used in contemporary data science tasks to address the growing data sets and detect, cluster, and classify data patterns. Although commercial interest in machine learning has grown relatively recently, its roots go back decades. We note that nearly all organizations, including industry, government, defense, and health, are using machine learning to address a variety of needs and applications.
The machine learning paradigms presented can be broadly divided into the following three categories: supervised learning, unsupervised learning, and semi-supervised learning. Supervised learning algorithms focus on learning a mapping function, and they are trained with supervision on labeled data. Supervised learning is further sub-divided into classification and regression algorithms. Unsupervised learning typically does not have access to ground truth, and often the goal is to learn or uncover the hidden pattern in the data. Through semi-supervised learning, one can effectively utilize a large volume of unlabeled data and a limited amount of labeled data to improve machine learning model performances. Deep learning and neural networks are also covered in this book. Deep neural networks have attracted a lot of interest during the last ten years due to the availability of graphics processing units (GPU) computational power, big data, and new software platforms. They have strong capabilities in terms of learning complex mapping functions for different types of data. We organize the book as follows. The book starts by introducing concepts in supervised, unsupervised, and semi-supervised learning. Several algorithms and their inner workings are presented within these three categories. We then continue with a brief introduction to artificial neural network algorithms and their properties. In addition, we cover an array of applications and provide extensive bibliography. The book ends with a summary of the key machine learning concepts.Model checking quantum systems. Principles and algorithmshttps://zbmath.org/1496.680082022-11-17T18:59:28.764376Z"Ying, Mingsheng"https://zbmath.org/authors/?q=ai:ying.mingsheng"Feng, Yuan"https://zbmath.org/authors/?q=ai:feng.yuanPublisher's description: Model checking is one of the most successful verification techniques and has been widely adopted in traditional computing and communication hardware and software industries. 
This book provides the first systematic introduction to model checking techniques applicable to quantum systems, with broad potential applications in the emerging industry of quantum computing and quantum communication as well as quantum physics. Suitable for use as a course textbook and for self-study, the book offers step-by-step explanations and exercises that graduate and senior undergraduate students will appreciate. Researchers and engineers in the related fields can further develop these techniques in their own work, with the final chapter outlining potential future applications.Competitive analysis of the online dial-a-ride problemhttps://zbmath.org/1496.680092022-11-17T18:59:28.764376Z"Birx, Alexander"https://zbmath.org/authors/?q=ai:birx.alexanderSummary: Online optimization, in contrast to classical optimization, deals with optimization problems whose input data is not immediately available, but instead is revealed piece by piece. An online algorithm has to make irrevocable optimization decisions based on the arriving pieces of data to compute a solution of the online problem. The quality of an online algorithm is measured by the competitive ratio, which is the quotient of the solution computed by the online algorithm and the optimum offline solution, i.e., the solution computed by an optimum algorithm that has knowledge about all data from the start.
In this thesis we examine the online optimization problem online Dial-a-Ride. This problem consists of a server starting at a distinct point of a metric space, called the origin, and serving transportation requests that appear over time. The goal is to minimize the makespan, i.e., to complete serving all requests as fast as possible. We distinguish between a closed version, where the server is required to return to the origin, and an open version, where the server is allowed to stay at the destination of the last served request.
In this thesis, we provide new lower bounds for the competitive ratio of online Dial-a-Ride on the real line for both the open and the closed version by expanding upon the approach of Bjelde et al. In the case of the open version, the improved lower bound separates online Dial-a-Ride from its special case online TSP, where starting position and destination of requests coincide.
To produce improved upper bounds for the competitive ratio of online Dial-a-Ride, we generalize the design of the Ignore algorithm and the Smartstart algorithm into the class of schedule-based algorithms. We show lower bounds for the competitive ratios of algorithms of this class and then provide a thorough analysis of Ignore and Smartstart. Identifying and correcting a critical weakness of Smartstart gives us the improved Smarterstart algorithm. This schedule-based algorithm attains the best known upper bound for open online Dial-a-Ride on the real line as well as on arbitrary metric spaces.
Finally, we provide an analysis of the Replan algorithm, improving several known bounds for the algorithm's competitive ratio.Modular neural networks and type-2 fuzzy systems for pattern recognitionhttps://zbmath.org/1496.680102022-11-17T18:59:28.764376Z"Melin, Patricia"https://zbmath.org/authors/?q=ai:melin.patriciaPublisher's description: This book describes hybrid intelligent systems using type-2 fuzzy logic and modular neural networks for pattern recognition applications. Hybrid intelligent systems combine several intelligent computing paradigms, including fuzzy logic, neural networks, and bio-inspired optimization algorithms, which can be used to produce powerful pattern recognition systems. Type-2 fuzzy logic is an extension of traditional type-1 fuzzy logic that enables managing higher levels of uncertainty in complex real-world problems, which are of particular importance in the area of pattern recognition. The book is organized in three main parts, each containing a group of chapters built around a similar subject. The first part consists of chapters with the main theme of theory and design algorithms: chapters that propose new models and concepts that form the basis for intelligent pattern recognition. The second part contains chapters with the main theme of using type-2 fuzzy models and modular neural networks with the aim of designing intelligent systems for complex pattern recognition problems, including iris, ear, face and voice recognition. The third part contains chapters with the theme of evolutionary optimization of type-2 fuzzy systems and modular neural networks in the area of intelligent pattern recognition, which includes the application of genetic algorithms for obtaining optimal type-2 fuzzy integration systems and ideal neural network architectures for solving problems in this area.Tools and algorithms for the construction and analysis of systems. 
26th international conference, TACAS 2020, held as part of the European joint conferences on theory and practice of software, ETAPS 2020, Dublin, Ireland, April 25--30, 2020. Proceedings. Part Ihttps://zbmath.org/1496.680112022-11-17T18:59:28.764376ZThe articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1408.68008; Zbl 1408.68024; Zbl 1408.68025]. For Part II of the proceedings of the present conference see [Zbl 1471.68010].Mathematical software -- ICMS 2020. 7th international conference, Braunschweig, Germany, July 13--16, 2020. Proceedingshttps://zbmath.org/1496.680122022-11-17T18:59:28.764376ZThe articles of this volume will be reviewed individually. For the preceding conference see [Zbl 1391.68004].
Indexed articles:
\textit{Horigome, Noriyuki; Terui, Akira; Mikawa, Masahiko}, A design and an implementation of an inverse kinematics computation in robotics using Gröbner bases, 3-13 [Zbl 07600873]
\textit{Nair, Akshar; Davenport, James; Sankaran, Gregory}, Curtains in CAD: why are they a problem and how do we fix them?, 17-26 [Zbl 07600874]
\textit{Chen, Changbo}, Chordality preserving incremental triangular decomposition and its implementation, 27-36 [Zbl 07600875]
\textit{Bianchi, Francesca}, \(\mathbb{Q}(\sqrt{-3})\)-integral points on a Mordell curve, 39-50 [Zbl 07600876]
\textit{Li, Xiaxin; Rodriguez, Jose Israel; Wang, Botong}, A numerical approach for computing Euler characteristics of affine varieties, 51-60 [Zbl 07600877]
\textit{Hauenstein, Jonathan D.; Regan, Margaret H.}, Evaluating and differentiating a polynomial using a pseudo-witness set, 61-69 [Zbl 07600878]
\textit{Matsubara-Heo, Saiei-Jaeyeong; Takayama, Nobuki}, Algorithms for Pfaffian systems and cohomology intersection numbers of hypergeometric integrals, 73-84 [Zbl 07600879]
\textit{Elsenhans, Andreas-Stephan; Jahnel, Jörg}, Computations with algebraic surfaces, 87-93 [Zbl 07600880]
\textit{Farr, Ricky E.; Pauli, Sebastian; Saidak, Filip}, Evaluating fractional derivatives of the Riemann zeta function, 94-101 [Zbl 07600881]
\textit{Siccha, Sergio}, Towards efficient normalizers of primitive groups, 105-114 [Zbl 07600882]
\textit{Borovik, Alexandre; Yalçınkaya, Şükrü}, Homomorphic encryption and some black box attacks, 115-124 [Zbl 07600883]
\textit{Moede, Tobias}, Nilpotent quotients of associative \(\mathbb{Z}\)-algebras and augmentation quotients of Baumslag-Solitar groups, 125-130 [Zbl 07600884]
\textit{Eick, Bettina; Vaughan-Lee, Michael}, The GAP package LiePRing, 131-140 [Zbl 07600885]
\textit{Betten, Anton; Mukthineni, Tarun}, Classifying simplicial dissections of convex polyhedra with symmetry, 143-152 [Zbl 07600886]
\textit{De Bruyn, Bart}, Classification results for hyperovals of generalized quadrangles, 153-161 [Zbl 07600887]
\textit{Topalova, Svetlana; Zhelezova, Stela}, Isomorphism and invariants of parallelisms of projective spaces, 162-172 [Zbl 07600888]
\textit{Bouyuklieva, Stefka; Bouyukliev, Iliya}, Classification of linear codes by extending their residuals, 173-180 [Zbl 07600889]
\textit{Bouyukliev, Iliya}, The program \textsc{Generation} in the software package \textsc{QextNewEdition}, 181-189 [Zbl 07600890]
\textit{Bruns, Winfried}, Algebraic polytopes in Normaliz, 193-201 [Zbl 07600891]
\textit{Joswig, Michael; Vater, Paul}, Real tropical hyperfaces by patchworking in \texttt{polymake}, 202-211 [Zbl 07600892]
\textit{Chalkis, Apostolos; Emiris, Ioannis Z.; Fisikopoulos, Vissarion}, Practical volume estimation of zonotopes by a new annealing schedule for cooling convex bodies, 212-221 [Zbl 07600893]
\textit{Macchia, Antonio; Wiebe, Amy}, Slack ideals in Macaulay2, 222-231 [Zbl 07600894]
\textit{Kastner, Lars; Panizzut, Marta}, Hyperplane arrangements in \texttt{polymake}, 232-240 [Zbl 07600895]
\textit{Akian, Marianne; Allamigeon, Xavier; Boyet, Marin; Gaubert, Stéphane}, A convex programming approach to solve posynomial systems, 241-250 [Zbl 07600896]
\textit{Bauer, Andrej; Haselwarter, Philipp G.; Petković, Anja}, Equality checking for general type theories in Andromeda 2, 253-259 [Zbl 07600897]
\textit{Olšák, Miroslav}, GeoLogic -- graphical interactive theorem prover for Euclidean geometry, 263-271 [Zbl 07600898]
\textit{Fu, Yaoshun; Yu, Wensheng}, A formalization of properties of continuous functions on closed intervals, 272-280 [Zbl 07600899]
\textit{Chen, Changbo; Zhu, Zhangpeng; Chi, Haoyu}, Variable ordering selection for cylindrical algebraic decomposition with artificial neural networks, 281-291 [Zbl 07600900]
\textit{Brown, Christopher W.; Daves, Glenn Christopher}, Applying machine learning to heuristics for real polynomial constraint solving, 292-301 [Zbl 07600901]
\textit{Florescu, Dorian; England, Matthew}, A machine learning based software pipeline to pick the variable ordering for algorithms with polynomial inputs, 302-311 [Zbl 07600902]
\textit{Johansson, Fredrik}, FunGrim: a symbolic library for special functions, 315-323 [Zbl 07600903]
\textit{Runnwerth, Mila; Stocker, Markus; Auer, Sören}, Operational research literature as a use case for the open research knowledge graph, 327-334 [Zbl 07600904]
\textit{Greiner-Petter, André; Schubotz, Moritz; Aizawa, Akiko; Gipp, Bela}, Making presentation math computable: proposing a context sensitive approach for translating LaTeX to computer algebra systems, 335-341 [Zbl 07600905]
\textit{Brandt, Alexander; Moir, Robert H. C.; Maza, Marc Moreno}, Employing C++ templates in the design of a computer algebra library, 342-352 [Zbl 07600906]
\textit{Halbach, Dennis Tobias}, Mathematical world knowledge contained in the multilingual Wikipedia project, 353-361 [Zbl 07600907]
\textit{Di Cosmo, Roberto}, Archiving and referencing source code with software heritage, 362-373 [Zbl 07600908]
\textit{Kaluba, Marek; Lorenz, Benjamin; Timme, Sascha}, Polymake.jl: a new interface to \texttt{polymake}, 377-385 [Zbl 07600909]
\textit{Marco-Buzunáriz, Miguel A.}, Web based notebooks for teaching, an experience at Universidad de Zaragoza, 386-392 [Zbl 07600910]
\textit{Bouillot, Olivier}, Phase portraits of bi-dimensional zeta values, 393-405 [Zbl 07600911]
\textit{Schaefer, Jan Frederik; Amann, Kai; Kohlhase, Michael}, Prototyping controlled mathematical languages in Jupyter notebooks, 406-415 [Zbl 07600912]
\textit{Hamada, Tatsuyoshi; Nakagawa, Yoshiyuki; Tamura, Makoto}, Method to create multiple choice exercises for computer algebra system, 419-425 [Zbl 07600914]
\textit{Nakamura, Kento; Ahara, Kazushi}, A flow-based programming environment for geometrical construction, 426-431 [Zbl 07600915]
\textit{Benner, Peter; Werner, Steffen W. R.}, MORLAB -- a model order reduction framework in MATLAB and Octave, 432-441 [Zbl 07600916]
\textit{Grasegger, Georg; Legerský, Jan}, FlexRiLoG -- a SageMath package for motions of graphs, 442-450 [Zbl 07600917]
\textit{Quinby, Francis; Kim, Seyeon; Kang, Sohee; Pollanen, Marco; Reynolds, Michael G.; Burr, Wesley S.}, Markov transition matrix analysis of mathematical expression input models, 451-461 [Zbl 07600918]
\textit{Abbott, John}, Certifying irreducibility in \(\mathbb{Z}[x]\), 462-472 [Zbl 07600919]
\textit{Hellström, Lars}, A content dictionary for in-object comments, 473-481 [Zbl 07600920]
\textit{van der Hoeven, Joris; Monagan, Michael}, Implementing the tangent Graeffe root finding method, 482-492 [Zbl 07600921]New perspectives on hybrid intelligent system design based on fuzzy logic, neural networks and metaheuristicshttps://zbmath.org/1496.680132022-11-17T18:59:28.764376ZPublisher's description: In this book, recent developments on fuzzy logic, neural networks and optimization algorithms, as well as their hybrid combinations, are presented. In addition, the above-mentioned methods are applied to areas such as intelligent control and robotics, pattern recognition, medical diagnosis, time series prediction and optimization of complex problems. The book contains a collection of papers focused on hybrid intelligent systems based on soft computing techniques. There are some papers with the main theme of type-1 and type-2 fuzzy logic, consisting of papers that propose new concepts and algorithms based on type-1 and type-2 fuzzy logic and their applications. There are also some papers that offer theoretical concepts and applications of meta-heuristics in different areas. Another group of papers describes diverse applications of fuzzy logic, neural networks and hybrid intelligent systems in medical problems. There are also some papers that present theory and practice of neural networks in different areas of application. In addition, there are papers that present theory and practice of optimization and evolutionary algorithms in different areas of application. Finally, there are some papers describing applications of fuzzy logic, neural networks and meta-heuristics in pattern recognition and classification problems.
The articles of mathematical interest will be reviewed individually.Approximation, randomization, and combinatorial optimization. Algorithms and techniques, APPROX/RANDOM 2022, University of Illinois, Urbana-Champaign, USA, virtual conference, September 19--21, 2022https://zbmath.org/1496.680142022-11-17T18:59:28.764376ZThe articles of this volume will be reviewed individually. For the preceding conferences see [Zbl 1473.68020].30th annual European symposium on algorithms, ESA 2022, Berlin/Potsdam, Germany, September 5--9, 2022https://zbmath.org/1496.680152022-11-17T18:59:28.764376ZThe articles of this volume will be reviewed individually. For the preceding symposium see [Zbl 1473.68018].Preface: Special issue of the 25th RCRA international workshop on experimental evaluation of algorithms for solving problems with combinatorial explosionhttps://zbmath.org/1496.680162022-11-17T18:59:28.764376Z(no abstract)Computer science -- theory and applications. 15th international computer science symposium in Russia, CSR 2020, Yekaterinburg, Russia, June 29 -- July 3, 2020. Proceedingshttps://zbmath.org/1496.680172022-11-17T18:59:28.764376Z"Fernau, Henning"https://zbmath.org/authors/?q=ai:fernau.henningThe articles of this volume will be reviewed individually. For the preceding symposium see [Zbl 1416.68013].
Indexed articles:
\textit{Ablayev, Farid; Ablayev, Marat; Vasiliev, Alexander}, Quantum hashing and fingerprinting for quantum cryptography and computations, 1-15 [Zbl 07603909]
\textit{Agrawal, Akanksha; Zehavi, Meirav}, Parameterized analysis of art gallery and terrain guarding, 16-29 [Zbl 07603910]
\textit{Brandes, Ulrik}, Central positions in social networks, 30-45 [Zbl 07603911]
\textit{Andrade De Melo, Alexsander; De Oliveira Oliveira, Mateus}, Second-order finite automata, 46-63 [Zbl 07603912]
\textit{Faliszewski, Piotr; Skowron, Piotr; Slinko, Arkadii; Szufa, Stanisław; Talmon, Nimrod}, Isomorphic distances among elections, 64-78 [Zbl 07603913]
\textit{Zhu, Binhai}, Tandem duplications, segmental duplications and deletions, and their applications, 79-102 [Zbl 07603914]
\textit{Akhmedov, Maxim}, Faster 2-disjoint-shortest-paths algorithm, 103-116 [Zbl 07603915]
\textit{Babu, Jasine; Benson, Deepu; Rajendraprasad, Deepak; Vaka, Sai Nishant}, An improvement to Chvátal and Thomassen's upper bound for oriented diameter, 117-129 [Zbl 07603916]
\textit{Bauwens, Bruno; Blinnikov, Ilya}, The normalized algorithmic information distance can not be approximated, 130-141 [Zbl 07603917]
\textit{Bazhenov, Nikolay}, Definable subsets of polynomial-time algebraic structures, 142-154 [Zbl 07603918]
\textit{Bodini, Olivier; Genitrini, Antoine; Naima, Mehdi; Singh, Alexandros}, Families of monotonic trees: combinatorial enumeration and asymptotics, 155-168 [Zbl 07603919]
\textit{Boneva, Iovka; Niehren, Joachim; Sakho, Momar}, Nested regular expressions can be compiled to small deterministic nested word automata, 169-183 [Zbl 07603920]
\textit{Çağırıcı, Onur}, On embeddability of unit disk graphs onto straight lines, 184-197 [Zbl 07603921]
\textit{Chistopolskaya, Anastasiya; Podolskii, Vladimir V.}, On the decision tree complexity of threshold functions, 198-210 [Zbl 07603922]
\textit{Datta, Samir; Gupta, Chetan; Jain, Rahul; Sharma, Vimal Raj; Tewari, Raghunath}, Randomized and symmetric catalytic computation, 211-223 [Zbl 07603923]
\textit{Fomin, Fedor V.; Ramamoorthi, Vijayaragunathan}, On the parameterized complexity of the expected coverage problem, 224-236 [Zbl 07603924]
\textit{Gurvich, Vladimir; Vyalyi, Mikhail}, Computational hardness of multidimensional subtraction games, 237-249 [Zbl 07603925]
\textit{Kanesh, Lawqueen; Maity, Soumen; Muluk, Komal; Saurabh, Saket}, Parameterized complexity of fair feedback vertex set problem, 250-262 [Zbl 07603926]
\textit{Kim, Jaeyoon; Volkovich, Ilya; Zhang, Nelson Xuzhi}, The power of Leibniz-like functions as oracles, 263-275 [Zbl 07603927]
\textit{Kosolobov, Dmitry; Merkurev, Oleg}, Optimal skeleton Huffman trees revisited, 276-288 [Zbl 07603928]
\textit{Kuske, Dietrich}, The subtrace order and counting first-order logic, 289-302 [Zbl 07603929]
\textit{Merkle, Wolfgang; Titov, Ivan}, Speedable left-c.e. numbers, 303-313 [Zbl 07603930]
\textit{Neveling, Marc; Rothe, Jörg; Zorn, Roman}, The complexity of controlling Condorcet, fallback, and \(k\)-veto elections by replacing candidates or voters, 314-327 [Zbl 07603931]
\textit{Okhotin, Alexander; Olkhovsky, Ilya}, On the transformation of LL(\(k\))-linear grammars to LL(1)-linear, 328-340 [Zbl 07603932]
\textit{Philip, Geevarghese; Rani, M. R.; Subashini, R.}, On computing the Hamiltonian index of graphs, 341-353 [Zbl 07603933]
\textit{Rupp, Tobias; Funke, Stefan}, A lower bound for the query phase of contraction hierarchies and hub labels, 354-366 [Zbl 07603934]
\textit{Sahu, Abhishek; Saurabh, Saket}, Kernelization of \textsc{arc disjoint cycle packing} in \(\alpha\)-bounded digraphs, 367-378 [Zbl 07603935]
\textit{Talambutsa, Alexey}, On subquadratic derivational complexity of semi-Thue systems, 379-392 [Zbl 07603936]
\textit{Volkovich, Ilya}, The untold story of \(\mathsf{SBP}\), 393-405 [Zbl 07603937]
\textit{Wu, Yaokun; Zhu, Yinfeng}, Weighted rooted trees: fat or tall?, 406-418 [Zbl 07603938]
\textit{Yamamura, Akihiro; Kase, Riki; Jajcayová, Tatiana B.}, Groupoid action and rearrangement problem of bicolor arrays by prefix reversals, 419-431 [Zbl 07603939]Treewidth, kernels, and algorithms. Essays dedicated to Hans L. Bodlaender on the occasion of his 60th birthdayhttps://zbmath.org/1496.680182022-11-17T18:59:28.764376ZThe articles of this volume will be reviewed individually.
Indexed articles:
\textit{Arnborg, Stefan; Proskurowski, Andrzej}, Seeing arboretum for the (partial \(k\)-) trees, 3-6 [Zbl 07604200]
\textit{Fellows, Michael R.; Rosamond, Frances A.}, Collaborating with Hans: some remaining wonderments, 7-17 [Zbl 07604201]
\textit{Hermelin, Danny}, Hans Bodlaender and the theory of kernelization lower bounds, 18-21 [Zbl 07604202]
\textit{van Leeuwen, Jan}, Algorithms, complexity, and Hans, 22-27 [Zbl 07604203]
\textit{De Berg, Mark; Kisfaludi-Bak, Sándor}, Lower bounds for dominating set in ball graphs and for weighted dominating set in unit-ball graphs, 31-48 [Zbl 07604204]
\textit{Fluschnik, Till; Molter, Hendrik; Niedermeier, Rolf; Renken, Malte; Zschoche, Philipp}, As time goes by: reflections on treewidth for temporal graphs, 49-77 [Zbl 07604205]
\textit{Grigoriev, Alexander}, Possible and impossible attempts to solve the treewidth problem via ILPs, 78-88 [Zbl 07604206]
\textit{Jansen, Bart M. P.}, Crossing paths with Hans Bodlaender: a personal view on cross-composition for sparsification lower bounds, 89-111 [Zbl 07604207]
\textit{Lokshtanov, Daniel; Saurabh, Saket; Zehavi, Meirav}, Efficient graph minors theory and parameterized algorithms for (planar) disjoint paths, 112-128 [Zbl 07604208]
\textit{Marx, Dániel}, Four short stories on surprising algorithmic uses of treewidth, 129-144 [Zbl 07604209]
\textit{Nederlof, Jesper}, Algorithms for NP-hard problems via rank-related parameters of matrices, 145-164 [Zbl 07604210]
\textit{Otachi, Yota}, A survey on spanning tree congestion, 165-172 [Zbl 07604211]
\textit{Pilipczuk, Marcin}, Surprising applications of treewidth bounds for planar graphs, 173-188 [Zbl 07604212]
\textit{Pilipczuk, Michał}, Computing tree decompositions, 189-213 [Zbl 07604213]
\textit{Tamaki, Hisao}, Experimental analysis of treewidth, 214-221 [Zbl 07604214]
\textit{Thilikos, Dimitrios M.}, A retrospective on (meta) kernelization, 222-246 [Zbl 07604215]
\textit{van der Zanden, Tom C.}, Games, puzzles and treewidth, 247-261 [Zbl 07604216]
\textit{van Rooij, Johan M. M.}, Fast algorithms for join operations on tree decompositions, 262-297 [Zbl 07604217]Combinatorial algorithms. 31st international workshop, IWOCA 2020, Bordeaux, France, June 8--10, 2020, Proceedingshttps://zbmath.org/1496.680192022-11-17T18:59:28.764376ZThe articles of this volume will be reviewed individually. For the preceding workshop see [Zbl 1416.68006].
Indexed articles:
\textit{Fekete, Sándor P.}, Coordinating swarms of objects at extreme dimensions, 3-13 [Zbl 07600993]
\textit{Acuña, Vicente; Lima, Leandro; Italiano, Giuseppe F.; Sciarria, Luca Pepè; Sagot, Marie-France; Sinaimeri, Blerina}, A family of tree-based generators for bubbles in directed graphs, 17-29 [Zbl 07600995]
\textit{Alecu, Bogdan; Lozin, Vadim; de Werra, Dominique}, The micro-world of cographs, 30-42 [Zbl 07600996]
\textit{Belmonte, Rémy; Hanaka, Tesshu; Kanzaki, Masaaki; Kiyomi, Masashi; Kobayashi, Yasuaki; Kobayashi, Yusuke; Lampis, Michael; Ono, Hirotaka; Otachi, Yota}, Parameterized complexity of \((A,\ell)\)-path packing, 43-55 [Zbl 07600997]
\textit{Bensmail, Julien; Fioravantes, Foivos; Nisse, Nicolas}, On proper labellings of graphs with minimum label sum, 56-68 [Zbl 07600998]
\textit{Blanché, Alexandre; Mizuta, Haruka; Ouvrard, Paul; Suzuki, Akira}, Decremental optimization of dominating sets under the reconfiguration framework, 69-82 [Zbl 07600999]
\textit{Böhnlein, Toni; Schaudt, Oliver}, On the complexity of Stackelberg matroid pricing problems, 83-96 [Zbl 07601000]
\textit{Bright, Curtis; Cheung, Kevin K. H.; Stevens, Brett; Kotsireas, Ilias; Ganesh, Vijay}, Nonexistence certificates for ovals in a projective plane of order ten, 97-111 [Zbl 07601001]
\textit{Campos, Victor; Lopes, Raul; Marino, Andrea; Silva, Ana}, Edge-disjoint branchings in temporal graphs, 112-125 [Zbl 07601002]
\textit{Chakraborty, Sankardeep; Sadakane, Kunihiko; Satti, Srinivasa Rao}, Optimal in-place algorithms for basic graph problems, 126-139 [Zbl 07601003]
\textit{Chen, Li-Hsuan; Hung, Ling-Ju; Lotze, Henri; Rossmanith, Peter}, Further results on online node- and edge-deletion problems with advice, 140-153 [Zbl 07601004]
\textit{Chiarelli, Nina; Krnc, Matjaž; Milanič, Martin; Pferschy, Ulrich; Pivač, Nevena; Schauer, Joachim}, Fair packing of independent sets, 154-165 [Zbl 07601005]
\textit{Choudhary, Pratibha}, Polynomial time algorithms for tracking path problems, 166-179 [Zbl 07601006]
\textit{Christman, Ananya; Chung, Christine; Jaczko, Nicholas; Li, Tianzhi; Westvold, Scott; Xu, Xinyue; Yuen, David}, New bounds for maximizing revenue in Online Dial-a-Ride, 180-194 [Zbl 07601007]
\textit{Cordasco, Gennaro; Gargano, Luisa; Rescigno, Adele A.}, Iterated type partitions, 195-210 [Zbl 07601008]
\textit{Damaschke, Peter}, Two robots patrolling on a line: integer version and approximability, 211-223 [Zbl 07601009]
\textit{Damaschke, Peter}, Ordering a sparse graph to minimize the sum of right ends of edges, 224-236 [Zbl 07601010]
\textit{Das, Avinandan; Kanesh, Lawqueen; Madathil, Jayakrishnan; Muluk, Komal; Purohit, Nidhi; Saurabh, Saket}, On the complexity of singly connected vertex deletion, 237-250 [Zbl 07601011]
\textit{Drgas-Burchardt, Ewa; Furmańczyk, Hanna; Sidorowicz, Elżbieta}, Equitable \(d\)-degenerate choosability of graphs, 251-263 [Zbl 07601012]
\textit{Foucaud, Florent; Gras, Benjamin; Perez, Anthony; Sikora, Florian}, On the complexity of \textsc{broadcast domination} and \textsc{Multipacking} in digraphs, 264-276 [Zbl 07601013]
\textit{Gowda, Kishen N.; Misra, Neeldhara; Patel, Vraj}, A parameterized perspective on attacking and defending elections, 277-288 [Zbl 07601014]
\textit{Groz, Benoît; Mallmann-Trenn, Frederik; Mathieu, Claire; Verdugo, Victor}, Skyline computation with noisy comparisons, 289-303 [Zbl 07601015]
\textit{Hamada, Koki; Miyazaki, Shuichi; Okamoto, Kazuya}, Strongly stable and maximum weakly stable noncrossing matchings, 304-315 [Zbl 07601016]
\textit{Hasunuma, Toru}, Connectivity keeping trees in 2-connected graphs with girth conditions, 316-329 [Zbl 07601017]
\textit{Jordán, Tibor; Kobayashi, Yusuke; Mahara, Ryoga; Makino, Kazuhisa}, The Steiner problem for count matroids, 330-342 [Zbl 07601018]
\textit{Kortsarz, Guy; Nutov, Zeev}, Bounded degree group Steiner tree problems, 343-354 [Zbl 07601019]
\textit{Hocquard, Hervé; Lajou, Dimitri; Lužar, Borut}, Between proper and strong edge-colorings of subcubic graphs, 355-367 [Zbl 07601020]
\textit{Lamprou, Ioannis; Sigalas, Ioannis; Zissimopoulos, Vassilis}, Improved budgeted connected domination and budgeted edge-vertex domination, 368-381 [Zbl 07601021]
\textit{Lanus, Erin; Colbourn, Charles J.}, Algorithms for constructing anonymizing arrays, 382-394 [Zbl 07601022]
\textit{Mkrtchyan, Vahan; Petrosyan, Garik; Subramani, K.; Wojciechowski, Piotr}, Parameterized algorithms for partial vertex covers in bipartite graphs, 395-408 [Zbl 07601023]
\textit{Panda, B. S.; Chaudhary, Juhi}, Acyclic matching in some subclasses of graphs, 409-421 [Zbl 07601024]Probabilistic and causal inference. The works of Judea Pearlhttps://zbmath.org/1496.680202022-11-17T18:59:28.764376ZPublisher's description: Professor Judea Pearl won the 2011 Turing Award ``for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning.'' This book contains the original articles that led to the award, as well as other seminal works, divided into four parts: heuristic search, probabilistic reasoning, causality, first period (1988--2001), and causality, recent period (2002--2020). Each of these parts starts with an introduction written by Judea Pearl. The volume also contains original, contributed articles by leading researchers that analyze, extend, or assess the influence of Pearl's work in different fields: from AI, Machine Learning, and Statistics to Cognitive Science, Philosophy, and the Social Sciences. The first part of the volume includes a biography, a transcript of his Turing Award Lecture, two interviews, and a selected bibliography annotated by him.
The articles of this volume will be reviewed individually.Formal techniques for distributed objects, components, and systems. 40th IFIP WG 6.1 international conference, FORTE 2020, held as part of the 15th international federated conference on distributed computing techniques, DisCoTec 2020, Valletta, Malta, June 15--19, 2020. Proceedingshttps://zbmath.org/1496.680212022-11-17T18:59:28.764376ZThe articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1410.68019].Foundations of intelligent systems. 25th international symposium, ISMIS 2020, Graz, Austria, September 23--25, 2020. Proceedingshttps://zbmath.org/1496.680222022-11-17T18:59:28.764376ZThe articles of mathematical interest will be reviewed individually. For the preceding symposium see [Zbl 1398.68019].15th international conference on spatial information theory, COSIT 2022, September 5--9, 2022, Kobe, Japanhttps://zbmath.org/1496.680232022-11-17T18:59:28.764376ZThe articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1423.68041].Developments in language theory. 24th international conference, DLT 2020, Tampa, FL, USA, May 11--15, 2020. Proceedingshttps://zbmath.org/1496.680242022-11-17T18:59:28.764376ZThe articles of this volume will be reviewed individually. For the preceding conference see [Zbl 1416.68011].
Indexed articles:
\textit{Amrane, Amazigh; Bedon, Nicolas}, Equational theories of scattered and countable series-parallel posets, 1-13 [Zbl 07601057]
\textit{Barker, Laura; Fleischmann, Pamela; Harwardt, Katharina; Manea, Florin; Nowotka, Dirk}, Scattered factor-universality of words, 14-28 [Zbl 07601058]
\textit{Bleak, Collin}, On normalish subgroups of the R. Thompson groups, 29-42 [Zbl 07601059]
\textit{Cheon, Hyunjoon; Han, Yo-Sub}, Computing the shortest string and the edit-distance for parsing expression languages, 43-54 [Zbl 07601060]
\textit{Chouraqui, Fabienne}, An approach to the Herzog-Schönheim conjecture using automata, 55-68 [Zbl 07601061]
\textit{De Oliveira Oliveira, Mateus; Wehar, Michael}, On the fine grained complexity of finite automata non-emptiness of intersection, 69-82 [Zbl 07601062]
\textit{Fleischer, Lukas; Shallit, Jeffrey}, The state complexity of lexicographically smallest words and computing successors, 83-95 [Zbl 07601063]
\textit{Fleischmann, Pamela; Lejeune, Marie; Manea, Florin; Nowotka, Dirk; Rigo, Michel}, Reconstructing words from right-bounded-block words, 96-109 [Zbl 07601064]
\textit{Caron, Pascal; Hamel-De-le-court, Edwin; Luque, Jean-Gabriel}, A study of a simple class of modifiers: product modifiers, 110-121 [Zbl 07601065]
\textit{Hospodár, Michal; Mlynárčik, Peter}, Operations on permutation automata, 122-136 [Zbl 07601066]
\textit{Ibarra, Oscar H.; Jirásek, Jozef jun.; McQuillan, Ian; Prigioniero, Luca}, Space complexity of stack automata models, 137-149 [Zbl 07601067]
\textit{Kari, Lila; Ng, Timothy}, Descriptional complexity of semi-simple splicing systems, 150-163 [Zbl 07601068]
\textit{Koechlin, Florent; Nicaud, Cyril; Rotondo, Pablo}, On the degeneracy of random expressions specified by systems of combinatorial equations, 164-177 [Zbl 07601069]
\textit{Kopra, Johan}, Dynamics of cellular automata on beta-shifts and direct topological factorizations, 178-191 [Zbl 07601070]
\textit{Lietard, Florian; Rosenfeld, Matthieu}, Avoidability of additive cubes over alphabets of four numbers, 192-206 [Zbl 07601071]
\textit{Löbel, Raphaela; Luttenberger, Michael; Seidl, Helmut}, Equivalence of linear tree transducers with output in the free group, 207-221 [Zbl 07601072]
\textit{Löbel, Raphaela; Luttenberger, Michael; Seidl, Helmut}, On the balancedness of tree-to-word transducers, 222-236 [Zbl 07601073]
\textit{Maletti, Andreas; Stier, Kevin}, On tree substitution grammars, 237-250 [Zbl 07601074]
\textit{Modanese, Augusto}, Sublinear-time language recognition and decision by one-dimensional cellular automata, 251-265 [Zbl 07601075]
\textit{Průša, Daniel; Wehar, Michael}, Complexity of searching for 2 by 2 submatrices in Boolean matrices, 266-279 [Zbl 07601076]
\textit{Rowland, Eric; Stipulanti, Manon}, Avoiding \(5/4\)-powers on the alphabet of nonnegative integers (extended abstract), 280-293 [Zbl 07601077]
\textit{Rukavicka, Josef}, Transition property for \(\alpha\)-power free languages with \(\alpha \ge 2\) and \(k\ge 3\) letters, 294-303 [Zbl 07601078]
\textit{Sin'ya, Ryoma}, Context-freeness of Word-MIX languages, 304-318 [Zbl 07601079]
\textit{Frosini, Andrea; Tarsissi, Lama}, The characterization of rational numbers belonging to a minimal path in the Stern-Brocot tree according to a second order balancedness, 319-331 [Zbl 07601080]

Algorithms and models for the web graph. 17th international workshop, WAW 2020, Warsaw, Poland, September 21--22, 2020. Proceedings
https://zbmath.org/1496.68025
2022-11-17T18:59:28.764376Z
The articles of mathematical interest will be reviewed individually. For the preceding workshop see [Zbl 1416.68003].

Inductive logic programming. 29th international conference, ILP 2019, Plovdiv, Bulgaria, September 3--5, 2019. Proceedings
https://zbmath.org/1496.68026
2022-11-17T18:59:28.764376Z
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1453.68019].

33rd international conference on concurrency theory, CONCUR 2022, Warsaw, Poland, September 12--16, 2022
https://zbmath.org/1496.68027
2022-11-17T18:59:28.764376Z
The articles of this volume will be reviewed individually. For the preceding conference see [Zbl 1468.68015].

Approximation and online algorithms. 19th international workshop, WAOA 2021, Lisbon, Portugal, September 6--10, 2021. Revised selected papers
https://zbmath.org/1496.68028
2022-11-17T18:59:28.764376Z
The articles of this volume will be reviewed individually. For the preceding workshop see [Zbl 1482.68027].
Indexed articles:
\textit{Gálvez, Waldo; Sanhueza-Matamala, Francisco; Soto, José A.}, Approximation algorithms for vertex-connectivity augmentation on the cycle, 1-22 [Zbl 07603881]
\textit{Blažej, Václav; Choudhary, Pratibha; Knop, Dušan; Křišťan, Jan Matyáš; Suchý, Ondřej; Valla, Tomáš}, Constant factor approximation for tracking paths and fault tolerant feedback vertex set, 23-38 [Zbl 07603882]
\textit{Sun, Hao}, An improved approximation bound for minimum weight dominating set on graphs of bounded arboricity, 39-47 [Zbl 07603883]
\textit{Dudycz, Szymon; Manurangsi, Pasin; Marcinkowski, Jan}, Tight inapproximability of minimum maximal matching on bipartite graphs and related problems, 48-64 [Zbl 07603884]
\textit{Fujito, Toshihiro; Tatematsu, Takumi}, On \(b\)-matchings and \(b\)-edge dominating sets: a 2-approximation algorithm for the 4-edge dominating set problem, 65-79 [Zbl 07603885]
\textit{Huizing, Dylan; Schäfer, Guido}, The traveling \(k\)-median problem: approximating optimal network coverage, 80-98 [Zbl 07603886]
\textit{Jaykrishnan, G.; Levin, Asaf}, EPTAS for load balancing problem on parallel machines with a non-renewable resource, 99-116 [Zbl 07603887]
\textit{Epstein, Leah}, Several methods of analysis for cardinality constrained bin packing, 117-129 [Zbl 07603888]
\textit{Cohen, Ilan Reuven; Cohen, Izack; Zaks, Iyar}, Weighted completion time minimization for capacitated parallel machines, 130-143 [Zbl 07603889]
\textit{Maack, Marten; Meyer auf der Heide, Friedhelm; Pukrop, Simon}, Server cloud scheduling, 144-164 [Zbl 07603890]
\textit{Tauer, Bjoern; Vargas Koch, Laura}, FIFO and randomized competitive packet routing games, 165-187 [Zbl 07603891]
\textit{Giliberti, Jeff; Karrenbauer, Andreas}, Improved online algorithm for fractional knapsack in the random order model, 188-205 [Zbl 07603892]
\textit{Disser, Yann; Klimm, Max; Weckbecker, David}, Fractionally subadditive maximization under an incremental knapsack constraint, 206-223 [Zbl 07603893]
\textit{Bienkowski, Marcin; Böhm, Martin; Koutecký, Martin; Rothvoß, Thomas; Sgall, Jiří; Veselý, Pavel}, Improved analysis of online balanced clustering, 224-233 [Zbl 07603894]
\textit{Kolliopoulos, Stavros G.; Skarlatos, Antonis}, Precedence-constrained covering problems with multiplicity constraints, 234-251 [Zbl 07603895]
\textit{Bansal, Nikhil; Cohen, Ilan Reuven}, Contention resolution, matrix scaling and fair allocation, 252-274 [Zbl 07603896]

Latin 2020: theoretical informatics. 14th Latin American symposium, São Paulo, Brazil, January 5--8, 2021. Proceedings
https://zbmath.org/1496.68029
2022-11-17T18:59:28.764376Z
The articles of this volume will be reviewed individually. For the preceding symposium see [Zbl 1428.68007].
Indexed articles:
\textit{Byrka, Jarosław; Lewandowski, Mateusz; Meesum, Syed Mohammad; Spoerhase, Joachim; Uniyal, Sumedha}, PTAS for Steiner tree on map graphs, 3-14 [Zbl 07600760]
\textit{Duan, Ran; He, Haoqing; Zhang, Tianyi}, Near-linear time algorithm for approximate minimum degree spanning trees, 15-26 [Zbl 07600761]
\textit{Elbassioni, Khaled}, Approximation algorithms for cost-robust discrete minimization problems based on their LP-relaxations, 27-37 [Zbl 07600762]
\textit{Fagnon, Vincent; Kacem, Imed; Lucarelli, Giorgio; Simon, Bertrand}, Scheduling on hybrid platforms: improved approximability window, 38-49 [Zbl 07600763]
\textit{Fernandes, Cristina G.; Lintzmayer, Carla N.}, Leafy spanning arborescences in DAGs, 50-62 [Zbl 07600764]
\textit{Pedrosa, Lehilton L. C.; Quesquén, Greis Y. O.}, Approximating routing and connectivity problems with multiple distances, 63-75 [Zbl 07600765]
\textit{Pedrosa, Lehilton L. C.; Rosado, Hugo K. K.}, A 2-approximation for the \(k\)-prize-collecting Steiner tree problem, 76-88 [Zbl 07600766]
\textit{Bliznets, Ivan; Sagunov, Danil}, Maximizing happiness in graphs of bounded clique-width, 91-103 [Zbl 07600767]
\textit{Golovach, Petr A.; Krithika, R.; Sahu, Abhishek; Saurabh, Saket; Zehavi, Meirav}, Graph Hamiltonicity parameterized by proper interval deletion set, 104-115 [Zbl 07600768]
\textit{Golovach, Petr A.; Lima, Paloma T.; Papadopoulos, Charis}, Graph square roots of small distance from degree one graphs, 116-128 [Zbl 07600769]
\textit{Gomes, Guilherme C. M.; Guedes, Matheus R.; dos Santos, Vinicius F.}, Structural parameterizations for equitable coloring, 129-140 [Zbl 07600770]
\textit{Avin, Chen; Mondal, Kaushik; Schmid, Stefan}, Dynamically optimal self-adjusting single-source tree networks, 143-154 [Zbl 07600771]
\textit{Bender, Michael A.; Goswami, Mayank; Medjedovic, Dzejla; Montes, Pablo; Tsichlas, Kostas}, Batched predecessor and sorting with size-priced information in external memory, 155-167 [Zbl 07600772]
\textit{Bonato, Anthony; Georgiou, Konstantinos; MacRury, Calum; Prałat, Paweł}, Probabilistically faulty searching on a half-line (extended abstract), 168-180 [Zbl 07600773]
\textit{Chaplick, Steven; Halldórsson, Magnús M.; de Lima, Murilo S.; Tonoyan, Tigran}, Query minimization under stochastic uncertainty, 181-193 [Zbl 07600774]
\textit{Inenaga, Shunsuke}, Suffix trees, DAWGs and CDAWGs for forward and backward tries, 194-206 [Zbl 07600775]
\textit{Kociumaka, Tomasz; Navarro, Gonzalo; Prezza, Nicola}, Towards a definitive measure of repetitiveness, 207-219 [Zbl 07600776]
\textit{Arseneva, Elena; Bose, Prosenjit; Cano, Pilar; Silveira, Rodrigo I.}, Flips in higher order Delaunay triangulations, 223-234 [Zbl 07600777]
\textit{Bauernöppel, Frank; Maheshwari, Anil; Sack, Jörg-Rüdiger}, An \(\varOmega (n^3)\) lower bound on the number of cell crossings for weighted shortest paths in 3-dimensional polyhedral structures, 235-246 [Zbl 07600778]
\textit{Bereg, Sergey}, Computing balanced convex partitions of lines, 247-257 [Zbl 07600779]
\textit{Buchin, K.; Kosolobov, D.; Sonke, W.; Speckmann, B.; Verbeek, K.}, Ordered strip packing, 258-270 [Zbl 07600780]
\textit{Kim, Mincheol; Yoon, Sang Duk; Ahn, Hee-Kap}, Shortest rectilinear path queries to rectangles in a rectangular domain, 271-282 [Zbl 07600781]
\textit{Mantas, Ioannis; Papadopoulou, Evanthia; Sacristán, Vera; Silveira, Rodrigo I.}, Farthest color Voronoi diagrams: complexity and algorithms, 283-295 [Zbl 07600782]
\textit{Pérez-Lantero, Pablo; Seara, Carlos; Urrutia, Jorge}, Rectilinear convex hull of points in 3D, 296-307 [Zbl 07600783]
\textit{Cavalar, Bruno Pasqualotto; Kumar, Mrinal; Rossman, Benjamin}, Monotone circuit lower bounds from robust sunflowers, 311-322 [Zbl 07600784]
\textit{Chaubal, Siddhesh; Gál, Anna}, Tight bounds on sensitivity and block sensitivity of some classes of transitive functions, 323-335 [Zbl 07600785]
\textit{Dantchev, Stefan; Ghani, Abdul; Martin, Barnaby}, Sherali-Adams and the binary encoding of combinatorial principles, 336-347 [Zbl 07600786]
\textit{Marcilon, Thiago; Martins, Nicolas; Sampaio, Rudini}, Hardness of variants of the graph coloring game, 348-359 [Zbl 07600787]
\textit{Rahman, Md Lutfar; Watson, Thomas}, Tractable unordered 3-CNF games, 360-372 [Zbl 07600788]
\textit{Bădescu, Costin; O'Donnell, Ryan}, Lower bounds for testing complete positivity and quantum separability, 375-386 [Zbl 07600789]
\textit{Shimizu, Kazuya; Mori, Ryuhei}, Exponential-time quantum algorithms for graph coloring problems, 387-398 [Zbl 07600790]
\textit{Nachum, Ido; Yehudayoff, Amir}, On symmetry and initialization for neural networks, 401-412 [Zbl 07600791]
\textit{Ancona, Bertie; Bajwa, Ayesha; Lynch, Nancy; Mallmann-Trenn, Frederik}, How to color a French flag. Biologically inspired algorithms for scale-invariant patterning, 413-424 [Zbl 07600792]
\textit{Pchelina, Daria; Schabanel, Nicolas; Seki, Shinnosuke; Ubukata, Yuki}, Simple intrinsic simulation of cellular automata in oritatami molecular folding model, 425-436 [Zbl 07600793]
\textit{Andriambolamalala, Ny Aina; Ravelomanana, Vlady}, Transmitting once to elect a leader on wireless networks, 439-450 [Zbl 07600794]
\textit{Daknama, Rami; Panagiotou, Konstantinos; Reisser, Simon}, Asymptotics for push on the complete graph, 451-463 [Zbl 07600795]
\textit{Read-McFarland, Andrew; Štefankovič, Daniel}, The hardness of sampling connected subgraphs, 464-475 [Zbl 07600796]
\textit{Carlson, Charles; Kolla, Alexandra; Li, Ray; Mani, Nitya; Sudakov, Benny; Trevisan, Luca}, Lower bounds for max-cut via semidefinite programming, 479-490 [Zbl 07600797]
\textit{Hàn, Hiệp; Kiwi, Marcos; Pavez-Signé, Matías}, Quasi-random words and limits of word sequences, 491-503 [Zbl 07600798]
\textit{Rossman, Benjamin}, Thresholds in the lattice of subspaces of \(\mathbb{F}_q^n\), 504-515 [Zbl 07600799]
\textit{Barequet, Gill; Ben-Shachar, Gil}, On minimal-perimeter lattice animals, 519-531 [Zbl 07600800]
\textit{Barequet, Gill; Shalah, Mira}, Improved upper bounds on the growth constants of polyominoes and polycubes, 532-545 [Zbl 07600801]
\textit{Seelbach Benkner, Louisa; Wagner, Stephan}, On the collection of fringe subtrees in random binary trees, 546-558 [Zbl 07600802]
\textit{Bóna, Miklós}, A method to prove the nonrationality of some combinatorial generating functions, 559-570 [Zbl 07600803]
\textit{Clément, Julien; Genitrini, Antoine}, Binary decision diagrams: from tree compaction to sampling, 571-583 [Zbl 07600804]
\textit{Alves, Sancrey Rodrigues; Couto, Fernanda; Faria, Luerbio; Gravier, Sylvain; Klein, Sulamita; Souza, Uéverton S.}, Graph sandwich problem for the property of being well-covered and partitionable into \(k\) independent sets and \(\ell\) cliques, 587-599 [Zbl 07600805]
\textit{Blair, Jean R. S.; Heggernes, Pinar; Lima, Paloma T.; Lokshtanov, Daniel}, On the maximum number of edges in chordal graphs of bounded degree and matching number, 600-612 [Zbl 07600806]
\textit{Bodlaender, Hans L.; Brettell, Nick; Johnson, Matthew; Paesani, Giacomo; Paulusma, Daniël; van Leeuwen, Erik Jan}, Steiner trees for hereditary graph classes, 613-624 [Zbl 07600807]
\textit{Deniz, Zakir; Nivelle, Simon; Ries, Bernard; Schindl, David}, On some subclasses of split \(B_1\)-EPG graphs, 625-636 [Zbl 07600808]
\textit{Groshaus, M.; Guedes, A. L. P.; Kolberg, F. S.}, On the Helly subclasses of interval bigraphs and circular arc bigraphs, 637-648 [Zbl 07600809]

Programming languages and systems. 29th European symposium on programming, ESOP 2020, held as part of the European joint conferences on theory and practice of software, ETAPS 2020, Dublin, Ireland, April 25--30, 2020. Proceedings
https://zbmath.org/1496.68030
2022-11-17T18:59:28.764376Z
The articles of mathematical interest will be reviewed individually. For the preceding symposium see [Zbl 1408.68010].

Ontology-driven software development
https://zbmath.org/1496.68031
2022-11-17T18:59:28.764376Z
Publisher's description: This book is about a significant step forward in software development. It brings state-of-the-art ontology reasoning into mainstream software development and its languages. Ontology Driven Software Development is the essential, comprehensive resource on enabling technologies, consistency checking and process guidance for ontology-driven software development (ODSD). It demonstrates how to apply ontology reasoning in the lifecycle of software development, using current and emerging standards and technologies. You will learn new methodologies and infrastructures, additionally illustrated using detailed industrial case studies.
The book will help you:
\begin{itemize}
\item Learn how ontology reasoning enables validation of structure models and of key tasks in behavior models.
\item Understand how to develop ODSD guidance engines for important software development activities, such as requirements engineering, domain modeling and process refinement.
\item Become familiar with semantic standards, such as the Web Ontology Language (OWL) and the SPARQL query language.
\item Make use of ontology reasoning, querying and justification techniques to integrate software models and to offer guidance and traceability support.
\end{itemize}
This book is helpful for undergraduate students and professionals who are interested in studying how ontologies and related semantic reasoning can be applied to the software development process. In addition, it will also be useful for postgraduate students, professionals and researchers who are about to embark on research in areas related to ontology or software engineering.
The articles of this volume will not be indexed individually.

Rigorous state-based methods. 7th international conference, ABZ 2020, Ulm, Germany, May 27--29, 2020. Proceedings
https://zbmath.org/1496.68032
2022-11-17T18:59:28.764376Z
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1387.68015].
Indexed articles:
\textit{Börger, Egon; Schewe, Klaus-Dieter}, A characterization of distributed ASMs with partial-order runs, 78-92 [Zbl 07602226]
\textit{Schewe, Klaus-Dieter; Ferrarotti, Flavio}, A logic for reflective ASMs, 93-106 [Zbl 07602227]
\textit{Benyagoub, Sarah; Aït-Ameur, Yamine; Schewe, Klaus-Dieter}, Event-B-supported choreography-defined communicating systems, 155-168 [Zbl 07602231]

Modelling and development of intelligent systems. Proceedings of the fifth international conference, Sibiu, Romania, June 23--25, 2017
https://zbmath.org/1496.68033
2022-11-17T18:59:28.764376Z
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1415.68047].
Indexed articles:
\textit{Ciurea, Stelian}, An imperialist competitive algorithm optimized to solve the traveling salesman problem, 20-28 [Zbl 1424.90225]
\textit{Sangeorzan, Livia; Enache-David, Nicoleta}, Theoretical and practical approaches for documents classification, 67-71 [Zbl 1410.68323]
\textit{Stoica, Florin; Bărbulescu, Alina; Stoica, Laura Florentina}, Tuning extreme learning machines with genetic algorithms, 72-81 [Zbl 1410.68324]
\textit{Tuba, Eva; Capor-Hrosik, Romana; Alihodzic, Adis; Beko, Marko; Jovanovic, Raka}, Moth search algorithm for bound constrained optimization problems, 82-89 [Zbl 1410.68344]

47th international symposium on mathematical foundations of computer science, MFCS 2022, Vienna, Austria, August 22--26, 2022
https://zbmath.org/1496.68034
2022-11-17T18:59:28.764376Z
The articles of this volume will be reviewed individually. For the preceding symposium see [Zbl 1468.68011].

Fundamental approaches to software engineering. 23rd international conference, FASE 2020, held as part of the European joint conferences on theory and practice of software, ETAPS 2020, Dublin, Ireland, April 25--30, 2020. Proceedings
https://zbmath.org/1496.68035
2022-11-17T18:59:28.764376Z
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1408.68018].

Parallel processing and applied mathematics. 13th international conference, PPAM 2019, Bialystok, Poland, September 8--11, 2019, Revised selected papers. Part II
https://zbmath.org/1496.68036
2022-11-17T18:59:28.764376Z
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1461.68016; Zbl 1461.68017]. For Part I of the papers of the present conference see [Zbl 1496.68037].

Parallel processing and applied mathematics. 13th international conference, PPAM 2019, Bialystok, Poland, September 8--11, 2019, Revised selected papers. Part I
https://zbmath.org/1496.68037
2022-11-17T18:59:28.764376Z
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1461.68016; Zbl 1461.68017].
For Part II of the papers of the present conference see [Zbl 1496.68036].

Modeling and verification methods for application design in heterogeneous architectures
https://zbmath.org/1496.68038
2022-11-17T18:59:28.764376Z
"Pogorilyy, S." https://zbmath.org/authors/?q=ai:pogorilyy.s-d
"Slynko, M." https://zbmath.org/authors/?q=ai:slynko.m-s
Summary: A methodology of application design for systems with massive parallelism, illustrated on GPGPU systems and focused on the algorithmic design stage, is proposed. Two design stages are considered: the creation of a formal specification, and its analysis and verification. For the first stage, the mathematical apparatus of the system of algorithmic algebras (and its modified variant) together with transition systems is proposed. For the second stage, the use of network and automata models is analyzed, and the advantages of each model are given. In particular, the computational model of NVIDIA CUDA was studied using Petri nets, as well as linear temporal logic formulas and an automata model.

Killing nodes as a countermeasure to virus expansion
https://zbmath.org/1496.68039
2022-11-17T18:59:28.764376Z
"Bonnet, François" https://zbmath.org/authors/?q=ai:bonnet.francois
"Bramas, Quentin" https://zbmath.org/authors/?q=ai:bramas.quentin
"Défago, Xavier" https://zbmath.org/authors/?q=ai:defago.xavier
"Nguyen, Thanh Dang" https://zbmath.org/authors/?q=ai:nguyen.thanh-dang
Summary: The spread of a virus and the containment of such spread have been widely studied in the literature. These two problems can be abstracted as a two-player stochastic game in which one side tries to spread the infection to the entire system, while the other side aims to contain the infection to a finite area. Three parameters play a particularly important role: (1) the probability \(p\) of successful infection, (2) the topology of the network, and (3) the probability \(\alpha\) that a strategy message has priority over the infection.
This paper studies the effect of killing strategies, where a node sacrifices itself and possibly some of its neighbors, to contain the spread of a virus in an infinite grid. Our contribution is threefold: (1) we prove that the simplest killing strategy is equivalent to the problem of site percolation; (2) when killing messages have priority, we prove that there always exists a killing strategy that contains a virus, for any probability \(0\leq p<1\); in contrast, (3) when killing messages do not have priority, a successful killing strategy does not always exist, and we study the virus propagation for various values of \(\alpha\) with \(0\leq\alpha<1\).
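Result (1) ties the simplest killing strategy to site percolation. As a rough illustration of that connection (an independent sketch, not code from the paper), the following Monte Carlo experiment opens each site of a finite grid independently with probability \(p\) and checks whether the open cluster of the center reaches the boundary, a finite-grid proxy for the spread escaping containment:

```python
import random

def center_cluster_reaches_boundary(n, p, seed=None):
    """Site percolation on an n x n grid: each site is open
    independently with probability p. Return True if the open
    cluster containing the center site reaches the grid boundary
    (a finite-grid proxy for unbounded spread)."""
    rng = random.Random(seed)
    open_site = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    start = (n // 2, n // 2)
    if not open_site[start[0]][start[1]]:
        return False
    seen, stack = {start}, [start]
    while stack:
        x, y = stack.pop()
        if x in (0, n - 1) or y in (0, n - 1):
            return True  # open cluster escaped the containment area
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if nxt not in seen and open_site[nxt[0]][nxt[1]]:
                seen.add(nxt)
                stack.append(nxt)
    return False

def escape_rate(n, p, trials=100, seed=0):
    """Monte Carlo estimate of the probability that the center
    cluster reaches the boundary."""
    rng = random.Random(seed)
    hits = sum(center_cluster_reaches_boundary(n, p, rng.getrandbits(32))
               for _ in range(trials))
    return hits / trials
```

On the square lattice the site-percolation threshold is approximately 0.593, so the estimated escape rate should move from near 0 to near 1 as \(p\) crosses that value.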
For the entire collection see [Zbl 1381.68003].

Short labeling schemes for topology recognition in wireless tree networks
https://zbmath.org/1496.68040
2022-11-17T18:59:28.764376Z
"Gorain, Barun" https://zbmath.org/authors/?q=ai:gorain.barun
"Pelc, Andrzej" https://zbmath.org/authors/?q=ai:pelc.andrzej
Summary: We consider the problem of topology recognition in wireless (radio) networks modeled as undirected graphs. Topology recognition is a fundamental task in which every node of the network has to output a map of the underlying graph, i.e., an isomorphic copy of it, and situate itself in this map. In wireless networks, nodes communicate in synchronous rounds. In each round a node can either transmit a message to all its neighbors, or stay silent and listen. At the receiving end, a node \(v\) hears a message from a neighbor \(w\) in a given round if \(v\) listens in this round and \(w\) is its only neighbor that transmits in this round. Nodes have labels which are (not necessarily distinct) binary strings. The length of a labeling scheme is the largest length of a label. We concentrate on wireless networks modeled by trees, and we investigate two problems.{\parindent = 0.5 cm \begin{itemize} \item[--]What is the shortest labeling scheme that permits topology recognition in all wireless tree networks of diameter \(D\) and maximum degree \(\varDelta\)? \item[--]What is the fastest topology recognition algorithm working for all wireless tree networks of diameter \(D\) and maximum degree \(\varDelta\), using such a short labeling scheme?
\end{itemize}}
We are interested in deterministic topology recognition algorithms. For the first problem, we show that the minimum length of a labeling scheme allowing topology recognition in all trees of maximum degree \(\varDelta\geq 3\) is \(\varTheta(\log\log\varDelta)\). For such short schemes, used by an algorithm working for the class of trees of diameter \(D\geq 4\) and maximum degree \(\varDelta\geq 3\), we show almost matching bounds on the time of topology recognition: an upper bound \(O(D\varDelta)\), and a lower bound \(\varOmega(D\varDelta^{\epsilon})\), for any constant \(\epsilon <1\).
Our upper bounds are proven by constructing a topology recognition algorithm using a labeling scheme of length \(O(\log\log\varDelta)\) and using time \(O(D\varDelta)\). Our lower bounds are proven by constructing a class of trees for which any topology recognition algorithm must use a labeling scheme of length at least \(\varOmega(\log\log\varDelta)\), and a class of trees for which any topology recognition algorithm using a labeling scheme of length \(O(\log\log\varDelta)\) must use time at least \(\varOmega(D\varDelta^{\epsilon})\), on some tree of this class.
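The reception rule of the radio model described above (a node hears a neighbor only if it listens while exactly that one neighbor transmits) can be sketched as a one-round simulation. This is an illustrative toy under our own encoding of the model, not the authors' algorithm:

```python
def radio_round(adj, transmitting):
    """One synchronous round of the wireless (radio) model.

    adj: dict mapping each node to the set of its neighbors.
    transmitting: dict mapping each transmitting node to its message.
    Returns a dict node -> message for every listening node that
    hears something, i.e. whose set of transmitting neighbors has
    size exactly one; collisions (two or more senders) are lost.
    """
    heard = {}
    for v, neighbors in adj.items():
        if v in transmitting:          # a transmitting node cannot listen
            continue
        senders = [w for w in neighbors if w in transmitting]
        if len(senders) == 1:          # unique sender: message received
            heard[v] = transmitting[senders[0]]
    return heard
```

For example, on the path 1--2--3, if nodes 1 and 3 transmit in the same round, node 2 hears nothing because of the collision.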
For the entire collection see [Zbl 1381.68003].

Detection method of boundary malicious nodes in computer network
https://zbmath.org/1496.68041
2022-11-17T18:59:28.764376Z
"Huang, Wensheng" https://zbmath.org/authors/?q=ai:huang.wensheng
(no abstract)

Space-time tradeoffs for distributed verification
https://zbmath.org/1496.68042
2022-11-17T18:59:28.764376Z
"Ostrovsky, Rafail" https://zbmath.org/authors/?q=ai:ostrovsky.rafail
"Perry, Mor" https://zbmath.org/authors/?q=ai:perry.mor
"Rosenbaum, Will" https://zbmath.org/authors/?q=ai:rosenbaum.will
Summary: Verifying that a network configuration satisfies a given Boolean predicate is a fundamental problem in distributed computing. Many variations of this problem have been studied, for example, in the context of proof labeling schemes (PLS), locally checkable proofs (LCP), and non-deterministic local decision (NLD). In all of these contexts, verification time is assumed to be constant.
\textit{A. Korman} et al. [in: Proceedings of the 30th annual ACM SIGACT-SIGOPS symposium on principles of distributed computing, PODC '11. New York, NY: Association for Computing Machinery (ACM). 311--320 (2011; Zbl 1321.68348)]
presented a proof-labeling scheme for MST, with poly-logarithmic verification time, and logarithmic memory at each vertex.
In this paper we introduce the notion of a \(t\)-PLS, which allows the verification procedure to run for super-constant time. Our work analyzes the tradeoffs of \(t\)-PLS between time, label size, message length, and computation space. We construct a universal \(t\)-PLS and prove that it uses the same amount of total communication as a known one-round universal PLS, with labels smaller by a factor of \(t\). In addition, we provide a general technique to prove lower bounds for space-time tradeoffs of \(t\)-PLS. We use this technique to show an optimal tradeoff for testing that a network is acyclic (cycle free). Our optimal \(t\)-PLS for acyclicity uses label size and computation space \(O((\log n)/t)\). We further describe a recursive \(O(\log ^* n)\) space verifier for acyclicity which does not assume previous knowledge of the run-time \(t\).
For the entire collection see [Zbl 1381.68003].

Deadlock in packet switching networks
https://zbmath.org/1496.68043
2022-11-17T18:59:28.764376Z
"Stramaglia, Anna" https://zbmath.org/authors/?q=ai:stramaglia.anna
"Keiren, Jeroen J. A." https://zbmath.org/authors/?q=ai:keiren.jeroen-j-a
"Zantema, Hans" https://zbmath.org/authors/?q=ai:zantema.hans
Summary: A deadlock in a packet switching network is a state in which one or more messages have not yet reached their target, yet cannot progress any further. We formalize three different notions of deadlock in the context of packet switching networks, to which we refer as global, local and weak deadlock. We establish the precise relations between these notions, and prove that they characterize different sets of deadlocks. Moreover, we implement checking of deadlock freedom of packet switching networks using the symbolic model checker nuXmv. We show experimentally that the implementation is effective at finding subtle deadlock situations in packet switching networks.
For the entire collection see [Zbl 1489.68021].

Mathematical tools for the Internet of Things analysis
https://zbmath.org/1496.68044
2022-11-17T18:59:28.764376Z
"Mamonova, G." https://zbmath.org/authors/?q=ai:mamonova.ganna-v
"Maidaniuk, N." https://zbmath.org/authors/?q=ai:maidaniuk.n
Summary: An overview of recent publications on the use of mathematical methods and models for the analysis of the Internet of Things is given. It is shown that IoT modeling uses such areas of mathematics as game theory, probability theory, the theory of random processes, Boolean and matrix algebra, graph theory, number theory, complex variable theory, measure theory, optimization theory, simulation modeling, cluster analysis, and numerical and mathematical analysis.

Monitoring of domain-related problems in distributed data streams
https://zbmath.org/1496.68045
2022-11-17T18:59:28.764376Z
"Bemmann, Pascal" https://zbmath.org/authors/?q=ai:bemmann.pascal
"Biermeier, Felix" https://zbmath.org/authors/?q=ai:biermeier.felix
"Bürmann, Jan" https://zbmath.org/authors/?q=ai:burmann.jan
"Kemper, Arne" https://zbmath.org/authors/?q=ai:kemper.arne
"Knollmann, Till" https://zbmath.org/authors/?q=ai:knollmann.till
"Knorr, Steffen" https://zbmath.org/authors/?q=ai:knorr.steffen
"Kothe, Nils" https://zbmath.org/authors/?q=ai:kothe.nils
"Mäcker, Alexander" https://zbmath.org/authors/?q=ai:macker.alexander
"Malatyali, Manuel" https://zbmath.org/authors/?q=ai:malatyali.manuel
"Meyer auf der Heide, Friedhelm" https://zbmath.org/authors/?q=ai:meyer-auf-der-heide.friedhelm
"Riechers, Sören" https://zbmath.org/authors/?q=ai:riechers.soren
"Schaefer, Johannes" https://zbmath.org/authors/?q=ai:schaefer.johannes
"Sundermeier, Jannik" https://zbmath.org/authors/?q=ai:sundermeier.jannik
Summary: Consider a network in which \(n\) distributed nodes are connected to a single server. Each node continuously observes a data stream consisting of one value per discrete time step.
The server has to continuously monitor a given parameter defined over all information available at the distributed nodes. That is, in any time step \(t\), it has to compute an output based on all values currently observed across all streams. To do so, nodes can send messages to the server and the server can broadcast messages to the nodes. The objective is the minimisation of communication while allowing the server to compute the desired output.
We consider monitoring problems related to the domain \(D_t\), defined to be the set of values observed by at least one node at time \(t\). We provide randomised algorithms for monitoring \(D_t\), (approximations of) the size \(|D_t|\), and the frequencies of all members of \(D_t\). Besides worst-case bounds, we also obtain improved results when inputs are parameterised according to the similarity of observations between consecutive time steps. This parameterisation makes it possible to exclude inputs with rapid and heavy changes, which usually lead to the worst-case bounds but might be rather artificial in certain scenarios.
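As a naive illustration of this style of monitoring (our own sketch under simplifying assumptions, not the randomised algorithms of the paper), a server can maintain \(D_t\) and all frequencies exactly if every node reports only changes of its observed value; communication is then proportional to the number of changes between consecutive steps rather than to \(n\) per step:

```python
from collections import Counter

class DomainMonitor:
    """Naive change-driven monitoring of the domain D_t, the set of
    values currently observed across all nodes. A node sends a
    message to the server only when its observed value changes."""

    def __init__(self, initial_values):
        # initial_values: dict node -> first observed value
        self.current = dict(initial_values)
        self.counts = Counter(initial_values.values())

    def observe(self, node, value):
        """Called when `node` observes `value` in the next time step.
        Returns True iff a message to the server was needed."""
        old = self.current[node]
        if old == value:
            return False               # unchanged: no communication
        self.counts[old] -= 1
        if self.counts[old] == 0:
            del self.counts[old]       # value left the domain
        self.counts[value] += 1
        self.current[node] = value
        return True

    def domain(self):
        return set(self.counts)        # D_t

    def frequency(self, value):
        return self.counts.get(value, 0)
```

With \(k\) value changes per step this exchanges \(k\) messages, matching the intuition that similarity between consecutive steps reduces communication.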
For the entire collection see [Zbl 1381.68003].

Evacuation from a disc in the presence of a faulty robot
https://zbmath.org/1496.68046
2022-11-17T18:59:28.764376Z
"Czyzowicz, Jurek" https://zbmath.org/authors/?q=ai:czyzowicz.jurek
"Georgiou, Konstantinos" https://zbmath.org/authors/?q=ai:georgiou.konstantinos
"Godon, Maxime" https://zbmath.org/authors/?q=ai:godon.maxime
"Kranakis, Evangelos" https://zbmath.org/authors/?q=ai:kranakis.evangelos
"Krizanc, Danny" https://zbmath.org/authors/?q=ai:krizanc.danny
"Rytter, Wojciech" https://zbmath.org/authors/?q=ai:rytter.wojciech
"Włodarczyk, Michał" https://zbmath.org/authors/?q=ai:wlodarczyk.michal
Summary: We consider the evacuation problem on a circle for three robots, at most one of which is faulty. The three robots start from the center of a unit circle and search for an exit placed at an unknown location on its perimeter. During the search, robots can communicate wirelessly at any distance. The goal is to minimize the time by which the last non-faulty robot reaches the exit.
Our main contributions are two intuitive evacuation protocols for the non-faulty robots to reach the exit in two well-studied fault models, Crash and Byzantine. Moreover, we complement our positive results by lower bounds in both models. A summary of our results reads as follows:{\parindent = 0.5 cm \begin{itemize} \item[--] case of Crash faults: lower bound \({\approx}5.188\); upper bound \({\approx}6.309\), \item[--] case of Byzantine faults: lower bound \({\approx}5.948\); upper bound \({\approx}6.921\).
\end{itemize}}
For comparison, it is known
(see [Zbl 1393.68164])
that in the case of three non-faulty robots with wireless communication we have a lower bound of 4.159 and an upper bound of 4.219 for evacuation, while for two non-faulty robots \(1+2\pi/3+\sqrt{3}\approx 4.826\) is a tight upper and lower bound for evacuation.
For the entire collection see [Zbl 1381.68003].

On location hiding in distributed systems
https://zbmath.org/1496.68047
2022-11-17T18:59:28.764376Z
"Gotfryd, Karol" https://zbmath.org/authors/?q=ai:gotfryd.karol
"Klonowski, Marek" https://zbmath.org/authors/?q=ai:klonowski.marek
"Pająk, Dominik" https://zbmath.org/authors/?q=ai:pajak.dominik
Summary: We consider the following problem: a group of mobile agents performs some task on a terrain modeled as a graph. At a given moment of time an adversary gets access to the graph and the agents' positions. Shortly before the adversary's observation, the devices have a chance to relocate themselves in order to hide their initial configuration, as the initial configuration may reveal to the adversary some information about the task they performed. Clearly, the agents have to change their locations in as short a time as possible, using minimal energy. In this paper we introduce a definition of a well hiding algorithm, in which the starting and final configurations of the agents have small mutual information. We then discuss the influence of various features of the model on the running time of the optimal well hiding algorithm. We show that if the topology of the graph is known to the agents, then a number of steps proportional to the diameter of the graph is sufficient and necessary. In the unknown topology scenario we consider only the single agent case. We first show that the task is impossible in the deterministic case if the agent has no memory. We then present a polynomial randomized algorithm. Finally, in the model with memory we show that a number of steps proportional to the number of edges of the graph is sufficient and necessary. In some sense, we investigate how complex the problem of ``losing'' information about location (both physical and logical) is in different settings.
For the entire collection see [Zbl 1381.68003].Reliable communication via semilattice properties of partial knowledgehttps://zbmath.org/1496.680482022-11-17T18:59:28.764376Z"Pagourtzis, Aris"https://zbmath.org/authors/?q=ai:pagourtzis.aris-t"Panagiotakos, Giorgos"https://zbmath.org/authors/?q=ai:panagiotakos.giorgos"Sakavalas, Dimitris"https://zbmath.org/authors/?q=ai:sakavalas.dimitrisSummary: A fundamental primitive in distributed computing is reliable message transmission (RMT), which refers to the task of correctly sending a message from a party to another, despite the presence of Byzantine corruptions. We explicitly consider the initial knowledge possessed by the parties-players by employing the recently introduced partial knowledge model
[\textit{A. Pagourtzis} et al., Distrib. Comput. 30, No. 2, 87--102 (2017; Zbl 1420.68023)],
where a player has knowledge over an arbitrary subgraph of the network, and the general adversary model of
\textit{M. Hirt} and \textit{U. Maurer} [in: Proceedings of the 16th annual ACM symposium on principles of distributed computing, PODC'97. New York, NY: Association for Computing Machinery (ACM). 25--34 (1997; Zbl 1374.68070)].
Our main contribution is a tight condition for the feasibility of RMT in the setting resulting from the combination of these two quite general models; this settles the central open question of
[Zbl 1420.68023].
Obtaining such a condition presents the need for knowledge exchange between players. To this end, we introduce the joint view operation which serves as a fundamental tool for deducing maximal useful information conforming with the exchanged local knowledge. Maximality of the obtained knowledge is proved in terms of the semilattice structure imposed by the operation on the space of partial knowledge. This, in turn, allows for the definition of a novel network separator notion that yields a necessary condition for achieving RMT in this model. In order to show the sufficiency of the condition, we propose the RMT partial knowledge algorithm (RMT-PKA), an algorithm which employs the joint view operation to solve RMT in every instance where the necessary condition is met. To the best of our knowledge, this is the first protocol for RMT against general adversaries in the partial knowledge model. Due to the generality of the model, our results provide, for any level of topology knowledge and any adversary structure, an exact characterization of instances where RMT is possible and an algorithm to achieve RMT on such instances.
For the entire collection see [Zbl 1369.68029].Global versus local computations: fast computing with identifiershttps://zbmath.org/1496.680492022-11-17T18:59:28.764376Z"Rabie, Mikaël"https://zbmath.org/authors/?q=ai:rabie.mikaelSummary: This paper studies what can be computed in polylogarithmic parallel time by agents with very restricted power, using probabilistic local interactions.
It is known that if agents are only finite state (corresponding to the population protocol model by Angluin et al.), then only semilinear predicates over the global input can be computed. In fact, if the population starts with a unique leader, these predicates can even be computed in polylogarithmic parallel time.
If identifiers are added (corresponding to the community protocol model by Guerraoui and Ruppert), then more global predicates over the input multiset can be computed. Local predicates over the input sorted according to the identifiers can also be computed, as long as the identifiers are ordered. Computing some of those predicates, however, might require exponential parallel time.
In this paper, we consider what can be computed with community protocols in a polylogarithmic number of parallel interactions. We introduce the class CPPL, corresponding to protocols that use \(O(n\log ^k n)\) expected interactions, for some \(k\), to compute their predicates, or, equivalently, a polylogarithmic expected number of parallel interactions.
We provide some computable protocols and some boundaries of the class, using the fact that the population can compute its size. We also prove two impossibility results showing that local computations are no longer easy: the population does not have the time to compare a linear number of consecutive identifiers, and linearly local languages, such as the regular language \((ab)^*\), are not computable.
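As a minimal illustration of the population protocol model referenced above (our own sketch, not a construction from the paper), the simulator below runs a one-way epidemic under a uniform random scheduler; it computes the semilinear predicate ``at least one agent starts in state 1''. All names (`run_protocol`, `epidemic`) are hypothetical.

```python
import random

def run_protocol(inputs, transition, max_steps=10**6):
    """Simulate a population protocol under a uniform random scheduler.

    inputs: initial agent states; transition: (a, b) -> (a', b') applied to an
    ordered pair of agents. Returns the final states and the number of
    pairwise interactions used before all agents agree.
    """
    states = list(inputs)
    n = len(states)
    for step in range(1, max_steps + 1):
        i, j = random.sample(range(n), 2)   # scheduler picks two distinct agents
        states[i], states[j] = transition(states[i], states[j])
        if len(set(states)) == 1:           # convergence check for this protocol
            return states, step
    return states, max_steps

# One-way epidemic: computes the semilinear predicate "some agent holds 1".
def epidemic(a, b):
    return (a or b, a or b)

states, steps = run_protocol([1] + [0] * 99, epidemic)
```

The epidemic finishes in an expected \(\Theta(n\log n)\) interactions, i.e. a logarithmic number of parallel rounds, which is within the polylogarithmic regime the abstract discusses.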
For the entire collection see [Zbl 1381.68003].Perfect failure detection with very few bitshttps://zbmath.org/1496.680502022-11-17T18:59:28.764376Z"Fraigniaud, Pierre"https://zbmath.org/authors/?q=ai:fraigniaud.pierre"Rajsbaum, Sergio"https://zbmath.org/authors/?q=ai:rajsbaum.sergio"Travers, Corentin"https://zbmath.org/authors/?q=ai:travers.corentin"Kuznetsov, Petr"https://zbmath.org/authors/?q=ai:kuznetsov.petr"Rieutord, Thibault"https://zbmath.org/authors/?q=ai:rieutord.thibaultSummary: A failure detector is a distributed oracle that provides each process with a module that continuously outputs an estimate of which processes in the system have failed. The perfect failure detector provides accurate and eventually complete information about process failures. We show that, in asynchronous failure-prone message-passing systems, perfect failure detection can be achieved using an oracle that outputs at most \(\lceil \log \alpha(n) \rceil + 1\) bits per process in \(n\)-process systems, where \(\alpha\) denotes the inverse-Ackermann function. 
This result is essentially optimal, as we also show that, in the same environment, no failure detector outputting a constant number of bits per process can achieve perfect failure detection.Reliable data transmission in wireless sensor networks with data decomposition and ensemble recoveryhttps://zbmath.org/1496.680512022-11-17T18:59:28.764376Z"Li, Fengyong"https://zbmath.org/authors/?q=ai:li.fengyong"Zhou, Gang"https://zbmath.org/authors/?q=ai:zhou.gang"Lei, Jingsheng"https://zbmath.org/authors/?q=ai:lei.jingsheng(no abstract)Energy-efficient fast delivery by mobile agentshttps://zbmath.org/1496.680522022-11-17T18:59:28.764376Z"Bärtschi, Andreas"https://zbmath.org/authors/?q=ai:bartschi.andreas"Tschager, Thomas"https://zbmath.org/authors/?q=ai:tschager.thomasSummary: We consider the problem of collaboratively delivering a package from a specified source node \(s\) to a designated target node \(t\) in an undirected graph \(G=(V,E)\), using \(k\) mobile agents. Each agent \(i\) starts at time 0 at a node \(p_i\) and can move along edges subject to two parameters: its weight \(w_i\), which denotes the rate of energy consumption while travelling, and its velocity \(v_i\), which defines the speed with which agent \(i\) can travel.
We are interested in operating the agents such that we minimize the total energy consumption \(\mathcal {E}\) and the delivery time \(\mathcal {T}\) (time when the package arrives at \(t\)). Specifically, we are after a schedule of the agents that lexicographically minimizes the tuple \((\mathcal {E},\mathcal {T})\). We show that this problem can be solved in polynomial time \(\mathcal {O}(k|V|^2+\mathrm{APSP})\), where \(\mathcal {O}(\mathrm{APSP})\) denotes the running time of an all-pair shortest-paths algorithm. This completes previous research which shows that minimizing only \(\mathcal {E}\) or only \(\mathcal {T}\) is polynomial-time solvable
[the first author et al., LIPIcs -- Leibniz Int. Proc. Inform. 66, Article 10, 14 p. (2017; Zbl 1402.68178); LIPIcs -- Leibniz Int. Proc. Inform. 117, Article 56, 16 p. (2018; Zbl 07378373)],
while minimizing a convex combination of \(\mathcal {E}\) and \(\mathcal {T}\) and lexicographically minimizing the tuple \((\mathcal {T},\mathcal {E})\) are both \(\mathsf {NP}\)-hard
[Zbl 07378373].
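As a hedged illustration of the lexicographic objective \((\mathcal {E},\mathcal {T})\), the sketch below picks a single agent to carry the package the whole way; this is a deliberate simplification (our own code, with hypothetical names) that ignores the hand-overs between agents which the authors' \(\mathcal {O}(k|V|^2+\mathrm{APSP})\) algorithm handles.

```python
# Illustrative only: single-agent delivery without hand-overs.
INF = float("inf")

def floyd_warshall(n, edges):
    # All-pairs shortest paths on an undirected graph given as (u, v, length).
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def best_single_agent(n, edges, agents, s, t):
    """agents: list of (start_node, weight, velocity).
    Returns (energy, time, index) minimizing (energy, time) lexicographically."""
    d = floyd_warshall(n, edges)
    best = None
    for idx, (p, w, v) in enumerate(agents):
        dist = d[p][s] + d[s][t]          # walk to the source, then to the target
        cand = (w * dist, dist / v, idx)
        if best is None or cand < best:   # Python tuple order = lexicographic order
            best = cand
    return best

# Path graph 0-1-2-3 with unit edges; package travels from node 1 to node 3.
energy, time, idx = best_single_agent(
    4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)],
    agents=[(0, 1.0, 1.0), (3, 2.0, 4.0)], s=1, t=3)
```

Here the light, slow agent 0 wins on energy (3 vs. 8) even though the heavy, fast agent 1 would deliver sooner, reflecting that \(\mathcal {E}\) dominates \(\mathcal {T}\) in the tuple.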
For the entire collection see [Zbl 1369.68029].Tree-based cryptographic access controlhttps://zbmath.org/1496.680532022-11-17T18:59:28.764376Z"Alderman, James"https://zbmath.org/authors/?q=ai:alderman.james"Farley, Naomi"https://zbmath.org/authors/?q=ai:farley.naomi"Crampton, Jason"https://zbmath.org/authors/?q=ai:crampton.jasonSummary: As more and more data is outsourced to third party servers, the enforcement of access control policies using cryptographic techniques becomes increasingly important. Enforcement schemes based on symmetric cryptography typically issue users a small amount of secret material which, in conjunction with public information, allows the derivation of decryption keys for all data objects for which they are authorized.
We generalize the design of prior enforcement schemes by mapping access control policies to a graph-based structure. Unlike prior work, we envisage that this structure may be defined \textit{independently} of the policy to target different efficiency goals; the key issue then is how best to map policies to such structures. To exemplify this approach, we design a space-efficient key assignment scheme (KAS) based on a binary tree which imposes a logarithmic bound on the required number of derivations whilst eliminating public information. In the worst case, users may require more cryptographic material than in prior schemes; we mitigate this by designing heuristic optimizations of the mapping and show through experimental results that our scheme performs well compared to existing schemes.
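The logarithmic derivation bound from a binary tree can be sketched with plain hash chaining; this is a generic illustration of tree-based key derivation under assumed names (`child_key`, `derive_leaf_key`), not the paper's concrete scheme.

```python
import hashlib

def child_key(parent_key: bytes, bit: int) -> bytes:
    # Derive a child's key from its parent's key; one hash per tree edge.
    return hashlib.sha256(parent_key + bytes([bit])).digest()

def derive_leaf_key(root_key: bytes, leaf_index: int, depth: int) -> bytes:
    # Walk from the root to the leaf, one derivation per level: O(depth) = O(log n).
    key = root_key
    for level in reversed(range(depth)):
        key = child_key(key, (leaf_index >> level) & 1)
    return key

root = b"\x00" * 32
# A user authorized for leaves 4..7 of a depth-3 tree holds only the key of the
# internal node reached by the first path bit '1'.
node = child_key(root, 1)
# Deriving leaf 5 (binary 101) from that node uses just the remaining bits '01',
# and agrees with a full derivation from the root.
k1 = child_key(child_key(node, 0), 1)
assert k1 == derive_leaf_key(root, 5, 3)
```

Because derivation is one-way, holding `node` yields keys for leaves 4..7 only; keys outside that subtree stay out of reach without further secret material.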
For the entire collection see [Zbl 1493.68010].Is my attack tree correct?https://zbmath.org/1496.680542022-11-17T18:59:28.764376Z"Audinot, Maxime"https://zbmath.org/authors/?q=ai:audinot.maxime"Pinchinat, Sophie"https://zbmath.org/authors/?q=ai:pinchinat.sophie"Kordy, Barbara"https://zbmath.org/authors/?q=ai:kordy.barbaraSummary: Attack trees are a popular way to represent and evaluate potential security threats on systems or infrastructures. The goal of this work is to provide a framework allowing to express and check whether an attack tree is consistent with the analyzed system. We model real systems using transition systems and introduce attack trees with formally specified node labels. We formulate the correctness properties of an attack tree with respect to a system and study the complexity of the corresponding decision problems. The proposed framework can be used in practice to assist security experts in manual creation of attack trees and enhance development of tools for automated generation of attack trees.
For the entire collection see [Zbl 1493.68010].POR for security protocol equivalences. Beyond action-determinismhttps://zbmath.org/1496.680552022-11-17T18:59:28.764376Z"Baelde, David"https://zbmath.org/authors/?q=ai:baelde.david"Delaune, Stéphanie"https://zbmath.org/authors/?q=ai:delaune.stephanie"Hirschi, Lucca"https://zbmath.org/authors/?q=ai:hirschi.luccaSummary: Formal methods have proved effective to automatically analyse protocols. Recently, much research has focused on verifying \textit{trace equivalence} on protocols, which is notably used to model interesting \textit{privacy} properties such as anonymity or unlinkability. Several tools for checking trace equivalence rely on a naive and expensive exploration of all interleavings of concurrent actions, which calls for partial-order reduction (POR) techniques. In this paper, we present the first POR technique for protocol equivalences that does not rely on an action-determinism assumption: we recast trace equivalence as a reachability problem, to which persistent and sleep set techniques can be applied, and we show how to effectively apply these results in the context of symbolic execution. We report on a prototype implementation, improving the tool DeepSec.
For the entire collection see [Zbl 1493.68018].Labeled homomorphic encryption. Scalable and privacy-preserving processing of outsourced datahttps://zbmath.org/1496.680562022-11-17T18:59:28.764376Z"Barbosa, Manuel"https://zbmath.org/authors/?q=ai:barbosa.manuel"Catalano, Dario"https://zbmath.org/authors/?q=ai:catalano.dario"Fiore, Dario"https://zbmath.org/authors/?q=ai:fiore.darioSummary: In privacy-preserving processing of outsourced data a Cloud server stores data provided by one or multiple data providers and then is asked to compute several functions over it. We propose an efficient methodology that solves this problem with the guarantee that a honest-but-curious Cloud learns no information about the data and the receiver learns nothing more than the results. Our main contribution is the proposal and efficient instantiation of a new cryptographic primitive called \textit{Labeled Homomorphic Encryption} (\textsf{labHE}). The fundamental insight underlying this new primitive is that homomorphic computation can be significantly accelerated whenever the program that is being computed over the encrypted data is known to the decrypter and is not secret -- previous approaches to homomorphic encryption do not allow for such a trade-off. Our realization and implementation of \textsf{labHE} targets computations that can be described by degree-two multivariate polynomials. As an application, we consider privacy preserving Genetic Association Studies (GAS), which require computing risk estimates from features in the human genome. Our approach allows performing GAS efficiently, non interactively and without compromising neither the privacy of patients nor potential intellectual property of test laboratories.
For the entire collection see [Zbl 1493.68010].Reusable two-round MPC from LPNhttps://zbmath.org/1496.680572022-11-17T18:59:28.764376Z"Bartusek, James"https://zbmath.org/authors/?q=ai:bartusek.james"Garg, Sanjam"https://zbmath.org/authors/?q=ai:garg.sanjam"Srinivasan, Akshayaram"https://zbmath.org/authors/?q=ai:srinivasan.akshayaram"Zhang, Yinuo"https://zbmath.org/authors/?q=ai:zhang.yinuoSummary: We present a new construction of maliciously-secure, two-round multiparty computation (MPC) in the CRS model, where the first message is reusable an unbounded number of times. The security of the protocol relies on the Learning Parity with Noise (LPN) assumption with inverse polynomial noise rate \(1/n^{1-\epsilon }\) for small enough constant \(\epsilon \), where \(n\) is the LPN dimension. Prior works on reusable two-round MPC required assumptions such as DDH or LWE that imply some flavor of homomorphic computation. We obtain our result in two steps:
\begin{itemize}
\item[--] In the first step, we construct a two-round MPC protocol in the \textit{silent pre-processing model}
[\textit{E. Boyle} et al., Lect. Notes Comput. Sci. 11694, 489--518 (2019; Zbl 07178325)].
Specifically, the parties engage in a computationally inexpensive setup procedure that generates some correlated random strings. Then, the parties commit to their inputs. Finally, each party sends a message depending on the function to be computed, and these messages can be decoded to obtain the output. Crucially, the complexity of the pre-processing phase and the input commitment phase does not grow with the size of the circuit to be computed. We call this \textit{multiparty silent NISC} (msNISC), generalizing the notion of two-party silent NISC of
\textit{E. Boyle} et al. [``Efficient two-round OT extension and silent non-interactive secure computation'', in: Proceedings of the 2019 ACM SIGSAC conference on computer and communications security, CCS'19. New York, NY: Association for Computing Machinery (ACM). 219--308 (2019; \url{doi:10.1145/3319535.3354255})].
We provide a construction of msNISC from LPN in the random oracle model.
\item[--] In the second step, we give a transformation that removes the pre-processing phase and use of random oracle from the previous protocol. This transformation additionally adds (unbounded) reusability of the first round message, giving the first construction of reusable two-round MPC from the LPN assumption. This step makes novel use of randomized encoding of circuits
[\textit{B. Applebaum} et al., SIAM J. Comput. 36, No. 4, 845--888 (2006; Zbl 1126.94014)]
and a variant of the ``tree of MPC message'' technique of
\textit{P. Ananth} et al. [Lect. Notes Comput. Sci. 12550, 28--57 (2020; Zbl 1479.94113)] and \textit{J. Bartusek} et al. [Lect. Notes Comput. Sci. 12551, 320--348 (2020; Zbl 07496584)].
\end{itemize}
For the entire collection see [Zbl 1490.94004].Modular verification of protocol equivalence in the presence of randomnesshttps://zbmath.org/1496.680582022-11-17T18:59:28.764376Z"Bauer, Matthew S."https://zbmath.org/authors/?q=ai:bauer.matthew-steven"Chadha, Rohit"https://zbmath.org/authors/?q=ai:chadha.rohit"Viswanathan, Mahesh"https://zbmath.org/authors/?q=ai:viswanathan.maheshSummary: Security protocols that provide privacy and anonymity guarantees are growing increasingly prevalent in the online world. The highly intricate nature of these protocols makes them vulnerable to subtle design flaws. Formal methods have been successfully deployed to detect these errors, where protocol correctness is formulated as a notion of equivalence (indistinguishably). The high overhead for verifying such equivalence properties, in conjunction with the fact that protocols are never run in isolation, has created a need for modular verification techniques. Existing approaches in formal modeling and (compositional) verification of protocols for privacy have abstracted away a fundamental ingredient in the effectiveness of these protocols, randomness. We present the first composition results for equivalence properties of protocols that are explicitly able to toss coins. Our results hold even when protocols share data (such as long term keys) provided that protocol messages are tagged with the information of which protocol they belong to.
For the entire collection see [Zbl 1493.68010].Zero round-trip time for the extended access control protocolhttps://zbmath.org/1496.680592022-11-17T18:59:28.764376Z"Brendel, Jacqueline"https://zbmath.org/authors/?q=ai:brendel.jacqueline"Fischlin, Marc"https://zbmath.org/authors/?q=ai:fischlin.marcSummary: The Extended Access Control (EAC) protocol makes it possible to create a shared cryptographic key between a client and a server. While originally used in the context of identity card systems and machine readable travel documents, the EAC protocol is increasingly adopted as a universal solution to secure transactions or for attribute-based access control with smart cards. Here we discuss how to enhance the EAC protocol by a so-called zero round-trip time (0RTT) mode. Through this mode the client can, without further interaction, immediately derive a new key from cryptographic material exchanged in previous executions. This makes the 0RTT mode attractive from an efficiency viewpoint such that the upcoming TLS 1.3 standard, for instance, will include its own 0RTT mode. Here we show that the EAC protocol, too, can be augmented to support a 0RTT mode. Our proposed EAC+0RTT protocol is compliant with the basic EAC protocol and adds the 0RTT mode smoothly on top. We also prove the security of our proposal according to the common security model of Bellare and Rogaway in the multi-stage setting.
For the entire collection see [Zbl 1493.68010].Efficiently deciding equivalence for standard primitives and phaseshttps://zbmath.org/1496.680602022-11-17T18:59:28.764376Z"Cortier, Véronique"https://zbmath.org/authors/?q=ai:cortier.veronique"Dallon, Antoine"https://zbmath.org/authors/?q=ai:dallon.antoine"Delaune, Stéphanie"https://zbmath.org/authors/?q=ai:delaune.stephanieSummary: Privacy properties like anonymity or untraceability are now well identified, desirable goals of many security protocols. Such properties are typically stated as equivalence properties. However, automatically checking equivalence of protocols often yields efficiency issues.
We propose an efficient algorithm, based on graph planning and SAT-solving. It can decide equivalence for a bounded number of sessions, for protocols with standard cryptographic primitives and phases (often necessary to specify privacy properties), provided protocols are well-typed, that is, encrypted messages cannot be confused. The resulting implementation, SAT-Equiv, demonstrates a significant speed-up w.r.t. other existing tools that decide equivalence, covering typically more than 100 sessions. Combined with a previous result, SAT-Equiv can now be used to prove security, for some protocols, for an unbounded number of sessions.
For the entire collection see [Zbl 1493.68018].A better composition operator for quantitative information flow analyseshttps://zbmath.org/1496.680612022-11-17T18:59:28.764376Z"Engelhardt, Kai"https://zbmath.org/authors/?q=ai:engelhardt.kaiSummary: Given a description of the quantitative information flow (qif) for components, how can we determine the qif of a system composed from components? We explore this fundamental question mathematically and provide an answer based on a new composition operator. We investigate its properties and prove that it generalises existing composition operators. We illustrate the results with a fresh look on Chaum's dining cryptographers. We show that the new operator enjoys various convenient algebraic properties and that it is well-behaved under composition refinement.
For the entire collection see [Zbl 1493.68010].Rifflescrambler -- a memory-hard password storing functionhttps://zbmath.org/1496.680622022-11-17T18:59:28.764376Z"Gotfryd, Karol"https://zbmath.org/authors/?q=ai:gotfryd.karol"Lorek, Paweł"https://zbmath.org/authors/?q=ai:lorek.pawel"Zagórski, Filip"https://zbmath.org/authors/?q=ai:zagorski.filipSummary: We introduce RiffleScrambler: a new family of directed acyclic graphs and a corresponding data-independent memory hard function with password independent memory access. We prove its memory hardness in the random oracle model.
RiffleScrambler is similar to Catena -- updates of hashes are determined by a graph (bit-reversal or double-butterfly graph in Catena). The advantage of the RiffleScrambler over Catena is that the underlying graphs are not predefined but are generated per salt, as in Balloon Hashing. Such an approach leads to higher immunity against practical parallel attacks. RiffleScrambler offers better efficiency than Balloon Hashing since the in-degree of the underlying graph is equal to 3 (and is much smaller than in Balloon Hashing). At the same time, because the underlying graph is an instance of a Superconcentrator, our construction achieves the same time-memory trade-offs.
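For context, the bit-reversal layer mentioned for Catena can be sketched as follows; this is a generic data-independent graph pass under our own naming (`bit_reverse`, `bitrev_pass`), not RiffleScrambler's salt-dependent Superconcentrator.

```python
import hashlib

def bit_reverse(i: int, g: int) -> int:
    # Reverse the g-bit binary representation of i.
    return int(format(i, f"0{g}b")[::-1], 2)

def bitrev_pass(prev_row, g):
    """One layer of a bit-reversal graph, as used in Catena-style designs:
    vertex i of the new row hashes its left neighbour and vertex tau(i) of the
    previous row, where tau is the bit-reversal permutation. The memory access
    pattern depends only on i, never on the data (data-independence)."""
    n = 1 << g
    new = [hashlib.sha256(prev_row[0] + prev_row[bit_reverse(0, g)]).digest()]
    for i in range(1, n):
        new.append(hashlib.sha256(new[i - 1] + prev_row[bit_reverse(i, g)]).digest())
    return new

g = 4
row0 = [hashlib.sha256(bytes([i])).digest() for i in range(1 << g)]
row1 = bitrev_pass(row0, g)
```

Forcing each new vertex to depend on a far-away bit-reversed predecessor is what makes discarding earlier rows expensive, the core of the memory-hardness argument.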
For the entire collection see [Zbl 1493.68017].Generic traceable proxy re-encryption and accountable extension in consensus networkhttps://zbmath.org/1496.680632022-11-17T18:59:28.764376Z"Guo, Hui"https://zbmath.org/authors/?q=ai:guo.hui.1"Zhang, Zhenfeng"https://zbmath.org/authors/?q=ai:zhang.zhenfeng"Xu, Jing"https://zbmath.org/authors/?q=ai:xu.jing"Xia, Mingyuan"https://zbmath.org/authors/?q=ai:xia.mingyuanSummary: Proxy re-encryption provides a promising solution for sharing encrypted data in consensus networks. When a data owner is going to share her encrypted data with some receiver, she will generate a re-encryption key for this receiver and distribute the key among the consensus network nodes following some rules. By using the re-encryption key, the nodes can transform the ciphertexts for the receiver without learning anything about the underlying plaintexts. However, if malicious nodes and receivers collude, they can obtain the capability to decrypt all transformable ciphertexts of the data owner, especially in the multi-node setting of a consensus network. In order to address this problem, some ``tracing mechanisms'' are naturally required to identify misbehaving nodes and foster accountability when the re-encryption key is abused for distributing the decryption capability.
In this paper, we propose a generic traceable proxy re-encryption construction from any proxy re-encryption scheme, with ciphertexts twice the size of those of the underlying proxy re-encryption scheme. Our construction can then be instantiated properly to yield the first traceable proxy re-encryption scheme with constant size ciphertext, which greatly reduces both the communication and storage costs in consensus networks. Furthermore, we show how to generate an undeniable proof of a node's misbehavior and add accountability to any proxy re-encryption scheme. Our construction is the first traceable proxy re-encryption scheme with accountability, which is desirable in consensus networks so that malicious nodes can be traced and cannot deny their leakage of re-encryption capabilities.
For the entire collection see [Zbl 1493.68022].Anonymous single-sign-on for \(n\) designated services with traceabilityhttps://zbmath.org/1496.680642022-11-17T18:59:28.764376Z"Han, Jinguang"https://zbmath.org/authors/?q=ai:han.jinguang"Chen, Liqun"https://zbmath.org/authors/?q=ai:chen.liqun.1"Schneider, Steve"https://zbmath.org/authors/?q=ai:schneider.steve-a"Treharne, Helen"https://zbmath.org/authors/?q=ai:treharne.helen"Wesemeyer, Stephan"https://zbmath.org/authors/?q=ai:wesemeyer.stephanSummary: Anonymous Single-Sign-On authentication schemes have been proposed to allow users to access a service protected by a verifier without revealing their identity. This has become more important with the introduction of strong privacy regulations. In this paper we describe a new approach whereby anonymous authentication to different verifiers is achieved via authorisation tags and pseudonyms. The particular innovation of our scheme is that authentication can occur only between a user and its designated verifier for a service, and the verification cannot be performed by any other verifier. The benefit of this authentication approach is that it prevents information leakage of a user's service access information, even if the verifiers for these services collude. Our scheme also supports a trusted third party who is authorised to de-anonymise the user and reveal her whole service access information if required. Furthermore, our scheme is lightweight because it does not rely on attribute or policy-based signature schemes to enable access to multiple services. The scheme's security model is given together with a security proof, an implementation and a performance evaluation.
For the entire collection see [Zbl 1493.68018].Stateful protocol compositionhttps://zbmath.org/1496.680652022-11-17T18:59:28.764376Z"Hess, Andreas V."https://zbmath.org/authors/?q=ai:hess.andreas-v"Mödersheim, Sebastian A."https://zbmath.org/authors/?q=ai:modersheim.sebastian-alexander"Brucker, Achim D."https://zbmath.org/authors/?q=ai:brucker.achim-dSummary: We prove a parallel compositionality result for protocols with a shared mutable state, i.e., stateful protocols. For protocols satisfying certain compositionality conditions our result shows that verifying the component protocols in isolation is sufficient to prove security of their composition. Our main contribution is an extension of the compositionality paradigm to stateful protocols where participants maintain shared databases. Because of the generality of our result we also cover many forms of sequential composition as a special case of stateful parallel composition. Moreover, we support declassification of shared secrets. As a final contribution we prove the core of our result in Isabelle/HOL, providing a strong correctness guarantee of our proofs.
For the entire collection see [Zbl 1493.68018].Local obfuscation mechanisms for hiding probability distributionshttps://zbmath.org/1496.680662022-11-17T18:59:28.764376Z"Kawamoto, Yusuke"https://zbmath.org/authors/?q=ai:kawamoto.yusuke|kawamoto.yusuke.2"Murakami, Takao"https://zbmath.org/authors/?q=ai:murakami.takaoSummary: We introduce a formal model for the information leakage of probability distributions and define a notion called distribution privacy as the local differential privacy for probability distributions. Roughly, the distribution privacy of a local obfuscation mechanism means that the attacker cannot significantly gain any information on the distribution of the mechanism's input by observing its output. Then we show that existing local mechanisms can hide input distributions in terms of distribution privacy, while deteriorating the utility by adding too much noise. For example, we prove that the Laplace mechanism needs to add a large amount of noise proportionally to the infinite Wasserstein distance between the two distributions we want to make indistinguishable. To improve the tradeoff between distribution privacy and utility, we introduce a local obfuscation mechanism, called a tupling mechanism, that adds random dummy data to the output. Then we apply this mechanism to the protection of user attributes in location based services. By experiments, we demonstrate that the tupling mechanism outperforms popular local mechanisms in terms of attribute obfuscation and service quality.
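The Laplace mechanism discussed in the abstract can be sketched in a few lines of stdlib Python (function names are ours); the abstract's point is that to hide a whole input distribution, the noise scale must grow with the infinite Wasserstein distance between the two distributions, rather than the usual sensitivity-over-epsilon calibration shown here.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials with mean `scale`
    # is Laplace(0, scale); this avoids inverse-CDF edge cases.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def obfuscate(value: float, sensitivity: float, epsilon: float) -> float:
    """Standard epsilon-DP output perturbation with noise scale
    sensitivity / epsilon. Per the abstract, achieving *distribution*
    privacy instead requires a scale proportional to the infinite
    Wasserstein distance between the distributions to be hidden."""
    return value + laplace_noise(sensitivity / epsilon)

noisy = obfuscate(3.0, sensitivity=1.0, epsilon=0.5)
```

This illustrates the utility trade-off the authors target: the larger the required scale, the noisier each released value, which is what motivates their tupling mechanism.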
For the entire collection see [Zbl 1493.68022].Efficient proof composition for verifiable computationhttps://zbmath.org/1496.680672022-11-17T18:59:28.764376Z"Keuffer, Julien"https://zbmath.org/authors/?q=ai:keuffer.julien"Molva, Refik"https://zbmath.org/authors/?q=ai:molva.refik"Chabanne, Hervé"https://zbmath.org/authors/?q=ai:chabanne.herveSummary: Outsourcing machine learning algorithms helps users to deal with large amounts of data without the need to develop the expertise required by these algorithms. Outsourcing however raises severe security issues due to potentially untrusted service providers. Verifiable computing (VC) tackles some of these issues by assuring computational integrity for an outsourced computation. In this paper, we design a VC protocol tailored to verify a sequence of operations for which no existing VC scheme is suitable to achieve realistic performance objective for the entire sequence. We thus suggest a technique to compose several specialized and efficient VC schemes with a general purpose VC protocol, like Parno et al.'s Pinocchio, by integrating the verification of the proofs generated by these specialized schemes as a function that is part of the sequence of operations verified using the general purpose scheme. The resulting scheme achieves the objectives of the general purpose scheme with increased efficiency for the prover. The scheme relies on the underlying cryptographic assumptions of the composed protocols for correctness and soundness.
For the entire collection see [Zbl 1493.68018].Making \textit{any} attribute-based encryption accountable, efficientlyhttps://zbmath.org/1496.680682022-11-17T18:59:28.764376Z"Lai, Junzuo"https://zbmath.org/authors/?q=ai:lai.junzuo"Tang, Qiang"https://zbmath.org/authors/?q=ai:tang.qiangSummary: Attribute-based encryption (ABE), as one of the most interesting multi-recipient public encryption systems, naturally requires some ``tracing mechanisms'' to identify misbehaving users and foster accountability when unauthorized key re-distributions take place.
We give a generic construction of (black-box) traceable ABE which only doubles the ciphertext size of the underlying ABE scheme. When instantiated properly, it yields the first such scheme with constant size ciphertext and expressive access control.
Furthermore, we extend our generic construction of traceable ABE to support authority accountability. This property is essential for generating an undeniable proof of user misbehavior. Our new generic construction gives the first black-box traceable ABE with authority accountability and constant size ciphertext. All properties are achieved in standard security models.
For the entire collection see [Zbl 1493.68017].Iterative selection of categorical variables for log data anomaly detectionhttps://zbmath.org/1496.680692022-11-17T18:59:28.764376Z"Landauer, Max"https://zbmath.org/authors/?q=ai:landauer.max"Höld, Georg"https://zbmath.org/authors/?q=ai:hold.georg"Wurzenberger, Markus"https://zbmath.org/authors/?q=ai:wurzenberger.markus"Skopik, Florian"https://zbmath.org/authors/?q=ai:skopik.florian"Rauber, Andreas"https://zbmath.org/authors/?q=ai:rauber.andreasSummary: Log data is a well-known source for anomaly detection in cyber security. Accordingly, a large number of approaches based on self-learning algorithms have been proposed in the past. Most of these approaches focus on numeric features extracted from logs, since these variables are convenient to use with commonly known machine learning techniques. However, system log data frequently involves multiple categorical features that provide further insights into the state of a computer system and thus have the potential to improve detection accuracy. Unfortunately, it is non-trivial to derive useful correlation rules from the vast number of possible values of all available categorical variables. Therefore, we propose the Variable Correlation Detector (VCD) that employs a sequence of selection constraints to efficiently disclose pairs of variables with correlating values. The approach also comprises an online mode that continuously updates the identified variable correlations to account for system evolution and applies statistical tests on conditional occurrence probabilities for anomaly detection. Our evaluations show that the VCD can be adjusted to fit the properties of the data at hand and discloses associated variables with high accuracy. Our experiments with real log data indicate that the VCD is capable of detecting attacks such as scans and brute-force intrusions with higher accuracy than existing detectors.
For the entire collection see [Zbl 1487.68009].Efficient and secure outsourcing of differentially private data publicationhttps://zbmath.org/1496.680702022-11-17T18:59:28.764376Z"Li, Jin"https://zbmath.org/authors/?q=ai:li.jin.1"Ye, Heng"https://zbmath.org/authors/?q=ai:ye.heng"Wang, Wei"https://zbmath.org/authors/?q=ai:wang.wei.23|wang.wei.50|wang.wei.39|wang.wei.58|wang.wei.29|wang.wei.30|wang.wei.12|wang.wei.19|wang.wei.40|wang.wei.55|wang.wei.8|wang.wei.15|wang.wei.25|wang.wei.20|wang.wei.46|wang.wei.45|wang.wei.24|wang.wei.47|wang.wei.65|wang.wei.16|wang.wei.52|wang.wei.62|wang.wei.49|wang.wei.31|wang.wei.34|wang.wei.64|wang.wei.9|wang.wei.28|wang.wei.38|wang.wei.44|wang.wei.3|wang.wei.27|wang.wei.13|wang.wei.36|wang.wei.2|wang.wei.59|wang.wei.57|wang.wei.21|wang.wei.32|wang.wei.1|wang.wei.41|wang.wei.53"Lou, Wenjing"https://zbmath.org/authors/?q=ai:lou.wenjing"Hou, Y. Thomas"https://zbmath.org/authors/?q=ai:hou.yiwei-thomas"Liu, Jiqiang"https://zbmath.org/authors/?q=ai:liu.jiqiang"Lu, Rongxing"https://zbmath.org/authors/?q=ai:lu.rongxingSummary: As big data becomes a main impetus for the next generation of the IT industry, big data privacy, an unavoidable topic in the big data era, has received considerable attention in recent years. To deal with the privacy challenges, differential privacy has been widely discussed as one of the most popular privacy-enhancing techniques. However, with today's differential privacy techniques, it is impossible to generate a sanitized dataset that can suit different algorithms or applications regardless of the privacy budget. In other words, in order to adapt to various applications and privacy budgets, different kinds of noise have to be added, which inevitably incurs enormous costs for both communication and storage.
To address the above challenges, in this paper, we propose a novel scheme for outsourcing differential privacy in cloud computing, where an additive homomorphic encryption (e.g., Paillier encryption) is employed to compute noise for differential privacy by cloud servers to boost efficiency. The proposed scheme allows data providers to outsource their dataset sanitization procedure to cloud service providers with a low communication cost. In addition, the data providers can go offline after uploading their datasets and noise parameters, which is one of the critical requirements for a practical system. We present a detailed theoretical analysis of our proposed scheme, including proofs of differential privacy and security. Moreover, we also report an experimental evaluation on real UCI datasets, which confirms the effectiveness of the proposed scheme.
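The additive homomorphism is what lets the cloud compute noise on ciphertexts: with Paillier encryption, multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A minimal sketch with insecurely small toy primes (illustrative parameters only, not the paper's protocol):

```python
import math, random

def keygen(p, q):
    """Paillier keys from two primes (toy sizes; real deployments use large primes)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because we fix the generator g = n + 1
    return n, (n, lam, mu)

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:    # r must be a unit mod n
        r = random.randrange(1, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(sk, c):
    n, lam, mu = sk
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

def add_encrypted(n, c1, c2):
    """Homomorphic addition: Dec(c1 * c2 mod n^2) = m1 + m2 mod n."""
    return c1 * c2 % (n * n)
```

The data provider uploads Enc(value) once; the server can later produce Enc(value + noise) for any privacy budget without further interaction with the provider.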
For the entire collection see [Zbl 1493.68017].Automated identification of desynchronisation attacks on shared secretshttps://zbmath.org/1496.680712022-11-17T18:59:28.764376Z"Mauw, Sjouke"https://zbmath.org/authors/?q=ai:mauw.sjouke"Smith, Zach"https://zbmath.org/authors/?q=ai:smith.zach"Toro-Pozo, Jorge"https://zbmath.org/authors/?q=ai:toro-pozo.jorge"Trujillo-Rasua, Rolando"https://zbmath.org/authors/?q=ai:trujillo-rasua.rolandoSummary: Key-updating protocols are a class of communication protocol that aim to increase security by having the participants change encryption keys between protocol executions. However, such protocols can be vulnerable to desynchronisation attacks, a denial of service attack in which the agents are tricked into updating their keys improperly, impeding future communication. In this work we introduce a method that can be used to automatically verify (or falsify) resistance to desynchronisation attacks for a range of protocols. This approach is then used to identify previously unreported vulnerabilities in two published RFID grouping protocols.
For the entire collection see [Zbl 1493.68018].Decentralized policy-hiding ABE with receiver privacyhttps://zbmath.org/1496.680722022-11-17T18:59:28.764376Z"Michalevsky, Yan"https://zbmath.org/authors/?q=ai:michalevsky.yan"Joye, Marc"https://zbmath.org/authors/?q=ai:joye.marcSummary: Attribute-based encryption (ABE) enables limiting access to encrypted data to users with certain attributes. Different aspects of ABE were studied, such as the multi-authority setting (MA-ABE), and policy hiding, meaning the access policy is unknown to unauthorized parties. However, no practical scheme so far provably provides both properties, which are often desirable in real-world applications: supporting decentralization while hiding the access policy. We present the first practical decentralized ABE scheme with a proof of being policy-hiding. Our construction is based on a decentralized inner-product predicate encryption scheme, introduced in this paper, which hides the encryption policy. It results in an ABE scheme supporting conjunctions, disjunctions and threshold policies, that protects the access policy from parties that are not authorized to decrypt the content. Further, we address the issue of receiver privacy. By using our scheme in combination with vector commitments, we hide the overall set of attributes possessed by the receiver from individual authorities, only revealing the attribute that the authority is controlling. Finally, we propose randomizing-polynomial encodings that immunize the scheme in the presence of corrupt authorities.
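How an inner-product predicate can express a conjunction of attribute equalities is worth seeing concretely. Below is the classic Katz-Sahai-Waters-style vector encoding, reduced to its plaintext arithmetic (no actual encryption, and not necessarily the exact encoding used in this paper):

```python
import random

P = 2**61 - 1   # toy prime modulus for the vector arithmetic

def policy_vector(required):
    """Encode 't_1 = b_1 AND ... AND t_k = b_k' as y, so that for attribute
    vector x the inner product equals sum_i r_i * (a_i - b_i) mod P."""
    r = [random.randrange(1, P) for _ in required]
    const = -sum(ri * bi for ri, bi in zip(r, required)) % P
    return [const] + r

def attribute_vector(attrs):
    return [1] + list(attrs)

def matches(attrs, y):
    """Decryption succeeds exactly when the inner product vanishes mod P."""
    x = attribute_vector(attrs)
    return sum(a * b for a, b in zip(x, y)) % P == 0
```

A single mismatched attribute leaves a nonzero term r_i * (a_i - b_i), so the inner product is nonzero and decryption fails (for multiple mismatches, cancellation happens only with negligible probability 1/P).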
For the entire collection see [Zbl 1493.68017].Symmetric-key corruption detection: when XOR-MACs meet combinatorial group testinghttps://zbmath.org/1496.680732022-11-17T18:59:28.764376Z"Minematsu, Kazuhiko"https://zbmath.org/authors/?q=ai:minematsu.kazuhiko"Kamiya, Norifumi"https://zbmath.org/authors/?q=ai:kamiya.norifumiSummary: We study a class of MACs, which we call corruption-detectable MACs, that are able not only to check the integrity of the whole message, but also to detect the parts of the message that are corrupted. It can be seen as an application of the classical Combinatorial Group Testing (CGT) to message authentication. However, previous work on this application has an inherent limitation in its communication cost. We present a novel approach to combine CGT and a class of linear MACs (XOR-MAC) that breaks this limit. Our proposal, \textsf{XOR}-\textsf{GTM}, has a significantly smaller communication cost than any of the previous corruption-detectable MACs, while keeping the same corruption detection capability. Our numerical examples for a storage application show a reduction of communication by a factor of around 15 to 70 compared with previous schemes. \textsf{XOR}-\textsf{GTM} is parallelizable and is as efficient as standard MACs. We prove that \textsf{XOR}-\textsf{GTM} is secure under standard cryptographic assumptions.
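The classical group-testing step that such schemes build on can be simulated in a few lines. This sketch covers only the identification logic with a simple 1-disjunct design (each message block covered by a distinct pair of group MACs), not the XOR-GTM construction itself:

```python
from itertools import combinations

def make_design(t):
    """1-disjunct test design: block j is covered by a distinct pair of tests.
    No column is contained in another, so one corrupted block is identified exactly."""
    return list(combinations(range(t), 2))

def suspects(design, failed):
    """A block covered by any test that verified OK is provably clean;
    the remaining blocks (all of whose tests failed) are the suspects."""
    return [j for j, tests in enumerate(design)
            if all(i in failed for i in tests)]

# simulate t = 4 group MACs over 6 message blocks, with block 2 tampered:
design = make_design(4)
failed = {i for i in range(4) if i in design[2]}  # a test fails iff it covers block 2
```

Here only the two tests covering block 2 fail, and cover decoding pinpoints that block; a d-disjunct design generalizes this to up to d corrupted blocks.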
For the entire collection see [Zbl 1493.68022].Forward-secure revocable identity-based encryptionhttps://zbmath.org/1496.680742022-11-17T18:59:28.764376Z"Qin, Baodong"https://zbmath.org/authors/?q=ai:qin.baodong"Bai, Xue"https://zbmath.org/authors/?q=ai:bai.xue"Zheng, Dong"https://zbmath.org/authors/?q=ai:zheng.dong"Cui, Hui"https://zbmath.org/authors/?q=ai:cui.hui"Luo, Yiyuan"https://zbmath.org/authors/?q=ai:luo.yiyuanSummary: For identity-based encryption (IBE), if a user's private key is compromised, the security of his/her ciphertexts will fail completely. Revocation capability provides an effective way to mitigate the above harm, so that the adversary can no longer access future ciphertexts. However, current revocable IBE schemes do not provide any means to guarantee the security of the user's previous ciphertexts. In this paper, we propose a new cryptographic primitive, namely forward-secure revocable identity-based encryption (FS-RIBE), to address this issue. In FS-RIBE, when the user's current private key is fully exposed, forward security guarantees that the private keys prior to this exposure remain secure, while the revocation capability further guarantees that the adversary cannot obtain any valid decryption keys for future times. We provide a formal definition and security model for FS-RIBE, and give a generic construction from (Hierarchical) IBE that is secure under this security model. Finally, we show some results of instantiations from various IBE and Hierarchical IBE schemes.
For the entire collection see [Zbl 1487.68013].Cross-domain attribute-based access control encryptionhttps://zbmath.org/1496.680752022-11-17T18:59:28.764376Z"Sedaghat, Mahdi"https://zbmath.org/authors/?q=ai:sedaghat.mahdi"Preneel, Bart"https://zbmath.org/authors/?q=ai:preneel.bartSummary: Logic access control enforces who can read and write data; the enforcement is typically performed by a fully trusted entity.
At TCC 2016 [Lect. Notes Comput. Sci. 9986, 547--576 (2016; Zbl 1400.94138)], \textit{I. Damgård} et al.
proposed Access Control Encryption (ACE) schemes where a predicate function decides whether or not users can read (decrypt) and write (encrypt) data, while the message secrecy and the users' anonymity are preserved against malicious parties. Subsequently, several ACE constructions with an arbitrary identity-based access policy have been proposed, but they have huge ciphertext and key sizes and/or rely on indistinguishability obfuscation.
At SP 2021 [``Cross-domain access control encryption: arbitrary-policy, constant-size, efficient'', in: Proceeding of the 2021 IEEE symposium on security and privacy, SP 2021. Los Alamitos, CA: IEEE Computer Society. 748--761 (2021; \url{doi:10.1109/SP40001.2021.00023})], \textit{X. Wang} and \textit{S. M. Chow}
proposed a Cross-Domain ACE scheme with constant-size ciphertext and arbitrary identity-based policy; the key generators are separated into two distinct parties, called Sender Authority and Receiver Authority. In this paper, we improve over their work with a novel construction that provides a more expressive access control policy based on attributes rather than on identities, the security of which relies on standard assumptions. Our generic construction combines Structure-Preserving Signatures, Non-Interactive Zero-Knowledge proofs, and Re-randomizable Ciphertext-Policy Attribute-Based Encryption schemes. Moreover, we propose an efficient scheme in which the sizes of ciphertexts and encryption and decryption keys are constant and thus independent of the number of receivers and their attributes. Our experiments demonstrate that not only is our system more flexible, but it is also more efficient, resulting in shorter decryption keys (reduced from about 100 to 47 bytes) and ciphertexts (reduced from about 1400 to 1047 bytes).
For the entire collection see [Zbl 1490.68011].Dynamic searchable symmetric encryption schemes supporting range queries with forward (and backward) securityhttps://zbmath.org/1496.680762022-11-17T18:59:28.764376Z"Zuo, Cong"https://zbmath.org/authors/?q=ai:zuo.cong"Sun, Shi-Feng"https://zbmath.org/authors/?q=ai:sun.shifeng"Liu, Joseph K."https://zbmath.org/authors/?q=ai:liu.joseph-k-k"Shao, Jun"https://zbmath.org/authors/?q=ai:shao.jun"Pieprzyk, Josef"https://zbmath.org/authors/?q=ai:pieprzyk.josef-pSummary: Dynamic searchable symmetric encryption (DSSE) is a useful cryptographic tool in encrypted cloud storage. However, it has been reported that DSSE usually suffers from file-injection attacks and content leak of deleted documents. To mitigate these attacks, forward security and backward security have been proposed. Nevertheless, the existing forward/backward-secure DSSE schemes can only support single keyword queries. To address this problem, in this paper, we propose two DSSE schemes supporting range queries. One is forward-secure and supports a large number of documents. The other can achieve both forward security and backward security, while it can only support a limited number of documents. Finally, we also give the security proofs of the proposed DSSE schemes in the random oracle model.
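A standard way to reduce range queries to the keyword queries a DSSE scheme supports (common in range-DSSE designs; this sketch is not guaranteed to match this paper's exact construction) is a binary tree over the value domain: each value is indexed under its logarithmically many dyadic ancestors, and a range is queried as its minimal dyadic cover:

```python
def dyadic_cover(lo, hi, start, size):
    """Minimal dyadic intervals (start, size) covering [lo, hi] within [start, start+size)."""
    if hi < start or lo >= start + size:
        return []                                  # disjoint from this node
    if lo <= start and start + size - 1 <= hi:
        return [(start, size)]                     # node fully inside the range
    half = size // 2
    return dyadic_cover(lo, hi, start, half) + dyadic_cover(lo, hi, start + half, half)

def index_keywords(value, bits):
    """The value's dyadic ancestors; each node becomes one searchable keyword."""
    return {((value >> s) << s, 1 << s) for s in range(bits + 1)}

def in_range(value, lo, hi, bits):
    """A value matches a range iff one of its ancestors appears in the cover."""
    return bool(index_keywords(value, bits) & set(dyadic_cover(lo, hi, 0, 1 << bits)))
```

Each document stores O(log n) keywords and a range of any width is answered with O(log n) single-keyword searches, so forward (or backward) security of the underlying single-keyword scheme carries over query by query.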
For the entire collection see [Zbl 1493.68017].A decade of TAPSOFT. Aspects of progress and prospects in theory and practice of software developmenthttps://zbmath.org/1496.680772022-11-17T18:59:28.764376Z"Ehrig, Hartmut"https://zbmath.org/authors/?q=ai:ehrig.hartmut"Mahr, Bernd"https://zbmath.org/authors/?q=ai:mahr.berndSummary: The relationship between theory and practice of software development against the background of the driving forces in the 1970s and 1980s was the main topic of the first TAPSOFT conference in 1985. After a decade of TAPSOFT, the intention of this survey is not so much to give a complete review of the TAPSOFT conferences as to discuss the general background and to focus on specific aspects of theory and practice which seem to be typical for TAPSOFT: the support of software development by algebraic methods, techniques and tools, in particular the corresponding activities at TU Berlin. The survey in this paper shows that there is quite a different kind of progress in the decades before and after TAPSOFT'85: before 1985 the focus was more on the development of new concepts, while consolidation and attempts to adapt to practical needs were dominant after 1985.
Finally, the expectations for the future of theory and practice of software development are discussed against the background of the driving forces in the 1990s, in the hope that TAPSOFT will be able to meet these requirements.
For the entire collection see [Zbl 0835.68002].Theory and practice of software development. Stages in a debatehttps://zbmath.org/1496.680782022-11-17T18:59:28.764376Z"Floyd, Christiane"https://zbmath.org/authors/?q=ai:floyd.christianeSummary: Starting from the experience gained in organizing TAPSOFT'85, the paper discusses the place of formal methods in software development. It distinguishes two notions of theory: the mathematical science of computation and the treatment of computing as a human activity. An adequate software theory needs to take both theoretical perspectives into account. Therefore, the paper explores the borderline of formalization and human activity in several directions: concerning the role and scope of formalized procedures, the relation between formal models and situated use, the process of learning in software development and the ways computer programs become effective in use. Fundamental assumptions underlying formal methods and their relation to emancipatory approaches such as participatory design are discussed. The paper closes with calling for a dialogical framework for further pursuing these questions.
For the entire collection see [Zbl 0835.68002].Precise interprocedural dataflow analysis with applications to constant propagationhttps://zbmath.org/1496.680792022-11-17T18:59:28.764376Z"Sagiv, Mooly"https://zbmath.org/authors/?q=ai:sagiv.mooly"Reps, Thomas"https://zbmath.org/authors/?q=ai:reps.thomas-w"Horwitz, Susan"https://zbmath.org/authors/?q=ai:horwitz.susanSummary: This paper concerns interprocedural dataflow-analysis problems in which the dataflow information at a program point is represented by an environment (i.e., a mapping from symbols to values), and the effect of a program operation is represented by a distributive environment transformer. We present an efficient dynamic-programming algorithm that produces precise solutions.
The method is applied to solve precisely and efficiently two (decidable) variants of the interprocedural constant-propagation problem: \textit{copy constant propagation and linear constant propagation}. The former interprets program statements of the form \(x :=7\) and \(x :=y\). The latter also interprets statements of the form \(x :=5*y+17\).
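The two transfer functions can be sketched over the usual constant lattice, with None standing for "not a constant". This is a hypothetical encoding of the statement forms named above, not the paper's distributive environment-transformer framework:

```python
def transfer(env, stmt):
    """Abstract effect of one statement on a map var -> constant (None = unknown)."""
    env = dict(env)
    if stmt[0] == 'const':            # x := 7        (copy constant propagation)
        _, x, c = stmt
        env[x] = c
    elif stmt[0] == 'copy':           # x := y        (copy constant propagation)
        _, x, y = stmt
        env[x] = env.get(y)
    elif stmt[0] == 'linear':         # x := a*y + b  (linear constant propagation)
        _, x, a, y, b = stmt
        v = env.get(y)
        env[x] = a * v + b if v is not None else None
    return env

def meet(e1, e2):
    """Merge two branch environments: keep only the constants both agree on."""
    return {x: v for x, v in e1.items() if v is not None and e2.get(x) == v}
```

Running the transfer for y := 5 and then x := 5*y + 17 yields x = 42, while merging branches that disagree on x correctly demotes x to non-constant.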
For the entire collection see [Zbl 0835.68002].Locally abstract, globally concrete semantics of concurrent programming languageshttps://zbmath.org/1496.680802022-11-17T18:59:28.764376Z"Din, Crystal Chang"https://zbmath.org/authors/?q=ai:din.crystal-chang"Hähnle, Reiner"https://zbmath.org/authors/?q=ai:hahnle.reiner"Johnsen, Einar Broch"https://zbmath.org/authors/?q=ai:johnsen.einar-broch"Pun, Ka I"https://zbmath.org/authors/?q=ai:pun.ka-i"Tapia Tarifa, Silvia Lizeth"https://zbmath.org/authors/?q=ai:tapia-tarifa.silvia-lizethSummary: Language semantics that is formal and mathematically precise, is the essential prerequisite for the design of logics and calculi that permit automated reasoning about programs. The most popular approach to programming language semantics -- small step operational semantics (SOS) -- is not modular in the sense that it does not separate conceptual layers in the target language. SOS is also hard to relate formally to program logics and calculi. Minimalist semantic formalisms, such as automata, Petri nets, or \(\pi\)-calculus are inadequate for rich programming languages. We propose a new formal trace semantics for a concurrent, active objects language. It is designed with the explicit aim of being compatible with a sequent calculus for a program logic and has a strong model theoretic flavor. Our semantics separates sequential and object-local from concurrent computation: the former yields abstract traces which in a second stage are combined into global system behavior.
For the entire collection see [Zbl 1371.68015].Towards critical pair analysis for the graph programming language GP 2https://zbmath.org/1496.680812022-11-17T18:59:28.764376Z"Hristakiev, Ivaylo"https://zbmath.org/authors/?q=ai:hristakiev.ivaylo"Plump, Detlef"https://zbmath.org/authors/?q=ai:plump.detlefSummary: We present the foundations of critical pair analysis for the graph programming language GP 2. Our goal is to develop a static checker that can prove or refute confluence (functional behaviour) for a large class of graph programs. In this paper, we introduce symbolic critical pairs of GP 2 rule schemata, which are labelled with expressions, and establish the completeness and finiteness of the set of symbolic critical pairs over a finite set of rule schemata. We give a procedure for their construction.
For the entire collection see [Zbl 1428.68025].Verification of logic programs with delay declarationshttps://zbmath.org/1496.680822022-11-17T18:59:28.764376Z"Apt, Krzysztof R."https://zbmath.org/authors/?q=ai:apt.krzysztof-rafal"Luitjes, Ingrid"https://zbmath.org/authors/?q=ai:luitjes.ingridSummary: Logic programs augmented with delay declarations form a highly expressive programming language in which dynamic networks of processes that communicate asynchronously by means of multiparty channels can be easily created. In this paper we study the correctness of these programs. In particular, we propose proof methods allowing us to deal with occur-check freedom, absence of deadlock, absence of errors in the presence of arithmetic relations, and termination. These methods turn out to be simple modifications of the corresponding methods dealing with Prolog programs. This allows us to derive correct delay declarations by analyzing Prolog programs. Finally, we point out difficulties concerning proofs of termination.
For the entire collection see [Zbl 1492.68008].Computing the well-founded semantics fasterhttps://zbmath.org/1496.680832022-11-17T18:59:28.764376Z"Berman, Kenneth A."https://zbmath.org/authors/?q=ai:berman.kenneth-a"Schlipf, John S."https://zbmath.org/authors/?q=ai:schlipf.john-stewart"Franco, John V."https://zbmath.org/authors/?q=ai:franco.john-vSummary: We address methods of speeding up the calculation of the well-founded semantics for normal propositional logic programs. We first consider two algorithms already reported in the literature and show that these, plus a variation upon them, have much improved worst-case behavior for special cases of input. Then we propose a general algorithm to speed up the calculation for logic programs with at most two positive subgoals per clause, intended to improve the \textit{worst-case} performance of the computation. For a logic program \(\mathcal{P}\) in atoms \(\mathcal{A}\), the speedup over the straight Van Gelder alternating fixed point algorithm (assuming worst-case behavior for both algorithms) is approximately \((|\mathcal{P}|/|\mathcal{A}|)^{(1/3)}\). For \(|\mathcal{P}|\geq |\mathcal{A}|^4\), the algorithm runs in time linear in \(|\mathcal{P}|\).
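For reference, Van Gelder's alternating fixed point algorithm, the baseline these speedups are measured against, can be sketched for ground normal programs as follows (a naive implementation for illustration, not the paper's optimized algorithms):

```python
def consequences(rules, assumed_true):
    """Least model of the reduct: a rule fires only if its positive body is
    already derived and no atom of its negative body lies in assumed_true."""
    derived, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in derived and set(pos) <= derived and not (set(neg) & assumed_true):
                derived.add(head)
                changed = True
    return derived

def well_founded(rules, atoms):
    lower = set()                                  # underestimate of the true atoms
    while True:
        upper = consequences(rules, lower)         # overestimate (negatives assumed to hold)
        new_lower = consequences(rules, upper)
        if new_lower == lower:
            return lower, atoms - upper            # (true, false); the rest is undefined
        lower = new_lower
```

For the program a. b :- not a. c :- not b. p :- not p., this computes a and c true, b false, and leaves p undefined, as the well-founded semantics prescribes.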
For the entire collection see [Zbl 0875.00116].Game characterizations of logic program propertieshttps://zbmath.org/1496.680842022-11-17T18:59:28.764376Z"Blair, Howard A."https://zbmath.org/authors/?q=ai:blair.howard-aSummary: A family of simple two-player games will be presented which vary by how play passes between players and by what constraint must be maintained by the players in order to avoid losing. The players are representable as interacting almost independent logic programs. A correspondence between winning strategies, well-founded dependencies, constructive ordinals and hyperarithmetic sets is presented. Complexity results can be obtained for logic program properties in a uniform way. This paper demonstrates the technique as applied to two apparently diverse properties, each of a very high degree of undecidability.
For the entire collection see [Zbl 0875.00116].Default consequence relations as a logical framework for logic programshttps://zbmath.org/1496.680852022-11-17T18:59:28.764376Z"Bochman, Alexander"https://zbmath.org/authors/?q=ai:bochman.alexanderSummary: We consider the use of default consequence relations suggested in
[the author, ``On the relation between default and modal consequence relations'', in: Proceedings of the 4th international conference on principles of knowledge representation and reasoning, KR'94. Burlington, MA: Morgan Kaufmann. 63--74 (1994; \url{doi:10.1016/B978-1-4832-1452-8.50103-2});
Ann. Math. Artif. Intell. 15, No. 1, 101--123 (1995; Zbl 0855.03012)]
as a `logical basis' for normal logic programs. We give a representation of major semantics for logic programs in this framework and study the question what kind of default reasoning is appropriate for them. It is shown, in particular, that default consequence relations based on three-valued inference are adequate for these semantics.
For the entire collection see [Zbl 0875.00116].Characterizations of the stable semantics by partial evaluationhttps://zbmath.org/1496.680862022-11-17T18:59:28.764376Z"Brass, Stefan"https://zbmath.org/authors/?q=ai:brass.stefan"Dix, Jürgen"https://zbmath.org/authors/?q=ai:dix.jurgenSummary: There are three most prominent semantics defined for certain subclasses of disjunctive logic programs: GCWA (for positive programs), PERFECT (for stratified programs) and STABLE (defined for the whole class of all disjunctive programs). While there are various competitors based on 3-valued models, notably WFS and its disjunctive counterparts, there are no other semantics consisting of 2-valued models. We argue that the reason for this is the \textit{Partial Evaluation}-property (also called \textit{Unfolding} or \textit{Partial Deduction}) well-known from Logic Programming. In fact, we prove characterizations of these semantics and show that if a semantics SEM satisfies \textit{Partial Evaluation} and \textit{Elimination of Tautologies} then (1) \textit{SEM is based on 2-valued minimal models for positive programs}, and (2) \textit{if SEM satisfies in addition} Elimination of Contradictions, \textit{it is based on stable models}. We also show that if we require \textit{Isomorphy} and \textit{Relevance} then STABLE is \textit{completely} determined on the class of all stratified disjunctive logic programs. The underlying notion of a semantics is very general and our abstract properties state that certain \textit{syntactical transformations} on programs are equivalence-preserving.
For the entire collection see [Zbl 0875.00116].Stable classes and operator pairs for disjunctive programshttps://zbmath.org/1496.680872022-11-17T18:59:28.764376Z"Kalinski, Jürgen"https://zbmath.org/authors/?q=ai:kalinski.jurgenSummary: \textit{C. R. Baral} and \textit{V. S. Subrahmanian} [J. Autom. Reasoning 10, No. 3, 399--420 (1993; Zbl 0782.68074)] introduced the notion of stable classes for normal logic programs. In contrast to stable models stable classes always exist and can be given a constructive characterization. We generalize the Baral-Subrahmanian approach to disjunctive programs and propose \(mf\)-stable classes for different functions \(mf\). Such \(mf\)-stable classes always exist and are sound with respect to stable model semantics. Operationalizations for approximate but efficient query evaluation are defined in terms of three-valued interpretations and their relation with \(mf\)-stable classes is analyzed. Finally, analogous concepts are given for an approach based on states instead of models.
For the entire collection see [Zbl 0875.00116].A refinement of import/export declarations in modular logic programming and its semanticshttps://zbmath.org/1496.680882022-11-17T18:59:28.764376Z"Karali, Isambo"https://zbmath.org/authors/?q=ai:karali.isambo"Halatsis, Constantin"https://zbmath.org/authors/?q=ai:halatsis.constantinSummary: Encapsulation constructs with import/export declarations are the structuring facility offered in most commercial Prolog systems. However, real-life applications have been shown to require a finer information exchange between encapsulated pieces of code. In this paper, a refinement of import/export declarations for modules of logic programs is presented. This offers a stricter form of communication between the modules and a larger variety of visibility states of their predicates, the standard approaches being special cases of it. The semantics of this module system has been examined: model-theoretic, fixpoint and operational semantics are given and have been proved equivalent. Instead of using other logics, all these semantics extend the ones of Horn clause logic using concepts commonly used in it. In addition, the module system has been naturally transformed to Horn clause logic, exploiting the distinction of the predicates within a module according to the interface declarations of this module. A form of equivalence with the other semantics of the system is given. Moreover, the employed transformation has provided us with a basis for a preprocessor-based implementation of the module system.
For the entire collection see [Zbl 0835.68002].Loop checking and the well-founded semanticshttps://zbmath.org/1496.680892022-11-17T18:59:28.764376Z"Lifschitz, Vladimir"https://zbmath.org/authors/?q=ai:lifschitz.vladimir"McCain, Norman"https://zbmath.org/authors/?q=ai:mccain.norman"Przymusinski, Teodor C."https://zbmath.org/authors/?q=ai:przymusinski.teodor-c"Stärk, Robert F."https://zbmath.org/authors/?q=ai:stark.robert-fSummary: Using a calculus of goals, we define the success and failure of a goal for propositional programs in the presence of loop checking. The calculus is sound with respect to the well-founded semantics; for finite programs, it is also complete. A Prolog-style proof search strategy for a modification of this calculus provides a query evaluation algorithm for finite propositional programs under the well-founded semantics. This algorithm is implemented as a meta-interpreter.
For the entire collection see [Zbl 0875.00116].Incremental methods for optimizing partial instantiationhttps://zbmath.org/1496.680902022-11-17T18:59:28.764376Z"Ng, Raymond T."https://zbmath.org/authors/?q=ai:ng.raymond-t"Tian, Xiaomei"https://zbmath.org/authors/?q=ai:tian.xiaomeiSummary: It has been shown that mixed integer programming methods can effectively support minimal model, stable model and well-founded model semantics for ground deductive databases. Recently, a novel approach called partial instantiation has been developed which, when integrated with mixed integer programming methods, can handle non-ground logic programs. The goal of this paper is to explore how this integrated framework based on partial instantiation can be optimized. In particular, we develop an incremental algorithm that minimizes repetitive computations. We also develop optimization techniques to further enhance the efficiency of our incremental algorithm. Experimental results indicate that our algorithm and optimization techniques can bring about very significant improvement in run-time performance.
For the entire collection see [Zbl 0875.00116].Trans-epistemic semantics for logic programshttps://zbmath.org/1496.680912022-11-17T18:59:28.764376Z"Rajasekar, Arcot"https://zbmath.org/authors/?q=ai:rajasekar.arcot-kSummary: Each stable model of a logic program is computed in isolation. This does not allow one to reason in any stable model with information from other stable models. Such information interchange is needed when computing with full introspection, as performed by Gelfond's epistemic specifications, or when modeling multi-agent reasoning using stable models. In this paper, we define syntactic and semantic structures that allow the use of information from multiple stable models when computing one stable model. Hence a notion of second-order stability is introduced and every computed model should be stable at that level. We define a concept of trans-epistemic (te-) logic programs that is reduced to a logic program using information from a trans-epistemic interpretation. The te-interpretation is checked for stability against the set of stable models of the logic program using a consensus function. We discuss the properties of trans-epistemic stable models and motivate their use with examples.
For the entire collection see [Zbl 0875.00116].Logic programming in tensor spaceshttps://zbmath.org/1496.680922022-11-17T18:59:28.764376Z"Sakama, Chiaki"https://zbmath.org/authors/?q=ai:sakama.chiaki"Inoue, Katsumi"https://zbmath.org/authors/?q=ai:inoue.katsumi"Sato, Taisuke"https://zbmath.org/authors/?q=ai:sato.taisukeSummary: This paper introduces a novel approach to computing logic programming semantics. First, a propositional Herbrand base is represented in a vector space and if-then rules in a program are encoded in a matrix. Then the least fixpoint of a definite logic program is computed by matrix-vector products with a non-linear operation. Second, disjunctive logic programs are represented in third-order tensors and their minimal models are computed by algebraic manipulation of tensors. Third, normal logic programs are represented by matrices and third-order tensors, and their stable models are computed. The result of this paper exploits a new connection between linear algebraic computation and symbolic computation, which has the potential to realize logical inference in huge-scale knowledge bases.A transformation of propositional Prolog programs into classical logichttps://zbmath.org/1496.680932022-11-17T18:59:28.764376Z"Stärk, Robert F."https://zbmath.org/authors/?q=ai:stark.robert-fSummary: We transform a propositional Prolog program \(P\) into a set of propositional formulas \(\operatorname{prl}(P)\) and show that Prolog, using its depth-first left-to-right search, is sound and complete with respect to \(\operatorname{prl}(P)\). This means that a goal succeeds in Prolog if and only if it follows from \(\operatorname{prl}(P)\) in classical propositional logic. The generalization of \(\operatorname{prl}(P)\) to predicate logic leads to a system for which Prolog is still sound but unfortunately not complete.
If one changes, however, the definition of the termination operator, then one obtains a theory that allows one to prove termination of arbitrary non-floundering goals under Prolog.
For the entire collection see [Zbl 0875.00116].On the extension of logic programming with negation through uniform proofshttps://zbmath.org/1496.680942022-11-17T18:59:28.764376Z"Yuan, Li Yan"https://zbmath.org/authors/?q=ai:yuan.li-yan"You, Jia Huai"https://zbmath.org/authors/?q=ai:you.jia-huaiSummary: In the past, logic program semantics have been studied often separately from the underlying proof system, and this, consequently, leads to a somewhat confusing status of semantics. In this paper we show that elegant, yet natural semantics can be obtained by building a mechanism of justifying default assumptions on top of a proof system. In particular, we propose extended logic programming languages with negation through \textit{uniform proofs}. The result is a very general framework, in which \textit{any} abstract logic programming language can be extended to a nonmonotonic reasoning system, and many semantics, previously proposed and new, can be characterized and understood in terms of uniform proofs.
For the entire collection see [Zbl 0875.00116].Space-efficient latent contractshttps://zbmath.org/1496.680952022-11-17T18:59:28.764376Z"Greenberg, Michael"https://zbmath.org/authors/?q=ai:greenberg.michael-dSummary: Standard higher-order contract monitoring breaks tail recursion and leads to space leaks that can change a program's asymptotic complexity; space-efficiency restores tail recursion and bounds the amount of space used by contracts. Space-efficient contract monitoring for contracts enforcing simple type disciplines (a/k/a gradual typing) is well studied. Prior work establishes a space-efficient semantics for manifest contracts without dependency
[\textit{M. Greenberg}, in: Proceedings of the 42nd ACM SIGPLAN-SIGACT symposium on principles of programming languages, POPL'15. New York, NY: Association for Computing Machinery (ACM). 181--194 (2015; Zbl 1345.68054)];
we adapt that work to a latent calculus with dependency. We guarantee space efficiency when no dependency is used; we cannot \textit{generally} guarantee space efficiency when dependency is used, but instead offer a framework for making such programs space efficient on a case-by-case basis.
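The space leak that motivates this line of work can be made concrete. The following Python sketch (a hypothetical monitor, not the paper's latent-contract calculus) wraps a tail-recursive function with a postcondition check; because one check remains pending across each recursive call, the monitored function uses space linear in the recursion depth:

```python
# Sketch, assuming a made-up `monitor` combinator: naive contract
# monitoring keeps one pending postcondition check per recursive call,
# which is the space leak that space-efficient monitoring removes.

pending_checks = 0   # checks currently active (not yet discharged)
max_pending = 0      # high-water mark of pending checks

def monitor(post, f):
    """Wrap f so every result is checked against predicate `post`."""
    def wrapped(n, acc):
        global pending_checks, max_pending
        pending_checks += 1
        max_pending = max(max_pending, pending_checks)
        result = f(n, acc)                  # the check stays pending here
        assert post(result), "contract violated"
        pending_checks -= 1
        return result
    return wrapped

def count_down(n, acc):
    # tail-recursive in spirit: the recursive call is the last action
    return acc if n == 0 else checked(n - 1, acc + 1)

checked = monitor(lambda r: r >= 0, count_down)

print(checked(50, 0), max_pending)  # 50 51: one pending check per call
```

Space-efficient monitoring coalesces such pending checks, so a wrapped tail call can again run in constant space.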
For the entire collection see [Zbl 1409.68027].Strictness and totality analysis with conjunctionhttps://zbmath.org/1496.680962022-11-17T18:59:28.764376Z"Solberg, Kirsten Lackner"https://zbmath.org/authors/?q=ai:solberg.kirsten-lacknerSummary: We extend the strictness and totality analysis of
[the author et al., Sci. Comput. Program. 31, No. 1, 113--145 (1998; Zbl 0941.68021)]
by allowing conjunction at all levels rather than at the top-level. We prove the strictness and totality analysis correct with respect to a denotational semantics and finally construct an algorithm for inferring the strictness and totality properties.
For the entire collection see [Zbl 0835.68002].Protocol schedulinghttps://zbmath.org/1496.680972022-11-17T18:59:28.764376Z"Dokter, Kasper"https://zbmath.org/authors/?q=ai:dokter.kasper"Arbab, Farhad"https://zbmath.org/authors/?q=ai:arbab.farhadSummary: Interactions amongst different processes in concurrent software are governed by a protocol. The blocking I/O operations involved in a protocol may temporarily suspend the execution of some processes in an application. Scheduling consists of the allocation of available processors to the appropriate non-suspended processes in an application, such that some specified criteria (e.g., shortest execution time or highest throughput) are met. We use a generic, game-theoretic scheduling framework to find optimal non-preemptive schedules for an application. We then show how such schedules themselves can be encoded as protocols, which in our framework, can be composed with the original application protocol. The resulting composed protocol restricts the number of ready processes to the number of available processors, which enables standard preemptive schedulers of modern operating-systems to closely approximate the behavior and the performance of the optimal non-preemptive scheduler of the application. We evaluate our work by comparing the throughput of two versions of a cyclo-static dataflow network: one version with the usual protocol, and the other version with a restricted protocol.
For the entire collection see [Zbl 1489.68021].An imperative object calculushttps://zbmath.org/1496.680982022-11-17T18:59:28.764376Z"Abadi, Martín"https://zbmath.org/authors/?q=ai:abadi.martin"Cardelli, Luca"https://zbmath.org/authors/?q=ai:cardelli.lucaSummary: We develop an imperative calculus of objects. Its main type constructor is the one for object types, which incorporate variance annotations and Self types. A subtyping relation between object types supports object subsumption. The type system for objects relies on unusual but beneficial assumptions about the possible subtypes of an object type. With the addition of polymorphism, the calculus can express classes and inheritance.
For the entire collection see [Zbl 0835.68002].A model inference system for generic specification with application to code sharinghttps://zbmath.org/1496.680992022-11-17T18:59:28.764376Z"Bert, Didier"https://zbmath.org/authors/?q=ai:bert.didier"Oriat, Catherine"https://zbmath.org/authors/?q=ai:oriat.catherineSummary: This paper presents a model inference system to control the instantiation of generic modules. Generic parameters are specified by properties which represent classes of modules sharing some common features. Just as type checking consists of verifying that an expression is well typed, \textit{model checking} allows one to detect whether a (possibly generic) instantiation of a generic module is valid, i.e. whether the instantiation module is a \textit{model} of the parameterizing property. Equality of instances can be derived from a canonical representation of modules. Finally, we show how the code of generic modules can be shared by all instances of modules.
For the entire collection see [Zbl 0835.68002].Verifying constant-time implementations by abstract interpretationhttps://zbmath.org/1496.681002022-11-17T18:59:28.764376Z"Blazy, Sandrine"https://zbmath.org/authors/?q=ai:blazy.sandrine"Pichardie, David"https://zbmath.org/authors/?q=ai:pichardie.david"Trieu, Alix"https://zbmath.org/authors/?q=ai:trieu.alixSummary: Constant-time programming is an established discipline to secure programs against timing attackers. Several real-world secure C libraries such as NaCl, mbedTLS, or Open Quantum Safe, follow this discipline. We propose an advanced static analysis, based on state-of-the-art techniques from abstract interpretation, to report time leakage during programming. To that purpose, we analyze source C programs and use full context-sensitive and arithmetic-aware alias analyses to track the tainted flows.
We give semantic evidence of the correctness of our approach on a core language. We also present a prototype implementation for C programs that is based on the CompCert compiler toolchain and its companion Verasco static analyzer. We present verification results on various real-world constant-time programs and report on a successful verification of a challenging SHA-256 implementation that was out of scope of previous tool-assisted approaches.
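To make the constant-time discipline concrete, here is a minimal illustration (an assumed toy example, not the authors' analyzer or any of the libraries it targets) contrasting a secret-dependent branch with the branchless selection that constant-time programming prescribes:

```python
# Sketch of the constant-time discipline: the first select branches on
# the secret (its control flow, and hence timing, may leak the bit);
# the second computes the same result with secret-independent control flow.

MASK = 0xFFFFFFFF  # model 32-bit machine integers

def leaky_select(secret_bit, a, b):
    # control flow depends on the secret -> a timing side channel
    return a if secret_bit else b

def ct_select(secret_bit, a, b):
    # branchless: mask is all-ones when secret_bit == 1, zero otherwise
    mask = (-secret_bit) & MASK
    return (a & mask) | (b & ~mask & MASK)

print(ct_select(1, 7, 9), ct_select(0, 7, 9))  # 7 9
```

A constant-time analysis accepts code shaped like `ct_select` and flags code shaped like `leaky_select`, because only the former keeps branches and memory accesses independent of secrets.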
For the entire collection see [Zbl 1493.68010].On mechanizing proofs within a complete proof system for Unityhttps://zbmath.org/1496.681012022-11-17T18:59:28.764376Z"Brown, Naïma"https://zbmath.org/authors/?q=ai:brown.naima"Mokkedem, Abdelillah"https://zbmath.org/authors/?q=ai:mokkedem.abdelillahSummary: The solution proposed by
\textit{B. A. Sanders} in [Formal Asp. Comput. 3, No. 2, 189--205 (1991; Zbl 0715.68059)]
consists of removing the need for the substitution axiom from Unity, thereby eliminating the unsoundness problem caused by this axiom without loss of completeness. Sanders' solution is based on the \textbf{strongest invariant} concept and provides theoretical advantages by formally capturing the effects of the initial conditions on the properties of a program. This solution is less convincing from a practical point of view because it assumes proofs of the strongest invariant at the meta-level. In this paper we reconsider this solution, showing that the general concept of invariant is sufficient to eliminate the substitution axiom and to provide a sound and relatively complete proof system for Unity logic. The advantage of the new solution is that proofs of invariants are mechanized inside the Unity logic itself.
For the entire collection see [Zbl 1492.68008].Logical foundations for compositional verification and development of concurrent programs in UNITYhttps://zbmath.org/1496.681022022-11-17T18:59:28.764376Z"Collette, Pierre"https://zbmath.org/authors/?q=ai:collette.pierre"Knapp, Edgar"https://zbmath.org/authors/?q=ai:knapp.edgarSummary: To achieve modularity, we view UNITY specifications as describing open (rather than closed) systems. These may be composed in parallel or through hiding of global variables. Adopting the assumption-commitment paradigm, conventional properties of UNITY programs are extended with an explicit rely condition on interference; previous variants of the logic can be retrieved by specialising or omitting this rely condition. The outcome is a complete compositional proof system for both safety and progress properties.
For the entire collection see [Zbl 1492.68008].A program logic for fresh name generationhttps://zbmath.org/1496.681032022-11-17T18:59:28.764376Z"Eliott, Harold Pancho"https://zbmath.org/authors/?q=ai:eliott.harold-pancho"Berger, Martin"https://zbmath.org/authors/?q=ai:berger.martin-jSummary: We present a program logic for Pitts and Stark's \(\nu \)-calculus, an extension of the call-by-value simply-typed \(\lambda \)-calculus with a mechanism for the generation of fresh names. Names can be compared for equality and inequality, producing programs with subtle observable properties. Hidden names produced by interactions between generation and abstraction are captured logically with a second-order quantifier over type contexts. We illustrate usage of the logic through reasoning about well-known difficult cases from the literature.
For the entire collection see [Zbl 1489.68021].Confluence in concurrent constraint programminghttps://zbmath.org/1496.681042022-11-17T18:59:28.764376Z"Falaschi, Moreno"https://zbmath.org/authors/?q=ai:falaschi.moreno"Gabbrielli, Maurizio"https://zbmath.org/authors/?q=ai:gabbrielli.maurizio"Marriott, Kim"https://zbmath.org/authors/?q=ai:marriott.kim"Palamidessi, Catuscia"https://zbmath.org/authors/?q=ai:palamidessi.catusciaSummary: We investigate the subset of concurrent constraint programs (ccp) which are confluent in the sense that different process schedulings lead to the same possible outcomes. Confluence is an important and desirable property as it allows the program to be understood by considering any desired scheduling rule, rather than having to consider all possible schedulings. The subset of confluent programs is less expressive than full ccp. For example it cannot express fair merge although it can express demonic merge. We give a simple closure-based denotational semantics for confluent ccp. We also study admissible programs, a subset of confluent ccp closed under composition. We then consider applications of our results to give a framework for the efficient yet accurate analysis of full ccp. The basic idea is to approximate an arbitrary ccp program by an admissible program which is then analyzed.
For the entire collection see [Zbl 1492.68008].An institution for Event-Bhttps://zbmath.org/1496.681052022-11-17T18:59:28.764376Z"Farrell, Marie"https://zbmath.org/authors/?q=ai:farrell.marie"Monahan, Rosemary"https://zbmath.org/authors/?q=ai:monahan.rosemary"Power, James F."https://zbmath.org/authors/?q=ai:power.james-fSummary: This paper presents a formalisation of the Event-B formal specification language in terms of the theory of institutions. The main objective of this paper is to provide: (1) a mathematically sound semantics and (2) modularisation constructs for Event-B using the specification-building operations of the theory of institutions. Many formalisms have been improved in this way and our aim is thus to define an appropriate institution for Event-B, which we call \(\mathcal{EVT}\). We provide a definition of \(\mathcal{EVT}\) and the proof of its satisfaction condition. A motivating example of a traffic-light simulation is presented to illustrate our approach.
For the entire collection see [Zbl 1428.68025].Dynamic matrices and the cost analysis of concurrent programshttps://zbmath.org/1496.681062022-11-17T18:59:28.764376Z"Ferrari, Gianluigi"https://zbmath.org/authors/?q=ai:ferrari.gian-luigi"Montanari, Ugo"https://zbmath.org/authors/?q=ai:montanari.ugo-gSummary: The problem of the cost analysis of concurrent programs can be formulated and studied by dynamic methods based on matrix calculi. However, standard matrix calculi can handle only the case of programs whose dimensions are rigidly fixed. In this paper, the notion of dynamic matrix is presented. Dynamic matrices are special matrices having extensible dimensions (rows and columns) which allow the matrix product to always be defined. We put forward the theory of dynamic matrices as the correct framework to study the problems of cost analysis of concurrent programs which can dynamically change their dimensions, i.e. their amount of parallelism.
For the entire collection see [Zbl 1492.68008].Testing can be formal, toohttps://zbmath.org/1496.681072022-11-17T18:59:28.764376Z"Gaudel, Marie-Claude"https://zbmath.org/authors/?q=ai:gaudel.marie-claudeSummary: The paper presents a theory of program testing based on formal specifications. The formal semantics of the specifications is the basis for a notion of an exhaustive test set. Under some minimal hypotheses on the program under test, the success of this test set is equivalent to the satisfaction of the specification.
The selection of a finite subset of the exhaustive test set can be seen as the introduction of more hypotheses on the program, called selection hypotheses. Several examples of commonly used selection hypotheses are presented.
Another problem is the observability of the results of a program with respect to its specification: contrary to some common belief, the use of a formal specification is not always sufficient to decide whether a test execution is a success. As soon as the specification deals with more abstract entities than the program, program results may appear in a form which is not obviously equivalent to the specified results. A solution to this problem is proposed in the case of algebraic specifications.
For the entire collection see [Zbl 0835.68002].Equational logic as a toolhttps://zbmath.org/1496.681082022-11-17T18:59:28.764376Z"Gries, David"https://zbmath.org/authors/?q=ai:gries.davidSummary: Software tools and methods that approach being formal are not readily used by programmers, software engineers, and even most computer scientists. (There are avid users of mechanical verifiers and proof checkers, but they are a small minority.) One reason for this is that the foundation of many formalisms -- propositional and predicate logic -- has been viewed and taught more as an object of study than as a useful tool.
We believe that formal logic \textit{can} be a useful mental tool. In fact, logic is the glue that binds together methods of reasoning, in all domains. Further, logic can be taught in a way that imparts appreciation for logic and rigorous proof, as well as some skill in formal manipulation. This is most easily done using an \textit{equational} logic -- a logic based on substitution of equals for equals and the kinds of manipulations people in scientific disciplines already perform. We outline this logic, explain its pedagogical advantages, and discuss teaching it.
For the entire collection see [Zbl 1492.68008].Observational semantics for dynamic logic with bindershttps://zbmath.org/1496.681092022-11-17T18:59:28.764376Z"Hennicker, Rolf"https://zbmath.org/authors/?q=ai:hennicker.rolf"Madeira, Alexandre"https://zbmath.org/authors/?q=ai:madeira.alexandreSummary: The dynamic logic with binders \(\mathcal{D}^{\downarrow}\) was recently introduced as a suitable formalism to support a rigorous stepwise development method for reactive software. The commitment of this logic concerning bisimulation equivalence is, however, not satisfactory: the model class semantics of specifications in \(\mathcal{D}^{\downarrow}\) is not closed under bisimulation equivalence; there are \(\mathcal{D}^{\downarrow}\)-sentences that distinguish bisimulation equivalent models, i.e., \( \mathcal{D}^{\downarrow}\) does not enjoy the modal invariance property. This paper improves on these limitations by providing an observational semantics for dynamic logic with binders. This involves the definition of a new model category and of a more relaxed satisfaction relation. We show that the new logic \(\mathcal{D}^{\downarrow}_\sim\) enjoys modal invariance and even the Hennessy-Milner property. Moreover, the new model category provides a categorical characterisation of bisimulation equivalence by observational isomorphism. Finally, we consider abstractor semantics obtained by closing the model class of a specification \(SP\) in \(\mathcal{D}^{\downarrow}\) under bisimulation equivalence. We show that, under mild conditions, abstractor semantics of \(SP\) in \(\mathcal{D}^{\downarrow}\) is the same as observational semantics of \(SP\) in \(\mathcal{D}^{\downarrow}_\sim\).
For the entire collection see [Zbl 1428.68025].Mongruences and cofree coalgebrashttps://zbmath.org/1496.681102022-11-17T18:59:28.764376Z"Jacobs, Bart"https://zbmath.org/authors/?q=ai:jacobs.bartSummary: A coalgebra is introduced here as a model of a certain signature consisting of a type \(X\) with various ``destructor'' function symbols, satisfying certain equations. These destructor function symbols are like methods and attributes in object-oriented programming: they provide access to the type (or state) \(X\). We show that the category of such coalgebras and structure-preserving functions is comonadic over sets. Therefore we introduce the notion of a `mongruence' (predicate) on a coalgebra. It plays the dual role of a congruence (relation) on an algebra.
For the entire collection see [Zbl 1492.68008].Partial order programming (revisited)https://zbmath.org/1496.681112022-11-17T18:59:28.764376Z"Jayaraman, Bharat"https://zbmath.org/authors/?q=ai:jayaraman.bharat"Osorio, Mauricio"https://zbmath.org/authors/?q=ai:osorio.mauricio-a"Moon, Kyonghee"https://zbmath.org/authors/?q=ai:moon.kyongheeSummary: This paper shows the use of partial-order program clauses and lattice domains for functional and logic programming. We illustrate the paradigm using a variety of examples: graph problems, program analysis, and database querying. These applications are characterized by a need to solve circular constraints and perform aggregate operations, a capability that is very clearly and efficiently provided by partial-order clauses. We present a novel approach to their model-theoretic and operational semantics. The least Herbrand model for any function is not the intersection of all models, but the \textit{glb/lub} of the respective terms defined for this function in the different models. The operational semantics combines top-down goal reduction with \textit{monotonic memo-tables}. In general, when functions are defined circularly in terms of one another through \textit{monotonic} functions, a memoized entry may have to be monotonically updated until the least (or greatest) fixed-point is reached. This partial-order programming paradigm has been implemented and all examples shown in this paper have been tested using this implementation.
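The flavor of memo-table fixpoint computation described in this summary can be sketched as follows (an illustrative Python fragment, not the paper's implementation). The lattice is the numbers ordered by minimum, the circularly defined function is shortest-path cost in a cyclic graph, and memo entries are updated monotonically (here, decreasing toward the minimum) until the fixed-point is reached; the graph is a made-up example:

```python
# Sketch: a monotonically updated memo table computing the fixpoint of a
# circular definition (shortest-path cost) over the min-lattice.
import math

# hypothetical example graph with a cycle a -> b -> a: {node: [(succ, weight)]}
graph = {
    'a': [('b', 1), ('c', 4)],
    'b': [('a', 1), ('c', 2)],
    'c': [],
}

def shortest_from(src):
    memo = {v: math.inf for v in graph}   # top of the min-lattice
    memo[src] = 0
    changed = True
    while changed:                        # iterate until the fixpoint
        changed = False
        for u in graph:
            for v, w in graph[u]:
                if memo[u] + w < memo[v]:  # monotone update downward
                    memo[v] = memo[u] + w
                    changed = True
    return memo

print(shortest_from('a'))  # {'a': 0, 'b': 1, 'c': 3}
```

The cycle between `a` and `b` never causes divergence because every memo update strictly decreases a value in a well-ordered chain, which is exactly the role monotonicity plays in the paper's semantics.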
For the entire collection see [Zbl 1492.68008].Non-speculative and upward invocation of continuations in a parallel languagehttps://zbmath.org/1496.681122022-11-17T18:59:28.764376Z"Moreau, Luc"https://zbmath.org/authors/?q=ai:moreau.lucSummary: A method of preserving the sequential semantics in parallel programs with first-class continuations is to invoke continuations non-speculatively. This method, which prevents a continuation from being invoked as long as its invocation can infringe the sequential semantics, reduces parallelism by the severe conditions that it imposes, especially on upward uses. In this paper, we present new conditions for invoking continuations in an upward way and both preserving the sequential semantics and providing parallelism. This new approach is formalised in the PCKS-machine, which is proved to be correct by showing that it has the same observational equivalence theory as the sequential semantics.
For the entire collection see [Zbl 0835.68002].Static and dynamic processor allocation for higher-order concurrent languageshttps://zbmath.org/1496.681132022-11-17T18:59:28.764376Z"Nielson, Hanne Riis"https://zbmath.org/authors/?q=ai:riis-nielson.hanne"Nielson, Flemming"https://zbmath.org/authors/?q=ai:nielson.flemmingSummary: Starting from the process algebra for Concurrent ML we develop two program analyses that facilitate the intelligent placement of processes on processors. Both analyses are obtained by augmenting an inference system for counting the number of channels created, the number of input and output operations performed, and the number of processes spawned by the execution of a Concurrent ML program. One analysis provides information useful for making a static decision about processor allocation; to this end it accumulates the communication cost for all processes with the same label. The other analysis provides information useful for making a dynamic decision about processor allocation; to this end it determines the maximum communication cost among processes with the same label. We prove the soundness of the inference system and the two analyses and demonstrate how to implement them; the latter amounts to transforming the syntax-directed inference problems to instances of syntax-free equation solving problems.
For the entire collection see [Zbl 0835.68002].Can you trust your data?https://zbmath.org/1496.681142022-11-17T18:59:28.764376Z"Ørbæk, Peter"https://zbmath.org/authors/?q=ai:orbaek.peterSummary: A new program analysis is presented, and two compile-time methods for this analysis are given. The analysis attempts to answer the question: ``Given some trustworthy and some untrustworthy input, can we trust the value of a given variable after execution of some code''. The analyses are based on an abstract interpretation framework and a constraint-generation framework, respectively. The analyses are proved safe with respect to an instrumented semantics. We explicitly deal with a language with pointers and possible aliasing problems. The constraint-based analysis is related \textit{directly} to the abstract interpretation and therefore indirectly to the instrumented semantics.
For the entire collection see [Zbl 0835.68002].Comparing flow-based binding-time analyseshttps://zbmath.org/1496.681152022-11-17T18:59:28.764376Z"Palsberg, Jens"https://zbmath.org/authors/?q=ai:palsberg.jensSummary: Binding-time analyses based on flow analysis have been presented by Bondorf, Consel, Bondorf and Jørgensen, and Schwartzbach and the present author. The analyses are formulated in radically different ways, making comparison non-trivial.
In this paper we demonstrate how to compare such analyses. We prove that the first and the fourth analyses can be specified by constraint systems of a particular form, enabling direct comparison. As corollaries, we get that Bondorf's analysis is more conservative than ours, that both analyses can be performed in cubic time, and that the core of Bondorf's analysis is correct. Our comparison is of analyses that apply to the pure \(\lambda\)-calculus.
For the entire collection see [Zbl 0835.68002].Higher-order narrowing with convergent systemshttps://zbmath.org/1496.681162022-11-17T18:59:28.764376Z"Prehofer, Christian"https://zbmath.org/authors/?q=ai:prehofer.christianSummary: Higher-order narrowing is a general method for higher-order equational reasoning and serves for instance as the foundation for the integration of functional and logic programming. We present several refinements of higher-order lazy narrowing for convergent (terminating and confluent) term rewrite systems and their application to program transformation. The improvements of narrowing include a restriction of narrowing at variables, generalizing the first-order case. Furthermore, functional evaluation via normalization is shown to be complete and a partial answer to the eager variable elimination problem is presented.
For the entire collection see [Zbl 1492.68008].Generic Hoare logic for order-enriched effects with exceptionshttps://zbmath.org/1496.681172022-11-17T18:59:28.764376Z"Rauch, Christoph"https://zbmath.org/authors/?q=ai:rauch.christoph"Goncharov, Sergey"https://zbmath.org/authors/?q=ai:goncharov.sergei-savostyanovich"Schröder, Lutz"https://zbmath.org/authors/?q=ai:schroder.lutzSummary: In programming semantics, monads are used to provide a generic encapsulation of side-effects. We introduce a monad-based metalanguage that extends Moggi's computational metalanguage with native exceptions and iteration, interpreted over monads supporting a dcpo structure. We present a Hoare calculus with abnormal postconditions for this metalanguage and prove relative completeness using weakest liberal preconditions, extending earlier work on the exception-free case.
For the entire collection see [Zbl 1428.68025].Proving the correctness of recursion-based automatic program transformationshttps://zbmath.org/1496.681182022-11-17T18:59:28.764376Z"Sands, David"https://zbmath.org/authors/?q=ai:sands.davidSummary: This paper shows how the \textit{Improvement Theorem} -- a semantic condition for the total correctness of program transformation on higher-order functional programs -- has practical value in proving the correctness of automatic techniques, including deforestation and supercompilation. This is aided by a novel formulation (and generalisation) of deforestation-like transformations, which also greatly adds to the modularity of the proof with respect to extensions to both the language and the transformation rules.
For the entire collection see [Zbl 0835.68002].Encoding natural semantics in Coqhttps://zbmath.org/1496.681192022-11-17T18:59:28.764376Z"Terrasse, Delphine"https://zbmath.org/authors/?q=ai:terrasse.delphineSummary: We address here the problem of automatically translating the Natural Semantics of programming languages to Coq, in order to prove formally general properties of languages. Natural Semantics [\textit{G. Kahn}, Lect. Notes Comput. Sci. 247, 22--39 (1987; Zbl 0635.68007)] is a formalism for specifying semantics of programming languages inspired by Plotkin's Structural Operational Semantics. The Coq proof development system, based on the Calculus of Constructions extended with inductive types (CCind), provides mechanized support including tactics for building goal-directed proofs. Our representation of a language in Coq is influenced by the encoding of logics used by \textit{A. Church} [J. Symb. Log. 5, 56--68 (1940; Zbl 0023.28901; JFM 66.1192.06)] and in the Edinburgh Logical Framework (ELF).
For the entire collection see [Zbl 1492.68008].Time-bounded termination analysis for probabilistic programs with delayshttps://zbmath.org/1496.681202022-11-17T18:59:28.764376Z"Xu, Ming"https://zbmath.org/authors/?q=ai:xu.ming.2"Deng, Yuxin"https://zbmath.org/authors/?q=ai:deng.yuxinSummary: This paper investigates the model of probabilistic program with delays (PPD) that consists of a few program blocks. Performing each block has an additional time-consumption -- waiting to be executed -- besides the running time. We interpret the operational semantics of PPD by Markov automata with a cost structure on transitions. Our goal is to measure those individual execution paths of a PPD that terminates within a given time bound, and to compute the minimum termination probability, i.e. the termination probability under a demonic scheduler that resolves the nondeterminism inherited from probabilistic programs. When running time plus waiting time is bounded, the demonic scheduler can be determined by comparison between a class of well-formed real numbers. The method is extended to parametric PPDs. When only the running time is bounded, the demonic scheduler can be determined by real root isolation over a class of well-formed real functions under Schanuel's conjecture. Finally we give the complexity upper bounds of the proposed methods.Ways of synthesizing binary programs admitting recursive call of procedureshttps://zbmath.org/1496.681212022-11-17T18:59:28.764376Z"Zhukov, V. V."https://zbmath.org/authors/?q=ai:zhukov.vladimir-v|zhukov.vitalii-vladimirovichSummary: A model of binary programs implementing the functions of the algebra of logic (Boolean functions) is considered. The programs consist of one or several modules containing instructions of three types: computational and redirecting instructions and instructions for summoning the procedures. 
In contrast to earlier models of binary programs, a model is introduced that admits the recursive summoning of procedures; i.e., while a binary program executes, procedures can summon themselves either directly or through other procedures. The functioning of this model of programs is described, as is its relationship to other discrete control systems (e.g., circuits made of functional elements or binary decision diagrams). Ways are presented for obtaining lower and upper estimates of the Shannon function for the complexity of implementing Boolean functions in the class of binary programs. The proposed technique allows the asymptotics of the Shannon function to be established under certain structural and parametric limitations imposed on the model of binary programs.
Existing MBA obfuscation must be enhanced to overcome these emerging challenges.
In this paper, we first review existing MBA obfuscation methods and reveal that existing MBA obfuscation is based on ``linear MBA'', a simple subset of MBA transformation. This leaves the more complex ``non-linear MBA'' in its infancy. Therefore, we propose a new obfuscation method to unleash the power of non-linear MBA. Non-linear MBA expressions are generated from the combination or transformation of linear MBA rules based on a solid theoretical underpinning. Compared to existing MBA obfuscation, our method can generate significantly more complex MBA expressions. To demonstrate the practicality of the non-linear MBA obfuscation scheme, we apply non-linear MBA obfuscation to the Tiny Encryption Algorithm (TEA). We have implemented the method as a prototype tool, named \textit{MBA-Obfuscator}, to produce a large-scale dataset. We run all existing MBA simplification tools on the dataset, and at most 147 out of 1,000 non-linear MBA expressions can be successfully simplified. Our evaluation shows \textit{MBA-Obfuscator} is a practical obfuscation scheme with a solid theoretical cornerstone.
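For readers unfamiliar with MBA, a classic linear MBA identity of the sort such obfuscators compose can be checked directly (a tiny Python sketch, not the \textit{MBA-Obfuscator} tool): x + y is rewritten as (x ^ y) + 2 * (x & y), which is equivalent over machine integers because xor computes the carry-free sum and the conjunction captures the carries.

```python
# Sketch of a linear MBA rewrite: obfuscate addition by mixing bitwise
# and arithmetic operators while preserving 32-bit semantics.

MASK = 0xFFFFFFFF  # model 32-bit machine integers

def obfuscated_add(x, y):
    # linear MBA identity: x + y == (x ^ y) + 2 * (x & y)
    return ((x ^ y) + 2 * (x & y)) & MASK

# spot-check the identity over a strided sample of the 32-bit range
assert all(
    obfuscated_add(x, y) == (x + y) & MASK
    for x in range(0, 2**32, 0x1234567)
    for y in range(0, 2**32, 0x89ABCD1)
)
print(hex(obfuscated_add(0xDEADBEEF, 0x12345678)))  # 0xf0e21567
```

Non-linear MBA, as proposed in the paper, goes beyond such single identities by combining and transforming linear rules into expressions that current simplifiers largely fail to reduce.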
For the entire collection see [Zbl 1487.68012].Breakpoint distance and PQ-treeshttps://zbmath.org/1496.681232022-11-17T18:59:28.764376Z"Jiang, Haitao"https://zbmath.org/authors/?q=ai:jiang.haitao"Liu, Hong"https://zbmath.org/authors/?q=ai:liu.hong.1|liu.hong"Chauve, Cedric"https://zbmath.org/authors/?q=ai:chauve.cedric"Zhu, Binhai"https://zbmath.org/authors/?q=ai:zhu.binhaiSummary: The PQ-tree is a fundamental data structure that has also been used in comparative genomics to model ancestral genomes with some uncertainty. To quantify the evolution between genomes represented by PQ-trees, in this paper we study two fundamental problems of PQ-tree comparison motivated by this application. First, we show that the problem of comparing two PQ-trees by computing the minimum breakpoint distance among all pairs of permutations generated respectively by the two considered PQ-trees is NP-complete for unsigned permutations. Next, we consider a generalization of the classical Breakpoint Median problem, where an ancestral genome is represented by a PQ-tree and \(p \geq 1\) permutations are given and we want to compute a permutation generated by the PQ-tree that minimizes the sum of the breakpoint distances to the \(p\) permutations (this sum being denoted \(k\)). We show that this problem is also NP-complete for \(p \geq 2\), and is fixed-parameter tractable with respect to \(k\) for \(p \geq 1\).An operator for composing deductive data bases with theories of constraintshttps://zbmath.org/1496.681242022-11-17T18:59:28.764376Z"Aquilino, D."https://zbmath.org/authors/?q=ai:aquilino.d"Asirelli, P."https://zbmath.org/authors/?q=ai:asirelli.patrizia"Renso, C."https://zbmath.org/authors/?q=ai:renso.chiara"Turini, F."https://zbmath.org/authors/?q=ai:turini.francoSummary: An operation for restricting deductive databases represented as logic programs is introduced. The restrictions are represented in a separate deductive database.
The operation is given an abstract semantics in terms of the immediate consequence operator. A transformational implementation is given and its correctness is proved with respect to the abstract semantics.
For the entire collection see [Zbl 0875.00116].An algebraic construction of the well-founded modelhttps://zbmath.org/1496.681252022-11-17T18:59:28.764376Z"Bagai, Rajiv"https://zbmath.org/authors/?q=ai:bagai.rajiv"Sunderraman, Rajshekhar"https://zbmath.org/authors/?q=ai:sunderraman.rajshekharSummary: An algebraic method for the construction of the well-founded model of general deductive databases is presented. The method adopts paraconsistent relations as the semantic objects associated with the predicate symbols of the database. Paraconsistent relations are a generalization of ordinary relations in that they allow manipulation of incomplete as well as inconsistent information. Algebraic operators, such as union, join, and selection, are defined for paraconsistent relations. The first step in the model construction method is to transform the database clauses into paraconsistent relation definitions involving these operators. The second step is to build the well-founded model iteratively. Algorithms for both steps along with arguments for their termination and correctness are presented.
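The flavor of paraconsistent relations can be conveyed by a small sketch (our notation and operator definitions follow the natural reading of the summary; they are not taken verbatim from the paper): each relation stores a set of tuples believed to hold and a set believed not to hold, and the operators act on both components.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PRel:
    """A paraconsistent relation: tuples believed true and believed false.
    Overlap encodes inconsistency; tuples in neither set are unknown."""
    pos: frozenset
    neg: frozenset

    def union(self, other):
        # A tuple is in the union if it is in either relation;
        # it is out of the union only if it is out of both.
        return PRel(self.pos | other.pos, self.neg & other.neg)

    def intersect(self, other):
        # Dually: in the intersection only if in both; out if out of either.
        return PRel(self.pos & other.pos, self.neg | other.neg)

    def complement(self):
        # Negation swaps belief and disbelief.
        return PRel(self.neg, self.pos)

r = PRel(frozenset({("a",)}), frozenset({("b",)}))
s = PRel(frozenset({("c",)}), frozenset({("a",), ("b",)}))
# ("a",) is believed true in r but believed false in s: both pieces of
# information are carried along rather than rejected as contradictory.
```

Note how inconsistent information survives the operators instead of collapsing the whole relation, which is exactly the generalization over ordinary relations the summary describes.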
For the entire collection see [Zbl 1492.68008].Inference-proof updating of a weakened view under the modification of input parametershttps://zbmath.org/1496.681262022-11-17T18:59:28.764376Z"Biskup, Joachim"https://zbmath.org/authors/?q=ai:biskup.joachim"Preuß, Marcel"https://zbmath.org/authors/?q=ai:preuss.marcelSummary: We treat a challenging problem of confidentiality-preserving data publishing: how to repeatedly update a released weakened view under a modification of the input parameter values, while continuously enforcing the confidentiality policy, i.e., without revealing a prohibited piece of information, either for the updated view or retrospectively for the previous versions of the view. In our semantically ambitious approach, a weakened view is determined by a two-stage procedure that takes three input parameters: (i) a confidentiality policy consisting of prohibitions in the form of pieces of information that the pertinent receiver of the view should not be able to learn, (ii) the assumed background knowledge of that receiver, and (iii) the actually stored relation instance, or the respective modification requests. Assuming that the receiver is aware of the specification of both the underlying view generation procedure and the proposed updating procedure and additionally of the declared confidentiality policy, the main challenge has been to block all meta-inferences that the receiver could make by relating subsequent views.
For the entire collection see [Zbl 1493.68009].Update rules in Datalog programshttps://zbmath.org/1496.681272022-11-17T18:59:28.764376Z"Halfeld Ferrari Alves, M."https://zbmath.org/authors/?q=ai:halfeld-ferrari-alves.mirian"Laurent, D."https://zbmath.org/authors/?q=ai:laurent.dominique"Spyratos, N."https://zbmath.org/authors/?q=ai:spyratos.nicolasSummary: We consider \(\mathrm{Datalog}^{\mathrm{neg}}\) databases containing two kinds of rules: update rules and query rules. We regard update rules as constraints, \textit{all} consequences of which must hold in the database until a new update. We introduce a semantics framework for database updates and query answering based on the well-founded semantics. In this framework, updating over intensional predicates is deterministic.
For the entire collection see [Zbl 0875.00116].Dissemination of authenticated tree-structured data with privacy protection and fine-grained control in outsourced databaseshttps://zbmath.org/1496.681282022-11-17T18:59:28.764376Z"Liu, Jianghua"https://zbmath.org/authors/?q=ai:liu.jianghua"Ma, Jinhua"https://zbmath.org/authors/?q=ai:ma.jinhua"Zhou, Wanlei"https://zbmath.org/authors/?q=ai:zhou.wanlei"Xiang, Yang"https://zbmath.org/authors/?q=ai:xiang.yang"Huang, Xinyi"https://zbmath.org/authors/?q=ai:huang.xinyiSummary: The advent of cloud computing has inspired an increasing number of users to outsource their data to remote servers to enjoy flexible and affordable data management services. However, storing data in a remote cloud server raises data privacy and security concerns, e.g., about the integrity and origin of query results. Although some solutions have been proposed to address these issues, none of them consider the arbitrary dissemination control of authenticated tree-structured data while it is disseminated to other users.
To address the above concerns, in this paper, we first propose a novel and efficient redactable signature scheme which features editable homomorphic operation and redaction control on tree-structured data. Subsequently, we prove the security properties of our scheme and conduct extensive theoretical and experimental analyses. The experimental results show that our scheme outperforms the existing solutions in the dissemination of authenticated tree-structured data with privacy protection and dissemination control in the outsourced database (ODB) model.
For the entire collection see [Zbl 1493.68017].A generic algebra for data collections based on constructive logichttps://zbmath.org/1496.681292022-11-17T18:59:28.764376Z"Rajagopalan, P."https://zbmath.org/authors/?q=ai:rajagopalan.p-k"Tsang, C. P."https://zbmath.org/authors/?q=ai:tsang.chi-pingSummary: Data collections form the basis for the representation and manipulation of data in database systems. We describe an algebra for manipulating data collections. It has been developed using constructive logic, and is a generalisation of relational algebra. We have applied the proofs-as-programs paradigm of intuitionistic type theory for deriving executable functions from specifications of algebra operations. The properties of algebra operators such as associativity, commutativity and distributivity have been verified using the same formal system.
For the entire collection see [Zbl 1492.68008].Towards efficient verifiable conjunctive keyword search for large encrypted databasehttps://zbmath.org/1496.681302022-11-17T18:59:28.764376Z"Wang, Jianfeng"https://zbmath.org/authors/?q=ai:wang.jianfeng|wang.jianfeng.2|wang.jianfeng.1"Chen, Xiaofeng"https://zbmath.org/authors/?q=ai:chen.xiaofeng"Sun, Shi-Feng"https://zbmath.org/authors/?q=ai:sun.shifeng"Liu, Joseph K."https://zbmath.org/authors/?q=ai:liu.joseph-k-k"Au, Man Ho"https://zbmath.org/authors/?q=ai:au.man-ho"Zhan, Zhi-Hui"https://zbmath.org/authors/?q=ai:zhan.zhihuiSummary: Searchable Symmetric Encryption (SSE) enables a client to securely outsource a large encrypted database to a server while supporting efficient keyword search. Most of the existing works are designed against the honest-but-curious server. That is, the server will be curious but execute the protocol in an honest manner. Recently, some researchers presented various verifiable SSE schemes that can resist a malicious server, where the server may not honestly perform all the query operations. However, they either support only single-keyword search or cannot handle very large databases. To address this challenge, we propose a new verifiable conjunctive keyword search scheme by leveraging an accumulator. Our proposed scheme can not only ensure verifiability of the search result even if an empty set is returned but also support efficient conjunctive keyword search with sublinear overhead. Moreover, the verification cost of our construction is independent of the size of the search result. In addition, we introduce a sample check method for verifying the completeness of the search result with high probability, which can significantly reduce the computation cost on the client side. Security and efficiency evaluations demonstrate that the proposed scheme not only achieves high security goals but also has comparable performance.
For the entire collection see [Zbl 1493.68017].Formalizing and validating the \(P\)-Store replicated data store in Maudehttps://zbmath.org/1496.681312022-11-17T18:59:28.764376Z"Ölveczky, Peter Csaba"https://zbmath.org/authors/?q=ai:olveczky.peter-csabaSummary: \(P\)-Store is a well-known partially replicated transactional data store that combines wide-area replication, data partition, some fault tolerance, serializability, and limited use of atomic multicast. In addition, a number of recent data store designs can be seen as extensions of \(P\)-Store. This paper describes the formalization and formal analysis of \(P\)-Store using the rewriting logic framework Maude. As part of this work, this paper specifies group communication commitment and defines an abstract Maude model of atomic multicast, both of which are key building blocks in many data store designs. Maude model checking analysis uncovered a non-trivial error in \(P\)-Store; this paper also formalizes a correction of \(P\)-Store whose analysis did not uncover any flaw.
For the entire collection see [Zbl 1428.68025].Generic multi-keyword ranked search on encrypted cloud datahttps://zbmath.org/1496.681322022-11-17T18:59:28.764376Z"Kasra Kermanshahi, Shabnam"https://zbmath.org/authors/?q=ai:kasra-kermanshahi.shabnam"Liu, Joseph K."https://zbmath.org/authors/?q=ai:liu.joseph-k-k"Steinfeld, Ron"https://zbmath.org/authors/?q=ai:steinfeld.ron"Nepal, Surya"https://zbmath.org/authors/?q=ai:nepal.suryaSummary: Although searchable encryption schemes allow secure search over the encrypted data, they mostly support conventional Boolean keyword search, without capturing any relevance of the search results. This leads to a large amount of post-processing overhead to find the most matching documents and causes an unnecessary communication cost between the servers and end-users. Such problems can be addressed efficiently using a ranked search system that retrieves the most relevant documents. However, existing state-of-the-art solutions in the context of Searchable Symmetric Encryption (SSE) suffer from either (a) security and privacy breaches due to the use of Order Preserving Encryption (OPE) or (b) impractical solutions like using two non-colluding servers. In this paper, we present a generic solution for multi-keyword ranked search over the encrypted cloud data. The proposed solution can be applied over different symmetric searchable encryption schemes. To demonstrate the practicality of our technique, in this paper we leverage the Oblivious Cross Tags (OXT) protocol of
\textit{D. Cash} et al. [Lect. Notes Comput. Sci. 8042, 353--373 (2013; Zbl 1311.68057)]
due to its scalability and remarkable flexibility to support different settings. Our proposed scheme supports multi-keyword search on Boolean, ranked and limited range queries while keeping all of the OXT's properties intact. The key contribution of this paper is that our scheme is resilient against all common attacks that take advantage of OPE leakage while only a single cloud server is used. Moreover, the results indicate that with the proposed solution the communication overhead decreases drastically when the number of matching results is large.
For the entire collection see [Zbl 1493.68023].Towards efficient verifiable forward secure searchable symmetric encryptionhttps://zbmath.org/1496.681332022-11-17T18:59:28.764376Z"Zhang, Zhongjun"https://zbmath.org/authors/?q=ai:zhang.zhongjun"Wang, Jianfeng"https://zbmath.org/authors/?q=ai:wang.jianfeng.2|wang.jianfeng|wang.jianfeng.1"Wang, Yunling"https://zbmath.org/authors/?q=ai:wang.yunling"Su, Yaping"https://zbmath.org/authors/?q=ai:su.yaping"Chen, Xiaofeng"https://zbmath.org/authors/?q=ai:chen.xiaofengSummary: Searchable Symmetric Encryption (SSE) allows a server to perform search directly over encrypted data outsourced by a user. Recently, the primitive of forward secure SSE has attracted significant attention due to its favorable property for dynamic data searching. That is, it can prevent linking newly updated data to previously searched keywords. However, the server is assumed to be honest-but-curious in the existing work. How to achieve verifiable forward secure SSE in the malicious server model remains a challenging problem. In this paper, we propose an efficient verifiable forward secure SSE scheme, which can simultaneously achieve verifiability of the search result and the forward security property. In particular, we propose a new verifiable data structure based on the primitive of multiset hash functions, which enables efficient verifiable data updates by incremental hash operations. Compared with the state-of-the-art solution, our proposed scheme is superior in search and update efficiency while providing verifiability of the search result. Finally, we present a formal security analysis and implement our scheme, which demonstrates that our proposed scheme is equipped with the desired security properties with practical efficiency.
For the entire collection see [Zbl 1493.68023].Dynamic searchable symmetric encryption with forward and stronger backward privacyhttps://zbmath.org/1496.681342022-11-17T18:59:28.764376Z"Zuo, Cong"https://zbmath.org/authors/?q=ai:zuo.cong"Sun, Shi-Feng"https://zbmath.org/authors/?q=ai:sun.shifeng"Liu, Joseph K."https://zbmath.org/authors/?q=ai:liu.joseph-k-k"Shao, Jun"https://zbmath.org/authors/?q=ai:shao.jun"Pieprzyk, Josef"https://zbmath.org/authors/?q=ai:pieprzyk.josef-pSummary: Dynamic Searchable Symmetric Encryption (DSSE) enables a client to perform updates and searches on encrypted data, which makes it very useful in practice. To protect DSSE from the leakage of updates (which can break query or data privacy), two new security notions, forward and backward privacy, have been proposed recently. Although extensive attention has been paid to forward privacy, this is not the case for backward privacy. Backward privacy, first formally introduced by Bost et al., is classified into three types from weak to strong, namely Type-III to Type-I. To the best of our knowledge, however, no practical DSSE scheme without trusted hardware (e.g. SGX) has been proposed so far that achieves strong backward privacy and a constant number of roundtrips between the client and the server.
In this work, we present a new DSSE scheme by leveraging simple symmetric encryption with homomorphic addition and a bitmap index. The new scheme can achieve both forward and backward privacy with one roundtrip. In particular, the backward privacy we achieve in our scheme (denoted by Type-I\(^-\)) is stronger than Type-I. Moreover, our scheme is very practical as it involves only lightweight cryptographic operations. To make it scalable for supporting billions of files, we further extend it to a multi-block setting. Finally, we give the corresponding security proofs and experimental evaluation, which demonstrate the security and practicality of our schemes, respectively.
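The bitmap-index idea can be sketched in the clear (our simplification: in the actual scheme these additions are carried out under symmetric encryption with homomorphic addition, so the server never sees the bitmaps or which bit an update touches):

```python
N = 64        # maximum number of files the bitmap can track
MOD = 1 << N  # all updates are additions modulo 2^N

class BitmapIndex:
    """Plaintext sketch: one integer bitmap per keyword; bit i set means
    file i currently contains the keyword."""
    def __init__(self):
        self.index = {}

    def insert(self, keyword, file_id):
        # Insertion is the addition of 2^file_id modulo 2^N.
        self.index[keyword] = (self.index.get(keyword, 0) + (1 << file_id)) % MOD

    def delete(self, keyword, file_id):
        # Deletion adds the additive inverse 2^N - 2^file_id; the client
        # only issues it for a bit that is currently set, so no borrow occurs.
        self.index[keyword] = (self.index.get(keyword, 0) + MOD - (1 << file_id)) % MOD

    def search(self, keyword):
        bitmap = self.index.get(keyword, 0)
        return [i for i in range(N) if (bitmap >> i) & 1]

idx = BitmapIndex()
idx.insert("secret", 3)
idx.insert("secret", 5)
idx.delete("secret", 3)
print(idx.search("secret"))  # [5]
```

Because insertions and deletions are both just additions, they look identical to the server when performed homomorphically, which is what makes this data structure attractive for forward and backward privacy.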
For the entire collection see [Zbl 1493.68023].On the privacy of a code-based single-server computational PIR schemehttps://zbmath.org/1496.681352022-11-17T18:59:28.764376Z"Bordage, Sarah"https://zbmath.org/authors/?q=ai:bordage.sarah"Lavauzelle, Julien"https://zbmath.org/authors/?q=ai:lavauzelle.julienSummary: We show that the single-server computational PIR protocol proposed by \textit{L. Holzbaur} et al. in [``Computational code-based single-server private information retrieval'', Preprint, \url{arxiv:2001.07049}] is not private, in the sense that the server can recover in polynomial time the index of the desired file with very high probability. The attack relies on the following observation. Removing rows of the query matrix corresponding to the desired file yields a large decrease of the dimension over \(\mathbb{F}_q\) of the vector space spanned by the rows of this punctured matrix. Such a dimension loss only shows up with negligible probability when rows unrelated to the requested file are deleted.Formalizing and proving privacy properties of voting protocols using alpha-beta privacyhttps://zbmath.org/1496.681362022-11-17T18:59:28.764376Z"Gondron, Sébastien"https://zbmath.org/authors/?q=ai:gondron.sebastien"Mödersheim, Sebastian"https://zbmath.org/authors/?q=ai:modersheim.sebastian-alexanderSummary: Most common formulations of privacy-type properties for security protocols are specified as bisimilarity of processes in applied-\( \pi\) calculus. For instance, voting privacy is specified as the bisimilarity between two processes that differ only by a swap of two votes. Similar methods are applied to formalize receipt-freeness. We believe that there exists a gap between these technical encodings and an intuitive understanding of these properties.
We use \((\alpha ,\beta )\)-privacy to formalize privacy goals in a different way, namely as a reachability problem. Every state consists of a pair of formulae: \( \alpha\) expresses the publicly released information (like the result of the vote) and \(\beta\) expresses the additional information available to the intruder (like observed messages). Privacy holds in a state if every model of \(\alpha\) can be extended to a model of \(\beta \), i.e., the intruder cannot make any deductions beyond what was deliberately released; and privacy of a protocol holds if privacy holds in every reachable state.
This allows us to give formulations of voting privacy and receipt-freeness that are more declarative than the common bisimilarity based formulations, since we reason about models that are consistent with all observations like interaction with coerced (but potentially lying) voters. Also, we show a relation between the goals: receipt-freeness implies voting privacy.
Finally, the logical approach also allows for declarative manual proofs (as opposed to long machine-generated proofs) like reasoning about a permutation of votes and the intruder's knowledge about that permutation.
For the entire collection see [Zbl 1493.68022].A differential privacy mechanism that accounts for network effects for crowdsourcing systemshttps://zbmath.org/1496.681372022-11-17T18:59:28.764376Z"Luo, Yuan"https://zbmath.org/authors/?q=ai:luo.yuan"Jennings, Nicholas R."https://zbmath.org/authors/?q=ai:jennings.nicholas-rSummary: In crowdsourcing systems, it is important for the crowdsource campaign initiator to incentivize users to share their data to produce results of the desired computational accuracy. This problem becomes especially challenging when users are concerned about the privacy of their data. To overcome this challenge, existing work often aims to provide users with differential privacy guarantees to incentivize privacy-sensitive users to share their data. However, this work neglects the network effect that a user enjoys greater privacy protection when he aligns his participation behaviour with that of other users. To explore this network effect, we formulate the interaction among users regarding their participation decisions as a population game, because a user's welfare from the interaction depends not only on his own participation decision but also on the distribution of others' decisions. We show that the Nash equilibrium of this game consists of a threshold strategy, where all users whose privacy sensitivity is below a certain threshold will participate and the remaining users will not. We characterize the existence and uniqueness of this equilibrium, which depends on the privacy guarantee, the reward provided by the initiator and the population size. Based on this equilibrium analysis, we design the PINE (Privacy Incentivization with Network Effects) mechanism and prove that it maximizes the initiator's payoff while providing participating users with a guaranteed degree of privacy protection.
Numerical simulations, on both real and synthetic data, show that (i) PINE improves the initiator's expected payoff by up to 75\%, compared to state-of-the-art mechanisms that do not consider this effect; (ii) the performance gain by exploiting the network effect is particularly good when the majority of users are flexible over their privacy attitudes and when there are a large number of low-quality task performers.BDPL: a boundary differentially private layer against machine learning model extraction attackshttps://zbmath.org/1496.681382022-11-17T18:59:28.764376Z"Zheng, Huadi"https://zbmath.org/authors/?q=ai:zheng.huadi"Ye, Qingqing"https://zbmath.org/authors/?q=ai:ye.qingqing"Hu, Haibo"https://zbmath.org/authors/?q=ai:hu.haibo"Fang, Chengfang"https://zbmath.org/authors/?q=ai:fang.chengfang"Shi, Jie"https://zbmath.org/authors/?q=ai:shi.jieSummary: Machine learning models trained on large volumes of proprietary data with intensive computational resources are valuable assets of their owners, who merchandise these models to third-party users through a prediction service API. However, existing literature shows that model parameters are vulnerable to extraction attacks which accumulate a large number of prediction queries and their responses to train a replica model. As countermeasures, researchers have proposed to reduce the rich API output, such as hiding the precise confidence level of the prediction response. Nonetheless, even with a response of only one bit, an adversary can still exploit fine-tuned queries with a differential property to infer the decision boundary of the underlying model. In this paper, we propose boundary differential privacy (\(\epsilon\)-BDP) as a solution to protect against such attacks by obfuscating the prediction responses near the decision boundary. \(\epsilon\)-BDP guarantees that an adversary cannot learn the decision boundary to a predefined precision no matter how many queries are issued to the prediction API.
We design and prove a perturbation algorithm called boundary randomized response that can achieve \(\epsilon \)-BDP. The effectiveness and high utility of our solution against model extraction attacks are verified by extensive experiments on both linear and non-linear models.
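The intuition behind such a perturbation layer can be pictured with a simplified sketch (ours, not the authors' exact algorithm): responses for queries far from the boundary are returned truthfully, while responses inside a boundary zone are flipped with the classical randomized-response probability.

```python
import math
import random

def boundary_randomized_response(label, dist, eps, delta):
    """Simplified sketch (not the authors' exact mechanism): flip a binary
    prediction for queries within distance `delta` of the decision boundary
    with probability 1 / (1 + e^eps); answer truthfully elsewhere."""
    if dist >= delta:
        return label               # far from the boundary: no perturbation
    if random.random() < 1.0 / (1.0 + math.exp(eps)):
        return label ^ 1           # flip the binary label
    return label
```

Flipping with probability \(1/(1+e^{\epsilon})\) is the standard randomized-response choice, which bounds the likelihood ratio between the two possible answers by \(e^{\epsilon}\); restricting the perturbation to boundary-adjacent queries matches the spirit of obfuscating only the responses near the decision boundary.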
For the entire collection see [Zbl 1493.68022].Generating \((2,3)\)-codeshttps://zbmath.org/1496.681392022-11-17T18:59:28.764376Z"Anisimov, A. V."https://zbmath.org/authors/?q=ai:anisimov.anatoly-vSummary: The \((2,3)\)-representation of integers utilizes the mixed numeration base of the radix-2 and auxiliary radix-3. This representation yields a universal prefix-free binary encoding of all natural numbers with a variety of useful properties: robustness (self-synchronization), local error correction, statistic regularities of code parameters, etc. The paper describes a procedure of monotonic generation of \((2,3)\)-codewords in ascending order of their lengths.A trace partitioned Gray code for \(q\)-ary generalized Fibonacci stringshttps://zbmath.org/1496.681402022-11-17T18:59:28.764376Z"Bernini, A."https://zbmath.org/authors/?q=ai:bernini.antonia|bernini.antonio"Bilotta, S."https://zbmath.org/authors/?q=ai:bilotta.stefano"Pinzani, R."https://zbmath.org/authors/?q=ai:pinzani.renzo"Vajnovszki, V."https://zbmath.org/authors/?q=ai:vajnovszki.vincent(no abstract)Busy beaver scores and alphabet sizehttps://zbmath.org/1496.681412022-11-17T18:59:28.764376Z"Petersen, Holger"https://zbmath.org/authors/?q=ai:petersen.holgerSummary: We investigate the Busy Beaver Game introduced by
\textit{T. Radó} [Bell Syst. Tech. J. 41, 877--884 (1962; Zbl 07609004)]
generalized to non-binary alphabets.
\textit{J. Harland} [``Generating candidate busy beaver machines (or how to build the zany zoo)'', Preprint, \url{arXiv:1610.03184}]
conjectured that activity (number of steps) and productivity (number of non-blank symbols) of candidate machines grow as the alphabet size increases. We prove this conjecture for any alphabet size under the condition that the number of states is sufficiently large. For the activity measure we show that increasing the alphabet size from two to three allows an increase. By a classical construction it is even possible to obtain a two-state machine increasing the activity and productivity of any machine if we allow an alphabet size depending on the number of states of the original machine. We also show that an increase of the alphabet size by a factor of three admits an increase of activity.
For the entire collection see [Zbl 1369.68029].On derandomized composition of Boolean functionshttps://zbmath.org/1496.681422022-11-17T18:59:28.764376Z"Meir, Or"https://zbmath.org/authors/?q=ai:meir.orSummary: The (block-)composition of two Boolean functions \(f : \{0, 1\}^m \rightarrow \{0, 1\}, g : \{0, 1\}^n \rightarrow \{0, 1\}\) is the function \(f \diamond g\) that takes as inputs \(m\) strings \(x_1, \ldots , x_m \in \{0, 1\}^n\) and computes \[(f \diamond g)(x_1, \ldots , x_m) = f (g(x_1), \ldots , g(x_m)).\] This operation has been used several times in the past for amplifying different hardness measures of \(f\) and \(g\). This comes at a cost: the function \(f \diamond g\) has input length \(m \cdot n\) rather than \(m\) or \(n\), which is a bottleneck for some applications.
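The definition of block composition can be transcribed directly (our illustrative code; the XOR-of-ANDs instance is a hypothetical example, not taken from the paper):

```python
def compose(f, g, m):
    """Return the block composition f ◇ g: apply g to each of the m inner
    inputs and feed the m results to f."""
    def f_diamond_g(*xs):
        assert len(xs) == m, "f ◇ g takes m inner inputs"
        return f(*(g(*x) for x in xs))
    return f_diamond_g

# Hypothetical instance: f = XOR on m = 3 bits, g = AND on n = 2 bits,
# so f ◇ g takes 3 * 2 = 6 input bits -- the input-length blow-up from
# m (or n) to m * n that the summary calls a bottleneck.
xor3 = lambda a, b, c: a ^ b ^ c
and2 = lambda a, b: a & b
h = compose(xor3, and2, 3)
print(h((1, 1), (1, 0), (0, 1)))  # 1, since AND gives (1, 0, 0) and XOR gives 1
```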
In this paper, we propose to decrease this cost by ``derandomizing'' the composition: instead of feeding into \(f \diamond g\) independent inputs \(x_1, \ldots , x_m,\) we generate \(x_1, \ldots , x_m\) using a shorter seed. We show that this idea can be realized in the particular setting of the composition of functions and universal relations
[\textit{D. Gavinsky} et al., SIAM J. Comput. 46, No. 1, 114--131 (2017; Zbl 1359.68103); \textit{M. Karchmer} et al., Comput. Complexity 5, No. 3--4, 191--204 (1995; Zbl 0851.68034)].
To this end, we provide two different techniques for achieving such a derandomization: a technique based on averaging samplers and a technique based on Reed-Solomon codes.Complexities for high-temperature two-handed tile self-assemblyhttps://zbmath.org/1496.681432022-11-17T18:59:28.764376Z"Schweller, Robert"https://zbmath.org/authors/?q=ai:schweller.robert-t"Winslow, Andrew"https://zbmath.org/authors/?q=ai:winslow.andrew"Wylie, Tim"https://zbmath.org/authors/?q=ai:wylie.timSummary: Tile self-assembly is a formal model of computation capturing DNA-based nanoscale systems. Here we consider the popular two-handed tile self-assembly model or 2HAM. Each 2HAM system includes a temperature parameter, which determines the threshold of bonding strength required for two assemblies to attach. Unlike most prior study, we consider general temperatures not limited to small, constant values. We obtain two results. First, we prove that the computational complexity of determining whether a given tile system uniquely assembles a given assembly is coNP-complete, confirming a conjecture of
\textit{S. Cannon} et al. [LIPIcs -- Leibniz Int. Proc. Inform. 20, 172--184 (2013; Zbl 1354.68078)].
Second, we prove that larger temperature values decrease the minimum number of tile types needed to assemble some shapes. In particular, for any temperature \(\tau\in\{3,\dots\}\), we give a class of shapes of size \(n\) such that the ratio of the minimum number of tiles needed to assemble these shapes at temperature \(\tau\) and any temperature less than \(\tau\) is \(\varOmega(n^{1/(2\tau+2)})\).
For the entire collection see [Zbl 1369.68012].Cell-like P systems with evolutional symport/antiport rules and membrane creationhttps://zbmath.org/1496.681442022-11-17T18:59:28.764376Z"Song, Bosheng"https://zbmath.org/authors/?q=ai:song.bosheng"Li, Kenli"https://zbmath.org/authors/?q=ai:li.kenli"Orellana-Martín, David"https://zbmath.org/authors/?q=ai:orellana-martin.david"Valencia-Cabrera, Luis"https://zbmath.org/authors/?q=ai:valencia-cabrera.luis"Pérez-Jiménez, Mario J."https://zbmath.org/authors/?q=ai:perez-jimenez.mario-jSummary: Cell-like P systems with symport/antiport rules are computing models inspired by the conservation law, in the sense that they compute by changing the places of objects with respect to the membranes, and not by changing the objects themselves. In this work, a variant of these kinds of membrane systems, called cell-like P systems with evolutional symport/antiport rules, where objects can evolve in the execution of such rules, is introduced. Besides, inspired by the autopoiesis process (the ability of a system to maintain itself), membrane creation rules are considered as an efficient mechanism to provide an exponential workspace in terms of membranes. The presumed efficiency of these computing models (the ability to solve computationally hard problems in polynomial time and in a uniform way) is explored.
Specifically, an efficient solution to the problem is provided by means of a family of recognizer cell-like P systems with evolutional symport/antiport rules and membrane creation which makes use of communication rules involving a restricted number of objects.A stochastic approach to shortcut bridging in programmable matterhttps://zbmath.org/1496.681452022-11-17T18:59:28.764376Z"Andrés Arroyo, Marta"https://zbmath.org/authors/?q=ai:andres-arroyo.marta"Cannon, Sarah"https://zbmath.org/authors/?q=ai:cannon.sarah-m"Daymude, Joshua J."https://zbmath.org/authors/?q=ai:daymude.joshua-j"Randall, Dana"https://zbmath.org/authors/?q=ai:randall.dana-j"Richa, Andréa W."https://zbmath.org/authors/?q=ai:richa.andrea-werneckSummary: In a self-organizing particle system, an abstraction of programmable matter, simple computational elements called particles with limited memory and communication self-organize to solve system-wide problems of movement, coordination, and configuration. In this paper, we consider stochastic, distributed, local, asynchronous algorithms for ``shortcut bridging'', in which particles self-assemble bridges over gaps that simultaneously balance minimizing the length and cost of the bridge. Army ants of the genus Eciton have been observed exhibiting a similar behavior in their foraging trails, dynamically adjusting their bridges to satisfy an efficiency tradeoff using local interactions
[\textit{C. R. Reid} et al., ``Army ants dynamically adjust living bridges in response to a cost-benefit trade-off'', Proc. Natl. Acad. Sci. USA 112, No. 49, 15113--15118 (2015; \url{doi:10.1073/pnas.151224111})].
Using techniques from Markov chain analysis, we rigorously analyze our algorithm, show it achieves a near-optimal balance between the competing factors of path length and bridge cost, and prove that it exhibits a dependence on the angle of the gap being ``shortcut'' similar to that of the ant bridges. We also present simulation results that qualitatively compare our algorithm with the army ant bridging behavior. The proposed algorithm demonstrates the robustness of the stochastic approach to algorithms for programmable matter, as it is a surprisingly simple generalization of a stochastic algorithm for compression
[loc. cit.].
For the entire collection see [Zbl 1369.68012].Weighted models for higher-order computationhttps://zbmath.org/1496.681462022-11-17T18:59:28.764376Z"Laird, James"https://zbmath.org/authors/?q=ai:laird.james-dSummary: We study a class of quantitative models for higher-order computation: Lafont categories with (infinite) biproducts. Each of these has a complete ``internal semiring'' and can be enriched over its modules. We describe a semantics of nondeterministic PCF weighted over this semiring in which fixed points are obtained from the bifree algebra over its exponential structure. By characterizing them concretely as infinite sums of approximants indexed over nested finite multisets, we prove computational adequacy.
We can construct examples of our semantics by weighting existing models such as categories of games over a complete semiring. This transition from qualitative to quantitative semantics is characterized as a ``change of base'' of enriched categories arising from a monoidal functor from coherence spaces to modules over a complete semiring. For example, the game semantics of Idealized Algol is coherence space enriched and thus gives rise to a weighted model, which is fully abstract.Measuring concurrency of regular distributed computationshttps://zbmath.org/1496.681472022-11-17T18:59:28.764376Z"Bareau, Cyrille"https://zbmath.org/authors/?q=ai:bareau.cyrille"Caillaud, Benoît"https://zbmath.org/authors/?q=ai:caillaud.benoit"Jard, Claude"https://zbmath.org/authors/?q=ai:jard.claude"Thoraval, René"https://zbmath.org/authors/?q=ai:thoraval.reneSummary: In this paper we present a concurrency measure that is especially adapted to distributed programs that exhibit regular run-time behaviours, including many programs that are obtained by automatic parallelization of sequential code. This measure is based on the antichain lattice of the partial order that models the distributed execution under consideration. We show the conditions under which the measure is computable on an infinite execution that is the repetition of a finite pattern. There, the measure can be computed by considering only a bounded number of patterns, the bound being at most the number of processors.
For the entire collection see [Zbl 0835.68002].Improved signature schemes for secure multi-party computation with certified inputshttps://zbmath.org/1496.681482022-11-17T18:59:28.764376Z"Blanton, Marina"https://zbmath.org/authors/?q=ai:blanton.marina"Jeong, Myoungin"https://zbmath.org/authors/?q=ai:jeong.myounginSummary: The motivation for this work comes from the need to strengthen security of secure multi-party protocols with the ability to guarantee that the participants provide their truthful inputs in the computation. This is outside the traditional security models even in the presence of malicious participants, but input manipulation can often lead to privacy and result correctness violations. Thus, in this work we treat the problem of combining secure multi-party computation (SMC) techniques based on secret sharing with signatures to enforce input correctness in the form of certification. We modify two currently available signature schemes to achieve private verification and efficiency of batch verification and show how to integrate them with two prominent SMC protocols.
For the entire collection see [Zbl 1493.68017].Enforcing input correctness via certification in garbled circuit evaluationhttps://zbmath.org/1496.681492022-11-17T18:59:28.764376Z"Zhang, Yihua"https://zbmath.org/authors/?q=ai:zhang.yihua"Blanton, Marina"https://zbmath.org/authors/?q=ai:blanton.marina"Bayatbabolghani, Fattaneh"https://zbmath.org/authors/?q=ai:bayatbabolghani.fattanehSummary: Secure multi-party computation allows a number of participants to securely evaluate a function on their private inputs and has a growing number of applications. Two standard adversarial models that treat the participants as semi-honest or malicious, respectively, are normally considered for showing security of constructions in this framework. In this work, we go beyond the standard security model in the presence of malicious participants and treat the problem of enforcing correct inputs to be entered into the computation. We achieve this by having a certification authority certify a user's information, which is consequently used in secure two-party computation based on garbled circuit evaluation. The focus of this work is on enforcing correctness of the garbler's inputs via certification, as prior work already allows one to achieve this goal for the circuit evaluator's input. Thus, in this work, we put forward a novel approach for certifying a user's input and tying certification to the garbler's input used during secure function evaluation based on garbled circuits. Our construction achieves notable performance, adding only one (standard) signature verification and \(O(n\rho )\) symmetric key/hash operations to the cost of garbled circuit evaluation in the malicious model via cut-and-choose, in which \(\rho\) circuits are garbled and \(n\) is the length of the garbler's input in bits. Security of our construction is rigorously proved in the standard model.
For the entire collection see [Zbl 1493.68009].Simulation theorems via pseudo-random propertieshttps://zbmath.org/1496.681502022-11-17T18:59:28.764376Z"Chattopadhyay, Arkadev"https://zbmath.org/authors/?q=ai:chattopadhyay.arkadev"Koucký, Michal"https://zbmath.org/authors/?q=ai:koucky.michal"Loff, Bruno"https://zbmath.org/authors/?q=ai:loff.bruno"Mukhopadhyay, Sagnik"https://zbmath.org/authors/?q=ai:mukhopadhyay.sagnikSummary: We generalize the deterministic simulation theorem of
\textit{R. Raz} and \textit{P. McKenzie} [Combinatorica 19, No. 3, 403--435 (1999; Zbl 0977.68037)],
to any gadget which satisfies a certain hitting property. We prove that inner product and gap-Hamming satisfy this property, and as a corollary, we obtain a deterministic simulation theorem for these gadgets, where the gadget's input size is logarithmic in the input size of the outer function. This yields the first deterministic simulation theorem with a logarithmic gadget size, answering an open question posed by
\textit{M. Göös} et al. [in: Proceedings of the 47th annual ACM symposium on theory of computing, STOC'15. New York, NY: Association for Computing Machinery (ACM). 257--266 (2015; Zbl 1321.68313); SIAM J. Comput. 45, No. 5, 1835--1869 (2016; Zbl 1353.68130)].
Our result also implies the previous results for the indexing gadget, with better parameters than previously known. Moreover, a simulation theorem with a logarithmic-sized gadget implies a quadratic separation between the deterministic communication complexity and the logarithm of the 1-partition number, no matter how high the 1-partition number is with respect to the input size -- something which is not achievable by the previous results of
Göös et al. [loc. cit.].A quantum algorithm for a FULL adder operation based on registers of the CPU in a quantum-gated computerhttps://zbmath.org/1496.681512022-11-17T18:59:28.764376Z"Nagata, Koji"https://zbmath.org/authors/?q=ai:nagata.koji"Nakamura, Tadao"https://zbmath.org/authors/?q=ai:nakamura.tadaoThe paper is devoted to the problem of describing operations such as AND, OR and XOR in quantum computing by using quantum gates, exploiting both superposition and the phase factor through the phase kick-back mechanism. The main result is presented in Section 2. One starts from the input state \(|\psi_0\rangle=|0\rangle^{\otimes 4}|1\rangle \) and uses the componentwise Hadamard transform to arrive at the state \(|\psi_1\rangle= \sum_{x\in\{0,1\}^4} \frac{|x\rangle}{\sqrt{2^4}}\left[ \frac{|0\rangle-|1\rangle}{\sqrt{2}}\right]\). The main idea is next to use the phase kick-back operator \(U_f |x\rangle|y\rangle = |x\rangle|y \oplus f(x)\rangle\) to obtain the state \(|\psi_2\rangle=\sum_{x\in\{0,1\}^4} \frac{(-1)^{f(x)}|x\rangle}{\sqrt{2^4}}\left[ \frac{|0\rangle-|1\rangle}{\sqrt{2}}\right]\). The last step is to apply the componentwise Hadamard transform again to arrive at the state \(|\psi_3\rangle=\pm|f(0,0)f(0,1)f(1,0)f(1,1)\rangle\left[ \frac{|0\rangle-|1\rangle}{\sqrt{2}}\right]\). The authors apply this to the Boolean function \(f_1(x,y) = x \wedge y\) and thereby obtain a realization of AND on quantum-gated computers. This technique can be applied to other Boolean functions such as \(f_6(x,y) = \mathrm{XOR}(x,y)\) and \(f_7(x,y) = \mathrm{OR}(x,y)\).
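The state evolution \(|\psi_0\rangle\to|\psi_3\rangle\) described above can be reproduced with a small state-vector simulation. A hypothetical sketch (not code from the paper), which assumes the linear oracle \(f(x)=b\cdot x \bmod 2\) with \(b=(g(0,0),g(0,1),g(1,0),g(1,1))\), chosen so that the final Hadamard layer yields the stated basis state; the ancilla is modelled directly as the phase \((-1)^{f(x)}\):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def hadamard_n(n):
    # componentwise (tensor-product) Hadamard transform on n qubits
    M = np.array([[1.0]])
    for _ in range(n):
        M = np.kron(M, H)
    return M

def quantum_truth_table(g):
    # Recover g(0,0), g(0,1), g(1,0), g(1,1) from a single oracle call.
    b = [g(0, 0), g(0, 1), g(1, 0), g(1, 1)]   # assumed encoding of the oracle
    n = 4
    state = np.zeros(2 ** n)
    state[0] = 1.0                              # |psi_0> (ancilla folded into the phase)
    state = hadamard_n(n) @ state               # |psi_1>: uniform superposition
    for x in range(2 ** n):                     # phase kick-back: (-1)^{f(x)}, f(x) = b.x mod 2
        bits = [(x >> (n - 1 - i)) & 1 for i in range(n)]
        state[x] *= (-1) ** (sum(bi * xi for bi, xi in zip(b, bits)) % 2)
    state = hadamard_n(n) @ state               # |psi_3> = +/- |f(0,0) f(0,1) f(1,0) f(1,1)>
    out = int(np.argmax(np.abs(state)))
    return [(out >> (n - 1 - i)) & 1 for i in range(n)]

print(quantum_truth_table(lambda x, y: x & y))  # AND gives [0, 0, 0, 1]
```

The same call with \(\mathrm{XOR}\) or \(\mathrm{OR}\) in place of AND returns the corresponding truth table, matching the reviewed construction.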
Reviewer: Do Ngoc Diep (Hanoi)Revisiting Deutsch-Jozsa algorithmhttps://zbmath.org/1496.681522022-11-17T18:59:28.764376Z"Qiu, Daowen"https://zbmath.org/authors/?q=ai:qiu.daowen"Zheng, Shenggen"https://zbmath.org/authors/?q=ai:zheng.shenggenSummary: The Deutsch-Jozsa algorithm is essentially faster than any possible deterministic classical algorithm for solving a promise problem that is in fact a symmetric partial Boolean function, known as the Deutsch-Jozsa problem. The Deutsch-Jozsa problem can be equivalently described as a partial function \(D J_n^0 : \{0, 1\}^n \to \{0, 1\}\) defined as: \(D J_n^0(x) = 1\) for \(| x | = n / 2\), \(D J_n^0(x) = 0\) for \(| x | = 0, n\), and it is undefined for the remaining cases, where \(n\) is even, and \(| x |\) is the Hamming weight of \(x\). The Deutsch-Jozsa algorithm needs only one query to compute \(D J_n^0\) but the classical deterministic algorithm requires \(\frac{n}{2} + 1\) queries to compute it in the worst case.
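The one-query behaviour just described can be checked with a toy state-vector sketch in the standard phase-query model (an editorial illustration, not code from the paper): the query register ranges over the \(n\) bit positions of \(x\), and on the promise the overlap with the uniform state is exactly \(\pm 1\) or \(0\).

```python
import numpy as np

def dj_one_query(x):
    # One-query sketch for DJ_n^0: prepare the uniform superposition over
    # the n positions of x, apply one phase query, and measure the overlap
    # with the uniform state, which equals (n - 2|x|)/n.
    n = len(x)
    s = np.full(n, 1.0 / np.sqrt(n))            # uniform superposition over positions
    s *= np.array([(-1.0) ** xi for xi in x])   # single phase query to x
    amp = s.sum() / np.sqrt(n)                  # overlap with the uniform state
    # |x| = 0 or n -> amplitude +/-1 ; |x| = n/2 -> amplitude 0
    return 1 if abs(amp) < 0.5 else 0
```

On the promise \(|x|\in\{0,n/2,n\}\) the decision is therefore exact after a single query, whereas a deterministic classical algorithm may need \(\frac{n}{2}+1\) bit queries.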
We present all symmetric partial Boolean functions with degree 1 and 2, and we determine the exact quantum query complexity of all symmetric partial Boolean functions with degree 1 and 2. We prove that the Deutsch-Jozsa algorithm can compute any symmetric partial Boolean function \(f\) with exact quantum 1-query complexity.A structured view on weighted counting with relations to counting, quantum computation and applicationshttps://zbmath.org/1496.681532022-11-17T18:59:28.764376Z"de Campos, Cassio P."https://zbmath.org/authors/?q=ai:de-campos.cassio-polpo"Stamoulis, Georgios"https://zbmath.org/authors/?q=ai:stamoulis.georgios"Weyland, Dennis"https://zbmath.org/authors/?q=ai:weyland.dennisSummary: Weighted counting problems are a natural generalization of counting problems where a weight is associated with every computational path of polynomial-time non-deterministic Turing machines. The goal is to compute the sum of weights of all paths (instead of the number of accepting paths). Useful properties and plenty of applications make them interesting. The definition captures even undecidable problems, but obtaining an exponentially small additive approximation is just as hard as solving conventional counting. We present a structured view by defining classes that depend on the functions that assign weights to paths and by showing their relationships and how they generalize counting problems. Weighted counting is flexible and allows us to cast a number of famous results of computational complexity, including quantum computation, probabilistic graphical models and stochastic combinatorial optimization.
Using the weighted counting terminology, we are able to simplify and to answer some open questions.SAT-based local improvement for finding tree decompositions of small widthhttps://zbmath.org/1496.681542022-11-17T18:59:28.764376Z"Fichte, Johannes K."https://zbmath.org/authors/?q=ai:fichte.johannes-klaus"Lodha, Neha"https://zbmath.org/authors/?q=ai:lodha.neha"Szeider, Stefan"https://zbmath.org/authors/?q=ai:szeider.stefanSummary: Many hard problems can be solved efficiently for problem instances that can be decomposed by tree decompositions of small width. In particular for problems beyond NP, such as \#P-complete counting problems, tree decomposition-based methods are particularly attractive. However, finding an optimal tree decomposition is itself an NP-hard problem. Existing methods for finding tree decompositions of small width either (a) yield optimal tree decompositions but are applicable only to small instances or (b) are based on greedy heuristics which often yield tree decompositions that are far from optimal. In this paper, we propose a new method that combines (a) and (b), where a heuristically obtained tree decomposition is improved locally by means of a SAT encoding. We provide an experimental evaluation of our new method.
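As a concrete instance of a greedy heuristic of type (b), the classical min-degree elimination heuristic yields an upper bound on treewidth. The sketch below is an editorial illustration of such a heuristic, not the authors' SAT-based local-improvement method:

```python
from itertools import combinations

def min_degree_width(graph):
    # Greedy min-degree elimination ordering: repeatedly eliminate a
    # vertex of minimum degree, turning its neighbourhood into a clique.
    # The largest bag {v} + neighbours gives an upper bound on treewidth.
    g = {v: set(nb) for v, nb in graph.items()}
    width = 0
    while g:
        v = min(g, key=lambda u: len(g[u]))     # pick a min-degree vertex
        nbs = g.pop(v)
        width = max(width, len(nbs))            # bag size - 1
        for a, b in combinations(nbs, 2):       # add fill edges
            g[a].add(b)
            g[b].add(a)
        for u in nbs:
            g[u].discard(v)
    return width

# A 4-cycle has treewidth 2; the heuristic attains it here.
print(min_degree_width({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
```

On small instances such as cycles the bound is optimal; on general graphs it can be far from optimal, which is precisely the gap the paper's SAT-based local improvement targets.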
For the entire collection see [Zbl 1368.68008].Hardness magnification near state-of-the-art lower boundshttps://zbmath.org/1496.681552022-11-17T18:59:28.764376Z"Oliveira, Igor Carboni"https://zbmath.org/authors/?q=ai:oliveira.igor-carboni"Pich, Ján"https://zbmath.org/authors/?q=ai:pich.jan"Santhanam, Rahul"https://zbmath.org/authors/?q=ai:santhanam.rahulSummary: This work continues the development of hardness magnification. The latter proposes a new strategy for showing strong complexity lower bounds by reducing them to a refined analysis of weaker models, where combinatorial techniques might be successful.\par We consider gap versions of the meta-computational problems MKtP and MCSP, where one needs to distinguish instances (strings or truth-tables) of complexity \(\le s_1(N)\) from instances of complexity \(\ge s_2(N)\), and \(N= 2^n\) denotes the input length. In MCSP, complexity is measured by circuit size, while in MKtP one considers Levin's notion of time-bounded Kolmogorov complexity. (In our results, the parameters \(s_1(N)\) and \(s_2(N)\) are asymptotically quite close, and the problems almost coincide with their standard formulations without a gap.) We establish that for Gap-MKtP\([s_1,s_2]\) and Gap-MCSP\([s_1,s_2]\), a marginal improvement over the state-of-the-art in unconditional lower bounds in a variety of computational models would imply explicit super-polynomial lower bounds.\par Theorem. There exists a universal constant \(c\ge 1\) for which the following hold. 
If there exists \(\varepsilon>0\) such that for every small enough \(\beta>0\)\par (1) \(\operatorname{Gap-MCSP}[2^{\beta n}/cn, 2^{\beta n}]\not\in\mathrm{Circuit}[N^{1+\varepsilon}]\), then \(\mathrm{NP}\nsubseteq\mathrm{Circuit[poly]}\).\par (2) \(\operatorname{Gap-MKtP}[2^{\beta n}, 2^{\beta n}+ cn]\not\in\mathrm{TC}^0[N^{1+\varepsilon}]\), then \(\mathrm{EXP}\nsubseteq\mathrm{TC}^0[\mathrm{poly}]\).\par (3) \(\operatorname{Gap-MKt}P[2^{\beta n}, 2^{\beta n}+cn]\not\in B_2\text{-}\mathrm{Formula}[N^{2+\varepsilon}]\), then \(\mathrm{EXP}\nsubseteq\mathrm{Formula[poly]}\).\par (4) \(\operatorname{Gap-MKtP}[2^{\beta n}, 2^{\beta n}+cn]\not\in U_2\text{-}\mathrm{Formula}[N^{3+\varepsilon}]\), then \(\mathrm{EXP}\nsubseteq\mathrm{Formula[poly]}\).\par (5) \(\operatorname{Gap-MKtP}[2^{\beta n}, 2^{\beta n}+ cn]\not\in\mathrm{BP}[N^{2+\varepsilon}]\), then \(\mathrm{EXP}\nsubseteq\mathrm{BP[poly]}\).\par (6) \(\operatorname{Gap-MKtP}[2^{\beta n}, 2^{\beta n}+cn]\not\in(\mathrm{AC}^0[6])[N^{1+\varepsilon}]\), then \(\mathrm{EXP}\nsubseteq\mathrm{AC}^0[6]\).\par These results are complemented by lower bounds for Gap-MCSP and Gap-MKtP against different models. For instance, the lower bound assumed in (1) holds for \(U_2\)-formulas of near-quadratic size, and lower bounds similar to (3)--(5) hold for various regimes of parameters.\par We also identify a natural computational model under which the hardness magnification threshold for Gap-MKtP lies below existing lower bounds: \(U_2\)-formulas that can compute parity functions at the leaves (instead of just literals). As a consequence, if one managed to adapt the existing lower bound techniques against such formulas to work with Gap-MKtP, then \(\mathrm{EXP}\nsubseteq\mathrm{NC}^1\) would follow via hardness magnification.
For the entire collection see [Zbl 1414.68009].Hardness magnification near state-of-the-art lower boundshttps://zbmath.org/1496.681562022-11-17T18:59:28.764376Z"Oliveira, Igor C."https://zbmath.org/authors/?q=ai:oliveira.igor-carboni"Pich, Ján"https://zbmath.org/authors/?q=ai:pich.jan"Santhanam, Rahul"https://zbmath.org/authors/?q=ai:santhanam.rahulSummary: This article continues the development of hardness magnification, an emerging area that proposes a new strategy for showing strong complexity lower bounds by reducing them to a refined analysis of weaker models, where combinatorial techniques might be successful.
We consider gap versions of the meta-computational problems \(\mathsf{MKtP}\) and \(\mathsf{MCSP} \), where one needs to distinguish instances (strings or truth-tables) of complexity \(\leq s_1(N)\) from instances of complexity \(\geq s_2(N)\), and \(N = 2^n\) denotes the input length. In \(\mathsf{MCSP} \), complexity is measured by circuit size, while in \(\mathsf{MKtP}\) one considers Levin's notion of time-bounded Kolmogorov complexity. (In our results, the parameters \(s_1(N)\) and \(s_2(N)\) are asymptotically quite close, and the problems almost coincide with their standard formulations without a gap.) We establish that for \(\mathsf{Gap}\text{-}\mathsf{MKtP}[s_1,s_2]\) and \(\mathsf{Gap}\text{-}\mathsf{MCSP}[s_1,s_2]\), a marginal improvement over the state of the art in unconditional lower bounds in a variety of computational models would imply explicit superpolynomial lower bounds, including \(\mathsf{P}\neq \mathsf{NP} \).
Theorem. There exists a universal constant \(c \geq 1\) for which the following hold. If there exists \(\varepsilon > 0\) such that for every small enough \(\beta > 0\)
\begin{itemize}
\item[(1)] \( \mathsf{Gap}\text{-}\mathsf{MCSP}[2^{\beta n}/c n, 2^{\beta n}] \notin \mathsf{Circuit}[N^{1 + \varepsilon}]\), then \(\mathsf{NP} \nsubseteq \mathsf{Circuit}[\mathsf{poly}]\).
\item[(2)] \( \mathsf{Gap}\text{-}\mathsf{MKtP}[2^{\beta n},\, 2^{\beta n} + cn] \notin B_2\text{-}\mathsf{Formula}[N^{2 + \varepsilon}]\), then \(\mathsf{EXP} \nsubseteq \mathsf{Formula}[\mathsf{poly}]\).
\item[(3)] \( \mathsf{Gap}\text{-}\mathsf{MKtP}[2^{\beta n},\, 2^{\beta n} + cn] \notin U_2\text{-}\mathsf{Formula}[N^{3 + \varepsilon}]\), then \(\mathsf{EXP} \nsubseteq \mathsf{Formula}[\mathsf{poly}]\).
\item[(4)] \( \mathsf{Gap}\text{-}\mathsf{MKtP}[2^{\beta n},\, 2^{\beta n} + cn] \notin \mathsf{BP}[N^{2 + \varepsilon}]\), then \(\mathsf{EXP} \nsubseteq \mathsf{BP}[\mathsf{poly}]\).
\end{itemize}
These results are complemented by lower bounds for \(\mathsf{Gap}\text{-}\mathsf{MCSP}\) and \(\mathsf{Gap}\text{-}\mathsf{MKtP}\) against different models. For instance, the lower bound assumed in (1) holds for \(U_2\)-formulas of near-quadratic size, and lower bounds similar to (2)--(4) hold for various regimes of parameters.
We also identify a natural computational model under which the hardness magnification threshold for \(\mathsf{Gap}\text{-}\mathsf{MKtP}\) lies below existing lower bounds: \(U_2\)-formulas that can compute parity functions at the leaves (instead of just literals). As a consequence, if one managed to adapt the existing lower bound techniques against such formulas to work with \(\mathsf{Gap}\text{-}\mathsf{MKtP} \), then \(\mathsf{EXP} \nsubseteq \mathsf{NC}^1\) would follow via hardness magnification.
A conference version of this paper appeared in [LIPIcs -- Leibniz Int. Proc. Inform. 137, Article 27, 29 p. (2019; Zbl 1496.68155)].Beyond \#CSP: a dichotomy for counting weighted Eulerian orientations with ARShttps://zbmath.org/1496.681572022-11-17T18:59:28.764376Z"Cai, Jin-Yi"https://zbmath.org/authors/?q=ai:cai.jin-yi"Fu, Zhiguo"https://zbmath.org/authors/?q=ai:fu.zhiguo"Shao, Shuai"https://zbmath.org/authors/?q=ai:shao.shuaiSummary: We define and explore a notion of unique prime factorization for constraint functions, and use this as a new tool to prove a complexity classification for counting weighted Eulerian orientation problems with arrow reversal symmetry (\textsc{ars}). We prove that all such problems are either polynomial-time computable or \#P-hard. We show that the class of weighted Eulerian orientation problems subsumes all weighted counting constraint satisfaction problems (\#CSP) on Boolean variables. More significantly, we establish a novel connection between \#CSP and counting weighted Eulerian orientation problems that is global in nature. This connection is based on a structural determination of all half-weighted affine linear subspaces over \(\mathbb{Z}_2\), which is proved using Möbius inversion.Parameterized provability in equational logichttps://zbmath.org/1496.681582022-11-17T18:59:28.764376Z"de Oliveira Oliveira, Mateus"https://zbmath.org/authors/?q=ai:de-oliveira-oliveira.mateusSummary: In this work we study the validity problem in equational logic from the perspective of parameterized complexity theory. We introduce a variant of equational logic in which sentences are pairs of the form \((t_1 =t_2,\omega)\), where \(t_1=t_2\) is an equation, and \(\omega\) is an arbitrary ordering of the positions corresponding to subterms of \(t_1\) and \(t_2\). We call such pairs ordered equations. 
With each ordered equation, one may naturally associate a notion of width, and with each proof of validity of an ordered equation, one may naturally associate a notion of depth. We define the width of such a proof as the maximum width of an ordered equation occurring in it. Finally, we introduce a parameter \(b\) that restricts the way in which variables are substituted for terms. We say that a proof is \(b\)-bounded if all substitutions used in it satisfy such restriction.
Our main result states that the problem of determining whether an ordered equation \((t_1=t_2,\omega)\) has a \(b\)-bounded proof of depth \(d\) and width \(c\), from a set of axioms \(E\), can be solved in time \(f(E,d,c,b)\cdot |t_1 =t_2|\). In other words, this task is fixed parameter linear with respect to the depth, width and bound of the proof. Subsequently, we show that given a classical equation \(t_1=t_2\), one may determine whether there exists an ordering \(\omega\) such that \((t_1=t_2,\omega)\) has a \(b\)-bounded proof, of depth \(d\) and width \(c\), in time \(f(E,d,c,b)\cdot|t_1 =t_2|^{O(c)}\). In other words, this task is fixed parameter tractable with respect to the depth and bound of the proof, and is in polynomial time for constant values of width. This second result is particularly interesting because the ordering \(\omega\) is not given a priori, and thus, we are indeed parameterizing the provability of equations in classical equational logic. In view of the expressiveness of equational logic, our results give new fixed parameter tractable algorithms for a whole spectrum of problems, such as polynomial identity testing, program verification, automated theorem proving and the validity problem in undecidable equational theories.
For the entire collection see [Zbl 1371.68015].An extended coding theorem with application to quantum complexitieshttps://zbmath.org/1496.681592022-11-17T18:59:28.764376Z"Epstein, Samuel"https://zbmath.org/authors/?q=ai:epstein.samuelSummary: This paper introduces a new inequality in algorithmic information theory that can be seen as an extended coding theorem. This inequality has applications in new bounds between quantum complexity measures.Bounding the dimension of points on a linehttps://zbmath.org/1496.681602022-11-17T18:59:28.764376Z"Lutz, Neil"https://zbmath.org/authors/?q=ai:lutz.neil"Stull, D. M."https://zbmath.org/authors/?q=ai:stull.donald-mSummary: We use Kolmogorov complexity methods to give a lower bound on the effective Hausdorff dimension of the point \((x, ax + b)\), given real numbers \(a, b\), and \(x\). We apply our main theorem to a problem in fractal geometry, giving an improved lower bound on the (classical) Hausdorff dimension of generalized sets of Furstenberg type.Automatic Kolmogorov complexity and normality revisitedhttps://zbmath.org/1496.681612022-11-17T18:59:28.764376Z"Shen, Alexander"https://zbmath.org/authors/?q=ai:shen.alexanderSummary: It is well known that normality (all factors of a given length appear in an infinite sequence with the same frequency) can be described as incompressibility via finite automata. Still the statement and the proof of this result as given by
\textit{V. Becher} and \textit{P. A. Heiber} [Theor. Comput. Sci. 477, 109--116 (2013; Zbl 1261.68079)]
in terms of ``lossless finite-state compressors'' do not follow the standard scheme of Kolmogorov complexity definition (an automaton is used for compression, not decompression). We modify this approach to make it more similar to the traditional Kolmogorov complexity theory (and simpler) by explicitly defining the notion of automatic Kolmogorov complexity and using its simple properties. Other known notions
[\textit{J. Shallit} and \textit{M.-W. Wang}, J. Autom. Lang. Comb. 6, No. 4, 537--554 (2001; Zbl 1004.68077); \textit{C. S. Calude} et al., Theor. Comput. Sci. 412, No. 41, 5668--5677 (2011; Zbl 1235.68088)]
of description complexity related to finite automata are discussed (see the last section). As a byproduct, this approach provides simple proofs of classical results about normality (equivalence of definitions with aligned occurrences and all occurrences, Wall's theorem saying that a normal number remains normal when multiplied by a rational number, and Agafonov's result saying that normality is preserved by automatic selection rules).
For the entire collection see [Zbl 1369.68029].Learning families of algebraic structures from informanthttps://zbmath.org/1496.681622022-11-17T18:59:28.764376Z"Bazhenov, Nikolay"https://zbmath.org/authors/?q=ai:bazhenov.n-a"Fokina, Ekaterina"https://zbmath.org/authors/?q=ai:fokina.ekaterina-b"San Mauro, Luca"https://zbmath.org/authors/?q=ai:san-mauro.lucaSummary: We combine computable structure theory and algorithmic learning theory to study learning of families of algebraic structures. Our main result is a model-theoretic characterization of the learning type \(\mathbf{InfEx}_{\cong}\), consisting of the structures whose isomorphism types can be learned in the limit. We show that a family of structures is \(\mathbf{InfEx}_{\cong}\)-learnable if and only if the structures can be distinguished in terms of their \(\Sigma_2^{\inf}\)-theories. We apply this characterization to familiar cases and we show the following: there is an infinite learnable family of distributive lattices; no pair of Boolean algebras is learnable; no infinite family of linear orders is learnable.CPO models for infinite term rewritinghttps://zbmath.org/1496.681632022-11-17T18:59:28.764376Z"Corradini, Andrea"https://zbmath.org/authors/?q=ai:corradini.andrea"Gadducci, Fabio"https://zbmath.org/authors/?q=ai:gadducci.fabioSummary: Infinite terms in universal algebras are a well-known topic since the seminal work of the ADJ group [\textit{J. A. Goguen} et al., J. Assoc. Comput. Mach. 24, 68--95 (1977; Zbl 0359.68018)]. The recent interest in the field of \textit{term rewriting} (tr) for infinite terms is due to the use of \textit{term graph rewriting} to implement tr, where terms are represented by graphs: so, a cyclic graph is a finitary description of a possibly infinite term. In this paper we introduce \textit{infinite rewriting logic}, working on the framework of \textit{rewriting logic} proposed by \textit{J. Meseguer} [Functorial semantics of rewrite systems. Techn. Rep. 
CSL-93-02R, Computer Science Laboratory (1990); Theor. Comput. Sci. 96, No. 1, 73--155 (1992; Zbl 0758.68043)]. We provide a simple algebraic presentation of infinite computations, recovering the \textit{infinite parallel term rewriting}, originally presented by the first author [Lect. Notes Comput. Sci. 668, 468--484 (1993; \url{doi:10.1007/3-540-56610-4_83})] to extend the classical, set-theoretical approach to tr with infinite terms. Moreover, we put all the formalism on firm theoretical bases, providing (for the first time, to the best of our knowledge, for infinitary rewriting systems) a clean algebraic semantics by means of (internal) 2-categories.
For the entire collection see [Zbl 1492.68008].ESM systems and the composition of their computationshttps://zbmath.org/1496.681642022-11-17T18:59:28.764376Z"Janssens, D."https://zbmath.org/authors/?q=ai:janssens.dirk|janssens.davySummary: ESM systems are graph rewriting systems where productions are morphisms in a suitable category, ESM. The way graphs are transformed in ESM systems is essentially the same as in actor grammars, which were introduced in
[the author and \textit{G. Rozenberg}, Math. Syst. Theory 22, No. 2, 75--107 (1989; Zbl 0677.68082)].
It is demonstrated that a rewriting step corresponds to a (single) pushout construction, as in the approach from
[\textit{M. Löwe}, Theor. Comput. Sci. 109, No. 1--2, 181--224 (1993; Zbl 0787.18001)].
Rewriting processes in ESM systems are represented by computation structures, and it is shown that communication of rewriting processes corresponds to a gluing operation on computation structures. In the last section we briefly sketch how one may develop a semantics for ESM systems, based on computation structures, that is compositional w.r.t. the union of ESM systems.
For the entire collection see [Zbl 0825.00054].Semi-completeness of hierarchical and super-hierarchical combinations of term rewriting systemshttps://zbmath.org/1496.681652022-11-17T18:59:28.764376Z"Krishna Rao, M. R. K."https://zbmath.org/authors/?q=ai:rao.m-r-k-krishnaSummary: In this paper, we study modular aspects of hierarchical and super-hierarchical combinations of term rewriting systems. In particular, a sufficient condition for modularity of semi-completeness of hierarchical and super-hierarchical combinations is proposed. We first establish modularity of weak normalization for this class (defined by the sufficient condition) and modularity of semi-completeness for a class of crosswise independent unions. From these results, we obtain modularity of semi-completeness for a class of hierarchical and super-hierarchical combinations. Our results generalize the semi-completeness results of
\textit{E. Ohlebusch} [Lect. Notes Comput. Sci. 787, 261--275 (1994; Zbl 0938.68683)]
and \textit{A. Middeldorp} and \textit{Y. Toyama} [J. Symb. Comput. 15, No. 3, 331--348 (1993; Zbl 0778.68050)].
The notion of crosswise independent unions is a generalization of both constructor-sharing unions and Plump's crosswise disjoint unions.
For the entire collection see [Zbl 0835.68002].Generalized rewrite theories, coherence completion, and symbolic methodshttps://zbmath.org/1496.681662022-11-17T18:59:28.764376Z"Meseguer, José"https://zbmath.org/authors/?q=ai:meseguer.jose.1|meseguer.joseSummary: A new notion of generalized rewrite theory suitable for symbolic reasoning and generalizing the standard notion in
[\textit{R. Bruni} and the author, Theor. Comput. Sci. 360, No. 1--3, 386--414 (2006; Zbl 1097.68051)]
is motivated and defined. Also, new requirements for \textit{symbolic executability} of generalized rewrite theories that extend those in
[\textit{F. Durán} and the author, J. Log. Algebr. Program. 81, No. 7--8, 816--850 (2012; Zbl 1272.03139)]
for standard rewrite theories, including a generalized notion of \textit{coherence}, are given. Symbolic executability, including coherence, is both ensured and made available for a wide class of such theories by automatable \textit{theory transformations}. Using these foundations, several \textit{symbolic reasoning methods} using generalized rewrite theories are studied, including: (i) symbolic description of sets of terms by \textit{pattern predicates}; (ii) reasoning about universal reachability properties by \textit{generalized rewriting}; (iii) reasoning about existential reachability properties by \textit{constrained narrowing}; and (iv) symbolic verification of \textit{safety properties} such as invariants and stability properties.Lazy narrowing: strong completeness and eager variable elimination (extended abstract)https://zbmath.org/1496.681672022-11-17T18:59:28.764376Z"Okui, Satoshi"https://zbmath.org/authors/?q=ai:okui.satoshi"Middeldorp, Aart"https://zbmath.org/authors/?q=ai:middeldorp.aart"Ida, Tetsuo"https://zbmath.org/authors/?q=ai:ida.tetsuoSummary: Narrowing is an important method for solving unification problems in equational theories that are presented by confluent term rewriting systems. Because narrowing is a rather complicated operation, several authors studied calculi in which narrowing is replaced by more simple inference rules. This paper is concerned with one such calculus. Contrary to what has been stated in the literature, we show that the calculus lacks strong completeness, so selection functions to cut down the search space are not applicable. We prove completeness of the calculus and we establish an interesting connection between its strong completeness and the completeness of basic narrowing. We also address the eager variable elimination problem. It is known that many redundant derivations can be avoided if the variable elimination rule, one of the inference rules of our calculus, is given precedence over the other inference rules. 
We prove the completeness of a restricted variant of eager variable elimination in the case of orthogonal term rewriting systems.
For the entire collection see [Zbl 0835.68002].Term rewriting on GPUshttps://zbmath.org/1496.681682022-11-17T18:59:28.764376Z"van Eerd, Johri"https://zbmath.org/authors/?q=ai:van-eerd.johri"Groote, Jan Friso"https://zbmath.org/authors/?q=ai:groote.jan-friso"Hijma, Pieter"https://zbmath.org/authors/?q=ai:hijma.pieter"Martens, Jan"https://zbmath.org/authors/?q=ai:martens.jan"Wijs, Anton"https://zbmath.org/authors/?q=ai:wijs.anton-jSummary: We present a way to implement term rewriting on a GPU. We do this by letting the GPU repeatedly perform a massively parallel evaluation of all subterms. We find that if the term rewrite systems exhibit sufficient internal parallelism, GPU rewriting substantially outperforms the CPU. Since we expect that our implementation can be further optimized, and because in any case GPUs will become much more powerful in the future, this suggests that GPUs are an interesting platform for term rewriting. As term rewriting can be viewed as a universal programming language, this also opens a route towards programming GPUs by term rewriting, especially for irregular computations.
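The round-based scheme just described (every subterm evaluated in each pass) can be mimicked sequentially on a toy rewrite system. The Peano-addition rules below are a hypothetical illustration, not taken from the paper:

```python
def step(t, rules):
    # One round: rewrite all subterms bottom-up, then try the root.
    # Terms are nested tuples ("f", arg1, ...) or constant strings.
    if isinstance(t, tuple):
        t = (t[0],) + tuple(step(a, rules) for a in t[1:])
    for rule in rules:
        r = rule(t)
        if r is not None:
            return r
    return t

def normalize(t, rules):
    # Iterate rounds to a fixed point (assumes the system terminates).
    while True:
        nt = step(t, rules)
        if nt == t:
            return t
        t = nt

# Toy rules for Peano addition: add(0, y) -> y ; add(s(x), y) -> s(add(x, y))
def add_zero(t):
    if isinstance(t, tuple) and t[0] == "add" and t[1] == "0":
        return t[2]

def add_succ(t):
    if isinstance(t, tuple) and t[0] == "add" and isinstance(t[1], tuple) and t[1][0] == "s":
        return ("s", ("add", t[1][1], t[2]))

# 2 + 1 normalizes to 3, i.e. s(s(s(0)))
print(normalize(("add", ("s", ("s", "0")), ("s", "0")), [add_zero, add_succ]))
```

On a GPU each round would reduce all redexes of one pass in parallel; the payoff then depends, as the summary notes, on how much internal parallelism the rewrite system exhibits.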
For the entire collection see [Zbl 1489.68021].On the expressive power of algebraic graph grammars with application conditionshttps://zbmath.org/1496.681692022-11-17T18:59:28.764376Z"Wagner, Annika"https://zbmath.org/authors/?q=ai:wagner.annikaSummary: In this paper we introduce positive, negative and conditional application conditions for the single and the double pushout approach to graph transformation. To give the reader some intuition about how the formalism can be used for specification, we consider consistency and an interesting representation for specific conditions, namely (conditional) equations. Using a graph grammar notion without nonterminal graphs, i.e. each derivation step leads to a graph of the generated language, we prove a hierarchy: graph grammars over rules with positive application conditions are as powerful as the ones over rules without any extra application condition. Introducing negative application conditions makes the formalism more powerful. Graph grammars over rules with conditional application conditions are on top of the hierarchy.
For the entire collection see [Zbl 0835.68002].The equivalence problem for letter-to-letter bottom-up tree transducers is solvablehttps://zbmath.org/1496.681702022-11-17T18:59:28.764376Z"Andre, Yves"https://zbmath.org/authors/?q=ai:andre.yves"Bossut, Francis"https://zbmath.org/authors/?q=ai:bossut.francisSummary: Letter-to-letter bottom-up tree transducers are investigated in this paper. By encoding the tree transformations so defined as relabelings, we establish the decidability of equivalence for this class of tree transducers. Some extensions are then given.
For the entire collection see [Zbl 0835.68002].The geometric properties of an infinitary line and plane languageshttps://zbmath.org/1496.681712022-11-17T18:59:28.764376Z"Arulraj, Anitta"https://zbmath.org/authors/?q=ai:arulraj.anitta"Thamburaj, Robinson"https://zbmath.org/authors/?q=ai:thamburaj.robinson"Samuel, Huldah"https://zbmath.org/authors/?q=ai:samuel.huldahSummary: Formal grammar is an area of discrete mathematics dealing with syntactic techniques for the generation of words and their properties. The concepts of the Parikh vector (PV) and the Generalized Parikh vector (GPV) play an important role in the combinatorial properties of words under formal grammars. The Generalized Parikh Vector, introduced by Siromoney et al., gives the positions of symbols in linear strings. It has been proved that the GPVs of strings of the same length lie on a hyperplane. Since any formal language over a binary alphabet can be represented geometrically, this paper explores the geometrical properties of the lines induced by the corresponding languages. The study is extended to a three-letter alphabet, giving rise to planar objects.Optimizing the combined automation scheme in the ASIS basishttps://zbmath.org/1496.681722022-11-17T18:59:28.764376Z"Barkalov, A. A."https://zbmath.org/authors/?q=ai:barkalov.alexander-a-jun|barkalov.alexander-a"Titarenko, L. A."https://zbmath.org/authors/?q=ai:titarenko.l-a"Baiev, A. V."https://zbmath.org/authors/?q=ai:baev.andrei-vladimirovich|baev.artem-v"Matviienko, A. V."https://zbmath.org/authors/?q=ai:matviienko.a-vSummary: A method is proposed for decreasing the area of the ASIS occupied by the scheme of a combined automaton. The method is based on the encoding of the classes of pseudoequivalent states of the Moore automaton by additional variables. This approach leads to a four-level scheme implemented as two nano-PLAs and decreases the area of the nano-PLA that generates the microoperations of the Moore automaton and the additional variables.
An example of synthesis using the proposed scheme is considered. The results of an efficiency analysis of the proposed method on a library of benchmarks are presented.A language hierarchy of binary relationshttps://zbmath.org/1496.681732022-11-17T18:59:28.764376Z"Brough, Tara"https://zbmath.org/authors/?q=ai:brough.tara"Cain, Alan J."https://zbmath.org/authors/?q=ai:cain.alan-jSummary: Motivated by the study of word problems of monoids, we explore two ways of viewing binary relations on \(X^\ast\) as languages. We exhibit a hierarchy of classes of binary relations on \(X^\ast \), according to the class of languages the relation belongs to and the chosen viewpoint. We give examples of word problems of monoids distinguishing the various classes. Aside from the algebraic interest, these examples demonstrate that the hierarchy still holds when restricted to equivalence relations.
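Two standard ways of viewing a pair of words as a single word, of the kind surveyed in the binary-relations abstract above, can be sketched as follows. The separator `#` and the padding symbol `_` are illustrative conventions for this sketch, not necessarily the authors' exact definitions.

```python
PAD = "_"  # padding symbol (an assumption for this sketch)

def encode_concat(u, v, sep="#"):
    """Viewpoint 1: the pair (u, v) becomes the single word u#v
    over the alphabet extended with a fresh separator symbol."""
    return u + sep + v

def encode_convolution(u, v):
    """Viewpoint 2: the pair becomes a word over the product alphabet,
    reading u and v letter by letter in parallel and padding the
    shorter component on the right."""
    n = max(len(u), len(v))
    u, v = u.ljust(n, PAD), v.ljust(n, PAD)
    return list(zip(u, v))

# A pair from the relation {(w, ww) : w over {a, b}} under both viewpoints:
assert encode_concat("ab", "abab") == "ab#abab"
assert encode_convolution("ab", "abab") == [("a", "a"), ("b", "b"), ("_", "a"), ("_", "b")]
```

The point of distinguishing the viewpoints is that the same relation can land in different language classes depending on which encoding is used.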
In the paper, we show that the Cayley graph of a finitely generated group cannot be traversed by a collective of automata if and only if the group is infinite and all its elements are periodic.Weak equivalence of higher-dimensional automatahttps://zbmath.org/1496.681752022-11-17T18:59:28.764376Z"Kahl, Thomas"https://zbmath.org/authors/?q=ai:kahl.thomasSummary: This paper introduces a notion of equivalence for higher-dimensional automata, called weak equivalence. Weak equivalence focuses mainly on a traditional trace language and a new homology language, which captures the overall independence structure of an HDA. It is shown that weak equivalence is compatible with both the tensor product and the coproduct of HDAs and that, under certain conditions, HDAs may be reduced to weakly equivalent smaller ones by merging and collapsing cubes.
The construction directly applies to all basic models representable as GWA, and, in particular, subsumes numerous existing results for making individual models halt on every input.A tour of recent results on word transducershttps://zbmath.org/1496.681772022-11-17T18:59:28.764376Z"Muscholl, Anca"https://zbmath.org/authors/?q=ai:muscholl.ancaSummary: Regular word transductions extend the robust notion of regular languages from acceptors to transformers. They were already considered in early papers of formal language theory, but turned out to be much more challenging. The last decade brought considerable research around various transducer models, aiming to achieve similar robustness as for automata and languages.
In this talk we survey some recent results on regular word transducers. We discuss how classical connections between automata, logic and algebra extend to transducers, as well as some genuine definability questions. For a recent, more detailed overview of the theory of regular word transductions the reader is referred to the excellent survey
[\textit{E. Filiot} and \textit{P.-A. Reynier}, ``Transducers, logic and algebra for functions of finite words'', ACM SIGLOG News 3, No. 3, 4--19 (2016; \url{doi:10.1145/2984450.2984453})].
For the entire collection see [Zbl 1369.68029].First-order logic on finite treeshttps://zbmath.org/1496.681782022-11-17T18:59:28.764376Z"Potthoff, Andreas"https://zbmath.org/authors/?q=ai:potthoff.andreasSummary: We present effective criteria for first-order definability of regular tree languages. It is known that over words the absence of modulo counting (the ``noncounting property'') characterizes the expressive power of first-order logic (McNaughton, Schützenberger), whereas non-counting regular tree languages exist which are not first-order definable. We present new conditions on regular tree languages (more precisely, on tree automata) which imply nondefinability in first-order logic. One method is based on tree homomorphisms, which allow one to deduce the nondefinability of one tree language from the nondefinability of another. Additionally, we introduce a structural property of tree automata (the so-called \(\land\text{-}\lor\)-patterns) which also causes tree languages to be undefinable in first-order logic. Finally, it is shown that this notion does not yet give a complete characterization of first-order logic over trees. The proofs rely on the method of Ehrenfeucht-Fraïssé games.
For the entire collection see [Zbl 0835.68002].Subclasses of recognizable trace languageshttps://zbmath.org/1496.681792022-11-17T18:59:28.764376Z"Reineke, Henning"https://zbmath.org/authors/?q=ai:reineke.henningSummary: Mazurkiewicz's traces combine the concepts of formal language theory with concurrency. The class of recognizable trace languages can be characterized by means of Zielonka's finite asynchronous automaton which is representable by a labelled safe Petri net. In this paper subclasses of the recognizable trace languages are defined by restricting the structure of the automaton. The subclasses are characterized and relations between them are examined.
For the entire collection see [Zbl 1492.68015].Decidability of equivalence for deterministic synchronized tree automatahttps://zbmath.org/1496.681802022-11-17T18:59:28.764376Z"Salomaa, Kai"https://zbmath.org/authors/?q=ai:salomaa.kai-tSummary: Synchronized tree automata allow limited communication between computations in independent subtrees of the input. This enables them to verify, for instance, the equality of two unary subtrees of unlimited size. The class of tree languages recognized by synchronized tree automata is strictly included in the context-free tree languages. As our main result we show that equivalence of tree languages recognized by deterministic synchronized tree automata can be effectively decided. This contrasts with the earlier undecidability result for the equivalence problem for nondeterministic synchronized tree automata.
For the entire collection see [Zbl 0835.68002].Fine hierarchy of regular \(\omega\)-languageshttps://zbmath.org/1496.681812022-11-17T18:59:28.764376Z"Selivanov, Victor"https://zbmath.org/authors/?q=ai:selivanov.victor-lSummary: By applying descriptive set theory we get several facts on the fine structure of regular \(\omega\)-languages considered by K. Wagner. We present quite different, shorter proofs for his main results and obtain new results. Our description of the fine structure is new, very clear and automata-free. We also prove a closure property of the fine structure under Boolean operations. Our results demonstrate deep interconnections between descriptive set theory and the theory of \(\omega\)-languages.
For the entire collection see [Zbl 0835.68002].Amendable automaton for the language of finite strings of rectangular Hilbert curvehttps://zbmath.org/1496.681822022-11-17T18:59:28.764376Z"Thiagarajan, K."https://zbmath.org/authors/?q=ai:thyagarajan.krishna|thiagarajan.krish-p"Balasubramanian, P."https://zbmath.org/authors/?q=ai:balasubramanian.padmanabhan|balasubramanian.parasuram|balasubramanian.praveen"Navaneetham, K."https://zbmath.org/authors/?q=ai:navaneetham.k"Brahnam, S."https://zbmath.org/authors/?q=ai:brahnam.sheryl(no abstract)Computing the Wadge degree, the Lifschitz degree, and the Rabin index of a regular language of infinite words in polynomial timehttps://zbmath.org/1496.681832022-11-17T18:59:28.764376Z"Wilke, Thomas"https://zbmath.org/authors/?q=ai:wilke.thomas"Yoo, Haiseung"https://zbmath.org/authors/?q=ai:yoo.haiseungSummary: Based on a detailed graph theoretical analysis, Wagner's fundamental results of 1979 are turned into efficient algorithms to compute the Wadge degree, the Lifschitz degree, and the Rabin index of a regular \(\omega\)-language: the first two can be computed in time \(\mathcal{O}(f^2 qb+k \log k)\) and the third in time \(\mathcal{O}(f^2 qb)\) if the language is represented by a deterministic Muller automaton over an alphabet of cardinality \(b\), with \(f\) accepting sets, \(q\) states, and \(k\) strongly connected components.
For the entire collection see [Zbl 0835.68002].Context-free event domains are recognizablehttps://zbmath.org/1496.681842022-11-17T18:59:28.764376Z"Badouel, Eric"https://zbmath.org/authors/?q=ai:badouel.eric"Darondeau, Philippe"https://zbmath.org/authors/?q=ai:darondeau.philippe"Raoult, Jean-Claude"https://zbmath.org/authors/?q=ai:raoult.jean-claudeSummary: The possibly non-distributive event domains which arise from Winskel's event structures with binary conflict are known to coincide with the domains of configurations of Stark's trace automata. We prove that whenever the transitive reduction of the order on finite elements in an event domain is a context-free graph in the sense of Müller and Schupp, that event domain may also be generated from a finite trace automaton, where both the set of states and the concurrent alphabet are finite. We show that the set of graph grammars which generate event domains is a recursive set. We obtain altogether an effective procedure which decides from an unlabelled graph grammar whether it generates an event domain and which constructs in that case a finite trace automaton recognizing that event domain. The advantage of trace automata over unlabelled graph grammars is to provide a more concrete and therefore more tractable representation of event domains, well suited to an automated verification of their properties.
For the entire collection see [Zbl 1492.68008].Rational spaces and set constraintshttps://zbmath.org/1496.681852022-11-17T18:59:28.764376Z"Kozen, Dexter"https://zbmath.org/authors/?q=ai:kozen.dexter-cSummary: Set constraints are inclusions between expressions denoting sets of ground terms. They have been used extensively in program analysis and type inference. In this paper we investigate the topological structure of the spaces of solutions to systems of set constraints. We identify a family of topological spaces called \textit{rational spaces}, which formalize the notion of a topological space with a regular or self-similar structure, such as the Cantor discontinuum or the space of runs of a finite automaton. We develop the basic theory of rational spaces and derive generalizations and proofs from topological principles of some results in the literature on set constraints.
For the entire collection see [Zbl 0835.68002].Universal equivalence and majority of probabilistic programs over finite fieldshttps://zbmath.org/1496.681862022-11-17T18:59:28.764376Z"Barthe, Gilles"https://zbmath.org/authors/?q=ai:barthe.gilles"Jacomme, Charlie"https://zbmath.org/authors/?q=ai:jacomme.charlie"Kremer, Steve"https://zbmath.org/authors/?q=ai:kremer.steveTableaux for policy synthesis for MDPs with PCTL* constraintshttps://zbmath.org/1496.681872022-11-17T18:59:28.764376Z"Baumgartner, Peter"https://zbmath.org/authors/?q=ai:baumgartner.peter"Thiébaux, Sylvie"https://zbmath.org/authors/?q=ai:thiebaux.sylvie"Trevizan, Felipe"https://zbmath.org/authors/?q=ai:trevizan.felipe-wSummary: Markov decision processes (MDPs) are the standard formalism for modelling sequential decision making in stochastic environments. Policy synthesis addresses the problem of how to control or limit the decisions an agent makes so that a given specification is met. In this paper we consider PCTL*, the probabilistic counterpart of CTL*, as the specification language. Because in general the policy synthesis problem for PCTL* is undecidable, we restrict to policies whose execution history memory is finitely bounded a priori. Surprisingly, no algorithm for policy synthesis for this natural and expressive framework has been developed so far. We close this gap and describe a tableau-based algorithm that, given an MDP and a PCTL* specification, derives in a non-deterministic way a system of (possibly nonlinear) equalities and inequalities. The solutions of this system, if any, describe the desired (stochastic) policies. Our main result in this paper is the correctness of our method, i.e., soundness, completeness and termination.
For the entire collection see [Zbl 1371.68015].Minimisation of \(\mathrm{ATL}^*\) modelshttps://zbmath.org/1496.681882022-11-17T18:59:28.764376Z"Cerrito, Serenella"https://zbmath.org/authors/?q=ai:cerrito.serenella"David, Amélie"https://zbmath.org/authors/?q=ai:david.amelieSummary: The aim of this work is to provide a general method to minimize the size (number of states) of a model \(\mathcal {M}\) of an \(\mathrm{ATL}^*\) formula. Our approach is founded on the notion of alternating bisimulation: given a model \(\mathcal {M}\), it is transformed in a stepwise manner into a new model \(\mathcal M'\) minimal with respect to bisimulation. The method has been implemented and will be integrated into the prover TATL, that constructively decides satisfiability of an \(\mathrm{ATL}^*\) formula by building a tableau from which, when open, models of the input formula can be extracted.
For the entire collection see [Zbl 1371.68015].Verification in continuous time by discrete reasoninghttps://zbmath.org/1496.681892022-11-17T18:59:28.764376Z"de Alfaro, Luca"https://zbmath.org/authors/?q=ai:de-alfaro.luca"Manna, Zohar"https://zbmath.org/authors/?q=ai:manna.zoharFor the entire collection see [Zbl 1492.68008].On behavioural abstraction and behavioural satisfaction in higher-order logichttps://zbmath.org/1496.681902022-11-17T18:59:28.764376Z"Hofmann, Martin"https://zbmath.org/authors/?q=ai:hofmann.martin.1"Sannella, Donald"https://zbmath.org/authors/?q=ai:sannella.donald-tSummary: The behavioural semantics of specifications with higher-order formulae as axioms is analyzed. A characterization of behavioural abstraction via behavioural satisfaction of formulae in which the equality symbol is interpreted as indistinguishability, due to Reichel and recently generalized to the case of first-order logic by Bidoit et al., is further generalized to this case. The fact that higher-order logic is powerful enough to express the indistinguishability relation is used to characterize behavioural satisfaction in terms of ordinary satisfaction, and to develop new methods for reasoning about specifications under behavioural semantics.
For the entire collection see [Zbl 0835.68002].Theory refinement for program verificationhttps://zbmath.org/1496.681912022-11-17T18:59:28.764376Z"Hyvärinen, Antti E. J."https://zbmath.org/authors/?q=ai:hyvarinen.antti-e-j"Asadi, Sepideh"https://zbmath.org/authors/?q=ai:asadi.sepideh"Even-Mendoza, Karine"https://zbmath.org/authors/?q=ai:even-mendoza.karine"Fedyukovich, Grigory"https://zbmath.org/authors/?q=ai:fedyukovich.grigory"Chockler, Hana"https://zbmath.org/authors/?q=ai:chockler.hana"Sharygina, Natasha"https://zbmath.org/authors/?q=ai:sharygina.natashaSummary: Recent progress in automated formal verification is to a large degree due to the development of constraint languages that are sufficiently light-weight for reasoning but still expressive enough to prove properties of programs. Satisfiability modulo theories (SMT) solvers implement efficient decision procedures, but offer little direct support for adapting the constraint language to the task at hand. Theory refinement is a new approach that modularly adjusts the modeling precision based on the properties being verified through the use of combinations of theories. We implement the approach using an augmented version of the theory of bit-vectors and uninterpreted functions capable of directly injecting non-clausal refinements to the inherent Boolean structure of SMT. In our comparison to a state-of-the-art model checker, our prototype implementation is in general competitive, being several orders of magnitude faster on some instances that are challenging for flattening, while computing models that are significantly more succinct.
For the entire collection see [Zbl 1368.68008].Assumption/guarantee specifications in linear-time temporal logic (extended abstract)https://zbmath.org/1496.681922022-11-17T18:59:28.764376Z"Jonsson, Bengt"https://zbmath.org/authors/?q=ai:jonsson.bengt"Tsay, Yih-Kuen"https://zbmath.org/authors/?q=ai:tsay.yih-kuenSummary: Previous works on assumption/guarantee specifications typically reason about relevant properties at the semantic level or define a special-purpose logic. We feel it is beneficial to formulate such specifications in a more widely used formalism. Specifically, we adopt the linear-time temporal logic (LTL) of Manna and Pnueli. We find that, with past temporal operators, LTL admits a succinct \textit{syntactic} formulation of assumption/guarantee specifications. This contrasts, in particular, with the work by Abadi and Lamport using TLA, where working at the syntactic level is more complicated. Our composition rules are derived entirely within LTL and can also handle internal variables. We had to overcome a number of technical problems in this pursuit, in particular, the problem of extracting the safety closure of a temporal formula. As a by-product, we identify general conditions under which the safety closure can be expressed in a succinct way that facilitates syntactic manipulation.
For the entire collection see [Zbl 0835.68002].A Benders decomposition approach to deciding modular linear integer arithmetichttps://zbmath.org/1496.681932022-11-17T18:59:28.764376Z"Kafle, Bishoksan"https://zbmath.org/authors/?q=ai:kafle.bishoksan"Gange, Graeme"https://zbmath.org/authors/?q=ai:gange.graeme"Schachte, Peter"https://zbmath.org/authors/?q=ai:schachte.peter"Søndergaard, Harald"https://zbmath.org/authors/?q=ai:sondergaard.harald"Stuckey, Peter J."https://zbmath.org/authors/?q=ai:stuckey.peter-jSummary: Verification tasks frequently require deciding systems of linear constraints over modular (machine) arithmetic. Existing approaches for reasoning over modular arithmetic use bit-vector solvers, or else approximate machine integers with mathematical integers and use arithmetic solvers. Neither is ideal; the first is sound but inefficient, and the second is efficient but unsound. We describe a linear encoding which correctly describes modular arithmetic semantics, yielding an optimistic but sound approach. Our method abstracts the problem with linear arithmetic, but progressively refines the abstraction when modular semantics is violated. This preserves soundness while exploiting the mostly integer nature of the constraint problem. We present a prototype implementation, which gives encouraging experimental results.
For the entire collection see [Zbl 1368.68008].Verification of asynchronous circuits by BDD-based model checking of Petri netshttps://zbmath.org/1496.681942022-11-17T18:59:28.764376Z"Roig, Oriol"https://zbmath.org/authors/?q=ai:roig.oriol"Cortadella, Jordi"https://zbmath.org/authors/?q=ai:cortadella.jordi"Pastor, Enric"https://zbmath.org/authors/?q=ai:pastor.enricSummary: This paper presents a methodology for the verification of speed-independent asynchronous circuits against a Petri net specification. The technique is based on symbolic reachability analysis, modeling both the specification and the gate-level network behavior by means of Boolean functions. These functions are efficiently handled by using \textit{Binary Decision Diagrams}. Algorithms for verifying the correctness of designs, as well as several circuit properties are proposed. Finally, the applicability of our verification method has been proven by checking the correctness of different benchmarks.
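The reachability analysis underlying the BDD-based verification abstract above can be sketched explicitly: the toy fixed-point iteration below enumerates the markings of a 1-safe Petri net one by one, whereas the paper represents whole sets of markings as Boolean functions held in Binary Decision Diagrams. The two-place cycling net is an invented example.

```python
def fire(marking, pre, post):
    """Fire a transition if enabled; for a 1-safe net a marking is just
    the frozenset of marked places."""
    if pre <= marking:                     # all input places marked
        return (marking - pre) | post
    return None                            # transition not enabled

def reachable(m0, transitions):
    """Explicit-state fixed point: the symbolic version performs the same
    iteration on sets of markings encoded as BDDs."""
    seen, frontier = {m0}, {m0}
    while frontier:
        nxt = set()
        for m in frontier:
            for pre, post in transitions:
                m2 = fire(m, pre, post)
                if m2 is not None and m2 not in seen:
                    nxt.add(m2)
        seen |= nxt
        frontier = nxt
    return seen

# Toy net: a single token cycles p0 -> p1 -> p0.
t = [(frozenset({"p0"}), frozenset({"p1"})),
     (frozenset({"p1"}), frozenset({"p0"}))]
assert reachable(frozenset({"p0"}), t) == {frozenset({"p0"}), frozenset({"p1"})}
```

Model checking a property then amounts to asking whether any reachable marking violates it; the symbolic encoding is what lets the method scale past explicitly enumerable state spaces.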
For the entire collection see [Zbl 1492.68015].A novel approach for supervisor synthesis to enforce opacity of discrete event systemshttps://zbmath.org/1496.681952022-11-17T18:59:28.764376Z"Souid, Nour Elhouda"https://zbmath.org/authors/?q=ai:souid.nour-elhouda"Klai, Kais"https://zbmath.org/authors/?q=ai:klai.kaisSummary: Opacity is a property of information flow that characterizes the ability of a system to keep its secret information hidden from a third party called an attacker. In the state of the art, opacity of Discrete Event Systems (DES) has been investigated using a variety of techniques. Methods based on Supervisory Control Theory (SCT) emerge as an efficient approach for enforcing this property. In this paper, we address the problem of enforcing the opacity of a DES through the definition of a supervisor whose role is to restrain the behavior of the system, keeping only ``good'' runs, i.e., executions that exactly correspond to the opaque subset of the system's state space. The proposed approach is based on the Symbolic Observation Graph: a hybrid graph where nodes are subsets of reachable states linked with unobservable actions. Encoding such nodes symbolically using binary decision diagrams makes it possible to tackle the state space explosion problem.
We designed a reduced-cost algorithm that synthesizes an optimal supervisor (at design time) to ensure the opacity of the system (at runtime). Moreover, we implemented our approach in C++ language and we validated our proposition using a real-life case study.
For the entire collection see [Zbl 1487.68013].The trace modalityhttps://zbmath.org/1496.681962022-11-17T18:59:28.764376Z"Steinhöfel, Dominic"https://zbmath.org/authors/?q=ai:steinhofel.dominic"Hähnle, Reiner"https://zbmath.org/authors/?q=ai:hahnle.reinerSummary: We propose the trace modality, a concept to uniformly express a wide range of program verification problems. To demonstrate its usefulness, we formalize several program verification problems in it: Functional Verification, Information Flow Analysis, Temporal Model Checking, Program Synthesis, Correct Compilation, and Program Evolution. To reason about the trace modality, we translate programs and specifications to regular symbolic traces and construct simulation relations on first-order symbolic automata. The idea with this uniform representation is that it helps to identify synergy potential -- theoretically and practically -- between so far separate verification approaches.
For the entire collection see [Zbl 1430.03006].Event-driven temporal logic pattern for control software requirements specificationhttps://zbmath.org/1496.681972022-11-17T18:59:28.764376Z"Zyubin, Vladimir"https://zbmath.org/authors/?q=ai:zyubin.vladimir-evgenevich"Anureev, Igor"https://zbmath.org/authors/?q=ai:anureev.igor-sergeevich"Garanina, Natalia"https://zbmath.org/authors/?q=ai:garanina.natalia-o"Staroletov, Sergey"https://zbmath.org/authors/?q=ai:staroletov.sergey"Rozov, Andrei"https://zbmath.org/authors/?q=ai:rozov.andrei"Liakh, Tatiana"https://zbmath.org/authors/?q=ai:liakh.tatianaSummary: This paper presents event-driven temporal logic (EDTL), a specification formalism that allows the users to describe the behavior of control software in terms of events (including timeouts) and logical operations over inputs and outputs, and therefore consider the control system as a ``black box''. We propose the EDTL-based pattern that provides a simple but powerful and semantically rigorous conceptual framework oriented on industrial process plant developers in order to organize their effective interaction with the software developers and provide a seamless transition to the stages of requirement consistency checking and verification.
For the entire collection see [Zbl 1489.68021].Relations as abstract datatypes: an institution to specify relations between algebrashttps://zbmath.org/1496.681982022-11-17T18:59:28.764376Z"Baumeister, Hubert"https://zbmath.org/authors/?q=ai:baumeister.hubertSummary: One way to view the execution state of an imperative program is as a many-sorted algebra. Program variables are (constant) functions and their types are sorts. The execution of a program defines a relation between the state of the program (algebra) before and after the execution of the program. In this paper we shall define an institution for the specification of relations between structures of some base institution (e.g. the institution of equational logic or first-order predicate logic). Sets of structures over a common signature, abstract datatypes, in this institution denote relations between structures of the base institution. This makes it possible to apply a rich repertoire of existing techniques for specifying abstract datatypes to the specification of relations. This paper tries to narrow the gap between algebraic specification languages like Clear, ASL or Act-One and model-theoretic specification languages like Z, VDM-SL or the Larch Interface language.
For the entire collection see [Zbl 0835.68002].Proving the correctness of behavioural implementationshttps://zbmath.org/1496.681992022-11-17T18:59:28.764376Z"Bidoit, Michel"https://zbmath.org/authors/?q=ai:bidoit.michel"Hennicker, Rolf"https://zbmath.org/authors/?q=ai:hennicker.rolfSummary: We introduce a concept of behavioural implementation for algebraic specifications which is based on an indistinguishability relation (called behavioural equality). The central objective of this work is the investigation of proof rules that first allow us to establish the correctness of behavioural implementations in a modular way and moreover are practicable enough to induce proof obligations that can be discharged with existing theorem provers. Our proof technique can also be applied for proving abstractor implementations in the sense of Sannella and Tarlecki.
For the entire collection see [Zbl 1492.68008].Semantic typing for parametric algebraic specificationshttps://zbmath.org/1496.682002022-11-17T18:59:28.764376Z"Cengarle, María Victoria"https://zbmath.org/authors/?q=ai:cengarle.maria-victoriaSummary: The implementation relation of refinement of specifications is studied in the framework of the calculus of higher-order parameterization of specifications. An existing system for deriving the relation among non-parametric specifications is enlarged so as to comprise parametric specifications. The new system is correct and complete under certain assumptions. By means of this system the calculus of parametric specifications can be enhanced with semantic types, and in this way a specification is a valid argument of a parametric specification as long as it shows the particular behavior demanded by the semantic parameter restriction. This typing can be derived, and so function application is conditional on the derivability of the parameter restrictions instantiated with the actual argument.
For the entire collection see [Zbl 1492.68008].Order-sorted algebraic specifications with higher-order functionshttps://zbmath.org/1496.682012022-11-17T18:59:28.764376Z"Haxthausen, Anne Elisabeth"https://zbmath.org/authors/?q=ai:haxthausen.anne-elisabethSummary: This paper gives a proposal for how order-sorted algebraic specification languages can be extended with higher-order functions. The approach taken is a generalisation to the order-sorted case of an approach given by Möller, Tarlecki and Wirsing for the many-sorted case. The main idea in the proposal is to only consider reachable extensional algebras. This leads to a very simple theory, where it is possible to relate the higher-order specifications to first order specifications.
For the entire collection see [Zbl 1492.68008].Theorising monitoring: algebraic models of web monitoring in organisationshttps://zbmath.org/1496.682022022-11-17T18:59:28.764376Z"Johnson, Kenneth"https://zbmath.org/authors/?q=ai:johnson.kenneth-a|johnson.kenneth-l|johnson.kenneth-d|johnson.kenneth-c|johnson.kenneth-h|johnson.kenneth-r|johnson.kenneth-o"Tucker, John V."https://zbmath.org/authors/?q=ai:tucker.john-v"Wang, Victoria"https://zbmath.org/authors/?q=ai:wang.victoriaSummary: Our lives are facilitated and mediated by software. Thanks to software, data on nearly everything can be generated, accessed and analysed for all sorts of reasons. Software technologies, combined with political and commercial ideas and practices, have led to a wide range of our activities being monitored, which is the source of concerns about surveillance and privacy. We pose the questions: What is monitoring? Do diverse and disparate monitoring systems have anything in common? What role does monitoring play in contested issues of surveillance and privacy? We are developing an abstract theory for studying monitoring that begins by capturing structures common to many different monitoring practices. The theory formalises the idea that monitoring is a process that observes the behaviour of people and objects in a context. Such entities and their behaviours can be represented by abstract data types and their observable attributes by logics. In this paper, we give a formal model of monitoring based on the idea that behaviour is modelled by streams of data, and apply the model to a social context: the monitoring of web usage by staff and members of an organisation.
For the entire collection see [Zbl 1428.68025].Canonical selection of colimitshttps://zbmath.org/1496.682032022-11-17T18:59:28.764376Z"Mossakowski, Till"https://zbmath.org/authors/?q=ai:mossakowski.till"Rabe, Florian"https://zbmath.org/authors/?q=ai:rabe.florian"Codescu, Mihai"https://zbmath.org/authors/?q=ai:codescu.mihaiSummary: Colimits are a powerful tool for the combination of objects in a category. In the context of modeling and specification, they are used in the institution-independent semantics (1) of instantiations of parameterised specifications (e.g. in the specification language CASL), and (2) of combinations of networks of specifications (in the OMG standardised language DOL).
The problem of using colimits as the semantics of certain language constructs is that they are defined only up to isomorphism. However, the semantics of a complex specification in these languages is given by a signature and a class of models over that signature -- not by an isomorphism class of signatures. This is particularly relevant when a specification with colimit semantics is further translated or refined. The user needs to know the symbols of a signature for writing a correct refinement.
Therefore, we study how to usefully choose one representative of the isomorphism class of all colimits of a given diagram. We develop criteria that colimit selections should meet. We work over arbitrary inclusive categories, but begin by studying how the criteria can be met in \(\mathbb Set\)-like categories, which are often used as signature categories for institutions.
For the entire collection see [Zbl 1428.68025].Detecting isomorphisms of modular specifications with diagramshttps://zbmath.org/1496.682042022-11-17T18:59:28.764376Z"Oriat, Catherine"https://zbmath.org/authors/?q=ai:oriat.catherineSummary: We propose to detect isomorphisms of algebraic modular specifications, by representing specifications as diagrams over a category \(\mathcal{C}_o\) of base specifications and specification morphisms. We start with a formulation of modular specifications as terms, which are interpreted as diagrams. This representation has the advantage of being more abstract, i.e. less dependent on one specific construction, than terms. For that, we define a category \(\operatorname{diagr} (\mathcal{C}_o)\) of diagrams, which is a completion of \(\mathcal{C}_o\) with finite colimits. The category \(\operatorname{diagr} (\mathcal{C}_o)\) is \textit{finitely cocomplete}, even if \(\mathcal{C}_o\) is \textit{not} finitely cocomplete. We define a functor \(\mathcal{D} \square: \operatorname{Term}(\mathcal{C}_o) \rightarrow \operatorname{diagr} (\mathcal{C}_o)\) which maps specifications to diagrams, and specification morphisms to diagram morphisms. This interpretation is sound in that the colimit of a diagram representing a specification is isomorphic to this specification. The problem of isomorphisms of modular specifications is solved by detecting isomorphisms of diagrams.
For the entire collection see [Zbl 1492.68008].Algebraic model management: a surveyhttps://zbmath.org/1496.682052022-11-17T18:59:28.764376Z"Schultz, Patrick"https://zbmath.org/authors/?q=ai:schultz.patrick"Spivak, David I."https://zbmath.org/authors/?q=ai:spivak.david-i"Wisnesky, Ryan"https://zbmath.org/authors/?q=ai:wisnesky.ryanSummary: We survey the field of model management and describe a new model management approach based on algebraic specification.
For the entire collection see [Zbl 1428.68025].Nonfinite axiomatizability of shuffle inequalitieshttps://zbmath.org/1496.682062022-11-17T18:59:28.764376Z"Bloom, Stephen L."https://zbmath.org/authors/?q=ai:bloom.stephen-l"Ésik, Zoltán"https://zbmath.org/authors/?q=ai:esik.zoltanSummary: There is some set of inequations \(t \leq t^\prime\) whose models are the algebras in the variety of ordered algebras generated by the algebras \(\mathcal{L}_{\Sigma} =(P_{\Sigma},\cdot,\otimes,1)\) where \(P_{\Sigma}\) consists of all subsets of the free monoid \(\Sigma^*\), \(B \cdot C=\{uv : u \in B, v \in C\}\), and \(B \otimes C\) is the shuffle product of the two languages. We show that there is no finite set of such inequations.
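The shuffle product \(B \otimes C\) used in the summary above can be made concrete for finite languages; a minimal sketch (the function names are illustrative, not from the paper):

```python
def shuffle(u, v):
    """All interleavings (the shuffle) of two words u and v."""
    if not u:
        return {v}
    if not v:
        return {u}
    # Either the first letter of u or the first letter of v comes first.
    return ({u[0] + w for w in shuffle(u[1:], v)}
            | {v[0] + w for w in shuffle(u, v[1:])})

def shuffle_product(B, C):
    """Shuffle product of two finite languages: the union of the
    shuffles of all pairs of words from B and C."""
    return {w for b in B for c in C for w in shuffle(b, c)}
```

For instance, shuffling "ab" with "c" yields the three words "abc", "acb" and "cab".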
For the entire collection see [Zbl 0835.68002].Crisp-determinization of weighted tree automata over additively locally finite and past-finite monotonic strong bimonoids is decidablehttps://zbmath.org/1496.682072022-11-17T18:59:28.764376Z"Droste, Manfred"https://zbmath.org/authors/?q=ai:droste.manfred"Fülöp, Zoltán"https://zbmath.org/authors/?q=ai:fulop.zoltan"Kószó, Dávid"https://zbmath.org/authors/?q=ai:koszo.david"Vogler, Heiko"https://zbmath.org/authors/?q=ai:vogler.heikoSummary: A weighted tree automaton is crisp-deterministic if it is deterministic and each of its transitions carries either the additive or multiplicative unit of the underlying weight algebra; weights different from these units may only appear at the root of the given input tree. A weighted tree automaton is crisp-determinizable if there exists an equivalent crisp-deterministic one. We prove that it is decidable whether weighted tree automata over additively locally finite and past-finite monotonic strong bimonoids are crisp-determinizable.
For the entire collection see [Zbl 1465.68022].Crisp-determinization of weighted tree automata over strong bimonoidshttps://zbmath.org/1496.682082022-11-17T18:59:28.764376Z"Fülöp, Zoltán"https://zbmath.org/authors/?q=ai:fulop.zoltan"Kószo, Dávid"https://zbmath.org/authors/?q=ai:koszo.david"Vogler, Heiko"https://zbmath.org/authors/?q=ai:vogler.heikoSummary: We consider weighted tree automata (wta) over strong bimonoids and their initial algebra semantics and their run semantics. There are wta for which these semantics are different; however, for bottom-up deterministic wta and for wta over semirings, the difference vanishes. A wta is crisp-deterministic if it is bottom-up deterministic and each transition is weighted by one of the unit elements of the strong bimonoid. We prove that the class of weighted tree languages recognized by crisp-deterministic wta is the same as the class of recognizable step mappings. Moreover, we investigate the following two crisp-determinization problems: for a given wta \({\mathcal A}\), (a) does there exist a crisp-deterministic wta which computes the initial algebra semantics of \({\mathcal A}\), and (b) does there exist a crisp-deterministic wta which computes the run semantics of \({\mathcal A}\)? We show that the finiteness of the Nerode algebra \({\mathcal N}({\mathcal A})\) of \({\mathcal A}\) implies a positive answer for (a), and that the finite order property of \({\mathcal A}\) implies a positive answer for (b). We show a sufficient condition which guarantees the finiteness of \({\mathcal N}({\mathcal A})\) and a sufficient condition which guarantees the finite order property of \({\mathcal A}\). Also, we provide an algorithm for the construction of the crisp-deterministic wta according to (a) if \({\mathcal N}({\mathcal A})\) is finite, and similarly for (b) if \({\mathcal A}\) has the finite order property. We prove that it is undecidable whether an arbitrary wta \({\mathcal A}\) is crisp-determinizable.
We also prove that both the finiteness of \({\mathcal N}({\mathcal A})\) and the finite order property of \({\mathcal A}\) are undecidable.Semi-trace morphisms and rational transductionshttps://zbmath.org/1496.682092022-11-17T18:59:28.764376Z"Wacrenier, Pierre-André"https://zbmath.org/authors/?q=ai:wacrenier.pierre-andreSummary: We investigate trace and semi-trace morphisms from an algebraic point of view by means of rational transductions. The main result is a characterization of (semi-) trace morphisms which are equivalent to some rational transduction. Using this result, we easily characterize context-free trace morphisms.
For the entire collection see [Zbl 0835.68002].CPO models for a class of GSOS languageshttps://zbmath.org/1496.682102022-11-17T18:59:28.764376Z"Aceto, Luca"https://zbmath.org/authors/?q=ai:aceto.luca"Ingólfsdóttir, Anna"https://zbmath.org/authors/?q=ai:ingolfsdottir.annaSummary: In this paper, we present a general way of giving denotational semantics to a class of languages equipped with an operational semantics that fits the GSOS format of Bloom, Istrail and Meyer. The canonical model used for this purpose will be Abramsky's domain of synchronization trees, and the denotational semantics automatically generated by our methods will be guaranteed to be fully abstract with respect to the finitely observable part of the bisimulation preorder. In the process of establishing the full abstraction result, we also obtain several general results on the bisimulation preorder (including a complete axiomatization for it), and give a novel operational interpretation of GSOS languages.
For the entire collection see [Zbl 0835.68002].Reasoning about higher-order processeshttps://zbmath.org/1496.682112022-11-17T18:59:28.764376Z"Amadio, Roberto M."https://zbmath.org/authors/?q=ai:amadio.roberto-m"Dam, Mads"https://zbmath.org/authors/?q=ai:dam.madsSummary: We address the specification and verification problem for process calculi such as Chocs, CML and Facile where processes or functions are transmissible values. Our work takes place in the context of a static treatment of restriction and of a bisimulation-based semantics. As a paradigmatic and simple case we concentrate on (Plain) Chocs. We show that Chocs bisimulation can be characterized by an extension of Hennessy-Milner logic including a constructive implication, or function space constructor. This result is a non-trivial extension of the classical characterization result for labelled transition systems. In the second part of the paper we address the problem of developing a proof system for the verification of process specifications. Building on previous work for CCS we present a sound proof system for a Chocs sub-calculus not including restriction. We present two completeness results: one for the full specification language using an infinitary system, and one for a special class of so-called \textit{well-described} specifications using a finitary system.
For the entire collection see [Zbl 0835.68002].Polynomial algorithms for the synthesis of bounded netshttps://zbmath.org/1496.682122022-11-17T18:59:28.764376Z"Badouel, Eric"https://zbmath.org/authors/?q=ai:badouel.eric"Bernardinello, Luca"https://zbmath.org/authors/?q=ai:bernardinello.luca"Darondeau, Philippe"https://zbmath.org/authors/?q=ai:darondeau.philippeSummary: The so-called synthesis problem for nets, which consists in deciding whether a given graph is isomorphic to the case graph of some net, and then constructing the net, has been solved in the literature for various types of nets, ranging from elementary nets to Petri nets. The common principle for the synthesis is the idea of regions in graphs, representing possible extensions of places in nets. However, no practical algorithm has been defined so far for the synthesis. We give here explicit algorithms solving in polynomial time the synthesis problem for bounded nets from regular languages or from finite automata.
For the entire collection see [Zbl 0835.68002].On liveness in extended non self-controlling netshttps://zbmath.org/1496.682132022-11-17T18:59:28.764376Z"Barkaoui, K."https://zbmath.org/authors/?q=ai:barkaoui.kamel"Couvreur, J. M."https://zbmath.org/authors/?q=ai:couvreur.jean-michel"Dutheillet, C."https://zbmath.org/authors/?q=ai:dutheillet.claudeSummary: For several years, research has been done to establish relations between the liveness of a net and the structure of the underlying graph. This work has resulted in the proposal of polynomial algorithms to check liveness for particular classes of nets. In this paper, we present Extended Non Self-Controlling Nets, a class of nets that includes Extended Free-Choice Nets and Non Self-Controlling Nets. We develop some properties of this new class of nets and we propose polynomial algorithms whose application domain is wider than that of the previous algorithms.
For the entire collection see [Zbl 1492.68015].An algebraic semantics for hierarchical P/T netshttps://zbmath.org/1496.682142022-11-17T18:59:28.764376Z"Basten, Twan"https://zbmath.org/authors/?q=ai:basten.twan"Voorhoeve, Marc"https://zbmath.org/authors/?q=ai:voorhoeve.marcSummary: The first part of this paper gives an algebraic semantics for Place/Transition nets in terms of an algebra which is based on the process algebra ACP. The algebraic semantics is such that a P/T net and its term representation have the same operational behavior. As opposed to other approaches in the literature, the actions in the algebra do not correspond to the firing of a transition, but to the consumption or production of tokens. Equality of P/T nets can be determined in a purely equational way.
The second part of this paper extends the results to hierarchical P/T nets. It gives a compositional algebraic semantics for both their complete operational behavior and their high-level, observable behavior. By means of a non-trivial example, the Alternating-Bit Protocol, it is shown that the notions of abstraction and verification in the process algebra ACP can be used to verify in an equational way whether a hierarchical P/T net satisfies some algebraic specification of its observable behavior. Thus, the theory in this paper can be used to determine whether two hierarchical P/T nets have the same observable behavior. As an example, it is shown that the Alternating-Bit Protocol behaves as a simple one-place buffer. The theory forms a basis for a modular, top-down design methodology based on Petri nets.
For the entire collection see [Zbl 1492.68015].Symbolic timing deviceshttps://zbmath.org/1496.682152022-11-17T18:59:28.764376Z"Bergeron, Anne"https://zbmath.org/authors/?q=ai:bergeron.anneSummary: Timing devices such as timers, clocks, or stopwatches are used in a vast range of processes. In computer science, the need to specify, verify and implement real-time applications has given rise to many different formalizations of timed concurrent processes. This paper is an attempt to understand the underlying ideas of many of these approaches by focusing primarily on the timing devices. Starting with an abstract definition of a \textit{timer}, we use the formalism of synchronized products, as developed by Arnold and Nivat, to study different formal languages associated with the concurrent operation of \(n\) timers.
For the entire collection see [Zbl 1492.68008].A class of composable high level Petri netshttps://zbmath.org/1496.682162022-11-17T18:59:28.764376Z"Best, Eike"https://zbmath.org/authors/?q=ai:best.eike"Fleischhack, Hans"https://zbmath.org/authors/?q=ai:fleischhack.hans"Fraczak, Wojciech"https://zbmath.org/authors/?q=ai:fraczak.wojciech"Hopkins, Richard P."https://zbmath.org/authors/?q=ai:hopkins.richard-p"Klaudel, Hanna"https://zbmath.org/authors/?q=ai:klaudel.hanna"Pelz, Elisabeth"https://zbmath.org/authors/?q=ai:pelz.elisabethSummary: In this paper a high-level Petri net model called M-nets (for multilabeled nets) is developed. A distinctive feature of this model is that it allows not only vertical unfolding, as do most other high-level net models, but also horizontal composition -- in particular, synchronisation -- in a manner similar to process algebras such as CCS. This turns the set of M-nets into a domain whose composition operations satisfy various algebraic properties. The operations are shown to be consistent with unfolding in the sense that the unfolding of a composite high-level net is the composition of the unfoldings of its components. A companion paper shows how this algebra can be used to define the semantics of a concurrent programming language compositionally.
For the entire collection see [Zbl 1492.68015].A refined view of the box algebrahttps://zbmath.org/1496.682172022-11-17T18:59:28.764376Z"Best, Eike"https://zbmath.org/authors/?q=ai:best.eike"Koutny, Maciej"https://zbmath.org/authors/?q=ai:koutny.maciejSummary: This paper presents the operational semantics and the Petri net semantics of a fragment of the box algebra in tutorial style. For the operational semantics, inductive rules for marked expressions are given. For the net semantics, a general mechanism of refinement and relabelling is introduced, using which the connectives of the algebra are defined. A companion paper shows how this mechanism can be extended to handle recursion.
For the entire collection see [Zbl 1492.68015].The \texttt{link}-calculus for open multiparty interactionshttps://zbmath.org/1496.682182022-11-17T18:59:28.764376Z"Bodei, Chiara"https://zbmath.org/authors/?q=ai:bodei.chiara"Brodo, Linda"https://zbmath.org/authors/?q=ai:brodo.linda"Bruni, Roberto"https://zbmath.org/authors/?q=ai:bruni.robertoSummary: We present the \texttt{link}-calculus, an extension of \(\pi\)-calculus, that models interactions that are multiparty, i.e. that may involve more than two processes, mutually exchanging data. Communications are seen as chains of suitably combined links (which record the source and the target ends of each hop of interactions), each contributed by one party. Values are exchanged by means of message tuples, still provided by each party. We develop semantic theories and proof techniques for \texttt{link}-calculus and apply them in reasoning about complex distributed computing scenarios, where more than two participants need to synchronise in order to perform a task. In particular, we introduce the notion of linked bisimilarity in analogy with the early bisimilarity of the \(\pi\)-calculus. Differently from the \(\pi\)-calculus case, we can show that it is a congruence with respect to all the \texttt{link}-calculus operators and that it is also closed under name substitution.An efficient algorithm for the computation of stubborn sets of well formed Petri netshttps://zbmath.org/1496.682192022-11-17T18:59:28.764376Z"Brgan, Robert"https://zbmath.org/authors/?q=ai:brgan.robert"Poitrenaud, Denis"https://zbmath.org/authors/?q=ai:poitrenaud.denisSummary: The state space analysis of a Petri Net allows the validation of system properties, but its drawback is the explosion in time and space. The Stubborn Set method of A. Valmari permits an efficient reduction of the graph based on the parallelism expressed by the model. Colored Petri Nets introduce new complexities for the computation of Stubborn Sets due to the color management.
For Well Formed Colored Petri Nets, we present an efficient implementation of the Stubborn Set method based on solving constraint systems. These systems represent the dependencies between transitions induced by the Stubborn Set definition. They are constructed before the graph generation in a symbolic form, independently of the system parameters. These constraint systems are solved repeatedly during the graph construction.
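By way of contrast, the exhaustive state-space construction whose explosion the stubborn-set method mitigates can be sketched naively for ordinary P/T nets (all names are illustrative; this is not the paper's algorithm):

```python
from collections import deque

def reachability_graph(transitions, m0):
    """Naive breadth-first construction of the reachability graph of a
    P/T net. transitions maps a name to (pre, post), each a dict from
    place to arc weight; m0 is the initial marking as a dict."""
    def enabled(m, pre):
        # A transition is enabled if every input place holds enough tokens.
        return all(m.get(p, 0) >= w for p, w in pre.items())

    def fire(m, pre, post):
        # Consume tokens from input places, produce on output places.
        m2 = dict(m)
        for p, w in pre.items():
            m2[p] = m2.get(p, 0) - w
        for p, w in post.items():
            m2[p] = m2.get(p, 0) + w
        return m2

    key = lambda m: frozenset(m.items())  # hashable marking
    seen, edges, queue = {key(m0)}, [], deque([m0])
    while queue:
        m = queue.popleft()
        for t, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.append((key(m), t, key(m2)))
                if key(m2) not in seen:
                    seen.add(key(m2))
                    queue.append(m2)
    return seen, edges
```

Even on small nets this set of markings grows combinatorially with concurrency, which is what motivates partial-order reductions such as stubborn sets.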
For the entire collection see [Zbl 1492.68015].A notion of equivalence for stochastic Petri netshttps://zbmath.org/1496.682202022-11-17T18:59:28.764376Z"Buchholz, Peter"https://zbmath.org/authors/?q=ai:buchholz.peterSummary: Equivalence is a central concept for the qualitative analysis of dynamic systems. Several different notions of equivalence preserving qualitative properties of a system have appeared in the literature on Petri nets (PNs). If quantitative as well as qualitative aspects of a system are to be analysed, the class of stochastic Petri nets (SPNs) extends PNs by associating exponentially distributed delays with transitions. However, relations to define equivalence of systems according to quantitative aspects in a systematic way have not been published. This paper proposes a first approach to define quantitative equivalence of SPNs. It is shown that one of the presented relations is an extension of bisimulation equivalence for nets without time. Furthermore, quantitative equivalence is a congruence with respect to the parallel composition of SPNs as introduced in this paper. For the proposed quantitative equivalence an algorithm to compute the minimal equivalent realisation of an SPN on marking space level is presented.
For the entire collection see [Zbl 1492.68015].Petri nets, traces, and local model checkinghttps://zbmath.org/1496.682212022-11-17T18:59:28.764376Z"Cheng, Allan"https://zbmath.org/authors/?q=ai:cheng.allanSummary: It has been observed that the behavioural view of concurrent systems according to which all possible sequences of actions are relevant is too generous; not all sequences should be considered as likely behaviours. By taking progress fairness assumptions into account one obtains a more realistic behavioural view of the systems. In this paper we consider the problem of performing model checking relative to this behavioural view. We present a CTL-like logic which is interpreted over labelled 1-safe nets, a model of concurrent systems. It turns out that Mazurkiewicz trace theory provides a useful setting in which the progress fairness assumptions can be formalized in a natural way. We provide the first, to our knowledge, set of sound and complete tableau rules for a CTL-like logic interpreted under progress fairness assumptions.
For the entire collection see [Zbl 1492.68008].Modular state space analysis of coloured Petri netshttps://zbmath.org/1496.682222022-11-17T18:59:28.764376Z"Christensen, Søren"https://zbmath.org/authors/?q=ai:christensen.soren-torholm|christensen.soren-gram|christensen.soren.2"Petrucci, Laure"https://zbmath.org/authors/?q=ai:petrucci.laureSummary: State Space Analysis is one of the most developed analysis methods for Petri Nets. The main problem of state space analysis is the size of the state spaces. Several ways to reduce it have been proposed, but these cannot yet handle industrial-size systems.
Large models often consist of a set of modules. Local properties of each module can be checked separately, before checking the validity of the entire system. We want to avoid the construction of a single state space of the entire system.
When considering transition sharing, the behaviour of the total system can be captured by the state spaces of modules combined with a Synchronisation Graph. To verify that we do not lose information, we show how the full state space can be constructed.
We show how it is possible to determine the usual Petri Net properties without unfolding to the ordinary state space.
For the entire collection see [Zbl 1492.68015].On the decidability of process equivalences for the \(\pi\)-calculushttps://zbmath.org/1496.682232022-11-17T18:59:28.764376Z"Dam, Mads"https://zbmath.org/authors/?q=ai:dam.madsSummary: We present general results for showing process equivalences applied to the finite control fragment of the \(\pi\)-calculus decidable. Firstly, a Finite Reachability Theorem states that up to finite name spaces and up to a static normalisation procedure, the set of reachable agent expressions is finite. Secondly, a Boundedness Lemma shows that no potential computations are missed when name spaces are chosen large enough, but finite. We show how these results lead to decidability for a number of \(\pi\)-calculus equivalences such as strong or weak, late or early bisimulation equivalence. Furthermore, for strong late equivalence we show how our techniques can be used to adapt the well-known Paige-Tarjan algorithm. Strikingly, this results in a single exponential running time not much worse than the running time for, for instance, CCS. Our results considerably strengthen previous results on decidable equivalences for parameter-passing process calculi.
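The partition-refinement idea behind the Paige-Tarjan algorithm mentioned above can be illustrated by a naive refinement loop for strong bisimilarity on a finite labelled transition system (a sketch of the basic idea only, not the paper's adaptation and not the efficient \(O(m \log n)\) version):

```python
def bisimilarity_classes(states, trans):
    """Naive partition refinement computing the strong-bisimilarity
    classes of a finite LTS. trans maps (state, action) to the set of
    successor states under that action."""
    actions = {a for (_, a) in trans}
    partition = [set(states)]  # start with one block containing everything
    while True:
        def signature(s):
            # Record which blocks s can reach under each action.
            return frozenset((a, i)
                             for a in actions
                             for i, block in enumerate(partition)
                             if trans.get((s, a), set()) & block)
        # Split every block by signature; stop when nothing splits.
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            refined.extend(groups.values())
        if len(refined) == len(partition):
            return partition
        partition = refined
```

Two states end up in the same block exactly when no sequence of actions distinguishes them, which is the fixpoint the efficient algorithm also computes, just with a smarter choice of splitter blocks.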
For the entire collection see [Zbl 1492.68008].Causal semantics for BPP nets with silent moveshttps://zbmath.org/1496.682242022-11-17T18:59:28.764376Z"Gorrieri, Roberto"https://zbmath.org/authors/?q=ai:gorrieri.robertoSummary: BPP nets, a subclass of finite Place/Transition Petri nets, are equipped with some causal behavioral semantics, which are variations of fully-concurrent bisimilarity
[\textit{E. Best} et al., Acta Inf. 28, No. 3, 231--264 (1991; Zbl 0718.68034)],
inspired by weak
[\textit{R. Milner}, Communication and concurrency. New York etc.: Prentice Hall (1989; Zbl 0683.68008)]
or branching bisimulation
[\textit{R. J. van Glabbeek} and \textit{W. P. Weijland}, J. ACM 43, No. 3, 555--600 (1996; Zbl 0882.68085)]
on labeled transition systems. Then, we introduce novel, efficiently decidable, distributed semantics, inspired by team bisimulation
[the author, Acta Inf. 58, No. 5, 529--569 (2021; Zbl 07404758)]
and h-team bisimulation
[the author, Lect. Notes Comput. Sci. 12152, 153--175 (2020; Zbl 07580931)],
and show how they relate to these variants of fully-concurrent bisimulation.Directed homotopy in non-positively curved spaceshttps://zbmath.org/1496.682252022-11-17T18:59:28.764376Z"Goubault, Éric"https://zbmath.org/authors/?q=ai:goubault.eric"Mimram, Samuel"https://zbmath.org/authors/?q=ai:mimram.samuelThe notion of non-positively curved precubical set, which can be thought of as an algebraic analogue of the well-known one for metric spaces, captures the geometric properties of the precubical sets associated with concurrent programs using only mutexes, which are the most widely used synchronization primitives. A precubical set is non-positively curved if it is geometric, satisfies the cube property and satisfies the unique \(n\)-cube property for \(n\geq 3\). Using this, as well as categorical rewriting techniques, the authors are then able to show that directed and non-directed homotopy coincide for directed paths in these precubical sets. Finally, they study the geometric realization of precubical sets in metric spaces, to show that the conditions on precubical sets actually coincide with those for metric spaces. Since the category of metric spaces is not cocomplete, they are led to work with generalized metric spaces and study some of their properties.
Reviewer: Philippe Gaucher (Paris)Symbolic reachability graph and partial symmetrieshttps://zbmath.org/1496.682262022-11-17T18:59:28.764376Z"Haddad, S."https://zbmath.org/authors/?q=ai:haddad.serge"Ilié, J. M."https://zbmath.org/authors/?q=ai:ilie.jean-michel"Taghelit, M."https://zbmath.org/authors/?q=ai:taghelit.m"Zouari, B."https://zbmath.org/authors/?q=ai:zouari.belhassenSummary: The construction of symbolic reachability graphs is a useful technique for reducing state explosion in High-level Petri nets. Such a reduction is obtained by exploiting the symmetries of the whole net. In this paper, we extend this method to deal with partial symmetries. First, we introduce an example which illustrates the interest and principles of our method. Then we develop the general algorithm. Lastly, we enumerate the properties of this Extended Symbolic Reachability Graph, including the reachability equivalence.
For the entire collection see [Zbl 1492.68015].A calculus of countable broadcasting systemshttps://zbmath.org/1496.682272022-11-17T18:59:28.764376Z"Isobe, Yoshinao"https://zbmath.org/authors/?q=ai:isobe.yoshinao"Sato, Yutaka"https://zbmath.org/authors/?q=ai:sato.yutaka"Ohmaki, Kazuhito"https://zbmath.org/authors/?q=ai:ohmaki.kazuhitoSummary: In this paper we propose a process algebra named CCB (a Calculus of Countable Broadcasting Systems). We define an observational congruence relation in CCB after basic definitions of CCB, and give a sound and complete axiom system for the congruence relation of finite agents.
CCB is developed for analyzing a \textit{multi-agent model} with broadcast communication. The most important property of CCB is that a broadcaster of a message can know the number of receivers of the message after broadcasting. This property is not easily described in other process algebras.
The multi-agent model is useful for constructing extensible systems. A disadvantage of the multi-agent model is that agents must be designed very carefully, because unexpected behavior may arise from interactions between the agents. Therefore we want to analyze the behavior of the agents.
For the entire collection see [Zbl 1492.68008].Causality and true concurrency: a data-flow analysis of the pi-calculushttps://zbmath.org/1496.682282022-11-17T18:59:28.764376Z"Jagadeesan, Lalita Jategaonkar"https://zbmath.org/authors/?q=ai:jagadeesan.lalita-jategaonkar"Jagadeesan, Radha"https://zbmath.org/authors/?q=ai:jagadeesan.radhaFor the entire collection see [Zbl 1492.68008].High undecidability of weak bisimilarity for Petri netshttps://zbmath.org/1496.682292022-11-17T18:59:28.764376Z"Jančar, Petr"https://zbmath.org/authors/?q=ai:jancar.petrSummary: It is shown that the problem whether two labelled place/transition Petri nets (with initial markings) are weakly bisimilar is highly undecidable -- it resides at least at level \(\omega\) of the hyperarithmetical hierarchy; on the other hand it belongs to \(\Sigma^1_1\) (the first level of the analytical hierarchy). It contrasts with \(\Pi^0_1\)-completeness of the same problem for trace (language) equivalence. Relations to similar problems for the process algebra BPP (Basic Parallel Processes) are also discussed.
For the entire collection see [Zbl 0835.68002].A calculus of virtually timed ambientshttps://zbmath.org/1496.682302022-11-17T18:59:28.764376Z"Johnsen, Einar Broch"https://zbmath.org/authors/?q=ai:johnsen.einar-broch"Steffen, Martin"https://zbmath.org/authors/?q=ai:steffen.martin"Stumpf, Johanna Beate"https://zbmath.org/authors/?q=ai:stumpf.johanna-beateSummary: A virtual machine, which is a software layer representing an execution environment, can be placed inside another virtual machine. As virtual machines at every level in a location hierarchy compete with other processes for processing time, the computing power of a virtual machine depends on its position in this hierarchy and may change if the virtual machine moves. These effects of nested virtualization motivate the calculus of virtually timed ambients, a formal model of hierarchical locations for execution with explicit resource provisioning, introduced in this paper. Resource provisioning in this model is based on virtual time slices as a local resource. To reason about timed behavior in this setting, weak timed bisimulation for virtually timed ambients is defined as an extension of bisimulation for mobile ambients. We show that the equivalence of contextual bisimulation and reduction barbed congruence is preserved by weak timed bisimulation. The calculus of virtually timed ambients is illustrated by examples.
For the entire collection see [Zbl 1428.68025].Causal behaviours and netshttps://zbmath.org/1496.682312022-11-17T18:59:28.764376Z"Katoen, Joost-Pieter"https://zbmath.org/authors/?q=ai:katoen.joost-pieterSummary: Specification formalisms in which causality and independence of actions can be explicitly expressed are beneficial from a design point of view. The explicit presence (or absence) of a causal dependency between actions can be used effectively during the design. We consider a specification formalism in which causal relations between actions play a central role and provide a semantics in terms of (an extension of) labelled place/transition nets. The behaviour of nets is defined by labelled partially ordered sets.
For the entire collection see [Zbl 1492.68015].From coloured Petri nets to object Petri netshttps://zbmath.org/1496.682322022-11-17T18:59:28.764376Z"Lakos, Charles"https://zbmath.org/authors/?q=ai:lakos.charles-aSummary: This paper seeks to establish within a formal framework how Coloured Petri Nets can be enhanced to produce Object Petri Nets. It does so by defining a number of intermediate Petri Net formalisms and identifying the features introduced at each step of the development. Object Petri Nets support a complete integration of object-oriented concepts into Petri Nets, including inheritance and the associated polymorphism and dynamic binding. In particular, Object Petri Nets have a single class hierarchy which includes both token types and subnet types. Interaction between subnets can be either synchronous or asynchronous depending on whether the subnet is defined as a super place or a super transition. The single class hierarchy readily supports multiple levels of activity in the net and the generation and removal of tokens has been defined so that all subcomponents are simultaneously generated or removed, thus simplifying memory management. Despite this descriptive power, Object Petri Nets can be transformed into behaviourally equivalent Coloured Petri Nets, thus providing a basis for adapting existing analysis techniques.
For the entire collection see [Zbl 1492.68015].WQO dichotomy for 3-graphshttps://zbmath.org/1496.682332022-11-17T18:59:28.764376Z"Lasota, Sławomir"https://zbmath.org/authors/?q=ai:lasota.slawomir"Piórkowski, Radosław"https://zbmath.org/authors/?q=ai:piorkowski.radoslawSummary: We investigate data-enriched models, like Petri nets with data, where executability of a transition is conditioned by a relation between data values involved. Decidability status of various decision problems in such models may depend on the structure of data domain. According to the WQO Dichotomy Conjecture, if a data domain is homogeneous then it either exhibits a well quasi-order (in which case decidability follows by standard arguments), or essentially all the decision problems are undecidable for Petri nets over that data domain.
We confirm the conjecture for data domains being 3-graphs (graphs with 2-colored edges). On the technical level, this result is a significant step beyond known classification results for homogeneous structures.
For the entire collection see [Zbl 1386.68002].WQO dichotomy for 3-graphshttps://zbmath.org/1496.682342022-11-17T18:59:28.764376Z"Lasota, Sławomir"https://zbmath.org/authors/?q=ai:lasota.slawomir"Piórkowski, Radosław"https://zbmath.org/authors/?q=ai:piorkowski.radoslawSummary: We investigate data-enriched models, like Petri nets with data, where executability of a transition is conditioned by a relation between data values involved. Decidability status of various decision problems in such models may depend on the structure of data domain. According to the WQO Dichotomy Conjecture, if a data domain is homogeneous then it either exhibits a well quasi-order (in which case decidability follows by standard arguments), or essentially all the decision problems are undecidable for Petri nets over that data domain.
We confirm the conjecture for data domains being 3-graphs (graphs with 2-colored edges). On the technical level, this result is a significant step towards classification of homogeneous 3-graphs, going beyond known classification results for homogeneous structures.Reachability of scope-bounded multistack pushdown systemshttps://zbmath.org/1496.682352022-11-17T18:59:28.764376Z"La Torre, Salvatore"https://zbmath.org/authors/?q=ai:la-torre.salvatore"Napoli, Margherita"https://zbmath.org/authors/?q=ai:napoli.margherita"Parlato, Gennaro"https://zbmath.org/authors/?q=ai:parlato.gennaroSummary: A multi-stack pushdown system is a natural model of concurrent programs. The basic verification problems are undecidable and a common trend is to consider under-approximations of the system behaviors to gain decidability. In this paper, we restrict the semantics such that a symbol that is pushed onto a stack \(s\) can be popped only within a given number of contexts involving \(s\), i.e., we bound the scope (in terms of number of contexts) of matching push and pop transitions. This restriction permits runs with unboundedly many contexts even between matching push and pop transitions (for systems with at least three stacks). We call the resulting model a multi-stack pushdown system with scope-bounded matching relations (\textsc{SMpds}). We show that the configuration reachability and the location reachability problems for \textsc{SMpds} are both \textsc{Pspace}-complete, and that the set of the reachable configurations can be captured by a finite automaton.Handles and reachability analysis of free choice netshttps://zbmath.org/1496.682362022-11-17T18:59:28.764376Z"Lee, Dong-Ik"https://zbmath.org/authors/?q=ai:lee.dongik"Kumagai, Sadatoshi"https://zbmath.org/authors/?q=ai:kumagai.sadatoshi"Kodama, Shinzo"https://zbmath.org/authors/?q=ai:kodama.shinzoSummary: In this paper, we discuss on the reachability analysis of free choice nets based on structure theory associated to handles. 
The first half of the paper is devoted to clarifying the relationship between handles and deadlocks/traps, with emphasis on reachability analysis. In the second half of the paper, reachability criteria for free choice nets are discussed based on the structure theory. The reachability condition is expressed in terms of the token distribution at the initial or end marking in an appropriately reduced net associated with token-free deadlocks or traps. From the viewpoint of reachability, the classes of Petri nets discussed in the paper contain several important classes of Petri nets as special cases. The result is extended to extended free choice nets.
For the entire collection see [Zbl 1492.68015].An algebraic framework for developing and maintaining real-time systemshttps://zbmath.org/1496.682372022-11-17T18:59:28.764376Z"Leonard, Elizabeth I."https://zbmath.org/authors/?q=ai:leonard.elizabeth-i"Zwarico, Amy E."https://zbmath.org/authors/?q=ai:zwarico.amy-eSummary: In this paper we address the problem of safely replacing components of a real-time system, especially with faster ones. We isolate a class of real-time processes we call the \textit{nonpre-emptive processes}. These processes can be related by their speed (relative efficiency) as well as their relative degrees of nondeterminism. A process algebra of nonpreemptive processes, N-CCS, is presented that includes a language that expresses exactly the nonpre-emptive processes, testing preorders, and sound and complete axiomatizations of the preorders for finite N-CCS. The utility of this framework is demonstrated by an example.
For the entire collection see [Zbl 1492.68008].Complete inference systems for weak bisimulation equivalences in the \(\pi\)-calculushttps://zbmath.org/1496.682382022-11-17T18:59:28.764376Z"Lin, Huimin"https://zbmath.org/authors/?q=ai:lin.huiminSummary: Proof systems for weak bisimulation equivalences in the \(\pi\)-calculus are presented, and their soundness and completeness are shown. The proofs of the completeness results rely on the notion of \textit{symbolic bisimulation}. Two versions of \(\pi\)-calculus are investigated, one without and the other with the \textit{mismatch} construction. For each version of the calculus proof systems for both \textit{late} and \textit{early} weak bisimulation equivalences are studied. Thus there are four proof systems in all. These proof systems are related in a natural way: the proof systems for early and late equivalences differ only in the inference rule for the input prefix, while the proof system for the version of \(\pi\)-calculus with mismatch is obtained by adding a single inference rule for the version without it.
For the entire collection see [Zbl 0835.68002].Confluence of processes and systems of objectshttps://zbmath.org/1496.682392022-11-17T18:59:28.764376Z"Liu, Xinxin"https://zbmath.org/authors/?q=ai:liu.xinxin"Walker, David"https://zbmath.org/authors/?q=ai:walker.david-m|walker.david-w|walker.david-h|walker.david-t|walker.david-aSummary: An extension to the theory of confluence in the process calculus CCS is presented. The theory is generalized to an extension of the \(\pi\)-calculus. This calculus is used to provide semantics by translation for a parallel object-oriented programming language. The confluence theory is applied to prove the indistinguishability in an arbitrary program context of two class definitions which generate binary tree data structures one of which allows concurrent operations.
For the entire collection see [Zbl 0835.68002].Performance bounds for stochastic timed Petri netshttps://zbmath.org/1496.682402022-11-17T18:59:28.764376Z"Liu, Zhen"https://zbmath.org/authors/?q=ai:liu.zhen.2|liu.zhen|liu.zhen.3|liu.zhen.1Summary: Stochastic timed Petri nets are a useful tool in performance analysis of concurrent systems such as parallel computers, communication networks and flexible manufacturing systems. In general, performance measures of stochastic timed Petri nets are difficult to obtain for problems of practical size. In this paper, we provide a method to efficiently compute upper and lower bounds for the throughputs and mean token numbers in general Markovian timed Petri nets. Our approach is based on the uniformization technique and linear programming.
For the entire collection see [Zbl 1492.68015].A parametric framework for reversible \(\pi\)-calculihttps://zbmath.org/1496.682412022-11-17T18:59:28.764376Z"Medić, Doriana"https://zbmath.org/authors/?q=ai:medic.doriana"Mezzina, Claudio Antares"https://zbmath.org/authors/?q=ai:mezzina.claudio-antares"Phillips, Iain"https://zbmath.org/authors/?q=ai:phillips.iain-w"Yoshida, Nobuko"https://zbmath.org/authors/?q=ai:yoshida.nobukoSummary: This paper presents a study of causality in a reversible, concurrent setting. There exist various notions of causality in \(\pi\)-calculus, which differ in the treatment of parallel extrusions of the same name. Hence, by using a parametric way of bookkeeping the order and the dependencies among extruders it is possible to map different causal semantics into the same framework. Starting from this simple observation, we present a uniform framework for reversible \(\pi\)-calculi that is parametric with respect to a data structure that stores information about the extrusion of a name. Different data structures yield different approaches to the parallel extrusion problem. We map three well-known causal semantics into our framework. We prove causal-consistency for the three instances of our framework. Furthermore, we prove a causal correspondence between the appropriate instances of the framework and the Boreale-Sangiorgi semantics and an operational correspondence with the reversible \(\pi\)-calculus causal semantics.Statecharts, transition structures and transformationshttps://zbmath.org/1496.682422022-11-17T18:59:28.764376Z"Peron, Adriano"https://zbmath.org/authors/?q=ai:peron.adrianoSummary: Statecharts are state-transition machines endowed with hierarchy on states and parallelism on transitions. It is shown that a statechart is described by a pair of relations over transitions (a transition structure), the former describing causality and the other describing a notion of asymmetric independence. 
A statechart can be effectively constructed from its transition structure. Transition structures corresponding to a subclass of Statecharts are characterized. Natural notions of morphisms among transition structures make it possible to define classes of statechart transformations which preserve behaviour.
For the entire collection see [Zbl 0835.68002].Distributability of mobile ambientshttps://zbmath.org/1496.682432022-11-17T18:59:28.764376Z"Peters, Kirstin"https://zbmath.org/authors/?q=ai:peters.kirstin"Nestmann, Uwe"https://zbmath.org/authors/?q=ai:nestmann.uweSummary: Modern society is dependent on distributed software systems and to verify them different modelling languages such as mobile ambients were developed. They focus on mobility by allowing both a dynamic network topology as well as the movement of code within the network. To analyse the quality of mobile ambients as a good foundational model for distributed computation, we analyse the level of synchronisation between distributed components that they can express. Therefore, we rely on earlier established synchronisation patterns. It turns out that mobile ambients are not fully distributed, because they can express enough synchronisation to express a synchronisation pattern called \(\mathsf{M}\). However, they can express strictly less synchronisation than the pi-calculus. For this reason, we can show that there is no good and distributability-preserving encoding from the pi-calculus into mobile ambients and also no such encoding from mobile ambients into the join-calculus, i.e., the expressive power of mobile ambients is in between these languages. Finally, we discuss how these results can be used to obtain a fully distributed variant of mobile ambients and present one example. 
Such a fully distributed variant of mobile ambients is a good foundation for distributed computation.\(\pi\)I: a symmetric calculus based on internal mobilityhttps://zbmath.org/1496.682442022-11-17T18:59:28.764376Z"Sangiorgi, Davide"https://zbmath.org/authors/?q=ai:sangiorgi.davideFor the entire collection see [Zbl 0835.68002].On the category of Petri net computationshttps://zbmath.org/1496.682452022-11-17T18:59:28.764376Z"Sassone, Vladimiro"https://zbmath.org/authors/?q=ai:sassone.vladimiroSummary: We introduce the notion of \textit{strongly concatenable process} as a refinement of concatenable processes
[\textit{P. Degano} et al., in: Proceedings of the 4th annual symposium on logic in computer science, LICS'89. Los Alamitos, CA: IEEE Computer Society. 175--185 (1989; Zbl 0722.68085)]
which can be expressed axiomatically via a functor \(\mathcal{Q}[\_]\) from the category of Petri nets to an appropriate category of symmetric strict monoidal categories, in the precise sense that, for each net \(N\), the strongly concatenable processes of \(N\) are isomorphic to the arrows of \(\mathcal{Q}[N]\). In addition, we identify a \textit{coreflection} right adjoint to \(\mathcal{Q}[\_]\) and characterize its \textit{replete image}, thus yielding an axiomatization of the category of net computations.
For the entire collection see [Zbl 0835.68002].Parameterized reachability trees for algebraic Petri netshttps://zbmath.org/1496.682462022-11-17T18:59:28.764376Z"Schmidt, Karsten"https://zbmath.org/authors/?q=ai:schmidt.karsten|schmidt.karsten.1|wolf.karstenSummary: Parameterized reachability trees have been proposed by M. Lindquist for predicate/transition nets. We discuss the application of this concept to algebraic nets. For this purpose a modification of several definitions is necessary due to the different net descriptions, transition rules and theoretical backgrounds. We therefore present the concept from the ground up for algebraic nets. Furthermore, we discuss the combination of parameterized reachability analysis with the well-known stubborn set method.
For the entire collection see [Zbl 1492.68015].Functional equivalences of Petri netshttps://zbmath.org/1496.682472022-11-17T18:59:28.764376Z"Schreiber, Gerlinde"https://zbmath.org/authors/?q=ai:schreiber.gerlindeSummary: Equivalence notions capture interesting behavioural properties of a system. Functional equivalences of Petri nets are state-based equivalence notions that allow one to compare systems that differ in their level of action detail. We introduce a functional equivalence on Petri nets suited for hierarchical modular system design and investigate correct transformation rules.
For the entire collection see [Zbl 1492.68015].Solving systems of bilinear equations for transition rate reconstructionhttps://zbmath.org/1496.682482022-11-17T18:59:28.764376Z"Soltanieh, Amin"https://zbmath.org/authors/?q=ai:soltanieh.amin"Siegle, Markus"https://zbmath.org/authors/?q=ai:siegle.markusSummary: Compositional models, specified with the help of a Markovian Stochastic Process Algebra (SPA), are widely used in performance and dependability modelling. The paper considers the problem of transition rate reconstruction: Given two SPA components with unknown rates, and given their combined flat model with fixed rates, the task is to reconstruct the rates in the components. This problem occurs frequently during so-called model repair, if a certain subset of transition rates of the flat model needs to be changed in order to satisfy some given requirement. It is important to have a structured approach to decide whether or not the rate reconstruction, satisfying the desired low-level model changes, is possible or not. In order to realize such a reconstruction, every combined model transition is transformed into an equation, resulting -- for each action type -- in a system of bilinear equations. If the system of equations meets a consistency condition, rate reconstruction is indeed possible. We identify a class of SPA systems for which solving the system of equations is not necessary, since by checking a set of simple conditions we can check the consistency of the system of equations. Furthermore, for general models outside this class, an iterative algorithm for solving the system of equations efficiently is proposed.
For the entire collection see [Zbl 1489.68021].Markov regenerative stochastic Petri nets with age type general transitionshttps://zbmath.org/1496.682492022-11-17T18:59:28.764376Z"Telek, Miklós"https://zbmath.org/authors/?q=ai:telek.miklos"Bobbio, Andrea"https://zbmath.org/authors/?q=ai:bobbio.andreaSummary: Markov Regenerative Stochastic Petri Nets (\textit{MRSPN}) have been recently introduced in the literature with the aim of combining exponential and non-exponential firing times into a single model. However, the realizations of the general \textit{MRSPN} model, so far discussed, require that at most a single non-exponential transition is enabled in each marking and that its associated memory policy is of enabling type. The present paper extends the previous models by allowing the memory policy to be of age type and by allowing multiple general transitions to be simultaneously enabled, provided that their enabling intervals do not overlap. A final, fully developed example, which could not have been treated in previous formulations, derives closed-form expressions for the transient state probabilities of a queueing system with \textit{preemptive resume (prs)} service policy.
For the entire collection see [Zbl 1492.68015].Unique solutions of contractions, CCS, and their HOL formalisationhttps://zbmath.org/1496.682502022-11-17T18:59:28.764376Z"Tian, Chun"https://zbmath.org/authors/?q=ai:tian.chun"Sangiorgi, Davide"https://zbmath.org/authors/?q=ai:sangiorgi.davideSummary: The unique solution of contractions is a proof technique for (weak) bisimilarity that overcomes certain syntactic limitations of Milner's ``unique solution of equations'' theorem. This paper presents an overview of a comprehensive formalisation of Milner's Calculus of Communicating Systems (CCS) in the HOL theorem prover (HOL4), with a focus towards the theory of unique solutions of equations and contractions. The formalisation consists of about 24,000 lines (1MB) of code in total. Some refinements of the ``unique solution of contractions'' theory itself are obtained. In particular we remove the constraints on summation, which must be guarded, by moving from contraction to rooted contraction. We prove the ``unique solution of rooted contractions'' theorem and show that rooted contraction is the coarsest precongruence contained in the contraction preorder.Timed processes of timed Petri netshttps://zbmath.org/1496.682512022-11-17T18:59:28.764376Z"Valero, Valentín"https://zbmath.org/authors/?q=ai:valero.valentin"De Frutos, David"https://zbmath.org/authors/?q=ai:de-frutos.david"Cuartero, Fernando"https://zbmath.org/authors/?q=ai:cuartero.fernandoSummary: Processes of Petri nets are usually represented by occurrence nets. In this paper we extend this notion to Timed Petri Nets maintaining the structure of timed processes as occurrence nets, but adding time information to the tokens. In order to do that we need first to define formally the model of Timed Petri Nets that we consider, and then we relate timed step sequences with timed processes, obtaining similar results to those for the classical theory of ordinary (non-timed) processes.
For the entire collection see [Zbl 1492.68015].Generated models and the \(\omega\)-rule: the nondeterministic casehttps://zbmath.org/1496.682522022-11-17T18:59:28.764376Z"Walicki, Michal"https://zbmath.org/authors/?q=ai:walicki.michal"Meldal, Sigurd"https://zbmath.org/authors/?q=ai:meldal.sigurdSummary: A language for specifying nondeterministic operations which generalizes the equational specification language is introduced. Then, various notions of generated multimodels are discussed and sufficient conditions for the existence of quasi-initial semantics of nondeterministic specifications are given. Two calculi are introduced: NEQ and NIP. The former is sound and complete with respect to the class of all multimodels. The latter is an extension of the former with the \(\omega\)-rule. It is sound and complete with respect to one of the classes of the generated multimodels. The calculi reduce to the respective deterministic calculi whenever the specification involves only deterministic operations.
For the entire collection see [Zbl 0835.68002].Probabilistic model counting with short XORshttps://zbmath.org/1496.682532022-11-17T18:59:28.764376Z"Achlioptas, Dimitris"https://zbmath.org/authors/?q=ai:achlioptas.dimitris"Theodoropoulos, Panos"https://zbmath.org/authors/?q=ai:theodoropoulos.panosSummary: The idea of counting the number of satisfying truth assignments (models) of a formula by adding random parity constraints can be traced back to the seminal work of Valiant and Vazirani, showing that NP is as easy as detecting unique solutions. While theoretically sound, the random parity constraints in that construction have the following drawback: each constraint, on average, involves half of all variables. As a result, the branching factor associated with searching for models that also satisfy the parity constraints quickly gets out of hand. In this work we prove that one can work with much shorter parity constraints and still get rigorous mathematical guarantees, especially when the number of models is large so that many constraints need to be added. Our work is based on the realization that the essential feature for random systems of parity constraints to be useful in probabilistic model counting is that the geometry of their set of solutions resembles an error-correcting code.
For the entire collection see [Zbl 1368.68008].Automata-based model counting for string constraintshttps://zbmath.org/1496.682542022-11-17T18:59:28.764376Z"Aydin, Abdulbaki"https://zbmath.org/authors/?q=ai:aydin.abdulbaki"Bang, Lucas"https://zbmath.org/authors/?q=ai:bang.lucas"Bultan, Tevfik"https://zbmath.org/authors/?q=ai:bultan.tevfikSummary: Most common vulnerabilities in Web applications are due to string manipulation errors in input validation and sanitization code. String constraint solvers are essential components of program analysis techniques for detecting and repairing vulnerabilities that are due to string manipulation errors. For quantitative and probabilistic program analyses, checking the satisfiability of a constraint is not sufficient, and it is necessary to count the number of solutions. In this paper, we present a constraint solver that, given a string constraint, (1) constructs an automaton that accepts all solutions that satisfy the constraint, (2) generates a function that, given a length bound, gives the total number of solutions within that bound. Our approach relies on the observation that, using an automata-based constraint representation, model counting reduces to path counting, which can be solved precisely. We demonstrate the effectiveness of our approach on a large set of string constraints extracted from
real-world web applications.
For the entire collection see [Zbl 1342.68028].The power of the combined basic linear programming and affine relaxation for promise constraint satisfaction problemshttps://zbmath.org/1496.682552022-11-17T18:59:28.764376Z"Brakensiek, Joshua"https://zbmath.org/authors/?q=ai:brakensiek.joshua"Guruswami, Venkatesan"https://zbmath.org/authors/?q=ai:guruswami.venkatesan"Wrochna, Marcin"https://zbmath.org/authors/?q=ai:wrochna.marcin"Živný, Stanislav"https://zbmath.org/authors/?q=ai:zivny.stanislavBackdoor treewidth for SAThttps://zbmath.org/1496.682562022-11-17T18:59:28.764376Z"Ganian, Robert"https://zbmath.org/authors/?q=ai:ganian.robert"Ramanujan, M. S."https://zbmath.org/authors/?q=ai:ramanujan.m-s.1"Szeider, Stefan"https://zbmath.org/authors/?q=ai:szeider.stefanSummary: A strong backdoor in a CNF formula is a set of variables such that each possible instantiation of these variables moves the formula into a tractable class. The algorithmic problem of finding a strong backdoor has been the subject of intensive study, mostly within the parameterized complexity framework. Results to date focused primarily on backdoors of small size. In this paper we propose a new approach for algorithmically exploiting strong backdoors for SAT: instead of focusing on small backdoors, we focus on backdoors with certain structural properties. In particular, we consider backdoors that have a certain tree-like structure, formally captured by the notion of backdoor treewidth.
First, we provide a fixed-parameter algorithm for SAT parameterized by the backdoor treewidth w.r.t. the fundamental tractable classes Horn, Anti-Horn, and 2CNF. Second, we consider the more general setting where the backdoor decomposes the instance into components belonging to different tractable classes, albeit focusing on backdoors of treewidth 1 (i.e., acyclic backdoors). We give polynomial-time algorithms for SAT and \#SAT for instances that admit such an acyclic backdoor.
For the entire collection see [Zbl 1368.68008].New width parameters for model countinghttps://zbmath.org/1496.682572022-11-17T18:59:28.764376Z"Ganian, Robert"https://zbmath.org/authors/?q=ai:ganian.robert"Szeider, Stefan"https://zbmath.org/authors/?q=ai:szeider.stefanSummary: We study the parameterized complexity of the propositional model counting problem \#SAT for CNF formulas. As the parameter we consider the treewidth of the following two graphs associated with CNF formulas: the consensus graph and the conflict graph. Both graphs have as vertices the clauses of the formula; in the consensus graph two clauses are adjacent if they do not contain a complementary pair of literals, while in the conflict graph two clauses are adjacent if they do contain a complementary pair of literals. We show that \#SAT is fixed-parameter tractable for the treewidth of the consensus graph but W[1]-hard for the treewidth of the conflict graph. We also compare the new parameters with known parameters under which \#SAT is fixed-parameter tractable.
For the entire collection see [Zbl 1368.68008].Improving MCS enumeration via cachinghttps://zbmath.org/1496.682582022-11-17T18:59:28.764376Z"Previti, Alessandro"https://zbmath.org/authors/?q=ai:previti.alessandro"Mencía, Carlos"https://zbmath.org/authors/?q=ai:mencia.carlos"Järvisalo, Matti"https://zbmath.org/authors/?q=ai:jarvisalo.matti"Marques-Silva, Joao"https://zbmath.org/authors/?q=ai:marques-silva.joao-pSummary: Enumeration of minimal correction sets (MCSes) of conjunctive normal form formulas is a central and highly intractable problem in infeasibility analysis of constraint systems. Often complete enumeration of MCSes is impossible due to both high computational cost and worst-case exponential number of MCSes. In such cases partial enumeration is sought for, finding applications in various domains, including axiom pinpointing in description logics among others. In this work we propose caching as a means of further improving the practical efficiency of current MCS enumeration approaches, and show the potential of caching via an empirical evaluation.
For the entire collection see [Zbl 1368.68008].PPSZ for \(k\geq 5\): more is betterhttps://zbmath.org/1496.682592022-11-17T18:59:28.764376Z"Scheder, Dominik"https://zbmath.org/authors/?q=ai:scheder.dominikSplit contraction: the untold storyhttps://zbmath.org/1496.682602022-11-17T18:59:28.764376Z"Agrawal, Akanksha"https://zbmath.org/authors/?q=ai:agrawal.akanksha"Lokshtanov, Daniel"https://zbmath.org/authors/?q=ai:lokshtanov.daniel"Saurabh, Saket"https://zbmath.org/authors/?q=ai:saurabh.saket"Zehavi, Meirav"https://zbmath.org/authors/?q=ai:zehavi.meiravAsymptotic connectedness of random interval graphs in a one dimensional data delivery problemhttps://zbmath.org/1496.682612022-11-17T18:59:28.764376Z"Andrade Sernas, Caleb Erubiel"https://zbmath.org/authors/?q=ai:andrade-sernas.caleb-erubiel"Calvillo Vives, Gilberto"https://zbmath.org/authors/?q=ai:calvillo-vives.gilberto"Manrique Mirón, Paulo Cesar"https://zbmath.org/authors/?q=ai:manrique-miron.paulo-cesar"Treviño Aguilar, Erick"https://zbmath.org/authors/?q=ai:aguilar.erick-trevinoSummary: In this work we present a probabilistic analysis of random interval graphs associated with randomly generated instances of the Data Delivery on a Line Problem (DDLP)
[\textit{J. Chalopin} et al., Lect. Notes Comput. Sci. 8573, 423--434 (2014; Zbl 1411.68157)].
Random Interval Graphs have been previously studied by
\textit{E. R. Scheinerman} [Discrete Math. 82, No. 3, 287--302 (1990; Zbl 0699.05051)].
However, his model and ours provide different ways to generate the graphs. Our model is defined by how the agents in the DDLP may move, thus its importance goes beyond the intrinsic interest of random graphs and has to do with the complexity of a combinatorial optimization problem which has been proven to be NP-complete
[Zbl 1411.68157].
We study the relationship between solvability of a random instance of the DDLP with respect to its associated interval graph connectedness. This relationship is important because through probabilistic analysis we prove that despite the NP-completeness of DDLP, there are classes of instances that can be solved polynomially.
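Connectedness of an interval graph, the property the analysis above relates to DDLP solvability, can be tested with a single sweep over the intervals; this is a standard technique and only a sketch of how one might test randomly generated instances, not the authors' code.

```python
def interval_graph_connected(intervals):
    """An interval graph is connected iff, sweeping intervals in order of
    left endpoint, every interval starts no later than the furthest right
    endpoint seen so far (otherwise a gap separates two components)."""
    intervals = sorted(intervals)
    if not intervals:
        return True
    reach = intervals[0][1]
    for left, right in intervals[1:]:
        if left > reach:
            return False
        reach = max(reach, right)
    return True
```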
For the entire collection see [Zbl 1478.60006].New results on routing via matchings on graphshttps://zbmath.org/1496.682622022-11-17T18:59:28.764376Z"Banerjee, Indranil"https://zbmath.org/authors/?q=ai:banerjee.indranil"Richards, Dana"https://zbmath.org/authors/?q=ai:richards.dana-sSummary: In this paper we present some new complexity results on the routing time of a graph under the routing via matching model. This is a parallel routing model which was introduced by
\textit{N. Alon} et al. [SIAM J. Discrete Math. 7, No. 3, 513--530 (1994; Zbl 0812.05029)].
The model can be viewed as a communication scheme on a distributed network. The nodes in the network can communicate via matchings (a step), where a node exchanges data (pebbles) with its matched partner. Let \(G\) be a connected graph with vertices labeled from \(\{1,\cdots,n\}\) and the destination vertices of the pebbles are given by a permutation \(\pi\). The problem is to find a minimum step routing scheme for the input permutation \(\pi\). This is denoted as the routing time \(rt(G,\pi)\) of \(G\) given \(\pi\). In this paper we characterize the complexity of some known problems under the routing via matching model and discuss their relationship to graph connectivity and clique number. We also introduce some new problems in this domain, which may be of independent interest.
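On a path graph the model described above specialises to odd-even transposition sorting: each step is a matching of adjacent edges, and matched pebbles may swap. The sketch below (our illustration, not from the paper) routes a permutation this way; it yields a valid scheme of at most \(n\) steps, not necessarily the minimum routing time \(rt(G,\pi)\).

```python
def route_on_path(perm):
    """Route pebble i (destination position of value i) on the path
    0-1-...-(n-1) by alternating the two matchings of adjacent pairs
    (odd-even transposition); swap a matched pair iff its pebbles are
    inverted. Returns the number of matching steps used."""
    pebbles = list(perm)
    n = len(pebbles)
    goal = sorted(pebbles)
    steps = 0
    while pebbles != goal:
        for i in range(steps % 2, n - 1, 2):  # one matching on the path
            if pebbles[i] > pebbles[i + 1]:
                pebbles[i], pebbles[i + 1] = pebbles[i + 1], pebbles[i]
        steps += 1
    return steps
```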
For the entire collection see [Zbl 1369.68029].Token jumping in minor-closed classeshttps://zbmath.org/1496.682632022-11-17T18:59:28.764376Z"Bousquet, Nicolas"https://zbmath.org/authors/?q=ai:bousquet.nicolas"Mary, Arnaud"https://zbmath.org/authors/?q=ai:mary.arnaud"Parreau, Aline"https://zbmath.org/authors/?q=ai:parreau.alineSummary: Given two \(k\)-independent sets \(I\) and \(J\) of a graph \(G\), one can ask if it is possible to transform the one into the other in such a way that, at any step, we replace one vertex of the current independent set by another while keeping the property of being independent. Deciding this problem, known as the Token Jumping (TJ) reconfiguration problem, is PSPACE-complete even on planar graphs.
\textit{T. Ito} et al. proved in [Lect. Notes Comput. Sci. 8889, 208--219 (2014; Zbl 1432.68195); Lect. Notes Comput. Sci. 8402, 341--351 (2014; Zbl 1405.68141)]
that the problem is FPT parameterized by \(k\) if the input graph is \(K_{3,\ell}\)-free.
We prove that the result of Ito et al. can be extended to any \(K_{\ell,\ell}\)-free graphs. In other words, if \(G\) is a \(K_{\ell,\ell}\)-free graph, then it is possible to decide in FPT-time if \(I\) can be transformed into \(J\). As a by-product, the TJ-reconfiguration problem is FPT in many well-known classes of graphs, such as any minor-free class.
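For tiny graphs, TJ-reachability between two independent sets can be decided by exhaustive search over the reconfiguration graph; this brute-force sketch (ours, purely illustrative, since the paper's contribution is the FPT algorithm, not this search) makes the move relation concrete.

```python
from itertools import combinations

def tj_reachable(n, edges, start, target):
    """Exhaustive search over independent sets of a graph on vertices
    0..n-1 under Token Jumping: one move replaces a single vertex of the
    current independent set by any non-member, keeping independence."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    start, target = frozenset(start), frozenset(target)
    seen, frontier = {start}, [start]
    while frontier:
        current = frontier.pop()
        if current == target:
            return True
        for out in current:
            for into in set(range(n)) - current:
                nxt = frozenset(current - {out} | {into})
                if nxt not in seen and independent(nxt):
                    seen.add(nxt)
                    frontier.append(nxt)
    return False
```

On the 4-cycle, for instance, the two maximum independent sets cannot reach each other, since every single-vertex replacement breaks independence.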
For the entire collection see [Zbl 1369.68029].A generic framework for computing parameters of sequence-based dynamic graphshttps://zbmath.org/1496.682642022-11-17T18:59:28.764376Z"Casteigts, Arnaud"https://zbmath.org/authors/?q=ai:casteigts.arnaud"Klasing, Ralf"https://zbmath.org/authors/?q=ai:klasing.ralf"Neggaz, Yessin M."https://zbmath.org/authors/?q=ai:neggaz.yessin-m"Peters, Joseph G."https://zbmath.org/authors/?q=ai:peters.joseph-gSummary: We presented in [Lect. Notes Comput. Sci. 9079, 89--100 (2015; Zbl 1459.68153)]
an algorithm for computing a parameter called \(T\)-interval connectivity of dynamic graphs which are given as a sequence of static graphs. This algorithm operates at a high level, manipulating the graphs in the sequence as atomic elements with two types of operations: a composition operation and a test operation. The algorithm is optimal in the sense that it uses only \(O(\delta)\) composition and test operations, where \(\delta\) is the length of the sequence. In this paper, we generalize this framework to use various composition and test operations, which allows us to compute other parameters using the same high-level strategy that we used for \(T\)-interval connectivity. We illustrate the framework through the study of three minimization problems which refer to various properties of dynamic graphs, namely Bounded-Realization-of-the-Footprint, Temporal-Connectivity, and Round-Trip-Temporal-Diameter.
For the entire collection see [Zbl 1381.68003].On the longest spanning tree with neighborhoodshttps://zbmath.org/1496.682652022-11-17T18:59:28.764376Z"Chen, Ke"https://zbmath.org/authors/?q=ai:chen.ke"Dumitrescu, Adrian"https://zbmath.org/authors/?q=ai:dumitrescu.adrianSAT-encodings for special treewidth and pathwidthhttps://zbmath.org/1496.682662022-11-17T18:59:28.764376Z"Lodha, Neha"https://zbmath.org/authors/?q=ai:lodha.neha"Ordyniak, Sebastian"https://zbmath.org/authors/?q=ai:ordyniak.sebastian"Szeider, Stefan"https://zbmath.org/authors/?q=ai:szeider.stefanSummary: Decomposition width parameters such as treewidth provide a measure of the complexity of a graph. Finding a decomposition of smallest width is itself NP-hard but lends itself to a SAT-based solution. Previous work on treewidth, branchwidth and clique-width indicates that identifying a suitable characterization of the considered decomposition method is key for a practically feasible SAT-encoding.
In this paper we study SAT-encodings for the decomposition width parameters special treewidth and pathwidth. In both cases we develop SAT-encodings based on two different characterizations. In particular, we develop two novel characterizations for special treewidth, based on partitions and elimination orderings, and we evaluate the resulting SAT-encodings empirically.
For the entire collection see [Zbl 1368.68008].On the complexity of minimum cardinality maximal uniquely restricted matching in graphshttps://zbmath.org/1496.682672022-11-17T18:59:28.764376Z"Panda, B. S."https://zbmath.org/authors/?q=ai:panda.bhawani-sankar"Pandey, Arti"https://zbmath.org/authors/?q=ai:pandey.artiSummary: For a graph \(G=(V,E)\), a set \(M\subseteq E\) is called a matching in \(G\) if no two edges in \(M\) share a common vertex. A matching \(M\) in \(G\) is called a uniquely restricted matching in \(G\) if there is no other matching of the same cardinality in the graph induced on the vertices saturated by \(M\). A uniquely restricted matching \(M\) is called maximal if \(M\) is not properly contained in any uniquely restricted matching of \(G\). The Minimum Maximal Uniquely Restricted Matching (Min-UR-Matching) problem is the problem of finding a minimum cardinality maximal uniquely restricted matching. In this paper, we initiate the study of the Min-UR-Matching problem. We prove that the decision version of the Min-UR-Matching problem is NP-complete for general graphs. In particular, this answers an open question posed by
\textit{S. T. Hedetniemi} [AKCE Int. J. Graphs Comb. 3, No. 1, 1--37 (2006; Zbl 1104.05056)]
regarding the complexity of the Min-UR-Matching problem. We also prove that this problem remains NP-complete for bipartite graphs with maximum degree 7. Next, we show that the Min-UR-Matching problem for bipartite graphs cannot be approximated within a factor of \(n^{1-\epsilon}\) for any constant \(\epsilon >0\) unless \(\mathrm{P}=\mathrm{NP}\). Finally, we prove that the Min-UR-Matching problem is linear-time solvable for chain graphs, a subclass of bipartite graphs.
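The defining property can be checked directly on small graphs by exhaustive enumeration: a matching is uniquely restricted exactly when the subgraph induced on its saturated vertices admits no other matching of the same cardinality. The brute force below is illustrative only (exponential time, unrelated to the paper's complexity results), and all names in it are ours:

```python
from itertools import combinations

def is_matching(S):
    """True if no two edges in S share an endpoint."""
    used = set()
    for u, v in S:
        if u in used or v in used:
            return False
        used.update((u, v))
    return True

def is_uniquely_restricted(edges, M):
    """M is uniquely restricted iff the subgraph induced on the
    vertices saturated by M has no other matching of size |M|."""
    sat = {v for e in M for v in e}
    induced = [e for e in edges if e[0] in sat and e[1] in sat]
    count = sum(1 for S in combinations(induced, len(M)) if is_matching(S))
    return count == 1
```

On the path 0-1-2-3 the matching \(\{(0,1),(2,3)\}\) is uniquely restricted, but on the 4-cycle it is not, since \(\{(1,2),(3,0)\}\) saturates the same vertices.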
For the entire collection see [Zbl 1369.68008].A separator-based method for generating weakly chordal graphshttps://zbmath.org/1496.682682022-11-17T18:59:28.764376Z"Rahman, Md. Zamilur"https://zbmath.org/authors/?q=ai:rahman.md-zamilur"Mukhopadhyay, Asish"https://zbmath.org/authors/?q=ai:mukhopadhyay.asish-kumar"Aneja, Yash. P."https://zbmath.org/authors/?q=ai:aneja.yash-pMinimum label \(s\)-\(t\) cut has large integrality gapshttps://zbmath.org/1496.682692022-11-17T18:59:28.764376Z"Zhang, Peng"https://zbmath.org/authors/?q=ai:zhang.peng|zhang.peng.1|zhang.peng.2"Tang, Linqing"https://zbmath.org/authors/?q=ai:tang.linqingSummary: The Min Label \(s\)-\(t\) Cut problem is a fundamental problem in combinatorial optimization. It arises in many real-world applications, for example, information security and computer networks. We study two linear programs for Min Label \(s\)-\(t\) Cut, proving that both of them have large integrality gaps, namely, \(\Omega(m)\) and \(\Omega(m^{1/3 - \epsilon})\) for the respective linear programs, where \(m\) is the number of edges in the input graph of the problem and \(\epsilon > 0\) is any arbitrarily small constant. As Min Label \(s\)-\(t\) Cut is NP-hard and linear programming is a main technique for designing approximation algorithms, our results give a negative answer to the hope of designing better approximation algorithms for Min Label \(s\)-\(t\) Cut that rely purely on linear programming.Learning from positive and unlabeled data: a surveyhttps://zbmath.org/1496.682702022-11-17T18:59:28.764376Z"Bekker, Jessa"https://zbmath.org/authors/?q=ai:bekker.jessa"Davis, Jesse"https://zbmath.org/authors/?q=ai:davis.jesseSummary: Learning from positive and unlabeled data or PU learning is the setting where a learner only has access to positive examples and unlabeled data. The assumption is that the unlabeled data can contain both positive and negative examples. 
This setting has attracted increasing interest within the machine learning literature as this type of data naturally arises in applications such as medical diagnosis and knowledge base completion. This article provides a survey of the current state of the art in PU learning. It proposes seven key research questions that commonly arise in this field and provides a broad overview of how the field has tried to address them.BIWE: boosting-based iterative weighted ensemble classificationhttps://zbmath.org/1496.682712022-11-17T18:59:28.764376Z"Du, Shiyu"https://zbmath.org/authors/?q=ai:du.shiyu"Han, Meng"https://zbmath.org/authors/?q=ai:han.meng"Shen, Mingyao"https://zbmath.org/authors/?q=ai:shen.mingyao"Zhang, Chunyan"https://zbmath.org/authors/?q=ai:zhang.shunyan"Sun, Rui"https://zbmath.org/authors/?q=ai:sun.rui"Tong, Jixuan"https://zbmath.org/authors/?q=ai:tong.jixuan"Ye, Yingtu"https://zbmath.org/authors/?q=ai:ye.yingtuSummary: Classification plays a major role in the development of data mining across a wide variety of application fields; classification accuracy and the scale of the spanning tree are usually the prime requirements. Ensemble classification has been the most widely used method for dealing with dynamic data stream problems in recent years. Boosting is a renowned method for obtaining a more diverse and robust classifier from poorly performing base classifiers, and has therefore been widely studied. However, common ensemble algorithms can only replace one base classifier at a time and cannot quickly restore the overall performance. Moreover, requiring the base classifiers to use the same weight distribution method on different data sets is too restrictive. In this paper, we propose a new algorithm called boosting-based iterative ensemble classification (BIE), which can calculate the number of base classifiers to be replaced in each iteration according to the classification accuracy on the latest incoming data streams.
In addition, based on the BIE algorithm, we propose the boosting-based iterative weighted ensemble classification (BIWE) algorithm, which can calculate the optimal weight of each base classifier for data streams with different parameter characteristics. To better assess performance, we compare our methods with 9 algorithms on 9 dynamic data streams. Experimental results show that the BIE and BIWE algorithms not only achieve good classification accuracy, but also greatly reduce the scale of the spanning trees in terms of tree depth and the number of nodes and leaves.Improved graph-based SFA: information preservation complements the slowness principlehttps://zbmath.org/1496.682722022-11-17T18:59:28.764376Z"Escalante-B., Alberto N."https://zbmath.org/authors/?q=ai:escalante-b.alberto-n"Wiskott, Laurenz"https://zbmath.org/authors/?q=ai:wiskott.laurenzSummary: Slow feature analysis (SFA) is an unsupervised learning algorithm that extracts slowly varying features from a multi-dimensional time series. SFA has been extended to supervised learning (classification and regression) by an algorithm called graph-based SFA (GSFA). GSFA relies on a particular graph structure to extract features that preserve label similarities. Processing of high dimensional input data (e.g., images) is feasible via hierarchical GSFA (HGSFA), resulting in a multi-layer neural network. Although HGSFA has useful properties, in this work we identify a shortcoming, namely, that HGSFA networks prematurely discard quickly varying but useful features before they reach higher layers, resulting in suboptimal global slowness and an under-exploited feature space. To counteract this shortcoming, which we call unnecessary information loss, we propose an extension called hierarchical information-preserving GSFA (HiGSFA), where some features fulfill a slowness objective and other features fulfill an information preservation objective.
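For background, the slowness objective of plain linear SFA (the unsupervised precursor of GSFA) can be written in a few lines: whiten the signal, then keep the directions in which the time derivative varies least. The sketch and its toy signals below are illustrative and ours, not the graph-based or hierarchical networks studied here:

```python
import numpy as np

def linear_sfa(X, k=1):
    """Minimal linear SFA: whiten the signal, then keep the k
    directions whose discrete time derivative has smallest variance."""
    X = X - X.mean(axis=0)
    d, U = np.linalg.eigh(X.T @ X / len(X))
    Z = X @ (U / np.sqrt(d))              # whitened signal
    dZ = np.diff(Z, axis=0)               # discrete time derivative
    e, V = np.linalg.eigh(dZ.T @ dZ / len(dZ))
    return Z @ V[:, :k]                   # slowest features first

# toy demo: unmix a slow sine from a fast one
t = np.linspace(0.0, 1.0, 2000)
slow, fast = np.sin(2 * np.pi * t), np.sin(40 * np.pi * t + 1.0)
X = np.column_stack([slow + 0.5 * fast, 0.5 * slow + fast])
y = linear_sfa(X, k=1).ravel()
```

With this linear mixture, the extracted feature `y` correlates almost perfectly (up to sign) with the slow source.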
The efficacy of the extension is verified in three experiments: (1) an unsupervised setup where the input data is the visual stimuli of a simulated rat, (2) the localization of faces in image patches, and (3) the estimation of human age from facial photographs of the MORPH-II database. Both HiGSFA and HGSFA can learn multiple labels and offer a rich feature space, feed-forward training, and linear complexity in the number of samples and dimensions. However, the proposed algorithm, HiGSFA, outperforms HGSFA in terms of feature slowness, estimation accuracy, and input reconstruction, giving rise to a promising hierarchical supervised-learning approach. Moreover, for age estimation, HiGSFA achieves a mean absolute error of 3.41 years, which is a competitive performance for this challenging problem.On cognitive preferences and the plausibility of rule-based modelshttps://zbmath.org/1496.682732022-11-17T18:59:28.764376Z"Fürnkranz, Johannes"https://zbmath.org/authors/?q=ai:furnkranz.johannes"Kliegr, Tomáš"https://zbmath.org/authors/?q=ai:kliegr.tomas"Paulheim, Heiko"https://zbmath.org/authors/?q=ai:paulheim.heikoSummary: It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question this latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly speaking, we equate the plausibility of a model with the likeliness that a user accepts it as an explanation for a prediction. In particular, we argue that -- all other things being equal -- longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models, which is typically necessary for learning powerful discriminative models, may not be suitable when it comes to user acceptance of the learned models. 
To that end, we first recapitulate evidence for and against this postulate, and then report the results of an evaluation in a crowdsourcing study based on about 3000 judgments. The results do not reveal a strong preference for simple rules, whereas we can observe a weak preference for longer rules in some domains. We then relate these results to well-known cognitive biases such as the conjunction fallacy, the representative heuristic, or the recognition heuristic, and investigate their relation to rule length and plausibility.Joint maximization of accuracy and information for learning the structure of a Bayesian network classifierhttps://zbmath.org/1496.682742022-11-17T18:59:28.764376Z"Halbersberg, Dan"https://zbmath.org/authors/?q=ai:halbersberg.dan"Wienreb, Maydan"https://zbmath.org/authors/?q=ai:wienreb.maydan"Lerner, Boaz"https://zbmath.org/authors/?q=ai:lerner.boazSummary: Although recent studies have shown that a Bayesian network classifier (BNC) that maximizes the classification accuracy (i.e., minimizes the 0/1 loss function) is a powerful tool in both knowledge representation and classification, this classifier: (1) focuses on the majority class and, therefore, misclassifies minority classes; (2) is usually uninformative about the distribution of misclassifications; and (3) is insensitive to error severity (making no distinction between misclassification types). In this study, we propose to learn the structure of a BNC using an information measure (IM) that jointly maximizes the classification accuracy and information, motivate this measure theoretically, and evaluate it compared with six common measures using various datasets. 
Using synthesized confusion matrices, twenty-three artificial datasets, seventeen UCI datasets, and different performance measures, we show that an IM-based BNC is superior to BNCs learned using the other measures -- especially for ordinal classification (for which accounting for the error severity is important) and/or imbalanced problems (as most real-life classification problems are) -- and that it does not fall behind state-of-the-art classifiers with respect to accuracy and amount of information provided. To further demonstrate its ability, we tested the IM-based BNC in predicting the severity of motorcycle accidents of young drivers and the disease state of ALS patients -- two class-imbalance ordinal classification problems -- and show that the IM-based BNC is accurate not only for the majority class (mild accidents and mild patients), as other classifiers are, but also for the minority classes (fatal accidents and severe patients), providing more informative and practical classification results. Based on the many experiments we report on here, we expect these advantages to exist for other problems in which both accuracy and information should be maximized, the data is imbalanced, and/or the problem is ordinal, whether the classifier is a BNC or not. Our code, datasets, and results are publicly available at \url{http://www.ee.bgu.ac.il/~boaz/software}.Structure from randomness in halfspace learning with the zero-one losshttps://zbmath.org/1496.682752022-11-17T18:59:28.764376Z"Kabán, Ata"https://zbmath.org/authors/?q=ai:kaban.ata"Durrant, Robert J."https://zbmath.org/authors/?q=ai:durrant.robert-jSummary: We prove risk bounds for halfspace learning when the data dimensionality is allowed to be larger than the sample size, using a notion of compressibility by random projection. In particular, we give upper bounds for the empirical risk minimizer learned efficiently from randomly projected data, as well as uniform upper bounds in the full high-dimensional space. 
Our main findings are the following: i) In both settings, the obtained bounds are able to discover and take advantage of benign geometric structure, which turns out to depend on the cosine similarities between the classifier and points of the input space, and provide a new interpretation of margin distribution type arguments. ii) Furthermore, our bounds allow us to draw new connections between several existing successful classification algorithms, and we also demonstrate that our theory is predictive of empirically observed performance in numerical simulations and experiments. iii) Taken together, these results suggest that the study of compressive learning can improve our understanding of which benign structural traits -- if they are possessed by the data generator -- make it easier to learn an effective classifier from a sample.Distributed block-diagonal approximation methods for regularized empirical risk minimizationhttps://zbmath.org/1496.682762022-11-17T18:59:28.764376Z"Lee, Ching-pei"https://zbmath.org/authors/?q=ai:lee.ching-pei"Chang, Kai-Wei"https://zbmath.org/authors/?q=ai:chang.kai-weiSummary: In recent years, there has been a growing need to train machine learning models on a huge volume of data. Therefore, designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified description and comparison of existing methods. Our approach requires only approximate solutions of the sub-problems involved in the optimization process, and is versatile enough to be applied to many large-scale machine learning problems, including classification, regression, and structured prediction. 
We show that our framework enjoys global linear convergence for a broad class of non-strongly-convex problems, and a refined analysis shows that some specific choices of the sub-problems achieve much faster convergence than existing approaches. This improved convergence rate is also reflected in the superior empirical performance of our method.Classification using proximity catch digraphshttps://zbmath.org/1496.682772022-11-17T18:59:28.764376Z"Manukyan, Artür"https://zbmath.org/authors/?q=ai:manukyan.artur"Ceyhan, Elvan"https://zbmath.org/authors/?q=ai:ceyhan.elvanSummary: We employ random geometric digraphs to construct semi-parametric classifiers. These data-random digraphs belong to parameterized random digraph families called proximity catch digraphs (PCDs). A related geometric digraph family, the class cover catch digraph (CCCD), has been used to solve the class cover problem by using its approximate minimum dominating set and has shown relatively good performance in the classification of imbalanced data sets. Although CCCDs have a convenient construction in \(\mathbb{R}^d\), finding their minimum dominating sets is NP-hard and their probabilistic behaviour is not mathematically tractable except for \(d=1\). On the other hand, a particular family of PCDs, called \textit{proportional-edge} PCDs (PE-PCDs), has mathematically tractable minimum dominating sets in \(\mathbb{R}^d\); however, their construction in higher dimensions may be computationally demanding. More specifically, we show that the classifiers based on PE-PCDs are prototype-based classifiers such that the exact minimum number of prototypes (equivalent to minimum dominating sets) is found in time polynomial in the number of observations. We construct two types of classifiers based on PE-PCDs. One is a family of hybrid classifiers that depends on the location of the points of the training data set, and another type is a family of classifiers solely based on class covers. 
We assess the classification performance of our PE-PCD based classifiers by extensive Monte Carlo simulations, and compare it with that of other commonly used classifiers. We also show that, similar to CCCD classifiers, our classifiers tend to be robust to class imbalance in classification as well.On decompositions of decision function quality measurehttps://zbmath.org/1496.682782022-11-17T18:59:28.764376Z"Nedel'ko, Viktor Mikhaĭlovich"https://zbmath.org/authors/?q=ai:nedelko.victor-mikhailovichSummary: A comparative analysis of two approaches to the decomposition of the quality criterion of decision functions is carried out.
The first approach is the bias-variance decomposition. This is the best-known decomposition used in analyzing the quality of methods for constructing decision functions, in particular for justifying some ensemble methods. It usually assumes a monotonic dependence of the bias and variance on the complexity. Recent studies show that this is not always true.
The second approach is a decomposition into a measure of adequacy and a measure of statistical stability (robustness). The idea of the approach is to decompose the prediction error into approximation error and statistical error.
In this paper we propose a method of statistical estimation of the components of both decompositions on real data. We compare the dependencies of these components on the complexity of the decision function. Non-normalized margin is used as a general measure of complexity.
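As an illustration of the first decomposition (not the authors' estimator for real data), the bias and variance of a simple regressor can be estimated by refitting it on many fresh training sets; the toy target function and all parameters below are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)        # true regression function
x_test = np.linspace(0.0, 1.0, 50)

def bias2_variance(degree, runs=200, n=30, noise=0.3):
    """Refit a polynomial of the given degree on `runs` fresh noisy
    training sets; average squared bias and variance over x_test."""
    preds = np.empty((runs, x_test.size))
    for r in range(runs):
        x = rng.uniform(0.0, 1.0, n)
        y = f(x) + rng.normal(0.0, noise, n)
        preds[r] = np.polyval(np.polyfit(x, y, degree), x_test)
    bias2 = np.mean((preds.mean(axis=0) - f(x_test)) ** 2)
    return bias2, np.mean(preds.var(axis=0))
```

On this toy problem a low-complexity fit (degree 1) shows high bias and low variance, while a high-complexity fit (degree 7) shows the opposite, matching the usual monotonicity picture that the abstract notes does not always hold.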
The results of the study and the experiments on UCI data show significant qualitative similarities between the behavior of the bias and the adequacy measure and between the variance and the statistical stability measure. At the same time, there is a fundamental difference between the considered decompositions: in particular, with increasing complexity, the measure of adequacy cannot increase, while the bias first decreases but at sufficiently high complexity usually starts to grow.Structure learning of Bayesian networks based on hybrid evolutionary algorithm with elite strategyhttps://zbmath.org/1496.682792022-11-17T18:59:28.764376Z"Shi, Jilong"https://zbmath.org/authors/?q=ai:shi.jilong"Zhu, Yungang"https://zbmath.org/authors/?q=ai:zhu.yungangSummary: Structure learning of Bayesian networks is a crucial problem in the area of statistical machine learning and probabilistic graphical models. In this paper, a novel structure learning method for Bayesian networks is proposed, which combines genetic algorithm (GA) with particle swarm optimization (PSO) and utilizes an elite strategy. It benefits both from the advantage of GA in maintaining the diversity of the population and from the advantage of PSO in convergence rate. In addition, in the process of evolution, the elite set strategy is introduced to dynamically adjust the maximal threshold on the number of parents of each node, prune intelligently, and guide the mutation operation. The experimental results show the superiority of the proposed approach to state-of-the-art approaches.Scalable Bayesian preference learning for crowdshttps://zbmath.org/1496.682802022-11-17T18:59:28.764376Z"Simpson, Edwin"https://zbmath.org/authors/?q=ai:simpson.edwin"Gurevych, Iryna"https://zbmath.org/authors/?q=ai:gurevych.irynaSummary: We propose a scalable Bayesian preference learning method for jointly predicting the preferences of individuals as well as the consensus of a crowd from pairwise labels. 
People's opinions often differ greatly, making it difficult to predict their preferences from small amounts of personal data. Individual biases also make it harder to infer the consensus of a crowd when there are few labels per item. We address these challenges by combining matrix factorisation with Gaussian processes, using a Bayesian approach to account for uncertainty arising from noisy and sparse data. Our method exploits input features, such as text embeddings and user metadata, to predict preferences for new items and users that are not in the training set. As previous solutions based on Gaussian processes do not scale to large numbers of users, items or pairwise labels, we propose a stochastic variational inference approach that limits computational and memory costs. Our experiments on a recommendation task show that our method is competitive with previous approaches despite our scalable inference approximation. We demonstrate the method's scalability on a natural language processing task with thousands of users and items, and show improvements over the state of the art on this task. We make our software publicly available for future work (\url{https://github.com/UKPLab/tacl2018-preference-convincing/tree/crowdGPPL}).An integrated cuckoo search-genetic algorithm for mining frequent itemsetshttps://zbmath.org/1496.682812022-11-17T18:59:28.764376Z"Sukanya, N. S."https://zbmath.org/authors/?q=ai:sukanya.n-s"Thangaiah, P. Ranjit Jeba"https://zbmath.org/authors/?q=ai:thangaiah.p-ranjit-jeba(no abstract)Discrete facility location in machine learninghttps://zbmath.org/1496.682822022-11-17T18:59:28.764376Z"Vasil'ev, Igor' Leonidovich"https://zbmath.org/authors/?q=ai:vasilev.igor-leonidovich"Ushakov, Anton Vladimirovich"https://zbmath.org/authors/?q=ai:ushakov.anton-vladimirovichSummary: Facility location problems form a wide class of optimization problems, extremely popular in combinatorial optimization and operations research. 
In any facility location problem, one must locate a set of facilities to satisfy the demands of customers so that a certain objective function is optimized. Besides numerous applications in the public and private sectors, these problems are widely used in machine learning. For example, clustering can be viewed as a facility location problem where one needs to partition a set of customers into clusters assigned to open facilities. In this survey we briefly look at how ideas and approaches that arose in the field of facility location have led to modern, popular machine learning algorithms supported by many data mining and machine learning software packages. We also review the state-of-the-art exact methods and heuristics, as well as some extensions of basic problems and algorithms arising in applied machine learning tasks. Note that the main emphasis here lies on discrete facility location problems, which, for example, underlie many widely used clustering algorithms (PAM, affinity propagation, etc.). Since the high computational complexity of conventional facility location-based clustering algorithms hinders their application to modern large-scale real-life datasets, we also survey some modern approaches to implementation of the algorithms for such large data collections.Algebraic machine learning: emphasis on efficiencyhttps://zbmath.org/1496.682832022-11-17T18:59:28.764376Z"Vinogradov, D. V."https://zbmath.org/authors/?q=ai:vinogradov.d-vSummary: A survey of the state of the art in research on algebraic machine learning is presented. The main emphasis is on computational complexity. 
The key idea is to use lattice theory methods and probabilistic algorithms based on Markov chains.Synchronization in finite-/fixed-time of delayed diffusive complex-valued neural networks with discontinuous activationshttps://zbmath.org/1496.682842022-11-17T18:59:28.764376Z"Duan, Lian"https://zbmath.org/authors/?q=ai:duan.lian"Shi, Min"https://zbmath.org/authors/?q=ai:shi.min"Huang, Chuangxia"https://zbmath.org/authors/?q=ai:huang.chuangxia"Fang, Xianwen"https://zbmath.org/authors/?q=ai:fang.xianwenSummary: In this paper, we analyze the finite-time synchronization problem between two delayed diffusive complex-valued neural networks (CVNNs) with discontinuous activations. We first establish the threshold finite-/fixed-time synchronization (FFTS) dynamics of the model by designing a novel negative exponent controller. Then we further study the finite-time synchronization via the adaptive control scheme. Some novel and useful finite-time synchronization criteria are established based on the discontinuous version of finite-time convergence theorem and Filippov regularization techniques, the upper-bound of the settling time is explicitly estimated as well. The obtained results extend some previous ones on CVNNs. 
Moreover, numerical simulations are performed to substantiate the effectiveness of the theoretical analysis.Stabilization of inertial Cohen-Grossberg neural networks with generalized delays: a direct analysis approachhttps://zbmath.org/1496.682852022-11-17T18:59:28.764376Z"Han, Siyu"https://zbmath.org/authors/?q=ai:han.siyu"Hu, Cheng"https://zbmath.org/authors/?q=ai:hu.cheng"Yu, Juan"https://zbmath.org/authors/?q=ai:yu.juan"Jiang, Haijun"https://zbmath.org/authors/?q=ai:jiang.haijun"Wen, Shiping"https://zbmath.org/authors/?q=ai:wen.shipingSummary: The paper is mainly devoted to the stabilization problem of Cohen-Grossberg type inertial neural networks (INNs) with generalized delays, developing a direct analysis approach that replaces the previous reduced-order transformations. First, a generalized form of time delays is developed to unify discrete constant delays, discrete variable delays and proportional delays. In the stabilization analysis, in the absence of variable substitutions, a direct method is proposed by constructing Lyapunov functionals and designing control schemes for the addressed second-order Cohen-Grossberg INNs to achieve asymptotic or adaptive stabilization. The obtained criteria are simpler and more easily verified in applications compared with the related existing results. 
Finally, three specific examples are provided to verify the theoretical results.Sufficient and necessary conditions for global attractivity and stability of a class of discrete Hopfield-type neural networks with time delayshttps://zbmath.org/1496.682862022-11-17T18:59:28.764376Z"Hong, Yanjie"https://zbmath.org/authors/?q=ai:hong.yanjie"Ma, Wanbiao"https://zbmath.org/authors/?q=ai:ma.wanbiao(no abstract)Deep learning for trivial inverse problemshttps://zbmath.org/1496.682872022-11-17T18:59:28.764376Z"Maass, Peter"https://zbmath.org/authors/?q=ai:maass.peterSummary: Deep learning is producing some of the most remarkable results when applied to some of the toughest large-scale nonlinear problems, such as classification tasks in computer vision or speech recognition. Recently, deep learning has also been applied to inverse problems, in particular, in medical imaging. Some of these applications are motivated by mathematical reasoning, but a solid and at least partially complete mathematical theory for understanding neural networks and deep learning is missing. In this paper, we do not address large-scale problems but aim at understanding neural networks for solving some small and rather naive inverse problems. Nevertheless, the results of this paper highlight the particular complications of inverse problems, e.g., we show that applying a natural network design for mimicking Tikhonov regularization fails when applied to even the most trivial inverse problems. The proofs of this paper utilize basic and well-known results from the theory of statistical inverse problems. We include the proofs in order to provide some material ready to be used in student projects or general mathematical courses on data analysis. We only assume that the reader is familiar with the standard definitions of feedforward networks, e.g., the backpropagation algorithm for training such networks. 
We also include -- without proof -- numerical experiments for analyzing the influence of the network design, which include comparisons with the learned iterative soft-thresholding algorithm (LISTA).
For the entire collection see [Zbl 1427.94002].Impulsive effect on fixed-time control for distributed delay uncertain static neural networks with leakage delayhttps://zbmath.org/1496.682882022-11-17T18:59:28.764376Z"Miaadi, Foued"https://zbmath.org/authors/?q=ai:miaadi.foued"Li, Xiaodi"https://zbmath.org/authors/?q=ai:li.xiaodiSummary: In this paper, the problem of fixed-time stabilization (FXTSB) for an uncertain impulsive distributed delay static neural networks (UIDSNNs) with leakage delay is investigated. Firstly, a new memory controller distinct from the existing ones is build. Besides, by using new Lyapunov function which include a new vectorial function, some new criteria are established to deal with the impulsive effect on FXTSB of UIDSNNs with leakage. Under the proposed memory fixed-time controller, the average impulsive interval(AII)-dependent settling-time is established and the controller parameters can be expressed in the form of linear matrix inequalities (LMIs). Finally, some numerical examples with graphical illustrations are provided to demonstrate the effectiveness of our theoretical main results.A sparse deep learning model for privacy attack on remote sensing imageshttps://zbmath.org/1496.682892022-11-17T18:59:28.764376Z"Wang, Eric Ke"https://zbmath.org/authors/?q=ai:kewang.eric"Zhe, Nie"https://zbmath.org/authors/?q=ai:zhe.nie"Li, Yueping"https://zbmath.org/authors/?q=ai:li.yueping"Liang, Zuodong"https://zbmath.org/authors/?q=ai:liang.zuodong"Zhang, Xun"https://zbmath.org/authors/?q=ai:zhang.xun"Yu, Juntao"https://zbmath.org/authors/?q=ai:yu.juntao"Ye, Yunming"https://zbmath.org/authors/?q=ai:ye.yunming(no abstract)Impact of leakage delay on bifurcation in fractional-order complex-valued neural networkshttps://zbmath.org/1496.682902022-11-17T18:59:28.764376Z"Xu, Changjin"https://zbmath.org/authors/?q=ai:xu.changjin"Liao, Maoxin"https://zbmath.org/authors/?q=ai:liao.maoxin"Li, Peiluan"https://zbmath.org/authors/?q=ai:li.peiluan"Yuan, 
Shuai"https://zbmath.org/authors/?q=ai:yuan.shuaiSummary: During the past decades, integer-order complex-valued neural networks have attracted great attention since they have been widely applied in many fields of engineering technology. However, the investigation on fractional-order complex-valued neural networks, which are more appropriate to characterize the dynamical nature of neural networks, is rare. In this manuscript, we consider the stability and the existence of Hopf bifurcation of fractional-order complex-valued neural networks. By separating the coefficients and the activation functions into their real and imaginary parts and choosing the time delay as bifurcation parameter, we establish a set of sufficient conditions to ensure the stability of the equilibrium point and the existence of Hopf bifurcation for the involved network. The study shows that both the fractional order and the leakage delay have an important impact on the stability and the existence of Hopf bifurcation of the considered model. Some suitable numerical simulations are implemented to illustrate the pivotal theoretical predictions. Finally, we end this article with a brief conclusion.Quasi-conformal neural network (QC-net) with applications to shape matchinghttps://zbmath.org/1496.682912022-11-17T18:59:28.764376Z"Zhang, Han"https://zbmath.org/authors/?q=ai:zhang.han"Lui, Lok Ming"https://zbmath.org/authors/?q=ai:lui.lok-mingSummary: We build a deep neural network based on quasi-conformal theories, called QC-net, to obtain diffeomorphic registration maps between corresponding data. QC-net takes the landmarks in the to-be-registered images as input and outputs the registration mapping between them. The loss function of the QC-net is carefully designed using the Beltrami coefficient to guarantee a homeomorphic registration map. This is the first work to build a neural network with homeomorphic output.
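As background (our illustrative sketch, not part of the paper's pipeline): the Beltrami coefficient of a planar map \(f = (u, v)\) is \(\mu = f_{\bar z} / f_z\), and \(|\mu| < 1\) everywhere characterises orientation-preserving quasi-conformal maps, which is what makes it usable as a homeomorphism-enforcing loss term. From the partial derivatives of \(f\):

```python
def beltrami_coefficient(ux, uy, vx, vy):
    """Beltrami coefficient mu = f_zbar / f_z of a planar map
    f(x, y) = (u(x, y), v(x, y)), given its partial derivatives.
    f_z    = ((u_x + v_y) + i (v_x - u_y)) / 2
    f_zbar = ((u_x - v_y) + i (v_x + u_y)) / 2"""
    fz = complex(ux + vy, vx - uy) / 2
    fzbar = complex(ux - vy, vx + uy) / 2
    return fzbar / fz

def is_quasiconformal(ux, uy, vx, vy):
    """|mu| < 1 at a point: locally orientation-preserving and injective."""
    return abs(beltrami_coefficient(ux, uy, vx, vy)) < 1
```

For the identity map the coefficient is 0 (conformal); the anisotropic stretch u = 2x, v = y gives mu = 1/3, still quasi-conformal.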
Once the network has been trained, the registration map can be obtained efficiently in real-time. Extensive numerical experiments have been carried out, which demonstrate its effectiveness in computing bijective landmark-matching registration with high accuracy. Our proposed QC-net has also been successfully applied to various real applications, such as medical image registration and shape remeshing.A full stage data augmentation method in deep convolutional neural network for natural image classificationhttps://zbmath.org/1496.682922022-11-17T18:59:28.764376Z"Zheng, Qinghe"https://zbmath.org/authors/?q=ai:zheng.qinghe"Yang, Mingqiang"https://zbmath.org/authors/?q=ai:yang.mingqiang"Tian, Xinyu"https://zbmath.org/authors/?q=ai:tian.xinyu"Jiang, Nan"https://zbmath.org/authors/?q=ai:jiang.nan"Wang, Deqiang"https://zbmath.org/authors/?q=ai:wang.deqiangSummary: Nowadays, deep learning has achieved remarkable results in many computer vision related tasks, among which the support of big data is essential. In this paper, we propose a full stage data augmentation framework to improve the accuracy of deep convolutional neural networks, which can also play the role of an implicit model ensemble without introducing additional model training costs. Simultaneous data augmentation during training and testing stages can ensure network optimization and enhance its generalization ability. Augmentation in two stages needs to be consistent to ensure the accurate transfer of specific domain information. Furthermore, this framework is universal for any network architecture and data augmentation strategy and therefore can be applied to a variety of deep learning based tasks.
Finally, experimental results about image classification on the coarse-grained dataset CIFAR-10 (93.41\%) and fine-grained dataset CIFAR-100 (70.22\%) demonstrate the effectiveness of the framework by comparison with state-of-the-art results.Quasi-synchronization of heterogeneous neural networks with distributed and proportional delays via impulsive controlhttps://zbmath.org/1496.682932022-11-17T18:59:28.764376Z"Zhu, Ruiyuan"https://zbmath.org/authors/?q=ai:zhu.ruiyuan"Guo, Yingxin"https://zbmath.org/authors/?q=ai:guo.yingxin.1|guo.yingxin"Wang, Fei"https://zbmath.org/authors/?q=ai:wang.fei.1|wang.fei.2Summary: In this paper, we discuss the quasi-synchronization of delayed heterogeneous dynamic neural networks based on impulsive control. The main difference of this paper from previous works on quasi-synchronization is that both proportional delay and distributed delay are considered. By establishing a novel impulsive delay inequality, combining Lyapunov theory and the concept of average impulsive interval, some criteria for quasi-synchronization of delayed heterogeneous dynamic neural networks are obtained. Moreover, by using the generalized formulae for the variation of proportional and distributed delay parameters, the theoretical error bound of quasi-synchronization is estimated. Finally, numerical examples are given to illustrate the validity of our results.NP-hardness of some data cleaning problemhttps://zbmath.org/1496.682942022-11-17T18:59:28.764376Z"Kutnenko, Ol'ga Andreevna"https://zbmath.org/authors/?q=ai:kutnenko.olga-andreevna"Plyasunov, Aleksandr Vladimirovich"https://zbmath.org/authors/?q=ai:plyasunov.aleksandr-vladimirovichSummary: We prove the NP-hardness of the outlier detection problem considered in this paper, to which a data analysis problem is reduced.
As a quantitative assessment of the compactness of the image, the function of rival similarity (FRiS-function) is used, which evaluates the local similarity of objects with their closest neighbors.Choice of weight coefficients in the gradient method for constructing a Sugeno-type modelhttps://zbmath.org/1496.682952022-11-17T18:59:28.764376Z"Bobomuradov, O. Zh."https://zbmath.org/authors/?q=ai:bobomuradov.o-zh(no abstract)Some problems on the realization of combined systems of pattern recognition in the class of linear decision functionshttps://zbmath.org/1496.682962022-11-17T18:59:28.764376Z"Ignat'ev, N. A."https://zbmath.org/authors/?q=ai:ignatev.nikolai-a(no abstract)Analysis on low-resolution image correlation pattern recognition algorithm in finite spacehttps://zbmath.org/1496.682972022-11-17T18:59:28.764376Z"Li, Yanling"https://zbmath.org/authors/?q=ai:li.yanling.2|li.yanling"Li, Gang"https://zbmath.org/authors/?q=ai:li.gang.8|li.gang.9|li.gang.6|li.gang.4|li.gang.11|li.gang.2|li.gang.10|li.gang.1(no abstract)Solving SAT (and MaxSAT) with a quantum annealer: foundations, encodings, and preliminary resultshttps://zbmath.org/1496.682982022-11-17T18:59:28.764376Z"Bian, Zhengbing"https://zbmath.org/authors/?q=ai:bian.zhengbing"Chudak, Fabian"https://zbmath.org/authors/?q=ai:chudak.fabian-a"Macready, William"https://zbmath.org/authors/?q=ai:macready.william-g"Roy, Aidan"https://zbmath.org/authors/?q=ai:roy.aidan"Sebastiani, Roberto"https://zbmath.org/authors/?q=ai:sebastiani.roberto"Varotti, Stefano"https://zbmath.org/authors/?q=ai:varotti.stefanoSummary: Quantum annealers (QAs) are specialized quantum computers that minimize objective functions over discrete variables by physically exploiting quantum effects. Current QA platforms allow for the optimization of quadratic objectives defined over binary variables (qubits), also known as Ising problems. In the last decade, QA systems as implemented by D-Wave have scaled with Moore-like growth. 
Current architectures provide 2048 sparsely-connected qubits, and continued exponential growth is anticipated, together with increased connectivity.
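To make the SAT-to-Ising direction concrete, here is a minimal illustration (ours, not the authors' encoder) of the standard QUBO penalty for a two-literal clause; a QUBO over bits b converts to an Ising problem over spins s via s = 2b - 1, and longer clauses additionally need ancilla qubits:

```python
from itertools import product

def clause_penalty(lits):
    """QUBO-style penalty for a two-literal clause (l1 OR l2).
    lits: pairs (var, positive); a positive literal takes value b,
    a negated one 1 - b. The penalty (1 - l1)(1 - l2)
    = 1 - l1 - l2 + l1*l2 is quadratic in the bits (hence a QUBO term),
    and is 0 exactly on satisfying assignments, 1 otherwise."""
    def penalty(assign):
        l1, l2 = (assign[v] if pos else 1 - assign[v] for v, pos in lits)
        return (1 - l1) * (1 - l2)
    return penalty

# Ground states of the penalty = models of the clause (x0 OR NOT x1).
pen = clause_penalty([(0, True), (1, False)])
models = [a for a in product((0, 1), repeat=2)
          if pen({0: a[0], 1: a[1]}) == 0]
```

The only assignment with nonzero penalty is x0 = 0, x1 = 1, the unique falsifying assignment, so minimising the summed penalties over all clauses solves the SAT instance.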
We explore the feasibility of such architectures for solving SAT and MaxSAT problems as QA systems scale. We develop techniques for effectively encoding SAT -- and, with some limitations, MaxSAT -- into Ising problems compatible with sparse QA architectures. We provide the theoretical foundations for this mapping, and present encoding techniques that combine offline Satisfiability and Optimization Modulo Theories with on-the-fly placement and routing. Preliminary empirical tests on a current-generation 2048-qubit D-Wave system support the feasibility of the approach for certain SAT and MaxSAT problems.Qualitative numeric planning: reductions and complexityhttps://zbmath.org/1496.682992022-11-17T18:59:28.764376Z"Bonet, Blai"https://zbmath.org/authors/?q=ai:bonet.blai"Geffner, Hector"https://zbmath.org/authors/?q=ai:geffner.hectorSummary: Qualitative numerical planning is classical planning extended with non-negative real variables that can be increased or decreased ``qualitatively'', i.e., by positive indeterminate amounts. While deterministic planning with numerical variables is undecidable in general, qualitative numerical planning is decidable and provides a convenient abstract model for generalized planning. The solutions to qualitative numerical problems (QNPs) were shown to correspond to the strong cyclic solutions of an associated fully observable non-deterministic (FOND) problem that terminate. This leads to a generate-and-test algorithm for solving QNPs where solutions to a FOND problem are generated one by one and tested for termination. The computational shortcomings of this approach for solving QNPs, however, are that it is not simple to amend FOND planners to generate all solutions, and that the number of solutions to check can be doubly exponential in the number of variables. In this work we address these limitations while providing additional insights on QNPs.
More precisely, we introduce two polynomial-time reductions, one from QNPs to FOND problems and the other from FOND problems to QNPs, neither of which involves termination tests. A result of these reductions is that QNPs are shown to have the same expressive power and the same complexity as FOND problems.MILP, pseudo-Boolean, and OMT solvers for optimal fault-tolerant placements of relay nodes in mission critical wireless networkshttps://zbmath.org/1496.683002022-11-17T18:59:28.764376Z"Chen, Qian Matteo"https://zbmath.org/authors/?q=ai:chen.qian-matteo"Finzi, Alberto"https://zbmath.org/authors/?q=ai:finzi.alberto"Mancini, Toni"https://zbmath.org/authors/?q=ai:mancini.toni"Melatti, Igor"https://zbmath.org/authors/?q=ai:melatti.igor"Tronci, Enrico"https://zbmath.org/authors/?q=ai:tronci.enricoSummary: In \textit{critical infrastructures} like airports, much care has to be devoted to protecting radio communication networks from external electromagnetic interference.
Protection of such \textit{mission-critical} radio communication networks is usually tackled by exploiting radiogoniometers: at least three suitably deployed radiogoniometers, and a gateway gathering information from them, make it possible to monitor and localise sources of electromagnetic emissions that are not supposed to be present in the monitored area. Typically, radiogoniometers are connected to the gateway through \textit{relay nodes}. As a result, some degree of fault-tolerance for the network of relay nodes is essential in order to offer reliable monitoring. On the other hand, deployment of relay nodes is typically quite expensive. We thus have two conflicting requirements: minimise costs while guaranteeing a given fault-tolerance.
In this paper, we address the problem of computing a deployment for relay nodes that minimises the overall cost while at the same time guaranteeing proper working of the network even when some of the relay nodes (up to a given maximum number) become faulty (\textit{fault-tolerance}).
We show that, by means of computation-intensive pre-processing on an HPC infrastructure, the above optimisation problem can be encoded as a 0/1 Linear Program, making it amenable to standard Artificial Intelligence reasoners such as MILP, PB-SAT, and SMT/OMT solvers. Our problem formulation enables us to present experimental results comparing the performance of these three solving technologies on a real case study of a relay node network deployment in areas of the Leonardo da Vinci Airport in Rome, Italy.An adaptive prefix-assignment technique for symmetry reductionhttps://zbmath.org/1496.683012022-11-17T18:59:28.764376Z"Junttila, Tommi"https://zbmath.org/authors/?q=ai:junttila.tommi-a"Karppa, Matti"https://zbmath.org/authors/?q=ai:karppa.matti"Kaski, Petteri"https://zbmath.org/authors/?q=ai:kaski.petteri"Kohonen, Jukka"https://zbmath.org/authors/?q=ai:kohonen.jukkaSummary: This paper presents a technique for symmetry reduction that adaptively assigns a prefix of variables in a system of constraints so that the generated prefix-assignments are pairwise nonisomorphic under the action of the symmetry group of the system. The technique is based on
\textit{B. D. McKay}'s canonical extension framework [J. Algorithms 26, No. 2, 306--324 (1998; Zbl 0894.68107)].
Among the key features of the technique are (i) adaptability -- the prefix sequence can be user-prescribed and truncated for compatibility with the group of symmetries; (ii) parallelisability -- prefix-assignments can be processed in parallel independently of each other; (iii) versatility -- the method is applicable whenever the group of symmetries can be concisely represented as the automorphism group of a vertex-colored graph; and (iv) implementability -- the method can be implemented relying on a canonical labeling map for vertex-colored graphs as the only nontrivial subroutine. To demonstrate the tentative practical applicability of our technique, we have prepared a preliminary implementation and report on a limited set of experiments that demonstrate its ability to reduce symmetry on hard instances.
For the entire collection see [Zbl 1368.68008].Representing fitness landscapes by valued constraints to understand the complexity of local searchhttps://zbmath.org/1496.683022022-11-17T18:59:28.764376Z"Kaznatcheev, Artem"https://zbmath.org/authors/?q=ai:kaznatcheev.artem"Cohen, David A."https://zbmath.org/authors/?q=ai:cohen.david-a"Jeavons, Peter G."https://zbmath.org/authors/?q=ai:jeavons.peter-gSummary: Local search is widely used to solve combinatorial optimisation problems and to model biological evolution, but the performance of local search algorithms on different kinds of fitness landscapes is poorly understood. Here we consider how fitness landscapes can be represented using valued constraints, and investigate what the structure of such representations reveals about the complexity of local search.
First, we show that for fitness landscapes representable by binary Boolean valued constraints there is a minimal necessary constraint graph that can be easily computed. Second, we consider landscapes as equivalent if they allow the same (improving) local search moves; we show that a minimal constraint graph still exists, but is NP-hard to compute.
We then develop several techniques to bound the length of any sequence of local search moves. We show that such a bound can be obtained from the numerical values of the constraints in the representation, and show how this bound may be tightened by considering equivalent representations. In the binary Boolean case, we prove that a degree-2 or tree-structured constraint graph gives a quadratic bound on the number of improving moves made by any local search; hence, any landscape that can be represented by such a model will be tractable for any form of local search.
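The quantity being bounded can be made concrete with a small sketch (our illustration, not the paper's machinery): a fitness landscape given by valued constraints, and a greedy single-bit-flip ascent that counts its improving moves.

```python
def fitness(assign, constraints):
    """Fitness = sum of valued constraints; each constraint is a pair
    (scope, table) with the table indexed by the tuple of values."""
    return sum(t[tuple(assign[v] for v in scope)] for scope, t in constraints)

def improving_moves(assign, constraints):
    """Greedy single-bit-flip ascent; returns the number of improving
    moves made (the quantity the bounds concern) and the local optimum."""
    assign = list(assign)
    moves = 0
    while True:
        base = fitness(assign, constraints)
        for i in range(len(assign)):
            assign[i] ^= 1
            if fitness(assign, constraints) > base:
                moves += 1
                break
            assign[i] ^= 1  # undo the non-improving flip
        else:
            return moves, assign

# A path of binary Boolean constraints rewarding equal neighbours:
# a degree-2 constraint graph, so improving-move sequences stay short.
eq = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
path = [((0, 1), eq), ((1, 2), eq)]
```

Starting from [0, 1, 0], the ascent reaches the optimum [1, 1, 1] in two improving moves, well within the quadratic bound for this structure.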
Finally, we build two families of examples to show that the conditions in our tractability results are essential. With domain size three, even just a path of binary constraints can model a landscape with an exponentially long sequence of improving moves. With a treewidth-two constraint graph, even with a maximum degree of three, binary Boolean constraints can model a landscape with an exponentially long sequence of improving moves.Using machine learning for decreasing state uncertainty in planninghttps://zbmath.org/1496.683032022-11-17T18:59:28.764376Z"Krivic, Senka"https://zbmath.org/authors/?q=ai:krivic.senka"Cashmore, Michael"https://zbmath.org/authors/?q=ai:cashmore.michael"Magazzeni, Daniele"https://zbmath.org/authors/?q=ai:magazzeni.daniele"Szedmak, Sandor"https://zbmath.org/authors/?q=ai:szedmak.sandor"Piater, Justus"https://zbmath.org/authors/?q=ai:piater.justus-hSummary: We present a novel approach for decreasing state uncertainty in planning prior to solving the planning problem. This is done by making predictions about the state based on currently known information, using machine learning techniques. For domains where uncertainty is high, we define an active learning process for identifying which information, once sensed, will best improve the accuracy of predictions.
We demonstrate that an agent is able to solve problems with uncertainties in the state with less planning effort compared to standard planning techniques. Moreover, agents can solve problems for which they could not find valid plans without using predictions. Experimental results also demonstrate that using our active learning process for identifying information to be sensed leads to gathering information that improves the prediction process.An empirical study of branching heuristics through the lens of global learning ratehttps://zbmath.org/1496.683042022-11-17T18:59:28.764376Z"Liang, Jia Hui"https://zbmath.org/authors/?q=ai:liang.jiahui"Hari Govind, V. K."https://zbmath.org/authors/?q=ai:hari-govind.v-k"Poupart, Pascal"https://zbmath.org/authors/?q=ai:poupart.pascal"Czarnecki, Krzysztof"https://zbmath.org/authors/?q=ai:czarnecki.krzysztof"Ganesh, Vijay"https://zbmath.org/authors/?q=ai:ganesh.vijaySummary: In this paper, we analyze a suite of 7 well-known branching heuristics proposed by the SAT community and show that the better heuristics tend to generate more learnt clauses per decision, a metric we define as the global learning rate (GLR). Like our previous work on the LRB branching heuristic, we once again view these heuristics as techniques to solve the learning rate optimization problem. First, we show that there is a strong positive correlation between GLR and solver efficiency for a variety of branching heuristics. Second, we test our hypothesis further by developing a new branching heuristic that maximizes GLR greedily. We show empirically that this heuristic achieves very high GLR and interestingly very low literal block distance (LBD) over the learnt clauses. In our experiments this greedy branching heuristic enables the solver to solve instances faster than VSIDS, when the branching time is taken out of the equation. 
This experiment is a good proof of concept that a branching heuristic maximizing GLR will lead to good solver performance modulo the computational overhead. Third, we propose that machine learning algorithms are a good way to cheaply approximate the greedy GLR maximization heuristic as already witnessed by LRB. In addition, we design a new branching heuristic, called SGDB, that uses a stochastic gradient descent online learning method to dynamically order branching variables in order to maximize GLR. We show experimentally that SGDB performs on par with the VSIDS branching heuristic.
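The GLR metric itself is elementary; as an illustrative sketch (ours, not the authors' instrumentation), it can be computed from a trace of solver events:

```python
def global_learning_rate(trace):
    """GLR = learnt clauses per decision, computed from a list of solver
    events; a real CDCL solver would simply maintain the two counters."""
    decisions = trace.count("decide")
    learnt = trace.count("learn")
    return learnt / decisions if decisions else 0.0
```

A trace with three decisions and two learnt clauses yields a GLR of 2/3; the paper's greedy heuristic branches so as to push this ratio as high as possible.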
For the entire collection see [Zbl 1368.68008].Coverage-based clause reduction heuristics for CDCL solvershttps://zbmath.org/1496.683052022-11-17T18:59:28.764376Z"Nabeshima, Hidetomo"https://zbmath.org/authors/?q=ai:nabeshima.hidetomo"Inoue, Katsumi"https://zbmath.org/authors/?q=ai:inoue.katsumiSummary: Many heuristics, such as decision, restart, and clause reduction heuristics, are incorporated in CDCL solvers in order to improve performance. In this paper, we focus on learnt clause reduction heuristics, which are used to suppress memory consumption and sustain propagation speed. The reduction heuristics consist of evaluation criteria, for measuring the usefulness of learnt clauses, and a reduction strategy in order to select clauses to be removed based on the criteria. LBD (literals blocks distance) is used as the evaluation criteria in many solvers. For the reduction strategy, we propose a new concise schema based on the coverage ratio of used LBDs. The experimental results show that the proposed strategy can achieve higher coverage than the conventional strategy and improve the performance for both SAT and UNSAT instances.
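The coverage idea admits a compact sketch (our reading of the strategy, simplified): pick the smallest LBD cut-off whose clauses account for a target fraction of observed clause uses, then drop learnt clauses above it.

```python
from collections import Counter

def lbd_cutoff(used_lbds, coverage=0.8):
    """Smallest LBD value such that clauses with LBD up to it account
    for at least `coverage` of the observed clause uses."""
    counts = Counter(used_lbds)
    total = len(used_lbds)
    seen = 0
    for lbd in sorted(counts):
        seen += counts[lbd]
        if seen / total >= coverage:
            return lbd

def reduce_learnts(learnts, used_lbds, coverage=0.8):
    """Keep only learnt clauses whose LBD is within the coverage cut-off."""
    cut = lbd_cutoff(used_lbds, coverage)
    return [c for c in learnts if c["lbd"] <= cut]
```

If 80\% of recent propagations used clauses of LBD at most 3, the cut-off is 3 and higher-LBD learnt clauses are removed.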
For the entire collection see [Zbl 1368.68008].Result diversification by multi-objective evolutionary algorithms with theoretical guaranteeshttps://zbmath.org/1496.683062022-11-17T18:59:28.764376Z"Qian, Chao"https://zbmath.org/authors/?q=ai:qian.chao"Liu, Dan-Xuan"https://zbmath.org/authors/?q=ai:liu.dan-xuan"Zhou, Zhi-Hua"https://zbmath.org/authors/?q=ai:zhou.zhihuaSummary: Given a ground set of items, the result diversification problem aims to select a subset with high ``quality'' and ``diversity'' while satisfying some constraints. It arises in various real-world artificial intelligence applications, such as web-based search, document summarization and feature selection, and also has applications in other areas, e.g., computational geometry, databases, finance and operations research. Previous algorithms are mainly based on greedy or local search. In this paper, we propose to reformulate the result diversification problem as a bi-objective maximization problem, and solve it by a multi-objective evolutionary algorithm (EA), i.e., the GSEMO. We theoretically prove that the GSEMO can achieve the (asymptotically) optimal theoretical guarantees under both static and dynamic environments. For cardinality constraints, the GSEMO can achieve the optimal polynomial-time approximation ratio, 1/2. For more general matroid constraints, the GSEMO can achieve an asymptotically optimal polynomial-time approximation ratio, \(1/2-\epsilon /(4 n)\), where \(\epsilon > 0\) and \(n\) is the size of the ground set of items. Furthermore, when the objective function (i.e., a linear combination of quality and diversity) changes dynamically, the GSEMO can maintain this approximation ratio in polynomial running time, addressing the open question proposed by
\textit{A. Borodin} et al. [ACM Trans. Algorithms 13, No. 3, Article No. 41, 25 p. (2017; Zbl 1452.68273)].
This also theoretically shows the superiority of EAs over local search for solving dynamic optimization problems for the first time, and discloses the robustness of the mutation operator of EAs against dynamic changes. Experiments on the applications of web-based search, multi-label feature selection and document summarization show the superior performance of the GSEMO over the state-of-the-art algorithms (i.e., the greedy algorithm and local search) under both static and dynamic environments.Introducing Pareto minimal correction subsetshttps://zbmath.org/1496.683072022-11-17T18:59:28.764376Z"Terra-Neves, Miguel"https://zbmath.org/authors/?q=ai:terra-neves.miguel"Lynce, Inês"https://zbmath.org/authors/?q=ai:lynce.ines"Manquinho, Vasco"https://zbmath.org/authors/?q=ai:manquinho.vasco-mSummary: A minimal correction subset (MCS) of an unsatisfiable constraint set is a minimal subset of constraints that, if removed, makes the constraint set satisfiable. MCSs enjoy a wide range of applications, one of them being approximate solutions to constrained optimization problems. However, existing work on applying MCS enumeration to optimization problems focuses on the single-objective case.
In this work, a first definition of Pareto minimal correction subsets (Pareto-MCSs) is proposed with the goal of approximating the Pareto-optimal solution set of multi-objective constrained optimization problems. We formalize and prove an equivalence relationship between Pareto-optimal solutions and Pareto-MCSs. Moreover, Pareto-MCSs and MCSs can be connected in such a way that existing state-of-the-art MCS enumeration algorithms can be used to enumerate Pareto-MCSs.
An experimental evaluation considers the multi-objective virtual machine consolidation problem. Results show that the proposed Pareto-MCS approach outperforms the state-of-the-art approaches.
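The underlying MCS notion can be illustrated with a brute-force sketch (exponential and purely didactic; the enumeration algorithms the paper builds on use SAT solvers):

```python
from itertools import combinations, product

def satisfiable(constraints, n_vars):
    """Brute-force check over Boolean variables 0..n_vars-1; each
    constraint is a predicate on the full assignment tuple."""
    return any(all(c(a) for c in constraints)
               for a in product((0, 1), repeat=n_vars))

def minimal_correction_subsets(constraints, n_vars):
    """Enumerate MCSs: subset-minimal index sets whose removal makes
    the remaining constraints satisfiable."""
    mcses = []
    for k in range(len(constraints) + 1):  # by increasing size
        for idx in combinations(range(len(constraints)), k):
            # a strict subset already corrects => idx is not minimal
            if any(set(m) <= set(idx) for m in mcses):
                continue
            rest = [c for i, c in enumerate(constraints) if i not in idx]
            if satisfiable(rest, n_vars):
                mcses.append(idx)
    return mcses
```

For the unsatisfiable set {x0 = 1, x0 = 0, x0 = 1} over one variable, the MCSs are {1} (drop the lone x0 = 0 constraint) and {0, 2} (drop both x0 = 1 constraints).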
For the entire collection see [Zbl 1368.68008].Partial (neighbourhood) singleton arc consistency for constraint satisfaction problemshttps://zbmath.org/1496.683082022-11-17T18:59:28.764376Z"Wallace, Richard J."https://zbmath.org/authors/?q=ai:wallace.richard-jSummary: Algorithms based on singleton arc consistency (SAC) show considerable promise for improving backtrack search algorithms for constraint satisfaction problems (CSPs). The drawback is that even the most efficient of them is still comparatively expensive. Even when limited to preprocessing, they give overall improvement only when problems are quite difficult to solve with more typical procedures such as maintained arc consistency (MAC). The present work examines a form of partial SAC and neighbourhood SAC (NSAC) in which a subset of the variables in a CSP are chosen to be made SAC-consistent or neighbourhood-SAC-consistent. Such consistencies, despite their partial character, are still well-characterized in that algorithms have unique fixpoints. Heuristic strategies for choosing an effective subset of variables are described and tested, the best being choice by highest degree and a more complex strategy of choosing by constraint weight after random probing. 
Experimental results justify the claim that these methods can be nearly as effective as the corresponding full version of the algorithm in terms of values discarded or problems proven unsatisfiable, while significantly reducing the effort required to achieve this.Adding proof calculi to epistemic logics with structured knowledgehttps://zbmath.org/1496.683092022-11-17T18:59:28.764376Z"Benevides, Mario"https://zbmath.org/authors/?q=ai:benevides.mario-r-f"Madeira, Alexandre"https://zbmath.org/authors/?q=ai:madeira.alexandre"Martins, Manuel A."https://zbmath.org/authors/?q=ai:martins.manuel-aSummary: Dynamic Epistemic Logic (DEL) is used in the analysis of a wide class of application scenarios involving multi-agent systems with local perceptions of information and knowledge. In its classical form, the knowledge of epistemic states is represented by sets of propositions. However, the complexity of current systems requires richer structures than sets of propositions to represent knowledge on their epistemic states. Algebras, graphs or distributions are examples of useful structures for this end. Based on this observation, we introduced a parametric method to build dynamic epistemic logics on-demand, taking as parameter the specific knowledge representation framework (e.g., propositional, equational or even a modal logic) that best fits the problem at hand. In order to use the built logics in practice, tool support is needed. Based on this, we extended our previous method with a parametric construction of complete proof calculi. The complexities of the model checking and satisfiability problems for the achieved logics are provided.
For the entire collection see [Zbl 1489.68021].Examining network effects in an argumentative agent-based model of scientific inquiryhttps://zbmath.org/1496.683102022-11-17T18:59:28.764376Z"Borg, Annemarie"https://zbmath.org/authors/?q=ai:borg.annemarie"Frey, Daniel"https://zbmath.org/authors/?q=ai:frey.daniel"Šešelja, Dunja"https://zbmath.org/authors/?q=ai:seselja.dunja"Straßer, Christian"https://zbmath.org/authors/?q=ai:strasser.christianSummary: In this paper we present an agent-based model (ABM) of scientific inquiry aimed at investigating how different social networks impact the efficiency of scientists in acquiring knowledge. The model is an improved variant of the ABM introduced in
[\textit{P. M. Dung}, J. Log. Program. 22, No. 2, 151--177 (1995; Zbl 0816.68045)],
which is based on abstract argumentation frameworks. The current model employs a more refined notion of social networks and a more realistic representation of knowledge acquisition than the previous variant. Moreover, it includes two criteria of success: a monist and a pluralist one, reflecting different desiderata of scientific inquiry. Our findings suggest that, given a reasonable ratio between research time and time spent on communication, increasing the degree of connectedness of the social network tends to improve the efficiency of scientists.
For the entire collection see [Zbl 1369.68010].Image schemas and conceptual blending in diagrammatic reasoning: the case of Hasse diagramshttps://zbmath.org/1496.683112022-11-17T18:59:28.764376Z"Bourou, Dimitra"https://zbmath.org/authors/?q=ai:bourou.dimitra"Schorlemmer, Marco"https://zbmath.org/authors/?q=ai:schorlemmer.marco"Plaza, Enric"https://zbmath.org/authors/?q=ai:plaza.enricSummary: In this work, we propose a formal, computational model of the sense-making of diagrams by using the theories of image schemas and conceptual blending, stemming from cognitive linguistics. We illustrate our model here for the case of a Hasse diagram, using typed first-order logic to formalise the image schemas and to represent the geometry of a diagram. The latter additionally requires the use of some qualitative spatial reasoning formalisms. We show that, by blending image schemas with the geometrical configuration of a diagram, we can formally describe the way our cognition structures the understanding of, and the reasoning with, diagrams. In addition to a theoretical interest for diagrammatic reasoning, we also briefly discuss the cognitive underpinnings of good practice in diagram design, which are important for fields such as human-computer interaction and data visualization.
For the entire collection see [Zbl 1487.68008].An abductive framework for extended logic programminghttps://zbmath.org/1496.683122022-11-17T18:59:28.764376Z"Brogi, Antonio"https://zbmath.org/authors/?q=ai:brogi.antonio"Lamma, Evelina"https://zbmath.org/authors/?q=ai:lamma.evelina"Mancarella, Paolo"https://zbmath.org/authors/?q=ai:mancarella.paolo"Mello, Paola"https://zbmath.org/authors/?q=ai:mello.paolaSummary: We provide a simple formulation of a framework where three main extensions of logic programming for non-monotonic reasoning are treated uniformly: Negation-by-default, explicit negation and abduction. The resulting semantics is purely model-theoretic and gives meaning to any consistent abductive logic program. Moreover, it embeds and generalises existing semantics which deal with negation and abduction separately. The abductive framework is equipped with a correct top-down proof procedure.
For the entire collection see [Zbl 0875.00116].Reasoning with stratified default theorieshttps://zbmath.org/1496.683132022-11-17T18:59:28.764376Z"Cholewiński, Paweł"https://zbmath.org/authors/?q=ai:cholewinski.pawelSummary: Default logic is one of the principal formalisms for nonmonotonic, reasoning. In this paper, we study algorithms for computing extensions for a class of general propositional default theories. We focus on the problem of partitioning a given set of defaults into a family of its subsets. Then we investigate how the results obtained for these subsets can be put together to achieve the extensions of the original theory. The method we propose is designed to prune the search space and reduce the number of calls to propositional provability procedure. It also constitutes a simple and uniform framework for the design of parallel algorithms for computing extensions.
For the entire collection see [Zbl 0875.00116].A sphere world semantics for default reasoninghttps://zbmath.org/1496.683142022-11-17T18:59:28.764376Z"da Silva, João C. P."https://zbmath.org/authors/?q=ai:da-silva.joao-c-p"Veloso, Sheila R. M."https://zbmath.org/authors/?q=ai:veloso.sheila-r-mSummary: The purpose of this paper is to show that we can consider an extension of a Reiter's default theory \((\mathrm{W}, \Delta)\) as the expansion of the (belief) set W by some maximal set D of consequences of defaults in \(\Delta\). We will use the model of revision functions proposed by \textit{A. Grove} [J. Philos. Log. 17, No. 2, 157--170 (1988; Zbl 0639.03025)] to characterize the models of the extensions in \textit{R. Reiter}'s default logic [Artif. Intell. 13, 81--132 (1980; Zbl 0435.68069)], showing that the class of models we obtain in the special case when a revision is an expansion (i.e., a new sentence A is added to a belief set K and no sentence in K is deleted), is the class of models of some extension in Reiter's default logic. Furthermore, we will show that the class of models in Poole's system for default reasoning can be characterized in the same way.
For the entire collection see [Zbl 0875.00116].The complexity landscape of outcome determination in judgment aggregationhttps://zbmath.org/1496.683152022-11-17T18:59:28.764376Z"Endriss, Ulle"https://zbmath.org/authors/?q=ai:endriss.ulle"de Haan, Ronald"https://zbmath.org/authors/?q=ai:de-haan.ronald"Lang, Jérôme"https://zbmath.org/authors/?q=ai:lang.jerome"Slavkovik, Marija"https://zbmath.org/authors/?q=ai:slavkovik.marijaSummary: We provide a comprehensive analysis of the computational complexity of the outcome determination problem for the most important aggregation rules proposed in the literature on logic-based judgment aggregation. Judgment aggregation is a powerful and flexible framework for studying problems of collective decision making that has attracted interest in a range of disciplines, including Legal Theory, Philosophy, Economics, Political Science, and Artificial Intelligence. The problem of computing the outcome for a given list of individual judgments to be aggregated into a single collective judgment is the most fundamental algorithmic challenge arising in this context. Our analysis applies to several different variants of the basic framework of judgment aggregation that have been discussed in the literature, as well as to a new framework that encompasses all existing such frameworks in terms of expressive power and representational succinctness.Hypothetical updates, priority and inconsistency in a logic programming languagehttps://zbmath.org/1496.683162022-11-17T18:59:28.764376Z"Gabbay, D."https://zbmath.org/authors/?q=ai:gabbay.dov-m"Giordano, L."https://zbmath.org/authors/?q=ai:giordano.laura"Martelli, A."https://zbmath.org/authors/?q=ai:martelli.alberto"Olivetti, N."https://zbmath.org/authors/?q=ai:olivetti.nicolaSummary: In this paper we propose a logic programming language which supports hypothetical updates together with integrity constraints. 
The language allows sequences of updates by sets of atoms and it makes use of a revision mechanism to restore consistency when an update violates some integrity constraint. The revision policy we adopt is based on the simple idea that more recent information is preferred to earlier information. This language can be used to perform several types of defeasible reasoning. We define a goal-directed proof procedure for the language and develop a logical characterization in a modal logic by introducing an abductive semantics.
For the entire collection see [Zbl 0875.00116].On the correspondence between abstract dialectical frameworks and nonmonotonic conditional logicshttps://zbmath.org/1496.683172022-11-17T18:59:28.764376Z"Heyninck, Jesse"https://zbmath.org/authors/?q=ai:heyninck.jesse"Kern-Isberner, Gabriele"https://zbmath.org/authors/?q=ai:kern-isberner.gabriele"Thimm, Matthias"https://zbmath.org/authors/?q=ai:thimm.matthias"Skiba, Kenneth"https://zbmath.org/authors/?q=ai:skiba.kennethThis paper explores the interrelation between formal argumentation and nonmonotonic logics. More precisely, it aims at deepening the understanding of that relationship by investigating characterizations of Abstract Dialectical Frameworks (ADFs) in Conditional Logics (CL) for nonmonotonic reasoning. This work continues and extends the papers [\textit{G. Kern-Isberner} and \textit{M. Thimm}, Tributes 37, 369--382 (2018; Zbl 1440.03052); \textit{J. Heyninck}, J. Appl. Log. - IfCoLog J. Log. Appl. 6, No. 2, 317--357 (2019; Zbl 07594169)].
An Abstract Dialectical Framework (ADF) is a tuple \(D=(S,L,C)\), where \(S\) is a set of statements, \(L \subseteq S \times S\) is a set of links, and \(C=\{C_s\}_{s \in S}\) is a set of total functions \(C_s: 2^{\operatorname{par}_D(s)} \to \{\top,\bot\}\), with \(\operatorname{par}_D(s)=\{s^\prime \in S \mid (s^\prime, s)\in L\}\). Informally, an ADF can be seen as a directed graph whose nodes represent statements or arguments which can be accepted or not. The set \(\operatorname{par}_D(s)\) is, then, the set of the parent nodes of \(s\) and \(C_s\) is an acceptance function which determines the acceptance status of \(s\) depending on the acceptance status of its parents in \(D\). An ADF \(D = (S,L,C)\) is interpreted through 3-valued interpretations \(v : S \to \{\top,\bot, u\}\) which assign to each statement in \(S\) either the value \(\top\) (true, accepted), \(\bot\) (false, rejected), or \(u\) (unknown). With respect to an ADF \(D\), several relevant classes of interpretations are considered, namely the classes of 2-valued models and of complete, preferred, grounded and stable interpretations of \(D\), which are denoted by \(\mathtt{2mod}(D)\), \(\mathtt{complete}(D)\), \(\mathtt{preferred}(D)\), \(\mathtt{grounded}(D)\), and \(\mathtt{stable}(D)\), respectively. To some of these classes of interpretations a consequence relation for ADFs is associated. More precisely, given an ADF \(D = (S,L,C)\), the following consequence relations are defined:
\[
D \,\mid\!\sim\!_{\mathtt{sem}}^\cap s[\neg s] \; \text{ iff } \; v(s) = \top[\bot] \; \text{ for all } \; v \in\mathtt{sem}(D),
\]
where \(s \in S\) and \(\mathtt{sem} \in \{\mathtt{2mod}, \mathtt{preferred}, \mathtt{grounded}, \mathtt{stable}\}\).
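These definitions can be checked by brute force on a small example. The following sketch (the two-statement ADF is invented for illustration and is not from the paper) enumerates the 2-valued models, i.e. the total interpretations \(v\) with \(v(s)=C_s(v|_{\operatorname{par}_D(s)})\) for every \(s\), and tests the skeptical consequence relation for \(\mathtt{sem}=\mathtt{2mod}\):

```python
from itertools import product

# A tiny hypothetical ADF over statements {a, b}:
#   a has no parents and is unconditionally accepted (C_a is constantly true),
#   b's sole parent is a, and b is accepted iff a is accepted.
S = ["a", "b"]
parents = {"a": [], "b": ["a"]}
C = {
    "a": lambda env: True,       # constant acceptance condition
    "b": lambda env: env["a"],   # b accepted iff parent a is accepted
}

def two_valued_models(S, parents, C):
    """Brute-force the 2-valued models: interpretations v with
    v(s) == C_s(v restricted to par(s)) for every statement s."""
    models = []
    for values in product([True, False], repeat=len(S)):
        v = dict(zip(S, values))
        if all(v[s] == C[s]({p: v[p] for p in parents[s]}) for s in S):
            models.append(v)
    return models

def skeptically_accepted(s, models):
    """s is a skeptical consequence iff v(s) is true in every model v."""
    return all(v[s] for v in models)
```

Here the unique 2-valued model accepts both statements, so both `a` and `b` are skeptical consequences; with more models, the intersection semantics would only accept statements true in all of them.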
A conditional is essentially an expression of the form \((\psi|\phi)\), which is interpreted with the informal meaning ``if \(\phi\) is true then, usually, \(\psi\) is true as well''. Ordinal Conditional Functions (OCFs), also called ranking functions, which were proposed in [\textit{W. Spohn}, ``Ordinal conditional functions: a dynamic theory of epistemic states'', in: Causation in decision, belief change, and statistics. Dordrecht: Springer. 105--134 (1988)], constitute a convenient formalism for reasoning about conditionals. In fact, given an OCF \(\kappa\), we say that \((\psi|\phi)\) is accepted by \(\kappa\) iff \(\phi \,\mid\!\sim\!_\kappa \psi\) iff \(\kappa(\phi \land \psi) < \kappa(\phi \land \neg\psi)\). Furthermore, for an OCF \(\kappa\), \(\operatorname{Bel}(\kappa)\) denotes the set of sentences that are satisfied by all most plausible worlds (i.e. by all worlds \(\omega\) such that \(\kappa(\omega)=0\)). A specific example of a nonmonotonic inference system whose inference relation is defined using conditionals and OCFs is system Z [\textit{M. Goldszmidt} and \textit{J. Pearl}, Artif. Intell. 84, No. 1--2, 57--112 (1996; Zbl 07591264)]. The consequence relation in system Z is defined by
\[
\Delta \,\mid\!\sim\!\!\!_\mathrm{Z} \; \phi \; \text{ iff } \; \phi \in \operatorname{Bel}(\kappa_\Delta^\mathrm{Z}),
\]
where \(\Delta\) is a set of conditionals, \(\phi\) is a sentence and \(\kappa_\Delta^\mathrm{Z}\) is a ranking function defined by means of \(\Delta\).
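For concreteness, the acceptance test \(\kappa(\phi \land \psi) < \kappa(\phi \land \neg\psi)\) and the belief set \(\operatorname{Bel}(\kappa)\) can be illustrated with a toy ranking function over two atoms (the ranks below are invented for illustration and do not come from the paper):

```python
from itertools import product

# Worlds over atoms p, q; rank 0 = most plausible.
worlds = [dict(zip(("p", "q"), vals)) for vals in product([True, False], repeat=2)]
kappa = {
    (True, True): 0,    # p,  q : most plausible
    (True, False): 2,   # p, ~q : very implausible
    (False, True): 1,
    (False, False): 0,  # ~p, ~q : most plausible
}

def rank(formula):
    """kappa(phi) = minimal rank of a world satisfying phi (inf if none)."""
    ranks = [kappa[(w["p"], w["q"])] for w in worlds if formula(w)]
    return min(ranks) if ranks else float("inf")

def accepts(antecedent, consequent):
    """(psi|phi) is accepted by kappa iff kappa(phi & psi) < kappa(phi & ~psi)."""
    return rank(lambda w: antecedent(w) and consequent(w)) < \
           rank(lambda w: antecedent(w) and not consequent(w))

def believed(formula):
    """phi is in Bel(kappa) iff phi holds in every world of rank 0."""
    return all(formula(w) for w in worlds if kappa[(w["p"], w["q"])] == 0)
```

With these ranks, the conditional \((q|p)\) is accepted, since \(\kappa(p \land q)=0 < 2=\kappa(p \land \neg q)\), while \(q\) itself is not believed because one of the rank-0 worlds falsifies it; this separation of conditional acceptance from plain belief is what the adequacy notions below exploit.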
The central question underlying this paper is, in the authors' words, ``whether, and how we can interpret abstract dialectical frameworks in terms of conditional logic so that acceptance in the argumentative system is defined by a nonmonotonic inference relation based on conditionals.''
The study starts with the presentation of seven translations of ADFs into conditional knowledge bases and the introduction of the notions of Z-adequacy and OCF-adequacy for such translations. Given a (fixed) set of atoms \(S\), such translations are basically mappings from the set of all ADFs \(D = (S,L,C)\) to the power set of the set of all conditionals over the propositional language generated by \(S\). For any semantics \(\mathtt{sem} \in \{\mathtt{2mod}, \mathtt{preferred}, \mathtt{grounded}, \mathtt{stable}\}\), a translation \(\mathfrak{T}\) is:
-- OCF-adequate with respect to semantics \(\mathtt{sem}\) if for every \(D = (S,L,C)\) there is some OCF \(\kappa\), s.t. \(\kappa\vDash \mathfrak{T}(D)\) and for every \(s \in S\) it holds that: \(D \,\mid\!\sim\!_{\mathtt{sem}}^\cap s\) iff \(s \in \operatorname{Bel}(\kappa)\).
-- Z-adequate with respect to semantics \(\mathtt{sem}\) if for every \(D = (S,L,C)\) and every \(s \in S\) it holds that: \(D \,\mid\!\sim\!_{\mathtt{sem}}^\cap s\) iff \(\mathfrak{T}(D) \,\mid\!\sim\!\!\!_\mathrm{Z} \; s\).
Then the Z- and OCF-adequacy of all the proposed translations are investigated with respect to each of the above-mentioned ADF semantics: 2-valued models, stable, preferred and grounded.
The main results presented (and proved) in this paper can be summarized as follows:
\begin{itemize}
\item All the translations studied in this paper are OCF-adequate with respect to the 2-valued models semantics. Furthermore, five of them are also Z-adequate for that semantics.
\item The translations studied in this paper are neither Z-adequate nor OCF-adequate for the grounded semantics, nor, in general, for the preferred and stable semantics.
\item The five translations which are both Z-adequate and OCF-adequate with respect to the 2-valued models semantics are also:
\begin{itemize}
\item Z- and OCF-adequate with respect to the stable semantics for two specific subclasses of ADFs;
\item Z- and OCF-adequate with respect to the preferred semantics for one specific subclass of ADFs;
\item Z- and OCF-adequate with respect to the grounded semantics for the specific subclass of acyclic ADFs (formed by the ADFs whose corresponding directed graph is acyclic);
\end{itemize}
\end{itemize}
It is worth mentioning here that the results summarized above are presented schematically, in a concise yet more detailed form, in Table 1 of the paper under review. However, it must be noted that in that table the title of the third column should be ``Stable'' (instead of ``Preferred'') and the title of the fourth column should be ``Preferred'' (instead of ``Stable'').
To finish, we mention the following additional contributions of this paper:
-- For five of the proposed translations some results have been presented which establish conditions for the consistency of the proposed translations (more precisely, these results identify conditions on an ADF \(D\) which are sufficient for assuring the consistency of the set of conditionals which is the image of \(D\) by the translation under consideration). These results are summarized in Table 2 of the paper under review.
-- It has been shown that the proposed translations satisfy some properties which are desirable for translations between non-monotonic formalisms.
Reviewer: Mauricio Reis (Funchal)Skeptical rational extensionshttps://zbmath.org/1496.683182022-11-17T18:59:28.764376Z"Mikitiuk, Artur"https://zbmath.org/authors/?q=ai:mikitiuk.artur"Truszczyński, Miroslaw"https://zbmath.org/authors/?q=ai:truszczynski.miroslawSummary: In this paper we propose a version of default logic with the following two properties: (1) defaults with mutually inconsistent justifications are never used together in constructing a set of default consequences of a theory; (2) the reasoning formalized by our logic is related to the traditional skeptical mode of default reasoning. Our logic is based on the concept of a \textit{skeptical rational extension}. We give characterization results for skeptical rational extensions and an algorithm to compute them. We present some complexity results. Our main goal is to characterize cases when the class of skeptical rational extensions is closed under intersection. In the case of normal default theories our logic coincides with the standard skeptical reasoning with extensions. In the case of seminormal default theories our formalism provides a description of the standard skeptical reasoning with rational extensions.
For the entire collection see [Zbl 0875.00116].Situation calculus specifications for event calculus logic programshttps://zbmath.org/1496.683192022-11-17T18:59:28.764376Z"Miller, Rob"https://zbmath.org/authors/?q=ai:miller.robSummary: A version of the Situation Calculus is presented which is able to deal with information about the actual occurrence of actions in time. Baker's solution to the frame problem using circumscription is adapted to enable default reasoning about action occurrences, as well as about the effects of actions. Two translations of Situation Calculus style theories into Event Calculus style logic programs are defined, and results are given on the soundness and completeness of the translations.
For the entire collection see [Zbl 0875.00116].Nonmonotonicity and answer set inferencehttps://zbmath.org/1496.683202022-11-17T18:59:28.764376Z"Pearce, David"https://zbmath.org/authors/?q=ai:pearce.david-g|pearce.david-j|pearce.david-a-jSummary: The study of abstract properties of nonmonotonic inference has thrown up a number of general conditions on inference relations that are often thought to be desirable, and sometimes even essential, for an adequate system of nonmonotonic reasoning. However, several of the key conditions on inference that have been proposed in the literature make explicit reference to the \textit{classical} concept of logical consequence, and there is a general tendency to focus attention on inference operations that are \textit{supraclassical} in the sense of extending classical consequence. Against this trend I argue for the importance of systems that are not supraclassical. I suggest that their inference relations should measure up to adequacy conditions that are more sensitive to the style of reasoning for which they are intended, and which take account of the underlying logic of the monotonic subsystem, if such a subsystem can be identified. I illustrate these points by considering some properties of the inference relation associated with the answer set semantics of extended disjunctive databases.
For the entire collection see [Zbl 0875.00116].Nonmonotonic inheritance, argumentation and logic programminghttps://zbmath.org/1496.683212022-11-17T18:59:28.764376Z"Phan Minh Dung"https://zbmath.org/authors/?q=ai:phan-minh-dung."Tran Cao Son"https://zbmath.org/authors/?q=ai:tran-cao-son.Summary: We study the conceptual relationship between the semantics of nonmonotonic inheritance reasoning and argumentation. We show that the credulous semantics of nonmonotonic inheritance network can be captured by the stable semantics of argumentation. We present a transformation of nonmonotonic inheritance networks into equivalent extended logic programs.
For the entire collection see [Zbl 0875.00116].The dynamics of group polarizationhttps://zbmath.org/1496.683222022-11-17T18:59:28.764376Z"Proietti, Carlo"https://zbmath.org/authors/?q=ai:proietti.carloSummary: Exchange of arguments in a discussion often makes individuals more radical about their initial opinion. This phenomenon is known as Group-induced Attitude Polarization. Byproducts of it are bipolarization effects, where the distance between the attitudes of two groups of individuals increases after the discussion. This paper is a first attempt to analyse the building blocks of information exchange and information update that induce polarization. I use Argumentation Frameworks as a tool for encoding the information of agents in a debate relative to a given issue \(a\). I then adapt a specific measure of the degree of acceptability of an opinion
[\textit{P.-A. Matt} and \textit{F. Toni}, Lect. Notes Comput. Sci. 5293, 285--297 (2008; Zbl 1178.68566)].
Changes in the degree of acceptability of \(a\), prior and posterior to information exchange, serve here as an indicator of polarization. I finally show that the way agents transmit and update information has a decisive impact on polarization and bipolarization.
For the entire collection see [Zbl 1369.68010].Embedding circumscriptive theories in general disjunctive programshttps://zbmath.org/1496.683232022-11-17T18:59:28.764376Z"Sakama, Chiaki"https://zbmath.org/authors/?q=ai:sakama.chiaki"Inoue, Katsumi"https://zbmath.org/authors/?q=ai:inoue.katsumiSummary: This paper presents a method of embedding circumscriptive theories in general disjunctive programs. In a general disjunctive program, negation as failure occurs not only in the body but in the head of a rule. In this setting, minimized predicates of a circumscriptive theory are specified using the negation in the body, while fixed and varying predicates are expressed by the negation in the head. Moreover, the translation implies a close relationship between circumscription and abductive logic programming. That is, fixed and varying predicates in a circumscriptive theory are also viewed as abducible predicates in an abductive disjunctive program. Our method of translating circumscription into logic programming is fairly general compared with the existing approaches and exploits new applications of logic programming for representing commonsense knowledge.
For the entire collection see [Zbl 0875.00116].Epistemic argumentation framework: theory and computationhttps://zbmath.org/1496.683242022-11-17T18:59:28.764376Z"Sakama, Chiaki"https://zbmath.org/authors/?q=ai:sakama.chiaki"Son, Tran Cao"https://zbmath.org/authors/?q=ai:son.tran-caoSummary: The paper introduces the notion of an \textit{epistemic argumentation framework} (EAF) as a means to integrate the beliefs of a reasoner with argumentation. Intuitively, an EAF encodes the beliefs of an agent who reasons about arguments. Formally, an EAF is a pair of an argumentation framework and an \textit{epistemic constraint}. The semantics of the EAF is defined by the notion of an \(\omega\)-\textit{epistemic labelling set}, where \(\omega\) is complete, stable, grounded, or preferred, which is a set of \(\omega\)-labellings that collectively satisfies the epistemic constraint of the EAF. The paper shows how EAF can represent different views of reasoners on the same argumentation framework. It also shows how preferences and multi-agent argumentation can be represented in EAF. Finally, the paper discusses complexity issues and computation using epistemic logic programming.Computing the acceptability semanticshttps://zbmath.org/1496.683252022-11-17T18:59:28.764376Z"Toni, Francesca"https://zbmath.org/authors/?q=ai:toni.francesca"Kakas, Antonios C."https://zbmath.org/authors/?q=ai:kakas.antonis-cSummary: We present a proof theory and a proof procedure for nonmonotonic reasoning based on the acceptability semantics for logic programming, formulated in an argumentation framework. This proof theory and procedure are defined as generalisations of the corresponding proof theories and procedures for the stable theory and preferred extension semantics. In turn, these can be seen as generalisations of the Eshghi-Kowalski abductive procedure for logic programming.
For the entire collection see [Zbl 0875.00116].Arrow update synthesishttps://zbmath.org/1496.683262022-11-17T18:59:28.764376Z"van Ditmarsch, Hans"https://zbmath.org/authors/?q=ai:van-ditmarsch.hans-pieter"van der Hoek, Wiebe"https://zbmath.org/authors/?q=ai:van-der-hoek.wiebe"Kooi, Barteld"https://zbmath.org/authors/?q=ai:kooi.barteld-pieter"Kuijer, Louwe B."https://zbmath.org/authors/?q=ai:kuijer.louwe-boukeSummary: In this contribution we present arbitrary arrow update model logic (AAUML). This is a dynamic epistemic logic or update logic. In update logics, static/basic modalities are interpreted on a given relational model whereas dynamic/update modalities induce transformations (updates) of relational models. In AAUML the update modalities formalize the execution of arrow update models, and there is also a modality for quantification over arrow update models. Arrow update models are an alternative to the well-known action models. We provide an axiomatization of AAUML. The axiomatization is a rewrite system that allows one to eliminate arrow update modalities from any given formula, while preserving truth. Thus, AAUML is decidable and as expressive as the base multi-agent modal logic. Our main result is to establish arrow update synthesis: if there is an arrow update model after which \(\varphi\), we can construct (synthesize) that model from \(\varphi\).
We also point out some pregnant differences in update expressivity between arrow update logics, action model logics, and refinement modal logic.Abduction over 3-valued extended logic programshttps://zbmath.org/1496.683272022-11-17T18:59:28.764376Z"Viegas Damásio, Carlos"https://zbmath.org/authors/?q=ai:viegas-damasio.carlos"Moniz Pereira, Luís"https://zbmath.org/authors/?q=ai:moniz-pereira.luisFor the entire collection see [Zbl 0875.00116].Revision by communicationhttps://zbmath.org/1496.683282022-11-17T18:59:28.764376Z"Witteveen, Cees"https://zbmath.org/authors/?q=ai:witteveen.cees"van der Hoek, Wiebe"https://zbmath.org/authors/?q=ai:van-der-hoek.wiebeSummary: We deal with the problem of revising logic programs that, according to some non-monotonic semantics, do not have an acceptable model. We propose to study such revisions in a framework where a number of semantical agents is distinguished, each agent associated with a different semantics but all agents interpreting the same program. If an agent cannot find an acceptable model for the program, he has to perform program revision. For logic programs, the different agents can be partially ordered by inferential strength. We propose a revision framework where an agent may consult his weaker colleagues, adds the information they can infer to the program and tries to find an acceptable model for the expanded program. In this paper we will concentrate on the kind of information needed to find successful revisions of programs. We point out some parameters along which our framework can be analyzed and suggest some further research.
For the entire collection see [Zbl 0875.00116].A polynomial reduction of forks into logic programshttps://zbmath.org/1496.683292022-11-17T18:59:28.764376Z"Aguado, Felicidad"https://zbmath.org/authors/?q=ai:aguado.felicidad"Cabalar, Pedro"https://zbmath.org/authors/?q=ai:cabalar.pedro"Fandinno, Jorge"https://zbmath.org/authors/?q=ai:fandinno.jorge"Pearce, David"https://zbmath.org/authors/?q=ai:pearce.david-j|pearce.david-a-j|pearce.david-g"Pérez, Gilberto"https://zbmath.org/authors/?q=ai:perez.gilberto"Vidal, Concepción"https://zbmath.org/authors/?q=ai:vidal.concepcionSummary: In this research note we present additional results for an earlier published paper
[the authors, Artif. Intell. 275, 575--601 (2019; Zbl 1478.68338)].
There, we studied the problem of \textit{projective strong equivalence} (PSE) of logic programs, that is, checking whether two logic programs (or propositional formulas) have the same behaviour (under the stable model semantics) regardless of a common context and ignoring the effect of local auxiliary atoms. PSE is related to another problem called \textit{strongly persistent forgetting} that consists in keeping a program's behaviour after removing its auxiliary atoms, something that is known to be not always possible in Answer Set Programming. In [loc. cit.], we introduced a new connective `|' called \textit{fork} and proved that, in this extended language, it is always possible to forget auxiliary atoms, but at the price of obtaining a result containing forks. We also proved that forks can be translated back to logic programs introducing new hidden auxiliary atoms, but this translation was exponential in the worst case. In this note we provide a new polynomial translation of arbitrary forks into regular programs that allows us to prove that brave and cautious reasoning with forks has the same complexity as that of ordinary (disjunctive) logic programs and paves the way for an efficient implementation of forks. 
To this aim, we rely on a pair of new PSE invariance properties.Modular structures and atomic decomposition in ontologieshttps://zbmath.org/1496.683302022-11-17T18:59:28.764376Z"Del Vescovo, Chiara"https://zbmath.org/authors/?q=ai:del-vescovo.chiara"Horridge, Matthew"https://zbmath.org/authors/?q=ai:horridge.matthew"Parsia, Bijan"https://zbmath.org/authors/?q=ai:parsia.bijan"Sattler, Uli"https://zbmath.org/authors/?q=ai:sattler.uli"Schneider, Thomas"https://zbmath.org/authors/?q=ai:schneider.thomas-r|schneider.thomas.1|schneider.thomas.2|schneider.thomas|schneider.thomas-d"Zhao, Haoruo"https://zbmath.org/authors/?q=ai:zhao.haoruoSummary: With the growth of ontologies used in diverse application areas, the need for module extraction and modularisation techniques has risen. The notion of the \textit{modular structure} of an ontology, which comprises a suitable set of base modules together with their logical dependencies, has the potential to help users and developers in comprehending, sharing, and maintaining an ontology. We have developed a new modular structure, called atomic decomposition (AD), which is based on modules that provide strong logical properties, such as locality-based modules. In this article, we present the theoretical foundations of AD, review its logical and computational properties, discuss its suitability as a modular structure, and report on an experimental evaluation of AD. In addition, we discuss the concept of a modular structure in ontology engineering and provide a survey of existing decomposition approaches.A terminological interpretation of (abductive) logic programminghttps://zbmath.org/1496.683312022-11-17T18:59:28.764376Z"Denecker, Marc"https://zbmath.org/authors/?q=ai:denecker.marcSummary: The logic program formalism is commonly viewed as a modal or default logic. In this paper, we propose an alternative interpretation of the formalism as a terminological logic. 
A terminological logic is designed to represent two different forms of knowledge. A TBox represents definitions for a set of concepts. An ABox represents the \textit{assertional knowledge} of the expert. In our interpretation, a logic program is a TBox providing definitions for all predicates; this interpretation is present already in Clark's completion semantics. We extend the logic program formalism such that some predicates can be left undefined and use classical logic as the language for the ABox. The resulting logic can be seen as an alternative interpretation of abductive logic program formalism. We study the expressivity of the formalism for representing uncertainty by proposing solutions for problems in temporal reasoning, with null values and open domain knowledge.
For the entire collection see [Zbl 0875.00116].Complexity results for abductive logic programminghttps://zbmath.org/1496.683322022-11-17T18:59:28.764376Z"Eiter, Thomas"https://zbmath.org/authors/?q=ai:eiter.thomas"Gottlob, Georg"https://zbmath.org/authors/?q=ai:gottlob.georg"Leone, Nicola"https://zbmath.org/authors/?q=ai:leone.nicolaSummary: In this paper, we argue that logic programming semantics can be more meaningful for abductive reasoning than classical inference by providing examples from the area of knowledge representation and reasoning. The main part of the paper addresses the issue of the computational complexity of the principal decisional problems in abductive reasoning, which are: Given an instance of an abduction problem (i) does the problem have solution (i.e., an explanation); (ii) does a given hypothesis belong to some explanation; and (iii) does a given hypothesis belong to all explanations. These problems are investigated here for the stable model semantics of normal logic programs.
For the entire collection see [Zbl 0875.00116].Annotated revision specification programshttps://zbmath.org/1496.683332022-11-17T18:59:28.764376Z"Fitting, Melvin"https://zbmath.org/authors/?q=ai:fitting.melvin-cSummary: \textit{V. W. Marek} and \textit{M. Truszczyński} [Lect. Notes Comput. Sci. 838, 122--136 (1994; Zbl 0988.68626)] have introduced an interesting mechanism for specifying revisions of knowledge bases by means of logic programs. Here we extend their idea to allow for confidence factors, multiple experts, and so on. The appropriate programming mechanism turns out to be \textit{annotated logic programs} and the appropriate semantic tool, \textit{bilattices}. This may be the first example of a setting in which both notions arise naturally, and complement each other. We also show that several of the results of Marek and Truszczyński turn out to be essentially algebraic, once the proper setting has been formulated.
For the entire collection see [Zbl 0875.00116].Credibility-limited base revision: new classes and their characterizationshttps://zbmath.org/1496.683342022-11-17T18:59:28.764376Z"Garapa, Marco"https://zbmath.org/authors/?q=ai:garapa.marco"Fermé, Eduardo"https://zbmath.org/authors/?q=ai:ferme.eduardo-leopoldo"Reis, Maurício"https://zbmath.org/authors/?q=ai:reis.mauricio-d-lSummary: In this paper we study a kind of operator -- known as credibility-limited base revisions -- which addresses two of the main issues that have been pointed out to the AGM model of belief change. Indeed, on the one hand, these operators are defined on belief bases (rather than belief sets) and, on the other hand, they are constructed with the underlying idea that not all new information is accepted. We propose twenty different classes of credibility-limited base revision operators and obtain axiomatic characterizations for each of them. Additionally we thoroughly investigate the interrelations (in the sense of inclusion) among all those classes. More precisely, we analyse whether each one of those classes is or is not (strictly) contained in each of the remaining ones.Information algebrashttps://zbmath.org/1496.683352022-11-17T18:59:28.764376Z"Orlowska, Ewa"https://zbmath.org/authors/?q=ai:orlowska.ewa-sFor the entire collection see [Zbl 1492.68008].Visualising lattices with tabular diagramshttps://zbmath.org/1496.683362022-11-17T18:59:28.764376Z"Priss, Uta"https://zbmath.org/authors/?q=ai:priss.utaSummary: Euler and Hasse diagrams are well-known visualisations of sets. This paper introduces a novel type of visualisation, Tabular diagrams, which is essentially a type of Euler diagram where lines have been omitted or a 2-dimensional Linear diagram. Tabular diagrams are utilised to visualise lattices in comparison to Euler and Hasse diagrams. For that purpose, lattice terminology is applied to all three types of diagrams.
For the entire collection see [Zbl 1487.68008].Update by means of inference ruleshttps://zbmath.org/1496.683372022-11-17T18:59:28.764376Z"Przymusinski, Teodor C."https://zbmath.org/authors/?q=ai:przymusinski.teodor-c"Turner, Hudson"https://zbmath.org/authors/?q=ai:turner.hudsonFor the entire collection see [Zbl 0875.00116].Lifted Bayesian filtering in multiset rewriting systemshttps://zbmath.org/1496.683382022-11-17T18:59:28.764376Z"Lüdtke, Stefan"https://zbmath.org/authors/?q=ai:ludtke.stefan"Kirste, Thomas"https://zbmath.org/authors/?q=ai:kirste.thomasSummary: We present a model for Bayesian filtering (BF) in discrete dynamic systems where multiple entities (inter)act, i.e. where the system dynamics is naturally described by a multiset rewriting system (MRS). Typically, BF in such situations is computationally expensive due to the high number of discrete states that need to be maintained explicitly.
We devise a lifted state representation, based on a suitable decomposition of multiset states, such that some factors of the distribution are exchangeable and thus afford an efficient representation. Intuitively, this representation groups together similar entities whose properties follow an exchangeable joint distribution. Subsequently, we introduce a BF algorithm that works directly on lifted states, without resorting to the original, much larger ground representation.
This algorithm directly lends itself to approximate versions by limiting the number of explicitly represented lifted states in the posterior. We show empirically that the lifted representation can lead to a factorial reduction in the representational complexity of the distribution, and in the approximate cases can lead to a lower variance of the estimate and a lower estimation error compared to the original, ground representation.Designing a foundation of fuzzy rules based on numerical datahttps://zbmath.org/1496.683392022-11-17T18:59:28.764376Z"Mukhamedieva, D. T."https://zbmath.org/authors/?q=ai:mukhamedieva.d-t(no abstract)Detailed evaluation of fuzzy sets in rule conditions as a key for accurate and explainable rule-based systemshttps://zbmath.org/1496.683402022-11-17T18:59:28.764376Z"Porębski, Sebastian"https://zbmath.org/authors/?q=ai:porebski.sebastianSummary: This work reports research seeking a compromise between the accuracy and interpretability of decision support systems. The most accurate of these systems are largely incomprehensible. On the other hand, rule systems are interpretable but their weakness is accuracy. The search for balanced solutions is a necessary task if we want to have tools that can support cooperation between a knowledge engineer and a human expert (or, ultimately, a user). This work presents the results of research focused on the evaluation of fuzzy sets as a basic element of rule sets. Evaluating how well fuzzy sets match the training data makes it possible to choose the best components at the start of rule extraction. This approach results in high testing accuracy and maintains satisfactory interpretability.
For the entire collection see [Zbl 1478.03003].Automated non-monotonic reasoning in System \textbf{P}https://zbmath.org/1496.683412022-11-17T18:59:28.764376Z"Stojanović, Tatjana"https://zbmath.org/authors/?q=ai:stojanovic.tatjana"Ikodinović, Nebojša"https://zbmath.org/authors/?q=ai:ikodinovic.nebojsa"Davidović, Tatjana"https://zbmath.org/authors/?q=ai:davidovic.tatjana"Ognjanović, Zoran"https://zbmath.org/authors/?q=ai:ognjanovic.zoranSummary: This paper presents a novel approach to automated reasoning in System \textbf{P}. System \textbf{P} axiomatizes a set of core properties that describe reasoning with defeasible assertions (defaults) of the form: if \(\alpha\) then normally (usually or typically) \(\beta\). A logic with approximate conditional probabilities is used for modeling default rules. That representation enables reducing the satisfiability problem for default reasoning to a (non)linear programming problem. The complexity of the obtained instances requires the application of optimization approaches. The main heuristic that we use is Bee Colony Optimization (BCO). As an alternative to BCO, we use the Simplex method and the Fourier-Motzkin elimination method to solve linear programming problems. All approaches are tested on a set of default reasoning examples that can be found in the literature. The general impression is that the Fourier-Motzkin elimination procedure is not suitable for practical use due to its substantial memory usage and time-consuming execution; the Simplex method provides useful results for some of the tested examples; and the heuristic approach turns out to be the most appropriate in terms of both success rate and time needed to reach conclusions.
In addition, the BCO method was tested on a set of randomly generated examples of larger dimensions, illustrating its practical usability.Weighted evidence combination based on improved conflict factorhttps://zbmath.org/1496.683422022-11-17T18:59:28.764376Z"Xing, Xiaochen"https://zbmath.org/authors/?q=ai:xing.xiaochen"Cai, Yuanwen"https://zbmath.org/authors/?q=ai:cai.yuanwen"Zhao, Zhengyu"https://zbmath.org/authors/?q=ai:zhao.zhengyu"Cheng, Long"https://zbmath.org/authors/?q=ai:cheng.longSummary: Combining highly conflicting evidence on the basis of D-S evidence theory has well-known flaws. Domestic and international research on conflict-evidence combination is summarized and analyzed in detail, leading to the conclusion that methods which modify the evidence before combination are more useful. Obtaining an effective measure of evidence conflict is the first step of conflict-evidence combination. The existing conflict measures are summarized and their main problems are analyzed. An improved evidence conflict measure factor called Mconf is put forward, based on previous research on conflict-evidence combination. Mconf is built mainly from a modified evidence distance, \(md_{BPA}\), and the combination conflict \(k\). A weighted evidence combination method based on Mconf is proposed.
A typical case is used to validate the proposed method, and the combination results show that it is effective.Variational models for color image correction inspired by visual perception and neurosciencehttps://zbmath.org/1496.683432022-11-17T18:59:28.764376Z"Batard, Thomas"https://zbmath.org/authors/?q=ai:batard.thomas"Hertrich, Johannes"https://zbmath.org/authors/?q=ai:hertrich.johannes"Steidl, Gabriele"https://zbmath.org/authors/?q=ai:steidl.gabrieleSummary: Reproducing the perception of a real-world scene on a display device is a very challenging task which requires an understanding of the camera processing pipeline, the display process, and the way the human visual system processes the light it captures. Mathematical models based on psychophysical and physiological laws on color vision, named Retinex, provide efficient tools to handle degradations produced during the camera processing pipeline, such as contrast reduction. In particular, \textit{T. Batard} and \textit{M. Bertalmío} [J. Math. Imaging Vis. 60, No. 6, 849--881 (2018; Zbl 1437.94007)] described some psychophysical laws on brightness perception as covariant derivatives, included them into a variational model, and observed that the quality of the color image correction is correlated with the accuracy of the vision model it includes. Based on this observation, we postulate that this model can be improved by including more accurate data on vision, with special attention here to visual neuroscience. Then, inspired by the presence of neurons in area V1 of the visual cortex responding to different visual attributes, such as orientation, color or movement, to name a few, and of horizontal connections modeling the interactions between those neurons, we construct two variational models to process both local (edges, textures) and global (contrast) features.
This is an improvement with respect to the model of Batard and Bertalmío as the latter cannot process local and global features independently and simultaneously. Finally, we conduct experiments on color images which corroborate the improvement provided by the new models.Hexagonality as a new shape-based descriptor of objecthttps://zbmath.org/1496.683442022-11-17T18:59:28.764376Z"Ilić, Vladimir"https://zbmath.org/authors/?q=ai:ilic.vladimir"Ralević, Nebojša M."https://zbmath.org/authors/?q=ai:ralevic.nebojsa-m-ralevicSummary: In this paper, we define a new shape-based measure which evaluates how much a given shape is hexagonal. Such an introduced measure ranges through the interval \((0, 1]\) and reaches the maximal possible value 1 if and only if the shape considered is a hexagon. The new measure is also invariant with respect to rotation, translation and scaling transformations. A number of experiments, performed on both synthetic and real image data, are shown in order to confirm theoretical observations and illustrate the behavior of the new measure. The new hexagonality measure also provides several useful side results whose theoretical properties are discussed and experimentally evaluated. As side results, we obtain a new method that computes the shape orientation as the direction which optimizes the new hexagonality measure and a new shape elongation measure which computes the elongation of a given shape as the ratio of the lengths of the longer and shorter semi-axis of the appropriate associated hexagon. 
Several experiments on three well-known image datasets, namely the MPEG-7 CE-1, Swedish Leaf, and Galaxy Zoo datasets, are also provided to illustrate the effectiveness and benefits of the newly introduced shape measures.Angle aided circle detection based on randomized Hough transform and its application in welding spots detectionhttps://zbmath.org/1496.683452022-11-17T18:59:28.764376Z"Liang, Qiaokang"https://zbmath.org/authors/?q=ai:liang.qiaokang"Long, Jianyong"https://zbmath.org/authors/?q=ai:long.jianyong"Nan, Yang"https://zbmath.org/authors/?q=ai:nan.yang"Coppola, Gianmarc"https://zbmath.org/authors/?q=ai:coppola.gianmarc"Zou, Kunlin"https://zbmath.org/authors/?q=ai:zou.kunlin"Zhang, Dan"https://zbmath.org/authors/?q=ai:zhang.dan"Sun, Wei"https://zbmath.org/authors/?q=ai:sun.wei.3(no abstract)Geometric interpretation of the multi-solution phenomenon in the P3P problemhttps://zbmath.org/1496.683462022-11-17T18:59:28.764376Z"Wang, Bo"https://zbmath.org/authors/?q=ai:wang.bo.2|wang.bo|wang.bo.1"Hu, Hao"https://zbmath.org/authors/?q=ai:hu.hao"Zhang, Caixia"https://zbmath.org/authors/?q=ai:zhang.caixiaSummary: It is well known that the P3P problem can have 1, 2, 3 and at most 4 positive solutions under different configurations of its three control points and the position of the optical center. Since in real applications knowledge of the exact number of possible solutions is a prerequisite for selecting the right one, the study of the phenomenon of multiple solutions in the P3P problem has been an active topic since its very inception.
In this work, we provide some new geometric interpretations of the multi-solution phenomenon in the P3P problem, and our main results include: (1) the necessary and sufficient condition for the P3P problem to have a pair of side-sharing solutions is that the two optical centers of the solutions both lie on one of the three planes perpendicular to the base plane of the control points; (2) the necessary and sufficient condition for the P3P problem to have a pair of point-sharing solutions is that the two optical centers of the solutions both lie on one of the three so-called skewed danger cylinders; (3) if the P3P problem has other solutions in addition to a pair of side-sharing (point-sharing) solutions, these remaining solutions must be a point-sharing (side-sharing) pair. In a sense, the side-sharing pair and the point-sharing pair are companion pairs; (4) there indeed exist P3P problems with four completely distinct solutions, i.e., solutions sharing neither a side nor a point, settling a long-standing open question in the literature.
In sum, our results provide some new insights into the nature of the multi-solution phenomenon in the P3P problem, and in addition to their academic value, they could also serve as theoretical guidance for practitioners in real applications to avoid the occurrence of multiple solutions by properly arranging the control points.Enriched line graph: a new structure for searching language collocationshttps://zbmath.org/1496.683472022-11-17T18:59:28.764376Z"Criado-Alonso, Ángeles"https://zbmath.org/authors/?q=ai:criado-alonso.angeles"Battaner-Moro, Elena"https://zbmath.org/authors/?q=ai:battaner-moro.elena"Aleja, David"https://zbmath.org/authors/?q=ai:aleja.david"Romance, Miguel"https://zbmath.org/authors/?q=ai:romance.miguel"Criado, Regino"https://zbmath.org/authors/?q=ai:criado.reginoSummary: The specific terminology of a specialty language comes, essentially, from specific uses of already existing words and/or from specific combinations of words, so-called ``collocations''. In this work we introduce a new mathematical structure (enriched line graph) and a new methodology to extract properties and characteristics of a type of multilayer linguistic networks associated with these types of languages.
Specifically, this work is focused on the description of a methodology based on a variant of the PageRank algorithm to locate the linguistic collocations and on defining a new structure (enriched line graph) that can be interpreted as a certain type of ``interpolation'' between the original graph and its associated line graph, showing new results, properties and applications of this concept, and, in particular, certain characteristics of the specialty language produced by the scientific community of complex networks.A cross-lingual sentence pair interaction feature capture model based on pseudo-corpus and multilingual embeddinghttps://zbmath.org/1496.683482022-11-17T18:59:28.764376Z"Liu, Gang"https://zbmath.org/authors/?q=ai:liu.gang.2|liu.gang.7|liu.gang|liu.gang.6|liu.gang.3|liu.gang.1|liu.gang.4|liu.gang.5"Dong, Yichao"https://zbmath.org/authors/?q=ai:dong.yichao"Wang, Kai"https://zbmath.org/authors/?q=ai:wang.kai.1|wang.kai.3|wang.kai.2|wang.kai|wang.kai.4"Yan, Zhizheng"https://zbmath.org/authors/?q=ai:yan.zhizheng(no abstract)Assessment of text coherence by constructing the graph of semantic, lexical, and grammatical consistency of phrases of sentenceshttps://zbmath.org/1496.683492022-11-17T18:59:28.764376Z"Pogorilyy, S. D."https://zbmath.org/authors/?q=ai:pogorilyy.s-d"Kramov, A. A."https://zbmath.org/authors/?q=ai:kramov.a-aSummary: A graph-based method for the coherence assessment of texts, based on the analysis of the semantic, grammatical, and lexical consistency of sentence phrases, has been suggested. The experimental verification of the efficiency of the method has been performed on English-language corpora. The metrics obtained indicate that the suggested method outperforms other modern approaches.
The method can be applied to other languages by replacing the linguistic models according to the features of a certain language.A measure of \(Q\)-convexity for shape analysishttps://zbmath.org/1496.683502022-11-17T18:59:28.764376Z"Balázs, Péter"https://zbmath.org/authors/?q=ai:balazs.peter.1|balazs.peter.2"Brunetti, Sara"https://zbmath.org/authors/?q=ai:brunetti.saraSummary: In this paper, we study three basic novel measures of convexity for shape analysis. The convexity considered here is the so-called \(Q\)-convexity, that is, convexity by quadrants. The measures are based on the geometrical properties of \(Q\)-convex shapes and have the following features: (1) their values range from 0 to 1; (2) their values equal 1 if and only if the binary image is \(Q\)-convex; and (3) they are invariant by translation, reflection, and rotation by 90 degrees. We design a new algorithm for the computation of the measures whose time complexity is linear in the size of the binary image representation. We investigate the properties of our measures by solving object ranking problems and give an illustrative example of how these convexity descriptors can be utilized in classification problems.PPLite: zero-overhead encoding of NNC polyhedrahttps://zbmath.org/1496.683512022-11-17T18:59:28.764376Z"Becchi, Anna"https://zbmath.org/authors/?q=ai:becchi.anna"Zaffanella, Enea"https://zbmath.org/authors/?q=ai:zaffanella.eneaSummary: We present an alternative Double Description representation for the domain of NNC (not necessarily closed) polyhedra, together with the corresponding Chernikova-like conversion procedure. The representation uses no slack variable at all and provides a solution to a few technical issues caused by the encoding of an NNC polyhedron as a closed polyhedron in a higher dimension space. 
We then reconstruct the abstract domain of NNC polyhedra, providing all the operators needed to interface it with commonly available static analysis tools: while doing this, we highlight the efficiency gains enabled by the new representation and we show how the canonicity of the new representation allows for the specification of proper, semantic widening operators. A thorough experimental evaluation shows that our new abstract domain achieves significant efficiency improvements with respect to classical implementations for NNC polyhedra.2D geometric moment invariants from the point of view of the classical invariant theoryhttps://zbmath.org/1496.683522022-11-17T18:59:28.764376Z"Bedratyuk, Leonid"https://zbmath.org/authors/?q=ai:bedratyuk.leonidSummary: The aim of this paper is to clear up the question of the connection between the geometric moment invariants and the invariant theory, considering the problem of describing the 2D geometric moment invariants as a problem of the classical invariant theory. We give a precise statement of the problem of computation of the 2D geometric invariant moments, introducing the notions of the algebras of simultaneous 2D geometric moment invariants, and prove that they are isomorphic to the algebras of joint \(\mathrm{SO}(2)\)-invariants of several binary forms. Also, to simplify the calculation of the invariants, we proceed from an action of the Lie group \(\mathrm{SO}(2)\) to an action of its Lie algebra \({{\mathfrak{so}}}_2\).
Though the 2D geometric moments are not as effective as the orthogonal ones, the author hopes that the results will be useful to researchers in the fields of image analysis and pattern recognition.SFCDecomp: multicriteria optimized tool path planning in 3D printing using space-filling curve based domain decompositionhttps://zbmath.org/1496.683532022-11-17T18:59:28.764376Z"Gupta, Prashant"https://zbmath.org/authors/?q=ai:gupta.prashant-k"Guo, Yiran"https://zbmath.org/authors/?q=ai:guo.yiran"Boddeti, Narasimha"https://zbmath.org/authors/?q=ai:boddeti.narasimha"Krishnamoorthy, Bala"https://zbmath.org/authors/?q=ai:krishnamoorthy.balaSubsampled turbulence removal networkhttps://zbmath.org/1496.683542022-11-17T18:59:28.764376Z"Chak, Wai Ho"https://zbmath.org/authors/?q=ai:chak.wai-ho"Lau, Chun Pong"https://zbmath.org/authors/?q=ai:lau.chun-pong"Lui, Lok Ming"https://zbmath.org/authors/?q=ai:lui.lok-mingSummary: We present a deep-learning-based approach to restore turbulence-distorted images from turbulent deformations and space-time varying blurs. Instead of requiring a massive training sample size, we propose a simple but effective data augmentation method to first make the deep learning approach feasible for the turbulence problem despite data scarcity. Then we employ the proposed Turbulence Removal Network (TRN), a Wasserstein generative adversarial network (GAN) with an \(\ell_1\) cost and multiframe input, to restore the degraded image under atmospheric turbulence. Finally, we explore the possibility of introducing a subsampling algorithm in the deep network to filter out strongly corrupted frames and enhance the restoration performance. We also investigate the viability of significantly reducing the need for a huge number of turbulence-distorted frames in our deep network \textbf{TRN} without losing the quality of the reconstructed image.
Experimental results demonstrate the effectiveness of the subsampling algorithm, which significantly enhances image quality without requiring a large number of frames for deep learning.Power spectral clusteringhttps://zbmath.org/1496.683552022-11-17T18:59:28.764376Z"Challa, Aditya"https://zbmath.org/authors/?q=ai:challa.aditya"Danda, Sravan"https://zbmath.org/authors/?q=ai:danda.sravan"Sagar, B. S. Daya"https://zbmath.org/authors/?q=ai:sagar.b-s-daya"Najman, Laurent"https://zbmath.org/authors/?q=ai:najman.laurentSummary: Spectral clustering is one of the most important image processing tools, especially for image segmentation. It specializes in taking local information, such as edge weights, and globalizing it. Due to its unsupervised nature, it is widely applicable. However, traditional spectral clustering is \({\mathcal{O}}(n^{3/2})\). This poses a challenge, especially given the recent trend of large datasets. In this article, we use ideas from \(\varGamma\)-convergence to propose an algorithm that is an amalgamation of maximum spanning tree clustering and spectral clustering. This algorithm scales as \({\mathcal{O}}(n \log (n))\) under certain conditions, while producing solutions similar to those of spectral clustering. Several toy examples are used to illustrate the similarities and differences.
To validate the proposed algorithm, a recent state-of-the-art technique for segmentation -- multiscale combinatorial grouping -- is used, where the normalized cut is replaced with the proposed algorithm and the results are analyzed.Adaptive correction method of two-dimensional image deviation in visual communication designhttps://zbmath.org/1496.683562022-11-17T18:59:28.764376Z"Kong, Cheng"https://zbmath.org/authors/?q=ai:kong.cheng(no abstract)Background subtraction using adaptive singular value decompositionhttps://zbmath.org/1496.683572022-11-17T18:59:28.764376Z"Reitberger, Günther"https://zbmath.org/authors/?q=ai:reitberger.gunther"Sauer, Tomas"https://zbmath.org/authors/?q=ai:sauer.tomasSummary: An important task when processing sensor data is to distinguish relevant from irrelevant data. This paper describes a method for an iterative singular value decomposition that maintains a model of the background via singular vectors spanning a subspace of the image space, thus providing a way to determine the amount of new information contained in an incoming frame. We update the singular vectors spanning the background space in a computationally efficient manner and provide the ability to perform blockwise updates, leading to a fast and robust adaptive SVD computation. The effects of these two properties and the success of the overall method in performing state-of-the-art background subtraction are shown in both qualitative and quantitative evaluations.The generic combinatorial algorithm for image matching with classes of projective transformationshttps://zbmath.org/1496.683582022-11-17T18:59:28.764376Z"Rosenke, Christian"https://zbmath.org/authors/?q=ai:rosenke.christian"Liśkiewicz, Maciej"https://zbmath.org/authors/?q=ai:liskiewicz.maciejSummary: Image matching is an important task arising in video compression, optical character recognition, medical imaging, watermarking and in many other fields.
Given two digital images \(A\) and \(B\), image matching determines a transformation \(f\) for \(A\) such that \(f(A)\) most closely resembles \(B\). In this paper, we introduce the first general discretization technique that works for the class of projective transformations as well as many of its subclasses, such as affine transformations and several combinations of scaling, rotation and translation. Based on this, we provide a fully generic image matching algorithm for all these classes that runs in polynomial time.Single image blind deblurring based on salient edge-structures and elastic-net regularizationhttps://zbmath.org/1496.683592022-11-17T18:59:28.764376Z"Yu, XiaoYuan"https://zbmath.org/authors/?q=ai:yu.xiaoyuan"Xie, Wei"https://zbmath.org/authors/?q=ai:xie.weiSummary: In single image blind deblurring, the blur kernel and latent image are estimated from a single observed blurry image. The associated mathematical problem is ill-posed, and an acceptable solution is difficult to obtain without additional priors or heuristics. Inspired by the nonlocal self-similarity in the image denoising problem, we introduce elastic-net regularization as a rank prior to improve the estimation of the intermediate image. Furthermore, it is well known that salient edge-structures can provide reliable information for kernel estimation. Therefore, we propose a new blind image deblurring method by combining the salient edge-structures and the elastic-net regularization. The salient edge-structures are selected from the intermediate image and used to guide the estimation of the blur kernel. Then, we employ the elastic-net regularization and edge-structures to further estimate the intermediate latent image, by retaining dominant edges and removing slight textures, for a better kernel estimation. Finally, quantitative and qualitative evaluations are conducted by comparing the results with those obtained by state-of-the-art methods.
We conclude that the proposed method performs favorably on both synthetic and real blurry images.Goal-sensitive reasoning with disconnection tableauxhttps://zbmath.org/1496.683602022-11-17T18:59:28.764376Z"Barnett, Lee A."https://zbmath.org/authors/?q=ai:barnett.lee-aSummary: One of the challenges that has been outlined for instantiation-based theorem proving methods is their application in reasoning over theories with many axioms, as in tasks involving large ontologies or mathematical libraries. Goal-sensitive methods, which restrict inferences to those related to the goal to be refuted, tend to outperform other methods, especially on large axiom sets. This paper presents a goal-sensitive adaptation of the disconnection tableau calculus, leveraging the advantages of goal-sensitivity in an instantiation-based, tableau-guided proof method. A proof of the method's completeness follows its description, as well as a discussion of planned future work in this area.
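The goal-sensitivity discussed in the abstract above is, in practice, often approximated in saturation provers by relevance filtering over the axiom set: keeping only axioms reachable from the goal through shared symbols. The following is a minimal SInE-style sketch; representing axioms as symbol sets is an assumption for illustration, not the paper's disconnection-tableau mechanism:

```python
def relevant_axioms(axioms, goal_symbols, depth=2):
    """Select axioms connected to the goal through shared predicate/function
    symbols, expanding the reached-symbol set up to `depth` times.
    Each axiom is given as the set of symbols it mentions."""
    reached = set(goal_symbols)
    selected = set()
    for _ in range(depth):
        for idx, syms in enumerate(axioms):
            if idx not in selected and syms & reached:
                selected.add(idx)
                reached |= syms  # symbols of a selected axiom become relevant
    return sorted(selected)

axioms = [{"human", "mortal"}, {"mortal", "finite"}, {"prime", "odd"}]
# Goal mentions only "human": axiom 0 is reached directly, axiom 1 via "mortal",
# while the number-theoretic axiom 2 is filtered out.
assert relevant_axioms(axioms, {"human"}) == [0, 1]
```

This captures the spirit of restricting inferences to goal-related clauses, although a genuine goal-sensitive calculus enforces the restriction inside the proof search rather than as a preprocessing filter.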
For the entire collection see [Zbl 1371.68015].On the community structure of bounded model checking SAT problemshttps://zbmath.org/1496.683612022-11-17T18:59:28.764376Z"Baud-Berthier, Guillaume"https://zbmath.org/authors/?q=ai:baud-berthier.guillaume"Giráldez-Cru, Jesús"https://zbmath.org/authors/?q=ai:giraldez-cru.jesus"Simon, Laurent"https://zbmath.org/authors/?q=ai:simon.laurent-s-rSummary: Following the impressive progress made in the quest for efficient SAT solving in recent years, a number of research efforts have focused on explaining the performance observed on typical application problems. However, until now, tentative explanations were only partial, essentially because the semantics of the original problem was lost in the translation to SAT.
In this work, we study the behavior of so-called ``modern'' SAT solvers through the prism of the first successful application of CDCL solvers, i.e., bounded model checking. We trace the origin of each variable w.r.t. its unrolling depth, and show a surprising relationship between these time steps and the communities found in the CNF encoding. We also show how the VSIDS heuristic, the resolution engine, and the learning mechanism interact with the unrolling steps. Additionally, we show that the literal block distance (LBD), used to identify good learnt clauses, is related to this measure.
Our work shows that communities identify strong dependencies among the variables of different time steps, revealing a structure that arises when unrolling the problem, and which seems to be caught by the LBD measure.
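The literal block distance (LBD) referred to in the abstract above has a compact definition — the number of distinct decision levels among the variables of a learnt clause — which can be sketched as follows (the clause and assignment encodings are hypothetical DIMACS-style conventions, not tied to any particular solver):

```python
def lbd(clause, decision_level):
    """Literal block distance of a learnt clause: the number of distinct
    decision levels among its variables (Audemard & Simon). Literals are
    signed integers; `decision_level` maps each variable to the level at
    which it was assigned."""
    return len({decision_level[abs(lit)] for lit in clause})

# A learnt clause over variables 1, 2 and 5, assigned at levels 3, 3 and 7:
# the clause spans two "blocks" of propagations, so its LBD is 2.
levels = {1: 3, 2: 3, 5: 7}
assert lbd([-1, 2, 5], levels) == 2
```

A low LBD means the clause links few decision levels; the finding summarized above is that these levels in turn track the unrolling time steps and community structure of the BMC encoding.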
For the entire collection see [Zbl 1368.68008].New resolution-based QBF calculi and their proof complexityhttps://zbmath.org/1496.683622022-11-17T18:59:28.764376Z"Beyersdorff, Olaf"https://zbmath.org/authors/?q=ai:beyersdorff.olaf"Chew, Leroy"https://zbmath.org/authors/?q=ai:chew.leroy"Janota, Mikoláš"https://zbmath.org/authors/?q=ai:janota.mikolasA vision for automated deduction rooted in the connection methodhttps://zbmath.org/1496.683632022-11-17T18:59:28.764376Z"Bibel, Wolfgang"https://zbmath.org/authors/?q=ai:bibel.wolfgangSummary: The paper presents an informal overview of the connection method in automated deduction. In particular, it points out its unique advantage over competing methods which consists in its formula-orientedness. Among the consequences of this unique feature are three striking advantages, viz. uniformity (over many logics), performance (due to its extreme compactness and goal-orientedness, evidenced by the leanCoP family of provers), and a global view over the proof process (enabling a higher-level guidance of the proof search). These aspects are discussed on the basis of the extensive work accumulated in the literature about this proof method. Along this line of research we envisage a bright future for the field and point out promising directions for future research.
For the entire collection see [Zbl 1371.68015].Shortening QBF proofs with dependency schemeshttps://zbmath.org/1496.683642022-11-17T18:59:28.764376Z"Blinkhorn, Joshua"https://zbmath.org/authors/?q=ai:blinkhorn.joshua"Beyersdorff, Olaf"https://zbmath.org/authors/?q=ai:beyersdorff.olafSummary: We provide the first proof complexity results for QBF dependency calculi. By showing that the reflexive resolution path dependency scheme admits exponentially shorter Q-resolution proofs on a known family of instances, we answer a question first posed by
\textit{F. Slivovsky} and \textit{S. Szeider} in [Lect. Notes Comput. Sci. 8561, 269--284 (2014; Zbl 1423.68426)].
Further, we conceive a method of QBF solving in which dependency recomputation is utilised as a form of inprocessing. Formalising this notion, we introduce a new calculus in which a dependency scheme is applied dynamically. We demonstrate the further potential of this approach beyond that of the existing static system with an exponential separation.
For the entire collection see [Zbl 1368.68008].Superposition with structural inductionhttps://zbmath.org/1496.683652022-11-17T18:59:28.764376Z"Cruanes, Simon"https://zbmath.org/authors/?q=ai:cruanes.simonSummary: Superposition-based provers have been successfully used to discharge proof obligations stemming from proof assistants. However, many such obligations require induction to be proved. We present a new extension of typed superposition that can perform structural induction. Several inductive goals can be attempted within a single saturation loop, by leveraging AVATAR
[\textit{A. Voronkov}, Lect. Notes Comput. Sci. 8559, 696--710 (2014; Zbl 1495.68240)].
Lemmas obtained by generalization or theory exploration can be introduced during search, used, and proved, all in the same search space. We describe an implementation and present some promising results.
For the entire collection see [Zbl 1369.68021].Symmetric explanation learning: effective dynamic symmetry handling for SAThttps://zbmath.org/1496.683662022-11-17T18:59:28.764376Z"Devriendt, Jo"https://zbmath.org/authors/?q=ai:devriendt.jo"Bogaerts, Bart"https://zbmath.org/authors/?q=ai:bogaerts.bart"Bruynooghe, Maurice"https://zbmath.org/authors/?q=ai:bruynooghe.mauriceSummary: The presence of symmetry in Boolean satisfiability (SAT) problem instances often poses challenges to solvers. Currently, the most effective approach to handling symmetry is static symmetry breaking, which generates asymmetric constraints to add to the instance. An alternative is to handle symmetry dynamically during solving. As modern SAT solvers can be viewed as propositional proof generators, adding a symmetry rule to a solver's proof system would be a straightforward technique to handle symmetry dynamically. However, none of the symmetrical learning techniques proposed so far is competitive with static symmetry breaking. In this paper, we present symmetric explanation learning, a form of symmetrical learning based on learning symmetric images of explanation clauses for unit propagations performed during search. A key idea is that these symmetric clauses are only learned when they would restrict the current search state, i.e., when they are unit or conflicting. We further provide a theoretical discussion of symmetric explanation learning and a working implementation in a state-of-the-art SAT solver. We also present extensive experimental results indicating that symmetric explanation learning is the first symmetrical learning scheme competitive with static symmetry breaking.
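The basic operation behind symmetrical learning as summarized above — taking the symmetric image of a clause under a symmetry of the formula — can be sketched as follows; the DIMACS-style signed-integer literal encoding is an illustrative assumption:

```python
def apply_symmetry(clause, perm):
    """Return the symmetric image of a clause under a variable symmetry.
    Literals are signed integers; `perm` maps each variable to its image
    (a variable, possibly negated)."""
    def image(lit):
        v = perm[abs(lit)]
        return v if lit > 0 else -v
    return sorted(image(lit) for lit in clause)

# A symmetry swapping variables 1 and 2 while fixing variable 3,
# as arises, e.g., from interchangeable objects in the encoding.
perm = {1: 2, 2: 1, 3: 3}
assert apply_symmetry([1, -3], perm) == [-3, 2]
assert apply_symmetry([1, 2], perm) == [1, 2]  # clause fixed by the symmetry
```

In symmetric explanation learning, such images are taken of explanation clauses for unit propagations, and the image is only added when it is unit or conflicting in the current search state.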
For the entire collection see [Zbl 1368.68008].VINTE: an implementation of internal calculi for Lewis' logics of counterfactual reasoninghttps://zbmath.org/1496.683672022-11-17T18:59:28.764376Z"Girlando, Marianna"https://zbmath.org/authors/?q=ai:girlando.marianna"Lellmann, Björn"https://zbmath.org/authors/?q=ai:lellmann.bjorn"Olivetti, Nicola"https://zbmath.org/authors/?q=ai:olivetti.nicola"Pozzato, Gian Luca"https://zbmath.org/authors/?q=ai:pozzato.gian-luca"Vitalis, Quentin"https://zbmath.org/authors/?q=ai:vitalis.quentinSummary: We present VINTE, a theorem prover for the conditional logics for counterfactual reasoning introduced by Lewis in the seventies. VINTE implements some internal calculi recently introduced for the basic system \(\mathbb {V}\) and some of its significant extensions with axioms \(\mathbb {N}\), \(\mathbb {T}\), \(\mathbb {C}\), \(\mathbb {W}\) and \(\mathbb {A}\). VINTE is inspired by the methodology of \(\mathsf{lean}T^A P\) and is implemented in Prolog. The paper shows some experimental results, indicating that the performance of VINTE is promising.
For the entire collection see [Zbl 1371.68015].On tackling the limits of resolution in SAT solvinghttps://zbmath.org/1496.683682022-11-17T18:59:28.764376Z"Ignatiev, Alexey"https://zbmath.org/authors/?q=ai:ignatyev.alexey-a"Morgado, Antonio"https://zbmath.org/authors/?q=ai:morgado.antonio"Marques-Silva, Joao"https://zbmath.org/authors/?q=ai:marques-silva.joao-pSummary: The practical success of Boolean satisfiability (SAT) solvers stems from the CDCL (conflict-driven clause learning) approach to SAT solving. However, from a propositional proof complexity perspective, CDCL is no more powerful than the resolution proof system, for which many hard examples exist. This paper proposes a new problem transformation, which enables reducing the decision problem for formulas in conjunctive normal form (CNF) to the problem of solving maximum satisfiability over Horn formulas. Given the new transformation, the paper proves a polynomial bound on the number of MaxSAT resolution steps for pigeonhole formulas. This result is in clear contrast with earlier results on the length of proofs of MaxSAT resolution for pigeonhole formulas. The paper also establishes the same polynomial bound in the case of modern core-guided MaxSAT solvers. Experimental results, obtained on CNF formulas known to be hard for CDCL SAT solvers, show that these can be efficiently solved with modern MaxSAT solvers.
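The pigeonhole formulas referred to in the abstract above — encoding that \(n+1\) pigeons fit into \(n\) holes, and classically requiring exponential-size resolution proofs — can be generated as DIMACS-style clauses; a minimal sketch (the variable numbering is an arbitrary choice):

```python
def pigeonhole_cnf(n):
    """CNF encoding of 'n+1 pigeons fit into n holes' (unsatisfiable).
    Variable p(i, j) means pigeon i sits in hole j, numbered i*n + j + 1."""
    var = lambda i, j: i * n + j + 1
    clauses = []
    # Every pigeon sits in some hole.
    for i in range(n + 1):
        clauses.append([var(i, j) for j in range(n)])
    # No two pigeons share a hole.
    for j in range(n):
        for i in range(n + 1):
            for k in range(i + 1, n + 1):
                clauses.append([-var(i, j), -var(k, j)])
    return clauses

cnf = pigeonhole_cnf(3)
# 4 "pigeon" clauses plus 3 holes * C(4, 2) = 18 "hole" clauses.
assert len(cnf) == 4 + 18
```

These are exactly the instances for which the paper's Horn-MaxSAT transformation yields polynomially many MaxSAT resolution steps, in contrast to their exponential hardness for plain resolution.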
For the entire collection see [Zbl 1368.68008].On simplification of formulas with unconstrained variables and quantifiershttps://zbmath.org/1496.683692022-11-17T18:59:28.764376Z"Jonáš, Martin"https://zbmath.org/authors/?q=ai:jonas.martin"Strejček, Jan"https://zbmath.org/authors/?q=ai:strejcek.janSummary: Preprocessing of the input formula is an essential part of all modern SMT solvers. An important preprocessing step is formula simplification. This paper elaborates on simplification of quantifier-free formulas containing unconstrained terms, i.e. terms that can have arbitrary values independently of the rest of the formula. We extend the idea in two directions. First, we introduce partially constrained terms and show some simplification rules employing this notion. Second, we show that unconstrained terms can also be used to simplify formulas with quantifiers. Moreover, both these extensions can be merged in order to simplify partially constrained terms in formulas with quantifiers. We experimentally evaluate the proposed simplifications on formulas in the bit-vector theory.
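The underlying unconstrained-term idea can be illustrated on a toy term representation (this sketch and all its names are ours, not the paper's): if a variable occurs exactly once in the whole formula, a bit-vector addition containing it can take any value whatsoever, so the entire sum may be replaced by a fresh variable.

```python
from collections import Counter

def count_vars(term, cnt):
    """Count variable occurrences in a nested-tuple term."""
    if term[0] == 'var':
        cnt[term[1]] += 1
    else:
        for sub in term[1:]:
            count_vars(sub, cnt)

def simplify_unconstrained(term, cnt, fresh):
    """Replace a bit-vector addition by a fresh variable whenever one of
    its operands is a variable occurring exactly once in the formula."""
    if term[0] == 'var':
        return term
    if term[0] == 'bvadd' and any(op[0] == 'var' and cnt[op[1]] == 1
                                  for op in term[1:]):
        return ('var', next(fresh))
    return (term[0],) + tuple(simplify_unconstrained(s, cnt, fresh)
                              for s in term[1:])

# (x + y = y) simplifies to (u = y), since x occurs only once
phi = ('eq', ('bvadd', ('var', 'x'), ('var', 'y')), ('var', 'y'))
occ = Counter()
count_vars(phi, occ)
simplified = simplify_unconstrained(phi, occ, iter(['u', 'v']))
```

The paper's partially constrained terms and the quantified case refine this basic rewriting; the sketch only shows the classical fully unconstrained case.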
For the entire collection see [Zbl 1368.68008].A little blocked literal goes a long wayhttps://zbmath.org/1496.683702022-11-17T18:59:28.764376Z"Kiesl, Benjamin"https://zbmath.org/authors/?q=ai:kiesl.benjamin"Heule, Marijn J. H."https://zbmath.org/authors/?q=ai:heule.marijn-j-h"Seidl, Martina"https://zbmath.org/authors/?q=ai:seidl.martinaSummary: Q-resolution is a generalization of propositional resolution that provides the theoretical foundation for search-based solvers of quantified Boolean formulas (QBFs). Recently, it has been shown that an extension of Q-resolution, called long-distance resolution, is remarkably powerful both in theory and in practice. However, it was unknown how long-distance resolution is related to \(\mathsf {QRAT}\), a proof system introduced for certifying the correctness of QBF-preprocessing techniques. We show that \(\mathsf {QRAT}\) polynomially simulates long-distance resolution. Two simple rules of \(\mathsf {QRAT}\) are crucial for our simulation -- blocked-literal addition and blocked-literal elimination. Based on the simulation, we implemented a tool that transforms long-distance-resolution proofs into \(\mathsf {QRAT}\) proofs. In a case study, we compare long-distance-resolution proofs of the well-known Kleine Büning formulas with corresponding \(\mathsf {QRAT}\) proofs.
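The blocked-literal rules of \(\mathsf {QRAT}\) build on the propositional notion of a blocked literal. A minimal sketch of the purely propositional check (the QBF setting adds side conditions on the quantifier prefix, omitted here) might look as follows:

```python
def is_blocked(lit, clause, cnf):
    """A literal `lit` of `clause` is blocked w.r.t. the clause set `cnf`
    if every resolvent of `clause` on `lit` is a tautology.
    Clauses are lists of nonzero ints; -v denotes the negation of v."""
    for other in cnf:
        if -lit in other:
            resolvent = (set(clause) - {lit}) | (set(other) - {-lit})
            if not any(-l in resolvent for l in resolvent):
                return False   # a non-tautological resolvent exists
    return True
```

For example, in the clause set \([[1,2],[-1,-2]]\) the literal 1 is blocked in \([1,2]\), since the only resolvent \(\{2,-2\}\) is tautological; adding or eliminating such literals preserves satisfiability, which is what the simulation result exploits.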
For the entire collection see [Zbl 1368.68008].A mechanizable first-order theory of ordinalshttps://zbmath.org/1496.683712022-11-17T18:59:28.764376Z"Schmitt, Peter H."https://zbmath.org/authors/?q=ai:schmitt.peter-hSummary: We present a first-order theory of ordinals without resorting to set theory. The theory is implemented in the KeY program verification system which is in turn used to prove termination of a Java program computing the Goodstein sequences.
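Goodstein sequences themselves are straightforward to compute; it is their termination that needs ordinals (and hence the theory verified here). A small sketch of the standard definition (ours, not the paper's Java program): write the current value in hereditary base-\(b\) notation, replace every \(b\) by \(b+1\), and subtract 1.

```python
def bump(n, b):
    """Rewrite n in hereditary base-b notation and replace every b by b+1."""
    if n == 0:
        return 0
    total, k = 0, 0
    while n:
        n, d = divmod(n, b)
        # exponents are themselves written in base b, so bump them too
        total += d * (b + 1) ** bump(k, b)
        k += 1
    return total

def goodstein(m, steps):
    """First terms of the Goodstein sequence starting at m (base 2)."""
    seq, b = [m], 2
    for _ in range(steps):
        if m == 0:
            break
        m = bump(m, b) - 1
        b += 1
        seq.append(m)
    return seq
```

Starting at 3 the sequence is 3, 3, 3, 2, 1, 0: it terminates quickly, whereas starting at 4 it already runs for about \(3\cdot 2^{402653209}\) steps, which is why termination is beyond first-order Peano arithmetic.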
For the entire collection see [Zbl 1371.68015].From DQBF to QBF by dependency eliminationhttps://zbmath.org/1496.683722022-11-17T18:59:28.764376Z"Wimmer, Ralf"https://zbmath.org/authors/?q=ai:wimmer.ralf-d"Karrenbauer, Andreas"https://zbmath.org/authors/?q=ai:karrenbauer.andreas"Becker, Ruben"https://zbmath.org/authors/?q=ai:becker.ruben"Scholl, Christoph"https://zbmath.org/authors/?q=ai:scholl.christoph"Becker, Bernd"https://zbmath.org/authors/?q=ai:becker.berndSummary: In this paper, we propose the elimination of dependencies to convert a given dependency quantified Boolean formula (DQBF) to an equisatisfiable QBF. We show how to select a set of dependencies to eliminate such that we arrive at the smallest equisatisfiable QBF, in terms of the number of existential variables, that is achievable using dependency elimination. This approach is improved by taking so-called don't-care dependencies into account, which result from the application of dependency schemes to the formula and can be added to or removed from the formula at no cost. We have implemented this new method in the state-of-the-art DQBF solver HQS. Experiments show that dependency elimination is clearly superior to the previous method using variable elimination.
For the entire collection see [Zbl 1368.68008].Issues in machine-checking the decidability of implicational ticket entailmenthttps://zbmath.org/1496.683732022-11-17T18:59:28.764376Z"Dawson, Jeremy E."https://zbmath.org/authors/?q=ai:dawson.jeremy-e"Goré, Rajeev"https://zbmath.org/authors/?q=ai:gore.rajeev-prabhakarSummary: The decidability of the implicational fragment \(T_{\rightarrow}\) of the relevance logic of ticket entailment was recently claimed independently by Bimbó and Dunn, and Padovani. We present a mechanised formalisation, in Isabelle/HOL, of the various proof-theoretical results due to Bimbó and Dunn that underpin their claim. We also discuss the issues that stymied our attempt to verify their proof of decidability.
For the entire collection see [Zbl 1371.68015].On the most suitable axiomatization of signed integershttps://zbmath.org/1496.683742022-11-17T18:59:28.764376Z"Garavel, Hubert"https://zbmath.org/authors/?q=ai:garavel.hubertSummary: The standard mathematical definition of signed integers, based on set theory, is not well-adapted to the needs of computer science. For this reason, many formal specification languages and theorem provers have designed alternative definitions of signed integers based on term algebras, by extending the Peano-style construction of unsigned naturals using ``zero'' and ``succ'' to the case of signed integers. We compare the various approaches used in CADP, CASL, Coq, Isabelle/HOL, KIV, Maude, mCRL2, PSF, SMT-LIB, TLA\(+\), etc. according to objective criteria and suggest an ``optimal'' definition of signed integers.
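One of the candidate term-algebra styles compared in such surveys extends Peano's zero/succ with a pred constructor and the normalization rules succ(pred(x)) = pred(succ(x)) = x, so that every integer has the canonical form succ^n(zero) or pred^n(zero). A small sketch of this style (illustrative only; the constructor names and rules here are ours, not Garavel's "optimal" choice):

```python
# Terms are nested tuples; normalizing smart constructors keep them canonical.
ZERO = ('zero',)

def succ(t):
    """Successor, applying succ(pred(x)) = x on the fly."""
    return t[1] if t[0] == 'pred' else ('succ', t)

def pred(t):
    """Predecessor, applying pred(succ(x)) = x on the fly."""
    return t[1] if t[0] == 'succ' else ('pred', t)

def to_int(t):
    """Interpret a normal-form term as a Python int."""
    if t[0] == 'zero':
        return 0
    return to_int(t[1]) + (1 if t[0] == 'succ' else -1)

def add(s, t):
    """Addition by structural recursion on the second argument."""
    if t[0] == 'zero':
        return s
    return (succ if t[0] == 'succ' else pred)(add(s, t[1]))
```

Because the smart constructors normalize eagerly, equality of integers reduces to syntactic equality of terms, which is one of the criteria such axiomatizations are judged by.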
For the entire collection see [Zbl 1428.68025].Formalization of quasilatticeshttps://zbmath.org/1496.683752022-11-17T18:59:28.764376Z"Kulesza, Dominik"https://zbmath.org/authors/?q=ai:kulesza.dominik"Grabowski, Adam"https://zbmath.org/authors/?q=ai:grabowski.adamSummary: The main aim of this article is to introduce formally one of the generalizations of lattices, namely quasilattices, which can be obtained from the axiomatization of the former class by certain weakening of ordinary absorption laws. We show propositions QLT-1 to QLT-7 from [\textit{W. McCune} and \textit{R. Padmanabhan}, Automated deduction in equational logic and cubic curves. Berlin: Springer-Verlag (1996; Zbl 0921.03011)], presenting also some short variants of corresponding axiom systems. Some of the results were proven in the Mizar [\textit{G. Bancerek} et al., Lect. Notes Comput. Sci. 9150, 261--279 (2015; Zbl 1417.68201); J. Autom. Reasoning 61, No. 1--4, 9--32 (2018; Zbl 1433.68530)] system with the help of Prover9 proof assistant.On liveness of dynamic storagehttps://zbmath.org/1496.683762022-11-17T18:59:28.764376Z"Spiegelman, Alexander"https://zbmath.org/authors/?q=ai:spiegelman.alexander"Keidar, Idit"https://zbmath.org/authors/?q=ai:keidar.iditSummary: Dynamic distributed storage algorithms such as DynaStore, Reconfigurable Paxos, RAMBO, and RDS, do not ensure liveness (wait-freedom) in asynchronous runs with infinitely many reconfigurations. We prove that this is inherent for asynchronous dynamic storage algorithms. Our result holds even if only one process may fail, provided that machines that were successfully removed from the system's configuration can be switched off by a system administrator. To circumvent this result, we define a dynamic eventually perfect failure detector, and present an algorithm that uses it to emulate wait-free dynamic atomic storage. 
Though some of the previous algorithms have been designed for eventually synchronous models, to the best of our knowledge, our algorithm is the first to ensure liveness for all operations without restricting the reconfiguration rate.
For the entire collection see [Zbl 1381.68003].Testing polynomial equivalence by scaling matriceshttps://zbmath.org/1496.683772022-11-17T18:59:28.764376Z"Bläser, Markus"https://zbmath.org/authors/?q=ai:blaser.markus"Rao, B. V. Raghavendra"https://zbmath.org/authors/?q=ai:raghavendra-rao.b-v"Sarma, Jayalal"https://zbmath.org/authors/?q=ai:sarma-m-n.jayalalSummary: In this paper we study the polynomial equivalence problem: test if two given polynomials \(f\) and \(g\) are equivalent under a non-singular linear transformation of variables.
We begin by showing that the more general problem of testing whether \(f\) can be obtained from \(g\) by an arbitrary (not necessarily invertible) linear transformation of the variables is equivalent to the existential theory of the reals. This strengthens an \(\mathsf {NP}\)-hardness result by
\textit{N. Kayal} [in: Proceedings of the 44th annual ACM symposium on theory of computing, STOC'12. New York, NY: Association for Computing Machinery (ACM). 643--662 (2012; Zbl 1286.68197)].
Two \(n\)-variate polynomials \(f\) and \(g\) are said to be equivalent up to scaling if there are scalars \(a_1,\ldots,a_n\in\mathbb {F}\setminus\{0\}\) such that \(f(a_1x_1,\ldots,a_nx_n)=g(x_1,\ldots ,x_n)\). Testing whether two polynomials are equivalent by scaling matrices is a special case of the polynomial equivalence problem and is harder than the polynomial identity testing problem.
As our main result, we obtain a randomized polynomial time algorithm for testing if two polynomials are equivalent up to a scaling of variables with black-box access to polynomials \(f\) and \(g\) over the real numbers.
An essential ingredient to our algorithm is a randomized polynomial time algorithm that given a polynomial as a black box obtains coefficients and degree vectors of a maximal set of monomials whose degree vectors are linearly independent. This algorithm might be of independent interest. It also works over finite fields, provided their size is large enough to perform polynomial interpolation.
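The definition of equivalence up to scaling can be made concrete on explicit polynomials represented as monomial dictionaries (a verification sketch of ours; the paper's randomized black-box algorithm, which must also find the scalars, is considerably more involved):

```python
from math import prod, isclose

def scale(poly, a):
    """Apply the substitution x_i -> a_i * x_i to a polynomial given as
    {exponent_tuple: coefficient}."""
    return {e: c * prod(ai ** ei for ai, ei in zip(a, e))
            for e, c in poly.items()}

def equivalent_up_to_scaling(f, g, a):
    """Check f(a_1 x_1, ..., a_n x_n) == g for concrete scalars a."""
    fs = scale(f, a)
    keys = set(fs) | set(g)
    return all(isclose(fs.get(e, 0.0), g.get(e, 0.0)) for e in keys)
```

For instance, \(f = x + y^2\) and \(g = 2x + 9y^2\) are equivalent up to scaling via \(a = (2, 3)\), since each monomial's coefficient is rescaled by \(\prod_i a_i^{e_i}\).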
For the entire collection see [Zbl 1369.68029].Flipping out with many flips: hardness of testing \(k\)-monotonicityhttps://zbmath.org/1496.683782022-11-17T18:59:28.764376Z"Grigorescu, Elena"https://zbmath.org/authors/?q=ai:grigorescu.elena"Kumar, Akash"https://zbmath.org/authors/?q=ai:kumar.akash"Wimmer, Karl"https://zbmath.org/authors/?q=ai:wimmer.karlSummary: A function \(f:\{0,1\}^n\to\{0, 1\}\) is said to be \(k\)-monotone if it flips between 0 and 1 at most \(k\) times on every ascending chain. Such functions represent a natural generalization of (1-)monotone functions, and have been recently studied in circuit complexity, PAC learning, and cryptography. Our work is part of a renewed focus on understanding testability of properties characterized by freeness of arbitrary order patterns as a generalization of monotonicity. Recently, \textit{C. L. Canonne} et al. [LIPIcs -- Leibniz Int. Proc. Inform. 67, Article 29, 21 p. (2017; Zbl 1402.68192)] initiated the study of \(k\)-monotone functions in the area of property testing, and \textit{I. Newman} et al. [in: Proceedings of the 28th annual ACM-SIAM symposium on discrete algorithms, SODA 2017. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM); New York, NY: Association for Computing Machinery (ACM). 1582--1597 (2017; Zbl 1403.68338)] studied testability of families characterized by freeness from order patterns on real-valued functions over the line \([n]\) domain.\par We study \(k\)-monotone functions in the more relaxed parametrized property testing model, introduced by \textit{M. Parnas} et al. [J. Comput. Syst. Sci. 72, No. 6, 1012--1042 (2006; Zbl 1100.68109)]. In this process we resolve a problem left open in previous work. Specifically, our results include the following.\begin{itemize}\item[1.]Testing 2-monotonicity on the hypercube non-adaptively with one-sided error requires a number of queries exponential in \(\sqrt{n}\).
This behavior shows a stark contrast with testing (1-)monotonicity, which only needs \(\widetilde{O}(\sqrt{n})\) queries
[\textit{S. Khot} et al., SIAM J. Comput. 47, No. 6, 2238--2276 (2018; Zbl 1409.68142)].
Furthermore, even the apparently easier task of distinguishing 2-monotone functions from functions that are far from being \(n^{.01}\)-monotone also requires an exponential number of queries.\item[2.]\ On the hypergrid \([n]^d\) domain, there exists a testing algorithm that makes a constant number of queries and distinguishes functions that are \(k\)-monotone from functions that are far from being \(O(kd^2)\)-monotone. Such a dependency is likely necessary, given the lower bound above for the hypercube.\end{itemize}
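The quantity being tested can be made concrete with a brute-force sketch (ours, feasible only for tiny \(n\)): the least \(k\) for which \(f\) is \(k\)-monotone is the maximum number of value flips over ascending chains, and it suffices to scan the \(n!\) maximal chains, since a subchain never flips more often than a chain containing it.

```python
from itertools import permutations

def max_chain_flips(f, n):
    """Maximum number of 0/1 flips of f: {0,1}^n -> {0,1} along any
    ascending chain, i.e. the least k such that f is k-monotone."""
    best = 0
    for order in permutations(range(n)):
        x = [0] * n
        vals = [f(tuple(x))]
        for i in order:          # flip coordinates one by one: a maximal chain
            x[i] = 1
            vals.append(f(tuple(x)))
        best = max(best, sum(u != v for u, v in zip(vals, vals[1:])))
    return best
```

For example, the parity function on 3 bits alternates along every maximal chain and is 3-monotone but not 2-monotone, while AND flips at most once and is (1-)monotone.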
For the entire collection see [Zbl 1393.68012].Parallel search with no coordinationhttps://zbmath.org/1496.683792022-11-17T18:59:28.764376Z"Korman, Amos"https://zbmath.org/authors/?q=ai:korman.amos"Rodeh, Yoav"https://zbmath.org/authors/?q=ai:rodeh.yoavSummary: We consider a parallel version of a classical Bayesian search problem. \(k\) agents are looking for a treasure that is placed in one of the boxes indexed by \(\mathbb{N}^+\) according to a known distribution \(p\). The aim is to minimize the expected time until the first agent finds it. Searchers run in parallel where at each time step each searcher can ``peek'' into a box. A basic family of inherently robust algorithms is that of non-coordinating algorithms. Such algorithms act independently at each searcher, differing only by their probabilistic choices. We are interested in the price incurred by employing such algorithms when compared with the case of full coordination.
We first show that there exists a non-coordinating algorithm that, knowing only the relative likelihood of boxes according to \(p\), has an expected running time of at most \(10+4(1+\frac{1}{k})^2 T\), where \(T\) is the expected running time of the best fully coordinated algorithm. This result is obtained by applying a refined version of the main algorithm suggested by \textit{P. Fraigniaud} et al. [in: Proceedings of the 48th annual ACM SIGACT symposium on theory of computing, STOC '16. New York, NY: Association for Computing Machinery (ACM). 312--323 (2016; Zbl 1373.68202)], which was designed for the context of linear parallel search.
We then describe an optimal non-coordinating algorithm for the case where the distribution \(p\) is known. The running time of this algorithm is difficult to analyse in general, but we calculate it for several examples. In the case where \(p\) is uniform over a finite set of boxes, the algorithm just checks boxes uniformly at random among all non-checked boxes and is essentially 2 times worse than the coordinating algorithm. We also show simple algorithms for Pareto distributions over \(M\) boxes. That is, in the case where \(p(x)\sim 1/x^b\) for \(0<b<1\), we suggest the following algorithm: at step \(t\) choose uniformly from the boxes unchecked in \(\left\{1,\ldots,\min\left(M,\left\lfloor t/\sigma\right\rfloor\right)\right\}\), where \(\sigma=b/(b+k-1)\). It turns out this algorithm is asymptotically optimal, and runs about \(2+b\) times worse than the case of full coordination.
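The Pareto-case rule above is simple enough to simulate directly. A sketch of a single non-coordinating searcher following it (ours, for illustration; the abstract's optimality claims of course require the analysis, not this simulation):

```python
import random

def pareto_search(treasure, M, b, k, rng):
    """One searcher's run of the rule sketched in the abstract: at step t,
    peek uniformly at an unchecked box among {1, ..., min(M, floor(t/sigma))}
    with sigma = b/(b+k-1).  Returns the step at which the treasure is found."""
    sigma = b / (b + k - 1)
    checked = set()
    t = 0
    while True:
        t += 1
        horizon = min(M, int(t / sigma))
        candidates = [x for x in range(1, horizon + 1) if x not in checked]
        if not candidates:          # horizon exhausted; wait for it to grow
            continue
        box = rng.choice(candidates)
        checked.add(box)
        if box == treasure:
            return t
```

Running \(k\) independent copies of this loop (with independent random choices) and taking the minimum finishing time mimics the non-coordinating ensemble the abstract analyses.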
For the entire collection see [Zbl 1381.68003].Deterministic approximation algorithm for submodular maximization subject to a matroid constrainthttps://zbmath.org/1496.683802022-11-17T18:59:28.764376Z"Sun, Xin"https://zbmath.org/authors/?q=ai:sun.xin|sun.xin.1"Xu, Dachuan"https://zbmath.org/authors/?q=ai:xu.dachuan"Guo, Longkun"https://zbmath.org/authors/?q=ai:guo.longkun"Li, Min"https://zbmath.org/authors/?q=ai:li.min.9|li.min.2|li.min.5|li.min.7|li.min.3|li.min.1|li.min.8|li.min|li.min.6|li.min.10|li.min.4The paper studies the generalized submodular maximization problem over a base set \(N\) with a non-negative monotone submodular set function \(f:2^N\rightarrow\mathbb{R}_{\ge 0}\) as the objective function and subject to a matroid constraint. The aim is to seek a subset \(S\) of \(N\), simultaneously satisfying the feasibility constraint of the matroid \(\mathcal{M}\) and maximizing the value of \(f\). The problem is generalized through the curvature parameter \(\alpha\in[0, 1]\), and a submodular function \(f\) is said to have curvature \(\alpha\) if \(f(S\cup\{s\})-f(S)\ge(1-\alpha)f(\{s\})\) holds for any subset \(S\subset N\) and element \(s\in N\setminus S\).
The main result is a deterministic approximation algorithm for the above problem. The algorithm employs the deterministic algorithm devised by \textit{N. Buchbinder} et al. [in: Proceedings of the 30th annual ACM-SIAM symposium on discrete algorithms, SODA 2019. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM); New York, NY: Association for Computing Machinery (ACM). 241--254 (2019; Zbl 1431.90125)] as a building block and it reaches the same ratio of 0.5008 when the curvature parameter \(\alpha = 1\), and approximation ratio 1 when \(\alpha = 0\). For a calibrating parameter \(y\in[0,1]\) the algorithm achieves the following approximation ratio:
\[
\frac{1 + h_\alpha(y) +\Delta\cdot[3 + \alpha-(2 + \alpha)y-(1 + \alpha)h_\alpha(y)]}{2 + \alpha + (1 + \alpha)(1-y)},
\]
where \(h_\alpha(y)\doteq\frac{1-(1-y)^{1+\alpha}}{1+\alpha}\).
Reviewer: Vladimír Lacko (Košice)Sparse polynomial interpolation with Bernstein polynomialshttps://zbmath.org/1496.683812022-11-17T18:59:28.764376Z"İmamoğlu, Erdal"https://zbmath.org/authors/?q=ai:imamoglu.erdalSummary: We present an algorithm for interpolating an unknown univariate polynomial \(f\) that has a \(t\)-sparse representation (\(t\ll\deg(f)\)) using Bernstein polynomials as the term basis, from \(2t\) evaluations. Our method is based on manipulating the given black-box polynomial \(f\) so that we can make use of Prony's algorithm.Online recognition of dictionary with one gaphttps://zbmath.org/1496.683822022-11-17T18:59:28.764376Z"Amir, Amihood"https://zbmath.org/authors/?q=ai:amir.amihood"Levy, Avivit"https://zbmath.org/authors/?q=ai:levy.avivit"Porat, Ely"https://zbmath.org/authors/?q=ai:porat.ely"Riva Shalom, B."https://zbmath.org/authors/?q=ai:shalom.b-rivaSummary: We formalize and examine the online Dictionary Recognition with One Gap problem (DROG), which is the following. Preprocess a dictionary \(D\) of \(d\) patterns, each containing a special gap symbol that matches any string, so that given a text arriving online a character at a time, all patterns from \(D\) which are suffixes of the text that has arrived so far and have not been reported yet, are reported before the next character arrives. The gap symbols are associated with bounds determining possible lengths of matching strings. Online DROG captures the difficulty in a bottleneck procedure for cyber-security, as many digital signatures of viruses manifest themselves as patterns with a single gap.
Following the work on the closely related online Dictionary Matching with One Gap problem (DMOG), we provide algorithms whose time cost depends linearly on \(\delta(G_D)\), where \(G_D\) is a bipartite graph that captures the structure of \(D\) and \(\delta(G_D)\) is the degeneracy of this graph. These algorithms are of practical interest since although \(\delta(G_D)\) can be as large as \(\sqrt{d}\), and even larger if \(G_D\) is a multi-graph, it is typically a small constant in practice.An experimental comparison of algebraic crossover operators for permutation problemshttps://zbmath.org/1496.683832022-11-17T18:59:28.764376Z"Baioletti, Marco"https://zbmath.org/authors/?q=ai:baioletti.marco"Di Bari, Gabriele"https://zbmath.org/authors/?q=ai:di-bari.gabriele"Milani, Alfredo"https://zbmath.org/authors/?q=ai:milani.alfredo"Santucci, Valentino"https://zbmath.org/authors/?q=ai:santucci.valentinoSummary: Crossover operators are very important components in Evolutionary Computation. Here we are interested in crossovers for the permutation representation that find applications in combinatorial optimization problems such as the permutation flowshop scheduling and the traveling salesman problem. We introduce three families of permutation crossovers based on algebraic properties of the permutation space. In particular, we exploit the group and lattice structures of the space. A total of 34 new crossovers is provided. Algebraic and semantic properties of the operators are discussed, while their performances are investigated by experimentally comparing them with known permutation crossovers on standard benchmarks from four popular permutation problems. Three different experimental scenarios are considered and the results clearly validate our proposals.Symbolic methods for studying the equilibrium orientations of a system of two connected bodies in a circular orbithttps://zbmath.org/1496.700052022-11-17T18:59:28.764376Z"Gutnik, S. 
A."https://zbmath.org/authors/?q=ai:gutnik.sergey-a"Sarychev, V. A."https://zbmath.org/authors/?q=ai:sarychev.vasily-aSummary: This paper investigates the dynamics of a system of two bodies connected by a spherical hinge that moves along a circular orbit under the action of gravitational torque. A computer algebra method based on the resultant approach is applied to reduce the system of algebraic equations for the satellite's stationary motions to a single algebraic equation in one variable, which determines the equilibrium configurations of the two-body system in the plane orthogonal to the orbital plane. Classification of domains with equal numbers of equilibrium solutions is carried out using algebraic methods for constructing discriminant hypersurfaces. Bifurcation curves in the space of system parameters that determine boundaries of domains with a fixed number of equilibria for the two-body system are obtained symbolically. Depending on the parameters of the problem, the number of equilibria is found by analyzing the real roots of the algebraic equations. Using the proposed approach, it is shown that the satellite-stabilizer system can have up to 44 equilibrium orientations in a circular orbit.Alternate way of soliton solutions in hydrogen-bonded chainhttps://zbmath.org/1496.740862022-11-17T18:59:28.764376Z"Parasuraman, E."https://zbmath.org/authors/?q=ai:parasuraman.elango"Kavitha, L."https://zbmath.org/authors/?q=ai:kavitha.louis(no abstract)Fluid equation-based and data-driven simulation of special effects animationhttps://zbmath.org/1496.761092022-11-17T18:59:28.764376Z"Deng, Yujuan"https://zbmath.org/authors/?q=ai:deng.yujuanSummary: This paper analyzes the simulation of special effects animation through fluid equations and data-driven methods.
This paper also considers the requirements of computer fluid-animation simulation in terms of computational accuracy and simulation efficiency; taking high real-time performance, high interactivity, and high physical accuracy of the simulation algorithm as the research focus, it proposes a solution algorithm and an acceleration scheme based on a deep neural network framework for the key problems in simulating natural phenomena such as smoke and liquids. With the continuing development of artificial intelligence technology, deep neural network models, with their powerful data-learning capability, are widely used in research fields such as computer image classification, speech recognition, and fluid detail synthesis. Their stable and efficient computational model provides a new problem-solving approach for computerized fluid-animation simulation. For time-series reconstruction, this paper adopts a tracking-based method comprising target tracking, 2D trajectory fitting and repair, and 3D trajectory reconstruction. For continuous image sequences, a linear dynamic model algorithm based on pyramidal optical flow is used to track the feature centers of the objects, and the spatial coordinates and motion parameters of the feature points are obtained by reconstructing the motion trajectories. The experimental results show that, in terms of spatial reconstruction, the matching method proposed in this paper is more accurate than the traditional stereo matching algorithm, and, in terms of time-series reconstruction, the target-tracking error is reduced.
Finally, the 3D motion trajectory of the point-feature object and its motion pattern at a given moment are shown; the method in this paper obtains close-to-ideal results, demonstrating its effectiveness.Surrogate convolutional neural network models for steady computational fluid dynamics simulationshttps://zbmath.org/1496.761102022-11-17T18:59:28.764376Z"Eichinger, Matthias"https://zbmath.org/authors/?q=ai:eichinger.matthias"Heinlein, Alexander"https://zbmath.org/authors/?q=ai:heinlein.alexander"Klawonn, Axel"https://zbmath.org/authors/?q=ai:klawonn.axelSummary: A convolutional neural network (CNN)-based approach for the construction of reduced order surrogate models for computational fluid dynamics (CFD) simulations is introduced; it is inspired by the approach of
\textit{X. Guo} et al. [``Convolutional neural networks for steady flow approximation'', in: Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, KDD'16. New York, NY: Association for Computing Machinery (ACM). 481--490 (2016; \url{doi:10.1145/2939672.2939738})]. In particular, the neural networks are trained in order to predict images of the flow field in a channel with a varying obstacle based on an image of the geometry of the channel. A classical CNN with bottleneck structure and a U-Net are compared while varying the input format, the number of decoder paths, as well as the loss function used to train the networks. This approach yields very low prediction errors, in particular, when using the U-Net architecture. Furthermore, the models are also able to generalize to unseen geometries of the same type. A transfer learning approach enables the model to be trained to a new type of geometries with very low training cost. Finally, based on this transfer learning approach, a sequential learning strategy is introduced, which significantly reduces the amount of necessary training data.Sparsity-promoting algorithms for the discovery of informative Koopman-invariant subspaceshttps://zbmath.org/1496.761122022-11-17T18:59:28.764376Z"Pan, Shaowu"https://zbmath.org/authors/?q=ai:pan.shaowu"Arnold-Medabalimi, Nicholas"https://zbmath.org/authors/?q=ai:arnold-medabalimi.nicholas"Duraisamy, Karthik"https://zbmath.org/authors/?q=ai:duraisamy.karthikSummary: Koopman decomposition is a nonlinear generalization of eigen-decomposition, and is being increasingly utilized in the analysis of spatio-temporal dynamics. Well-known techniques such as the dynamic mode decomposition (DMD) and its linear variants provide approximations to the Koopman operator, and have been applied extensively in many fluid dynamic problems.
Despite being endowed with a richer dictionary of nonlinear observables, nonlinear variants of the DMD, such as extended/kernel dynamic mode decomposition (EDMD/KDMD), are seldom applied to large-scale problems primarily due to the difficulty of discerning the Koopman-invariant subspace from thousands of resulting Koopman eigenmodes. To address this issue, we propose a framework based on multi-task feature learning to extract the most informative Koopman-invariant subspace by removing redundant and spurious Koopman triplets. In particular, we develop a pruning procedure that penalizes departure from linear evolution. These algorithms can be viewed as sparsity-promoting extensions of EDMD/KDMD. Furthermore, we extend KDMD to a continuous-time setting and show a relationship between the present algorithm, sparsity-promoting DMD and an empirical criterion from the viewpoint of non-convex optimization. The effectiveness of our algorithm is demonstrated on examples ranging from simple dynamical systems to two-dimensional cylinder wake flows at different Reynolds numbers and a three-dimensional turbulent ship-airwake flow. The latter two problems are designed such that very strong nonlinear transients are present, thus requiring an accurate approximation of the Koopman operator. Underlying physical mechanisms are analysed, with an emphasis on characterizing transient dynamics. The results are compared with existing theoretical expositions and numerical approximations.\textit{The game of drones:} rapid agent-based machine-learning models for multi-UAV path planninghttps://zbmath.org/1496.761132022-11-17T18:59:28.764376Z"Zohdi, T. I."https://zbmath.org/authors/?q=ai:zohdi.tarek-iSummary: The goal of this article is to provide basic modeling and simulation techniques for systems of multiple interacting Unmanned Aerial Vehicles, so-called ``swarms'', for applications in mapping.
Also, the paper illustrates the application of basic machine-learning algorithms to optimize their information gathering. Numerical examples are provided to illustrate the concepts.Utilizing adaptive boosting to detect quantum steerabilityhttps://zbmath.org/1496.810372022-11-17T18:59:28.764376Z"Song, Hong-fei"https://zbmath.org/authors/?q=ai:song.hongfei"Zhang, Jun"https://zbmath.org/authors/?q=ai:zhang.jun.5"Zhang, Hao"https://zbmath.org/authors/?q=ai:zhang.hao.4|zhang.hao.1|zhang.hao|zhang.hao.2|zhang.hao.3Summary: We use the Adaptive Boosting (Adaboost) algorithm to detect the quantum steerability of an arbitrary two-qubit quantum state and predict the steerable bounds of the generalized Werner state. The results show that the classifiers trained by Adaboost perform better than those constructed by the support vector machine (SVM). In particular, a high-performance classifier is obtained with partial information only, measured in three fixed measurement directions. In the application of predicting the steerable bounds of the generalized Werner state, the classifiers constructed by Adaboost predict bounds closer to the theoretical ones. What is more, we give the feature selection of the high-performance classifier.
In this chapter, we will learn the fundamental differences between quantum and classical processes and introduce some quantum protocols which demonstrate the advantages of quantum devices over their classical counterparts. After introducing the basics of quantum theory, we cover several topics of interest in quantum information. We begin with an overview of super-dense coding, quantum teleportation, and quantum key distribution, communication protocols that each demonstrate the security features of quantum communication. We then peer into the realm of current research by first introducing quantum error-correction and its connections to some old problems in graph theory (Weaver's quantum Ramsey theorem), and then introduce the topic of non-local games and quantum correlations and discuss their role in device-independent quantum cryptography. We conclude with a historical survey of the field, providing many references for the reader to more deeply explore the practical and theoretical sides of quantum information.
For the entire collection see [Zbl 1484.05004].Error correction of the continuous-variable quantum hybrid computation on two-node cluster states: limit of squeezinghttps://zbmath.org/1496.810442022-11-17T18:59:28.764376Z"Korolev, S. B."https://zbmath.org/authors/?q=ai:korolev.s-b"Golubeva, T. Yu."https://zbmath.org/authors/?q=ai:golubeva.t-yuSummary: In this paper, we investigate the error correction of universal Gaussian transformations obtained in the process of continuous-variable quantum computations. We have tried to bring our theoretical studies closer to the actual picture in the experiment. When investigating the error correction procedure, we have considered that both the resource GKP state itself and the entanglement transformation are imperfect. In reality, the GKP state has a finite width associated with the finite degree of squeezing, and the entanglement transformation is performed with error. We have considered a hybrid scheme to implement the universal Gaussian transformations. In this scheme, the transformations are realized through computations on the cluster state, supplemented by linear optical operation. This scheme gives the smallest error in the implementation of universal Gaussian transformations. The use of such a scheme made it possible to reduce the oscillator squeezing threshold required for the implementing of fault-tolerant quantum computation schemes close to reality to \(-19.25\) dB.A digital quantum simulation of the Agassi modelhttps://zbmath.org/1496.811052022-11-17T18:59:28.764376Z"Pérez-Fernández, Pedro"https://zbmath.org/authors/?q=ai:perez-fernandez.pedro"Arias, José-Miguel"https://zbmath.org/authors/?q=ai:arias.jose-miguel"García-Ramos, José-Enrique"https://zbmath.org/authors/?q=ai:garcia-ramos.jose-enrique"Lamata, Lucas"https://zbmath.org/authors/?q=ai:lamata.lucasSummary: A digital quantum simulation of the Agassi model from nuclear physics is proposed and analyzed. 
The proposal is worked out for the case of four different sites. Numerical simulations and analytical estimations are presented to illustrate the feasibility of this proposal with current technology. The proposed approach is fully scalable to a larger number of sites. The use of a quantum correlation function as a probe to explore the quantum phases by quantum simulating the time dynamics, with no need to compute the ground state, is also studied. Evidence is given showing that the amplitude of the time dynamics of a correlation function in this quantum simulation is linked to the different quantum phases of the system. This approach establishes an avenue for the digital quantum simulation of useful models in nuclear physics.On the remote entanglement of MW qubits using hybrid Rydberg systemshttps://zbmath.org/1496.811082022-11-17T18:59:28.764376Z"Liu, Yubao"https://zbmath.org/authors/?q=ai:liu.yubao"Li, Lin"https://zbmath.org/authors/?q=ai:li.lin.1|li.lin.2|li.lin"Ma, Yiqiu"https://zbmath.org/authors/?q=ai:ma.yiqiuSummary: Distributed quantum computing is a promising architecture for the realization of scalable quantum computers. The cornerstone of such a quantum architecture is the ability to establish quantum correlations between remote computation modules. Remote entanglement and quantum logic gates have been demonstrated in a few physical systems enabled by coupling matter qubits to photons. However, building such a quantum architecture with superconducting qubits seems very challenging due to the lack of an efficient matter-light quantum interface. Following our previous work on hybrid microwave-optical quantum gates, we propose a protocol to perform a remote quantum logic gate between distant superconducting quantum modules. MW qubits in distant modules are connected via photonic ancillary qubits, and a hybrid Rydberg-cavity system is employed as the quantum mediator for microwave-optical photon interaction.
We perform a thorough analysis and find that a high-fidelity remote quantum logic gate is achievable by integrating state-of-the-art quantum systems.Upscaling of two-phase discrete fracture simulations using a convolutional neural networkhttps://zbmath.org/1496.860022022-11-17T18:59:28.764376Z"Andrianov, Nikolai"https://zbmath.org/authors/?q=ai:andrianov.nikolaiSummary: Upscaling methods such as the dual porosity/dual permeability (DPDP) model provide a robust means for numerical simulation of fractured reservoirs. In order to close the DPDP model, one needs to provide the upscaled fracture permeabilities and the parameters of the matrix-fracture mass transfer for every fractured coarse block in the domain. Obtaining these model closures from fine-scale discrete fracture-matrix (DFM) simulations is a lengthy and computationally expensive process. We alleviate these difficulties by pixelating the fracture geometries and predicting the upscaled parameters using a convolutional neural network (CNN), trained on precomputed fine-scale results. We demonstrate that once a trained CNN is available, it can provide the DPDP model closures for a wide range of modeling parameters, not only those for which the training dataset has been obtained. The performance of the DPDP model with both reference and predicted closures is compared to the reference DFM simulations of two-phase flows using synthetic and realistic fracture geometries.
While both DPDP solutions underestimate the matrix-fracture transfer rate, they agree well with each other and demonstrate a significant speedup as compared to the reference fine-scale solution.Physics-constrained deep learning forecasting: an application with capacitance resistive modelhttps://zbmath.org/1496.860092022-11-17T18:59:28.764376Z"Yewgat, Abderrahmane"https://zbmath.org/authors/?q=ai:yewgat.abderrahmane"Busby, Daniel"https://zbmath.org/authors/?q=ai:busby.daniel"Chevalier, Max"https://zbmath.org/authors/?q=ai:chevalier.max"Lapeyre, Corentin"https://zbmath.org/authors/?q=ai:lapeyre.corentin"Teste, Olivier"https://zbmath.org/authors/?q=ai:teste.olivierSummary: It is well known that the construction of traditional reservoir simulation models can be very time- and resource-consuming, particularly in the case of mature fields with long histories and large numbers of wells, where such models can be extremely difficult and slow to history match. In this case, data-driven models can represent a cost-effective alternative, or they can provide complementary analysis to classical reservoir modelling. Due to data scarcity, full machine learning approaches are also usually doomed to fail. In this work we develop a new Physics-Constrained Deep Learning approach that combines neural networks with a reduced physics approach: the Capacitance Resistive Model (CRM). CRM is a data-driven method based on a simple material balance approximation that can provide very useful reservoir insight. CRM can be used to analyze the underlying connections between producer wells and injector wells, which can then be used to better allocate water injection. Such analysis usually requires very long tracer tests or very expensive 4D seismic acquisition and interpretation. CRM can directly provide this well-connection information using only available production and pressure data.
The problem with CRM approaches, based on classical optimizers, is that they often detect spurious correlations and can lack robustness and reliability. Our physics-constrained deep learning approach, called Deep-CRM, performs production-data regularization via the neural network approximation, which helps to provide better CRM parameter identification, together with the use of the robust gradient descent optimization methods developed and widely used by the large deep learning community. We show, first on a synthetic case and then on a real reservoir case, that Deep-CRM was able to identify most of the injector-producer connections with higher accuracy than traditional CRM. Deep-CRM also produced better liquid production forecasts on the performed blind tests.Efficient well placement optimization under uncertainty using a virtual drilling procedurehttps://zbmath.org/1496.860162022-11-17T18:59:28.764376Z"Kristoffersen, Brage S."https://zbmath.org/authors/?q=ai:kristoffersen.brage-s"Silva, Thiago L."https://zbmath.org/authors/?q=ai:silva.thiago-lima"Bellout, Mathias C."https://zbmath.org/authors/?q=ai:bellout.mathias-c"Berg, Carl Fredrik"https://zbmath.org/authors/?q=ai:berg.carl-fredrikSummary: An Automatic Well Planner (AWP) is used to efficiently adjust pre-determined well paths to honor near-well properties and increase overall production. AWP replicates modern geosteering decision-making, where adjustments to pre-programmed well paths are driven by continuous integration of data obtained from logging-while-drilling and look-ahead technology. In this work, AWP is integrated into a robust optimization scheme to develop trajectories that follow reservoir properties in a more realistic manner compared to common well representations for optimization purposes. Core AWP operation relies on an artificial neural network coupled with a geology-based feedback mechanism.
Specifically, for each well path candidate obtained from an outer-loop optimization procedure, AWP customizes trajectories according to the particular geological near-well properties of each realization in an ensemble of models. While well placement searches typically rely on linear well path representations, AWP develops customized trajectories by moving sequentially from heel to toe. Analogous to realistic drilling operations, AWP determines subsequent trajectory points by efficiently processing neighboring geological information. Studies are performed using the Olympus ensemble. AWP and the two derivative-free algorithms used in this work, Asynchronous Parallel Pattern Search (APPS) and Particle Swarm Optimization (PSO), are implemented using NTNU's open-source optimization framework FieldOpt. Results show that, with both APPS and PSO, the AWP solutions outperform the solutions obtained with a straight-line parameterization in all three tested well placement optimization scenarios, which vary from the simplest scenario with a sole producer in a single-realization environment to a scenario with the full ensemble and multiple producers.Sequential design strategy for kriging and cokriging-based machine learning in the context of reservoir history-matchinghttps://zbmath.org/1496.860182022-11-17T18:59:28.764376Z"Thenon, A."https://zbmath.org/authors/?q=ai:thenon.arthur"Gervais, V."https://zbmath.org/authors/?q=ai:gervais.veronique"Le Ravalec, M."https://zbmath.org/authors/?q=ai:le-ravalec.mickaeleSummary: Numerical models representing geological reservoirs can be used to forecast production and help engineers to design optimal development plans. These models should be as representative as possible of the true dynamic behavior and reproduce available static and dynamic data. However, identifying models constrained to production data can be very challenging and time-consuming.
Machine learning techniques can be used to mimic and replace the fluid flow simulator in this process. However, the benefit of these approaches strongly depends on the simulation time required to train reliable predictors. Previous studies highlighted the potential of the multi-fidelity approach rooted in cokriging to efficiently provide accurate estimations of fluid flow simulator outputs. This technique consists in combining simulation results obtained at several levels of resolution of the reservoir model to predict the output properties at the finest (most accurate) level. The degraded levels can correspond, for instance, to a coarser discretization in space or time, or to less complex physics. The underlying idea is to take advantage of the low-cost coarse-level information to limit the total simulation time required to train the meta-models. In this paper, we propose a new sequential design strategy for iteratively and automatically training kriging- and cokriging-based meta-models. As highlighted on two synthetic cases, this approach makes it possible to identify training sets leading to accurate estimations of the error between measured and simulated production data (objective function) while requiring limited simulation times.Use of low-fidelity models with machine-learning error correction for well placement optimizationhttps://zbmath.org/1496.860212022-11-17T18:59:28.764376Z"Tang, Haoyu"https://zbmath.org/authors/?q=ai:tang.haoyu"Durlofsky, Louis J."https://zbmath.org/authors/?q=ai:durlofsky.louis-jSummary: Well placement optimization is commonly performed using population-based global stochastic search algorithms. These optimizations are computationally expensive due to the large number of multiphase flow simulations that must be conducted. In this work, we present an optimization framework in which these simulations are performed with low-fidelity (LF) models.
These LF models are constructed from the underlying high-fidelity (HF) geomodel using a global transmissibility upscaling procedure. Tree-based machine-learning methods, specifically random forest and light gradient boosting machine, are applied to estimate the error in objective function value (in this case net present value, NPV) associated with the LF models. In the offline (preprocessing) step, preliminary optimizations are performed using LF models, and a clustering procedure is applied to select a representative set of 100--150 well configurations to use for training. HF simulation is then performed for these configurations, and the tree-based models are trained using an appropriate set of features. In the online (runtime) step, optimization with LF models, with the machine-learning correction, is conducted. Differential evolution is used for all optimizations. Results are presented for two example cases involving the placement of vertical wells in 3D bimodal channelized geomodels. We compare the performance of our procedure to optimization using HF models. In the first case, 25 optimization runs are performed with both approaches. Our method provides an overall speedup factor of 46 relative to optimization using HF models, with the best-case NPV within 1\% of the HF result. In the second case fewer HF optimization runs are conducted (consistent with actual practice), and the overall speedup factor with our approach is about 8. 
In this case, the best-case NPV from our procedure exceeds the HF result by 3.8\%.An intelligent multi-fidelity surrogate-assisted multi-objective reservoir production optimization method based on transfer stackinghttps://zbmath.org/1496.860222022-11-17T18:59:28.764376Z"Wang, Lian"https://zbmath.org/authors/?q=ai:wang.lian"Yao, Yuedong"https://zbmath.org/authors/?q=ai:yao.yuedong"Zhang, Liang"https://zbmath.org/authors/?q=ai:zhang.liang.2|zhang.liang.3|zhang.liang.1|zhang.liang"Adenutsi, Caspar Daniel"https://zbmath.org/authors/?q=ai:adenutsi.caspar-daniel"Zhao, Guoxiang"https://zbmath.org/authors/?q=ai:zhao.guoxiang"Lai, Fengpeng"https://zbmath.org/authors/?q=ai:lai.fengpengSummary: Recently, many researchers have focused on reservoir production optimization because it is one of the most essential processes in closed-loop reservoir management. Surrogate-assisted production optimization in particular has received a lot of research attention. This technique applies a simple yet powerful approximation model to substitute for expensive numerical simulation runs. However, almost all the existing methods independently use a single approximation model and neglect the potential synergies between these models. In order to make full use of the potential synergies of these existing approximation models, a novel multi-fidelity (MF) surrogate-assisted multi-objective production optimization (MOPO) method based on transfer stacking (MFTS-MOPO) is proposed. In the MFTS-MOPO method, radial basis function network and support vector regression surrogate models are applied to approximate the high-fidelity (HF) model as two additional low-fidelity (LF) models. Then a multi-fidelity surrogate model is adopted to evaluate objectives during the optimization process by transferring the two additional LF models and the streamline LF model to the computationally expensive high-fidelity model.
Furthermore, two sampling infill strategies are applied to efficiently improve the quality of the multi-fidelity surrogate model. The uniqueness of the proposed MFTS-MOPO method is that the transfer stacking technique is employed to efficiently use the information from different fidelity models to establish the MF surrogate model, and that an infill sampling strategy is used to improve its performance. In addition, three benchmark problems and two reservoirs of different scales were used to illustrate the effectiveness and accuracy of the MFTS-MOPO method. It was found that the MFTS-MOPO method had superior performance in convergence and diversity compared with other conventional methods.The GLOBAL optimization algorithm. Newly updated with Java implementation and parallelizationhttps://zbmath.org/1496.900022022-11-17T18:59:28.764376Z"Bánhelyi, Balázs"https://zbmath.org/authors/?q=ai:banhelyi.balazs"Csendes, Tibor"https://zbmath.org/authors/?q=ai:csendes.tibor"Lévai, Balázs"https://zbmath.org/authors/?q=ai:levai.balazs-l"Pál, László"https://zbmath.org/authors/?q=ai:pal.laszlo"Zombori, Dániel"https://zbmath.org/authors/?q=ai:zombori.danielPublisher's description: This book explores the updated version of the GLOBAL algorithm, which contains improvements for a local search algorithm and new Java implementations. Efficiency comparisons to earlier versions, and the increased speed achieved by the parallelization, are detailed. Examples are provided for students as well as researchers and practitioners in optimization, operations research, and mathematics to compose their own scripts with ease. A GLOBAL manual is presented in the appendix to assist new users with modules and test functions.
GLOBAL is a successful stochastic multistart global optimization algorithm that has passed several computational tests, and is efficient and reliable for small- to medium-dimensional global optimization problems. The algorithm uses clustering to ensure efficiency and is modular in regard to the two local search methods it starts with, but it can also easily apply other local techniques. The strength of this algorithm lies in its reliability and adaptive algorithm parameters. The GLOBAL algorithm is also freely available in the earlier Fortran, C, and MATLAB implementations.Robustness of scale-free networks with dynamical behavior against multi-node perturbationhttps://zbmath.org/1496.900172022-11-17T18:59:28.764376Z"Lv, Changchun"https://zbmath.org/authors/?q=ai:lv.changchun"Yuan, Ziwei"https://zbmath.org/authors/?q=ai:yuan.ziwei"Si, Shubin"https://zbmath.org/authors/?q=ai:si.shubin"Duan, Dongli"https://zbmath.org/authors/?q=ai:duan.dongliSummary: An issue increasingly attracting attention from scientists and engineers alike is the robustness of networks, that is, their ability to withstand perturbations. It is found that both the network topology and network dynamics affect the robustness of networks. In this article, we present a cascading failure model triggered by perturbing a fraction \(1- p\) of nodes on SF networks with three dynamics: biochemical \((\mathcal{B})\), epidemic \((\mathcal{E})\) and regulatory \((\mathcal{R})\) dynamics. A mathematical method is developed to calculate the cascading failure size and the giant component in order to evaluate the robustness when a fraction \(1-p\) of nodes is perturbed on SF dynamical networks. We perform extensive numerical simulations to test and verify this formula and find that the theoretical results are in good agreement with simulations.
The results show that the network is more robust as the tolerance coefficient \(\delta\) increases, and the size of the network has little influence on the robustness, especially for \(\mathcal{B}\) and \(\mathcal{R}\). Remarkably, network heterogeneity has a positive effect on robustness. Moreover, contrasting behaviors are found: as the parameter \(B\) increases or the parameter \(R\) decreases, the network with \(\mathcal{B}\) dynamics becomes more robust, whereas as \(R\) increases or \(B\) decreases, the networks with \(\mathcal{E}\) and \(\mathcal{R}\) dynamics become more robust. These findings may help engineers improve the robustness of a network or design robust networks with dynamics.A \(\frac{3}{2}\)-approximation algorithm for the student-project allocation problemhttps://zbmath.org/1496.900272022-11-17T18:59:28.764376Z"Cooper, Frances"https://zbmath.org/authors/?q=ai:cooper.frances"Manlove, David"https://zbmath.org/authors/?q=ai:manlove.david-fSummary: The Student-Project Allocation problem with lecturer preferences over Students (SPA-S) comprises three sets of agents, namely students, projects and lecturers, where students have preferences over projects and lecturers have preferences over students. In this scenario we seek a stable matching, that is, an assignment of students to projects such that there is no student and lecturer who have an incentive to deviate from their assignee/s. We study SPA-ST, the extension of SPA-S in which the preference lists of students and lecturers need not be strictly ordered, and may contain ties. In this scenario, stable matchings may be of different sizes, and it is known that MAX SPA-ST, the problem of finding a maximum stable matching in SPA-ST, is NP-hard. We present a linear-time \(\frac{3}{2}\)-approximation algorithm for MAX SPA-ST and an Integer Programming (IP) model to solve MAX SPA-ST optimally.
We compare the approximation algorithm with the IP model experimentally using randomly generated data. We find that the performance of the approximation algorithm easily surpasses the \(\frac{3}{2}\) bound, constructing a stable matching within 92\% of optimal in all cases, with the percentage being far higher for many instances.
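The stability condition in the abstract above can be made concrete with a small sketch. This is a heavily simplified illustration, not the paper's SPA-ST setting: we assume strict preferences (no ties), unit project capacities, and no lecturer capacities, and all names and the data structures are ours. A pair \((s, p)\) blocks a matching when student \(s\) strictly prefers \(p\) to their assignment and \(p\) is either free or offered by a lecturer who prefers \(s\) to the current occupant:

```python
def blocking_pairs(student_pref, lecturer_pref, proj_lecturer, matching):
    """Return the student-project pairs that block `matching`.

    Simplified SPA variant: strict preference lists and unit project
    capacities, only to illustrate the blocking-pair condition."""
    occupant = {p: s for s, p in matching.items() if p is not None}
    blocks = []
    for s, prefs in student_pref.items():
        cur = matching.get(s)
        cur_rank = prefs.index(cur) if cur in prefs else len(prefs)
        for p in prefs[:cur_rank]:       # projects s strictly prefers
            lec = lecturer_pref[proj_lecturer[p]]
            t = occupant.get(p)
            if t is None or lec.index(s) < lec.index(t):
                blocks.append((s, p))    # p is free, or its lecturer prefers s
    return blocks
```

A matching is stable in this simplified sense exactly when `blocking_pairs` returns an empty list.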
For the entire collection see [Zbl 1390.68017].Optimization of location of interconnected facilities on parallel lines with forbidden zoneshttps://zbmath.org/1496.900312022-11-17T18:59:28.764376Z"Zabudskiĭ, Gennadiĭ Grigor'evich"https://zbmath.org/authors/?q=ai:zabudskii.gennadii-grigorevich"Veremchuk, Natal'ya Sergeevna"https://zbmath.org/authors/?q=ai:veremchuk.natalya-sergeevnaSummary: An overview of statements, models and methods for solving the problem of locating interconnected rectangular facilities on parallel lines with forbidden zones is given. The centers of the facilities are connected by communications with each other and with the forbidden zones. It is necessary to place the facilities outside the zones in such a way that the total cost of the communications of the facilities with each other and with the zones is minimal. The main focus is on the problem on a single line; for several lines, communications run through a viaduct. A graph-theoretic model and a partially integer programming model with Boolean variables are constructed. Properties are found that allow us to consider the problem as discrete and decompose it into a number of problems of smaller dimension. Algorithms for finding exact and approximate solutions are developed, and polynomially solvable cases are identified. The results of numerical experiments are presented.Limited-memory common-directions method for large-scale optimization: convergence, parallelization, and distributed optimizationhttps://zbmath.org/1496.900352022-11-17T18:59:28.764376Z"Lee, Ching-pei"https://zbmath.org/authors/?q=ai:lee.ching-pei"Wang, Po-Wei"https://zbmath.org/authors/?q=ai:wang.po-wei"Lin, Chih-Jen"https://zbmath.org/authors/?q=ai:lin.chih-jenSummary: In this paper, we present a limited-memory common-directions method for smooth optimization that interpolates between first- and second-order methods.
At each iteration, a subspace of limited dimension is constructed using first-order information from previous iterations, and an efficient Newton method is deployed to find an approximate minimizer within this subspace. With a properly selected subspace of dimension as small as two, the proposed algorithm achieves the optimal convergence rates for first-order methods while remaining a descent method, and it also possesses fast convergence speed on nonconvex problems. Since the major operations of our method are dense matrix-matrix operations, the proposed method can be efficiently parallelized in multicore environments even for sparse problems. By wisely utilizing historical information, our method is also communication-efficient in distributed optimization using multiple machines, as the Newton steps can be calculated with little communication. A numerical study shows that our method has superior empirical performance on real-world large-scale machine learning problems.Evaluating and tuning \(n\)-fold integer programminghttps://zbmath.org/1496.900362022-11-17T18:59:28.764376Z"Altmanová, Katerina"https://zbmath.org/authors/?q=ai:altmanova.katerina"Knop, Dusan"https://zbmath.org/authors/?q=ai:knop.dusan"Koutecký, Martin"https://zbmath.org/authors/?q=ai:koutecky.martinSummary: In recent years, algorithmic breakthroughs in stringology, computational social choice, scheduling, etc., were achieved by applying the theory of so-called \(n\)-fold integer programming. An \(n\)-fold integer program (IP) has a highly uniform block-structured constraint matrix. \textit{R. Hemmecke} et al. [Math. Program. 137, No. 1--2 (A), 325--341 (2013; Zbl 1262.90104)] showed an algorithm with runtime \(a^{\mathcal{O}(rst + r^2s)}n^3\), where \(a\) is the largest coefficient, \(r\), \(s\), and \(t\) are dimensions of blocks of the constraint matrix and \(n\) is the total dimension of the IP; that is, the algorithm is efficient if the blocks are small and the coefficients are small.
The algorithm works by iteratively improving a feasible solution with augmenting steps, and \(n\)-fold IPs have the special property that augmenting steps are guaranteed to exist in a not-too-large neighborhood. However, this algorithm has never been implemented and evaluated. We have implemented the algorithm and learned the following along the way. The original algorithm is practically unusable, but we discover a series of improvements which make its evaluation possible. Crucially, we observe that a certain constant in the algorithm can be treated as a tuning parameter, which yields an efficient heuristic (essentially searching in a smaller-than-guaranteed neighborhood). Furthermore, the algorithm uses an overly expensive strategy to find a ``best'' step, while finding only an ``approximately best'' step is much cheaper, yet sufficient for quick convergence. Using this insight, we improve the asymptotic dependence on \(n\) from \(n^3\) to \(n^2\log n\), which yields the currently asymptotically fastest algorithm for \(n\)-fold IP. Finally, we tested the behavior of the algorithm with various values of the tuning parameter and different strategies of finding improving steps. First, we show that decreasing the tuning parameter initially leads to an increased number of iterations needed for convergence and eventually to getting stuck in local optima, as expected. However, surprisingly small values of the parameter already exhibit good behavior. Second, our new strategy for finding ``approximately best'' steps wildly outperforms the original construction.
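The augment-and-improve framework described in this abstract can be illustrated with a toy sketch. This is only the generic idea of iteratively applying cost-improving steps drawn from a bounded neighborhood, not the Graver-basis machinery or the \(n\)-fold block structure of the paper; the instance, the `radius` parameter (playing the role of the tuning parameter that bounds the searched neighborhood), and all names are ours:

```python
import itertools

def augment(c, A, b, lower, upper, x, radius=1, max_iters=100):
    """Iteratively improve a feasible integer point x (A @ x == b) for
    min c @ x by searching augmenting steps s with A @ s == 0 inside the
    box {-radius, ..., radius}^n; larger `radius` means a larger (and
    more expensive) neighborhood, mimicking the tuning parameter."""
    n = len(c)
    assert all(sum(A[i][j] * x[j] for j in range(n)) == b[i]
               for i in range(len(A))), "start point must satisfy A x = b"
    for _ in range(max_iters):
        best = None
        for s in itertools.product(range(-radius, radius + 1), repeat=n):
            if any(sum(A[i][j] * s[j] for j in range(n)) != 0
                   for i in range(len(A))):
                continue  # step would leave the affine space A x = b
            y = [xi + si for xi, si in zip(x, s)]
            if any(not (lower[j] <= y[j] <= upper[j]) for j in range(n)):
                continue  # step violates the box bounds
            gain = sum(c[j] * s[j] for j in range(n))
            if gain < 0 and (best is None or gain < best[0]):
                best = (gain, y)  # "best" improving step in the box
        if best is None:
            return x  # no improving step in the neighborhood
        x = best[1]
    return x
```

On a tiny instance, `augment([1, 2, 3], [[1, 1, 1]], [3], [0, 0, 0], [3, 3, 3], [0, 0, 3])` walks from the feasible point `[0, 0, 3]` (cost 9) down to `[3, 0, 0]` (cost 3) via repeated steps `(1, 0, -1)`.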
For the entire collection see [Zbl 1390.68017].Inertial accelerated primal-dual methods for linear equality constrained convex optimization problemshttps://zbmath.org/1496.900562022-11-17T18:59:28.764376Z"He, Xin"https://zbmath.org/authors/?q=ai:he.xin"Hu, Rong"https://zbmath.org/authors/?q=ai:hu.rong"Fang, Ya-Ping"https://zbmath.org/authors/?q=ai:fang.yapingSummary: In this paper, we propose an inertial accelerated primal-dual method for the linear equality constrained convex optimization problem. When the objective function has a ``nonsmooth + smooth'' composite structure, we further propose an inexact inertial primal-dual method by linearizing the smooth individual function and solving the subproblem inexactly. Assuming merely convexity, we prove that the proposed methods enjoy \(\mathcal{O}(1/k^2)\) convergence rate on the objective residual and the feasibility violation in the primal model. Numerical results are reported to demonstrate the validity of the proposed methods.An accelerated variance reducing stochastic method with Douglas-Rachford splittinghttps://zbmath.org/1496.900572022-11-17T18:59:28.764376Z"Liu, Jingchang"https://zbmath.org/authors/?q=ai:liu.jingchang"Xu, Linli"https://zbmath.org/authors/?q=ai:xu.linli"Shen, Shuheng"https://zbmath.org/authors/?q=ai:shen.shuheng"Ling, Qing"https://zbmath.org/authors/?q=ai:ling.qingSummary: We consider the problem of minimizing the regularized empirical risk function which is represented as the average of a large number of convex loss functions plus a possibly non-smooth convex regularization term. In this paper, we propose a fast variance reducing (VR) stochastic method called Prox2-SAGA. Different from traditional VR stochastic methods, Prox2-SAGA replaces the stochastic gradient of the loss function with the corresponding gradient mapping. In addition, Prox2-SAGA also computes the gradient mapping of the regularization term. These two gradient mappings constitute a Douglas-Rachford splitting step. 
For strongly convex and smooth loss functions, we prove that Prox2-SAGA can achieve a linear convergence rate comparable to other accelerated VR stochastic methods. In addition, Prox2-SAGA is more practical, as only the stepsize needs to be tuned. When each loss function is smooth but non-strongly convex, we prove a convergence rate of \({\mathcal {O}}(1/k)\) for the proposed Prox2-SAGA method, where \(k\) is the number of iterations. Moreover, experiments show that Prox2-SAGA is valid for non-smooth loss functions, and for strongly convex and smooth loss functions, Prox2-SAGA is markedly faster when the loss functions are ill-conditioned.Fast convergence of generalized forward-backward algorithms for structured monotone inclusionshttps://zbmath.org/1496.900582022-11-17T18:59:28.764376Z"Maingé, Paul-Emile"https://zbmath.org/authors/?q=ai:mainge.paul-emileSummary: We develop rapidly convergent forward-backward algorithms for computing zeroes of the sum of finitely many maximally monotone operators. A modification of the classical forward-backward method for two general operators is first considered, by incorporating an inertial term (close to the acceleration techniques introduced by Nesterov), a constant relaxation factor and a correction
term. In a Hilbert space setting, we prove the weak convergence to equilibria of the iterates \((x_n)\), with worst-case rates of \(o(n^{-1})\) in terms of both the discrete velocity and the fixed point residual, instead of the classical rates of \(\mathcal O(n^{-1/2})\) established so far for related algorithms. Our procedure
is then adapted to more general monotone inclusions and a fast primal-dual algorithm is proposed for solving convex-concave saddle point problems.Nonlinear optimization and support vector machineshttps://zbmath.org/1496.900592022-11-17T18:59:28.764376Z"Piccialli, Veronica"https://zbmath.org/authors/?q=ai:piccialli.veronica"Sciandrone, Marco"https://zbmath.org/authors/?q=ai:sciandrone.marcoSummary: Support vector machine (SVM) is one of the most important class of machine learning models and algorithms, and has been successfully applied in various fields. Nonlinear optimization plays a crucial role in SVM methodology, both in defining the machine learning models and in designing convergent and efficient algorithms for large-scale training problems. In this paper we present the convex programming problems underlying SVM focusing on supervised binary classification. We analyze the most important and used optimization methods for SVM training problems, and we discuss how the properties of these problems can be incorporated in designing useful algorithms.Characterization results of solutions in interval-valued optimization problems with mixed constraintshttps://zbmath.org/1496.900692022-11-17T18:59:28.764376Z"Treanţă, Savin"https://zbmath.org/authors/?q=ai:treanta.savinSummary: In this paper, we establish some characterization results of solutions associated with a class of interval-valued optimization problems with mixed constraints. More precisely, we investigate the connections between the LU-optimal solutions of the considered interval-valued variational control problem and the saddle-points associated with an interval-valued Lagrange functional corresponding to a modified interval-valued variational control problem. 
The main derived results are accompanied by illustrative examples.An efficient local search for the minimum independent dominating set problemhttps://zbmath.org/1496.900762022-11-17T18:59:28.764376Z"Haraguchi, Kazuya"https://zbmath.org/authors/?q=ai:haraguchi.kazuyaSummary: In the present paper, we propose an efficient local search for the minimum independent dominating set problem. We consider a local search that uses \(k\)-swap as the neighborhood operation. Given a feasible solution \(S\), it is the operation of obtaining another feasible solution by dropping exactly \(k\) vertices from \(S\) and then adding any number of vertices to it. We show that, when \(k=2\) (resp., \(k=3\) and a given solution is minimal with respect to 2-swap), we can find an improved solution in the neighborhood or conclude that no such solution exists in \(O(n\Delta)\) (resp., \(O(n\Delta^3))\) time, where \(n\) denotes the number of vertices and \(\Delta\) denotes the maximum degree. We develop a metaheuristic algorithm that repeats the proposed local search and the plateau search iteratively, where the plateau search examines solutions of the same size as the current solution that are obtainable by exchanging a solution vertex and a non-solution vertex. The algorithm is so effective that, among 80 DIMACS graphs, it updates the best-known solution size for five graphs and performs as well as existing methods for the remaining graphs.
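The 2-swap move in the abstract above (drop two vertices, then add any number back) can be sketched in a few lines. This is a brute-force illustration under our own assumptions, not the paper's refined \(O(n\Delta)\) neighborhood search: we represent the graph as a dict of adjacency sets and use the fact that any maximal independent set is automatically dominating, so greedily re-completing the set after dropping two vertices always restores feasibility:

```python
import itertools

def complete(adj, S):
    """Greedily extend the independent set S to a maximal independent
    set; a maximal independent set is automatically dominating."""
    S = set(S)
    for v in sorted(adj):
        if v not in S and not (adj[v] & S):
            S.add(v)
    return S

def two_swap_search(adj, S):
    """First-improvement 2-swap local search: repeatedly drop a pair of
    vertices and re-complete greedily, keeping any strictly smaller
    independent dominating set that results."""
    S = complete(adj, S)
    improved = True
    while improved:
        improved = False
        for u, v in itertools.combinations(sorted(S), 2):
            T = complete(adj, S - {u, v})
            if len(T) < len(S):
                S, improved = T, True
                break
    return S
```

A swap is accepted only when the completion re-adds at most one vertex, i.e. when the solution strictly shrinks; like the local search in the paper, this sketch can stop at a locally optimal solution rather than a global minimum.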
For the entire collection see [Zbl 1390.68017].Covering and packing of triangles intersecting a straight linehttps://zbmath.org/1496.900802022-11-17T18:59:28.764376Z"Pandit, Supantha"https://zbmath.org/authors/?q=ai:pandit.supanthaSummary: We study four geometric optimization problems: \textit{set cover}, \textit{hitting set}, \textit{piercing set}, and \textit{independent set} with \textit{right-triangles} (a triangle is a right-triangle whose base is parallel to the \(x\)-axis, perpendicular is parallel to the \(y\)-axis, and the slope of the hypotenuse is \(- 1\)). The input triangles are constrained to be intersecting a \textit{straight line}. The straight line can either be a \textit{horizontal} or an \textit{inclined} line (a line whose slope is \(- 1\)). A right-triangle is said to be a \(\lambda\)-\textit{right-triangle}, if the length of both its base and perpendicular is \(\lambda \). For 1-right-triangles where the triangles intersect an inclined line, we prove that the set cover and hitting set problems are \(\mathsf{NP} \)-hard, whereas the piercing set and independent set problems are in \(\mathsf{P} \). The same results hold for 1-right-triangles where the triangles are intersecting a horizontal line instead of an inclined line. We prove that the piercing set and independent set problems with right-triangles intersecting an inclined line are \(\mathsf{NP} \)-hard. Finally, we give an \(n^{O ( \lceil \log c \rceil + 1 )}\) time exact algorithm for the independent set problem with \(\lambda \)-right-triangles intersecting a straight line such that \(\lambda\) takes more than one value from \([ 1 , c ]\), for some integer \(c\). We also present \(O ( n^2 )\)-time dynamic programming algorithms for the independent set problem with 1-right-triangles where the triangles intersect a horizontal line and an inclined line.Block coordinate descent for smooth nonconvex constrained minimizationhttps://zbmath.org/1496.900922022-11-17T18:59:28.764376Z"Birgin, E. 
G."https://zbmath.org/authors/?q=ai:birgin.ernesto-g"Martínez, J. M."https://zbmath.org/authors/?q=ai:martinez.jose-marioSummary: At each iteration of a block coordinate descent method one minimizes an approximation of the objective function with respect to a generally small set of variables subject to constraints in which these variables are involved. The unconstrained case and the case in which the constraints are simple were analyzed in the recent literature. In this paper we address the problem in which block constraints are not simple and, moreover, the case in which they are not defined by global sets of equations and inequations. A general algorithm that minimizes quadratic models with quadratic regularization over blocks of variables is defined and convergence and complexity are proved. In particular, given tolerances \(\delta >0\) and \(\varepsilon >0\) for feasibility/complementarity and optimality, respectively, it is shown that a measure of \((\delta,0)\)-criticality tends to zero; and the number of iterations and functional evaluations required to achieve \((\delta,\varepsilon)\)-criticality is \(O(\varepsilon^{-2})\). Numerical experiments in which the proposed method is used to solve a continuous version of the traveling salesman problem are presented.On the generalized essential matrix correction: an efficient solution to the problem and its applicationshttps://zbmath.org/1496.900992022-11-17T18:59:28.764376Z"Miraldo, Pedro"https://zbmath.org/authors/?q=ai:miraldo.pedro"Cardoso, João R."https://zbmath.org/authors/?q=ai:cardoso.joao-rSummary: This paper addresses the problem of finding the closest generalized essential matrix from a given \(6\times 6\) matrix, with respect to the Frobenius norm. To the best of our knowledge, this nonlinear constrained optimization problem has not been addressed in the literature yet. 
Although it can be solved directly, it involves a large number of constraints, and any optimization method to solve it would require much computational effort. We start by deriving a couple of unconstrained formulations of the problem. After that, we convert the original problem into a new one, involving only orthogonal constraints, and propose an efficient algorithm of steepest descent type to find its solution. To test the algorithms, we evaluate the methods with synthetic data and conclude that the proposed steepest descent-type approach is much faster than the direct application of general optimization techniques to the original formulation with 33 constraints and to the unconstrained ones. To further motivate the relevance of our method, we apply it in two pose problems (relative and absolute) using synthetic and real data.Sketched Newton-Raphsonhttps://zbmath.org/1496.901122022-11-17T18:59:28.764376Z"Yuan, Rui"https://zbmath.org/authors/?q=ai:yuan.rui"Lazaric, Alessandro"https://zbmath.org/authors/?q=ai:lazaric.alessandro"Gower, Robert M."https://zbmath.org/authors/?q=ai:gower.robert-manselConvergence of deep fictitious play for stochastic differential gameshttps://zbmath.org/1496.910152022-11-17T18:59:28.764376Z"Han, Jiequn"https://zbmath.org/authors/?q=ai:han.jiequn"Hu, Ruimeng"https://zbmath.org/authors/?q=ai:hu.ruimeng"Long, Jihao"https://zbmath.org/authors/?q=ai:long.jihaoSummary: Stochastic differential games have been used extensively to model agents' competitions in finance, for instance, in P2P lending platforms from the Fintech industry, the banking system for systemic risk, and insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding Markovian Nash equilibrium of large \(N\)-player asymmetric stochastic differential games [\textit{J. Han} and \textit{R. Hu}, ``Deep fictitious play for finding Markovian Nash equilibrium in multi-agent games'', Proc. Mach. Learn. Res. 
(PMLR) 107, 221--245 (2020)]. By incorporating the idea of fictitious play, the algorithm decouples the game into \(N\) sub-optimization problems, and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We can also show that the strategy based on DFP forms an \(\epsilon\)-Nash equilibrium. We generalize the algorithm by proposing a new approach to decouple the games, and present numerical results of large population games showing the empirical convergence of the algorithm beyond the technical assumptions in the theorems.Variable ordering for decision diagrams: a portfolio approachhttps://zbmath.org/1496.910432022-11-17T18:59:28.764376Z"Karahalios, Anthony"https://zbmath.org/authors/?q=ai:karahalios.anthony"van Hoeve, Willem-Jan"https://zbmath.org/authors/?q=ai:van-hoeve.willem-janSummary: Relaxed decision diagrams have been successfully applied to solve combinatorial optimization problems, but their performance is known to strongly depend on the variable ordering. We propose a portfolio approach to selecting the best ordering among a set of alternatives. We consider several different portfolio mechanisms: a static uniform time-sharing portfolio, an offline predictive model of the single best algorithm using classifiers, a low-knowledge algorithm selection, and a dynamic online time allocator. As a case study, we compare and contrast their performance on the graph coloring problem.
We find that on this problem domain, the dynamic online time allocator provides the best overall performance.A fast, provably accurate approximation algorithm for sparse principal component analysis reveals human genetic variation across the worldhttps://zbmath.org/1496.920492022-11-17T18:59:28.764376Z"Chowdhury, Agniva"https://zbmath.org/authors/?q=ai:chowdhury.agniva"Bose, Aritra"https://zbmath.org/authors/?q=ai:bose.aritra-n"Zhou, Samson"https://zbmath.org/authors/?q=ai:zhou.samson"Woodruff, David P."https://zbmath.org/authors/?q=ai:woodruff.david-p"Drineas, Petros"https://zbmath.org/authors/?q=ai:drineas.petrosSummary: Principal component analysis (PCA) is a widely used dimensionality reduction technique in machine learning and multivariate statistics. To improve the interpretability of PCA, various approaches to obtain sparse principal direction loadings have been proposed, which are termed sparse principal component analysis (SPCA). In this paper, we present \texttt{ThreSPCA}, a provably accurate algorithm based on thresholding the singular value decomposition for the SPCA problem, without imposing any restrictive assumptions on the input covariance matrix. Our thresholding algorithm is conceptually simple; much faster than current state-of-the-art; and performs well in practice. When applied to genotype data from the 1000 Genomes Project, \texttt{ThreSPCA} is faster than previous benchmarks, at least as accurate, and leads to a set of interpretable biomarkers, revealing genetic diversity across the world.
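The core idea of SVD thresholding for sparse PCA can be sketched in a few lines (a hypothetical illustration of the general approach, not the authors' \texttt{ThreSPCA} algorithm or its accuracy guarantees):

```python
import numpy as np

def threshold_spca(A, k):
    """Sparse leading direction by SVD thresholding: take the top
    right singular vector of A, keep its k largest-magnitude
    loadings, zero out the rest, and renormalize."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    v = Vt[0]                             # dense top loading vector
    keep = np.argsort(np.abs(v))[-k:]     # indices of the k largest |v_i|
    s = np.zeros_like(v)
    s[keep] = v[keep]
    return s / np.linalg.norm(s)

# rank-1 data whose leading direction is dominated by two coordinates
A = np.outer(np.ones(6), [3.0, 1.0, 0.1, 0.05, 0.0, 0.0])
print(threshold_spca(A, 2))
```

The sparse vector returned has exactly \(k\) nonzero loadings, which is what makes the resulting principal component interpretable as a small set of biomarkers.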
For the entire collection see [Zbl 1493.92001].Sorting by \(k\)-cuts on signed permutationshttps://zbmath.org/1496.920532022-11-17T18:59:28.764376Z"Rodrigues Oliveira, Andre"https://zbmath.org/authors/?q=ai:oliveira.andre-rodrigues"Oliveira Alexandrino, Alexsandro"https://zbmath.org/authors/?q=ai:alexandrino.alexsandro-oliveira"Jean, Géraldine"https://zbmath.org/authors/?q=ai:jean.geraldine"Fertin, Guillaume"https://zbmath.org/authors/?q=ai:fertin.guillaume"Dias, Ulisses"https://zbmath.org/authors/?q=ai:dias.ulisses"Dias, Zanoni"https://zbmath.org/authors/?q=ai:dias.zanoniSummary: Sorting by genome rearrangements is a classic problem in computational biology. Several models have been considered so far, each of them defines how a genome is modeled (for example, permutations when assuming no duplicated genes, strings if duplicated genes are allowed, and/or use of signs on each element when gene orientation is known), and which rearrangements are allowed. Recently, a new problem, called sorting by multi-cut rearrangements, was proposed. It uses the \(k\)-cut rearrangement which cuts a permutation (or a string) at \(k \ge 2\) places and rearranges the generated blocks to obtain a new permutation (or string) of same size. This new rearrangement may model \textit{chromoanagenesis}, a phenomenon consisting of massive simultaneous rearrangements. Similarly as the double-cut-and-join, this new rearrangement also generalizes several genome rearrangements such as reversals, transpositions, revrevs, transreversals, and block-interchanges. In this paper, we extend a previous work based on unsigned permutations and strings to \textit{signed} permutations. We show the complexity of this problem for different values of \(k\), that the approximation algorithm proposed for unsigned permutations with any value of \(k\) can be adapted to signed permutations, and a 1.5-approximation algorithm for the specific case \(k=4\).
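The \(k\)-cut operation on a signed permutation can be made concrete in a few lines (a hypothetical sketch of the operation itself, not of the paper's sorting or approximation algorithms):

```python
def k_cut(perm, cuts, order, flips):
    """One k-cut: cut the signed permutation at the given positions,
    then reassemble the blocks in `order`, reversing and negating
    every block listed in `flips` (a signed reversal of that block)."""
    bounds = [0] + list(cuts) + [len(perm)]
    blocks = [perm[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
    out = []
    for i in order:
        b = blocks[i]
        if i in flips:
            b = [-x for x in reversed(b)]
        out.extend(b)
    return out

# a signed reversal is the special case k = 2: cut twice, flip the middle block
print(k_cut([1, -3, -2, 4], cuts=(1, 3), order=(0, 1, 2), flips={1}))
```

Transpositions and block-interchanges arise the same way by permuting the blocks without flipping, which is why the \(k\)-cut generalizes those rearrangements.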
For the entire collection see [Zbl 1492.92002].DeepMinimizer: a differentiable framework for optimizing sequence-specific minimizer schemeshttps://zbmath.org/1496.920672022-11-17T18:59:28.764376Z"Hoang, Minh"https://zbmath.org/authors/?q=ai:hoang.minh-a"Zheng, Hongyu"https://zbmath.org/authors/?q=ai:zheng.hongyu"Kingsford, Carl"https://zbmath.org/authors/?q=ai:kingsford.carlSummary: Minimizers are \(k\)-mer sampling schemes designed to generate sketches for large sequences that preserve sufficiently long matches between sequences. Despite their widespread application, learning an effective minimizer scheme with optimal sketch size is still an open question. Most work in this direction focuses on designing schemes that work well on expectation over random sequences, which have limited applicability to many practical tools. On the other hand, several methods have been proposed to construct minimizer schemes for a specific target sequence. These methods, however, require greedy approximations to solve an intractable discrete optimization problem on the permutation space of \(k\)-mer orderings. To address this challenge, we propose: (a) a reformulation of the combinatorial solution space using a deep neural network re-parameterization; and (b) a fully differentiable approximation of the discrete objective. We demonstrate that our framework, \textsc{DeepMinimizer}, discovers minimizer schemes that significantly outperform state-of-the-art constructions on genomic sequences.
For the entire collection see [Zbl 1493.92001].Benchmarking penalized regression methods in machine learning for single cell RNA sequencing datahttps://zbmath.org/1496.920752022-11-17T18:59:28.764376Z"Puliparambil, Bhavithry Sen"https://zbmath.org/authors/?q=ai:puliparambil.bhavithry-sen"Tomal, Jabed"https://zbmath.org/authors/?q=ai:tomal.jabed-h"Yan, Yan"https://zbmath.org/authors/?q=ai:yan.yanSummary: Single cell RNA sequencing (scRNA-seq) technology has enabled the biological research community to explore gene expression at a single-cell resolution. By studying differences in gene expression, it is possible to differentiate cell clusters and types within tissues. One of the major challenges in a scRNA-seq study is feature selection in high dimensional data. Several statistical and machine learning algorithms are available to solve this problem, but their performances across data sets lack systematic comparison. In this research, we benchmark different penalized regression methods, which are suitable for scRNA-seq data. Results from four different scRNA-seq data sets show that sparse group lasso (SGL) implemented by the SGL package in R performs better than other methods in terms of area under the receiver operating curve (AUC). The computation time for different algorithms varies between data sets with SGL having the least average computation time. Based on our findings, we propose a new method that applies SGL on a smaller pre-selected subset of genes to select the differentially expressed genes in scRNA-seq data. The reduction in the number of genes before SGL reduces the computational hardware requirement from 32 GB RAM to 8 GB RAM. The proposed method also demonstrates a consistent improvement in AUC over SGL.
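The sparse group lasso penalty \(\lambda_1\|\beta\|_1 + \lambda_2\sum_g\|\beta_g\|_2\) minimized by SGL has a closed-form proximal step, which a proximal-gradient solver applies after each gradient update. A minimal sketch of that shrinkage step (not the SGL package's full solver):

```python
from math import sqrt

def sgl_prox(v, groups, lam1, lam2):
    """Proximal operator of the sparse group lasso penalty
    lam1*||b||_1 + lam2*sum_g ||b_g||_2: elementwise soft-threshold
    each group, then shrink the whole group toward zero."""
    out = [0.0] * len(v)
    for g in groups:
        # elementwise soft-thresholding with level lam1
        z = [max(abs(v[i]) - lam1, 0.0) * (1 if v[i] > 0 else -1) for i in g]
        nz = sqrt(sum(x * x for x in z))
        if nz > lam2:                    # otherwise the whole group is zeroed
            for i, x in zip(g, z):
                out[i] = (1 - lam2 / nz) * x
    return out

# one group survives, the weaker group is zeroed out entirely
print(sgl_prox([3.0, -0.5, 0.1, 0.2], groups=[[0, 1], [2, 3]],
               lam1=1.0, lam2=0.5))
```

Zeroing whole groups is what lets SGL discard entire gene sets at once while the \(\ell_1\) part keeps within-group sparsity, which matches the feature-selection use described above.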
For the entire collection see [Zbl 1492.92002].CLMB: deep contrastive learning for robust metagenomic binninghttps://zbmath.org/1496.920782022-11-17T18:59:28.764376Z"Zhang, Pengfei"https://zbmath.org/authors/?q=ai:zhang.pengfei"Jiang, Zhengyuan"https://zbmath.org/authors/?q=ai:jiang.zhengyuan"Wang, Yixuan"https://zbmath.org/authors/?q=ai:wang.yixuan"Li, Yu"https://zbmath.org/authors/?q=ai:li.yu.7Summary: The reconstruction of microbial genomes from large metagenomic datasets is a critical procedure for finding uncultivated microbial populations and defining their microbial functional roles. To achieve that, we need to perform metagenomic binning, clustering the assembled contigs into draft genomes. Despite the existing computational tools, most of them neglect one important property of the metagenomic data, that is, the noise. To further improve the metagenomic binning step and reconstruct better metagenomes, we propose a deep contrastive learning framework for metagenome binning (CLMB), which can efficiently eliminate the disturbance of noise and produce more stable and robust results. Essentially, instead of denoising the data explicitly, we add simulated noise to the training data and force the deep learning model to produce similar and stable representations for both the noise-free data and the distorted data. Consequently, the trained model will be robust to noise and handle it implicitly during usage. CLMB outperforms the previous state-of-the-art binning methods significantly, recovering the most near-complete genomes on almost all the benchmarking datasets (up to 17\% more reconstructed genomes compared to the second-best method). It also improves the performance of bin refinement, reconstructing 8--22 more high-quality genomes and 15--32 more middle-quality genomes than the second-best result.
Impressively, in addition to being compatible with the binning refiner, single CLMB even recovers on average 15 more HQ genomes than the refiner of VAMB and Maxbin on the benchmarking datasets. On a real mother-infant microbiome dataset with 110 samples, CLMB is scalable and practical to recover 365 high-quality and middle-quality genomes (including 21 new ones), providing insights into the microbiome transmission. CLMB is open-source and available at \url{https://github.com/zpf0117b/CLMB/}.
For the entire collection see [Zbl 1493.92001].Informatics in control, automation and robotics. 14th international conference, ICINCO 2017 Madrid, Spain, July 26--28, 2017 Revised selected papershttps://zbmath.org/1496.930042022-11-17T18:59:28.764376ZPublisher's description: The book focuses on the latest endeavours relating to research and developments conducted in the fields of Control, Robotics and Automation. Through more than twenty revised and extended articles, the present book aims to provide the most up-to-date state of the art of the aforementioned fields, allowing researchers, PhD students and engineers not only to update their knowledge but also to benefit from the source of inspiration represented by the set of selected articles of the book.
The editors' deliberate intention to cover the theoretical facets of those fields as well as their practical accomplishments and implementations offers the benefit of gathering in a single volume a factual and well-balanced view of current research in those topics. Particular attention to ``Intelligent Robots and Control'' constitutes another benefit of this book.
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1419.93003]. For additional papers of the conference see [Zbl 1485.93011].Informatics in control, automation and robotics. 15th international conference, ICINCO 2018, Porto, Portugal, July 29--31, 2018, Revised selected papershttps://zbmath.org/1496.930052022-11-17T18:59:28.764376ZPublisher's description: The goal of this book is to familiarize readers with the latest research on, and recent advances in, the field of Informatics in Control, Automation and Robotics. It gathers a selection of papers highlighting the state-of-the-art in Intelligent Control Systems, Optimization, Robotics and Automation, Signal Processing, Sensors, Systems Modelling and Control. Combining theoretical aspects with practical applications, the book offers a well-balanced overview of the latest achievements, and will provide researchers, engineers and PhD students with both a vital update and new inspirations for their own research.
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1485.93011]. For additional papers of the conference see [Zbl 1485.93017].Informatics in control, automation and robotics. 16th international conference, ICINCO 2019 Prague, Czech Republic, July 29--31, 2019, Revised selected papershttps://zbmath.org/1496.930062022-11-17T18:59:28.764376ZPublisher's description: This book focuses on the latest endeavors relating to research and developments conducted in the fields of control, robotics and automation. Through more than ten revised and extended articles, the present book aims to provide the most up-to-date state of the art of the aforementioned fields, allowing researchers, Ph.D. students and engineers not only to update their knowledge but also to benefit from the source of inspiration represented by the set of selected articles of the book.
The editors' deliberate intention to cover the theoretical facets of those fields as well as their practical accomplishments and implementations offers the benefit of gathering in the same volume a factual and well-balanced view of current research in those topics. Special attention to ``Intelligent Robots and Control'' constitutes another benefit of this book.
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1496.93005]. For additional papers of the conference see [Zbl 1485.93012].Informatics in control, automation and robotics. 17th international conference, ICINCO 2020 Lieusaint, Paris, France, July 7--9, 2020. Revised selected papershttps://zbmath.org/1496.930072022-11-17T18:59:28.764376ZPublisher's description: The book focuses on the latest endeavours relating to research and developments conducted in the fields of Control, Robotics and Automation. Through more than ten revised and extended articles, the present book aims to provide the most up-to-date state of the art of the aforementioned fields, allowing researchers, PhD students and engineers not only to update their knowledge but also to benefit from the source of inspiration represented by the set of selected articles of the book.
The editors' deliberate intention to cover the theoretical facets of those fields as well as their practical accomplishments and implementations offers the benefit of gathering in a single volume a factual and well-balanced view of current research in those topics. Particular attention to ``Intelligent Robots and Control'' constitutes another benefit of this book.
The articles of mathematical interest will be reviewed individually. For the preceding conference see [Zbl 1496.93006; Zbl 1485.93012]. For additional papers of the conference see [Zbl 1485.93013].Experience of intellectualization of situational modeling methods for discrete time-varying spatial objectshttps://zbmath.org/1496.930162022-11-17T18:59:28.764376Z"Fridman, A. Ya."https://zbmath.org/authors/?q=ai:fridman.a-yaSummary: The application of the ideas of artificial intelligence to problems of situational modeling and control of discrete spatial dynamic systems is considered. A set of methods for the synthesis and quantitative comparison of alternative structures and scenarios for the development of the modeling object synthesized taking into account the preferences of the decision maker is presented. As an example of the implementation of these methods, a situational modeling system is described the core of which is the conceptual model of the subject domain and the expert and geoinformation systems integrated with it. The model is formalized in the paradigm of a semiotic formal system; this has made it possible to give strict definitions to the basic concepts of D. A. Pospelov's situational control, which are absent in the prototypes, and thereby provide a unified processing of heterogeneous information and automate a detailed analysis of the model consistency and its operational modification.System of integrated simulation of spread of hazardous factors of fire and evacuation of people from indoorshttps://zbmath.org/1496.930202022-11-17T18:59:28.764376Z"Tsvirkun, A. D."https://zbmath.org/authors/?q=ai:tsvirkun.a-d"Rezchikov, A. F."https://zbmath.org/authors/?q=ai:rezchikov.a-f"Filimonyuk, L. Yu."https://zbmath.org/authors/?q=ai:filimonyuk.l-yu"Samartsev, A. A."https://zbmath.org/authors/?q=ai:samartsev.a-a"Ivashchenko, V. A."https://zbmath.org/authors/?q=ai:ivashchenko.v-a"Bogomolov, A. S."https://zbmath.org/authors/?q=ai:bogomolov.a-s"Kushnikov, V. 
A."https://zbmath.org/authors/?q=ai:kushnikov.v-aSummary: We present an integrated mathematical model and its implementation in the form of a software and information complex developed for joint modeling of the spread of dangerous fire factors and spontaneous evacuation of people from premises of complex configuration. The propagation of fire, heat, and smoke is modeled based on the principle of cellular automata. To simulate the evacuation process, we have developed and used a multiagent model that takes into account the physical characteristics and behavior of people during collisions.Deep neural networks algorithms for stochastic control problems on finite horizon: numerical applicationshttps://zbmath.org/1496.931122022-11-17T18:59:28.764376Z"Bachouch, Achref"https://zbmath.org/authors/?q=ai:bachouch.achref"Huré, Côme"https://zbmath.org/authors/?q=ai:hure.come"Langrené, Nicolas"https://zbmath.org/authors/?q=ai:langrene.nicolas"Pham, Huyên"https://zbmath.org/authors/?q=ai:pham.huyenSummary: This paper presents several numerical applications of deep learning-based algorithms for discrete-time stochastic control problems in finite time horizon that have been introduced in [the authors, SIAM J. Numer. Anal. 59, No. 1, 525--557 (2021; Zbl 1466.65007)]. Numerical and comparative tests using TensorFlow illustrate the performance of our different algorithms, namely control learning by performance iteration (algorithms NNcontPI and ClassifPI), control learning by hybrid iteration (algorithms Hybrid-Now and Hybrid-LaterQ), on the 100-dimensional nonlinear PDEs examples from [\textit{W. E} et al., Commun. Math. Stat. 5, No. 4, 349--380 (2017; Zbl 1382.65016)] and on quadratic backward stochastic differential equations as in [\textit{J.-F. Chassagneux} and \textit{A. Richou}, Ann. Appl. Probab. 26, No. 1, 262--304 (2016; Zbl 1334.60129)]. 
We also performed tests on low-dimension control problems such as an option hedging problem in finance, as well as energy storage problems arising in the valuation of gas storage and in microgrid management. Numerical results and comparisons to quantization-type algorithms Qknn, as an efficient algorithm to numerically solve low-dimensional control problems, are also provided.On distributed fusion estimation with stochastic scheduling over sensor networkshttps://zbmath.org/1496.931182022-11-17T18:59:28.764376Z"Yu, Dongdong"https://zbmath.org/authors/?q=ai:yu.dongdong"Xia, Yuanqing"https://zbmath.org/authors/?q=ai:xia.yuanqing"Zhai, Di-Hua"https://zbmath.org/authors/?q=ai:zhai.dihua"Zhan, Yufeng"https://zbmath.org/authors/?q=ai:zhan.yufengSummary: The paper deals with the distributed fusion estimation for linear time-varying systems over sensor networks, in which stochastic sensor scheduling and unknown exogenous inputs are taken into account. In the stochastic sensor scheduling, expensive and cheap channels are used to respectively transmit the high-precision data and the low-precision quantized data. Based on the stochastic scheduling scheme, a recursive minimum mean square error (MMSE) estimator is proposed against the unknown inputs. Then, a distributed fusion estimator is presented by combining local estimates and covariances from all sensors, relying on the covariance intersection (CI) fusion rule. Sufficient conditions are established to ensure that the proposed fusion estimator is stable with the stochastically ultimately bounded estimation error. 
Finally, a target tracking example is given to show the effectiveness of the proposed method.Stochastic optimal control over unreliable communication linkshttps://zbmath.org/1496.931252022-11-17T18:59:28.764376Z"Bengtsson, Fredrik"https://zbmath.org/authors/?q=ai:bengtsson.fredrik"Wik, Torsten"https://zbmath.org/authors/?q=ai:wik.torstenSummary: In this paper LQG control over unreliable communication links is derived. That is to say, the communication channels between the controller and the actuators and between the sensors and the controller are unreliable. This is of growing importance as networked control systems and use of wireless communication in control are becoming increasingly common. The problem of how to optimize LQG control in this case is examined in the situation where communication between the components is done with acknowledgments. Previous solutions to finite horizon discrete time hold-input LQG control for this case do not fully utilize the available information. Here a new solution is presented which resolves this limitation. The solution is linear and covers communication channels subject to both packet losses and delays. The new control scheme is compared with previous solutions for LQG control in simulations, which demonstrates that a significant improvement in the cost can be achieved by fully utilizing the available information.Cryptography, information theory, and error-correction. A handbook for the 21st centuryhttps://zbmath.org/1496.940012022-11-17T18:59:28.764376Z"Bruen, Aiden A."https://zbmath.org/authors/?q=ai:bruen.aiden-a"Forcinito, Mario A."https://zbmath.org/authors/?q=ai:forcinito.mario-a"McQuillan, James M."https://zbmath.org/authors/?q=ai:mcquillan.james-mPublisher's description: As technology continues to evolve Cryptography, Information Theory, and Error-Correction: A Handbook for the 21st Century is an indispensable resource for anyone interested in the secure exchange of financial information. 
Identity theft, cybercrime, and other security issues have taken center stage as information becomes easier to access. Three disciplines offer solutions to these digital challenges: cryptography, information theory, and error-correction, all of which are addressed in this book.
This book is geared toward a broad audience. It is an excellent reference for both graduate and undergraduate students of mathematics, computer science, cybersecurity, and engineering. It is also an authoritative overview for professionals working at financial institutions, law firms, and governments who need up-to-date information to make critical decisions. The book's discussions will be of interest to those involved in blockchains as well as those working in companies developing and applying security for new products, like self-driving cars. With its reader-friendly style and interdisciplinary emphasis this book serves as both an ideal teaching text and a tool for self-learning for IT professionals, statisticians, mathematicians, computer scientists, electrical engineers, and entrepreneurs.
Six new chapters cover current topics like Internet of Things security, new identities in information theory, blockchains, cryptocurrency, compression, cloud computing and storage. Increased security and applicable research in elliptic curve cryptography are also featured. The book also:
\par i) Shares vital, new research in the field of information theory
\par ii) Provides quantum cryptography updates
\par iii) Includes over 350 worked examples and problems for greater understanding of ideas.
Cryptography, Information Theory, and Error-Correction guides readers in their understanding of reliable tools that can be used to store or transmit digital information safely.
See the review of the first edition in [Zbl 1071.94001].Modern cryptography. Volume 1. A classical introduction to informational and mathematical principlehttps://zbmath.org/1496.940022022-11-17T18:59:28.764376Z"Zheng, Zhiyong"https://zbmath.org/authors/?q=ai:zheng.zhiyongThe present book is a textbook of theoretical cryptography, suitable for senior students in mathematics and, as a compulsory course, for science and engineering postgraduates working in cryptography. It deals with information theory, the statistical characteristics of cryptosystems, and the computational complexity of cryptographic algorithms, and discusses several important public-key cryptosystems. It emphasises the mathematical principles behind various cryptographic and authentication schemes. The book contains seven chapters.
The first chapter presents some preliminary knowledge. Basic facts about maps are recalled, and the computational complexity of an algorithm that receives as input integers is introduced. Furthermore, Jensen inequality and the Stirling formula are presented. Finally, the \(n\)-fold Bernoulli experiment, Chebyshev inequality, and stochastic process are discussed.
The second chapter is devoted to code theory. Its goal is to give an introduction of the theory of error-correcting codes. It includes Hamming distance, Lee distance, linear codes, some typical good codes, and Mac Williams and Shannon theorems.
The third chapter presents Shannon's theory. The information space, the entropy, the redundancy, the Markov space, and the source coding theorem are discussed. Furthermore, the optimal code theory is investigated and several examples of compressing codes are given. Finally, Shannon's channel coding theorem is proved.
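The central quantity of this chapter is easy to compute; for instance, a source with symbol probabilities \((1/2, 1/4, 1/4)\) has entropy \(1.5\) bits, the optimal average code length promised by the source coding theorem:

```python
from math import log2

def entropy(probs):
    """Shannon entropy H = -sum p_i log2 p_i, in bits per symbol
    (terms with p_i = 0 contribute nothing)."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.25, 0.25]))  # 1.5, achieved by the code 0, 10, 11
```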
Cryptography is the topic of the fourth chapter. It gives an introduction to Shannon's ideas and results in cryptography, and in public key cryptography. First, it deals with the statistical characteristics of cryptosystems, fully confidential systems, and ideal security systems. Further, message authentication systems and the forgery and substitution attacks are discussed. Moreover, some classical encryption algorithms and the RSA, ElGamal, and knapsack public-key cryptosystems are described.
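The key relations of the RSA scheme mentioned above can be illustrated with textbook-size parameters (a toy sketch with a tiny modulus and no padding, insecure by design):

```python
# Toy RSA key generation, encryption, and decryption; illustrative only.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

p, q = 61, 53                  # small primes for illustration
n, phi = p * q, (p - 1) * (q - 1)
e = 17                         # public exponent, coprime to phi
d = egcd(e, phi)[1] % phi      # private exponent: d*e = 1 (mod phi)

m = 65                         # plaintext encoded as an integer < n
c = pow(m, e, n)               # encryption: c = m^e mod n
assert pow(c, d, n) == m       # decryption: c^d mod n recovers m
```

Security rests on the hardness of factoring \(n\), which motivates the primality tests and factoring methods of the fifth chapter.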
In the fifth chapter, primality tests, which are necessary for the construction of a wide class of public-key cryptosystems, are presented. The Fermat and Euler tests are described and the Monte Carlo method is introduced. Furthermore, the factor base method and the continued fraction method are given.
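The Fermat test is easily sketched in Python; it is a Monte Carlo algorithm in the sense of the chapter, and Carmichael numbers can fool it, which is why practical implementations prefer stronger variants such as Miller-Rabin (the parameters below are illustrative):

```python
# Fermat primality test: if n is prime, a^(n-1) = 1 (mod n) for every a
# coprime to n; a composite n usually fails this for a random a.
import random

def fermat_test(n, rounds=20):
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False        # a is a Fermat witness: n is composite
    return True                 # probably prime (can err on Carmichael numbers)

assert fermat_test(97)          # 97 is prime
assert not fermat_test(91)      # 91 = 7 * 13 is composite
```

A "composite" answer is always correct, while a "probably prime" answer may be wrong with small probability, which is the defining asymmetry of a Monte Carlo test.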
The important topic of elliptic curves is introduced in the sixth chapter. The basic theory is presented and some classical public key cryptosystems based on elliptic curves are discussed. Finally, the elliptic curve factorisation method is described.
The last chapter contains classical results on lattices and their applications in public-key cryptography. It gives an introduction to the geometry of numbers and discusses the basic properties of lattices. Furthermore, it studies the reduced bases and the LLL algorithm and presents approximation algorithms for the shortest vector problem and the closest vector problem. Finally, the GGH/HNF cryptosystem, the NTRU cryptosystem, the McEliece/Niederreiter cryptosystem, and the Ajtai/Dwork cryptosystem are presented.
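In dimension 2, the shortest vector problem is solved exactly by Lagrange-Gauss reduction, the two-dimensional idea that the LLL algorithm generalizes; a minimal Python sketch (the basis below is an illustrative example, not taken from the book):

```python
# Lagrange-Gauss reduction: finds a shortest nonzero vector of a rank-2
# lattice, analogously to Euclid's gcd algorithm on integers.

def gauss_reduce(u, v):
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Subtract the nearest-integer multiple of u from v.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v          # u is a shortest nonzero lattice vector
        u, v = v, u

b1, b2 = (47, 95), (40, 81)      # a "bad" (highly skewed) basis
s, _ = gauss_reduce(b1, b2)      # s = (-1, 2), of squared norm 5
```

The reduced basis consists of short, nearly orthogonal vectors, exactly the quality that LLL-reduced bases approximate in higher dimensions and that the lattice-based cryptosystems of this chapter rely on being hard to achieve.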
Reviewer: Dimitrios Poulakis (Thessaloniki)Fractional diffusion equation-based image denoising model using CN-GL schemehttps://zbmath.org/1496.940052022-11-17T18:59:28.764376Z"Abirami, A."https://zbmath.org/authors/?q=ai:abirami.a"Prakash, P."https://zbmath.org/authors/?q=ai:prakash.pradyot|prakash.prem|prakash.prathibha|prakash.p-v|prakash.pankaj|prakash.periasamy"Thangavel, K."https://zbmath.org/authors/?q=ai:thangavel.kSummary: In recent decades, variational methods have achieved great success in reducing noise owing to the use of total variation (TV). The TV-based denoising model introduced by Rudin-Osher-Fatemi (ROF) plays a vital role in denoising different types of images. In this paper, a new denoising model based on a space-fractional diffusion equation is proposed, with a finite domain discretized using the Crank-Nicolson and Grünwald-Letnikov difference schemes. The ROF model has been adopted to solve the proposed model with the help of the Alternating Direction Implicit method to denoise the image. The experimental results of the proposed model have been compared with those of the Gaussian model, and it is observed that the Peak Signal-to-Noise Ratio has been improved.Cauchy noise removal by nonlinear diffusion equationshttps://zbmath.org/1496.940092022-11-17T18:59:28.764376Z"Shi, Kehan"https://zbmath.org/authors/?q=ai:shi.kehan"Dong, Gang"https://zbmath.org/authors/?q=ai:dong.gang"Guo, Zhichang"https://zbmath.org/authors/?q=ai:guo.zhichangSummary: This paper focuses on the problem of image restoration under Cauchy noise. The variational method, which constructs the data fidelity term involving the Cauchy distribution by MAP estimator, has been proven to be a successful approach. In this paper, a nonlinear diffusion equation is proposed to deal with it.
The main ingredients of the proposed equation are a gray-level-based diffusivity that estimates the amplitude of the noise and a classical gradient-based diffusivity that controls the anisotropic diffusion according to the image's local structure. The proposed equation is of nondivergence form, and its properties, including the existence, uniqueness, and stability of solutions, are established via the notion of viscosity solutions. Experimental results show the superiority of the proposed equation over variational methods in restoring small details of images.Quasi-FM waveform using chaotic oscillator for joint radar and communication systemshttps://zbmath.org/1496.940182022-11-17T18:59:28.764376Z"Pappu, Chandra S."https://zbmath.org/authors/?q=ai:pappu.chandra-s"Carroll, Thomas L."https://zbmath.org/authors/?q=ai:carroll.thomas-lSummary: The authors propose a novel signal design for generating wideband quasi-Frequency Modulated (FM) waveforms using chaotic systems. The receiver is based on a self-synchronizing chaotic system, making for fast synchronization that is robust to timing errors or Doppler shifts. The chaotic oscillator has fast and slow time scales, and the slow oscillating part of the chaotic system is used to sweep the fast oscillating part, thereby generating a modulated waveform that changes its frequency as a function of time. The potential of these waveforms is demonstrated for joint radar-communication (RadComm) systems. Using the same nonlinear system, a chaos frequency shift keying (CFSK) approach is utilized to encode the digital information. To decode the information, a drive-response synchronization scheme is utilized.
Results indicate that our proposed signal design closely matches the bit-error rate (BER) of theoretical noncoherent frequency shift keying (FSK) while having good radar imaging capabilities.Zero-knowledge IOPs with linear-time prover and polylogarithmic-time verifierhttps://zbmath.org/1496.940282022-11-17T18:59:28.764376Z"Bootle, Jonathan"https://zbmath.org/authors/?q=ai:bootle.jonathan"Chiesa, Alessandro"https://zbmath.org/authors/?q=ai:chiesa.alessandro"Liu, Siqi"https://zbmath.org/authors/?q=ai:liu.siqiSummary: Interactive oracle proofs (IOPs) are a multi-round generalization of probabilistically checkable proofs that play a fundamental role in the construction of efficient cryptographic proofs.
We present an IOP that simultaneously achieves the properties of zero knowledge, linear-time proving, and polylogarithmic-time verification. We construct a zero-knowledge IOP where, for the satisfiability of an \(N\)-gate arithmetic circuit over any field of size \(\varOmega (N)\), the prover uses \(O(N)\) field operations and the verifier uses \({\mathsf{polylog}}(N)\) field operations (with proof length \(O(N)\) and query complexity \({\mathsf{polylog}}(N))\). Polylogarithmic verification is achieved in the holographic setting for every circuit (the verifier has oracle access to a linear-time-computable encoding of the circuit whose satisfiability is being proved).
Our result implies progress on a basic goal in the area of efficient zero knowledge. Via a known transformation, we obtain a zero knowledge argument system where the prover runs in linear time and the verifier runs in polylogarithmic time; the construction is plausibly post-quantum and only makes a black-box use of lightweight cryptography (collision-resistant hash functions).
For the entire collection see [Zbl 1493.94002].Secure multiparty computation with sublinear preprocessinghttps://zbmath.org/1496.940292022-11-17T18:59:28.764376Z"Boyle, Elette"https://zbmath.org/authors/?q=ai:boyle.elette"Gilboa, Niv"https://zbmath.org/authors/?q=ai:gilboa.niv"Ishai, Yuval"https://zbmath.org/authors/?q=ai:ishai.yuval"Nof, Ariel"https://zbmath.org/authors/?q=ai:nof.arielSummary: A common technique for enhancing the efficiency of secure multiparty computation (MPC) with dishonest majority is via preprocessing: In an offline phase, parties engage in an input-independent protocol to securely generate correlated randomness. Once inputs are known, the correlated randomness is consumed by a ``non-cryptographic'' and highly efficient online protocol.
The correlated randomness in such protocols traditionally comes in two flavors: multiplication triples, which suffice for security against semi-honest parties, and authenticated multiplication triples that yield efficient protocols against malicious parties.
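To illustrate how an (unauthenticated) multiplication triple is consumed in the online phase, consider the classical Beaver protocol for two parties holding additive shares over a prime field (a minimal sketch; the modulus and inputs are illustrative, and real protocols additionally authenticate shares against malicious parties):

```python
# Beaver-triple multiplication: parties hold additive shares of x and y
# plus a preprocessed random triple (a, b, c) with c = a*b mod p; the
# online phase is then "non-cryptographic" and cheap.
import random

p = 2**31 - 1                   # prime field modulus (illustrative)

def share(v):
    # Additive two-party secret sharing over F_p.
    r = random.randrange(p)
    return r, (v - r) % p

x, y = 1234, 5678
a, b = random.randrange(p), random.randrange(p)
c = a * b % p                   # correlated randomness from the offline phase
x0, x1 = share(x); y0, y1 = share(y)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Online: parties open d = x - a and e = y - b (these reveal nothing,
# since a and b are uniformly random masks).
d = (x0 - a0 + x1 - a1) % p
e = (y0 - b0 + y1 - b1) % p
# Each party computes its share of x*y locally; party 0 adds d*e once.
z0 = (d * e + d * b0 + e * a0 + c0) % p
z1 = (d * b1 + e * a1 + c1) % p
assert (z0 + z1) % p == x * y % p
```

The identity behind the protocol is \(xy = de + db + ea + c\) with \(d = x-a\), \(e = y-b\), which is why one triple suffices per multiplication gate.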
Recent constructions of pseudorandom correlation generators [\textit{E. Boyle} et al., Lect. Notes Comput. Sci. 11694, 489--518 (2019; Zbl 07178325)] enable concretely efficient secure generation of multiplication triples with sublinear communication complexity. However, these techniques do not efficiently apply to authenticated triples, except in the case of secure two-party computation of arithmetic circuits over large fields.
In this work, we propose the first concretely efficient approach for (malicious) MPC with preprocessing in which the offline communication is sublinear in the circuit size. More specifically, the offline communication scales with the square root of the circuit size.
From a feasibility point of view, our protocols can make use of any secure protocol for generating (unauthenticated) multiplication triples together with any additive homomorphic encryption. We propose concretely efficient instantiations (based on strong but plausible ``linear-only'' assumptions) from existing homomorphic encryption schemes and pseudorandom correlation generators.
Our technique is based on a variant of a recent protocol of \textit{E. Boyle} et al. [ibid. 12826, 457--485 (2021; Zbl 07511740)] for MPC with preprocessing. As a result, our protocols inherit the succinct correlated randomness feature of the latter protocol.
For the entire collection see [Zbl 1493.94001].Universally composable subversion-resilient cryptographyhttps://zbmath.org/1496.940322022-11-17T18:59:28.764376Z"Chakraborty, Suvradip"https://zbmath.org/authors/?q=ai:chakraborty.suvradip"Magri, Bernardo"https://zbmath.org/authors/?q=ai:magri.bernardo"Nielsen, Jesper Buus"https://zbmath.org/authors/?q=ai:nielsen.jesper-buus"Venturi, Daniele"https://zbmath.org/authors/?q=ai:venturi.danieleSummary: Subversion attacks undermine security of cryptographic protocols by replacing a legitimate honest party's implementation with one that leaks information in an undetectable manner. An important limitation of all currently known techniques for designing cryptographic protocols with security against subversion attacks is that they do not automatically guarantee security in the realistic setting where a protocol session may run concurrently with other protocols.
We remedy this situation by providing a foundation of reverse firewalls in the universal composability (UC) framework. In more detail, our contributions are threefold:
\par i) We generalize the UC framework to the setting where each party consists of a core (which has secret inputs and is in charge of generating protocol messages) and a firewall (which has no secrets and sanitizes the outgoing/incoming communication from/to the core). Both the core and the firewall can be subject to different flavors of corruption, modeling different kinds of subversion attacks. For instance, we capture the setting where a subverted core looks like the honest core to any efficient test, yet it may leak secret information via covert channels (which we call specious subversion).
\par ii) We show how to sanitize UC commitments and UC coin tossing against specious subversion, under the DDH assumption.
\par iii) We show how to sanitize the classical GMW compiler for turning MPC with security in the presence of semi-honest adversaries into MPC with security in the presence of malicious adversaries. This yields a completeness theorem for maliciously secure MPC in the presence of specious subversion. Additionally, all our sanitized protocols are transparent, in the sense that communicating with a sanitized core looks indistinguishable from communicating with an honest core. Thanks to the composition theorem, our methodology allows, for the first time, to design subversion-resilient protocols by sanitizing different sub-components in a modular way.
For the entire collection see [Zbl 1493.94001].On the concrete security of TLS 1.3 PSK modehttps://zbmath.org/1496.940382022-11-17T18:59:28.764376Z"Davis, Hannah"https://zbmath.org/authors/?q=ai:davis.hannah"Diemert, Denis"https://zbmath.org/authors/?q=ai:diemert.denis"Günther, Felix"https://zbmath.org/authors/?q=ai:gunther.felix|gunther.felix.1"Jager, Tibor"https://zbmath.org/authors/?q=ai:jager.tiborSummary: The pre-shared key (PSK) handshake modes of TLS 1.3 allow for the performant, low-latency resumption of previous connections and are widely used on the Web and by resource-constrained devices, e.g., in the Internet of Things. Taking advantage of these performance benefits with optimal and theoretically-sound parameters requires tight security proofs. We give the first tight security proofs for the TLS 1.3 PSK handshake modes.
Our main technical contribution is to address a gap in prior tight security proofs of TLS 1.3 which modeled either the entire key schedule or components thereof as independent random oracles to enable tight proof techniques. These approaches ignore existing interdependencies in TLS 1.3's key schedule, arising from the fact that the same cryptographic hash function is used in several components of the key schedule and the handshake more generally. We overcome this gap by proposing a new abstraction for the key schedule and carefully arguing its soundness via the indifferentiability framework. Interestingly, we observe that for one specific configuration, PSK-only mode with hash function SHA-384, it seems difficult to argue indifferentiability due to a lack of domain separation between the various hash function usages. We view this as an interesting insight for the design of protocols, such as future TLS versions.
For all other configurations however, our proofs significantly tighten the security of the TLS 1.3 PSK modes, confirming standardized parameters (for which prior bounds provided subpar or even void guarantees) and enabling a theoretically-sound deployment.
For the entire collection see [Zbl 1493.94002].Garbled circuits with sublinear evaluatorhttps://zbmath.org/1496.940482022-11-17T18:59:28.764376Z"Haque, Abida"https://zbmath.org/authors/?q=ai:haque.abida"Heath, David"https://zbmath.org/authors/?q=ai:heath.david-g"Kolesnikov, Vladimir"https://zbmath.org/authors/?q=ai:kolesnikov.vladimir"Lu, Steve"https://zbmath.org/authors/?q=ai:lu.steve"Ostrovsky, Rafail"https://zbmath.org/authors/?q=ai:ostrovsky.rafail"Shah, Akash"https://zbmath.org/authors/?q=ai:shah.akashSummary: A recent line of work, Stacked Garbled Circuit (SGC), showed that Garbled Circuit (GC) can be improved for functions that include conditional behavior. SGC relieves the communication bottleneck of 2PC by only sending enough garbled material for a single branch out of the \(b\) total branches. Hence, communication is sublinear in the circuit size. However, both the evaluator and the generator pay in computation, performing at least a factor of \(\log b\) more work than standard GC.
We extend the sublinearity of SGC to also include the work performed by the GC evaluator \(E\); thus we achieve a fully sublinear \(E\), which is essential when optimizing for the online phase. We formalize our approach as a garbling scheme called \(\mathsf{GCWise}\): GC WIth Sublinear Evaluator.
We show one attractive and immediate application, Garbled PIR, a primitive that marries GC with Private Information Retrieval. Garbled PIR allows the GC to non-interactively and sublinearly access a privately indexed element from a publicly known database, and then use this element in continued GC evaluation.
For the entire collection see [Zbl 1493.94001].Multiple coexisting analysis of a fractional-order coupled memristive system and its application in image encryptionhttps://zbmath.org/1496.940502022-11-17T18:59:28.764376Z"Hu, Yongbing"https://zbmath.org/authors/?q=ai:hu.yongbing"Li, Qian"https://zbmath.org/authors/?q=ai:li.qian"Ding, Dawei"https://zbmath.org/authors/?q=ai:ding.dawei"Jiang, Li"https://zbmath.org/authors/?q=ai:jiang.li"Yang, Zongli"https://zbmath.org/authors/?q=ai:yang.zongli"Zhang, Hongwei"https://zbmath.org/authors/?q=ai:zhang.hongwei.1|zhang.hongwei|zhang.hongwei.2"Zhang, Zhixin"https://zbmath.org/authors/?q=ai:zhang.zhixinSummary: In this paper, a fractional-order chaotic circuit with different coupled memristors is established. The dimensionality of the system is reduced by the flux-charge analysis method and the stability of the equilibrium points is analyzed by the fractional-order stability theory. Then, the complex dynamic behaviors, including periodic and chaotic attractors, period doubling bifurcation orbit, coexistence bifurcation, and asymmetric coexisting attractors, are studied by phase diagrams, bifurcation portraits, Lyapunov exponent spectra, and attractive basins. Moreover, the analog circuit of the fractional-order coupled system is constructed and the results validate the correctness of the theoretical analysis. Finally, a novel encryption scheme based on the fractional-order coupled memristive system combined with Josephus traversal and DNA operations is proposed. 
The simulation results show that this algorithm is effective.Indistinguishability obfuscation from LPN over \(\mathbb{F}_p\), DLIN, and PRGs in \(NC^0\)https://zbmath.org/1496.940522022-11-17T18:59:28.764376Z"Jain, Aayush"https://zbmath.org/authors/?q=ai:jain.aayush"Lin, Huijia"https://zbmath.org/authors/?q=ai:lin.huijia"Sahai, Amit"https://zbmath.org/authors/?q=ai:sahai.amitSummary: In this work, we study what minimal sets of assumptions suffice for constructing indistinguishability obfuscation \((i\mathcal{O})\). We prove:
Theorem (Informal): Assume sub-exponential security of the following assumptions:
\par i) the Learning Parity with Noise \((\mathsf{LPN})\) assumption over general prime fields \(\mathbb{F}_p\) with polynomially many \(\mathsf{LPN}\) samples and error rate \(1/k^\delta\), where \(k\) is the dimension of the \(\mathsf{LPN}\) secret, and \(\delta >0\) is any constant;
\par ii) the existence of a Boolean Pseudo-Random Generator \((\mathsf{PRG})\) in \(\mathsf{NC}^0\) with stretch \(n^{1+\tau }\), where \(n\) is the length of the \(\mathsf{PRG}\) seed, and \(\tau >0\) is any constant;
\par iii) the Decision Linear \textsf{DLIN} assumption on symmetric bilinear groups of prime order.
Then, (subexponentially secure) indistinguishability obfuscation for all polynomial-size circuits exists. Further, assuming only polynomial security of the aforementioned assumptions, there exists collusion resistant public-key functional encryption for all polynomial-size circuits.
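For intuition on assumption (ii), a PRG in \(\mathsf{NC}^0\) is one where every output bit depends on only a constant number of seed bits. The sketch below, in the style of Goldreich's local PRG candidate with a 5-ary predicate, shows only the syntax of such a map; the predicate and the random wiring are illustrative assumptions, and no security is claimed for these toy parameters:

```python
# Toy local PRG candidate: every output bit is a fixed 5-ary predicate
# applied to 5 seed bits chosen by a public "wiring" (so the map is NC^0).
import random

def local_prg(seed, out_len, wiring_rng):
    n = len(seed)
    out = []
    for _ in range(out_len):
        i1, i2, i3, i4, i5 = wiring_rng.sample(range(n), 5)  # public indices
        # Illustrative 5-ary predicate: XOR of three bits plus an AND term.
        out.append(seed[i1] ^ seed[i2] ^ seed[i3] ^ (seed[i4] & seed[i5]))
    return out

wiring = random.Random(0)           # fixes the public wiring
seed = [random.randrange(2) for _ in range(64)]
stretch = local_prg(seed, 128, wiring)   # 64 seed bits -> 128 output bits
assert len(stretch) == 128
```

The stretch here is linear merely for readability; assumption (ii) requires polynomial stretch \(n^{1+\tau}\), and the security of such candidates depends delicately on the predicate and the wiring.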
This removes the reliance on the Learning With Errors (LWE) assumption from the recent work of \textit{A. Jain} et al. [Lect. Notes Comput. Sci. 13275, 670--699 (2022; Zbl 07577745)]. As a consequence, we obtain the first fully homomorphic encryption scheme that does not rely on any lattice-based hardness assumption.
Our techniques feature a new notion of randomized encoding called Preprocessing Randomized Encoding (PRE), that essentially can be computed in the exponent of pairing groups. When combined with other new techniques, PRE gives a much more streamlined construction of \(i\mathcal{O}\) while still maintaining reliance only on well-studied assumptions.
For the entire collection see [Zbl 1493.94001].One-shot Fiat-Shamir-based NIZK arguments of composite residuosity and logarithmic-size ring signatures in the standard modelhttps://zbmath.org/1496.940572022-11-17T18:59:28.764376Z"Libert, Benoît"https://zbmath.org/authors/?q=ai:libert.benoit"Khoa Nguyen"https://zbmath.org/authors/?q=ai:khoa-nguyen."Peters, Thomas"https://zbmath.org/authors/?q=ai:peters.thomas-j|peters.thomas-d|peters.thomas-a"Yung, Moti"https://zbmath.org/authors/?q=ai:yung.motiSummary: The standard model security of the Fiat-Shamir transform has been an active research area for many years. In breakthrough results, \textit{R. Canetti} et al. [in: Proceedings of the 51st annual ACM SIGACT symposium on theory of computing, STOC '19, Phoenix, AZ, USA, June 23--26, 2019. New York, NY: Association for Computing Machinery (ACM). 1082--1090 (2019; Zbl 1434.94060)] and \textit{C. Peikert} and \textit{S. Shiehian} [Lect. Notes Comput. Sci. 11692, 89--114 (2019; Zbl 1456.94106)] showed that, under the Learning-With-Errors \textsf{(LWE)} assumption, it provides soundness by applying correlation-intractable (CI) hash functions to so-called trapdoor \(\varSigma\)-protocols. In order to be compatible with CI hash functions based on standard \textsf{LWE} assumptions with polynomial approximation factors, all known such protocols have been obtained via parallel repetitions of a basic protocol with binary challenges. In this paper, we consider languages related to Paillier's composite residuosity assumption \((\mathsf{DCR})\) for which we give the first trapdoor \(\varSigma\)-protocols providing soundness in one shot, via exponentially large challenge spaces. This improvement is analogous to the one enabled by Schnorr over the original Fiat-Shamir protocol in the random oracle model. 
Using the correlation-intractable hash function paradigm, we then obtain simulation-sound NIZK arguments showing that an element of \(\mathbb{Z}_{N^2}^\ast\) is a composite residue, which opens the door to space-efficient applications in the standard model. As a concrete example, we build logarithmic-size ring signatures (assuming a common reference string) with the shortest signature length among schemes based on standard assumptions in the standard model. We prove security under the \(\mathsf{DCR}\) and \textsf{LWE} assumptions, while keeping the signature size comparable with that of random-oracle-based schemes.
For the entire collection see [Zbl 1493.94002].AMBTC based high payload data hiding with modulo-2 operation and Hamming codehttps://zbmath.org/1496.940582022-11-17T18:59:28.764376Z"Li, Li"https://zbmath.org/authors/?q=ai:li.li.25"He, Min"https://zbmath.org/authors/?q=ai:he.min"Zhang, Shanqing"https://zbmath.org/authors/?q=ai:zhang.shanqing"Luo, Ting"https://zbmath.org/authors/?q=ai:luo.ting"Chang, Chin-Chen"https://zbmath.org/authors/?q=ai:chang.chin-chenSummary: An efficient data hiding method with modulo-2 operation and Hamming code (3, 2) based on absolute moment block truncation coding (AMBTC) is proposed. In order to obtain good data hiding performance, different textures are assigned to different embedding strategies. The AMBTC compressed codes are divided into smooth and complex blocks according to texture. In the smooth block, the secret data and the four most significant bit planes of the two quantization levels are combined using a modulo-2 operation to replace the bitmap, in order to improve the security of data transmission. Moreover, Hamming code (3, 2) is used to embed two additional secret bits in the three significant bit planes of the two quantization levels. In the complex block, one secret bit is embedded by swapping the order of the two quantization levels and flipping the bitmap.
Experimental results show that the proposed method achieves higher capacity than existing data hiding methods and maintains good visual quality.Double image encryption algorithm based on neural network and chaoshttps://zbmath.org/1496.940592022-11-17T18:59:28.764376Z"Man, Zhenlong"https://zbmath.org/authors/?q=ai:man.zhenlong"Li, Jinqing"https://zbmath.org/authors/?q=ai:li.jinqing"Di, Xiaoqiang"https://zbmath.org/authors/?q=ai:di.xiaoqiang"Sheng, Yaohui"https://zbmath.org/authors/?q=ai:sheng.yaohui"Liu, Zefei"https://zbmath.org/authors/?q=ai:liu.zefeiSummary: To realize the secure transmission of double images, this paper proposes a double image encryption algorithm based on convolutional neural network (CNN) and dynamic adaptive diffusion. This scheme differs from existing double image encryption techniques. According to the characteristics of digital images, we design a dual-channel (digital channel / optical channel) encryption method, which not only ensures the security of the double images, but also improves the encryption efficiency and reduces the possibility of being attacked. First, a chaotic map is used to control the initial values of the 5D conservative chaotic system to enhance the security of the key. Second, in order to effectively resist known-plaintext and chosen-plaintext attacks, we employ a chaotic sequence as the convolution kernel of a convolutional neural network to generate a plaintext-related chaotic pointer that controls the scrambling operation of the two images. On this basis, a novel image fusion method is designed, which splits and fuses the two images into two different parts according to the amount of information contained. In addition, a dual-channel image encryption scheme, with an optical encryption channel and a digital encryption channel, is designed for the two parts after fusion. The former has better parallelism and higher encryption efficiency, while the latter has higher computational complexity and better encryption reliability.
Especially in the digital encryption channel, a new dynamic adaptive diffusion method is designed, which is more flexible and secure than existing encryption algorithms. Finally, numerical simulation and experimental analysis verify the feasibility and effectiveness of the scheme.Evolving homomorphic secret sharing for hierarchical access structureshttps://zbmath.org/1496.940792022-11-17T18:59:28.764376Z"Phalakarn, Kittiphop"https://zbmath.org/authors/?q=ai:phalakarn.kittiphop"Suppakitpaisarn, Vorapong"https://zbmath.org/authors/?q=ai:suppakitpaisarn.vorapong"Attrapadung, Nuttapong"https://zbmath.org/authors/?q=ai:attrapadung.nuttapong"Matsuura, Kanta"https://zbmath.org/authors/?q=ai:matsuura.kantaSummary: Secret sharing is a cryptographic primitive that divides a secret into several shares, and allows only some combinations of shares to recover the secret. As it can also be used in secure multi-party computation protocols with outsourcing servers, several variations of secret sharing are devised for this purpose. Most of the existing protocols require the number of computing servers to be determined in advance. However, in some situations we may want the system to be ``evolving''. We may want to increase the number of servers and strengthen the security guarantee later in order to improve availability and security of the system. Although evolving secret sharing schemes are available, they do not support computing on shares. On the other hand, ``homomorphic'' secret sharing allows computing on shares with small communication, but it is not evolving. As the contribution of our work, we give the definition of ``evolving homomorphic'' secret sharing supporting both properties. We propose two schemes, one with hierarchical access structure supporting multiplication, and the other with partially hierarchical access structure supporting computation of low degree polynomials. Compared to the work with similar functionality of \textit{A. R. Choudhuri} et al. [Lect.
Notes Comput. Sci. 12826, 94--123 (2021; Zbl 07511728)], our schemes have smaller communication costs.
For the entire collection see [Zbl 1484.68016].