Recent zbMATH articles in MSC 68N
https://zbmath.org/atom/cc/68N
2023-05-31T16:32:50.898670Z

Constructive game logic
https://zbmath.org/1508.03065
Bohrer, Rose; Platzer, André
Summary: Game Logic is an excellent setting to study proofs-about-programs via the interpretation of those proofs as programs, because constructive proofs for games correspond to effective winning strategies to follow in response to the opponent's actions. We thus develop \textit{Constructive Game Logic}, which extends Parikh's Game Logic (GL) with constructivity and with first-order programs à la Pratt's first-order dynamic logic (DL). Our major contributions include: 1. a novel realizability semantics capturing the adversarial dynamics of games, 2. a natural deduction calculus and operational semantics describing the computational meaning of strategies via proof-terms, and 3. theoretical results including soundness of the proof calculus w.r.t. realizability semantics, progress and preservation of the operational semantics of proofs, and existential properties in support of the extraction of computational artifacts from game proofs. Together, these results provide the most general account of a Curry-Howard interpretation for any program logic to date, and the first at all for Game Logic.
For the entire collection see [Zbl 1496.68030].

Automated calculation of higher order partial differential equation constrained derivative information
https://zbmath.org/1508.65118
Maddison, James R.; Goldberg, Daniel N.; Goddard, Benjamin D.
Summary: Developments in automated code generation have allowed extremely compact representations of numerical models, and have allowed associated adjoint models to be derived automatically via high-level algorithmic differentiation. In this article these principles are extended to enable the calculation of higher order derivative information. The higher order derivative information is computed through the automated derivation of tangent-linear equations, which are then treated as new forward equations, and from which higher order tangent-linear and adjoint information can be derived. The principal emphasis is on the calculation of partial differential equation constrained Hessian actions, but the approach generalizes to derivative information of arbitrary order. The derivative calculations are further combined with an advanced data checkpointing strategy. Applications which make use of partial differential equation constrained Hessian actions are presented.

Formal methods in outer space. Essays dedicated to Klaus Havelund on the occasion of his 65th birthday
https://zbmath.org/1508.68014
The articles of mathematical interest will be reviewed individually.

C-language floating-point proofs layered with VST and Flocq
https://zbmath.org/1508.68032
Appel, Andrew W.; Bertot, Yves
Summary: We demonstrate tools and methods for proofs about the correctness and numerical accuracy of C programs.
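For illustration (not part of the reviewed work): the kind of numerical kernel such proofs target — the summary's running example is a Newton's-method square root — can be sketched in Python; this is a hypothetical analogue, not the paper's verified C code.

```python
def newton_sqrt(x, tol=1e-12, max_iter=100):
    """Approximate sqrt(x) by Newton's method: y_{k+1} = (y_k + x / y_k) / 2.

    The iteration converges quadratically for x > 0.  Note that `tol`
    bounds the relative change between successive iterates, not the true
    error; bounding the true rounding error is exactly what Flocq-style
    formal proofs are for.
    """
    if x < 0:
        raise ValueError("square root of a negative number")
    if x == 0.0:
        return 0.0
    y = x if x >= 1.0 else 1.0  # any positive initial guess converges
    for _ in range(max_iter):
        y_next = 0.5 * (y + x / y)
        if abs(y_next - y) <= tol * y_next:
            return y_next
        y = y_next
    return y
```

Verifying such a kernel means bounding both the method error of the iteration and the rounding error of each floating-point operation, which is where the IEEE 754 formalization enters.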
The tools are foundational, in that they are connected to formal semantic specifications of the C operational semantics and of the IEEE 754 floating-point format. The tools are modular, in that the reasoning about C programming can be done quite separately from the reasoning about numerical correctness and numerical accuracy. The tools are general, in that they accommodate almost the entire C language (with pointer data structures, function pointers, control flow, etc.) and applied mathematics (reasoned about in a general-purpose logic and proof assistant with substantial libraries for mathematical reasoning). We demonstrate on a simple Newton's-method square root function.

Corrigendum to: ``C floating-point proofs layered with VST and Flocq''
https://zbmath.org/1508.68033
Appel, Andrew W.; Bertot, Yves
Corrigendum to the authors' paper [ibid. 13, 1--16 (2020; Zbl 1508.68032)].

A calculus for language transformations
https://zbmath.org/1508.68034
Mourad, Benjamin; Cimini, Matteo
Summary: In this paper we propose a calculus for expressing algorithms for programming language transformations. We present the type system and operational semantics of the calculus, and we prove that it is type sound. We have implemented our calculus, and we demonstrate its applicability with common examples in programming languages. As our calculus manipulates inference systems, our work can, in principle, be applied to logical systems.
For the entire collection see [Zbl 1435.68022].

ASP programs with groundings of small treewidth
https://zbmath.org/1508.68035
Bliem, Bernhard
Summary: Recent experiments have shown ASP solvers to run significantly faster on ground programs of small treewidth. If possible, it may therefore be beneficial to write a non-ground ASP encoding such that grounding it together with an input of small treewidth leads to a propositional program of small treewidth. In this work, we prove that a class of non-ground programs called guarded ASP guarantees this property. Guarded ASP is a subclass of the recently proposed class of connection-guarded ASP, which is known to admit groundings whose treewidth depends on both the treewidth and the maximum degree of the input. Here we show that this dependency on the maximum degree cannot be dropped. Hence, in contrast to connection-guarded ASP, guarded ASP promises good performance even if the input has large maximum degree.
For the entire collection see [Zbl 1387.68022].

On uniquely closable and uniquely typable skeletons of lambda terms
https://zbmath.org/1508.68036
Bodini, Olivier; Tarau, Paul
Summary: Uniquely closable skeletons of lambda terms are Motzkin-trees that predetermine the unique closed lambda term that can be obtained by labeling their leaves with de Bruijn indices. Likewise, uniquely typable skeletons of closed lambda terms predetermine the unique simply-typed lambda term that can be obtained by labeling their leaves with de Bruijn indices.
We derive, through a sequence of logic program transformations, efficient code for their combinatorial generation and study their statistical properties.
As a result, we obtain context-free grammars describing closable and uniquely closable skeletons of lambda terms, opening the door for their in-depth study with tools from analytic combinatorics.
Our empirical study of the more difficult case of (uniquely) typable terms reveals some interesting open problems about their density and asymptotic behavior.
As a connection between the two classes of terms, we also show that uniquely typable closed lambda term skeletons of size \(3n+1\) are in a bijection with binary trees of size \(n\).
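For illustration (not part of the reviewed work): binary trees with \(n\) internal nodes are counted by the Catalan numbers, so the bijection above pins down the number of uniquely typable closed skeletons of size \(3n+1\). A quick sketch of the count:

```python
def catalan(n):
    """Catalan number C_n = binomial(2n, n) / (n + 1), computed iteratively
    via the exact recurrence C_{k+1} = C_k * 2(2k + 1) / (k + 2).

    C_n counts binary trees with n internal nodes, so by the bijection in
    the summary it also counts the skeletons of size 3n + 1.
    """
    c = 1
    for k in range(n):
        c = c * 2 * (2 * k + 1) // (k + 2)  # division is exact at each step
    return c
```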
For the entire collection see [Zbl 1392.68017].

Logic programs with polymorphic types: a condition for static type checking
https://zbmath.org/1508.68037
Bonnier, Staffan; Wallgren, Jonas
Summary: This paper describes a polymorphic type system for logic programs. Well-typedness is defined on so-called execution modules of the program, which is shown to yield better results than
[\textit{A. Mycroft} and \textit{R. A. O'Keefe}, Artif. Intell. 23, 295--307 (1984; Zbl 0543.68076)]
in many cases. The system is partly implemented.
For the entire collection see [Zbl 0825.00125].

Termination proofs of well-moded logic programs via conditional rewrite systems
https://zbmath.org/1508.68038
Ganzinger, Harald; Waldmann, Uwe
Summary: In this paper, it is shown that a translation from logic programs to conditional rewrite rules can be used in a straightforward way to check (semi-automatically) whether a program is terminating under the Prolog selection rule.
For the entire collection see [Zbl 0825.00125].

Typed equivalence, type assignment, and type containment
https://zbmath.org/1508.68039
Amadio, Roberto M.
Summary: The study of models based on \textit{partial equivalence relations} (per) and the analysis of the interpretation of \textit{inheritance} that arises in these structures
[\textit{K. B. Bruce} and \textit{G. Longo}, ``A modest model of records, inheritance and bounded quantification'', in: Proceedings of the 3rd annual symposium on logic in computer science, LICS'88. Los Alamitos, CA: IEEE Computer Society. 38--50 (1988; \url{doi:10.1109/LICS.1988.5099})]
leads us to reconsider the classical problem of type-assignment in this framework. Moreover we introduce:
\begin{itemize}
\item[(1)] A natural generalization of type-assignment systems to \textit{typed-equivalence systems} that is suggested by the per models.
\item[(2)] A specialization of the type-assignment system to a \textit{type-containment system} that is motivated by the search for a ``complete'' theory of inheritance.
\end{itemize}
In the last section we show that a fragment of such a theory of inheritance can be \textit{fully automated}.
For the entire collection see [Zbl 0756.68010].

A formal system for the universal quantification of schematic variables
https://zbmath.org/1508.68040
Guidi, Ferruccio

Enhanced type inference for binding-time analysis
https://zbmath.org/1508.68041
Szokoli, Mátyás; Kiss, Attila
Summary: In this paper we take a look at type inference and its uses for binding-time analysis, dynamic typing and better error messages. We propose a new binding-time analysis algorithm \(\mathcal{B}\), which is a modification of an already existing algorithm by \textit{C. K. Gomard} [``Partial type inference for untyped functional programs'', in: Proceedings of the 1990 ACM conference on LISP and functional programming, LFP'90. New York, NY: Association for Computing Machinery (ACM). 282--287 (1990; \url{doi:10.1145/91556.91672})], and discuss the speed difference.

Higher-ranked annotation polymorphic dependency analysis
https://zbmath.org/1508.68042
Thorand, Fabian; Hage, Jurriaan
Summary: The precision of a static analysis can be improved by increasing the context-sensitivity of the analysis. In a type-based formulation of static analysis for functional languages this can be achieved by, e.g., introducing let-polyvariance or subtyping. In this paper we go one step further by defining a higher-ranked polyvariant type system so that even properties of lambda-bound identifiers can be generalized over. We do this for dependency analysis, a generic analysis that can be instantiated to a range of different analyses, all of which can profit in this way.
We prove that our analysis is sound with respect to a call-by-name semantics and that it satisfies a so-called noninterference property. We provide a type reconstruction algorithm that we have proven to be terminating, and sound and complete with respect to its declarative specification. Our principled description can serve as a blueprint for making other analyses higher-ranked.
For the entire collection see [Zbl 1496.68030].

Deadlock detection of Java bytecode
https://zbmath.org/1508.68043
Laneve, Cosimo; Garcia, Abel
Summary: This paper presents a technique for deadlock detection of Java programs. The technique uses typing rules for extracting infinite-state abstract models of the dependencies among the components of the Java intermediate language -- the Java bytecode. Models are subsequently analysed by means of an extension of a solver that we have defined for detecting deadlocks in process calculi. Our technique is complemented by a prototype verifier that also covers most of the Java features.
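For illustration (not the paper's type-based analysis): a classic, much coarser approximation of deadlock detection builds a lock-order graph from observed nested lock acquisitions and reports a potential deadlock when the graph has a cycle. A minimal sketch:

```python
def lock_order_edges(traces):
    """Collect an edge a -> b whenever a thread acquires lock b while
    already holding lock a.  Each trace is a list of lock names in
    acquisition order for one thread."""
    edges = set()
    for trace in traces:
        held = []
        for lock in trace:
            for h in held:
                edges.add((h, lock))
            held.append(lock)
    return edges

def has_cycle(edges):
    """Detect a cycle in the lock-order graph by depth-first search;
    a cycle signals a potential deadlock (e.g. a-then-b vs. b-then-a)."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(node):
        color[node] = GRAY
        for succ in graph.get(node, ()):
            c = color.get(succ, WHITE)
            if c == GRAY or (c == WHITE and dfs(succ)):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in graph)
```

This over-approximates: a cycle means a deadlock is possible under some interleaving, not that one must occur.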
For the entire collection see [Zbl 1392.68017].

Continualization of probabilistic programs with correction
https://zbmath.org/1508.68044
Laurel, Jacob; Misailovic, Sasa
Summary: Probabilistic programming offers a concise way to represent stochastic models and perform automated statistical inference. However, many real-world models have discrete or hybrid discrete-continuous distributions, for which existing tools may suffer non-trivial limitations. Inference and parameter estimation can be exceedingly slow for these models because many inference algorithms compute results faster (or exclusively) when the distributions being inferred are continuous. To address this discrepancy, this paper presents Leios. Leios is the first approach for systematically approximating arbitrary probabilistic programs that have discrete, or hybrid discrete-continuous, random variables. The approximate programs have all their variables fully continualized. We show that once we have the fully continuous approximate program, we can perform inference and parameter estimation faster by exploiting the existing support that many languages offer for continuous distributions. Furthermore, we show that the estimates obtained when performing inference and parameter estimation on the continuous approximation are still comparably close to both the true parameter values and the estimates obtained when performing inference on the original model.
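For illustration (a standard smoothing trick, not necessarily Leios's transformation): one simple instance of continualization replaces a discrete Bernoulli(p) variable by a Gaussian matched to its first two moments, trading exactness for a fully continuous program.

```python
import random

def bernoulli_continualized(p, rng):
    """Continuous surrogate for Bernoulli(p): a Gaussian with the same
    mean (p) and variance (p * (1 - p)).  The surrogate takes values
    outside {0, 1}, but matches the original's low-order moments."""
    return rng.gauss(p, (p * (1.0 - p)) ** 0.5)

rng = random.Random(0)
p = 0.3
samples = [bernoulli_continualized(p, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)  # close to p, yet every sample is continuous
```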
For the entire collection see [Zbl 1496.68030].

Modular relaxed dependencies in weak memory concurrency
https://zbmath.org/1508.68045
Paviotti, Marco; Cooksey, Simon; Paradis, Anouk; Wright, Daniel; Owens, Scott; Batty, Mark
Summary: We present a denotational semantics for weak memory concurrency that avoids \textit{thin-air reads}, provides data-race-free programs with sequentially consistent semantics (DRF-SC), and supports a compositional refinement relation for validating optimisations. Our semantics identifies false program dependencies that might be removed by compiler optimisation, and leaves in place just the dependencies necessary to rule out thin-air reads. We show that our dependency calculation can be used to rule out thin-air reads in any axiomatic concurrency model, in particular C++. We present a tool that automatically evaluates litmus tests, show that we can augment C++ to fix the thin-air problem, and prove that our augmentation is compatible with the previously used compilation mappings over key processor architectures. We argue that our dependency calculation offers a practical route to fixing the longstanding problem of thin-air reads in the C++ specification.
For the entire collection see [Zbl 1496.68030].

Trace-relating compiler correctness and secure compilation
https://zbmath.org/1508.68046
Abate, Carmine; Blanco, Roberto; Ciobâcă, Ștefan; Durier, Adrien; Garg, Deepak; Hrițcu, Cătălin; Patrignani, Marco; Tanter, Éric; Thibault, Jérémy
Summary: Compiler correctness is, in its simplest form, defined as the inclusion of the set of traces of the compiled program in the set of traces of the original program, which is equivalent to the preservation of all trace properties. Here traces collect, for instance, the externally observable events of each execution. This definition requires, however, the sets of traces of the source and target languages to be exactly the same, which is not the case when the languages are far apart or when observations are fine-grained. To overcome this issue, we study a generalized compiler correctness definition, which uses source and target traces drawn from potentially different sets and connected by an arbitrary relation. We set out to understand what guarantees this generalized compiler correctness definition gives us when instantiated with a non-trivial relation on traces. When this trace relation is not equality, it is no longer possible to preserve the trace properties of the source program unchanged.
Instead, we provide a generic characterization of the target trace property ensured by correctly compiling a program that satisfies a given source property, and dually, of the source trace property one is required to show in order to obtain a certain target property for the compiled code. We show that this view on compiler correctness can naturally account for undefined behavior, resource exhaustion, different source and target values, side-channels, and various abstraction mismatches. Finally, we show that the same generalization also applies to many secure compilation definitions, which characterize the protection of a compiled program against linked adversarial code.
For the entire collection see [Zbl 1496.68030].

Optimal contiguous expression DAG evaluations
https://zbmath.org/1508.68047
Keßler, Christoph W.; Rauber, Thomas
Summary: Generating evaluations for expression DAGs with a minimal number of registers is NP-complete. We present two algorithms that generate optimal contiguous evaluations for a given DAG. The first is a modification of a complete search algorithm that omits redundant evaluations. The second algorithm generates only the most promising evaluations by splitting the DAG into trees with import and export nodes and evaluating the trees with a modified labeling scheme. Experiments with randomly generated DAGs and large DAGs from real application programs confirm that the new algorithms generate optimal contiguous evaluations quite fast.
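For illustration (the classical tree case only, not the paper's DAG algorithms): for expression trees without sharing, a Sethi-Ullman-style labeling computes the minimal register need of a contiguous evaluation; "modified labeling" schemes like the one above generalize this idea to tree fragments of a DAG. A sketch:

```python
def su_label(tree):
    """Minimal registers for a contiguous evaluation of an expression tree.

    `tree` is a leaf (any non-tuple value) or a binary node (left, right).
    A leaf needs one register to hold its value; at a binary node,
    evaluating the costlier subtree first parks its result in a single
    register while the other subtree is evaluated, so the need is the
    max of the two labels, plus one only when they are equal.
    """
    if not isinstance(tree, tuple):
        return 1
    l, r = su_label(tree[0]), su_label(tree[1])
    return max(l, r) if l != r else l + 1
```

A left-leaning chain of any depth needs only 2 registers, while a complete tree of depth \(k\) needs \(k+1\), which is why evaluation order matters.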
For the entire collection see [Zbl 0921.00030].

A certified reference validation mechanism for the permission model of Android
https://zbmath.org/1508.68048
Betarte, Gustavo; Campo, Juan; Gorostiaga, Felipe; Luna, Carlos
Summary: Android embodies security mechanisms at both the OS and the application level. In this platform application security is built primarily upon a system of permissions which specify restrictions on the operations a particular process can perform. The critical role of these security mechanisms makes them a prime target for (formal) verification. We present an idealized model of a reference monitor for the novel mechanisms of Android 6 (and later), where it is possible to grant permissions at run time. Using the programming language of the proof assistant Coq we have developed a functional implementation of the reference validation mechanism and certified its correctness with respect to the specified reference monitor. Several properties concerning the permission model of Android 6 and its security mechanisms have been formally stated and proved. Applying the program extraction mechanism provided by Coq we have also derived a certified Haskell prototype of the reference validation mechanism.
For the entire collection see [Zbl 1392.68017].

Local data race freedom with non-multi-copy atomicity
https://zbmath.org/1508.68049
Abe, Tatsuya
Summary: Data race freedom ensures the sequentially consistent behavior of concurrent programs under relaxed memory consistency models (MCMs), and reduces the state explosion problem for software model checking with MCMs. However, data race freedom is too strong a requirement to include all interesting programs. In this paper, we define a small-step operational semantics for relaxed MCMs, define an observable equivalence using the notion of bisimulation, and propose the property of local data race freedom (LDRF), which requires race freedom locally instead of globally. LDRF includes some interesting programs, such as the independent-reads-independent-writes program, which is well known to exhibit curious behaviors under non-multi-copy atomic MCMs, and some concurrent copying garbage collection algorithms. We also introduce an optimization method called memory sharing for model checking of LDRF programs, and show through experiments that memory-sharing optimization mitigates the state explosion problem with non-multi-copy atomic MCMs.
For the entire collection see [Zbl 1390.68021].

Runners in action
https://zbmath.org/1508.68050
Ahman, Danel; Bauer, Andrej
Summary: Runners of algebraic effects, also known as comodels, provide a mathematical model of resource management. We show that they also give rise to a programming concept that models top-level external resources and allows programmers to modularly define their own intermediate ``virtual machines''. We capture the core ideas of programming with runners in an equational calculus \(\lambda_{\mathsf{coop}}\), which we equip with a sound and coherent denotational semantics that guarantees the linear use of resources and the execution of finalisation code. We accompany \(\lambda_{\mathsf{coop}}\) with examples of runners in action, provide a prototype language implementation in \textsc{OCaml}, as well as a \textsc{Haskell} library based on \(\lambda_{\mathsf{coop}}\).
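For illustration (a toy imitation in Python, unrelated to the formal calculus \(\lambda_{\mathsf{coop}}\)): the flavor of programming with runners can be conveyed by letting user code be a generator that yields operation requests, with a runner co-interpreting each request against a resource and finalisation guaranteed to run.

```python
def run(program, runner, state, finalise=lambda s: s):
    """Drive `program` (a generator function yielding (op, arg) requests)
    with `runner` (a dict mapping op names to functions
    (arg, state) -> (result, state)).  Finalisation runs on normal
    return and when the program raises."""
    gen = program()
    result = None
    try:
        while True:
            op, arg = gen.send(result)
            result, state = runner[op](arg, state)
    except StopIteration as stop:
        return stop.value, finalise(state)
    except BaseException:
        finalise(state)  # finalisation also runs when the program fails
        raise

# A runner for a single mutable cell, and a program that uses it.
cell_runner = {
    "get": lambda _arg, s: (s, s),
    "put": lambda arg, _s: (None, arg),
}

def program():
    x = yield ("get", None)
    yield ("put", x + 1)
    return x

value, final_state = run(program, cell_runner, 41)
```

The runner plays the role of a small "virtual machine" for its resource; swapping in a different runner reinterprets the same program against a different resource.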
For the entire collection see [Zbl 1496.68030].

Fuzzy unification and generalization of first-order terms over similar signatures
https://zbmath.org/1508.68051
Aït-Kaci, Hassan; Pasi, Gabriella
Summary: Unification and generalization are operations on two terms computing respectively their greatest lower bound and least upper bound when the terms are quasi-ordered by subsumption up to variable renaming (i.e., \(t_1\preceq t_2\) iff \(t_1=t_2\sigma\) for some variable substitution \(\sigma\)). When term signatures are such that distinct functor symbols may be related by a fuzzy equivalence (called a similarity), these operations can be formally extended to tolerate mismatches on functor names and/or arity or argument order. We reformulate and extend previous work with a declarative approach, defining unification and generalization as sets of axioms and rules forming a complete constraint-normalization proof system. These include the Reynolds-Plotkin term-generalization procedures, Maria Sessa's ``weak'' unification with partially fuzzy signatures and its corresponding generalization, as well as novel extensions of such operations to fully fuzzy signatures (i.e., similar functors with possibly different arities). One advantage of this approach is that it requires no modification of the conventional data structures for terms and substitutions. This, and the fact that these declarative specifications are efficiently executable conditional Horn clauses, offers great practical potential for fuzzy information-handling applications.
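For illustration (a sketch of the general idea only, not the paper's proof system): the flavor of Sessa-style weak unification can be shown with a toy first-order unifier that tolerates distinct functor names whenever a similarity map rates them above a cut value, composing degrees by minimum.

```python
def weak_unify(t1, t2, sim, cut):
    """Unify first-order terms, tolerating similar functor names.

    Representation: variables are strings starting with an uppercase
    letter, constants are other strings, and compound terms are tuples
    (functor, arg1, ..., argn).  `sim` maps frozenset({f, g}) to a
    similarity degree in [0, 1]; identical symbols count as 1.0.
    Returns (substitution, degree), where degree is the minimum
    similarity used, or None if unification fails below `cut`.
    No occurs check, as in Prolog's default unification.
    """
    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, s):
        while is_var(t) and t in s:
            t = s[t]
        return t

    def unify(a, b, s, d):
        a, b = walk(a, s), walk(b, s)
        if is_var(a):
            return {**s, a: b}, d
        if is_var(b):
            return {**s, b: a}, d
        a = a if isinstance(a, tuple) else (a,)  # view constants as 0-ary
        b = b if isinstance(b, tuple) else (b,)
        if len(a) != len(b):
            return None                          # arity mismatch
        deg = 1.0 if a[0] == b[0] else sim.get(frozenset({a[0], b[0]}), 0.0)
        if deg < cut:
            return None                          # functors not similar enough
        d = min(d, deg)
        for x, y in zip(a[1:], b[1:]):
            res = unify(x, y, s, d)
            if res is None:
                return None
            s, d = res
        return s, d

    return unify(t1, t2, {}, 1.0)
```

For example, with `sim = {frozenset({"father", "dad"}): 0.8}`, the terms `("father", "X")` and `("dad", "john")` unify with substitution `{"X": "john"}` at degree 0.8.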
For the entire collection see [Zbl 1392.68017].

Generation of initial contexts for effective deadlock detection
https://zbmath.org/1508.68052
Albert, Elvira; Gómez-Zamalloa, Miguel; Isabel, Miguel
Summary: It has recently been proposed that testing based on symbolic execution can be used in conjunction with static deadlock analysis to define a deadlock detection framework that (i) can show deadlock presence, in which case a concrete test case and trace are obtained, and (ii) can also prove deadlock freedom. Such symbolic execution starts from an initial distributed context, i.e., a set of locations and their initial tasks. Considering all possibilities results in a combinatorial explosion of the different distributed contexts that must be considered. This paper proposes a technique to effectively generate initial contexts that can lead to deadlock, using the possible conflicting task interactions identified by static analysis, and to discard other distributed contexts that cannot lead to deadlock. The proposed technique has been integrated in the above-mentioned deadlock detection framework, enabling it to analyze systems without the need for any user-supplied initial context.
For the entire collection see [Zbl 1392.68017].

Context generation from formal specifications for C analysis tools
https://zbmath.org/1508.68053
Alberti, Michele; Signoles, Julien
Summary: Analysis tools like abstract interpreters, symbolic execution tools and testing tools usually require a proper context to give useful results when analyzing a particular function. Such a context initializes the function parameters and global variables to comply with function requirements. However, it may be error-prone to write it by hand: the handwritten context might contain bugs or not match the intended specification. A more robust approach is to specify the context in a dedicated specification language, and to make the analysis tools support it properly. This may mean putting significant development effort into enhancing the tools, something that is often not feasible, if possible at all.
This paper presents a way to systematically generate such a context from a formal specification of a C function. This is applied to a subset of the ACSL specification language in order to generate suitable contexts for the abstract interpretation-based value analysis plug-ins of Frama-C, a framework for the analysis of code written in C. The idea presented here has been implemented in a new Frama-C plug-in which is currently in use in an operational industrial setting.
For the entire collection see [Zbl 1392.68017].

Liveness-driven random program generation
https://zbmath.org/1508.68054
Barany, Gergö
Summary: Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations.
To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code.
The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix.
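For illustration (a hypothetical miniature, far simpler than the Frama-C plugin): the core idea can be conveyed by a toy straight-line generator that builds the program backwards, maintaining the live-variable set so that every assignment it emits defines a variable some later statement uses — hence no dead code by construction.

```python
import random

def generate_live_program(n_stmts, rng):
    """Generate a straight-line C-like function bottom-up with no dead code.

    Statements are constructed last-to-first.  A statement may only
    define a variable that is currently live (i.e. used later); its
    operands become live in turn, so every computed value flows into
    the final result.  Variables still live at the top become parameters.
    """
    counter = [0]

    def fresh():
        counter[0] += 1
        return f"v{counter[0]}"

    result = fresh()
    live = {result}
    stmts = [f"return {result};"]
    for _ in range(n_stmts):
        target = rng.choice(sorted(live))  # must define something still needed
        live.discard(target)
        ops = [rng.choice(sorted(live)) if live and rng.random() < 0.5 else fresh()
               for _ in range(2)]
        live.update(ops)                   # operands are now needed earlier
        stmts.append(f"int {target} = {ops[0]} + {ops[1]};")
    params = ", ".join(f"int {v}" for v in sorted(live))
    body = "\n  ".join(reversed(stmts))
    return f"int f({params}) {{\n  {body}\n}}"

print(generate_live_program(4, random.Random(1)))
```

Because a variable leaves the live set when defined and fresh names never recur, each variable is defined exactly once and used at least once.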
For the entire collection see [Zbl 1392.68017].

On the versatility of open logical relations. Continuity, automatic differentiation, and a containment theorem
https://zbmath.org/1508.68055
Barthe, Gilles; Crubillé, Raphaëlle; Dal Lago, Ugo; Gavazzo, Francesco
Summary: Logical relations are among the most powerful techniques in the theory of programming languages, and have been used extensively for proving properties of a variety of higher-order calculi. However, there are properties that cannot be immediately proved by means of logical relations, for instance program continuity and differentiability in higher-order languages extended with real-valued functions. Informally, the problem stems from the fact that these properties are naturally expressed on terms of non-ground type (or, equivalently, on open terms of base type), and there is no apparent good definition for a base case (i.e., for closed terms of ground type). To overcome this issue, we study a generalization of the concept of a logical relation, called \textit{open logical relation}, and prove that it can be fruitfully applied in several contexts in which the property of interest is about expressions of first-order type. Our setting is a simply-typed \(\lambda\)-calculus enriched with real numbers and real-valued first-order functions from a given set, such as the set of continuous or differentiable functions. We first prove a containment theorem stating that, for any collection of real-valued first-order functions including projection functions and closed under function composition, any well-typed term of first-order type denotes a function belonging to that collection.
Then, we show by way of open logical relations the correctness of the core of a recently published algorithm for forward automatic differentiation. Finally, we define a refinement-based type system for local continuity in an extension of our calculus with conditionals, and prove the soundness of the type system using open logical relations.
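For illustration (the standard dual-number technique, not the published algorithm itself): forward automatic differentiation of the kind whose correctness is at stake here is commonly implemented by evaluating the program on dual numbers.

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0; `d` carries the derivative."""

    def __init__(self, v, d=0.0):
        self.v, self.d = v, d

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)

    __rmul__ = __mul__

def derivative(f, x):
    """d/dx f(x), obtained by evaluating f on the dual number x + 1*eps."""
    return f(Dual(x, 1.0)).d

# d/dx (x^2 + 3x) at x = 2 is 2x + 3 = 7
slope = derivative(lambda x: x * x + 3 * x, 2.0)
```

Proving that such an evaluation really computes the derivative of the denoted function, for all well-typed programs, is exactly the kind of statement the open-logical-relations technique handles.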
For the entire collection see [Zbl 1496.68030].

Concise read-only specifications for better synthesis of programs with pointers
https://zbmath.org/1508.68056
Costea, Andreea; Zhu, Amy; Polikarpova, Nadia; Sergey, Ilya
Summary: In program synthesis there is a well-known trade-off between \textit{concise} and \textit{strong} specifications: if a specification is too verbose, it might be harder to write than the program; if it is too weak, the synthesised program might not match the user's intent. In this work we explore the use of annotations for restricting memory access permissions in program synthesis, and show that they can make specifications much stronger while remaining surprisingly concise. Specifically, we enhance Synthetic Separation Logic (SSL), a framework for synthesis of heap-manipulating programs, with the logical mechanism of \textit{read-only borrows}.
We observe that this minimalistic and conservative SSL extension benefits the synthesis in several ways, making it more (a) \textit{expressive} (stronger correctness guarantees are achieved with a modest annotation overhead), (b) \textit{effective} (it produces more concise and easier-to-read programs), (c) \textit{efficient} (faster synthesis), and (d) \textit{robust} (synthesis efficiency is less affected by the choice of the search heuristic). We explain the intuition and provide formal treatment for read-only borrows. We substantiate the claims (a)--(d) by describing our quantitative evaluation of the borrowing-aware synthesis implementation on a series of standard benchmark specifications for various heap-manipulating programs.
For the entire collection see [Zbl 1496.68030].Soundness conditions for big-step semanticshttps://zbmath.org/1508.680572023-05-31T16:32:50.898670Z"Dagnino, Francesco"https://zbmath.org/authors/?q=ai:dagnino.francesco"Bono, Viviana"https://zbmath.org/authors/?q=ai:bono.viviana"Zucca, Elena"https://zbmath.org/authors/?q=ai:zucca.elena"Dezani-Ciancaglini, Mariangiola"https://zbmath.org/authors/?q=ai:dezani-ciancaglini.mariangiolaSummary: We propose a general proof technique to show that a predicate is \textit{sound}, that is, prevents stuck computation, \textit{with respect to a big-step semantics}. This result may look surprising, since in big-step semantics there is no difference between non-terminating and stuck computations, hence soundness cannot even be \textit{expressed}. The key idea is to define constructions yielding an extended version of a given arbitrary big-step semantics, where the difference is made explicit. The extended semantics are exploited in the meta-theory, notably they are necessary to show that the proof technique works. However, they remain \textit{transparent} when using the proof technique, since it consists in checking three conditions on the original rules only, as we illustrate by several examples.
For the entire collection see [Zbl 1496.68030].Predicate pairing with abstraction for relational verificationhttps://zbmath.org/1508.680582023-05-31T16:32:50.898670Z"De Angelis, Emanuele"https://zbmath.org/authors/?q=ai:de-angelis.emanuele"Fioravanti, Fabio"https://zbmath.org/authors/?q=ai:fioravanti.fabio"Pettorossi, Alberto"https://zbmath.org/authors/?q=ai:pettorossi.alberto"Proietti, Maurizio"https://zbmath.org/authors/?q=ai:proietti.maurizioSummary: Relational verification is a technique that aims at proving properties that relate two different program fragments, or two different program runs. It has been shown that constrained Horn clauses (CHCs) can effectively be used for relational verification by applying a CHC transformation, called predicate pairing, which allows the CHC solver to infer relations among arguments of different predicates. In this paper we study how the effects of the predicate pairing transformation can be enhanced by using various abstract domains based on Linear Arithmetic (i.e., the domain of convex polyhedra and some of its subdomains) during the transformation. After presenting an algorithm for predicate pairing with abstraction, we report on the experiments we have performed on over a hundred relational verification problems by using various abstract domains. The experiments have been performed by using the VeriMAP verification system, together with the Parma Polyhedra Library (PPL) and the Z3 solver for CHCs.
For the entire collection see [Zbl 1392.68017].Liberate abstract garbage collection from the stack by decomposing the heaphttps://zbmath.org/1508.680592023-05-31T16:32:50.898670Z"Germane, Kimball"https://zbmath.org/authors/?q=ai:germane.kimball"Adams, Michael D."https://zbmath.org/authors/?q=ai:adams.michael-dSummary: Abstract garbage collection and the use of pushdown systems each enhance the precision of control-flow analysis (CFA). However, their respective needs conflict: abstract garbage collection requires the stack but pushdown systems obscure it. Though several existing techniques address this conflict, none take full advantage of the underlying interplay. In this paper, we dissolve this conflict with a technique which exploits the precision of pushdown systems to decompose the heap across the continuation. This technique liberates abstract garbage collection from the stack, increasing its effectiveness and the compositionality of its host analysis. We generalize our approach to apply compositional treatment to abstract timestamps which induces the context abstraction of \(m\)-CFA, an abstraction more precise than \(k\)-CFA's for many common programming patterns.
For the entire collection see [Zbl 1496.68030].Variant-based decidable satisfiability in initial algebras with predicateshttps://zbmath.org/1508.680602023-05-31T16:32:50.898670Z"Gutiérrez, Raúl"https://zbmath.org/authors/?q=ai:gutierrez.raul"Meseguer, José"https://zbmath.org/authors/?q=ai:meseguer.joseSummary: Decision procedures can be either theory-specific, e.g., Presburger arithmetic, or theory-generic, applying to an infinite number of user-definable theories. Variant satisfiability is a theory-generic procedure for quantifier-free satisfiability in the initial algebra of an order-sorted equational theory \((\varSigma,E\cup B)\) under two conditions: (i) \(E\cup B\) has the finite variant property and \(B\) has a finitary unification algorithm; and (ii) \((\varSigma,E\cup B)\) protects a constructor subtheory \((\varOmega,E_{\varOmega}\cup B_{\varOmega})\) that is OS-compact. These conditions apply to many user-definable theories, but have a main limitation: they apply well to data structures, but often do not hold for user-definable predicates on such data structures. We present a theory-generic satisfiability decision procedure, and a prototype implementation, extending variant-based satisfiability to initial algebras with user-definable predicates under fairly general conditions.
For the entire collection see [Zbl 1392.68017].Erlang code evolution controlhttps://zbmath.org/1508.680612023-05-31T16:32:50.898670Z"Insa, David"https://zbmath.org/authors/?q=ai:insa.david"Pérez, Sergio"https://zbmath.org/authors/?q=ai:perez.sergio-l|perez.sergio-p|perez.sergio-a"Silva, Josep"https://zbmath.org/authors/?q=ai:silva.josep"Tamarit, Salvador"https://zbmath.org/authors/?q=ai:tamarit.salvadorSummary: In the software lifecycle, a program can evolve several times for different reasons such as the optimisation of a bottle-neck, the refactoring of an obscure function, etc. These code changes often involve several functions or modules, so it can be difficult to know whether the correct behaviour of the previous releases has been preserved in the new release. Most developers rely on a previously defined test suite to check this behaviour preservation. We propose here an alternative approach to automatically obtain a test suite that specifically focusses on comparing the old and new versions of the code. Our test case generation is directed by: a sophisticated combination of several already existing tools such as TypEr, CutEr, and PropEr; the choice of an expression of interest whose behaviour must be preserved; and the recording of the sequences of values this expression is evaluated to. All the presented work has been implemented in an open-source tool that is publicly available on GitHub.
For the entire collection see [Zbl 1392.68017].Program verification with separation logichttps://zbmath.org/1508.680622023-05-31T16:32:50.898670Z"Iosif, Radu"https://zbmath.org/authors/?q=ai:iosif.raduSummary: Separation Logic is a framework for the development of modular program analyses for sequential, inter-procedural and concurrent programs. The first part of the paper introduces Separation Logic first from a historical, then from a program verification perspective. Because program verification eventually boils down to deciding logical queries such as the validity of verification conditions, the second part is dedicated to a survey of decision procedures for Separation Logic, that stem from either SMT, proof theory or automata theory. Incidentally we address issues related to decidability and computational complexity of such problems, in order to expose certain sources of intractability.
For the entire collection see [Zbl 1390.68021].Confluence and convergence in probabilistically terminating reduction systemshttps://zbmath.org/1508.680632023-05-31T16:32:50.898670Z"Kirkeby, Maja H."https://zbmath.org/authors/?q=ai:kirkeby.maja-hanne"Christiansen, Henning"https://zbmath.org/authors/?q=ai:christiansen.henningSummary: Convergence of an abstract reduction system (ARS) is the property that any derivation from an initial state will end in the same final state, a.k.a. normal form. We generalize this for probabilistic ARS as almost-sure convergence, meaning that the normal form is reached with probability one, even if diverging derivations may exist. We show and exemplify properties that can be used for proving almost-sure convergence of probabilistic ARS, generalizing known results from ARS.
For the entire collection see [Zbl 1392.68017].Verifying visibility-based weak consistencyhttps://zbmath.org/1508.680642023-05-31T16:32:50.898670Z"Krishna, Siddharth"https://zbmath.org/authors/?q=ai:krishna.siddharth"Emmi, Michael"https://zbmath.org/authors/?q=ai:emmi.michael"Enea, Constantin"https://zbmath.org/authors/?q=ai:enea.constantin"Jovanović, Dejan"https://zbmath.org/authors/?q=ai:jovanovic.dejanSummary: Multithreaded programs generally leverage efficient and thread-safe \textit{concurrent objects} like sets, key-value maps, and queues. While some concurrent-object operations are designed to behave atomically, each witnessing the atomic effects of predecessors in a linearization order, others forego such strong consistency to avoid complex control and synchronization bottlenecks. For example, contains(value) methods of key-value maps may iterate through key-value entries without blocking concurrent updates, to avoid unwanted performance bottlenecks, and consequently overlook the effects of some linearization-order predecessors. While such \textit{weakly-consistent} operations may not be atomic, they still offer guarantees, e.g., only observing values that have been present.
In this work we develop a methodology for proving that concurrent object implementations adhere to weak-consistency specifications. In particular, we consider (forward) simulation-based proofs of implementations against \textit{relaxed-visibility specifications}, which allow designated operations to overlook some of their linearization-order predecessors, i.e., behaving as if they never occurred. Besides annotating implementation code to identify \textit{linearization points}, i.e., points at which operations' logical effects occur, we also annotate code to identify \textit{visible operations}, i.e., operations whose effects are observed; in practice this annotation can be done automatically by tracking the writers to each accessed memory location. We formalize our methodology over a general notion of transition systems, agnostic to any particular programming language or memory model, and demonstrate its application, using automated theorem provers, by verifying models of Java concurrent object implementations.
For the entire collection see [Zbl 1496.68030].Local reasoning for global graph propertieshttps://zbmath.org/1508.680652023-05-31T16:32:50.898670Z"Krishna, Siddharth"https://zbmath.org/authors/?q=ai:krishna.siddharth"Summers, Alexander J."https://zbmath.org/authors/?q=ai:summers.alexander-j"Wies, Thomas"https://zbmath.org/authors/?q=ai:wies.thomasSummary: Separation logics are widely used for verifying programs that manipulate complex heap-based data structures. These logics build on so-called \textit{separation algebras}, which allow expressing properties of heap regions such that modifications to a region do not invalidate properties stated about the remainder of the heap. This concept is key to enabling modular reasoning and also extends to concurrency. While heaps are naturally related to mathematical graphs, many ubiquitous graph properties are non-local in character, such as reachability between nodes, path lengths, acyclicity and other structural invariants, as well as data invariants which combine with these notions. Reasoning modularly about such graph properties remains notoriously difficult, since a local modification can have side-effects on a global property that cannot be easily confined to a small region.
In this paper, we address the question: What separation algebra can be used to avoid proof arguments reverting back to tedious global reasoning in such cases? To this end, we consider a general class of global graph properties expressed as fixpoints of algebraic equations over graphs. We present mathematical foundations for reasoning about this class of properties, imposing minimal requirements on the underlying theory that allow us to define a suitable separation algebra. Building on this theory, we develop a general proof technique for modular reasoning about global graph properties expressed over program heaps, in a way which can be directly integrated with existing separation logics. To demonstrate our approach, we present local proofs for two challenging examples: a priority inheritance protocol and the non-blocking concurrent Harris list.
For the entire collection see [Zbl 1496.68030].Improving generalization in software IC3https://zbmath.org/1508.680662023-05-31T16:32:50.898670Z"Lange, Tim"https://zbmath.org/authors/?q=ai:lange.tim"Prinz, Frederick"https://zbmath.org/authors/?q=ai:prinz.frederick"Neuhäußer, Martin R."https://zbmath.org/authors/?q=ai:neuhausser.martin-r"Noll, Thomas"https://zbmath.org/authors/?q=ai:noll.thomas"Katoen, Joost-Pieter"https://zbmath.org/authors/?q=ai:katoen.joost-pieterSummary: Generalization is a key feature to support state-space abstraction in IC3-based algorithms for software model checking, such as Tree-IC3 or IC3CFA. This paper introduces several improvements that range from efficient caching of generalizations over variable reductions to syntax-oriented generalization. Our techniques are generic in that they are independent of the underlying theory, and some of them are even applicable to IC3 in general. Their evaluation on multiple benchmarks, including a significant subset of the SV-COMP 2017 benchmarks, yields promising results.
For the entire collection see [Zbl 1390.68021].Inferring energy bounds via static program analysis and evolutionary modeling of basic blockshttps://zbmath.org/1508.680672023-05-31T16:32:50.898670Z"Liqat, Umer"https://zbmath.org/authors/?q=ai:liqat.umer"Banković, Zorana"https://zbmath.org/authors/?q=ai:bankovic.zorana"Lopez-Garcia, Pedro"https://zbmath.org/authors/?q=ai:lopez-garcia.pedro"Hermenegildo, Manuel V."https://zbmath.org/authors/?q=ai:hermenegildo.manuel-vSummary: The ever increasing number and complexity of energy-bound devices (such as the ones used in Internet of Things applications, smart phones, and mission critical systems) pose an important challenge on techniques to optimize their energy consumption and to verify that they will perform their function within the available energy budget. In this work we address this challenge from the software point of view and propose a novel approach to estimating accurate parametric bounds on the energy consumed by program executions that are practical for their application to energy verification and optimization. Our approach divides a program into basic (branchless) blocks and performs a best effort modeling to estimate upper and lower bounds on the energy consumption for each block using an evolutionary algorithm. Then it combines the obtained values according to the program control flow, using a safe static analysis, to infer functions that give both upper and lower bounds on the energy consumption of the whole program and its procedures as functions on input data sizes. We have tested our approach on (C-like) embedded programs running on the XMOS hardware platform. However, our method is general enough to be applied to other microprocessor architectures and programming languages. The bounds obtained by our prototype implementation on a set of benchmarks were always safe and quite accurate. 
This supports our hypothesis that our approach offers a good compromise between safety and accuracy, and can be applied in practice for energy verification and optimization.
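The compositional step of the approach can be sketched with interval arithmetic over a control-flow structure. Everything below is hypothetical illustration: the block bounds are made-up numbers standing in for the values the paper's evolutionary algorithm would produce per basic block, and the combinators only mimic how safe bounds propagate along sequencing, branching, and loops.

```python
# Minimal sketch (not the authors' implementation): per-block energy
# bounds are intervals (lo, hi); control-flow combinators propagate them.

def seq(*bounds):
    """Sequential composition: bounds add up."""
    return (sum(b[0] for b in bounds), sum(b[1] for b in bounds))

def branch(then_b, else_b):
    """Conditional: safe bounds must cover both arms."""
    return (min(then_b[0], else_b[0]), max(then_b[1], else_b[1]))

def loop(body, min_iters, max_iters):
    """Loop whose iteration count is bounded in terms of input data sizes."""
    return (body[0] * min_iters, body[1] * max_iters)

# Hypothetical blocks: A = [2, 3], B = [1, 2], C = [4, 5] energy units.
A, B, C = (2, 3), (1, 2), (4, 5)

# A; (if ... then B else C); loop of B running between 1 and 10 times.
program = seq(A, branch(B, C), loop(B, 1, 10))
```

The resulting interval is an upper and lower bound on the whole program's consumption, parametric in the loop's iteration bounds, mirroring the functions on input data sizes the analysis infers.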
For the entire collection see [Zbl 1392.68017].Analysis of rewriting-based systems as first-order theorieshttps://zbmath.org/1508.680682023-05-31T16:32:50.898670Z"Lucas, Salvador"https://zbmath.org/authors/?q=ai:lucas.salvadorSummary: Computational systems based on a first-order language that can be given a canonical model which captures provability in the corresponding calculus can often be seen as first-order theories \(\mathcal{S}\), and computational properties of such systems can be formulated as first-order sentences \(\varphi\) that hold in such a canonical model of \(\mathcal{S}\). In this setting, standard results regarding the preservation of satisfiability of different classes of first-order sentences yield a number of interesting applications in program analysis. In particular, properties expressed as existentially quantified boolean combinations of atoms (for instance, a set of unification problems) can then be disproved by just finding an arbitrary model of the considered theory plus the negation of such a sentence. We show that rewriting-based systems fit into this approach. Many computational properties (e.g., infeasibility and non-joinability of critical pairs in (conditional) rewriting, non-loopingness, or the secure access to protected pages of a web site) can be investigated in this way. Interestingly, this semantic approach succeeds when specific techniques developed to deal with the aforementioned problems fail.
For the entire collection see [Zbl 1392.68017].Semantic foundations for deterministic dataflow and stream processinghttps://zbmath.org/1508.680692023-05-31T16:32:50.898670Z"Mamouras, Konstantinos"https://zbmath.org/authors/?q=ai:mamouras.konstantinosSummary: We propose a denotational semantic framework for deterministic dataflow and stream processing that encompasses a variety of existing streaming models. Our proposal is based on the idea that data streams, stream transformations, and stream-processing programs should be classified using types. The type of a data stream is captured formally by a monoid, an algebraic structure with a distinguished binary operation and a unit. The elements of a monoid model the finite fragments of a stream, the binary operation represents the concatenation of stream fragments, and the unit is the empty fragment. Stream transformations are modeled using monotone functions on streams, which we call stream transductions. These functions can be implemented using abstract machines with a potentially infinite state space, which we call stream transducers. This abstract typed framework of stream transductions and transducers can be used to (1) verify the correctness of streaming computations, that is, that an implementation adheres to the desired behavior, (2) prove the soundness of optimizing transformations, e.g. for parallelization and distribution, and (3) inform the design of programming models and query languages for stream processing. In particular, we show that several useful combinators can be supported by the full class of stream transductions and transducers: serial composition, parallel composition, and feedback composition.
For the entire collection see [Zbl 1496.68030].Connecting higher-order separation logic to a first-order outside worldhttps://zbmath.org/1508.680702023-05-31T16:32:50.898670Z"Mansky, William"https://zbmath.org/authors/?q=ai:mansky.william"Honoré, Wolf"https://zbmath.org/authors/?q=ai:honore.wolf"Appel, Andrew W."https://zbmath.org/authors/?q=ai:appel.andrew-wSummary: Separation logic is a useful tool for proving the correctness of programs that manipulate memory, especially when the model of memory includes higher-order state: Step-indexing, predicates in the heap, and higher-order ghost state have been used to reason about function pointers, data structure invariants, and complex concurrency patterns. On the other hand, the behavior of system features (e.g., operating systems) and the external world (e.g., communication between components) is usually specified using first-order formalisms. In principle, the soundness theorem of a separation logic is its interface with first-order theorems, but the soundness theorem may implicitly make assumptions about how other components are specified, limiting its use. In this paper, we show how to extend the higher-order separation logic of the Verified Software Toolchain to interface with a first-order verified operating system, in this case CertiKOS, that mediates its interaction with the outside world. The resulting system allows us to prove the correctness of C programs in separation logic based on the semantics of system calls implemented in CertiKOS. It also demonstrates that the combination of interaction trees + CompCert memories serves well as a \textit{lingua franca} to interface and compose two quite different styles of program verification.
For the entire collection see [Zbl 1496.68030].RustHorn: CHC-based verification for Rust programshttps://zbmath.org/1508.680712023-05-31T16:32:50.898670Z"Matsushita, Yusuke"https://zbmath.org/authors/?q=ai:matsushita.yusuke"Tsukada, Takeshi"https://zbmath.org/authors/?q=ai:tsukada.takeshi"Kobayashi, Naoki"https://zbmath.org/authors/?q=ai:kobayashi.naokiSummary: Reduction to the satisfiablility problem for constrained Horn clauses (CHCs) is a widely studied approach to automated program verification. The current CHC-based methods for pointer-manipulating programs, however, are not very scalable. This paper proposes a novel translation of pointer-manipulating Rust programs into CHCs, which clears away pointers and heaps by leveraging ownership. We formalize the translation for a simplified core of Rust and prove its correctness. We have implemented a prototype verifier for a subset of Rust and confirmed the effectiveness of our method.
For the entire collection see [Zbl 1496.68030].A first-order logic with frameshttps://zbmath.org/1508.680722023-05-31T16:32:50.898670Z"Murali, Adithya"https://zbmath.org/authors/?q=ai:murali.adithya"Peña, Lucas"https://zbmath.org/authors/?q=ai:pena.lucas"Löding, Christof"https://zbmath.org/authors/?q=ai:loding.christof"Madhusudan, P."https://zbmath.org/authors/?q=ai:madhusudan.parthasarathySummary: We propose a novel logic, called \textit{Frame Logic} (FL), that extends first-order logic (with recursive definitions) using a construct \(Sp(\cdot)\) that captures the \textit{implicit supports} of formulas -- the precise subset of the universe upon which their meaning depends. Using such supports, we formulate proof rules that facilitate frame reasoning elegantly when the underlying model undergoes change. We show that the logic is expressive by capturing several data-structures and also exhibit a translation from a \textit{precise} fragment of separation logic to frame logic. Finally, we design a program logic based on frame logic for reasoning with programs that dynamically update heaps that facilitates local specifications and frame reasoning. This program logic consists of both localized proof rules as well as rules that derive the weakest tightest preconditions in FL.
For the entire collection see [Zbl 1496.68030].CARET analysis of multithreaded programshttps://zbmath.org/1508.680732023-05-31T16:32:50.898670Z"Nguyen, Huu-Vu"https://zbmath.org/authors/?q=ai:nguyen.huu-vu"Touili, Tayssir"https://zbmath.org/authors/?q=ai:touili.tayssirSummary: Dynamic pushdown networks (DPNs) are a natural model for multithreaded programs with (recursive) procedure calls and thread creation. On the other hand, CARET is a temporal logic that allows to write linear temporal formulas while taking into account the matching between calls and returns. We consider in this paper the Model-Checking problem of DPNs against CARET formulas. We show that this problem can be effectively solved by a reduction to the emptiness problem of Büchi dynamic pushdown systems. We then show that CARET model checking is also decidable for DPNs communicating with locks. Our results can, in particular, be used for the detection of concurrent malware.
For the entire collection see [Zbl 1392.68017].Solving program sketches with large integer valueshttps://zbmath.org/1508.680742023-05-31T16:32:50.898670Z"Pan, Rong"https://zbmath.org/authors/?q=ai:pan.rong"Hu, Qinheping"https://zbmath.org/authors/?q=ai:hu.qinheping"Singh, Rishabh"https://zbmath.org/authors/?q=ai:singh.rishabh"D'Antoni, Loris"https://zbmath.org/authors/?q=ai:dantoni.lorisSummary: Program sketching is a program synthesis paradigm in which the programmer provides a partial program with holes and assertions. The goal of the synthesizer is to automatically find integer values for the holes so that the resulting program satisfies the assertions. The most popular sketching tool, \textsc{Sketch}, can efficiently solve complex program sketches, but uses an integer encoding that often performs poorly if the sketched program manipulates large integer values. In this paper, we propose a new solving technique that allows \textsc{Sketch} to handle large integer values while retaining its integer encoding. Our technique uses a result from number theory, the Chinese Remainder Theorem, to rewrite program sketches to only track the remainders of certain variable values with respect to several prime numbers. We prove that our transformation is sound and the encoding of the resulting programs are exponentially more succinct than existing \textsc{Sketch} encodings. We evaluate our technique on a variety of benchmarks manipulating large integer values. Our technique provides speedups against both existing \textsc{Sketch} solvers and can solve benchmarks that existing \textsc{Sketch} solvers cannot handle.
For the entire collection see [Zbl 1496.68030].The practice of logical frameworkshttps://zbmath.org/1508.680752023-05-31T16:32:50.898670Z"Pfenning, Frank"https://zbmath.org/authors/?q=ai:pfenning.frankFor the entire collection see [Zbl 1435.68032].Information flow tracking for side-effectful librarieshttps://zbmath.org/1508.680762023-05-31T16:32:50.898670Z"Sjösten, Alexander"https://zbmath.org/authors/?q=ai:sjosten.alexander"Hedin, Daniel"https://zbmath.org/authors/?q=ai:hedin.daniel"Sabelfeld, Andrei"https://zbmath.org/authors/?q=ai:sabelfeld.andreiSummary: Dynamic information flow control is a promising technique for ensuring confidentiality and integrity of applications that manipulate sensitive information. While much progress has been made on increasingly powerful programming languages ranging from low-level machine languages to high-level languages for distributed systems, surprisingly little attention has been devoted to libraries and APIs. The state of the art is largely an all-or-nothing choice: either a \textit{shallow} or \textit{deep} library modeling approach. Seeking to break out of this restrictive choice, we formalize a general mechanism that tracks information flow for a language that includes higher-order functions, structured data types and references. A key feature of our approach is the \textit{model heap}, a part of the memory, where security information is kept to enable the interaction between the labeled program and the unlabeled library. We provide a proof-of-concept implementation and report on experiments with a file system library. The system has been proved correct using Coq.
For the entire collection see [Zbl 1387.68012].Classes of equational programs that compile into efficient machine codehttps://zbmath.org/1508.680772023-05-31T16:32:50.898670Z"Strandh, Robert I."https://zbmath.org/authors/?q=ai:strandh.robert-iSummary: \textit{G. Huet} and \textit{J.-J. Lévy} [Call by need computations in non-ambiguous linear term rewriting systems. Techn. Rep. 359, IRIA (1979)]
showed that, if an equational program \(E\) is strongly sequential, there exists an automaton that, given a term in the language \(L(E)\), finds a redex in that term.
The most serious problem with their approach becomes evident when one tries to use their result in a programming system. Once a redex has been found, it must be replaced by a term built from the structure of the right-hand side corresponding to the redex, and from parts of the old term. Then, the reduction process must be restarted so that other redexes can be found. With their approach, a large part of the term tree may have to be rescanned.
\textit{C. M. Hoffmann} and \textit{M. J. O'Donnell} [J. Assoc. Comput. Mach. 29, 68--95 (1982; Zbl 0477.68067)]
improved the situation by defining the class of strongly left-sequential programs. For this class, a particularly simple reduction algorithm exists. A stack is used to hold information about the state of the reduction process. When a redex has been found and replaced by the corresponding right-hand side, the stack holds all the relevant information needed to restart the reduction process in a well-defined state such that no unnecessary rescanning of the term is done.
However, it turns out that the approach of Hoffmann and O'Donnell is unnecessarily restrictive. In this paper, we define a new class of equational programs, called forward branching programs. This class is much larger than the class of strongly left-sequential programs. Together with a new reduction algorithm, briefly discussed in this paper, our approach allows us to use the hardware stack to hold reduction information in a way similar to the way a block-structured programming language uses the stack to hold local variables. In effect, our approach allows us to use innermost stabilization, while preserving the overall outermost reduction strategy.
For the entire collection see [Zbl 0741.68008].Generalized partial computation using disunification to solve constraintshttps://zbmath.org/1508.680782023-05-31T16:32:50.898670Z"Takano, Akihiko"https://zbmath.org/authors/?q=ai:takano.akihikoSummary: Generalized Partial Computation (GPC) is a program optimization principle based on partial computation and theorem proving. Techniques in conventional partial computation make use of only static values of given data to specialize programs. GPC employs a theorem prover to explicitly utilize more information such as logical structure of programs, axioms for abstract data types, algebraic properties of primitive functions, etc. In this paper we formalize a GPC transformation method for a first-order language which utilizes a disunification procedure to reason about the program context. Context information of each program fragment is represented by a quantifier-free equational formula and is used to eliminate redundant transformation.
For the entire collection see [Zbl 0825.00125].ConSORT: context- and flow-sensitive ownership refinement types for imperative programshttps://zbmath.org/1508.680792023-05-31T16:32:50.898670Z"Toman, John"https://zbmath.org/authors/?q=ai:toman.john"Siqi, Ren"https://zbmath.org/authors/?q=ai:siqi.ren"Suenaga, Kohei"https://zbmath.org/authors/?q=ai:suenaga.kohei"Igarashi, Atsushi"https://zbmath.org/authors/?q=ai:igarashi.atsushi"Kobayashi, Naoki"https://zbmath.org/authors/?q=ai:kobayashi.naokiSummary: We present \textsc{ConSORT}, a type system for safety verification in the presence of mutability and aliasing. Mutability requires \textit{strong updates} to model changing invariants during program execution, but aliasing between pointers makes it difficult to determine which invariants must be updated in response to mutation. Our type system addresses this difficulty with a novel combination of refinement types and fractional ownership types. Fractional ownership types provide flow-sensitive and precise aliasing information for reference variables. \textsc{ConSORT} interprets this ownership information to soundly handle strong updates of potentially aliased references. We have proved \textsc{ConSORT} sound and implemented a prototype, fully automated inference tool. We evaluated our tool and found it verifies non-trivial programs including data structure implementations.
For the entire collection see [Zbl 1496.68030].On a verification framework for certifying distributed algorithms: distributed checking and consistencyhttps://zbmath.org/1508.680802023-05-31T16:32:50.898670Z"Völlinger, Kim"https://zbmath.org/authors/?q=ai:vollinger.kim"Akili, Samira"https://zbmath.org/authors/?q=ai:akili.samiraSummary: A major problem in software engineering is assuring the correctness of a distributed system. A \textit{certifying distributed algorithm} (CDA) computes for its input-output pair \((i, o)\) an additional \textit{witness} \(w\) -- a formal argument for the correctness of \((i, o)\). Each CDA features a \textit{witness predicate} such that if the witness predicate holds for a triple \((i, o, w)\), the input-output pair \((i, o)\) is correct. An accompanying \textit{checker} algorithm decides the witness predicate. Consequently, a user of a CDA does not have to trust the CDA but its checker algorithm. Usually, a checker is simpler and its verification is feasible. To sum up, the idea of a CDA is to adapt the underlying algorithm of a program at design-time such that it verifies its own output at runtime. While certifying \textit{sequential} algorithms are well-established, there are open questions on how to apply certification to \textit{distributed algorithms}. In this paper, we discuss \textit{distributed checking} of a \textit{distributed witness}; one challenge is that all parts of a distributed witness have to be \textit{consistent} with each other. Furthermore, we present a method for \textit{formal instance verification} (i.e. obtaining a \textit{machine-checked} proof that a particular input-output pair is correct), and implement the method in a framework for the theorem prover \textsc{Coq}.
For the entire collection see [Zbl 1387.68012].Higher-order spreadsheets with spilled arrayshttps://zbmath.org/1508.681002023-05-31T16:32:50.898670Z"Williams, Jack"https://zbmath.org/authors/?q=ai:williams.jack-r|williams.jack"Joharizadeh, Nima"https://zbmath.org/authors/?q=ai:joharizadeh.nima"Gordon, Andrew D."https://zbmath.org/authors/?q=ai:gordon.andrew-d"Sarkar, Advait"https://zbmath.org/authors/?q=ai:sarkar.advaitSummary: We develop a theory for two recently-proposed spreadsheet mechanisms: \textit{gridlets} allow for abstraction and reuse in spreadsheets, and build on \textit{spilled arrays}, where an array value spills out of one cell into nearby cells. We present the first formal calculus of spreadsheets with spilled arrays. Since spilled arrays may collide, the semantics of spilling is an iterative process to determine which arrays spill successfully and which do not. Our first theorem is that this process converges deterministically. To model gridlets, we propose the \textit{grid calculus}, a higher-order extension of our calculus of spilled arrays with primitives to treat spreadsheets as values. We define a semantics of gridlets as formulas in the grid calculus. Our second theorem shows the correctness of a remarkably direct encoding of the Abadi and Cardelli object calculus into the grid calculus. This result is the first rigorous analogy between spreadsheets and objects; it substantiates the intuition that gridlets are an object-oriented counterpart to functional programming extensions to spreadsheets, such as sheet-defined functions.
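The iterative spill-resolution semantics can be sketched as follows (a deliberately simplified toy, not the paper's calculus): optimistically assume every array spills, repeatedly retract arrays whose footprints collide with literal cells, with other anchors, or with each other, and stop at a fixed point.

```python
# Toy spill resolution (illustration only): each anchor cell holds an array
# that wants to occupy a rectangular footprint.  An array spills successfully
# only if its footprint, beyond the anchor itself, hits no literal cell and
# no cell claimed by another spilling array.

def resolve_spills(literals, arrays):
    """literals: set of (row, col); arrays: {anchor: (height, width)}."""
    def footprint(anchor, shape):
        r0, c0 = anchor
        h, w = shape
        return {(r0 + i, c0 + j) for i in range(h) for j in range(w)}

    spilling = set(arrays)                  # optimistically spill everything
    while True:
        occupied = {}
        failed = set()
        for anchor in spilling:
            for cell in footprint(anchor, arrays[anchor]):
                if cell != anchor and (cell in literals or cell in arrays):
                    failed.add(anchor)      # collides with a formula/literal
                elif cell in occupied and occupied[cell] != anchor:
                    failed.add(anchor)      # two footprints collide:
                    failed.add(occupied[cell])  # both fail
                else:
                    occupied[cell] = anchor
        if not failed:
            return spilling
        spilling -= failed

lits = {(0, 2)}
arrs = {(0, 0): (1, 2), (1, 0): (1, 3)}    # both fit side by side
assert resolve_spills(lits, arrs) == {(0, 0), (1, 0)}
```

The paper's first theorem is exactly that this kind of iteration converges deterministically; the toy shares that property because the failed set in each round does not depend on traversal order.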
For the entire collection see [Zbl 1496.68030].Computing in unpredictable environments: semantics, reduction strategies, and program transformationshttps://zbmath.org/1508.681152023-05-31T16:32:50.898670Z"Lisper, Björn"https://zbmath.org/authors/?q=ai:lisper.bjornSummary: We study systems where deterministic computations take place in environments which may behave nondeterministically. We give a simple formalization by unions of abstract reduction systems, on which various semantics can be based in a straightforward manner. We then prove that under a simple condition on the reduction systems, the following holds: reduction strategies which are \textit{cofinal} for the deterministic reduction system will implement the semantics for the combined system, provided the environment behaves in a ``fair'' manner, and certain program transformations, such as \textit{folding} and \textit{unfolding}, will preserve the semantics. One application is to evaluation strategies and program transformations for concurrent languages.
For the entire collection see [Zbl 1435.68032].Normalization by leftmost innermost rewritinghttps://zbmath.org/1508.681392023-05-31T16:32:50.898670Z"Antoy, Sergio"https://zbmath.org/authors/?q=ai:antoy.sergioSummary: We define a transformation of rewrite systems that allows one to compute the normal form of a term in the source system by leftmost innermost rewriting of a corresponding term in the target system. This transformation is the foundation of an implementation technique used in an application of rewriting to software development. We also show in an example how our transformation accommodates lazy, non-deterministic computations in an eager, deterministic programming language.
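Leftmost innermost rewriting itself is easy to sketch (an invented miniature, not the paper's transformation): arguments are normalized left to right before any rule is tried at the root, shown here for Peano addition.

```python
# Leftmost innermost normalization (illustration only) for the rules
#   add(0, y)    -> y
#   add(s(x), y) -> s(add(x, y))
# Terms are nested tuples; "0" is the Peano zero.

def rewrite_root(t):
    """Return the contractum if a rule applies at the root, else None."""
    if isinstance(t, tuple) and t[0] == "add":
        _, x, y = t
        if x == "0":
            return y
        if isinstance(x, tuple) and x[0] == "s":
            return ("s", ("add", x[1], y))
    return None

def normalize(t):
    """Normalize arguments left to right first, then contract at the root."""
    if isinstance(t, tuple):
        t = (t[0],) + tuple(normalize(a) for a in t[1:])  # innermost first
    while (r := rewrite_root(t)) is not None:
        t = normalize(r)
    return t

two = ("s", ("s", "0"))
one = ("s", "0")
assert normalize(("add", two, one)) == ("s", ("s", ("s", "0")))
```

The paper's transformation targets precisely this strategy: the source system is compiled so that such an eager, leftmost-innermost evaluator computes the source-system normal forms.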
For the entire collection see [Zbl 0825.00125].Problems in rewriting applied to categorical concepts by the example of a computational comonadhttps://zbmath.org/1508.681542023-05-31T16:32:50.898670Z"Gehrke, Wolfgang"https://zbmath.org/authors/?q=ai:gehrke.wolfgangSummary: We present a canonical system for comonads which can be extended to the notion of a computational comonad
[\textit{S. Brookes} and \textit{S. Geva}, Lond. Math. Soc. Lect. Note Ser. 177, 1--44 (1992; Zbl 0797.18004)]
where the crucial point is to find an appropriate representation. These canonical systems are checked with the help of the Larch prover
[\textit{S. J. Garland} and \textit{J. V. Guttag}, A guide to LP, the Larch prover. Cambridge, MA: Massachusetts Institute of Technology (MIT) (1991)]
exploiting a method by
\textit{G. Huet} [Lect. Notes Comput. Sci. 242, 123--135 (1986; Zbl 0609.18002)]
to represent typing within an untyped rewriting system. The resulting decision procedures are implemented in the programming language Elf
[\textit{F. Pfenning}, ``Elf: a language for logic definition and verified metaprogramming'', in: Proceedings of the 4th annual symposium on logic in computer science, LICS'89. Los Alamitos, CA: IEEE Computer Society. 313--322 (1989; \url{doi:10.1109/LICS.1989.39186})]
since typing is directly supported by this language. Finally, we outline an incomplete attempt to solve the problem, which could be used as a benchmark for rewriting tools.
For the entire collection see [Zbl 0813.68038].Explicit cyclic substitutionshttps://zbmath.org/1508.681732023-05-31T16:32:50.898670Z"Rose, Kristoffer Høgsbro"https://zbmath.org/authors/?q=ai:rose.kristoffer-hogsbroSummary: In this paper we consider rewrite systems that describe the \(\lambda \)-calculus enriched with recursive and non-recursive local definitions by generalizing the `explicit substitutions' used by
\textit{M. Abadi} et al. [``Explicit substitutions'', in: Proceedings of the 17th ACM SIGPLAN-SIGACT symposium on principles of programming, POPL'90. New York, NY: Association for Computing Machinery (ACM). 31--46 (1989; \url{doi:10.1145/96709.96712})]
to describe sharing in \(\lambda \)-terms. This leads to `explicit cyclic substitutions' that can describe the mutual sharing of local recursive definitions. We demonstrate how this may be used to describe standard binding constructions (\textbf{let} and \textbf{letrec}) -- directly using substitution and fixed point induction as well as using `small-step' rewriting semantics where substitution is interleaved with the mechanics of the following \(\beta \)-reductions.
With this we hope to contribute to the synthesis of denotational and operational specifications of sharing and recursion.
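The contrast between non-recursive and recursive local definitions can be sketched operationally (an invented illustration, not the paper's rewrite system): \textbf{let} is plain substitution, while \textbf{letrec} must tie a cycle, modelled here with a mutable cell standing in for a cyclic substitution.

```python
# let vs. letrec (illustration only): binders are encoded as Python
# functions, and the cyclic sharing of letrec is modelled by a cell that the
# bound value can reach back into.

def let(bind, body):
    """let x = bind in body -- the binding cannot refer to itself."""
    return body(bind())

def letrec(bind, body):
    """letrec x = bind(x) in body(x) -- x may refer to itself via the cell."""
    cell = {}
    cell["x"] = bind(lambda *args: cell["x"](*args))   # tie the knot lazily
    return body(cell["x"])

assert let(lambda: 2, lambda x: x + 1) == 3

fact = letrec(
    lambda self: (lambda n: 1 if n == 0 else n * self(n - 1)),
    lambda f: f,
)
assert fact(5) == 120
```

In the explicit-cyclic-substitution view, the `cell` is the shared node of the term graph; the calculus makes such sharing a first-class part of the rewrite system rather than a metalevel device.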
For the entire collection see [Zbl 0825.00125].Untyped lambda-calculus with input-outputhttps://zbmath.org/1508.682132023-05-31T16:32:50.898670Z"Tiuryn, Jerzy"https://zbmath.org/authors/?q=ai:tiuryn.jerzy"Wand, Mitchell"https://zbmath.org/authors/?q=ai:wand.mitchellSummary: We introduce an untyped \(\lambda \)-calculus with input-output, based on Gordon's continuation-passing model of input-output. This calculus is intended to allow the classification of possibly infinite input-output behaviors, such as those required for servers or distributed systems. We define two terms to be operationally approximate iff they have similar behaviors in any context. We then define a notion of applicative approximation and show that it coincides with operational approximation for these new behaviors. Last, we consider the theory of pure \(\lambda \)-terms under this notion of operational equivalence.
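The continuation-passing view of input-output can be sketched as follows (invented names, not Gordon's or the paper's exact constructors): a behaviour is either finished, an output paired with the remaining behaviour, or an input request paired with a continuation expecting the value read.

```python
# Continuation-passing I/O (illustration only): behaviours are driven against
# an input stream by an external "operating system" loop that collects outputs.

class Done:
    def __init__(self, value): self.value = value

class Out:
    def __init__(self, value, rest): self.value, self.rest = value, rest

class In:
    def __init__(self, cont): self.cont = cont   # cont : input -> behaviour

def run(behaviour, inputs):
    """Drive a behaviour against a list of inputs, collecting outputs."""
    outputs = []
    it = iter(inputs)
    while True:
        if isinstance(behaviour, Done):
            return outputs, behaviour.value
        if isinstance(behaviour, Out):
            outputs.append(behaviour.value)
            behaviour = behaviour.rest
        else:  # In: feed the next input to the continuation
            behaviour = behaviour.cont(next(it))

# echo one value, then stop with its double
prog = In(lambda x: Out(x, Done(2 * x)))
assert run(prog, [21]) == ([21], 42)
```

A server-like, possibly infinite behaviour is one whose `rest`/`cont` chain never reaches `Done`; the paper's applicative approximation compares such behaviours without ever running them to completion.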
For the entire collection see [Zbl 1435.68032].Software model checking for mobile security -- collusion detection in \(\mathbb{K} \)https://zbmath.org/1508.682142023-05-31T16:32:50.898670Z"Asăvoae, Irina Măriuca"https://zbmath.org/authors/?q=ai:asavoae.irina-mariuca"Nguyen, Hoang Nga"https://zbmath.org/authors/?q=ai:nguyen.hoang-nga"Roggenbach, Markus"https://zbmath.org/authors/?q=ai:roggenbach.markusSummary: Mobile devices pose a particular security risk because they hold personal details and have capabilities potentially exploitable for eavesdropping. The Android operating system is designed with a number of built-in security features such as application sandboxing and permission-based access control. Unfortunately, these restrictions can be bypassed, without the user noticing, by colluding apps whose combined permissions allow them to carry out attacks that neither app is able to execute by itself. In this paper, we develop a software model-checking approach within the \(\mathbb{K}\) framework that is capable of detecting collusion. This involves giving an abstract, formal semantics to Android applications and proving that the applied abstraction principles lead to a finite state space.
For the entire collection see [Zbl 1390.68021].Universal equivalence and majority of probabilistic programs over finite fieldshttps://zbmath.org/1508.682152023-05-31T16:32:50.898670Z"Barthe, Gilles"https://zbmath.org/authors/?q=ai:barthe.gilles"Jacomme, Charlie"https://zbmath.org/authors/?q=ai:jacomme.charlie"Kremer, Steve"https://zbmath.org/authors/?q=ai:kremer.steveApplied choreographieshttps://zbmath.org/1508.682422023-05-31T16:32:50.898670Z"Giallorenzo, Saverio"https://zbmath.org/authors/?q=ai:giallorenzo.saverio"Montesi, Fabrizio"https://zbmath.org/authors/?q=ai:montesi.fabrizio"Gabbrielli, Maurizio"https://zbmath.org/authors/?q=ai:gabbrielli.maurizioSummary: Choreographic Programming is a paradigm for distributed programming, where high-level ``Alice and Bob'' descriptions of communications (choreographies) are used to synthesise correct-by-construction programs. However, implementations of choreographic models use message routing technologies distant from their related theoretical models (e.g., CCS/\(\pi\) channels). This drives implementers to mediate discrepancies with the theory through undocumented, unproven adaptations, weakening the reliability of their implementations.
As a solution, we propose the framework of Applied Choreographies (AC). In AC, programmers write choreographies in a language that follows the standard syntax and semantics of previous works. Then, choreographies are compiled to a real-world execution model for Service-Oriented Computing (SOC). To manage the complexity of this task, our compilation happens in three steps, respectively dealing with: implementing name-based communications using the concrete mechanism found in SOC, projecting a choreography to a set of processes, and translating processes to a distributed implementation in terms of services.
For the entire collection see [Zbl 1387.68012].Exploring type-level bisimilarity towards more expressive multiparty session typeshttps://zbmath.org/1508.682442023-05-31T16:32:50.898670Z"Jongmans, Sung-Shik"https://zbmath.org/authors/?q=ai:jongmans.sung-shik-t-q"Yoshida, Nobuko"https://zbmath.org/authors/?q=ai:yoshida.nobukoSummary: A key open problem with multiparty session types (MPST) concerns their expressiveness: current MPST have inflexible choice, no existential quantification over participants, and limited parallel composition. This precludes many real protocols from being represented as MPST. To overcome these bottlenecks of MPST, we explore a new technique using weak bisimilarity between global types and endpoint types, which guarantees deadlock-freedom and absence of protocol violations. Based on a process algebraic framework, we present well-formedness conditions for global types that guarantee weak bisimilarity between a global type and its endpoint types, and we prove that checking these conditions is decidable. Our main practical result, obtained through benchmarks, is that our well-formedness conditions can be checked orders of magnitude faster than directly checking weak bisimilarity using a state-of-the-art model checker.
For the entire collection see [Zbl 1496.68030].Mixed sessionshttps://zbmath.org/1508.682552023-05-31T16:32:50.898670Z"Vasconcelos, Vasco T."https://zbmath.org/authors/?q=ai:vasconcelos.vasco-thudichum"Casal, Filipe"https://zbmath.org/authors/?q=ai:casal.filipe"Almeida, Bernardo"https://zbmath.org/authors/?q=ai:almeida.bernardo-f"Mordido, Andreia"https://zbmath.org/authors/?q=ai:mordido.andreiaSummary: Session types describe patterns of interaction on communicating channels. Traditional session types include a form of choice whereby servers offer a collection of options, of which each client picks exactly one. This sort of choice constitutes a particular case of separated choice: offering on one side, selecting on the other. We introduce mixed choices in the context of session types and argue that they increase the flexibility of program development at the same time that they reduce the number of synchronisation primitives to exactly one. We present a type system incorporating subtyping and prove preservation and absence of runtime errors for well-typed processes. We further show that classical (conventional) sessions can be faithfully and tightly embedded in mixed choices. Finally, we discuss algorithmic type checking and a runtime system built on top of a conventional (choice-less) message-passing architecture.
For the entire collection see [Zbl 1496.68030].Pac-learning non-recursive Prolog clauseshttps://zbmath.org/1508.682842023-05-31T16:32:50.898670Z"Cohen, William W."https://zbmath.org/authors/?q=ai:cohen.william-wSummary: Recently there has been an increasing amount of research on learning concepts expressed in subsets of Prolog; the term \textit{inductive logic programming (ILP)} has been used to describe this growing body of research. This paper seeks to expand the theoretical foundations of ILP by investigating the pac-learnability of logic programs. We focus on programs consisting of a single function-free non-recursive clause, and focus on generalizations of a language known to be pac-learnable: namely, the language of determinate function-free clauses of constant depth. We demonstrate that a number of syntactic generalizations of this language are hard to learn, but that the language can be generalized to clauses of constant locality while still allowing pac-learnability. More specifically, we first show that determinate clauses of log depth are not pac-learnable, regardless of the language used to represent hypotheses. We then investigate the effect of allowing indeterminacy in a clause, and show that clauses with \(k\) indeterminate variables are as hard to learn as DNF. We next show that a more restricted language of clauses with bounded indeterminacy is learnable using \(k\)-CNF to represent hypotheses, and that restricting the ``locality'' of a clause to a constant allows pac-learnability even if an arbitrary amount of indeterminacy is allowed. This last result is also shown to be a strict generalization of the previous result for determinate function-free clauses of constant depth. 
Finally, we present some extensions of these results to logic programs with multiple clauses.Four-layer distance metric and distance-based kernel functions for inductive logic programminghttps://zbmath.org/1508.682942023-05-31T16:32:50.898670Z"Khamsemanan, Nirattaya"https://zbmath.org/authors/?q=ai:khamsemanan.nirattaya"Nattee, Cholwich"https://zbmath.org/authors/?q=ai:nattee.cholwich"Numao, Masayuki"https://zbmath.org/authors/?q=ai:numao.masayukiSummary: Inductive Logic Programming (ILP) is a field of study focusing on developing machine learning algorithms that use logic programming to describe examples and hypotheses. This makes ILP techniques capable of dealing with relational, i.e. non-vector, data. To learn from ILP data, an algorithm must be able to handle non-linear data. Hypotheses generated by ILP techniques take the form of Horn clauses, which can be interpreted by humans. This is a benefit over conventional learning algorithms that generate black-box hypotheses or classification models. Nevertheless, the learning algorithms used by ILP techniques are based on covering algorithms, which require high computational power to generate appropriate hypotheses from a set of examples. We propose a distance metric for ILP datasets. Incorporating distances between examples into hypothesis generation helps improve the performance of an ILP system. We also propose distance-based kernel functions for ILP datasets based on this distance metric. The kernel functions allow us to improve a hypothesis construction algorithm for ILP systems. To evaluate the proposed technique, we conduct experiments on real-world ILP datasets. The results show that the
proposed technique outperforms the existing techniques.Non-local configuration of component interfaces by constraint satisfactionhttps://zbmath.org/1508.684042023-05-31T16:32:50.898670Z"Tveretina, Olga"https://zbmath.org/authors/?q=ai:tveretina.olga"Zaichenkov, Pavel"https://zbmath.org/authors/?q=ai:zaichenkov.pavel"Shafarenko, Alex"https://zbmath.org/authors/?q=ai:shafarenko.alexSummary: Service-oriented computing is the paradigm that utilises services as fundamental elements for developing applications. Service composition, where data consistency becomes especially important, is still a key challenge for service-oriented computing. We maintain that there is one aspect of Web service communication on the data conformance side that has so far escaped researchers' attention. Aggregation of networked services gives rise to long pipelines, or quasi-pipeline structures, where there is a profitable form of inheritance called flow inheritance. In its presence, interface reconciliation ceases to be a local procedure, and hence it requires distributed constraint satisfaction of a special kind. We propose a constraint language for this, and present a solver which implements it. In addition, our approach provides a binding between the language and C++, whereby the assignment to the variables found by the solver is automatically translated into a transformation of C++ code. This makes the C++ Web service context compliant without any further communication.
Besides, it uniquely permits a very high degree of flexibility of a C++ coded Web service without making public any part of its source code.An FPGA implementation of a distributed virtual machinehttps://zbmath.org/1508.684202023-05-31T16:32:50.898670Z"Jensen, Lee A."https://zbmath.org/authors/?q=ai:jensen.lee-a"Williams, Lance R."https://zbmath.org/authors/?q=ai:williams.lance-rSummary: An expression in a functional programming language can be compiled into a massively redundant, spatially distributed, concurrent computation called a distributed virtual machine (DVM). A DVM is comprised of bytecodes reified as actors undergoing diffusion on a two-dimensional grid communicating via messages containing encapsulated virtual machine states (continuations). Because the semantics of expression evaluation are purely functional, DVMs can employ massive redundancy in the representation of the heap to help ensure that computations complete even when large areas of the physical host substrate have failed. Because they can be implemented as asynchronous circuits, DVMs also address the well known problem affecting traditional machine architectures implemented as integrated circuits, namely, clock networks consuming increasingly large fractions of area as device size increases. This paper describes the first hardware implementation of a DVM. This was accomplished by compiling a VHDL specification of a special purpose distributed memory multicomputer with a mesh interconnection network into a globally asynchronous, locally synchronous (GALS) circuit in an FPGA. Each independently clocked node combines a processor based on a virtual machine for compiled Scheme language programs, with just enough local memory to hold a single heap allocated object and a continuation.
For the entire collection see [Zbl 1390.68028].