MIAO Seminars
The
MIAO seminars
are what we call the series of seminars arranged by the
Mathematical
Insights into Algorithms for Optimization (MIAO)
research group
at the University of Copenhagen and Lund University.
These seminars typically have a low-key, informal atmosphere, but
they are open to the public and are announced beforehand on
our seminar mailing lists (with two different lists for more
theoretically oriented or more applied seminars, respectively).
In case you want to receive the announcements, please just let us
know by sending an e-mail message to jn@di.ku.dk
specifying which seminar mailing list you wish to be added to
(theoretical, applied, or both).
We also add all public seminars to the
MIAO seminar calendar.
In response to multiple requests, we have been recording most seminars
since the spring of 2021. The recordings are posted on the
MIAO
Research YouTube channel.
All seminar times below are in the Central European Time Zone
(with daylight saving time observed from the last Sunday in March to
the last Sunday in October).
Upcoming seminars
The MIAO seminars have been mostly on hold due to an intense period of teaching,
but we are hoping and planning to have some more seminars before summer.
-
Wednesday Jun 1 at 14:00
in Auditorium No. 2, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
Certifying correctness for combinatorial algorithms by using pseudo-Boolean reasoning
(Stephan Gocht, Lund University and University of Copenhagen)
Although solving NP-complete problems is widely believed to
require exponential time in the worst case, state-of-the-art
algorithms are amazingly efficient for many NP-hard
optimisation problems. However, this is achieved through
highly sophisticated algorithms that are prone to
implementation errors which can cause incorrect results, even
for the best commercial tools. A promising approach to address
this problem is to use certifying algorithms that produce not
only the desired output but also a simple, machine-verifiable
certificate or proof of correctness of the output. By
verifying this proof with an external tool, we can guarantee
that the given answer is valid.
This talk will give an introduction to a new proof system that can
certify answers of algorithms for various combinatorial
problems, such as Boolean satisfiability (SAT) solving and
optimisation, constraint programming, and graph solving. This
proof system operates on 0-1 integer linear constraints,
generalising the cutting planes proof system. As a running
example, we will show how to certify the correctness of a
maximum matching algorithm.
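To give a flavor of the reasoning involved (our illustrative example, not necessarily one from the talk): the clauses x ∨ y and x ∨ ¬y can be written as the 0-1 constraints x + y ≥ 1 and x + (1 − y) ≥ 1; adding them yields 2x ≥ 1, and the cutting planes division rule (divide by 2 and round up, which is sound since x takes integer values) derives x ≥ 1, i.e., x must be true.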
After the official seminar, there will be an optional second
part with more technical discussions. This second part will
demonstrate how to generate such machine-verifiable proofs in
practice and how to check them using the verifier VeriPB,
which I developed during my PhD project.
-
Wednesday Jun 8 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University
Short proofs in strong proof systems
(Marijn Heule, Carnegie Mellon University)
The success of satisfiability solving presents us with an
interesting peculiarity: modern solvers can frequently handle
gigantic formulas while failing miserably on supposedly easy
problems. Their poor performance is typically caused by the
weakness of their underlying proof system—resolution. To
overcome this obstacle, we need solvers that are based on
stronger proof systems. Unfortunately, existing strong proof
systems—such as extended resolution or Frege systems—do not
seem to lend themselves to mechanization.
In this talk, we discuss some recently proposed strong proof
systems that are more suitable for mechanization. These proof
systems are surprisingly strong, even without the introduction
of new variables—a key feature of short proofs presented in
the proof-complexity literature. We demonstrate the strength
of these proof systems on two famous problems: the pigeonhole
principle and mutilated chessboards. For both problems, we
present proofs of linear size without new variables. We also
present a novel technique to find such proofs automatically
and show its usefulness on SAT Competition benchmarks. We
conclude with some theoretical and practical challenges.
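For reference, the pigeonhole principle for n holes and n + 1 pigeons is standardly encoded with variables x_{i,j} ("pigeon i sits in hole j"): a clause x_{i,1} ∨ … ∨ x_{i,n} for each pigeon i, and clauses ¬x_{i,j} ∨ ¬x_{i',j} for all distinct pigeons i, i' and each hole j. Haken (1985) showed that resolution refutations of these formulas require exponential size, which is what makes linear-size proofs without new variables so striking.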
MIAO seminars spring 2022
-
Monday May 30 at 14:00
in seminar room E:2116, Ole Römers väg 3, Lund University,
and on Zoom
CDCL versus resolution
(Marc Vinyals)
The effectiveness of the CDCL algorithm, the one most used to solve SAT in
practice, is complicated to understand, and so far one of the most
successful tools for doing so has been proof complexity. CDCL is easily
seen to be limited by the resolution proof system, and furthermore it
has been shown to be equivalent to resolution, in the sense that CDCL can
reproduce a given resolution proof with only polynomial overhead.
But the question of the power of CDCL with respect to resolution is
far from closed. To begin with, the previous equivalence is subject to
some assumptions, only some of which are reasonable in practice. In
addition, in a world where algorithms are expected to run in linear
time, a polynomial overhead is too coarse a measure.
In this talk we will discuss what breaks when we try to make the
assumptions more realistic, and how much of an overhead CDCL needs in
order to simulate resolution.
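For reference, resolution is the proof system with a single inference rule: from clauses C ∨ x and D ∨ ¬x, derive C ∨ D. A refutation ends with the empty clause, and the clauses learned by CDCL during conflict analysis can be read off as the results of such resolution steps.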
-
Tuesday Mar 22 at 14:00
in seminar room A107,
Hans Christian Ørsted Building, Universitetsparken 5, University of Copenhagen,
and on Zoom
Solving large-scale instances of a geometric set cover problem with coloring conflicts
(Allan Sapucaia, University of Campinas)
When designing wireless networks, one wants to install
antennas ensuring that all important regions are
covered. There are many different secondary goals to consider
such as minimizing the number of antennas, or ensuring that
the network can still work even when some of the antennas are
down.
In this talk, we will discuss a geometric wireless network
design problem where we want to minimize the number of
antennas, while constrained by frequency assignment
conflicts. We will study an integer linear programming model for
this problem, and discuss how it can be improved using both
its geometric and graph-theoretical aspects. We show that the
problem can be efficiently solved in practice using
decomposition and conclude by providing theoretical
explanations for its performance.
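As a hedged sketch of the flavor of such a model (our simplification, with notation chosen purely for illustration): introduce a 0-1 variable x_a for each candidate antenna a, minimize the sum of the x_a subject to coverage constraints Σ_{a covers r} x_a ≥ 1 for every important region r, together with conflict constraints such as x_a + x_b ≤ 1 for pairs of antennas a, b that cannot be assigned compatible frequencies.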
MIAO seminars autumn 2021
-
Monday Dec 6 at 17:00 (note the time!)
Core-guided minimal correction set and core enumeration
(video)
(Nina Narodytska, VMware Research)
A set of constraints is unsatisfiable if there is no solution
that satisfies these constraints. To analyze unsatisfiable
problems, the user needs to understand where inconsistencies
come from and how they can be repaired. Minimal unsatisfiable
cores and correction sets are important subsets of constraints
that enable such analysis. In the first part of the talk, we
analyze core-guided MaxSAT algorithms and discuss how the cores
found by these algorithms are related to the cores of the
original unsatisfiable formula. Based on these results, in the
second part, we discuss a new algorithm for extracting minimal
unsatisfiable cores and correction sets. Our new solver
significantly outperforms several state-of-the-art algorithms on
common benchmarks when it comes to extracting correction sets
and compares favorably on core extraction.
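To make the objects concrete, here is a minimal sketch of the classic deletion-based approach to shrinking an unsatisfiable set to a minimal core (the textbook baseline, not the new algorithm from the talk; is_satisfiable is a hypothetical oracle):

    def deletion_based_mus(constraints, is_satisfiable):
        # Shrink an unsatisfiable constraint set to a minimal unsatisfiable core.
        core = list(constraints)
        for c in list(core):
            trial = [d for d in core if d is not c]
            if not is_satisfiable(trial):
                core = trial  # still unsatisfiable without c, so drop c
        return core  # removing any remaining constraint restores satisfiability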
-
Monday Nov 22 at 14:00
TFNP: Collapses, separations, and characterisations
(video)
(Mika Göös, EPFL)
We discuss a range of results for total NP search problem classes (TFNP)
that includes new collapses of classes, black-box separations, and
characterisations using propositional proof systems. We will focus on a
surprising collapse result, EoPL = PLS ∩ PPAD, and see its full proof.
Joint work with Alexandros Hollender, Siddhartha Jain, Gilbert
Maystre, William Pires, Robert Robere, and Ran Tao.
-
Wednesday Nov 17 at 14:00
in E:1408, Ole Römers väg 3, Lund University and on Zoom
Decomposition approaches for a large-scale scheduling problem
(video of first half of seminar)
(Elina Rönnberg, Linköping University)
Generic state-of-the-art solvers for discrete optimisation are
exceptionally powerful tools that efficiently solve a variety of
problems. However, the scale and complexity of practically
relevant problems can sometimes render these solvers incapable
of even finding feasible solutions. In such cases, one
possibility is to develop solution approaches that exploit
problem structure to decompose the problem and then use the
generic solvers for addressing the resulting subproblems. The
success of such approaches relies on the subproblems being efficiently
solvable and on the information gained from them being sufficient to
solve the original problem.
In an industry-academia collaboration between Linköping
University and Saab Aeronautics, we address the scheduling of a
kind of electronic system in future aircraft. Briefly, this
problem can be described as a rich multiprocessor scheduling
problem that also includes the scheduling of a communication
network. Finding feasible solutions to practically relevant
instances of this problem is challenging.
In this talk, I will present two different ways of decomposing
the problem. The first relies on making a strong relaxation of
the problem and then applying a constraint generation
procedure. In this approach, both the relaxed problem and the
subproblem are solved by a mixed-integer programming solver. The
performance of the method is enhanced by an integration with
adaptive large neighbourhood search. The second decomposition
strategy is based on logic-based Benders decomposition. In this
case, the master problem is solved as a mixed-integer program
and the subproblem is formulated as a constraint program. The
computational results from the two approaches are compared. To
conclude the talk, the different ways of exploiting problem
structure and solver efficiency will be discussed, and we will
compare the properties of the resulting decomposition approaches.
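As a hedged sketch of the generic loop underlying the second approach (schematic only; solve_master and check_subproblem stand in for the mixed-integer programming and constraint programming solvers):

    def logic_based_benders(solve_master, check_subproblem):
        cuts = []  # Benders cuts accumulated so far
        while True:
            assignment = solve_master(cuts)  # master problem (here: a MIP)
            feasible, cut = check_subproblem(assignment)  # subproblem (here: a CP)
            if feasible:
                return assignment  # the subproblem accepts, so we are done
            cuts.append(cut)  # otherwise, cut off this master assignment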
-
Joint BARC/MIAO seminar:
Friday Nov 12 at 14:00
in Auditorium No. 10, Hans Christian Ørsted Building,
Universitetsparken 5, University of Copenhagen,
and on Zoom
Superpolynomial lower bounds against low-depth algebraic circuits
(video)
(Nutan Limaye, IT University of Copenhagen)
Every multivariate polynomial P(X) can be written as a sum
of monomials, i.e. a sum of products of variables and field constants.
In general, the size of such an expression is the number of monomials
that have a non-zero coefficient in P.
What happens if we add another layer of complexity and consider
expressions that are sums of products of sums (of variables and field constants)?
Now, it becomes unclear how to prove that a given polynomial
P(X) does not have small expressions. In this work, we
solve exactly this problem.
More precisely, we prove that certain explicit polynomials have no
polynomial-sized "Sigma-Pi-Sigma" (sums of products of sums)
representations. We can also show similar results for
Sigma-Pi-Sigma-Pi, Sigma-Pi-Sigma-Pi-Sigma and so on for all
"constant-depth" expressions.
In the first part of this two-part talk, I will present the statements of the main results
and some background. In the second part of the talk, I will give some proof
details.
This is a joint work with Srikanth Srinivasan and Sébastien Tavenas.
-
Monday Nov 8 at 14:00
Lifting with inner-product and other low discrepancy gadgets
(video)
(Sajin Koroth, Simon Fraser University)
Lifting theorems are a powerful technique that exports our
understanding of a weak model of computation (usually query
complexity) to a powerful model of computation (usually
communication complexity). They have been instrumental in
solving many open problems in diverse areas of theoretical
computer science like communication complexity, circuit
complexity, proof complexity, linear programming, game theory,
etc. A typical lifting theorem connects the complexity of
computing any function f in the weak model to computing an
associated function f ∘ g in the powerful model. The associated
function f ∘ g is obtained by composing f with a specific function
g known as the gadget. A key aspect of the broad applicability and
success of lifting theorems is that they work for any f. In
a typical lifting theorem, this universality is achieved by
designing the gadget g with specific properties. The
applicability of a lifting theorem is usually limited by the
choice of gadget g and the size of the gadget. Thus, it is
crucial to understand what functions g can be used as a gadget
and how small g can be. In this talk, we present a lifting theorem
that connects query complexity to communication complexity and
works for almost all gadgets g of "smallish" size.
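Concretely, for f on n bits and a two-party gadget g: {0,1}^b × {0,1}^b → {0,1}, the composed function is (f ∘ g)(x, y) = f(g(x_1, y_1), …, g(x_n, y_n)), where Alice holds the blocks x_1, …, x_n and Bob holds y_1, …, y_n; the inner-product gadget of the title is g(x_i, y_i) = ⟨x_i, y_i⟩ mod 2.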
This talk is based on joint work with Arkadev Chattopadhyay, Yuval Filmus, Or Meir, and Toniann Pitassi.
-
Monday Oct 25 at 14:00
Proof complexity lower bounds by composition
(video)
(Robert Robere, McGill University)
In recent years, there has been an explosion in the development of
so-called "lifting theorems" in proof complexity and other closely
related areas which have led to the resolution of many
long-standing open problems. The basic idea of a lifting theorem
is simple: in order to prove a lower bound in a "complicated"
system that we do not understand (for example, Cutting Planes
refutations), we will "escalate" the hardness from a "simple"
system that we do understand (for example, Resolution refutations)
by taking a hard formula F for the simple system and composing it
with a specially chosen "gadget function" that removes the power
of the complicated system. In many cases, it is possible to prove
that the "best" possible strategy in the complicated system for
refuting the composed formula is to simulate the proof of the
uncomposed formula in the simple system (and thus lower bounds for
the uncomposed formula are "escalated" or "lifted" to lower bounds
for the composed formula). While lifting theorems in circuit
complexity can be traced back to the work of Raz and McKenzie
[RM99], recent developments have pushed these techniques to many
new areas, and have shown how lifting is a flexible tool that
allows quite finely tuned tradeoffs between different parameters
for a wide variety of proof systems.
In this talk, we give an introduction to and survey of lifting
results in proof complexity and related fields, like
communication complexity and circuit complexity, and outline
some of the ways that lifting can be used to control various
"complexity parameters" of unsatisfiable CNF formulas (like
proof length, proof space, proof depth, etc.). We will highlight
several recent developments in the area and indicate some future
directions.
-
Friday Oct 22 at 14:00
Verified proof checkers
(video)
(Magnus Myreen, Chalmers University of Technology)
Solvers, such as SAT solvers, are often complicated pieces
of software. How can we trust their results? For highly
optimised state-of-the-art solvers, testing is insufficient and
proving the functional correctness of the solver's
implementation is not practically possible. Fortunately, solvers
can often be augmented to produce proof certificates that can be
checked by separate simpler proof checkers. In recent years,
there has been growing interest in formally proving that the
proof checkers will never accept a certificate that contains
flaws.
In this talk, I will describe work that makes it possible to
prove functional correctness of proof checkers down to the
machine code that runs them. I have worked on (and have
supervised work on) several checkers. In this talk, I'll focus
on (1) my work on proving end-to-end correctness of Jared
Davis's Milawa prover, and (2) recent work on an LPR/LRAT checker
for UNSAT proofs. My talk will include a description of the
CakeML project, which was the context of (2). I will use demos
to show what the tools look like when running.
-
Friday Oct 15 at 10:00
Logic-based explainable AI
(video)
(Alexey Ignatiev, Monash University)
Explainable artificial intelligence (XAI) represents arguably
one of the most crucial challenges being faced by the area of AI
these days. Although the majority of approaches to XAI are of a
heuristic nature, recent work proposed the use of abductive
reasoning for computing provably correct explanations for machine
learning (ML) predictions. The proposed rigorous approach was
shown to be useful not only for computing trustable explanations
but also for reasoning about explanations computed
heuristically. It was also applied to uncover a close
relationship between XAI and verification of ML models. This
talk will overview the advances of the rigorous logic-based
approach to XAI as well as the use of reasoning in devising
interpretable rule-based ML models including decision trees,
decision sets, and decision lists.
-
Monday Oct 4 at 14:00
Monotone arithmetic lower bounds via communication complexity
(video)
(Arkadev Chattopadhyay, Tata Institute of Fundamental Research, Mumbai)
How much power does negation or cancellation provide to
computation? This is a fundamental question in theoretical
computer science that appears in various parts: in Boolean
circuits, arithmetic circuits and also in communication
complexity. I will talk about some new connections between the
latter two fields and their applications to extend two classical
results from four decades ago:
-
Valiant (1979) showed that monotone arithmetic circuits are
exponentially weaker than general circuits for computing
monotone polynomials. Our first result gives a qualitatively
more powerful separation by showing an exponential separation
between general monotone circuits and constant-depth
multi-linear formulas. Neither such a separation between general
formulas and monotone circuits, nor a separation between
multi-linear circuits and monotone circuits, was known
before. Our result uses the recent counter-example to the
Log-Approximate-Rank Conjecture in communication complexity.
-
Jerrum and Snir (1982) also obtained a separation between the
powers of general circuits and monotone ones via a different
polynomial, namely the spanning tree polynomial (STP; see the
formula below), which is well known to be in VP, using
non-multi-linear cancellations of determinantal computation. We
provide the first extension of this result, showing that the STP
remains "robustly hard" for monotone circuits in the sense of
Hrubes' recent notion of epsilon-sensitivity. The latter result is
proved by formulating a discrepancy method for monotone arithmetic
circuits that seems independently interesting.
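For reference, the spanning tree polynomial of a graph G can be written as STP(G) = Σ_T Π_{e ∈ T} x_e, where the sum ranges over all spanning trees T of G.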
We will discuss several open problems arising from these results.
(The seminar is based on joint works with Rajit Datta, Utsab Ghosal and
Partha Mukhopadhyay.)
-
Monday Sep 20 at 14:00
KRW composition theorems via lifting
(video)
(Or Meir, University of Haifa)
One of the major open problems in complexity theory is proving
super-logarithmic lower bounds on the depth of circuits (i.e., separating
P from NC1).
Karchmer, Raz, and Wigderson (Computational
Complexity 5(3/4), 1995) suggested approaching this problem by proving
that depth complexity behaves "as expected" with respect to
composition of functions. They showed that the validity of this
conjecture would separate
P from NC1.
Several works have made
progress toward resolving this conjecture by proving special cases. In
particular, these works proved the KRW conjecture for every outer
function, but only for few inner functions. Thus, it is an important
challenge to prove the KRW conjecture for a wider range of inner
functions.
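Concretely, for f: {0,1}^m → {0,1} and g: {0,1}^n → {0,1}, the composition is (f ∘ g)(x_1, …, x_m) = f(g(x_1), …, g(x_m)), and the KRW conjecture asserts, roughly, that the depth complexity satisfies D(f ∘ g) ≈ D(f) + D(g).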
In this work, we significantly extend the range of inner
functions that can be handled. First, we consider the monotone
version of the KRW conjecture. We prove it for every monotone
inner function whose depth complexity can be lower bounded via a
query-to-communication lifting theorem. This allows us to handle
several new and well-studied functions such as the
s-t-connectivity, clique, and generation functions.
In order to carry this progress back to the non-monotone
setting, we introduce a new notion of semi-monotone composition,
which combines the non-monotone complexity of the outer function
with the monotone complexity of the inner function. In this
setting, we prove the KRW conjecture for a similar selection of
inner functions, but only for a specific choice of the outer
function.
-
Friday Sep 17 at 14:15
Estimating the size of unions of sets in the streaming model
(video)
(Kuldeep S. Meel, National University of Singapore)
Feeling tired of complicated theorems with complicated proofs?
Fret not; this week will be different: A simple theorem with a simpler
proof for a general problem. I will make a case for our PODS-21 paper
to be the simplest paper ever accepted at PODS.
A set is Delphic if it supports efficient membership, sampling, and
counting calls. The notion of Delphic sets captures several well-known
problems, such as Klee's measure, distinct elements, and model
counting of DNF formulas. We will discuss estimating the size of
the union of Delphic sets, where each set is implicitly (and
succinctly) represented and arrives in a streaming fashion.
The primary contribution of our work is a simple, elegant, and
efficient sampling-based algorithm for the estimation of the union in
the streaming setting. Our algorithm has worst-case space
complexity and update time that are logarithmic in the size of the
universe and the stream length. Consequently, it provides the
first algorithm with a linear dependence on d for Klee's measure
problem in the streaming setting for d > 1, thereby settling the open
problem of Woodruff and Tirthapura (PODS-12). Klee's measure
problem corresponds to computing the volume of a union of
d-dimensional axis-aligned rectangles, where every d-dimensional
axis-aligned rectangle can be defined as [a_1,b_1] x [a_2,b_2] x … x [a_d, b_d].
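To convey the flavor of such sampling-based estimation, here is a hedged sketch of the special case where every streamed set is a singleton, i.e., distinct-elements counting (our simplification, not necessarily the exact algorithm of the paper):

    import random

    def estimate_distinct(stream, capacity=1000):
        # Keep a bounded random sample; each kept item represents 1/p items.
        p, sample = 1.0, set()
        for x in stream:
            sample.discard(x)  # forget any earlier decision about x
            if random.random() < p:
                sample.add(x)  # re-admit x with the current probability
            if len(sample) >= capacity:  # buffer full: thin the sample
                sample = {y for y in sample if random.random() < 0.5}
                p /= 2
        return len(sample) / p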
(Joint work with N.V. Vinodchandran and Sourav Chakraborty.)
-
Monday Sep 6 at 14:00
Lifting with sunflowers
(video)
(Ian Mertz, University of Toronto)
Lifting theorems are an important class of techniques which
can transform functions which are hard for weak computation
models into functions which are hard for strong computation
models. Starting with Raz and McKenzie (1999), there has been
a flurry of work proving query-to-communication lifting
theorems, which translate lower bounds on query complexity to
lower bounds for the corresponding communication model. In
this talk I will present a simplified proof of deterministic
query-to-communication lifting. Our proof uses elementary
counting together with a novel connection to the sunflower
lemma.
-
Friday Aug 27 at 10:00
Who is Isabelle, and why is she so picky about my proofs?
(Dmitriy Traytel, University of Copenhagen)
Proof assistants are tools that mechanically check human-written formal proofs,
e.g., of an algorithm's correctness, and support their users in developing
these proofs. Landmark achievements in this area include realistic verified
compilers, operating system kernels, and distributed systems as well as formal
proofs of deep mathematical results such as the four color theorem, the
Feit–Thompson theorem, and the Kepler conjecture. In this talk, I will give a
brief introduction of the Isabelle/HOL proof assistant, of which I am both a user
and a developer. I will explain how Isabelle's foundational approach achieves
the highest level of trustworthiness and will exemplify the benefits of using a
proof assistant via three case studies: the formal verification of security
properties of authenticated data structures, the completeness of ordered
resolution, and the correctness of an efficient runtime monitoring algorithm.
MIAO seminars spring 2021
During the spring semester of 2021,
we had online virtual seminars running every weekday
at 17:30 CET from early February to mid-May
as part of the
Satisfiability: Theory, Practice, and Beyond
and
Theoretical Foundations of Computer Systems
programs
at the
Simons Institute
for the Theory of Computing.
On top of this, we had some additional seminars as listed
below.
-
Monday Jun 7 at 14:00
Proof complexity meets finite model theory
(video)
(Joanna Ochremiak, LaBRI, Université de Bordeaux and CNRS)
Finite model theory studies the power of logics on the class of finite
structures. One of its goals is to characterize symmetric computation,
that is, computation that abstracts away details which are not essential
for the given task, by respecting the symmetries of the input.
In this talk I will discuss connections between proof complexity and
finite model theory, focussing on lower bounds. For certain proof
systems, the existence of a succinct refutation can be decided in a
symmetry-preserving way. This allows us to transfer lower bounds from
finite model theory to proof complexity. I will introduce this approach
and explain its key technical ideas such as the method of folding for
dealing with symmetries in linear programs.
-
Friday Jun 4 at 14:00
Abstract cores in implicit hitting set based MaxSAT solving
(video)
(Jeremias Berg, University of Helsinki)
Maximum satisfiability (MaxSat) solving is an active area of research
motivated by numerous successful applications to solving NP-hard
combinatorial optimization problems. One of the most successful
approaches for solving MaxSat instances from real world domains are the
so-called implicit hitting set (IHS) solvers. IHS solvers decouple
MaxSat solving into separate core-extraction and optimization steps,
which are tackled by a Boolean satisfiability (SAT) solver and an
integer linear programming (IP) solver, respectively. While the
approach shows state-of-the-art performance on many industrial
instances, it is known that there exist instances on which IHS
solvers need to extract an exponential number of cores before
terminating. In this talk I will
present the simplest of these problematic instances
and talk about how abstract cores, a compact representation of a large
number of regular cores that we recently proposed, address perhaps the
main bottleneck of IHS solvers. I will show how to incorporate abstract
core reasoning into the IHS algorithm and demonstrate that
including abstract cores in a state-of-the-art IHS solver
improves its performance enough to surpass the best-performing
solvers of the recent MaxSat Evaluations.
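For orientation, a hedged sketch of the basic IHS loop (schematic; extract_core and min_hitting_set stand in for the SAT and IP components, and abstract cores refine what gets stored in cores):

    def ihs_maxsat(hard, soft, extract_core, min_hitting_set):
        cores = []
        while True:
            hs = min_hitting_set(cores)  # IP: cheapest soft clauses hitting all cores
            active = [c for c in soft if c not in hs]
            core = extract_core(hard, active)  # SAT call; None means satisfiable
            if core is None:
                return hs  # an optimal set of soft clauses to falsify
            cores.append(core)  # record the newly extracted core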
-
Friday May 28 at 14:00
Feasible interpolation for algebraic proof systems
(video)
(Tuomas Hakoniemi,
Universitat Politècnica de Catalunya)
In this talk we present a form of feasible interpolation for two
algebraic proof systems, Polynomial Calculus and Sums-of-Squares. We
show that for both systems there is a poly-time algorithm that, given
two sets P(x,z) and Q(y,z) of polynomial equalities, a refutation of
the union of P(x,z) and Q(y,z), and an assignment a to the
z-variables, outputs either a refutation of P(x,a) or a refutation of
Q(y,a).
Our proofs are fairly logical in nature, in that they rely heavily on
semantic characterizations of resource-bounded refutations to prove the
existence of suitable refutations. These semantic existence proofs
narrow down the search space for the refutations we are after so that we
can actually find them efficiently.
-
Monday May 17 at 14:00
Average-case perfect matching lower bounds
from hardness of Tseitin formulas
(video)
(Kilian Risse, KTH Royal Institute of Technology)
We study the complexity of proving that a sparse random regular
graph on an odd number of vertices does not have a perfect matching,
and related problems involving each vertex being matched some
pre-specified number of times.
We show that this requires proofs of degree Ω(n / log n) in
the Polynomial Calculus (over fields of characteristic ≠ 2) and
Sum-of-Squares proof systems, and exponential size in the
bounded-depth Frege proof system.
This resolves a question by Razborov asking whether
the Lovász-Schrijver proof system requires n^δ
rounds to refute these formulas for some δ > 0.
The results are obtained by a
worst-case to average-case reduction of these formulas relying on a
topological embedding theorem which may be of independent interest.
Joint work with Per Austrin.
-
Monday May 10 at 14:00
On the complexity of branch and cut
(video)
(Noah Fleming, University of Toronto)
The Stabbing Planes proof system was introduced to model practical
branch-and-cut integer programming solvers. As a proof system, Stabbing
Planes can be viewed as a simple generalization of DPLL to reason about
linear inequalities. It is powerful enough to simulate Cutting Planes
and produce short refutations of the Tseitin formulas — certain
unsatisfiable systems of linear equations mod 2 — which are canonical
hard examples for many algebraic proof systems. In a surprising recent
result, Dadush and Tiwari showed that these short Stabbing Planes
refutations of the Tseitin formulas could be translated into Cutting
Planes proofs. This raises the question of whether all Stabbing Planes
proofs can be efficiently translated into Cutting Planes. In recent work, we
give a partial answer to this question.
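Here a Stabbing Planes refutation branches, at each step, on an integer disjunction ax ⩽ b or ax ⩾ b + 1 for an integer vector a and an integer b, so that no integer point is lost; branching on a single variable, x_i ⩽ 0 or x_i ⩾ 1, recovers DPLL.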
In this talk I will introduce and motivate the Stabbing Planes proof
system. I will then show how Stabbing Planes proofs with
quasi-polynomially bounded coefficients can be quasi-polynomially
translated into Cutting Planes. As a consequence of this translation, we
can show that Cutting Planes has quasi-polynomial size refutations of
any unsatisfiable system of linear equations over a finite field. A
remarkable property of our translation for systems of linear equations
over finite fields, and the translation of Dadush and Tiwari for the
Tseitin formulas, is that the resulting proofs are also
quasi-polynomially deep. A natural question is thus whether these depth
bounds can be improved. In the last part of the talk, I will discuss
progress towards answering this question in the form of a new depth
lower bound technique for Cutting Planes, which works also for the
stronger semantic Cutting Planes system, and which allows us to
establish the first linear lower bounds on the depth of semantic Cutting
Planes refutations of the Tseitin formulas.
-
Monday May 3 at 14:00
SAT-encodings and scalability
(video)
(André Schidler, Technische Universität Wien)
Boolean satisfiability (SAT) solvers have reached stunning performance
over the last decade. This performance can be harnessed for other
problems by means of SAT encodings: translating the problem instance
into a SAT instance. SAT encodings have been used successfully for many
combinatorial problems and are established in industry. In this talk, I
discuss the general idea behind SAT encodings and two specific
application areas: graph decompositions and machine learning.
Graph decompositions allow for specialized algorithms that can solve
hard problems efficiently. SAT encodings provide not only the means to
optimally compute these decompositions, but can be easily extended by
extra constraints specific to the hard problem to be solved.
For machine learning, SAT encodings have gained increased attention in
recent years, as they allow the induction of very small AI models.
Learning small models has become more important in the context of
explainable AI: smaller models are usually easier to understand for
humans. We will focus specifically on decision tree induction.
Scalability is one of the main issues when using SAT encodings. We can
overcome this problem by embedding SAT-based solving into a heuristic
framework. This allows us to trade a slightly worse result for much
broader applicability. In the last part of the talk, we discuss our SAT-based
local improvement framework that implements this idea.
-
Wednesday Apr 28 at 15:00
Slicing the hypercube is not easy
(video)
(Amir Yehudayoff, Technion – Israel Institute of Technology)
How many hyperplanes are needed to slice all edges of the hypercube?
This question has been studied in machine learning, geometry and
computational complexity since the 1970s. We shall describe (most of) an
argument showing that more than n0.57 hyperplanes are needed, for large n. We shall also see a couple of applications. This is joint work with Gal Yehuda.
-
Monday Apr 26 at 14:00
Recent lower bounds in algebraic complexity theory
(video)
(Ben Lee Volk, University of Texas at Austin)
Algebraic complexity theory studies the complexity of solving algebraic
computational tasks using algebraic models of computation. One major
problem in this area is to prove lower bounds on the number of
arithmetic operations required for computing explicit polynomials. This
natural mathematical problem is the algebraic analog of the famous P vs. NP problem. It also has tight connections to other classical mathematical areas and to fundamental questions in complexity theory.
In this talk I will provide background and then present some recent
progress on proving lower bounds for models of algebraic computation,
such as the algebraic analogs of NC1 and NL.
Based on joint works with Prerona Chatterjee, Mrinal Kumar, and Adrian She.
-
Friday Apr 23 at 14:00
On the complexity of branching proofs
(video)
(Daniel Dadush, CWI)
We consider the task of proving integer infeasibility of a bounded
convex set K in R^n using a general branching
proof system. In a general branching proof, one constructs a branching
tree by adding an integer disjunction ax ⩽ b or
ax ⩾ b+1, for an integer vector a and an
integer b, at each node, such that the leaves of the tree
correspond to empty sets (i.e., K together with the
inequalities picked up from the root to leaf is empty). Recently,
Beame et al (ITCS 2018), asked whether the bit size of the
coefficients in a branching proof, which they named stabbing planes
(SP) refutations, for the case of polytopes derived from SAT formulas,
can be assumed to be polynomial in n. We resolve this
question by showing that any branching proof can be recompiled so that
the integer disjunctions have coefficients of size at most
(nR)^{O(n^2)}, where R is the radius of an
l_1-ball containing K, while increasing
the number of nodes in the branching tree by at most a factor
O(n). As our second contribution, we show that Tseitin
formulas, an important class of infeasible SAT instances, have
quasi-polynomial sized cutting plane (CP) refutations, disproving the
conjecture that Tseitin formulas are (exponentially) hard for CP. As
our final contribution, we give a simple family of polytopes in
[0,1]^n requiring branching proofs of length
2^n/n.
Joint work with Samarth Tiwari.
-
Monday Apr 19 at 14:00
Kidney exchange programmes —
saving lives with optimisation algorithms
(video)
(William Pettersson, University of Glasgow)
Kidney Exchange Programmes (KEPs) increase the rate of living
donor kidney transplantation, in turn saving lives. In this
talk, I will explain exactly what KEPs are, how KEPs achieve
this goal, and the role of optimisation algorithms within
KEPs. This will include brief explanations of some of the
models used for kidney exchange programmes, some of the more
generic optimisation techniques that we use, as well as recent
advances and specific research directions for the field into
the future. This talk will very much be an overview of kidney
exchange, without going into complex details of optimisation
algorithms, making it suitable for a wider audience.
-
Thursday Apr 8 at 14:00
(Semi)Algebraic proofs over {±1} variables
(video)
(Dmitry Sokolov, St.Petersburg State University)
One of the major open problems in proof complexity is to prove
lower bounds on AC0[p]-Frege proof systems. As a step toward
this goal, Impagliazzo, Mouli, and Pitassi in a recent paper
suggested proving size lower bounds for Polynomial
Calculus over the {±1} basis. In this talk we show a
technique for proving such lower bounds and, moreover, we also
give size lower bounds for Sum-of-Squares over the
{±1} basis.
We discuss the difference between the {0, 1} and {+1, -1} cases and
the problems arising in further generalizations of the lower bounds.
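For reference, the two bases are connected by the affine substitution y = 1 − 2x, which maps x ∈ {0, 1} to y ∈ {+1, −1}; this substitution preserves the degree of a polynomial but can change its number of monomials drastically, which is one reason proof size behaves differently over the two bases.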
-
Courtesy of Zuse Institute Berlin:
Friday Feb 12 at 14:00
Introduction to IP presolving techniques in PaPILO
(Alexander Hoen, Zuse Institute Berlin)
Presolving is an essential component contributing to the performance of
modern MIP solvers. PaPILO, a new C++ library, provides presolving
routines for MIP and LP problems. This talk will give an introduction
to basic IP-presolving techniques and provide insights into important
design choices enabling PaPILO's capabilities, in particular
regarding its use of parallel hardware. While presolving itself is
designed to be fast, this can come at the cost of failing to find
important reductions due to working limits and heuristic filtering.
Yet, even most commercial solvers do not use multi-threading for the
preprocessing step as of today. PaPILO's design facilitates use of
parallel hardware to allow for more aggressive presolving and
presolving of huge problems. The architecture of PaPILO allows
presolvers generally to run in parallel without requiring expensive
copies of the problem and without special synchronization in the
presolvers themselves. Additionally, the use of Intel's TBB library
helps PaPILO efficiently exploit recursive parallelism within
expensive presolving routines, such as probing, dominated columns, or
constraint sparsification. Despite PaPILO's use of parallelization,
its results are guaranteed to be deterministic independently of the
number of threads available.
MIAO seminars autumn 2020
-
Friday Oct 9 at 13:15
Model counting with probabilistic component caching
(Shubham Sharma and Kuldeep S. Meel,
National University of Singapore)
Given a Boolean formula F, the problem of model counting, also
referred to as #SAT, seeks to compute the number of solutions of F.
Model counting is a fundamental problem with a wide variety of
applications ranging from planning, quantified information flow to
probabilistic reasoning and the like. Modern #SAT solvers tend to
be based on either static decomposition, dynamic decomposition, or a
hybrid of the two. Despite dynamic decomposition-based #SAT solvers
sharing much of their architecture with SAT solvers, the core design
and heuristics of dynamic decomposition-based #SAT solvers have
remained constant for over a decade. In this paper, we revisit the
architecture of the state-of-the-art dynamic decomposition-based #SAT
tool, sharpSAT, and demonstrate that introducing a new notion of
probabilistic component caching and using universal hashing for
exact model counting, together with several new heuristics, can
lead to significant performance improvements over
state-of-the-art model counters. In particular, we
develop GANAK, a new scalable probabilistic exact model counter that
outperforms state-of-the-art exact and approximate model counters for
a wide variety of instances.
-
Friday Sep 25 at 13:15
Manthan: A data-driven approach for Boolean function synthesis
(Priyanka Golia and Kuldeep S. Meel,
National University of Singapore)
Boolean functional synthesis is a fundamental problem in computer
science with wide-ranging applications and has witnessed a surge of
interest resulting in progressively improved techniques over the past
decade. Despite intense algorithmic development, a large number of
problems remain beyond the reach of the current state-of-the-art
techniques. Motivated by the progress in machine learning, we propose
Manthan, a novel data-driven approach to Boolean functional
synthesis. Manthan views functional synthesis as a classification
problem, relying on advances in constrained sampling for data
generation, and advances in automated reasoning for a novel
proof-guided refinement and provable verification. On an extensive
and rigorous evaluation over 609 benchmarks, we demonstrate that
Manthan significantly improves upon the current state of the art,
solving 356 benchmarks in comparison to 280, which is the most solved
by a state-of-the-art technique; thereby, we demonstrate an increase
of 76 benchmarks over the current state of the art. The significant
performance improvements, along with our detailed analysis, highlight
several interesting avenues of future work at the intersection of
machine learning, constrained sampling, and automated reasoning.
-
Friday Aug 28 at 13:15
Tuning Sat4j PB solvers for decision problems
(Romain Wallon,
Université d'Artois)
During the last decades, many improvements in CDCL SAT solvers have
made it possible to efficiently solve large problems containing millions of
variables and clauses.
Despite this practical efficiency, some instances remain out of reach
for such solvers.
This is particularly true when the input formula requires an
exponential-size refutation in the resolution proof system (as
pigeonhole formulae do).
This observation motivated the development of another kind of solvers,
known as pseudo-Boolean (PB) solvers.
These solvers take as input a conjunction of PB constraints (integer
linear inequalities over Boolean variables) and benefit from the cutting
planes proof system, which is (in theory) stronger than the resolution
proof system.
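As an illustration, a clause x ∨ y is the same as the PB constraint x + y ⩾ 1, whereas a single PB constraint such as x_1 + … + x_k ⩽ 1 would require on the order of k² clauses in the usual pairwise CNF encoding; pigeonhole formulae, which are exponentially hard for resolution, have short cutting planes refutations when written with such constraints.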
To implement this proof system, current PB solvers follow the direction
of modern SAT solvers, by implementing a conflict analysis procedure
relying on the application of cutting planes rules.
However, adapting the CDCL architecture to take into account PB
constraints is not as straightforward as it may look.
In particular, many CDCL invariants and properties no longer hold
once PB constraints are considered.
While some of them may be safely ignored (e.g., when they affect the
decision heuristic, the deletion strategy or the restart policy), some
others must be fixed to ensure the soundness of the solver (e.g., by
ensuring that the conflict is preserved during its analysis).
In this talk, we will give an overview of the main differences between
PB solving and classical SAT solving, and present different approaches
that have been designed to take these differences into account to extend
the CDCL architecture to PB problems.
We will in particular discuss the pros and cons of these approaches,
and we will explain how they can be enhanced to improve the practical
performance of PB solvers.
-
Friday Aug 21 at 10:15
Using proofs to analyze SAT solvers
(Janne Kokkala,
Lund University and University of Copenhagen)
The main idea of this seminar is to give an overview of what
one can ask and what one can learn about SAT solvers when
analyzing the proof to estimate what part of the work was
useful. I will start by giving a 17-ish minute alpha test run
for my presentation for our CP paper [1]. After that, I will
discuss earlier works that use the same or a similar idea for
general insights [2], analyzing VSIDS usefulness [3], and
clause exchange for parallel solvers [4,5]. To conclude, we
can discuss how this approach could be used for pseudo-Boolean
solvers.
[1] J. I. Kokkala, J. Nordström.
Using Resolution Proofs to
Analyse CDCL SAT solvers.
[2] L. Simon.
Post Mortem Analysis of SAT Solver Proofs.
[3] S. Malik, V. Ying.
On the efficiency of the VSIDS decision heuristic.
(Workshop presentation.)
[4] G. Audemard, L. Simon.
Lazy
clause exchange policy for parallel SAT solvers.
[5] G. Katsirelos, A. Sabharwal, H. Samulowitz, L. Simon.
Resolution and parallelizability: Barriers to the efficient
parallelization of SAT solvers.
-
Wednesday Aug 19 at 13:15
On computational aspects of the antibandwidth problem
(Markus Sinnl,
Johannes Kepler University Linz)
In this talk, we consider the antibandwidth problem, also
known as the dual bandwidth problem, the separation problem, and the
maximum differential coloring problem. Given a labeled graph
(i.e., a numbering of the vertices of a graph), the
antibandwidth of a node is defined as the minimum absolute
difference between its label and the labels of its adjacent
vertices. The goal in the antibandwidth problem is to find a
labeling maximizing the antibandwidth. The problem is NP-hard
in general graphs and has applications in diverse areas like
scheduling, radio frequency assignment, obnoxious facility
location and map-coloring.
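For concreteness, evaluating the objective for a given labeling is straightforward (a minimal sketch, with names of our choosing):

    def antibandwidth(labels, edges):
        # The smallest absolute label difference across any edge; the
        # optimisation problem asks for a labeling maximising this value.
        return min(abs(labels[u] - labels[v]) for (u, v) in edges)

    # Example: a path 0-1-2-3 labeled 2, 4, 1, 3 has antibandwidth 2.
    print(antibandwidth({0: 2, 1: 4, 2: 1, 3: 3}, [(0, 1), (1, 2), (2, 3)]))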
There has been much work on deriving theoretical bounds for
the problem and also in the design of metaheuristics in recent
years. However, the optimality gaps between the best known
solution values and reported upper bounds for the
Harwell-Boeing matrix instances, which are the commonly used
benchmark instances for this problem, were often very large
(e.g., up to 577%).
We present new mixed-integer programming approaches for the
problem, including one approach that does not directly
formulate the problem as an optimization problem, but as a series
of feasibility problems. We also discuss how these feasibility
problems can be encoded with various SAT-encodings, including
a new and specialised encoding which exploits a certain
staircase structure occurring in the problem formulation. We
present computational results for all the algorithms,
including a comparison of the MIP and SAT-approaches. Our
developed approaches allow us to find proven optimal solutions
for eight instances from the literature for which the optimal
solution was previously unknown, and also provide reduced gaps for
eleven additional instances, including improved solution values for
seven instances; the largest optimality gap is now
46%. Instances based on the problem were submitted to the SAT
Competition 2020.
MIAO seminars spring 2020
-
Wednesday Jul 15 at 15:00
Naïve algorithm selection for SAT solving
(Stephan Gocht,
Lund University and University of Copenhagen)
Although solving propositional formulas is an NP-complete
problem, state-of-the-art SAT solvers are able to solve formulas
with millions of variables. To obtain good performance it is
necessary to configure parameters for heuristic
decisions. However, there is no single parameter configuration
that is perfect for all formulas, and choosing the parameters is
a difficult task. The standard approach is to evaluate different
configurations on some formulas and to choose the single
configuration that performs best overall. This configuration,
which is called the single best solver, is then used to solve new
unseen formulas. In this paper we demonstrate how random forests
can be used to choose a configuration dynamically based on
simple features of the formula. The evaluation shows that our
approach is able to outperform the single best solver on
formulas that are similar to the training set, but not on
formulas from completely new domains.
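As a hedged, self-contained illustration of the idea (toy data, and scikit-learn as one possible library; the actual features and setup in the talk may differ):

    from sklearn.ensemble import RandomForestClassifier

    # Toy feature vectors per training formula, e.g. (number of variables,
    # number of clauses, clause/variable ratio), and the configuration that
    # performed best on each formula.
    train_features = [[1e4, 4e4, 4.0], [2e5, 9e5, 4.5], [5e3, 6e4, 12.0]]
    best_config = ["config_A", "config_A", "config_B"]

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(train_features, best_config)

    # Choose a configuration dynamically for an unseen formula:
    print(model.predict([[8e3, 7e4, 8.75]])[0])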
-
Friday Jun 26 at 10:00
Behind the scenes of chronological CDCL
(Sibylle Möhle and Armin Biere,
Johannes Kepler University Linz)
Combining conflict-driven clause learning (CDCL) with
chronological backtracking is challenging: multiple invariants
considered crucial in modern SAT solvers are violated if, after
conflict analysis, the solver does not jump to the
assertion level but to a higher decision level instead. In
their SAT'18 paper "Chronological Backtracking", Alexander
Nadel and Vadim Ryvchin provide a fix to this issue. Moreover,
their SAT solver implementing chronological backtracking won
the main track of the SAT Competition 2018. In our SAT'19
paper, "Backing Backtracking", we present a formalization and
generalization of this method. We prove its correctness and
provide an independent implementation.
In this seminar, we demonstrate the workings of chronological
CDCL by means of an example. In this example, after a conflict
the conflicting clause contains only one literal at conflict
level. It is therefore used as a reason for backtracking, thus
saving the effort of conflict analysis. We further show which
invariants are violated and present new ones followed by a
discussion of the rules of our formal framework. We also shed
light onto implementation details, including those which are
not mentioned in our SAT'19 paper.
-
Monday Jun 22 at 13:30
McSplit: A partitioning algorithm for maximum common subgraph
problems
(Ciaran McCreesh and James Trimble,
University of Glasgow)
We will give a short introduction to McSplit, an algorithm for
the maximum common (connected) subgraph problem coauthored
with Patrick Prosser and presented at IJCAI 2017. McSplit
resembles a forward-checking constraint programming algorithm,
but uses a partitioning data structure to store domains which
greatly reduces memory use and time per search node. We will
also present our recent work with Stephan Gocht and Jakob
Nordström on adding proof logging to the algorithm, turning
McSplit into a certifying algorithm whose outputs can be
independently verified.
-
Thursday Jun 18 at 20:30
A pseudo-Boolean approach to nonlinear verification
(Vincent Liew,
University of Washington)
We discuss some new experimental results showing the promise
of using pseudo-Boolean solvers, rather than SAT solvers, to
verify bit-vector problems containing multiplication. We use
this approach to efficiently verify the commutativity of a
multiplier output bit by output bit. We also give some examples
of simple bit-vector inequalities where the pseudo-Boolean
approach significantly outperformed SAT solvers and even
bit-vector solvers. Finally, we give some of our observations on
the strengths and weaknesses of different methods of conflict
analysis used by pseudo-Boolean solvers.
-
Friday May 8 at 13:15
Pseudo-Boolean solvers for answer set programming
(Bart Bogaerts, Vrije Universiteit Brussel)
Answer set programming (ASP) is a well-established knowledge
representation formalism. Most ASP solvers are based on
(extensions of) technology from Boolean satisfiability
solving. While these solvers have proven very successful in
many practical applications, their strength is limited by their
underlying proof system, resolution. In this research, we
present a new tool LP2PB that translates ASP programs into
pseudo-Boolean theories, for which solvers based on the
(stronger) cutting plane proof system exist. We evaluate our
tool, and the potential of cutting-plane-based solving for ASP,
on traditional ASP benchmarks as well as benchmarks from
pseudo-Boolean solving. Our results are mixed: overall,
traditional ASP solvers still outperform our translational
approach, but several benchmark families are identified where
the balance shifts the other way, thereby suggesting that
further investigation into a stronger proof system for ASP is
valuable.
|