Seminar archive

23.06.43420
Przemysław Rutka (Lublin)
Podstawy Informatyki
Selected algorithmic applications of classical orthogonal polynomials
Classical orthogonal polynomials, the corresponding classical weight functions, and their properties find numerous applications in areas such as tomography, quantum mechanics, combinatorics, image and signal processing, data compression, and improving the efficiency of algorithms. In this last area many interesting results continue to be obtained, enabling the efficient numerical solution of various problems. These problems include, in particular, barycentric Fejér, Hermite, and Lagrange interpolation, as well as extremal problems of Szegő and Markov-Bernstein type. In the first case, when n values are interpolated at nodes that are the zeros of classical orthogonal polynomials, the computational complexity of algorithms evaluating the interpolating polynomials via barycentric formulas can be improved from O(n^2) to O(n). This requires suitable explicit formulas for the barycentric weights, or formulas relating the barycentric weights to the weights and nodes of Gauss quadratures. In the second case, which turns out to be related to the first, one can derive formulas that allow the best constants appearing in inequalities of Szegő and Markov-Bernstein type, together with the values of the extremal polynomials for which these inequalities become equalities, to be computed directly on a computer. These inequalities involve iterated classical weight functions and can be used to estimate values or norms of derivatives D^{k}p or forward differences Δ^{k}p of polynomials p(x), in the continuous and the discrete case, respectively.
Other results of this kind, relying on classical weights and/or classical orthogonal polynomials, can also be obtained for an isoperimetric-type problem in the class of planar, closed polynomial curves, for the electrostatic equilibrium problem of a system of charges, for the problem of efficient, stable, and most economical interpolation, and for the problem of two-sided a priori approximation bounds of Chernoff type.
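As a hedged illustration of the O(n) barycentric evaluation mentioned above: a minimal sketch assuming Chebyshev nodes of the first kind, whose barycentric weights have a known closed form (w_j proportional to (−1)^j sin((2j+1)π/(2n))); the function names are illustrative, not from the talk.

```python
import math

def chebyshev_nodes_and_weights(n):
    """Zeros of the Chebyshev polynomial T_n and their explicit
    barycentric weights; the closed-form weights are what make
    O(n) setup and evaluation possible."""
    nodes, weights = [], []
    for j in range(n):
        theta = (2 * j + 1) * math.pi / (2 * n)
        nodes.append(math.cos(theta))
        weights.append((-1) ** j * math.sin(theta))
    return nodes, weights

def barycentric_eval(x, nodes, weights, values):
    """Evaluate the interpolating polynomial at x in O(n) using the
    second (true) barycentric formula."""
    num = den = 0.0
    for xj, wj, fj in zip(nodes, weights, values):
        if x == xj:                 # x hits a node: return the data value
            return fj
        t = wj / (x - xj)
        num += t * fj
        den += t
    return num / den

nodes, weights = chebyshev_nodes_and_weights(16)
values = [math.exp(x) for x in nodes]                 # sample f(x) = e^x
print(barycentric_eval(0.3, nodes, weights, values))  # ≈ exp(0.3)
```
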
09.03.68171
Bartłomiej Bosek
Informatyka Teoretyczna
Algorithms for posets and graphs games – coloring and matching

Graph coloring and online graph algorithms are key areas of algorithmic graph theory. Specifically, the talk will present results concerning

  • different variants of coloring of graphs and partially ordered sets,
  • online coloring of partially ordered sets,
  • incremental matchings of bipartite graphs.

The first part of the talk will concern different aspects of the coloring problem as well as different proof techniques. The presented results concern majority choosability of digraphs, harmonious coloring of hypergraphs, and the semiantichain and unichain covering ("semi-uni") conjecture for the product of two posets. The next part of the presentation will concern online chain partitioning of posets. I will present a full characterization of the class of posets for which the number of colors (chains) used by First-Fit is a function of the width, i.e. of the best offline solution. This part will also present two different subexponential online algorithms for the online chain partitioning problem. The last part will concern the incremental matching problem in bipartite graphs. I will present an incremental algorithm that maintains a maximum-size matching in total time equal to the running time of the Hopcroft-Karp algorithm, one of the fastest offline maximum matching algorithms. Moreover, I will show an analysis of the shortest augmenting path algorithm, as sketched below.
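A minimal sketch of the augmenting-path idea in the incremental setting (one augmentation attempt per arriving edge); this illustrates the general technique, not the speaker's exact algorithm:

```python
def try_kuhn(u, adj, match_r, visited):
    """DFS for an augmenting path starting at a free left vertex u
    (Kuhn's algorithm). match_r maps right vertex -> matched left
    vertex; the path is flipped on the way back from the recursion."""
    for v in adj.get(u, ()):
        if v not in visited:
            visited.add(v)
            if v not in match_r or try_kuhn(match_r[v], adj, match_r, visited):
                match_r[v] = u
                return True
    return False

def add_edge(adj, match_r, lefts, u, v):
    """Incremental step: insert edge (u, v), then try to augment from
    each currently free left vertex. The maximum matching grows by at
    most one per insertion, so one successful augmentation suffices."""
    adj.setdefault(u, []).append(v)
    lefts.add(u)
    matched = set(match_r.values())
    for s in lefts - matched:
        if try_kuhn(s, adj, match_r, set()):
            break

adj, match_r, lefts = {}, {}, set()
for u, v in [(0, 'a'), (1, 'a'), (1, 'b'), (2, 'b')]:
    add_edge(adj, match_r, lefts, u, v)
print(match_r)  # {'a': 0, 'b': 1}: a maximum matching of size 2
```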

This is joint work with Marcin Anholcer, Jarosław Grytczuk, Sebastian Czerwiński, Paweł Rzążewski, Stefan Felsner, Kolja Knauer, Grzegorz Matecki, Tomasz Krawczyk, H. A. Kierstead, Matthew Smith, Dariusz Leniowski, Piotr Sankowski, Anna Zych-Pawlewicz.

29.09.76384
Michał Wrona
Informatyka Teoretyczna
Relational Width of First-Order Expansions of Homogeneous Graphs with Bounded Strict Width

We study the amount of consistency (measured by relational width) needed to solve the CSP parametrized by first-order expansions of countably infinite homogeneous graphs, that is, structures first-order-definable in a homogeneous graph containing the edge relation E, the relation N that holds between distinct vertices not connected by an edge, and equality. We study our problem for structures that additionally have bounded strict width, i.e., establishing local consistency of an instance of the CSP not only decides if there is a solution but also ensures that every solution may be obtained from a locally consistent instance by greedily assigning values to variables, without backtracking. It is known that with every countably infinite homogeneous graph G one can associate a unique minimal finite set S of finite graphs such that a finite graph H is an induced substructure of G if and only if there is no H' in S that embeds into H.

Our main result is that the structures under consideration have relational width exactly (2, L), where L is the maximal size of a forbidden subgraph in S, but not smaller than 3. It implies, in particular, that the CSP parametrized by such structures may be solved in time O(n^m), where n is the number of variables and m is the maximum of L and the arities of relations in the signature, while strict width l ensures time O(n^{l+1}). Furthermore, since L may be arbitrarily large, our result contrasts with the collapse of the relational bounded width hierarchy for finite structures, in which case, if finite, relational width is always at most (2, 3).

Finally, we show that certain maximally tractable constraint languages with a first-order definition in a countably infinite homogeneous graph, whose CSPs are already known to be solvable in polynomial time by other algorithms, may also be solved by establishing consistency, thus providing an alternative algorithm for already studied problems.

04.02.62636
Rafał Byczek, Bruno Pitrus

Approximating Edit Distance Within Constant Factor in Truly Sub-Quadratic Time

Edit distance is one way of measuring how similar two strings are. It counts the minimum number of insertions, deletions, and substitutions of a single character required to transform one word into the other. In this work the authors focus on the computational complexity of approximating the edit distance between a pair of words.

The exact edit distance can be computed with the classical dynamic programming algorithm running in quadratic time. In 2010, Andoni, Krauthgamer and Onak presented an algorithm approximating the edit distance within a polylogarithmic approximation factor that runs in near-linear time. In 2014, Backurs and Indyk showed that an exact algorithm running in time O(n^(2-δ)) would imply a fast algorithm for SAT and the falsity of the Strong Exponential Time Hypothesis (SETH). Moreover, recently in 2017, Abboud and Backurs showed that the existence of a truly subquadratic algorithm approximating the edit distance within a factor of 1 + o(1) would imply the falsity of several conjectures concerning Boolean circuit complexity. This casts doubt on the approximability of the edit distance within a constant factor in truly subquadratic time.
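For concreteness, the classical quadratic-time dynamic program mentioned above, as a minimal sketch:

```python
def edit_distance(a: str, b: str) -> int:
    """Classical O(|a|*|b|) dynamic program; prev[j] holds the edit
    distance between the current prefix of a and b[:j]."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))       # distances from the empty prefix of a
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                          # delete a[i-1]
                         cur[j - 1] + 1,                       # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1])) # substitute
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))  # 3
```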

In this paper, however, the authors answer this question affirmatively, presenting a very interesting algorithm that approximates the edit distance within a constant factor, and proving that its running time is bounded from above by Õ(n^(2−2/7)).

05.08.70908
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
A new variant of the game of cops and robber

We consider the following metric version of the Cops and Robbers game. Let G be a simple graph and let k ≥ 1 be a fixed integer. In the first round, Cop picks a subset of k vertices B = {v_1, v_2, ..., v_k} and then Robber picks a vertex u but keeps it secret. Then Cop asks Robber for a vector D_u(B) = (d_1, d_2, ..., d_k) whose components d_i = d_G(u, v_i), i = 1, 2, ..., k, are the distances from u to the vertices of B. In the second round, Robber may stay at the vertex u or move to any neighbouring vertex, again in secret. Then Cop picks another k vertices and asks, as before, for the corresponding distances to the vertex occupied by Robber, and so on in every subsequent round. The game stops when Cop determines exactly the current position of Robber; in that case, she is the winner. Otherwise (that is, if Cop is not able to localize him in any finite number of rounds), Robber is the winner. Let ζ(G) denote the least integer k for which Cop has a winning strategy. Notice that this parameter is well defined, since the inequality ζ(G) ≤ |V(G)| obviously holds. The aim of the talk is to present results concerning 2-trees, outerplanar graphs, and planar graphs. This is a joint work with Przemysław Gordinowicz, Jarosław Grytczuk, Nicolas Nisse, Joanna Sokół, and Małgorzata Śleszyńska-Nowak.
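A minimal sketch of the distance query Cop receives each round (the vector D_u(B) computed by BFS); illustrative encoding only:

```python
from collections import deque

def distances_from(adj, u):
    """BFS distances from u in an unweighted graph given as an
    adjacency dict: vertex -> iterable of neighbours."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def query(adj, u, B):
    """The vector D_u(B) = (d_G(u, v_1), ..., d_G(u, v_k)) that Robber
    must reveal for Cop's probe set B."""
    dist = distances_from(adj, u)
    return tuple(dist[v] for v in B)

cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}  # C_5
print(query(cycle5, u=0, B=(1, 3)))  # (1, 2)
```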

Bartłomiej Bosek, Przemysław Gordinowicz, Jarosław Grytczuk, Nicolas Nisse, Joanna Sokół, Małgorzata Śleszyńska-Nowak. Centroidal localization game. arXiv, pages 1-15, 2017.

Bartłomiej Bosek, Przemysław Gordinowicz, Jarosław Grytczuk, Nicolas Nisse, Joanna Sokół, Małgorzata Śleszyńska-Nowak. Localization game on geometric and planar graphs. arXiv, pages 1-15, 2017.

31.03.51743
Bartłomiej Bosek
Optymalizacja Kombinatoryczna
Local Dimension is Unbounded for Planar Posets

In 1981, Kelly showed that planar posets can have arbitrarily large dimension. However, the posets in Kelly's example have bounded Boolean dimension and bounded local dimension, leading naturally to the question of whether either Boolean dimension or local dimension is bounded for the class of planar posets. The question for Boolean dimension was first posed by Nešetřil and Pudlák in 1989 and remains unanswered today. The concept of local dimension is quite new, introduced in 2016 by Ueckerdt. In just the last year, researchers have obtained many interesting results concerning Boolean dimension and local dimension, contrasting these parameters with the classic Dushnik-Miller concept of dimension, and establishing links between both parameters and structural graph theory, path-width and tree-width in particular. Here we show that local dimension is not bounded on the class of planar posets. Our proof also shows that the local dimension of a poset is not bounded in terms of the maximum local dimension of its blocks, and it provides an alternative proof of the fact that the local dimension of a poset cannot be bounded in terms of the tree-width of its cover graph, independent of its height. This is a joint work with Jarosław Grytczuk and W.T. Trotter.

Bartłomiej Bosek, Jarosław Grytczuk, William T. Trotter. Local Dimension is Unbounded for Planar Posets. arXiv, pages 1-12, 2017.

Earlier talks

26.05.18778
Bruno Pitrus
Podstawy Informatyki
Linear lambda terms as invariants of rooted trivalent maps by Noam Zeilberger

The main aim of the article is to give a simple and conceptual account of the correspondence (originally described by Bodini, Gardy, and Jacquot) between α-equivalence classes of closed linear lambda terms and isomorphism classes of rooted trivalent maps on compact oriented surfaces without boundary, as an instance of a more general correspondence between linear lambda terms with a context of free variables and rooted trivalent maps with a boundary of free edges. We begin by recalling a familiar diagrammatic representation for linear lambda terms, while at the same time explaining how such diagrams may be read formally as a notation for endomorphisms of a reflexive object in a symmetric monoidal closed (bi)category. From there, the "easy" direction of the correspondence is a simple forgetful operation which erases annotations on the diagram of a linear lambda term to produce a rooted trivalent map. The other direction views linear lambda terms as complete invariants of their underlying rooted trivalent maps, reconstructing the missing information through a Tutte-style topological recurrence on maps with free edges. As an application in combinatorics, we use this analysis to enumerate bridgeless rooted trivalent maps as linear lambda terms containing no closed proper subterms, and conclude by giving a natural reformulation of the Four Color Theorem as a statement about typing in lambda calculus.

27.05.40681
Dawid Pyczek and Jakub Rówiński
Podstawy Informatyki
Asymptotic Density and the Theory of Computability by Carl Jockusch and Paul Schupp
The purpose of this paper is to survey recent work on how classical asymptotic density interacts with the theory of computability. We have tried to make the survey accessible to those who are not specialists in computability theory, and we mainly state results without proof, but we include a few easy proofs to illustrate the flavor of the subject. In complexity theory, classes such as P and NP are defined by using worst-case measures. That is, a problem belongs to the class if there is an algorithm solving it which has a suitable bound on its running time over all instances of the problem. Similarly, in computability theory, a problem is classified as computable if there is a single algorithm which solves all instances of the given problem. There is now a general awareness that worst-case measures may not give a good picture of a particular algorithm or problem, since hard instances may be very sparse. The paradigm case is Dantzig's Simplex Algorithm for linear programming problems. This algorithm runs many hundreds of times every day for scheduling and transportation problems, almost always very quickly. There are clever examples of Klee and Minty showing that there exist instances for which the Simplex Algorithm must take exponential time, but such examples are not encountered in practice. Observations of this type led to the development of average-case complexity by Gurevich and by Levin, independently. There are different approaches to average-case complexity, but they all involve computing the expected value of the running time of an algorithm with respect to some measure on the set of inputs. Thus the problem must be decidable, and one still needs to know the worst-case complexity. Another example of hard instances being sparse is the behavior of algorithms for decision problems in group theory used in computer algebra packages. There is often some kind of an easy "fast check" algorithm which quickly produces a solution for "most" inputs of the problem. This is true even if the worst-case complexity of the particular problem is very high or the problem is even unsolvable. Thus many group-theoretic decision problems have a very large set of inputs where the (usually negative) answer can be obtained easily and quickly.
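As a quick illustration of the central definition (our own gloss, not text from the paper): the asymptotic density of a set A ⊆ ℕ is the limit, if it exists, of |A ∩ {0, ..., n−1}| / n. A hedged sketch:

```python
def density_prefix(member, n):
    """|A ∩ {0,...,n-1}| / n for a set A given by its membership test."""
    return sum(1 for k in range(n) if member(k)) / n

# The multiples of 3 have asymptotic density 1/3: prefix densities converge.
for n in (10, 100, 1000, 10000):
    print(n, density_prefix(lambda k: k % 3 == 0, n))
```
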
11.08.43528
Tomasz Krawczyk
Informatyka Teoretyczna
Representation and coloring of partially ordered sets under conditions of incomplete information

The purpose of my talk is to discuss several problems related to coloring and construction of appropriate representation for partially ordered sets (also posets) and graph classes derived from posets. In these problems, we will assume the following two scenarios:

in the first scenario, an algorithm receives the elements of a poset one after another. For each new element, the algorithm makes an irrevocable decision (assigns a color or extends the current representation) just after the element is presented (algorithms that work under such conditions are called on-line).

in the second scenario, an algorithm receives an incomparability graph of some poset and a representation of some parts of this graph, and its task is to check whether this partial representation can be extended to a representation of the whole graph that is appropriate for the considered class of graphs.

In the context of on-line algorithms, we focus our attention on two problems: partitioning posets into chains, which is a special case of on-line coloring of incomparability graphs (see the First-Fit sketch below), and embedding posets into the d-dimensional space R^d.
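A minimal sketch of the on-line chain partitioning setting via First-Fit, the simplest on-line algorithm for it (illustrative encoding, assuming the poset is given by a comparability test):

```python
def first_fit_chains(elements, leq):
    """On-line chain partitioning by First-Fit: each arriving element
    is placed, irrevocably, into the first existing chain all of whose
    elements are comparable with it; otherwise a new chain is opened."""
    chains = []
    for x in elements:
        for chain in chains:
            if all(leq(x, y) or leq(y, x) for y in chain):
                chain.append(x)
                break
        else:
            chains.append([x])
    return chains

# Divisibility order on {1..10}, elements presented in this order:
divides = lambda a, b: b % a == 0
for c in first_fit_chains(range(1, 11), divides):
    print(c)   # [1, 2, 4, 8], [3, 6], [5, 10], [7], [9]
```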

In the context of extending partial representations problems, we are interested in graph classes whose representations introduce a natural order on vertices of these graphs.

We focus our attention on:

  • function graphs that are intersection graphs of continuous functions [0,1] → R,

  • permutation graphs that are intersection graphs of linear functions [0,1] → R,

  • trapezoid graphs that are intersection graphs of trapezoids spanned between two fixed parallel lines containing the bases of these trapezoids.

31.12.73644
Marcin Pilipczuk
University of Warsaw
Informatyka Teoretyczna
Subexponential Parameterized Algorithms for Planar Graphs, Apex-Minor-Free Graphs and Graphs of Polynomial Growth via Low Treewidth Pattern Covering

We prove the following theorem. Given a planar graph G and an integer k, it is possible in polynomial time to randomly sample a subset A of vertices of G with the following properties:

1) A induces a subgraph of G of treewidth O(√k log k), and

2) for every connected subgraph H of G on at most k vertices, the probability that A covers the whole vertex set of H is at least (2^{O(√k log² k)} · n^{O(1)})^{−1}, where n is the number of vertices of G.

Together with standard dynamic programming techniques for graphs of bounded treewidth, this result gives a versatile technique for obtaining (randomized) subexponential parameterized algorithms for problems on planar graphs, usually with running time bound 2^{O(√k log² k)} · n^{O(1)}. The technique can be applied to problems expressible as searching for a small, connected pattern with a prescribed property in a large host graph; examples of such problems include Directed k-Path, Weighted k-Path, Vertex Cover Local Search, and Subgraph Isomorphism, among others. Up to this point, it was open whether these problems can be solved in subexponential parameterized time on planar graphs, because they are not amenable to the classic technique of bidimensionality. Furthermore, all our results hold in fact on any class of graphs that exclude a fixed apex graph as a minor, in particular on graphs embeddable in any fixed surface. We also provide a similar statement for graph classes of polynomial growth.

In the talk I will first focus on the background and motivation, and then highlight the main ideas of the proof by sketching the proof for the case of graph classes of polynomial growth. Based on joint work with Fedor Fomin, Daniel Lokshtanov, Dániel Marx, Michał Pilipczuk, and Saket Saurabh: http://arxiv.org/abs/1604.05999 and http://arxiv.org/abs/1610.07778.

04.08.84482
Yauheni Ananchuk
Podstawy Informatyki
Algebraic Foundations for Qualitative Calculi and Networks by Robin Hirsch, Marcel Jackson, and Tomasz Kowalski
Binary constraint problems have traditionally been considered as network satisfaction problems over some relation algebra. A constraint network is satisfiable if its nodes can be mapped into some representation of the relation algebra in such a way that the constraints are preserved. A qualitative representation is like an ordinary representation, but instead of requiring (a ; b) = a | b, as we do for ordinary representations, we only require that (a ; b) ⊇ a | b. A constraint network is qualitatively satisfiable if its nodes can be mapped to elements of a qualitative representation, preserving the constraints. If a constraint network is satisfiable then it is clearly qualitatively satisfiable, but the converse can fail.
However, for a wide range of relation algebras including the point algebra, the Allen Interval Algebra, RCC8 and many others, a network is satisfiable if and only if it is qualitatively satisfiable. Unlike ordinary composition, the weak composition arising from qualitative representations need not be associative, so we can generalise by considering network satisfaction problems over non-associative algebras. We prove that, computationally, qualitative representations have many advantages over ordinary representations: whereas many finite relation algebras have only infinite representations, every finite qualitatively representable algebra has a finite qualitative representation; the representability problem for (the atom structures of) finite non-associative algebras is NP-complete; the network satisfaction problem over a finite qualitatively representable algebra is always in NP; and the validity of equations over qualitative representations is co-NP-complete. On the other hand, we prove that there is no finite axiomatisation of the class of qualitatively representable algebras.
23.11.2016
Piotr Danilewski
Informatyka Teoretyczna
Functional Code Building

 
A typical language translator uses a parser to convert an input string into some internal representation, usually in a form of an AST. The AST is then analyzed and transformed in passes. In the final pass, the AST is converted into the output, be it a machine code or source code of another language.
 
However, if we try to combine different languages, e.g. for a purpose of a multi-domain project, we may have a problem: each language uses different kinds of AST nodes, has different assumptions on the AST structure and features different pass algorithms. Settling for the common ground by limiting the node types, assumptions, and algorithms limits how the languages may reason about their code. Any high-level information is lost in such representation and high-level transformations cannot be defined.
 
Working on the common AST is similar to unstructured programming: the AST acts as a global machine state. Any change to the state, e.g. one introduced by a new language, may inadvertently affect the behavior of the existing languages. In regular programming we have moved away from unstructured programming towards higher abstraction levels, e.g. through functional programming. If this can be done for regular programming, can it be done for ASTs and code builders as well?
 
Our solution no longer treats an AST node as an object of a fixed type. Instead, a node is a function whose body entirely describes the node's behavior. Even the connections between nodes are represented internally by the function, in the form of continuations.
The passes over the AST are replaced by a single invocation of these functions. The node functions may contain code for multiple stages of code analysis. In order to reduce the run time, these additional stages are scheduled through dynamic staging.
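A hedged, minimal illustration of this idea (ours, not the actual implementation): nodes as closures that receive a continuation, so that linking nodes is ordinary function composition and a single invocation replaces a tree-walking pass.

```python
# Hypothetical mini-example: each "node" is a function taking an
# environment and a continuation; wiring nodes together happens in
# the closures, and "running a pass" is one invocation of the root.

def lit(value):
    return lambda env, k: k(value)

def var(name):
    return lambda env, k: k(env[name])

def add(left, right):
    # The links to sub-nodes live in the closure; control is threaded
    # through continuations instead of an external tree traversal.
    return lambda env, k: left(env, lambda a: right(env, lambda b: k(a + b)))

# (x + 1) + 2, evaluated by a single invocation of the root node:
expr = add(add(var("x"), lit(1)), lit(2))
print(expr({"x": 5}, lambda result: result))  # 8
```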
 
This gives the power of abstraction to code building. Even nodes coming from different languages may interact, as long as they understand the arguments passed to them.
 
This major change on how a code is represented requires changes in how we think about common basic operations during language interpretation:
- how do we link two nodes/functions together?
- how do we create control flow structures?
- how do we perform name lookups, even across different languages?
- how do we optimize the code so that the AST layer disappears when generating code?
- what language-specific optimizations can be performed and how do we specify them?
 
We will address these questions in the upcoming talk.

05.10.2016
Tomasz Kisielewski
Podstawy Informatyki
Proving properties of programs within their language

I will present an initial version of my research program, whose main goal is to enable proving properties about programs within the language they are written in. My motivation is based on a specific solution to the open-source prisoner's dilemma, where the players are programs with access to the other player's source code. After explaining the motivation I will shortly present the current state of automatic proving of programs' properties, with emphasis on strongly typed functional languages. Finally, I will introduce my long-term goals and the main challenges one should expect to face when implementing an effective function for proving facts about programs.
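A toy illustration of the open-source dilemma mentioned above (entirely hypothetical, not from the talk): players are functions that receive the opponent's source code; a crude syntactic stand-in for "provably cooperates" is comparing sources for equality.

```python
import inspect

def mirror_bot(opponent_source: str) -> str:
    """Cooperate exactly when the opponent is syntactically identical
    to myself; real proposals replace string equality by proof search
    over the opponent's code."""
    my_source = inspect.getsource(mirror_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    return "D"

src = inspect.getsource
print(mirror_bot(src(mirror_bot)))  # C: recognizes itself, cooperates
print(mirror_bot(src(defect_bot)))  # D: defects against defect_bot
```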

08.06.2016
Kamil Pietruszka
Podstawy Informatyki
Regular Combinators for String Transformations by Rajeev Alur, Adam Freilich, and Mukund Raghothaman

We focus on (partial) functions that map input strings to a monoid such as the set of integers with addition and the set of output strings with concatenation. The notion of regularity for such functions has been defined using two-way finite-state transducers, (one-way) cost register automata, and MSO-definable graph transformations. In this paper, we give an algebraic and machine-independent characterization of this class analogous to the definition of regular languages by regular expressions. When the monoid is commutative, we prove that every regular function can be constructed from constant functions using the combinators of choice, split sum, and iterated sum, that are analogs of union, concatenation, and Kleene *, respectively, but enforce unique (or unambiguous) parsing. Our main result is for the general case of non-commutative monoids, which is of particular interest for capturing regular string-to-string transformations for document processing. We prove that the following additional combinators suffice for constructing all regular functions:

(1) the left-additive versions of split sum and iterated sum, which allow transformations such as string reversal;

(2) sum of functions, which allows transformations such as copying of strings; and

(3) function composition, or alternatively, a new concept of chained sum, which allows output values from adjacent blocks to mix.
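A hedged Python sketch of the base combinators for the commutative case named above (choice, split sum, iterated sum), acting on partial functions from strings into the monoid (ℤ, +); the encoding and names are ours, and unique parsing is enforced by rejecting ambiguous splits:

```python
# Partial functions are modelled as Python functions returning an int
# or None (undefined).

def choice(f, g):
    """f or g: defined on the union of domains (assumed disjoint)."""
    return lambda w: f(w) if f(w) is not None else g(w)

def split_sum(f, g):
    """Defined iff w has exactly one split w = uv with f(u), g(v) defined."""
    def h(w):
        results = [f(w[:i]) + g(w[i:]) for i in range(len(w) + 1)
                   if f(w[:i]) is not None and g(w[i:]) is not None]
        return results[0] if len(results) == 1 else None
    return h

def iterated_sum(f):
    """Kleene-* analogue: if w factors *uniquely* into nonempty blocks
    on which f is defined, return the sum of f over the blocks."""
    def factorizations(w):
        if w == "":
            return [0]            # exactly one factorization: empty, sum 0
        sums = []
        for i in range(1, len(w) + 1):
            head = f(w[:i])
            if head is not None:
                sums += [head + rest for rest in factorizations(w[i:])]
        return sums
    def h(w):
        sums = factorizations(w)
        return sums[0] if len(sums) == 1 else None
    return h

# Example: map each one-letter block to a count, then sum over blocks.
one_a = lambda w: 1 if w == "a" else None
one_b = lambda w: 0 if w == "b" else None
count_a = iterated_sum(choice(one_a, one_b))
print(count_a("abba"))  # 2
```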

27.04.2016
Michał Zieliński
Podstawy Informatyki
Beta Reduction is Invariant, Indeed by Beniamino Accattoli and Ugo Dal Lago

Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is lambda calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda term? This paper presents the first complete positive answer to this longstanding problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of lambda calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating lambda calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modelled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that LSC is invariant with respect to the lambda calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda calculus. We solve such an impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.

13.01.2016
Piotr Bejda
Optymalizacja Kombinatoryczna
Perfect matchings in O(n log n) time in regular bipartite graphs

In this paper we consider the well-studied problem of finding a perfect matching in a d-regular bipartite graph on 2n nodes with m = nd edges. The best-known algorithm for general bipartite graphs (due to Hopcroft and Karp) takes time O(m√n). In regular bipartite graphs, however, a matching is known to be computable in O(m) time (due to Cole, Ost and Schirra). In a recent line of work by Goel, Kapralov and Khanna, the O(m) time algorithm was improved first to Õ(min(m, n^{2.5}/d)) and then to Õ(min(m, n²/d)). It was also shown that the latter algorithm is optimal up to polylogarithmic factors among all algorithms that use non-adaptive uniform sampling to reduce the size of the graph as a first step.

In this paper, we give a randomized algorithm that finds a perfect matching in a d-regular graph and runs in O(n log n) time (both in expectation and with high probability). The algorithm performs an appropriately truncated random walk on a modified graph to successively find augmenting paths. Our algorithm may be viewed as using adaptive uniform sampling, and is thus able to bypass the limitations of (non-adaptive) uniform sampling established in earlier work. We also show that randomization is crucial for obtaining o(nd) time algorithms by establishing an Ω(nd) lower bound for any deterministic algorithm. Our techniques also give an algorithm that successively finds a matching in the support of a doubly stochastic matrix in expected O(n log² n) time, with O(m) pre-processing time; this gives a simple O(m + mn log² n) time algorithm for finding the Birkhoff-von Neumann decomposition of a doubly stochastic matrix.

A. Goel, M. Kapralov, and S. Khanna, Perfect matchings in O(n log n) time in regular bipartite graphs

28.10.2015
Ariel Gabizon
Technion, Israel
Informatyka Teoretyczna
Quasi-Linear Size Zero Knowledge from Linear-Algebraic PCPs

A probabilistically checkable proof (PCP) enables, e.g., checking the satisfiability of a 3-SAT formula ɸ, while only examining a constant number of locations in the proof. A long line of research led to the construction of PCPs with length that is quasi-linear in n := |ɸ|.


In a zero knowledge PCP with knowledge bound K, reading any K symbols of the proof reveals no additional information besides the validity of the statement; e.g., no information is revealed about the assignment satisfying ɸ. Kilian, Petrank, and Tardos gave a transformation from any PCP into a ZK-PCP with knowledge bound K, for any desired K. A drawback of their transformation is that it requires multiplying the proof length by a factor of (at least) K^6.


In this work, we show how to construct PCPs that are zero knowledge for knowledge bound K and of length quasilinear in K and n, provided that the prover is forced to write the proof in two rounds. In this model, which we call duplex PCP (DPCP), the verifier gets an oracle string from the prover, then replies with some randomness, and then gets another oracle string from the prover, and it can make up to K queries to both oracles.


Deviating from previous works, our constructions do not invoke the PCP Theorem as a blackbox but rely on algebraic properties of a specific family of PCPs. We show that if the PCP has a certain linear algebraic structure (which many constructions have, including [BFLS91,ALMSS98,BS08]) we can add the zero knowledge property at virtually no cost while introducing only minor modifications in the algorithms of the prover and verifier. We believe that our linear-algebraic characterization of PCPs may be of independent interest, as it gives a simplified way to view previous well-studied PCP constructions.


Joint work with Eli Ben-Sasson, Alessandro Chiesa and Madars Virza
27.05.2015
Paweł Zegartowski
Informatyka Teoretyczna
Cache-Oblivious Hashing
The hash table, especially its external memory version, is one of the most important index structures in large databases. Assuming a truly random hash function, it is known that in a standard external hash table with block size b, searching for a particular key only takes expected average t_q=1+1/2^Ω(b) disk accesses for any load factor α bounded away from 1. However, such near-perfect performance is achieved only when b is known and the hash table is particularly tuned for working with such a blocking. In this paper we study if it is possible to build a cache-oblivious hash table that works well with any blocking. Such a hash table will automatically perform well across all levels of the memory hierarchy and does not need any hardware-specific tuning, an important feature in autonomous databases.

We first show that linear probing, a classical collision resolution strategy for hash tables, can be easily made cache-oblivious, but it only achieves t_q = 1 + Θ(α/b) even if a truly random hash function is used. Then we demonstrate that the block probing algorithm (Pagh et al. in SIAM Rev. 53(3):547–558, 2011) achieves t_q = 1 + 1/2^{Ω(b)}, thus matching the cache-aware bound, if the following two conditions hold: (a) b is a power of 2; and (b) every block starts at a memory address divisible by b. Note that the two conditions hold on a real machine, although they are not stated in the cache-oblivious model. Interestingly, we also show that neither condition is dispensable: if either of them is removed, the best obtainable bound is t_q = 1 + O(α/b), which is exactly what linear probing achieves.
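For reference, a generic textbook linear-probing table (not the paper's cache-oblivious variant); a minimal sketch without resizing:

```python
class LinearProbingTable:
    """Textbook linear probing: on collision, scan forward cyclically
    to the next empty slot; lookups scan until the key or a hole."""
    def __init__(self, capacity=16):
        self.slots = [None] * capacity

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # contiguous scan: block-friendly
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key, default=None):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot else default

t = LinearProbingTable()
t.put("x", 1); t.put("y", 2)
print(t.get("x"), t.get("z", "miss"))  # 1 miss
```
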


References:
Rasmus Pagh, Zhewei Wei, Ke Yi, Qin Zhang, Cache-Oblivious Hashing, Algorithmica (2014) 69:864–883
15.04.2015
Maciej Bendkowski
Informatyka Teoretyczna
Contention Resolution under Selfishness
In many communications settings, such as wired and wireless local-area networks, when multiple users attempt to access a communication channel at the same time, a conflict results and none of the communications are successful. Contention resolution is the study of distributed transmission and retransmission protocols designed to maximize notions of utility such as channel utilization in the face of blocking communications.

An additional issue to be considered in the design of such protocols is that selfish users may have incentive to deviate from the prescribed behavior, if another transmission strategy increases their utility. The work of Fiat et al. (in SODA'07, pp. 179–188, SIAM, Philadelphia, 2007) addresses this issue by constructing an asymptotically optimal incentive-compatible protocol. However, their protocol assumes the cost of any single transmission is zero, and the protocol completely collapses under non-zero transmission costs.

In this paper we treat the case of non-zero transmission cost c. We present asymptotically optimal contention resolution protocols that are robust to selfish users, in two different channel feedback models. Our main result is in the Collision Multiplicity Feedback model, where after each time slot, the number of attempted transmissions is returned as feedback to the users. In this setting, we give a protocol that has expected cost Θ(n + c·log n) and is in o(1)-equilibrium, where n is the number of users.


References:
George Christodoulou, Katrina Ligett, Evangelia Pyrga, Contention Resolution under Selfishness, Algorithmica (2014) 70:675–693
11.03.2015
Piotr Danilewski
Universität des Saarlandes
Informatyka Teoretyczna
AnyDSL - a host for any language
In a multi-domain project, there is no single programming language that can be used to program everything. Instead, a combination of general-purpose and domain-specific languages (DSLs) are used. Unfortunately, many domains lack a good, representative DSL, due to domain diversity and difficulty of creating a new compiler. Moreover, the communication across the languages is limited, often requiring the data to be serialized, and reducing the quality of optimization and compile-time verification.

In our talk we present our approach to solving these problems by introducing a new metamorphic language, AnyDSL. The parsing and execution of AnyDSL can be interleaved, and the latter can influence the former, e.g. by introducing a new grammar with which parsing should resume. Regardless of the language the source is written in, all code is translated into a low-level, functional representation in continuation-passing style (AIR). AIR serves as a "meeting point", permitting inter-DSL communication. AIR can be interpreted or compiled to LLVM bitcode and then to machine code.

New grammars are also defined within AnyDSL. Unlike typical parser generators, grammar productions and actions are given as functions. AIR supports dynamic staging, a flexible way to define partial evaluation strategies. With it, the overhead of having multiple layers of languages can be resolved early. It also allows the DSL designer to specify domain-specific optimizations. After all those transformations, AIR can be compiled to machine code that is efficient, with performance comparable to programs written in general-purpose languages.


14.01.2015
Andrzej Dorobisz
Informatyka Teoretyczna
Scheduling parallel jobs to minimize the makespan
We consider the NP-hard problem of scheduling parallel jobs with release dates on identical parallel machines to minimize the makespan. A parallel job requires simultaneously a prespecified, job-dependent number of machines when being processed. We prove that the makespan of any nonpreemptive list-schedule is within a factor of 2 of the optimal preemptive makespan. This gives the best-known approximation algorithms for both the preemptive and the nonpreemptive variant of the problem. We also show that no list-scheduling algorithm can achieve a better performance guarantee than 2 for the nonpreemptive problem, no matter which priority list is chosen.
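A minimal, hedged sketch of nonpreemptive list scheduling for parallel jobs in this model (release dates omitted for brevity; the encoding and helper are illustrative, assuming each job needs at most the total number of machines):

```python
import heapq

def list_schedule(jobs, machines):
    """Greedy list scheduling: whenever machines free up, scan the
    priority list and start every not-yet-run job whose machine
    requirement fits. jobs = [(name, length, machines_needed), ...]
    in list (priority) order; returns (makespan, start_times)."""
    free, now, events, starts = machines, 0.0, [], {}
    pending = list(jobs)
    while pending or events:
        started = True
        while started:                     # start all jobs that fit now
            started = False
            for job in list(pending):
                name, length, need = job
                if need <= free:
                    pending.remove(job)
                    free -= need
                    starts[name] = now
                    heapq.heappush(events, (now + length, need))
                    started = True
        end, need = heapq.heappop(events)  # advance to next completion
        now, free = end, free + need
    return now, starts

# Three machines; job 'b' needs two of them simultaneously.
makespan, starts = list_schedule([("a", 2, 1), ("b", 3, 2), ("c", 2, 3)], 3)
print(makespan, starts)  # 5.0 {'a': 0.0, 'b': 0.0, 'c': 3.0}
```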

List-scheduling also works in the online setting where jobs arrive over time and the length of a job becomes known only when it completes; it therefore yields a deterministic online algorithm with competitive ratio 2 as well. In addition, we consider a different online model in which jobs arrive one by one and need to be scheduled before the next job becomes known. We show that no list-scheduling algorithm has a constant competitive ratio. Still, we present the first online algorithm for scheduling parallel jobs with a constant competitive ratio in this context. We also prove a new information-theoretic lower bound of 2.25 for the competitive ratio of any deterministic online algorithm for this model. Moreover, we show that 6/5 is a lower bound for the competitive ratio of any deterministic online algorithm for the preemptive version of the model with jobs arriving over time.


References:
Berit Johannes, Scheduling parallel jobs to minimize the makespan, Journal of Scheduling 9 (2006), 433–452
23.04.2014
Maciej Gazda
Eindhoven University of Technology
Podstawy Informatyki
Zielonka's Recursive Algorithm for Parity Games
Parity games are infinite-duration, two-player games played on a finite directed graph. Vertices of the graph are labelled with natural numbers (called priorities) and the winning condition is determined by the parity of the most significant (typically maximal) priority encountered infinitely often. The games are memoryless determined; moreover, the problem of finding the winning partition of a given game belongs to both the NP and coNP complexity classes. On the other hand, no polynomial algorithm solving parity games has been found (the best one, due to Jurdziński, Paterson and Zwick, has subexponential running time with √n in the exponent). In my talk, I will give a brief introduction to this intriguing computational problem, and then focus on one of the earliest and simplest solving algorithms, namely Zielonka's recursive algorithm (sketched below). Even though its worst-case running time is not particularly impressive compared to more sophisticated solvers, the experimental study of Friedmann and Lange has shown that in practice it works very well. In order to understand why this is the case, in our recent work with Tim Willemse we have analysed the performance of two variants of the algorithm (standard and optimised) on certain subclasses of parity games (dull, weak, and solitaire). Moreover, we have provided a tighter lower bound on its worst-case running time.
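A compact sketch of Zielonka's recursive algorithm in its standard textbook form (the game encoding is illustrative; it assumes a total game, where every vertex has at least one successor, a property the recursion preserves):

```python
def attractor(game, player, target):
    """Vertices from which `player` can force the token into `target`.
    game: vertex -> (owner, priority, successors); only successors
    inside `game` count."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v, (owner, _, succ) in game.items():
            if v in attr:
                continue
            inside = [w for w in succ if w in game]
            if (owner == player and any(w in attr for w in inside)) or \
               (owner != player and inside and all(w in attr for w in inside)):
                attr.add(v)
                changed = True
    return attr

def zielonka(game):
    """Returns [W0, W1], the winning regions of players 0 and 1."""
    if not game:
        return [set(), set()]
    p = max(pr for (_, pr, _) in game.values())   # top priority
    i = p % 2                                     # player who wins on p
    Z = {v for v, (_, pr, _) in game.items() if pr == p}
    A = attractor(game, i, Z)
    W = zielonka({v: d for v, d in game.items() if v not in A})
    if not W[1 - i]:                 # opponent wins nowhere in the subgame
        W[i] |= A
        return W
    B = attractor(game, 1 - i, W[1 - i])
    W2 = zielonka({v: d for v, d in game.items() if v not in B})
    W2[1 - i] |= B
    return W2

# a (owner 0, priority 1) <-> b (owner 1, priority 0): the only play
# alternates a, b forever, so priority 1 recurs and player 1 wins everywhere.
game = {"a": (0, 1, ["b"]), "b": (1, 0, ["a"])}
print(zielonka(game))  # [set(), {'a', 'b'}]
```
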
18.12.2013
Bartłomiej Ryniec
Informatyka Teoretyczna
Preprocess, Set, Query!
Thorup and Zwick (J. ACM 52(1):1–24, 2005 and STOC'01) in their seminal work introduced the notion of distance oracles. Given an n-vertex weighted undirected graph with m edges, they show that for any integer k ≥ 1 it is possible to preprocess the graph in Õ(m n^{1/k}) time and generate a compact data structure of size O(k n^{1+1/k}). For each pair of vertices, it is then possible to retrieve an estimated distance with multiplicative stretch 2k−1 in O(k) time. For k=2 this gives an oracle of O(n^{1.5}) size that produces in constant time estimated distances with stretch 3.
Recently, Pătraşcu and Roditty (in: Proc. of 51st FOCS, 2010) broke the theoretical status quo in the field of distance oracles and obtained a distance oracle for sparse unweighted graphs of O(n^{5/3}) size that produces in constant time estimated distances with stretch 2.
In this paper we show that it is possible to break the stretch 2 barrier at the price of non-constant query time in unweighted undirected graphs. We present a data structure that produces estimated distances with stretch 1+ε. The size of the data structure is O(n·m^{1−ε'}) and the query time is Õ(m^{1−ε'}). Using it for sparse unweighted graphs, we can get a data structure of size O(n^{1.87}) that can supply in O(n^{0.87}) time estimated distances with multiplicative stretch 1.75.
References:
Ely Porat, Liam Roditty, Preprocess, Set, Query! Algorithmica (2013) 67:516–528
30.10.2013
Igor Adamski
Informatyka Teoretyczna
Linked Dynamic Tries with Applications to LZ-Compression in Sublinear Time and Space
The dynamic trie is a fundamental data structure with applications in many areas of computer science. This paper proposes a new technique for maintaining a dynamic trie T of size at most 2^w nodes under the unit-cost RAM model with a fixed word size w. It is based on the idea of partitioning T into a set of linked small tries, each of which can be maintained efficiently. Our method is not only space-efficient, but also allows the longest common prefix between any query pattern P and the strings currently stored in T to be computed in o(|P|) time for small alphabets, and allows any leaf to be inserted into or deleted from T in o(log |T|) time. To demonstrate the usefulness of our new data structure, we apply it to LZ-compression. Significantly, we obtain the first algorithm for generating the LZ78 encoding of a given string of length n over an alphabet of size σ in sublinear (o(n)) time and sublinear (o(n log σ) bits) working space for small alphabets (σ = 2^{o(log n · log log log n / (log log n)²)}). Moreover, the working space for our new algorithm is asymptotically less than or equal to the space for storing the output compressed text, regardless of the alphabet size.
References:
Jesper Jansson, Kunihiko Sadakane, Wing-Kin Sung, Linked Dynamic Tries with Applications to LZ-Compression in Sublinear Time and Space, Algorithmica DOI 10.1007/s00453-013-9836-6
14.10.2013
W. Hugh Woodin
UC Berkeley
Informatyka Teoretyczna
The Continuum Hypothesis and the search for Mathematical Infinity
New place and date: Oct 14, 2013, 16:00, Polska Akademia Umiejętności, Kraków, Sławkowska 17
By the middle of the 20th century the problem of the Continuum Hypothesis was widely regarded as one of the prominent problems in all of Mathematics. Remarkably, this question cannot be solved on the basis of the basic principles (these are the ZFC axioms) on which the entire subject is based. This discovery of Cohen, 50 years ago this summer, is arguably one of the major discoveries in the history of modern Mathematics.

But does this mean that the question of the Continuum Hypothesis has no answer? Any "solution" must involve the adoption of new principles but which principles should one adopt? Alternatively, perhaps the correct assessment of Cohen's discovery is that the entire enterprise of the mathematical study of Infinity is ultimately doomed because the entire subject is simply a human fiction without any true foundation. In this case, any "solution" to the Continuum Hypothesis is just an arbitrary (human) choice.

Over the last few years a scenario has emerged by which with the addition of a single new principle not only can the problem of the Continuum Hypothesis be resolved, but so can all of the other problems  which Cohen's method has been used to show are also unsolvable (and there have been many such problems).  Moreover the extension of the basic (ZFC) principles by this new principle would be seen as an absolutely compelling option based on the fundamental intuitions on which the entire mathematical conception of Infinity is founded.

However, this scenario critically depends upon the outcome of a single conjecture. If this conjecture is false then the entire approach, which is the culmination of nearly 50 years of research, fails or at the very least  has to be significantly revised.

Thus the mathematical study of Infinity has arguably reached a tipping point. But which way will it tip?  

08.05.2013
Sebastian Syta
Informatyka Teoretyczna
A Sublinear Time Algorithm for PageRank Computations
In a network, identifying all vertices whose PageRank is more than a given threshold value Δ is a basic problem that has arisen in Web and social network analyses. In this paper, we develop a nearly optimal, sublinear time, randomized algorithm for a close variant of this problem. When given a directed network G = (V, E), a threshold value Δ, and a positive constant c > 3, with probability 1 − o(1), our algorithm will return a subset S ⊆ V with the property that S contains all vertices of PageRank at least Δ and no vertex with PageRank less than Δ/c. The running time of our algorithm is always Õ(n/Δ). In addition, our algorithm can be efficiently implemented in various network access models including the Jump and Crawl query model recently studied by Brautbar and Kearns, making it suitable for dealing with large social and information networks. As part of our analysis, we show that any algorithm for solving this problem must have expected time complexity of Ω(n/Δ). Thus, our algorithm is optimal up to logarithmic factors. Our algorithm (for identifying vertices with significant PageRank) applies a multi-scale sampling scheme that uses a fast personalized PageRank estimator as its main subroutine. For that, we develop a new local randomized algorithm for approximating personalized PageRank which is more robust than the earlier ones developed by Jeh and Widom and by Andersen, Chung, and Lang.
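For illustration, a standard Monte Carlo estimator of personalized PageRank, the general kind of local random-walk subroutine such algorithms build on (a textbook sketch, not the authors' algorithm):

```python
import random

def ppr_estimate(adj, source, alpha=0.15, walks=10000):
    """Estimate the personalized PageRank vector of `source`: simulate
    walks that stop with probability alpha at each step and otherwise
    move to a uniformly random out-neighbour; the distribution of walk
    endpoints approximates personalized PageRank with teleport alpha."""
    counts = {}
    for _ in range(walks):
        v = source
        while random.random() > alpha:
            out = adj.get(v)
            if not out:              # dangling vertex: end the walk here
                break
            v = random.choice(out)
        counts[v] = counts.get(v, 0) + 1
    return {v: c / walks for v, c in counts.items()}

adj = {0: [1, 2], 1: [2], 2: [0]}
print(ppr_estimate(adj, source=0))  # mass concentrated near vertex 0
```
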
References:
Christian Borgs, Michael Brautbar, Jennifer Chayes, and Shang-Hua Teng, A Sublinear Time Algorithm for PageRank Computations
17.04.2013
Tomasz Kołodziejski
Informatyka Teoretyczna
8/5 Approximation for TSP Paths
We prove the approximation ratio 8/5 for the metric s-t-Path-TSP problem, and more generally for shortest connected T-joins. The algorithm that achieves this ratio is the simple "Best of Many" version of Christofides' algorithm (1976), suggested by An, Kleinberg and Shmoys (2012), which consists in determining the best Christofides s-t-tour out of those constructed from a family F_{>0} of trees having a convex combination dominated by an optimal solution x^* of the fractional relaxation. They give the approximation guarantee (√5+1)/2 for such an s-t-tour, which is the first improvement after the 5/3 guarantee of Hoogeveen's Christofides-type algorithm (1991). Cheriyan, Friggstad and Gao (2012) extended this result to a 13/8-approximation of shortest connected T-joins, for |T| ≥ 4.

The ratio 8/5 is proved by simplifying and improving the approach of An, Kleinberg and Shmoys that consists in completing x^*/2 in order to dominate the cost of "parity correction" for spanning trees. We partition the edge-set of each spanning tree in F_{>0} into an s-t-path (or more generally, into a T-join) and its complement, which induces a decomposition of x^*. This decomposition can be refined and then efficiently used to complete x^*/2 without using linear programming or particular properties of T, but by adding to each cut deficient for x^*/2 an individually tailored, explicitly given vector, inherent in the problem.

A simple example shows that the Best of Many Christofides algorithm may not find a shorter s-t-tour than 3/2 times the incidentally common optima of the problem and of its fractional relaxation.


References:
András Sebő, Eight-Fifth Approximation for TSP Paths, Integer Programming and Combinatorial Optimization, LNCS 7801, 2013, pp. 362–374
09.01.2013
Jakub Adamek
Informatyka Teoretyczna
A Distributed O(1)-Approximation Algorithm for the Uniform Facility Location Problem
We investigate a metric facility location problem in a distributed setting. In this problem, we assume that each point is a client as well as a potential location for a facility and that the opening costs for the facilities and the demands of the clients are uniform. The goal is to open a subset of the input points as facilities such that the accumulated cost for the whole point set, consisting of the opening costs for the facilities and the connection costs for the clients, is minimized.
We present a randomized distributed algorithm that computes in expectation an O(1)-approximate solution to the metric facility location problem described above. Our algorithm works in a synchronous message passing model, where each point is an autonomous computational entity that has its own local memory and that communicates with the other entities by message passing. We assume that each entity knows the distance to all the other entities, but does not know any of the other pairwise distances. Our algorithm uses three rounds of all-to-all communication with message sizes bounded by O(log n) bits, where n is the number of input points.
We extend our distributed algorithm to constant powers of metric spaces. For a metric exponent l ≥ 1, we obtain a randomized O(1)-approximation algorithm that uses three rounds of all-to-all communication with message sizes bounded by O(log n) bits.
References:
Joachim Gehweiler, Christiane Lammersen, Christian Sohler, A Distributed O(1)-Approximation Algorithm for the Uniform Facility Location Problem, Algorithmica DOI 10.1007/s00453-012-9690-y
19.12.2012
Radoslaw Smyrek
Informatyka Teoretyczna
Recognizing d-Interval Graphs and d-Track Interval Graphs
A d-interval is the union of d disjoint intervals on the real line. A d-track interval is the union of d disjoint intervals on d disjoint parallel lines called tracks, one interval on each track. As generalizations of the ubiquitous interval graphs, d-interval graphs and d-track interval graphs have wide applications, traditionally to scheduling and resource allocation, and more recently to bioinformatics. In this paper, we prove that recognizing d-track interval graphs is NP-complete for any constant d≥2. This confirms a conjecture of Gyárfás and West in 1995. Previously only the complexity of the case d=2 was known. Our proof in fact implies that several restricted variants of this graph recognition problem, i.e., recognizing balanced d-track interval graphs, unit d-track interval graphs, and (2,..., 2) d-track interval graphs, are all NP-complete. This partially answers another question recently raised by Gambette and Vialette. We also prove that recognizing depth-two 2-track interval graphs is NP-complete, even for the unit case. In sharp contrast, we present a simple linear-time algorithm for recognizing depth-two unit d-interval graphs. These and other results of ours give partial answers to a question of West and Shmoys in 1984 and a similar question of Gyárfás and West in 1995. Finally, we give the first bounds on the track number and the unit track number of a graph in terms of the number of vertices, the number of edges, and the maximum degree, and link the two numbers to the classical concepts of arboricity.  
References:
Minghui Jiang: Recognizing d-Interval Graphs and d-Track Interval Graphs, Algorithmica DOI 10.1007/s00453-012-9651-5
05.12.2012
Michał Masłowski
Informatyka Teoretyczna
On the Exact Complexity of Evaluating Quantified k-CNF
We relate the exponential complexities 2^{s(k)n} of k-SAT and the exponential complexity 2^{s(EVAL(Π_2 3-CNF))n} of EVAL(Π_2 3-CNF) (the problem of evaluating quantified formulas of the form ∀x∃yF(x,y) where F is a 3-CNF in x-variables and y-variables) and show that s(∞) (the limit of s(k) as k→∞) is at most s(EVAL(Π_2 3-CNF)). Therefore, if we assume the Strong Exponential-Time Hypothesis, then there is no algorithm for EVAL(Π_2 3-CNF) running in time 2^{cn} with c<1.
On the other hand, a nontrivial exponential-time algorithm for EVAL(Π_2 3-CNF) would provide a k-SAT solver with better exponent than all current algorithms for sufficiently large k. We also show several syntactic restrictions of the evaluation problem EVAL(Π_2 3-CNF) have nontrivial algorithms, and provide strong evidence that the hardest cases of EVAL(Π_2 3-CNF) must have a mixture of clauses of two types: one universally quantified literal and two existentially quantified literals, or only existentially quantified literals. Moreover, the hardest cases must have at least n−o(n) universally quantified variables, and hence only o(n) existentially quantified variables. Our proofs involve the construction of efficient minimally unsatisfiable k-CNFs and the application of the Sparsification lemma.  
References:
Chris Calabro, Russell Impagliazzo, Ramamohan Paturi, On the Exact Complexity of Evaluating Quantified k-CNF, Algorithmica DOI 10.1007/s00453-012-9648-0
16.05.2012
Kasper Kopeć
Informatyka Teoretyczna
Minimum Weight Cycles and Triangles: Equivalences and Algorithms
We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected n-node graph with edge weights in {1,...,M} or in a directed n-node graph with edge weights in {−M,...,M} and no negative cycles can be efficiently reduced to finding a minimum weight triangle in a Θ(n)-node undirected graph with weights in {1,...,O(M)}. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be "encoded" using only three edges within roughly the same weight interval! This resolves a longstanding open problem posed by Itai and Rodeh [SIAM J. Computing 1978 and STOC'77].
A direct consequence of our efficient reductions are O(Mn^ω)-time algorithms using fast matrix multiplication (FMM) for finding a minimum weight cycle in both undirected graphs with integral weights from the interval [1,M] and directed graphs with integral weights from the interval [−M,M]. The latter seems to reveal a strong separation between the all pairs shortest paths (APSP) problem and the minimum weight cycle problem in directed graphs, as the fastest known APSP algorithm has a running time of O(M^{0.681}n^{2.575}) by Zwick [J. ACM 2002].
In contrast, when only combinatorial algorithms are allowed (that is, without FMM), the only known solution to minimum weight cycle is by computing APSP. Interestingly, any separation between the two problems in this case would be an amazing breakthrough as, by a recent paper by Vassilevska W. and Williams [FOCS'10], any O(n^{3−ε})-time algorithm (ε>0) for minimum weight cycle immediately implies an O(n^{3−δ})-time algorithm (δ>0) for APSP.
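For reference, the target problem of the reductions above (minimum weight triangle) admits an obvious O(n³) scan; the FMM-based algorithms discussed here are far faster, so this is illustrative only:

```python
def min_weight_triangle(w):
    """w: symmetric n x n matrix of edge weights (None = no edge).
    Returns the smallest total weight of a triangle, or None."""
    n = len(w)
    best = None
    for i in range(n):
        for j in range(i + 1, n):
            if w[i][j] is None:
                continue
            for k in range(j + 1, n):
                if w[i][k] is None or w[j][k] is None:
                    continue
                s = w[i][j] + w[i][k] + w[j][k]
                if best is None or s < best:
                    best = s
    return best

w = [[None, 1, 2, 9],
     [1, None, 3, None],
     [2, 3, None, 1],
     [9, None, 1, None]]
print(min_weight_triangle(w))  # 6: triangle {0, 1, 2}
```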
 
References:
Liam Roditty and Virginia Vassilevska Williams, Minimum Weight Cycles and Triangles: Equivalences and Algorithms, http://arxiv.org/abs/1104.2882v1
07.03.2012
Kamil Kraszewski
Informatyka Teoretyczna
New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and Its Algorithmic Applications
A pair of unit clauses is called conflicting if it is of the form (x), (¬x). A CNF formula is unit-conflict free (UCF) if it contains no pair of conflicting unit clauses. Lieberherr and Specker (J. ACM 28:411–421, 1981) showed that for each UCF CNF formula with m clauses we can simultaneously satisfy at least ϕ′m clauses, where ϕ′=(√5−1)/2. We improve the Lieberherr-Specker bound by showing that for each UCF CNF formula F with m clauses we can find, in polynomial time, a subformula F′ with m′ clauses such that we can simultaneously satisfy at least ϕ′m+(1−ϕ′)m′+(2−3ϕ′)n″/2 clauses (in F), where n″ is the number of variables in F which are not in F′. We consider two parameterized versions of MAX-SAT, where the parameter is the number of satisfied clauses above the bounds m/2 and m(√5−1)/2. The former bound is tight for general formulas, and the latter is tight for UCF formulas. Mahajan and Raman (J. Algorithms 31:335–354, 1999) showed that every instance of the first parameterized problem can be transformed, in polynomial time, into an equivalent one with at most 6k+3 variables and 10k clauses. We improve this to 4k variables and (2√5+4)k clauses. Mahajan and Raman conjectured that the second parameterized problem is fixed-parameter tractable (FPT). We show that the problem is indeed FPT by describing a polynomial-time algorithm that transforms any problem instance into an equivalent one with at most (7+3√5)k variables. Our results are obtained using our improvement of the Lieberherr-Specker bound above.
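As a point of reference for the bounds above, the weaker m/2 guarantee is already achievable by the standard method of conditional expectations; a minimal sketch (the clause encoding is mine, all clauses are assumed nonempty, and this does not reproduce the Lieberherr-Specker bound or the new one):

    def half_satisfy(clauses, variables):
        # Greedy derandomization (method of conditional expectations):
        # satisfies at least m/2 clauses when every clause is nonempty.
        # Clauses are lists of (variable, polarity) literals.
        assignment = {}
        for v in variables:
            def expected(val):
                # E[#satisfied clauses | v := val, rest uniformly random].
                env = {**assignment, v: val}
                total = 0.0
                for clause in clauses:
                    if any(env.get(u) == pol for u, pol in clause if u in env):
                        total += 1.0
                    else:
                        unset = sum(1 for u, _ in clause if u not in env)
                        total += (1.0 - 0.5 ** unset) if unset else 0.0
                return total
            assignment[v] = expected(True) >= expected(False)
        satisfied = sum(any(assignment[u] == pol for u, pol in clause)
                        for clause in clauses)
        return assignment, satisfied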
References:
Robert Crowston, Gregory Gutin, Mark Jones and Anders Yeo, A New Lower Bound on the Maximum Number of Satisfied Clauses in Max-SAT and Its Algorithmic Applications, Algorithmica DOI 10.1007/s00453-011-9550-1
11.01.2012
Paweł Wanat
Informatyka Teoretyczna
Exact Algorithms for Edge Domination
An edge dominating set in a graph G=(V,E) is a subset of the edges D⊆E such that every edge in E is adjacent or equal to some edge in D. The problem of finding an edge dominating set of minimum cardinality is NP-hard. We present a faster exact exponential time algorithm for this problem. Our algorithm uses O(1.3226^n) time and polynomial space. The algorithm combines an enumeration approach of minimal vertex covers in the input graph with the branch and reduce paradigm. Its time bound is obtained using the measure and conquer technique. The algorithm is obtained by starting with a slower algorithm which is refined stepwise. In each refinement step, the worst cases in the measure and conquer analysis of the current algorithm are reconsidered, and a new branching strategy is proposed for one of these worst cases. In this way a series of algorithms appears, each one slightly faster than the previous, ending in the O(1.3226^n) time algorithm. For each algorithm in the series, we also give a lower bound on its running time. We also show that the related problems minimum weight edge dominating set, minimum maximal matching and minimum weight maximal matching can be solved in O(1.3226^n) time and polynomial space using modifications of the algorithm for edge dominating set. In addition, we consider the matrix dominating set problem, which we solve in O(1.3226^{n+m}) time and polynomial space for n×m matrices, and the parameterized minimum weight maximal matching problem, for which we obtain an O*(2.4179^k) time and space algorithm.
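On tiny instances the object being computed can be cross-checked against a trivial enumeration; a minimal sketch (mine, not the paper's branch-and-reduce algorithm):

    from itertools import combinations

    def min_edge_dominating_set(edges):
        # Enumerate edge subsets by increasing size; a set D dominates
        # iff every edge shares an endpoint with some edge of D. For
        # cross-checking the O(1.3226^n) algorithm on small inputs only.
        edges = [frozenset(e) for e in edges]
        for size in range(len(edges) + 1):
            for d in combinations(edges, size):
                covered = set().union(*d) if d else set()
                if all(e & covered for e in edges):
                    return [tuple(e) for e in d]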
References:
Johan M.M. van Rooij, Hans L. Bodlaender, Exact Algorithms for Edge Domination, Algorithmica, DOI 10.1007/s00453-011-9546-x
07.12.2011
Wojciech Bukowicki
Informatyka Teoretyczna
Bipartite Matching in the Semi-streaming Model
We present the first deterministic (1+ε)-approximation algorithm for finding a large matching in a bipartite graph in the semi-streaming model, requiring only O((1/ε)^5) passes over the input stream. In this model, the input graph G=(V,E) is given as a stream of its edges in some arbitrary order, and the storage of the algorithm is bounded by O(n polylog n) bits, where n=|V|. The only previously known arbitrarily good approximation for general graphs is achieved by the randomized algorithm of McGregor (Proceedings of the International Workshop on Approximation Algorithms for Combinatorial Optimization Problems and Randomization and Computation, Berkeley, CA, USA, pp. 170–181, 2005), which uses Ω((1/ε)^{1/ε}) passes. We show that even for bipartite graphs, McGregor's algorithm needs (1/ε)^{Ω(1/ε)} passes, so its pass count is necessarily exponential in the approximation parameter. The design as well as the analysis of our algorithm require the introduction of some new techniques. A novelty of our algorithm is a new deterministic assignment of matching edges to augmenting paths, which is responsible for the complexity reduction and gets rid of randomization.

We repeatedly grow an initial matching using augmenting paths up to a length of 2k+1 for k=2/ε. We terminate when the number of augmenting paths found in one iteration falls below a certain threshold, also depending on k, which guarantees a (1+ε) approximation. The main challenge is to find those augmenting paths without requiring an excessive number of passes. In each iteration, using multiple passes, we grow a set of alternating paths in parallel, considering each edge as a possible extension as it comes along in the stream. Backtracking is used on paths that fail to grow any further. Crucial are the so-called position limits: when a matching edge is the i-th matching edge in a path and it is then removed by backtracking, it will only be inserted into a path again at a position strictly less than i. This rule strikes a balance between terminating quickly and giving the procedure enough freedom.
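Offline, growing a matching along short augmenting paths looks as follows; this sketch (mine) ignores the streaming constraints and the position-limit bookkeeping, which is exactly where the paper's contribution lies:

    def augment_bounded(adj, match, k):
        # Find one augmenting path with at most k matched edges (length
        # <= 2k+1) in a bipartite graph and flip it. `adj`: left vertex
        # -> iterable of right neighbours; `match`: matched right vertex
        # -> its left partner.
        matched_left = set(match.values())

        def dfs(u, visited, depth):
            if depth > k:
                return None
            for v in adj.get(u, ()):
                if v in visited:
                    continue
                visited.add(v)
                if v not in match:              # free right vertex found
                    return [(u, v)]
                tail = dfs(match[v], visited, depth + 1)
                if tail is not None:
                    return [(u, v)] + tail
            return None

        for u in adj:
            if u in matched_left:
                continue
            path = dfs(u, set(), 0)
            if path:
                for left, right in path:        # flip: these become matched
                    match[right] = left
                return True
        return False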
References:
Sebastian Eggert, Lasse Kliemann, Peter Munstermann, Anand Srivastav, Bipartite Matching in the Semi-streaming Model, Algorithmica, DOI 10.1007/s00453-011-9556-8
07.12.2011
Michał Handzlik
Podstawy Informatyki
Some Improvements to Turner's Algorithm for Bracket Abstraction (by M. Bunder)
A computer handles lambda terms more easily if these are translated into combinatory terms. This translation process is called bracket abstraction. The simplest abstraction algorithm, the (fab) algorithm of Curry (see Curry and Feys [6]), is lengthy to implement and produces combinatory terms that grow rapidly in length as the number of variables to be abstracted increases.

A measure of the efficiency of an abstraction algorithm was first introduced by Kennaway as an upper bound on the length of the obtained combinatory term, as a function of the length of the original term and the number of variables to be abstracted. Mulder used these methods, together with experiments on many special cases, to compare the efficiency of the main algorithms. The algorithm of Statman came out as the most efficient in the limiting case, but proved almost the worst in a number of reasonably simple special cases. Turner's algorithm was generally the best in these cases and was in fact Mulder's choice overall. In this paper, we first note that what Turner describes as "the improved algorithm of Curry", on which his own is based, is in fact not equivalent to any of Curry's algorithms: Turner's abstracts lack a basic property possessed by all of Curry's as well as many others. Secondly, we give methods whereby Turner's algorithm (as well as others) can be implemented more efficiently, while producing simpler abstracts.
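Turner's basic rule set is easy to state as code; a minimal sketch (my encoding of terms as nested pairs, without the primed combinators S′, B′, C′ of the full algorithm):

    def free_in(x, term):
        # Terms: variables and combinators are strings; application is a pair.
        if isinstance(term, str):
            return term == x
        return free_in(x, term[0]) or free_in(x, term[1])

    def abstract(x, term):
        # Turner's basic abstraction rules:
        #   [x]x     = I
        #   [x]M     = K M             if x not free in M
        #   [x](M x) = M               if x not free in M   (eta)
        #   [x](M N) = B M ([x]N)      if x free only in N
        #   [x](M N) = C ([x]M) N      if x free only in M
        #   [x](M N) = S ([x]M) ([x]N) otherwise
        if term == x:
            return 'I'
        if not free_in(x, term):
            return ('K', term)
        m, n = term                    # an application containing x
        if n == x and not free_in(x, m):
            return m
        if not free_in(x, m):
            return (('B', m), abstract(x, n))
        if not free_in(x, n):
            return (('C', abstract(x, m)), n)
        return (('S', abstract(x, m)), abstract(x, n))

    print(abstract('x', ('f', ('g', 'x'))))   # (('B', 'f'), 'g')
    print(abstract('x', (('f', 'x'), 'a')))   # (('C', 'f'), 'a')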

30.11.2011
Michał Feret
Informatyka Teoretyczna
Guard games on graphs
A team of mobile agents, called guards, tries to keep an intruder out of an assigned area by blocking all possible attacks. In a graph model of this setting, the guards and the intruder are located on the vertices of a graph, and they move from node to node via connecting edges. The area protected by the guards is an induced subgraph of the given graph. We investigate the algorithmic aspects of the guarding problem, which is to find the minimum number of guards sufficient to patrol the area. We show that the guarding problem is PSPACE-hard and provide a set of approximation algorithms. All approximation algorithms are based on the study of a variant of the game in which the intruder must reach the guarded area in a single step in order to win. This variant yields a 2-approximation for the guarding problem, and for graphs without cycles of length 5 the minimum numbers of guards required in the two games coincide. We give a polynomial-time algorithm for solving the one-step guarding problem in graphs of bounded treewidth, and complement this result by showing that the problem is W[1]-hard when parameterized by the treewidth of the input graph. We also show that the problem is fixed-parameter tractable (FPT) when parameterized by the treewidth and maximum degree of the input graph. Finally, we turn our attention to a large class of sparse graphs, including planar graphs and graphs of bounded genus, namely apex-minor-free graphs. We prove that on apex-minor-free graphs the one-step guarding problem is FPT and admits a PTAS.
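To fix intuitions, a static simplification of the one-step setting can be checked directly; the defence rule below is my simplifying assumption, not the paper's exact game protocol:

    def defends_one_step(adj, region, guards):
        # Static simplification (an assumption of this sketch): an attack
        # is a move from outside onto a vertex v of the protected region;
        # it is blocked if a guard stands on v or on a neighbour of v
        # inside the region. `adj`: vertex -> set of neighbours.
        region, guards = set(region), set(guards)
        threatened = {v for v in region
                      if any(u not in region for u in adj[v])}
        return all(v in guards or
                   any(w in guards for w in adj[v] if w in region)
                   for v in threatened)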
References:
Fedor V. Fomin, Petr A. Golovach, Daniel Lokshtanov, Guard games on graphs: Keep the intruder out! , Theoretical Computer Science 412 (2011), 6484–6497
23.11.2011
Albert Łącki
Informatyka Teoretyczna
Almost Exact Matchings
In the exact matching problem we are given a graph G, some of whose edges are colored red, and a positive integer k. The goal is to determine if G has a perfect matching exactly k edges of which are red. More generally, if the matching number of G is m = m(G), the goal is to find a matching with m edges exactly k of which are red, or to determine that no such matching exists. This problem is one of the few remaining problems that have efficient randomized algorithms (in fact, this problem is in RNC), but for which no polynomial time deterministic algorithm is known. Our first result shows that, in a sense, this problem is as close to being in P as one can get. We give a polynomial time deterministic algorithm that either correctly decides that no maximum matching has exactly k red edges, or exhibits a matching with m(G)−1 edges having exactly k red edges. Hence, the additive error is one. We also present an efficient algorithm for the exact matching problem in families of graphs for which this problem is known to be tractable. We show how to count the number of exact perfect matchings in K_{3,3}-minor free graphs (these include all planar graphs as well as many others) in O(n^{3.19}) worst-case time. Our algorithm can also count the number of perfect matchings in K_{3,3}-minor free graphs in O(n^{2.19}) time.
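A brute-force check of the perfect-matching version makes the problem concrete; this sketch (mine) is exponential and unrelated to the paper's algorithms:

    from itertools import combinations

    def has_exact_matching(n, edges, red, k):
        # Brute-force check for a perfect matching with exactly k red
        # edges, on vertices 0..n-1. `red` lists red edges with the same
        # orientation as in `edges`. For illustration only.
        for cand in combinations(edges, n // 2):
            endpoints = [v for e in cand for v in e]
            if len(set(endpoints)) == n and sum(e in red for e in cand) == k:
                return True
        return False

    # A 4-cycle with one red edge and k = 1: {(0,1), (2,3)} works.
    print(has_exact_matching(4, [(0, 1), (1, 2), (2, 3), (3, 0)],
                             {(0, 1)}, 1))      # True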
References:
Raphael Yuster, Almost Exact Matchings, Algorithmica, DOI 10.1007/s00453-011-9519-0