Algorithms

Dr Lech Duraj

Wednesday 14:15 - 16:00, room 1103

Classical, approximation and on-line algorithms, computational complexity, lower bound results, etc.

26.11.2020 14:15
Jan Mełech, Michał Zwonek
On the Fine-Grained Complexity of Parity Problems
03.12.2020 14:15
Rafał Kilar, Szymon Salabura
Tetris is NP-hard even with O(1) rows or columns

Previous seminars

19.11.2020 14:15
Łukasz Selwa, Juliusz Wajgelt
Compact Distributed Certification of Planar Graphs
12.11.2020 14:15
Dzianis Pivavarau, Dominik Wielgórski
Explicit two-deletion codes with redundancy matching the existential bound
05.11.2020 14:15
Bartłomiej Jachowicz, Mateusz Kaczmarek
Counting 4-Patterns in Permutations Is Equivalent to Counting 4-Cycles in Graphs
29.10.2020 14:15
Lech Duraj
TBA
22.10.2020 14:15
Marcin Serwin, Wojciech Buczek
A Double-Exponential Lower Bound for the Distinct Vectors Problem
15.10.2020 14:15
Krzysztof Pióro, Krzysztof Potępa
Modular Subset Sum
In the Modular Subset Sum problem we are given a natural number m, an n-element multiset S of integers in the range 0 to m-1, and a number t; we want to decide whether there exists a subset of S that sums to t modulo m.

We will present our own algorithms for this problem. All of them reduce Modular Subset Sum to a string problem. First we will present a simple algorithm running in O(n + m log^2(m)) time, based on hashing and segment trees. Then we will show how to improve its complexity to O(n + m log(m)). Finally, we will present a fully deterministic variant of the algorithm running in O(n + m log(m) α(m)) time.
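
For context, here is a minimal sketch (not the presenters' algorithms) of the textbook O(nm)-time dynamic program that the algorithms above improve upon; the function name and interface are illustrative only.

    def modular_subset_sum_dp(S, m, t):
        # reachable[r] is True iff some subset of the processed prefix of S sums to r (mod m)
        reachable = [False] * m
        reachable[0] = True  # the empty subset
        for x in S:
            new_reachable = reachable[:]
            for r in range(m):
                if reachable[r]:
                    new_reachable[(r + x) % m] = True
            reachable = new_reachable
        return reachable[t % m]

    # Example: subsets of {3, 5, 7} modulo 10 reach 5 (e.g. 3 + 5 + 7 = 15), but never 9.
    # modular_subset_sum_dp([3, 5, 7], 10, 5) -> True
    # modular_subset_sum_dp([3, 5, 7], 10, 9) -> False
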
16.01.2020 14:15
Katarzyna Król, Paweł Mader
On the Complexity of Lattice Puzzles [Kobayashi et al.]
In this paper, the authors investigate the computational complexity of lattice puzzles, one of the traditional puzzles. A lattice puzzle consists of 2n plates with some slits, and the goal is to assemble them into a lattice of size n×n. The puzzle has a long history in the puzzle community; however, there is no known research on it from the viewpoint of theoretical computer science. The puzzle has some natural variants, and they characterize representative computational complexity classes within NP. In particular, one of the natural variants gives a characterization of the graph isomorphism problem; that is, this variant is GI-complete in general. As far as the authors know, this is the first non-trivial GI-complete problem characterized by a classic puzzle. Like the sliding block puzzles, this simple puzzle can thus be used to characterize several representative computational complexity classes.
12.12.2019 14:15
Krzysztof Pióro, Krzysztof Potępa
Linear-Space Data Structures for Range Mode Query in Arrays [Chan, Durocher, Larsen, Morrison, Wilkinson]
A mode of a multiset S is an element a ∈ S of maximum multiplicity; that is, a occurs at least as frequently as any other element in S. Given an array A[1:n] of n elements, we consider a basic problem: constructing a static data structure that efficiently answers range mode queries on A. Each query consists of an input pair of indices (i,j) for which a mode of A[i:j] must be returned. The best previous data structure with linear space, by Krizanc, Morin, and Smid (ISAAC 2003), requires O(sqrt(n)*log log n) query time. We improve their result and present an O(n)-space data structure that supports range mode queries in O(sqrt(n/log n)) worst-case time. Furthermore, we present strong evidence that a query time significantly below sqrt(n) cannot be achieved by purely combinatorial techniques; we show that boolean matrix multiplication of two sqrt(n) x sqrt(n) matrices reduces to n range mode queries in an array of size O(n). Additionally, we give linear-space data structures for orthogonal range mode in higher dimensions (queries in near O(n^(1-1/2d)) time) and for halfspace range mode in higher dimensions (queries in O(n^(1-1/d^2)) time).
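
For background, a compact sketch in the spirit of the classic sqrt-decomposition approach of Krizanc, Morin, and Smid (not the data structure of this paper): precompute the mode of every span of full blocks, keep a sorted position list per value, and at query time compare the precomputed mode of the inner blocks against every element of the two partial blocks. The class below is purely illustrative.

    import bisect
    from collections import defaultdict
    from math import isqrt

    class RangeMode:
        def __init__(self, A):
            self.A = A
            n = len(A)
            self.b = max(1, isqrt(n))            # block size ~ sqrt(n)
            nb = (n + self.b - 1) // self.b      # number of blocks
            self.positions = defaultdict(list)   # value -> sorted list of its positions
            for i, x in enumerate(A):
                self.positions[x].append(i)
            # block_mode[s][e] = (mode, frequency) over blocks s..e inclusive
            self.block_mode = [[None] * nb for _ in range(nb)]
            for s in range(nb):
                freq, best, best_f = defaultdict(int), None, 0
                for e in range(s, nb):
                    for i in range(e * self.b, min(n, (e + 1) * self.b)):
                        freq[self.A[i]] += 1
                        if freq[self.A[i]] > best_f:
                            best, best_f = self.A[i], freq[self.A[i]]
                    self.block_mode[s][e] = (best, best_f)

        def count(self, x, i, j):
            # number of occurrences of x in A[i..j]
            p = self.positions[x]
            return bisect.bisect_right(p, j) - bisect.bisect_left(p, i)

        def query(self, i, j):
            # a mode of A[i..j] (0-indexed, inclusive); O(sqrt(n) log n) time
            b = self.b
            sb, eb = i // b + 1, j // b - 1      # blocks entirely inside [i, j]
            best, best_f = None, 0
            if sb <= eb:
                best, best_f = self.block_mode[sb][eb]
            prefix_end = min(j, sb * b - 1)
            suffix_start = max(i, (eb + 1) * b)
            # elements of the two partial blocks (short ranges may be scanned twice, which is harmless)
            for k in list(range(i, prefix_end + 1)) + list(range(suffix_start, j + 1)):
                f = self.count(self.A[k], i, j)
                if f > best_f:
                    best, best_f = self.A[k], f
            return best
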
28.11.2019 14:15
Katarzyna Bułat, Dawid Tracz
Parity Games: Zielonka’s Algorithm in Quasi-Polynomial Time [P. Parys]

Parity games are games between two players (conventionally called Even and Odd) played on a directed graph G = (V, E). The players move a virtual token between vertices, forming a path. The vertices of the graph are labeled with natural numbers, and each vertex is assigned to one of the players, who decides in which direction the move from that vertex is made. The goal of each player is to choose a strategy such that, in the resulting infinite play (path), the highest label repeated infinitely often is even or odd, respectively.

Parity games are determined, i.e. for every vertex one of the players has a winning strategy. Zielonka's recursive algorithm solves parity games in exponential time. There exists, however, an algorithm running in quasi-polynomial time, i.e. 2^(O((log n)^c)) for some fixed constant c.

During the presentation we will discuss the structure of the new version of the algorithm, analyze its complexity, and present a proof of correctness of the result it returns.
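
For reference, a minimal sketch of the classical exponential-time recursive algorithm of Zielonka that the quasi-polynomial variant refines. The game representation (dictionaries succ, owner, priority; every vertex assumed to have at least one outgoing edge) is an assumption made for illustration.

    def attractor(vertices, succ, owner, player, target):
        # vertices from which `player` can force the token into `target`, within `vertices`
        attr = set(target)
        changed = True
        while changed:
            changed = False
            for v in vertices - attr:
                outs = [w for w in succ[v] if w in vertices]
                if owner[v] == player:
                    ok = any(w in attr for w in outs)
                else:
                    ok = all(w in attr for w in outs)
                if ok:
                    attr.add(v)
                    changed = True
        return attr

    def zielonka(vertices, succ, owner, priority):
        # returns (W0, W1): the winning regions of player 0 (Even) and player 1 (Odd)
        if not vertices:
            return set(), set()
        d = max(priority[v] for v in vertices)
        i = d % 2                                   # the player favored by the highest priority
        top = {v for v in vertices if priority[v] == d}
        A = attractor(vertices, succ, owner, i, top)
        W = [None, None]
        W[0], W[1] = zielonka(vertices - A, succ, owner, priority)
        if not W[1 - i]:
            W[i], W[1 - i] = set(vertices), set()
        else:
            B = attractor(vertices, succ, owner, 1 - i, W[1 - i])
            W[0], W[1] = zielonka(vertices - B, succ, owner, priority)
            W[1 - i] |= B
        return W[0], W[1]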

21.11.2019 14:15
Jędrzej Kula, Przemysław Simajchel
Subtree Isomorphism Revisited [A. Abboud et al.]

The Subtree Isomorphism problem asks whether a given tree is contained in another given tree. The problem is of fundamental importance and has been studied since the 1960s. Subquadratic algorithms are known for some variants, e.g. ordered trees, but not in the general case.

We will present a reduction from the Orthogonal Vectors problem to Subtree Isomorphism, showing that a truly subquadratic algorithm for the latter refutes the SETH. In addition, we will show that the same lower bound holds even for the case of rooted trees of logarithmic height. To contrast the lower bound, we will also show subquadratic randomized algorithms for rooted trees of constant degree and logarithmic height.

The reductions utilize new "tree gadgets" that are likely to be useful for future SETH-based lower bounds for problems on trees. The algorithms apply a folklore result from randomized decision tree complexity.

07.10.2019 14:15
Nicoll Bryła, Mateusz Pabian
Faster Algorithms for All Pairs Non-Decreasing Paths Problem [Duan, Jin, Wu]
In this paper, we present an improved algorithm for the All Pairs Non-decreasing Paths (APNP) problem on weighted simple digraphs, which has running time Õ(n^((3+ω)/2)) = Õ(n^2.686). Here n is the number of vertices, and ω < 2.373 is the exponent of fast matrix multiplication [Williams 2012, Le Gall 2014]. This matches the current best upper bound for the (max, min)-matrix product [Duan, Pettie 2009], which is reducible to APNP. Thus, any further improvement for APNP will imply a faster algorithm for the (max, min)-matrix product. The previous best upper bound for APNP on weighted digraphs was Õ(n^((3 + (3-ω)/(ω+1) + ω)/2)) = Õ(n^2.78) [Duan, Gu, Zhang 2018]. We also show an Õ(n^2) time algorithm for APNP in undirected graphs, which is optimal up to logarithmic factors.
17.10.2019 14:15
Piotr Helm, Krzysztof Zysiak
Optimal Sorting with Persistent Comparison Errors [B. Geissmann et al.]
We consider the problem of sorting n elements in the case of persistent comparison errors. In this problem, each comparison between two elements can be wrong with some fixed (small) probability p, and comparisons cannot be repeated. Sorting perfectly in this model is impossible, and the objective is to minimize the dislocation of each element in the output sequence, that is, the difference between its true rank and its position. Existing lower bounds for this problem show that no algorithm can guarantee, with high probability, maximum dislocation and total dislocation better than Ω(log n) and Ω(n), respectively, regardless of its running time.
In this paper, we present the first O(n log n)-time sorting algorithm that guarantees both O(log n) maximum dislocation and O(n) total dislocation with high probability. This settles the time complexity of this problem and shows that comparison errors do not increase its computational difficulty: a sequence with the best possible dislocation can be obtained in O(n log n) time and, even without comparison errors, Ω(n log n) time is necessary to guarantee such dislocation bounds.
In order to achieve this optimal result, we solve two sub-problems, and the respective methods have their own merits for further application. One is how to locate a position in which to insert an element in an almost-sorted sequence having O(log n) maximum dislocation in such a way that the dislocation of the resulting sequence will still be O(log n). The other is how to simultaneously insert m elements into an almost sorted sequence of m different elements, such that the resulting sequence of 2m elements remains almost sorted.
10.10.2019 14:15
Bartłomiej Jachowicz, Mateusz Kaczmarek
Separating strings with small automata [J. M. Robson]

The main problem considered in this paper is to find the smallest finite automaton separating two strings. The authors prove that, for strings of length bounded by n, there exists an automaton with O(n^(2/5) log^(3/5) n) states accepting exactly one of the given strings, which in the case of two strings of the same length is the best known result.


24.04.2019 14:00
Bartłomiej Jachowicz, Mateusz Kaczmarek
On the Complexity of Exact Pattern Matching in Graphs: Binary Strings and Bounded Degree (M. Equi et al.)
Exact pattern matching in labeled graphs is the problem of searching for paths of a graph G = (V, E) that spell the same string as the pattern P[1…m]. This problem can be solved in quadratic O(|E|m) time. In this paper the authors give a simple conditional lower bound: for any constant ε > 0, an O(m|E|^(1-ε)) or O(|E|m^(1-ε)) time algorithm cannot be achieved unless the Strong Exponential Time Hypothesis (SETH) is false.
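
A minimal sketch of the quadratic O(|E|m) dynamic program mentioned above, for node-labeled directed graphs; the representation (adjacency lists of labeled vertices) is an assumption made for illustration.

    def graph_pattern_match(labels, adj, P):
        # labels[v]: character on vertex v; adj[v]: list of successors of v.
        # Returns True iff some path v_1 -> ... -> v_m spells the pattern P.
        m = len(P)
        n = len(labels)
        # match[v] is True iff some path ending at v spells P[0..j] for the current j
        match = [labels[v] == P[0] for v in range(n)]
        for j in range(1, m):
            new_match = [False] * n
            for u in range(n):
                if match[u]:
                    for v in adj[u]:
                        if labels[v] == P[j]:
                            new_match[v] = True
            match = new_match
        return any(match)
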
17.04.2019 14:00
Rafał Kaszuba, Michał Zwonek
A simpler implementation and analysis of Chazelle’s Soft Heaps (H. Kaplan, U. Zwick)
Chazelle (2000) devised an approximate meldable priority queue data structure, called Soft Heaps, and used it to obtain the fastest known deterministic comparison-based algorithm for computing minimum spanning trees, as well as some new algorithms for selection and approximate sorting problems. If n elements are inserted into a collection of soft heaps, then up to εn of the elements still contained in these heaps, for a given error parameter ε, may be corrupted, i.e., have their keys artificially increased. In exchange for allowing these corruptions, each soft heap operation is performed in O(log 1/ε) amortized time. Chazelle's soft heaps are derived from the binomial heaps data structure, in which each priority queue is composed of a collection of binomial trees. We describe a simpler and more direct implementation of soft heaps in which each priority queue is composed of a collection of standard binary trees. Our implementation has the advantage that no clean-up operations similar to the ones used in Chazelle's implementation are required. We also present a concise and unified potential-based amortized analysis of the new implementation.
13.03.2019 14:15
Kornel Dulęba, Jan Mełech
A Randomized Maximum-Flow Algorithm (Cheriyan & Hagerup)
A randomized algorithm for computing a maximum flow is presented. For an n-vertex m-edge network, the running time is O(nm + n^2 (log n)^2) with probability at least 1 - 2^(-sqrt(nm)). The algorithm is always correct, and in the worst case runs in O(nm log n) time. The only use of randomization is to randomly permute the adjacency lists of the network vertices at the start of the execution.
24.01.2019 14:00
Jan Derbisz, Franciszek Stokowacki
An Equivalence Class for Orthogonal Vectors (L.Chen, R.Williams)
The Orthogonal Vectors problem, asking whether any pair among n vectors is orthogonal, can be easily solved in O(n^2 log n) time; however, no truly subquadratic algorithm is known. The authors show that OV is truly-subquadratic equivalent to several fundamental problems, e.g. Apx-Min-IP (finding a red-blue pair of vectors whose inner product is a k-approximation of the minimum/maximum inner product) and Approximate Bichrom.-ℓp-Closest-Pair (approximating the closest red-blue pair of points). The above equivalence results also hold in the data structure setting, where queries are answered online. Moreover, the introduced constructions yield new approximation algorithms for Apx-Min-IP and some MAX-SAT instances.
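
To make the problem concrete, a minimal sketch of the straightforward quadratic check (for the standard formulation with 0/1 vectors) which, by the results above, is essentially the best known:

    def has_orthogonal_pair(vectors):
        # vectors: list of 0/1 tuples of equal dimension d
        # returns True iff some pair is orthogonal; runs in O(n^2 * d) time
        n = len(vectors)
        for i in range(n):
            for j in range(i + 1, n):
                if all(a * b == 0 for a, b in zip(vectors[i], vectors[j])):
                    return True
        return False

    # Example: (1,0,1) and (0,1,0) are orthogonal, so
    # has_orthogonal_pair([(1,0,1), (1,1,0), (0,1,0)]) -> True
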
17.01.2019 14:00
Katarzyna Bułat, Kamil Rajtar
Correctness of constructing optimal alphabetic trees revisited

The paper we present gives new observations that allowed the authors to prove the correctness of two well-known algorithms (Hu-Tucker and Garsia-Wachs) for constructing optimal trees preserving lexicographic order. We will discuss a generalized version of the Garsia-Wachs algorithm, together with a clear and easy-to-illustrate proof, which also helps in understanding the Hu-Tucker approach.

10.01.2019 14:00
Bartłomiej Jachowicz, Mateusz Kaczmarek
SETH-based Lower Bounds for Subset Sum and Bicriteria Path

The main result of this paper is a tight reduction from k-SAT to Subset Sum on dense instances, proving that Bellman's 1962 pseudo-polynomial O*(T)-time algorithm for Subset Sum on n numbers and target T cannot be improved to time T^(1-ε) * 2^(o(n)) for any ε > 0, unless SETH fails.
As a corollary, the paper proves a "Direct-OR" theorem for Subset Sum under SETH, offering a new tool for proving conditional lower bounds. It is now possible to assume that deciding whether one out of N given instances of Subset Sum is a YES instance requires time (NT)^(1-o(1)). As an application of this corollary, a tight SETH-based lower bound is proved for the classical Bicriteria s,t-Path problem.

03.01.2019 14:00
Rafał Kaszuba, Krzysztof Zysiak
Fast Modular Subset Sum using Linear Sketching

Given a set of n positive integers, the Modular Subset Sum problem asks whether there exists a subset that sums to a given t modulo a given integer m. It is a natural generalization of the Subset Sum problem (m = +∞), which is strongly connected to additive combinatorics and cryptography.

Recently, efficient algorithms have been developed for the non-modular case, running in near-linear pseudo-polynomial time. For the modular case, however, the best known algorithm (due to Koiliaris and Xu) runs in Õ(m^(5/4)) time.

In this paper we present an Õ(m)-time algorithm, which matches a conditional lower bound based on SETH. In contrast to most previous results on Subset Sum, our algorithm does not use FFT. Instead, it is able to simulate the "textbook" dynamic programming much faster, using ideas from Linear Sketching. This is one of the first applications of sketching-based techniques to obtain a fast algorithm for a combinatorial problem in an offline setting.

13.12.2018 14:00
Łukasz Miśkiewicz, Adam Pardyl
Space-Efficient Algorithms for Longest Increasing Subsequence
Given a sequence of integers, we want to find a longest increasing subsequence of the sequence. It is known that this problem can be solved in O(n*log(n)) time and space. Our goal in this paper is to reduce the space consumption while keeping the time complexity small. For sqrt(n) <= s <= n, we present algorithms that use O(s log n) bits and O((n^2/s) * log n) time for computing the length of a longest increasing subsequence, and O((n^2/s) * log^2 n) time for finding an actual subsequence. We also show that the time complexity of our algorithms is optimal up to polylogarithmic factors in the framework of sequential access algorithms with the prescribed amount of space.
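
For contrast, a minimal sketch of the standard O(n log n)-time, O(n)-space algorithm whose space usage the paper trades against time; the function name is illustrative only.

    import bisect

    def lis_length(a):
        # length of a longest strictly increasing subsequence, O(n log n) time
        tails = []  # tails[k] = smallest possible last element of an increasing subsequence of length k+1
        for x in a:
            k = bisect.bisect_left(tails, x)
            if k == len(tails):
                tails.append(x)
            else:
                tails[k] = x
        return len(tails)

    # Example: lis_length([3, 1, 4, 1, 5, 9, 2, 6]) == 4  (e.g. 1, 4, 5, 9)
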
29.11.2018 14:00
Konrad Deka, Szymon Kapała
Tighter Connections Between Formula-SAT and Shaving Logs

In 2015, Abboud, Backurs and Vassilevska-Williams showed that an O(n^(2-ε)) time algorithm for LCS would refute the Strong Exponential Time Hypothesis (SETH). In this paper, the authors prove conditional lower bounds of the form O(n^2 / log^c n) for LCS, as well as for Fréchet Distance and regular expression pattern matching. The main result is an efficient reduction from SAT on formulas with n variables and size s to LCS on words of length 2^(n/2) * s^(1+o(1)). It follows that an O(n^2 / log^(7+ε) n) algorithm for LCS would refute some plausible conjectures about Formula-SAT, and an O(n^2 / log^(17+ε) n) algorithm would imply major progress in circuit complexity theory.

22.11.2018 14:00
Rafał Byczek, Bruno Pitrus
Approximating Edit Distance Within Constant Factor in Truly Sub-Quadratic Time

Edit distance is one way of measuring how similar two strings are. It is the minimum number of insertions, deletions, and substitutions of single characters required to transform one word into the other. In this paper the authors focus on the computational complexity of approximating the edit distance between a pair of words.

The exact edit distance can be computed by the classical dynamic programming algorithm running in quadratic time. In 2010, Andoni, Krauthgamer and Onak presented a near-linear-time algorithm approximating edit distance within a polylogarithmic factor. In 2014, Backurs and Indyk showed that an exact algorithm running in O(n^(2-δ)) time would imply a fast algorithm for SAT and falsify the Strong Exponential Time Hypothesis (SETH). Moreover, recently in 2017, Abboud and Backurs showed that the existence of a truly subquadratic algorithm approximating edit distance within a factor of 1 + o(1) would falsify some conjectures about boolean circuit complexity. This casts doubt on whether edit distance can be approximated within a constant factor in truly subquadratic time.

In this paper, however, the authors answer this question affirmatively, presenting a very interesting algorithm that approximates edit distance within a constant factor and proving that its running time is bounded from above by Õ(n^(2−2/7)).
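
A minimal sketch of the classical quadratic dynamic program for exact edit distance mentioned above (space-optimized to one row):

    def edit_distance(a, b):
        # Levenshtein distance between strings a and b in O(|a|*|b|) time
        n, m = len(a), len(b)
        dp = list(range(m + 1))        # dp[j] = distance between current prefix of a and b[:j]
        for i in range(1, n + 1):
            prev_diag = dp[0]          # dp value for (i-1, j-1)
            dp[0] = i
            for j in range(1, m + 1):
                cur = dp[j]
                dp[j] = min(dp[j] + 1,                           # delete a[i-1]
                            dp[j - 1] + 1,                       # insert b[j-1]
                            prev_diag + (a[i - 1] != b[j - 1]))  # substitute / match
                prev_diag = cur
        return dp[m]

    # Example: edit_distance("kitten", "sitting") == 3
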

15.11.2018 14:00
Tomasz Zieliński, Michał Zwonek
On the Complexity of the (Approximate) Nearest Colored Node Problem
Given a graph G = (V, E) in which every vertex is assigned a color, we ask for the approximate distance between a query vertex v and the nearest vertex of a given color c. We present an oracle with stretch 4k-5 using O(kn σ^(1/k)) space and O(log k) query time. Next, we prove that having only an estimate of order O(polylog(n)) we can answer the query dist(v, c) in O(1) time. Finally, we show a connection between the λ-OuMv problem and the dist(v, c) problem.
08.11.2018 14:00
Weronika Grzybowska, Paweł Mader
Hamming distance completeness and sparse matrix multiplication
The authors show that a broad class of (+, <>) vector products (for binary integer functions <>) are equivalent under one-to-polylog reductions to the computation of the Hamming distance. Examples include the dominance product, the threshold product, and l_{2p+1} distances for constant p. These results imply equivalence (up to polylog factors) between the complexities of computing All Pairs Hamming Distances, All Pairs l_{2p+1} Distances, All Pairs Dominance Products, and All Pairs Threshold Products. Additionally, they show that the complexity of All Pairs Hamming Distances (and thus of the other aforementioned All Pairs problems) is within a polylog(n) factor of the time it takes to multiply matrices of dimensions n×(n·d) and (n·d)×n, each with (n·d) non-zero entries.
25.10.2018 14:00
Filip Bartodziej, Vladyslav Hlembotskyi
Fine-grained Lower Bounds on Cops and Robbers
Thorough policemen or an elusive thief? At this seminar we will find out who comes out on top, how fast (or slow) we can find it out, and how many cops suffice to capture even the legendary Frank Abagnale. Our deliberations will revolve around the popular cops and robbers game played on graphs. The presented results rely on the SETH/ETH assumptions.
18.10.2018 14:00
Jan Mełech, Rafał Burczyński
A Simple Near-Linear Pseudopolynomial Time Randomized Algorithm for Subset Sum
The Subset Sum problem asks to find a subset T of a given multiset S of natural numbers (|S| = n) whose elements sum to a given t. The authors present a simple probabilistic solution to this well-known NP-complete problem, based on techniques such as the Fast Fourier Transform and generating functions. The running time of the algorithm is O((n+t)*polylog(t)) and the error probability is O(1/(n+t)). Even though this time complexity had been achieved before, the presented work is simpler, as its description is only a few pages long.
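
To illustrate the convolution idea (though not the paper's randomized algorithm itself), here is a toy divide-and-conquer routine that computes all achievable subset sums up to t by combining the two halves of S with a convolution; a real implementation would use FFT for the convolution step.

    import numpy as np

    def subset_sums_up_to(S, t):
        # boolean array r of length t+1 with r[s] == True iff some subset of S sums to s
        if len(S) <= 1:
            r = np.zeros(t + 1, dtype=bool)
            r[0] = True                      # the empty subset
            if S and S[0] <= t:
                r[S[0]] = True
            return r
        mid = len(S) // 2
        left = subset_sums_up_to(S[:mid], t)
        right = subset_sums_up_to(S[mid:], t)
        # sumset of the two halves = convolution of their indicator vectors, truncated at t
        conv = np.convolve(left.astype(np.int64), right.astype(np.int64))[: t + 1]
        return conv > 0

    def subset_sum(S, t):
        return bool(subset_sums_up_to(S, t)[t])

    # Example: subset_sum([3, 5, 7], 12) -> True (5 + 7), subset_sum([3, 5, 7], 11) -> False
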
11.10.2018 14:00
Dawid Pyczek, Michał Zieliński
On the Worst-Case Complexity of TimSort
TimSort is a very interesting sorting algorithm, which was introduced in Python in 2002. This popular algorithm has been used successfully all around the world, owing to its remarkably fast execution on partially sorted data.
There had been, however, no formal proof of the algorithm's worst-case complexity. This paper proves that TimSort runs in O(n log n) time.
It also proves that Python's TimSort runs in O(n + n log ρ) time, where ρ is the number of runs. The first bound obviously follows from the second one, but both of them help to better understand how TimSort works.
As a byproduct of the analysis, a new bug was found in the Java implementation of TimSort.
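
A minimal sketch of the parameter ρ used above: the number of maximal monotonic runs that TimSort detects (here counting only non-decreasing runs, a simplification of TimSort's actual run detection, which also accepts decreasing runs and reverses them).

    def count_runs(a):
        # number of maximal non-decreasing runs in the list a
        if not a:
            return 0
        runs = 1
        for i in range(1, len(a)):
            if a[i] < a[i - 1]:   # a new run starts here
                runs += 1
        return runs

    # Example: count_runs([1, 2, 5, 3, 4, 4, 2]) == 3   (runs: [1,2,5], [3,4,4], [2])
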
16.03.2017 14:15
Jakub Cisło, Grzegorz Jurdziński
Tight Hardness Results for LCS and other Sequence Similarity Measures
09.03.2017 14:00
Sylwester Klocek, Wojciech Kruk
The Alternating Stock Size Problem and the Gasoline Puzzle
05.01.2017 14:15
Jan Derbisz, Jakub Łabaj
How to sort by walking on a tree

We consider a tree with n vertices. Vertex number x holds a box with label p(x), where p is a permutation of {1,2,...,n}. A robot walks on the tree, carrying at most one box at a time. If a box is placed where the robot is standing, the robot can swap that box with the one it is carrying. The robot's goal is to sort the boxes, placing each one at the vertex with its number. The paper by D. Graf gives an algorithm computing the shortest possible walk of the robot in quadratic time, as well as a proof that the problem becomes NP-complete if planar graphs are considered instead of trees.

15.12.2016 14:15
Michał Glapa, Franciszek Stokowacki
Maximum matching with algebraic methods

In 2006, a celebrated result by Mucha and Sankowski stated that the maximum matching problem can be solved by Gaussian elimination. The complexity of this algorithm depends on matrix multiplication, but it certainly beats the long-standing O(n^2.5) record of the Micali-Vazirani algorithm.
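
A minimal sketch of the algebraic idea behind this line of work: a Lovász-style randomized test for the existence of a perfect matching via the rank of the Tutte matrix over a prime field. It only decides existence, not the matching itself, and is illustrative rather than the Mucha-Sankowski algorithm; vertex numbering 0..n-1 and the edge-list interface are assumptions.

    import random

    P = 1_000_000_007  # a large prime; working modulo P keeps arithmetic exact

    def rank_mod_p(M, p):
        # rank of matrix M over the field Z_p, by Gaussian elimination
        M = [row[:] for row in M]
        n, m = len(M), len(M[0])
        rank, row = 0, 0
        for col in range(m):
            pivot = next((r for r in range(row, n) if M[r][col] % p != 0), None)
            if pivot is None:
                continue
            M[row], M[pivot] = M[pivot], M[row]
            inv = pow(M[row][col], p - 2, p)
            M[row] = [(x * inv) % p for x in M[row]]
            for r in range(n):
                if r != row and M[r][col] % p != 0:
                    f = M[r][col]
                    M[r] = [(M[r][c] - f * M[row][c]) % p for c in range(m)]
            row += 1
            rank += 1
        return rank

    def has_perfect_matching(n, edges, trials=3):
        # the Tutte matrix with random entries is non-singular w.h.p. iff a perfect matching exists
        if n % 2 == 1:
            return False
        for _ in range(trials):
            T = [[0] * n for _ in range(n)]
            for (u, v) in edges:
                x = random.randrange(1, P)
                T[u][v] = x              # skew-symmetric random substitution
                T[v][u] = (-x) % P
            if rank_mod_p(T, P) == n:
                return True              # non-singular => a perfect matching exists (no false positives)
        return False                     # may err with small probability when a matching exists

    # Example: a 4-cycle 0-1-2-3-0 has a perfect matching, so (with high probability)
    # has_perfect_matching(4, [(0, 1), (1, 2), (2, 3), (3, 0)]) -> True
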

08.12.2016
Lech Duraj
A short tale of matrix multiplication

In recent years, several new algorithms for the matrix multiplication problem have been presented. Each of them is, however, only slightly faster than the previous ones, while requiring a substantially more complex analysis. Because of this, the long-standing question of the optimal matrix multiplication algorithm seems even harder.

In my presentation, a short survey of matrix multiplication algorithms is given. The presentation is based on François Le Gall's survey lecture from 2014.

01.12.2016
Aleksandra Mędrek, Krzysztof Maziarz
Navigating Central Path with Electrical Flows: from Flows to Matchings, and Back

The paper by Aleksander Mądry describes a new approach to the maximum flow problem. We define an electrical flow by assigning a resistance to every edge and minimizing total energy instead of maximizing flow. Any flow network can be reduced to some electrical flow problem, using auxiliary reductions to bipartite matching problems. The main result is a new maximum flow algorithm running in O(m^(10/7)) time. The presentation will cover a simpler, O(m^(3/2))-time version of the algorithm.

24.11.2016
Dominika Salawa, Jakub Cisło
Greedy algorithms for Steiner forest

The paper resolves a long-standing open question: is the greedy approach for the Steiner forest problem optimal up to a multiplicative constant? The result by Anupam Gupta and Amit Kumar proves that in fact it is, with the constant being at most 96. While some approximation algorithms for this problem, using linear programming, were previously known, this is the first known bound for a greedy algorithm.

17.11.2016
Patryk Gołębiowski, Wojciech Kruk
Lower Bounds in the Preprocessing and Query Phases of Routing Algorithms
In the last decade, there has been a substantial amount of research in finding routing algorithms designed specifically to run on real-world graphs. The paper by Colin White analyzes some of these algorithms and gives lower bounds for their complexity.
10.11.2016
Magdalena Gargas, Mateusz Jachna
Max flows in O(nm) time, or better
A new max-flow algorithm with O(nm + m^(16/15) log n) time complexity is presented. By combining this with previous results, an O(nm) algorithm may be obtained, resolving a long-standing open question. This work is due to James B. Orlin.
03.11.2016
Krzysztof Francuz, Szymon Łukasz
Fast and simple connectivity in graph timelines

The presented paper (by J. Łącki and A. Karczmarz) deals with graph timelines: graphs with edges being created and destroyed over time. The authors give an efficient algorithm for answering queries about reachability (finding a path between two given vertices) and biconnectivity (two disjoint paths) over given time intervals.

27.10.2016
Dawid Pyczek, Jakub Rówiński
Faster deterministic sorting and priority queues in linear space

The O(n log n) lower bound for sorting is valid only if the sorted objects allow no operations except comparisons. For sorting integers, the bound can be broken: Mikkel Thorup's result from 1997 gives an O(n (log log n)^2) time algorithm for sorting arbitrary integers.

20.10.2016
Mateusz Twaróg, Patryk Urbański
Disjoint Set Union with randomized linking

The most popular version of the Find-Union (disjoint set union) data structure uses path compression and linking by rank. The presented work of Goel, Khanna, Larkin and Tarjan gives an analysis of the same algorithm, but with arbitrary (randomized) linking.
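
A minimal sketch of the data structure in question: union-find with path compression, where the union step links the two roots in a random direction instead of by rank (an illustration of the analyzed variant, not of the paper's analysis).

    import random

    class RandomizedDSU:
        def __init__(self, n):
            self.parent = list(range(n))

        def find(self, x):
            # path compression: point every visited vertex directly at the root
            root = x
            while self.parent[root] != root:
                root = self.parent[root]
            while self.parent[x] != root:
                self.parent[x], x = root, self.parent[x]
            return root

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return False
            if random.random() < 0.5:   # randomized linking instead of union by rank
                rx, ry = ry, rx
            self.parent[ry] = rx
            return True

    # Example:
    # dsu = RandomizedDSU(5); dsu.union(0, 1); dsu.union(3, 4)
    # dsu.find(0) == dsu.find(1)  -> True;  dsu.find(1) == dsu.find(3) -> False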

13.10.2016
G. Bukowiec, S. Klocek
FKT algorithm

Counting perfect matchings in a graph is a #P-complete problem. However, for planar graphs it can be done in polynomial time. The FKT algorithm achieves this by exploiting the connection between the matrix determinant and the matrix permanent.
