Registration | Join Us at the CFCS Theory Day

Theoretical computer science is the cornerstone of computer science: it is the foundation for constructing, explaining, and understanding computers. Today, the classical results of theoretical computer science are deeply woven into every branch of computer science research and permeate the thinking of every computer scientist. The field seeks to understand the phenomenon of computation, the expressiveness of languages, the design and performance of algorithms, and the general limits of computation; in particular, it focuses on the core questions of what computation is, what can be computed, how to compute it, and at what cost.

The CFCS Theory Day is a one-day workshop on theoretical computer science. Its inaugural edition will be held on June 28, 2022, organized by Kuan Cheng, Shaofeng Jiang, and Tianren Liu of the Center on Frontiers of Computing Studies (CFCS), Peking University. Through invited talks, experts from China and abroad will present recent advances in the main branches of theoretical computer science, including algorithms, complexity, coding, and cryptography.

↑↑ Scan the mini-program code above to register ↑↑

https://www.koushare.com/topic-sc/i/cfcspu

https://live.bilibili.com/22051279

1. The Monomial Structure of Boolean Functions

Abstract

Let f:{0,1}^n \to {0,1} be a boolean function. It can be uniquely represented as a multilinear polynomial. What is the structure of its monomials?
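
As a toy illustration (the function and helper names are my own, not from the talk), the coefficients of this unique multilinear polynomial can be computed from the truth table by a Möbius transform; the nonzero coefficients are exactly the monomials:

```python
from itertools import chain, combinations

def multilinear_coeffs(f, n):
    """Möbius transform: the coefficient of prod_{i in S} x_i is
    sum over subsets T of S of (-1)^{|S|-|T|} * f(indicator of T)."""
    def powerset(s):
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))
    coeffs = {}
    for S in powerset(range(n)):
        c = 0
        for T in powerset(S):
            x = [1 if i in T else 0 for i in range(n)]
            c += (-1) ** (len(S) - len(T)) * f(x)
        if c != 0:  # keep only actual monomials
            coeffs[S] = c
    return coeffs

# OR(x0, x1) = x0 + x1 - x0*x1 has three monomials
print(multilinear_coeffs(lambda x: x[0] | x[1], 2))
# → {(0,): 1, (1,): 1, (0, 1): -1}
```

For AND the transform yields the single monomial x0*x1, while OR already has three; the talk asks how such monomial set systems behave in general.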

This question turns out to be connected to some well-studied problems, such as the log-rank conjecture in communication complexity and the union-closed conjecture in combinatorics. I will describe these connections, and a new structural result, showing that set systems of monomials of boolean functions have small hitting sets. Concretely, if f has r monomials, then they have a hitting set of size poly-log(r).

Based on joint work with Alexander Knop, Sam McGuire, and Weiqiang Yuan.

Biography

Shachar Lovett is an Associate Professor in the CSE department at UC San Diego. He is part of the Theory of Computation group and helps run the theory seminar. He has a broad interest in theoretical computer science and mathematics, in particular computational complexity, randomness and pseudo-randomness, algebraic constructions, coding theory, additive combinatorics, and high-dimensional geometry. His research is funded by the NSF.

2. Recent Developments in Derandomization

Abstract

This talk will present two new directions in the study of derandomization, which are related to each other. The talk is based on a sequence of recent works with Roei Tell.

The first direction is avoiding classical PRGs in favor of non-black-box techniques, which extract pseudorandomness for a probabilistic machine from the code of that specific machine. This approach allows us to construct derandomization algorithms from weaker hypotheses (compared to classical results), or alternatively to construct very fast derandomization algorithms that cannot be obtained using classical PRGs. The second direction is the fine-grained study of superfast derandomization, asking what is the smallest possible overhead that we need to pay when eliminating randomness (and if we can get away with paying nothing at all).

As examples, we will mention four results:

1. Connecting the BPP = P conjecture to a new type of lower bounds.

2. "Free lunch" derandomization with essentially no time overhead.

3. Superfast derandomization of probabilistic proof systems.

4. Average-case derandomization from weak hardness assumptions.

(Joint work with Ron Rothblum.)

Biography

Lijie Chen is a PhD student at MIT, advised by Ryan Williams.  Prior to that, he received his bachelor's degree from Yao Class at Tsinghua University.  He has a broad interest in theoretical computer science.  His current research interests include computational complexity and fine-grained complexity.

3. Streaming Facility Location in High Dimension

Abstract

In Euclidean Uniform Facility Location, the input is a set of clients in R^d and the goal is to place facilities to serve them, so as to minimize the total cost of opening facilities plus connecting the clients. We study the classical setting of dynamic geometric streams, where the clients are presented as a sequence of insertions and deletions of points in the grid [\Delta]^d, and we focus on the high-dimensional regime, where the algorithm's space complexity must be poly(d \log \Delta).
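
To make the objective concrete, here is a small offline brute-force sketch of the Euclidean Uniform Facility Location cost (purely illustrative; the instance and helper names are my own, and this is of course not the streaming algorithm from the talk):

```python
import math
from itertools import combinations

def ufl_cost(clients, facilities, opening_cost=1.0):
    """Uniform Facility Location objective: a fixed cost per open facility
    plus each client's distance to its nearest open facility."""
    conn = sum(min(math.dist(c, f) for f in facilities) for c in clients)
    return opening_cost * len(facilities) + conn

def brute_force_opt(clients, opening_cost=1.0):
    # Exhaustive search, restricting facilities to client locations
    # (a standard simplification that loses at most a constant factor).
    best = math.inf
    for k in range(1, len(clients) + 1):
        for F in combinations(clients, k):
            best = min(best, ufl_cost(clients, F, opening_cost))
    return best

# Two tight clusters far apart: opening one facility per cluster is optimal.
clients = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.0, 5.2)]
print(brute_force_opt(clients))  # ≈ 2.3: two openings plus 0.1 + 0.2 connection
```

The brute force is exponential in the number of clients; the point of the talk is achieving O(1)- or O(d^{1.5})-approximation of this objective in poly(d log Δ) space over a dynamic stream.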

We present a new algorithmic framework, based on importance sampling from the stream, for O(1)-approximation of the optimal cost using only poly(d \log \Delta) space. This framework is easy to implement in two passes and can be "compressed" into one pass under the random-order setting. Our main result, for arbitrary-order streams, computes O(d^{1.5})-approximation in one pass by using the new framework but combining the two passes differently. This improves upon previous algorithms that either need space exponential in d or only guarantee O(d \log^2 \Delta)-approximation, and therefore our algorithms for high-dimensional streams are the first to avoid the O(\log \Delta)-factor in approximation that is inherent to the widely-used quadtree decomposition. Our improvement is achieved by introducing a geometric hashing scheme in streaming that maps points in R^d into buckets of bounded diameter, with the key property that every point set of small-enough diameter is hashed into at most poly(d) distinct buckets.

Based on a joint work with Artur Czumaj, Robert Krauthgamer, Pavel Veselý and Mingwei Yang.

Biography

Shaofeng Jiang is an assistant professor at the Center on Frontiers of Computing Studies (CFCS), Peking University. He obtained his Ph.D. from the University of Hong Kong. Before joining PKU, he worked as a postdoctoral researcher at the Weizmann Institute of Science and as an assistant professor at Aalto University. His research interests lie broadly in theoretical computer science, with a focus on algorithms for massive datasets, online algorithms, and approximation algorithms.

4. Exponential Lower Bounds for Locally Decodable and Correctable Codes for Insertions and Deletions

Abstract

This talk is based on our recent work on lower bounds for locally decodable codes (LDCs) against insdel errors.

Locally Decodable Codes are error-correcting codes for which individual message symbols can be quickly recovered despite errors in the codeword. The topic has been studied extensively in the past few decades, where a major goal is to understand the amount of redundancy that is necessary and sufficient.
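
For intuition about local decoding in the classical Hamming setting, here is a toy 2-query local decoder for the Hadamard code (my own illustrative sketch; the talk's results concern insdel errors, against which this kind of scheme is exactly what breaks down):

```python
import random

def hadamard_encode(m):
    """Codeword indexed by x in {0,1}^n: c[x] = <m, x> mod 2."""
    n = len(m)
    return [sum(mi & ((x >> i) & 1) for i, mi in enumerate(m)) % 2
            for x in range(2 ** n)]

def local_decode(c, i, n, rng):
    """2 queries: m_i = c[x] xor c[x ^ e_i] for a random x. Each query pair
    hits a corrupted position with probability < 1/2 as long as fewer than
    a 1/4 fraction of positions carry Hamming errors."""
    x = rng.randrange(2 ** n)
    return c[x] ^ c[x ^ (1 << i)]

rng = random.Random(0)
m = [1, 0, 1, 1]
c = hadamard_encode(m)
for pos in (3, 6):      # flip 2 of the 16 codeword symbols
    c[pos] ^= 1
votes = [local_decode(c, 2, 4, rng) for _ in range(99)]
print(max(set(votes), key=votes.count))  # majority vote recovers m[2] w.h.p.
```

The decoder reads only 2 of the 16 symbols per attempt, which is the "local" part; repetition plus a majority vote drives the error probability down.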

We show stronger lower bounds for the insdel case than for the Hamming case. Specifically, for two queries there is no linear insdel LDC, and for 3 or more queries our lower bounds are exponential in the existing lower bounds for the Hamming case. Our techniques are based on a delicate design and analysis of hard insdel distributions, which differ significantly from previous distributions for the Hamming case.

Biography

Kuan Cheng is an Assistant Professor at the Center on Frontiers of Computing Studies (CFCS), Peking University. His research interests include computational models and complexity theory, pseudorandomness, and coding theory. He received his PhD from Johns Hopkins University and was a postdoc at UT Austin before joining Peking University.

5. The t-wise Independence of Substitution-Permutation Networks

Abstract

Block ciphers such as the Advanced Encryption Standard (Rijndael) are used extensively in practice, yet our understanding of their security continues to be highly incomplete. We promote and continue a research program aimed at proving the security of block ciphers against important and well-studied classes of attacks. In particular, we initiate the study of (almost) t-wise independence of concrete block-cipher construction paradigms such as substitution-permutation networks and key-alternating ciphers. Sufficiently strong (almost) pairwise independence already suffices to resist (truncated) differential attacks and linear cryptanalysis, and hence this is a relevant and meaningful target. Our results are two-fold.
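
As a toy illustration of the target property (my own example, not the SPN construction from the talk), perfect pairwise independence can be checked by brute force for the classical keyed permutation family x ↦ ax + b over a prime field:

```python
from itertools import product

def is_pairwise_independent(keys, apply_fn, domain):
    """Brute-force check of (perfect) pairwise independence: for every pair
    of distinct inputs (x, y), the output pair (F_k(x), F_k(y)) must be
    uniform over all pairs of distinct outputs as the key k varies."""
    n = len(domain)
    target = len(keys) / (n * (n - 1))
    for x, y in product(domain, repeat=2):
        if x == y:
            continue
        counts = {}
        for k in keys:
            out = (apply_fn(k, x), apply_fn(k, y))
            counts[out] = counts.get(out, 0) + 1
        if len(counts) != n * (n - 1) or any(c != target for c in counts.values()):
            return False
    return True

p = 7
affine_keys = [(a, b) for a in range(1, p) for b in range(p)]  # a != 0: permutations
print(is_pairwise_independent(affine_keys,
                              lambda k, x: (k[0] * x + k[1]) % p,
                              range(p)))  # True: ax + b is pairwise independent
```

Real block ciphers only approximate this property, and the talk's results bound how close constructions such as SPNs and key-alternating ciphers get after sufficiently many rounds.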

Our first result concerns substitution-permutation networks (SPNs) that model ciphers such as AES. We prove the almost pairwise-independence of an SPN instantiated with concrete S-boxes together with an appropriate linear mixing layer, given sufficiently many rounds and independent sub-keys. Our proof relies on a characterization of S-box computation on input differences in terms of sampling output differences from certain subspaces, and a new randomness extraction lemma (which we prove with Fourier-analytic techniques) that establishes when such sampling yields uniformity. We use our techniques in particular to prove almost pairwise-independence for sufficiently many rounds of both the AES block cipher and the MiMC block cipher, assuming independent sub-keys.

Secondly, we show that instantiating a key-alternating cipher (which can be thought of as a degenerate case of SPNs) with most permutations gives us (almost) t-wise independence in t+o(t) rounds. In order to do this, we use the probabilistic method to develop two new lemmas, an independence-amplification lemma and a distance amplification lemma, that allow us to reason about the evolution of key-alternating ciphers.

Based on joint work with Stefano Tessaro and Vinod Vaikuntanathan.

Biography

Tianren Liu is an assistant professor at the Center on Frontiers of Computing Studies (CFCS), Peking University. Prior to that, he received his bachelor's degree from Yao Class at Tsinghua University. He has a broad interest in theoretical cryptography, in particular the so-called information-theoretic cryptography that does not rely on computational complexity assumptions.

6. Aggregating a Data Set: Rankings to Strings

Abstract

One of the most common aggregation tasks in data analysis is to find a representative for a given data set S. Perhaps the most popular version asks to find a point from the metric space (not necessarily from S) that minimizes the sum of distances from all the data points in S. An optimal such point is called a median. In another well-known version, the objective is to find a point that minimizes the maximum distance, where an optimal point is called a center. For both the median and center problems, the hardness varies with the underlying metric space. There is a folklore algorithm that achieves 2-approximation (by a simple application of triangle inequality). Can we do better when the input is a set of strings, and the underlying distance is the edit metric? Instead of arbitrary strings, if the input is a set of rankings (permutations), what is the best approximation factor we can achieve? In this talk, we will address these questions and discuss some of the recent advances.
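
The folklore 2-approximation mentioned above simply returns the input point with the smallest total distance to the rest of the set; here is a sketch with edit distance as the metric (the example strings are my own):

```python
def edit_distance(a, b):
    # Classic single-row DP for Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # delete from a
                                     dp[j - 1] + 1,    # insert into a
                                     prev + (ca != cb))  # substitute
    return dp[-1]

def folklore_median(S, dist):
    """Return the input point minimizing the sum of distances to S.
    By the triangle inequality, this is a 2-approximation of the optimal
    median over the whole metric space."""
    return min(S, key=lambda x: sum(dist(x, y) for y in S))

S = ["kitten", "sitten", "mitten", "kitchen"]
print(folklore_median(S, edit_distance))  # "kitten" has the smallest total distance
```

Restricting the candidate to the input set is what costs the factor 2; the question in the talk is whether strings (or rankings) admit something better.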

Biography

Diptarka Chakraborty received his PhD from the Indian Institute of Technology Kanpur under the supervision of Prof. Manindra Agrawal and Prof. Satyadev Nandakumar. After that, he spent two years at Charles University, Prague, Czech Republic as a postdoctoral fellow hosted by Prof. Michal Koucky, and then almost a year at the Weizmann Institute of Science, Israel as a postdoctoral fellow hosted by Prof. Robert Krauthgamer. He is currently an assistant professor at the National University of Singapore. He is interested in theoretical computer science; more specifically, algorithms on large data sets, streaming algorithms, string matching algorithms, graph algorithms, and data structures.

7. Recent Progress on Space-bounded Derandomization

Abstract

One of the great challenges in complexity theory is the BPL vs. L problem: To what extent is randomness necessary for space-efficient algorithms? It is widely believed that BPL = L, namely, that every randomized algorithm can be derandomized with only a small overhead in space. Fortunately, in contrast to the time-bounded case, there are no known barriers for the unconditional derandomization of BPL. In this talk, I will survey recent progress on the problem and explore a few recurring techniques and ideas. I will focus on a new result (joint with Gil Cohen and Ori Sberlo) that beats the Saks–Zhou derandomization result in the regime where a space-S algorithm tosses more than 2^S random coins.

Biography

Dean Doron is a faculty member at the Department of Computer Science at Ben-Gurion University. His research lies within the theory of computation, mainly computational complexity, randomness in computation, and explicit constructions. More specifically, he is interested in pseudorandomness, derandomization, and error-correcting codes. Prior to joining Ben-Gurion, he completed his PhD at Tel Aviv University, followed by postdoctoral positions at UT Austin and Stanford.

8. On Breaking Encryption with a Statistical Zero-knowledge Oracle

Abstract

Basing the security of encryption on NP-hardness is a longstanding challenge in the foundations of cryptography. Under standard complexity-theoretic assumptions, such encryption should remain secure even if the adversary has access to a statistical zero-knowledge (SZK) oracle, which is believed to be able to solve some intractable problems but not NP-complete ones. I will show some examples of classical and newer public-key encryption schemes that are vulnerable to SZK attacks, and some for which an SZK attack is not known.

Biography

Andrej Bogdanov is a Professor of Computer Science and Engineering and a member of the Institute of Theoretical Computer Science and Communications at the Chinese University of Hong Kong. His research interests are in cryptography, pseudorandomness, and sublinear-time algorithms. Andrej obtained his Ph.D. from UC Berkeley in 2005. Before joining CUHK in 2008, he was a postdoctoral researcher at the Institute for Advanced Study in Princeton, at DIMACS (Rutgers University), and at ITCS (Tsinghua University). He was a visiting professor at the Tokyo Institute of Technology in 2013 and at the Simons Institute for the Theory of Computing in 2017 and 2021.

9. Quadratic Multiparty Randomized Encodings Beyond Honest Majority and Their Applications

Abstract

Multiparty randomized encodings (Applebaum, Brakerski, and Tsabary, SICOMP 2021) reduce the task of securely computing a complicated multiparty functionality f to the task of securely computing a simpler functionality g. The reduction is non-interactive and preserves information-theoretic security against a passive (semi-honest) adversary, also referred to as privacy. The special case of a degree-2 encoding g (2MPRE) has recently found several applications to secure multiparty computation (MPC) with either information-theoretic security or making black-box access to cryptographic primitives. Unfortunately, as all known constructions are based on information-theoretic MPC protocols in the plain model, they can only be private with an honest majority.

In this work, we break the honest-majority barrier and present the first construction of general 2MPRE that remains secure in the presence of a dishonest majority. Our construction encodes every n-party functionality f by a 2MPRE that tolerates at most t=2n/3 passive corruptions.

We derive several applications including (1) The first non-interactive client-server MPC protocol with perfect privacy against any coalition of a minority of the servers and up to t of the n clients; (2) Completeness of 3-party functionalities under non-interactive t-private reductions; and (3) A single-round t-private reduction from general-MPC to an ideal oblivious transfer (OT). These positive results partially resolve open questions that were posed in several previous works. We also show that t-private 2MPREs are necessary for solving (2) and (3), thus establishing new equivalence theorems between these three notions. Finally, we present a new approach for constructing fully-private 2MPREs based on multi-round protocols in the OT-hybrid model that achieve perfect privacy against active attacks. Moreover, by slightly restricting the power of the active adversary, we derive an equivalence between these notions. This forms a surprising and quite unique connection between a non-interactive passively-private primitive to an interactive actively-private primitive.

Based on joint work with Yuval Ishai, Or Karni, and Arpita Patra.

Biography

Benny Applebaum is a Professor in the School of Electrical Engineering at Tel-Aviv University. Prior to that, he was a post-doctoral researcher at the Faculty of Mathematics and Computer Science of the Weizmann Institute of Science, and a post-doc at the Department of Computer Science of Princeton University. He received his PhD in 2007 from the Computer Science Department of the Technion under the supervision of Yuval Ishai and Eyal Kushilevitz. He is interested in the foundations of cryptography, as well as the practical aspects of computer security. He also has a broad interest in computational complexity and coding theory.

10. On the Robustness of CountSketch to Adaptive Inputs

Abstract

CountSketch is a popular dimensionality reduction technique that maps vectors to a lower dimension using randomized linear measurements while supporting the recovery of the L2-heavy hitters of the input vectors. We study the robustness of CountSketch in adaptive settings where input vectors may depend on the output from prior inputs. Adaptive settings arise in processes with feedback or with adversarial attacks. We show that:

(1) CountSketch itself is not robust, and can be attacked with a number of queries of the order of the sketch size.

(2) A slight variation of CountSketch is robust, allowing for a quadratic number of queries in the sketch size, which is an improvement factor of \sqrt{k} over prior work (for k heavy hitters).
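
For reference, here is a minimal (non-robust) CountSketch in Python; this is an illustrative toy with ad-hoc hash tables, not the robust variant from the work above:

```python
import random

class CountSketch:
    """Minimal CountSketch: d independent rows, each hashing coordinates
    into w buckets with a random sign; a coordinate is estimated by the
    median of its d signed bucket values. Toy hashing only (explicit
    random tables stand in for pairwise-independent hash families)."""
    def __init__(self, d, w, n, seed=0):
        rng = random.Random(seed)
        self.buckets = [[rng.randrange(w) for _ in range(n)] for _ in range(d)]
        self.signs = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(d)]
        self.table = [[0.0] * w for _ in range(d)]

    def update(self, i, delta):
        for r in range(len(self.table)):
            self.table[r][self.buckets[r][i]] += self.signs[r][i] * delta

    def estimate(self, i):
        vals = sorted(self.signs[r][i] * self.table[r][self.buckets[r][i]]
                      for r in range(len(self.table)))
        return vals[len(vals) // 2]  # median of the d per-row estimates

cs = CountSketch(d=5, w=64, n=1000, seed=1)
cs.update(7, 100.0)          # one heavy coordinate
for j in range(20):
    cs.update(j + 100, 1.0)  # light coordinates
print(cs.estimate(7))        # close to the true value 100
```

In the adaptive setting studied in the talk, an adversary who sees such estimates can learn the hash tables and craft inputs that break the guarantee, which is what result (1) formalizes.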

Based on a joint work with Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, and Moshe Shechner.

Biography

Uri Stemmer is a faculty member at the Blavatnik School of Computer Science at Tel-Aviv University. Previously, he was a faculty member at Ben-Gurion University, a postdoc at the Weizmann Institute of Science, and a postdoc at Harvard University. He completed his Ph.D. at Ben-Gurion University, where he was lucky to have Amos Beimel and Kobbi Nissim as his advisors. His research interests are privacy-preserving data analysis, computational learning theory, and algorithms.

Co-organizers

Assistant Professors, Center on Frontiers of Computing Studies, Peking University

About the Center

The Center on Frontiers of Computing Studies (CFCS), Peking University, was established in December 2017 as a new-model research institution of Peking University. It is co-directed by John Hopcroft, Turing Award laureate, recipient of the Chinese Government Friendship Award, foreign member of the Chinese Academy of Sciences, and Visiting Chair Professor at Peking University, and by Wen Gao, member of the Chinese Academy of Engineering and Boya Chair Professor at Peking University.

Standing at the international frontier of computer science, the center pursues cutting-edge research in the theory of computation, frontier computing methods, and interdisciplinary directions connecting computing with robotics, economics, art, sports, and other fields, aiming to become a computing research center of first-class international influence and a cross-disciplinary hub supporting computing applications.

The center fosters an open, international academic environment that helps young faculty grow into world-class scholars in computer science. Through initiatives such as the Turing Class talent program, it builds internationally advanced mechanisms for training talent in computing science and related interdisciplinary fields, cultivating outstanding future leaders for scientific development and industrial innovation in the new era.