Tutorials

Robert Frank (Yale University)

Title: The Formal Complexity of Natural Language Grammars Revisited: Perspectives from Neural Networks

Abstract: Chomsky (1957) initiated a research program seeking to identify a restricted class of formal grammars rich enough to characterize natural language. This program was motivated by learning (a restricted class of grammars limits the learner’s search space, potentially making learning easier) as well as by the desire to provide a formal account of linguistic universals: why languages pattern in some ways and not others. After reviewing where this program stands in light of the last 60 years of work, I will explore how it connects with current work on the representation of grammars in neural networks. I will discuss a number of interesting connections and divergences, and also describe topics that remain open and ripe for future work.
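As a toy illustration of the complexity distinctions at stake (an editorial sketch, not part of the talk): the pattern a^n b^n already separates grammar classes, since recognizing it needs an unbounded counter, which no finite-state device has but a restricted pushdown provides.

```python
# Illustrative sketch only: recognizing a^n b^n with a single counter,
# the kind of pattern that exceeds finite-state (regular) power but is
# comfortably context-free.

def in_anbn(w: str) -> bool:
    """Accept strings of the form a^n b^n (n >= 0)."""
    count = 0
    seen_b = False
    for ch in w:
        if ch == "a":
            if seen_b:            # an 'a' after a 'b' breaks the pattern
                return False
            count += 1
        elif ch == "b":
            seen_b = True
            count -= 1
            if count < 0:         # more b's than a's so far
                return False
        else:
            return False
    return count == 0

assert in_anbn("aaabbb") and not in_anbn("aabbb") and not in_anbn("abab")
```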

Prof. Frank’s slides

Robert Frank is Professor of Linguistics at Yale University. He received his PhD from the University of Pennsylvania (Computer and Information Science) and has taught at Johns Hopkins University (Cognitive Science) and the University of Delaware (Linguistics). His research explores models of language learning and processing and the role of computationally constrained grammar formalisms, especially Tree Adjoining Grammar, in linguistic explanation.

Gail Weiss (Technion)

Title: Formal Abstractions of Neural Sequence Models

Abstract: We will consider the relationship between popular neural sequence processing models, such as RNNs and Transformers, and formal models such as automata and their variants. In particular, we will discuss several extraction methods for RNNs, and the differences between various RNN architectures as understood through the lens of automata variants. We will then consider the more modern Transformer. In particular, we will show how it (doesn’t!) relate to existing formal classes, and propose an alternative abstraction in the form of a programming language.
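As a deliberately simplified sketch of the general idea behind automaton extraction (editorial illustration with hypothetical helper names; the extraction methods covered in the tutorial, such as query-based learning with abstraction refinement, are considerably more refined), one can discretize an RNN’s continuous hidden states into finitely many abstract states and record the transitions observed between them:

```python
import numpy as np

def extract_transitions(rnn_step, h0, alphabet, n_bins=4, max_len=8):
    """Sketch: build a finite transition map by discretizing RNN hidden states.

    rnn_step(h, sym) -> next hidden-state vector (numpy array); h0 is the
    initial state. Each continuous state is collapsed to an abstract state by
    binning its coordinates; transitions are recorded between abstract states.
    """
    edges = np.linspace(-1.0, 1.0, n_bins)

    def abstract(h):
        return tuple(int(i) for i in np.digitize(h, edges))

    transitions = {}
    frontier = [(h0, 0)]
    while frontier:
        h, depth = frontier.pop()
        if depth >= max_len:
            continue
        for sym in alphabet:
            h_next = rnn_step(h, sym)
            key = (abstract(h), sym)
            if key not in transitions:
                transitions[key] = abstract(h_next)
                frontier.append((h_next, depth + 1))
    return transitions   # a partial transition relation over abstract states
```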

Gail Weiss’ slides

Gail Weiss is a PhD student at the Technion in Israel, working with professors Eran Yahav and Yoav Goldberg. She completed her BSc in Computer Engineering at the Technion in 2016 summa cum laude. Her research focuses on the application of formal language theory to deep learning techniques, particularly those used in NLP.

Invited Speakers

Dana Fisman (Ben-Gurion University)

Title: Learning languages of infinite words

Abstract: Languages of infinite words (aka omega-languages) play an important role in the modeling, verification and synthesis of reactive systems. Such languages can be recognized by omega-automata. Several types of omega-automata are commonly used in the literature and in verification tools (e.g. Büchi, Rabin, Muller and parity). Some (e.g. Büchi) have the same syntactic structure as automata that read finite words. Still, learning them seems to be a much harder problem.

While DFAs can be learned in polynomial time in the query learning paradigm, so far there is no polynomial learning algorithm for the common omega-automata types. The quest to learn languages of infinite words has resulted in polynomial learning algorithms for less common automaton types (that can represent languages of infinite words). These learning questions relate to questions of succinctness and minimization for the different automata models.
In this talk, we will get acquainted with the world of infinite word languages, various representation types, and the related learning and succinctness results.
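As a small illustration of the acceptance condition involved (an editorial sketch, not from the talk): a Büchi automaton accepts an infinite word if some run visits an accepting state infinitely often, and for ultimately periodic words u·v^ω this can be checked by simulating copies of v until the automaton’s state repeats.

```python
# Illustrative sketch: acceptance of an ultimately periodic word u.v^omega by a
# *deterministic* Büchi automaton given as a transition dict. The word is
# accepted iff an accepting state is visited infinitely often, i.e. inside the
# cycle that the run eventually enters.

def accepts_lasso(delta, q0, accepting, u, v):
    q = q0
    for a in u:                       # read the finite prefix u
        q = delta[(q, a)]
    first_seen = {}                   # state at the start of a copy of v -> copy index
    hits = []                         # hits[i]: did copy i of v visit an accepting state?
    while q not in first_seen:
        first_seen[q] = len(hits)
        hit = False
        for a in v:
            q = delta[(q, a)]
            hit = hit or (q in accepting)
        hits.append(hit)
    return any(hits[first_seen[q]:])  # only copies inside the cycle matter

# Example: the property "infinitely many a's" over the alphabet {a, b}.
delta = {("s", "a"): "acc", ("s", "b"): "s",
         ("acc", "a"): "acc", ("acc", "b"): "s"}
print(accepts_lasso(delta, "s", {"acc"}, "b", "ab"))  # True:  b(ab)^omega
print(accepts_lasso(delta, "s", {"acc"}, "ab", "b"))  # False: ab(b)^omega
```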

Prof. Fisman’s slides

Dana Fisman is a faculty member in the Computer Science Department at Ben-Gurion University. Before that she was a research scientist at the University of Pennsylvania, the Associate Director of the NSF expedition ExCAPE on system synthesis, and a visiting fellow at Yale University. She did her PhD at the Weizmann Institute of Science and worked for many years in industry, at IBM Haifa Research Labs and at Synopsys Inc. Dana’s research interests are in the area of formal methods in system design, automata and logic. She is best known for her work on PSL, the IEEE standard Property Specification Language, for which she received numerous awards from IEEE, IBM and Synopsys.

Guillaume Rabusseau (Université de Montréal)

Title: Connections between Weighted Automata, Spectral Learning and Tensor Networks

Abstract: In this talk, I will present fundamental connections between weighted automata, spectral learning and tensor networks. Tensor network methods have been a key ingredient of advances in condensed matter physics and have recently sparked interest in the machine learning community for their ability to compactly represent very high-dimensional objects. I will show how weighted automata are equivalent to the so-called uniform matrix product state models from quantum physics and the tensor train decomposition used in machine learning and numerical analysis. I will also briefly present how this equivalence extends to connections with recurrent neural networks and context-free grammars. Building upon these connections, I will introduce a novel view of the spectral learning algorithm for weighted automata using the tensor network perspective and discuss promising research directions leveraging these connections for learning with structured data.
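As a minimal numerical sketch of the equivalence mentioned above (editorial illustration, not from the talk): a weighted automaton assigns to a word x_1...x_n the value alpha^T A_{x_1}...A_{x_n} omega, and evaluating it is exactly the left-to-right contraction of a uniform matrix product state / tensor train with one core per symbol.

```python
import numpy as np

# A 2-state weighted automaton over {a, b} that counts occurrences of 'a'.
alpha = np.array([1.0, 0.0])                  # initial weight vector
omega = np.array([0.0, 1.0])                  # final weight vector
A = {
    "a": np.array([[1.0, 1.0], [0.0, 1.0]]),  # transition weight matrices
    "b": np.array([[1.0, 0.0], [0.0, 1.0]]),
}

def wfa_value(word):
    """Contract the chain of matrices left to right (the MPS/TT contraction)."""
    state = alpha
    for sym in word:
        state = state @ A[sym]
    return state @ omega

print(wfa_value("abab"))   # 2.0: the word contains two a's
```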

Guillaume Rabusseau’s slides

Guillaume Rabusseau is an assistant professor at Université de Montréal and holds a Canada CIFAR AI Chair at the Mila research institute. Prior to joining Mila, he was an IVADO postdoctoral research fellow in the Reasoning and Learning Lab at McGill University, where he worked with Prakash Panangaden, Joelle Pineau and Doina Precup. He obtained his PhD in computer science in 2016 at Aix-Marseille University under the supervision of François Denis and Hachem Kadri. His research interests lie at the intersection of theoretical computer science and machine learning, and his work revolves around exploring interconnections between tensors and machine learning to develop efficient learning methods for structured data, relying on linear and multilinear algebra.

C. Lee Giles (Pennsylvania State University)

Title: Grammatical Inference, Memory and Neural Networks

Abstract: To be given

Dr. C. Lee Giles is the David Reese Professor at the College of Information Sciences and Technology at the Pennsylvania State University, University Park, PA. He is also graduate college Professor of Computer Science and Engineering, courtesy Professor of Supply Chain and Information Systems, and Director of the Intelligent Systems Research Laboratory. He recently became a Teaching and Learning Technology Fellow and the Interim Associate Dean of Research for IST. He directs the Next Generation CiteSeer project, CiteSeerX, and co-directs the ChemXSeer project at Penn State. He has been associated with Columbia University, the University of Maryland, the University of Pennsylvania, Princeton University, and the University of Trento.