**Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Igor Khmelnitsky, Martin Leucker, Daniel Neider, Rajarshi Roy and Lina Ye**

**Video:**

**Link to the paper:** here

**Abstract:** This paper (i) presents an active learning algorithm for visibly pushdown grammars and (ii) shows its applicability to learning surrogate models of recurrent neural networks (RNNs) trained on context-free languages. Such surrogate models may be used for verification or explainability. Our learning algorithm makes use of the proximity of visibly pushdown languages and regular tree languages and builds on an existing learning algorithm for regular tree languages. Equivalence tests between a given RNN and a hypothesis grammar rely on a mixture of A* search and random sampling. An evaluation of our approach on a set of RNNs from the literature shows good preliminary results.
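As a rough illustration of the random-sampling half of the equivalence test mentioned in the abstract, the sketch below searches for a word on which an RNN and a hypothesis grammar disagree. All names are hypothetical, and the "RNN" is a toy stand-in for a trained network; the paper additionally uses A* search, which is not shown here.

```python
import random

def find_counterexample(rnn_accepts, grammar_accepts, alphabet,
                        n_samples=1000, max_len=20, seed=None):
    """Randomly sample words and look for one on which the RNN and the
    hypothesis grammar disagree; return it, or None if none is found."""
    rng = random.Random(seed)
    for _ in range(n_samples):
        length = rng.randint(0, max_len)
        word = "".join(rng.choice(alphabet) for _ in range(length))
        if rnn_accepts(word) != grammar_accepts(word):
            return word  # counterexample used to refine the hypothesis
    return None  # no disagreement found within the sampling budget

# Toy stand-in for a trained RNN: accepts well-nested words over a call
# symbol 'a' and a return symbol 'b' (a simple visibly pushdown language).
def toy_rnn(word):
    depth = 0
    for c in word:
        depth += 1 if c == "a" else -1
        if depth < 0:
            return False
    return depth == 0

# A deliberately imprecise hypothesis: equal numbers of 'a' and 'b'.
def toy_hypothesis(word):
    return word.count("a") == word.count("b")

cex = find_counterexample(toy_rnn, toy_hypothesis, "ab", seed=0)
# words such as 'ba' or 'abba' witness the disagreement: equal letter
# counts, but not well-nested
```

Random sampling alone can miss rare counterexamples, which is why a directed search such as A* is a useful complement.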

Any **question or comment** can be posted using the comment section of the page below.

Very nice talk! I really like the idea of learning a VPA by reducing it to learning a tree automaton. That seems like a very elegant way of approaching the problem. Did you compare this algorithm with the direct algorithms for learning VPAs (such as the one by Malte Isberner)? Do you expect a difference in the number of queries they require?

Thank you very much for the kind feedback and the interesting question!

Unfortunately, we do not yet have a practical comparison with the more direct approaches. These direct algorithms are indeed likely to be suitable for the task of grammar extraction from the RNNs, too. Depending on the target language, they may require fewer queries since the canonical object they learn is potentially smaller. It would be very worthwhile to perform an experimental evaluation of the number of queries required by these algorithms with respect to the VPLs considered in this work.

Interesting talk! It makes me think of related work that does not need tree automata:

– Grammatical inference for even linear languages based on control sets, Yuji Takada, https://doi.org/10.1016/0020-0190(88)90208-6

– A characterization of even linear languages and its application to the learning problem, Jose M. Sempere and Pedro García

– The grammatical inference problem for the Szilard languages of linear grammars, Erkki Mäkinen

Is there an equivalence between even linear CFGs and VPAs, or what would be the difference?

Thank you very much for the pointers, which look very interesting!

It seems that ELLs and VPLs are incomparable, as witnessed by the set of palindromes (which is not a VPL) and the Dyck language (which is not an ELL). The rules of visibly pushdown grammars and even linear grammars are similar and differ in only two aspects: visibly pushdown grammars allow non-linear rules, but restrict where terminal symbols may occur. It would therefore be interesting to see whether learning techniques from one class can be transferred to the other to obtain algorithms for richer classes of grammars.
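As a rough side-by-side of the two rule formats (a sketch, not taken from the paper: the even linear form is the textbook one, and the visibly pushdown form follows the common grammar characterization of VPLs, with $a$ a call symbol and $b$ a return symbol):

```latex
\begin{align*}
\text{even linear:} \quad & A \to u\,B\,v \;\; (|u|=|v|)
  \qquad\text{or}\qquad A \to w
  && (u, v, w \text{ terminal strings})\\
\text{visibly pushdown:} \quad & X \to \varepsilon
  \qquad X \to aY \qquad X \to a\,Y\,b\,Z
  && (a \text{ a call},\; b \text{ a return})
\end{align*}
```

The even linear rules are linear (at most one nonterminal on the right) but place no typing constraints on terminals, whereas the visibly pushdown rules may branch into two nonterminals but tie every push/pop of the stack to the visible call/return type of the symbol read.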
