**Ryuta Kambe, Naoki Kobayashi, Ryosuke Sato, Ayumi Shinohara and Ryo Yoshinaka**

**Video:** (if the video does not load within this page, click on its top right “external window” button to watch it in another window)

**Link to the paper:** here

**Abstract:** We propose an inside-outside algorithm for stochastic macro grammars. Our approach is based on types, inspired by type-based approaches to reasoning about functional programs and higher-order grammars. By considering type derivations instead of ordinary word derivation sequences, we can naturally extend the standard inside-outside algorithm for stochastic context-free grammars to stochastic macro grammars. We have implemented the algorithm and confirmed its effectiveness through an application to the learning of macro grammars.
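To make the starting point concrete, here is a minimal sketch of the inside pass of the standard inside-outside algorithm for a stochastic CFG in Chomsky normal form, which the abstract says is extended to macro grammars. The grammar, rule encodings, and function names below are illustrative assumptions, not code from the paper.

```python
from collections import defaultdict

def inside_probs(words, unary, binary):
    """Compute inside probabilities for a stochastic CFG in CNF.

    unary:  {(A, terminal): prob}  for lexical rules A -> w
    binary: {(A, B, C): prob}      for rules A -> B C
    Returns beta, where beta[(i, j)][A] = P(A derives words[i:j]).
    """
    n = len(words)
    beta = defaultdict(lambda: defaultdict(float))
    # Base case: spans of length 1, covered by lexical rules.
    for i, w in enumerate(words):
        for (A, t), p in unary.items():
            if t == w:
                beta[(i, i + 1)][A] += p
    # Inductive case: sum over all split points and binary rules.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            for k in range(i + 1, j):
                for (A, B, C), p in binary.items():
                    left = beta[(i, k)].get(B, 0.0)
                    right = beta[(k, j)].get(C, 0.0)
                    if left and right:
                        beta[(i, j)][A] += p * left * right
    return beta

# Toy grammar: S -> S S (0.4) | a (0.6).
# P(S derives "aa") = 0.4 * 0.6 * 0.6 = 0.144.
unary = {("S", "a"): 0.6}
binary = {("S", "S", "S"): 0.4}
beta = inside_probs(list("aa"), unary, binary)
```

The paper's contribution is that, by replacing word derivation sequences with type derivations, an analogous dynamic program can be formulated for stochastic macro grammars.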

Any **question or comment** can be made using the comment section of the page below.

Interestingly, your experiments show that your approach can learn the non-stochastic part of stochastic macro grammars.

I have the impression that you could do this without using inside-outside and obtain an algorithm that learns macro grammars from only the presence and absence of substrings (?), and not their counts, in a way similar to the two previous talks. Am I completely wrong, or is this something that could be investigated?

Thank you for asking. I’m afraid I don’t know whether we can get an algorithm in a similar way to the two previous talks. There may be different approaches for learning (non-stochastic) macro grammars and they could be investigated further, but our experiments were intended to confirm whether we can get optimal probability assignments of stochastic macro grammars through our algorithm.

Another way of asking the question is: do you understand why you are able to learn these languages with your algorithm?

In particular, do you actually prune rules with non-zero weights?

Yes, we removed rules whose probabilities fall below a certain threshold.
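For illustration, such threshold pruning amounts to a simple filter over the estimated rule probabilities. The rule encoding and threshold value below are assumptions for the sketch, not the authors' actual code or settings.

```python
def prune(rule_probs, threshold=1e-3):
    """Keep only rules whose estimated probability meets the threshold.

    rule_probs: {rule: probability}, e.g. after EM has converged.
    """
    return {r: p for r, p in rule_probs.items() if p >= threshold}

# Toy example: the near-zero rule "S->b" is dropped.
pruned = prune({"S->SS": 0.4, "S->a": 0.5995, "S->b": 0.0005})
```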

OI macro grammars are known to be equivalent to indexed grammars. Can you think of a way to approach stochastic indexed grammars based on this work?

An interesting question. Do you know whether *stochastic* OI macro grammars are equivalent to *stochastic* indexed grammars?

If the answer is yes, and there is a constructive proof of the equivalence, you can certainly obtain stochastic indexed grammars from our algorithm. Otherwise, I am not sure.

I don’t even know whether stochastic indexed grammars have been considered before. I don’t think there would be a direct translation between the two stochastic formalisms. The stochastic part wouldn’t correspond. I imagine the situation would be similar to what happens with, e.g., left-corner transform of CFGs.

(Maybe it’s somewhat easier to work with OI context-free tree grammars, rather than OI macro grammars, as the starting point.)