Went better than I expected, but still lost quite a few marks.
Question 1 - 16/20
This question had its own separate answer sheet - 100 +s and 100 Os on a grid, to draw classifiers on. Started with a simple linear classifier - full marks. Then a decision tree - full marks. A proof about Bayes' Decision Rule on arbitrary-dimensional Gaussians, luckily the one I'd forced myself to work through yesterday - full marks. Last part (6 marks) was sketching the decision boundary, which I managed to screw up despite producing pages of calculations for it.
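For the record, the result the proof turns on (reconstructed from memory, so this is my sketch rather than the exam's exact statement): with class-conditional Gaussians, Bayes' decision rule compares posteriors, which in log form gives one discriminant per class,

g_k(x) = -\frac{1}{2}(x - \mu_k)^T \Sigma_k^{-1} (x - \mu_k) - \frac{1}{2}\ln|\Sigma_k| + \ln P(\omega_k),

and the decision boundary is where two discriminants are equal - quadratic in general, linear when the covariances are shared.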
Question 3 - 13/20
Hidden Markov Models and Baum-Welch training. Could only think of two assumptions for the first part. Made a minor notation mistake in one formula in the second part, and added a spurious extra term to another, which was rather more serious. Some handwaving in the last two parts. It was a 50-50 choice between this question and Q2 after the read-through at the start.
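For anyone wondering, the standard pieces (my own sketch, not necessarily what the mark scheme wanted): the two usual HMM assumptions are that the state sequence is first-order Markov and that each observation depends only on the state that emitted it, and the Baum-Welch formulae are built on the state occupancy probability

\gamma_t(j) = \frac{\alpha_t(j)\,\beta_t(j)}{P(O \mid \lambda)},

with, for example, the mean re-estimated as \hat{\mu}_j = \sum_t \gamma_t(j)\, o_t \big/ \sum_t \gamma_t(j).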
Question 4 - 14/20
Language models in recognisers. Should have got full marks on the first part for explaining P(W|O) in terms of the acoustic and language models. Full marks on the second part about scaling. Wasn't entirely sure about some of the discounting/backoff points in part 3, but did remember Good-Turing discounting. The last bit was a handful of marks on perplexity, which I should have got most/all of.
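The decomposition in question is standard enough that I can write it out from memory (my notation, not necessarily the paper's):

\hat{W} = \arg\max_W P(W \mid O) = \arg\max_W p(O \mid W)\, P(W),

with p(O|W) from the acoustic model and P(W) from the language model; and perplexity over a test set of N words is

PP = P(w_1, \dots, w_N)^{-1/N} = 2^H, \quad H = -\frac{1}{N}\sum_{i=1}^{N}\log_2 P(w_i \mid w_1, \dots, w_{i-1}).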
Question 5 - 17/20
Search/token passing decoding in LVCSR. Spent nearly a page on the algorithm and beam pruning and discussed tree-based lexicons. Decent explanations for word-internal/cross-word triphones, syllables and a trigram model. Have probably lost some marks, but no idea where.
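Roughly the loop I described, as a from-memory Python sketch rather than anything on the paper (the names - token_passing, emit_logprob, beam - are mine):

```python
import math
from collections import defaultdict

def token_passing(states, transitions, emit_logprob, observations, beam=10.0):
    """Viterbi-style token passing with beam pruning (illustrative sketch).

    states: iterable of state ids (one must be "start");
    transitions: dict mapping state -> list of (next_state, log transition prob);
    emit_logprob(state, obs) -> log output probability.
    """
    # One token per state, holding the best log probability of reaching it.
    tokens = {s: 0.0 for s in states if s == "start"}
    for obs in observations:
        new_tokens = defaultdict(lambda: -math.inf)
        for state, logp in tokens.items():
            for nxt, trans_logp in transitions.get(state, []):
                score = logp + trans_logp + emit_logprob(nxt, obs)
                if score > new_tokens[nxt]:
                    new_tokens[nxt] = score  # keep only the best token per state
        if not new_tokens:
            break
        # Beam pruning: drop tokens more than `beam` below the current best.
        best = max(new_tokens.values())
        tokens = {s: p for s, p in new_tokens.items() if p > best - beam}
    return tokens
```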
So, am predicting 75% overall. A good solid pass, maybe even good enough for a PhD, but I should have been able to do better.
Now just to learn and worry about Module 1B for tomorrow...