Machine learning [1] has recently been applied to build effective algorithms for problems in the verification and synthesis of systems. The first part of the tutorial covers basic machine-learning concepts, including the different learning problems (in particular, classification), various learning models, and known efficient learning algorithms. The second part of the tutorial focuses on how these learning algorithms have recently been adapted [2,3] to solve problems in program verification, with a focus on invariant synthesis. In particular, we will explore the new learning models required for invariant generation and for the synthesis of program expressions, and how traditional machine-learning algorithms can be adapted to these new models. We will also present a concrete learning algorithm that learns Boolean combinations of Boolean predicates and numerical inequalities, which researchers can use to build their own invariant-synthesis engines.
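To make the flavor of such algorithms concrete, here is a minimal Python sketch of the elimination algorithm for learning a conjunction of predicates, extended Houdini-style to handle implication counterexamples as in the ICE framework [2]. The function name, predicate dictionary, and sample format are illustrative assumptions for this sketch, not artifacts of the tutorial.

```python
# Sketch: elimination-style learning of a conjunctive invariant from
# positive examples and implication pairs (ICE samples).
# All names here (learn_conjunction, the dict-based state format) are
# illustrative assumptions, not the tutorial's actual interface.

def learn_conjunction(predicates, positives, implications):
    """predicates:   dict mapping a name to a function(state) -> bool
    positives:    states the invariant must satisfy
    implications: (pre, post) state pairs; if the learned conjunction
                  holds at pre, it must also hold at post
    Returns the names of the predicates kept in the conjunction."""
    # Start with the conjunction of all candidate predicates
    # (the strongest hypothesis).
    kept = set(predicates)

    def holds(state):
        return all(predicates[p](state) for p in kept)

    # Elimination step: drop every predicate falsified by a positive example.
    for s in positives:
        kept = {p for p in kept if predicates[p](s)}

    # Houdini-style fixpoint: an implication whose antecedent is satisfied
    # forces its consequent to be treated like a positive example.  Dropping
    # predicates only weakens the conjunction, so we re-check until stable.
    changed = True
    while changed:
        changed = False
        for pre, post in implications:
            if holds(pre) and not holds(post):
                kept = {p for p in kept if predicates[p](post)}
                changed = True
    return kept
```

For example, with candidate predicates x >= 0, y >= 0, and x <= y, a positive sample keeps all three, while an implication pair whose post-state violates x <= y eliminates that conjunct at the fixpoint stage.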

Structure of Tutorial

  • Introduction: Machine learning, the classification problem, invariant synthesis, and ICE
  • Classical classification learning algorithms
  • Learning conjunctions vs disjunctions; k-CNF; k-DNF
  • Conjunctions of predicates, Elimination algorithm, Complexity
  • Linear Threshold Functions: Encoding conjunctions as LTFs, learning small conjunctive formulas with the Winnow algorithm, the Perceptron, SVMs
  • Decision trees over Boolean attributes
  • Decision trees over numerical and Boolean attributes
  • Adapting classical algorithms to invariant synthesis: ICE and convergence
  • Extending the elimination algorithm for conjunctions to ICE; Houdini
  • Extending decision-tree learning to ICE
  • Making decision-tree learning converge, for both (+,-) samples and ICE samples
  • Open problems
  • Some invariant synthesis results
  • Demo of ICE-DT, a tool for ICE learning using decision trees
  • The broader perspective: applying machine learning to more general synthesis problems; challenges.
References

    [1] T. M. Mitchell. Machine learning. McGraw Hill series in Computer Science. McGraw-Hill, 1997.

    [2] Pranav Garg, Christof Löding, P. Madhusudan, and Daniel Neider. ICE: A Robust Learning Framework for Synthesizing Invariants. CAV 2014.

    [3] Pranav Garg, Daniel Neider, P. Madhusudan, and Dan Roth. Learning Invariants using Decision Trees and Implication Counterexamples. July 2015.

    [4] J. R. Quinlan. Induction of Decision Trees. Machine Learning, Vol. 1, Issue 1, 1986.

    [5] Nick Littlestone. Learning Quickly When Irrelevant Attributes Abound: A New Linear-threshold Algorithm. Machine Learning, 2(4):285–318, 1988.