
% aims

% * raw material for proposal

% * properties of Isabelle relevant for project context

% * comparison to other TP systems – why Isabelle?

% * communication of Isabelle to non-TP-people

\section{Mission}\label{mission}

We give an overview of the interactive proof assistant Isabelle/HOL \cite{Nipkow-Paulson-Wenzel:2002},
with a particular focus on properties relevant to our envisaged project context.

When reading this, bear in mind that the aim of this project is to
provide a running prototype environment. This requires focusing on
one particular TP system. It does not imply that the issues we want
to investigate could not be transferred to other TP systems as well, but
without the experience gained from a prototype it is too early to
generalise; this could, however, be a focus of a future successor project.

A baseline is that only \emph{interactive} proof assistants come into consideration.

This rules out the vast collection of automatic provers, but also bare

programming languages like Agda \cite{adga-phd-07}. Typical representatives of
interactive proof assistants are the Coq system \cite{coq-team-10},
ACL2 \cite{moore:acl2-00}, Mizar\footnote{\url{http://mizar.org/}}, the HOL
family\footnote{This covers both HOL4 \cite{Gordon:93} and HOL Light \cite{harrison:hol-light}.}, and Isabelle/HOL.

A realistic survey of the relevant properties of existing TP
systems led us to the conclusion that Isabelle/HOL (from now on, just Isabelle)
is the most promising one to start with.

\section{How Isabelle/HOL presents itself}

\subsection{Calculus}\label{calculus}

The logical calculus of Isabelle is higher-order logic (HOL) of simply-typed

schematically polymorphic $\lambda$-terms:

\begin{description}

\item[types $\tau$] consist of \emph{type constructors}

$\kappa$ with a fixed arity and type variables $\alpha$:

\[

\tau ::= \kappa \ \tau_1 \cdots \tau_k \ \mid \ \alpha

\]

Function space $\alpha \Rightarrow \beta$ is simply a binary type

constructor.

\item[terms $t$] include application, abstraction,

(local) variables of a particular type, and constants (operations):

\[

t ::= t_1 \ t_2 \ \mid \ \lambda x :: \tau. \ t \ \mid \ x :: \tau \ \mid \ f

\]

\item[formulae $P$] are terms of a distinguished type \emph{bool}

with the usual connectives: conjunction, disjunction, implication,

negation, universal and existential quantifiers. By convention,

chained implications at the outermost level are printed as inference rules:

$Q_1 \Longrightarrow \cdots \Longrightarrow Q_n \Longrightarrow P$ is printed

as

\begin{simplegather}

\infer{P}{Q_1 && \cdots && Q_n}

\end{simplegather}

A constant (operation) of particular relevance for formulae

is equality $t_1 = t_2$ of type $\alpha \Rightarrow \alpha \Rightarrow$

\emph{bool}.

\end{description}
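For readers with a functional programming background, the grammar of types and terms above can be sketched as ordinary algebraic datatypes. The following Haskell fragment is a hypothetical illustration only; the constructor names are ours and do not reflect Isabelle's actual implementation:

```haskell
-- Sketch of the type and term grammar above (hypothetical names).
data Ty = TyCon String [Ty]   -- type constructor kappa applied to arguments
        | TyVar String        -- type variable alpha
        deriving (Eq, Show)

data Term = App Term Term       -- application  t1 t2
          | Abs String Ty Term  -- abstraction  \x::tau. t
          | Var String Ty       -- (local) variable  x :: tau
          | Const String        -- constant (operation)  f
          deriving (Eq, Show)

-- Function space alpha => beta is simply a binary type constructor:
fun :: Ty -> Ty -> Ty
fun a b = TyCon "fun" [a, b]

-- Formulae are terms of the distinguished type bool:
bool :: Ty
bool = TyCon "bool" []
```

For instance, `fun (TyVar "a") bool` represents the type of predicates $\alpha \Rightarrow$ \emph{bool} used for set comprehensions below.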

On top of this core calculus, Isabelle provides various predefined

type constructors with corresponding operations that resemble concepts

well-known from mathematics and functional programming:

\begin{description}

\item[numeric types \emph{nat}, \emph{int}, \emph{rat}, \emph{real}, \emph{complex}]

modelling the natural, integer, rational, real and complex numbers

with the usual arithmetic operations;

\item[type of sets \emph{$\alpha$ set}] modelling sets as comprehensions

over predicates of type \emph{$\alpha \Rightarrow$ bool} with typical

operations like union, intersection, difference, set product, image

of a set under an operation, etc.;

\item[fundamental collection types \emph{unit}, $\alpha * \beta$, \emph{$\alpha$ option}, \emph{$\alpha$ list}]

representing the common unit, product, option and list types from functional programming

with dozens of operations similar to what can be found in typical functional

programming libraries: membership test, concatenation, reversal, sorting, etc.

\item[advanced data structures, e.g. red-black trees \emph{$(\alpha, \beta)$ rbt}]

with the typical lookup, insert and delete operations

providing efficient data structures for functional programs embedded

in Isabelle (see \secref{code-generation}).

\end{description}

Isabelle also comes with syntactic sugar for \emph{if}-, \emph{let}- and

\emph{case}-expressions known from functional programming, which are

internally mapped to suitable combinators.

\subsection{Specification and proof language Isar}\label{Isar}

To conduct formal developments, Isabelle uses the specification

and proof language \emph{Isar}. An Isar text is

structured as a series of Isar commands, each consisting of a

particular keyword followed by text denoting logical and non-logical

entities (types, terms, names, \ldots). Isar commands are aggregated

in theories, where one theory's content is given by the Isar commands

it contains: commands add type constructors, constants, proofs etc.

to the theory.

Typical representatives of Isar commands are specification tools like the following:

\begin{description}

\item[Inductive datatypes] correspond to datatype declarations

in functional programming languages:

\isacommand{datatype}\isamarkupfalse%

\ color\ {\isacharequal}\ R\ {\isacharbar}\ B\isanewline

\isacommand{datatype}\isamarkupfalse%

\ {\isacharparenleft}$\alpha${\isacharcomma}\ $\beta${\isacharparenright}\ rbt\ {\isacharequal}\ Empty\ {\isacharbar}\ Branch\ color\ ({\isachardoublequoteopen}{\isacharparenleft}$\alpha${\isacharcomma}\ $\beta${\isacharparenright}\ rbt{\isachardoublequoteclose})\ $\alpha$\ $\beta$\ ({\isachardoublequoteopen}{\isacharparenleft}$\alpha${\isacharcomma}\ $\beta${\isacharparenright}\ rbt{\isachardoublequoteclose})\isanewline

\item[Recursive functions with pattern matching] correspond to function

definitions in functional programming languages:

\isacommand{fun}\isamarkupfalse%

\ splice\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequoteopen}$\alpha$\ list\ {\isasymRightarrow}\ $\alpha$\ list\ {\isasymRightarrow}\ $\alpha$\ list{\isachardoublequoteclose}\ \isakeyword{where}\isanewline

\ {\isachardoublequoteopen}splice\ {\isacharbrackleft}{\isacharbrackright}\ ys\ {\isacharequal}\ ys{\isachardoublequoteclose}\ {\isacharbar}\isanewline

\ {\isachardoublequoteopen}splice\ xs\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ xs{\isachardoublequoteclose}\ {\isacharbar}\isanewline

\ {\isachardoublequoteopen}splice\ {\isacharparenleft}x\ {\isacharhash}\ xs{\isacharparenright}\ {\isacharparenleft}y\ {\isacharhash}\ ys{\isacharparenright}\ {\isacharequal}\ x\ {\isacharhash}\ y\ {\isacharhash}\ splice\ xs\ ys{\isachardoublequoteclose}%

TP experts may note that recursive function definitions within
a formal calculus require complex machinery to construct termination
proofs and the like. The good news is that end users seldom need
to care about this: Isabelle's function package does the job
for a huge class of practically occurring functions
\cite{nipkow-krauss:07-find-lex-orders,Krauss:07-size-terminat}.

\end{description}
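The \isacommand{fun} definition of \emph{splice} above carries over almost verbatim to a functional programming language. The following Haskell version is a sketch mirroring the three defining equations (written by hand, not produced by Isabelle's code generator):

```haskell
-- Haskell analogue of the Isabelle 'fun splice' definition:
-- the three defining equations carry over directly.
splice :: [a] -> [a] -> [a]
splice []       ys       = ys
splice xs       []       = xs
splice (x : xs) (y : ys) = x : y : splice xs ys
```

E.g. `splice [1,2,3] [4,5]` interleaves the two lists and appends the surplus elements of the longer one.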

Note that Isabelle's calculus as presented here resembles an ML-like

programming language enriched with logical concepts like sets and quantifiers:

datatype and function definitions enable a user to extend the calculus with new notions

in the manner of a functional programming language, and state properties

about those newly introduced concepts. In other words, Isabelle can be
seen as a carrier for a shallowly embedded functional programming language;

this observation, condensed into the slogan »HOL = functional programming +

logic«, has a couple of consequences:

\begin{itemize}

\item People with a (functional) programming background get acquainted
with Isabelle comparatively fast, and most existing introductions to Isabelle
follow that track \cite{nipkow-prog-prove}.

\item Algorithmic issues can be expressed directly in a functional style

in Isabelle, which eases transfer of algorithms from CA to Isabelle.

\item Formulations in the embedded functional programming language
can be translated into code for common functional programming languages
using code generation (see \secref{code-generation}).

\end{itemize}

\fixme[elaborate: comparison to other systems]

Proofs themselves, the Holy Grail of TP systems, are conducted in a fragment
of the Isar language, the Isar \emph{proof} language. This is not a tutorial
on how to write Isar proofs, and in our project the envisaged end users, mainly
engineers, are not expected to come into contact with fundamental proofs at all,


% Proofs of within domain-specific knowledge might become the duty of so-called

% domain-engineers: they do not only provide algorithms solving particular

% domain-specific problems, rather, they start with formalizing the essential

% notions of the domain by definitions, theorem and proofs

% \cite{db:SW-engIII,db:dom-eng,Hansen94b,Dehbonei&94}.

%

as this is rather the task of the system implementers; therefore we will
not dive deeper here. Nevertheless, the ability to write human-readable and
robust proofs in Isar is a key asset for maintaining voluminous formal
developments in the mid and long term.\footnote{An ever-growing collection
of Isabelle developments is maintained in the Archive of Formal Proofs,
\url{http://afp.sourceforge.net/topics.shtml}; adjustments because of
proofs broken by changes in the system have become rare over the years.}

%% ^^^ relates to pt.3 in TPPL-abstract.tex in particular:

%% with this go into detail in \sect{Methods}

\fixme[elaborate: comparison to other systems]

\subsection{A bluffer's glance at Isabelle's architecture}\label{architecture}

This short overview provides the context to understand the subsequent technical

details.

\begin{center}

\includegraphics[width=0.5\linewidth]{fig/fig-isabelle-architecture}

\end{center}

\begin{itemize}

\item Interaction with the system nowadays happens through
an IDE (see \secref{jedit}) which is implemented
in Scala.\footnote{\url{http://www.scala-lang.org}; Scala combines an industry-proven
runtime environment, the Java Virtual Machine (JVM), with a state-of-the-art
type-safe general-purpose programming language.}

\item The interactive prover itself acts as a backend to the IDE
and is implemented in Standard ML. The languages of the ML family are standard workhorses
for interactive provers; Isabelle prefers Poly/ML, which excels especially
in parallelism (see \secref{parallel}).

\item External non-interactive provers can be hooked to the core prover

(see \secref{external}).

\item Code generation makes it possible to turn suitable specifications
into programs (see \secref{code-generation}).

\end{itemize}

\subsection{The jEdit user interface}\label{jedit}

Traditionally, TP systems have a reputation for being difficult to access.

Recent developments in Isabelle attempt to overcome this by providing

a state-of-the-art editor framework (jEdit) with a plugin establishing

an IDE-like feeling when editing Isabelle theories:

\begin{center}

\includegraphics[width=0.7\linewidth]{fig/jedit}

\end{center}

The key idea to make this work is continuous proof checking: the proof assistant
annotates the theory source text with semantic information as it becomes
available incrementally. This works via an asynchronous

protocol that neither blocks the editor nor stops the proof assistant from

exploiting parallelism on multi-core hardware

\cite{makarius:isa-scala-jedit,makarius-jEdit-12}.

User interface support is vital to our project aims. The traditional

users of TP systems have mostly been scientists engaging in logic or

verification with a strong background in computer science, who would

find their individual ways to cope with the arcane TP system interfaces.

If we want to target engineers, we cannot expect them to develop
similar workarounds; instead we must strive to provide a system with a user
interface similar to what one can expect today from computing tools.

Since Isabelle's jEdit interface is JVM-based, existing visualisation

tools in Java, e.g. for drawing function graphs, could be incorporated

if this turns out to be necessary.

The continuous proof-checking approach, with all its consequences,
is currently unique to Isabelle.

\section{System architecture}

\subsection{Reconciling trustability and extensibility – the LCF approach}

Isabelle is a proof assistant in the LCF tradition: each proof or theorem

is internally represented by a value of an abstract type \verb!thm! in

the underlying (type-safe!) implementation language which can only be

manipulated by a few trusted primitive operations, the so-called

inference kernel \cite{meta-ML-79}. Therefore any construction of proofs

in the run-time system boils down to operations from the inference kernel.

This results in high trustworthiness. What makes this rigorous approach scale
is that there is no fundamental distinction between operations or theorems
which are built-in in the sense of the inference kernel, and derived ones. This
makes it possible to derive operations and corresponding proof rules in terms
of existing ones and to ignore their construction later on. For example,
existential quantification $\exists$\footnote{Existential quantification
itself is indeed not built-in but a derived concept.}
comes with two natural-deduction-style \citeme inference rules:

\begin{simplegather}

\infer[{ex}_{intro}]{\exists x. P \: x}{P \: t} \quad

\infer[{ex}_{elim}]{R}{\exists x. P \: x && (\forall x. P \: x \Longrightarrow R)}

\end{simplegather}

Defining divisibility on the natural numbers in terms of multiplication,

\begin{simplegather}

m \| n = (\exists q. n = q * m)

\end{simplegather}

the following natural-deduction-style inference rules for divisibility can be derived:

\begin{simplegather}

\infer[{divides}_{intro}]{m \| n}{n = q * m} \quad

\infer[{divides}_{elim}]{R}{m \| n && (\forall q. n = q * m \Longrightarrow R)}

\end{simplegather}

Reasoning on divisibility may then continue using these two rules, regardless of
how the original definition of divisibility and the proofs of the corresponding
inference rules were accomplished.
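The LCF discipline itself can be illustrated in miniature. The following Haskell sketch uses a toy propositional logic with hypothetical names; it is emphatically not Isabelle's kernel. A theorem value represents a judgement $\mathit{hyps} \vdash \mathit{concl}$; in a real system the `Thm` constructor is hidden behind a module boundary, so theorem values can only be produced by the trusted primitive rules:

```haskell
import Data.List (delete, union)

-- A toy formula language: atoms and implication.
data Formula = Atom String | Imp Formula Formula
  deriving (Eq, Show)

-- A theorem is a judgement  hyps |- concl.  In a real LCF kernel the
-- constructor below is NOT exported, so every Thm is derivable by rules.
data Thm = Thm [Formula] Formula
  deriving Show

-- assumption:  P |- P
assume :: Formula -> Thm
assume p = Thm [p] p

-- implication introduction:  from  hs |- Q  derive  hs \ {P} |- P ==> Q
implIntro :: Formula -> Thm -> Thm
implIntro p (Thm hs q) = Thm (delete p hs) (Imp p q)

-- implication elimination (modus ponens):
-- from  hs1 |- P ==> Q  and  hs2 |- P  derive  hs1 u hs2 |- Q
implElim :: Thm -> Thm -> Thm
implElim (Thm hs1 (Imp p q)) (Thm hs2 p')
  | p == p' = Thm (hs1 `union` hs2) q
implElim _ _ = error "implElim: rule does not apply"
```

For example, `implIntro p (assume p)` yields the hypothesis-free theorem $\vdash P \Longrightarrow P$; any attempt to "prove" something else must go through these primitives and fails.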

\fixme[evaluate and compare to other systems]

\subsection{Programmatic extensibility}

Since theorems are values in the underlying ML implementation language, they
can be manipulated by arbitrary programmatic means. This makes it possible
to automate complex specification and proof steps.

Prominent examples are the function package
mentioned above \refme, or Isabelle's simplifier, which enables the user
to perform equational reasoning automatically.

An instance in our project context could be to provide trusted
computation machinery, e.g.~accompanying Isabelle's Kurzweil-Henstock
gauge integration in many dimensions \cite{kurzweil-henstock:integral-00}
with tools to actually compute integrals.

\fixme[compare to other systems]

\subsection{Syntax variability}

Isabelle's syntax is in large parts not hard-wired into the system. This
allows the appearance of commands, terms etc. to be customised with a high
degree of freedom. In the context of computer science, this is not really an issue:
computer scientists are used to switching between different notational conventions
as needed. However, we want to approach engineers with a different attitude
and approximate the notation found in engineering literature as closely as possible and appropriate.

\fixme[example: example from engineering book, example in Isabelle »as it is«, possible variant of Isabelle syntax]

\fixme[compare to other systems]

\section{Efficient computation}

Proof assistants are designed for proofs, but often do not perform
very well on computations.\footnote{This goes back to the first edition
of Russell's \emph{Principia}, where the proof of $1 + 1 = 2$ appears
only on page 379.}
The deeper reason is that, by their very nature, proof assistants carry out
computations \emph{symbolically}, which is quite heavyweight.

In this section we discuss some devices to make computations in Isabelle

feasible in practice.

\subsection{Parallel computation}\label{parallel}

Recent releases of Isabelle exploit parallelism on multicore machines
\cite{makarius:isa-scala-jedit}. Besides the speedup itself, this is also a device to maintain
the reactivity of the system while background threads are engaged
in massive computations (see \secref{jedit}). On suitable hardware, speedups
by a factor of 4 to 5 have been achieved \cite{parallel-ML}. Recent versions of
Poly/ML (Isabelle's run-time system) featuring parallel garbage
collection have brought further improvements.

The parallelism facilities of the underlying run-time system, combined
with the consideration and exploitation of parallelism in the
overall system architecture, currently set Isabelle
apart from similar systems in this respect.

\subsection{Code generation and datatype refinement}\label{code-generation}

To avoid the overhead of computations \emph{within} the proof assistant,

it is possible to generate code from specifications and run that code

in its target language environment. Isabelle implements code generation

by turning a suitable set of equational theorems into a program

with the \emph{same equational semantics} in a specific language; currently,

Standard ML, OCaml, Haskell and Scala are supported

\cite{haftm-nipkow-code-gen-HRS-10}. Typical applications of code generation

are:

\begin{itemize}

\item Turning a specification into an executable program which
by construction obeys the original specification.

\item Using generated code as a fast evaluator. This is
particularly suitable for decision procedures, i.e.~procedures
that solve a class of problems programmatically
but by constructing suitable proofs, which often involves
heavy equational rewriting \cite{Chaieb-PhD}.

\end{itemize}

A particularity of Isabelle's code generator is that it comes
with support for algorithm and datatype refinement, which allows
it e.g. to implement (finite) sets by balanced trees in a trustworthy way (see
\cite{isa-codegen}), and thus to produce more performant code.
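The refinement idea can be illustrated outside Isabelle, too. The following Haskell sketch (hypothetical names; hand-written, not generated by Isabelle's code generator) contrasts the abstract view of a set as its membership predicate with a concrete balanced-tree implementation supporting the same operations:

```haskell
import qualified Data.Set as Set  -- balanced binary search trees

-- abstract view: a set is characterised by its membership predicate
type SetSpec a = a -> Bool

-- concrete, executable representation chosen by refinement
type SetImpl a = Set.Set a

-- operations are implemented on the concrete representation ...
insertImpl :: Ord a => a -> SetImpl a -> SetImpl a
insertImpl = Set.insert

unionImpl :: Ord a => SetImpl a -> SetImpl a -> SetImpl a
unionImpl = Set.union

-- ... and the correspondence to the abstract view: an implementation
-- refines a specification if both agree on membership (checked here
-- only on a finite sample of elements)
refines :: Ord a => SetImpl a -> SetSpec a -> [a] -> Bool
refines impl spec = all (\x -> Set.member x impl == spec x)
```

In Isabelle such a correspondence is not merely tested but proved once and for all, after which generated code silently uses the efficient representation.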

Note that specifications restricting themselves to the shallowly
embedded programming language of Isabelle identified in \secref{Isar}
can (almost) always be turned into programs by code generation
without further ado; this enables engineers and others developing
algorithms in it to see their algorithms work on
other platforms, too.

For specifications which go beyond that language, there is still
a high chance that reasonable subsets can be made executable
by providing suitable models for the types involved. E.g.~although
rational numbers in Isabelle are constructed abstractly using
quotients, they are executable by default, since at the same
time an implementation by pairs of integers is provided.
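The idea behind this implementation can be sketched in Haskell (hypothetical names; a simplification of, not a transcription of, Isabelle's actual setup): rationals are represented by pairs of integers kept in a normal form with coprime components and positive denominator.

```haskell
-- Rationals as normalised pairs of integers (numerator, denominator > 0).
data Rat = Rat Integer Integer
  deriving (Eq, Show)

-- smart constructor establishing the normal form
mkRat :: Integer -> Integer -> Rat
mkRat n d
  | d == 0    = error "mkRat: zero denominator"
  | otherwise = Rat ((signum d * n) `div` g) (abs d `div` g)
  where g = gcd n d

-- arithmetic defined on representatives; normalisation via mkRat
addRat :: Rat -> Rat -> Rat
addRat (Rat a b) (Rat c d) = mkRat (a * d + c * b) (b * d)

mulRat :: Rat -> Rat -> Rat
mulRat (Rat a b) (Rat c d) = mkRat (a * c) (b * d)
```

Because normalisation happens in the smart constructor, equality of rationals reduces to structural equality of the pairs, e.g. $\frac{1}{3} + \frac{1}{6} = \frac{1}{2}$.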

\subsection{Integration of external tools}\label{external}

Over the years, various external tools have been hooked to the Isabelle

system, e.g. first-order provers for the sledgehammer proof searcher \cite{sledgehammer-10},

SAT solvers for the nitpick counterexample searcher \cite{nitpick-10}, etc.

These external tools are generally useful on their own; even

more relevant is the shared generic infrastructure to connect Isabelle

to external tools, which turns Isabelle into a system hub for gluing together

various kinds of systems for proving, computing, evaluating, etc. This

enables us to integrate specific CA tools into the whole machinery

where necessary.

\fixme[comparison to other systems]

\section{Isabelle's knowledge for application in engineering}

Isabelle's features described so far establish a system appealing for

computer scientists, mathematicians and logicians to mechanise their

respective knowledge due to individual interests and in projects of

various size. The Isabelle developer has been continuously integrating

the many contributions to a coherent and manageable system of mechanised

knowledge, providing a base for further development.

\subsection{Preexisting formalisations}

In addition to the datatypes mentioned in \secref{calculus}, the
Isabelle distribution\footnote{\url{http://isabelle.in.tum.de/dist/library/HOL}}
and the accompanying Archive of Formal Proofs\footnote{\url{http://afp.sourceforge.net/topics.shtml}}
contain a wealth of already formalised knowledge which can be built on;
to name some promising candidates:

\begin{description}\label{isa-know-survey}

\item[Algebra] is developed from groups, rings, fields up to the

fundamental theorem of algebra in the univariate case and Groebner

bases in the multivariate case.

\item[Linear algebra] with executable operations on matrices of

arbitrary dimensions over ordered semirings, elementary linear algebra

on Euclidean spaces, and convexity in real vector spaces, up to
Gauss-Jordan elimination for matrices.

\item[Analysis] includes semicontinuous and continuous functions,

Cauchy's Mean Theorem and the Cauchy-Schwarz Inequality, MacLaurin and

Taylor series, $n$-th roots and the transcendental functions.

\item[Multivariate analysis] concerning Frechet Derivative, Inner

Product Spaces and the Gradient Derivative, Finite-Dimensional Inner

Product Spaces, Multivariate calculus in Euclidean space,

Kurzweil-Henstock Gauge Integration in many dimensions, up to initial
value problems of Ordinary Differential
Equations\footnote{\url{http://afp.sourceforge.net/entries/Ordinary_Differential_Equations.shtml}}.

\item[Probability theory] including Binomial Coefficients,

Permutations, Social Choice Theory (Arrow's General Possibility

Theorem, Sen's Liberal Paradox and May's Theorem) and also

Discrete-Time Markov Chains.

\item[Polynomials] in a univariate and a multivariate version, the
latter addressing monotonicity on strongly normalizing (monotone)
orders.

\item[Number theory] with primality on $\cal N$, congruence,
unique factorisation, divisibility in monoids and rings, and residue
rings: Euler's theorem, Wilson's theorem, quotient rings.

% \item[Numerals] and their respective arithmetic are all available,

% $\cal N, I, Q, R$ and $\cal C$. For $\cal R$ there are approximations

% by floating point numbers~

% \footnote{http://isabelle.in.tum.de/dist/library/HOL/Decision\_Procs/Approximation.html}.

\end{description}

% Following the LCF paradigm all the above knowledge is derived from

% first principles, from the axioms of HOL. The axioms, definitions,

% theorems and proofs are human readable.

This knowledge covers the range of mathematics addressed by most
introductory university courses in engineering sciences. So it is
comparatively easy for the envisaged TP-PL prototype to serve in
engineering education. However, why should students learn something
which is required neither in research nor in the practice of
engineering in the respective domain?

Reaching readiness for application seems difficult both in
industry and in research. TP-PL decided to primarily address
research, as shown in the next section.

\subsection{Examples for TP-services useful in research}

%\subsection{Example for an application domain}

A domain where TP-PL seems able to serve actual research is
information theory -- in spite of the fact that the Isabelle theory
\textit{Information.thy} in the 2012 release covers only the first
50 of the 650 pages of a standard textbook~\cite{info-thy-06}: the
final theorems in \textit{Information.thy} are
\textit{mutual\_information\_eq\_entropy\_conditional\_entropy,
conditional\_entropy\_less\_eq\_entropy, entropy\_chain\_rule} and
\textit{entropy\_data\_processing}.

There are two promising applications. The first concerns
``non-Shannon-type inequalities''~\cite{non-shannon-01}: this kind of
information-theoretic inequality still needs to be treated by hand
(or with a Matlab toolbox~\cite{Ru99a}), while ``Shannon-type
inequalities'' are proved automatically by special-purpose
provers\footnote{\url{http://user-www.ie.cuhk.edu.hk/~ITIP/}}. So
``non-Shannon-type'' inequalities might still rely on interactive TP.

Another application is an ``inverse problem'' to the following: given
a certain measurement device composed of several hardware and
software components, the overall precision of the device can be
calculated. Of practical interest is the inversion of this problem: given a
certain precision requested by customers or by standards, how can the
components be optimised in cost and responsiveness by reducing their
respective precision such that the precision requested for the
overall system is still met? This inverse problem can be tackled by a
TP-based program using Isabelle's interval
arithmetic~\cite{hoelzl09realineequalities}, which, after a trial-and-error
optimisation, would lead to a proof that the ``optimised''
device meets the requirements.
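To sketch the underlying mechanism (hypothetically; this is plain interval arithmetic written by hand, not Isabelle's verified formalisation), each component's measurement can be modelled as an interval around a nominal value, and arithmetic on intervals yields guaranteed enclosures for the overall result:

```haskell
-- A closed interval [lo, hi] with lo <= hi.
data Interval = Interval Double Double
  deriving (Eq, Show)

-- a nominal value with a symmetric precision (error bound)
withPrecision :: Double -> Double -> Interval
withPrecision x eps = Interval (x - eps) (x + eps)

-- interval addition: endpoints add
addI :: Interval -> Interval -> Interval
addI (Interval a b) (Interval c d) = Interval (a + c) (b + d)

-- interval multiplication: take extrema over all endpoint products
mulI :: Interval -> Interval -> Interval
mulI (Interval a b) (Interval c d) = Interval (minimum ps) (maximum ps)
  where ps = [a * c, a * d, b * c, b * d]

-- the overall imprecision of a derived quantity
width :: Interval -> Double
width (Interval a b) = b - a
```

Given a formula combining the component readings, one can then check whether relaxed (cheaper) component precisions still keep the width of the resulting interval within the precision requested for the overall system; in the TP-based setting this check would come with a proof.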

\fixme[Provide a detailed example for the ``inverse problem''.]