The following four initiatives were explicitly mentioned in the first proposals for setting up an IMKT.
The world that uses mathematics relies on many techniques that are considered well known and completely trusted, both in technology and in general reasoning. The most basic are arithmetic and elementary algebra. Among the less elementary subjects significant in engineering and other applications of mathematics, probably the most common is the use of special functions. We envisage a service that will make the use of special functions even more reliable, and that will clarify the relationships between the many existing tools offering access to knowledge about special functions and to numerical and symbolic implementations of these functions and their properties.
The mathematics of a great number of so-called classical special functions is agreed to be well understood, although the boundaries of that subject and its relationships to many other areas continue to be fruitfully explored. Indeed, the matter is so well understood that computer algebra systems, both commercial and non-commercial, offer calculation with such functions, numerical and symbolic, as a matter of course. The implementations of trigonometric, Bessel, Legendre, Chebyshev and many other functions in Maple, Mathematica, MATLAB, Sage, PARI, GAP and other such systems have to a large extent replaced the old use of tables and handbooks in dealing with these mathematical objects. The classic Handbook by Abramowitz and Stegun from the National Bureau of Standards has been put out in a revised edition, with an online form as the Digital Library of Mathematical Functions (DLMF) from NIST (National Institute of Standards and Technology). On the Web this joined the massive online Wolfram Functions site, publicly available thanks to support from the commercial entity Wolfram Research, producers of Mathematica, as well as offerings from the makers of Maple, the Dynamic Dictionary of Mathematical Functions (DDMF) from INRIA in France, and other repositories of information about special functions.
One might imagine that this simply reflects healthy competition, and that it does not matter which tool is used: one tool might be more efficient for a particular calculation, but the mathematical facts should be the same everywhere. In truth, this is not so. There are almost as many different views of the special functions as there are systems treating them: choices of normalization, starting definitions, forms of parameters, branch-cut conventions and, of course, the algorithms used for calculation can all make a difference. They often do so without the user being aware of it.
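As a small illustration of how silently conventions can diverge (our example, not one drawn from the proposals), consider the sinc function: NumPy's \texttt{numpy.sinc} computes the normalized form $\sin(\pi x)/(\pi x)$, while many texts and systems mean the unnormalized $\sin(x)/x$. A short Python sketch, assuming only NumPy, makes the discrepancy visible.

\begin{verbatim}
# Hedged sketch: NumPy's sinc is documented as the normalized form
# sin(pi*x)/(pi*x); the unnormalized convention sin(x)/x is defined
# here by hand for comparison.
import numpy as np

def sinc_unnormalized(x):
    """sin(x)/x with the removable singularity at x = 0 filled in."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x == 0.0, 1.0, x)          # avoid division by zero
    return np.where(x == 0.0, 1.0, np.sin(x) / safe)

x = 1.0
print(np.sinc(x))             # ~0.0       (sin(pi)/pi, up to rounding)
print(sinc_unnormalized(x))   # ~0.841471  (sin(1)/1)
\end{verbatim}

A user moving a formula between systems that disagree on such a convention obtains numbers that are each ``correct'', yet mutually incompatible.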
To use a mathematical metaphor, it is as though the manifold of special functions is being described locally by charts provided by particular systems. That is fine, but the global mathematical whole is well defined only if there are transition functions describing the relationships between different charts that supposedly describe the same things. It has seemed natural for each system (Mathematica, now the Wolfram Language, Maple, MATLAB, NAG and so on) to stick to and promote its own conventions, with little interest in comparison, or even interoperability, with the others. For building the sorts of tools that technology and the development of science deserve, this is wholly unsatisfactory, and the working community knows it.
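To make the metaphor concrete with a standard example (ours, not one singled out in the proposals): the physicists' Hermite polynomials $H_n$ and the probabilists' Hermite polynomials $He_n$ are two charts on the same objects, and the transition functions between them are explicit rescalings,
\[
  H_n(x) \;=\; 2^{n/2}\,He_n\!\left(\sqrt{2}\,x\right),
  \qquad
  He_n(x) \;=\; 2^{-n/2}\,H_n\!\left(x/\sqrt{2}\right),
\]
so that, for instance, $H_2(x) = 4x^2 - 2$ while $He_2(x) = x^2 - 1$. Recording such transition formulas systematically, for every pair of conventions actually in use, is exactly the kind of bookkeeping a neutral service could provide.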
The GDML WG has been aware of these problems since its inception. A GDML should be able to work with mathematical knowledge in equivalent representations, and it is also important to know when what was thought to be obviously the same turns out not to be. The area of classical special functions should be one where there is enough clarity to reach agreement. Two members of the WG are now on the NIST board overseeing the DLMF, and several European special function efforts have also had contact with the WG. In addition, the OpenMath Society has shown concern for special functions. The OpenMath markup is coherent at a basic level with the Content markup of MathML, which was developed by the W3C (World Wide Web Consortium) and is now an ISO standard.
At the recent gatherings in Seattle (Jan 2016) and Toronto (Feb 2016) where matters of concern to a GDML were worked over, representatives of both Mathematica and Maple publicly declared their willingness to consider, in principle, working on bringing their notions of specific functions into accord; the same general support for the idea was heard from other parties at the OPSF13 meeting at NIST in June 2015.
NIST can act as a broker that is obliged by law to be neutral. One current proposal is to organize, around NIST, groups working on setting down the vocabularies, definitions and conventions used to describe the well-known special functions, and on recording the comparisons of their various forms in such a way that tools can be built to provide interoperability for, and integration of, the vast knowledge implicit in the various systems. We hope that an IMKT legal entity can provide a way to finesse social issues so as to get to the science and eventually provide the desired public service.
The idea is to develop a collection of formal abstracts (FABstracts) consisting of carefully captured assertions of theorems, together with the underlying definitions, recorded in a formal framework. The proof assistant Lean is the natural framework chosen by the FABstracts project, under the leadership of Tom Hales and Jeremy Avigad, with which to start a pilot system.
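As a rough indication of what a formal abstract might look like (a minimal sketch of our own, assuming Lean 4 with the Mathlib library; the theorem name and statement are illustrative only, not an excerpt from the FABstracts corpus), the assertion of a classical theorem is captured precisely while the proof is deliberately left open.

\begin{verbatim}
-- Hedged sketch of a "formal abstract": the assertion is stated in
-- full, the proof is intentionally omitted (Lean's `sorry` placeholder).
import Mathlib

/-- Euclid: there are arbitrarily large primes. -/
theorem exists_prime_ge (n : Nat) : ∃ p : Nat, n ≤ p ∧ Nat.Prime p := by
  sorry
\end{verbatim}

The value lies in the precisely recorded statement and its supporting definitions; supplying the proof later, by hand or by automated tools, is a separate task.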
Ultimately, a full formalization of usable mathematics is the sort of thing people imagine. However, it is not yet clear what can be formalized, what `fully' means outside a specific context, and there is no generally agreed framework in which to formalize. Tom Hales therefore suggested beginning with a much smaller practical pilot project. Hales is well known for his proof of the Kepler sphere-packing conjecture and related results, and then for mounting a successful project, amounting to some 25 years of full-time effort, to make his proofs formally machine-checkable after some felt they were beyond human verification.
HOL Light might have been a natural framework to begin in, since it carried Hales's convincing Flyspeck project, the formalization effort mentioned above. One good area of mathematics with which to start was thought to be geometry, since it makes different semantic demands from those of analysis or algebra.
Writing these formal abstracts from the extant mathematical literature is, at this time, a task for educated humans. Experience with doing so can lead to automation as the process becomes better understood. For those carrying out the task of recording the mathematics, many of whom would probably be graduate students, it is likely to be an instructive experience. An initial project would therefore be to undertake the transcription of the abstracts of a selection of the geometrical literature into a proof assistant's formalism.
Once that is done there will be an unusually well-marked-up and authoritative corpus of mathematical knowledge that can be publicly displayed. Perhaps more importantly, it is mathematical knowledge data that can be used for experiments in providing innovative interfaces into it. That is on top of the challenge it naturally poses to those working on automated theorem proving: to assure proofs of the formulated assertions. This will be a well-formulated collection of theorems and definitions that should ultimately be supported by full proofs. In the cases of Flyspeck, or of Gonthier's analyses of the Four Color Theorem and the Feit-Thompson Theorem, a major final result had to be supported by finding proof patterns and reformulations that allowed a complete final verification; the formal abstracts will probably lead to a different style of corpus. In addition, this sort of effort should lend itself to crowd-sourcing, with a wider net of volunteers extending the starting efforts once a clear way to approach the task has been found.
There are by now overlapping pieces of mathematics, such as basic parts of Euclidean geometry and some fundamental lemmas and theorems of analysis, that have been formalized in more than one system. The main systems where there is real overlap are HOL Light, Coq and Mizar. What has not been done is to make careful comparisons of the formulations of, say, the Jordan Curve Theorem or the Fundamental Theorem of Calculus in the various contexts. We need to see in what sense they really are the same and in what ways they may really differ. This is not just a matter of the way the proofs proceed, but may involve subtleties in the semantics resulting from different foundational bases.
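To fix ideas, here is one common textbook form of the Fundamental Theorem of Calculus (our choice of statement, not one taken from any of the three libraries):
\[
  f \in C([a,b]),\qquad F(x) \;=\; \int_a^x f(t)\,dt
  \quad\Longrightarrow\quad
  F'(x) \;=\; f(x) \ \text{ for all } x \in (a,b).
\]
Even at this level, formalizations can differ: the notion of integral adopted (Riemann, gauge or Lebesgue), whether continuity on all of $[a,b]$ is demanded or only integrability together with continuity at the point, and how behaviour at the endpoints is handled all depend on the ambient library and its foundations. Comparing the formulations across systems means pinning down exactly such choices.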
Clarity here should be of value not only to appliers of mathematics but also to educational efforts, since exactly what is required for a theorem's truth is often somewhat obscure.
Harrison (HOL Light), Wiedijk (who has worked with more than one system), Urban (Mizar) and others have expressed interest in exploring the ways in which these systems agree and differ. As a practical result, the search for applicable results might be extended to a federated search over the different libraries of already formalized material, with clarity about what is returned.
There are several sorts of document analysis that are possible but have not yet been applied to the mathematical corpus, which has its own special features. Mathematics is arguably a natural language of a globally distributed people (those with enough training in mathematics to do science and engineering), but not a typical one: for a start, its writing is not truly linear, it involves a large vocabulary of signs, and the use of formulas requires a different grammar. Power tools for document digestion will be needed as the GDML effort ramps up, so NLP methods tried and tested elsewhere will need to be adapted to handle the mathematical corpus.
$n$-gram analysis of mathematical corpora is a relatively easy place to begin. The NLP tools involved are readily available for ordinary languages, and sample corpora, e.g., those of zbMATH and the arXiv, are at hand; their curators are interested in such ideas.
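For concreteness, the core of an $n$-gram count is a few lines of code. The sketch below is ours, uses only the Python standard library, and stands in a deliberately naive whitespace tokenizer for the real TeX or MathML tokenization that such a project would require.

\begin{verbatim}
# Hedged sketch of n-gram counting over a tokenized mathematical text;
# a real pipeline would tokenize TeX/MathML properly rather than split
# on whitespace.
from collections import Counter

def ngrams(tokens, n):
    """Yield consecutive n-token windows from a token list."""
    return zip(*(tokens[i:] for i in range(n)))

def ngram_counts(text, n=2):
    tokens = text.split()          # placeholder tokenizer
    return Counter(ngrams(tokens, n))

sample = r"\int_a^b f ( x ) \, d x = F ( b ) - F ( a )"
for gram, count in ngram_counts(sample, n=2).most_common(3):
    print(gram, count)
\end{verbatim}

The interesting work lies in the tokenization of formulas and in what the resulting statistics are used for, not in the counting itself.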
Some early experiments in keyword generation have been tried at zbMATH, and another small project, completed at ORCCA under Watt's direction, concentrated the $n$-grams on the mathematical formulas and found that characteristics of different subjects could be detected. This is a much more limited approach than deep learning and needs far fewer resources, but it is applicable in context-finding and clustering tools that can also be deployed much more easily. As a basis for machine learning it is but a baby step; however, other projects using alternative statistical approaches have been funded by the Alfred P. Sloan Foundation.