Artificial Intelligence based Hermeneutics - A new avenue towards AGI?


The interpretation of texts and information is not a trivial task. It is one of those areas where we can be fairly sure that the human mind still largely outperforms Artificial Intelligence systems. But if we wish to develop so-called Artificial General Intelligence, which can operate independently of context, it is vital that we create a system that can correctly understand, or at least appear to understand, and interpret the intended meaning of a piece of information.

Background

The theory and methodology of interpretation is known among philosophers, scientists and theologians as "Hermeneutics". In Greek mythology, as recounted by Homer, the god Hermes explained the messages of the gods to mankind. Hermeneutics has traditionally been concerned with the interpretation of religious scriptures in theology, but it has also found application in other humanities and in the social sciences.

Hermeneutics is not only concerned with reconstructing the correct interpretation of a text through structural analysis of the text itself; it is also directed at reverse-engineering the intended meaning of the author. This is the domain of its branch of "homiletics", which aims to reconstruct the author's objectives, preferences and motives. Exegesis (meaning "explanation") is the part of Hermeneutics that focuses on the grammar and local context of terms. But Hermeneutics is broader than mere written communication and also includes the study of verbal and non-verbal clues. Moreover, in the so-called "Hermeneutic circle" as defined by Heidegger, one needs to study the whole text or book to grasp the broader context of its terms. This renders Hermeneutics a holistic approach: the whole cannot be fully understood without understanding the parts, and the parts cannot be fully understood without grasping the whole. It seems one would have to read a text multiple times, bootstrapping one's understanding, to fully grasp the reciprocity between text and context.

Hermeneutics and AI

If you do a quick search on the internet about hermeneutics and Artificial Intelligence, you will come across documents that discuss how we can hermeneutically study AI systems or how we can develop so-called intentional systems. As early as 1989, Dmitry Pospelov suggested that the development of hermeneutical systems presented a significant challenge for future generations of AI. He gives some specific examples of the interplay between general and specific rules and word order, and their application to reasoning systems. Not much has been developed in this particular area of AI since. A full-fledged hermeneutics system carried out by an AI for the moment still seems like science fiction.

We can now start to see whether there are general teachings derivable from Hermeneutics which we can try to formalise, so as to build flow charts for the development of algorithms for future hermeneutics systems.

If we discuss interpretation, we are interested in the "meaning" of terms, not only in isolation but also in context. Meaning is a coin with two sides. On the one hand there is the literal meaning of a term as defined in a dictionary. This definition is in fact a metaphorical description of the item or concept in question in terms of other items and concepts. You can also consider the definition as a kind of condensed "ontology": a list of the features and functions of an item or concept and its relations to other items or concepts. This is mostly the domain of semantics.
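To make this concrete, here is a minimal sketch in Python of how such a condensed ontology entry could be stored. The field names and the "hammer" example are purely illustrative assumptions, not part of any existing system.

```python
# A minimal sketch of how a "condensed ontology" for a term could be stored.
# The field names (features, functions, relations) follow the description above;
# the example content for "hammer" is purely illustrative.
from dataclasses import dataclass, field


@dataclass
class OntologyEntry:
    term: str
    features: list[str] = field(default_factory=list)        # structural properties
    functions: list[str] = field(default_factory=list)       # what the item is used for
    relations: dict[str, str] = field(default_factory=dict)  # relation -> other term


hammer = OntologyEntry(
    term="hammer",
    features=["heavy head", "handle"],
    functions=["drive nails", "break objects"],
    relations={"is_a": "tool", "used_with": "nail"},
)
print(hammer)
```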

On the other hand there is the emotional impact or feeling that a word evokes in us when we hear or read it. In his book "Maps of Meaning", Jordan Peterson reduces meaning almost exclusively to this emotional significance, in terms of either a feeling of punishment or of satisfaction that is conveyed. He also stresses the relevance of the word in question with regard to our goals, which determines to what extent the term has "meaning" for us.

Hermeneutics deals with both aspects: the semantic, exegetic approach and the reverse-engineering of the emotional intent that the author tried to convey.

Text mining in Artificial Intelligence systems dedicated to revealing literal meaning often works with so-called Latent Semantic Analysis. If two or more terms systematically occur together within a certain limited distance of each other, a so-called "proximity co-occurrence", or "didensity", is concluded on a statistical basis (in probabilistic variants, via Bayesian analysis). In other words, together the terms in question build a meaning, build a context, which has a statistical basis either in the text itself or in a vast body of texts analysed by massive data crunching.
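As an illustration, the following Python sketch counts how often two terms occur within a fixed window of each other and scores the pairs with an approximate pointwise mutual information, as a rough stand-in for detecting such a proximity co-occurrence or didensity. The window size and threshold are assumptions chosen for demonstration, not values from any particular system.

```python
# A rough sketch of detecting "proximity co-occurrences" (didensities) by counting
# how often two terms appear within a fixed window of each other and scoring the
# pair with an approximate pointwise mutual information (PMI).
import math
from collections import Counter


def didensities(tokens, window=10, min_pmi=3.0):
    """Score term pairs that co-occur within `window` words with an approximate PMI."""
    word_counts = Counter(tokens)
    pair_counts = Counter()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[i] != tokens[j]:
                pair_counts[tuple(sorted((tokens[i], tokens[j])))] += 1
    total = len(tokens)
    results = []
    for (a, b), n_ab in pair_counts.items():
        # approximate pointwise mutual information from raw counts
        pmi = math.log2((n_ab * total) / (word_counts[a] * word_counts[b]))
        if pmi >= min_pmi:
            results.append(((a, b), round(pmi, 2)))
    return sorted(results, key=lambda x: -x[1])


tokens = "the god hermes explained the messages of the gods to mankind".split()
print(didensities(tokens, window=5, min_pmi=1.0))
```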

Strategies

We can exploit this technology to develop towards a hermeneutic approach. For instance, we can additionally text-mine whether emotional qualifiers occur within a reasonable distance (e.g. within ten or twenty words) of the terms under investigation. By emotional qualifiers I mean terms such as "positive", "negative", "joyful", "hurtful", "happy", "sad", etc. If there is indeed a statistical correlation between a didensity and the same emotional qualifiers, this already gives a "feel" for the didensity in question.
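A minimal sketch of such a check, assuming a hand-picked list of emotional qualifiers and a small window; a real system would of course use a proper sentiment lexicon.

```python
# Check which emotional qualifiers occur within a fixed distance of target terms.
# The qualifier list and window size are illustrative assumptions.
from collections import Counter

EMOTIONAL_QUALIFIERS = {"positive", "negative", "joyful", "hurtful", "happy", "sad"}


def emotional_context(tokens, target_terms, window=20):
    """Count which emotional qualifiers occur within `window` words of a target term."""
    hits = Counter()
    positions = [i for i, tok in enumerate(tokens) if tok in target_terms]
    for pos in positions:
        lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
        for tok in tokens[lo:hi]:
            if tok in EMOTIONAL_QUALIFIERS:
                hits[tok] += 1
    return hits


tokens = "the verdict felt hurtful and sad although the judge stayed positive".split()
print(emotional_context(tokens, {"verdict", "judge"}, window=5))
```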

Furthermore, we can start to see which other words, and in which order, occur in a statistically meaningful way in conjunction with the didensity in question. If so, such terms can be automatically incorporated by a kind of ontology builder for didensities. Functional terms and structural terms can receive different labels.
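The following sketch hints at what such an ontology builder could look like: it collects the words surrounding a given didensity and sorts them into tentative functional and structural buckets. The cue-word lists are illustrative assumptions, not a curated lexicon.

```python
# A minimal "ontology builder" sketch: record which words occur near a didensity
# (term pair), labelled as tentatively functional or structural.
from collections import Counter

FUNCTIONAL_CUES = {"used", "serves", "allows", "enables", "for"}
STRUCTURAL_CUES = {"made", "consists", "contains", "part", "shape"}


def build_ontology(tokens, didensity, window=10):
    a, b = didensity
    entry = {"term_pair": didensity, "functional": Counter(),
             "structural": Counter(), "other": Counter()}
    for i, tok in enumerate(tokens):
        if tok not in (a, b):
            continue
        for ctx in tokens[max(0, i - window): i + window + 1]:
            if ctx in (a, b):
                continue
            if ctx in FUNCTIONAL_CUES:
                entry["functional"][ctx] += 1
            elif ctx in STRUCTURAL_CUES:
                entry["structural"][ctx] += 1
            else:
                entry["other"][ctx] += 1
    return entry


tokens = "a hammer is a tool made of steel and wood and is used for driving nails".split()
print(build_ontology(tokens, ("hammer", "nails"), window=8))
```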

One can also mine to see whether a word that occurs in different didensities has different ontologies associated with it. This can be an indicator of a potentially polysemous term.
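A simple way to flag this, sketched below, is to compare the context-word sets collected for the same term in two different didensities; if their overlap (here measured with Jaccard similarity) is low, the term may carry distinct meanings. The threshold and the example sets are illustrative assumptions.

```python
# Flag potential polysemy by comparing context-word sets gathered for the same
# term in two different didensities. Threshold and example sets are illustrative.
def context_overlap(contexts_a, contexts_b):
    """Jaccard similarity between two sets of context words."""
    a, b = set(contexts_a), set(contexts_b)
    return len(a & b) / len(a | b) if a | b else 0.0


bank_finance = {"money", "account", "loan", "interest"}
bank_river = {"water", "shore", "erosion", "fishing"}
if context_overlap(bank_finance, bank_river) < 0.2:
    print("'bank' looks polysemous across these two didensities")
```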

Temporal indicators can also be screened for. This is of particular relevance for identifying potential causal relationships. Of course, text mining that directly identifies terms such as "caused by", "resulting in", etc. can also be part of such a causality study.
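A rudimentary sketch of mining such explicit causal and temporal cues with regular expressions follows; the cue phrases are a small illustrative list, not a complete lexicon.

```python
# Find explicit causal and temporal cue phrases in raw text.
# The cue lists are small illustrative assumptions.
import re

CAUSAL_CUES = r"(caused by|resulting in|leads to|because of|due to)"
TEMPORAL_CUES = r"(before|after|then|subsequently|followed by)"


def find_cues(text):
    causal = [(m.group(1), m.start()) for m in re.finditer(CAUSAL_CUES, text, re.I)]
    temporal = [(m.group(1), m.start()) for m in re.finditer(TEMPORAL_CUES, text, re.I)]
    return {"causal": causal, "temporal": temporal}


print(find_cues("The flooding was caused by heavy rain, subsequently resulting in damage."))
```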

It is sometimes argued that machines cannot infer causality, as they can only see correlations. With Bayesian statistical analysis you may conclude that A and B are statistically associated, but you do not yet know which is the cause and which the effect. If sufficient data are available, however, it might be possible to see that a quantified change in one parameter systematically entails a proportional change in another parameter in a temporally defined order. This is already a stronger pointer towards causality, which a system can be trained to recognise.
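The sketch below illustrates this idea: it checks whether step-to-step changes in one series correlate strongly with changes in another series one step later. This is only a lagged-correlation heuristic pointing towards causality, not a proof; the lag of one step and the 0.9 threshold are assumptions for demonstration.

```python
# A lagged-correlation heuristic: do changes in one series systematically precede
# proportional changes in another? Lag and threshold are illustrative assumptions.
def lagged_correlation(xs, ys, lag=1):
    """Pearson correlation between changes in xs and changes in ys shifted by `lag`."""
    dx = [b - a for a, b in zip(xs, xs[1:])]
    dy = [b - a for a, b in zip(ys, ys[1:])]
    dx, dy = dx[: len(dx) - lag], dy[lag:]
    n = len(dx)
    mx, my = sum(dx) / n, sum(dy) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(dx, dy))
    sx = sum((x - mx) ** 2 for x in dx) ** 0.5
    sy = sum((y - my) ** 2 for y in dy) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0


rain = [0, 5, 2, 8, 3, 9, 1]
flood = [1, 1, 6, 3, 9, 4, 10]  # roughly follows rain with a one-step delay
if lagged_correlation(rain, flood, lag=1) > 0.9:
    print("changes in 'rain' systematically precede proportional changes in 'flood'")
```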

Further data and text mining might reveal conditions, thresholds, limitations, exclusions and exceptions associated with given terms or didensities. Such indicators can be of vital importance when deciding between different possible meanings stored in a database. Using a systematic approach involving categorisation and nesting, ontological trees or classification systems can be built, which can be further exploited by algorithms that work their way through the trees with yes/no questions. In this way it becomes possible to let a system not only build and enrich its own database for extracting meaning and context, but also use this database to identify known instances. Whenever it encounters an unknown didensity, it can create a new entry in the taxonomical tree, based on the existing entry it most resembles. Taxonomies can be built from both a functional and a structural perspective, and such taxonomies can furthermore be interlinked.
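As a small illustration of the last point, the sketch below attaches a new didensity to the existing taxonomy node whose feature set it resembles most. The similarity measure (feature overlap) and the example features are assumptions for demonstration.

```python
# Grow a taxonomy by attaching a new entry under the existing node with the
# largest feature overlap. Similarity measure and features are illustrative.
class TaxonomyNode:
    def __init__(self, name, features):
        self.name = name
        self.features = set(features)
        self.children = []

    def all_nodes(self):
        yield self
        for child in self.children:
            yield from child.all_nodes()


def insert(root, name, features):
    """Attach a new node under the existing node with the largest feature overlap."""
    features = set(features)
    best = max(root.all_nodes(), key=lambda n: len(n.features & features))
    best.children.append(TaxonomyNode(name, features))
    return best


root = TaxonomyNode("tool", {"artifact", "used_by_hand"})
root.children.append(TaxonomyNode("hammer", {"artifact", "used_by_hand", "drives_nails"}))
parent = insert(root, "mallet", {"artifact", "used_by_hand", "drives_nails", "wooden_head"})
print(f"'mallet' attached under '{parent.name}'")
```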

In addition, structural similarities may point to functional similarities and vice versa. If certain category maps have similar structures, this can also point to similarities of context and meaning. Thus teachings from mathematical category theory, such as the Yoneda lemma, can be exploited advantageously in the development of AI-based hermeneutics. The extraction of grammar rules should also become an area where pattern recognition in conjunction with machine learning in text mining is crucial.

Conclusion

We have seen a number of valuable techniques that can be exploited in the design of AI-based hermeneutics. At a time when it is difficult to distinguish between fake and real news, and in a highly complex world where correct interpretation is crucial to avoid misunderstandings, it is of paramount importance that if we develop AGI (Artificial General Intelligence) we get it right the first time. Hermeneutics, despite its dusty roots, may well be one of the foremost pillars of such a system, and this article is intended to get computer scientists more interested in this peculiar but important branch of information processing.


You can find my books on Amazon in paperback or ebook format:

Technovedanta (https://www.amazon.com/gp/aw/d/B07D92DZYZ),

Transcendental Metaphysics (https://www.amazon.com/gp/aw/d/B07D912ZW7)

Is Intelligence an Algorithm? (https://www.amazon.com/gp/aw/d/1785356704)

Is Reality a Simulation? An Anthology (https://www.amazon.com/gp/aw/d/B07D7JX4RS)
