AI Interlingua as an Unintended Consequence of Machine Translation

in #blog6 years ago

In September of 2016, the Google Neural Machine Translation system (GMNT) went live. This tool was a software package designed to improve Google's translation software by using machine learning to help the system identify parallelism in languages using nodal programming techniques. While this system was being implemented, the problem of whether the system could perform a translation with one degree of separation was proposed. The proposed scenario involved teaching the machine to translate from English to Japanese and from English to Korean. The question was whether or not after vectorizing the nodes involved the machine would be able to translate from Japanese to Korean without referring back to English. As it turns out, the answer was yes; because of the structure of the system being designed to be vectorized, these translations could be accurately made. Specifically, the system created an n-dimensional point to gauge similarity between word pairs. By combining these similarity levels and cross-referencing them against other words in the same language, the system was able to create a map of points in nth dimensional space, where the magnitude of the vectors between each point represented the exact degree of similarity between those two words. By comparing the maps of these points, the system was then able to make the translation regardless of the degree of separation.

In doing so, the system created its own language. A mix between Korean and Japanese, this language was completely nonsensical, but the system was able to use it as a stepping stone in the translation rather than refer back to the English. My hypothesis is that the system did these because it was constrained by the optimization factor of time, and found that memoizing an array of this nature cut down on the processing involved significantly.

More recently, Facebook has been pitting artificial intelligences against each other in an effort to teach them how to barter. From Facebook's report, it appears that these systems have reached an incredible level of understanding. An example of this is the feigning of value for an item, such that allowing the opponent to have it seems like a concession. That Facebook's systems were able to reach this level is impressive in and of itself. Far more interesting to me, however, is what happened as the program was allowed to run; eventually, the intelligences stopped using English altogether and created their own bartering language. This, again, was likely due to the time maximization constraint placed on the system. But what were these machines saying? Though no one is certain, I have a hypothesis.

The image above is a representation of a simple Edgeworth box, a concept drawn from economics that represents the field of all possible combinations of goods for n parties in a trade scenario (where n is the number of parties involved, also representative of the dimension, and thus two in this case). The blue curve is called the contract curve. This curve is usually a continuous, differentiable, Pareto optimal function, and represents all points at which transactions will occur that are fair for all involved parties.

Though this function is based on preference, the final contract can be slightly altered by bartering skill of the parties involved. My hypothesis on Facebook's automated bartering system is that over time, the system realized it had reached Pareto optimality and that it's opponent's bartering skills were exactly equal to its own. This realization led to it attempting to continue minimizing the time between contracts, and thus the system stopped using English in order to better follow its instructions.

I believe that the lesson of unintended consequences, such as a completely fabricated language, is important to the development of strong artificial intelligence. The machine wasn't doing anything wrong; it was following its creators instructions to a tee--it determined through trial that the way to most efficiently barter in the least time was to develop a language it could process faster than English. In an analogous situation, what happens when an artificial intelligence decides that, for example, the way to maximize human resources is to eliminate the human population? Though this is a simplistic scenario, it stands as an important feature of AI development as our systems grow increasingly sophisticated.

Best,

A.E.L.