Future of AI?

The Principia had a challenging journey to its publication in 1687, but once it appeared, it allowed humans to start calculating the universe with mathematics rather than intuition or feeling.

Newton, wounded by a fierce dispute with Hooke over whether light was a particle or a wave, had withdrawn from public debate when he later met Edmond Halley. Thanks to Halley's three years of persistent persuasion and his funding of the publication, the Principia finally saw the light of day.

Did people at the time recognize the masterpiece that was born with such difficulty? When Newton walked past, the aristocrats, the knowledge appreciators of the day, laughed and said, "There goes the man who wrote a book that even he himself cannot understand." Newton himself explained that he deliberately made the book difficult to understand "to prevent the shallow ones from approaching it superficially."

Even 300 years ago, despite outstanding figures such as Copernicus, Bacon, Kepler, and Galileo, the scientific community of the 1600s was intensely divided. Its members shared only a common distaste for Aristotelian philosophy.

How about now? Even though enormous amounts of information are shared globally in near real time, the situation doesn't seem much different, especially when it comes to AI.

Everyone shares the common ground that the Transformer architecture has been "somewhat" successful, but beyond that, opinions diverge wildly.

Ilya Sutskever of OpenAI claims that because cutting-edge AI research now demands large-scale engineering and computing, academia should handle performance measurement and ranking rather than frontier research itself. In short, academia should concentrate on what it can actually do instead of doing meaningless research.

However, academia still stubbornly refuses to let go of cutting-edge AI research. As in the Faith and Fate paper, it implicitly mocks OpenAI's "faith."

Yann LeCun argues that the Transformer has clear limitations and that only a new breakthrough can take us to AGI. He adds that humans will not be destroyed by AI but will instead usher in a renaissance of intelligence.

Mustafa Suleyman, a co-founder of DeepMind, says that even without any particular technological breakthrough, most hallucinations will disappear by June 2025, and he predicts that numerous losers will emerge within the next five years.

Some professors argue that the mystification of the Transformer stems from ignorance. Why, then, can't they produce a paper that lifts that mysterious veil? (We still compare model performance with Elo ratings, which means there is no principled way to measure it.)
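For context, arena-style leaderboards rate models by collecting human preference votes on pairs of model outputs and feeding them into an Elo-style update. A minimal sketch of that update, assuming a K-factor of 32 and a starting rating of 1000 (both illustrative values, not any leaderboard's actual parameters):

```python
# Elo-style update for pairwise model comparison.
# K-factor and starting ratings below are illustrative assumptions.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one human preference vote."""
    e_a = expected_score(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    rating_a += k * (s_a - e_a)
    rating_b += k * ((1.0 - s_a) - (1.0 - e_a))
    return rating_a, rating_b

# Example: two models start at 1000; model A wins one head-to-head vote.
a, b = elo_update(1000.0, 1000.0, a_won=True)
print(a, b)  # 1016.0 984.0
```

Note that such ratings are purely relative: they tell us which model wins more head-to-head votes, not why it wins, which is exactly the sense in which we lack a clear way to measure what the Transformer is actually doing.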

It seems similar to the 1600s, when nobody could calculate the Earth's heliocentric orbit mathematically, yet everyone insisted it was roughly correct.

So which papers should we focus on amid this confusion? It's hard to tell who's right by reading them one by one.

Is AI's reasoning ability real? Can it be improved? If current Transformer-based LLMs can develop meaningful reasoning capabilities, then now is the time to build these models out as a "foundation" for AGI, not the time to settle for merely pointing out their limits.

[Image: Ghost in the Shell, 25th Anniversary Edition]
