Discussions and Limitations
In total, these two interfaces are essentially the same thing: both try to exploit the universal computation / powerful generalization / miraculous magic that LLMs might acquire during large-scale training. Since LLMs are trained for language modelling, and language may be the greatest invention because it can embed almost anything, we can try to instruct an LLM to handle any challenging task through the language interface. The key is how to construct "correct" instructions. Besides, since LLMs consist of trained neural networks that extract sequence features, we can also attempt to let an LLM process any suitably adjusted features, where shape alignment and mapping alignment are the focus. In fact, if you regard the process of transforming your task into language descriptions and instructions as itself a kind of shape and mapping alignment, the language interface is a special case of the feature interface.
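To make the feature interface concrete, here is a minimal sketch of shape alignment in PyTorch, assuming a hypothetical LLM that accepts precomputed embeddings (as the `inputs_embeds` argument does in Hugging Face transformers models). The names `FeatureAdapter`, `feature_dim`, and `llm_hidden_dim` are illustrative, not from the original text. Mapping alignment, by contrast, cannot be written down as a single layer; it has to come from training the adapter on the downstream task. The language interface needs no such machinery, since it reduces to constructing a prompt string.

```python
import torch
import torch.nn as nn

class FeatureAdapter(nn.Module):
    """Project task-specific features into an LLM's embedding space.

    Shape alignment: the linear layer maps your feature dimension onto
    the LLM's hidden dimension, so the sequence can be fed to the model.
    Mapping alignment: making the projected vectors semantically
    meaningful to the LLM is not solved by this layer alone; it is what
    training this adapter (with the LLM usually frozen) must achieve.
    """

    def __init__(self, feature_dim: int, llm_hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(feature_dim, llm_hidden_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, seq_len, feature_dim)
        # returns:  (batch, seq_len, llm_hidden_dim)
        return self.proj(features)

# Hypothetical usage: project arbitrary sequence features, then pass
# the result to an LLM that accepts precomputed embeddings, e.g.
# `model(inputs_embeds=aligned)` in Hugging Face transformers.
adapter = FeatureAdapter(feature_dim=64, llm_hidden_dim=4096)
fake_features = torch.randn(2, 16, 64)   # any sequence features you have
aligned = adapter(fake_features)         # now shaped for the LLM
```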
However, many people tend to overestimate the capability of LLMs (for example, I recently saw a graduate classmate majoring in A.I. speak highly of GPT-4 without mentioning a single disadvantage, which really left me dumbfounded). Recent papers and research have already shown that LLMs have some fatal problems. Even on language tasks, LLMs are far from perfect. Thus, whether we apply LLM4PP through the language interface or the feature interface, the performance and efficiency are limited and hard to predict before a trial.
Facing this problem, it may seem that simply applying LLM4PP only exposes performance that already exists and improves nothing. Still, I think it is worthwhile: something interesting might be found, some applications may already meet people's practical needs, and this line of work could motivate further improvement of LLMs and similar learning paradigms in the future.