Will o1 Ever Escape ChatGPT's Old Training?
This study investigates whether the reasoning abilities of large language models (LLMs) are still shaped by their origins in next-word prediction. The authors evaluate o1, a new OpenAI LLM optimized specifically for reasoning, on tasks designed to expose limitations stemming from LLMs' autoregressive training. While o1 improves substantially over previous LLMs, it remains sensitive to both the probability of the task and the probability of the output, suggesting that reasoning optimization does not fully overcome the probabilistic biases ingrained during training. The study supports the "teleological perspective," which argues that understanding AI systems requires considering the pressures and optimizations that shaped them.
Read more: https://arxiv.org/abs/2410.01792