Large Language Models (LLMs) like GPT-3 have demonstrated an ability to reason by analogy, outperforming undergraduates on problems of the kind found on standardized tests like the SAT, according to a study by a team from the University of California, Los Angeles. The AI was able to infer rules from sequences of numbers and apply them in other domains, suggesting an abstract grasp of successorship. However, the software was inconsistent at recognizing when it was being presented with these problems, indicating room for improvement.
Read more at Ars Technica…