DeepMind’s large language model Chinchilla 70B has been shown to losslessly compress image and audio data more effectively than purpose-built formats like PNG and FLAC. Although the model was trained primarily on text, it handled these other data types well, suggesting that machine learning models could serve as general-purpose compressors. The research also explores the link between compression and intelligence, arguing that the ability to compress data efficiently may be a marker of general intelligence.
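The intuition behind the result is that compression and prediction are two sides of the same coin: the better a model predicts the next symbol, the fewer bits an entropy coder needs to encode it. Below is a minimal Python sketch of that relationship, using a toy byte-frequency model as a hypothetical stand-in for a large language model; it only computes the ideal code length (the bits an ideal entropy coder such as an arithmetic coder would emit), not an actual encoder.

```python
import math
from collections import Counter

def ideal_code_length_bits(data: bytes, probs: dict) -> float:
    """Sum of -log2 p(symbol) over the data: the length an ideal
    entropy coder would need, given these per-byte predictions."""
    return sum(-math.log2(probs[b]) for b in data)

# Toy "models": a uniform baseline vs. byte frequencies fitted to the data.
# (Both are hypothetical stand-ins for a real predictive model like an LLM.)
text = b"abracadabra abracadabra abracadabra"

uniform = {b: 1 / 256 for b in range(256)}  # predicts nothing about the data

counts = Counter(text)
total = sum(counts.get(b, 0.5) for b in range(256))  # crude smoothing
fitted = {b: counts.get(b, 0.5) / total for b in range(256)}

print(f"uniform model : {ideal_code_length_bits(text, uniform):7.1f} bits")
print(f"fitted model  : {ideal_code_length_bits(text, fitted):7.1f} bits")
```

The uniform model costs 8 bits per byte; the fitted model needs far fewer, because its predictions match the data. A strong sequence model plays the same role at a much larger scale, which is why good prediction translates directly into good compression.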
Read more at Ars Technica…