Thoughts
- I enjoyed hearing about the idea of ‘model collapse’: as an AI model is trained over time on more and more of its own ‘slop’ or inaccurate content, the quality of its output seems to collapse
- They talked about how all of the crawlable internet data has already been used to train AI models, and there is relatively little new data out there
- Combined with the fact that the models are being trained to produce pleasing text or data, this reduces the amount of rare ‘tail’ data that they draw on (see the toy simulation after this list)
- This means that their output will become more and more normatively average
- They finish with the idea that humans may actually be needed in the future to moderate and curate AI content; AI models may need humans to ‘parent’ them to stop them descending into blandness.
- They talked about how this might be an area where employment could increase. It felt like a much more balanced way to think about the software than what the current marketing suggests. I guess time will tell whether this situation (of slop generation) persists as the models change.
- I am certainly not an expert on the topic!
- => Overall, the conversation offers another useful, more critical way to think about AI. Model collapse may only be preventable by humans guiding AI in the right direction; there will always need to be a partnership.
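
The tail-loss mechanism above can be illustrated with a toy simulation. This is my own sketch, not something from the programme: the ‘model’ is reduced to a simple Gaussian fit, and each generation it is refitted to its own generated output while mildly favouring near-average samples (the 0.9 shrink factor is an illustrative assumption). The rare tails vanish within a few generations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Start with "real" data: a wide distribution whose tails stand in
# for rare, unusual content.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(10):
    # "Train" a toy model: estimate the mean and spread of the current data.
    mu, sigma = data.mean(), data.std()

    # The model generates the next generation's training data, slightly
    # favouring "pleasing", near-average outputs (hypothetical 0.9 factor).
    data = rng.normal(loc=mu, scale=0.9 * sigma, size=10_000)

    print(f"generation {generation}: std = {data.std():.3f}")
```

Running it, the standard deviation shrinks geometrically towards zero, which is the toy analogue of output becoming more and more normatively average.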
Reference
BBC.co.uk. (2025). Page Restricted. [online] Available at: https://www.bbc.co.uk/sounds/play/m00274wj [Accessed 13 May 2025].