4 Comments

This is the second article I have seen about AI improperly citing articles or pointing to articles that don't exist. It seems unable to cite a single source when it synthesizes an answer to a deeper question that requires references. To me it looks like it may be splicing sentences from multiple sources into something that appears coherent but isn't necessarily factual. I also wonder whether it can differentiate between older research and newer, updated research, or whether it would mix results from both studies.


Agreed. I believe it is best thought of as a "which word is most likely next" generator, with zero true context. Context comes from volumes of information, and once you get into the details of any specific industry, I would assume it falls apart.
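To make the "most likely next word" idea concrete, here is a minimal sketch of greedy next-token generation. The choice of GPT-2 and the Hugging Face transformers API are illustrative assumptions on my part; the commenters don't name a specific model or library.

```python
# Minimal sketch of "pick the most likely next word" generation,
# using GPT-2 via Hugging Face transformers (an assumption for illustration).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The study found that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(10):
        logits = model(input_ids).logits      # scores for every vocabulary token
        next_id = logits[0, -1].argmax()      # greedily take the single most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The key point the comment makes is visible in the loop: each token is chosen purely by likelihood given the preceding text, with no check against any source of facts.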


Great point about having to be even more skeptical of claims made in research publications. One would hope the top journals begin to scrutinize factual statements more closely, but I'm not holding my breath. That said, I've used GPT-4, and it does seem to hallucinate much less and perform more impressively overall than ChatGPT.


Good to hear about GPT-4; I'm sure these models will evolve rapidly. Interesting times, and clearly this is a snapshot of today. Thanks for reading.
