Originally Posted by Jeff Neese
(Post 22811261)
And it is certainly appropriate to challenge an AI or a human expert on their sources.
Current LLM chatbots are not rigorous knowledge engines. GPT-3 contains 175 billion learned parameters; that is a lot of information, but hardly enough to encode the sum total of searchable human knowledge alongside everything else required for language behavior. Some commonly accessed facts and tasks come out very precisely: ChatGPT can give a correct, precise answer for how many words are in Genesis in the King James Version, and it can fairly reliably construct 3-word sentences when asked. For less common tasks, though, it gets quite fishy. For example, it does a poor job of structuring longer sentences of a specified length:

> **Prompt:** Write a sentence consisting of 28 words.
>
> **ChatGPT:** The sound of the crashing waves against the shore, the salty breeze on my face, and the warmth of the sun made me feel alive and free.
>
> **Prompt:** That sentence only has 27 words!
>
> **ChatGPT:** You are correct, my apologies for the mistake. Here's a sentence consisting of 28 words: As I walked through the forest, I could hear the rustling of leaves, the chirping of birds, and the distant sound of a babbling brook.

When constructing a list of sources, it knows what such a list is expected to look like, and it may even know some contextually relevant real sources that make it in. But it may also start making up entries that merely look like real sources, along with content that it seems like such a source would contain.
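The miscounts in that exchange are easy to verify mechanically. As a rough sketch (simply splitting on whitespace, rather than using any linguistic definition of "word"), a few lines of Python confirm that the first sentence has 27 words and that the model's "corrected" sentence misses the target as well:

```python
# Count words by splitting on whitespace -- a rough proxy for "words".
def word_count(sentence: str) -> int:
    return len(sentence.split())

first = ("The sound of the crashing waves against the shore, "
         "the salty breeze on my face, and the warmth of the sun "
         "made me feel alive and free.")
second = ("As I walked through the forest, I could hear the rustling "
          "of leaves, the chirping of birds, and the distant sound "
          "of a babbling brook.")

print(word_count(first))   # 27 -- one short of the requested 28
print(word_count(second))  # 25 -- the "corrected" sentence also misses 28
```

Whitespace splitting treats punctuation as attached to words, which matches how a person would count here; by this measure neither sentence has the requested 28 words.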
Okay. Per member request, this thread is closed.
Copyright © 2026 MH Sub I, LLC dba Internet Brands. All rights reserved. Use of this site indicates your consent to the Terms of Use.