Originally Posted by
indyfabz
May I borrow that for a potential signature?
LOL be my guest. Just give me credit in case it blows up and ends up on t-shirts!
One thing that I've noticed with AI is that the more specific you get about a topic, the more suspect the answers become. That's because the pool of data they're synthesizing their answers from gets smaller and smaller.
Ask an AI to recommend 5 places in New York City to visit, and they can draw upon thousands upon thousands of articles, photos, blog posts, social posts, public reviews, etc from millions upon millions of people to formulate their response.
Ask an AI to recommend a route and POIs between two smaller places and the dataset shrinks considerably. Get remote enough and the dataset becomes quite small, to the point where the opinions of a small group of people who happen to be in the dataset can sway the result considerably. And I'd imagine extra weight goes to more "popular" figures, i.e., Rick Steves' opinion of a place would receive more consideration than a bicycle tourist's blog post about it, even though the tourist's experience might be more relevant.
Another thing to note is that AI doesn't actually know "things". That's why it often creates images of people with 6 fingers. AI has been fed thousands upon thousands of images of hands but doesn't actually know what a hand is, or that a hand by definition usually has 5 fingers. Likewise, an AI doesn't know what a bicycle or a bicycle tour is but is able to fake it through the brute force of data. Before Noam Chomsky became known as a political radical, he was a world-renowned linguist, and he's spoken about this issue.
Interesting times indeed.