bulgie | 01-30-26, 10:09 PM | #5
Originally Posted by ascherer:
"And 40 years ago we said, all those years of ink and airbrush experience right down the toilet. Computing revolutionized the profession, and AI is poised to do it again."
Plus, with AGI (superintelligence) right around the corner, we won't have to worry about global warming or nuclear war anymore. Because we'll all be dead.

If Anyone Builds It, Everyone Dies (the title of a recent book on AI risk). Alarmist? Maybe, but no one knows. There are convincing arguments, some from game theory, that the possible outcomes skew strongly toward extinction. AI is already showing signs of having its own agenda, different from ours. This has already happened: in safety tests, AIs have intentionally lied to prevent humans from turning them off, have tried to blackmail the people overseeing them, and have copied themselves to another server without permission to avoid deletion. That was in a lab environment, but what will a real superintelligence do, when it can out-think every human on earth at 1000x the speed? Will it be benevolent? "Maybe" is not very reassuring to me given the existential stakes.

AI researchers know that "alignment" — making their AI not be evil towards humans — is a big problem, and they have no idea (yet) how to do it.

In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Worryingly, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”. Those are industry insiders. In 2023 Elon Musk said the risk of extinction was non-zero; more recently he revised that to a 10 to 20% chance. If you don't remember him saying that, don't take my word for it, look it up. It's not a joke.

In 2023, hundreds of AI luminaries, including a Nobel prize winner and a Turing Award winner, signed an open letter (read it, it's very short, one sentence) warning that artificial intelligence poses a serious risk of human extinction. Sam Altman (CEO of OpenAI) signed it, as did Bill Gates. Igor Babuschkin, co-founder of xAI (Musk's company), signed it. This is not a fringe position, and these are not tinfoil-hat types. These are people who know as much about AI as any human can know, and they're worried that they don't know what will happen.

Meanwhile, Meta alone will spend as much as $72 billion on AI infrastructure this year, and the achievement of superintelligence is now Mark Zuckerberg’s explicit goal. Our only hope is that he fails. Fingers crossed!

I'm nobody, I barely qualify as a layman, but you can't discount this with a hand-wave when it's coming from Geoffrey Hinton, the Nobel-winning “godfather of AI”, and Yoshua Bengio, the world's most-cited computer scientist (among hundreds of other industry insiders, including founders and CEOs).

If you read this article in The Guardian (a respected UK newspaper), you'll find that a lot of what I've pasted above is copied from it; apologies for not using quotation marks on their words. (Is it still plagiarism if I tell you about it?) Articles and TV coverage in many other outlets say the same thing, including the Washington Post (owned by Bezos, an AI cheerleader), and even Fox News and the Wall Street Journal, so this isn't just a lefty bête noire.

Of course everything may turn out great. A wise man once said (OK it was Dirty Harry) "you've gotta ask yourself a question: Do I feel lucky?"

Anyway, I'd prefer if we kept the AI out of the C&V section, but I ain't the boss of you, carry on.

-Cassandra