Charles Hugh Smith: What ChatGPT and DeepMind Tell Us About AI

What’s interesting is that the really hard problem AI has not been applied to is how to manage these technologies within our socio-economic-cultural system.

The world is agog at the apparent power of ChatGPT and similar programs to compose human-level narratives and generate images from simple commands. Many are succumbing to the temptation to extrapolate these powers to near-infinity, i.e. the Singularity in which AI reaches super-intelligence Nirvana.

All the excitement is fun but it’s more sensible to start by placing ChatGPT in the context of AI history and our socio-economic system.

I became interested in AI in the early 1980s, and read numerous books by the leading AI researchers of the time.

AI began in the 1960s with the dream of a Universal General Intelligence, a computational machine that matched humanity’s ability to apply a generalized intelligence to any problem.

This quickly led to the daunting realization that human intelligence wasn’t just logic or reason; it was an immensely complex system that depended on sight, heuristics (rules of thumb), feedback and many other subsystems.

AI famously goes through cycles of excitement about advances that are followed by deflating troughs of realizing the limits of the advances.

The increase in computing power and software programming in the 1980s led to advances in these sub-fields: machine vision, algorithms that embodied heuristics, and so on.

At the same time, philosophers like Hubert Dreyfus and John Searle were exploring what we mean by knowing and understanding, and questioning whether computers could ever achieve what we call “understanding.”

This paper (among many) summarizes the critique of AI being able to duplicate human understanding: Intentionality and Background: Searle and Dreyfus against Classical AI Theory.

Simply put, was running a script / algorithm actually “understanding” the problem as humans understand the problem?

The answer is of course no. The Turing Test, in which a computer is programmed to mimic human language and responses, can be scripted, but that doesn’t mean the computer has human understanding. It’s just distilling human responses into heuristics that mimic human responses.

One result of this discussion of consciousness and understanding was for AI to move away from the dream of General Intelligence to the specifics of machine learning.

In other words, never mind trying to make AI mimic human understanding, let’s just enable it to solve complex problems.

The basic idea in machine learning is to distill the constraints and rules of a system into algorithms, and then enable the program to apply these tools to real-world examples.

Given enough real-world examples, the system develops heuristics (rules of thumb) about what works and what doesn’t, heuristics that are not necessarily visible to the human researchers.

In effect, the machine-learning program becomes a “black box” in which its advances are opaque to those who programmed its tools and digitized real-world examples into forms the program could work with.
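The training loop described above can be sketched in miniature. This is a hypothetical toy (a single perceptron, not any specific system the article discusses): the program adjusts its own internal parameters against labeled examples, and the resulting numbers are the "black box" heuristics, not directly meaningful to the programmer who wrote the loop.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1, w2, bias = 0.0, 0.0, 0.0  # the learned internals -- the "black box"
    for _ in range(epochs):
        for (x1, x2), label in examples:
            predicted = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = label - predicted
            # Nudge the internal parameters toward the correct answer.
            w1 += lr * error * x1
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

def predict(params, point):
    w1, w2, bias = params
    x1, x2 = point
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

# "Real-world examples": points above the line x2 = x1 are labeled 1.
data = [((0, 1), 1), ((1, 2), 1), ((2, 3), 1),
        ((1, 0), 0), ((2, 1), 0), ((3, 2), 0)]
params = train_perceptron(data)
print(predict(params, (0, 5)))  # classifies a new point it never saw
```

The learned weights encode the rule "label 1 when x2 exceeds x1," but nothing in the code states that rule; it emerges from the examples, which is the sense in which even this trivial learner is opaque.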

It’s important to differentiate this machine learning from statistical analysis using statistical algorithms.

READ MORE HERE

Published On: February 21, 2023 | Categories: Uncategorized


About the Author: Patriotman

Patriotman currently ekes out a survivalist lifestyle in a suburban northeastern state as best as he can. He has varied experience in political science, public policy, biological sciences, and higher education. Proudly Catholic and an Eagle Scout, he has no military experience and thus offers a relatable perspective for the average suburban prepper who is preparing for troubled times on the horizon with less than ideal teams and in less than ideal locations. Brushbeater Store Page: http://bit.ly/BrushbeaterStore
