Thursday, February 22, 2024

The Challenge of Debugging Artificial Intelligence

I am no expert in AI, but I have been intimately involved in programming for most of my career in technology.  The explosive growth of AI is fascinating to me, because I have been hearing about AI forever (or at least since the mid-1980s).  For most of that time AI was a very vague concept with a lot of hype and not much reality, unless one defines AI in the broadest terms as mimicking anything that human beings can do.  However, we do appear to be at a point now where AI as a true analog to human intelligence and learning will finally begin to impact our lives.

One interesting aspect of AI that distinguishes it from traditional computer programs is that the program code does not explicitly and logically specify what the program should do.  A non-AI computer program is a series of program steps with various data inputs that one can follow and see exactly what the program does.  If the behavior is different from what was expected, one can trace the code, see why this occurs, and correct it if desired.  This is called debugging.
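To make the contrast concrete, here is a toy example (entirely hypothetical, not drawn from any real program) of the kind of debugging described above, where the logic is explicit and a wrong answer can be traced to a single line:

```python
# A deliberately buggy conventional program: its logic is explicit,
# so a wrong result can be traced line by line to its cause.

def average(values):
    total = 0
    for v in values:
        total += v
    return total / (len(values) - 1)  # bug: should divide by len(values)

# Stepping through with a debugger (or simple print statements) reveals
# the off-by-one divisor immediately -- the logic is fully inspectable.
print(average([2, 4, 6]))  # prints 6.0 instead of the expected 4.0
```

Every step the program takes is written down in the code, which is precisely what the next paragraphs argue an AI system lacks.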


AI does not have this kind of linear program flow that can be flowcharted or traced.  An AI program is intended to learn and evolve its behavior, adjusting its internal parameters and connections based on the data it uses to learn and the learning mechanisms that have been built into it.  This requires massive amounts of data and massive amounts of processing to create the new or evolved state of the program.


As I understand it, once an AI program begins the process of learning it becomes essentially a black box. The relationships and connections it builds internally are too complex to be traced or followed in the way that one can do with non-AI programs.  If the output is undesirable or unexpected, then it seems likely that the only way to correct it is to have some sort of feedback mechanism that is meant to correct its behavior.  How quickly or efficiently that correction can be made is an interesting question.
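A minimal sketch of the feedback idea (a toy with one parameter, standing in for the billions in a real system) might look like this.  The point is that the correction never touches the program's code, only an opaque internal number:

```python
# Toy illustration of correction by feedback rather than by editing code.
# A real AI system has billions of such parameters, none individually
# meaningful -- which is what makes it a black box.

weight = 0.0  # the model's opaque internal state

def model(x):
    return weight * x

# Feedback loop: compare output to desired behavior and nudge the
# parameter against the error, repeatedly.
for _ in range(100):
    x, target = 2.0, 6.0           # we want model(2) to approach 6
    error = model(x) - target
    weight -= 0.1 * error * x      # adjust internal state, not code

print(round(weight, 3))  # converges toward 3.0
```

Nothing in the final state explains itself; one can only observe that the behavior improved, which mirrors the "feedback and hope" situation described above.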


This is analogous to human intelligence and behavior.  The brains of human beings are basically black boxes.  If a human being makes an error, there is no way to go in and trace the thinking or brain function and ‘debug’ the problem.  The only way to resolve bad or incorrect behavior is to give the human the appropriate feedback and hope that works.  


In many use cases, an AI robot or program will be unlikely to make errors that would be dangerous or problematic, but there are certainly cases where this could occur.  As in the human case, once an AI device is learning there is no definitive path that it will take.  It will be interesting to see how these devices perform over time and whether this unpredictability will become a major problem or not.


It is challenging enough to accommodate all the vagaries of human behavior and keep civilization on track. If we have robots and other AI devices with potentially unpredictable behavior it could push humanity right over the edge!

Tuesday, February 13, 2024

Of Utopian Societies and Post-Collapse Mania

Apparently when tech or finance plutocrats aren’t busy stroking their egos through puerile social media posts or imposing their will through financial blackmail, they are busy planning their own utopian societies or ensuring their post-collapse survival.

Yes, the tech and finance bros know how the world should work and are pretty disappointed that no one has put them in charge.  They can only attend or give so many TED talks before frustration sets in.  Doesn’t humanity realize that they are the job creators and the big thinkers?  What’s a billionaire to do?


And oh, by the way, they would like us to know that the world as we know it is going to collapse in a decade or two, if not sooner.  Climate change, mass extinction of plant and animal life, mass immigration, political chaos and war, economic crises – take your pick (multiple correct answers possible!) – will doom human civilization.


According to many of the modern, self-appointed illuminati, democracy is inefficient and outmoded, traditional government is worthless, liberalism and globalism are DOA, and society is on the verge of disintegrating.


But fear not, the tech wizards are coming up with a plan.  They are already directing their super intellects toward solutions for the planet.  Well, not for the whole planet, just for a small cadre of their brethren.


“Say, rather than use our vast undeserved resources to actually do some good, let’s buy land in some country that is desperate for cash and start building ourselves a utopian society and survival bunker so that our incredible genes can be the blueprint for the next version of humanity.”


It will be a new buzzword-based society – networked, crypto-funded, AI-guided, autonomous in every sense of the word!  And with all those success stories and big egos together, what could possibly go wrong?  Oh, thank you so much, tech wonders, for ensuring the survival of humankind, even if the vast majority of us will not be asked to participate!


But before we all get too excited, we might want to remind ourselves of the long history of utopian societies, most of which were started by similarly self-impressed and entitled aristocrats, autocrats, plutocrats and other such crats. Hint:  they all failed miserably.


Human beings are messy, society is messy, civilization is a slow plodding affair.  Bright people can help, but they don’t have all the answers.  There are no simple answers.  And starting over is only an option if you do not really care about people and only care about yourself.  And that, I am afraid, is a characteristic all too common in today’s plutocrats.