
Day 73 – Bitchin’ bout Tech

April 2, 2025 by Rick Kinnaird

Marilyn.
My.
Marilyn.

Wednesday, April 2, 2025

Okay. I know you’ve all been wondering about the refrigerator ice maker. Well, it’s fixed. At least I think it is. After I did this miracle hack. Right, everything is now “a hack.” I think it’s fixed. What did I do? I pressed the “reset” button. What does pressing reset do? I have no idea. I guess “they” thought I didn’t need to know. I am surmising that somehow the message to pump more water into the ice-making tray, and when to do it, got out of whack. All I know for certain is that there is no water on the kitchen floor. And that has been the case ever since I found the button, pressed it, and it chimed at me. Whether these two things are actually linked? I don’t know.

Moving on to something I do know. The boffo idea of taking the Social Security COBOL code and running it through an AI something or other and creating new spiffy code. I’m going to illustrate why this is not a good idea in several ways:

1. Why do some of our most advanced fighter jets run on 286 chips?

2. If I have something that works and it’s been tested and debugged for years, do you think it would be a good idea to replace it? (Or, in other words, “if it ain’t broke …”)

I’ll pause for a minute and answer #1. They run on 286 chips because they know it works.

Let me give you a few real-world examples:

When I was working with Fannie Mae, Steve Jobs had left Apple and was at NeXT. The NeXT folks had taken C code and put a wrapper around it. This was not a new idea. Borland had done it, calling it a studio. Microsoft stole the code from Borland and that was the end of Borland. (Sorry, they “appropriated” it.) The idea behind these environment-oriented platforms was to incorporate all the pieces it took to make an actual running program, put them in one place, and include a recipe for how to assemble them. It was easy to put a piece of code out of order or to put in a block of code that was not up to date. The idea was to avoid those common pitfalls.

NeXT took this idea a step further and added stuff to supposedly make the task of writing code easier. There was a group at Fannie Mae that endorsed this strategy and pushed for using it. I think they called it “The Plane.” The idea was that you would write up your requirements and dump them into “The Plane” and code would come out the bottom.

Let’s stop there for a second, and let me give you another example of this. MIT had a program to help doctors figure out the right dose of some sort of medicine to give a baby. The idea being that if X amount was good for an adult, and a baby weighed so much, and you take into consideration blah-blah-blah, then the right amount for that particular baby would be Y amount.
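
Just to make that arithmetic concrete, here is a rough sketch in Python of the kind of weight-based scaling a program like that might be doing. The function, the numbers, and the formula are all made up for illustration; this is not the MIT program, and it is definitely not medical advice.

    # A made-up sketch of weight-proportional dose scaling (roughly in the
    # spirit of Clark's rule). Illustration only; not the MIT program and
    # not medical advice.
    def scaled_dose_mg(adult_dose_mg, patient_weight_kg, adult_weight_kg=70.0):
        """Scale an adult dose down in proportion to body weight."""
        return adult_dose_mg * (patient_weight_kg / adult_weight_kg)

    # X amount (500 mg) is fine for a 70 kg adult; what is Y for an 8 kg baby?
    print(round(scaled_dose_mg(500.0, 8.0), 1))  # about 57.1 mg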

In both the code case and the medicine case, the people writing the code thought, “Yeah, someone will just take this code or medicine and do it.” Or, as I remember from working with Bell Labs: they came up with a billing system for their cool idea that companies could use their networking code (this is back in the early 80s) to keep track of all their operations, and the bill was a monthly invoice which literally read “Pay this amount.”

Let’s stop for a second and ask ourselves, “What would you do in these cases?”

In the Fannie Mae case – would you entrust the day-to-day operation of some critical function to some code that came out the bottom of some code generator you didn’t understand?

If you were a doctor would you inject a baby with medicine based on what you saw on a computer screen? What if the baby had a reaction and died?

If you were the CEO of a company and Bell Labs sent you a bill for, say, $27,000.00 with no explanation other than “This is what you owe,” would you pay it?

These DOGE geniuses have already proved themselves to be green, wet-behind-the-ears rubes when it comes to coding. Or, as Malcolm Forbes said, “time for the boys in short pants.”

The question they should be asked is, “You expect me to …?”

That blank might be filled in with:

“Run your code, untested, untried, live because you think it will work?”

“Inject a kid when I have no idea how the program arrived at its conclusion? You’d need to write a program to query the first program to tell me how it arrived at that answer.”

“Pay this? I’m gonna need some more details.”

The incredible thing is that in all three cases the coders thought “Yeah, that’s what we expect.”

It demonstrates a striking naiveté. There are stories of coders looking at printouts from machines and saying, “This can’t happen.” Then they put in print statements like “Impossible situation.” And then they realize that they need to number the “impossible situations” so they know where in the code they are.
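
For what it’s worth, here is a generic sketch in Python of that “this can’t happen” pattern, with the numbering fix from the story. It is illustrative only, not anyone’s actual code; the point is that numbering the traps tells you which supposedly unreachable branch actually fired.

    # A generic sketch of the numbered "impossible situation" pattern.
    def classify(balance: float) -> str:
        if balance > 0:
            return "credit"
        if balance < 0:
            return "debit"
        if balance == 0:
            return "zero"
        # "Impossible" ... except a NaN fails all three comparisons above.
        print(f"Impossible situation #3: balance is {balance!r}")
        return "unknown"

    classify(float("nan"))  # prints: Impossible situation #3: balance is nan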

And let’s talk AI for a second. AI is in its infancy and it’s hard to know where it will go and where it will end up. I think I am safe in saying that whatever people are predicting today about where it will go or what it will be is wrong. Why? Because one thing that has been shown over and over again is that we, we human beings, are terrible at predicting the future.

Here’s what I can say about all that:

Ray Kurzweil, aside from creating synthesizers, also worked in speech and character recognition when it was just beginning. He said that machines were good at certain things, and that as the capacity of those machines grew, the things they were good at would expand around the things they were already good at. In other words, things like voice recognition would become better, as would their speech and grammar.

He said that by 2080 the capacity of a machine would equal that of the human brain.

Now we are looking at AI or, as far as I know, what are called Large Language Models (LLMs). The idea is kind of what Alan Turing said back in the late 1940s: “If I had a machine with innumerable tapes …” In other words, if one could take all knowledge and stuff it into a machine, then you could access that and … well, who knows?

One thing that is becoming clear is that AI (or, as they are also known, LLMs) can do some pretty cool things, and some pretty stupid things. You can get them to generate reports and they sound pretty good when read back. I recently read an article where someone bemoaned the fact that a really, really good piece of writing could get drowned out in a flood of AI-generated mediocrity.

And that is where we are.

Good night.

Comments

  1. Kathy Goodwind says

    April 3, 2025 at 4:42 pm

    The further I get into the Dune series (all 21 books, I am on #10), the more I realize how far ahead of their time those books were written.
