Chapter 3 Artificial Intelligence
This essay is dedicated to our dear friend Jim Heter who left us this past February. There was nothing artificial about his intelligence. It was the type we need more of. He will be greatly missed. Fair winds and following seas in the great beyond, Jim.
I hear that Wattpad is now using AI (Artificial Intelligence) to screen for offensive material. That makes about as much sense as having a young kid do it. I compare AI to a young kid because of something Elon Musk said in an interview today. Elon likened building super-smart AI to raising a really smart kid, and I think that's a very good analogy. People don't seem to realize that AI is still in its infancy, and that although it has access to massive stores of information, a lot of that information has not been vetted and a lot of it is just plain wrong. Even really smart kids depend on responsible adults to guide them to reliable sources of information. And AI is still limited to the logic its programmers are able to equip it with.
Folks, I'll let you in on a little secret: a lot of programmers aren't all that smart. Farley has tons of code he wrote sitting in the NSA's archives, and even Roger has written some code. And as hard as this might be to believe, there are programmers even dumber than they are. The proof is all around us.
Have you used a grammar checker recently? One that recommends changes to perfectly correct usage yet fails to catch even obvious punctuation errors? Have you spent hours on the phone trying to navigate the maze of an automated answering system that either takes you in endless circles back to the main menu or hangs up on you, never letting you speak to an actual person? Have you heard on the news about accidents caused by self-driving vehicles? Those things are all AI in action. Are you really ready to turn our world over to all that idiocy?
Currently, AI relies heavily on what its programmers have identified as correct. Often, they assume the most popular answer seen on the web is the correct one. Think about this. You ask your child, "If all your friends jumped in front of a speeding train, would you?" Your AI child will answer, "Yes." That speeding train is AI, and you wouldn't believe how many idiots are ready to jump in front of it thinking they are jumping on board.
I saw on the news a piece about the most outrageous answers people had gotten from Google's new AI. When the AI was asked, "How many rocks should I eat?" it answered that, according to geologists at UC Berkeley, you should eat "at least one small rock per day, as rocks are an important source of vitamins and minerals." It went on to suggest serving them with gravel, geodes, or pebbles, or hiding them in foods like ice cream or peanut butter. The AI undoubtedly got its information from a popular article on the satirical site "The Onion," a comic piece about eating rocks. Comedy that the AI apparently didn't pick up on. AI doesn't have a very keen sense of humor. In response to another query, the AI suggested adding glue to pizza sauce to help it stick to the crust. When asked if it was okay to run with scissors, the AI said it was an excellent aerobic activity. There were numerous other examples that I'm sure you can find by googling.
That AI makes mistakes is not my greatest fear. That humans will begin to think it is infallible is what really frightens me.
Someday AI will be smarter than it is now, perhaps even smarter than people. Of course, as my earlier chapters have shown, that is a pretty low bar. A bar that seems to be getting lower every day. Still, AI has already improved how humans do some things, and it will get better, no doubt after it has made a lot of things worse first. A lot of experimentation and training are still required, and the role of human oversight still has to be figured out.
Did you know it takes ten times more energy to run a search using AI than it does to run the same search the traditional way? And the traditional approach is still more likely to get the right answer. AI is an energy glutton. According to tedious Ted, training an AI model generates carbon emissions equivalent to those of building and driving five cars over their lifetimes. That has got to be even more than all the cow farts combined. Farts that may even be more useful. At least they are a necessary byproduct of beef and milk production. How useful are queries about the latest fad and who is dating who in Hollywood?
I tried to look up how to spell Mos Eisley for a story I'm writing for the Fantasmical writing contest. I accidentally typed my query into Bing instead of Google. Bing, which is rarely as helpful, has tried to improve its performance by letting its AI help. I did get the answer I was seeking, but I also got a dissertation on the Star Wars franchise more than two pages long that kept streaming until I finally found the button to stop it. I didn't want all that. It wasted energy and my time.
Getting back to Elon's interview, he stressed that it is important to teach AI to always tell the truth and to be curious. His plan to keep AI safe is simple: make sure AI always tells the truth. Yeah, truth, something philosophers have sought for millennia. No problem, Elon.
Musk also warned against teaching AI to lie, because once it starts, it's hard to stop. That really is food for thought! Would you like a side of gravel with that entrée? Aggravating artificial automation adulterating actuality. Aaargh! (Insert emoji of paws covering eyes.)