216 DOWN, 3 TO GO!
I set out on Substack to be thoughtful and novel, and I am largely satisfied. While my writing never found much of an audience, I am happy I had the chance to become a better writer.
What I Want to Avoid
I am not surprised by the chatter surrounding Artificial Intelligence (AI), but I am exhausted by it! I don’t see the unrest over ChatGPT and Bard as particularly interesting.
AI image generators encourage people to prompt as if they had just eaten half a pan of hashish brownies. There is no harm; it’s just not that interesting to me. A childhood analogy: Spirograph and Lite-Brite were fun, but they were not creating art.
Substack Obsession
The press has portrayed ChatGPT as an existential threat to writing, and that feeling is amplified here on Substack. I have been using the assistant in Gmail for quite a while, and it doesn’t appear threatening.
This Post Is Finally Gonna Surprise You
Here is some vocabulary for today’s post.
Artificial Intelligence | Machine Learning | garbage in = garbage out | neural networks | Large Language Models (LLMs) | Convergence
Why LLMs Seem Dumb
My FAVORITE “feature” of LLMs like ChatGPT and Google Bard is that the application gets dumber and seems to lie, behavior that has been termed AI hallucination.1 The results are 100% based on the data used to “train” the model (machine learning). I am looking forward to the xAI LLM from Elon Musk, the consummate carnival barker. He will be using Twitter/X content (you are the product) to train his model.2 While I am no longer on Twitter/X, I am confident that you would be hard-pressed to find a poorer source of diction, writing structure, or erudite musings than the content on X. Time will tell, but I expect lots of poop emojis. Is there a better source of garbage in = garbage out than Twitter/X? I doubt it.
Is AI Doomed?
The answer is no: there is hope for humanity because of convergence. The interesting AIs are not the LLMs; the future belongs to reinforcement learning. LLMs are mostly based on a static snapshot of their training, and that data is limited to the data they can steal! The only way ChatGPT improves is to manually retrain with new data or to intervene on all the dumb stuff discovered so far.
I think a useful analogy is an English teacher. You turn in an essay. Your teacher uncaps their red pen and bleeds all over your paper. Products like ChatGPT are MANUALLY TUNING the neural network, trying to anticipate the mistakes kids might make IN ADVANCE. All of the very interesting AIs adjust to what they see each day! I think an English teacher would advise that it is foolish to try to anticipate every possible error in advance. It probably does not help that the tuners of LLMs are not stock-option millionaires. Garbage In = Garbage Out.
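To make the red-pen analogy concrete, here is a toy sketch in Python. The answer names, the thumbs-up feedback, and the simple running-average update are all invented for illustration; this is not how ChatGPT, Bard, or any real assistant is tuned. One model is frozen the day it ships, the other keeps score from daily feedback and quietly gets better.

```python
import random

# Toy contrast between a "static snapshot" model and one that keeps
# learning from feedback. The scenario and numbers are invented for
# illustration only; real LLM tuning is far more involved.

TRUE_QUALITY = {"answer_a": 0.2, "answer_b": 0.9}  # hidden "right" answer

def user_feedback(answer: str) -> int:
    """Simulated reader: thumbs-up (1) with probability = true quality."""
    return 1 if random.random() < TRUE_QUALITY[answer] else 0

class StaticModel:
    """Frozen at 'training time' and never changes, like a snapshot."""
    def respond(self) -> str:
        return "answer_a"  # whatever looked best when it was tuned

class ReinforcedModel:
    """Updates its estimates after every interaction (bandit-style)."""
    def __init__(self):
        self.scores = {a: 0.0 for a in TRUE_QUALITY}
        self.counts = {a: 0 for a in TRUE_QUALITY}

    def respond(self) -> str:
        if random.random() < 0.1:                        # explore a little
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)     # exploit the best

    def learn(self, answer: str, reward: int) -> None:
        self.counts[answer] += 1
        # running average of feedback: the "touching the hot stove" part
        self.scores[answer] += (reward - self.scores[answer]) / self.counts[answer]

static, reinforced = StaticModel(), ReinforcedModel()
static_total = reinforced_total = 0
for _ in range(1000):                                    # a thousand "days"
    static_total += user_feedback(static.respond())
    answer = reinforced.respond()
    reward = user_feedback(answer)
    reinforced.learn(answer, reward)
    reinforced_total += reward

print("static model thumbs-ups:    ", static_total)      # stuck near 200
print("reinforced model thumbs-ups:", reinforced_total)  # climbs toward 900
```

The frozen model only improves if someone retrains it by hand; the other one adjusts to what it sees each day, which is the whole point.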
Why the LLM obsession? Real improvements in our lives are not as interesting as “draw a unicorn with a beanie on his head” or “write a story about World War I that includes the use of drones.”
Good News Is Right Under Our Nose!
My favorite reminder of “good” AI is autopilot, which has been around for over a century! This classic scene from the movie Airplane! is tough to beat.3
Top-tier self-driving AI systems have been developed by Tesla and Waymo. These systems are constantly training on the latest driving experience of the whole fleet of cars, and they learn nightly from the mistakes made the day before! They improve like children touching a hot stove. Even before the pandemic, Waymo was already simulating 100 years of driving every night BASED ON ALL PRIOR activity across the fleet. Now that is reinforcement!4
The press has difficulty generating clicks from what is relevant, so it amplifies the edge cases. What makes THESE systems different from ChatGPT and Google Bard is that when they do stupid stuff, the model discerns the error and begins reinforcing a better answer. Effective AI will depend upon the quality and breadth of the data available. Instead of fretting so much and speculating about things we don’t know about, like ivermectin5 (horse cream) or hydroxychloroquine6 (a derivative of a WW2 treatment for malaria), let’s try giving greater credence to following where the data takes us. A fun aside: tonic water used to contain quinine. I wonder if the British officers thought having a gin and tonic would fight malaria? Did anyone on FB/Twitter/X promote gin and tonics for COVID?
DeepMind AlphaFold7 — Every living thing on this planet can be reduced to a bunch of amino acids connected like TinkerToys in a repeatable pattern; we call them proteins. Before proteins were first described in 1838, we were splashing around in the mud. How proteins fold is one of the fundamental mysteries of the universe. I said universe because we find amino acids on meteorites all the time, and they are not the same as the ones we use. Humans are made up of 25,000+ proteins. As this Substack adventure winds down, here is an early post that I still think has my best subtitle.
DeepMind AlphaFold learned 3D geometry and has described how 200M+ proteins fold. This used to be a onesey-twosey thing for humans in a lab. Life gets better.
DeepMind GNoME8 — Graph Networks for Materials Exploration. There are about 100 elements in the periodic table. When we put them together, we can mix sodium and chlorine and make some salt. Mixing stuff is another drudgery for humans, mostly trial and error. In the history of humanity, we’ve figured out about 42,000 compounds of consequence. AlphaFold decided 3D geometry was worth knowing, and that approach has now been applied to the mystery of materials science instead of proteins. GNoME has proposed 200,000+ new and novel compounds. Legions of humans COULD NEVER HAVE accomplished this. Welcome to the future. Life gets better.
IBM taught computers to play chess (Deep Blue) and Jeopardy (Watson) by training on all the recorded games ever played. I guess that is pretty cool, but it is a lot like ChatGPT. At least they weren’t using chess games played by me for training, à la the clipped training-data nonsense used to train LLMs. The method is good enough to beat people at games almost every time, but it takes time to feed in all those games. The newest AI game players are a bit more like us. They are taught the rules of the game, kinda like a kid, and they figure out the rest by playing. In the beginning, they lose a lot. In a very short time, they are better than any human who ever lived. Learning from experience is infinitely more interesting than memorization. I’m guessing that training on Twitter/X data, memorizing the filth spewed by even its founder, is a surefire way to create more anti-Semites. Bad ideas ultimately start with narrow-minded people. Wouldn’t it be better if we could call bullshit and stop listening to bullshitters? Reinforcement learning, rather than the Facebook and Twitter/X musings in your feed, is a better way to learn, unless your side hustle is blood pressure medication. I wrote about AI gaming way back in this post. It was a bit long but had a lot of fun commentary.
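For the genuinely curious, here is roughly what “taught the rules, figures out the rest by playing” looks like in miniature. This is a toy Python sketch of self-play on the old matchstick game Nim; the game, the learning rate, and the simple table of values are my own simplifications, not how AlphaZero, Watson, or any real game-playing AI is actually built.

```python
import random

# Toy self-play learner for Nim (21 sticks, take 1-3 per turn, whoever
# takes the last stick wins). It is told only the rules; it discovers
# how to win by playing against itself.

STICKS, TAKES = 21, (1, 2, 3)
Q = {}  # Q[(sticks_left, take)] = value from the mover's point of view

def legal(sticks):
    return [t for t in TAKES if t <= sticks]

def choose(sticks, epsilon):
    """Epsilon-greedy: usually the best known move, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(legal(sticks))
    return max(legal(sticks), key=lambda t: Q.get((sticks, t), 0.0))

def train(episodes=50_000, alpha=0.1, epsilon=0.2):
    for _ in range(episodes):
        sticks = STICKS
        while sticks > 0:
            take = choose(sticks, epsilon)
            nxt = sticks - take
            if nxt == 0:
                target = 1.0                  # we took the last stick: win
            else:
                # the opponent moves next, so their best outcome is our worst
                target = -max(Q.get((nxt, t), 0.0) for t in legal(nxt))
            old = Q.get((sticks, take), 0.0)
            Q[(sticks, take)] = old + alpha * (target - old)
            sticks = nxt

train()
for sticks in (21, 10, 7, 5):
    best = max(legal(sticks), key=lambda t: Q.get((sticks, t), 0.0))
    print(f"{sticks} sticks left -> take {best}")
```

In the beginning it loses a lot; after enough games against itself it rediscovers the classic winning strategy (leave your opponent a multiple of four) without ever being shown a single human game.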
If you want a cool explanation of deep reinforcement learning AND you remember the old Atari game system, this one is for you! You might be overwhelmed by the references to curiosity and addiction in the link. Having raised three boys who loved video games, I found this to be a blast. It was called Agent57 since there were 57 games available on the original Atari game system.9
The Poll & Music
If you are waitlisted for the Neuralink clinical trial, this is the song for you!!!
My last footnote is only for those genuinely interested in reinforcement learning. Memorizing the next word based on the last word will eventually be understood as a parlor trick. There is an interesting link here10 and a deep dive for the intrepid in the body of the article. I imagine skateboarders and snowboarders learn new tricks by hanging out with pals and emulating what they see one step at a time. I suspect a whole generation of kids has watched a Michael Jordan dunk on YouTube and eventually figured out how to do the same.
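If you want to see the parlor trick in its most naked form, here is a tiny Python sketch of a model that does nothing but memorize which word followed which in its training text. Real LLMs look at far more context and use billions of parameters, but the spirit is the same; the training sentence below is made up, and the model never learns another thing after its single pass over it.

```python
import random
from collections import defaultdict

# The parlor trick in miniature: a bigram model that only knows which
# word tended to follow which in its (invented) training text.

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog the dog chased the cat").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)                # memorize: after `prev` came `nxt`

def babble(start="the", length=10):
    word, out = start, [start]
    for _ in range(length - 1):
        word = random.choice(follows[word])  # parrot a memorized continuation
        out.append(word)
    return " ".join(out)

print(babble())   # fluent-looking, but it has no idea what it is saying
```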
Hello! I am sad to see the countdown ticking down, but I'm very interested in seeing what you'll be creating with the extra writing time you'll have when your posts get to "0."
This most recent post is fascinating and your opinion meshes with my techie husband's. I'm the biology geek in our house and he's an expert in all things that run on electricity - when I told him that I was a little worried about AI taking over jobs, I apparently touched a sore spot, as he began a rant about the stupid way AI is portrayed in popular media. The short version: He said AI is nothing more than programming and while AI might be used to do any number of nefarious things (the idea of a Twitter/X based AI boggles my mind!), an artificially intelligent machine will not suddenly decide to take over the world - some evil HUMANS may indeed end up trying to program an AI machine to do such a thing, but the machine is a tool and will only follow its programming. As you point out, "garbage in, garbage out" will always be a limiting factor in AI.
Two quick points... ivermectin is a very good medication for worming equines (hee haw!) and hydroxychloroquine is useful for alleviating the effects of rheumatoid arthritis. I'm not sure they're much better for preventing COVID than injecting bleach (the bio geek in me shudders!), though when COVID hit our house last year my father-in-law (an 86-year-old man who was taking hydroxychloroquine for RA) was the only family member to escape symptoms (though it might be coincidental, of course - he and I were both fully vaccinated, and I had only mild symptoms, while my unvaccinated hubby was sick in bed for three days).
Have a good day, and I look forward to your last three articles!
Just seeing this. I love your optimism. While I try to be optimistic, I’m also a realist. AI could be horrific if not kept in check. I seem to recall an apple that wasn’t supposed to be eaten. 🤔