The Age of AI (part 1/3)
“AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” — Sam Altman, CEO of OpenAI
AI has a history of “failed beginnings”. At the Dartmouth Workshop in 1956, computer scientists assumed creating an AI would be a 2-month, 10-person project. For some, IBM’s Deep Blue defeating Garry Kasparov at chess in 1997 was a seminal moment. For others, it was AlphaGo defeating the world’s leading Go player (Lee Sedol) in 2016.
However, chess and Go are games with relatively simple rules. The real world is much messier. More significant AI milestones occurred in 2016, when AI systems started to exceed human abilities at image, speech, and handwriting recognition.
Another seminal breakthrough occurred in 2017, when researchers at Google published a paper titled “Attention is All You Need”, which introduced the Transformer architecture and a novel use of Attention (a mechanism that allows artificial neural networks to “focus” on the most relevant parts of their input when performing their calculations). This set in motion an “arms race” towards ever larger neural nets (requiring more training data and more computing resources). Somewhat unexpectedly, these larger networks started to exhibit emergent properties, such as the ability to perform logical reasoning or transfer knowledge from one domain to another.
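To make the idea of Attention a little more concrete, here is a minimal sketch of the scaled dot-product attention operation at the heart of the Transformer. It is illustrative only: real models add learned projections, multiple attention heads, masking, and much more, and the tiny random vectors below are purely hypothetical stand-ins for token representations.

```python
# Scaled dot-product attention: each query "focuses" on the keys most
# similar to it and returns the corresponding weighted mix of the values.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax -> attention weights
    return weights @ V                              # weighted sum of the values

# Toy example: 3 "tokens", each represented by a 4-dimensional vector.
x = np.random.randn(3, 4)
print(attention(x, x, x).shape)  # (3, 4) -- self-attention output
```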
Fast forward to 2022, when an AI became the fastest-growing consumer product in history:
Despite being arguably the ugliest product in history (a text field into which you enter a prompt and receive text output), ChatGPT and other Large Language Models (LLMs) such as Bard and Claude are uncannily “human-like” in their ability to interact. This raises the question: are they intelligent?
The Turing Test has long been the suggested way to answer that question, but it now seems quaint. Not only do these AIs use better logic, grammar, and spelling than most humans (a dead giveaway that you’re chatting with an AI), but GPT-4 already outperforms most humans on various exams used for college credit or legal/medical qualifications.
Predictably, people are now saying that these exams don’t really measure intelligence. This is typical of AI. As we make progress, we move the goalposts:
“If it works, it’s not AI” — Rodney Brooks, early AI pioneer
What can AI (already) do?
This is one of the fastest-evolving technologies in history. As of this writing (May 2023), AI already can:
- Answer questions, summarize documents, and create texts in various forms or styles (articles, jokes, poems, etc.). This is making writers, editors, and researchers more productive.
- Translate text into other languages. Help decipher previously “lost” dead languages, opening up new avenues in historical research.
- Write computer code in response to a prompt. Explain code. Fix errors in code. Early data shows this can boost the productivity of developers by over 50%.
- Create photo-realistic images in response to a prompt (e.g., Midjourney).
- Determine the 3D structure of proteins. Google DeepMind’s AlphaFold predicted the structures of 200 million proteins (nearly all known to science) in a matter of weeks; before that, it took a researcher on average 5 years to determine the structure of a single protein. This will revolutionize drug discovery.
- Detect Parkinson’s disease from bodily fluids with 96% accuracy.
- Analyze numerical data and perform a variety of analyses to detect correlations, anomalies, etc.
What can’t AI do?
Neural nets look for patterns in their input data. Because LLMs need enormous amounts of it, their creators fed them as much text as they could find on the Internet. However, there’s a well-known principle in computer science: GIGO (garbage in, garbage out). So while snacking on highly curated data (scientific publications, government documents, Wikipedia) leads to higher-quality results, the training data also contained dregs such as text from online chat groups, hate speech, and conspiracy theories.
Idealists assumed that AI scientists were creating a secular deity (Genesis 5:1: “…he made them in the likeness of God”): always accurate and without bias. Sadly, AI is a reflection of humankind and all of our flaws, captured online in our texts. The main limitations of today’s AIs are:
- They reflect the bias in the training data.
- They confabulate (the term Geoffrey Hinton prefers to “hallucinate”).
Of course, people have the same two limitations.
Possible reasons why AIs “make shit up” include:
- Wrong data in the training set
- Faulty learning: creating generalizations not supported by the data
- Lack of “objective truth” or “causality”
The third issue is the most serious one. AIs are not people; they don’t experience physical reality: gravity, aging, pain, or joy. They can’t refer to an internal model of “how the world really works” built from personal experience the way we do. AIs are primarily optimized for “coherence” (i.e., does this output make sense relative to the data it’s been given?). There is no objective “truthfulness” value they can optimize for, because they haven’t been given a way to measure it.
The best we can do (right now) is to take a newly trained AI and have humans refine its training further (by hand). This is called Reinforcement Learning from Human Feedback (RLHF). The AI is given various prompts, and a human being selects which of several candidate responses they deem best. This feedback is then used to further train the model to produce better outputs over time. This is also how companies prevent AIs from explaining how to build better bombs or create dangerous bioweapons. Not only is this very labor intensive, it can also be easily circumvented via “jailbreaks” or “prompt injections” that tell the AI to ignore those restrictions. A better approach may be to embed some “primary drives” into the foundation of an AI, so that it can guide itself towards more truthful and ethical behavior (more on this later). This is known as the Alignment Problem: how do we ensure that AIs have our best interests at heart?
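To give a feel for what that human feedback actually trains, here is a minimal, illustrative sketch of the preference-learning step behind RLHF: a small reward model is taught to score the response a human preferred higher than the one they rejected. Everything here (the tiny linear reward model, the random “embeddings”) is a hypothetical stand-in; real systems use a large transformer as the reward model and then fine-tune the LLM against it with a reinforcement-learning algorithm such as PPO.

```python
# Sketch of reward-model training from human preference pairs.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy stand-in: maps a response embedding to a single scalar score."""
    def __init__(self, embedding_dim: int = 16):
        super().__init__()
        self.score = nn.Linear(embedding_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake data: for each prompt, a human picked one response over another.
chosen = torch.randn(32, 16)    # embeddings of the preferred responses
rejected = torch.randn(32, 16)  # embeddings of the rejected responses

for _ in range(100):
    # Bradley-Terry style loss: the chosen response should score higher.
    loss = -torch.nn.functional.logsigmoid(
        reward_model(chosen) - reward_model(rejected)
    ).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```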
Right now, we should think of AIs more as “aliens” than as “people”, because they’re not made like us:
“These things are totally different from us. Sometimes I think it’s as if aliens had landed and people haven’t realized because they speak very good English.” — Geoffrey Hinton, early AI pioneer