AI risk
Two articles arguing (in part) that AI won’t kill us:
Opinion | This Changes Everything
The stakes here are material and they are social and they are metaphysical. O’Gieblyn observes that “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.”
This is an inversion of centuries of thought, O’Gieblyn notes, in which humanity justified its own dominance by emphasizing our cognitive uniqueness. We may soon find ourselves taking metaphysical shelter in the subjective experience of consciousness: the qualities we share with animals but not, so far, with A.I. “If there were gods, they would surely be laughing their heads off at the inconsistency of our logic,” she writes.
These ideas play on old, Hollywood-inspired narratives about what a rogue A.I. might do to humans. But they’re not science fiction. They’re things that today’s best A.I. systems are already capable of doing. And crucially, they’re the good kinds of A.I. risks — the ones we can test, plan for and try to prevent ahead of time.
The worst A.I. risks are the ones we can’t anticipate. And the more time I spend with A.I. systems like GPT-4, the less I’m convinced that we know half of what’s coming.
Opinion | Noam Chomsky: The False Promise of ChatGPT
In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
I get that the success of ChatGPT goes against Chomsky’s theory of language, but it’s still a bit surprising that Chomsky et al. don’t give it a little more credit…
Emergence, limitations, and intelligence
The Unpredictable Abilities Emerging From Large AI Models | Quanta Magazine
Recent findings like these suggest at least two possibilities for why emergence occurs, said Ellie Pavlick, a computer scientist at Brown University who studies computational models of language. One is that, as suggested by comparisons to biological systems, larger models truly do gain new abilities spontaneously. “It may very well be that the model has learned something fundamentally new and different that it didn’t have at a smaller size,” she said. “That’s what we’re all hoping is the case, that there’s some fundamental shift that happens when models are scaled up.”
The other, less sensational possibility, she said, is that what appears to be emergent may instead be the culmination of an internal, statistics-driven process that works through chain-of-thought-type reasoning. Large LLMs may simply be learning heuristics that are out of reach for those with fewer parameters or lower-quality data.
I don’t fully understand the difference between these two options, but the overall discussion of emergence is interesting.
AI Chatbots Don’t Care About Your Social Norms
The upshot is that chatbots aren’t conversing in a human way, and they’ll never get there solely by saying statistically likely things. Without a genuine understanding of the social world, these systems are just idle chatterboxes — no matter how witty or eloquent.
Hmm… just because they don’t actually care about things the way humans do doesn’t mean they can’t learn social norms in other ways, especially as they get more advanced over time. In fact, they can likely do better in some scenarios, since they don’t have emotions to get in the way…
What AI Can Tell Us About Human Intelligence
At the heart of this debate are two different visions of the role of symbols in intelligence, both biological and mechanical: One holds that symbolic reasoning must be hard-coded from the outset and the other holds it can be learned through experience, by machines and humans alike. As such, the stakes are not just about the most practical way forward, but also how we should understand human intelligence — and, thus, how we should pursue human-level artificial intelligence.
This article is from June 2022, and it seems some of its examples are already obsolete given the rapid advance of generative AI since then.
AI & Education
I'm a high school math and science teacher who uses ChatGPT, and it's made my job much easier
When I first used ChatGPT, I treated it as a joke; I asked it to write poems about the Pythagorean theorem and a song about math in the style of Taylor Swift. My students said they enjoyed them, which gave me the push I needed to keep testing it.
Since then, I've asked ChatGPT to write lesson plans, generate exercise worksheets, and come up with quiz questions. I even got it to generate a "fill in the blank" worksheet by feeding my lecture notes to the bot and telling it to remove keywords, a task that would've taken me ages to do manually. It also developed an interactive game.
As a result, my productivity has gone through the roof.
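The fill-in-the-blank trick is also easy to reproduce programmatically. Below is a minimal sketch using the OpenAI Python SDK to do roughly what the teacher describes; the model name, prompt wording, and sample notes are my own assumptions, not from the article (the teacher presumably just used the chat interface directly).

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical lecture notes; in practice you'd paste in your own.
lecture_notes = """
The Pythagorean theorem states that in a right triangle, the square
of the hypotenuse equals the sum of the squares of the other two
sides: a^2 + b^2 = c^2.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works; this is just an example choice
    messages=[
        {
            "role": "system",
            "content": "You create fill-in-the-blank worksheets for high school students.",
        },
        {
            "role": "user",
            "content": (
                "Rewrite these lecture notes as a fill-in-the-blank worksheet. "
                "Replace each key term with a numbered blank and list the "
                "answers at the end.\n\n" + lecture_notes
            ),
        },
    ],
)

# Print the generated worksheet.
print(response.choices[0].message.content)
```

The whole thing is a single prompt plus your notes, which is presumably why it saves so much time over blanking keywords by hand.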
One AI Tutor Per Child: Personalized learning is finally here
Having kids sacrifice their childhoods to passively sit in classrooms while one heroic teacher after another attempts to pour knowledge into them is utterly bone-headed. We have known this for a long time now. Many passionate educators, such as John Taylor Gatto, have written articulately and angrily about this.
We now have lived experience of what a different education can look like. On a daily basis we see how kids can flourish and learn when allowed to actively explore and engage with the world. Worksheets and textbooks are poor substitutes for rich, multi-sensory experiences where learning happens pretty much by default.
He discusses how large language models can act as a personalized tutor; clearly this is going to be extremely impactful. He also argues they’re powerful for planning lessons, although I wouldn’t have expected that to matter as much. He then argues this is the greatest advance of the last 50,000 years, which might be taking things a bit too far.