Why I Rarely Use AI
One IR scholar's take on AI for writing and research
This newsletter is about data and international relations, but sometimes also AI—a term I’m using as shorthand for the usual suspects like ChatGPT, Claude, and the myriad other large language model (LLM) services on offer these days. I occasionally write about AI because AI is everywhere. As an aspiring IR scholar who also teaches college classes on IR and data analysis, I’m exposed to all kinds of hype about using AI in my research, and to endless content about the promise and peril it poses for student outcomes. So, sometimes, I deviate from the usual programming to opine on AI from my perspective as a researcher and professor.
Today, I want to use this newsletter as a platform for a confession: I rarely use AI. Mind you, I haven’t not used AI. But if the vibes I get from my Substack and LinkedIn feeds have any grounding in reality, my AI use is well below average among the most vocal quantitative social scientists the algorithmic powers that be bring to my attention. At the very least, it’s well below where peer pressure on social media dictates it should be. While some seem to use AI daily, I use it closer to once a week at most, and more often once every couple of weeks. That hardly makes me a power user.
I don’t have a high-minded, self-righteous reason for my sluggish adoption of AI. I certainly respect the conscientious objectors who worry about AI’s ravenous hunger for electrical power and unquenchable thirst for water. I do like the environment, and I think farmers could use the water AI data centers slurp up faster than my toddler downs his Mamma Chia pouches. But this doesn’t explain my reluctance. I’ve asked AI to do some trivial things for me—trivial, perhaps, but enough to cost me street cred with the real conscientious objectors.
Nor does my reluctance stem from disappointment with AI’s overhyped real-world performance. To be sure, AI does disappoint me, and I do think its current abilities are overhyped. A common experience when I give AI a fair shot at a complex coding problem is getting caught in a doom loop that makes me envious of Bill Murray’s Groundhog Day experience. Before long, I give up, then have a moment of inspiration and solve the problem in a matter of minutes on my own. In those instances, I wonder whether my time would have been better spent taking a coffee break and letting my subconscious ponder the problem rather than arduously troubleshooting it with AI assistance.
But I don’t give these negative experiences too much weight. If I’m going to AI for help, it’s usually to sort out a uniquely hard problem. ER doctors don’t exactly see people on their best days; when I go to AI, it’s usually not my best day either, so my experience doesn’t provide the best benchmark for its usefulness. And, of course, I do have positive experiences with AI. Recently, I had a new research idea, but before I went too far down the rabbit hole I wanted to make sure I wasn’t reinventing the wheel—which I have, in fact, done before (darn that “double machine learning”). AI helped me summarize relevant studies and confirm that my idea was sufficiently novel to pursue—though let’s hope that wasn’t the sycophancy talking.
So why do I only rarely use AI? At the risk of sounding like a petulant child: because I don’t want to.
Let me gussy up this answer a bit. Much of my work is a combination of teaching, research, and writing, and I enjoy all three: not just the things I produce, but the process itself. Take this newsletter. I write every word that you’re reading—despite what my sometimes appropriate use of the em dash might suggest. I can’t promise that my prose is the envy of the academy, nor can I guarantee no typos or grammatical faux pas, but I can assure you that every thought and every data visualization you see in Foreign Figures is artisanally handcrafted. And that’s how I like it.
The same is true of my research. I don’t ask research questions because AI helped me identify a gap in the literature; I ask them because a question nags at me day after day, like an itch I have to scratch. And the itch only goes away once I have an answer. Not every question lends itself to an academic publication, which is part of the reason this newsletter exists—I need a repository for all the answers I’ve accumulated to nagging questions. Sometimes an answer adds up to something more, but many times it just stays in this newsletter. Either way, my research satisfies a deep desire to know something I didn’t know before. Cue some motivational poster with the word “curiosity” plastered on it, like the typo-infested one AI made for me below.
I also rarely use AI for coding. Part of the reason is that I don’t need its help. While I’m not the fastest coder in the world, I’m no slouch. I’ll admit that AI produces code much faster than I do on a line-by-line basis, so why write my own? For pride? Maybe, but there’s a difference between coding and programming, and AI excels at the former while struggling mightily with the latter. One involves predicting the appropriate code for a problem; the other, thinking about how to solve a problem and working that solution out through code. If writing is thinking, so too is writing code. I’m fluent in English, and I’m fluent in the R programming language. Both are a means to thinking. If I offload my prose or my code to AI, and by extension my own thoughts, where’s the joy in that? I don’t use AI much to write code simply because I enjoy writing code, the same way a novelist enjoys writing novels or a philosopher enjoys crafting a well-argued essay. I derive meaning from the process, not just the final product. I’m not alone in this attitude either—Nate Silver feels the same way.
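To make the coding-versus-programming distinction concrete, here’s a toy R sketch with made-up data (nothing from an actual project of mine). The pipeline itself is the kind of boilerplate an LLM autocompletes in seconds; the programming lives in the judgment call flagged in the comments.

```r
# Toy data, invented for illustration: one row per country-year.
library(dplyr)

conflicts <- tibble(
  country = c("A", "A", "B", "B"),
  year = c(2000, 2001, 2000, 2001),
  battle_deaths = c(10, NA, 250, 300)
)

# The "coding" below is predictable boilerplate. The "programming" is the
# judgment call: does a missing count mean zero deaths, or an observation
# that was never collected? That answer comes from knowing the data, not
# from autocomplete. Here I drop the missing row, but that's a substantive
# choice, not a syntactic one.
conflicts |>
  filter(!is.na(battle_deaths)) |>
  group_by(country) |>
  summarize(total_deaths = sum(battle_deaths))
```

The pipeline runs either way. The part worth keeping for myself is deciding what the data can honestly say.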
Deriving meaning from the process, not just the product, was the lesson Adam Sandler’s character learned in the movie Click, where he’s sold a TV remote that lets him fast-forward through parts of his life he’d rather skip. Soon enough, he realizes he’s skipped too much and wants to rewind, but he can’t (hence the tension that animates the plot). Sometimes AI use feels a little too much like a fast-forward button to me—one that, if hit too often, robs me of the learning opportunities and friction that make life meaningful.
Indulge me in a bit of amateur armchair philosophy. Too much of the pro-AI versus anti-AI debate fixates on the value of production while ignoring the value of process. We live in a culture that values maximum production with minimal friction, so it’s no wonder we developed a tool that does just that. In fact, we’ve developed many tools and broader infrastructures to accomplish this goal already; AI is only the latest iteration of a centuries-long trend. What was the Industrial Revolution but a means to greater production with less friction?
Don’t misunderstand. I think advances that let us produce more with less effort are good. Complaining about living in a society that seeks to, and can, develop a tool like AI would be a bit like complaining that the roof over my head blocks my view of the stars as I fall asleep. That roof (and central heating) is why I haven’t died of exposure. It’s all too easy to critique systems that, paradoxically, have contributed to the conditions that give us the luxury to critique them.
But no system is perfect. We live in a complex world, with ever more complex systems being created to confront complex problems. Those systems generate good things, but they also create new pathologies. The good that comes from a society that values maximum production with minimum friction is a vast improvement in the quality and quantity of life for the average person. Once upon a time, 50% of children never made it past their fifteenth birthday; today the figure is less than 5%. That’s a good thing. But maximizing production and minimizing friction can easily become ends in themselves rather than means to a better life for us and for future generations. Part of that better life (the good life, if you will) is doing something meaningful with our longer lives and bountiful resources. In my experience, I need some friction to generate meaning.
One of the ways I get that meaning-making friction is by staying present in the process of production. Right now, AI tries a little too hard to remove friction on my path to producing prose and code. As a consequence, my meaning tank drains when I use AI.
Everyone is different, and plenty of people seem to enjoy using AI. Not everyone finds the same kinds of work meaningful, and if AI helps you speed through the stuff you hate to get to the work or hobbies you love, that’s fine with me. More power to you. But I wonder when we’ll hit a point of diminishing returns at a societal level. Will we create a world so productive and frictionless that meaning craters completely?
Thanks for reading or listening! You can support Foreign Figures by liking, sharing, buying me a coffee, or subscribing.


