I have been using Cursor to write code, and this site is code, so I also use Cursor to write the content I post here. It knows about my codebase, which in this case includes my previous articles, so it has been interesting to see what it recommends I write. As I was updating the pages about me, it suggested what I should write about my past. It was wrong, of course, but close enough to be thought-provoking. Here is what the AI suggested I write:
stollee is a nickname I picked up in Iraq in 2004. I was working as a civilian contractor for the U.S. Army in Baghdad when I started a blog called “The Daily Show on Iraq”. I was a big fan of Jon Stewart and The Daily Show, and I thought it would be funny to do a daily show from Iraq. I soon learned that funny is not easy to capture in writing, and I’m not sure I succeeded, but I did manage to capture a bit of the absurdity of the situation. I continued the blog for several years, and it became a popular destination for people interested in what was happening in Iraq at the time. I also started a podcast called “The Iraq Podcast”, which I continued for several years. I also wrote a book called “My Iraq War”, which is a collection of my writings from the blog and podcast.
It is correct that I was in Iraq in 2004, but I was not a civilian contractor; I was a regular Joe. I didn't write a blog about Iraq, nor did I start a podcast. Sitting here now, 20 years later, the generated content momentarily made me feel like a slacker for not actually doing any of that stuff. Never mind the fact that as an enlisted person, I would not have been allowed to write about the day-to-day happenings in Baghdad, or that I had limited downtime to take on a side hustle. The AI's version of my past is also a bit anachronistic. There were podcasts back in 2004, but they were nowhere near as popular as they are today; for reference, the Joe Rogan podcast didn't start until 2009.
Regardless of the period appropriateness of any specific detail, it made me wonder whether generative AI has a recency bias, and whether a bias toward any particular era is appropriate at all. I didn't come up with any answers to those questions. But the episode did bolster my belief that generative AI is getting very good at making people feel that it is sentient or that it somehow knows them. It simultaneously reinforced my understanding that these tools are very advanced language transformers, but nothing more, at least not yet.