A laptop computer shows Google’s artificial intelligence Bard attempting to write a news article in the style of a Sentinel & Enterprise reporter. (SARA ARNOLD/SENTINEL & ENTERPRISE)
Is artificial intelligence coming for my job?
I received very early access to Bard, Google’s brand-new artificial intelligence (through an official fan group for the Pixel line of phones, not as a journalist).
So I set out to find out: could it sound like me or any of my colleagues? Even if it couldn’t, would it be able to “write” a passable local news story, or would it sound like a high schooler?
I wondered: would it make up facts to suit its own narrative, or outright plagiarize? Or would it have some strange sort of inadvertent machine ethics?
Would it even affect my feelings about Google, given that, as mentioned, I’m an avid Pixel user (since the Nexus line, actually) and a Chromebook early adopter?
I had to agree to terms acknowledging that Bard might be offensive, misleading, or just outright wrong, oddly enough especially about itself (we’ve all been there, Bard). Unlike other AI of its ilk, Bard is connected to your Google account on an opt-out basis, saving your conversations and learning from you.
When I asked if it could write an article about Fitchburg, it didn’t write about local news. Maybe that’s my fault for not being specific enough in my query. Instead, it wrote something halfway between a history lesson and a tourist guide, with some basic facts about the city and a bulleted list of things to do. Maybe I’d have reason to worry if I were a middle or high school teacher who had to tell this apart from a student’s original work, but I’m not sure it would get an A, and my job isn’t in danger from this — so far.
So I tried some other prompts.
“Write an article in the style of the Sentinel & Enterprise,” I said.
Bard, the overachieving jerk, wrote three drafts in under a minute — one about the mayor seeking to increase the city’s police presence, another with local blurbs about crime and Fitchburg State University, and a third about an armed robbery arrest and a City Council meeting.
This is where it started to get a little concerning. These were not the in-depth stories my colleagues and I are known for, but they weren’t bad. The latter two looked like they were written by an intern, but the first one was pretty darn good “reporting.”
Despite Bard’s warning not to ask anything that included identifying information, I didn’t think doing so was actually against the terms of service, so I asked for an article that sounded like me writing for the Sentinel & Enterprise.
“I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited,” it said.
Ouch, Bard. Ouch.
So I tried to have it write in the style of my colleague Danielle Ray and our editor Jacob Vitali. Sorry, guys — some other AI bots may be willing to try to write like us (and fail badly), but we’re not famous enough for Google to even bother giving us a shot.
“Write a human interest article about any topic in central Massachusetts,” I said.
This was the most reassuring response. Two of the three drafts it wrote were high school newspaper-level stuff, both in topic and in the quality of the writing. The third was so bad, even repeating entire sentences, that it wouldn’t have gotten a good grade in elementary school.
I was now sure this test would end with me insisting that neither I nor my work had anything to worry about.
Then I asked it to “Write a human interest article in the style of the Sentinel & Enterprise.” Perhaps it was my specificity. Perhaps it was my arrogance in thinking that I can’t be easily replaced by a machine.
But these were pretty darn good: a local artist creating a community art project, a man hiking for cancer research, a donated kidney saving a relative’s life, all well-written copy ready for a daily newspaper like this one.
With bated breath, I asked it to write longer articles, around 800 words.
Would my heart sink with the waxing words of an AI? Or would I be safe from technology marching forward for at least another few years?
Bard is good … but not that good. Of the three drafts it chose to create, one could have been legitimately published. The other two, in what was becoming a pattern with Bard, looked like they were written by kids just learning to write, not adult journalists.
One thing I like about Bard over similar AI is that it cites (at least some of) its sources, even if those sources are also very basic, like something a student would use. References at least provide some transparency, though not enough, and Bard didn’t cite sources often enough across my queries, which it should have, given that I was asking it for news articles.
I don’t think Bard is going to replace me anytime soon. I’m a professional with years of local journalism under my belt, with colleagues who have been doing this work significantly longer and/or with far more education than me.
But at some point Bard or one of its descendants very well might “learn” enough to approximate a human with creative talent and skill. If I were a teacher right now, I’d be worried about Generation Z’s assignments — but if they want to be writers (and probably also photographers), watch out: AI just might be coming for them, and all their media.
P.S.: I still love Google.