Did AI really defend the KKK at the end of my column? Let’s discuss
Journalism schools teach that writers should report the news, not be the news. But what happens when one of your articles goes viral — not for its content but rather for how an AI doohickey swallowed up what you wrote and upchucked a controversial summation?
Welcome to my week.
On Feb. 25, The Times published my columna about the 100th anniversary of when Anaheim voters kicked four Ku Klux Klan members off the City Council. Many readers seethed at my assertion that the lack of attention paid to the anniversary was unsurprising to me, since Anaheim is a place that loves to “celebrate the positive.” More than a few insisted that the KKK in 1920s Orange County wasn’t as bad as in the South, which was such an O.C. response that I didn’t give it a second thought.
No, the real fun started Monday, when The Times launched Insights. It’s an artificial-intelligence-generated tool that reviews an article and affixes a ranking of where the piece supposedly lands on the political spectrum. (My Klan piece, for instance? It’s apparently “Left,” which is as surprising a conclusion as the end of the original “Karate Kid.”)
This feature also offers a bullet-point summary, alternative viewpoints and relevant links from across the internet of other news articles, columns and reports.
Other recent columns of mine got “Center Left,” “Center” and even a “Center Right.” I’m still missing “Right” on my lotería card.
In a letter to readers introducing the feature, L.A. Times owner Dr. Patrick Soon-Shiong wrote that he believes “providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation.”
Well, it didn’t take long for one of Mr. Insights’, well, insights to make people see red.
Linking to articles critical of the KKK, it said: “Local historical accounts occasionally frame the 1920s Klan as a product of ‘white Protestant culture’ responding to societal changes rather than an explicitly hate-driven movement, minimizing its ideological threat.”
The italics are mine, so put a pin in that phrase, because it’s important.
Soon, the headlines started:
It only took a day for L.A. Times’ new AI tool to sympathize with the KKK.
L.A. Times pulls new AI tool off article after it defends the KKK.
The L.A. Times’ new AI tool sympathized with the KKK.
And on and on it went. Some of the writers of the articles either excised the phrase “minimizing its ideological threat” or seemed to pretend it didn’t exist. But that part of the sentence is crucial: It makes the point that too many people in Orange County have historically minimized the dangers of the KKK.
The AI tool may have been guilty of fuzzy and clumsy phrasing, but it did not defend or sympathize with the KKK.
Journalists like to complain that critics of their articles don’t read past the headline. Well, this was a case of journalists not reading past the first clause of a sentence.
In fact, as I pointed out on X, that citation was correct. I was actually shocked AI got such a crucial point right. I was also annoyed that the two other bullet points, including one that linked to a 2018 column of mine about the Klan in O.C., were wildly out of context; no one else seemed to care.
Either way, within hours of my column’s appearing online, friends began texting me stories from local and national outlets claiming that the AI tool used by The Times had outright endorsed the KKK. Some readers announced they were canceling their Times subscriptions, saying they didn’t want their money to support a publication that, somehow, gave a thumbs-up to the Klan.
Insights’ rambling, overly long deconstruction of my columna caused some people to conclude it was downplaying the KKK’s awfulness.
But to proclaim it literally endorsed the hate group?
Only one reporter reached out to me as the writer of the column that provoked AI Klan-gate. I would have gladly given my opinion, AI-free, to all comers.
As a journalist, I’d hope that my contemporaries who reported on the situation would have been a little more precise about describing the language they saw on the feature. The net effect was to make it seem like the AI tool had practically burned a cross to show its support for the KKK on a column that explicitly denounced the Invisible Empire.
They were more hung up on The Times’ AI tool than on the actual journalism that preceded it, which makes me think they didn’t even read my column. Thanks, pals!
As for the readers who said that canceling their Times subscriptions was a way to lodge their anger at The Times for using Insights, here’s the thing: you have to press a button to trigger it. Like the comments section, you can engage with it or not. You can choose just to read what the humans have to say, and criticize or laud them. Why, if you ignore the AI pendejada enough, it could very well pick up its digital football and go home.
If there’s a silver lining to any of this, it’s that I may be a prophet. In December, I predicted that whatever AI program the Los Angeles Times would end up using on its opinion pieces, it would self-immolate the moment it encountered one of mine.
That should count as a lotería square, right?