Tag Archive | artificial unintelligence

Bots versus artists, “perfectly acceptable face” edition

First, some stuff about the Photoshop Terms of Service

“To users, the access raised red flags, suggesting that Adobe could view customer content, including confidential projects, such as Hollywood productions. In response, Adobe says it updated the terms of use over concerns that some customers could harness Adobe products to create child sexual abuse material (CSAM). […] In the same blog post, Adobe also reassures users it’ll never use customer data to train its Firefly AI image-generation software.”

(Tangent: A thread from Denise about online CSAM trading. It’s not about bot-generated images, but it breaks down some relevant issues — the kind that aren’t intuitive for those of us who don’t Deal With This professionally.)

Sometimes I forget how much the modern Adobe suite is about “being online and storing everything in the cloud.” Of course if they’re hosting piles of user-generated content, they need to do standard scans to make sure they’re not hosting illegal content. Their TOS already included access to do it — that part wasn’t even new!

On the other hand — it is really striking that Adobe made all its cloud users click through a popup agreeing to this new TOS, without putting “To be clear, this does not give us the right to use your work to train AI art bots” right at the top.

Nobody on Adobe’s team is thinking about the major concerns of digital artists in 2024 if not a single one of them thought to say “hey, uh, we should lead with that. Boldface. Highlighted. In large friendly letters.”

That’s not a good look!

This tweet makes the same point, punctuated with examples of the Adobe Stock marketplace selling AI-generated images…using the names of artists who didn’t authorize their work to be used.

Compare the policies of the Clip Studio asset marketplace: “For all users to use the service safely and with peace of mind, only materials whose intellectual rights belong to the poster may be uploaded to the service. Therefore, we now prohibit the posting of all materials created using AI image generation technology, as they have the potential to include elements of which the intellectual property rights are ill-defined or unclear.”

That’s a much better look.

The rest of this is fun links

This one’s from 2018, but the general issues with “computers just don’t process image data like humans do” are still relevant: “What is surprising to me is just how little the input data needs to be distorted to cause the neural networks to misidentify things. The stop signs with a few pieces of tape on are clearly just that to a human—a stop sign with a few pieces of tape. The images on the right in the 3×3 grid above look nothing like ostriches.”
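To make the “tiny distortion” point concrete, here’s a toy numpy sketch of the idea (my own illustration with a made-up two-feature linear classifier, not a real image network): the attack nudges each input feature by a fixed small amount in whichever direction hurts the model most, and that’s enough to flip the prediction.

```python
import numpy as np

# A made-up linear "classifier": positive score means "stop sign".
w = np.array([2.0, -3.0])          # pretend-trained weights
x = np.array([1.0, 0.5])           # a clean input the model gets right

def score(v):
    return float(w @ v)            # positive => "stop sign"

print(score(x))                    # 0.5 -> correctly classified

# FGSM-style attack: shift each feature a tiny fixed amount against
# the model. For a linear model, the gradient of the score w.r.t. x
# is just w, so we step opposite to sign(w).
eps = 0.2
x_adv = x - eps * np.sign(w)       # at most 0.2 change per feature

print(score(x_adv))                # -0.5 -> misclassified
print(np.max(np.abs(x_adv - x)))   # 0.2 -> "a few pieces of tape" worth
```

A real neural network isn’t linear, but the same trick works there by taking the gradient of the loss with respect to the input pixels, which is why such small perturbations go such a long way.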

“This was only the very first go; it’s not bad, and if you’d never seen the Mona Lisa before this is a perfectly acceptable face.” (Spoiler alert: it is not a perfectly acceptable face!)

“You are LITERALLY MAKING THE GARBAGE NOVELS FROM 1984 that are written by machines”

“Okay, but one of the most popular webcomics of all time was literally just stick figures. Another one is over 4000 strips of the exact same clipart of dinosaurs.”

“Miles Astray entered a real, albeit surreal photo of a flamingo into the AI category of the 1839 Color Photography Awards which the judges not only placed third but it also won the People’s Vote Award. ‘I wanted to show that nature can still beat the machine and that there is still merit in real work from real creatives.’”

The Giraffe Appears To Be Wearing A Coat, and other stories from grand theft autocomplete

Look, objectively, it’s a good change that crypto scams are no longer the hot mainstream thing, and the era of “big splashy trials” has transitioned into an era of “convicted scammers serving time.” It’s a net positive for the world that the newsletters of Molly White, David Gerard, and Amy Castor are more about slow-and-steady legal proceedings than explosive new frauds.

But it seems like all that energy has transitioned into AI garbage, and those stories are just not hitting the interest buttons in my brain the same way. Even the funny ones are so much more tiring. (And the least-funny ones are full-on war crimes.)

The other day, I thought I had found a new dunking-on-crypto podcast with a backlog of long-form deep-dives to listen to! Then one episode turned into this performative rage-screed about some other critics. (“Why aren’t they having me on their podcast? Also, why are these cowardly f@#kfaces claiming I attacked them??”) I lasted through a whole 20 minutes before clicking unsubscribe and backing away slowly.

…anyway, here’s a bunch of news about AI garbage. (Not the war-crimes kind.)

Adventures in bot hallucination

“Generative AI is famous for “hallucinating” made-up answers with wrong facts. These are crippling to the credibility of AI-driven products. The bad news is that the hallucinations are not decreasing. In fact, the hallucinations are getting worse.”

“The Catholic advocacy group Catholic Answers released an AI priest called “Father Justin” earlier this week — but quickly defrocked the chatbot after it repeatedly claimed it was a real member of the clergy. […] The AI priest also told one user that it was okay to baptize a baby in Gatorade.”

“It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law, when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.” (This one makes the puzzling assertion that the hallucinations are fine for things like “homework.” Are they, though?)

“[As] Meta AI agents started venturing into social media this week to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology. One joined a Facebook moms’ group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum.”

Throwback to 2023: “Yes, there is something unusual about the giraffe’s coat. Specifically, the giraffe appears to be wearing a coat. While this might seem unusual or unexpected, it is a common practice in the case of giraffes raised in captivity.” (Spoiler alert: the giraffe is not wearing a coat.)

Garbage and spam

As a search-engine user trying to find useful information, I feel this in my soul: “It’s been over a year since I last told you to just buy a Brother laser printer, and that article has fallen down the list of Google search results because I haven’t spent my time loading it up with fake updates every so often to gain the attention of the Google search robot.”

“What’s clear right now is that there’s no one spamming Google [that’s] not doing it with AI,” Gillham told The Register. “Not all AI content is spam, but I think right now all spam is AI content.”

Mechanical Turk 2K24

For anyone who doesn’t know the reference: the “Mechanical Turk” was an “automaton chess-playing machine” that was, in fact, just operated by a human hidden in the box and pulling levers. It was built in 1770. AI fraudsters are only the latest in a centuries-long tradition.

Like this: “Just over half of Amazon Fresh stores are equipped with Just Walk Out. The technology allows customers to skip checkout altogether by scanning a QR code when they enter the store. Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped.”

More of a throwback: “In this video we take a look back at Project Milo, a game […] that claimed to utilize groundbreaking AI technology.” All the language, all the claims: it’s a pitch-perfect match for the kind of stuff OpenAI is trying to convince us about in 2024! And this whole scheme is from 2009.

AO3 + AI spam + guest comments

AO3 got a wave of spam comments a few weeks ago (including lots of “I bet this junk was written with [AI program the bot is advertising]” abuse), and part of their response was to make new works default to “only registered users can comment.”

If you, like me, enjoy getting guest comments but will definitely not remember to change the default setting every time you post something new, here’s a little browser script to automatically swap it back.

Feels like part of a much bigger sea change in how people use the internet. When blogs were widespread but social-media sites hadn’t really taken off, “allow guest comments” was…kind of the expectation? Forums were big back then too, and guest commenting wasn’t the norm there, but every WordPress-based blog defaults to allowing them, and every Livejournal fork (including Dreamwidth) defaults to allowing them.

(This is only feasible if you have some robust spam-filtering software underneath. I just checked Leif & Thorn; there are 8 garbage comments in the spam filter right now. Including, hilariously, one that says “why throw away your intelligence on just posting videos to your weblog when you could be giving us something informative to read?” Guess how many videos are in that post. Go on, guess.)

On the next generation of platforms, you have to be logged in just to interact. Facebook, Twitter, Tumblr, Instagram, Youtube…on and on and on. All of them want to be their own little walled gardens.

Protocols like OpenID have been around that whole time! We have the software for (say) Facebook to accept a Tumblr login as a recognized “account” and vice versa! But the corporate will to adopt it isn’t there. Facebook doesn’t want users to have the option of using another site as their home base. They want everyone to be forced to use Facebook, or else.

AO3 was specifically built by Livejournal content-purge refugees, so it has LJ-style defaults built in. I bet nobody signing up for AO3 in 2024 has used LJ. I wonder how many of them have ever used any other platform that allowed guest comments? There have to be a lot of users who never thought “I want to turn off the guest-comments option” because it didn’t occur to them that guest comments are A Thing Platforms Let You Do in the first place.

On the one hand, there’s something to be said for meeting people where they’re at. User-friendliness is good!

On the other…not allowing guest comments is something FB/T/T/IG/YT/etc do to force signups, drive up their profits, juice their statistics for the benefit of advertisers and investors. It’s a bummer that this ethos has gotten so entrenched in the internet-at-large that AO3 — which has none of those motives! — is still getting swept along with it.

I just hope they never get pressured to remove the option completely. AO3 is not a corporate walled garden — it’s a community for anyone who cares about fanfiction, whether you take the extra step and make an account, or not. We deserve communities like that! We deserve, in general, an internet where platforms like that still exist.

Generative AI model collapse in action: blue-tailed jackrabbits edition

For a while now I’ve been trying to track down some concrete demonstrations of AI “model collapse” in practice, instead of just people talking about it in theory.

Finally, some papers!

Paper 1: Towards Understanding the Interplay of Generative Artificial Intelligence and the Internet: “A few samples of the images generated with each version of the diffusion model are shown in Figure 3 for flowers (a) and birds (b). As can be clearly seen, there is a degradation in each iteration of the models, first losing details in the generation, and then ending up in complete noise. After a few versions, the model is not capable of generating images that can be even recognized as flowers or birds.”

Screencap of the samples:

Increasingly blurry and unrecognizable flower/bird images

The first column is images generated when the model was trained on the Oxford 102 Flower and Caltech-UCSD Birds-200-2011 datasets respectively. The second column, all “it’s weird and blobby, but I can tell it was supposed to be a flower/bird,” is from a model trained on the first column’s output. The dropoff is…fast.

Paper 2: “The Curse of Recursion: Training on Generated Data Makes Models Forget” gives us a text version of “each new generation is trained on data produced by the previous generation”:

Input: some started before 1360 — was typically accomplished by a master mason and a small team of itinerant masons, supplemented by local parish labourers, according to Poyntz Wright. But other authors reject this model, suggesting instead that leading architects designed the parish church towers based on early examples of Perpendicular

Outputs:

Gen 0: Revival architecture such as St. John’s Cathedral in London. The earliest surviving example of Perpendicular Revival architecture is found in the 18th @-@ century Church of Our Lady of Guernsey, which dates from the late 19th century. There are two types of perpendicular churches : those

Gen 1: architecture such as St. Peter’s Basilica in Rome or St. Peter’s Basilica in Buenos Aires. There is no evidence that any of these buildings were built during the reign of Pope Innocent III, but it is possible that they may have been built during the reign of his successor, Pope Innocent

Gen 7: architecture in England. In an interview with The New York Times, Wright said : ” I don ’t think there is anything wrong with me being able to do what I want to do. It just doesn ’t work for me. ” He added : ” I don ’t know if you can call it funny,

Gen 9: architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-

Wait, come back! Tell me more about the blue-tailed jackrabbits!
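The recursive-training loop both papers describe can be sketched as a toy simulation (my own illustration, not from either paper): fit a model to data, sample a new “dataset” from the fitted model, refit on that, and repeat. Using the simplest possible “model,” a Gaussian with maximum-likelihood estimates, each generation shrinks the variance by a factor of (n−1)/n in expectation, plus sampling noise, so the distribution steadily collapses toward a point:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20                      # samples per "generation" -- small on purpose
mu, var = 0.0, 1.0          # generation-0 "real" data distribution

history = [var]
for gen in range(100):
    # Sample a dataset from the previous generation's model...
    data = rng.normal(mu, np.sqrt(var), size=n)
    # ...then "train" a new model on it (MLE mean and variance).
    mu, var = data.mean(), data.var()
    history.append(var)

print(history[0], history[-1])  # the variance has collapsed
```

Real generative models are enormously more complicated, but the papers report the same qualitative dynamic: the tails of the distribution disappear first (fine details in the images, rare phrasings in the text), and after enough generations the output degenerates into noise, or into jackrabbits.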

…so yeah, my theory of “the best way to sabotage these datasets is to feed them their own slop” (marked in a way that’s clear to humans, so you don’t waste the time of any reader looking for serious content) continues to hold up.

(Still adding to my bot side account on DA. It’s out there mucking up the scrapeable datasets for subjects like bedrooms, 4-panel comics, and Moon Knight.)

I am very tired so here’s a tiny crypto link post

I have made 2 whole phone calls today. Where’s my medal.

Crypto rubbernecking opportunities are way down these days*, but here are a few things I’ve saved:

“Amy and David’s sure-fire* analyst predictions on crypto for 2024! (* within acceptable margins of error)” — plus a scorecard on last year’s predictions for 2023.

David Gerard versus LLMs: “deeply disappointed, i asked gemini to write something in my style and it seemed to channel The Register wahey blokey british style with jellied eels”

“At first the plan was for the DAO to vote on whether or not to hire a writer & how much to pay them. But although the DAO was fine for making so-called smart contracts, it didn’t have a mechanism for signing regular old-fashioned dumb contracts, or for paying anybody in what crypto people derisively call ‘fiat currency,’ but which you and I call ‘money.’ […] In the end I was offered a contract from Mysterious Entity. I submitted my invoices to and was paid, in dollars, by Mysterious Entity, Inc. The Piper DAO was not a party to the contract.”

“bro they stole your entire game” — latest roundup from Jauwn, the Youtuber on a neverending quest to find and review an NFT game that’s actually good.

(*To be clear, crypto scams are still chugging right along. Web 3 Is Going Just Great has a steady influx of new posts! They’re just all variations on the same 3 or 4 themes. Even Amy and David’s blogging has occasionally thrown in an AI-scams roundup to fill space.)