No Mo Wrimo
I mentioned in a post a few months back that I had decided to write a novel during National Novel Writing Month (hereinafter "NaNoWriMo") set on one of the worlds of the Wongery. At the time I wrote that, however, I had not yet set up a NaNoWriMo account (well, not as "Clay Salvage", anyway; I'm pretty sure I had a NaNoWriMo account years ago under another name, but I don't remember the login info and it's been long enough that I don't know whether that account survived the makeovers the NaNoWriMo site has had since then), and the NaNoWriMo website seemed to be in a state of transition, with its fora gone or at least not visible. Anyway, with November becoming imminent-ish, a few weeks ago I figured it was time to get things in order and, among other things, create an account on the NaNoWriMo site.
Now that I had an account, I could see the fora on the NaNoWriMo site. Strangely, there didn't seem to have been any new posts since last year (maybe I got lucky and happened to visit the site just as the fora were restored?). There were forum posts visible from last year, though, and I decided to look through those. And... hoo doggy; there was a lot of messed-up stuff that went on with NaNoWriMo last year. I'm not going to recount it all here, but perhaps the most concerning occurrence involved some apparently highly inappropriate interactions between one of the NaNoWriMo moderators and underage users, which the NaNoWriMo administrators refused to do anything about. Yikes. Under the circumstances, for a while, I was uncertain whether I should even keep my account there, or whether NaNoWriMo had become something it was better to stay away from. But I kept reading, and later posts seemed to indicate that the NaNoWriMo board of directors had been unaware of all that was going on, and once it was brought to their attention they dealt with it immediately and took measures to make sure there would be no such issues in the future. So, okay, there had been some big problems with NaNoWriMo last year, but that was in the past, and it looked like it had been addressed appropriately. I still had some misgivings, but I'd already created my account, and I figured there wasn't a compelling reason to delete it.
I have since deleted that account, and will not be creating another. Not because of those past events, but because of something NaNoWriMo is doing right now. I can and still do plan to write a novel set in a Wongery world, but I won't be doing it as part of NaNoWriMo. NaNoWriMo is dead to me.
A few days ago a statement appeared on the NaNoWriMo site addressing the use of generative "AI"[1]. This statement included the following paragraph:
We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.
This is... this is utterly reprehensible. I wouldn't have been so bothered if the NaNoWriMo staff had remained silent on the subject of generative "AI", or even if they'd said they didn't have a problem with generative "AI" and left it at that. But mendaciously trying to turn things around like this and victim-blamingly vilify gen "AI" critics as classist and ableist... that's just monstrously despicable.
The Tumblr thread where I learned about it speculated that the reason the NaNoWriMo site includes this supervillainesque defense of gen "AI" is that at least one of its sponsors this year is a company that sells generative "AI" services. That seems like a likely explanation, but ultimately why this garbage is on the NaNoWriMo site doesn't matter; regardless of their reasons for putting it up there, it's an inexcusable statement to make.
(To be fair, the page where this paragraph first appeared has since been edited and expanded to try to explain why they supposedly think condemnation of AI is classist and ableist, and to acknowledge that "AI is a large umbrella technology... which includes both non-generative and generative [']AI[']" and "certain situational abuses clearly conflict with our values". However, the expanded version isn't much better. Its explanations for why they say condemnation of AI is classist and ableist are nothing but a load of empty verbiage, it continues to completely ignore the real issues regarding generative "AI", and the bit bringing up different types of AI is highly disingenuous. Yes, of course there are other types of AI, but come on, there's nobody seriously saying that video game companies had better stop having their NPCs move in patterns or react to players' actions. It's generative "AI" that people (rightly) object to; that's clearly what the statement was intended to refer to, and bringing up other forms of AI is only a transparent attempt at obfuscation.)
(Okay, this post was originally meant to go up yesterday (the previous post explains why it didn't)—in fact, I'd originally hoped to have it up a few days earlier, but I had a busy week and it took a while to write the post so it wasn't until yesterday I finally had it finished—, and since then the NaNoWriMo staff have edited the page on AI again, finally removing the mentions of classism, ableism, and privilege, and posted a "Note to Our Community" that... doesn't so much apologize for the previous statement, as make poor excuses for it. Eh... it's something, I guess, but it's kind of too little too late, and it's still making some dodgy claims; I am not inclined to take them at their word that the original statement on AI was really primarily motivated by wanting to curtail vitriolic harassment of poor beleaguered AI proponents on the NaNoWriMo social media spaces, or that the casting of criticism of AI as classist and ableist—which they saw fit to expand into several paragraphs in the second version of the statement—was just an unintended matter of careless wording. (And the vague acknowledgment that "the ethical questions and risks posed by some aspects of this technology are real" rings rather hollow when, despite having been happy to go on at length trying to justify why they called criticism of "AI" classist and ableist, they can't bring themselves to get into any specifics at all about what aspects of "AI" they consider questionable.) I mean, okay, if that "note" and the current version of the "AI" statement had come out before I'd written (the rest of) this post, maybe there are a few things I would have written differently, but with the exception of this paragraph and the parenthetical bit at the end of the penultimate paragraph, this entire post was written and ready to go before that happened, and frankly I don't think it changes things enough to warrant my completely rewriting the post now.)
Trying to frame criticism of generative "AI" as "classist" is especially laughable, given that gen "AI" is something pushed by billionaires and ruthlessly exploitative of the working classes. The most frequent criticism brought up about gen "AI" is that it uses artists' creations without permission as training data, essentially stealing the artists' work. That's a very valid concern—despite some gen "AI" defenders' desperate attempts to wave it away by inanely asserting that training gen "AI" on artists' work is somehow exactly the same as artists taking inspiration from each other—but it's far from the only one.
Generative "AI" requires vast amounts of energy. The data centers that run it (along with other computing workloads) already accounted for roughly 2% of global electricity consumption in 2022, more than most individual countries use, and "AI" is a rapidly growing share of that. This not only makes energy more expensive in general, but significantly contributes to climate change. And of course if the use of generative "AI" keeps growing, this problem is only going to get worse, to the point that gen "AI" proponents are banking on finding new sources of energy that may never materialize.
Generative "AI" uses huge amounts of water to cool its hardware, too, polluting it to the extent that it's unusable for anything else and endangering the communities where its data centers are located. According to at least one estimate, each question you ask ChatGPT, currently the most widely used text-generating "AI" application, uses on average between 10 and 100 milliliters of water. Maybe that doesn't sound like much, but when you have hundreds of millions of people each asking hundreds or thousands of questions, it really adds up. Ask ChatGPT a few hundred questions, and you've just used as much water as it takes to flush a toilet, and to far less useful effect.
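(If you want to check that comparison, here's the back-of-the-envelope arithmetic as a quick Python sketch. Every number in it is a rough assumption: the 10 to 100 milliliters per query is the outside estimate cited above, the 6-liter flush is a nominal modern-toilet figure, and the user and query counts in the scale-up are round hypothetical numbers, not measurements.)

```python
# Back-of-the-envelope check of the water comparison above.
# All figures are rough assumptions, not measurements.

ML_PER_QUERY_LOW = 10      # low estimate of cooling water per ChatGPT query, in ml
ML_PER_QUERY_HIGH = 100    # high estimate, in ml
FLUSH_ML = 6_000           # one modern toilet flush, roughly 6 liters

# How many queries add up to one flush's worth of water?
print(FLUSH_ML / ML_PER_QUERY_HIGH, "to", FLUSH_ML / ML_PER_QUERY_LOW, "queries")
# -> 60.0 to 600.0 queries, i.e. "a few hundred questions"

# Scale up with hypothetical round numbers: 200 million users, 1,000 queries each.
users = 200_000_000
queries_per_user = 1_000
liters_low = users * queries_per_user * ML_PER_QUERY_LOW / 1_000
liters_high = users * queries_per_user * ML_PER_QUERY_HIGH / 1_000
print(f"roughly {liters_low:,.0f} to {liters_high:,.0f} liters per year")
# -> roughly 2,000,000,000 to 20,000,000,000 liters per year
```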
Generative "AI" is a nightmare for workers as well. It isn't just a matter of tossing data at a computer and letting it go at it. Humans have to sort and label that data, and those humans have been working in conditions that have been compared to modern-day slavery. Even within the U.S., where most of the generative "AI" companies are based, the way generative "AI" has been deployed has been driving down wages and worsening inequality.
Who's going to be the most affected by all this? The already disadvantaged, of course. It's not condemnation of generative "AI" that's classist and ableist; it's generative "AI" itself. Even setting aside the issues around intellectual property theft—and that's not something that should be lightly set aside—your ability to have your computer generate a funny picture of a cat playing a guitar or add sensory detail to your bland sentence is not worth all the damage that generative "AI" is doing to the environment, the economy, and the labor market. It is absolutely true that "questions around the use of [generative ']AI['] tie to questions around privilege", but in the exact opposite way from what the NaNoWriMo statement is scurrilously trying to imply.
(Some of these problems may, in fact, be solvable, or at least mitigatable, but the current AI companies show little interest in trying to solve or mitigate them.)
Now, if generative "AI" uses all that power and water and all that (albeit scandalously low-paid) labor, and it's being made available for people to use for free, how are OpenAI and the other generative "AI" companies turning a profit? Well, the answer is simple. They're not. Oh, they're making some money by offering paid plans and licensing out their generative "AI" applications to downstream software manufacturers who want to include it in their products, but they're not making nearly as much money as they're spending, and they're only solvent because they're buoyed by überwealthy investors and big tech companies who have been shoveling money at them. (And at the moment, the other companies using generative "AI" don't seem to be making any money from it either.)
So what's their business plan? They can't keep hemorrhaging money forever... or actually I guess they can, if they can keep finding enough gullible investors to fleece, but my guess is that they have a somewhat more concrete business plan than that, or at least that there's a somewhat more concrete business plan that they pretend to have when hustling said gullible investors. I don't have any special insight into the minds of the gen-"AI" CEOs, of course, and I don't know what their ultimate plan is, but if I had to guess, I'd say it's this: keep giving generative "AI" away for free until people become so accustomed to and reliant on it that they see it as a vital basic amenity, and then start charging up the wazoo for it. (Not even charging the end user directly, necessarily; they could raise the charges to software manufacturers who want to incorporate gen "AI" into their products—and then of course that charge would be passed on to consumers via an increase in the software price. Heck, in some cases that price hike has already happened...)
Could that work? I hope not, but I don't know. There is, unfortunately, precedent; that's not too different from how Amazon got its start, back when it focused on books, and from what Temu may be trying to do now—take advantage of an immense initial investment to sell products at a loss until competitors who can't match their prices are driven out of business, and then, once they've got an effective monopoly, raise the prices to whatever they want.
For me, though, at least, a software product's incorporating generative "AI" is not going to make me more likely to want to use that product. Just the opposite. I've kept using Adobe software despite its move to a subscription model and the increasingly bloated amount of disk space it takes up, maybe partly because it's the industry standard (even though I don't work in the industry it's the standard of), but I think mostly just out of inertia; I'd been using Adobe software for decades, I was used to it, and trying to find new software that could do all I wanted, and then learn to use it, seemed like a daunting prospect. But Adobe's increasing embrace of generative "AI" has finally given me the impetus I needed to wean myself away from it. I just downloaded and installed Krita and Scribus; I'd already installed Inkscape on my computer last November (for reasons I don't now fully recall) but never made a concerted effort to learn to use it, though I certainly will now. I haven't cancelled my Adobe subscription just yet, because I haven't learned these other programs yet and I still have a lot of files in Adobe formats that I'd need to convert... but I'm finally moving in that direction. I mentioned in the previous post that I have had five different (terrible) webcomics; three of the five were created in Adobe Illustrator (the other two were done in pencil—yes, pencil, not pen; I didn't even ink them; like I said, they were terrible). When and if I make another attempt to revive one of them (a dumb idea, because they were awful and don't deserve to be revived, but I've tried it before, and it's far from impossible that against my better judgment I'll do it again), I will be using Inkscape. (I guess at some point I really ought to make more of an effort to start using Linux instead of Windows[2], but eh, one step at a time.)
I realize that I'm probably not typical in this regard, though, and my feelings about the matter don't mean much; to a lot of people generative "AI" is an attractive feature. But attractive enough to make them collectively cough up the enormous amounts of money gen "AI" would require to become profitable? That remains to be seen. And it seems to me there are cracks beginning to show in the gen "AI" façade. Articles were asking a year ago whether the AI boom was already over—there's an old saying that if an article asks a yes-or-no question in its title, the answer is always no, but of course that's not literally always true (and at least one recent study seemed to show that the saying is wrong more often than not), and here the body of the article makes a good case for the answer being yes. Investors have been increasingly expressing doubts about the technology's potential. With so much "AI"-generated content online, gen "AI" has begun cannibalizing itself, training itself on "AI"-generated "data" and creating positive feedback loops that amplify biases and inaccuracies and make its output drift even farther from anything resembling reality. ChatGPT has shown no appreciable improvement in the last eighteen months. Multiple major financial news sites have been asking whether the AI bubble is bursting, and this is another of those times when the answer to the yes-or-no question in the headline doesn't necessarily seem to be "no".
So maybe the days of generative "AI" as we currently know it are numbered. After all, it wasn't that long ago that you could barely spend five minutes on the internet without tripping over some twit with a hideous monkey avatar rhapsodizing about how NFTs were the wave of the future, and now they're little more than a punchline. Facebook, Inc. spent billions of dollars advertising its incipient "Metaverse" as the next big thing everyone was going to want to be a part of, and was so confident in this becoming the cornerstone of its business that it even changed its company name to "Meta", but its much-touted "Metaverse" never attracted more than a relative handful of visitors, and today people rarely talk about it except as an example of poorly conceived corporate folly[3]. I don't think it's impossible that generative "AI" in its current form will soon go the same way... and the sooner it does the better.
But I don't, of course, know that that will happen. Maybe we're headed toward a dystopian future where a handful of huge gen-"AI" corporations have a total stranglehold on what used to be the creative industries, where the majority of the world's energy and water goes to feed ravenous data centers while the impoverished masses struggle to make do with what little is left over, where no company is willing to pay artists or writers or programmers because they can get generative "AI" to churn out content for them, and never mind that the twentieth-generation copies-of-copies that make up its output are borderline gibberish; the corporations don't have to give the algorithms benefits or worry about workers' rights, and the generations weaned on this pablum don't know to expect better. I certainly hope that's not the case—and really, I don't think it's at all likely that it is—, but I don't know for sure.
I do know I'm never touching NaNoWriMo again, though. At least, not unless it undergoes a complete change of management.
And I'm going to avoid all the companies that sponsored it this year, too. (That would be Dabble, Ellipsus, FreeWrite, Ninja Writers, ProWritingAid, Scrivener, and Storyist Software, for anyone else who wants to keep track.) If they're going to support an organization that's pushing such loathsome lies, I don't want anything to do with them.
...Whoops, wait, never mind: I just found out after writing that paragraph (but fortunately before posting) that in reaction to NaNoWriMo's statement on generative "AI" Ellipsus stepped down as a sponsor! Good for them! Oh, and FreeWrite isn't listed as a sponsor anymore either, although they don't seem to have made an official announcement about it. Great! And it looks like Ninja Writers is gone from the sponsor page too now. So okay, Ellipsus, FreeWrite, and Ninja Writers did the right thing; that just leaves Dabble, ProWritingAid, Scrivener, and Storyist Software as companies to be avoided. (Whoops; according to a New York Times article, the chief executive of the company that owns FreeWrite said that its removal from the NaNoWriMo sponsor page was "an error" but that FreeWrite was "reviewing its relationship" with NaNoWriMo and may "have to cut ties". Hm.)
As I said, I'll still be writing that novel in November. Maybe I'll participate in Novel Quest, which seems to be getting set up as a NaNoWriMo alternative, or maybe I'll just... write the novel without worrying about doing it as a part of any sort of community or larger program. But I won't be doing it as a part of NaNoWriMo. It's a pity, because I think when NaNoWriMo started out it was really doing some good in encouraging people to write. But I want nothing to do with what it's since become.
- ↑ Incidentally, while it may already be obvious, the reason that I (try to) always put quotation marks around the "AI" in "gen 'AI'" is that... it's not really artificial intelligence. At least, it's not what "artificial intelligence" has broadly been taken to mean in speculative fiction, which is... the ability of a computer to think and reason, what's now sometimes called "artificial general intelligence" (AGI). Generative "AI" isn't that. Despite gen "AI" industry leaders' occasional mealy-mouthed claims to the contrary, it's not even really trying to be that. It's programmatic pattern-matching; there's nothing resembling actual reasoning involved.
Some people believe that AGI is inherently impossible. I'm not one of those people; I think it's very likely that a sufficiently complex computer program could attain some form of actual cognition, and I don't find the arguments to the contrary persuasive. (Of course, I'm just some random idiot and you shouldn't listen to me.) I'm bullish on the eventual possibility of real artificial general intelligence... but generative "AI" isn't it. And not just in a quantitative sense of not being powerful enough yet, but in a qualitative sense of its doing something entirely different, something that has no relation to and no path toward any sort of actual intelligence and has just coöpted the name as a marketing gimmick.
I guess this kind of semantic hyperinflation is common with words describing then-futuristic technologies. After all, the word "robot" was first coined (in a play by Czech writer Karel Čapek) to refer to an artificial humanoid being capable of independent thought, and for some time thereafter that's what it meant... and now it's been demoted to a bisyllabic synonym for "automaton", used to describe pretty much any machine that operates without constant direct human intervention, whether or not there's anything resembling thinking involved. "Artificial intelligence" has followed a similar trajectory; "generative AI" has about as much to do with artificial general intelligence as the machines that assemble cars in factories according to programmed instructions have to do with the humanlike robots of Čapek's play.
Then again, I suppose this linguistic battle has already long been lost; for decades video game creators have been calling the algorithms driving enemy behavior "artificial intelligence", even though they clearly have nothing to do with artificial general intelligence either. For that matter, it could be argued that the people who coined the term "artificial intelligence" weren't really focused on artificial general intelligence, and maybe should have used a different term for what they were doing. So, whatever; "artificial intelligence" has become the smurf of tech terms: it can mean anything and nothing, and whatever battle there may have been to pin down its meaning was lost decades ago. I'm still going to keep using the quotation marks, though.
- ↑ Speaking of Microsoft, why in the world did they think it was a good idea to add spellcheck and autocorrect to Notepad!? I use (or used) Notepad to jot down quick notes precisely because it was so lightweight and required so little processing power that even when my computer was running slow and other programs started stalling, Notepad kept on working smoothly and never had a problem. That was Notepad's big, indeed only, selling point. But now, presumably thanks to the overhead of these wholly unnecessary new features, Notepad is constantly freezing up or bogging down and missing keystrokes. It's become completely useless.
Okay, it turns out those features can be disabled in settings. I'll see if turning them off returns Notepad to its former functionality.
Later edit: it did not! Notepad is just trash now and I guess I have no reason to ever use it again!
- ↑ Okay, I admit I still kind of want to think the failure of the Metaverse may have been at least as much about the implementation as it was about the core idea. I mean, I feel like it could be really fun to have a virtual environment people can hang out in, as long as that virtual environment is... you know, engaging and interactive and a generally pleasant and entertaining (virtual) place to (virtually) be in. But Meta evidently put zero effort into making their virtual environment at all interesting or inviting or æsthetically pleasing, or giving people anything to actually do there, so it's not surprising that no one was interested in hanging out in Horizon Worlds, for much the same reason that people don't generally enjoy congregating in vacant warehouses.