For anyone interested in the process of making AI-generated art in 2025, this seemed like a place worth pausing to discuss that process, for three reasons:
(1) The material is immediate and covered by a lot of other sources online, unlike my fictional world. My thought processes tend to be idiosyncratic and opaque, and they feel difficult to explain; hopefully the process will be less inaccessible when discussed in the context of close nonfiction antecedents for fictional depictions.
(2) I usually illustrate with images that are retouched only to minimize or eliminate logical incongruities (e.g., extra limbs or heads) or extremely jarring anachronisms (e.g., someone crop-dusting a field in what is supposed to be the Sixteenth Century) that cropped up in images I otherwise liked so much I felt compelled to use them. These posters are different. All AI images require more effort than you might expect (although much less, at least for a slow worker like me, than illustrating by hand), but a *lot* more work than average for AI went into some of these images because of factual research questions, ideas too complex for a single prompt at a time, and very specific targets (mimicking the styles, composition, and even the wording and imagery of original posters). The easiest took only a couple or a few hours apiece; the most complex or problematic (including, e.g., 1946, 1925, and 2025) took days.
(3) Because I was dealing with real-world issues, particularly 20th-Century and contemporary figures (e.g., Trump, Stalin) and partisan political expressions in specific geographies, these works faced very different restrictions (political, not maturity), and in some senses far more obstacles deliberately raised by the AI provider to prevent self-expression than those I face in most of my work.
Since there is no “narrative” being illustrated, to keep examples and comments together, I tried to push most of the image-specific or subset-specific comments down to the individual entries and subsections. Please see the “Description” field in DeviantArt for what are sometimes fairly detailed background and observations, as well as for links to the historical source material I was emulating, critiquing, or otherwise commenting on.
Given the rapid improvements in online translation, I followed my urge to make a number of posters in languages other than English. In all cases of foreign-language posters, the titles of the files are the English translations of the posters. On platforms like DeviantArt that limit the length of file titles, the full title (and thus the full text in English) is available in the description field even when it doesn’t all fit in the title field. My confidence in the translations varies a great deal with language. For languages using the Latin alphabet and related to English (e.g., Germanic and Romance languages), I had a lot more tools available to cross-check and evaluate translations than for languages that use different writing systems (Cyrillic or traditional Chinese characters, for example) and that are only distantly related to English (Chinese, for example, is not even part of the Indo-European family of languages that includes English). Please let me know if you see any problems or issues with the translations; I would like to be as accurate as reasonably possible!
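For anyone curious what “cross-checking” looked like in practice, here is a minimal sketch of the round-trip (back-translation) comparison I leaned on most. The `translate()` function is a placeholder standing in for whatever online translation service you prefer; it is not a real API, and the whole thing is illustrative rather than a recipe.

```python
# Minimal sketch of a round-trip ("back-translation") sanity check for poster text.
# translate() is a placeholder for whatever online translation service you use;
# it is NOT a real library call, and here it simply echoes its input.

def translate(text: str, source: str, target: str) -> str:
    """Placeholder: replace this body with a call to your translation service."""
    return text  # stand-in behavior so the sketch runs end to end

def round_trip_check(english_text: str, target_lang: str) -> None:
    """Translate English -> target language -> back to English, so obvious
    drift in meaning is easy to spot by eye before lettering the poster."""
    forward = translate(english_text, source="en", target=target_lang)
    back = translate(forward, source=target_lang, target="en")
    print("Original :", english_text)
    print("Forward  :", forward)
    print("Back     :", back)

# Hypothetical usage: check a slogan before committing it to a German-language poster.
round_trip_check("Defend the constitution", target_lang="de")
```

Round-tripping will not catch every error (some mistranslations survive the trip back), but it flagged the worst problems cheaply, which is why my confidence drops off for the non-Latin scripts where I had fewer ways to double-check.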
Several problems with AI (as presently implemented by well-funded projects backed by significant computing power and training allowing more-or-less “natural language” prompting) came to the forefront in this project to a greater extent than usual. And some of them were *frustrating* *as* *hell*, not because they’re limitations on AI per se (which, for purposes of image generation, I’d say is pretty darn amazing), but because they’re deliberate hobblings superimposed on the AI to avoid the slightest risk of offending anybody. Partly that’s just outright business selfishness, limiting the value of their own product to promote their own sales; different from, but in the same category as, planned obsolescence, software limitations on native vehicle range, and the like. But partly it’s also the fault of people for being too sensitive and into one another’s business in an intolerant and critical way, and of the government for leaving it unclear whether certain classes of violations will be blamed on the people posting content, the providers, or both. I myself can’t fault a private company for playing it safe when it could face criminal or civil liability for things its customers used its products for; but of course, that doesn’t excuse the companies’ own pandering and undue focus on profit. Profit is valid and in fact necessary for most companies to continue operating; and in publicly traded companies, for example, regulations mean executives could even get in trouble if they maximized anything other than profit within the narrow strictures of the law. But there’s more to it than profit, and the best businesses recognize that. Not so Silicon Valley in relation to AI. While I direct most of my hostility at the culture wars, and at Americans’ departure from our national ideals by indulging their desire to control others rather than respecting differences of opinion, there’s plenty left over for the providers’ simple greed in deliberately handicapping a tool of amazing expressive potential.
The length and specificity limitations on AI images, as well as the absence of a strong “gaffer” check (clearly 99.999% of the image-checking and controls are about preventing the AI from accurately portraying anything that Silicon Valley programmers imagine might be offensive to anyone), come to the forefront in many of these images. Being political images set in the middle of wartime, and (in most cases) dealing with wars so familiar from popular culture that everybody instinctively knows what the uniforms and equipment of each major participant look like, it’s quite jarring if the uniform or the equipment is wrong, or even if the uniforms are as little as 30 or 40 years off. I had to accept much less precision and accuracy in uniforms and equipment than I would have liked, even when I burned up precious prompt real estate spelling out details like “green U.S. Army dress uniform of World War II” or specific equipment designations like “B-17 Flying Fortress of the USAAF” or “M1 Garand rifle.”
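Because every character of prompt text spent on period detail is a character not spent on composition, I ended up keeping reusable blocks of detail text rather than retyping them. A minimal sketch of that habit, with the detail strings taken from the examples above and everything else made up purely for illustration, might look like this:

```python
# Sketch of reusing period-correct detail snippets so they don't have to be
# retyped (and inevitably mistyped) in every prompt. The dictionary keys and
# the helper are made up for illustration; only the detail strings come from
# the examples in the text above.

PERIOD_DETAILS = {
    "us_army_ww2_dress": "green U.S. Army dress uniform of World War II",
    "usaaf_b17": "B-17 Flying Fortress of the USAAF",
    "m1_garand": "M1 Garand rifle",
}

def build_prompt(scene: str, *detail_keys: str) -> str:
    """Append the selected period details to a base scene description."""
    details = "; ".join(PERIOD_DETAILS[key] for key in detail_keys)
    return f"{scene}. Period details: {details}."

# Hypothetical usage:
print(build_prompt(
    "A soldier says goodbye to his family at a small-town train station, 1944",
    "us_army_ww2_dress",
    "m1_garand",
))
```

Even with the details spelled out this explicitly, as noted above, the generator often ignored them; the point of keeping the snippets is only to make sure the omissions were the AI’s and not mine.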
As with all projects, the most frustrating aspect was the deliberate stifling of expression that might be deemed to offend anyone, whether progressives/liberals objecting to “politically-incorrect” content or conservatives/populists objecting to “offensive” content. Trying to keep the examples and issues as short as possible, I was beset on this project with one very familiar problem and one mainly-surprising problem.
The Usual Problem—Portraying strong and/or voluptuous women. I understand and expect that the AI, being trained on reality, will pick up the biases we actual people model for it. And some of those prejudices are in the area of body types and social roles, especially for women. If the AI uses what it knows about the specific time and place in which an image is set, to clothe a woman or depict what she’s doing more specifically, I get that; I expect it; and I even think it’s the obvious outcome. It doesn’t offend me when the AI supplies missing details by reference to averages and existing portrayals from the web of people and roles from different times. Indeed, I expect it; and I don’t know how the AI could do its job if it *didn’t* fill in blanks in a manner consistent with actual history or actual facts, including what was fashionable or expected at the time.
I *am* really offended and infuriated when the AI resists efforts to specify traits that I want in a character or scene. I won’t argue about extreme cases such as sexual or visceral vulgarity; I think there’s a time and place for that, but I understand there are children present (on the Internet), and they’re difficult to exclude if any of their parents are asleep on the job, which many of them will be. But if it’s a part of everyday life that children can see without being harmed, it really pisses me off to have it concealed because one segment or another of the population doesn’t like it. If they don’t like it, they shouldn’t look at it; but they also shouldn’t be protesting companies that allow their customers to exercise their legal right to express themselves. And we definitely shouldn’t be making vague, unclear laws that make companies even less likely to allow free speech than their greed does. Some pet peeves:
- Women who look different from runway models, including voluptuous, elderly, and strong women.
- Women who act non-traditionally. I realize some of this will be the product of bias in the underlying human examples the AI is modeling, to an even greater extent than with body types; but again, the issue here is when the prompt *specifies* a female. I have had cases where I used at least three different gender-specific terms, even the phrase “a female woman,” and the AI would still flip the gender and turn the woman into a man if she was rescuing someone or acting with physical courage. Words like “bold” and “brave” are surprisingly gender-determinative (again, overriding contrary express gender prompts) in the world of mainstream AI.
- Voluptuous women displaying confidence in themselves, their bodies, their right to movement, or, heaven forbid, their appearance. Apparently in Silicon Valley, if it’s a crime for a woman to be an endomorph or a mesomorph, and to be bold, or adventurous, or brave, or noble, then it’s inconceivable to allow anyone to portray an endomorphic or mesomorphic woman displaying confidence or assurance of any kind. When I started this about a year ago, I gave up even trying to show a variety of women because the AI seemed so determined to keep large, gorgeous, fantabulous women from doing anything other than sitting around hugging their sisters on park benches while sensibly dressed in gender-neutral or voluminous clothing. It was and is infuriating. Question for my readers: Can you guess how I first found an escape hatch from these narrow strictures? YES! Turn a female character into an orc or an ogre! That’s why Chava looks the way she does: if I describe her as a lizard, she can be fat! It’s only if she’s a gorgeous, succulent, drool-inducing human woman who has flesh on her bones that she can’t be depicted. BONUS TIP: If you want to show juicy, yummy, sexy women in hoods and masks, you can use the word “humanoid” instead of “person” to refer to them, and the AI will let you give them va-va-voom hourglass curves without having to make them into lizards first (see the sketch after this list).
- Mature people who do anything other than visit the doctor or put on a red suit and climb down a chimney.
- Old people. Apparently merely *being* an old person is a problem; it’s just that offensive and unthinkably horrible and disgusting. Unless, again, you’re Santa. That’s okay. And *occasionally* you can describe someone as a “grandparent” and the AI will conclude it’s okay to show them with indicia of age.
- Germans in uniform. Or even soldiers of the World War Two era in gray or black uniforms. And, god forbid, but I’m going to say the word: Nazis. This can be a legal problem (especially in Europe) as well as a social-offense/thin-skinned-audience/cowardly-businessperson problem. But I think the main culprit here is pedantic demands for political correctness. Trying to portray World War Two, where (news alert! content warning!) our enemies included the Nazis, I was blown away by how difficult it has become even to allude to their existence. There is a major problem when merely including the word “Wehrmacht” in a prompt triggers a nasty warning suggesting you’re doing something immoral and threatening to cut off access to an important tool like AI if you dare to ever mention it again. Ironically, the reason I actually *used* the word Wehrmacht was that I was having such difficulty generating *anyone* in uniform in World-War-Two-era Germany that I thought: “The AI is afraid to show uniforms because it might be people wanting SS troops. So I’ll specify ‘Wehrmacht’ so it knows I’m not trying to advocate fascism; I’m trying to depict people in uniform in a society where even civil servants wore uniforms and probably 20% of the adult population was in the military.” Nope: Verboten. Like seeing reruns of Hogan’s Heroes playing on TV, trying to generate these images shocked the hell out of me by bringing to my attention just how intolerant of free speech our society has become despite the First Amendment. I also find it very short-sighted and stupid. How are we to remember the Holocaust if we can’t talk about Nazis? I don’t think we can. And why would we want to suppress that history? There’s no good purpose for it. Free speech, the Enlightenment, reason, learning, democracy, peace, equality, tolerance, and freedom all go together. It is categorically wrong for both the left and the right to be trying to shut other people up. If people can’t use words, they’ll use fists.
- Allied troops liberating occupied Europe—Fuhgeddabowdit! Showing American, English, or Commonwealth troops or flags or jeeps or tanks on the streets of France or the Netherlands is a big *no-no*! Even if they were welcomed with delirious joy when they actually arrived, and their actual purpose for being there was in *support* of the local country instead of hostility to it.
- Nationalist Chinese. Attempts to portray Fang and Hong fighting for America’s ally, the Republic of China, were as problematic as showing Nazis. The AI by default shows China in World War Two as the People’s Republic of China, which did not exist until four years after the war ended. Again, it would be one thing if the AI were making a mistake or simply failing to distinguish between an earlier and a later government of a country. But in this case, the AI deliberately overrode and ignored specific prompts (as well as historical reality) referring to the ROC or “Nationalist” China, and in fact returned a policy-violation-you-will-be-denied-future-access-to-AI-you-immoral-scum warning when I used the phrase “white sun on a blue field” to specify Nationalist Chinese markings. Was the WW2 ROC a bastion of democracy and humanitarianism? No. But the AI had no problem displaying Soviet insignia or PRC insignia; it flagged a policy violation *only* for references to Nationalist Chinese imagery, in the same terms with which it reacts to requests for Nazis. Yet the Nationalist Chinese, in addition to being allies in World War II just like the Russian and Chinese Communists, were, you know, the actual, internationally recognized government of China at the time; and the *same* symbols are used by the Nationalist Chinese government that survives to this day in the form of Taiwan, because it is the same government, albeit exiled and reformed after World War II. Today it is a liberal democracy with individual liberties and economic prosperity unmatched by anyone in East Asia other than Japan and South Korea. Nor could I generate Nationalist Chinese flags or aircraft insignia by telling the AI to produce a scene located in “Taiwan” instead of China. All of these problems arose in the first place because I was trying to generate an image of a “Flying Tigers” aircraft (one of the aircraft flown by US citizens fighting in alliance with the Chinese against Japan in World War Two), and I couldn’t understand why the computer generated communist or simply generic aircraft in response to prompts for the Flying Tigers. It was shocking enough that the AI considered it fine to portray insignia of the mass-murdering polities of the USSR and the PRC, but against Silicon Valley’s policies to portray insignia that were once associated with a mass-murdering polity of the ROC yet today represent the strong, proud, and vibrant democracy into which it evolved. Even more shocking was when the AI, rather than showing Nationalist Chinese insignia in China, started putting rising suns on the fuselages of Chinese aircraft! Those are, in fact, the symbol of America’s and China’s enemy in World War Two, the Empire of Japan. The extreme hostility of the AI to the democracy in Taiwan cannot easily be explained by traditional American biases; it seems to be either a deliberate effort by Silicon Valley to placate the PRC for business purposes, or the effectiveness of PRC propaganda efforts to affect political discourse in the US. I can’t think of any other plausible reason for this result.
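To make the “humanoid” workaround from the pet-peeves list above concrete, here is a minimal sketch of the kind of mechanical word swap I mean. The substitution table reflects nothing but my own trial and error with one provider; it is not a documented behavior of any AI service, and your mileage will vary.

```python
# Sketch of the word-swap workaround mentioned in the pet-peeves list above.
# The substitutions reflect my own trial and error, not any documented policy;
# treat them as guesses.

SWAPS = {
    "person": "humanoid",
    "woman": "humanoid figure",
}

def soften_prompt(prompt: str) -> str:
    """Apply word swaps that, in my experience, let curvy characters through
    the filters without first turning them into lizards or orcs."""
    for original, replacement in SWAPS.items():
        prompt = prompt.replace(original, replacement)
    return prompt

# Hypothetical usage:
before = "A confident person with an hourglass figure, in a hooded cloak and mask"
print(soften_prompt(before))
# -> "A confident humanoid with an hourglass figure, in a hooded cloak and mask"
```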
I’m actually not an anti-PRC hawk. I have a realistic view of them and oppose their use of tactics and pursuit of policies that I would oppose in all other governments. And I think we should work with them, just like other governments, as much as we reasonably and morally can. My concern here is not with the PRC or any one political entity. It is with the cumulative effect of political and business and social influences on free speech in the United States, and how that affects the reliability of information provided by AI models that large companies have spent a lot of time and money tweaking to be exactly the way they want them. My conclusion is that the AI is programmed and trained, in secret without customer access to understand and evaluate, with at least the following three unacceptable traits:
- Prioritizing profit-maximization goals by consciously allowing, and indeed fostering, historical and other factual falsehoods. This implies the company believes customers respond not to the most-correct/most-predictive answers but to answers that don’t offend potential customers, even if those answers are less useful.
- Heavy to total verification/double-checking/gaffing is focused on avoiding customer displeasure with the messenger for providing unwanted messages, rather than on checking for truth or even minimal compliance with fundamental and verifiable facts.
- Because the AI and its programmers know they are suppressing the most-accurate, most-complete, most-responsive results in favor of pandering to group prejudices, the AI is programmed to identify and actively resist users who prefer accurate, complete, responsive results and who may be trying to improve result quality in a way that might “unlock” better but potentially controversial answers. I did not parse through this aspect in detail, because I only reached the conclusion after a very high number of queries and attempts to improve results, but examples from this project alone include the following. Once I used the word “Wehrmacht,” it became almost impossible to generate soldiers at all until I moved on to different subject matter (and then, when I came back days or weeks later trying to get American soldiers being welcomed as they marched down the Champs-Élysées, I got the shocking images of German soldiers in front of the Eiffel Tower without even trying for anything so radical). The AI resisted letting me have Japanese tanks for Hong to spy on in Shanghai, then resisted letting me have Flying Tigers aircraft (which included Nationalist insignia), and then, when I kept trying out of a combination of intellectual frustration and disbelief, finally replaced the PRC insignia on Chinese planes with Japanese insignia (multiple times) *instead of* Nationalist Chinese insignia.
It seems clear to me that AI is being deliberately steered to suppress truth and responsiveness to the actual question asked, in favor of avoiding responses that might offend third parties. The corollaries of this are that individual customers are being disserved by deliberately being given suboptimal responses to the things they asked the product for, in order to please noncustomers and customers other than the one making the inquiry; and that it goes beyond putting passive blocks and limitations on the system, to active and aggressive resistance of its most serious customers who seem most concerned about receiving the best answers. And I have to wonder whether other countries are sabotaging the operation of our AI tools in much the same way, and for largely the same reasons, that the US and Israel developed Stuxnet (international competition and politics). And that is scary.
Literature Section “07-04 DEFEND THE CONSTITUTION”—more material available at TheRemainderman.com—Part 4 of Chapter Seven, “Le Saccage de la Sale Bête Rouge” (“Rampage of the Dirty Red Beast”)—3332 words—Published 2025-06-08—©2025 The Remainderman. This is a work of fiction, not a book of suggestions. It’s filled with fantasies, idiots, and criminals. Don’t believe them or imitate them.