TITAA #35: Witch Elms and Barrows
Bella in the Witch Elm - Custom Image Models - 3D & VR Gen - Story Gen

“Who put Bella down the Wych Elm - Hagley Wood?” A famous unsolved murder mystery memorialized by graffiti in England; I ran across it twice this month. The first instance of this graffiti was seen on the wall in Birmingham Fruit Market, in chalk, on March 30, 1944. Then it morphed into “Who put Luebella in the Wych Elm” on March 31 (reddit source). In 1999, a version appeared on the obelisk on Wychbury Hill as seen above and has remained ever since, even after restoration of the obelisk. The graffiti, by unknown writers, evidently refers to the body of a murdered woman found in an elm in Hagley Wood in April 1943.
Robert Hart, of Wollescote, Stourbridge, told the Coroner and jury how at midday on Sunday, 18 April, he and three other lads went birdsnesting in the wood. He left the others and went to the stump of the old elm. Looking in, he saw a skull. He called his friends, and one of them raked the skull out of the tree with a stick, and put it back again. (source)
“There was a small patch of rotting flesh on the forehead with lank hair attaching to it, and the two front teeth were crooked,” one of them said later (source). Indeed, there is an excellent picture of it here. The youngest confessed to his father, and an investigation was launched. Surprisingly ineffectual, if you ask me, since the identity of the woman and motive for killing her were never discovered, despite lots of evidence.
The skeletal remains were examined by Professor James Webster, a forensic scientist at the University of Birmingham, who concluded that the bones belonged to a woman between 35 and 40 years old, about five feet tall, who had given birth to one child and had irregular teeth. Because Webster found a piece of taffeta stuffed inside the woman’s mouth, he surmised that she had been murdered by asphyxiation at least 18 months earlier. He also believed that the body had been put into the elm shortly after death, because it would not have fit once rigor mortis had set in. (source)
Her hand was missing and was found buried nearby. One theory is that this was a ritual witchcraft murder, an execution for sins against a coven, since the wood was associated with witch activity. “There was an ancient tradition that the spirit of a dead witch could be imprisoned in the hollow of a tree and thus prevented from wreaking any more harm in the world,” says an investigator in 1968. Or perhaps “Bella’s” murder was related to creation of a magical “hand of glory,” maybe by local gypsies, who always get blamed.
Another wild theory is that she was killed by German spies, although again none of it was ever proven.
In 1953, a reader, referring to herself as Anna, wrote a letter … claiming that Bella was part of a WWII-era spy ring sent by the Germans to get intel on the area’s munitions factories. Anna wrote that this spy ring was made up of “a Dutchman, a foreign trapeze artist, and a British Army officer.” She said the British officer, who was a relative of hers, had been spying for the Germans, and that Bella was a Dutchwoman named Clarabella Dronkers who had known too much. Anna said the officer and his friend, a trapeze artist performing at the Birmingham Hippodrome, killed Bella and disposed of her body in the Hagley Woods. (source)
Happy Halloween! and may Bella, whoever she was, rest easy.
This is another long newsletter — and now I am launching my “become a paid subscriber to support this” option. For now there is no content difference, and only one newsletter a month. The creative AI, narrative, games and book recs follow below.
AI Art Tools (Mostly Text2Image)
A few business developments: There’s a marketplace for text2image prompts now, because of course there is! Shutterstock will be selling AI-generated images made with DALL-E 2.
Interestingly, the author of the best Photoshop plugin, Christian Cantrell, has joined Stability.ai as VP of Product (tweet). In other news, David Ha, the well-known-to-AI-folks @hardmaru, left Google to join Stability.ai as head of strategy.
In a good PR turn (after some drama this month), Emad Mostaque announced plans to set up an independent foundation with open source input on Stable Diffusion.
You can learn more about how Stable Diffusion works by watching Jeremy Howard’s latest course videos.
New Models and Performance
Runway released the v1.5 SD model, tuned for better in-painting and details. And the updated VAE decoders look like a great win for image quality (thread link from the StabilityAI account; people report they are a big improvement in use, especially for character images).
Midjourney model “V4” is in training, from scratch, not based on any other model (says David Holz). Paid users are being asked to rate images now for tuning it.
Performance gains continue… with HuggingFace accelerate, you can now offload to the CPU and get your memory usage down to around 1 GB of RAM. I don’t know how fast it will be in that situation, though; I myself love the 8-seconds-with-Flax-on-a-TPU scenario.
Custom Style-Tuned Models
This is how you differentiate yourself! Check this Arcane one from nitrosocke, this Evangelion one from aicrumb, and today this new Elden Ring one from nitrosocke which is amazing. Anzorq has kindly made a demo space for some of these tuned models on HF, here.
Simon Meng on Twitter trained a model on 10K satellite photos to merge slime molds and road networks.
Stable Diffusion Aesthetic Gradients is another way to personalize your style.
DreamBooth / Textual Inversion / Character Tuning
“DreamBooth” and “textual inversion” are the methods currently being used to create a re-usable “character” or style by tuning a model with example images. (See my previous newsletter.) There is already a site making your DB characters for you, for $3. (Thanks to the helpful tweets from @TomLikesRobots.) Everyone seems to like doing selfie characters and putting themselves in action movies!
I guess I wasn’t alone in struggling to get good results last month. Suraj Patil at HF wrote up a bunch of great stuff on what it takes to tune your DreamBooth character (or style) over here. I particularly liked the picture of the cat toy at the Eiffel Tower. This is the future I want for storytelling with these models!
Evidently there’s a faster and simpler way to train DreamBooth models from TheLastBen, code and colab available now.
Editing Image Content
Composable Diffusion demo for prompt editing. I can get very weird results from the latents by giving it incompatible directions, like negative “night” with positive “moonlight”. See example here.
Automatic1111 has code supporting text weights, similar to what Midjourney offers, described here. You can also use “negative” prompt weights in a separate input box in their main UI. The syntax has gone through a few evolutions, but the gist is that parentheses with a number, like (word:1.3), scale the attention given to those words.
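As a toy illustration of the weight idea, here is a tiny parser for the (text:number) form. This is my own sketch, not Automatic1111’s actual parsing code; the function name and the 1.0 default weight are assumptions:

```python
import re

def parse_weights(prompt):
    """Split a prompt into (text, weight) pairs.

    Toy illustration of the '(text:1.3)' attention syntax;
    plain text outside parentheses gets a default weight of 1.0.
    """
    pairs = []
    # Match either a '(text:weight)' group or a run of plain text.
    for match in re.finditer(r"\(([^:()]+):([\d.]+)\)|([^()]+)", prompt):
        weighted_text, weight, plain = match.groups()
        if plain is not None:
            text = plain.strip()
            if text:
                pairs.append((text, 1.0))
        else:
            pairs.append((weighted_text.strip(), float(weight)))
    return pairs

print(parse_weights("a witch (flying:1.4) over a (moonlit:0.8) barrow"))
# [('a witch', 1.0), ('flying', 1.4), ('over a', 1.0), ('moonlit', 0.8), ('barrow', 1.0)]
```

A real implementation also handles nesting and bare parentheses; this just shows the shape of the idea.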
Genetic Stable Diffusion, work in progress from Teytaud. You iterate on your output by selecting the images you like and then rev’ing. I’ve always liked genetic algorithms with human choice input.
Midjourney has also added “remix” prompt editing for image adjustments via prompt, a bit hidden behind some Discord UI steps. Not in the (volunteer-written) docs yet. You need to type “/settings” and then toggle the button for “remix”; then when you upscale, you’ll have a button to make variants of the prompt.
Incidentally, I’m not moderating at MJ anymore; I had some concerns about volunteer labor and incentives for tooling for that job, along with no free time.
New/Different UI’s for SD
More slick Web and CLI UI’s are showing up, and this is Invoke AI’s. “It runs on Windows, Mac and Linux machines, with GPU cards with as little as 4 GB of RAM. It provides both a polished Web interface, and an easy-to-use command-line interface.” Gaining traction now.
Also of interest, not for web UI but for us command-line nerds: I like the look of this Pythonic CLI from Bryce Drennan. It offers prompt-based editing and various other utilities (face enhancement, upscaling, prompt template expansion, tile generation…) too.
Victor Dibia’s ongoing Peacasso UI project looks amazing, as well.
Stable Diffusion “Multiplayer” demo on HuggingFace is looking quite psychedelic. (There is also another one here.)
🎥 Video, everyone wants to make video with AI! There are a few hacking efforts to improve the UI for doing videos (I think it’s hard in most colab demos), like an animation-focused UI project from Amotile, Stable Diffusion Studio.
I liked this video and explainer (IG) of how Turkalqahtani made architectural style changes to London’s Big Ben using DALL-E 2 and then animated the transitions in Adobe After Effects.
At the launch event for Stable Diffusion, Stability.AI’s video (at 23 minutes) showed evolution of their Dreamstudio web tool in the direction of video and audio generation. Meanwhile their partner in dev, RunwayML, is still solidly on the front lines of developing AI-supported video tools. 🤷♀️
VR / 3D / Games with AI Gen
PCGamer’s Katie Wickens has been on top of things with the relevant AI news, this time covering ScottieFoxTTV’s VR/AR efforts for generation with SD: “Stable Diffusion VR is a startling vision of the future of gaming.”
@ThoseSixFaces just released code for his DiffusionCraft AI, which makes renders from Minecraft buildings as image prompts. It’s similar to this Stable Diffusion in Blender idea from Ben Rugg, where Blender 3D shapes are used as image input.
Here’s a video of Stable Diffusion running live in VR with the Quest Pro and Gravity Sketch. But the Pro costs too much for me.
Stable Diffusion integration in Unreal — via Peter Baylies, it’s also on-going work.
🐈⬛ I tried this ashawkey implementation of Dreamfusion text-to-3D to make a black kitty and got a flooffy horror with too many faces (a known issue they call the “Janus problem”):
Midjourney has had tiling for a couple months now. Good tiles are needed for 3D textures. Still lacking an API, you have to do your tile creations by hand in Discord, like an animal. Here’s a cute tile of witches with brooms that I made (previewed via this excellent drag&drop tool).
NERFy Stuff (an Image to 3D format)
👉🏿 NerfStudio looks amazing. It will help you make 3D video spaces you can fly through using video or images from your phone, it seems. I’m super impressed by their docs and the general friendliness of it. There’s a colab too if you don’t have a GPU. Maybe we’re finally getting there with this tech?
@KarenXCheng is all over it of course (check her thread and video here) — and check this video using phone footage, not a drone! And this one from @smallfly of the Palais des Papes in Avignon (he says it’s made with pics)! Yesssss! I want to record all my visits to standing stones and barrows and creepy churches!
I guess this NeRF of Númenor by Wren is not via NerfStudio, but LumaLabs AI. Also wow. If we can use TV footage too….?
Other Tech Art and Tools
🚙 Slowroads.io, a generative driving game/experience in the browser using three.js and webgl, by @anslogen. I love this.
📕 The Electronic Literature Collection V4. Speaking of which, NaNoGenMo 2022 starts November 1. That’s “National Novel Generation Month.” You just have to write code to generate 50K words of text, and you’re good. I try to participate every year.
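The bar really is only quantity. A do-nothing entry takes a few lines of stdlib Python; here’s a hypothetical babbler with a made-up vocabulary, nothing to do with any real entry:

```python
import random

VOCAB = ("barrow witch elm moonlight skull coven taffeta obelisk "
         "graffiti hollow wood midnight stone raven fog lantern").split()

def generate_novel(word_count=50_000, seed=1943):
    """Return at least word_count words of random babble as one string."""
    rng = random.Random(seed)
    sentences = []
    total = 0
    while total < word_count:
        n = rng.randint(8, 14)  # sentences of 8 to 14 words
        words = [rng.choice(VOCAB) for _ in range(n)]
        sentences.append(words[0].capitalize() + " " + " ".join(words[1:]) + ".")
        total += n
    return " ".join(sentences)

novel = generate_novel()
print(len(novel.split()))  # at least 50,000 words
```

Most entries are more ambitious (Markov chains, grammars, language models), but the rules genuinely allow this.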
🎶 Mubert Text-to-Music. There’s also an image2music HF demo by fffiloni, based on this, that calls the code behind CLIP-Interrogator, an image-to-prompt tool by @pharmapsychotic that’s been updated since I linked the colab a couple of months ago. CLIP-Interrogator has been very popular with the selfie crowd this month. It can be quite rude, though, which is one reason I’m a bit worried about synthetic captions, as in LAION-Coco. Although it’s a clever augmentation idea.
🎨 A piece on women working in NFTs and Generative Art.
🖼 London Review of Books on poems about paintings.
NLP & Data Science
👉🏿⚡️ Light-the-torch, a python tool to help you install the right versions of torch and associates on your VM with a GPU. This freaking saved my bacon this month after a couple wasted days on very old VMs and very new ones. Thank you, Philip Meier.
Lots of coreference model action, suddenly! Explosion.ai/SpaCy published an epic blog post about their new experimental coreference model. This is a good overview of some of the computational problems (and research) on managing coreference resolution, along with their solution. A good read if (like me) you are very interested in the nitty gritty. I have not taken their model for a test drive yet.
Fast Coref, a coreference model/lib from Shon Otmazgin and colleagues, which works with SpaCy pipelines. I tried their HuggingFace demo on some hard Jane Austen, and it didn’t do too badly.
Pre-train, Prompt, and Predict by Liu et al., a good overview of prompting methods in NLP for few- or zero-shot language model usage, via Paige Bailey. It’s getting, uh, complicated.
WordStream Maker, for making topical word clouds over time!
Google Docs to markdown plugin, yay!
Simple-data-analysis.js by Nael Shiab. It has a chaining syntax and uses three.js for shader-based big data visualizations.
Also, image/text data if you want to play: Diffusion DB, 2 million prompt-image pairs released.
Narrative & Games Links
This month was AIIDE (Artificial Intelligence and Interactive Digital Entertainment), with excellent proceedings online here. In it I found a few immediately interesting gems, such as:
Step: A Highly Expressive Text Generation Language, by Ian Horswill. It’s implemented as an interpreter in C#, and it has various useful features that game and interactive-fiction/procgen text writers frequently want.
[randomized] Greet: [once] Dude. Greet: Hi. [2] Greet: Hello
In this block, we’re saying that the third option for Greet should be used twice as often as the others, and the first one only once. Inspired by some of the logic of Prolog, it’s non-deterministic and tests for matching solutions as it runs via backtracking, “undoing” variable bindings when it backtracks. You get variable substitution, context-free grammar-like rules plus parameters and unification, simple planning, and Expressionist-style tagging. (There is code here. I’m quite interested.)
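The option-selection part is easy to mimic with weighted sampling. Here’s a toy interpretation in Python of the [once] and [2] annotations; this is my reading of the semantics, not Horswill’s C# implementation:

```python
import random

def choose_option(options, rng, used):
    """Pick among rule options with Step-ish annotations.

    options: list of (text, weight) pairs, where weight None means
    '[once]': eligible only until it has been used. An integer weight
    like 2 means 'pick this twice as often'. Toy semantics only.
    """
    eligible = [(t, w) for t, w in options if not (w is None and t in used)]
    texts = [t for t, _ in eligible]
    weights = [1 if w is None else w for _, w in eligible]
    pick = rng.choices(texts, weights=weights, k=1)[0]
    used.add(pick)
    return pick

# Greet: [once] "Dude." | "Hi." | [2] "Hello."
greet = [("Dude.", None), ("Hi.", 1), ("Hello.", 2)]
rng, used = random.Random(0), set()
print([choose_option(greet, rng, used) for _ in range(5)])
```

Step itself does much more, of course: the backtracking and unification machinery have no analog in this sketch.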
A Hybrid Approach to Co-creative Story Authoring Using Grammars and Language Models, by Adam Riddle. (Code here.) This work uses a combo of Tracery and a tuned GPT-2-XL model. I did this in a much sillier, faster way in NaNoGenMo a few years ago.
Re3: Generating Longer Stories With Recursive Reprompting and Revision by Yang et al. Not AIIDE, but accepted to EMNLP. This is another of several recent works that use discourse-structure outlining plus refinement of the details to build a story.

There are a couple more discourse-planning (or at least, event-based) articles I found via my code that searches arxiv for text-gen papers of interest, like EtriCA: Event-Triggered Context-Aware Story Generation Augmented by Cross Attention by Tang et al, and the related NGEP: A Graph-based Event Planning Framework for Story Generation, by several of the same authors.
Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning, by Louis Castricato et al, using a preference model to improve story outputs (plus fine-tuning).
Story Sifting for Unlikely Events from Max Kreminski. Unlikely events are often more interesting. This would be true in data stories as well.
An article in NME interviewing Meghna Jayanth about narrative design and anti-colonialism.
'Sunless Skies': A Narrative Postmortem video of Chris Gardiner from Failbetter talking thoughtfully about managing a team of writers with a lot of lore behind them, keeping them on target, and the use of external consultants to broaden and add depth on topics like colonialism and representation. (Thanks to Emily Short for the rec.)
Avalon, a procgen 3D environment tool for reinforcement learning, open-sourced. Will produce Quest-compatible VR too.
A Pattern Language For Expressive Environments — game design concepts. “How can virtual spaces convey meaning and evoke emotional states? This is a set of building blocks for expressive level design.”
Bonus Tips on Mastodon, An Evolving Saga
A growing guide to Mastodon by joyeusnoelle. A way to find your twitter follows in the Fediverse for following them there by @luca@vis.social. A tip for setting up a multi-column, tweetdeck-like view from @andrew@fediscience.org.
I’m @arnicas@mstdn.social, I’m mirroring tweets currently. I also opened a cohost account.
Books
⭐ The Golden Enclaves, by Naomi Novik. (Fantasy) I loved this ending to the Scholomance series. So many locations! Readers of it and this newsletter will have recognized the Initiation Well from newsletter 29.
⭐ Tomorrow, and Tomorrow, and Tomorrow, by Gabrielle Zevin. (Lit fic) This is an emotional tearjerker about friends who start a game design studio. It’s been well-reviewed and rec’d by both gamers and non-gamers. If you are into game design and game history, and sometimes wonder about this simulation we live in, this is a very good read.
Station Eternity, by Mur Lafferty. (SF) Fun mystery in space. A woman who is constantly surrounded by dead bodies and mysteries to solve has finally moved to an alien space station to escape her fate as a murder-magnet. When a shuttle of humans comes to the station, she finds herself investigating another death. Entertaining aliens, lots of cozy mystery tropes mixed in with the SF adventure.
City of Pearl, by Karen Traviss. (SF) A cop for an environmental hazards unit is sent to a planet where a conservative Christian group has settled… because they’ve made contact with aliens. A welcome surprise in that the religious group is good and decent, and the scientists are annoying assholes who break all the rules. A very easy read, with some good alien eyes on what it means to be “people” and how awful many humans are. Post-capitalist and gene copyrighting.
The Scapegracers, by HA Clarke. (fantasy) My spooky read of the month, a YA queer, diverse, angry girls magic book, in which an outcast teen with nascent magic finds her own coven after doing scary magic at a party for the popular girls. A bit overwritten for me in places, but it was fun. The chapter titled “Who Put Bella in the Witch Elm” was a sign to me.
TV
⭐ Los Espookys (HBO, Spanish with subtitles). Highly recommended for wackiness. A bunch of friends start a consulting business to provide horror & fx related simulations for clients. So many hilarious weirdos in here, like the social influencer American ambassador who dreams of being stationed at the US embassy in Miami. And the parasitic underwater demon who haunts the blue-haired, adopted gay son of chocolate tycoons. I could go on.
The Handmaid’s Tale (Hulu), s1-3. I know, I know, what am I doing?! I started catching up on this as a kind of feminist trauma homework, given Roe.v.Wade and American Republican religious extremism, topped by the anger of Iran’s women. I mostly watch it with tears on my face (let’s normalize menopausal emotion). It’s a testament to the writing and acting that it makes me feel conflicted empathy for the collaborators and colluders, like the commander’s wife Serena and Aunt Lydia. It’s a very hard watch, full of rapes, executions, “gender traitors,” slavery, on-screen maiming and torture as penance, with profound religious hypocrisy to justify the repression and brutality. Of course there is a brothel, of course there is.
The Peripheral, Amazon Prime. I started it and like it. It sure does have the boys with guns thing going on, though.
Games
⭐ Immortality. I have thoughts. Hmm, I spent A.Lot.Of.Time on this one. I went back in after hitting the “credits” ending scene, because I wanted to know more. I spent almost an equivalent amount of time digging around for more material. Then I hit the online “walkthroughs” (hah) and videos, and spent many more hours even just watching walkthroughs. I don’t think you can really “shortcut” this one. The story remains cryptic, and it’s hard to get it all. But I remained fascinated by the three movies, by the acting, by the backstory/understory, even by the struggle to find it. Take heed of the content warnings, though: it’s a profane, gory, explicit, haunted scene going on here.
The Excavation of Hob’s Barrow. I wanted a ghostly, folklore-filled narrative adventure for October, and this one delivers. I am not a huge fan of pixel art, but the atmosphere in this very old-school point-and-click game is well done. After a frustrating day trudging around a dour village and moors where everyone lies constantly to our heroine, I thought: “What if I just hang in the pub drinking pints by the fire and see if people will eventually come in and tell me what’s going on in this town.” I realized maybe I need a vacation?
This is a pic of West Kennet Long Barrow, which our heroine compares to Hob’s Barrow:
VR
My only news here is that Townscaper for VR is out and it is fab. Available on the Quest 2.
Poem
i plant a tree but later i can't find it. massless light won't quite slant. rain later proves conceptually wrong. zombies devour worthless blobs of ink. stoic electrons ignore stage directions. lovely petals wear pernicious masks. so what if it doesn't work, so what? i welcome flocks of dreams through flimsy doors. fluted winds rifle moth wings. lightwaves perpetually beckon windowpanes. bruised suns twirl in shimmering futility. sprouting feathers, vagrant gods compete. spiders drifting homeward paint their webs. so what if it works, so what?
—Camille Martin, via @tomsnarky
So, that was another month and a half of a lot of news! As I said above, I am adding paid subscriptions for people who want to support this newsletter (I take time off to write it). You could still just buy me a coffee, too. I may be breaking it up into two, depending on how the subscriptions go.
Happy Halloween, Lynn / @arnicas