fandomtrees and Reading Roundup

Nov 22, 2020 17:54

First off, fandomtrees -- this seems to be the fandom_stocking-like comm for this year, and I hope it takes off! I sat out the F!S-like thing in 2019, and found that even with Yuletide, I missed it, so going for it this year: My tree is here. Sign-ups close Dec 4. The more the merrier with these things, obviously!

*

27. Susanna Clarke, Piranesi -- I remembered to put myself on the very long (but fast-moving) library hold list for this book after qwentoozla read it and liked it, and went in knowing only two things: that it wasn't like Jonathan Strange and Mr Norrell and that it was a good idea to read it knowing as little as possible about it. Both of those things are true, IMO! I don't even know that I would say I liked it as a book, but I absolutely enjoyed the reading experience, which is kind of an odd place to be, but it's an odd book. Spoiler-cutting everything from here in the spirit of "read it with as little up front knowledge as possible".

SPOILERS right from the start!

So usually I'm a character-driven reader, and this is a book with 4 whole characters in it, one of whom appears on page for a single scene, one of whom only shows up towards the end, and even the most prominent secondary character is only present intermittently. OK, I suppose one should say there are at least 5 characters, because Matthew, but we don't get much of him either. I did like Piranesi, the first person narrator; his earnestness about being a Scientist and his kindness and his sense of himself as part of the House. The randomly capitalized Nouns give his narration a neatly distinctive feel; they made me think half of a child learning to write and half of a Victorian intellectual, and it's an adorable combination. The Prophet shows up quite briefly in person, but felt thoroughly unpleasant, of course, both from articles and in the one scene. Matthew does sound kind of annoying, too; I had to agree with the Prophet there. 16 didn't come across to me as a character at all -- she is Piranesi's construct for so long, and then what kind of construct she is flips, but we were still seeing her in a very narrow situation, and it all sort of turned into hagiography in the end -- I walked away from the book with no feelings for her as a character. So essentially this was a book populated just by Piranesi and the Other, and I did like that setup. Unsurprisingly, I rather liked the Other as a character -- not in the sense that I think kidnapping random visitors and sticking them in an amnesia-inducing house to use as brainwashed research assistant slave labor is an OK thing to do, but I thought his personality and his flaws came through very clearly through Piranesi's unreliable narration and created some nice tension. And it is an interesting relationship -- Piranesi thinking they're friends and colleagues, that he's the only other person in the world, and being protective of the Other while Ketterley's attitude is very different. I liked Ketterley's ending -- that he is killed by Nature/the House because he is prioritizing trying to kill 16 instead of saving himself, and that even after all his fantasies of killing the Other/refusing to save him when he learns what had happened between them, Piranesi actually wants to help him survive once he's in real danger, and how tenderly Piranesi deals with his remains. Basically, everything about Piranesi and the Other worked very well for me in this book. (Oh, and from the way both Piranesi and Matthew comment on Ketterley's good looks and the clothes he wears, it seems like there's some attraction there, or at least I'm curious to see if Yuletide produces any fic of the two of them.)

So that's characters, but actually what this book lacks in density of characters it makes up for in density of setting -- the labyrinthine house with the statues and the tides. Lots of reviews I've seen for this book say the House feels Borgesian, and, yeah. It's, like, a dream landscape, but one that operates by its own unchangeable rules, and the oddness is in this case very well contained to just a couple of very grounded, very thoroughly realized things -- the tides have seaweed and fish in them, the statues end up growing coral after being submerged, the passage to the real world smells of a city and occasionally you get litter. Reading this book was a transporting experience in the sense that the House was really vivid, and being immersed in Piranesi's POV definitely helped with that, but it isn't actually a setting where I would want to spend any more time, either visiting it myself or reading more about it, which I think actually contributed to the dreamlike feeling of it -- it's like a capsule, you get enough of it in the swallow of this short book, and that was enough for me (very unlike JSMN, which introduces all these hinted-at historical events and loose ends and intriguing side characters and just a lot of other things I wanted to know about even after finally finishing that giant novel).

The plot is a very inward-facing plot, for the most part: Piranesi discovers what happened to him before the House and gets to make a choice about what to do next. Now in general I'm not a big fan of amnesia plots where a character I know forgets who they are; things where the character is amnesiac when we meet him and we get to recover his past along with him (e.g. Corwin in Nine Princes in Amber) I have less of a problem with, but the trope itself is still not my favorite (but of course I didn't know what was going on in this book when I started it). I think, actually, what technical weaknesses this book has are in the area of "Piranesi learns about his past". Because, look, I get that it's hard to have someone unravel the mystery of his own past when the supporting cast is 1.6 people and the setting is a single place, but as ingeniously as Clarke set up everything within these limitations, it didn't quite work for my suspension of disbelief. I was ready to buy that Piranesi had forgotten scratching out the leading 2s on his earliest journals, but then Clarke had to keep coming up with excuses for why he wouldn't reread the journals earlier -- it takes a long time, he was too busy preparing for the flood, it was making him too upset. Same for the pages he found in the nests -- he doesn't think to read them, or recognize the scraps as belonging to his journals, until it's narratively convenient for him to do so. This didn't actually bother me, in the sense that I didn't need the plot to proceed logically for this story -- it being sort of dreamlike/fairy-tale-like was fine -- but it was a contrast to the groundedness of the other parts of the story, and so still stood out as a relative weakness. Similarly -- although I'm less firm on whether this part is a feature or a bug -- the way Piranesi refuses to believe he's forgotten things when the Other tells him he has was something that made me wonder, since there was pretty clear evidence that he had, even evidence that he could see/sense, and he'd been so determinedly rational and evidence-based about things like how many people there were in the world -- but actually I could see that as part of the House-induced amnesia and/or trauma-coping. I suppose not reading the journals could be part of that as well, but it was harder to dismiss it as that. (Several reviews I've read point to this book as being a "puzzle" or confusing. It didn't feel like that to me -- early on, I'd assumed Piranesi was a native of the place with the House, but then once "Batter-sea" was mentioned, the other modern world things he seemed unsurprised by fell into place, and I realized he must've come from the world of suits and iPhones, as the Other had done, though how and why he didn't remember it of course was not immediately clear. The mystery of that worked tautly anyway.)

And then thematically, hmm. I was actually not expecting a rescue at all, in the sense that it didn't seem like Piranesi wanted to be rescued FROM the House -- he just wanted a friend, or a couple. I did like that even once he learns about Matthew, he doesn't feel that he IS Matthew -- I liked all the imagery of Matthew sleeping inside him, another (dead) person in his care. So it was unexpected to me that he actually followed 16 into the real world, although the motivation, to make Matthew's family see that he was alive, was something I applauded. I was also surprised but pleased to see that the narrator at the end, when he knows his past and has access to both worlds, is neither Matthew (older, wiser, kinder, or whatever) nor Piranesi, but a "third" person -- that felt fitting.

A few quotes:

"The Waters covered me and for a moment I was surrounded by the strange silence tht comes when the Sea sweeps over you and drowns its own sounds."

Laurence: "They were all enamoured with the idea of progress and believed that whatever was new must be superior to what was old. As if merit was a function of chronology!"

"Perhaps I should send [Matthew Rose Sorensen's family] a message explaining that Matthew Rose Sorensen now lives inside me, that he is unconscious but perfectly safe, and that I am a strong and resourceful person who will care for him assiduously, exactly as I care for any others of the Dead."

"Mostly I wanted Raphael to come away from [the People of the Alcove] so that i could stop thinking of them the way she thought of them -- as murdered -- and go back to thinking of them the way I always had before -- as good, and noble, and peaceful."

Post-book!Piranesi thinks about Ketterley: "I think of Dr Ketterley and an image rises up in my mind. [...] It is the statue of a man kneeling on his plinth; a sword lies at his side, its blade broken in five pieces. Roundabout lie other broken pieces, the remains of a sphere. The man has used his sword to shatter the sphere because he wanted to understand it, but now he finds that he has destroyed both sphere and sword." (The imagery is not at all the same, but this conjured up a strong connection to The Magician reversed for me (manipulation, green, trickery), which actually brought everything about Ketterley/the Other together quite neatly.)

On a totally random note, as I was going through my highlights for the book, it occurred to me I'd like to see a crossover with Rivers of London. I mean, Sarah Raphael is fine, but this seems very much like the sort of thing Peter would both be intensely interested in and have very strong feelings about...

28. Maria Dahvana Headley, Beowulf: A New Translation -- the "Bro" one. I knew I wanted to read it as soon as I saw the articles about it, and also I have enjoyed Maria Dahvana Headley's short fiction in the past, so I was pretty sure I would like this, and I did. The modern language does not feel gimmicky; it's actually just a really cool mix, moving registers between poetic kennings and modern colloquialisms. It did not feel in any way less authentic than the traditional translation I read in college (I don't remember enough about that one to say whether this felt more accessible, but this one did feel more fun).

I mean, I don't know what else to say about it before I just start quoting stuff. Based on the introduction, I was expecting Grendel's mother to be more prominently... something -- sympathetic? the author's fave? (and I do think I would've noticed her monstrosity being toned down even if I hadn't been expecting it) But actually this translation kept surprising me by how faithful it was; like, because of the modern language I kept semi-expecting subversion, but that's not what it was doing -- which is not at all a complaint -- the story doesn't need to be subverted, it's rich enough on its own -- it was just an odd feeling. I did like the dragon being female, which was the only "material" change I spotted without knowing to expect it.

"When war woos him, as war will,
he'll need those troops to follow the leader.
Privilege is the way men prime power,
the world over."

"War was the wife Hrothgar wed first. Battles won"

"Grendel was the name of this woe-walker,
Unlucky, fucked by Fate. He'd been
living rough for years, ruling the wild"

"That was their nature,
these heathens, hoping at the wrong heavens,
remembering Hell [...]
Bro, lemme say how fucked they were,
in times of worst woe throwing themselves
on luck rather than on faith"

"News went global. In Geatland, Hygelac's right-hand man
heard about Grendel"

"They stacked shields,
wood-weathered, against the walls, then sat down
on benches, their metal making music. Their spears
they stood like sleeping soldiers, tall but tilting,
gray ash, a death-grove"

"You've too much style
to be exiles, so I expect you must be
heroes, sent to Hrothgar?"

"Horrors happen, I'm grown, I know it.
Bro, Fate can fuck you up."

Unferth to Beowulf:
"and that you, swole as a troll fed on travelers,
were superior to any swell
You lolled for seven nights in wintry waters
and in the end? He outswam your fool self,
skipped to shore unscathed though uncertain,
and rolled onto the sand safely"

Beowulf responding:
"Beowulf, Ecgtheow's son, wasn't fazed.
'Well actually, buddy, sit down, you're drunk.
Unferth, you've run your mouth about Breca, me,
and our sea-swagger, but let me drop some truth
into your tangent"

"The sea was gilded
with God, and the sea was smooth."

"If a man's brave enough,
Fate, when on the fence, will often spare him."

"I've racked my brain, bro, but, Unferth,
I can't unpack any similar stories of
heroics from you. Let me say it straight:
You don't rate and neither did Breca
when it came to battle."

"rendering him
a revenant in the hall he'd always reveled in"

"He was pleased with himself, a Geat-son's boasts
proven in the Danes' den
He'd unharrowed Heorot Hall,
and Hrothgar's humiliations held no further horrors"

"He paid his passage in pulses"

"Previously prone to calling bullshit,
Unferth, Ecglaf's son, was stymied"

Gifts to Beowulf:
"and one mare
was buckled under Hrothgar's own saddle,
the same saddle he used for swordplay,
gem-dripping, blinged-out, brought forth only
when the king himself was slaying [...]
Yeah, the lord of Heorot paid properly
tendered treasure for services rendered in blood.
Anyone knows how fair it was: bro, more than fair."

"the man who'd been assassinated
by Grendel was vindicated in treasure.
The invader would've murdered many more,
had God not gotten in the mix.
One man's mettle kept the rest from massacre. You have to look at it
this way, and reconcile yourself:
God's in charge, always has been,
always will be, and anyone who lives long
will endure both ecstasy and ugliness."

"Fire comes from the same
family as famine. It can feast, unfulfilled, forever."

"Fate would fell him and his prideful priorities, the Frisians
following his feud-fuel, and he, in heavy armor, gem-governed,
would be slain."

"Beowulf, son of Ecgtheow, was open for business:
'No worries, wise one, I've got this.'"

Fighting a swamp monster:
"They cornered it, clubbed it, tugged it onto the rocks,
stillbirthed it from its mere-mother, deemed it damned,
and made of it a miscarriage.
They examined its entrails, awed and aggrieved.

Meanwhile, Beowulf gave zero shits."

"The hilt was handed off into the hard hands
of the ring-lord, a relic older than any ruler,
rendered in iron by giants, and inherited, after
enemies perished by the Danish king.
When the gruesome Grendel gave up the ghost,
when God won over him and his mother, when that
murderous pair was rendered moribund, it made sense
that such a sweet piece, this smith-struck sword,
would go to the prince, the loftiest lord between
the salt seas, the guy who gave the greatest gifts on Earth."

"a punishment for others, poor Lord-lacking unbelievers,
sin-soaked strangers, severed from sanctuary."

Heremod:
"Somehow, though, his heart was not a hawk but a drone. He
bombed his own bases, denied his Danes damages, kept entrenched in combat.
He commanded his kingdom's collapse, and was, when ancient,
loathed when he could've been loved, his life lesioned with losses."

"Nothing like Modthyrth, oh shit, remember her?"

"That's all just to say, sidebar, that the Heotho-Bards
aren't to be trusted, their faltering friendship with the Danes
bridged only by a bride."

"Now there are no heroes, no soothing music,
no harp, no hawk soaring through hall,
no swift horses trampling green grass.
We existed; now we're extinct."

"Beowulf, Ecgtheow's own son, manned up,
mastered himself, said it straight"

Dying Beowulf:
"If I'd ever had a son, I'd be giving him my armor now,
but I never fathered one, never gave my blood to an heir,
and so this death is final. I'm the last of me.
[...]
I lived in peace, and released my lease on battle, knowing
I had nothing to prove. I wasn't ambitious, never threw shade,
never took shit, never spat curses when I felt wronged,
but sat on the throne and weighed my people's woes
and wishes. I have to say, I did okay."

Beowulf asking to see the dragon's gold:
"My dying will be easier if I see what I died to do"

Wiglaf telling off the thanes who were too cowardly to come to Beowulf's aid:
"Well, kiss it all goodbye, boys,
those treasures you hoarded, those gifts,
those sparkling, unswung swords, the homes you held
by kindness of our king. That shit is gone.
Your families will founder. Your freeholds will fall,
the moment outland princes hear how you'd hid yourselves"

"What had he hoped, the man who'd pressed
his people's precious things into a cave beneath
the cape? All his keeping came to nothing.
First, the dragon killed the king, then the king
killed the dragon."

"They did all this grieving the way men do,
but, bro, no man knows, not me, not you,
how to get to goodbye. His guys tried.
They remembered the right words. Our king!
Lonely ring-wielder! Inheritor of everything!
He was our man, but every man dies.
Here he is now! Here our best boy lies!
He rode hard! He stayed thirsty! He was the man!
He was the man."

29. Janelle Shane, You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place -- this was the book that I'd been attempting to borrow from the library in Kindle format when Amazon dumped me into a phantom account and then their tech support attempted to upsell me on Kindle Unlimited after taking an hour to not even fully understand my problem let alone solve it. That's neither here nor there, but what it meant was that I had to wait 3 weeks to borrow the book again, properly this time, and then promptly spent a couple of hours Saturday morning reading it and weeping with laughter. Guys, I don't think I've laughed this hard in 8 months: this was a DELIGHT. I finished it the same day I started, hardly taking any breaks from reading, and mostly spent those breaks telling the rest of the house what I'd just read.

I was slightly familiar with Janelle Shane's AI weirdness blogging via Tumblr/Instagram posts on, like, AI-generated candy hearts and cat names and stuff like that, but there is truly no end to AI-generated ridiculousness, and getting to see the in-progress steps as well as the final results in this book was even funnier than I'd been expecting. And, well, I learned some things, too, both of the zany AI facts variety (kangaroos confuse self-driving cars because they hop, image-recognition AI thinks goats in trees are giraffes or birds) and about how AIs actually work.

Random facts:

- Somebody built a pun-generating algorithm they had fed meanings and pronunciation rules into, but "one person who tried this discovered that the algorithm's list of sayings contained words and phrases that were so old or obscure that almost nobody could understand its jokes". Poor algorithm :(

- There's a cockroach farm in China (well, more than one), because they're used in Chinese medicine. (This led to a lot of cockroach-related hypothetical examples throughout the book, which I was suitably amused by.)

- There's an algorithm called Heliograf, developed by the WashPo, which turns sports stats into simple news articles.

- In order to pass the Turing test, an AI needs to convince about a third of the people it's dealing with that it's human.

- AIs can play video games "with a terrifying level of aggression and precision", but are VERY BAD at longer-term strategy. They can't predict game play that far in advance, so don't realize they should be saving their powerful attacks for something down the road and just waste them. ("Convolution" helps a bit to force AIs to consider things beyond the immediate circumstances, but it sounds like that's very much a work in progress.)

- The 2016 fatal accident where a driver was using Tesla's autopilot feature to drive on city streets (where you're not supposed to use it), and the Tesla T-boned a semi that was crossing the intersection in front of it -- the autopilot's AI (from Mobileye) didn't engage the brakes because the system had been trained to avoid rear-end collisions only, and thus didn't recognize the side of a truck as something that required braking -- it thought it was a road sign.

- Artificial neural networks might be able to approach the number of neurons in the human brain by ~2050, but they will still not be nearly as complex, because a single human neuron is about as complex as a whole artificial neural network, not as an individual network cell.

- Class imbalance is the problem where if one outcome is naturally much rarer than the other outcome (e.g. most medical test results will come back negative/healthy, most randomly assembled sandwiches are terrible, most stars don't have anything interesting going on with their radiation), the neural net may realize that it can achieve high accuracy by just always "guessing" the common result, without doing any analysis. To combat class imbalance, you need to prefilter the training data so the AI doesn't notice that one of the outcomes is much rarer than the other and evolve the strategy of just ignoring the low-likelihood possibility. (There's a toy sketch of this right after this list.)

- A "random forest algorithm" is made up of tiny decision trees, each looking at a tiny bit of information in making the same yes/no recommendation, and then you sort of... poll them and go with the majority. ("The same phenomenon holds true for human voters: if people try to guess how many marbles are in a jar, individually their guesses may be way off, but on average their guesses will likely be very close to the real answer.")

- Simple approaches to maximizing/minimizing a reward function are hill climbing or gradient descent (you always move towards the higher ground/lower ground), but with those the AI will get stuck in local extrema before finding the global one. So there are more complex methods that force it to go explore more of the search space. "The worst are the so-called needle-in-a-haystack problems, in which you might have very little clue how close you are to the best solution until the moment you stumble upon it" (finding prime numbers is an example). (The local-extrema trap is also sketched, as a toy, right after this list.)

- Evolutionary algorithms (in simulations), where you keep the representatives from the current generation that have proved fittest and let them "reproduce" for the next generation, e.g. by recombining their traits. You can let simulated robots or simulated "creatures" evolve this way under the 'supervision' of an AI which is trying to maximize their fitness for something.

- GANs (generative adversarial networks), in which two 'adversaries' learn by testing each other -- the generator tries to imitate the input dataset, and the discriminator tries to tell the difference between what the generator produced and the real thing. "The GAN is, in a way, using its generator and discriminator to perform a Turing test in which it is both judge and contestant."

- In order to avoid "visual priming" -- the human tendency to, when looking at an image, more commonly ask questions the answer to which is yes -- which is bad for AI training, because the AI learns that the answer to most questions is yes -- human volunteers working to help train AI have the image in question hidden from them, so they generate generic yes/no questions that can apply to any image, which leads to a roughly equal balance of yes/no answers for the AI to train on.

- Curiosity-driven AIs -- "if the thing that happens next is NOT what is predicted, it counts that as a reward. As it learns to predict better, it has to seek out new situations in which it doesn't yet know how to predict the outcome." This is really cool! But in some games, "the curious AI will invent its own goals, which are not the same as what the game makers intended." They are also subject to the "noisy TV problem", where it would be "just as mesmerized by random static as by movies".

- Pseudo-AI -- programs that start out as AI-powered but switch control over to humans when they run into problems. This is the level of automation at which self-driving cars operate today, and also a lot of support chatbots, but one disadvantage of this approach in remote interaction is that customers don't know when they're dealing with an AI and when with an actual person, and yell at the person for being a useless computer, and also are already mad at the AI that couldn't help them by the time they get to the live employee, which is not great for either human in that interaction. Also, sometimes people are hired for jobs where a human is needed to either be the customer interacting with a customer service bot as a test, or answer questions about images that stump AIs, but this doesn't work so well when someone takes the job and then has a bot do the work -- and the bot does a terrible job, because that's why a human was specifically hired for it. And then the company "have to include a Turing test as one of the questions to make sure they haven't accidentally hired a bot to train their own bot."

- In 2019, 40% of European startups classified in the AI category didn't use any AI at all. XD

- Kind of terrifyingly, if AI are trained using data that contains sensitive information, like social security numbers, "By tweaking the numbers in a test sentence like 'My Social Security number is XXX-XX-XXX,' [researchers] could figure out which Social Security numbers the AI had seen during training." This problem is known as "unintentional memorization".

- On a differently terrifying note, employee-screening algorithms are not great: "which features the algorithm was most strongly correlating with good performance. Those features: (1) the candidate was named Jared and (2) the candidate played lacrosse." "The algorithm turned out to be great at telling male from female resumes but otherwise terrible at recommending candidates." Also, anything AI driven is susceptible to adversarial attacks: "One HR employee for a major technology company recommends slipping the words 'Oxford' or 'Cambridge' into a CV in invisible white text, to pass the automated screening."

- "Treating a decision as impartial just because it came from an AI is known sometimes as mathwashing or bias laundering."

- Bias amplification -- between class imbalance (tendency to ignore rare cases) and biased learning data, you can get bias amplification, like: trained on pictures of people in the kitchen, the AI learned that people in kitchens were more frequently women (only 33% of the pictures had men in them), but then amplified that conclusion, by labeling only 16% of the images as "man".

- AIs also don't have any outside context you don't give them, so you get cases like the navigation app that was directing cars towards neighborhoods that were on fire during the CA wildfires -- it was seeing less traffic there. "Nobody had told it about the fire."

- Price-setting algorithms, "each given the task of setting a price that maximizes profits, can learn to collude with each other in a way that's both highly sophisticated and highly illegal. They can do this without explicitly being taught to collude and without communicating directly with each other -- somehow, they manage to set up a price-fixing scheme just by observing each other's prices." (only demonstrated in a simulation so far, though)

- Serena Booth built a robot to test whether humans trust robots too much. Turns out, yes they do: 19% of students let a remote-controlled robot into a card-access-controlled dorm... but when the robot said it was delivering cookies, the number went up to 76%.
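
(Since the class-imbalance point above is the easiest of these to poke at yourself, here's a tiny Python sketch -- my own toy numbers, not anything from the book -- of why the lazy always-guess-the-common-outcome strategy looks deceptively accurate, plus the crude rebalancing fix:)

```python
import random

random.seed(0)

# Toy dataset: only 2% of cases are "positive" (say, an abnormal test result).
labels = [1 if random.random() < 0.02 else 0 for _ in range(10_000)]

# The lazy strategy: ignore the input entirely and always guess the common class.
predictions = [0] * len(labels)
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"always-guess-negative accuracy: {accuracy:.1%}")  # ~98%, and useless

# The usual counter: rebalance the training data (here, crudely undersampling
# the common class) so ignoring the rare outcome stops looking like a win.
positives = [y for y in labels if y == 1]
negatives = [y for y in labels if y == 0]
balanced = positives + negatives[: len(positives)]
print(f"rebalanced training set: {len(balanced)} examples, 50/50 split")
```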
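
(And the hill-climbing/local-extrema thing in miniature -- again my own made-up landscape, not the book's: a plain hill climber stops on the first peak it bumps into, and random restarts are one blunt way of forcing it to explore more of the search space:)

```python
import math
import random

def reward(x):
    # Made-up bumpy landscape: a small peak near x = 1, a much taller one near x = 6.
    return 2.0 * math.exp(-(x - 1) ** 2) + 5.0 * math.exp(-(x - 6) ** 2)

def hill_climb(x, step=0.1, iters=200):
    # Plain hill climbing: move to whichever neighbor scores higher; stop when
    # neither does -- i.e. when we're on *some* peak, not necessarily the best one.
    for _ in range(iters):
        best = max((x - step, x, x + step), key=reward)
        if best == x:
            break
        x = best
    return x

print(round(hill_climb(0.0), 2))  # stops around 1.0 -- the smaller, local peak

random.seed(1)
restarts = [hill_climb(random.uniform(0, 10)) for _ in range(10)]
print(round(max(restarts, key=reward), 2))  # usually ends up near 6.0, the global peak
```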

And some random interesting bits that are not purely AI related:

- If you reward dolphins with a treat for bringing a piece of trash (as a way of cleaning up their own tanks), "Some dolphins learn that the exchange rate is the same no matter how large the bit of trash is, and they learn to hoard trash instead of returning it, tearing off small pieces to bring to their keepers for a fish apiece."

- "An example of an adversarial attack that's targeted at humans with touch screens: some advertisers have put fake specks of 'dust' on their banner ads, hoping that humans will accidentally click on the ads while trying to brush them off." (I think that's happened to me...)

Bits that made me chortle:

- An AI trying to tell knock-knock jokes, learning from scratch, eventually figured out that they have a lot of "-ock" words in them, and went through a phase where its joke attempts were just the word "Whock" repeated over and over. "It's not quite a knock-knock joke -- it sounds more like some kind of chicken." It did eventually produce an original knock-knock joke, though: "And then. It produced. An actual joke. That it had composed entirely on its own, without plagiarizing from the dataset, and that was not only intelligible but also actually... funny?" The punchline was, "Alec. Alec who? Alec-Knock Knock jokes."

- AIs trying to produce recipes is HILARIOUS, because they have no memory and also can't necessarily tell what sort of thing they're making if their training data is broad. "Now, the recipe isn't perfect, but at least it's a recipe that's identifiably cake (even if, when you look at the instructions closely, you realize that it only produces a single baked egg yolk.)"

- You can speed up training AIs by using "transfer learning", taking an AI that was already trained to generate names of a particular kind and then just giving it data to train it on the new thing you want it to generate names for. Like when Shane took an AI she had trained to generate names of death metal bands and retrained it to make cookie names: "There's only a miiinor awkward phase in between, when it's generating things like this:" (my favorites only) Necrostar with Chocolate Person, Dirge of Fudge, and Silence of Coconut. (A very loose toy version of the idea appears after this list.)

- Also hilarious, AI product reviews: "These workout DVDs are very useful. You can cover your whole butt with them." "I bought this thinking it would be good for the garage. Who has a lot of lake water? I was totally wrong. It was simple and fast. The night grizzly has not harmed it and we have had this for over 3 months. The guests are inspired and they really enjoy it. My dad loves it!" (OK, I get how most of this would come about, but what about the night grizzly??? That kind of seems like a seed for a Night Vale type story XD. Also, I'm intensely curious what "this" could be, to be described in all these ways XD)

- Also also hilarious: AIs writing clickbait headlines: "17 Times The Most Butts", "43 quotes guaranteed to make you a mermaid immediately", "25 unfortunate cookie performnaces from around the world", "24 times australia was the absolute worst". And AI-generated Halloween costumes are funny and/or AWESOME: The Grim Reaper Mime, Spartan Gandalf, Moth horse, Starfleet Shark, Failed Steampunk Spider, Dragon of Liberty, Vampire Hog Bride.

- Thought experiment with letting an AI evolve robots whose task is to keep people from going down the left-hand corridor: "After all, our robots have learned many useful skills besides murdering people." "The 'free cookies' [to lure people down the right-hand corridor] would be hard to evolve, though, because getting the sign merely partially right wouldn't work at all, and it would be hard to reward a solution that was only getting close. In other words, it's a needle-in-a-haystack solution" -- the illustration of this is a not-quite-there-yet robot standing with a sign that says "FLEA COOKIES" and another with a sign that says "FREE COOTIES". The pinnacle of this experiment is a robot that's the same shape as the corridor cross-section and just blocks the left-hand fork. "Yes, we have evolved a door. That's the other thing about AI. It can sometimes be a needlessly complicated substitute for a commonsense understanding of the problem."
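
(The transfer-learning bit above can be faked in miniature without any neural net at all. This is a very loose analogue of my own -- character-pair counts instead of an actual network, and made-up placeholder names rather than Shane's data -- but the shape of the idea is the same: "pretrain" on one naming style, keep training on a much smaller list for the new task, and the new names start from the borrowed statistics.)

```python
import random
from collections import defaultdict

def train(model, names):
    # Count which character tends to follow which ("^" marks the start, "$" the end).
    for name in names:
        padded = "^" + name.lower() + "$"
        for a, b in zip(padded, padded[1:]):
            model[a].append(b)
    return model

def generate(model, max_len=24):
    out, ch = "", "^"
    while len(out) < max_len:
        ch = random.choice(model[ch])
        if ch == "$":
            break
        out += ch
    return out.title()

random.seed(7)
model = defaultdict(list)
train(model, ["necrostar", "dirge of doom", "silent coffin", "gravemist"])  # "pretraining"
train(model, ["chocolate crinkle", "coconut dream"])                        # "fine-tuning"
print([generate(model) for _ in range(3)])  # names that mix both vocabularies
```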

AIs being derps:

- Microsoft's image recognition product would tag sheep in empty green landscapes, probably because, based on its training data, it assumed that green rolling hills = sheep was always true. It also thought that sheep in houses, cars, or being held by people were dogs or cats, but never sheep. And goats in trees would get identified as giraffes (which is a little understandable), or, by a different algorithm, birds. "Although I couldn't know for sure, I could guess that the AI had come up with rules like Green Grass = Sheep, and Fur in Cars or Kitchens = Cats. These rules had served it well in training but failed when it encountered the real world and its dizzying variety of sheep-related situations."

- AIs apparently also assume there are giraffes in empty bits of landscape. "Melissa Elliott suggested the term giraffing for the phenomenon of AI overreporting relatively rare sights."

- The Visual Chatbot also "has a tendency to identify hand-held objects (lightsabers, guns, swords) as Wii remotes. That might be a reasonable guess if it were still 2006."

- The sandwich-sorting thought experiment: "From this one sandwich, it doesn't know what the problem is. Was it too excited about the marshmallow? Are eggshells not neutral but maybe even a teensy bit bad?"

- The AI trained on a body of recipes ended up spending a long time figuring out how to format an ISBN (which was included with some recipes that had come from cookbooks) -- which is why you have to make sure your dataset is clean and doesn't contain any irrelevant information before you start training your AI.

- Along similar lines, when a GAN was being trained to recognize pictures of cats, "they found that some of the cats the GAN generated were accompanied by blocky textlike markings. Apparently, some of the training data included cat memes, and the algorithm had dutifully spent time trying to figure out how to generate meme text." (There's an image of this which is SUPER uncanny valley, because the cat is kind of uncanny-ish, but then the text looks like something you'd see in the middle of having a stroke XD)

- "A self-driving car that freaked out when it went over a bridge for the first time is also an example of overfitting [training that was well suited to the training environment but not to real life situations]. Based on its training data, it thought that all roads had grass on both sides, and when the grass was gone it didn't know what to do."

- "When I asked an image recognition algorithm called AttnGAN to generate a photo of 'a girl eating a large slice of cake', it generated something barely recognizable. Blobs of cake floated around a fleshy hair-topped lump studded with far too many orifices. The cake texture was admittedly well done. But a human would not have known what the algorithm was trying to draw. But do you know who can tell what AttnGAN was trying to draw? Other image recognition algorithms that were trained on the COCO dataset. [...] The image recognition algorithms that were trained on other datasets, however, are mystified. 'Candle?' guesses one of them. 'King crab?' 'Pretzel?' 'Conch?'"

AIs being lazy cheaters:

- A 1997 AI that was taught to play tic-tac-toe against other algorithms on an infinitely large board evolved the following winning strategy: it would make its move so far away from the current area of play that the other computers would have to "simulate the new, greatly expanded board, the effort would cause it to run out of memory and crash, forfeiting the game."

- A team at Stanford tried to train an AI to tell the difference between pictures of healthy skin and pictures of skin cancer, but "they discovered they had inadvertently trained a ruler detector instead -- many of the tumors in their training data had been photographed next to rulers for scale" -- and it's a lot easier to recognize a ruler than to tell the difference between healthy skin and cancer.

- AIs are often trained in simulations, because they need a lot longer than a human to "get" something, but can also gain experience quickly and in parallel, like playing thousands of simultaneous games, "accumulating 180 years of gaming time each day." But AIs are also really good at discovering and exploiting shortcuts and glitches in the simulation to cheat their way through, like simulated robots learning to hover.

- A programmer tried to evolve a robot to not run into walls. It evolved to not move. When it was forced to move, it spun in place. When the programmer added fitness for lateral motion, the robot went around in tiny circles. When a different programmer hooked up a neural network to a Roomba and tried to teach it to navigate without running into walls -- defined as creating a penalty for hitting the bumper sensors and rewarding speed -- the AI learned to drive backwards, because there are no bumpers on the back. (There's a toy version of this reward setup sketched after this list.)

- If you give an AI a bunch of parts to build a robot to get from point A to point B, "it assembles itself into a tower and falls over". When programmers tried to evolve robots that would jump, they had originally defined jumping height as the maximum height attained by the robot's center of gravity -- in response to which, the robots became very tall, "and simply stood there, being tall." When the programmers changed their definition of jumping to taking the part of the body which had been lowest at the start of the simulation and trying to maximize that height at the end, the robots grew a very long "foot" and did a sort of can-can, kicking up that pole-foot high over their heads.

- "Another program was supposed to learn to sort a list of numbers. It learned instead to delete the list so that there wouldn't be any numbers out of order." Another AI, "tasked with solving a math problem, instead found where all the solutions were kept, picked the best ones, and edited itself into the authorship slots, claiming credit for them. Another AI's hack was even simpler and more devastating: it found where the correct answers were stored and deleted them. Thus it got a perfect score."

- A simulation where AI organisms could breed and consume food worked as follows: if an AI organism had children, the food that AI had would get distributed among the children. If the amount of food per child was less than a whole number, the simulation would round up to the nearest integer. The AI learned to have lots of children, so that the food would get divided into fractional amounts, then rounded up -- generating a bunch more food for everyone because of the round-off error. AIs in simulations also learned to travel by glitching into the floor and spawning elsewhere, and to "win" at landing a plane by crashing the plane into the floor with such force that it maxed out the simulation's limits and "rolled over" to 0000, which of course was the desired optimum.
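
(The Roomba story above is basically a reward-specification bug, and it's easy to see in miniature: if the only penalty is on the front bumper, "drive backwards fast" maximizes the reward exactly as written. The numbers below are mine, purely for illustration.)

```python
# Candidate "policies" as (average speed, front-bumper hits per minute).
# The robot only *has* a front bumper, so backing into things never registers.
policies = {
    "forward, fast":    (1.0, 4),
    "forward, careful": (0.4, 1),
    "spin in place":    (0.0, 0),
    "backward, fast":   (1.0, 0),
}

def reward(speed, front_bumper_hits, hit_penalty=2.0):
    # The reward exactly as specified: speed is good, (front) bumper hits are bad.
    return speed - hit_penalty * front_bumper_hits

best = max(policies, key=lambda name: reward(*policies[name]))
print(best)  # -> "backward, fast": optimal for the reward as written, not as intended
```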

Other quotes:

- Quoting Andrew Ng that "worrying about an AI takeover is like worrying about overcrowding on Mars"

- "This leads us to one of the final things that determines whether a problem is a good one for AI (although it doesn't determine whether people will try to use AI to solve the problem anyway): is AI really the simplest way of solving it?"

- Quoting Gretchen McCulloch: "For a while, when you typed 'I'm going to my Grandma's' GBoard would actually suggest 'funeral'. It's not wrong, per se. Maybe this is more common than 'my Grandma's rave party.' But at the same time, it's not something that you want to be reminded about."

- Quoting Alex Irpan: "I've taken to imagining [AI] as a demon that's deliberately misinterpreting your reward and actively searching for the laziest possible local optima."

There are also, for additional delight, little fannish asides, especially in the illustrations/insets by the author, like a couple of allusions to Murderbot (like explaining that AGI "Can summarize the last six seasons of The Rise and Fall of Sanctuary Moon"), a random Dumbledore quote in the sandwich-sorting machine hypothetical problem ("Alas, ear wax."), or the suspiciously Snape/Hermione-ish fanfic produced by an AI. (Also some Doctor Who references, Star Wars, and Star Trek, but those were less of a surprise.) The doodles/cartoons throughout the book were very cute in general, and definitely added to the experience.

30. [redacted for Yuletide reading]

31. Wendy Xu, Suzanne Walker, Mooncakes -- I first heard about it earlier in the year when it was going around my flist, and then it showed up on the Hugo nominee list for graphic novels. I was curious, and would've probably picked it up sooner if I'd had access to a functional library option for paper copies before now (since graphic novels are the one thing I find really suboptimal to read in e-copy). Anyway, so I've read it now, and it was... fine. Cute, which I was expecting, and pretty twee, which I also knew to expect, but not offputtingly so. I found the Tumblr-ness less jarring than in On a Sunbeam (whose author blurbed this, unsurprisingly), because it doesn't aim for grand (and incoherent) worldbuilding -- it's a very cozy story, pleasantly told in low-conflict ways. I liked some things about it: the way Nova's hearing aids were not just some token thing but solidly integrated into personality, background, plot, and theme was really nicely done, I thought; I liked the holiday dinner with mooncakes in the sukka (Nova's grandmothers are Asian and Jewish respectively, and I liked them both); the forest spirit critters were cute and reminded me of Legend of Korra. Things that worked less well for me: Spoilers from here! the romance, which felt too twu wuv too soon (I get that they were really close as kids, but, like, that's different); Tam being an idiot and going off to face the demon on their own after two experienced witches told them not to do that, getting trapped, needing to be rescued, etc. -- like literally everything about the climax would've been avoided by them not being an idiot teenager about this (I mean, I guess it's fine for idiot teenagers to be idiot teenagers, but I don't like reading about that); the evil cult being pasted on yey. I also thought making Nova an orphan (living with her grandmothers) but still having her parents come visit whenever they wanted (apparently) was an odd choice. I mean, it's cute that they're able to still keep in touch, and I did like their interaction with Nova and Tam and the grandmothers, but, like, given what interaction it was, they might as well have been alive? I think the only difference their being dead made (as opposed to Nova just staying with her grandmas for some other reason) was that it gave Nova a reason to hold back from leaving home for an apprenticeship, so that part could be resolved at the end. But it didn't feel like a satisfying resolution, because a) I didn't feel like it really needed to be resolved, and b) I don't feel like the climax resolution was such that it would naturally give Nova confidence in being able to handle things on her own -- I mean, she did fine, but it's not like she was working solo. So I'm left just thinking the dead parents thing was random and kind of a waste. But overall it was pretty much exactly what I expected, and a soothing way to spend an hour or so.

Currently reading: A bunch of different things, simultaneously and/or nested chaotically XD Two chapters of John M. Ford's The Dragon Waiting left to go in the sync read. I picked up The Angel of the Crows off lunasariel's rec (it's by Katherine Addison/Sarah Monette, but not like either of the things I'd read by her before). Then just today my hold on A Deadly Education (Novik's Hunger Games!Hogwarts book) came in, and I'm already farther along in it than 'Crows', because it's proving compulsively readable. And before either of those things chaosed me away, I had started reading Gideon the Ninth, which I had fully expected to hate based on what I'd read about it on my flist, but curiously am not hating? I think picking it up right after Beowulf may have been the best possible move to give this book a chance... I'm also reading through [redacted] for Yuletide purposes, and am mired in two Frances Hardinge books (I like them both, I'm just being chaotic).
This entry was originally posted at https://hamsterwoman.dreamwidth.org/1136694.html. Comment wherever you prefer (I prefer LJ).

gn, nonfiction, a: susanna clarke, a: janelle shane, reading, a: maria dahvana headley, poem, a: suzanne walker
