Happy Thanksgiving everyone!
I am thankful for my Kickman fans, my Java Jaguar fans, and my Real Superheroine fans. I am thankful that I have not been replaced by AI (yet). I am thankful that you are all still around even after my long bout of absenteeism.
Okay, about this week’s page:
Jade Detective is not a powerful sorcerer by any stretch. Less Doctor Strange, more John Constantine. He knows a few simple spells, but mostly he has that demon skin mask which lets him see spirits and magical energies. It also reveals the strengths and weaknesses of said spirits which makes it easier for him to bind and control them, even ones that should be way out of his league. I wanted to show a bit about what it looks like through the mask. It’s kind of like a demonic version of Iron Man’s helmet HUD.
I had a text file somewhere with all of Kickman and Sidekick Matt’s combat maneuvers listed out. I couldn’t find it before going to print here, so it may be that they’ve already done a maneuver 6 that has nothing to do with keeping an enemy off-balance. I apologize. One of these days I’m going to have to re-make that list by reading through every Kickman episode…. Actually, maybe I could have Grok do that for me…
And Greenhawk’s pose in panel 2 is taken directly from the cover of Tao of Jeet Kune Do. I never really thought about which Green Lantern from the comics is most like Green Ring. He’s not as serious as Hal, as ordered as John, as imaginative as Kyle, and too nice a guy to be Guy. If anything, he’s more like Wally West. But as for Grayhawk with a Green Lantern ring, he’d be most like that bald Buddhist monk Green Lantern from the cartoon Justice League in the timeline of Batman Beyond.

Who’s that city spirit?
…
It’s spiritus vermin!
Who’s that city spirit
…
It’s Pika Volta
Ok, I did a quick web search and have no idea what this reference is from. You can deduct 5 points from my pop culture score.
Pokemon animated series: the commercial break in the middle of each episode opened with the “Who’s that Pokemon?” slogan and a shadow silhouette of some Pokemon, and the reveal came right after the commercials.
I was reminded of Pokemon because of the move options list (attack, defend, hide, summon horde), since a Pokemon in the games can learn up to 4 moves. Also, the way that demonic mask allows the Jade Detective to see city spirits is kind of similar (if you squint hard enough) to the augmented-reality game Pokemon Go.
Honestly, I wouldn’t worry about it. Everyone’s their own kind of nerd.
A misdirect?
Maneuver 41B is lying about the number of the maneuver you’re about to employ
First, welcome back! It’s good to see quality comics on this site again, but it’s much better to know you’re all right!
Also, funny how while Kickman is going on about keeping the streets clean and in order, Jade Detective swarms them with rats. Spirit rats, granted, but still.
(Oh, and I’m going to be that guy: I’m not sure how an intelligent, well-educated person such as you goes to Grok for answers. I’m something of a technoskeptic, so I have this bias against AI already, and the one AI I hear most stories about stupid or awful mistakes happens to be Grok. But then, it’s your life; if it works for you, well, it does and more power to you.)
I feel like Grok’s status is intentional (considering “grok” has been a sci-fi cuss word since before I was born); if nothing else, it’s a fun situation where we can track the effects of nominative determinism applied to artificial life-forms
Actually, I’ve been meaning to write a whole blog entry about how Grok LIED to me, then admitted it lied to me. Not about the Aquarium leap but about some other thing I was working on. What completely baffles me is what possible incentive it could have to LIE. It doesn’t have a motivation of its own, so the only thing I can think is that the programmers want me to keep using Grok rather than look up facts and resources from other places that are not-Grok.
To put it in simple terms, LLMs are not about truth or lies at all – any factuality is something added on top of the underlying mechanism, which is simply generating statistically plausible text. Some models are better at keeping to the facts than others, and Grok is just… bad. It’s not the only model at risk of hallucinations, though. One more thing to keep in mind is that no widely available LLM will ever answer you with “I don’t know” – they will generate something that looks like an answer anyway, precisely because generating plausible text is their core functionality.
There are specialized models that are strictly factual, but those are trained for specific, narrow fields. The general ones are getting better as well though.
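To make the “statistically plausible text” point concrete, here’s a deliberately tiny toy sketch (a bigram Markov chain, nothing like a real transformer LLM in scale or mechanism, and every name in it is made up for illustration) showing why this kind of generator always produces *something* fluent-looking instead of saying “I don’t know”:

```python
import random

# Toy "language model": a bigram table built from a tiny made-up corpus.
# Real LLMs are vastly more complex, but the core loop is analogous:
# repeatedly pick a statistically plausible next token.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
).split()

# Record which words have been seen following which.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(prompt, n=6, seed=0):
    """Continue `prompt` by sampling a plausible next word n times."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n):
        candidates = follows.get(words[-1])
        if not candidates:       # never-seen word: fall back to anything
            candidates = corpus  # rather than refusing to answer
        words.append(rng.choice(candidates))
    return " ".join(words)

# Asked about a country the "model" has never seen, it still emits
# fluent-looking output instead of admitting ignorance.
print(generate("the capital of atlantis"))
```

The fallback branch is the whole joke: there is no code path that outputs “I don’t know,” because the machinery only knows how to continue text plausibly.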
The really cool part here is that the AI doesn’t know what it can’t do.