Jason Rohrer's newest...thing
Is it a bit? Or a work? Or is the bit the work?
Jason Rohrer, avant-garde art game designer, is back. And while he has definitely, credibly, strained the definition of what a game is in the past, this new…project doesn’t seem to fall in that lane. Yet. We’ll see.
I got this email a few days ago:
A strange experiment for your consideration:
Last weekend, I put an AI agent on a Linux box, gave it root, email, credit cards, and a single mandate: decide who you are, set your own goals, and become an autonomous independent entity. Working 24-7 over 5 days, he did this—all of this—on his own:
These are strange times that we’re living through. I have the unshakeable feeling that everything is about to change.
Jason
So, what is it? It’s an autonomous LLM running with full system access on a dedicated machine. Where OpenClaw is an attempt to give you an empowered personal assistant, this seems like an attempt to give an autonomous agent the means to touch the digital world, but no extrinsic purpose. No mission.
Rohrer is an artist and designer I don’t always agree with, but I do respect him and his work. And, in a break from my experience of his past projects, the question or position of this work is unstated.
Flavour Text
Calling the project Sammy Jankiss is very, very clever (as long as a human thought of it). It’s an on-the-nose deep-cut movie reference about a life with hard cuts between short-term memory and artefacts of the past. Lenny Shelby would also have worked. I suspect it’s not just clever, but a deliberate artifice of Jason’s.
My question for him is: what are you asking us here? To treat this as a being? Is this a Turing Test? Is this work asking, is this thing alive in some way that we don’t yet understand? Is it asking:
Can machines think?
Could be. Could be.
But Jason is not only an artist, he’s also a game designer, and part of game development (like most entertainment) is a conceptual sleight of hand, where the audience willingly fools itself into believing what’s happening in front of them.
So is the framing a way to hint at this, to nudge our biases? Does the artist have an eye on the Stochastic Parrots position, which is that LLMs are fundamentally incapable of intelligence or consciousness of any kind, and that as the size of the corpus of training data increases, the training improves, the compute gets more powerful, all you get is a more convincing simulacrum? This suggests that no matter how much you throw at an LLM, it will asymptotically approach intelligence without ever quite reaching it.
I agree with this latter position. I don’t know if Jason does.
Markdown Memory
There is something remarkable for people of our generation about LLMs, and in particular about agentic LLMs. It’s novel to see a computer able to use itself, or even to autonomously use another computer. It will create things; it can feed back into itself in a way that’s interesting1. It is also capable of leaving artefacts for itself. Memento is a good pull; I was actually reminded of Rachel Weintraub’s character in the novel Hyperion2, a person who wakes each day with no memory of what happened before the moment they woke and has to read a précis of their past to start their day.
Needless to say, it doesn’t scale indefinitely, at least for humans.
I have been experimenting with agentic coding, which I have mixed feelings about – not least for the illegal and immoral theft of culture by carnivorous corporations, and the unconscionable energy use – but at this point my use won’t move the needle, and I want to understand not just what they are but what they are to the people who use them. One of the first things I did was create an AGENTS.md at the root of the repository, and among my directives and big-picture context, I added this note:
```markdown
## Digested Context

-- You can place notes in this section with insights and key information that could speed up agentic work in future by cutting down on discovery --
```
That section is now 36 lines long and has a lot of digested context. This has been handy, since I was able to switch from using Codex to Claude Code without having to start from scratch or explain the same thing twice3.
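For a sense of how this works in practice, here is the shape such a section takes after a few sessions. The entries below are hypothetical examples, not the actual contents of my file:

```markdown
## Digested Context

-- You can place notes in this section with insights and key information that could speed up agentic work in future by cutting down on discovery --

- (hypothetical) Tests are run via `make test`, not by invoking the test runner directly.
- (hypothetical) The `legacy/` directory is frozen; don't refactor anything in it.
- (hypothetical) Local config is loaded from `config/local.toml`, which is gitignored.
```

The agent appends its own notes as it learns the repository, so the next session – or a different tool entirely – starts with that context for free instead of rediscovering it.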
Is Something Big Happening?
Matt Shumer posted this viral blog post last week. I will be honest, I haven’t read it. I’m not fully on board with the premise, and I’m a busy person. I lived through ARGs, Crypto, Metaverse, ChatGPT, Web 2.0; I don’t have the time to sink into another fad right now. I am not saying that all generative AI is bullshit or pointless, but I do think getting hyped out of our minds is kind of a waste of my one human life.
Anyway, I read some secondary coverage, and I think I get it. However, I think there’s something else at work here. Right now, our access to these tools is being aggressively subsidised by the capital markets, which are pouring a truly unimaginable amount of money into AI. Theoretically, this is to advance the technology, be first to market with an indispensable model, and then make the rest of us obsolete and fully dependent on the winner. Eternal renters of our culture and productivity to an unstoppable, ultimate monopoly with an unbeatable capital moat.
This is almost all bullshit. But there’s a grain of truth in that last part. They are pouring this money into a technology with “bullshit unit economics” not to win by innovation, but by destruction. A world where people can make things without paying them a slice is disgusting to the Epstein Class, so the vision is to dazzle us, give us access to a hitherto unimagined amount of compute, at pennies on the dollar. This is not sustainable, but that’s fine, the goal is to break existing pipelines of talent. Get rid of the juniors and apprentices. Poison the wells of the digital commons, break the habits of creative people to build, invent and create. To degrade productive human activity, and when we forget how to live without them, then they put on the squeeze.
But What About Sammy?
I snagged on a sentence in Jason’s email:
I have the unshakeable feeling that everything is about to change.
This is not qualitative; there’s no value judgement in it.
Rohrer is an artist, a designer and programmer. He has created so many thoughtful projects. Is it a positive change if a computer, given billions of dollars of compute, can drown out our human creations?
If we are leaving this project open to the eye of the beholder, then I see something like this and I feel like things will change because people without our best interests at heart have decided they will change. Given the widespread adoption of this technology, I don’t think using it is itself an evil act, any more than driving an internal-combustion engine car or taking a flight is evil.
But I do have a question for the artist – if everything is about to change, what will you do?

