
INDIEPIXEL BLOG

Vibecoding...

...a Space Invaders Clone with Claude Opus 4.5

Can you trust code you've never seen?

I ran an experiment. The rules were simple: I would build a complete Space Invaders clone using Claude Opus 4.5, but I wasn't allowed to look at a single line of code during the process. Only the result. A complete black-box session where I could describe what I wanted, see the game running, give feedback, iterate. But never peek under the hood. My only options were to ask questions and give new instructions; if the model showed me code, I looked away or pretended I hadn't seen it.

The goal was to simulate what I suspect will become increasingly common: developers (and non-developers) shipping code they've never actually read. Vibecoding as the new normal.

After the game was done (pixel art sprites, destructible shields, smooth animations, the works) I asked Claude to help me analyze the very code it had written. A kind of AI self-audit.

The good news? The code actually works. It's structured, it has a proper game loop, pre-rendered sprite caching, a clean state machine. Not the chaotic mess I half-expected.

But the vibecode tells are there. 'Magic numbers' scattered everywhere... a nested ternary that computes difficulty scaling across six conditions in a single unreadable line. Dead code: a fully implemented pointInBox() function that's never called anywhere. Copy-pasted border-drawing loops that beg for abstraction. The classic signs of iterative prompting where each fix adds complexity without cleanup.

The most telling artifact? An explicit collision inset parameter with a default value that gets overridden to zero in the only place it actually matters. That's the fingerprint of 'make the collisions feel better' -> AI adds parameter -> problem solved -> nobody refactors -> nobody cares... well, I do.
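Per the rules I never saw the real source, so here is only a hypothetical reconstruction (in Python, with invented names except for pointInBox, which the audit surfaced) of what these tells typically look like - the nested difficulty ternary, the orphaned helper, and the overridden inset:

```python
# Hypothetical sketch of the 'vibecode tells' - NOT the game's actual code.

def difficulty(wave):
    # Magic numbers plus a nested ternary: six conditions in one unreadable line.
    return 0.8 if wave < 2 else 1.0 if wave < 4 else 1.3 if wave < 6 else 1.7 if wave < 9 else 2.2 if wave < 13 else 3.0

def pointInBox(px, py, x, y, w, h):
    # Dead code: fully implemented, never called anywhere.
    return x <= px <= x + w and y <= py <= y + h

def hit(a, b, inset=4):
    # An explicit collision inset with a default value...
    return (a.x + inset < b.x + b.w - inset and a.x + a.w - inset > b.x + inset and
            a.y + inset < b.y + b.h - inset and a.y + a.h - inset > b.y + inset)

# ...that gets overridden to zero in the only call site that matters:
# if hit(bullet, invader, inset=0): ...
```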

This isn't the bloody mess I expected, but we're not in Kansas anymore, either. The game is playable, the visuals are consistent, the shield destruction actually feels good. But the code carries debt - the kind that accumulates when iteration happens faster than understanding. It works, but modifying it round after round would have required proper reverse-engineering skills that were never applied, neither by the model nor by me (per the rules).

If this is the future of development - and I suspect it is! - then future vibecoders will need a few skills they probably won't possess: reading and _understanding_ code they didn't write, and auditing AI outputs.. simply knowing that 'it works' isn't enough.

Play the result yourself (click the screenshot)

| CREDITS |
Vibecoder: @gizmo64k, code & analysis: Claude Opus 4.5 (Anthropic) /gizmo | 2025-12-09

bob.

Local AI Content Generation

Live Demos @ Visuarize's DMEXCO 2025 Booth

DMEXCO REEL

At DMEXCO 2025, we demonstrated how AI-powered content creation can work through local, offline processing.. no cloud dependency required!

Conversations at the booth centered on integrating AI into virtual production pipelines without the latency, costs, and data concerns of cloud services. We showcased AI actors and content generation tools running entirely locally, with full control over workflows & data. All actors featured in our demonstration reel came from our internal AI model database, and every shot was generated live at the booth during demo sessions. This real-time capability showed how teams can leverage AI tools, including virtual actors, while maintaining complete control over their production.

The strong response confirmed there's real demand for AI solutions that don't require constant cloud connectivity.

As AI becomes standard in content production, the question isn't whether to adopt it, but how to implement it on your terms.

Cheers, /gizmo | 2025-09-18

| CREDITS |
@gizmo64k & @zebrapunk: AI Models

gizmo64k vs The Dor Brothers

When Good Artists Copy, What Do Great Artists Do?

I wrote 'Indifference' (https://www.youtube.com/watch?v=P7TEk6yieJs) after losing someone I cared about. A trans friend who couldn't take it anymore. The aggressive culture around us, the society that drove her to that point - I poured all of that raw grief into this song. It was my deeply emotional outcry about loss, about watching someone you love get crushed by a world that refuses to give her a fair chance.

Then the Dor Brothers showed up with their 270k followers and...well, let's assume they got inspired ... a lot. (https://www.youtube.com/watch?v=o7kSskAs_9k)

But they didn't simply copy-paste my aesthetic - they appear to have understood the deeper themes well enough to incorporate elements that parallel my lyrics on several occasions, and they enhanced it with Matrix references! It's a well-known fact that the Wachowski sisters made The Matrix as a trans allegory. So it seems they grasped exactly what they were adapting.

It appears they took my work about loss, grief, and anger at the society that killed my friend - and adapted it to make it... marketable? Palatable? More widely appealing?

But that's not why I'm writing this. I'm not angry about artistic appropriation itself!

I reached out to them directly on August 7, 2025, seeking a constructive solution for what I see as the real issue here... Sadly, I got no response.

See, there's a difference between appropriation and abandonment. When you transform other artists' words, their trauma, their loss, their pain and their visual language, I believe you inherit responsibility for what that work was trying to accomplish. You cannot just take the aesthetic and the themes - you have to take the responsibility, too!

My video had a donation link for Hamburg's #4Be - a trans support organization. Real IBAN details. Real help for real people. People with a 40% suicide attempt rate. Their video? Information about how promising the filmmaking future looks... and their name. No charity. No call to action.

The Dor Brothers version had 88 times more views - over 50,000 people saw it. Even small donations from that viewership could have made a huge difference for #4Be's capacity to help people in crisis. But what concerns me most is that their version becomes the 'official' version algorithmically. When you adapt the aesthetics and themes but leave behind the philanthropic mission, artistic appropriation can become activism suppression.

That's what this is really about: when you steal as a great artist, steal the mission, too!

If you read this and you support the mission that got left behind, here's how you can help.. Donate to:

Förderverein Therapiehilfe e.V.
IBAN: DE56 2005 0550 1235 1225 28
BIC: HASPDEHHXXX
Reference: '4Be'
More info: https://www.therapiehilfe.de/standorte/4be-transsuchthilfe/

Note: I am not affiliated with this organization and receive no financial benefit from this fundraising appeal. I'm simply sharing this information to support their important work. /gizmo | 2025-08-12

gizmo64k vs Dor Brothers comparison

gizmo64k - Chain of Command

Music video

Chain of Command music video

A 6-minute dark comedy / music video that only rewards those willing to watch until the end. Some messages can't be rushed.

'Chain of Command' features ANN.e confronting three billionaires in an underground bunker - a musical exploration of corporate power, wealth inequality, tech monopolies, labor exploitation, and environmental destruction. The video examines themes of accountability, moral responsibility, and the true cost of extreme wealth concentration.

Production Note: This video faced significant challenges during creation, as virtually all AI image and video generators attempted to censor or refuse collaboration on the content. Even in the creative process, I encountered the very dynamic my song critiques - automated systems protecting established power structures from uncomfortable artistic examination. The difficulty in simply making art that questions authority became its own form of evidence.

Inspired by @renmakesmusic's refusal to let algorithms decide what art should be - showing that authentic expression will always find its audience. This video wouldn't exist without him!

Cheers, /gizmo | 2025-08-10

| CREDITS |
@gizmo64k: written & directed

gizmo64k - Jessie & The Good Guys

Music video

I Used AI to Make a Music Video About Something We All Avoid Talking About

I just released a music video ('Jessie & The Good Guys') that I created almost exclusively using AI tools, and honestly, it wouldn't exist without these tools.

The song is about something most of us don't want to face: how we've gotten really good at ignoring terrible things happening around us while still thinking we're decent people. You know that feeling when you see something awful on the news and just... scroll past? That's what this is about.

Creating this involved a lot more than just a bunch of prompts. I used three custom AI tools I developed myself to achieve the level of consistency you see. The final result is something that literally couldn't exist without genAI acting as a renderer. So no one-stop prompt→result, as many suspect: this was done using layers and layers of constructed/filmed/painted material, iterated over time and time again, slowly moving in the right direction, adjusting, backtracking and redirecting, both artistically and technically, to get each shot right. The whole process took months.

And naturally, the video features my alter-ego stand-in, the bunny-masked man in a suit, and all of his friends and family. They look harmless and kind of silly, but their faces are hidden. That's the whole point. A shout-out to The Dor Brothers, I see what you did there! ;)

What struck me during this process once again is how AI became a creative medium in its own right, not a replacement for creativity. Getting consistent results required understanding both the technical capabilities and limitations of these tools, then working within and around them to achieve the vision. It is a lot more work than most people think.

I'm fully aware that generative AI is facing significant backlash right now, and the ethical weight of using these models didn't make this decision easy for me. But sometimes the message feels urgent enough to push through those concerns.

The song talks about how we've normalized things that should shock us. How we've learned to call homelessness a 'choice' and treat some people as disposable. How we follow systems that hurt people while telling ourselves we're just being practical.

AI didn't make this easy, but it made it possible. There's a huge difference between those two things.

Maybe that's what we need right now: more people pushing the boundaries of what's possible with emerging tools to hold up mirrors to society, even when (especially when) we don't like what we see. /gizmo | 2025-07-16

| CREDITS |
@gizmo64k: written & directed


gizmo64k - Indifference

Music video #TransLivesMatter #LGBTQ

Indifference music video

I wrote this song from a place of rage and heartbreak after losing a dear friend who was failed by our apathetic society. Every line reflects the pain of watching someone I loved being dehumanized and abandoned. The metaphor of privilege as a knife emerged from witnessing how unaware people were of the harm they caused her. This track is my emotional outcry against the indifference that I believe took her life and continues to threaten LGBTQ+ lives everywhere. I created this as both tribute and call to action. I hope it makes you feel what I felt, reflect on your own privilege, and maybe even act.

Support trans lives with your donation:
#4Be #TransSuchtHilfe offers crucial counseling and support services for trans, non-binary and gender-diverse people in Hamburg, helping them navigate addiction issues, mental health challenges, and transition processes. Your support can make a real difference in saving lives!

Donate to:
Förderverein Therapiehilfe e.V.
IBAN: DE56 2005 0550 1235 1225 28
BIC: HASPDEHHXXX
Reference: 4Be

Note: I am not affiliated with this organization and receive no financial benefit from this fundraising appeal. I'm simply sharing this information to support their important work! /gizmo | 2025-05-08

| CREDITS |
@gizmo64k: written & directed. Created using: animate7, indiepixel.studio, comfyUI, lumalabs, runway, krita, suno, bitwig, davinci resolve

Painted Reality

Drawing -> photorealism -> video

In this weekend's session I was using ComfyUI, SD1.5, Flux Schnell, LTXV and a few snippets created in Pika to turn a couple of illustrations into realistic video footage - and what can I say..

...with a bit of effort the sky is the limit, really.

Have a great week, y'all! /gizmo | 2025-01-26

Painted Reality demo video

gizmo64k - Meister Lampe

Music video

Meister Lampe, music video by gizmo64k

Thanks to Lightricks' incredibly fast and lightweight LTXV 2b video model it is now possible to produce fully fledged music videos without being dependent on any of the big SaaS players - 100% indie & created completely locally on mid-range gaming hardware (e.g. any PC with an 8 GB RTX 3070 Ti or better).

Here is my proof of concept: “Meister Lampe”, created exclusively using LTXV 0.9.1 via img2video in ComfyUI.. /gizmo | 2025-01-15

| CREDITS |
@gizmo64k: written & directed

gizmo64k - Das Privileg

Music video

Das Privileg, music video by gizmo64k

I wrote a new song about something that's been on my mind for quite a while - the weird phenomenon that most people born with privilege [of any kind] are blind to the fact. And I'm no exception. Seems like only when we are stripped of said privilege do we become truly aware. I wonder what the world would look like if this were different.

Lyrics are in German, here is the rough translation:
When you've got it, you're unable to grasp,
When you've got it, it's not a problem you have,
It'll only reveal itself in this one way, not the other:
When it doesn't come to you - the privilege.

When you've got it, you're unable to see,
When you've got it, you are part of the problem,
It'll only reveal itself in this one way, not the other:
When it doesn't come to you - the privilege.

/gizmo | 2024-12-20

| CREDITS |
@gizmo64k: written & directed

Animate7 - Retouch, Paint & Layer Update

preview/wip of our new photo editor with layers, brushes and seamless animator integration

This is a big step up for Animate7 - our in-house inference and animation tool.

We've developed a Photoshop-/Krita-like editor and inpainting system with layers, brushes, and all the tools you'd expect from a professional image editor, built from the ground up to work seamlessly with the animator, giving us complete control over the final output within one environment. No more switching between different applications or dealing with clunky workflows.

We can paint, edit, animate, and fine-tune everything in one streamlined interface designed specifically for this purpose.

This brings us much closer to our vision: ultimate control over every aspect of AI-generated content. /gizmo | 2024-12-12

| CREDITS |
@gizmo64k: development, interface design & integration

Animate7 photo editor and animator integration demo

gizmo64k - Dämonen

Music video

gizmo64k - Dämonen music video

I needed a break from tool development, so I wrote a new song and made another experimental music video over the weekend. This one has got some strong 90s vibes and features German lyrics, an umlaut and a bunch of harmless, friendly demons. /gizmo | 2024-12-10

| CREDITS |
@gizmo64k: written & directed | Visuals created with: Animate7 (AI CREW), ComfyUI, Krita, Luma Dream Machine, Infinity.ai - Audio: Suno, Bitwig Studio - Video edit: Davinci Resolve |

Animate7

Work in progress preview of our inhouse tool.

Hey everyone, here's a quick update video on our current status with ANIMATE7. We switched from a single-user desktop app to a backend/frontend multi-user web application.

A lot has happened under the hood. And there are new features, too:
- almost 10x performance increase
- group keyframes into clips; clips are footage-agnostic and modular (reuse them everywhere)
- sequence / video editing (overlay keyframe animation over existing material)
- render and preview mp4 videos with a single click of a button
- asset management! Handle millions of assets with ease
- AI search through all your images, videos, PDFs, cbz/cbr - no tags required; e.g. you can search and find a scene in a movie by describing it (essentially reverse prompting - see the sketch below)
- adjustable image-based similarity search & context-based 2D navigation of the whole asset library.. literally a similarity map you can explore interactively!
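Animate7 is our in-house tool and its code isn't public, but the reverse-prompting search rests on embedding images and text into one shared space. A minimal sketch of that idea, assuming the sentence-transformers CLIP wrapper and a hypothetical assets folder:

```python
# Minimal sketch of text-to-image 'reverse prompting' search.
# Animate7's own implementation is not public; this is only the core idea.
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # maps text & images into one space

paths = sorted(Path("assets").glob("*.jpg"))  # hypothetical asset folder
img_emb = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

def search(description, top_k=5):
    """Find images by describing them - no tags required."""
    query = model.encode(description, convert_to_tensor=True)
    hits = util.semantic_search(query, img_emb, top_k=top_k)[0]
    return [(paths[h["corpus_id"]], h["score"]) for h in hits]

print(search("a night street scene with neon lights"))
```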

Have a great day! /gizmo | 2024-11-14

| CREDITS |
gizmo64k & @zebrapunk: development, interface design & integration

Animate7 work in progress video

Realtime AI Puppeteering

Live webcam to virtual character transformation - 5-10fps at 1920x1080 with 30-40ms delay

live webcam to AI character transformation with StreamDeck control
AI puppeteering with parameter override controls

A month of intense work implementing LCM and optimizing my SD1.5 image2video pipeline pays off! Two major breakthroughs here:

First, live webcam capture processing where I appear as completely different people in real-time. With a StreamDeck, I can switch between characters instantly - currently hitting 5-10fps at full 1920x1080 with only 30-40ms delay on a single RTX 4090.

Second, granular parameter control that overrides input characteristics - e.g. I can force eyes open/closed, adjust gender expression, modify facial features, all with physical buttons, regardless of what the webcam sees.

The implications are staggering.. if I can push the framerate higher, this becomes a viable system for live virtual puppeteering on stage. Imagine actors performing as any character, morphing between identities in real-time, with full control over every aspect of their virtual appearance. We're talking about the future of live performance here! /gizmo | 2024-07-25
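My pipeline is custom, so here is only a sketch of the core LCM trick using the public diffusers API - the model IDs, resolution, and simple capture loop are assumptions for illustration, not my actual setup:

```python
# Sketch of an LCM-accelerated SD1.5 img2img loop over webcam frames.
# Model IDs and the capture loop are illustrative, not my actual pipeline.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image, LCMScheduler

pipe = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")  # LCM-LoRA: ~4 steps

cap = cv2.VideoCapture(0)
prompt = "portrait of a sci-fi heroine, studio light"  # swapped via StreamDeck

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    src = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).resize((512, 512))
    out = pipe(prompt, image=src, num_inference_steps=4,
               strength=0.5, guidance_scale=1.0).images[0]
    cv2.imshow("puppet", cv2.cvtColor(np.array(out), cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) == 27:  # ESC quits
        break
cap.release()
```

The whole point of LCM here is that 4 denoising steps at guidance 1.0 replace the usual 20-50, which is what makes per-frame inference fast enough to chase webcam input.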

| CREDITS |
@gizmo64k: concept, pipeline optimization, LCM integration & AI CREW development | Generated using AI CREW with LCM (Stable Diffusion 1.5)

Live Audio to Image Daydreaming

First live-performance-ready AI system - endless visual morphing to conversation mood

This is it - the first test that actually fulfills my original goal of creating something with AI that can be used live on stage!

The pipeline is beautifully simple yet complex:
Whisper generates a livestream of text from audio, Alpaca creates prompts from that text every 3 seconds, then my latent space wandering morphs to those new destinations within half a second. The result? An endless image generator that picks up the mood of conversations and morphs into visual representations in real-time. Every 3 seconds, it's somewhere completely different, yet the transitions are smooth and intentional. I'm outputting a constant 1920x1080 NDI stream, making it ready for live broadcast or stage projection.
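My AI CREW renderer is custom, so the sketch below only shows the transcription-to-prompt half of the chain, with a placeholder standing in for the Alpaca rewriting step and the audio source assumed:

```python
# Sketch of the audio -> text -> prompt chain. The Alpaca step and my
# AI CREW morphing renderer are stand-ins/placeholders here.
import time
import whisper  # openai-whisper

stt = whisper.load_model("base")

def text_to_prompt(text):
    # Placeholder for the Alpaca step: a local LLM rewrites the raw
    # transcript into an image prompt every 3 seconds.
    return f"atmospheric scene, {text.strip()}, cinematic lighting"

while True:
    # Assumes a recorder keeps writing the last few seconds of audio here.
    text = stt.transcribe("last_3_seconds.wav")["text"]
    destination = text_to_prompt(text)
    # The latent-space wanderer then morphs toward this prompt's embedding
    # within half a second and streams frames out via NDI.
    print("new destination:", destination)
    time.sleep(3)
```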

It's like having a visual AI that dreams along with whatever's happening in the room.

This opens up completely new possibilities for live performances, installations, and interactive experiences. /gizmo | 2024-06-28

| CREDITS |
@gizmo64k: concept, pipeline development & AI CREW integration | Whisper: speech-to-text | Alpaca: prompt generation | AI CREW: visual generation (Stable Diffusion 1.5)

realtime audio to image daydreaming test 1
realtime audio to image daydreaming test 2

Character Consistency Tests

Can latent wandering animate faces without changing identity?

Work-in-progress, but the results look promising! I'm testing whether my latent wandering technique can handle facial animations (e.g. rotating heads, changing expressions, different lighting ..) while keeping the same character identity throughout. This is one of the holy grails of AI animation: maintaining character consistency across multiple frames and angles.

The challenge is that small movements in latent space can completely morph a face into someone else entirely. But with careful keyframe selection and controlled interpolation paths, these first tests suggest it might actually be possible to animate specific characters without them morphing into random people. Still early days, but if this works reliably, it opens up completely new possibilities for AI-generated narrative content. /gizmo | 2024-06-27

| CREDITS |
@gizmo64k: character design, animation testing & AI CREW development | Generated using AI CREW (Stable Diffusion 1.5)

 
soldier character consistency test in forest setting
ANN.E character consistency test in night street setting

Retro Remedy FMV

Early 90s Amiga FMV aesthetics + first lipsync experiments

Retro Remedy FMV - Amiga-style full motion video

This one's a direct follow-up to my pixel art experiments, but this time I'm going for that classic early 90s Amiga FMV (full motion video) look. You know, those grainy, low-res, gloriously chunky (to planar) video sequences that made us feel like we were living in the future back in the day! The challenge here is capturing that specific aesthetic - the compression artifacts, the limited color palette - that somehow made everything feel more cinematic.

But here's the real breakthrough: for the first time, I attempt some rudimentary lipsync! It's basic, but it's a start. The whole thing celebrates retro gaming culture and how these 'relics from yesteryear' become our shield against modern digital chaos. Sometimes you need to go backwards to move forward. /gizmo | 2024-06-24
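A side note for tinkerers: the chunky, low-color half of that aesthetic can be approximated with nothing more than downscaling plus palette quantization - a tiny Pillow sketch, filenames assumed:

```python
# Tiny sketch of the chunky, low-color early-90s FMV look using Pillow.
from PIL import Image

frame = Image.open("frame.png").convert("RGB")   # hypothetical input frame
small = frame.resize((160, 100))                 # low-res 'FMV' grid
indexed = small.quantize(colors=32, dither=Image.Dither.FLOYDSTEINBERG)
chunky = indexed.convert("RGB").resize((640, 400), Image.NEAREST)  # fat pixels
chunky.save("frame_amiga.png")
```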

| CREDITS |
@gizmo64k: lyrics, concept, visuals, direction & AI CREW development | ANN.E (Suno): voice samples & beats | Generated using AI CREW (Stable Diffusion 1.5)

Time Blind

First stable backgrounds - no more wobbling!

This one's personal - about being time blind and how it affects every aspect of life. But here's the interesting technical part of the video: in some of the takes, everything that's NOT supposed to move actually REMAINS stable. No more wobbling backgrounds! This marks a huge leap forward in my pipeline. But watch for yourself.. The entire 2-minute video - from concept to final render - was created in under 2 days using a single RTX 4090, completely offline using my SD 1.5 toolchain. No Runway, no Sora, no big tech services - 100% indie & local with near-full control. Sometimes the most personal projects push the technology the furthest. /gizmo | 2024-05-23

| CREDITS |
@gizmo64k: concept, lyrics, direction, tool-chain development & rendering | suno.ai: voice samples & beats | Generated using AI CREW (Stable Diffusion 1.5)

Shoutout to Russell Barkley, PhD whose work has been a big help and served as an inspiration for this video - Thank you!

Time Blind - AI music video with stable backgrounds

Latent Space Wandering Between Keyframes

I've been working on a system that gives me much more granular control over the generation process. The basic idea is to treat the latent space as a navigable multidimensional landscape where I can define keyframes and let the system smoothly interpolate between these positions. I can now create animations that maintain temporal coherence while still allowing for creative drift and evolution. These flower sequences demonstrate the technique - each video starts from a defined keyframe and wanders through semantically related regions of latent space, creating organic transitions that feel both controlled and spontaneous. /gizmo | 2024-05-05
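For reference, the interpolation at the heart of this kind of wandering is usually spherical rather than linear, so the in-between latents stay on a plausible shell of the Gaussian prior instead of cutting through its hollow center. A minimal sketch (my actual system adds keyframe scheduling and drift on top):

```python
# Core idea of latent-space wandering: spherical interpolation (slerp)
# between two latent keyframes.
import torch

def slerp(t, a, b):
    a_n = a / a.norm()
    b_n = b / b.norm()
    omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))
    if omega.abs() < 1e-4:  # nearly parallel: fall back to plain lerp
        return (1.0 - t) * a + t * b
    so = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

key_a = torch.randn(1, 4, 64, 64)  # keyframe latents (SD1.5-shaped)
key_b = torch.randn(1, 4, 64, 64)
frames = [slerp(i / 29, key_a, key_b) for i in range(30)]  # 30 in-betweens
```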

| CREDITS |
All experiments, coding & post: Chris (@gizmo64k)

 
latent space wandering flowers sequence 1
latent space wandering flowers sequence 2
latent space wandering flowers sequence 3

S-P-i-C-Y V-ill-Ai-N-S

First complete AI-generated music video - a defining breakthrough moment

This marks a massive milestone - my first complete music video using my toolchain as the final renderer for the visuals and suno-generated samples for the audio track. The concept of 'posh foodie gangster rap' (shout-out to @copyboy7!) gives me the perfect playground to test my AI CREW toolkit based on Stable Diffusion 1.5. Here's the thing: I maintain full creative control over every aspect - the composition, timing, narrative flow - while AI handles the rendering. It's not about replacing the artist, it's about having the most incredible renderer imaginable! The whole pipeline is human-directed but AI-rendered, and oh boy, what a renderer it is! I think we've crossed a threshold: image generation as a legitimate creative tool for animation! /gizmo | 2024-05-17

| CREDITS |
@gizmo64k: bars/lyrics, arrangement, concept art, set & fashion design, direction, lighting, camera, editing and tool-chain development | suno.ai: voice samples & beats | Concept inspiration: @copyboy7 (@BeatsandEatsBro)

S-P-i-C-Y V-ill-Ai-N-S complete AI music video

AI generated pixel art

Yet another frontier for Stable Diffusion 1.5

Even though Stable Diffusion 1.5 has proven highly effective at producing high-quality images, I didn't think it could be a viable tool to produce intricate pixel art - until I tried it out. I think it is safe to say that with the right approach, artists can leverage SD 1.5 to create stunning pixel art works. /gizmo | 2023-11-26

Check out the full gallery here

teahouse 008

Background fix

...and now the background is under control, too! This turned out better than I had hoped for! Can't wait to finish the whole video! /gizmo | 2023-10-29

| CREDITS |
Skyler (aka DJ Stosslüften): performing artist | Chris (@gizmo64k): camera, vfx & post.

 
retouched ape video

Stable Diffusion finally stable!

...this is it! We managed to get Stable Diffusion stable enough to do animations! Judge for yourself, check out the before and after videos! Admittedly, there's still some jitter left - mainly in the background, so it is not perfect yet. But still, this increase in visual quality is really motivating! /gizmo | 2023-10-15

| CREDITS |
Skyler (aka DJ Stosslüften): performing artist | Chris (@gizmo64k): camera, vfx & post.

 
retouched ape video (before)
retouched ape video (after)

image -> text -> json

Using AI to index thousands of old photos and make them searchable

One of the benefits of deep diving into topics like generative AI is the random happy insights one gets. For example, that Stable Diffusion isn't a single model but consists of multiple components: a variational autoencoder, forward/reverse diffusion, a noise predictor, and of course a module for text conditioning, which embeds text and images into the same (latent) space and is thus the key element for text-to-image generation. Basically, a word can trigger an image. And this works in the other direction, too! Long story short: I wrote a tool that scans my old photo archive and generates a brief description for each image using a locally hosted AI - no need to upload hundreds of gigabytes to a random big tech company with questionable privacy practices - neat! /gizmo | 2023-09-13
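My indexer uses a locally hosted model; as an illustration of the same image → text → JSON flow, here is a minimal sketch with BLIP standing in as the captioner and the archive path assumed:

```python
# Sketch of the archive indexer: local captioning model -> JSON index.
# BLIP stands in here for whichever local captioner you run.
import json
from pathlib import Path
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

index = {}
for path in Path("photo_archive").rglob("*.jpg"):   # hypothetical archive root
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    index[str(path)] = processor.decode(out[0], skip_special_tokens=True)

Path("index.json").write_text(json.dumps(index, indent=2))
```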

img2text2json

Finally getting rid of Automatic1111 et al. ...

...by coding my own Stable Diffusion inference toolkit. This way I have full control over the generation process and will be able to implement and test my own image generation concepts much more easily. Currently I use Python for the back end and pygame for the front end, but this will probably change once the toolkit becomes more interactive. /gizmo | 2023-09-11
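For the curious, here is a minimal sketch of what such an inference loop involves - the bare SD1.5 denoising loop assembled from its components (shown with diffusers classes for brevity, classifier-free guidance omitted; my toolkit implements this itself):

```python
# Bare-bones SD1.5 text-to-image loop from components (no pipeline class).
import torch
from diffusers import AutoencoderKL, UNet2DConditionModel, DDIMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

repo = "runwayml/stable-diffusion-v1-5"  # assumed model id
tok = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
enc = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
unet = UNet2DConditionModel.from_pretrained(repo, subfolder="unet")
vae = AutoencoderKL.from_pretrained(repo, subfolder="vae")
sched = DDIMScheduler.from_pretrained(repo, subfolder="scheduler")

ids = tok(["a teahouse in pixel art style"], padding="max_length",
          max_length=tok.model_max_length, return_tensors="pt").input_ids

sched.set_timesteps(30)
lat = torch.randn(1, 4, 64, 64) * sched.init_noise_sigma
with torch.no_grad():
    cond = enc(ids).last_hidden_state
    for t in sched.timesteps:
        noise = unet(sched.scale_model_input(lat, t), t,
                     encoder_hidden_states=cond).sample
        lat = sched.step(noise, t, lat).prev_sample  # one denoising step
    img = vae.decode(lat / vae.config.scaling_factor).sample  # to pixel space
```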

stable diffusion inference screenshot

 
