retracktor dev log

It's been a while since I was able to work on Pro/Retracktor. The time since my retreat to Sweden was littered with unexpected chores, sickness and client work that kept me from ever looking at the Protracktor source. Today I was able to finally take a look at it again and figure out where I stand and what to do next.

And while I was reading code and listening to some mods, I realised that I never tackled stereo separation. The original Protracker on the AMIGA had 100% stereo separation because it was based on the four digital hardware channels, two of which pan hard right and two hard left. This is often not exactly what you want (and Retracktor will definitely have free panning per channel), so most player routines allow you to gradually adjust stereo separation from zero to 100%.
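
The usual way to do this is to cross-mix a bit of each hard-panned channel into the other side. A minimal sketch (function and parameter names are mine, not Retracktor's API): at full separation nothing is mixed over, and at zero separation both outputs collapse to the same mono mix.

```rust
/// Blend one hard-panned stereo frame toward mono.
/// `separation` = 1.0 keeps the Amiga's hard panning, 0.0 gives mono.
/// (Sketch only; names are illustrative, not the actual player code.)
fn apply_separation(left: f32, right: f32, separation: f32) -> (f32, f32) {
    // Amount of the opposite channel to mix in: 0.0 at full separation,
    // 0.5 (an equal mono mix) at zero separation.
    let mix = (1.0 - separation) * 0.5;
    (
        left * (1.0 - mix) + right * mix,
        right * (1.0 - mix) + left * mix,
    )
}
```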

It took only a tiny refactoring of the player code to enable stereo separation, and I got it done in around 30 minutes, which is nice because it shows me that I haven't forgotten everything about this code base.

If I get to more stuff in the near future, I'll run some experiments on how to structure UI code with SDL so that I can start building a simple visualiser for the modules as a first step toward a full-fledged Protracker editor.

I'm really happy to be back on this. Here's to hoping I will find a bit more time for it.

It's been a while since my last entry in the devlog and there are reasons for that but none of them are worth talking about. For the last 1 ½ weeks I have been in my refuge in Sweden and while I have other duties, some chunk of time went into making Protracktor a reality.

I'm now at a point where the play routine seems to play most modules almost correctly. What I need to do now is take my original TinyMod port to JavaScript (or rather: CoffeeScript) and make it run again in some form. Technically it would probably need a port to use an AudioWorklet for the rendering, but that would be another one of these distractions I seem to run into. The reason I want to get that working is to compare a couple of things – I've noticed some interesting differences in playback between MilkyTracker and my routine that hint at subtle bugs (maybe) in TinyMod or just in my port's handling of channel volumes, and while that is merely bad for a player, it would be unacceptable for an actual module editor.

The next step would be to refactor the whole thing a little so that it works better with Rust's borrowing. The majority of my time getting the port to work was spent fighting the borrow checker and figuring out how to restructure the code in a way that reduces the number of references I need to throw around. The original code is C++, and since everything runs in a single thread, it doesn't particularly care if references (or pointers) are thrown around and held in several places at once. My idea here is to dissolve the class-based structure as much as possible and clearly divide the data between actual module data (which needs to be read-only for the player code so that I can modify it in the editor) and runtime player state (of which there is actually a lot).
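
The shape I have in mind could look something like this (all names are illustrative, not the actual Retracktor types): the player only ever takes a shared borrow of the module, so an editor owning the module can hand out fresh borrows between ticks without conflicting with the player's mutable state.

```rust
struct Module {
    // read-only song data: patterns, samples, order list, ...
    initial_speed: u8,
}

struct PlayerState {
    // mutable runtime state: position, tick counter, per-channel
    // volumes and periods, ...
    speed: u8,
    tick: u32,
}

impl PlayerState {
    fn new(module: &Module) -> Self {
        PlayerState { speed: module.initial_speed, tick: 0 }
    }

    // One player tick: reads the module, mutates only its own state.
    // The editor can hold &mut Module between ticks without a borrow conflict.
    fn step(&mut self, _module: &Module) {
        self.tick += 1;
    }
}
```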

The other thing I've been dealing with is that the original code for the most part uses signed integers for everything (except for the places that need floating point math) and you can't really do that in Rust when you're trying to index into arrays or Vecs. I've been trying to replace signed integers with unsigned ones but quickly ran into a ton of issues where stuff needed to be signed to work properly.

On the way I discovered that I completely misunderstood what the “as” keyword in Rust does. I expected it to convert integer values, but for integer-to-integer casts it just reinterprets (or truncates) the bit pattern. This makes sense, as “as” is supposed to never panic, and you can't really usefully cast a negative integer into an unsigned integer. To give you an example, let unsigned: u8 = 255; let signed = unsigned as i8; will end up with signed being -1, as that's represented by the same bits as 255 in an unsigned byte.

This is actually useful in places but means I need to be really careful to do the correct thing. There are ways to do actual value conversion, but they, obviously, can fail and so need error handling.
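
Both behaviours side by side, as I understand them: “as” silently reinterprets, while the standard library's TryFrom does the checked conversion that fails on out-of-range values.

```rust
fn main() {
    // `as` between integer types reinterprets/truncates the bit pattern
    // and never panics:
    let unsigned: u8 = 255;
    let signed = unsigned as i8; // bit pattern 0xFF, read as two's complement
    assert_eq!(signed, -1);

    // Checked conversion via TryFrom fails on out-of-range values instead:
    assert!(i8::try_from(255u8).is_err());
    assert_eq!(i8::try_from(100u8), Ok(100));

    // Handy when indexing: a possibly-negative offset only becomes a
    // usize if it actually is non-negative.
    let offset: i32 = -4;
    assert!(usize::try_from(offset).is_err());
}
```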

Combine this with C/C++'s lax way of handling array access (the original C++ code does a lot of pointer math, something that for the most part needs careful rewriting with slices in Rust) and there are some real challenges for me to clean up the code.
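
For illustration (this is not the actual loader code), the typical translation of pointer math is a subslice plus checked indexing:

```rust
// C-style pointer math like `int16_t *p = base + offset; v = p[i];`
// becomes a subslice plus bounds-checked indexing in Rust.
fn sample_at(data: &[i16], offset: usize, i: usize) -> Option<i16> {
    // `get` returns None instead of reading out of bounds, surfacing
    // the off-by-one bugs the C++ version silently tolerated.
    data.get(offset..)?.get(i).copied()
}
```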

Then I'll try to extract the play routine up to a point that I can use it in both a simple CLI player (which currently is what this is) and the actual Protracktor editor.

I haven't made any progress on the UI side of things, but I really would love to start with something simple that at least visualises the module structure in real time and maybe try to add some scope graphics if possible as well.

It was really an incredible moment yesterday evening when I figured out the final bug in the code that messed up the sound and heard a clear sound for the first time. Turns out I did some math wrong in the loader and the sample offsets were all messed up. The module file format is such a weird relic from the past where binary files were actually, you know, binary files, instead of the usual zip archive with a bunch of easy to parse files in them.

I hope, now that I have something somewhat substantial to show, that I can quickly open up my development process and start all the stuff I planned like making videos and maybe even streaming my coding sessions. (He writes, typing this into the void of an unknown blog on the intarweb)

Due to a quite sticky cold and some other responsibilities progress on Retracktor has been a bit slower than before. I did spend some more time learning rust, so it wasn't completely lost.

One thing I realised is that coming up with a tracker engine from scratch is probably a bit of a large ask. I have rough ideas about what I want the engine to do, but it is a lot. One piece of code that I come back to regularly to figure out how to structure the code is ct2, which is a web based partial replica of Protracker, the famous AMIGA tracker from the 1990s. It is based on TinyMod by legendary kebby/farbrausch.

a screenshot of Protracker 2.3e

So here's my new plan: re-implementing TinyMod (or, rather, the ct2 player code) in Rust and then gradually adding UI to it to turn it into a game-controller-controlled Protracker clone. This is a much smaller target than what I have in mind for Retracktor and should be easy-ish in terms of the sound code and doable in terms of the UI in a relatively short time frame.

It gives me enough surface to really dig deep into how Rust works and how to build something like this in Rust. It also is enough UI to learn a lot about how a controller based interface could work without directly copying LSDJ/M8.

A usable Protracker that works on any retro handheld would also be a really nice thing in itself.

I'll have some time next week and will try to get cracking with this.

Anbernic RG Nano handheld console (in pinkish red) next to a Micro SD card to show how tiny it is – it's about 4 cm wide and 7 cm long

OMG that thing is tiny.

It has a 240x240 screen and technically should make a good lower boundary for devices Retracktor could run on. It only has a single-core Cortex-A7 CPU, so maybe it is actually a bit too limited, but that remains to be seen. I could imagine a world where I feature-flag out parts of the engine to allow it to demand more or less CPU power.

I actually have to test sound output on the device – it does not come with a dedicated mini jack for headphones, but it does come with a USB-C-to-minijack adapter. The sound is supposed to be quite good, so let's see.

Unfortunately, LSDJ doesn't quite work on it (yet) as the navigation shortcuts are taken over by hotkeys for the emulator. I wonder if one could deactivate these for specific roms via config files, but who knows. EDIT: Actually, this was easy to solve with a keymap that simply unbinds a few key combinations:


Over the past two days I have (unsuccessfully) tried to compile a simple Android app with a rust based SDL core to check out how easy it would be to build an Android app.

Turns out, not very easy. There are some good tutorials and it seems possible in theory, but I failed to compile both SDL_image and SDL_ttf for Android, because I am not very familiar with the Android NDK. It is doable – all these libraries have Android-specific build scripts – but everything I tried failed.

I'm sure I'll get this done eventually but it is clear to me that I shouldn't have wasted so much time (and, frankly, energy) on this and should have instead focused on the other stuff.

I did, however, manage to get started with a simple text system based on pixel fonts (basically, a PNG sprite sheet at this point) and found a lovely TTF rendition of the original Protracker font that allowed me to quickly whip up the PNG. The reason I am thinking about using a pixel font is that SDL_ttf is not exactly easy to use anyway, and simply keeping one texture around (I could, of course, generate that on the fly with SDL_ttf) and blitting small images into a canvas seems much more efficient for now.

I also played around with scaling and it seems like a 1x pixel font will give quite crisp text when scaled up with canvas.copy() (SDL_RenderCopy in original SDL parlance), so this should work great for building a simple, mostly text based UI.
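
The core of such a text system is just mapping a character to its source rectangle in the sheet. A sketch under assumed layout parameters (8x8 glyphs, 32 per row, starting at ASCII space – not the real Retracktor atlas):

```rust
const GLYPH_W: u32 = 8;
const GLYPH_H: u32 = 8;
const GLYPHS_PER_ROW: u32 = 32;
const FIRST: u32 = b' ' as u32;

/// Top-left (x, y) of a character's glyph inside the sprite sheet,
/// or None for anything outside the reduced charset.
fn glyph_src(c: char) -> Option<(u32, u32)> {
    let code = c as u32;
    if !(FIRST..FIRST + 96).contains(&code) {
        return None; // outside printable ASCII
    }
    let index = code - FIRST;
    Some((
        (index % GLYPHS_PER_ROW) * GLYPH_W,
        (index / GLYPHS_PER_ROW) * GLYPH_H,
    ))
}
// Per character this would feed something like sdl2's
//   canvas.copy(&font_tex, Rect::new(x as i32, y as i32, GLYPH_W, GLYPH_H), dst)
```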

Today I made a huge jump forward in that I set up a test project to play around with SDL's audio API and managed to get it to play a sample in floating point format with simple interpolation and at varying playback speeds. This is basically the first step towards a real tracker-ish proof of concept.

My main issue was that the audio converter built into SDL always emits a Vec<u8> regardless of the type you're converting to. It took me a good while and a plea for help on Mastodon (thanks for the quick help) to figure out how to do the conversion in a somewhat safe way.
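
One safe way to do this kind of reinterpretation without unsafe code (not necessarily the exact solution I ended up with, and assuming the buffer holds little-endian f32 samples) is to chunk the bytes and decode each group:

```rust
// Decode a little-endian byte buffer into f32 samples: chunk into
// 4-byte groups and decode each one with f32::from_le_bytes.
fn bytes_to_f32(bytes: &[u8]) -> Vec<f32> {
    bytes
        .chunks_exact(4) // any trailing partial chunk is dropped
        .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
        .collect()
}
```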

Now that I have that out of the way, the next step will be a tiny single-voice tracker implementation (without any FX support) with some hardcoded data. This will allow me to figure out the proper maths for playback speed. I looked up some formulas (especially helpful: this odd website), and I guess the main question will be whether it makes sense to store a lot of values as tables or if we can calculate the ratios on the fly.
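
The standard Amiga math, computed on the fly rather than from a table (which is exactly the trade-off in question): a Protracker note period maps to a channel sample rate via the PAL Paula clock, and the resampling step is that rate divided by the output rate.

```rust
// Commonly quoted PAL Amiga master clock value.
const PAL_CLOCK: f64 = 7_093_789.2;

/// Sample rate (Hz) at which a channel plays for a given note period.
fn period_to_hz(period: u16) -> f64 {
    PAL_CLOCK / (f64::from(period) * 2.0)
}

/// How far to advance the sample position per output frame.
fn step_per_frame(period: u16, output_rate: f64) -> f64 {
    period_to_hz(period) / output_rate
}
```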

Then, after that, I think I will try to work out how to properly encapsulate the tracker engine, both in terms of code organisation but also data structures.

In the meantime I'll continue to learn more about rust. Specifically, I think, multithreading and how to move data between threads will become quite important quickly.

In parallel, I'll continue to play around with SDL graphics to figure out how to best build the UI in a way so that it scales to different screen sizes and also different aspect ratios. I will try to not spend too much time on this and rather constrain myself to something that will work on most screens, but it would be really nice, for example, if we could use extra space on super wide displays (such as the Retroid devices).

One thing I also want to solve relatively quickly is figuring out if I can compile the current test projects for Android; given the proliferation of Android-based retro handhelds, this seems like a sensible idea.

Oh and lastly, I need to verify that the controller mappings on my target devices somewhat make sense.

I am currently making good progress, as I have a little bit more time right now for my own projects which is great. I hope to get most of the big boulders out of the way before the end of the summer when things will pick up on other projects again.

I played around a bit with the rust sdl2 API to figure out how to use TTF fonts.

I am not sure I want to actually use the TTF stuff, as it is quite unwieldy. Text rendering in SDL2 is definitely not great. I guess the only benefit of using TTF directly (in contrast to, say, a baked-in bitmap font with some home-grown text routines) is that I don't need to care about UTF-8 characters (for naming things, mostly) as they will just work.

On the other hand, if we assume that people will name things with a controller-based on-screen keyboard anyway, we can safely control the charset, and thus a custom font with a very reduced character set would not be a problem.

At this point I am also not worrying at all about i18n. For the foreseeable future, this will be English-only software.

I've been looking at a couple of things to get inspiration, most notably MilkyTracker which has an SDL based Linux port and comes with a completely home grown UI framework.

My guess right now is that text output (which will be the majority of stuff on the screen) is not something I particularly need to worry about as it will be plenty fast in even the most unoptimised version.

Nevertheless, finding good abstractions for all of this is going to be tricky, especially in a language that still feels as foreign as rust. One of my goals for this week is to learn me a whole bunch more of that rusty goodness.

What feels a bit frustrating, still, is that a lot of the code examples are based on older rust editions and getting them working in my setup often requires guesswork on my end. Hopefully that gets better with more experience and more actual rust knowledge.

This is a very small step, but an important one: I managed to get a simple SDL2 example program in rust to cross compile for arm64 and actually run it on an RG351V.

The Cross crate for rust is a bit ridiculous, but it got the job done (though of course not without me creating a custom Dockerfile to add SDL2 to the build image).
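
For anyone curious, the general shape of that setup (the image name is a placeholder, and this is a sketch, not my exact config) is a Cross.toml that points the target at a custom build image:

```toml
# Cross.toml – point the aarch64 target at a locally built image
# that has the SDL2 dev packages baked in (image name is a placeholder).
[target.aarch64-unknown-linux-gnu]
image = "retracktor-cross:aarch64"
```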

Even though the source says nothing about full screen, on the 351V the app opened up in full screen (probably a feature of their “window manager”) and did its thing (a simple color cycle).

Next steps will be to figure out how to detect the screen size and scale the window accordingly. I will figure out how to scale the UI later. And then add support for the controller so that I can somehow quit the app as well (as the ESC key is notably absent from retro consoles).
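
A simple approach to the scaling part (a sketch with assumed names; sdl2's current_display_mode() would supply the display size) is to pick the largest integer factor that fits a fixed base resolution, since integer scaling keeps a 1x pixel font crisp:

```rust
// Largest integer scale that fits base_w x base_h onto the display,
// clamped so we never go below 1x, even on tiny screens.
fn ui_scale(display_w: u32, display_h: u32, base_w: u32, base_h: u32) -> u32 {
    let scale = (display_w / base_w).min(display_h / base_h);
    scale.max(1)
}
```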

I like making music with pocketable devices. The first time this was somewhat feasible (for me) was on my Zire 71 (from one of the last generations of Palm devices), which ran a weird DAW-like thing called Bhajis Loops that was actually surprisingly powerful. It took a while for iPhones to get decent music software, but at some point it was possible to do some amazing stuff (I think my favourite was Korg Gadget, but unfortunately, that worked much better on an iPad). On Android, even in 2023 the situation isn't great, but at least there's FL Studio.

And then, of course, there are Gameboys and such. The original Gameboy runs both LSDJ, which has been around for ages and is a super powerful tool for making, well, Gameboy music, and Nanoloop, which is in itself a software/hardware with an amazing story. The Nintendo DS later got Korg DS-10, which was fun to play with, but is quite limiting.

And now, these two worlds have collided in the most beautiful, chaotic, and mostly illegal way in the form of the so-called retro consoles. Take a mobile phone SoC from a few generations back (because those are lying around in warehouses), do the same with a screen nobody wants anymore because it's too low resolution, slap some weird home-grown Linux on it with state-of-the-art emulators for everything from the NES to the Gamecube and beyond, add some game controller hardware, and you have something like the (now quite ancient) Anbernic RG351P.

In the grand scheme of things, these are actually quite powerful devices. They may be too weak to run high end games and such, but Audio software isn't actually that demanding, again, in the grand scheme of things. I do remember the times when running real time audio synthesis on an AMIGA was almost unthinkable.

And so this idea was born: what if I took the idea of LSDJ and modernised it so that it can use this awesome hardware to full effect?

“That's M8, mate” I hear you cry.

Thing is: I really like the concept of M8. I have had it in my hands and it is an awesome device. But, being the custom hardware that it is, it is awfully expensive.

Just to be clear: my goal is not to clone the M8 software and run it on cheap commodity hardware. I want to build something that can stand on its own. M8 has often been criticised for copying LSDJ so blatantly, and I think that's a fair point. I want to see if I can take the whole thing further. Expect capabilities similar to M8's, musically (at least that's my first benchmark), but expect a different approach. Expect something that takes inspiration from everything from Chris Huelsbeck's Sound Monitor, Protracker and Fasttracker to Renoise, Buzz, SunVox and, yes, LSDJ and M8 as well. Expect something ruthlessly optimised for modern game controllers.

I am at the very beginning of my ride here. But I wanted to try something: Why not do everything completely out in the open. I'll share my full process here. I'll write more about my motivation. I will write about my research. I will write about my design process. I will try to have conversations here with people who I trust to have useful opinions on what I try to achieve here.

My hope is that at a point not so far into the future, I'll end up with something that runs on a below 100€ device and allows you to actually make music. And then, at that point, I will find enough people that are willing to support me on this quest, so that I can keep the whole thing open but still have some sort of extra income through this. I know this is a risky bet and that retro consoles are not exactly the space where the big money is. But I'm willing to try.

Wish me luck.
