C64 Assembly In Parts

[Michal Sapka] wanted to learn a new skill, so he decided on the Commodore 64 assembly language. We didn’t say he wanted to learn a new skill that might land him a job. But we get it and even applaud it. Especially since he’s written a multi-part post about what he’s doing and how you can do it, too. So far, there are four parts, and we’d bet there are more to come.

The series starts with the obligatory “hello world,” as well as some basic setup steps. By part 2, you are learning about registers and numbers. Part 3 covers some instructions, and by part 4, he finds that there are even more registers to contend with.

Abusing DuckDB-WASM To Create Doom In SQL

These days you can run Doom on just about anything, and porting it to JavaScript is by now about as interesting as writing Snake in BASIC on one’s graphing calculator. In a twist, [Patrick Trainer] had the idea to use SQL instead of JS to do the heavy lifting of the Doom game loop. Backed by the WebAssembly version of the analytical DuckDB database software, a Doom-lite clone was coded that demonstrates the principle that anything in life can be captured in a spreadsheet or database application.

Rather than implementing the game world state in JavaScript objects, or drawing pixels to a Canvas/WebGL surface, this implementation models the entire world state in the database. To render the player’s view, the SQL VIEW feature is used to perform raytracing (in SQL, of course). Events, including player movement, are defined as SQL statements, and a bullet hitting a wall or an enemy results in the bullet (and possibly the enemy) getting DELETE-ed.
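
To get a feel for the ‘world state lives in the database’ idea, here is a minimal sketch using DuckDB’s C API rather than the WASM build the project runs on. The schema and statements are hypothetical stand-ins for illustration, not DuckDB-Doom’s actual SQL:

```c
// Hypothetical sketch: entities as rows, events as plain SQL statements.
#include <duckdb.h>

int main(void) {
    duckdb_database db;
    duckdb_connection con;
    if (duckdb_open(NULL, &db) == DuckDBError)   // NULL = in-memory database
        return 1;
    duckdb_connect(db, &con);

    // World state as tables: one row per entity, no JavaScript objects.
    duckdb_query(con, "CREATE TABLE player (x DOUBLE, y DOUBLE, angle DOUBLE)", NULL);
    duckdb_query(con, "CREATE TABLE enemies (id INT, x DOUBLE, y DOUBLE, hp INT)", NULL);
    duckdb_query(con, "CREATE TABLE bullets (id INT, x DOUBLE, y DOUBLE, dx DOUBLE, dy DOUBLE)", NULL);

    // An 'event' is just a SQL statement: one tick of bullet movement...
    duckdb_query(con, "UPDATE bullets SET x = x + dx, y = y + dy", NULL);

    // ...and a hit really is a DELETE.
    duckdb_query(con, "DELETE FROM enemies WHERE hp <= 0", NULL);
    duckdb_query(con, "DELETE FROM bullets WHERE x NOT BETWEEN 0 AND 63 OR y NOT BETWEEN 0 AND 63", NULL);

    duckdb_disconnect(&con);
    duckdb_close(&db);
    return 0;
}
```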

The role of JavaScript in this Doom clone is reduced to gluing the chunks of SQL together and handling sprite Z-buffer checks as well as keyboard input. The result is a glorious ASCII-based game of Doom which you can experience yourself with the DuckDB-Doom project on GitHub. While not very practical, it was absolutely educational, showing that not only is it fun to make domain-specific languages do things they were never designed for, but that you also get to learn a lot along the way.

Thanks to [Particlem] for the tip.

Porting COBOL Code And The Trouble With Ditching Domain Specific Languages

Whenever the topic of porting a codebase written in an ‘antiquated’ programming language like Fortran or COBOL is raised in popular media, very few people tend to object to the notion. After all, what could be better than ditching decades of crusty old code in a language that only your grandparents can remember as being relevant? Surely a clean and fresh rewrite in a modern language like Java, Rust, Python, Zig, or NodeJS will fix all ailments and make future maintenance a snap?

For anyone who has ever had to port a large codebase or deal with ‘legacy’ systems, the reflexive response to such announcements most likely ranges from a shake of the head to mad cackling as traumatic memories come flooding back. The old idiom of “if it ain’t broke, don’t fix it”, purportedly coined in 1977 by Bert Lance, captures a feeling that has been shared by countless individuals over millennia. Even worse, how can you ‘fix’ something if you do not even fully understand the problem?

In the case of languages like COBOL this is doubly true, as it is a domain-specific language (DSL). That puts it in a very different category from general purpose systems programming languages like the aforementioned ‘replacements’. Porting such a DSL codebase effectively means reimplementing all of COBOL’s functionality in the target language, which should strike any rational mind as a very poorly thought-out idea.

Learning Linux Kernel Modules Using COM Binary Support

Have you ever felt the urge to make your own private binary format for use in Linux? Perhaps you have looked at creating the smallest possible binary when compiling a project, and felt disgusted with how bloated the ELF format is? If you are like [Brian Raiter], then this has led you down many rabbit holes, with the conclusion being that flat binary formats are the way to go if you want sleek, streamlined binaries. One such format is COM, which many know from MS-DOS, but which was already around in the CP/M days. Here ‘flat’ means that the entire binary is loaded into RAM as-is, without any fuss or foreplay.

Although Linux does not (yet) support this binary format, the good news is that you can learn how to write kernel modules by implementing COM support for the Linux kernel. In the article, [Brian] takes us down this COM rabbit hole, which involves setting up a kernel module development environment and exploring how to implement a binary file format. This leads down paths that will be familiar to anyone who has looked at, e.g., how the Linux kernel handles the shebang (#!) and ‘misc’ binary formats.

On Windows, the kernel identifies a COM file by its extension, after which it hands the program 640 kB of memory and an interrupt table to play with. The kernel module does pretty much the same, which still involves a fair amount of code.
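
For a taste of where such a module starts, here is a heavily simplified sketch of hooking a new binary format into the kernel’s binfmt machinery; the names and the stubbed-out loader are placeholders for illustration, not [Brian]’s actual implementation:

```c
// Hypothetical skeleton of a binfmt-registering kernel module. A real
// loader would inspect the file, set up a flat memory mapping (COM
// programs fit in a single 64 kB segment), copy the binary in, and
// start execution at its first byte.
#include <linux/module.h>
#include <linux/binfmts.h>
#include <linux/errno.h>

static int com_load_binary(struct linux_binprm *bprm)
{
	/* Loader logic elided: returning -ENOEXEC tells the kernel to
	 * move on and try the next registered binary format handler. */
	return -ENOEXEC;
}

static struct linux_binfmt com_format = {
	.module      = THIS_MODULE,
	.load_binary = com_load_binary,
};

static int __init com_init(void)
{
	register_binfmt(&com_format);
	return 0;
}

static void __exit com_exit(void)
{
	unregister_binfmt(&com_format);
}

module_init(com_init);
module_exit(com_exit);
MODULE_LICENSE("GPL");
```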

Of course, this particular rabbit hole wasn’t deep enough yet, so the COM format was extended into the .♚ (Unicode U+265A) format, because this is 2025 and we have to use all those Unicode glyphs for something. This format extension allows for amazing things like automatically exiting after execution finishes (rather than crashing).

At the end of all these efforts we have not only learned how to write kernel modules and add new binary file formats to Linux, but have also learned to embrace the freedom of the rich Unicode glyph space rather than remain confined by ASCII. All of which is perfectly fine.

Top image: Illustration of [Brian Raiter] surveying the fruits of his labor by [Bomberanian]

The Incomplete JSON Pretty Printer (Brought To You By Vibes)

Incomplete JSON (such as from a log that terminates unexpectedly) doesn’t parse cleanly, which means anything that normally pretty-prints JSON will refuse it. Frustration with this led [Simon Willison] to make The Incomplete JSON Pretty Printer, a single-purpose web tool that pretty-prints JSON regardless of whether it’s complete or not.
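
The core trick such a tool needs is small: walk the truncated input, track what is still open, and append the missing closers so a normal pretty-printer can take over. Here is a hedged sketch of that repair pass in C; it is an illustration of the idea, not [Simon]’s code (his tool was, after all, vibe coded), and it ignores harder cases like truncation right after a key or mid-escape:

```c
#include <stdio.h>
#include <string.h>

/* Append whatever quotes/brackets are needed to make truncated JSON
 * parseable again. A sketch: nesting deeper than 256 levels and some
 * truncation points (e.g. right after a ':') are not handled. */
static void complete_json(const char *json, char *out, size_t outsize)
{
    char stack[256];                 /* open '{' and '[', innermost last */
    int depth = 0, in_string = 0, escaped = 0;

    for (const char *p = json; *p; p++) {
        if (escaped)    { escaped = 0; continue; }
        if (in_string) {
            if (*p == '\\')      escaped = 1;
            else if (*p == '"')  in_string = 0;
            continue;
        }
        switch (*p) {
        case '"': in_string = 1; break;
        case '{': case '[':
            if (depth < (int)sizeof stack) stack[depth++] = *p;
            break;
        case '}': case ']':
            if (depth > 0) depth--;
            break;
        }
    }

    snprintf(out, outsize, "%s", json);
    if (in_string)
        strncat(out, "\"", outsize - strlen(out) - 1);
    while (depth-- > 0)
        strncat(out, stack[depth] == '{' ? "}" : "]",
                outsize - strlen(out) - 1);
}

int main(void)
{
    char fixed[512];
    complete_json("{\"log\": [{\"msg\": \"power fail", fixed, sizeof fixed);
    printf("%s\n", fixed);   /* {"log": [{"msg": "power fail"}]} */
    return 0;
}
```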

Making a tool to solve a particular issue is a fantastic application of software, but in this case it’s also a good lead-in to some thoughts [Simon] has to share about vibe coding. The Incomplete JSON Pretty Printer is a perfect example of vibe coding, being the product of [Simon] directing an LLM to iteratively create a tool without once looking at the actual code.

Sometimes, however the machine decides to code something is fine.

[Simon] shares that the term “vibe coding” was first used in a social media post by [Andrej Karpathy], who we’ve seen share a “hello world” of GPT-based LLMs as well as how to train one in pure C, both of which are the product of a deep understanding of the subject (and fantastically educational), so he certainly knows how things work.

Anyway, [Andrej] had a very specific idea in mind when he described vibe coding: engaging with the tool in almost a state of flow for something like a weekend project, just iterating your way to what you want without fussing over the details. Why? Because doing so is new, engaging, and fun.

Since then, vibe coding as a term seems to get used to refer to any and all AI-assisted coding, a subject on which folks have quite a few thoughts (many of which were eagerly shared on a recent Ask Hackaday on the subject).

Of course, human oversight is critical to a solid and reliable development workflow. But not all software is the same. In the case of the Incomplete JSON Pretty Printer, [Simon] really doesn’t care what the code actually looks like. He got it made in a short amount of time, the tool does exactly what he wants, and it’s hard to imagine the stakes being any lower. To [Simon], however the LLM decided to do things is fine, and there’s a place for that.

Using Integer Addition To Approximate Float Multiplication

Once the domain of esoteric scientific and business computing, floating point calculations are now practically everywhere. From video games to large language models and kin, it would seem that a processor without floating point capabilities is pretty much a brick at this point. Yet the truth is that integer-based approximations can be good enough to hit the required accuracy. One example is approximating floating point multiplication with integer addition, which [Malte Skarupke] recently had a poke at, based on the integer-addition-only LLM approach suggested by [Hongyin Luo] and [Wei Sun].

As for the way this works, it does pretty much what it says on the tin: the two floating point inputs are added as integer values, after which the exponent is adjusted. This adjustment factor is what gets you close to the answer, but as the article and its comments illustrate, there are plenty of issues and edge cases to concern yourself with, including under- and overflow as well as certain specific floating point inputs.
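
For the curious, here is a minimal, hedged demonstration of the trick for IEEE-754 single precision floats. It handles only positive, normal inputs; real code must also deal with signs, zero, infinities, NaN, and the edge cases discussed in the article:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Approximate a * b for positive normal floats by adding the raw bit
 * patterns and subtracting the exponent bias (127 << 23 = 0x3F800000)
 * so the bias isn't counted twice. This adds the exponents exactly and
 * the mantissa fields linearly, the latter being where the error
 * comes from. */
static float approx_mul(float a, float b)
{
    uint32_t ia, ib, ir;
    float r;
    memcpy(&ia, &a, sizeof ia);          /* bit-cast without UB */
    memcpy(&ib, &b, sizeof ib);
    ir = ia + ib - 0x3F800000u;
    memcpy(&r, &ir, sizeof r);
    return r;
}

int main(void)
{
    /* Prints 14.000000 vs. exact 15.000000: off by about 6.7% */
    printf("%f vs. exact %f\n", approx_mul(3.0f, 5.0f), 3.0f * 5.0f);
    return 0;
}
```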

Unlike in scientific calculations, where even minor inaccuracies tend to propagate and cause much larger errors down the line, graphics and LLMs do not care that much about floating point precision, so the roughly 7.5% worst-case error of the integer approach is good enough. The question is whether it’s truly more efficient, as the paper suggests, rather than a fallback as seen with e.g. integer-only audio decoders for platforms without an FPU.

Since one of the nice things about FP-focused vector processors like GPUs and their derivatives (tensor, ‘neural’, etc.) is that they can churn through a lot of data very efficiently, expecting (energy) improvements from shifting this work onto the ALU of a CPU seems quite optimistic.

Ask Hackaday: Vibe Coding

Vibe coding is the buzzword of the moment. What is it? The practice of writing software by describing the problem to an AI large language model and using the code it generates. It’s not quite as simple as just letting the AI do your work for you, because the developer is supposed to spend time honing and testing the result, and its proponents claim it gives a much more interactive and less tedious coding experience. Here at Hackaday, we are pleased to see the rest of the world catch up, because back in 2023 we were the first mainstream hardware hacking news website to embrace it, to deal with a breakfast-related emergency.

Jokes aside, though, the fad for vibe coding is something that should be taken seriously, because it’s seemingly being used in enough places that vibe-coded software will inevitably affect our lives. So here’s the Ask Hackaday: is this a clever and useful tool for making better software more quickly, or a dangerous way to create software nobody quite understands, containing bugs which could cause a disaster?

Our approach to writing software has always been one of incrementally building something from the ground up until it satisfies the need. Readers will know that feeling of being in touch with how a project works at all levels, with a nose for immediately diagnosing any problems that might occur. If an AI writes the code for us, the worry is that we might lose that connection, and that this will inevitably lead to less experienced coders quickly getting out of their depth. Is this pessimism, or the grizzled voice of experience? We’d love to know your views in the comments. Are our new AI overlords the new senior developers? Or are they the worst summer interns ever?